
EMERGING TECHNOLOGY

Understanding InfiniBand
By Gene Risi and Philip Bender

The rapidly increasing demand for higher interconnection bandwidth is surpassing the
performance capacity of existing servers, workstations, and interconnect devices that employ
a shared-bus architecture. Bandwidth-hungry systems require an open architecture that
improves data throughput and eliminates bottlenecks by handling multiple I/O streams
simultaneously. The InfiniBand™ specification is designed to address this need. InfiniBand
acts much like traditional mainframe-based network architectures, and it is rapidly gaining
industry attention and support.

Infinite bandwidth is the dream of anyone who has waited impatiently for a download to finish or a large file to traverse the network. While infinite bandwidth remains beyond reach, its realization is much more possible with the creation of the new InfiniBand™ specification.

The crisis in network bandwidth

Today's network interconnection standards and technologies are built around existing shared-bus I/O architectures that have limited abilities to handle the increasing data demands of today's Web and network-powered businesses. Current bandwidth limitations of the shared-bus architecture require network I/O requests to compete with other attached devices—including SCSI, Ethernet, Fibre Channel, and other target devices—for access to the Peripheral Component Interconnect (PCI) bus and CPU. Constant interruption to the CPU by these devices slows overall CPU performance, negating the benefits of faster processors and increased memory.

InfiniBand addresses the need

In response to these limitations, the InfiniBand Trade Association (IBTA) was formed in 1999, evolving out of the Future I/O (FIO) and Next Generation I/O (NGIO) forums. In late 2000, the IBTA unveiled a new, open, and interoperable I/O specification called InfiniBand. Since then, the organization's membership has grown to 180 companies, which include Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun.

The InfiniBand architecture offers a new approach to I/O. It is designed to simplify and speed server-to-server connections and links to other server-related systems, such as remote storage and networking devices, through a message-based fabric network. The architecture provides high-performance capabilities for entry-level servers through high-end data-center machines. Featuring interoperable links, InfiniBand supports aggregate data bandwidths ranging from 500 MB/sec to 6 GB/sec at a network wire-speed of 2.5 Gbps, and it can be implemented in copper or fiber-optic cabling.

Because InfiniBand replaces traditional bus systems with an I/O fabric that supports parallel data transfers along multiple I/O links, its adoption could easily spread across the entire network over time, beginning with server I/O and then moving into storage area networks (SANs) and local area networks (LANs). When initially deployed in Web and application server backplanes, InfiniBand will coexist for a time with other technologies such as Ethernet, Fibre Channel, and proprietary clustering and storage fabrics. Although each of these technologies includes some of the benefits that InfiniBand provides, the InfiniBand technology offers a broader range of advantages.


                   Application focus           Data transport/reliability                         Systems management
InfiniBand         Server I/O and clustering   High reliability fabric and hardware               Built-in, in-band management
Standard Ethernet  Local area networks         Drops data packets during congestion—no failover   No form factors or built-in systems management
Fibre Channel      Storage area networks       High reliability                                   No form factors or built-in systems management

Figure 1. The advantages of InfiniBand compared with those of Ethernet and Fibre Channel

Figure 1 compares the benefits of InfiniBand with those of Ethernet and Fibre Channel.1 An InfiniBand-based infrastructure can cost less than an equivalently configured infrastructure using standard Ethernet or Fibre Channel and can provide superior message-handling capacity. Figure 2 shows how a high-availability shared-I/O system using only Ethernet and Fibre Channel connections compares to an InfiniBand-based I/O system, in terms of equipment and wiring requirements and management.

InfiniBand is ideally suited for clustering, offers the ability to use both fiber and copper connections, and off-loads hardware and fabric management from host CPUs. It can simplify network management and will be supported by most operating systems, applications, and network management software.

Figure 2. How InfiniBand compares to other I/O systems. The high-availability I/O system built with Ethernet and Fibre Channel alone (ten 24-node LANs, 16 PowerEdge rack servers, and a SAN with tape backup) requires 128 external Ethernet connections, 84 external Fibre Channel connections, five Gigabit Ethernet switches, and three Fibre Channel switches. The InfiniBand-based design connects the 16 PowerEdge rack servers to an InfiniBand router over 68 internal InfiniBand connections, reducing the equipment by three Ethernet switches, 80 Ethernet connections, three Fibre Channel switches, and 68 Fibre Channel connections. (Gigabit Ethernet indicates compliance with IEEE® 802.3ab and does not connote speeds of 1 Gbps.)

1 IBM Technology Group, “InfiniBand: Satisfying the Hunger for Network Bandwidth,” June 2001, http://www-3.ibm.com/chips/techlib/techlib.nsf/techdocs/6AB442669DFCFA0E87256A6F006855C1/$file/InfinWP.pdf.



InfiniBand technology also can provide an excellent return on investment and is available now. The initial 10 Gigabit Ethernet products are not expected until late 2003 or early 2004, and the 10 Gigabit Fibre Channel products will not be available until 2004. However, major server original equipment manufacturers (OEMs) currently are testing and integrating InfiniBand hardware.

A look at how InfiniBand works

InfiniBand defines the communication and management infrastructure supporting both I/O and interprocessor communications. An InfiniBand system can range from a small single-processor server with a few I/O devices to a massively parallel supercomputer.

Figure 3. The InfiniBand architecture model (HCA = host channel adapter; TCA = target channel adapter; xCA = HCA or TCA). CPUs and system memory attach to the fabric through an HCA on the host interconnect controller; links and switches connect the HCA to TCAs in target devices, and routers join fabrics across a network.
InfiniBand incorporates the proven message-passing, memory-mapping, and point-to-point link technologies of mainframe-based networks. Unlike bus-based technologies that handle only one request at a time, InfiniBand lets multiple I/O devices make requests simultaneously to the system CPU for data, significantly reducing delays or congestion.

The data flows in two directions, with inbound data and outbound data in separate channels. The different performance levels of InfiniBand—1X, 4X, 12X—refer to the width of the 2.5 Gbps I/O data channels implemented by the InfiniBand components. The 1X level comprises one 2.5 Gbps channel in each direction, the 4X level comprises four 2.5 Gbps channels in each direction (providing a composite 10 Gbps in each direction), and the 12X level comprises twelve 2.5 Gbps channels in each direction (providing a composite 30 Gbps in each direction).
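The composite figures quoted above follow directly from the lane counts. The short calculation below reproduces them; it assumes the 8b/10b line encoding used on these links (a detail the article does not spell out), so that each 2.5 Gbps lane carries 2.0 Gbps of payload, and it is an illustrative sketch rather than material from the article.

/* Link-rate arithmetic for InfiniBand 1X, 4X, and 12X links.
 * Assumption for illustration: 8b/10b encoding, so 80 percent of the
 * signaling rate carries data. */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;          /* signaling rate per lane, one direction     */
    const double encoding  = 0.8;          /* 8b/10b: 8 data bits for every 10 line bits */
    const int    widths[]  = { 1, 4, 12 }; /* 1X, 4X, and 12X link widths                */

    for (int i = 0; i < 3; i++) {
        int    w         = widths[i];
        double raw_gbps  = w * lane_gbps;                   /* composite rate per direction */
        double data_gbps = raw_gbps * encoding;             /* usable payload per direction */
        double agg_mbs   = 2.0 * data_gbps * 1000.0 / 8.0;  /* both directions, in MB/sec   */

        printf("%2dX: %4.1f Gbps per direction, %4.1f Gbps payload, %4.0f MB/sec aggregate\n",
               w, raw_gbps, data_gbps, agg_mbs);
    }
    return 0;
}

Under those assumptions the output matches the range cited earlier in the article: 500 MB/sec aggregate for a 1X link up to 6 GB/sec for a 12X link.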
InfiniBand hardware off-loads most of the I/O communication from the CPU—a major change from existing communication protocols. This off-loading allows multiple concurrent communications without the traditional overhead associated with communication protocols. InfiniBand hardware provides highly reliable, fault-tolerant communication to enable improved bandwidth, latency, and reliability of the system.

InfiniBand can support a shared I/O architecture in which more than one host can use an I/O device. The I/O sharing allows multiple commodity rack-mounted servers to be clustered in such a way that a server failure causes a failover to another server. This failover capability can allow for quick and easy replacement of the failed server and provides the means to configure highly reliable servers from commodity elements.

The InfiniBand architecture provides scalability

InfiniBand allows IT administrators to hot swap components. The switch fabric architecture recognizes the addition or removal of components in real time, eliminating the need for power-downs or disruptions to data center operations. InfiniBand also enables the linear scalability of components, providing incremental performance boosts for each server added to the fabric. To maximize availability and bandwidth, multiple paths between end nodes may be deployed within the switch fabric.

The InfiniBand specification defines three basic components:

A host channel adapter (HCA)
A target channel adapter (TCA)
A fabric switch

InfiniBand technology works by connecting HCAs, TCAs, switches, and routers (see Figure 3). The InfiniBand devices in the end nodes are channel adapters that generate and consume packets.
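As a concrete illustration of end nodes and their channel adapters, the sketch below lists the HCAs visible to a host and reports the state of each port. It uses the libibverbs API, which post-dates this article, and assumes a Linux host with libibverbs installed (compile with -libverbs); it is a hedged example of how such adapters are enumerated today, not part of the InfiniBand specification text.

/* Enumerate InfiniBand channel adapters and their ports with libibverbs.
 * Sketch only: assumes a Linux host with libibverbs available. */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("%s: %u port(s)\n",
                   ibv_get_device_name(devices[i]), dev_attr.phys_port_cnt);

            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) == 0)
                    printf("  port %u: LID 0x%04x, state %s\n",
                           port, port_attr.lid,
                           port_attr.state == IBV_PORT_ACTIVE ? "active" : "not active");
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}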
The HCA and TCA can provide a reliable end-to-end connection that does not require CPU intervention. The HCA resides in the processor node and provides the path from the system memory to the InfiniBand network. It has a programmable direct-memory-access (DMA) engine with special protection and address-translation features that allow DMA operations to be initiated locally or remotely by another HCA or a TCA.
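To make the DMA and protection machinery more concrete, the fragment below registers a local buffer with the HCA and posts a one-sided RDMA write against an already connected queue pair. It is a sketch using the modern libibverbs API rather than anything described in the article, and it assumes the protection domain, the connected reliable-connection queue pair, and the peer's buffer address and remote key have all been set up and exchanged out of band.

/* Register a local buffer with the HCA and post a one-sided RDMA write.
 * Sketch only: `pd` is the protection domain the queue pair `qp` was
 * created on, `qp` is an already connected reliable-connection queue pair,
 * and the peer's buffer address and rkey were exchanged out of band
 * (assumptions for illustration, not details given in the article). */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

static char buf[4096];

int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       uint64_t remote_addr, uint32_t remote_rkey)
{
    /* Memory registration sets up the HCA's protection and
     * address-translation state for DMA to and from this buffer. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf),
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr)
        return -1;

    strcpy(buf, "payload written directly into the peer's memory");

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = sizeof(buf),
        .lkey   = mr->lkey,
    };

    /* One-sided RDMA write: the HCA's DMA engine moves the data without
     * involving the CPU at the remote end. */
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = remote_rkey;

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);
}

Completion of the write would then be reaped from the send completion queue with ibv_poll_cq(); the CPU at the remote end takes no part in the transfer.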



The TCA resides in the I/O unit and provides the connection between an I/O device (such as a disk drive) or I/O network (such as Ethernet or Fibre Channel) to the InfiniBand network. It implements the physical, link, and transport layers of the InfiniBand protocol.

The switches are located between the channel adapters. They allow a few to several thousand InfiniBand end nodes anywhere to interconnect into a single network that supports multiple concurrent connections. Switches neither generate nor consume packets; they simply pass them along based on the destination address in the packet's route header. The switches are transparent to the end nodes, and the packets traverse the switch fabric virtually unchanged by the fabric.
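Conceptually, that forwarding step is a table lookup keyed on the packet's destination address, which in InfiniBand is the destination local identifier (LID) carried in the local route header. The fragment below is a purely hypothetical sketch of such a lookup; the types and table are illustrative and are not taken from the article or from any real switch implementation.

/* Hypothetical sketch of destination-based forwarding in a switch.
 * The table covers the unicast LID range and, in a real fabric, is
 * programmed by the subnet manager rather than by the switch itself. */
#include <stdint.h>

#define NUM_LIDS  0xC000   /* unicast destination LIDs                   */
#define PORT_DROP 0xFF     /* no route configured for this destination   */

struct switch_state {
    uint8_t out_port[NUM_LIDS];   /* destination LID -> output port      */
};

/* Select the output port for a packet from the destination LID in its
 * local route header. The switch neither generates nor consumes the
 * packet; it only decides where to send it next. */
static uint8_t forward_port(const struct switch_state *sw, uint16_t dest_lid)
{
    if (dest_lid >= NUM_LIDS)
        return PORT_DROP;
    return sw->out_port[dest_lid];
}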
An evolution of performance

The InfiniBand architecture is rapidly gaining industry attention and support. Although it is an open architecture, not all implementations of the new I/O specification are the same. Initial InfiniBand product offerings at the 1X level (a single 2.5 Gbps channel in each direction) have allowed early adopters to begin design and testing efforts. Most of these early products were PCI compatible but not PCI-Extended (PCI-X) compatible.

The full value of the InfiniBand specification is not realized in an environment consisting solely of 1X products. Manufacturers of second-generation InfiniBand products are offering performance at the 4X level (four 2.5 Gbps channels in each direction) and at the 12X level (twelve 2.5 Gbps channels in each direction).

Because most InfiniBand implementations will require bridging to and coexisting with present and emerging technologies in the data center, vendors are developing products that include interfaces to PCI-X 2.0/DDR (double data rate), Fibre Channel, Ethernet, and PCI-Express. However, as InfiniBand components expand throughout the data center, native implementations will play a greater role and InfiniBand could become a complete end-to-end architecture solution for the entire data center, replacing other I/O technologies.

InfiniBand: A new approach to solving challenges in the data center

The InfiniBand architecture can help IT administrators manage their data centers and address existing clustering and I/O bottlenecks with a standards-based solution that offers high-performance, high-reliability connections. InfiniBand can lower total cost of ownership (TCO) by supporting server scalability to meet growing processing demands, and its shared I/O capability can enable more efficient use of I/O devices. Because this powerful architecture can coexist with other technologies, its integration should cause no disruption to operations. As InfiniBand replaces other technologies over time, it can help increase performance and productivity in the data center by requiring no additional personnel resources and by reducing infrastructure complexity.

Gene Risi ([email protected]) is a senior engineer and architect for the IBM® Microelectronics Division in Essex Junction, Vermont. Gene has worked at IBM for more than 23 years and is a key architect for the company's InfiniBand product line. His technical accomplishments include significant contributions to several microprocessor designs, microprocessor chipset development, bring-up system development, system-level simulation, logic design, synthesis and simulation tool development, project management, applications engineering, and technical consultation.

Philip Bender ([email protected]) is the InfiniBand product marketing manager for the IBM Microelectronics Division. Philip has more than 18 years of marketing and public relations strategy and enablement experience, and he currently manages the public and product marketing activities for the InfiniBand product line on behalf of the Division's Standard Products and application-specific integrated circuits (ASICs) groups.

FOR MORE INFORMATION

IBM InfiniBand products:
http://www-3.ibm.com/chips/products/infiniband

InfiniBand Trade Association:
http://www.infinibandta.org

