Understanding InfiniBand
By Gene Risi and Philip Bender
The rapidly increasing demand for higher interconnection bandwidth is surpassing the
performance capacity of existing servers, workstations, and interconnect devices that employ
a shared-bus architecture. Bandwidth-hungry systems require an open architecture that
improves data throughput and eliminates bottlenecks by handling multiple I/O streams
simultaneously. The InfiniBand™ specification is designed to address this need. InfiniBand
acts much like traditional mainframe-based network architectures, and it is rapidly gaining
industry attention and support.
Infinite bandwidth is the dream of anyone who has waited impatiently for a download to finish or a large file to traverse the network. While infinite bandwidth remains beyond reach, its realization is much more possible with the creation of the new InfiniBand™ specification.

The crisis in network bandwidth
Today's network interconnection standards and technologies are built around existing shared-bus I/O architectures that have limited abilities to handle the increasing data demands of today's Web- and network-powered businesses. Current bandwidth limitations of the shared-bus architecture require network I/O requests to compete with other attached devices—including SCSI, Ethernet, Fibre Channel, and other target devices—for access to the Peripheral Component Interconnect (PCI) bus and CPU. Constant interruption to the CPU by these devices slows overall CPU performance, negating the benefits of faster processors and increased memory.

InfiniBand addresses the need
In response to these limitations, the InfiniBand Trade Association (IBTA) was formed in 1999, evolving out of the Future I/O (FIO) and Next Generation I/O (NGIO) forums. In late 2000, the IBTA unveiled a new, open, and interoperable I/O specification called InfiniBand. Since then, the organization's membership has grown to 180 companies, which include Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun.

InfiniBand can simplify network management and will be supported by most operating systems, applications, and network management software.

The InfiniBand architecture offers a new approach to I/O. It is designed to simplify and speed server-to-server connections and links to other server-related systems, such as remote storage and networking devices, through a message-based fabric network. The architecture provides high-performance capabilities for entry-level servers through high-end data-center machines. Featuring interoperable links, InfiniBand supports aggregate data bandwidths ranging from 500 MB/sec to 6 GB/sec at a network wire speed of 2.5 Gbps, and it can be implemented in copper or fiber-optic cabling. Because InfiniBand replaces traditional bus systems with an I/O fabric that carries multiple parallel I/O streams, it eliminates the contention for the shared PCI bus described above.
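One way to see where the 500 MB/sec to 6 GB/sec range comes from is a quick back-of-the-envelope calculation. The sketch below is not from the article; it assumes InfiniBand's 8b/10b line encoding (8 data bits carried in every 10 signaled bits) and the 1X, 4X, and 12X link widths described later in this article, and it counts both directions of a full-duplex link.

#include <stdio.h>

/* Back-of-the-envelope InfiniBand link bandwidth: 2.5 Gbps signaling per
 * lane, 8b/10b encoding (8 data bits per 10 line bits), and an aggregate
 * figure that counts both directions of the full-duplex link. */
int main(void)
{
    const double signal_gbps = 2.5;               /* wire speed per lane */
    const double encoding = 8.0 / 10.0;           /* 8b/10b efficiency   */
    const int widths[] = { 1, 4, 12 };            /* 1X, 4X, 12X links   */

    for (int i = 0; i < 3; i++) {
        double per_direction_mb = widths[i] * signal_gbps * encoding
                                  * 1000.0 / 8.0; /* Gbps -> MB/sec      */
        printf("%2dX: %4.0f MB/sec per direction, %4.0f MB/sec aggregate\n",
               widths[i], per_direction_mb, 2.0 * per_direction_mb);
    }
    return 0;
}

Under these assumptions a 1X link works out to roughly 250 MB/sec in each direction, or about 500 MB/sec aggregate, and a 12X link to about 6 GB/sec aggregate, matching the range quoted above.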
[Figure: A data center built on Ethernet and Fibre Channel: ten 24-node LANs and Ethernet routers handle network traffic and management, while five Gigabit Ethernet* switches and three Fibre Channel switches connect 16 PowerEdge rack servers to a SAN and tape backup.]
[Figure: The same data center with InfiniBand: ten 24-node LANs and Ethernet routers, two Gigabit Ethernet switches, and 16 PowerEdge rack servers wired to an InfiniBand router** that connects to the SAN and tape backup.]
*Gigabit Ethernet indicates compliance with IEEE® 802.3ab and does not connote speeds of 1 Gbps.
**10 Gigabit Fibre Channel products will not be available until 2004.
[Figure: The InfiniBand fabric architecture: CPUs and system memory attach through the host interconnect and memory controller to a host channel adapter (HCA); links and switches form the fabric, target channel adapters (TCAs) connect I/O targets, and routers link the fabric to other networks.]
A look at how InfiniBand works
InfiniBand defines the communication and management functions of a switched fabric built from channel adapters, switches, routers, and the links between them. The host channel adapter (HCA) resides in the host server and connects its processors and system memory to the fabric.

The TCA resides in the I/O unit and provides the connection between an I/O device (such as a disk drive) or I/O network (such as Ethernet or Fibre Channel) and the InfiniBand fabric. It implements the physical, link, and transport layers of the InfiniBand protocol.

An InfiniBand-based infrastructure can cost less than an equivalently configured infrastructure using standard Ethernet or Fibre Channel and can provide superior message-handling capacity.

The switches are located between the channel adapters. They allow anywhere from a few to several thousand InfiniBand end nodes to interconnect into a single network that supports multiple concurrent connections. Switches neither generate nor consume packets; they simply pass them along based on the destination address in the packet's route header. The switches are transparent to the end nodes, and packets traverse the switch fabric virtually unchanged.
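To make the switching model concrete, the simplified sketch below (an illustration, not code from any InfiniBand product) shows the essence of what a fabric switch does: it reads the destination local identifier (DLID) from the packet's local route header, looks it up in a forwarding table programmed by the fabric's subnet manager, and passes the packet through to the chosen output port without modifying it.

#include <stdint.h>

/* Simplified model of InfiniBand-style switching. The switch neither
 * generates nor consumes packets; it only selects an output port based
 * on the destination local identifier (DLID) in the local route header. */

#define LID_TABLE_SIZE 49152   /* unicast LIDs occupy the lower part of the 16-bit LID space */

struct local_route_header {
    uint16_t dlid;              /* destination LID: the routing key       */
    uint16_t slid;              /* source LID, carried through untouched  */
};

struct packet {
    struct local_route_header lrh;
    /* ... other headers and payload, forwarded unchanged ... */
};

/* Linear forwarding table, DLID -> output port, programmed by the
 * subnet manager when the fabric is configured. */
static uint8_t forwarding_table[LID_TABLE_SIZE];

/* Return the output port for a packet, or -1 for an unknown DLID
 * (a real switch would drop the packet or raise an error trap). */
int select_output_port(const struct packet *pkt)
{
    if (pkt->lrh.dlid >= LID_TABLE_SIZE)
        return -1;
    return forwarding_table[pkt->lrh.dlid];   /* packet itself is not modified */
}

Because forwarding involves only this lookup and no protocol processing or packet modification, the switches remain transparent to the end nodes, as described above.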
An evolution of performance
The InfiniBand architecture is rapidly gaining industry attention and support. Although it is an open architecture, not all implementations of the new I/O specification are the same. Initial InfiniBand product offerings at the 1X level (a single 2.5 Gbps channel in each direction) have allowed early adopters to begin design and testing efforts. Most of these early products were PCI compatible but not PCI-Extended (PCI-X) compatible.

The full value of the InfiniBand specification is not realized in an environment consisting solely of 1X products. Manufacturers of second-generation InfiniBand products are offering performance at the 4X level (four 2.5 Gbps channels in each direction) and at the 12X level (twelve 2.5 Gbps channels in each direction).

Because most InfiniBand implementations will require bridging to and coexisting with present and emerging technologies in the data center, vendors are developing products that include interfaces to PCI-X 2.0/DDR (double data rate), Fibre Channel, Ethernet, and PCI-Express. However, as InfiniBand components expand throughout the data center, native implementations will play a greater role and InfiniBand could become a complete end-to-end architecture solution for the entire data center, replacing other I/O technologies.
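As a practical aside that postdates this article: on a Linux host with the OpenFabrics verbs library (libibverbs) installed, a short program can report which of these link widths an HCA port has actually negotiated. The sketch below uses the generic verbs calls rather than anything vendor-specific; compile it with -libverbs.

#include <stdio.h>
#include <infiniband/verbs.h>

/* Open the first InfiniBand device found and print each port's state,
 * LID, and negotiated link width (libibverbs encodes the width as
 * 1 = 1X, 2 = 4X, 4 = 8X, 8 = 12X). */
int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (devices == NULL || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devices[0]);
    if (ctx == NULL) {
        fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devices[0]));
        ibv_free_device_list(devices);
        return 1;
    }

    struct ibv_device_attr dev_attr;
    if (ibv_query_device(ctx, &dev_attr) == 0) {
        printf("%s: %u physical port(s)\n",
               ibv_get_device_name(devices[0]),
               (unsigned) dev_attr.phys_port_cnt);

        for (unsigned port = 1; port <= dev_attr.phys_port_cnt; port++) {
            struct ibv_port_attr port_attr;
            if (ibv_query_port(ctx, port, &port_attr) == 0)
                printf("  port %u: state %s, LID 0x%04x, width code %u\n",
                       port, ibv_port_state_str(port_attr.state),
                       (unsigned) port_attr.lid,
                       (unsigned) port_attr.active_width);
        }
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}

Command-line tools such as ibstat report the same attributes.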
InfiniBand: A new approach to solving challenges in the data center
The InfiniBand architecture can help IT administrators manage their data centers and address existing clustering and I/O bottlenecks with a standards-based solution that offers high-performance, high-reliability connections. InfiniBand can lower total cost of ownership (TCO) by supporting server scalability to meet growing processing demands, and its shared I/O capability can enable more efficient use of I/O devices. Because this powerful architecture can coexist with other technologies, its integration should cause no disruption to operations. As InfiniBand replaces other technologies over time, it can help increase performance and productivity in the data center by requiring no additional personnel resources and by reducing infrastructure complexity.

Gene Risi ([email protected]) is a senior engineer and architect for the IBM® Microelectronics Division in Essex Junction, Vermont. Gene has worked at IBM for more than 23 years and is a key architect for the company's InfiniBand product line. His technical accomplishments include significant contributions to several microprocessor designs, microprocessor chipset development, bring-up system development, system-level simulation, logic design, synthesis and simulation tool development, project management, applications engineering, and technical consultation.

Philip Bender ([email protected]) is the InfiniBand product marketing manager for the IBM Microelectronics Division. Philip has more than 18 years of marketing and public relations strategy and enablement experience, and he currently manages the public and product marketing activities for the InfiniBand product line on behalf of the Division's Standard Products and application-specific integrated circuits (ASICs) groups.

FOR MORE INFORMATION
IBM InfiniBand products: http://www-3.ibm.com/chips/products/infiniband
InfiniBand Trade Association: http://www.infinibandta.org