
Computer Networks 53 (2009) 2939–2965


Data center evolution: A tutorial on state of the art, issues, and challenges

Krishna Kant
Intel Corporation, Hillsboro, Oregon, USA

Abstract

Data centers form a key part of the infrastructure upon which a variety of information technology services are built. As data centers continue to grow in size and complexity, it is desirable to understand aspects of their design that are worthy of carrying forward, as well as existing or upcoming shortcomings and challenges that would have to be addressed. We envision the data center evolving from owned physical entities to potentially outsourced, virtualized and geographically distributed infrastructures that still attempt to provide the same level of control and isolation that owned infrastructures do. We define a layered model for such data centers and provide a detailed treatment of state of the art and emerging challenges in storage, networking, management and power/thermal aspects.

Keywords: Data center; Virtualization; InfiniBand; Ethernet; Solid state storage; Power management

© 2009 Published by Elsevier B.V.

1. Introduction

Data centers form the backbone of a wide variety of services offered via the Internet, including Web-hosting, e-commerce, social networking, and a variety of more general services such as software as a service (SaaS), platform as a service (PaaS), and grid/cloud computing. Some examples of these generic service platforms are Microsoft's Azure platform, Google App Engine, Amazon's EC2 platform and Sun's Grid Engine. Virtualization is the key to providing many of these services and is being increasingly used within data centers to achieve better server utilization and more flexible resource allocation. However, virtualization also makes many aspects of data center management more challenging.

As the complexity, variety, and penetration of such services grows, data centers will continue to grow and proliferate. Several forces are shaping the data center landscape and we expect future data centers to be a lot more than simply bigger versions of those existing today. These emerging trends – more fully discussed in Section 3 – are expected to turn data centers into distributed, virtualized, multi-layered infrastructures that pose a variety of difficult challenges.

In this paper, we provide a tutorial coverage of a variety of emerging issues in designing and managing large virtualized data centers. In particular, we consider a layered model of virtualized data centers and discuss storage, networking, management, and power/thermal issues for such a model. Because of the vastness of the space, we shall avoid detailed treatment of certain well researched issues. In particular, we do not delve into the intricacies of virtualization techniques, virtual machine migration and scheduling in virtualized environments.

The organization of the paper is as follows. Section 2 discusses the organization of a data center and points out several challenging areas in data center management. Section 3 discusses emerging trends in data centers and new issues posed by them. Subsequent sections then discuss specific issues in detail, including storage, networking, management and power/thermal issues. Finally, Section 8 summarizes the discussion.


2. Data center organization and issues

2.1. Rack-level physical organization

A data center is generally organized in rows of "racks" where each rack contains modular assets such as servers, switches, storage "bricks", or specialized appliances, as shown in Fig. 1. A standard rack is 78 in. high, 23–25 in. wide and 26–30 in. deep. Typically, each rack takes a number of modular "rack mount" assets inserted horizontally into the rack. The asset thickness is measured using a unit called "U", which is 45 mm (or approximately 1.8 in.). An overwhelming majority of servers are single or dual socket processors and can fit the 1U size, but larger ones (e.g., 4-socket multiprocessors) may require 2U or larger sizes. A standard rack can take a total of 42 1U assets when completely filled. The sophistication of the rack itself may vary greatly – in the simplest case, it is nothing more than a metal enclosure. Additional features may include rack power distribution, a built-in KVM (keyboard–video–mouse) switch, rack-level air or liquid cooling, and perhaps even a rack-level management unit.

Fig. 1. Physical organization of a data center.

For greater compactness and functionality, servers can be housed in a self-contained chassis which itself slides into the rack. With a 13 in. high chassis, six chassis can fit into a single rack. A chassis comes complete with its own power supply, fans, backplane interconnect, and management infrastructure. The chassis provides standard size slots into which one can insert modular assets (usually known as blades). A single chassis can hold up to 16 1U servers, thereby providing a theoretical rack capacity of 96 modular assets.

The substantial increase in server density achievable by using the blade form factor results in a corresponding increase in per-rack power consumption which, in turn, can seriously tax the power delivery infrastructure. In particular, many older data centers are designed with about a 7 kW per-rack power rating, whereas racks loaded with blade servers could approach 21 kW. There is a similar issue with respect to thermal density – the cooling infrastructure may be unable to handle the offered thermal load. The net result is that it may be impossible to load the racks to their capacity. For some applications, a fully loaded rack may not offer the required peak network or storage bandwidth (BW) either, thereby requiring careful management of resources to stay within the BW limits.
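
To make the capacity and power figures above concrete, the small sketch below checks how many assets a rack can actually host under a given power budget. It is purely illustrative: the per-blade and per-1U-server wattages are assumed values chosen for the example, not numbers from this article.

```python
# Illustrative check of how rack power budgets constrain loading.
# The 42U capacity, 6 chassis x 16 blades = 96 blades, and the 7 kW / 21 kW
# figures come from the text; the per-asset wattages are assumptions.
RACK_UNITS = 42
BLADES_PER_CHASSIS = 16
CHASSIS_PER_RACK = 6

def max_loadable(asset_watts, assets_per_rack, rack_budget_watts):
    """How many assets fit under the rack power budget, and the loaded fraction."""
    n = min(assets_per_rack, int(rack_budget_watts // asset_watts))
    return n, n / assets_per_rack

# Assumed draws: ~220 W per blade, ~300 W per 1U rack-mount server.
print(max_loadable(220, CHASSIS_PER_RACK * BLADES_PER_CHASSIS, 7_000))   # ~(31, 0.32)
print(max_loadable(220, CHASSIS_PER_RACK * BLADES_PER_CHASSIS, 21_000))  # ~(95, 0.99)
print(max_loadable(300, RACK_UNITS, 7_000))                              # ~(23, 0.55)
```
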
2.2. Storage and networking infrastructure

Storage in data centers may be provided in multiple ways. Often the high performance storage is housed in special "storage towers" that allow transparent remote access to the storage irrespective of the number and types of physical storage devices used. Storage may also be provided in smaller "storage bricks" located in rack or chassis slots, or directly integrated with the servers. In all cases, efficient network access to the storage is crucial.

A data center typically requires four types of network accesses, and could potentially use four different types of physical networks. The client–server network provides external access into the data center, and necessarily uses a commodity technology such as wired Ethernet or wireless LAN. The server-to-server network provides high-speed communication between servers and may use Ethernet, InfiniBand (IBA) or other technologies. Storage access has traditionally been provided by Fiber Channel but could also use Ethernet or InfiniBand. Finally, the network used for management is also typically Ethernet but may either use separate cabling or exist as a "sideband" on the mainstream network.

Both mainstream and storage networks typically follow identical configurations. For blade servers mounted in a chassis, the chassis provides a switch through which all the servers in the chassis connect to outside servers. The switches are duplexed for reliability and may be arranged for load sharing when both switches are working. In order to keep the network manageable, the overall topology is basically a tree with full connectivity at the root level. For example, each chassis level (or level 1) switch has an uplink leading to a level 2 switch, so that communication between two servers in different chassis must go through at least three switches. Depending on the size of the data center, the multiple level 2 switches may either be connected into a full mesh, or go through one or more level 3 switches. The biggest issue with such a structure is potential bandwidth inadequacy at higher levels. Generally, uplinks are designed for a specific oversubscription ratio since providing full bisection bandwidth is usually not feasible. For example, 20 servers, each with a 1 Gb/s Ethernet link, may share a single 10 Gb/s Ethernet uplink for an oversubscription ratio of 2.0. This may be troublesome if the workload mapping is such that there is substantial non-local communication. Since storage is traditionally provided in a separate storage tower, all storage traffic usually crosses the chassis uplink on the storage network. As data centers grow in size, a more scalable network architecture becomes necessary.
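
The oversubscription ratio in the example above is simply the aggregate edge bandwidth divided by the uplink capacity; a minimal sketch:

```python
def oversubscription_ratio(num_servers, server_link_gbps, uplink_gbps):
    """Aggregate edge bandwidth divided by uplink capacity (1.0 = full bisection)."""
    return (num_servers * server_link_gbps) / uplink_gbps

print(oversubscription_ratio(20, 1, 10))   # 2.0, the chassis example in the text
print(oversubscription_ratio(16, 1, 10))   # 1.6 for a 16-blade chassis with one uplink
```
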

2.3. Management infrastructure

Each server usually carries a management controller called the BMC (baseboard management controller). The management network terminates at the BMC of each server.

When the management network is implemented as a "sideband" network, no additional switches are required for it; otherwise, a management switch is required in each chassis/rack to support external communication. The basic functions of the BMC include monitoring of various hardware sensors, managing various hardware and software alerts, booting up and shutting down the server, maintaining configuration data of various devices and drivers, and providing remote management capabilities. Each chassis or rack may itself sport its own higher level management controller which communicates with the lower level controller.

Configuration management is a rather generic term and can refer to management of the parameter settings of a variety of objects that are of interest in effectively utilizing the computer system infrastructure, from individual devices up to complex services running on large networked clusters. Some of this management clearly belongs to the baseboard management controller (BMC) or the corresponding higher level management chain. This is often known as out-of-band (OOB) management since it is done without involvement of the main CPU or the OS. Other activities may be more appropriate for in-band management and may be done by the main CPU in hardware, in the OS, or in the middleware. The higher level management may run on separate systems that have both in-band and OOB interfaces. On a server, the most critical OOB functions belong to the pre-boot phase and to monitoring of server health while the OS is running. On other assets such as switches, routers, and storage bricks the management is necessarily OOB.

2.4. Electrical and cooling infrastructure

Even medium-sized data centers can sport peak power consumption of several megawatts or more. For such power loads, it becomes necessary to supply power using high voltage lines (e.g., 33 kV, 3 phase) and step it down on premises to the 280–480 V (3 phase) range for routing through the uninterruptible power supply (UPS). The UPS unit needs to convert AC to DC to charge its batteries and then convert DC to AC on the output end. Since the UPS unit sits directly in the power path, it can continue to supply output power uninterrupted in case of input power loss. The output of the UPS (usually 240/120 V, single phase) is routed to the power distribution unit (PDU) which, in turn, supplies power to individual rack-mounted servers or blade chassis. Next the power is stepped down, converted from AC to DC, and partially regulated in order to yield the typical 12 and 5 V outputs with the desired current ratings (20–100 A). These voltages are delivered to the motherboard, where the voltage regulators (VRs) must convert them to as many voltage rails as the server design demands. For example, in an IBM blade server, the supported voltage rails include 5–6 additional rails (3.3 V down to 1.1 V) beyond the 12 V and 5 V rails.

Each one of these power conversion/distribution stages results in power loss, with some stages showing efficiencies in the 85–95% range or worse. It is thus not surprising that the cumulative power efficiency by the time we get down to the voltage rails on the motherboard is only 50% or less (excluding cooling, lighting, and other auxiliary power uses). Thus there is significant scope for gaining power efficiency through a better design of the power distribution and conversion infrastructure.
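
The cumulative efficiency quoted above is just the product of the per-stage efficiencies along the delivery chain. The sketch below assumes illustrative stage efficiencies in or near the 85–95% range mentioned; the specific values are ours, not measured figures.

```python
from math import prod

# Assumed per-stage efficiencies along the delivery path described above:
# UPS (AC-DC-AC), PDU/step-down, server power supply (AC-DC), on-board VRs.
stages = {"UPS": 0.90, "PDU": 0.95, "PSU": 0.80, "VR": 0.85}

cumulative = prod(stages.values())
print(f"cumulative efficiency: {cumulative:.2f}")            # ~0.58 here; real chains can be ~0.5
print(f"watts drawn for 100 W at the rails: {100 / cumulative:.0f} W")
```
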
The cooling infrastructure in a data center can be quite elaborate and expensive, involving building level air-conditioning units requiring large chiller plants, fans and air recirculation systems. Evolving cooling technologies tend to emphasize more localized cooling or try to simplify the cooling infrastructure. The server racks are generally placed on a raised plenum and arranged in alternately back-facing and front-facing aisles, as shown in Fig. 2. Cold air is forced up in the front facing aisles and the server or chassis fans draw the cold air through the server to the back. The hot air on the back then rises and is directed (sometimes by using deflectors) towards the chiller plant for cooling and recirculation. This basic setup is inexpensive but can create hot spots either due to uneven cooling or the mixing of hot and cold air.

Fig. 2. Cooling in a data center.

2.5. Major data center issues



Data center applications increasingly involve access to massive data sets, real-time data mining, and streaming media delivery that place heavy demands on the storage infrastructure. Efficient access to large amounts of storage necessitates not only high performance file systems but also high performance storage technologies such as solid-state storage (SSD) media. These issues are discussed in Section 5. Streaming large amounts of data (from disks or SSDs) also requires high-speed, low-latency networks. In clustered applications, the inter-process communication (IPC) often involves rather small messages but with very low-latency requirements. These applications may also use remote main memories as "network caches" of data and thus tax the networking capabilities. It is much cheaper to carry all types of data – client–server, IPC, storage and perhaps management – on the same physical fabric such as Ethernet. However, doing so requires sophisticated QoS capabilities that are not necessarily available in existing protocols. These aspects are discussed in Section 4.

Configuration management is a vital component of the smooth operation of data centers but has not received much attention in the literature. Configuration management is required at multiple levels, ranging from servers to server enclosures to the entire data center. Virtualized environments introduce issues of configuration management at a logical – rather than physical – level as well. As the complexity of servers, operating environments, and applications increases, effective real-time management of large heterogeneous data centers becomes quite complex. These challenges and some approaches are discussed in Section 6.

The increasing size of data centers not only results in high utility costs [1] but also leads to significant challenges in power and thermal management [82]. It is estimated that total data center energy consumption as a percentage of total US energy consumption doubled between 2000 and 2007 and is set to double yet again by 2012. The high utility costs and environmental impact of such an increase are reasons enough to address power consumption. Additionally, high power consumption also results in unsustainable current, power, and thermal densities, and inefficient usage of data center space. Dealing with power/thermal issues effectively requires power, cooling and thermal control techniques at multiple levels (e.g., device, system, enclosure, etc.) and across multiple domains (e.g., hardware, OS and systems management). In many cases, power/thermal management impacts performance and thus requires a combined treatment of power and performance. These issues are discussed in Section 7.

As data centers increase in size and criticality, they become increasingly attractive targets of attack, since an isolated vulnerability can be exploited to impact a large number of customers and/or large amounts of sensitive data [14]. Thus a fundamental security challenge for data centers is to find workable mechanisms that can reduce this growth of vulnerability with size. Basically, the security must be implemented so that no single compromise can provide access to a large number of machines or a large amount of data. Another important issue is that in a virtualized, outsourced environment, it is no longer possible to speak of the "inside" and "outside" of the data center – the intruders could well be those sharing the same physical infrastructure for their own business purposes. Finally, the basic virtualization techniques themselves enhance vulnerabilities, since the flexibility provided by virtualization can be easily exploited for disruption and denial of service. For example, any vulnerability in mapping VM level attributes to the physical system can be exploited to sabotage the entire system. Due to limited space, we do not, however, delve into security issues in this paper.

3. Future directions in data center evolution

Traditional data centers have evolved as large computational facilities solely owned and operated by a single entity – commercial or otherwise. However, the forces in play are resulting in data centers moving towards much more complex ownership scenarios. For example, just as virtualization allows consolidation and cost savings within a data center, virtualization across data centers could allow a much higher level of aggregation. This notion leads to the possibility of "out-sourced" data centers that allow an organization to run a large data center without having to own the physical infrastructure. Cloud computing, in fact, provides exactly such a capability, except that in cloud computing the resources are generally obtained dynamically for short periods and the underlying management of these resources is entirely hidden from the user. Subscribers of virtual data centers would typically want longer-term arrangements and much more control over the infrastructure given to them. There is a move afoot to provide Enterprise Cloud facilities whose goals are similar to those discussed here [2]. The distributed virtualized data center model discussed here is similar to the one introduced in [78].

In the following we present a 4-layer conceptual model of future data centers, shown in Fig. 3, that subsumes a wide range of emergent data center implementations. In this depiction, rectangles refer to software layers and ellipses refer to the resulting abstractions.

Fig. 3. Logical organization of future data centers.

The bottom layer in this conceptual model is the Physical Infrastructure Layer (PIL), which manages the physical infrastructure (often known as a "server farm") installed at a given location. Because of the increasing cost of the power consumed, space occupied, and management personnel required, server farms are already being located closer to sources of cheap electricity, water, land, and manpower. These locations are by their nature geographically removed from areas of heavy service demand, and thus the developments in ultra high-speed networking over long distances are essential enablers of such remotely located server farms. In addition to the management of physical computing hardware, the PIL can allow for larger-scale consolidation by providing capabilities to carve out well-isolated sections of the server farm (or "server patches") and assign them to different "customers." In this case, the PIL will be responsible for management of the boundaries around the server patch in terms of security, traffic firewalling, and reserving access bandwidth. For example, setup and management of virtual LANs will be done by the PIL.

The next layer is the Virtual Infrastructure Layer (VIL), which exploits the virtualization capabilities available in individual servers, network and storage elements to support the notion of a virtual cluster, i.e., a set of virtual or real nodes along with QoS controlled paths to satisfy their communication needs. In many cases, the VIL will be internal to an organization that has leased an entire physical server patch to run its business. However, it is also conceivable that VIL services are actually under the control of the infrastructure provider, which effectively presents a virtual server patch abstraction to its customers. This is similar to cloud computing, except that the subscriber to a virtual server patch would expect explicit SLAs in terms of the computational, storage and networking infrastructure allocated to it, and would need enough visibility to provide its own next level management required for running multiple services or applications.

The third layer in our model is the Virtual Infrastructure Coordination Layer (VICL), whose purpose is to tie up virtual server patches across multiple physical server farms in order to create a geographically distributed virtualized data center (DVDC). This layer must define and manage virtual pipes between the various virtual data centers. This layer would also be responsible for cross-geographic-location application deployment, replication and migration whenever that makes sense. Depending on its capabilities, the VICL could be exploited for other purposes as well, such as reducing energy costs by spreading load across time-zones and utility rates, providing disaster or large scale failure tolerance, and even enabling truly large-scale distributed computations.

Finally, the Service Provider Layer (SPL) is responsible for managing and running applications on the DVDC constructed by the VICL. The SPL would require substantial visibility into the physical configuration, performance, latency, availability and other aspects of the DVDC so that it can manage the applications effectively. It is expected that the SPL will be owned by the customer directly.

The model in Fig. 3 subsumes everything from a non-virtualized, single location data center entirely owned by a single organization all the way up to a geographically distributed, fully virtualized data center where each layer possibly has a separate owner. The latter extreme provides a number of advantages in terms of consolidation, agility, and flexibility, but it also poses a number of difficult challenges in terms of security, SLA definition and enforcement, efficiency and issues of layer separation. For this reason, real data centers are likely to be limited instances of this general model.

In subsequent sections, we shall address the needs of such DVDCs when relevant, although many of the issues apply to traditional data centers as well.
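
The four layers and the abstractions they hand to one another can be summarized in a purely illustrative data model; the class and field names below are ours and do not correspond to any standard or product API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerPatch:          # carved out and isolated by the PIL
    farm_location: str
    servers: int
    access_gbps: float

@dataclass
class VirtualCluster:       # provisioned by the VIL on top of a patch
    patch: ServerPatch
    virtual_nodes: int
    qos_class: str

@dataclass
class DVDC:                 # stitched together by the VICL across locations
    clusters: List[VirtualCluster] = field(default_factory=list)

@dataclass
class Service:              # deployed and managed by the SPL
    name: str
    runs_on: DVDC

# Hypothetical example: one service spanning two server patches in different farms.
dvdc = DVDC([VirtualCluster(ServerPatch("farm-west", 512, 40.0), 128, "gold"),
             VirtualCluster(ServerPatch("farm-north", 256, 10.0), 64, "gold")])
web_tier = Service("web-frontend", dvdc)
```
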
4. Data center networking

4.1. Networking infrastructure in data centers

The increasing complexity and sophistication of data center applications demand new features in the data center network. For clustered applications, servers often need to exchange inter-process communication (IPC) messages for synchronization and data exchange, and such messages may require very low latency in order to reduce process stalls. Direct data exchange between servers may also be motivated by the low access latency to data residing in the memory of another server as opposed to retrieving it from local secondary storage [18]. Furthermore, mixing of different types of data on the same networking fabric may necessitate QoS mechanisms for performance isolation. These requirements have led to considerable activity in the design and use of low-latency specialized data center fabrics such as PCI-Express based backplane interconnects, InfiniBand (IBA) [37], data center Ethernet [40,7], and lightweight transport protocols implemented directly over the Ethernet layer [5]. We shall survey some of these developments in subsequent sections before examining networking challenges in data centers.

4.2. Overview of data center fabrics

In this section, we provide a brief overview of the two major network fabrics in the data center, namely Ethernet and InfiniBand (IBA). Although Fiber Channel can be used as a general networking fabric as well, we do not discuss it here because of its strong storage association. Also, although the term "Ethernet" relates only to the MAC layer, in practice TCP/UDP over IP over Ethernet is the more appropriate network stack to compare against InfiniBand or Fiber Channel, which specify their own network and transport layers. For this reason, we shall speak of the "Ethernet stack" instead of just Ethernet.

4.2.1. InfiniBand

The InfiniBand architecture (IBA) was defined in the late 1990s specifically for inter-system IO in the data center and uses communication link semantics rather than the traditional memory read/write semantics [37]. IBA provides a complete fabric starting with its unique cabling/connectors, physical layer, link, network and transport layers. This incompatibility with Ethernet at the very basic cabling/connector level makes IBA (and other data center fabrics such as Myrinet) difficult to introduce in an existing data center. However, the available bridging products make it possible to mix IBA and Ethernet based clusters in the same data center.

IBA links use bit-serial, differential signaling technology which can scale up to much higher data rates than the traditional parallel link technologies. Traditional parallel transmission technologies (with one wire per bit) suffer from numerous problems such as severe cross-talk and skew between the bit timings, especially as the link speeds increase. Differential signaling, in contrast, uses two physical wires for each bit direction, and the difference between the two signals is used by the receiver. Each such pair of wires is called a lane, and higher data throughput can be achieved by using multiple lanes, each of which can independently deliver a data frame. As link speeds move into the multi Gb/s range, differential bit-serial technology is being adopted almost universally for all types of links. It has also led to the idea of a re-purposable PHY, wherein the same PHY layer can be configured to provide the desired link type (e.g., IBA, PCI-Express, Fiber Channel, Ethernet, etc.).

The generation 1 (or GEN1) bit-serial links run at 2.5 GHz but use 8b/10b encoding for robustness, which effectively delivers 2 Gb/s speed in each direction. GEN2 links double this rate to 4 Gb/s. The GEN3 technology does not use 8b/10b encoding and can provide 10 Gb/s speed over fiber. Implementing 10 Gb/s on copper cables is very challenging, at least for distances of more than a few meters. IBA currently defines three lane widths – 1X, 4X, and 12X. Thus, IBA can provide bandwidths up to 120 Gb/s over fiber using GEN3 technology.
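
The quoted rates follow directly from the signaling rate, the encoding overhead, and the lane width. The sketch below encodes the generations as described in the text (GEN2's signaling rate is inferred from the statement that it doubles GEN1):

```python
# Signaling rate (GHz) and encoding efficiency per generation, per the text.
GENERATIONS = {
    "GEN1": (2.5, 8 / 10),   # 8b/10b encoding
    "GEN2": (5.0, 8 / 10),   # doubles the GEN1 rate
    "GEN3": (10.0, 1.0),     # no 8b/10b, 10 Gb/s per lane
}

def link_rate_gbps(gen: str, width: int) -> float:
    signal_ghz, efficiency = GENERATIONS[gen]
    return signal_ghz * efficiency * width

for gen in GENERATIONS:
    print(gen, [link_rate_gbps(gen, w) for w in (1, 4, 12)])
# GEN1 -> [2, 8, 24] Gb/s; GEN3 12X -> 120 Gb/s, matching the figures above.
```
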
In IBA, a layer-2 network (called a subnet) can be built using IBA switches which can route messages based on explicitly configured routing tables. A switch can have up to 256 ports; therefore, most subnets use a single switch. For transmission, application messages are broken up into "packets" or message transfer units (MTUs) with a maximum size settable to 256 bytes, 1 KB, 2 KB, or 4 KB. In systems with mixed MTUs, subnet management provides endpoints with the Path MTU appropriate to reach a given destination. IBA switches may also optionally support multicast, which allows for message replication at the switches.

There are two important attributes for a transport protocol: (a) connectionless (datagram) vs. connection oriented transfer, and (b) reliable vs. unreliable transmission. In the Ethernet context, only the reliable connection oriented service (TCP) and the unreliable datagram service (UDP) are supported. IBA supports all four combinations, known respectively as Reliable Connection (RC), Reliable Datagram (RD), Unreliable Connection (UC), and Unreliable Datagram (UD). All of these are provided in HW (unlike TCP/UDP) and operate entirely in user mode, which eliminates unnecessary context switches and data copies.

IBA implements the "virtual interface" (VI) – an abstract communication architecture defined in terms of a per-connection send–receive queue pair (QP) in each direction and a completion queue that holds the operation completion notification. The completion queue may be exploited for handling completions in a flexible manner (e.g., polling, or interrupts with or without batching). The virtual interface is intended to handle much of the IO operation in user mode and thereby avoid the OS kernel bottleneck. In order to facilitate orderly usage of the same physical device (e.g., NIC) by multiple processes, a VI capable device needs to support a "doorbell", by which each process can post the descriptor of a new send/receive operation to the device. The device also needs to support virtual-to-physical (VtP) address translation in HW so that OS intervention is avoided. Some OS involvement is still required, such as for registration of buffers, setup of VtP translation tables, and interrupt handling. The Virtual Interface Architecture (VIA) is a well known networking standard that implements these ideas [13].

VI is a key technology in light-weight user-mode IO and forms the basis for implementing the Remote Direct Memory Access (RDMA) capability, which allows a chunk of data to be moved directly from the source application's buffer to the destination application's buffer. Such transfers can occur with very low latency since they do not involve any OS intervention or copies. Enabling the direct transfer does require a setup phase where the two ends communicate in order to register and pin the buffers on either side and do the appropriate access control checks. RDMA is appropriate for sustained communication between two applications and for transfers of large amounts of data, since the setup overhead gets amortized over a large number of operations [6]. RDMA has been exploited for high performance implementations of a variety of functionalities such as virtual machine migration [19], implementation of MPI (message passing interface) [28], and network attached storage [33].
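
The queue pair/completion queue flow described above can be illustrated with a toy, in-memory model. This is not the IBA verbs API or any real driver interface, just a sketch of how descriptors are posted in user mode and completions are reaped by polling.

```python
from collections import deque

class ToyQueuePair:
    """Toy model of a VI/IBA queue pair: work requests are posted ("doorbell")
    and their completions are later reaped from a completion queue."""
    def __init__(self):
        self.send_q, self.recv_q, self.completion_q = deque(), deque(), deque()

    def post_send(self, buf):          # application posts a descriptor in user mode
        self.send_q.append(buf)

    def hw_progress(self):             # stand-in for what the NIC does asynchronously
        while self.send_q:
            buf = self.send_q.popleft()
            self.completion_q.append(("SEND_DONE", len(buf)))

    def poll_cq(self):                 # completion handling by polling (no interrupt)
        return self.completion_q.popleft() if self.completion_q else None

qp = ToyQueuePair()
qp.post_send(b"block of application data")
qp.hw_progress()
print(qp.poll_cq())                    # ('SEND_DONE', 25)
```
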

One interesting feature of the IBA link layer is the notion of "virtual lanes" (VLs).¹ A maximum of 15 virtual lanes (VL0–VL14) can be specified for carrying normal traffic over an IBA link (VL15 is reserved for management traffic). A VL can be designated as either high priority or low priority, and all VLs belonging to one priority level can be further differentiated by weighted round-robin scheduling among them. Messages are actually marked with a more abstract concept called the Service Level (SL), and an SL to VL mapping table is used at each switch to decide how to carry the message. This allows messages to pass through multiple switches, each with a different number of VLs supported.

¹ A virtual lane has nothing to do with the physical "lane" concept discussed above in the context of bit-serial links.

IBA uses credit based flow control for congestion management on a per virtual lane basis in order to avoid packet losses [3,36]. A VL receiver periodically grants "credit" to the sender for sending more data. As congestion develops, the sender will have fewer credits and will be forced to slow down. However, such a mechanism suffers from two problems: (a) it is basically a "back-pressure" mechanism and could take a long time to squeeze flows that transit a large number of hops, and (b) if multiple flows use the same virtual lane, the flows that are not responsible for the congestion could also get shut down unnecessarily. To address these issues, IBA supports a Forward Explicit Congestion Notification (FECN) mechanism coupled with endpoint rate control.

FECN tags packets as they experience congestion on the way to the destination, and this information is returned in the acknowledgement packets to the source. Up to 16 congestion levels are supported by IBA. A node receiving a congestion notification can tell whether it is the source or the victim of congestion by checking whether its credits have already been exhausted. A congestion source adjusts its traffic injection rate successively based on the received congestion indications. A subsequent rate increase is based on a time-out that is relative to the latest congestion notification.

IBA also supports Automatic Path Migration (APM), which allows a queue pair (QP) to be associated with two independent paths to the destination. Initially, the first path is used, but a changeover is effected in case of failure or significant errors. A changeback to the original path can be effected under software control.
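
A minimal sketch of per-virtual-lane, credit-based flow control follows; the credit accounting here is deliberately simplified and is not the actual IBA link protocol.

```python
class CreditedLink:
    """Simplified per-virtual-lane, credit-based flow control: the sender may only
    transmit while it holds credits granted by the receiver."""
    def __init__(self, initial_credits: int):
        self.credits = initial_credits

    def try_send(self, packets: int) -> int:
        sent = min(packets, self.credits)      # back-pressure: no credits, no sending
        self.credits -= sent
        return sent

    def grant(self, credits: int):             # receiver frees buffers and grants credit
        self.credits += credits

vl = CreditedLink(initial_credits=8)
print(vl.try_send(5))   # 5 sent
print(vl.try_send(5))   # only 3 sent; the sender stalls until more credit arrives
vl.grant(4)
print(vl.try_send(5))   # 4 sent
```
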
4.2.2. Ethernet stack

Since the Ethernet stack is quite well known, we only mention some of its salient points relative to the data center environment. Let us start with the MAC layer (i.e., Ethernet per se). The Ethernet layer is crucial in data centers since a typical data center sports far more (layer-2) switches than (layer-3) routers. This is a result of the much lower cost, latency and configuration simplicity of a layer-2 switch as compared to a router. However, this immediately implies that the things that the IP layer can do reasonably well (e.g., routing, QoS, filtering, and security) are not well supported in a data center. Moreover, if we were to simply implement all these mechanisms in layer-2 directly, switches would become as complex, slow and hard to configure as routers.

One unique capability introduced in Ethernet by the IEEE 802.1Q standard is the notion of the virtual LAN or VLAN. The VLAN mechanism allows traffic to be tagged with a 12-bit VLAN id. In a simple static assignment case, VLAN ids are statically mapped to switch ports. This allows the VLANs to provide strong isolation in that the traffic belonging to a VLAN cannot be directed to ports that are not assigned to that VLAN. A dynamic assignment scheme also exists which can map a VLAN to a unique set of ports depending on the source MAC address or other attributes of the traffic.

Given a set of layer-2 endpoints (e.g., servers or routers) connected via a network of switches, a layer-2 routing mechanism is essential to provide low-latency delivery of Ethernet frames without any loops or complex configuration. The original design of layer-2 routing focused primarily on avoiding routing loops by defining a single spanning tree to cover all endpoints and switches. This spanning tree protocol (STP, described in the IEEE 802.1D standard) disables all links that are not a part of the spanning tree, and hence their available bandwidth is wasted. Also, the tree structure results in a very uneven traffic distribution over the used links. Several enhancements have been made to 802.1D to address these issues, including (a) per-VLAN spanning trees, so that it is possible to use a different subset of links for each VLAN and thereby spread out the traffic, (b) Rapid STP (RSTP), which quickly detects failed links and reconfigures the spanning tree to minimize dropped frames, and (c) Multiple STP (MSTP), which uses several "regional" trees connected via a higher central spanning tree (CST). Other ways of using multiple trees include (a) each switch port acting as the spanning tree root for the traffic incoming at that port, and (b) directing traffic from the same source among multiple trees according to some criteria [20]. In spite of these mechanisms, balancing traffic among various links can still be challenging.

On the QoS front, the Ethernet mechanisms started out quite primitive, but have been enhanced subsequently. In particular, the VLAN mechanism also includes a 3-bit CoS (class of service) field in the extended Ethernet header for differentiating VLAN flows. This is exploited by data center Ethernet [40] for differentiating between different types of traffic (e.g., storage vs. inter-process communication vs. client–server). The IEEE task force on Ethernet congestion management, known as 802.1Qau (www.ieee802.org/1/pages/802.1au.html), is currently examining ways of improving congestion notification and management [7]. The main objectives of this effort are to enable switches to mark packets and to allow endpoint layer-2 to do 802.3x type link flow control at the level of individual CoS classes. (The default Ethernet link flow control happens at the level of the entire link and thus is not very useful when multiple traffic types are involved.)
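
For reference, the 3-bit CoS field sits alongside the 12-bit VLAN id in the 4-byte 802.1Q tag; a minimal sketch of the tag layout:

```python
import struct

TPID = 0x8100  # EtherType value identifying an 802.1Q-tagged frame

def vlan_tag(pcp: int, vid: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: 3-bit priority (CoS), 1-bit DEI, 12-bit VLAN id."""
    assert 0 <= pcp < 8 and 0 <= vid < 4096
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID, tci)

# e.g., storage traffic on VLAN 100 marked with CoS priority 5:
print(vlan_tag(pcp=5, vid=100).hex())   # '8100a064'
```
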
Although the IP layer provides a rich set of mechanisms for QoS control, it adds significant additional latency both at routers and at endpoints. Similarly, the traditional TCP sockets interface can incur large latencies, especially with the traditional kernel based implementations. The adoption of the VI architecture coupled with the necessary HW support can reduce end-to-end latencies to under the 10 μs range [35,16,21,22], but this may still not satisfy emerging real-time financial data mining and modeling applications that require latencies as low as 1 μs. RDMA enabled network interfaces can reduce the latencies further; however, IBA with native RDMA generally provides significantly lower latencies [34]. One difficulty in implementing RDMA over TCP is the need for an intermediate layer called MPA to close the gap between the byte-stream nature of TCP and the message oriented transfer expected by RDMA.

The main advantage of the Ethernet stack is the TCP/UDP interface over which most applications operate. However, TCP was designed for the highly variable open Internet environment – rather than data centers – and has numerous deficiencies from a data center perspective [24]. In particular, it is well known that achieving good QoS is very difficult with TCP, since multiple competing TCP flows will tend to divide up the available bandwidth equally, rather than according to specified fractions [36,11]. Similarly, TCP congestion control can be unnecessarily heavy-duty for data centers. In particular, TCP provides elaborate schemes to deal with packet losses, which rarely arise in well configured data centers. Packet losses can also be highly undesirable at high data rates in that they can substantially degrade application performance. Delay based TCP implementations [29] such as TCP-Vegas are much more appropriate for data centers, but such versions are unfortunately not very popular.

TCP also suffers from a number of other weaknesses that have been addressed by other TCP compatible protocols such as SCTP (stream control transmission protocol). SCTP grew out of the need to emulate Signaling System No. 7 (SS7) capabilities in the Internet [8,25]. Although SCTP is an even more heavy-duty protocol than TCP and thus may be difficult to scale to high speeds, it does offer a number of features that can be useful in data centers. These include:

1. Multi-homing, which allows a connection to use alternate paths in case of primary path failure. This is similar to IBA's automatic path migration (APM) feature.
2. Better resistance against denial of service (DoS) attacks by delaying memory allocation for connection information and using challenge–response type verification. In particular, SCTP does not suffer from the well known "SYN attack" of TCP.
3. Better robustness due to a 32-bit CRC (vs. the 16-bit checksum of TCP) and a built-in heart-beat mechanism. At high data rates, a 16-bit checksum may lead to undetected errors quite frequently.
4. Protocol extensibility via the "chunk" mechanism, which allows introduction of new control message types.
5. Preservation of upper layer message boundaries, which simplifies RDMA implementation.
6. More flexible delivery (ordered or unordered, and control over the number of retransmissions). For example, ordered delivery is unnecessary for RDMA.

SCTP also supports the concept of a "stream", which is a logical flow within a connection with its own ordering constraints. The stream concept allows different but related types of data to be transmitted semi-independently without having to establish and manage multiple connections. Unfortunately, most SCTP implementations do not optimize this feature [25], and its usefulness is unclear.

4.3. Data center networking challenges

In this section, we identify the networking requirements imposed by evolution in data centers and then expose the deficiencies of available fabrics according to those requirements. Ref. [23] discusses the requirements and approaches from the perspective of the transport layer; here we discuss them in a more general setting.

The notion of distributed virtualized data centers (DVDC) discussed in Section 3 attempts to create the abstraction of a single data center that could be geographically distributed. While this is a useful abstraction, it is crucial to take advantage of the 2-level structure lying underneath: high BW, low-latency, nearly error free communication within a physical server patch, and a much higher-latency and lower-speed communication environment between data centers. In particular, at the middleware level, resource allocation and migration should automatically account for this discrepancy. Similarly, at the transport level the protocols must be self-adjusting and capable of working well both for paths that stay entirely within a server patch and for those that go across server patches.

Since intra and inter server patch communications have very different characteristics, they result in very different challenges. For example, a simple credit-based flow control (such as the one used in IBA) is appropriate for intra server-patch communications because of small round-trip times (RTTs), rare packet drops, and the need for very low CPU overhead. On the other hand, for inter server patch communications, good throughput under packet drops due to congestion or errors is very important, and hence a sophisticated control (as in TCP or SCTP) may be required.

Although wired networking technologies (copper and/or fiber versions) are expected to remain dominant in data centers, wireless technologies such as Wi-Fi, Ultra-wideband (UWB), and free-space optics are finding niche applications as their available BW increases. For example, the available wireless bandwidths may be adequate for low-end data centers running compute intensive applications. Even in larger data centers, wireless may be quite adequate as a management fabric. Wireless technologies have the important advantages of eliminating the wire management problem, allowing for ad hoc addition/deletion of the infrastructure, and providing a convenient broadcast (as opposed to point to point) communication medium that can be exploited in clever ways. To support this diversity in MAC layers, it should be possible to choose the congestion control mechanism depending upon the MAC layers traversed [23,38]. For a connection oriented protocol, the congestion control can be negotiated during connection setup; however, in some cases, automated dynamic adjustments may also be necessary.

As the MAC technologies evolve, they are marching towards unprecedented data rates. For example, a 12X GEN3 IBA link can support bandwidths of 120 Gb/s, and 100 Gb/s Ethernet is actively under development [31]. At 100 Gb/s, an average sized 1000 byte packet must be processed in less than 80 ns. With a complex protocol, it is very difficult to complete MAC, network and transport layer processing in 80 ns, particularly when memory accesses are involved. It is clear that ultimately the network speed bumps will be limited by the "memory-wall" phenomenon (see Section 7.1). Thus, in addition to direct placement of data in caches, it is necessary to go beyond the VI architecture and make the protocols as lean as possible.

This leanness flies directly in the face of the greater functionality required to address security, flexibility and other issues. At very high data rates, the entire protocol stack including the MAC, network and transport layers must be thinned out. This can pose significant challenges in maintaining compatibility with standards.
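
The 80 ns figure cited above follows directly from the line rate and the packet size:

```python
def packet_budget_ns(packet_bytes: int, line_rate_gbps: float) -> float:
    """Time available to process one packet at line rate, in nanoseconds."""
    return packet_bytes * 8 / line_rate_gbps   # bits divided by Gb/s gives ns

print(packet_budget_ns(1000, 100))   # 80.0 ns, as cited above
print(packet_budget_ns(1000, 10))    # 800.0 ns at 10 Gb/s
print(packet_budget_ns(64, 100))     # 5.12 ns for minimum-size frames
```
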
As discussed in Section 4.1, the traditional tree network architecture provides limited cross-section bandwidth, which could become a problem in large data centers. Ref. [15] addresses this problem by using many more, but lower capacity, switches at higher levels of the hierarchy, arranged in a Clos or fat-tree topology [27]. One issue with such an approach is the increase in the number of assets to be managed. Ref. [32] instead does away with the spanning tree protocol by exploiting a centralized fabric manager for the entire network. This fabric manager uses hierarchical location based pseudo-MAC addresses to control routing, while the edge switches translate between pseudo and real MAC addresses. Yet another approach is explored in [39], where standard Ethernet switches are replaced by new types of switches called "Axons" to which unmodified Ethernet hosts can connect. Axons use source routing based on the routing table at the ingress Axon. The routing table maintenance is done in SW running on an Intel Atom processor; the actual routing is performed in HW using an FPGA. Another such attempt, called Ethane [9], provides centralized control over the entire network. Note that centralized control solutions can be vulnerable to failures and attacks and may have scalability issues of their own.

Section 3 introduced the concept of a virtual cluster (VC), which requires provisioning QoS controlled communication paths between virtual nodes. To enable this, it is necessary to tag all communications within a virtual cluster so that it is possible to differentiate between multiple virtual clusters sharing the same communication paths. Also, it is necessary to think of QoS in terms of the overall application needs rather than the needs of an individual flow between two endpoints. This is the main distinction between the type of QoS we discuss here and the traditional QoS notions. The tags can be exploited to ensure that competing virtual clusters on a shared path are allocated bandwidth either according to some fixed criteria (e.g., relative priority or the type of application being run on the virtual cluster) or based on the dynamically changing needs of the applications. One way to estimate the bandwidth need dynamically is to keep track of the actual bandwidth usage during uncongested periods and then divide up the available bandwidth in that proportion during congestion periods [10,24].
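
The usage-proportional idea above can be sketched in a few lines: record each virtual cluster's bandwidth during uncongested periods, then scale those measurements to fit the available bandwidth when congestion sets in. This is only an illustration of the principle, not the mechanism of [10,24] itself, and the cluster names are hypothetical.

```python
def proportional_allocation(uncongested_usage_gbps: dict, available_gbps: float) -> dict:
    """Divide the congested-path bandwidth among virtual clusters in proportion
    to the bandwidth each was observed to use during uncongested periods."""
    total = sum(uncongested_usage_gbps.values())
    return {vc: available_gbps * use / total
            for vc, use in uncongested_usage_gbps.items()}

usage = {"vc-analytics": 6.0, "vc-web": 3.0, "vc-backup": 1.0}   # measured earlier
print(proportional_allocation(usage, available_gbps=5.0))
# {'vc-analytics': 3.0, 'vc-web': 1.5, 'vc-backup': 0.5}
```
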
The tagging and the corresponding bandwidth control can be implemented at various levels of the network stack, with different consequences. Tagging at the MAC level ensures that (layer-2) switches can participate in tag examination and BW management [26]. The data center Ethernet project in IEEE [40] is basically concerned with exploiting the existing three CoS (Class of Service) bits in the Ethernet frame for such tagging and bandwidth management. Expansion of such a mechanism to full-fledged virtual clusters would require significant perturbations to the existing Ethernet standard and would still require a mechanism at the IP layer to handle virtual clusters going across layer-2 domains. At layer-3, MPLS (multi-protocol label switching) already provides sophisticated mechanisms to tag flows [30,17], and the corresponding resource reservation mechanisms such as RSVP-TE [4] can be used to automate the setup. These can be used for inter server-patch path setups, but are not useful within a data center because of the abundance of layer-2 switches. Finally, tagging at the transport layer is easy to implement but will have only endpoint significance. That is, while the tags can be used for congestion control of connections belonging to different virtual clusters, no enforcement will occur in the network itself. Ref. [23] proposes such a tagging mechanism along with the notion of collective bandwidth control described in [24], which automatically determines the needs of competing workloads and then attempts to allocate bandwidth proportionately during congestion. In general, accurate control over the bandwidths provided to competing applications can be quite challenging with TCP-like congestion control mechanisms.

In a virtualized environment, communication between nodes needs a mechanism to automatically detect when two nodes are located on the same platform and thus can communicate without involving the external network. Recent advances such as XenLoop [41] provide a transparent mechanism to automatically intercept packets of co-resident VMs and shepherd them via the shared memory interface. A somewhat similar issue arises for local vs. non-local communication. For example, if the intra-chassis fabric is different from the mainstream fabric (e.g., PCI-Express vs. Ethernet), it may be necessary to transparently switch between transport protocols appropriate for the media. Providing a low-overhead and transparent communication mechanism between VMs that may be migrated dynamically, and hardware support for it, remains a challenging problem.

As data centers move from owned physical entities to the DVDC model discussed here, it becomes much harder to protect them against denial of service and other types of attacks. Therefore, security mechanisms such as those adopted by SCTP become essential in spite of their substantial overhead. The big challenge is to ensure the scalability of these protection mechanisms at the very high data rates that are likely to be seen in the future. Similarly, support for high availability mechanisms such as multi-homing, connection migration, path diversity, and path control becomes critical in this environment, but devising scalable solutions for them can be very challenging.

Data center networks often deploy a variety of network appliances or middle-boxes such as domain name servers, firewalls, load-balancers, network address translation (NAT) devices, virtual private network (VPN) gateways, malware scanning appliances, protocol accelerators, etc. Their deployment, configuration, traffic engineering, and keeping them up to date is often a very challenging task and continues to increase in complexity as data centers grow, but has not received much attention in the literature. Managing these devices in a DVDC environment can be particularly challenging since the middle-boxes themselves might need to be virtualized without compromising the security, isolation, and performance features they are designed to provide.

5. Data center storage

In spite of the tremendous growth in storage capacity in the last decade, the data tsunami shows no sign of abating; in fact, powered by new large-scale applications, higher-than-Moore's-law growth in computational capacity, and expanding global connectivity, storage growth continues to accelerate. According to IDC estimates, data volume continues to increase 50–70% per year. These trends make storage management in data centers extremely challenging. In this section, we provide an overview of emerging storage technologies and application needs and then discuss the major challenges.

5.1. Storage basics

Until recently much of the storage relied upon rotating magnetic media, and the storage architectures developed around this technology. In this section we focus primarily on this media; the emerging solid state disk (SSD) technologies are addressed in Section 5.2.

Storage in data centers may take one (or a combination) of the following three forms: direct attached storage (DAS), storage area network (SAN), and network attached storage (NAS). DAS refers to block-oriented storage directly attached to a server. SAN provides block-oriented storage that resides across a network. NAS also provides access to storage residing across a network, but accessible via a higher level interface such as files or objects.

The dominant DAS technology has been the hard disk drive for quite some time, and it has continued to scale in performance. However, there are several shortcomings inherent to hard disks that are becoming harder to overcome as we move into faster and denser design regimes. In particular, the mechanical movement implies that disks will remain significantly faster for sequential accesses than for random accesses, and the gap will only grow. This can severely limit the performance that hard disk-based systems are able to offer to workloads with a significant random access component or lack of locality. Such performance deterioration is likely to be a problem in data centers, where consolidation can result in the multiplexing of unrelated workloads, imparting randomness to their aggregate. Although an individual state-of-the-art hard disk consumes significantly less power than other components of a server (e.g., about 12 W vs. 150 W for the processor subsystem), the large number of storage devices means that 20–30% of the data center power could be consumed by storage.

The traditional secondary storage interface in the server world has been SCSI (Small Computer System Interface). SCSI can handle up to 15 hard drives and throughput rates of up to 320 MB/s. Although a set of 15 simultaneously streaming hard drives can put out a much higher data rate, this is generally not a problem since even small fractions of random access patterns can seriously degrade the throughput of a traditional hard disk. However, an array of 15 solid-state or hybrid drives could easily exceed the limit, thereby implying the need for faster interfaces. The Serial Attached SCSI (SAS) interface is already replacing the traditional parallel SCSI interface with a serial link interface (as opposed to the bus interface of parallel SCSI); see Section 4.2.1 for more details regarding serial interfaces. With 6 Gb/s links, SAS can provide throughput rates of up to 2.4 GB/s. Clients have traditionally used the parallel ATA interface, which too is being replaced by its serial version, SATA.

Although direct attached storage (DAS) on each server can provide the fastest access, it has numerous limitations in terms of size and flexibility. Consequently, per-server storage is generally small and reserved for local data such as the boot image and swap space. The sharable storage is generally provisioned separately in a "storage tower" and accessed via NAS or SAN. NAS provides convenient file or object level access to the servers and can use a traditional networking fabric such as Ethernet. However, the high level access may be too slow or unsuitable for applications that prefer to do their own storage management (e.g., database systems). Both NAS and SAN (discussed next) tap out at an 8–16 TB storage limit because of the 32-bit disk block addressing used in the implementations.
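
The 8–16 TB figure follows from 32-bit block addressing; where the limit lands depends on the block size, and the block sizes used below are illustrative assumptions rather than values from the text.

```python
def max_volume_tb(block_addr_bits: int, block_size_bytes: int) -> float:
    """Largest addressable volume (in binary TB) for a given block address width."""
    return (2 ** block_addr_bits) * block_size_bytes / 2 ** 40

for block in (512, 2048, 4096):
    print(f"{block} B blocks -> {max_volume_tb(32, block):g} TB")
# 512 B -> 2 TB, 2 KB -> 8 TB, 4 KB -> 16 TB: roughly the 8-16 TB range quoted above
```
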
SAN provides block-level access to remote storage and has traditionally used Fiber Channel (FC) as the preferred networking technology. Although FC is specifically designed for storage access, the need for a separate networking infrastructure and limited familiarity among administrators make it expensive to operate and maintain. iSCSI (Internet SCSI) is an alternative to FC and allows remote access to SCSI drives over the mainstream Ethernet. (The hardware interface could be serial or parallel and does not matter at the protocol level.) iSCSI typically runs on top of TCP and hence is easy to implement, but the resulting heavy-duty layering can be significantly less performant than FC. This issue can be partly addressed by iSCSI cards that implement iSCSI and the underlying TCP/IP in hardware. The emergence of inexpensive 10 Gb/s Ethernet has also made iSCSI considerably more attractive.

Because of the prevalence of FC, many applications that use low-level storage access (e.g., database management systems) are designed for FC storage. Thus, even with an eventual trend towards the much cheaper iSCSI or similar Ethernet-based solutions, FC interfaces will be required for quite some time. The 8–16 TB limit for an FC SAN almost guarantees multiple islands of FC storage that need to be connected. Several standard protocols have been designed for interconnecting the FC and TCP/IP worlds. The FCIP protocol encapsulates FC packets into TCP packets for transmission across an IP network. FCoE (FC over Ethernet) encapsulates FC packets into Ethernet frames and is thus not routable. The iFCP protocol is a gateway-to-gateway protocol that allows direct transmission of the FC packet payload over an intervening TCP/IP network.

In general, a storage volume could be spread over multiple physical or logical storage devices, and a consistent view requires "storage virtualization". Storage virtualization could be host-based, network-based or storage device-based. A widely deployed host-based solution is the Logical Volume Manager (LVM), where the host OS manages storage volumes spread over the devices under its control. Network based virtualization instead accomplishes the same task using a few "appliances" directly connected to the server network.
In a more sophisticated version of this approach the data and meta-data paths may go over separate networks. In yet another variation, the virtualization functionality may be integrated in the SAN switch itself (perhaps using ASICs), so that the switch can direct the request to the appropriate storage volume and thereby reduce the number of network hops. Finally, the storage device itself (or a front-end processor connected to a "storage tower") may provide this functionality. With most of these solutions the virtualization extends only to the scope of the controlling agent (host OS, appliance, switch, etc.), and interoperability becomes difficult since different OSes, appliances and storage devices may implement virtualization differently.

Unification of multiple virtualized storage subsystems requires a higher level entity to coordinate access across these subsystems. This unification is being required increasingly due to the 16 TB limit for traditional NAS/SAN storage. A popular example of such coordination is the clustered file system (CFS). A CFS consists of a number of "cluster heads", each of which is a specialized server that manages the storage under its control, provides cross-cluster mapping of files and objects, and supports transparent access to the files and objects stored anywhere and across cluster nodes. Nominal storage functions such as striping and mirroring need to be provided by the CFS across the cluster in a transparent manner. The CFS also needs to provide resilience in the face of storage devices across clusters experiencing failures. Some examples of CFS, designed for large scale HPC environments, are Lustre, the parallel virtual file system (PVFS) and IBM's general parallel file system (GPFS).

5.2. Solid-state and hybrid storage

The continued improvement in the cost and performance of flash based storage [43] has made solid state disks (SSDs) a viable technology in data centers. Furthermore, there are a host of other non-volatile RAM (NVRAM) technologies under development that may significantly alter the storage landscape in the near future. Some of the more prominent NVRAM technologies include magnetic RAM (MRAM), phase-change memory (PRAM or PCM), and Ferroelectric RAM (FeRAM) [54,50]. NVRAM technologies offer several advantages over the rotating magnetic media: lower and more predictable access latencies for random requests, smaller form factors, lower power consumption, lack of noise, and higher robustness to vibrations and temperature. Table 1 presents the key characteristics of the important NVRAM technologies that exist today; some are more mature than others. Since NAND flash memory (simply flash henceforth) is the most mature and popular of these at this time, we will use it as the representative technology to drive our discussion.

We begin with some characteristics of flash based storage. Flash devices require an erase operation before data can be written, and erases can only be done in units of a block. A block comprises 64 or 128 physically contiguous "pages." A page is the granularity of individual reads and writes and is typically 2 KB in size. An erase operation is not only very slow (about 2 ms for a 128 K block) but also results in a slight degradation of the flash, thereby limiting the useful lifetime of a block. Each block typically has a lifetime of 10 K-100 K erase operations [46]. Wear-leveling techniques that distribute the physical block location such that erasures are evenly spread across the entire flash are an essential aspect of flash usage.

Each flash page can be in one of three different states: (i) valid, (ii) invalid and (iii) free/erased. When no data has been written to a page, it is in the erased state. A write can be done only to an erased (or clean) page, changing its state to valid. This restriction forces a written page to be left alone; page updates are instead written to a different location. Such out-of-place writes result in pages whose contents are invalid (i.e., obsolete). A garbage collector (GC) runs periodically, identifies blocks that only contain invalid pages, and erases them. During periods of GC, the throughput offered by a flash device can decrease significantly. The frequency of GC and its computational overheads worsen with increased randomness in writes.

Finally, flash memory cells could be either Single-Level-Cell (SLC) or Multi-Level-Cell (MLC). As the name implies, SLC stores one bit per cell and MLC stores more than one. MLC obviously provides higher density and thus lower overall cost; however, this comes at the expense of slower speed, significantly lower lifetime, and lower operating temperature (due to a higher likelihood of errors caused by leakage current at higher temperatures). Consequently, solid state drives (SSDs) invariably use SLC, with MLC more common in consumer applications such as thumb drives. The data given in Table 1 corresponds to SLC.
Table 1
Sample characteristics of some storage technologies.

Technology       Read latency   Write latency   Erase        Power consumption   Lifetime (write cycles)   Cost ($/MB)
SRAM [45]        2-3 ns         2-3 ns          N/A          1 W                 N/A                       1.24
SDRAM            40-75 ns       40-75 ns        N/A          2-10 mW             10^15                     0.0073
NOR [57]         85 ns          6.5 us          700 ms/blk   375-540 mW          100 K                     0.9111
NAND [48]        16 us          200 us          2 ms/blk     0.06-2.5 W (SSD)    100 K                     0.0049
MRAM [45]        35 ns          35 ns           None         24 mW               10^15                     36.66
FeRAM [50]       85 ns          85 ns           None         3.6 mW              10^15                     47.04
PCM [51,59]      62 ns          300 ns          N/A          480 uW              > 10^7                    N/A
Magnetic disk    1-5 ms         1-5 ms          None         5-15 W              MTTF = 1.2 Mhr            0.003
In order to maintain compatibility with hard drives, SSDs are designed to interface to standard I/O busses such as SCSI or SATA. An embedded processor implements the so-called Flash Translation Layer (FTL) to hide the idiosyncrasies of flash so that the same software can work with both hard drives and SSDs. The key functionality implemented by the FTL includes: (i) translation from logical to physical addresses to allow for wear leveling, (ii) out-of-place updates and garbage collection, and (iii) wear-leveling policies. The quality of the FTL implementation is a key to SSD performance; for example, it is found that for certain random write-dominated workloads (e.g., the DBMS workloads explored in [52]), the overheads of GC and wear-leveling can sometimes make SSDs slower than HDDs. For sequential accesses, HDDs can easily outperform SSDs. Nevertheless, SSDs hold a lot of potential for higher and more predictable performance than HDDs.
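As a concrete illustration of items (i)-(iii), the following toy flash translation layer keeps a logical-to-physical page map, performs out-of-place updates, and reclaims fully invalidated blocks. The block size, data structures and policies are invented for this sketch and are far simpler than a real FTL, which must also relocate valid pages from victim blocks and pick victims with erase counts (wear leveling) in mind.

    PAGES_PER_BLOCK = 64

    class ToyFTL:
        def __init__(self, num_blocks):
            self.l2p = {}                                   # logical page -> (block, page)
            self.state = [["free"] * PAGES_PER_BLOCK for _ in range(num_blocks)]
            self.erase_count = [0] * num_blocks
            self.free = [(b, p) for b in range(num_blocks)  # erased pages ready to program
                         for p in range(PAGES_PER_BLOCK)]

        def write(self, logical_page):
            old = self.l2p.get(logical_page)
            if old is not None:                             # out-of-place update:
                b, p = old                                  # the old copy merely becomes invalid
                self.state[b][p] = "invalid"
            if not self.free:
                self.garbage_collect()
            if not self.free:
                raise RuntimeError("device full")
            b, p = self.free.pop(0)
            self.state[b][p] = "valid"
            self.l2p[logical_page] = (b, p)

        def garbage_collect(self):
            # Erase blocks that hold no valid data any more.
            for b, pages in enumerate(self.state):
                if all(s == "invalid" for s in pages):
                    self.state[b] = ["free"] * PAGES_PER_BLOCK
                    self.erase_count[b] += 1
                    self.free.extend((b, p) for p in range(PAGES_PER_BLOCK))

Repeated writes to the same logical page consume fresh physical pages until the garbage collector runs, which is why random-write-heavy workloads such as the DBMS traces mentioned above can suffer large GC overheads.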
Although SSDs may be useful as stand-alone secondary storage for very high throughput and low-latency applications, they are generally expected to remain in a supporting role for hard disks in the foreseeable future. Many papers have explored the SSD as an intermediate layer in the storage hierarchy between main memory and HDD based secondary storage [55,53]. However, many other issues in the integration of SSDs and HDDs for faster and more consistent performance remain to be resolved.

5.3. Challenges in data center storage

Although virtualization and clustering provide mechanisms to handle large amounts of storage, the ever-increasing volume of stored data will continue to pose scalability challenges at multiple levels. Managing a large number of storage devices (which may be further divided into multiple clusters) poses significant challenges in terms of performance and availability. Another issue concerns the efficient management of a huge number of objects (such as files) that a large storage system will be expected to host. The object sizes themselves could vary over a huge range. In particular, the typical Zipf-like file-size distribution implies that (a) data centers will have a large number of small files and (b) the files on the large end could be extremely large in size and could be spread over multiple devices or even clusters. Keeping track of large numbers of files involves challenges in how to efficiently represent, manage, and manipulate file meta-data. However, we cannot simply design mechanisms that work well only for very large file systems. The number of objects managed by various file system instances is itself likely to follow Zipf-like distributions. Consequently, meta-data management should be designed to take advantage of small file system sizes whenever that is the case. Similar issues apply with respect to file sizes as well. The design should be able to provide efficient mapping, access, and updates not only to huge files running into petabytes but also to small files that are only a few hundred bytes.

Many emerging applications involve working with large amounts of data, both of the permanent and the transient (or temporary) kind. An adaptive trade-off between computation and storage can be useful in working with such data. For example, infrequently accessed data could be compressed, or regenerated each time by running the appropriate transformation, simulation or filtering program. Quantification of such computation/storage trade-offs requires addressing issues such as (a) storage power vs. CPU power consumed, (b) performance impact of storage saving techniques, and (c) data placement and migration across the storage hierarchy.
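One simple way to frame trade-off (a), assuming the data is retained for a period T and accessed n times during that period, is to compare the storage energy saved against the CPU energy spent; the symbols below are generic and not tied to any particular system:

    \text{store compressed if}\quad
    (S_{\mathrm{raw}} - S_{\mathrm{comp}})\, p_{\mathrm{st}}\, T
    \;>\; E_{\mathrm{comp}} + n\, E_{\mathrm{decomp}},

where S_raw and S_comp are the raw and compressed sizes, p_st is the storage power per byte, and E_comp, E_decomp are the energies of one compression and one decompression run. The same inequality applies to regeneration with E_decomp replaced by the cost of re-running the generating program; items (b) and (c) would add latency and placement constraints on top of this purely energy-based test.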
Due to their significantly different operational characteristics from both HDDs and main memory technologies, SSDs require novel modeling approaches. In particular, out-of-place updates, garbage collection and wear-leveling perturb the access characteristics of the incoming traffic and need to be addressed in the modeling [44]. Also, as an SSD gets increasingly worn out with time, its erase operations slow down considerably, requiring more retries and bad block remapping, thereby reducing the effective throughput of the device [12]. The GC and wear-leveling algorithms also affect power consumption and lifetime in complex ways that are non-trivial to model.

A related issue with respect to SSDs, and more generally NVRAM based storage, is to re-examine the traditional distinction between main memory and secondary storage access. When a HW thread stalls for disk access, the OS takes control and switches the thread to another SW process, since the latency of IO completion is very large compared with the cost of a context switch. However, such a switch does not make sense for a memory access. With extremely fast solid state storage, such a distinction may no longer hold and an adaptive context switch mechanism may be required. Furthermore, a fast storage access exposes the high overhead of the traditional file-system layer, and it is necessary to reexamine the traditional file-access model to make it substantially leaner. A relevant question to ask is whether the intermediate storage layer should really be accessed as secondary storage, or simply as a higher level memory, or perhaps as something in between? In this context, Refs. [51,59,49] examine multi-level memory systems using NVRAM to assess the performance benefits of this approach.
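A classical rule of thumb for the adaptive context switch decision mentioned above is the competitive spin-versus-block argument; the notation here is generic:

    \text{busy-wait if}\quad \mathbb{E}[L_{\mathrm{dev}}] \;\lesssim\; 2\,C_{\mathrm{cs}},
    \qquad \text{switch to another process otherwise},

where L_dev is the expected device access latency and C_cs is the cost of a context switch (including the cache refill it induces). With millisecond-scale disks the switch is always worthwhile, whereas fast NVRAM accesses can fall on either side of this threshold, which is exactly why a fixed policy no longer makes sense.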
The storage virtualization techniques discussed in Section 5.1 are primarily directed towards aggregating a large number of storage devices and creating the appearance of a single logical storage subsystem from which storage can be allocated to various applications. Such a view is inadequate for the virtualized data center (VDC) model discussed in Section 3. In particular, each VDC may require its own partition of storage with adequate isolation and protection from other VDCs, and yet it should be possible to move the partition boundaries as needed. Furthermore, it should be possible to manage the storage in each VDC at a fairly low level so that each VDC can configure the storage based on its needs. Providing such a "disaggregation" capability in addition to the usual aggregation capability is an open problem that current cloud storage offerings do not address.

6. Configuration management in data centers

A comprehensive management of data center assets needs to deal with their entire life cycle. The life cycle extends from the point the asset is initially brought into the data center until it is finally retired from service, as discussed in more detail in Section 6.1.
Management usually involves two distinct parts: (a) operations and (b) control. Roughly speaking, operations refer to installation, configuration, patching and other coarse time-granularity activities, whereas control refers to finer-grain management of resources. Traditionally, the control part has been handled by the "in-band" side (i.e., by the OS and middleware) whereas the operations are handled by the out-of-band (OOB) side, which runs on the baseboard management controller (BMC). However, as we move to management of the more general DVDC model discussed in Section 3, this distinction between operations and control, or the separation between OOB and in-band activities, becomes less clear. For example, configuring or reconfiguring a virtual machine could be considered a part of control. Similarly, the information obtained from both the OOB and in-band sides must be combined for an effective configuration and control.

Management of modern data centers involves a large number of issues which become more challenging as the number of objects to be managed increases. In the following we discuss these issues and the corresponding scalability problems.

6.1. Life-cycle management

It may be tempting to think of data centers and IT facilities as static, where the facilities are installed and then used for a long time before being retired. In reality, most facilities are subject to constant churn: new assets are brought in and installed, and existing ones are patched, reconfigured, repurposed, moved to another geographic location, or replaced. Consequently, it is important to automate these tasks as far as possible so that management activities can be done cost-effectively (e.g., with minimal IT administrator time), rapidly, safely, and without being subject to human errors. In the following, we elaborate on the challenges of such an automated life-cycle management.

Consider a situation where a new modular server arrives at a facility and is plugged into some empty slot in a rack or a chassis. In order to logically add this server to the facility, the following tasks are required:

1. Discovery: This step involves discovering the HW/SW configuration of each device, and the server as a whole, so that it can be deployed correctly. The information produced by the discovery needs to be stored in a standard form so that it can be used for the qualification and provisioning steps discussed next.
2. Qualification: A new server could well host malicious HW/SW and cannot be allowed access to the facility without a proper assurance procedure. This step initially quarantines the server by firewalling it at the connecting switch so that it is unable to send packets to arbitrary destinations. Assurance checking involves at least three aspects: (a) authentication (perhaps based on a certificate stored in the trusted platform module (TPM) of the server), (b) scanning to detect malware, and (c) compliance checks to ensure that it conforms to the desired IT policies.
3. Bare metal provisioning: This step prepares the server for installation and involves a variety of tasks such as HW partitioning, configuration, tuning, and loading basic system software. The partitioning/configuration is likely to depend on the server patch(es) to which the new asset will be added.
4. Service provisioning: This step would assign the server partitions (or even the VMs running on them) to the appropriate virtual data centers, and provision them with the necessary application software, network/storage access, etc. so that they can start providing the intended services.
5. Monitoring and tuning: This refers to constant monitoring of vital parameters of the server and taking suitable actions. Monitoring data from various HW and SW elements would typically involve filtering, storage and fusion in order to detect and resolve performance problems, minimize power consumption, determine security attacks, etc.
6. Remediation: Refers to activities related to fault detection/diagnosis, security related quarantine, repair, upgrade and replacement. Remediation may be required while the server is in use and thus may interfere with the service.

The first three steps in this list involve the BMC, which is the only part of the server that will automatically come up when a new server is plugged into the slot. The provisioning starts with the BMC turning on the main server and communicating with its firmware in order to bootstrap the process of discovery, qualification and bare metal provisioning. Many of the other tasks can be done in an OOB or in-band manner, or by a combination of the two.
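Purely as an illustration of how these on-boarding stages chain together, the sketch below runs them in order and quarantines the asset on any failure; the stage names, handler interface and quarantine behavior are assumptions of this sketch, not part of any standard management framework.

    ONBOARDING_STAGES = [
        "discovery",                # BMC-driven inventory of HW/SW configuration
        "qualification",            # authenticate, scan for malware, check compliance
        "bare_metal_provisioning",
        "service_provisioning",
    ]

    def onboard_asset(asset, stage_handlers, quarantine):
        """Run each stage in order; any failure leaves the asset quarantined."""
        for stage in ONBOARDING_STAGES:
            ok = stage_handlers[stage](asset)
            if not ok:
                quarantine(asset, failed_stage=stage)   # e.g., firewall it at the switch
                return False
        asset["state"] = "in_service"                   # now monitored and tuned continuously
        return True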
6.2. Management frameworks

There are two fundamental requirements to enable automated discovery and configuration: (a) availability of configuration information in a standardized format at each device and at higher levels, and (b) a standardized framework for retrieving and processing this information. The Common Information Model (CIM) was developed by the Distributed Management Task Force (DMTF) for describing computing and business entities and has been adopted widely [62]. CIM is a hierarchical, object-oriented management information language based on UML (unified modeling language) for defining objects and the interdependencies between them. Other than structural relationships, CIM can express a variety of dependencies such as those between network connections and the underlying network adapters, SW and the HW on which it runs, etc.

A CIM schema defines an object in the entity-relationship style and allows for object-oriented modeling concepts such as nested classes, instantiation, inheritance, and aggregation to allow compact description of complex systems in terms of their components. As an example, a CIM model of an Ethernet NIC will be expected to provide not only the physical structure of the NIC but also the parameters/capabilities that are needed for using and configuring the NIC, e.g., available PHY speeds or HW CRC check capability.
The CIM model also provides their settings (e.g., current PHY speed and whether HW CRC is enabled) and methods to change the values.

DMTF has also developed the Web-Based Enterprise Management (WBEM) specification that provides mechanisms to exchange CIM information in an interoperable and efficient manner. The components of WBEM include representation of CIM information using XML, CIM operations over HTTP, web services based management (WSMAN), the CIM query language, the CIM command language interface (CLI), and the CIM service location protocol. A popular open-source implementation of WBEM is called CIMOM (CIM object manager). WSMAN defines a set of operations over SOAP (simple object access protocol) that can be used to query and update CIM repositories. SOAP runs on top of HTTP and, because of its plain-text nature, SOAP based operations are easy to debug but can be very heavy duty in terms of overhead.

CIM models represent systems and their parameters mostly at the structural level; much of the semantics of the parameters and the intelligence to properly set them is outside the purview of CIM. For example, CIM is not designed to specify complex relationships between parameter values of various entities or the conditions under which parameters should be set in a particular way. Traditionally, such intelligence is buried in management code. The World Wide Web Consortium (W3C) has recently standardized the Service Modeling Language (SML) [www.w3.org/XML/SML/#public_drafts] to fill this gap. SML can describe schemas using XML DTDs (document type definitions). SML documents can refer to elements in other SML documents and can specify complex relationships using Schematron (www.schematron.com). Thus SML can allow for resource management based on declared constraints. However, specifying and processing complex constraints using a declarative language such as SML remains quite challenging.

6.3. Storage of management data

The CIM repository of a data center asset can be regarded as a local configuration database that can be queried and updated using WSMAN, CIM-CLI or other means. However, depending exclusively on the CIM repository to make provisioning or other decisions becomes impractical even with a small number of servers, for two reasons: (a) CIM repositories typically store detailed parameter values of individual devices rather than the higher level attributes (e.g., server capacity) that are required for dynamic management, and (b) access to CIM repositories is usually very slow because of their firmware base and web-services interface. A workable management invariably requires some higher level database that holds not only portions of the CIM repository contents but also some derived attributes that can be used more directly in making provisioning decisions. Such a database is often known as a configuration management database (CMDB). In fact, a CMDB does not depend entirely on CIM repositories; it may also contain a significant amount of operational data obtained from both the OOB and in-band interfaces.

In practice, management SW vendors offer a variety of products targeted towards managing specific aspects. For example, a number of packages are available for bare metal provisioning, performance monitoring, application management, migration, etc. We henceforth call them External Management Packages (EMPs). Many EMPs use private data repositories for convenience, which may not be compatible with others. The data from some of the EMPs (usually those by the same vendor) may be consolidated into a single CMDB, but this still leaves multiple CMDBs. The net result is a number of repositories with overlapping but incompatible information. The alternative approach of a single comprehensive management system from a single vendor is also undesirable due to inflexibility and lock-in issues.

In the past, configuration databases, whether CIM-DB, package DB or CMDB, tended to be rather static. In fact, CIM-DBs, being firmware based and difficult to modify, still primarily contain information that may be occasionally modified via the BIOS, EFI (extended firmware interface) or another pre-boot control program. An example of such an infrequently invoked function is enabling/disabling HW threading. On the other hand, agile management requires access to a lot of dynamic information such as current power draw, utilization, and available BW. In a virtualized environment, even configuration parameters such as the amount of installed memory and virtual NIC BW become dynamically modifiable parameters. This dynamicity brings in a number of challenges, and the solutions become increasingly difficult to scale up for large data centers.

Keeping the asset level information (e.g., current NIC speed) in a local repository (such as the CIM-DB or another disk/SSD based repository) is attractive as it allows for a clean, decentralized and easily parallelizable management. In a virtualized environment, the parameters of all VMs running on the machine are also best maintained locally. However, decisions regarding where a new VM should be allocated would require at least part of the information to be available at a higher level.

The data duplication across multiple repositories immediately brings in the question of consistency maintenance and forces a careful consideration of what information is kept where and in what form. At one extreme, only the static (or rarely changed) data is retained in higher level DBs and all dynamic data is fetched from the asset repository as needed. This approach quickly becomes unscalable as the dynamic data increases, particularly with respect to firmware resident information. At the other extreme, maintaining dynamic data primarily in external databases is not only unscalable but also introduces undesirable dependencies. For example, inability to access the external database would corrupt the asset configuration and cause crashes.

Clearly, an intermediate approach is desired, but there is no theoretical framework to guide what information should go where. The general idea would be to store more directly usable but more abstract information at higher levels; however, it is very challenging to formalize such a notion. As an example, the CMDB might store the computation capacity of the server, which, in turn, depends on lower level parameters such as the number of enabled cores or the memory speed. In this case, if an update is made to the asset parameters, we need to determine whether it affects the CMDB data and, if so, the derived CMDB data needs to be updated.
The obvious pitfall here is that if the relationship between the abstracted data and the lower level data is not explicit (e.g., it is buried in code), there are both consistency (false negative) and unnecessary-update (false positive) problems with updates.
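One way to make that relationship explicit, sketched below, is to have each derived CMDB attribute declare the low-level parameters it depends on, so that an asset update triggers recomputation exactly when needed. The attribute names and the capacity formula are invented for this example.

    class DerivedAttribute:
        def __init__(self, name, depends_on, compute):
            self.name = name
            self.depends_on = set(depends_on)   # explicit dependencies, not buried in code
            self.compute = compute

    server_capacity = DerivedAttribute(
        name="compute_capacity",
        depends_on={"enabled_cores", "core_freq_ghz", "mem_speed_mts"},
        compute=lambda p: p["enabled_cores"] * p["core_freq_ghz"]
                          * min(1.0, p["mem_speed_mts"] / 1333.0),
    )

    def on_asset_update(changed_params, asset_params, cmdb, derived_attrs):
        """Recompute only the derived attributes whose declared inputs changed."""
        for attr in derived_attrs:
            if attr.depends_on & changed_params.keys():      # no false positives
                cmdb[attr.name] = attr.compute(asset_params) # and no false negatives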
To enable integrated management of a complex system, it is often necessary to interrelate information from multiple databases. For example, a provisioning package may maintain detailed resource usage information but only abstracted information about faults. In contrast, a package dealing with asset replacement/upgrade may do just the opposite. Combining information from these two databases can be very difficult because of differences in the level of detail and in the precise semantics of the data. Furthermore, the filtering/abstraction employed on incoming data before storing it may be buried in the code rather than specified clearly or formally.

There are two approaches to coordinating multiple CMDBs, and both involve a higher level entity which needs to have interfaces to each existing CMDB or EMP database with which it interacts. One possibility is to let the higher level entity itself be a CMDB that maintains a coherent version of all relevant data abstracted from the lower level databases. This is challenging not only in terms of creating a unified view of the data but may also be unscalable due to the centralization of all data into one CMDB.

An alternate approach is to make the higher level entity a global reference directory (GRD) similar to the scalable enterprise resource directory (SERD) based architecture defined by Dell and implemented by Altiris (www.altiris.com/upload/dell_cto.pdf). Such an architecture is illustrated in Fig. 4 with a few specific packages and associated databases. It also shows a pool of resources to be managed. The main difference between a GRD and a top level CMDB is that the GRD primarily stores "handles" to data located in other databases, plus some derived attributes or higher level data that may be more directly useful in management decisions. The methods to relate derived objects to base object parameters are necessarily a part of this description. The GRD also maintains two other things: (a) policies for deciding which EMP to invoke and what parameters to pass to it, and (b) triggers that define the management functionality that will be triggered either via alerts from EMPs or due to threshold crossings of data directly monitored in CIM-DBs. A GRD can be implemented using LDAP (Lightweight Directory Access Protocol) based implementations such as Microsoft Active Directory or Novell's eDirectory. Such implementations are highly scalable for read-only data, but are not designed to handle highly dynamic data.

Fig. 4. Illustration of global resource directory based management.

In the context of our 4-layer DVDC model of Section 3, configuration management is required at all four layers. For example, the PIL needs to manage an entire server farm and provide support for creating server patches. The VIL should be able to use these capabilities to create and manage virtual data centers. The VICL then uses the VIL capabilities at multiple locations in order to support the DVDC concept. Finally, the SPL needs to manage applications and their deployment and should have enough visibility into the lower layers in order to make appropriate provisioning and re-provisioning decisions. Providing adequate visibility, abstraction and protection across this hierarchy, and doing so in a scalable fashion, poses significant management challenges.

6.4. Data collection, filtering and fusion

The management of the various life-cycle stages discussed in Section 6.1 requires different types of data ranging from very static to highly dynamic. The most dynamic data relates to the monitoring and tuning phase (and to a lesser extent to the remediation phase). Active monitoring can easily generate so much data that its resource requirements rival or exceed those of the application data. This immediately brings the challenging problem of collecting and intelligently filtering the data so as to simultaneously satisfy the conflicting goals of minimizing the amount of stored data and ensuring that no important events or characteristics are missed. In addition, as the number of assets in the data center increases, more intelligent techniques are required to ensure that the complexity and latency of migration/reconfiguration decisions grow significantly more slowly than linearly in the number of nodes.

In large systems, an effective data collection, filtering and fusion must necessarily be opportunistic, since statically deciding what data to collect, how to filter it and how to fuse data from different nodes becomes unscalable. In particular, we would normally like to keep the data collection sparse, use aggressive filtering, and avoid large scale data fusion until more detailed information collection is required. In general, it is very difficult to recognize the onset of "interesting events" with sparse data, since by definition an advance recognition of interesting events requires more aggressive data collection. Machine learning approaches can be useful in this context but are traditionally not designed for this environment.
One important aspect of data collection and manipulation is its overhead, both in terms of processing and of communication among nodes. The interference between the application and opportunistic monitoring can lead to serious coupling and even instabilities and needs to be avoided. Consider, for example, a situation where the monitoring detects the onset of congestion and goes on to do more intensive monitoring, thereby creating more overhead and perhaps more congestion. These dynamics, coupled with the dynamics of the application, could lead to false corrective actions, oscillations or other problems. In theory, these problems can be alleviated by capacity reservation; however, this can be quite difficult or infeasible, especially for communication bandwidth. Instead, a more intelligent control over the capacities used by the operations side is required.
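A minimal sketch of such a control, under assumed costs and thresholds, is an adaptive monitor that shortens its sampling period when a metric looks suspicious but never lets its own overhead exceed a fixed budget, so that monitoring cannot amplify the congestion it is observing.

    class AdaptiveMonitor:
        def __init__(self, base_period=10.0, min_period=1.0,
                     sample_cost=0.002, overhead_budget=0.01):
            self.period = base_period               # seconds between samples
            self.base_period = base_period
            self.min_period = min_period
            self.sample_cost = sample_cost          # CPU-seconds consumed per sample
            self.overhead_budget = overhead_budget  # max fraction of a core for monitoring

        def next_period(self, metric, threshold):
            if metric > threshold:                  # suspicious: sample more often
                self.period = max(self.min_period, self.period / 2)
            else:                                   # calm: back off towards the base rate
                self.period = min(self.base_period, self.period * 2)
            # Never let monitoring overhead (sample_cost / period) exceed its budget.
            floor = self.sample_cost / self.overhead_budget
            self.period = max(self.period, floor)
            return self.period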
One way to isolate operations related communications from regular ones is to keep the former entirely on the OOB infrastructure. In particular, any communication of data required for fusion is marshaled by the BMC over the management network. While achieving isolation, this approach raises several issues of its own:

1. Limited bandwidth on the management network. Current proposals on sideband network bandwidth allow up to 100 Mb/s. While such bandwidth is plenty for routine management alerts, it can be quite inadequate for aggressive usage.
2. Communication limitations between the in-band and OOB sides. Typically, this communication is handled via a device interface, i.e., DMA into or out of main memory with the BMC acting as a device. Such transfers can be slow.
3. BMC processing limitations. The platform data available to the BMC may need to be filtered and reduced before communication, but the limited processing power of the BMC may make this difficult.

It is clear from the above discussion that data collection, filtering and fusion is a challenging problem that becomes more acute with the data center size. It is also important to realize that as the complexity and latency of these tasks increase, the ultimate control itself becomes more difficult because of the problem of lag, i.e., remedial action based on conditions that no longer exist. These problems become even more complex for DVDCs if applications span across geographic locations.

6.5. Service provisioning challenges

In a large heterogeneous virtualized environment, provisioning of an application or a service can be quite a challenging problem [64,66]. Generally, a new service needs to use several servers, and locating appropriate servers involves at least three aspects: (a) residual server capacity, (b) available network and storage bandwidth, and (c) access latencies to the data that the application is intended to work with. For clustered applications, there is also a fourth element that relates to the inter-server communication bandwidth and latency.

There are at least three problems to be solved in accomplishing these tasks: (a) translation of application workload characteristics into required capacities, (b) estimation of the available capacities of servers and network, and (c) design of algorithms to map applications/services based on available capacities and features. Each of these is a very difficult issue and becomes even more so for DVDCs running a wide variety of workloads. Furthermore, the notion of "available" and "required" capacities assumes additivity. In general, workloads may interfere with one another and the additivity property does not hold. The notion of equivalent bandwidth is often used in the networking context to allow additivity [63]; a similar notion is needed for computational and storage capacities as well.
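As a point of reference, one common Gaussian form of the equivalent bandwidth idea ([63] and related work) replaces a bursty demand by its mean plus a multiple of its standard deviation, which restores additivity; a surrogate of the same shape could in principle be defined for CPU and storage demands:

    \hat{c}_i = m_i + \alpha\,\sigma_i, \qquad
    \text{admit on a resource of capacity } C \ \text{ if } \ \sum_i \hat{c}_i \le C,
    \qquad \alpha \approx \sqrt{-2\ln\varepsilon - \ln 2\pi},

where m_i and sigma_i are the mean and standard deviation of workload i's demand and epsilon is the acceptable overflow probability. This should be read as a sketch of the general approach rather than a recipe; capturing interference between co-located workloads is precisely the hard part noted above.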
For certain environments and applications, the available and required capacities may fluctuate significantly, so an accurate initial estimate may not even be very valuable. One extreme method is to select servers based on minimal information on the utilization of various resources and known gross workload features (e.g., CPU bound vs. IO bound). In this case, the problem of adequate capacity allocation is handled via performance driven dynamic VM resizing or migration. A more accurate estimation of available capacity can be done by the BMC or a higher level controller running "micro-benchmarks", and of required capacities via workload modeling. Obviously, there is a tradeoff between the accuracy of the estimate, workload stability and migration frequency, but this is not easy to characterize. Machine learning techniques such as those in [67] can be useful in this regard.
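A greedy first-pass placement over the three aspects listed earlier might look as follows; the feasibility test and the scoring rule (data-access latency first, then best fit on CPU) are assumptions made for illustration rather than a recommended policy.

    def place_service(demand, servers):
        """
        demand : dict with 'cpu', 'net_bw', 'sto_bw' requirements and a 'data_site' key
        servers: list of dicts with residual 'cpu', 'net_bw', 'sto_bw', and a
                 'latency_to' map from data sites to access latency (ms)
        Returns the chosen server dict, or None if nothing fits.
        """
        def feasible(s):
            return (s["cpu"] >= demand["cpu"] and
                    s["net_bw"] >= demand["net_bw"] and
                    s["sto_bw"] >= demand["sto_bw"])

        def score(s):
            # Prefer low latency to the service's data, then a tighter (best-fit)
            # CPU residual so that large holes are preserved for future requests.
            return (s["latency_to"][demand["data_site"]],
                    s["cpu"] - demand["cpu"])

        candidates = [s for s in servers if feasible(s)]
        if not candidates:
            return None                  # caller may fall back to migration or resizing
        best = min(candidates, key=score)
        for r in ("cpu", "net_bw", "sto_bw"):
            best[r] -= demand[r]         # reserve the capacity
        return best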
Dynamic reprovisioning of a service or application can be triggered by one of three considerations: (a) resource oversubscription at one or more nodes (including communication bandwidth), (b) optimization considerations (e.g., moving the application from a lightly loaded server so that the server can be placed in a low power state), or (c) occurrence of specific events such as failures or maintenance activities. Of these, (a) and (b) need to trade off several factors including the cost of not making the change, the monitoring cost, the reprovisioning cost, and the cost of an incorrect choice of servers to which the workload is moved. In most cases, it is difficult to make these tradeoffs because of the complexity of the environment. Ref. [60] discusses the use of machine learning techniques for coordinated management of multiple resources in multiprocessors; similar techniques can be useful in more general dynamic provisioning contexts as well. In case of (c), the most important aspect is to quickly resume service instead of making the optimal choice of a new server. For example, the service may first be moved to another server in the same chassis/rack to minimize VM migration latency, and yet another move contemplated later according to considerations (a) and (b).
6.6. Challenges in management processing

The discussion in Section 6.3 focused on the management database hierarchy and the issues brought about by multiple databases. This discussion implicitly assumed that the external management packages (EMPs) can transparently handle all assets in the data center. The asset data itself could be contained in a single database or partitioned at the highest levels only (e.g., a database per physical site). However, the management functionality itself requires a more decentralized structure [69]. For example, the data center architecture forces a management hierarchy involving the server level, chassis/rack level, and server patch level. In fact, a comprehensive management involves multiple domains and a hierarchy within each domain. In a virtualized data center, there are at least four distinct domains of interest, each with a management hierarchy as illustrated in Fig. 5. These domains and potential levels are briefly described below:

1. Physical assets: This hierarchy deals with physical groupings of assets and various attributes of each physical group (e.g., current and maximum allowed power consumption, number of underutilized servers, etc.). The top level in this hierarchy is relevant only if the data center spans multiple physical locations.
2. Virtual assets: This hierarchy deals with virtual machines and their groupings in terms of application cluster, virtual data center (defined over a server farm), and the entire DVDC. This hierarchy is required for provisioning resources for applications and virtual data centers.
3. Network infrastructure: Network infrastructure management deals with the management plane of switches and routers. This hierarchy reflects the physical network structure. Network management across physical server farms is the domain of ISPs and hence not included.
4. Software infrastructure: Software infrastructure is concerned with keeping track of software components and their dependencies.

Fig. 5. Illustration of levels of management of IT resources.

The main purpose of the hierarchical structure is to simplify management. The hierarchy requires two important functions: (a) decomposition of a higher level request into sub-requests that can be delegated to lower levels, and (b) propagation of consolidated results and exceptions to the higher level. For example, when provisioning an application requiring many servers, we can choose the set of racks that will host the application and leave the task of choosing the actual servers within each rack to the rack manager. This would allow proprietary algorithms to be used within a rack; the only thing that needs to be standardized is the interface between levels. If the rack level is unable to handle the assigned task, it would raise an exception to the higher level. Such a mechanism provides for a whole continuum of inter-level interaction policies: at one extreme, the higher level can select the next level entities almost randomly and depend on the exception mechanism to correct things, and at the other the delegation is carefully managed so that exceptions are truly rare.
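The sketch below illustrates this delegation-plus-exception pattern for the rack example; the RackFullError exception and the rack-manager interface are invented here, the point being that only this inter-level interface needs to be standardized across vendors.

    class RackFullError(Exception):
        """Raised by a rack manager that cannot satisfy its sub-request."""

    def provision(app_servers_needed, rack_managers):
        placed = {}                                   # rack id -> number of servers granted
        remaining = app_servers_needed
        # Higher level: split the request across racks (here, naively by advertised space).
        for rm in rack_managers:
            if remaining == 0:
                break
            ask = min(remaining, rm.advertised_free())
            if ask == 0:
                continue
            try:
                granted = rm.allocate(ask)            # rack-internal policy is proprietary
            except RackFullError:
                continue                              # exception propagated: try other racks
            placed[rm.rack_id] = granted
            remaining -= granted
        if remaining > 0:                             # consolidated exception to our caller
            for rm in rack_managers:
                if rm.rack_id in placed:
                    rm.release(placed[rm.rack_id])
            raise RackFullError(f"{remaining} servers could not be placed")
        return placed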
While the challenges of negotiation, delegation and exception feedback arise in any hierarchy, they are further complicated by the presence of multiple incompatible databases. Also, many activities require cooperation between the various domains: for example, provisioning a clustered application requires establishing a new group of VMs; however, the mapping of this group onto the physical infrastructure requires interaction between the virtual and physical domains. In other words, the controllers of the four hierarchies shown in Fig. 5 do not operate independently; they need to communicate and coordinate in order to accomplish the various life-cycle tasks discussed in Section 6.1. Thus designing an overall architecture of cooperating hierarchies of controllers is itself a challenging task.

The question of in-band vs. OOB control is also important in designing a comprehensive architecture. As stated earlier, the server level OOB processor (or BMC) monitors and controls certain aspects such as server status, fan speeds or power draw, whereas the in-band controller is more concerned with performance issues. In general, it is possible that OOB and in-band functionalities have their own hierarchies, each supplied by a different vendor. Coordination between the two sides in this case is difficult but essential for an effective management.

Because of their modular nature, data center assets can be easily moved around, and usually are, for a variety of reasons. In a large data center, it becomes difficult to keep track of these assets. Thus asset management has emerged as an important problem in data centers and some solutions for localizing servers in a data center have appeared [61]. Reference [68] shows how the emerging wireless USB standard can be exploited for accurate asset localization, and reference [65] builds on it to provide location based services in the data center.

7. Power and thermal management

The importance of effective power and thermal management in data centers was introduced in Section 2.5. In what follows, we first provide the necessary technological background on different aspects of power and thermal management and then identify key opportunities and challenges, existing methodologies, and problems that need to be addressed.
Reducing power consumption in data centers involves a number of facets including: (a) low power hardware design, (b) restructuring of software to reduce power consumption, (c) exploitation of various power states available in hardware, and (d) proper design, usage and control of the data center infrastructure [106]. Of these, (a) and (b) are not specific to data centers, and therefore we address them here only briefly.

As the hardware technology progresses, it will continue to use smaller feature sizes, which automatically reduces device power consumption. However, the increasing wire resistance, clock rates, and transistor counts not only increase overall power consumption but also result in enormous and unsustainable power and current densities. Furthermore, the desire to quickly enter/exit low power states could result in intolerable di/dt (rate of change of current). This further implies that we may be unable to reduce these latencies very much. Similarly, as the feature size decreases, the thickness of the insulator layer will decrease and hence the leakage current will increase. This remains true in spite of the recent success of high dielectric-constant insulator layers.

With respect to software design issues, while it is generally true that software that is optimized for performance would also be power efficient, power efficiency is not synonymous with computational efficiency. In particular, batching of computations and data transfers improves power efficiency since it elongates idle periods and allows devices to go into low power states [96]. Also, certain operations or sequences of operations take less energy than others for equivalent effective work. Although low power software design is well studied for embedded software [103], development of general frameworks for low-power software design and characterization of its power-performance tradeoff is a major outstanding challenge.

In the following, we shall elaborate on aspects (c) and (d) in some detail since they are central to much of the ongoing work on data center power management.

Almost all major components comprising a modern server offer control knobs in the form of power states: a collection of operational modes that trade off power consumption for performance in different ways. Power control techniques can be defined at multiple levels within the hardware/software hierarchy with intricate relationships between knobs across layers. We call a power state for a component active if the component remains operational while in that state; otherwise we call the state inactive. These active and inactive states offer temporal power control for the associated components. Another form of power control is spatial in nature, wherein identical copies of a replicated component operate in different active/inactive states, with the overall power/performance trade-off depending on the combination of states. The more general integrated power control refers to a cooperative power management of multiple homogeneous or heterogeneous components.

7.1. Active and inactive power states

A computing platform and most devices that are a part of the platform provide active and inactive states. The Advanced Configuration and Power Interface (ACPI) provides a standardized nomenclature for these states and also defines SW interfaces for managing them. ACPI is implemented in all major OSes as the OS-directed Power Management (OSPM).

With respect to the computing part of the platform, ACPI defines the "system" states S0-S5, with the following being the most relevant: S0 (working), S3 (standby, i.e., inactive with state saved into DRAM), and S5 (hibernating, i.e., inactive with state saved into secondary storage). In S0, ACPI defines various abstract device states that can be mapped to the states actually provided by the hardware, as discussed in the following.

7.1.1. CPU power states

The CPU power consumption depends on the number of processing cores, their operational frequencies and voltages, and the workload. The CPU offers the richest set of power states. For single-core CPUs, these are the C states (inactive) and the P and T states (active) as shown in Fig. 6.

The best known states are the P (performance) states, where P0 refers to the highest frequency state and P1, P2, etc. refer to progressively lower frequency states. A lower frequency allows operation at a lower voltage as well, and thus each P state corresponds to a supported (voltage, frequency) pair as illustrated in Fig. 6. The active power consumption is proportional to the frequency but goes as the square of the voltage; thus a simultaneous voltage and frequency reduction can result in a cubic decrease in the power consumption. Furthermore, a lower voltage decreases the leakage current as well and thus also results in lower static power consumption. For this reason, dynamic voltage-frequency switching (DVFS) is among the most explored power management technologies. Many papers have explored compiler and OS assisted use of DVFS to optimize power consumption [70,109,83].
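The cubic relationship mentioned above follows from the textbook first-order model of dynamic CMOS power, shown here for reference:

    P_{\mathrm{dyn}} \;\approx\; a\,C_{\mathrm{sw}}\,V^{2} f, \qquad
    f \propto V \;\Rightarrow\; P_{\mathrm{dyn}} \propto f^{3}, \qquad
    E_{\mathrm{per\,op}} \;\propto\; \frac{P_{\mathrm{dyn}}}{f} \;\propto\; V^{2},

where a is the activity factor and C_sw the switched capacitance. The energy-per-operation form also shows why the benefit fades once V can no longer be lowered: frequency-only scaling stretches execution time without reducing the energy of the work itself.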
Although transitions are possible from any P state to another, a significant transition latency may be involved depending on the implementation. In particular, the implementation needs to allow not only for the underlying hardware latencies, such as the voltage settling time and the locking time to a new frequency, but also for software overheads such as ACPI table lookup, user-kernel mode transition, and running the decision algorithm. It is crucial to consider these switching overheads in estimating the power-performance tradeoff due to P state usage.

The successive generations of the semiconductor technology not only reduce feature size but also operating voltages. The upcoming 22 nm technology would likely operate at 1.1 V, which leaves very little room for further lowering of voltages. Consequently, future P states are likely to be primarily a frequency play and thus not very attractive for reducing power consumption. In fact, it can be argued that with relatively little scope for voltage changes, the so-called "race to halt" strategy may be preferable to DVFS. Race to halt refers to the strategy of finishing up work at the highest possible rate and then moving to an inactive state.

T (throttling) states assert the STPCLK (stop clock) signal every few clock cycles and thus enforce a duty cycle on CPU activity. State T0 corresponds to a 100% duty cycle (i.e., no STPCLK), and states T1, T2, etc. to progressively lower duty cycles as shown in Fig. 6. T states are intended primarily for thermal control (e.g., to ensure that the junction temperature does not become too high) and may use long STPCLK periods. The CPU stalls introduced by T states are usually a performance killer, but performance is of secondary importance in thermal protection scenarios.
Fig. 6. System vs. CPU power states.

The basic inactive states are denoted as C0, C1, C2, . . ., where C0 is the active (operational) state and the others are inactive states with increasing power savings and entry/exit latencies. When in the C0 state, the CPU can also transition to P and T states. The interpretation of and transitions between the various C states are architecture dependent. Generally, the C1 and C2 states only turn off the clock whereas higher states may also flush core-specific or even shared caches and may lower the voltage. It is expected that future processors will have even deeper inactive states. The realization of the various inactive states, and hence the power savings they offer, is vendor dependent. For example, in some current processors the interesting C states are C1, C3, and C6, with power consumption (as a percentage of C0 power consumption) in the range of 50%, 25%, and 10%, respectively, and transition latencies of the order of 10 ns, a few microseconds, and about 10 microseconds, respectively.
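An idle-governor style decision based on these numbers can be sketched as follows; the state table simply mirrors the rough figures quoted above, and the margin factor and latency cap are assumptions of the sketch.

    C_STATES = [
        # name, power as a fraction of C0 power, entry+exit latency in microseconds
        ("C1", 0.50, 0.01),
        ("C3", 0.25, 3.0),
        ("C6", 0.10, 10.0),
    ]

    def pick_c_state(predicted_idle_us, latency_limit_us=None, margin=3.0):
        """Deepest state whose latency fits both the idle prediction and any QoS cap."""
        best = None
        for name, rel_power, latency_us in C_STATES:
            if latency_us * margin > predicted_idle_us:
                continue            # idle period too short to amortize the entry/exit cost
            if latency_limit_us is not None and latency_us > latency_limit_us:
                continue            # wake-up latency would violate responsiveness needs
            best = name             # states are listed shallow to deep
        return best or "C0-poll"

    # e.g. pick_c_state(20.0) -> "C3", pick_c_state(5.0) -> "C1"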
In case of a multi-core CPU, most of the above states apply to individual cores as well. However, there are some important differences, primarily relating to architectural considerations. For example, if all cores lie on the same voltage plane, they can only allow independent frequency control and thus limit the "P" states for cores. We refer to core states using "C" as a prefix, e.g., core C states are called CC states, core P states CP states, etc. The overall or package-level CPU states are still meaningful and indicate how the non-core, package level logic should be handled. This logic includes the core interconnect, integrated memory controller, shared cache, etc. Clearly, the package state can be no lower power than the highest power core state. For example, if some cores are in the CC6 state while others are in CC3, the package state will generally be set as C3 and suitable low power states to use for the non-core components can be decided accordingly. A CPU package state could imply more global actions as well, which we discuss under integrated control.

7.1.2. Memory power states

Memory is composed of a number of DIMMs, and the memory power consumption is a function of the number of DIMMs, DIMM size, channel speed, and channel utilization. Here we focus on DIMMs based on the popular double-data rate (DDR) technology, which has evolved from DDR1 to DDR2 to the now proliferating DDR3. Although DDR technology continues to drive down per GB power consumption, the ever increasing appetite for more memory is already making memory power consumption rival CPU consumption. Thus aggressive management of memory power is essential.

Each DDR DIMM is divided into "ranks", with 1, 2, or 4 ranks per DIMM. A "rank" is a set of memory devices that can independently provide the entire 64 bits (8 bytes) needed to transfer a chunk over the memory channel that the DIMM is connected to. The most common server DIMMs are dual-rank with ECC (error correcting code) enabled and involve nine x8 devices per rank. Each DDR3 device is internally organized as a set of 8 "banks" that can be accessed in parallel. Memory controllers often support multiple channels, each allowing one or more DIMMs and capable of independent data transfer. Since the data from all ranks of all DIMMs on a channel must flow over that channel, the ranks can be lightly utilized even if the channel is quite busy.

As its name implies, a DDR DRAM transfers 8 bytes (64 bits) of data on both edges of the clock. It thus takes four clock cycles to transfer a typical 64 byte cacheline. DDR3 can support clock frequencies of up to 1 GHz, which translates into 16 GB/s per channel, which can be quite plentiful. Unfortunately, DRAM latencies have stayed almost constant even as the clock rates have increased substantially. This trend is expected to continue in the foreseeable future and is often referred to as the memory wall phenomenon.
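The 16 GB/s figure follows directly from the channel parameters just described:

    8\ \mathrm{bytes/transfer} \times 2\ \mathrm{transfers/clock} \times 1\ \mathrm{GHz}
    = 16\ \mathrm{GB/s}, \qquad
    \frac{64\ \mathrm{byte\ cacheline}}{8\ \mathrm{bytes} \times 2\ \mathrm{transfers/clock}}
    = 4\ \mathrm{clocks}.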
DDR technology includes several mechanisms to reduce power consumption. The most basic is the use of lower voltages in newer versions (e.g., 1.5 V in DDR3 vs. 1.8 V for DDR2, and 1.2 V for future DDR4). Through dynamic management of electrical terminations such as input buffer termination (IBT) and on-die termination (ODT), the technology minimizes the standby power consumption and improves signal integrity at the cost of some additional latency. In particular, with 2 GB DIMMs the standby power consumption is approximately 3 W whereas the 100% loaded power consumption is approximately 9.8 W. Several low-power idle states have also been defined and are briefly explained below:

- Fast CKE: In this state (denoted CKEf) the clock enable (CKE) signal for a rank is de-asserted and the I/O buffers, sense amplifiers, and row/column decoders are all deactivated. However, the DLL (delay-locked loop) is left running.
- Slow CKE: In this state (denoted CKEs), the DLL is also turned off, resulting in lower power consumption but higher exit latency. If all ranks of a DIMM are in slow CKE mode, the DIMM register can also be turned off, thereby leading to even lower power consumption without any additional latency. Further circuitry can be turned off if all DIMMs on a channel are in CKEs mode. We denote these modes as CKEsr.
- Self-refresh: In this mode much of the DRAM circuitry is placed in an inactive low power mode and the DRAM refreshes itself rather than under the control of the memory controller. There are two such modes: S/Rf (self-refresh fast), where the phase locked loop (PLL) required for synchronizing the DRAM clock with the external clock signal remains on, and S/Rs (self-refresh slow), where it is turned off.

Fig. 7. DDR memory power states.

Fig. 7 shows the power consumption and exit latencies of the various DRAM power modes using a "waterfall diagram" (the stated numbers are for a 2-rank, 2 GB DDR3/1333 DIMM with 2x refresh rate and could vary a lot from vendor to vendor). It is seen that CKEf can save 1.5 W per DIMM with only four DRAM clocks (dclks) of latency. On the other hand, CKEs saves only 0.6 W more with considerably larger latency. It would appear that the right strategy would be to use CKEf frequently and, during longer idle periods, promote the state from CKEf to CKEs. Unfortunately, transitions between the two modes are expensive, adding some 12 dclks of latency.

It is also seen that the self-refresh latencies are extremely long, even without the PLL off. Thus, self-refresh is not useful in the normal (C0) operating mode; self-refresh is typically employed when the entire socket is in a low-power mode such as C6.

In the above, we considered the inactive power states of memory. By running DRAM at lower clock rates, the active power can be reduced significantly at the cost of lower memory BW. The clock rate reduction often does not increase access latencies significantly since the RAS, CAS and page close operations can be performed in fewer clocks at lower clock rates. This situation is a result of the "memory wall" phenomenon discussed above.

7.1.3. Interconnection network and links

Modern computer systems use a variety of networking media both "inside the box" and outside. The out-of-box interconnects such as Ethernet, Fiber-Channel, and InfiniBand are well known, and their ports can consume a substantial percentage of the IT power in a large data center. However, internal interconnects can also collectively consume a significant percentage of platform power due to their number, high switching rates and increasing silicon wire resistance [98]. A modern platform sports a number of interconnects including PCI-Express (PCI-E), the link between the "south-bridge" and "north-bridge", the processor-memory interconnect (such as QuickPath(TM) or HyperTransport(TM)), and inter-core interconnects. Intelligent power management of such interconnects also becomes crucial for platform power reduction.

As with other devices, the active power modes in links result from the provision of multiple speeds. Depending on the type of link, the speed change may be either a matter of simply changing the clock rate or a switch-over to a different PHY. An example of the latter is Ethernet operating at standard rates such as 100 Mb/s or 1 Gb/s. Such a PHY switch can be extremely slow. Even the clock rate changes can be quite slow since they invariably require a handshake to ensure that both directions are operating at the same clock rate. The energy-efficient Ethernet effort [81,72] is considering a rapid PHY switching scheme, but even this can be extremely slow for fine granularity control.
idle (or L0 state) power consumption and the entry/exit latencies are in the tens to hundreds of ns range. L1 power consumption is generally much smaller, at the cost of exit latencies in the range of several microseconds. These very high exit latencies make L1 unsuitable for internal links in the C0 state. Whereas most existing link control algorithms deployed are reactive in nature (e.g., exit from the low power state when traffic does arrive), it is possible to consider proactive algorithms that attempt to be ready for arriving traffic. The efficacy of combining reactive and predictive techniques for link state control has been investigated in [86]. More sophisticated schemes, appropriate for out-of-box links, have also been explored [79].
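The combination of reactive and predictive control mentioned above can be sketched as follows. This is not the algorithm of [86]; the exponentially weighted idle-gap predictor and the wake-up margin are assumptions used only to illustrate how a proactive exit could hide the L0s exit latency.

```python
# Sketch of combined reactive/predictive control of a link low-power state.
# Not the scheme of [86]; the EWMA gap predictor and margins are assumptions.

class LinkLowPowerController:
    def __init__(self, exit_latency_ns: float = 50.0, alpha: float = 0.25):
        self.exit_latency_ns = exit_latency_ns   # assumed L0s exit latency
        self.alpha = alpha                       # EWMA smoothing factor (assumed)
        self.predicted_gap_ns = 0.0              # predicted length of idle gaps
        self.in_low_power = False
        self.idle_since_ns = None

    def on_idle(self, now_ns: float):
        """Reactive entry: drop into L0s as soon as the transmit queue drains."""
        self.idle_since_ns = now_ns
        self.in_low_power = True

    def on_traffic(self, now_ns: float):
        """Reactive exit: leave L0s when traffic arrives; update the gap predictor."""
        if self.idle_since_ns is not None:
            gap = now_ns - self.idle_since_ns
            self.predicted_gap_ns += self.alpha * (gap - self.predicted_gap_ns)
        self.in_low_power = False
        self.idle_since_ns = None

    def poll(self, now_ns: float):
        """Predictive exit: wake the link shortly before the predicted end of the
        current idle gap so that the exit latency is hidden from arriving traffic."""
        if self.in_low_power and self.idle_since_ns is not None:
            expected_end = self.idle_since_ns + self.predicted_gap_ns
            if now_ns >= expected_end - self.exit_latency_ns:
                self.in_low_power = False
```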
7.1.4. Storage media

The "data tsunami" problem discussed earlier means that the power consumption in storage media will remain a significant percentage of data center power consumption. Yet, unlike CPU and memory, traditional magnetic disks do not allow aggressive power management. With magnetic disks, much of the power consumption is related to simply spinning the disk. Dynamically changing the RPM of hard disks (DRPM) has been proposed as one method of disk power control [47]. However, at low rotational speeds, applications will experience substantially higher latencies and potentially unsatisfactory performance, even if the disk utilization is low. Furthermore, the implications of dynamically varying rotational speeds on the reliability of disks are still unclear.

7.2. Spatial control

7.2.1. CPU cores

The package states introduced in Section 7.1 can be considered as a special form of spatial control that relates to all cores. However, several other forms of spatial control can be used in interesting ways:

1. If nearby cores are inactive, it may be possible to run at a frequency above that of the CP0 state (potentially by raising the voltage above the CP0 level). This is referred to as turbo mode and is useful for workloads that do not scale well with the number of cores.
2. Ideally, running a core constantly in CP0 should not result in any thermal events (e.g., the PROCHOT signal being asserted). However, ensuring this requires substantial margins in chip, package and heat-sink design. Instead, it may be easier to keep a few unused cores and rotate the workload among them based on some thermal threshold. This is normally referred to as core hopping.
3. With many cores available in future CPUs, it is expected that the cores will not be well utilized. In this case it helps to consolidate the workload on a few cores that are evenly distributed on the chip, run them in CP0 state and put the others in deep sleep mode. This is the traditional width control applied to CPU cores; a simple sketch of this option appears at the end of this subsection.

In practice, it is possible to combine these and other spatial mechanisms in order to match the CPU power and performance profile to the applications. Several papers have considered spatial control, but primarily using active states. For example, Ref. [107] considers a closed loop control for multi-core processors that keeps temperature below a certain limit. Reference [85] considers DVFS control of cores to meet given power budgets.
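The width control option (item 3 above) can be sketched as a simple utilization-driven decision: keep just enough cores in CP0 to serve the offered load at a target per-core utilization, spread them across the die, and park the rest. The target utilization and hysteresis margin below are assumptions, not values from any particular processor.

```python
# Illustrative core "width control": the number of cores kept in CP0 tracks the
# offered load; the remaining cores are parked in a deep sleep state. The target
# utilization and hysteresis values are assumptions.

import math

def choose_active_cores(total_utilization: float, num_cores: int,
                        current_active: int, target_util: float = 0.7,
                        hysteresis: int = 1) -> int:
    """total_utilization is the summed per-core utilization (0..num_cores)."""
    needed = min(num_cores, max(1, math.ceil(total_utilization / target_util)))
    # Waking/parking cores is not free, so ignore changes of +/- `hysteresis`.
    if abs(needed - current_active) <= hysteresis:
        return current_active
    return needed

def spread_evenly(active: int, num_cores: int) -> list:
    """Spread the active cores across the die (as suggested in item 3) to avoid
    concentrating heat; simple fixed-stride selection of core ids."""
    stride = max(1, num_cores // active)
    return [i * stride for i in range(active)]

if __name__ == "__main__":
    active = choose_active_cores(total_utilization=3.2, num_cores=16, current_active=8)
    print(active, "cores in CP0, placed at", spread_evenly(active, 16))
```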
7.2.2. Interconnection links

Most of the link types described earlier are now based on bit-serial technologies with differential signaling where the link bandwidth is scaled by running multiple "lanes." Such links allow dynamic width changes wherein certain lanes can be put in low power modes to trade off power consumption against bandwidth. A highly desirable feature of width control is that so long as some lanes are active, the non-zero communication bandwidth significantly reduces the impact of the high latencies associated with the state change.

A dynamic width control algorithm has to operate within the constraints of feasible widths associated with the underlying link hardware. For example, for a x10 link, the supported widths may be only x10, x4, x2, and x1. As with temporal control, both reactive and proactive techniques can be used for link width control. In [87], we discuss a complete algorithm called DWCA (dynamic width control algorithm) and show that it easily outperforms the link power state control algorithm in terms of latency impact and power savings. Note that when there is no traffic to transmit, the width can and should be reduced down to zero. Thus, DWCA does include link power state control as a special case.
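A rough sketch of width selection in this spirit is given below. It is not the DWCA of [87]; it merely picks the narrowest feasible width that leaves some bandwidth headroom, widens aggressively, narrows cautiously, and drops to width zero when there is no traffic.

```python
# Utilization-driven selection over the feasible link widths (x10/x4/x2/x1/0).
# A simplified sketch in the spirit of dynamic width control, not the DWCA of [87].

FEASIBLE_WIDTHS = [0, 1, 2, 4, 10]        # lanes; 0 = all lanes in a low-power state

def choose_width(offered_load: float, current_width: int, headroom: float = 0.2) -> int:
    """offered_load: demanded bandwidth as a fraction of the full x10 bandwidth.
    The assumed 20% headroom absorbs bursts while extra lanes are re-activated."""
    if offered_load <= 0.0:
        return 0                           # no traffic: width can drop to zero
    required_lanes = offered_load * max(FEASIBLE_WIDTHS) * (1.0 + headroom)
    for width in FEASIBLE_WIDTHS[1:]:
        if width >= required_lanes:
            if width < current_width:
                # Narrow only one step at a time: width changes involve a slow
                # handshake, and being too narrow hurts latency immediately.
                idx = FEASIBLE_WIDTHS.index(current_width)
                return FEASIBLE_WIDTHS[max(1, idx - 1)]
            return width
    return max(FEASIBLE_WIDTHS)            # widen to full width under heavy load

if __name__ == "__main__":
    print(choose_width(0.0, current_width=10))   # -> 0
    print(choose_width(0.15, current_width=10))  # -> 4 (one step narrower)
    print(choose_width(0.9, current_width=2))    # -> 10 (widen immediately)
```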
7.2.3. Memory and secondary storage

In terms of spatial control of memory, it is possible to put a subset of ranks in low power mode in a synchronized manner in order to have only a fraction of the ranks active at a time and thereby reduce the power consumption. Ref. [89] discusses a closed loop control scheme that attempts to limit performance degradation for such a synchronized control. More intrusive controls are also possible: for example, one could copy "hot" pages from an entire DIMM to another DIMM and then put the DIMM in deep sleep mode (e.g., S/Rs); however, the effectiveness of such controls needs to be evaluated carefully because of the significant latency, BW usage, and power consumption associated with data movement.

The most basic spatial power control for hard disk drives is to spin down idle disks. Some recent research has also considered shutting down a subset of the disks within RAID arrays to reduce power consumption without hurting performance, by using smarter data placement and dynamic rearrangement [56,58].

The substantial storage related power consumption in data centers can be alleviated by SSDs. As shown in Table 1, all NVRAM technologies consume an order of magnitude lower power than hard disk drives, and virtually no idle power. The drawbacks inherent in most solid-state technologies with respect to their lifetimes and higher costs are likely to limit the extent of the benefits they bring. As technological breakthroughs in solid-state technologies improve their lifetime and performance, they should play an increasing role in reducing storage subsystem power. Meanwhile, complementary efforts are likely to continue on various ways to make disk-based storage more power-efficient. Novel data layout techniques along with
in-memory caching/buffering schemes can be used to reduce seek related power consumption. An example of this is the use of fractal trees [42] instead of the traditional B-trees in databases. The data layout could also be altered dynamically or data could be migrated in order to facilitate shutting down subsets of disks while attempting to minimize the impact on performance [58]. As an example, data that is less performance critical could be stored on slower disks that consume less power.

7.3. Integrated power management

Integrated control is concerned with coordinated management of heterogeneous components or devices, building upon the individual temporal and spatial control knobs discussed above. Integrated power management can be viewed along two dimensions: (i) horizontal, which coordinates the control knobs within one level of the hardware/software stack, and (ii) vertical, which coordinates the operation across different layers.

The CPU package state introduced earlier can be thought of as a composite state that involves integrated power management of not only the cores but also other non-core entities such as the core interconnect, shared cache and integrated memory controller. In particular, if all the cores are in a state where their core caches are flushed, the corresponding package state may specify partial flushing of the shared cache as well. A low-power package state could even trigger power state transitions for components outside the package. For example, if both CPU packages are in, say, C6, the interconnect between them could be transitioned to L1 state, perhaps after some suitable delay. However, much of this is currently handled in an ad hoc manner.

Modern processors are beginning to include functionality to exercise the power control options for various components in a coordinated fashion. For example, the power management unit (PMU) is a dedicated micro-controller on newer Intel cores whose sole purpose is to manage the power envelope of the processor (see communities.intel.com/community/openportit/server/blog/2009/04/27/). This design allows for increased flexibility by allowing functionality to be moved to firmware rather than being hardware-based. The control mechanisms implemented by the power control unit are driven by real-time measurements from sensors built into the main cores that monitor temperature, power, and current. There could be a hierarchy of PMUs, with those at the higher levels operating at coarser temporal and spatial granularities and interacting with lower level PMUs. An effective control may require higher level PMU(s) to coordinate with the OOB side that collects a variety of board and system level power/thermal data.

Horizontal integrated control of heterogeneous devices has become an interesting and viable option in secondary storage with the presence of NVRAM technologies. As discussed in Section 5.2, NAND Flash-based drives consume significantly lower power compared to magnetic hard disk drives. They could be used for building more power-efficient storage systems in data centers where they are used for selective storage of performance-critical and popular content, thereby providing increased opportunities for turning off the power-hungry hard disk drives. As discussed earlier, such a design needs to carefully consider the higher costs of SSDs as well as the reliability problems inherent in them.

Vertical co-ordination must deal with appropriate mappings between the control options defined across different layers. In particular, the power management done by the PMU could benefit from "hints" provided by the OS or the application. The OS can specify to the CPU the maximum tolerable latency via the "MWAIT" instruction so that the PMU can do a better job of choosing the C states. Similar arguments hold at other levels, e.g., the application or middleware providing hints to the OS, and for the control of other types of states.

7.4. Power conversion and distribution

As mentioned in Section 2.4, a significant amount of power is wasted in the power conversion and distribution infrastructure in the data center. Several technologies are currently being considered in order to make this infrastructure more efficient. First, if the distribution voltage to racks can be raised from the current 110–220 V to 400–440 V, it will automatically reduce losses. However, there are safety and insulation issues with "high voltage data centers" that need to be worked out. Second, power conversion losses can be trimmed by reducing the number of times AC–DC conversion takes place in the data center. In particular, after an initial conversion to DC, all further conversion and distribution can stay DC. Both of these changes are rather radical and feasible only in new data centers.

A different approach is to make server and client power supplies more energy efficient and smarter. By some estimates there are currently more than 10 billion power supplies in use in the world [91]. Fig. 8 shows the efficiency of high efficiency and normal (low-efficiency) power supply units (PSUs) as a function of load. It is seen that at low loads, the PSU efficiency can be quite poor. Most servers in data centers run at rather low utilization and dual redundant power supplies are often used for reliability, thereby resulting in a rather low sustained load and efficiency. Furthermore, PSU inefficiency applies to the entire input power drawn by the server, which means the wasted watts could be significant at all load levels. A secondary effect of PSU inefficiency is the heat generation that PSU fans need to remove.

It is clear that technologies that can maintain high PSU efficiency at all load levels are highly desirable. Phase shedding smart PSUs are one such solution. Such a power supply uses some number N > 2 of phases, of which only n ≤ N phases are active simultaneously. The change can be initiated either by the PSU itself based on the power drawn or via a change signal provided by the control software running either on the BMC or the main CPU. On each change, the phases need to be rebalanced. For example, a 450 W power draw on a 6 phase power supply should have 60° separation between phases, with each phase delivering 75 W RMS power. Changing the number of phases is necessarily very slow since the rebalancing will require at least one cycle (16.7 ms at 60 Hz). Furthermore, the digital interface to the PSU and power/temperature sensors tends to be a rather slow polling based interface. This brings in challenges in accurate control due to significant lag.
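The phase-count decision and the rebalancing arithmetic described above can be sketched as follows. The 450 W / 6-phase example (75 W per phase, 60° apart) comes from the text; the per-phase load band used to decide when to shed phases is an assumption.

```python
# Sketch of phase shedding for a smart PSU: keep only as many phases active as
# needed to hold the per-phase load in an efficient band, then rebalance the
# remaining phases evenly. The 100 W per-phase upper bound is an assumption.

def choose_phase_count(power_draw_w: float, n_max: int,
                       per_phase_max_w: float = 100.0) -> int:
    """Smallest n (1..n_max) that keeps the per-phase load at or below the
    assumed efficient upper bound; falls back to all phases at high draw."""
    for n in range(1, n_max + 1):
        if power_draw_w / n <= per_phase_max_w:
            return n
    return n_max

def rebalance(power_draw_w: float, n_active: int):
    """Equal sharing after a phase change: per-phase RMS power and phase spacing.
    Note that applying this takes at least one AC cycle (16.7 ms at 60 Hz)."""
    return power_draw_w / n_active, 360.0 / n_active

if __name__ == "__main__":
    print(rebalance(450.0, 6))                    # (75.0, 60.0), as in the text
    n = choose_phase_count(120.0, n_max=6)
    print(n, "phases ->", rebalance(120.0, n))    # 2 phases at 60 W, 180 degrees apart
```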
Fig. 8. Load vs. efficiency for power supplies.

In addition to PSU adjustments, server-side adjustments may also be needed. If the PSU detects a loss of input power, a quick notification to the server/BMC can start a shutdown of inessential operations and devices to conserve power. In the extreme case where there is no UPS (but backup generators are available), the backup power can take 10–15 s to come on and it may be necessary to start the shutdown by committing information to non-volatile storage. Even with a UPS, it helps to reduce the required UPS capacity, and in this case servers need to take a variety of actions (e.g., shutting down most cores in a CPU or putting them in the lowest P state) in order to quickly match the drawn power to the installed UPS capacity. As sustainability and data center cost issues grow, good solutions for adapting drawn power to available power will become more important.
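A minimal sketch of this kind of emergency reaction is shown below: on a loss-of-input-power notification, progressively more disruptive actions are taken until the drawn power fits the installed UPS capacity. The action list and the per-action savings are assumptions chosen only to show the structure of such a policy.

```python
# Sketch of emergency power shedding after a loss-of-input-power notification.
# The actions and their assumed savings are illustrative, not measured values.

SHED_ACTIONS = [            # ordered from least to most disruptive
    ("force lowest P-state on all cores",   60.0),   # assumed W saved per action
    ("park all but one core per socket",    40.0),
    ("spin down idle disks",                20.0),
    ("commit state and halt optional jobs", 30.0),
]

def plan_shedding(drawn_power_w: float, ups_capacity_w: float):
    """Return the shortest prefix of SHED_ACTIONS expected to bring the drawn
    power under the UPS capacity, and whether that target is reachable."""
    plan, remaining = [], drawn_power_w
    for action, saving_w in SHED_ACTIONS:
        if remaining <= ups_capacity_w:
            break
        plan.append(action)
        remaining -= saving_w
    return plan, remaining <= ups_capacity_w

if __name__ == "__main__":
    plan, ok = plan_shedding(drawn_power_w=300.0, ups_capacity_w=220.0)
    print(plan, "-> sufficient" if ok else "-> full shutdown required")
```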
Next, we consider voltage regulators (VRs). Traditionally, voltage regulators are located on the motherboard and help cope with the proliferation of different voltages required by various components. VRs do DC to DC voltage conversion by modulating the input voltage with a duty cycle and passing it through a low pass filter. The process is often only about 85–90% efficient, with the efficiency dropping with the load as in power supplies. Phase shedding VRs have been proposed as a solution, but suffer from issues similar to those for power supplies (besides being expensive). A coordinated control of multiple phase shedding VRs can be quite challenging since different VRs have different time constants.

7.5. Cooling infrastructure

Traditional data center cooling infrastructure can be very expensive and consumes 25% or more of the total data center power. It also often does not work very efficiently. Evolving cooling technologies emphasize more localized cooling or try to simplify the cooling infrastructure.

Chillerless, ambient or "free" cooling solutions do away with chiller plants, which can be expensive, take up a lot of space, consume energy, and waste a lot of water in the form of evaporation. These solutions are "open-loop" in that cooler ambient air is taken in and hot air is expelled from the building. Depending on the local weather, ambient cooling may result in higher temperature operation. Some large-scale studies indicate that ambient cooling can reduce cooling costs significantly without degrading reliability. For example, in a proof-of-concept data center operated by Intel, ambient cooling with 100% air exchange at up to 90 °F was used without any humidity control and minimal air filtration. The result was 67% power savings using ambient cooling 91% of the time [84].

In order to compensate for the higher temperature of the air drawn from outside, ambient cooling generally needs to circulate a larger volume of air through the data center. This requires the design of larger and/or more fans which, in turn, consume more energy. This increase in energy expended towards circulating the larger volume of air must be carefully traded off against the reduction due to getting rid of the CRAC unit. In order to ease the movement of a large volume of air (and thereby reduce energy consumption), rack occupancies can be reduced, but this increases the real-estate cost of the data center.

More localized cooling solutions may use cooling directly built into a rack. Such solutions can be much more efficient in that they place cooling next to the heat source as a vertical unit attachable to the side of a rack. Modular cooling may augment or replace the building or room level cooling. Modular solutions may also be assisted with temperature and humidity sensors along with variable-speed fans and humidity control. Modular solutions pose interesting load distribution problems; in particular, the optimal strategy is to concentrate all load on a certain number of racks operating at some optimal cooling level whereas servers in other racks are placed in low-power mode or shut down and left without any cooling.

Moving down the spatial hierarchy, smarter cooling solutions have emerged at the chassis, enclosure, and server level in the form of variable-speed fans modulated by temperature measurements. Server level fans may be augmented or replaced by shared chassis and/or rack-level fans. With the decreasing size of servers and multi-core CPUs, the challenge of moving a sufficient amount of air through the server requires multiple small, high-velocity fans, which bring in noise issues. Also, modeling the thermal and cooling behavior of tight enclosures with multiple heat generating elements and shared fans/air-flow becomes extremely difficult. Much of the current design methodology depends upon empirical models of thermal coupling between various components based on typical air-flow patterns; more analytic design methods are required but can be very difficult.

7.6. Power/thermal management challenges

7.6.1. Measurement and metrics

Power measurement capabilities exist both within most modern servers and in power distribution units (PDUs). To enable provisioning and dynamic control solutions that can modulate power consumption within different parts of the data center in desirable ways, it is important to understand the dependency of power usage upon the utilization of various resources within a data center. Another important issue is to predict the power needs of an aggregate such as a rack of servers. Some preliminary work on this, which relies on deriving probability distributions of the power and resource consumption of individual workloads using offline profiling and then using standard statistical techniques to aggregate them, is contained in [74].
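The aggregation step can be illustrated with a small Monte Carlo sketch: given per-server power samples from offline profiling, the rack-level requirement is taken as a high percentile of their summed draw rather than the sum of the individual peaks. This is only a sketch of the general idea, not the method of [74]; in particular it assumes the servers' draws are independent, which real workloads may violate.

```python
# Monte Carlo sketch of aggregating per-server power profiles into a rack-level
# provisioning estimate. Illustrative only (independence is assumed); not the
# method of [74].

import random

def rack_power_percentile(per_server_samples, trials=10000, percentile=0.99):
    """per_server_samples: one list of profiled power samples (W) per server.
    Returns the chosen percentile of the simulated total rack draw."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.choice(samples) for samples in per_server_samples))
    totals.sort()
    return totals[int(percentile * (len(totals) - 1))]

if __name__ == "__main__":
    random.seed(1)
    profile = [150, 155, 160, 170, 200, 230, 250]   # assumed per-server samples (W)
    rack = [profile] * 20                           # 20 identical servers (assumed)
    print("99th percentile rack draw:", rack_power_percentile(rack), "W")
    print("sum of per-server peaks:  ", 20 * max(profile), "W")   # naive provisioning
```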
Temperature measurement facilities exist at various levels (starting from chip level) and rely on sensors of various degrees of sophistication. It is often possible to exploit
the measurement time series to predict temperature events into the future, and these predictions can be used for control purposes. A number of recent efforts have devised models and simulation tools for capturing thermal effects ranging from the chip level [101], server, disks [80], and rack [75], to the room level [99]. It would be desirable to develop integrated tools for modeling and prediction of thermal phenomena at these multiple spatial levels and their interactions. Additionally, models providing different tradeoffs between accuracy and computational overheads are also important.

The notion of energy proportionality has been recognized as a desirable objective; it requires that some useful notion of performance scale linearly with the power consumed [73]. True energy proportionality is clearly unachievable since most devices have significant idle mode power consumption and any transition to a lower power mode incurs both a performance overhead (entry/exit latency) and a power overhead (power consumption during entry and exit). However, the concept is useful in that it provides an idealized behavior to target. Server virtualization coupled with efficient migration techniques provides a way to consolidate the workload on the fewest number of servers so that the rest of them can be shut down. This provides one way of approximating the energy proportionality ideal across a set of servers in the data center [105].
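With the usual linear server power model P(u) = P_idle + (P_peak − P_idle)·u, the benefit of consolidation can be seen directly, as sketched below: packing the aggregate load onto the fewest servers and shutting down the rest approximates proportionality at the ensemble level. The power figures and utilization cap are assumptions, and migration overheads are ignored.

```python
# Ensemble-level energy proportionality via consolidation, using a linear
# per-server power model. Power figures and the utilization cap are assumptions;
# migration cost is ignored.

import math

P_IDLE_W, P_PEAK_W = 150.0, 300.0     # assumed per-server idle and peak power
MAX_UTIL = 0.8                        # assumed utilization cap after consolidation

def server_power(u: float) -> float:
    """Linear model: idle power plus a utilization-proportional component."""
    return P_IDLE_W + (P_PEAK_W - P_IDLE_W) * u

def ensemble_power(total_load: float, num_servers: int, consolidate: bool) -> float:
    """total_load: aggregate demand expressed in 'fully busy server' equivalents."""
    if consolidate:
        active = min(num_servers, max(1, math.ceil(total_load / MAX_UTIL)))
        return active * server_power(total_load / active)   # the rest are shut down
    return num_servers * server_power(total_load / num_servers)

if __name__ == "__main__":
    for load in (2.0, 5.0, 10.0):     # aggregate load on an assumed 20-server pool
        spread = ensemble_power(load, 20, consolidate=False)
        packed = ensemble_power(load, 20, consolidate=True)
        print(f"load {load:>4}: spread over all = {spread:6.0f} W, "
              f"consolidated = {packed:6.0f} W")
```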
In order to better focus efforts on data center energy efficiency, it is important to develop meaningful metrics relating to cost, energy, cooling and performance. Currently the only widely used metric is PUE (power use efficiency), which measures the total power consumed by the data center divided by the IT power. Although useful, PUE is a rather crude metric. It is easy to define a PUE-like metric at almost any level; however, since efficiencies generally improve with load, the metric would be load dependent. Moreover, power efficiency without accounting for performance impact may not be very useful. Another popular metric, namely performance per watt, attempts to capture the performance-power tradeoff, but it too is fraught with problems since the meaning of "performance" is application dependent. A related problem is that characterizing the impact of power on performance is often very challenging and there is little in the way of formal models to address this gap [88]. Finally, in a virtualized environment, it should be possible to estimate the power and thermal effects of individual VMs, but this can be very challenging since the VMs can interact in complex ways and their power consumptions don't simply add up. For example, a poorly behaved VM can increase the power consumption of other VMs.

7.6.2. Challenges in integrated control

Although the presence of myriad power knobs offered by various components offers opportunities for power savings and control, it also brings about significant management challenges. How should these knobs be manipulated in a coordinated fashion to achieve desirable aggregate behavior? Answering this question requires ways to model their inter-dependencies and to reason about integrated control. A dual question to address is how many control knobs are desirable. From the last section, it is clear that, especially at the hardware component level, the options for coordinated control are numerous in existing machines. What are the spatial and temporal granularities at which these knobs should operate, and what are appropriate ways to partition functionality between hardware/firmware and various layers of software?

A variety of power and thermal control loops exist in data centers at different levels within the spatial hierarchy, ranging from chip level, component, server and rack to room level and data center. The granularity of decision making ranges from less than a μs at the chip level, to seconds at the server level, and minutes or hours at the room level. Often different control loops work on different objectives such as minimizing average power, peak power or temperature variations. These control loops may be designed independently and may conflict in their control strategies. The required coordination among the loops may need to abide by some specific rules as well. For example, the peak power control loop trying to enforce fuse limits should be given priority over the average power control loop trying to minimize energy consumption. These challenges have started to receive some attention [97,108], but significant additional challenges remain. In particular, different control loops may be designed by different vendors (HW/FW embedded loops vs. those operated by the BMC vs. those in the OS/middleware). An effective control requires well-defined interfaces through which the control loops can cooperate (instead of simply competing).

Since power/thermal effects are a property of a physical resource, power/thermal management is often based on the behavior of the physical resource directly. However, in a virtualized environment, a more intelligent management may be possible by considering the behavior of the individual VMs sharing that physical resource. For example, the VM that consumes the most power, or causes other VMs to consume more power because of its poor cache behavior, may be subject to a tighter control than others. This can be quite challenging since in general it is very difficult to accurately attribute power and thermal effects to individual VMs. Reference [95] defines virtual power states on a per-VM basis which can be manipulated independently, with the net effect mapped to the physical device. However, a good mapping scheme can be quite challenging in general.

7.6.3. Provisioning of power and cooling equipment

In order to cover the worst case situations, it is normal to over-provision systems at all levels of the power hierarchy, ranging from the power supplies within servers [90] to Power Distribution Units (PDUs), Uninterrupted Power Supply (UPS) units, etc. Reference [76] estimates the over-provisioning in Google data centers to be about 40%. This over-estimation is driven by the use of "nameplate" power/thermal requirements of servers, which often assume that the server is not only provisioned with the maximum possible physical resources such as DRAM, IO adapters and disks, but also that all these devices simultaneously operate close to their capacity. In practice, it is extremely rare to find workloads that can simultaneously stress more than two resources. For example, CPU bound workloads typically do very little IO and vice versa. Also, most servers typically run at much lower utilization than 100%.

Such over-provisioning of power increases data center setup and maintenance costs for the power conversion/distribution infrastructure and cooling infrastructure at all levels.
Since up to 3/4th of the power is simply wasted, it also increases the utility costs. Data centers are already beginning to discount the name-plate power of servers by a certain percentage (often 25% or higher) in estimating the power distribution and cooling needs. However, a more accurate estimation of the "discounting" is very useful. Refs. [76,77] attempt to do this for hosted workloads in order to cost-effectively provision the power supply hierarchy. In order to address the drop in efficiency with load, Ref. [92] considers replacing high capacity PDUs with a larger number of smaller capacity PDUs.

As the provisioned power capacity moves closer to the average required capacity, there is a non-negligible probability that the provisioned capacity will occasionally prove to be inadequate. There are two ways to address such a potential deficit: (a) local protection mechanisms, and (b) load migration. Local protection mechanisms attempt to cap power consumption in order to stay within the allowed budget at various levels (e.g., servers, racks, data center). This can be done by exploiting any of the active or inactive power modes; the main difference is that the adaptation is not workload driven, but rather driven by the limits. This requires a completely different set of algorithms and brings in unique problems of its own. For example, normally the P-state switchover is triggered by the processor utilization (as in Intel's demand based switching or AMD's PowerNow schemes). In a power deficient situation, we may want to force the CPU into the P1 state just when the utilization shoots up to 100%. This can result in some undesirable behavior that needs to be managed properly [100]. Load migration refers to simply moving the workload (e.g., a VM) to a different server and must account for migration power and latencies.
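A skeletal version of such a limit-driven local protection loop is sketched below: when the measured draw exceeds the budget, P-states are forced down regardless of utilization, and they are relaxed with some hysteresis once headroom returns. The thresholds are assumptions, and the undesirable interactions noted above (e.g., throttling just as utilization spikes) are not addressed.

```python
# Skeletal limit-driven power capping loop: force deeper P-states when the
# measured draw exceeds the budget, relax with hysteresis when headroom returns.
# Thresholds are assumptions; performance side effects are not handled here.

P_STATES = ["P0", "P1", "P2", "P3"]   # P0 fastest; deeper states draw less power

def cap_step(measured_w: float, budget_w: float, current_index: int,
             restore_margin: float = 0.1) -> int:
    """One control iteration; returns the P-state index applied uniformly to the
    servers under this budget (e.g., a rack)."""
    if measured_w > budget_w and current_index < len(P_STATES) - 1:
        return current_index + 1      # over budget: throttle one step further
    if measured_w < budget_w * (1.0 - restore_margin) and current_index > 0:
        return current_index - 1      # comfortable headroom: relax one step
    return current_index

if __name__ == "__main__":
    index, budget_w = 0, 5000.0
    for measured_w in (5300, 5150, 4900, 4200, 4100):   # assumed rack readings (W)
        index = cap_step(measured_w, budget_w, index)
        print(f"{measured_w} W -> {P_STATES[index]}")
```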
Uneven heat generation and cooling in a data center can be quite inefficient. Rack based cooling solutions discussed above help, but can be expensive and still suffer from imbalances within a rack. An alternate approach is to allow for smart cooling control by sensing hot-spots and adjusting air-flow appropriately [71]. A complementary technique is to balance the heat generation by properly balancing the load among active racks and servers. In this context, temperature aware workload placement [93], cooling aware scheduling [104] and dynamic load migration techniques [102] have been investigated in the literature. The location based services discussed in Section 6.6 can also be exploited for better power/thermal balancing as illustrated in [65]. However, significantly more work remains to be done on the issues of intelligent power distribution, power capping, thermal balancing and energy efficient cooling.
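A greedy thermal-balancing placement in the spirit of the temperature-aware schemes cited above can be sketched as follows: each new piece of work goes to the rack whose projected temperature after placement is lowest. The linear degrees-per-unit-load model is a crude assumption used only to make the sketch self-contained.

```python
# Greedy temperature-aware placement sketch: put new load on the rack whose
# projected temperature is lowest. The linear thermal model is a crude assumption.

def place(load_units: float, rack_temp_c: dict, degc_per_unit: dict,
          temp_limit_c: float):
    """rack_temp_c: rack -> current temperature (C); degc_per_unit: rack -> assumed
    temperature rise per unit of added load. Returns the chosen rack or None."""
    best_rack, best_temp = None, None
    for rack, temp in rack_temp_c.items():
        projected = temp + degc_per_unit[rack] * load_units
        if projected <= temp_limit_c and (best_temp is None or projected < best_temp):
            best_rack, best_temp = rack, projected
    if best_rack is not None:
        rack_temp_c[best_rack] = best_temp     # commit the projected temperature
    return best_rack

if __name__ == "__main__":
    temps = {"rack-a": 24.0, "rack-b": 27.5, "rack-c": 25.0}   # assumed readings
    coeff = {"rack-a": 0.8, "rack-b": 0.5, "rack-c": 0.9}      # assumed degC/unit
    for job in range(4):
        print("job", job, "->", place(1.0, temps, coeff, temp_limit_c=30.0))
```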
7.6.4. Power/thermal issues in DVDCs

The use of server, storage, and network virtualization raises several challenges in power and thermal management of data centers. In particular, if multiple virtualized data centers share the same underlying server infrastructure, it becomes difficult to clearly isolate the power consumption and cooling costs associated with each virtualized data center. At lower levels, virtualization means that the power consumption associated with individual applications or services cannot be accurately measured or predicted, thereby making it difficult to carefully track the energy efficiency of various applications or to charge for services based on energy consumption. While estimating the direct energy consumption of executing a piece of code is hard enough, what is really required is an application's share of the energy costs associated with all aspects of operating a data center including storage access, networking and even cooling.

New challenges related to the coordination of IT and cooling control loops are arising due to the agility and flexibility offered by server virtualization. For example, exploiting virtualization to dynamically modulate the number of active servers based on utilization may create thermal imbalances. This, in turn, may require more cooling and thereby actually increase the overall electricity consumption [94]. Therefore, the algorithm that decides the physical placement of virtual machines should also incorporate projections of the impact of its decision-making on power draw and temperature.

The VICL, as envisioned in Section 1, would allow creation of DVDCs from resources spanning multiple physical data centers. Some significant challenges, in addition to the issues of accounting, would arise in such settings. Given that the price and availability of power could vary across the locations where the physical server patches are located, it would be interesting to examine the problem of constructing DVDCs so as to exploit this aspect in lowering energy costs. There is, of course, a temporal aspect to the problem as well, since energy prices and availability may vary according to overall demand and supply issues that are beginning to be exposed to customers by the utilities and may be forced by greater use of renewable energy. Similarly, the exposure of the carbon footprint of various types of energy supply and carbon based pricing opens up prospects for more sophisticated considerations in designing and operating DVDCs.

8. Conclusions

In this paper we provided a detailed discussion of a variety of issues faced by data centers, the current state of affairs and a vision for the future. We also discussed a variety of challenges that need to be solved in the areas of data center storage, networking, management and power/thermal issues. It is hoped that the article will provide researchers many interesting avenues to explore in realizing the vision of highly scalable, well managed and energy efficient, distributed virtualized data centers of the future.

Acknowledgements

The author is grateful to Dr. Bhuvan Urgaonkar of Pennsylvania State University for his substantial contributions to the Storage and Power/Thermal sections of this paper. He also provided significant comments on other sections of the paper and helped with polishing them. Comments from anonymous reviewers were very helpful in improving the organization and presentation of the paper.

References

[1] C. Belady, In the Data Center, Power and Cooling Costs More Than The IT Equipment it Supports, ElectronicsCooling 13 (1) (2007).
[2] J. Zhen, Five Key Challenges of Enterprise Cloud Computing, <cloudcomputing.sys-con.com/node/659288>.
[3] F.J. Alfaro, J.L. Sanchez, J. Duato, QoS in InfiniBand Subnetworks, IEEE trans. on parallel & distributed systems 15 (9) (2004).
[4] D. Awduche, L. Berger, et al., RSVP-TE: Extensions to RSVP for LSP Tunnels, <http://www.ietf.org/rfc/rfc3209.txt>.
[5] P. Balaji, P. Shivam, P. Wyckoff, D. Panda, High performance user level sockets over Gigabit Ethernet, in: Proc. of IEEE Int. Conf. on Cluster Computing, September 2002, pp. 179–186.
[6] P. Balaji, H.V. Shah, D.K. Panda, Sockets vs RDMA interface over 10-Gigabit networks: an in-depth analysis of the memory traffic bottleneck, Proc. of RAIT Workshop, September 2004.
[7] D. Bergamasco, Ethernet Congestion Manager (ECM), See au-bergamasco-ethernet-congestion-manager-070313.pdf at <www.ieee802.org/1/files/public/docs2007>.
[8] A.L. Caro, J.R. Iyengar, et al., SCTP: A proposed standard for robust Internet data transport, IEEE Computer 36 (11) (2003) 20–27.
[9] M. Casado, M. Freedman, J. Pettit, et al., Ethane: Taking control of the enterprise, in: Proc. of SIGCOMM, August, 2007.
[10] S. Cho, R. Bettati, Aggregated aggressiveness control on groups of TCP flows, in: Proc. of Networking, 2005.
[11] P.J. Crowcroft, P. Oechslin, Differentiated end to end internet services using a weighted proportional fair sharing, Proc. of 1998 ACM SIGCOMM.
[12] P. Desnoyers, Empirical evaluation of NAND flash memory performance, in: Proc. of HotStorage, Big Sky, Montana, 2009.
[13] D. Dunning, G. Regnier, et al., The virtual interface architecture, IEEE Micro 18 (2) (1998) 66–76.
[14] H. Dwivedi, Securing Storage: A Practical Guide to SAN and NAS Security, Addison-Wesley, 2005.
[15] M. Al-Fares, A. Loukissas, A. Vahdat, A scalable, commodity data center network architecture, in: Proc. of 2008 ACM SIGCOMM.
[16] W. Feng, P. Balaji, et al., Performance characterization of a 10-gigabit ethernet TOE, Proc. of HPCA (2005) 58–63.
[17] F. Le Faucheur (Ed.), Multiprotocol label Switching (MPLS) Support of Differentiated Services, <http://www.ietf.org/rfc/rfc3270.txt>.
[18] R. Greenwald, R. Stakowiak, J. Stern, Oracle Essentials, fourth ed., O'Reilly, 2007.
[19] W. Huang, Q. Gao, J. Liu, D.K. Panda, High performance virtual machine migration with RDMA over modern interconnects, in: IEEE Intl. Conf. on Cluster Computing (Cluster'07), Austin, TX, September, 2007.
[20] G. Ibanex, A. Garcia, A. Azcorra, Alternative multiple spanning tree protocol (AMSTP) for optical ethernet backbones, in: Proc. of 29th IEEE Conference on Local Networks, 2004.
[21] H.-W. Jin, P. Balaji, et al., Exploiting NIC architectural support for enhancing IP based protocols on high performance networks, J. Parallel Distributed Comput., in press.
[22] K. Kant, TCP offload performance for front-end servers, in: Proc. of Globecom, 2003, San Francisco, CA.
[23] K. Kant, Towards a virtualized data center transport protocol, in: Proceedings of 2008 INFOCOM Workshop on High Speed Networks, Phoenix, AZ, April, 2008.
[24] K. Kant, Application centric autonomic BW control in utility computing, in: Sixth IEEE/ACM Workshop on Grid Computing, Seattle, WA, November, 2005.
[25] K. Kant, N. Jani, SCTP performance in data center environments, in: Proceedings of SPECTS, July 2005, Philadelphia, PA.
[26] K. Kant, Virtual link: an enabler of enterprise utility computing, in: Proceedings of International Symposium on Parallel Processing & Applications (ISPA), Sorrento, Italy, December, 2006.
[27] C.E. Leiserson, Fat-trees: universal networks for hardware-efficient supercomputing, IEEE Transactions on Computers 34 (10) (1985) 892–901.
[28] J. Liu, J. Wu, D.K. Panda, High performance RDMA-based MPI implementation over infiniband, International Journal of Parallel Programming 32 (3) (2004).
[29] J. Martin, A. Nilsson, I. Rhee, Delay based congestion avoidance in TCP, IEEE/ACM Transactions on Networking 11 (3) (2003) 356–369.
[30] QoS Support in MPLS networks, MPLS/Frame Relay alliance whitepaper, May, 2003.
[31] J. McDonough, Moving Standards to 100 GbE and beyond, IEEE applications and practice, November, 2007.
[32] R.N. Mysore, A. Pamboris, N. Farrington, et al., PortLand: a scalable fault-tolerant layer 2 data center network fabric, in: Proc. of 2009 ACM SIGCOMM.
[33] R. Noronha, X. Ouyang, D.K. Panda, Designing a high-performance clustered NAS: a case study with pNFS over RDMA on InfiniBand, in: Intl. Conf. on High Performance Computing (HiPC 08), December, 2008.
[34] C.B. Reardon, A.D. George, C.T. Cole, I. Ammasso, Comparative performance analysis of RDMA-enhanced Ethernet, in: Workshop on High-Performance Interconnects for Distributed Computing, 2005.
[35] G. Regnier, S. Makineni, et al., TCP onloading for data center servers, in: IEEE Computer, Special Issue on Internet Data Centers, November, 2004.
[36] S-A. Reinemo, T. Skeie, et al., An overview of QoS capabilities in InfiniBand, advanced switching interconnect, and ethernet, in: IEEE Communications Magazine, July, 2006, pp. 32–38.
[37] T. Shanley, InfiniBand Network Architecture, Mindshare Inc., 2002.
[38] Y. Tien, K. Xu, N. Ansari, TCP in wireless environments: problems and solutions, in: IEEE Radio Communications, March, 2005, pp. 27–32.
[39] J.P.G. Sterbenz, G.M. Parulkar, Axon: a high speed communication architecture for distributed applications, in: Proc. of IEEE INFOCOM, June, 1990, pp. 415–425.
[40] M. Wadekar, G. Hegde, et al., Proposal for Traffic Differentiation in Ethernet Networks, See new-wadekar-virtual%20-links-0305.pdf at <www.ieee802.org/1/files/public/docs2005>.
[41] J. Wang, K. Wright, K. Gopalan, XenLoop: a transparent high performance inter-vm network loopback, in: Proc. of 17th International Symposium on High Performance Distributed Computing, Boston, MA, June, 2008.
[42] M.A. Bender, M.F. Colton, B.C. Kuszmaul, Cache-oblivious string B-trees, in: Proceedings of PODS, 2006, pp. 233–224.
[43] P. Cappelletti, An overview of Flash Memories, <www.mdm.infm.it/Versatile/Essderc2007/9-00.pdf>.
[44] F. Chen, D. Koufaty, X. Zhang, Understanding intrinsic characteristics and system implications of flash memory based solid state drives, in: Proceedings of ACM Sigmetrics 2009, Seattle, June, 2009, pp. 181–192.
[45] R. Desikan, S. Keckler, D. Burger, Assessment of MRAM Technology Characteristics and Architectures, TR CS Dept., University of Texas at Austin, 2002.
[46] E. Gal, S. Toledo, Algorithms and data structures for flash memories, ACM Computing Survey 37 (2) (2005) 138–163.
[47] S. Gurumurthi, A. Sivasubramaniam, M. Kandemir, H. Franke, DRPM: dynamic speed control for power management in server class disks, in: Proceedings of ISCA 2003, pp. 169–179.
[48] Intel X25-E SATA Solid State Drive Product Manual, <http://download.intel.com/design/flash/nand/extreme/319984.pdf>.
[49] K. Kant, Role of Compression in Multi-level Memory Systems, <www.kkant.net/download.html>.
[50] H. Kohlstedt, Y. Mustafa, et al., Current status and challenges of ferroelectric memory devices, Microelectronic Engineering 80 (2005) 296–304.
[51] B.C. Lee, E. Ipek, O. Mutlu, D. Burger, Architecting phase change memory as a scalable DRAM alternative, Proc. of ISCA-2009.
[52] S. Lee and B. Moon, Design of Flash-based DBMS: An In-Page Logging Approach, Proc. of ACM SIGMOD, August, 2007, pp. 55–66.
[53] A. Leventhal, Flash Storage Memory, Communications of the ACM (2008) 47–51.
[54] G. Muller, T. Happ, et al., Status and outlook of emerging nonvolatile memory technologies, IEEE Intl. Electron Devices Meeting (2004) 567–570.
[55] D. Narayanan, E. Thereska, A. Donnelly, et al., Migrating server storage to SSDs: analysis of tradeoffs, in: Proc. of 4th European conference on Computer systems (EuroSys 2009), Nuremberg, Germany, pp. 145–158.
[56] E. Pinheiro, R. Bianchini, C. Dubnicki, Exploiting redundancy to conserve energy in storage systems, in: Proc. of ACM SIGMETRICS, 2006, pp. 15–26.
[57] C.H. Wu, T.W. Kuo, L.P. Chang, An efficient B-tree layer implementation for flash-memory storage systems, ACM Trans. Embedded Computing Syst. 6 (3) (2007).
[58] Q. Zhu, Z. Chen, L. Tan, et al., Hibernator: helping disk arrays sleep through the winter, in: Proc. of SOSP, 2005, pp. 177–190.
[59] P. Zhou, Bo Zhao, et al., A durable and energy efficient main memory using phase change memory technology, in: Proceedings of ISCA-2009.
[60] R. Bitirgen, E. Ipek, J.F. Martinez, Coordinated management of multiple interacting resources in chip multiprocessors: a machine learning approach, in: Proceedings of 41st IEEE/ACM International Symposium on Microarchitecture, 2008, pp. 318–329.
[61] C. Brignone et al., Real time asset tracking in the data center, Distributed Parallel Databases 21 (2007) 145–165.
[62] Common Information Model. <www.wbemsolutions.com/tutorials/CIM/cim-specification.html>.
[63] E. Gelene, M. Xiaowen, R. Onvural, Bandwidth allocation and call admission control in high speed networks, IEEE Commun. Mag. (1997) 122–129.
[64] M. Kallahalla, M. Uysal, R. Swaminathan, et al., SoftUDC: a software-based data center for utility computing, IEEE Computer 37 (11) (2004) 38–46.
[65] K. Kant, N. Udar, R. Viswanathan, Enabling location based services in data centers, IEEE Network Mag. 22 (6) (2008) 20–25.
[66] P. Padala, K. Shin, X. Zhu, et al., Adaptive control of virtualized resources in utility computing environments, in: Proceedings of 2007 ACM SIGOPS/EuroSys Conference, Lisbon, Portugal, pp. 289–302.
[67] J. Rao, C.-Z. Xu, CoSL: a coordinated statistical learning approach to measuring the capacity of multi-tier websites, in: Proceedings of International Conference on Parallel and Distributed Processing, April 2008, pp. 1–12.
[68] N. Udar, K. Kant, R. Viswanathan, Asset localization in data center using WUSB radios, in: Proceedings of IFIP Networking, May 2008, pp. 756–767.
[69] KJ. Xu, M. Zhao, J. Fortes, et al., On the use of fuzzy modeling in virtualized data center management, in: Proceedings of International Conference on Autonomic Computing, June 2007, pp. 25–35.
[70] N. AbouGhazleh, D. Mosse, B.R. Childers, R. Melhem, Collaborative operating system and compiler power management for real-time applications, ACM Trans. Embedded Systems 5 (1) 82–115.
[71] C.E. Bash, C.D. Patel, R.K. Sharma, Dynamic thermal management of air cooled data centers, in: Proceedings of 10th Conference on Thermal and Thermomechanical Phenomenon in Electronic Systems, June 2006, pp. 452–459.
[72] F. Blanquicet, K. Christensen, An initial performance evaluation of rapid PHY Selection (RPS) for energy efficient ethernet, in: IEEE Conference on Local Computer Networks, October 2007, pp. 223–225.
[73] L.A. Barroso, U. Holzle, The case for energy-proportional computing, IEEE Computer 40 (12) (2007) 33–37.
[74] J. Choi, S. Govindan, B. Urgaonkar, et al., Profiling, prediction, and capping of power consumption in consolidated environments, in: Proceedings of MASCOTS, 2008, pp. 3–12.
[75] J. Choi, Y. Kim, A. Sivasubramanium, et al., Modeling and managing thermal profiles of rack-mounted servers with thermostat, in: Proceedings of HPCA, 2007, pp. 205–215.
[76] X. Fan, W.-D. Weber, L.A. Barroso, Power provisioning for a warehouse-sized computer, in: Proceedings of 34th International Symposium on Computer Architecture (ISCA), 2007.
[77] S. Govindan, J. Choi, B. Urgaonkar, et al., Statistical profiling-based techniques for effective power provisioning in data centers, in: Proceedings of 4th European Conference on Computer systems (EuroSys 2009), Nuremberg, Germany, pp. 317–330.
[78] S. Graupner, V. Kotov, H. Trinks, Resource-sharing and service deployment in virtual data centers, in: Proceedings of ICDCS Workshop, July 2002, pp. 666–671.
[79] M. Gupta, S. Singh, Dynamic link shutdown for power conservation on ethernet links, in: Proceedings of IEEE International Conference on Communications, June 2007.
[80] S. Gurumurthi, Y. Kim, A. Sivasubramanium, Using STEAM for thermal simulation of storage systems, IEEE Micro 26 (4) (2006) 43–51.
[81] IEEE task group 802.3.az, Energy Efficient Ethernet. <www.ieee802.org/3/az/public/nov07/hays_1_1107.pdf>.
[82] S. Harizopoulos, M.A. Shah, J. Meza, P. Ranganathan, Energy efficiency: the new holy grail of data management systems research, in: Conference on Innovative Data Systems Research, January 4–7, 2009.
[83] T. Heath, E. Pinheiro, et al., Application Transformations for Energy and Performance-Aware Device Management, in: Proceedings of 11th International Conference on Parallel Architectures and Compilation, 2002.
[84] <http://www.itbusinessedge.com/cm/community/news/inf/blog/intel-tests-free-cooling-in-the-data-center/?cs=20341>.
[85] C. Isci, A. Buyuktosunoglu, C.-Y. Cher, et al., An analysis of efficient multi-core global power management policies: maximizing performance for a given power budget, in: Proceedings of IEEE Micro Conference, 2006.
[86] K. Kant, J. Alexander, Proactive vs. reactive idle power control, August 2008, <www.kkant.net/download.html>.
[87] K. Kant, Power control of high speed network interconnects in data centers, in: Proceedings of High Speed Networks Symposium at INFOCOM, 2009.
[88] K. Kant, Towards a science of power management, IEEE Computer 42 (9) (2009) 99–101.
[89] K. Kant, A control scheme for batching DRAM requests to improve power efficiency. <www.kkant.net/download.html>.
[90] C. Lefurgy, X. Wang, M. Ware, Server-level power control, in: Proceedings of International Conference on Autonomic Computing, 2007.
[91] Bob Mammano, Improving Power Supply Efficiency, The Global Perspective, Texas Instruments report available at focus.ti.com/download/trng/docs/seminar/Topic1BM.pdf.
[92] D. Meisner, B.T. Gold, T.F. Wenisch, PowerNap: eliminating server idle power, in: Proc. of ASPLOS, 2009.
[93] J. Moore, J. Chase, P. Ranganathan, R. Sharma, Making scheduling cool: temperature-aware workload placement in data centers, in: Proc. of Usenix Annual Technical Conf., April, 2005.
[94] R. Nathuji, A. Somani, K. Schwan, Y. Joshi, CoolIT: Coordinating Facility and IT Management for Efficient Datacenters, HotPower, 2008.
[95] R. Nathuji, K. Schwan, VirtualPower: coordinated power management in virtualized enterprise systems, in: Proc. of SOSP 2007.
[96] A. Papathanasiou, M. Scott, Energy efficiency through burstiness, in: Proc. of the 5th IEEE Workshop on Mobile Computing Systems and Applications (WMCSA'03), October, 2003, pp. 44–53.
[97] R. Raghavendra, P. Ranganathan, et al., No power struggles: coordinated multi-level power management for the data center, in: Proc. of 13th ASPLOS, March, 2008.
[98] V. Raghunathan, M.B. Srivastava, R.K. Gupta, A survey of techniques for energy efficient on-chip communication, in: Proc. of the 40th Conference on Design Automation, 2003.
[99] L. Ramos, R. Bianchini, Predictive thermal management for data centers, in: Proc. of HPCA, 2008.
[100] P. Ranganathan, P. Leech, D. Irwin, J. Chase, Ensemble-level power management for dense blade servers, in: Proc. of ISCA 2006, pp. 66–77.
[101] T.D. Richardson, Y. Xie, Evaluation of thermal-aware design techniques for microprocessors, in: Proc. of Int. Conf. on ASICs, 2005, pp. 62–65.
[102] R.K. Sharma, C.E. Bash, C.D. Patel, et al., Balance of power: dynamic thermal management for internet data centers, IEEE Internet Computing 9 (1) (2005) 42–49.
[103] T.K. Tan, A. Raghunathan, N.K. Jha, Software architectural transformations: a new approach to low energy embedded software, in: A. Jerraya et al. (Eds.), Book Chapter in Embedded Software for SOC, Kluwer Academic Publishers, 2003, pp. 467–484.
[104] Q. Tang, S.K. Gupta, D. Stanzione, P. Cayton, Thermal-aware task scheduling to minimize energy usage of blade server based datacenters, in: Proc. of Second IEEE Symp. on Autonomic and Secure Computing, October, 2006, pp. 195–202.
[105] N. Tolia, Z. Wang, M. Marwah, et al., Delivering energy proportionality with non energy-proportional systems – optimizing the ensemble, in: Proc. of HotPower, 2008.
[106] V. Venkatachalam, M. Franz, Power reduction techniques for microprocessors, ACM computing surveys 37 (3) (2005) 195–237. <http://www.ics.uci.edu/vvenkata/finalpaper.pdf>.
[107] Y. Wang, K. Ma, X. Wang, Temperature-constrained power control for chip multiprocessors with online model estimation, in: Proc. of ISCA, 2009.
[108] X. Wang, Y. Wang, Co-Con: Coordinated control of power and application performance for virtualized server clusters, Proc. of 17th IEEE Intl. workshop on QoS, Charleston, SC, July 2009.
[109] Q. Wu, M. Martonosi, et al., A dynamic compilation framework for controlling microprocessor energy and performance, in: Proc. of Int. Symp. on Microarchitecture, Barcelona, 2005, pp. 271–282.

Krishna Kant has been with Intel Corp since 1997 where he has worked in a variety of research areas including traffic characterization, security/robustness in the Internet, data center networking, and power/thermal management of computer systems. From 1991 to 1997 he was with Telcordia Technologies (formerly Bellcore) where he worked on SS7 signaling and congestion control. Earlier, he spent 10 years in academia. He received his Ph.D. degree in Computer Science from University of Texas at Dallas in 1981.