
GreenCloud: a new architecture for green data center

Article · June 2009


DOI: 10.1145/1555312.1555319



GreenCloud: A New Architecture for Green Data Center
Liang Liu1, Hao Wang1, Xue Liu2, Xing Jin1, WenBo He3, QingBo Wang1, Ying Chen1
IBM China Research Laboratory1, McGill University2, University of New Mexico3
{liuliang, wanghcrl}@cn.ibm.com1, {xueliu}@cs.mcgill.ca2, {wenbohe}@cs.unm.edu3

ABSTRACT
Nowadays, the power consumption of data centers has a huge impact on the environment. Researchers are seeking effective solutions that reduce data center power consumption while keeping the desired quality of service or service-level objectives. Virtual Machine (VM) technology has been widely applied in data center environments due to its seminal features, including reliability, flexibility, and ease of management. We present the GreenCloud architecture, which aims to reduce data center power consumption while guaranteeing performance from the users' perspective. The GreenCloud architecture enables comprehensive online monitoring, live virtual machine migration, and VM placement optimization. To verify the efficiency and effectiveness of the proposed architecture, we take an online real-time game, Tremulous, as a VM application. Evaluation results show that we can save up to 27% of the energy when applying the GreenCloud architecture.

Categories and Subject Descriptors
C.3 SPECIAL-PURPOSE AND APPLICATION-BASED SYSTEMS

General Terms
Management

Keywords
Green Cloud Computing, Virtualization, Power Saving

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ICAC-INDST'09, June 16, 2009, Barcelona, Spain. Copyright 2009 ACM 978-1-60558-612-0/09/06...$5.00.

1. INTRODUCTION
Recently, cloud computing [2] has attracted considerable attention. Cloud computing is believed to become one of the most important future computing and service paradigms. As stated in [3], a cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers. By this means, customers are able to access applications and data from a "cloud" anywhere in the world, on demand. In other words, the cloud appears to be a single point of access for all the computing needs of consumers.

Though cloud computing technology is not yet mature for the mass market, service providers are actively developing cloud computing platforms that give consumers and enterprises on-demand access regardless of time and location. For example, Amazon Elastic Compute Cloud (EC2) [20] provides a virtualized computing environment which hosts different kinds of Linux-based services. Another example is Microsoft Live Mesh [21], which provides centralized storage for applications and data, so users can access all their information through a Web-based Live Desktop or through their own devices with the Live Mesh software installed.

The Internet Data Center (IDC) is a common form of hosting for cloud computing. An IDC usually deploys hundreds or thousands of blade servers, densely packed to maximize space utilization. Running services on consolidated servers in IDCs gives customers an alternative to running their software or operating their computer services in-house. The major benefits of IDCs include using economies of scale to amortize the cost of ownership and the cost of system maintenance over a large number of machines. With the rapid growth of IDCs in both quantity and scale, the energy consumed by IDCs, directly related to the number of hosted servers and their workload, has skyrocketed [8]. A recent report estimated that the worldwide cost of enterprise power consumption would exceed $30 billion in 2008 and would likely even surpass spending on new server hardware. The rated power consumption of servers has increased by 10 times over the past ten years [1]. This surging demand calls for the urgent design and deployment of energy-efficient Internet data centers.

Many efforts have been made to improve the energy efficiency of IDCs [9-18], including network power management [26], Chip-Multiprocessing (CMP) energy efficiency [10], IDC power capping [16], storage power management solutions [11], etc. Among all these approaches, Virtual Machine (VM) technology has begun to emerge as a focus of research and deployment. VM technology (such as Xen [28], VMWare [32], Microsoft Virtual Server [33], and the new Microsoft Hyper-V technology [34]) enables multiple OS environments to coexist on the same physical computer, in strong isolation from each other. VMs share the conventional hardware in a secure manner with excellent resource management capacity, while each VM hosts its own operating system and applications. Hence, a VM platform can facilitate server consolidation and co-located hosting facilities [4][5][7].

Virtual machine migration, which is used to transfer a VM across physical computers, has served as a main approach to achieving better energy efficiency in IDCs: server consolidation via VM migration allows more computers to be turned off. Generally, there are two varieties [28]: regular migration and live migration. The former moves a VM from one host to another by pausing the original VM, copying its memory contents, and then resuming it on the destination. The latter performs the same logical functionality but without the need to pause the VM's domain for the transition. In general, when performing live migration the domain continues its usual activities and, from the user's perspective, the migration should be imperceptible. This shows the great potential of using VM and VM migration technology to efficiently manage workload consolidation and thereby improve total IDC power efficiency [24].

GreenCloud is an IDC architecture which aims to reduce data center power consumption while at the same time guaranteeing performance from the users' perspective, leveraging live virtual machine migration technology. A big challenge for GreenCloud is to automatically make scheduling decisions on dynamically migrating and consolidating VMs among physical servers, meeting workload requirements while saving energy, especially for performance-sensitive (such as response-time-sensitive) applications, e.g. online gaming servers. Hence, real-time VM consolidation is needed. An important aspect of this work is to utilize the live migration feature of Xen to implement our GreenCloud architecture, which guarantees the real-time performance requirement while reducing the total energy consumption of the IDC. In the design of the GreenCloud architecture, we address several key issues, including when to trigger VM migration and how to select alternative physical machines to achieve optimal VM placement. To verify the effectiveness and efficiency of our approach, we built an exploratory system which monitors comprehensive factors in the data center and intelligently schedules workload migration to reduce unnecessary power consumption in the IDC. We take an online real-time gaming service, Tremulous, as the application on each VM. When our system is triggered to balance performance and power, players enjoying the games hardly notice that their game server workloads are being, or have been, migrated.

The rest of this paper is organized as follows. We first briefly summarize state-of-the-art power management solutions and virtualization technologies for IDCs in Section 2. In Section 3, we give an overview of our GreenCloud architecture. A heuristic algorithm to achieve optimal VM migration is studied in Section 4. We present our prototype implementation of the GreenCloud architecture and performance evaluation results in Section 5. Finally, we conclude the paper and point out future research directions in Section 6.

2. RELATED WORK
In this section, we present the work most pertinent to the discussion of this paper in the fields of power management, virtualization technologies, and cloud computing.

2.1 Cloud Computing
Cloud Computing, which refers to the concept of dynamically provisioning processing time and storage space from a ubiquitous "cloud" of computational resources, allows users to acquire and release resources on demand and provides access to data from processing elements, while abstracting away the physical location and exact parameters of the resources. From the user's point of view, Cloud Computing means scalability on demand, flexibility to meet business changes, and ease of use and management.

Consequently, the number of Cloud Computing platforms has increased, including EC2 [20] and Microsoft Live Mesh [21]. Google has published Google App Engine [22], which allows a user to run Web applications written in the Python programming language and provides a Web-based Administration Console through which the user can easily manage running Web applications. Sun has unveiled Sun network.com (Sun Grid) [23], which enables the user to run different kinds of applications, such as Sun Solaris applications. Microsoft has presented the Azure Services Platform; Azure is designed to provide a wide range of Internet services that can be consumed from both on-premises environments and the Internet [36]. The Azure Services Platform uses a specialized operating system, Windows Azure, to run its "fabric layer", a cluster hosted at Microsoft's datacenters that manages the computing and storage resources of the machines and provisions those resources (or a subset of them) to applications running on top of Windows Azure. Windows Azure has been described as a "cloud layer" on top of a number of Windows Server systems, which use Windows Server 2008 and Hyper-V to provide virtualization of services [35].

For cloud computing platforms, both power consumption and application performance are important concerns. The GreenCloud architecture we present in this paper is an effective method to reduce server power consumption while achieving the required performance using VM technologies.

2.2 Power Management in IDC
There is extensive research on server and IDC power management. Generally, these individual solutions can be grouped into four categories according to their features [1]. The first category is defined by objectives and constraints: it deals with the tradeoffs between performance and energy saving, such as whether transient power budget violations are allowed, with or without additional performance constraints. The second category concerns scope and granularity: for instance, some solutions perform best at the embedded level, while others are more efficient at the rack or datacenter level. Comparing the different management policies deployed in hardware, we notice that although these solutions are limited to the lower levels, they have better access to the system components and smaller time

granularity than solutions at the software level. The third category is specified by the approach used, such as the local server approach, distributed scheduling, or virtual machine consolidation. The last category concerns the mechanisms used by the power management solutions; these include DVFS, turning system components on and off, sleep states, etc.

Dynamic Voltage/Frequency Scaling (DVFS) is one of the key knobs for adjusting server power states. Horvath et al. [27] studied how to dynamically adjust server voltages to minimize total system power consumption while meeting end-to-end delay constraints in a multi-tier Web service environment. Heo et al. [6] later studied how to combine DVFS with server on/off control to further decrease total power consumption. In [16], the authors studied power capping solutions which ensure that the system does not violate a given power threshold. In [25], Barroso et al. studied how to use Chip Multi-Processors (CMP) to achieve power management. In [26], Nedevschi et al. studied how to maximize network power savings via sleeping and rate adaptation. After these individual solutions attacking different aspects of the IDC power management problem were studied, Raghavendra et al. [1] suggested a coordination architecture to regulate the different individual approaches in multi-level power management.

2.3 VM Power Management & Migration
In IDCs, two kinds of virtualization technology have been studied extensively in recent years. One is full-virtualization technology, such as VMWare [32]. Full virtualization, otherwise known as native virtualization, uses a virtual machine monitor (VMM) that mediates between the guest operating systems and the native hardware. Certain protected instructions must be trapped and handled within the hypervisor, because the underlying hardware is not owned by any one operating system but is instead shared among them through the hypervisor. On the other hand, para-virtualization is a very popular technique that has some similarities to full virtualization. This method uses a hypervisor for shared access to the underlying hardware but integrates virtualization-aware code into the operating system itself. This approach obviates the need for any recompilation or trapping because the operating systems themselves cooperate in the virtualization process. A typical para-virtualization product is Xen [28].

While various management strategies have been developed to effectively reduce server power consumption by transitioning hardware components to lower-power states, they cannot be directly applied to today's data centers that rely on virtualization technologies. In [41], Chen et al. proposed on/off control strategies to investigate the optimization of energy saving under desired performance levels.

Nathuji et al. [24] proposed an online power management scheme that supports the isolated and independent operation assumed by VMs running on a virtualized platform and globally coordinates the diverse power management strategies applied by the VMs to the virtualized resources. They use "Virtual Power" to represent "soft" versions of the hardware power states and so facilitate the deployment of power management policies. To map the soft power states to actual changes in the underlying virtualized resources, Virtual Power Management (VPM) states, channels, mechanisms, and rules are implemented as multiple system-level abstractions.

In early research, the Collective project [29] designed VM migration as a tool to provide mobility to users who work on different physical machines at different times. This solution targets the transfer of an OS instance over slow links and long time spans; even with a set of enhancements to reduce the image size, it stops the running VM for the duration of the migration. Zap [31] implements partial virtualization to enable the migration of process domains, using a modified Linux kernel. Recently, researchers have noticed the performance deterioration caused by traditional VM migration, which can make a service unavailable for the duration of the migration, something that is not acceptable in a performance-sensitive computing environment. To address this challenge, NomadBIOS [37], a virtualization and migration technology built on top of the L4 microkernel [38], implements pre-copy migration to achieve very short best-case migration downtimes. Later, following the live migration research conducted by Clark et al., the latest version of Xen supports live migration of VMs [30][28].

3. BACKGROUND & DESIGN OVERVIEW
The availability of inexpensive networking equipment, coupled with new standards for network cabling, has led to the use of a hierarchical design in data center environments for ease of management. In the hierarchical architecture of data centers, redundancy in routing and storage is necessary for fault tolerance. Hence, there are multiple choices of physical server to host a given VM, and we seek a cost-efficient way to choose among them when migrating a VM.

3.1 Live Migration
For performance-sensitive applications, VM live migration offers great benefits when we attempt to optimize the utilization of available resources (e.g., CPU). In VM live migration, a VM is moved from one physical server to another while continuously running, without any noticeable effect from the point of view of the end users. During this procedure, the memory of the virtual machine is iteratively copied to the destination without stopping its execution. A halt of around 60–300 ms is required to perform the final synchronization before the virtual machine begins executing at its final destination, providing the illusion of seamless migration. In contrast, traditional VM migration, which stops the running VM during the migration, causes failures to meet Service Level Agreement (SLA) guarantees, especially in response-time-sensitive computing.

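The iterative pre-copy behavior described above can be illustrated with a toy simulation. The memory size, dirty rate, and link speed below are hypothetical numbers chosen for illustration, not measurements from the paper: each round re-copies only the pages dirtied during the previous round, so the stop-and-copy set, and hence the downtime, shrinks round by round.

```python
# Toy model of iterative pre-copy live migration (illustrative parameters only).
def precopy(mem_pages, dirty_rate, bandwidth, max_rounds=30):
    """Return (pages copied per round, pages left for the stop-and-copy phase)."""
    remaining = mem_pages                      # round 1 copies all memory
    rounds = []
    while len(rounds) < max_rounds:
        rounds.append(remaining)
        copy_time = remaining / bandwidth      # seconds spent on this round
        dirtied = int(dirty_rate * copy_time)  # pages dirtied meanwhile
        if dirtied >= remaining:               # not converging any further
            return rounds, dirtied
        remaining = dirtied
    return rounds, remaining

# 2 GB of 4 KB pages, 2,000 pages/s dirtied, 100,000 pages/s of link capacity.
rounds, stop_copy = precopy(524288, 2000, 100000)
downtime_ms = stop_copy / 100000 * 1000        # only this part pauses the VM
print(rounds, stop_copy, downtime_ms)
```

Under these assumed parameters the per-round copy sets shrink rapidly, which is why the final pause can stay in the short range the paper cites; when the dirty rate approaches the link bandwidth, pre-copy stops converging and a larger stop-and-copy phase is unavoidable.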
[Figure 1. GreenCloud Architecture. The figure shows the Migration Manager (VM migration control, on/off control, and the Migration Scheduling Engine), the Monitoring Services and Data Services, the E-map UI, the Workload Simulator, the Asset Repository, and the Managed Environment of applications, VMs, hypervisors, physical machines, and power meters.]
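The feedback loop implied by Figure 1, in which the Monitoring Service feeds measurements to the Migration Manager, which in turn issues migration and on/off actions, can be sketched as follows. All class names, thresholds, and host names here are hypothetical illustrations, not taken from the GreenCloud implementation:

```python
# Illustrative monitor-decide-act loop over Figure 1's components
# (all names and thresholds are hypothetical).
from dataclasses import dataclass, field

@dataclass
class Measurement:
    host: str
    cpu_util: float        # 0.0 - 1.0
    powered_on: bool

@dataclass
class MonitoringService:
    readings: dict = field(default_factory=dict)
    def report(self):
        return list(self.readings.values())

class MigrationManager:
    IDLE, BUSY = 0.2, 0.9  # example thresholds for consolidation / relief
    def decide(self, readings):
        """Return an action list, analogous to the paper's migrate/on-off actions."""
        actions = []
        for m in readings:
            if m.powered_on and m.cpu_util < self.IDLE:
                actions.append(f"consolidate-and-power-off {m.host}")
            elif m.cpu_util > self.BUSY:
                actions.append(f"migrate-some-vms-from {m.host}")
        return actions

mon = MonitoringService()
mon.readings = {
    "green01": Measurement("green01", 0.05, True),   # nearly idle
    "green02": Measurement("green02", 0.95, True),   # overloaded
    "green03": Measurement("green03", 0.50, True),   # fine as-is
}
actions = MigrationManager().decide(mon.report())
print(actions)
```

The real system replaces the threshold test with the cost-guided placement search of Section 4, and executes the resulting action list through Xen live migration and Wake-On-LAN/power-off controls.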

3.2 Performance Metric
In this paper, we investigate the power efficiency and effectiveness of live migration for online gaming applications hosted in a data center environment. Total power consumption on a chip consists of two parts: static and dynamic power dissipation. Static power dissipation is primarily caused by various leakage currents, while dynamic power dissipation is proportional to workload, e.g. CPU utilization. Because of static power dissipation, a server consumes a considerable amount of power even when it is idle but powered on. According to [41], a server with zero workload consumes about 60% of its peak power. One straightforward way to save power is therefore to consolidate workload first and then turn the unnecessary (e.g. idle) devices off.

During VM live migration, one may be concerned about whether the performance of the service, from the end user's point of view, will be sacrificed. Among the relevant performance metrics, round-trip time (RTT) is an essential concern for online gaming applications. Generally, acceptable quality for an online game requires an RTT of less than 600–800 ms. Hence, live migration technology is a feasible way to address workload consolidation for online gaming applications.

3.3 GreenCloud Architecture
As discussed above, the cloud computing platform, as next-generation IT infrastructure, enables enterprises to consolidate computing resources, reduce management complexity, and speed up the response to business dynamics. Improving resource utilization and reducing power consumption are key challenges to the success of operating a cloud computing environment. To address these challenges, we designed the GreenCloud architecture and the corresponding GreenCloud exploratory system. The exploratory system monitors a variety of system factors and performance measures, including application workload, resource utilization, and power consumption; the system is thus able to dynamically adapt workload and resource utilization through VM live migration. In this way, the GreenCloud architecture reduces unnecessary power consumption in a cloud computing environment. Figure 1 shows the GreenCloud architecture, the functions of its components, and their relations.

The Monitoring Service monitors and collects comprehensive factors such as application workload, resource utilization, and power consumption. It is built on top of the IBM Tivoli framework and Xen, where the IBM Tivoli framework is a CORBA-based system management platform managing a large number of remote locations and devices, and Xen is a virtual machine monitor (VMM). The Monitoring Service serves as the global information provider and offers on-demand reports, aggregating and pruning the historical raw monitoring data to support the intelligent actions taken by the Migration Manager.

The Migration Manager triggers live migration and makes decisions on the placement of virtual machines on physical servers based on the knowledge and information provided by the Monitoring Service. The migration scheduling engine searches for the optimal placement using a heuristic algorithm, and sends instructions to execute VM

migration and to turn servers on or off. The heuristic algorithm for searching for an optimal VM placement and the implementation details of the Migration Manager are discussed in Section 4. The output of the algorithm is an action list made up of migration actions (e.g. migrate VM1 from PM2 to PM4) and local adjustment actions (e.g. set VM2's CPU to 1500 MHz) [40].

The Managed Environment includes virtual machines, physical machines, resources, devices, remote commands on VMs, applications with adaptive workload, etc.

E-Map is a web-based service with a Flash front-end. It provides a user interface (UI) showing a real-time view of present and past system on/off status, resource consumption, workload status, temperature, and energy consumption in the system at multiple scales, from a high-level overview down to individual IT devices (e.g. servers and storage devices) and other equipment (e.g. water- or air-cooling devices). E-Map is connected to the Workload Simulator, which predicts the consequences of a given set of actions adopted by the Migration Manager through simulation in the real environment.

The Workload Simulator accepts user instructions to adapt the workload, e.g. CPU utilization, on servers, and enables control of the Migration Manager under various workloads. E-Map then collects the corresponding real-time measurements and presents the performance of the system to users. In this way, users and system designers can verify the effectiveness of a certain algorithm, or adjust its parameters to achieve better performance.

The Asset Repository is a database storing static server information, such as IP address, type, CPU configuration, memory setting, and the topology of the servers.

The GreenCloud IDC management framework is running and accessible to IBM internal staff and customers. They can view the up-to-date status of resources, configure their applications, allocate resources, and experience the live management system.

4. VM LIVE MIGRATION
4.1 Algorithm
We plug a heuristic algorithm into the Migration Scheduling Engine to search for the optimal placement of virtual machines on physical machines, minimizing the total cost. The cost includes the possible migration cost and the execution cost thereafter. The algorithm provides an open interface for users to define their own cost function depending on their requests and system specification. In this paper, we take the PM cost, the VM status, and the VM migration cost as the inputs of the search algorithm. We first present the notation and definitions used in the algorithm. First, p represents a placement of VMs on PMs. Then, LB(p) denotes the lower bound of the cost over all placements reachable from p by a single VM live migration decision. Formally, LB(p) ≤ cost(p+), where p+ represents any placement reachable from p in a single hop of migration, and cost(·) represents the user-defined cost of executing a certain application or running a VM. In this paper, the cost function balances the total power saving and the performance of the system. The cost function is given below:

cost = C(Migration) + C(#PM) + C(Utilization)

where C(Migration) is the cost incurred by live migration; we take the number of migrations from the start placement to the current placement as this cost. C(#PM) reflects the energy consumed by physical machines; we take the number of PMs in use as this cost. C(Utilization) measures how busy the servers are; generally, if more servers are very busy, the performance of the system is poorer under a surge in service demand, so we take the number of servers with more than 90% CPU utilization as this cost. In our algorithm, we denote by bp the best placement found so far, by cp the current placement in the search, and by sp the initial placement, where the algorithm starts the search. Therefore, if LB(p) > cost(bp), the algorithm does not search the placements reachable from p for the optimal placement.

When triggered, the algorithm uses two tables in the search: the open table and the close table. The open table records the initial mapping between VMs and PMs (the placement where the search algorithm begins) and the temporary placements from which it may still be possible to reach the optimal placement during the search procedure; the placements in the open table are those that still need to be explored. The close table keeps the placements already explored. When the open table becomes empty, or certain criteria are met during the search (such as a cost bound or a search timeout), one or more near-optimal placements may have been detected. One of them is then chosen according to some standard or policy, and the related action list is generated for the subsequent live migrations and power on/off actions.

The heuristic algorithm presented here finds a VM placement and a related action list that minimize the total cost, in terms of both the Physical Machine (PM) cost and the VM live migration cost, in each optimization cycle. Figure 2 gives an instance of searching the placement space, where a node represents a placement and an arrow indicates a live migration leading to another placement. In Figure 2, the black placements are those which have been explored but ruled out from further search, because it is not possible for the black placements and their "neighboring" placements to compete with the current best solution. Hence, black nodes are called cut placements, and the live migrations to the cut placements are crossed out. The nodes in the explored space except the black nodes are

placed in the close table, and the placements in the unexplored space (i.e., the space still to be explored) are in the open table.

[Figure 2. Heuristic search for an optimal placement]

4.2 Implementation
In our GreenCloud architecture, Xen is used to support a modified SUSE Linux Enterprise Server. The ported OSes run in isolated VMs that are denoted "Domain U" in Xen terminology. Xen version 3.0 [28] supports live migration, and we achieve better energy efficiency in the GreenCloud computing architecture through the VM live migration feature provided by Xen. Xen's live migration capability supports efficient transfer of the memory and thread of control between two domains (a domain being the execution context that hosts a running VM). It performs an iterative process that reduces the amount of time the virtual machine is unavailable to an almost unnoticeable level [30]. However, Xen does not support migration of the root file system image; it assumes that the root file system is available on both the source and destination hosts. In our GreenCloud solution, we implement the migration of the file system through NFS.

Live migration of running virtual machines between physical hosts requires virtualized shared storage so that VMs can boot remotely. In our environment, the standard network storage sharing protocol NFS is used to provide storage to virtual machines. First, a root file system is populated in a directory on the NFS server machine. Then we configure the NFS server to mount the image disk of a VM and export this file system over the network by adding a line to /etc/exports, for instance:

/share/xen/images/tremulous01/mount *(rw,sync,no_root_squash)

Finally, some values should be added to the domain configuration file of a VM to support NFS root. The following is a typical domain configuration file using NFS root in addition to the normal variables:

# VM kernel for NFS booting
kernel="/boot/vmk/vmlinux"
name="VM05"                                   # VM name
uuid="e1ccdf9d-44b5-1f84-d6d8-6536ae4fbf7a"   # VM UUID

# VM resource allocation
memory=2048
vcpus=4

# VM run level
extra="5"

# VM network and MAC
vif=[ 'mac=00:16:3e:60:b5:49', ]
vfb=['type=vnc,vncunused=1']
ip='9.186.63.125'
netmask='255.255.255.0'
broadcast='9.186.63.255'
gateway='9.186.63.1'

# VM NFS boot
root="/dev/nfs"
nfs_server="9.186.63.112"
nfs_root="/share/xen/images/tremulous05/mount"

Because the guest domain needs network access at boot time, the domain boot kernel is compiled to support the necessary advanced features (such as NFS booting and DHCP) in addition to being compatible with the Xen kernel.

5. GREENCLOUD EVALUATION
In this section, we present the experiment setup and the evaluation of our GreenCloud architecture.

5.1 Experiment Setup
We implemented the GreenCloud architecture prototype at the IBM China Research Lab (CRL) and carried out extensive experiments. In this prototype, 5 IBM x-series servers are deployed: three IBM X346 machines with 4 cores of 3.0 GHz CPU, one IBM X3950 with 16 cores of 3.0 GHz CPU, and one IBM X336 with 2 cores of 3.0 GHz CPU. The X3950 machine is deployed with 16 GB of memory, while the rest of the machines are each equipped with 3.0 GB of memory. Each machine has 2 NICs with 1 Gb bandwidth, and Wake-On-LAN (WOL) support is enabled on all machines. The configuration of all the physical machines is shown in Table 1 below:

Table 1. Physical Machines in GreenCloud

Physical Machine | Type  | CPU                | Memory  | Network  | WOL Support | NFS Server
Green01          | X336  | 3.0 GHz (2 cores)  | 3.0 GB  | 2 x 1 Gb | Y           | N
Green02          | X3950 | 3.0 GHz (16 cores) | 16.0 GB | 2 x 1 Gb | Y           | Y
Green03          | X346  | 3.0 GHz (4 cores)  | 3.0 GB  | 2 x 1 Gb | Y           | N
Green04          | X346  | 3.0 GHz (4 cores)  | 3.0 GB  | 2 x 1 Gb | Y           | N
Green05          | X346  | 3.0 GHz (4 cores)  | 3.0 GB  | 2 x 1 Gb | Y           | N

[Figure 3. Power Meters Deployment]
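As background for the measurements that follow, recall the power model from Section 3.2: an idle server still draws about 60% of its peak power [41], so consolidating load and powering surplus machines off removes their static draw. A back-of-the-envelope sketch; the 300 W peak figure is an assumed round number for illustration, not a measured value for the machines in Table 1:

```python
# Linear server power model: idle draw is ~60% of peak [41]; the dynamic
# part grows with CPU utilization. Peak wattage here is illustrative.
PEAK_W, IDLE_FRACTION = 300.0, 0.60

def server_power(util):
    """Power draw (W) of one powered-on server at the given CPU utilization."""
    idle = PEAK_W * IDLE_FRACTION
    return idle + (PEAK_W - idle) * util

# Scenario: total demand equal to one server's worth of CPU.
spread = 4 * server_power(0.25)   # four servers at 25% each
packed = 1 * server_power(1.00)   # one server at 100%, three powered off
print(spread, packed, 1 - packed / spread)
```

Under these assumptions, serving the same total load on one machine instead of four cuts the draw from 840 W to 300 W; savings of this kind are exactly what GreenCloud's migration scheduling targets.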

Table 2. VM Configuration in GreenCloud

Virtual Machine | Type     | CPU     | Memory | NFS Booting Support | Gaming Application
VM01            | ParaVirt | 2 cores | 2.0 GB | Y                   | Tremulous
VM02            | ParaVirt | 4 cores | 2.0 GB | Y                   | Tremulous
VM03            | ParaVirt | 4 cores | 2.0 GB | Y                   | Tremulous
VM04            | ParaVirt | 4 cores | 2.0 GB | Y                   | Tremulous
VM05            | ParaVirt | 4 cores | 2.0 GB | Y                   | Tremulous

[Figure 4. Energy Consumption Collector]

As we stated before, Xen is utilized as the standard VMM in our experiment. To verify the energy-saving performance of the GreenCloud prototype, we run an online real-time gaming service, Tremulous [41], as the application on each VM. Tremulous is a response-time-sensitive online game, which serves well for our performance evaluation purposes. It is worth noting that in all experiments, the players could not feel the small delay caused by VM migration between servers; in fact, none of the participants noticed any difference. In the experiments, all the VMs are configured as described in Table 2.

On our GreenCloud prototype, the Power Manager is implemented to collect monitored information from power meters and other monitoring devices, consolidate the data into a database, analyze/mine historical measurements, and provide customized query services and reports, so that server, storage, and facility measurements can be brought together into integrated views whose visualization provides a clear understanding of data center energy consumption and temperature behavior. Figure 4 and Figure 5 present the Energy Consumption Collector and the visualization interface of the GreenCloud prototype, respectively.

[Figure 5. Energy Consumption Visual Interface]

5.2 Data Collection
An iPDU (Intelligent Power Distribution Unit) power meter is adopted to monitor the real-time power consumption of the physical machines. The power-related parameters monitored for each machine include current, voltage, power, and kilowatt-hours. To inspect the energy consumption details of the IT equipment and facilities in the system, the power meters are deployed in the topology shown in Figure 3.

5.3 VM & Physical Host Management
The Xen Management API is utilized in the Migration Monitor as the interface to remotely configure and control VMs running on a Xen-enabled host. The Xen API is built on top of XML-RPC and can be used by the user-space components of Xen, such as the xm

[Figure 6. Performance of Heuristic Search]

[Figure 7. Comparison of Energy Consumption (x-axis: hour; y-axis: kWh; series: "Without GreenCloud" and "With GreenCloud")]

command-line tool to control the system. The xend daemon listens for XML-RPC connections and then performs administrative functions as required, such as dynamically migrating VMs from one host to another. On the other hand, in order to switch on a remote computer over the network, the Migration Monitor sends a magic packet (using the Wake-On-LAN technology) addressed with the IP address and the MAC address of the computer intended to be woken up. Most host management functions, including resource allocation, monitoring, and provisioning of VMs, can be achieved through the Xen Management API. For instance, to shut down a host remotely, users can call the following Xen API:

void shutdown (session_id s, host ref host)
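The magic packet mentioned above has a simple, well-known format: six 0xFF bytes followed by the target's MAC address repeated sixteen times, sent as a UDP broadcast. The sketch below is our own illustration of the idea, not the Migration Monitor's actual code; the MAC address shown is a placeholder.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-On-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; UDP port 9 (discard) is conventional for WOL."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Example (hypothetical MAC of a powered-off host):
# send_magic_packet("00:11:22:33:44:55")
```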

5.4 Evaluations
First, we performed a verification to confirm the effectiveness of our heuristic search algorithm (Section IV). A running sample is shown in Figure 6. The x-axis shows the algorithm running time, and the y-axis shows the performance cost measure, a normalized cost totaling the server energy consumption cost and the server migration cost. The goal of the heuristic algorithm is to find the optimal placement solution that minimizes the total performance cost. As we can see from Figure 6, our algorithm obtains a near-optimal solution very fast, in less than 300 ms in this test environment.

In order to evaluate the energy saving brought by GreenCloud, we set up a typical application scenario for hosting real-time online gaming services. We developed a workload emulator to facilitate generating variable workload to the gaming servers easily. It consists of two components: a front proxy agent and emulated user agents. The proxy agent can remotely control and manage emulated user agents distributed on different servers. It also provides interfaces for testers to set an expected CPU utilization (such as low, middle, or high) on a designated server. When an emulated user agent receives such a command to adjust the server workload, it starts or stops user threads to occupy CPU time, using workloads with different designed characteristics. The dynamically changing workload on the servers triggers the Migration Manager to take corresponding actions to balance performance and power consumption. The testing workload is presented in Table 3: it ramps up from 30% to 55% to 75%, then goes back down to 30% during the 12-hour experiment period.

Table 3. Workload Simulation

CPU Utilization | 30% | 55% | 75% | 55% | 30%
Time (Hour)     |  2  |  2  |  4  |  2  |  2

The total energy consumption results are reported in Figure 7. We compared the case when GreenCloud is not deployed ("without GreenCloud") to the case when GreenCloud is deployed ("with GreenCloud"). The x-axis shows the experiment time, and the y-axis shows the total energy consumption of the servers in kWh. Each point at time T represents the total energy used in the previous hour, i.e., in the time interval [T-1, T). We can see that at the beginning and the end of the experiment, when server workload is low, GreenCloud yields a significant energy reduction. This is because in this case VMs are consolidated using live migration, so only 1 of the physical servers is running while the other servers are turned off by GreenCloud, which gives significant energy savings. When the workload is at its highest (hours [5-8], corresponding to the workload in Table 3), GreenCloud uses 4 of the servers, so the energy consumption is still less than when GreenCloud is not used. It is worth noting that during the adjustments made by GreenCloud, there is no interruption to the services running on the cloud computing platform: our prototype testbed shows that the response time stays below 750 ms, successfully meeting the SLA (Service Level Agreement) of this response-time-sensitive gaming application.

In conclusion, GreenCloud effectively saves energy by dynamically adapting to the workload through live VM migrations, while at the same time meeting system SLAs.

6. FUTURE WORK & CONCLUSION
Cloud computing is emerging as a significant shift as today's organizations face extreme data overload and skyrocketing energy costs. In this paper, we propose the GreenCloud architecture, which helps consolidate workload and achieve significant energy savings in a cloud computing environment while guaranteeing the real-time performance of performance-sensitive applications. GreenCloud leverages state-of-the-art live virtual machine migration technology to achieve these goals. Through evaluation, we show that GreenCloud achieves our design goals effectively in the cloud computing environment.

In the future, there are still a number of research activities that we plan to carry out, which could improve the performance of GreenCloud and bring solid value to users in achieving their business goals and their social responsibility in Green IT. First, further studies should explore whether a utility-based methodology can be used to communicate performance-power tradeoffs between the OS and the application/middleware. Second, we need to adapt GreenCloud to meet the requirements of real business services, such as web services, Online Transaction Processing (OLTP), and human resource management. Our future work also includes VM live migration over the Wide Area Network (WAN) with an energy-efficient scheme.

7. REFERENCES
[1] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, X. Zhu. No Power Struggles: Coordinated multi-level power management for the data center. In Thirteenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '08), Mar. 2008.
[2] A. Weiss. Computing in the Clouds. netWorker, 11(4):16-25, Dec. 2007.
[3] R. Buyya, C. S. Yeo, S. Venugopal. Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. In Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications (HPCC-08), IEEE CS Press, Los Alamitos, CA, USA, 2008.
[4] Ensim. Ensim Virtual Private Servers, http://www.ensim.com/products/materials/datasheet_vps_051003.pdf, 2003.
[5] A. Whitaker, M. Shaw, S. D. Gribble. "Lightweight Virtual Machines for Distributed and Networked Applications". Technical Report 02-02-01, University of Washington, 2002.
[6] J. Heo, D. Henriksson, X. Liu, T. Abdelzaher. "Integrating Adaptive Components: An Emerging Challenge in Performance-Adaptive Systems and a Server Farm Case-Study". In Proceedings of the 28th IEEE Real-Time Systems Symposium (RTSS'07), Tucson, Arizona, 2007.

[7] P. Padala, K. G. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, A. Merchant, K. Salem. "Adaptive control of virtualized resources in utility computing environments". In Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, 2007.
[8] EPA Report on Server and Data Center Energy Efficiency. U.S. Environmental Protection Agency, ENERGY STAR Program, 2007.
[9] P. Bohrer et al. The case for power management in web servers. In Power Aware Computing (PACS), 2002.
[10] D. Brooks, M. Martonosi. Dynamic thermal management for high-performance microprocessors. In 7th International Symposium on High-Performance Computer Architecture, 2001.
[11] E. V. Carrera, E. Pinheiro, R. Bianchini. Conserving disk energy in network servers. In 17th International Conference on Supercomputing, 2003.
[12] J. Chase et al. Managing energy and server resources in hosting centers. In 18th Symposium on Operating Systems Principles (SOSP), 2001.
[13] J. Chase, R. Doyle. Balance of power: Energy management for server clusters. In 8th Workshop on Hot Topics in Operating Systems, May 2001.
[14] Y. Chen et al. Managing server energy and operational costs in hosting centers. In ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, June 2005.
[15] M. Elnozahy, M. Kistler, R. Rajamony. Energy-efficient server clusters. In Power Aware Computing Systems (PACS), February 2002.
[16] X. Fan et al. Power provisioning for a warehouse-sized computer. In 34th ACM International Symposium on Computer Architecture, CA, June 2007.
[17] W. Felter et al. A performance-conserving approach for reducing peak power consumption in server systems. In 19th International Conference on Supercomputing, 2005.
[18] M. Femal, V. Freeh. Safe over-provisioning: Using power limits to increase aggregate throughput. In Power-Aware Computing Systems (PACS), December 2004.
[19] Twenty Experts Define Cloud Computing, http://cloudcomputing.syscon.com/read/612375_p.htm, July 2008.
[20] Amazon Elastic Compute Cloud (EC2), http://www.amazon.com/ec2/, July 2008.
[21] Microsoft Live Mesh, http://www.mesh.com
[22] Google App Engine, http://appengine.google.com
[23] Sun network.com (Sun Grid), http://www.network.com
[24] R. Nathuji, K. Schwan. "VirtualPower: coordinated power management in virtualized enterprise systems". In Proceedings of the 21st ACM SIGOPS Symposium on Operating Systems Principles, 2007.
[25] L. A. Barroso, U. Hölzle. The Case for Energy-Proportional Computing. IEEE Computer, vol. 40, no. 12 (2007): 33-37.
[26] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, D. Wetherall. Reducing Network Energy Consumption via Sleeping and Rate-Adaptation. In Proceedings of the 5th USENIX Symposium on Networked Systems Design & Implementation (NSDI'08), San Francisco, CA, April 2008.
[27] T. Horvath, T. Abdelzaher, K. Skadron, X. Liu. Dynamic Voltage Scaling in Multi-tier Web Servers with End-to-end Delay Control. IEEE Transactions on Computers (ToC), vol. 56, pp. 444-458, 2007.
[28] Xen User Manual, http://bits.xensource.com/Xen/docs/user.pdf
[29] C. P. Sapuntzakis, R. Chandra, B. Pfaff, J. Chow, M. S. Lam, M. Rosenblum. Optimizing the migration of virtual computers. In Proc. of the 5th Symposium on Operating Systems Design and Implementation (OSDI-02), December 2002.
[30] C. Clark, K. Fraser, S. Hand, J. Hansen, E. Jul, C. Limpach, I. Pratt, A. Warfield. Live Migration of Virtual Machines. In USENIX NSDI, 2005.
[31] S. Osman, D. Subhraveti, G. Su, J. Nieh. The design and implementation of Zap: A system for migrating computing environments. In Proc. 5th USENIX Symposium on Operating Systems Design and Implementation (OSDI-02), pages 361-376, December 2002.
[32] VMware, VMware Inc., http://www.vmware.com
[33] Microsoft Virtual Server, Microsoft Corporation, http://www.microsoft.com/windowsserversystem/virtualserver/
[34] Microsoft 2008 Hyper-V, Microsoft Corporation, http://www.microsoft.com/canada/windowsserver2008/serverunleashed/html/hyper-v.aspx?wt.srch=1
[35] Azure Services Platform, Wikipedia, http://en.wikipedia.org/wiki/Microsoft_Azure
[36] Azure Services Platform, Microsoft Corporation, http://www.microsoft.com/azure/services.mspx
[37] J. G. Hansen, A. K. Henriksen. Nomadic operating systems. Master's thesis, Dept. of Computer Science, University of Copenhagen, Denmark, 2002.
[38] H. Härtig, M. Hohmuth, J. Liedtke, S. Schönberg. The performance of microkernel-based systems. In Proceedings of the 16th ACM Symposium on Operating Systems Principles, pages 66-77. ACM Press, 1997.
[39] X. Jin, Q. Wang, et al. "A Framework for Virtualized Service Hosting", in preparation for publication.
[40] G. Chen, W. He, J. Liu, S. Nath, L. Rigas, L. Xiao, F. Zhao. "Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services". In Proceedings of the 5th USENIX Symposium on Networked Systems Design & Implementation (NSDI'08), San Francisco, CA, April 2008.
[41] Tremulous official website, http://tremulous.net
