
Proceedings of the UGC Sponsored National Conference on Advanced Networking and Applications,

27th March 2015

Green Computing Techniques to Power Management and Energy Efficiency
Bobby. S
Assistant Professor, Department of Computer Science
St. Joseph’s College of Arts and Science for Women, Hosur.
Email: [email protected]

-------------------------------------------------------------------ABSTRACT----------------------------------------------------
Green computing is one of the emergent computing technologies in the field of computer science and engineering, aimed at providing Green Information Technology (Green IT/GC). It is mainly used to protect the environment, optimize energy consumption and keep the environment green. Increasing energy efficiency and reducing the use of hazardous materials are the main goals of green computing. Green computing ultimately focuses on ways of reducing the overall environmental impact. It requires the integration of green computing practices such as recycling, electronic waste removal, reduced power consumption, virtualization, improved cooling technology, and optimization of requirements. The major power-consuming components in servers are the processors and the main memory. Green computing is the concept that tries to confine this trend by inventing new methods that work efficiently while consuming less energy and causing less pollution. This paper focuses on green computing techniques for achieving low power consumption, covering both the techniques themselves and the resulting power savings.

Keywords - energy efficiency, electronic waste, green computing, power consumption, and recycling.
---------------------------------------------------------------------------------------------------------------------------------------
1. INTRODUCTION

Green computing is the environmentally responsible use of computers and related resources. Such practices include the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste (e-waste). It is also defined as the designing, manufacturing/engineering, using and disposing of computing devices in a way that reduces their environmental impact. The main goals of green computing are to reduce the use of toxic and hazardous materials, to improve energy efficiency, and to recycle factory waste. Green computing is the requirement to save energy while containing expenses.

1.1. Advantages of Green Computing
a. Reduced energy usage from green computing techniques translates into lower carbon dioxide emissions, stemming from a reduction in the fossil fuel used in power plants and transportation.
b. Conserving resources means less energy is required to produce, use and dispose of products.
c. Saving energy and resources saves money.
d. Green computing even includes changing government policy to encourage recycling and lower energy use by individuals and businesses.
e. It reduces risks present in laptops, such as chemicals known to cause cancer, nerve damage and immune reactions in humans.

2. ENERGY CONSUMPTION AT DIFFERENT LEVELS IN COMPUTING

Fig 1: Energy consumption at different levels in computer systems

The energy consumption is not only determined by the efficiency of the physical resources; it also depends on the resource management system deployed in the infrastructure and on the efficiency of the applications running in the system. Energy efficiency impacts end users in terms of resource usage costs, which are typically determined by the Total Cost of Ownership (TCO) incurred by a resource provider.

Higher power consumption results not only in higher electricity bills, but also in additional requirements on the cooling system and the power delivery infrastructure, i.e. Uninterruptible Power Supplies (UPS), Power Distribution Units (PDU), etc. With the growing density of computer components, the cooling problem becomes crucial, as more heat has to be dissipated per square meter. The problem is especially important for 1U and blade servers. These form factors are the most difficult to cool because of the high density of the components and the resulting lack of space for air flow. Blade servers give the advantage of more computational power in less rack space.

Apart from the overwhelming operating costs and the Total Cost of Acquisition (TCA), another rising concern is the environmental impact in terms of carbon dioxide (CO2) emissions caused by high energy consumption. Therefore, the reduction of power and energy consumption has become a first-order objective in the design of modern computing systems. The roots of energy-efficient computing, or Green IT, lie in energy-efficient products intended to reduce greenhouse gas emissions. The term "green computing" was introduced to refer to energy-efficient personal computers, and it now covers end-user and environmental requirements for IT equipment including video adapters, monitors, keyboards, computers, peripherals, IT systems and even mobile phones. Energy-efficient resource management was first introduced in the context of battery-fed mobile devices, where energy consumption has to be reduced in order to improve the battery lifetime.
3. POWER AND ENERGY MODELS

To design power and energy management mechanisms it is essential to clearly distinguish the background terms. Power and energy can be defined in terms of the work that a system performs. Power is the rate at which the system performs the work, while energy is the total amount of work performed over a period of time. Power and energy are measured in watts (W) and watt-hours (Wh) respectively. Electric current is the flow of electric charge measured in amperes (A). Amperes define the amount of electric charge transferred by a circuit per second. Work is done at the rate of one watt when one ampere is transferred through a potential difference of one volt. A kilowatt-hour (kWh) is the amount of energy equivalent to a power of 1 kilowatt (1000 watts) running for 1 hour. Formally, power and energy can be defined as in (1) and (2):

P = W / T,      (1)
E = P . T,      (2)

where P is power, T is a period of time, W is the total work performed in that period of time, and E is energy. The difference between power and energy is very important, because a reduction of the power consumption does not always reduce the consumed energy. For example, the power consumption can be decreased by lowering the CPU performance; however, in this case a program may require a longer time to complete its execution, consuming the same amount of energy. On one hand, reduction of the peak power consumption will result in decreased costs of infrastructure provisioning, such as the costs associated with the capacities of UPS, PDU, power generators, cooling system and power distribution equipment. On the other hand, decreased energy consumption will lead to a reduction of the electricity bills.
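To make this distinction concrete, the short sketch below (using illustrative values only, not measurements from this paper) compares a CPU-bound job run at full performance against the same job run at half the power but twice the runtime, applying E = P . T and converting the result to kWh.

# Illustrative comparison of power vs. energy for a CPU-bound job.
# All numbers are assumed values chosen for the example, not measured data.

def energy_wh(power_w, runtime_h):
    """E = P * T, with P in watts and T in hours, giving energy in Wh."""
    return power_w * runtime_h

# Scenario A: full performance -- higher power, shorter runtime.
e_full = energy_wh(power_w=200.0, runtime_h=1.0)

# Scenario B: CPU slowed down -- half the power, but twice the runtime.
e_slow = energy_wh(power_w=100.0, runtime_h=2.0)

print("Full speed : %.0f Wh (%.2f kWh)" % (e_full, e_full / 1000))
print("Slowed down: %.0f Wh (%.2f kWh)" % (e_slow, e_slow / 1000))
# Both print 200 Wh (0.20 kWh): peak power dropped, consumed energy did not.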
3.1. Static and Dynamic Power Consumption

The power consumption of Complementary Metal-Oxide-Semiconductor (CMOS) circuits comprises static and dynamic power. The static power is mainly determined by the type of transistors and the process technology; its reduction requires improvements of the low-level system design. Dynamic power consumption is created by circuit activity (transistor switching, changes of values in registers, etc.). The sources of dynamic power consumption are short-circuit current and switched capacitance. Short-circuit current causes only 10-15% of the total power consumption, and so far no way has been found to reduce this value without compromising the performance. Switched capacitance is the primary source of dynamic power consumption; therefore, the dynamic power consumption can be defined as in (3):

Pdynamic = a . C . V^2 . f,      (3)

where a is the switching activity, C is the physical capacitance, V is the supply voltage, and f is the clock frequency. The combined reduction of the supply voltage and clock frequency lies at the roots of the widely adopted DPM technique called Dynamic Voltage and Frequency Scaling (DVFS). The main idea of this technique is to intentionally down-scale the CPU performance, when it is not fully utilized, by decreasing the voltage and frequency of the CPU, which in the ideal case should result in a cubic reduction of the dynamic power consumption. DVFS is supported by most modern CPUs, including mobile, desktop and server systems.

3.2. Sources of Power Consumption

According to data provided by Intel Labs, the main part of the power consumed by a server is drawn by the CPU, followed by the memory and the losses due to power supply inefficiency. The data also show that the CPU no longer dominates the power consumption of a server. This has resulted from the continuous improvement of CPU power efficiency and the application of power-saving techniques (e.g. DVFS) that enable active low-power modes.

Fig 2: Power consumption by server components

In these modes a CPU consumes only a fraction of the total power, while preserving the ability to execute programs. As a result, current desktop and server CPUs can consume less than 30% of their peak power in low-activity modes, leading to a dynamic power range of more than 70% of the peak power. In contrast, the dynamic power ranges of all other server components are much narrower: less than 50% for DRAM, 25% for disk drives, 15% for network switches, and negligible for other components, which can only be completely or partially switched off. A further reason for the reduction of the fraction of power consumed by the CPU relative to the whole system is the adoption of multi-core architectures, as multi-core processors are much more efficient than conventional single-core ones. The adoption of multi-core CPUs, along with the increasing use of virtualization technologies and data-intensive applications, has resulted in a growing amount of memory in servers.

3.3. Modeling Power Consumption

To develop new policies for DPM and understand their impact, a model of dynamic power consumption is needed. Such a model has to be able to predict the actual value of the power consumption based on some run-time system characteristics. One way to accomplish this is to utilize the power monitoring capabilities that are built into modern computing servers. There is a strong relationship between CPU utilization and the total power consumption of a server. The idea behind the resulting model is that the power consumption grows with CPU utilization from the power consumed in the idle state up to the power consumed when the server is fully utilized.
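A minimal sketch of the commonly used linear utilization-based form of such a model is shown here; the idle and peak power figures are assumed illustrative values, not measurements reported in this paper.

# Linear utilization-based server power model:
#   P(u) = P_idle + (P_max - P_idle) * u,  with u in [0, 1].
# P_IDLE and P_MAX below are assumed example values, not measured data.

P_IDLE = 175.0  # watts drawn by the (assumed) server when idle
P_MAX = 250.0   # watts drawn when the server is fully utilized

def server_power(utilization):
    """Estimate server power draw (W) from CPU utilization in [0, 1]."""
    u = min(max(utilization, 0.0), 1.0)  # clamp to the valid range
    return P_IDLE + (P_MAX - P_IDLE) * u

for u in (0.0, 0.3, 0.7, 1.0):
    print("utilization %3.0f%%: ~%.0f W" % (u * 100, server_power(u)))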
4. PROBLEMS OF HIGH POWER AND ENERGY CONSUMPTION

The energy consumption of computing facilities raises monetary, environmental and system performance concerns. Although hardware advances, including low-power processors, solid state drives and energy-efficient monitors, have alleviated the energy consumption issue to a certain degree, a series of software approaches have also significantly contributed to the improvement of energy efficiency.

4.1. High Power Consumption

The main reason for power inefficiency in data centers is the low average utilization of the resources. The main run-time reasons for underutilization in data centers are the variability of the workload and statistical effects. Modern service applications cannot be kept on fully utilized servers, as even a non-significant workload fluctuation will lead to performance degradation and a failure to provide the expected QoS. On the other hand, servers in a non-virtualized data center are unlikely to be completely idle because of background tasks or distributed databases and file systems.

4.2. High Energy Consumption

Considering the power consumption, the main problem is the minimization of the peak power required to feed a completely utilized system. In contrast, the energy consumption is defined by the average power consumption over a period of time. Therefore, the actual energy consumption of a data center does not affect the cost of the infrastructure; rather, it is reflected in the cost of the electricity consumed by the system during the period of operation, which is the main component of a data center's operating costs.
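As a rough illustration of why average power drives the operating cost, the sketch below converts an assumed average draw and electricity tariff (both hypothetical figures) into a monthly energy bill; peak power, by contrast, only sizes the UPS, PDU and cooling equipment.

# Energy cost is driven by average power over time (E = P_avg * T),
# while infrastructure sizing is driven by peak power.
# The average power and tariff below are assumed example figures.

AVG_POWER_KW = 0.35       # assumed average draw of one server, in kW
HOURS_PER_MONTH = 24 * 30
PRICE_PER_KWH = 0.12      # assumed electricity tariff, currency units per kWh

energy_kwh = AVG_POWER_KW * HOURS_PER_MONTH    # monthly energy in kWh
monthly_cost = energy_kwh * PRICE_PER_KWH      # monthly electricity cost

print("Monthly energy: %.0f kWh" % energy_kwh)
print("Monthly cost  : %.2f (at %s per kWh)" % (monthly_cost, PRICE_PER_KWH))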
5. POWER/ENERGY MANAGEMENT IN COMPUTING SYSTEMS

A large volume of research work has been done in the area of power- and energy-efficient resource management in computing systems. As power and energy management techniques are closely connected, from this point on we will refer to them simply as power management.

Fig 3: Power and energy management

The high-level power management techniques can be divided into static and dynamic. From the hardware point of view, Static Power Management (SPM) contains all the optimization methods that are applied at design time at the circuit, logic, architectural and system levels. Circuit-level optimizations are focused on the reduction of the switching activity power of individual logic gates and transistor-level combinational circuits. At the gate and system design levels it is extremely important to carefully consider the implementation of the programs that are supposed to run in the system. Dynamic Power Management (DPM) techniques, in contrast, include methods and strategies for run-time adaptation of a system's state. DPM techniques can be distinguished by the level at which they are applied: hardware or software.


Hardware DPM varies for different hardware components, but can usually be classified as Dynamic Performance Scaling (DPS), such as DVFS, and partial or complete Dynamic Component Deactivation (DCD) during periods of inactivity. In contrast, software DPM techniques utilize an interface to the system's power management and, according to their policies, apply hardware DPM. The Advanced Power Management (APM) and its successor, the Advanced Configuration and Power Interface (ACPI), have drastically simplified software power management and resulted in broad research studies in this area. DVFS creates a broad dynamic power range for the CPU, enabling extremely low-power active modes. Another technology that can improve the utilization of resources, and thus reduce the power consumption, is virtualization of computer resources. Cloud computing naturally leads to power efficiency by providing the following characteristics:
 Economy of scale due to elimination of redundancies.
 Improved utilization of the resources.
 Location independence - VMs can be moved to a place where energy is cheaper.
 Scaling up and down - resource usage can be adjusted to current requirements.
 Efficient resource management by the cloud provider.
Therefore, cloud providers have to deal with the power-performance trade-off: minimization of the power consumption while meeting the QoS requirements.

6. GREEN COMPUTING TECHNIQUES TO MANAGE POWER IN COMPUTER SYSTEMS

These techniques can be classified at different levels:

1. Hardware and Firmware Level
2. Operating System Level
3. Virtualization Level
4. Data Center Level

Fig 4: Power Management Techniques in Green Computing

6.1. Hardware and Firmware Level

The DPM techniques applied at the hardware and firmware level can be divided into two categories:

1. Dynamic Component Deactivation (DCD).
2. Dynamic Performance Scaling (DPS).

The DCD techniques are built upon the idea of clock gating of parts of an electronic component, or its complete disabling, during periods of inactivity. The problem could be easily solved if transitions between power states caused negligible power and performance overhead. However, transitions to low-power states usually lead to additional power consumption and delays caused by the re-initialization of the components. A transition to a low-power state is worthwhile only if the period of inactivity is longer than the aggregated delay of the transitions from and into the active state, and if the saved power is higher than the power required to reinitialize the components.

6.1.1. Dynamic Component Deactivation (DCD)

Computer components that do not support performance scaling can only be deactivated during idle periods. The problem is trivial only in the case of a negligible transition overhead; in reality, transitions lead not only to delays, which can degrade the performance of the system, but also to additional power consumption. Therefore, to achieve efficiency, a transition has to be done only if the idle period is long enough to cover the transition overhead. DCD techniques can be divided into predictive and stochastic.

Fig 5: Hardware and Firmware Level

Predictive techniques are based on the correlation between the past history of the system behavior and its near future. A non-ideal prediction can result in an over-prediction or an under-prediction. An over-prediction means that the actual idle period is shorter than the predicted one, leading to a performance penalty. On the other hand, an under-prediction means that the actual idle period is longer than the predicted one. Predictive techniques can be further split into static and adaptive. Static techniques utilize some threshold for a real-time execution parameter to make predictions of idle periods. The simplest policy is called fixed timeout: the idea is to define the length of time after which a period of inactivity can be treated as long enough to do a transition to a low-power state; activation of the component is initiated once the first request to the component is received. One way to provide adaptation is to maintain a list of possible values of the parameter of interest and assign weights to the values according to their efficiency at previous intervals; the actual value is then obtained as a weighted average over all the values in the list. In general, adaptive techniques are more efficient than static ones when the type of the workload is unknown a priori.
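The sketch below illustrates the two predictive flavours just described: a fixed-timeout rule and a simple weighted-average (adaptive) timeout. The timeout value, the weights and the break-even time are hypothetical parameters chosen for illustration, not values prescribed by any particular DCD scheme.

# Predictive DCD policies: fixed timeout vs. a simple adaptive timeout.
# All thresholds, weights and idle periods are assumed illustrative values.

BREAK_EVEN_S = 2.0   # assumed time a sleep must last to repay the transition overhead

def fixed_timeout_policy(idle_so_far_s, timeout_s=1.0):
    """Enter the low-power state once inactivity exceeds a fixed timeout."""
    return idle_so_far_s >= timeout_s

def adaptive_timeout(past_idle_periods_s, weights):
    """Predict the next idle period as a weighted average of recent ones
    and use it as the timeout, clamped to the break-even time."""
    total = sum(weights)
    predicted = sum(p * w for p, w in zip(past_idle_periods_s, weights)) / total
    return max(predicted, BREAK_EVEN_S)

history = [0.5, 3.0, 4.0]   # recently observed idle periods (seconds)
weights = [0.2, 0.3, 0.5]   # heavier weight on the more recent periods

print("fixed timeout, idle 1.5 s :", fixed_timeout_policy(1.5))
print("adaptive timeout          :", adaptive_timeout(history, weights), "s")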

Another way to deal with non-deterministic system behavior is to formulate the problem as a stochastic optimization, which requires building an appropriate probabilistic model of the system. It is important to note that the results obtained using the stochastic approach are expected values, and there is no guarantee that the solution will be optimal for a particular case. Moreover, constructing a stochastic model of the system in practice may not be straightforward. If the model is not accurate, the policies using this model may not provide efficient system control.

6.1.2. Dynamic Performance Scaling (DPS)

DPS includes different techniques that can be applied to computer components supporting dynamic adjustment of their performance in proportion to the power consumption. This idea lies at the roots of the widely adopted Dynamic Voltage and Frequency Scaling (DVFS) technique.
6.1.2.1. Dynamic Voltage and Frequency Scaling (DVFS)

DVFS reduces the number of instructions a processor can issue in a given amount of time, thus reducing its performance. This, in turn, increases the run time of program segments which are sufficiently CPU-bound. Although the application of DVFS may seem straightforward, real-world systems raise many complexities that have to be considered. First of all, due to the complex architectures of modern CPUs (i.e. pipelining, multi-level caches, etc.), the prediction of the CPU clock frequency that will meet an application's performance requirements is not trivial. In summary, DVFS can provide substantial energy savings; however, it has to be applied carefully, as the result may vary significantly for different hardware and software system architectures.
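On Linux, DVFS is typically exposed through the cpufreq subsystem in sysfs; the read-only sketch below inspects the current governor and frequency limits for CPU 0. The exact attributes available depend on the kernel version and the cpufreq driver in use, so this is a best-effort illustration rather than a portable tool.

# Read-only peek at Linux cpufreq (DVFS) settings for CPU 0 via sysfs.
# Attribute availability depends on the kernel and cpufreq driver;
# missing files are simply reported as unavailable.

from pathlib import Path

CPUFREQ_DIR = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read_attr(name):
    """Return the attribute's contents, or a placeholder if it is absent."""
    try:
        return (CPUFREQ_DIR / name).read_text().strip()
    except OSError:
        return "<not available>"

for attr in ("scaling_governor", "scaling_cur_freq",
             "cpuinfo_min_freq", "cpuinfo_max_freq"):
    print("%s: %s" % (attr, read_attr(attr)))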
6.2. Operating System Level

Table 1 summarizes the characteristics used to classify power management solutions at the operating system level.

Fig 6: Operating System Level

Table 1. Characteristics of operating system level solutions

Project name | System resources | Target systems | Goal | Power-saving techniques
Ondemand Governor | CPU | Arbitrary | Minimise power consumption | DVFS
EcoSystem | CPU, memory, disk storage, network interface | Mobile systems | Achieve target battery lifetime | Resource throttling
Nemesis OS (Neugebauer and McAuley) | CPU, memory, disk storage, network interface | Mobile systems | Achieve target battery lifetime | Resource throttling

6.3. Virtualization Level

The virtualization level enables the abstraction of an OS and the applications running on it from the hardware. The virtualization layer lies between the hardware and the OS; therefore, a Virtual Machine Monitor (VMM) takes control over resource multiplexing and has to be involved in the system's power management in order to provide efficient operation. There are two ways in which a VMM can participate in power management:

1. A VMM can act as a power-aware OS without distinction between VMs: it monitors the overall system's performance and appropriately applies DVFS or DCD techniques to the system components.
2. Another way is to leverage each OS's specific power management policies and application-level knowledge, and to map power management calls from different VMs onto actual changes in the hardware's power state, or to enforce system-wide power limits in a coordinated manner.

6.4. Data Center Level

The main goals at the data center level are:

 Minimize energy consumption while satisfying performance requirements.
 Minimize power consumption while minimizing performance loss.

7. CONCLUSIONS AND FUTURE DIRECTIONS

In recent years, energy efficiency has emerged as one of the most important design requirements for modern computing systems, such as data centers and Clouds, as they continue to consume enormous amounts of electrical power.


Apart from the high operating costs incurred by computing resources, this leads to significant emissions of carbon dioxide into the environment. For example, IT infrastructures currently contribute about 2% of the total CO2 footprint. Unless energy-efficient techniques and algorithms to manage computing resources are developed, IT's contribution to the world's energy consumption and CO2 emissions is expected to grow rapidly. This is obviously unacceptable in the age of climate change and global warming. In this paper, we have studied and classified different ways to achieve power and energy efficiency in computing systems. The recent developments have been discussed and categorized over the hardware, operating system, virtualization and data center levels.

8. ACKNOWLEDGEMENT

First and foremost, praises and thanks to God, the Almighty, for His showers of blessings throughout my work and for helping me complete it successfully. I am extremely grateful to my parents for their love, prayers, care and sacrifices in educating and preparing me for my future. I am very thankful to my husband and my sons for their love, understanding, prayers and continuing support to complete this work.

