
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 11, Issue V, May 2023 - Available at www.ijraset.com
DOI: 10.22214/ijraset.2023.52911

Report on Current Trends and Future Research Challenges in Green Cloud Computing

Priyanka Vijay Khutwad, Dr. Nisha Auti, Dr. Sulochana Sonkamble
Department of Computer Engineering, JSPM NTC

Abstract: Cloud computing provides computing power and resources as a service to users across the globe. It was introduced to give customers worldwide high performance at a lower cost than dedicated high-performance computing machines. Delivering this service requires huge data centers tightly coupled to the system, and their growing use leads to heavy energy consumption and large CO2 emissions. Since energy has become a prime concern of late, this issue has given rise to green cloud computing, which provides techniques and algorithms to reduce energy wastage and encourage energy reuse. In this survey we discuss key techniques for reducing the energy consumption and CO2 emissions that can cause severe health issues. We begin with a discussion of green metrics appropriate for data centers and then describe green scheduling algorithms that reduce energy consumption and CO2 emission levels in existing systems. We also discuss the various existing architectures related to green cloud computing, along with their pros and cons.
Keywords: Green cloud computing, energy efficiency, CO2 emission, Cloud, Environment safety.

I. INTRODUCTION
According to Wikipedia [wiki], cloud computing is a collection of computing concepts in which thousands of computers communicate in real time to provide a seamless experience to the user, as if he or she were using a single huge resource. The system provides multiple facilities such as web data stores, large computing resources and data-processing servers. The concept of cloud computing has been around since the early 1950s, although the term was not coined back then; such systems were referred to as time-sharing systems. During the period 1960-1990, a host of experts hinted at the coming era of cloud computing in their books or quotes; the term "dumb terminal" attached to a mainframe was more common in this period, in lieu of the term cloud computing. In the early 1990s, even the telecommunications companies began offering VPNs (Virtual Private Networks) instead of dedicated connections, which offered decent QoS at a comparatively lower cost. In 1999, Salesforce.com was one of the first to provide enterprise applications via a website. This move aided the advent of cloud computing, which was introduced around 2002 by Amazon, an organization that can be considered one of the pioneers in the field with its Amazon Web Services (AWS) and Elastic Compute Cloud (EC2). Since 2009, after the introduction of Web 2.0, other major players in the web industry such as Google and Yahoo have also joined the club.

Figure 1. Cloud and Environment


Cloud computing can be considered a hierarchy of concepts comprising several models. The first is the service model [11], which includes three sub-models: software as a service, platform as a service and infrastructure as a service. The second is the deployment model [11], which comprises public cloud, private cloud, community cloud and hybrid cloud. According to the National Institute of Standards and Technology (NIST), "the major objective of cloud computing is to maximize the shared resources and at the same time the disadvantage is its high infrastructure cost and unnecessary power consumption."
Global warming has been a big concern of late, with high power consumption and CO2 emissions acting as catalysts. The world has become highly protective of the environment, with inputs from contributors such as Greenpeace, the Environmental Protection Agency (EPA) of the United States and the Climate Savers Computing Initiative, to name a few. The continuously increasing popularity and usage of cloud computing, together with the growing awareness of people across the globe regarding eco-friendly resources, has pushed researchers to devise an eco-friendly, energy-efficient flavour of cloud computing called green cloud computing. According to previous works, green cloud computing facilitates the reduction of power consumption and CO2 emissions along with the efficient reutilization of energy.
The cloud uses thousands of data centers to process user queries, and running these data centers requires a bulk amount of power for cooling and other processes. This power consumption increases every year, and green cloud computing endeavours to reduce it, thus helping to curb these issues. Various techniques and algorithms are used to minimize this expenditure [13]. Among the different avenues, one area of research focuses on reducing the energy consumption of computer servers [11], whereas another lays stress on dynamic cluster server configuration [20, 21] to reduce total power consumption by balancing load and effectively utilizing only a subset of the resources at hand. Similarly, dynamic CPU clock frequency scaling [22, 23] incorporates some form of load balancing to save power under different load conditions. In addition, several metrics are used to measure the power consumption of data centers. The first, developed by the Green Grid, is the Power Usage Effectiveness (PUE) metric, which measures the effectiveness of data centers: PUE indicates how much extra power is required for cooling IT equipment and other overhead [16].
It is clear from Figure 1 that in the traditional cloud scenario power consumption and carbon emissions are very high, whereas in the green cloud they are much lower. Green clouds avoid power wastage, which is the reason this technology has been adopted by IT companies such as Google, Microsoft and Yahoo!. According to a survey done in 2007, IT industries contribute 2% of the total carbon emissions every year [19]. The European Union (EU) is also of the view that severe reductions of the order of 15%-30% are required before 2020 to keep the global temperature from increasing drastically [19].
The remainder of this article is organized as follows. Section II reviews previous research in the field of green cloud computing. In Section III we briefly describe the approaches used to address the problem. Section IV examines the advantages and disadvantages of the existing architectures. Finally, we summarize the study and outline directions for future research in Section V.

II. EXISTING WORK


The use of green cloud computing has increased substantially in the recent past, and a lot of research has been done to incorporate and enhance the applicability of the green cloud in real-life scenarios with the help of various parameters. Energy usage in data centers is increasing dramatically. Cavdar et al. [1, 2] describe how, to improve the energy efficiency of running data centers, the Green Grid has proposed parameters such as the Power Usage Effectiveness (PUE) [7] and Data Centre Efficiency (DCE) metrics [10], TDP (Thermal Design Power) [2], etc. PUE is the most common parameter.
According to Wikipedia, "PUE is a measure of how efficiently a computer data center uses its power." The range of PUE varies from 1.0 to infinity: a value approaching 1.0 means efficiency is 100% and all power is used by the IT equipment. In recent years some companies have achieved low PUE levels, such as Google with a PUE of 1.13 [9]. A PUE of 1.5 means that for every 1 kWh consumed by the IT equipment, the data center draws 1.5 kWh, with 0.5 kWh wasted on fruitless work such as cooling and CPU heat dissipation. Table 1 explains some parameters proposed for data centers. In many data centers the value of PUE has reached 3.0 or more, but with a correct design a value of about 1.6 should be achievable [5]. Measurements made at Lawrence Berkeley National Laboratory [8] illustrate that the 22 data centers measured had PUE values in the 1.3 to 3.0 range [8].
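To make the PUE interpretation above concrete, the following minimal Python sketch (with hypothetical values, not taken from the paper) computes PUE and the implied overhead energy:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example matching the 1.5-PUE case described above:
it_kwh = 1.0          # energy consumed by the IT equipment
facility_kwh = 1.5    # energy drawn by the whole data center
value = pue(facility_kwh, it_kwh)
overhead_kwh = facility_kwh - it_kwh   # energy spent on cooling and other overhead
print(f"PUE = {value:.2f}, overhead = {overhead_kwh:.2f} kWh per 1 kWh of IT load")
```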
Truong Duy, Sato and Inoguchi [3] implement a green scheduling algorithm combined with a neural network predictor to reduce energy consumption in cloud computing.


In this algorithm, the server predicts the load from time t over the interval it takes to restart a server and calculates the peak load. According to the peak load, the required number of servers is decided. Let No be the number of servers in the ON state and Nn the number of necessary servers. If Nn > No, servers in the OFF state are chosen and signalled to restart; if Nn < No, servers in the ON state are chosen and signalled to shut down.
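A minimal sketch of this on/off decision, assuming a load predictor and a per-server capacity that are not specified in the paper, could look like this:

```python
import math

def servers_needed(predicted_peak_load: float, capacity_per_server: float) -> int:
    """Number of servers (Nn) required to handle the predicted peak load."""
    return max(1, math.ceil(predicted_peak_load / capacity_per_server))

def adjust_servers(n_on: int, predicted_peak_load: float, capacity_per_server: float):
    """Return (servers_to_restart, servers_to_shut_down) following the Nn/No rule above."""
    n_needed = servers_needed(predicted_peak_load, capacity_per_server)
    if n_needed > n_on:                        # Nn > No: wake up OFF servers
        return n_needed - n_on, 0
    if n_needed < n_on:                        # Nn < No: shut down surplus ON servers
        return 0, n_on - n_needed
    return 0, 0

# Hypothetical usage: 8 servers on, predicted peak of 550 requests/s, 100 requests/s per server
print(adjust_servers(8, 550.0, 100.0))         # -> (0, 2): shut down two servers
```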
Fumiko Satoh et al. [4] also focus on reducing energy usage in data centers. For future energy management they develop an energy management system for the cloud that uses a sensor management function together with an optimized VM allocation tool. The system helps reduce energy consumption across multiple data centers, and results show that it can save about 30% of energy. The system is also used to reduce carbon emissions.
Cooling is another major issue that consumes a huge amount of energy in data centers. Previously, cooling was done using mechanical refrigeration that supplied chilled water to the IT equipment. Nowadays pre-cooling, also called free cooling, is used; it minimizes the use of mechanical cooling. For example, Facebook deployed a data center in Sweden, which has a cold and dry climate; Microsoft leaves servers in the open air in order to cool them easily; and Google uses river water to cool its data centers [1]. There are also hardware technologies such as virtualization and software technologies such as energy-efficient algorithms used to decrease energy consumption.
Rasoul Beik et al. [6] propose an energy-aware layer in the software architecture that calculates the energy consumption in data centers and provides services to users in an energy-efficient way. Bhanu Priya et al. [11] give cloud computing metrics to make the cloud green in terms of energy efficiency; different energy models are discussed in that paper to reduce power consumption and CO2 emissions and make the cloud greener. The survey takes three major factors under consideration, and any cloud can be made greener by following them: the first is virtualization, the second is workload distribution and the third is software automation. Some other factors are also discussed, such as pay-per-use and self-service, which have proved to be a key to reducing energy consumption.

Table 1. Green metrics for power measurement [1, 2]

Metric | Explanation | Formula
Power Usage Effectiveness (PUE) | Ratio of the total energy consumed by the data centre facility to the energy consumed by the IT equipment | PUE = Total Facility Energy / IT Equipment Energy
Carbon Usage Effectiveness (CUE) | Greenhouse gases (CO2, CH4) released into the atmosphere by the data centre per unit of IT energy | CUE = Total CO2 Emissions / IT Equipment Energy
Water Usage Effectiveness (WUE) | Yearly water used by the data centre, e.g. for cooling and energy production, per unit of IT energy | WUE = Annual Water Usage / IT Equipment Energy
Energy Reuse Factor (ERF) | Share of reusable energy, such as hydro power and solar power, used by the data center | ERF = Reused Energy / Total Facility Energy
Energy Reuse Effectiveness (ERE) | Parameter for measuring the benefit of reusing energy from a data centre | ERE = (Total Facility Energy - Reused Energy) / IT Equipment Energy
Data Centre Infrastructure Efficiency (DCiE) | Energy efficiency of a data centre | DCiE = (IT Equipment Energy / Total Facility Energy) * 100%
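As an illustration of how these metrics relate, the following Python sketch (with made-up yearly figures, not values from the paper) computes them for a hypothetical facility:

```python
def green_metrics(total_kwh, it_kwh, reused_kwh, co2_kg, water_l):
    """Compute the Green Grid style metrics listed in Table 1 for one year of operation."""
    return {
        "PUE":  total_kwh / it_kwh,
        "DCiE": it_kwh / total_kwh * 100.0,   # percentage, the inverse of PUE
        "CUE":  co2_kg / it_kwh,              # kg of CO2 per kWh of IT energy
        "WUE":  water_l / it_kwh,             # litres of water per kWh of IT energy
        "ERF":  reused_kwh / total_kwh,
        "ERE":  (total_kwh - reused_kwh) / it_kwh,
    }

# Hypothetical yearly figures for a small data center
print(green_metrics(total_kwh=1_500_000, it_kwh=1_000_000,
                    reused_kwh=150_000, co2_kg=600_000, water_l=2_000_000))
```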


According to Kliazovich and Pascal Bouvry [12], the expenses of maintaining and operating cloud data centers are gradually increasing. In that paper the authors focus on workload distribution among data centers so that energy consumption can be calculated at the packet level; with this technique packet-level accounting of communication is achieved. Packet-level simulation of energy is done through a simulator, an NS2-based simulator for the green cloud, whereas for the cloud alone an existing simulator called CloudSim is used. The simulation is carried out for three levels: two-tier, three-tier and three-tier high-speed data center architectures. Kaur and Singh [13] examined the different challenges in the field of energy in cloud computing, and the authors propose a model to calculate the energy wasted by producing various gases in the environment. The proposed model contains several stages (data, analysis, record, put on guard, restrain) along with the virtualization concept in the green cloud to make it energy efficient and support a healthy environment.
Hosman and Baikie [14] present a new challenge in the field of cloud computing: data centers consume a lot of energy, and that energy is not necessarily available all the time, so the authors discuss solar energy and how it can play a vital role in powering data centers. In the paper they propose a small-scale cloud data center that combines three technologies: a low-power-consumption platform, energy-efficient cloud computing and DC power distribution. Owusu et al. [17] performed a survey to establish the current state of the art in the area of energy efficiency in cloud computing. They describe energy efficiency as a controversial area for cloud computing, and the paper discusses one area of controversy: the energy efficiency of cloud computing itself.
Yamini et al. [18] introduce the key approaches of green cloud computing, such as virtualization, power management, recycling of material and telecommuting. The major focus of that paper is the consolidation or scheduling of tasks and resource utilization in green cloud computing to reduce high energy consumption. The results shown in the paper do not demonstrate a direct, drastic energy reduction but indicate possible electricity savings in huge cloud data centers. According to Buyya [19], the demand for cloud is drastically increasing nowadays, and the consumption of energy and emission of harmful gases is also extreme, which is a big issue for health and a big reason for the increase in the cost of cloud operations. Buyya gives a presentable and evidential literature survey of the various components of the cloud that participate in the total energy consumption; the structure of the cloud is discussed in the paper, which motivates the use of green cloud computing.
Buyya et al. [24] contribute a carbon-aware green cloud architecture built around a third-party concept consisting of two types of directories, named the green offer directory and the carbon emission directory. These directories help to provide and utilize green services for both users and providers. Green brokers access the services from the green offer directory and schedule services according to the least CO2 emission. Beloglazov and Buyya [25] focus on virtual machines for the reduction of energy consumption. The authors propose a dynamic reallocation technique for VMs and switch off unused servers, which results in considerable energy savings in real cloud computing data centers.
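A minimal sketch of the consolidation idea in [25], under assumptions of our own (a simple first-fit packing of CPU demands; the function names and numbers are invented for illustration), might look like this:

```python
def consolidate(vms, host_capacity):
    """Greedy first-fit consolidation: pack VM CPU demands onto as few hosts as possible.
    Returns the list of active hosts (each a list of VM demands); unused hosts can be switched off."""
    hosts = []
    for demand in sorted(vms, reverse=True):          # place the largest VMs first
        for host in hosts:
            if sum(host) + demand <= host_capacity:   # fits on an already-active host
                host.append(demand)
                break
        else:
            hosts.append([demand])                    # otherwise power on a new host
    return hosts

# Hypothetical VM demands (CPU shares) and host capacity
active_hosts = consolidate([0.4, 0.3, 0.6, 0.2, 0.5], host_capacity=1.0)
print(len(active_hosts), active_hosts)                # fewer active hosts -> energy saved
```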
Nimje et al. [28] address the security of cloud data centers in order to achieve a green cloud environment by using the virtualization concept. Various methods are presented in the paper to address security and the reduction of power consumption. Virtualization comes into the picture here because it reduces the load on the data centers and provides deployment, management and delivery of resources in a simple manner. Nimje includes a hypervisor environment to provide the virtualization, and it works as a security tool to achieve a high level of security in green cloud computing.

III. EXISTING APPROACHES


As noted in the previous section, Buyya et al. [24] contribute a carbon-aware green cloud architecture built around a third-party concept with two directories, the green offer directory and the carbon emission directory, which help to provide and utilize green services for both users and providers.
The services of the providers are registered in the green offer directory. The green broker accesses these services and organizes them according to price, time and the service that offers the least CO2 emission. The carbon emission directory stores information on the energy and cooling efficiency of cloud services and data centers, and the green broker uses this up-to-date information about services.
Whenever a user requests a service, it contacts the green broker. The green broker uses these directories, chooses the green offer and energy-efficiency information, allocates the service to the private cloud and finally returns the result to the user. This directory idea is used by Hulkury et al. [26] and Garg et al. [27], who propose a new architecture called the Integrated Green Cloud Architecture (IGCA), shown in Figure 2. It smartly includes a client-oriented component in the cloud middleware that verifies whether cloud computing is better than local computing with respect to QoS and budget.
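A minimal sketch of the broker's selection step, under our own assumptions (a plain list of service records with price, time and CO2 fields; none of these field names come from [24]), could be:

```python
def pick_greenest(services, max_price=None, max_time=None):
    """Select the service with the least CO2 emission among those meeting price/time limits."""
    candidates = [s for s in services
                  if (max_price is None or s["price"] <= max_price)
                  and (max_time is None or s["time"] <= max_time)]
    return min(candidates, key=lambda s: s["co2"]) if candidates else None

# Hypothetical green offer directory entries
offers = [
    {"name": "dc-east",  "price": 10.0, "time": 4.0, "co2": 2.5},
    {"name": "dc-north", "price": 12.0, "time": 5.0, "co2": 1.1},
    {"name": "dc-west",  "price": 9.0,  "time": 3.5, "co2": 3.0},
]
print(pick_greenest(offers, max_price=12.0, max_time=5.0))   # -> the dc-north entry
```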


Figure 2. Integrated green Cloud architecture (IGCA)

This architecture has two parts: the client side and the server side. On the client side the manager and the users are present, and they deal with the execution destination of a job; the server side includes the green cloud middleware, the green broker and sub-servers such as processing servers and storage servers. The directory concept is used in the green broker layer of IGCA to organize all the information about the public cloud and provide the best green service to the user.
The green cloud middleware has two components. The manager is the main head that deals with the first component and stores all the information of the middleware: the usage of the users' PCs, the servers present on the private cloud, the frequency level of each server (high, medium or low), the energy usage, the storage capacity [26] and other information.
When the manager gets a request from a client, the request is divided into jobs and distributed among the users, while the information about each job is also stored in the component. The carbon emission and energy used to execute a job on the private cloud servers, on the public cloud via the green broker, or on the client's PC is calculated and shown to the users. The best green offer is selected by the manager, also taking into consideration the security level of the job. Once the decision is made by the manager, this information is stored in an XML file for future use.
The second component is accessed by all the users to read the XML file. This file stores all the information about job execution: the locations of the jobs are registered in the file, and the jobs execute according to those addresses. If a job has no entry in the file, it will be executed either on the client's PC or in the private cloud. Execution of a job can take place in three places. First, if the job is executed locally (on the requester's side), this information is stored on the client side, so the next time the request arrives it does not go through the middleware. If the job is executed in the private cloud, the location as well as the server name is fetched from the file. If it is in the public cloud, the green broker is consulted to find the best green decision for executing the job. The middleware knows all the information about the three places; the energy used by the workers in the company is also calculated by the middleware for taking further decisions.
The processing speed, energy consumption, bandwidth and other factors are responsible for deciding the best location for executing a job. By considering all these factors the middleware computes and judges which of the three places to use. IGCA balances job execution and provides security and quality of service to the clients; the manager divides the tasks and picks the top-quality green solution by considering all the places (public cloud, private cloud, local host).
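A rough sketch of this three-way placement decision, using an invented weighted cost model (the weights and field names are our assumptions and are not part of IGCA), might be:

```python
def choose_place(job, options, w_energy=0.5, w_time=0.3, w_co2=0.2):
    """Score local PC, private cloud and public cloud for one job and pick the lowest cost.
    'options' maps a place name to its estimated energy (kWh), time (s) and CO2 (kg)."""
    def cost(est):
        return w_energy * est["energy"] + w_time * est["time"] + w_co2 * est["co2"]
    return min(options, key=lambda place: cost(options[place]))

# Hypothetical estimates for one job
estimates = {
    "local":   {"energy": 0.30, "time": 120.0, "co2": 0.15},
    "private": {"energy": 0.20, "time": 60.0,  "co2": 0.10},
    "public":  {"energy": 0.25, "time": 40.0,  "co2": 0.05},
}
print(choose_place("job-42", estimates))   # place with the lowest weighted cost
```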
In this architecture the manager acts as the central coordinator: it allocates jobs to the users and does all the decision making. At the same time, the manager is the weakest point of the architecture, since it is a single point of failure; if the manager fails, everything in the architecture collapses.

IV. ADVANTAGES AND DISADVANTAGES


As discussed above, all the existing architectures have some constructive as well as destructive points. Buyya et al. [19] gave an architecture for the green cloud whose major advantage is the CO2 emission directory; this directory identifies the most suitable service with the least carbon emission, which straight away indicates that energy use will also decrease, because CO2 emission and energy consumption are directly proportional to each other.
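To illustrate this proportionality, a tiny sketch using an assumed grid carbon-intensity factor (the value below is purely illustrative, not from the paper) converts energy consumption into CO2 emissions:

```python
CARBON_INTENSITY_KG_PER_KWH = 0.5   # assumed grid average; varies by country and energy mix

def co2_from_energy(energy_kwh: float) -> float:
    """CO2 emitted (kg) is directly proportional to the energy consumed (kWh)."""
    return energy_kwh * CARBON_INTENSITY_KG_PER_KWH

# Halving the energy consumed by a data center halves its emissions:
print(co2_from_energy(1000.0), co2_from_energy(500.0))   # 500.0 kg vs 250.0 kg
```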


On the other hand, the disadvantage is that CO2 emission and energy are not the only factors that should be taken under consideration; quality provisioning, security, etc. matter as well.
Hulkury et al. [26] take these factors into consideration as well, using additional components that search for a service first on the private cloud and later on the public cloud; this reduces time consumption and provides better results compared to Buyya's architecture. The major disadvantage we observed here is that the manager of the system is the central point of communication, so if the manager crashes the whole system falls apart; at the same time, the decision making done by the manager is not intelligent, and all the work is done manually.
These are some of the advantages and disadvantages observed in the existing architectures, which can be further improved in future work.

V. CONCLUSIONS AND FUTURE WORK


In this paper we addressed the problems of the traditional cloud and the use of the green cloud, and at the same time we highlighted recent work that has been done in the field of green cloud computing for a healthy and greener environment. We also gave a comparative study in the field of green cloud computing. There are many possible directions for future work. While in this paper we addressed the problem of fetching results from the cloud in an efficient way so that all the features covered in the paper can be achieved, future work can implement an approach to automate the manager of the green cloud, which makes all the decisions regarding the services.

REFERENCES
[1] D. Cavdar and F. Alagoz, (Eds.), “A Survey of Research on Greening Data Centers”, Proceedings of the IEEE Global Communications Conference
(GLOBECOM), (2012) December 3-7; Anaheim, CA.
[2] A. Jain, M. Mishra, S. Kumar Peddoju and N. Jain, (Eds.), “Energy Efficient Computing-Green Cloud Computing”, Proceedings of the International
Conference of the Energy Efficient Technologies for Sustainability (ICEETS), (2013) April 10-12; Nagercoil.
[3] T. Vinh T. Duy, Y. Sato and Y. Inoguchi, (Eds.), “Performance Evaluation of a Green Scheduling Algorithm for Energy Savings in Cloud Computing”,
Proceedings of the IEEE International Symposium of the Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), (2010) April 19-23;
Atlanta, GA.
[4] F. Satoh, H. Yanagisawa, H. Takahashi and T. Kushida, (Eds.), “Total Energy Management system for Cloud Computing”, Proceedings of the IEEE
International Conference of the Cloud Engineering (IC2E), (2013), March 25-27; Redwood City, CA.
[5] C. Belady, (Ed.), “How to Minimize Data Centre Utility Bills”, US (2006).
[6] R. Beik, (Ed.), “Green Cloud Computing: An Energy-Aware Layer in Software Architecture”, Proceedings of the Spring Congress of the Engineering and
Technology (S-CET), (2012), May 27-30; Xian.
[7] “Green Grid Metrics—Describing Data Centres Power Efficiency”, Technical Committee White Paper by the Green Grid Industry Consortium, (2007)
February.
[8] S. Greenberg, E. Mills, B. Tschudi, P. Rumsey and B. Myatt, (Eds.), “Best Practices for Data Centres: Results from Benchmarking 22 Data Centres”,
Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings, (2006) April, pp. 3-76 to 3-87.
[9] T. Kgil, D. Roberts and T. Mudge, “Pico Server: Using 3D Stacking Technology to Build Energy Efficient Servers”, vol. 4, no. 16, (2006).
[10] N. Rassmussen, (Ed.), “Electrical Efficiency Modelling of Data Centres”, American Power Conversion (APC) White Paper #113, (2007) October, pp.1-18.
[11] B. Priya, E. S. Pilli and R. C. Joshi, (Eds.), “A Survey on Energy and Power Consumption Models for Greener Cloud”, Proceeding of the IEEE 3rd
International Advance Computing Conference (IACC), (2013), February 22-23; Ghaziabad.
[12] D. Kliazovich and P. Bouvry, (Eds.), “Green Cloud: A Packet-level Simulator of Energy-aware Cloud Computing Data Centers”, Proceeding of the IEEE
Global Telecommunications Conference (GLOBECOM), (2010), December 6-8; Miami, FL.
[13] M. Kaur and P. Singh, (Eds.), "Energy Efficient Green Cloud: Underlying Structure", Proceedings of the IEEE International Conference of the Energy Efficient
Technologies for Sustainability (ICEETS), (2013) April 10-12; Nagercoil.
[14] L. Hosman and B. Baikie, (Eds.), “Solar-Powered Cloud Computing datacenters”, vol. 2, no. 15, (2013).
[15] F. Owusu and C. Pattinson, (Eds.), "The current state of understanding of the energy efficiency of cloud computing", Proceedings of the IEEE 11th International
Conference of the Trust, Security, Privacy in Computing and Communications (TrustCom), (2012) June 25-27; Liverpool.
