Paper Presentation on CLOUD COMPUTING
KAVALI, NELLORE (D.T)

PRESENTED BY:
I. VISWANATH REDDY, 08731A05C0, [email protected]
G. VISHNU, 08731A05B8, [email protected]
ABSTRACT:
Cloud computing is a term used to describe both a platform and a type of application. A cloud computing platform dynamically provisions, configures, reconfigures, and deprovisions servers as needed. Servers in the cloud can be physical machines or virtual machines. Advanced clouds typically include other computing resources such as storage area networks (SANs), network equipment, firewalls, and other security devices.
Cloud computing also describes applications that are extended to be accessible through the Internet. These cloud applications use large data centers and powerful servers that host Web applications and Web services. Anyone with a suitable Internet connection and a standard browser can access a cloud application.

INTRODUCTION:
Definition
A cloud is a pool of virtualized computer resources. A cloud can:
- Host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications
- Allow workloads to be deployed and scaled out quickly through the rapid provisioning of virtual machines or physical machines
- Support redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures
- Monitor resource use in real time to enable rebalancing of allocations when needed

Cloud computing environments support grid computing by quickly providing physical and virtual servers on which the grid applications can run. Cloud computing should not be confused with grid computing. Grid computing involves dividing a large task into many smaller tasks that run in parallel on separate servers. Grids require many computers, typically in the thousands, and commonly use servers, desktops, and laptops.
Fig.: Example of a cloud
Clouds also support nongrid environments, such as a three-tier Web architecture running standard or Web 2.0 applications. A cloud is more than a collection of computer resources because a cloud provides a mechanism to manage those resources. Management includes provisioning, change requests, reimaging, workload rebalancing, deprovisioning, and monitoring.

Benefits
Cloud computing infrastructures can allow enterprises to achieve more efficient use of their IT hardware and software investments. They do this by breaking down the physical barriers inherent in isolated systems, and automating the management of the group of systems as a single entity. Cloud computing is an example of an ultimately virtualized system, and a natural evolution for data centers that employ automated systems management, workload balancing, and virtualization technologies.
A cloud infrastructure can be a cost-efficient model for delivering information services, reducing IT management complexity, promoting innovation, and increasing responsiveness through real-time workload balancing. The cloud makes it possible to launch Web 2.0 applications quickly and to scale up applications as much as needed, when needed. The platform supports traditional Java™ and Linux, Apache, MySQL, PHP (LAMP) stack-based applications as well as new architectures such as MapReduce and the Google File System, which provide a means to scale applications across thousands of servers instantly.

The innovation portal provides a forum for idea exchange and the ability to rapidly develop and deploy new product prototypes. In fact, HiPODS has been hosting IBM's innovation portal on a virtualized cloud infrastructure in our Silicon Valley Lab for nearly two years. We have over seventy active innovations at a time, with each innovation lasting on average six months. 50% of those innovations are Web 2.0 projects (search, collaboration, and social networking) and 27% turn into products or solutions. Our success with the innovation portal is documented in the August 20 Business Week cover story on global collaboration.

Personal hobbies
Innovation is no longer a concept developed and owned only by companies and businesses.
Automated provisioning:
The core functionality of a cloud is its ability to automatically provision servers for innovators and to enable innovators, administrators, and others to use that function through a Web-based interface. The role-based interface abstracts out the complexity of IBM Tivoli Provisioning Manager, Remote Deployment Manager, Network Installation Manager, business process execution language (BPEL), and Web services. Typically, a pilot team needs four to twelve weeks to identify, procure, and build a pilot infrastructure, and additional time to build a security-compliant software stack, before developers can begin building or deploying applications and code. The cloud provides a framework and offering that reduces that onboarding process to approximately one hour. We accomplish this through a role-based Web portal that allows innovators to fill out a form defining their hardware platform, CPU, memory, storage, operating system, middleware, and team members and associated roles. This process takes about five minutes. After submitting the request through the portal, a cloud administrator is notified and logs in to approve, modify, or reject the request.

The cloud computing platform has two user interfaces for provisioning servers. One interface is feature rich -- fully loaded with the WebSphere suite of products -- and relatively more involved from a process perspective; for more information on this interface, see Cloud provisioning and management. The other interface provides basic screens for making provisioning requests. All requests are handled by Web 2.0 components deployed on WebSphere Application Server. Requests are forwarded to Tivoli Provisioning Manager for provisioning/deprovisioning of servers.

Tivoli Provisioning Manager automates imaging, deployment, installation, and configuration of the Microsoft Windows and Linux operating systems, along with the installation/configuration of any software stack that the user requests. Tivoli Provisioning Manager uses WebSphere Application Server to communicate the provisioning status and availability of resources in the data center, to schedule the provisioning and deprovisioning of resources, and to reserve resources for future use. As a result of the provisioning, virtual machines are created using the Xen hypervisor, or physical machines are created using Network Installation Manager, Remote Deployment Manager, or Cluster Systems Manager, depending upon the operating system and platform. IBM Tivoli Monitoring Server monitors the health (CPU, disk, and memory) of the servers provisioned by Tivoli Provisioning Manager. DB2 is the database server that Tivoli Provisioning Manager uses to store the resource data. IBM Tivoli Monitoring agents installed on the virtual and physical machines communicate with the Tivoli Monitoring server to report the health of those machines, which is then provided to the user.
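To make the request flow concrete, here is a minimal sketch (not IBM's actual implementation) of how a portal form like the one described above could be captured and forwarded to a back-end provisioning service over HTTP. The class name, field values, form keys, and endpoint URL are all hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

/**
 * Hypothetical sketch of a portal provisioning request: the fields mirror
 * the form described above (platform, CPU, memory, storage, OS, middleware,
 * team members). The endpoint URL and field names are illustrative only.
 */
public class ProvisioningRequest {
    String hardwarePlatform = "x86";          // physical or virtual platform
    int cpus = 2;                             // requested CPU count
    int memoryGb = 4;                         // requested memory
    int storageGb = 100;                      // requested disk
    String operatingSystem = "Linux";         // OS image to deploy
    String middleware = "WebSphere";          // software stack to install
    List<String> teamMembers = List.of("innovator1", "admin1");

    /** Serialize the request into a simple key=value form body. */
    String toFormBody() {
        return "platform=" + hardwarePlatform + "&cpus=" + cpus
             + "&memoryGb=" + memoryGb + "&storageGb=" + storageGb
             + "&os=" + operatingSystem + "&middleware=" + middleware
             + "&team=" + String.join(",", teamMembers);
    }

    /** Forward the request to a (hypothetical) provisioning service endpoint. */
    public static void main(String[] args) throws Exception {
        ProvisioningRequest req = new ProvisioningRequest();
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest post = HttpRequest.newBuilder(
                URI.create("http://cloud-portal.example.com/provision")) // placeholder URL
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(req.toFormBody()))
            .build();
        HttpResponse<String> response =
            client.send(post, HttpResponse.BodyHandlers.ofString());
        // The real portal would notify an administrator for approval;
        // here we simply print the service's reply.
        System.out.println("Provisioning service replied: " + response.body());
    }
}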
Open source
Open source solutions played an important role in the development of the cloud. In particular, a couple of projects have been foundations for common cloud services such as virtualization and parallel processing. Xen is an open-source virtual machine implementation that allows physical machines to host multiple copies of operating systems. Xen is used in the cloud to represent machines as virtual images that can be easily and repeatedly provisioned and deprovisioned.
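As a rough illustration of that provision/deprovision cycle, the sketch below drives Xen's classic command-line toolstack from Java. The cloud described in this paper uses Tivoli Provisioning Manager for this, so the snippet only shows the underlying idea; the config file path and domain name are placeholders.

import java.io.IOException;

/**
 * Illustrative only: start and stop a Xen guest by shelling out to the
 * "xm" toolstack. The config path and domain name are placeholders,
 * not part of the cloud described in this paper.
 */
public class XenGuestLifecycle {

    /** Run a command, echoing its output to the console, and wait for it. */
    static int run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Provision: boot a guest from a prepared virtual image / config file.
        run("xm", "create", "/etc/xen/innovation-guest.cfg");

        // ... the guest serves its pilot workload here ...

        // Deprovision: tear the guest down so its resources can be reused.
        run("xm", "destroy", "innovation-guest");
    }
}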
Hadoop, now under the Apache license, is an open-source framework for running large data processing applications on a cluster. It allows the creation and execution of applications using Google's MapReduce programming paradigm, which divides the application into small fragments of work that can be executed on any node in the cluster. It also transparently supports reliability and data migration through the use of a distributed file system. Using Hadoop, the cloud can execute parallel applications on a massive data set in a reasonable amount of time, enabling computationally intensive services such as retrieving information efficiently, customizing user sessions based on past history, or generating results based on Monte Carlo (probabilistic) algorithms.
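To make the MapReduce model concrete, here is the canonical word-count job written against Hadoop's Java API (a generic example, not code from the cloud described here): the mapper emits a (word, 1) pair for every token in its input fragment, and the reducer sums those counts per word, so the work can be spread across any number of cluster nodes.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  /** Map: split each input line into words and emit (word, 1). */
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  /** Reduce: sum the counts emitted for each word. */
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // combine locally before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}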
Virtualization
Virtualization in a cloud can be implemented on two levels. The first is at the hardware layer. Using hardware like the IBM System p™ enables innovators to request virtualized, dynamic LPARs with IBM AIX® or Linux operating systems. The LPAR's CPU resource is ideally managed by IBM Enterprise Workload Manager. Enterprise Workload Manager monitors CPU demand and use and employs business policies to determine how much CPU resource is assigned to each LPAR. The System p has micropartitioning capability, which allows the system to assign partial CPUs to LPARs. A partial CPU can be as granular as 1/10 of a physical CPU. Micropartitioning combined with the dynamic load balancing capabilities of Enterprise Workload Manager makes a powerful virtualized infrastructure available to innovators. In this environment, pilots and prototypes are generally lightly used at the beginning of the life cycle. During the startup stage, CPU use is generally lower because there is typically more development work and fewer early adopters or pilot users. At the same time, other more mature pilots and prototypes may have hundreds or thousands of early adopters who are accessing the servers. Accordingly, those servers can take heavy loads at certain times of the day, or days of the week, and this is when Enterprise Workload Manager dynamically allocates CPU resources to the LPARs that need them.
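Enterprise Workload Manager's policy engine is proprietary, but the underlying idea (hand out shares of a pooled CPU in proportion to measured demand, down to fractional units) can be sketched in a few lines. The 0.1-CPU rounding below mirrors the micropartitioning granularity mentioned above; everything else is a simplified, hypothetical model rather than EWLM's actual behavior.

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Simplified, hypothetical model of demand-driven CPU balancing across LPARs.
 * Shares of a fixed physical pool are handed out in proportion to measured
 * demand, rounded down to 0.1-CPU units (the micropartitioning granularity).
 */
public class CpuBalancer {

    /** Allocate poolCpus across partitions in proportion to their demand. */
    static Map<String, Double> rebalance(Map<String, Double> demand, double poolCpus) {
        double totalDemand = demand.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Double> allocation = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : demand.entrySet()) {
            double share = (totalDemand == 0) ? poolCpus / demand.size()
                                              : poolCpus * e.getValue() / totalDemand;
            // Round down to 1/10 of a physical CPU, the smallest assignable unit.
            allocation.put(e.getKey(), Math.floor(share * 10) / 10.0);
        }
        return allocation;
    }

    public static void main(String[] args) {
        // Measured CPU demand (in CPUs' worth of work) for three pilot LPARs.
        Map<String, Double> demand = new LinkedHashMap<>();
        demand.put("mature-pilot", 6.0);   // heavy load from many early adopters
        demand.put("new-pilot", 0.5);      // still mostly development work
        demand.put("prototype", 1.5);

        // 8 physical CPUs in the shared pool.
        System.out.println(rebalance(demand, 8.0));
        // prints {mature-pilot=6.0, new-pilot=0.5, prototype=1.5}
    }
}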
The second implementation of virtualization occurs at the software layer. Here, technologies such as Xen can provide tremendous advantages to a cloud environment. Our current implementations of the cloud support Xen specifically, but the framework also allows for other software virtualization technologies. Software virtualization entails installing a hypervisor on an IBM System x or IBM System p physical server. The hypervisor supports multiple “guest” operating systems and provides a layer of virtualization so that each guest operating system resides on the same physical hardware without knowledge of the other guest operating systems. Each guest operating system is physically protected from the other operating systems and will not be affected by instability or configuration issues of the other operating systems.

Storage architecture in the cloud
The storage architecture of the cloud includes the capabilities of the Google File System along with the benefits of a storage area network (SAN). Either technique can be used by itself, or both can be used together as needed. Computing without data is as rare as data without computing. Computer power is important: it is often measured in the cycle speed of a processor, but computer speed also needs to account for the number of processors. The number of processors within an SMP and the number within a cluster may both be important.
When looking at disk storage, the amount of space is often the primary measure, and the number of gigabytes or terabytes of data needed is important. But access rates are often more important. Being able to read only sixty megabytes per second may limit your processing capabilities to well below your compute capabilities. Individual disks have limits on the rate at which they can process data. A single computer may have multiple disks, or, with a SAN file system, may be able to access data over the network. So data placement can be an important factor in achieving high data access rates. Spreading the data over multiple computer nodes may be desired, or having all the data reside on a single node may be required for optimal performance.
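A quick back-of-the-envelope calculation shows why: at 60 MB/s a single disk needs roughly 4.6 hours to scan 1 TB, while 100 nodes reading disjoint chunks in parallel finish in under three minutes. The snippet below simply works through that arithmetic; the node count is illustrative, not a measurement from this cloud.

/** Illustrative scan-time arithmetic: data size divided by aggregate read rate. */
public class ScanTime {
    public static void main(String[] args) {
        double dataMb = 1_000_000;      // 1 TB expressed in MB
        double diskMbPerSec = 60;       // single-disk read rate from the text
        int nodes = 100;                // nodes reading disjoint chunks in parallel

        double singleNodeHours = dataMb / diskMbPerSec / 3600;
        double clusterMinutes  = dataMb / (diskMbPerSec * nodes) / 60;

        System.out.printf("One disk:  %.1f hours%n", singleNodeHours);   // ~4.6 hours
        System.out.printf("100 nodes: %.1f minutes%n", clusterMinutes);  // ~2.8 minutes
    }
}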
The Google File System can be used in the cloud environment. When used, it uses the disks inside the machines, along with the network, to provide a shared file system that is redundant. This can increase the total data processing speed when the data and processing power are spread out efficiently. The Google File System is a part of a storage architecture, but it is not considered a SAN: a SAN relies on an adapter other than an Ethernet adapter in the computer nodes, and has its own network, similar to an Ethernet network, that can then host various storage devices.
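Hadoop's distributed file system, HDFS, is the open-source counterpart of the Google File System and exposes this kind of redundant, shared storage through an ordinary file API. The sketch below writes and then reads one file whose blocks HDFS replicates across the cluster's local disks; the path and replication factor are illustrative.

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Minimal HDFS usage sketch: one write and one read through the shared file system. */
public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "3");   // each block stored on three nodes' local disks

        FileSystem fs = FileSystem.get(conf);               // connects to the configured namenode
        Path path = new Path("/cloud/pilot/results.txt");   // illustrative path

        // Write: the client streams data; HDFS replicates the blocks behind the scenes.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write("pilot output data".getBytes(StandardCharsets.UTF_8));
        }

        // Read: any node in the cluster can open the same path.
        byte[] buffer = new byte[64];
        try (FSDataInputStream in = fs.open(path)) {
            int n = in.read(buffer);
            System.out.println(new String(buffer, 0, n, StandardCharsets.UTF_8));
        }
        fs.close();
    }
}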
Through numerous engagements with clients, IBM has discovered that collaboration tools by themselves will not yield the desired results as effectively as having a structured innovation platform and program in place.

Conclusion
In today's global competitive market, companies must innovate and get the most from their resources to succeed. This requires enabling their employees, business partners, and users with the platforms and collaboration tools that promote innovation. Cloud computing infrastructures are next generation platforms that can provide tremendous value to companies of any size. They can help companies achieve more efficient use of their IT hardware and software investments and provide a means to accelerate innovation. Cloud computing increases profitability by improving resource utilization, and costs are driven down by delivering appropriate resources only for the time those resources are needed.
Acknowledgements
We acknowledge this paper's major supporters and contributors:
- Executive sponsor: Willy Chiu
- The HiPODS Architecture Board, led by Dennis Quan
- The Incubation Solutions Team that owns the Cloud strategy, led by Jose Vargas
- The Innovation Factory team, led by Jeff Coveyduc
- Contributors to the white paper: Greg Boss, Catherine Cuong Diep, Harold Hall, Susan Holic,