
IT1701 DISTRIBUTED SYSTEMS AND CLOUD COMPUTING

S.A. ENGINEERING COLLEGE

DEPARTMENT OF CSE

IV YEAR – VII SEMESTER

LECTURE NOTES - UNIT III

Department of CSE VII Semester 1



UNIT III INTRODUCTION TO CLOUD COMPUTING


Introduction to Cloud Computing – Evolution of Cloud Computing – Cloud Characteristics – Elasticity
in Cloud – On-demand Provisioning – NIST Cloud Computing Reference Architecture– Architectural
Design Challenges – Deployment Models: Public, Private and Hybrid Clouds – Service Models: IaaS –
PaaS – SaaS – Benefits of Cloud Computing.

3.1 INTRODUCTION
EVOLUTION OF DISTRIBUTED COMPUTING
Grids enable access to shared computing power and storage capacity from your desktop.
Clouds enable access to leased computing power and storage capacity from your desktop.
• Grids are an open source technology. Resource users and providers alike can understand
and contribute to the management of their grid.
• Clouds are a proprietary technology. Only the resource provider knows exactly how
their cloud manages data, job queues, security requirements, and so on.
• The concept of grids was proposed in 1995; the Open Science Grid (OSG) started in 2005,
and the EDG (European Data Grid) project began in 2001.
• In the late 1990s, Oracle and EMC offered early private cloud solutions, but the
term "cloud computing" did not gain prominence until 2007.
SCALABLE COMPUTING OVER THE INTERNET
Instead of using a centralized computer to solve computational problems, a parallel and
distributed computing system uses multiple computers to solve large-scale problems over the
Internet. Thus, distributed computing becomes data-intensive and network-centric.
The Age of Internet Computing
o The raw speed of high-performance computing (HPC) applications is no longer the
optimal measure of system performance.
o The emergence of computing clouds instead demands high-throughput computing (HTC)
systems built with parallel and distributed computing technologies.
o Data centers have to be upgraded with fast servers, storage systems, and high-bandwidth
networks.
The Platform Evolution
o From 1950 to 1970, a handful of mainframes, including the IBM 360 and CDC 6400,
were built to satisfy the demands of large businesses and government organizations.


o From 1960 to 1980, lower-cost minicomputers such as the DEC PDP 11 and VAX
Series became popular among small businesses and on college campuses.
o From 1970 to 1990, we saw widespread use of personal computers built with VLSI
microprocessors.
o From 1980 to 2000, massive numbers of portable computers and pervasive devices
appeared in both wired and wireless applications
o Since 1990, the use of both HPC and HTC systems hidden in clusters, grids, or
Internet clouds has proliferated

On the HPC side, supercomputers (massively parallel processors, or MPPs) are
gradually being replaced by clusters of cooperative computers, driven by the desire to
share computing resources. A cluster is often a collection of homogeneous compute
nodes that are physically connected in close range to one another.
On the HTC side, peer-to-peer (P2P) networks are formed for distributed file sharing
and content delivery applications. A P2P system is built over many client machines.
Peer machines are globally distributed in nature. P2P, cloud computing, and web
service platforms are more focused on


HTC applications than on HPC applications. Clustering and P2P technologies lead to
the development of computational grids or data grids.
For many years, HPC systems emphasized raw speed; the speed of HPC systems
increased from Gflops in the early 1990s to Pflops by 2010.
The development of market-oriented high-end computing systems is undergoing a
strategic change from an HPC paradigm to an HTC paradigm. This HTC paradigm
pays more attention to high-flux computing, whose main application is Internet
search and web services used by millions or more users simultaneously. The
performance goal thus shifts to high throughput, measured as the number of tasks
completed per unit of time. HTC technology needs not only to improve batch
processing speed, but also to address the acute problems of cost, energy savings,
security, and reliability at many data and enterprise computing centers.
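The throughput goal above can be made concrete with a toy benchmark that counts tasks completed per unit of time. This is purely an illustrative sketch; the function name and the no-op tasks are invented for the example, not part of any real HTC system:

```python
import time

def run_batch(tasks):
    """Run a batch of tasks and report throughput in tasks per second."""
    start = time.perf_counter()
    for task in tasks:
        task()                          # each task is any callable unit of work
    elapsed = time.perf_counter() - start
    return len(tasks) / elapsed if elapsed > 0 else float("inf")

# 10,000 trivial tasks stand in for a high-flux workload
throughput = run_batch([lambda: None] * 10_000)
print(f"{throughput:.0f} tasks/second")
```

Notice that the metric is tasks per second, not the speed of any single task; that is the essential difference between the HTC and HPC performance goals.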
Advances in virtualization make it possible to see the growth of Internet clouds as a
new computing paradigm. The maturity of radio-frequency identification (RFID),
Global Positioning System (GPS), and sensor technologies has triggered the
development of the Internet of Things (IoT). These new paradigms are only briefly
introduced here.
The high-technology community has argued for many years about the precise
definitions of centralized computing, parallel computing, distributed computing, and
cloud computing. In general, distributed computing is the opposite of centralized
computing. The field of parallel computing overlaps with distributed computing to a
great extent, and cloud computing overlaps with distributed, centralized, and parallel
computing.

Terms

• Centralized computing
This is a computing paradigm by which all computer resources are centralized in
one physical system. All resources (processors, memory, and storage) are fully shared and
tightly coupled within one integrated OS. Many data centers and supercomputers are
centralized systems, but they are used in parallel, distributed, and cloud computing
applications.


• Parallel computing
In parallel computing, all processors are either tightly coupled with centralized shared
memory or loosely coupled with distributed memory. Inter processor communication is
accomplished through shared memory or via message passing. A computer system
capable of parallel computing is commonly known as a parallel computer. Programs
running in a parallel computer are called parallel programs. The process of writing
parallel programs is often referred to as parallel programming.
• Distributed computing This is a field of computer science/engineering that studies
distributed systems. A distributed system consists of multiple autonomous computers,
each having its own private memory, communicating through a computer network.
Information exchange in a distributed system is accomplished through message passing.
A computer program that runs in a distributed system is known as a distributed program.
The process of writing distributed programs is referred to as distributed programming.
• Cloud computing An Internet cloud of resources can be either a centralized or a
distributed computing system. The cloud applies parallel or distributed computing, or
both. Clouds can be built with physical or virtualized resources over large data centers
that are centralized or distributed. Some authors consider cloud computing to be a form
of utility computing or service computing. As an alternative to the preceding terms, some
in the high-tech community prefer the term concurrent computing or concurrent
programming. These terms typically refer to the union of parallel computing and
distributed computing, although biased practitioners may interpret them differently.
• Ubiquitous computing refers to computing with pervasive devices at any place and time
using wired or wireless communication. The Internet of Things (IoT) is a networked
connection of everyday objects including computers, sensors, humans, etc. The IoT is
supported by Internet clouds to achieve ubiquitous computing with any object at any
place and time. Finally, the term Internet computing is even broader and covers all
computing paradigms over the Internet. This book covers all the aforementioned
computing paradigms, placing more emphasis on distributed and cloud computing and
their working systems, including the clusters, grids, P2P, and cloud systems.
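The message-passing model from the distributed computing definition above can be sketched in a few lines: two threads stand in for autonomous nodes, each with private state, exchanging information only through queues. All names here are illustrative, not taken from any real system:

```python
import threading
import queue

def worker(inbox, outbox):
    """An autonomous node: private memory, communicates only via messages."""
    total = 0                       # private state -- never shared directly
    while True:
        msg = inbox.get()
        if msg is None:             # sentinel message: shut down
            outbox.put(("result", total))
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
node = threading.Thread(target=worker, args=(inbox, outbox))
node.start()
for value in [1, 2, 3]:
    inbox.put(value)                # information exchange via message passing
inbox.put(None)
node.join()
result = outbox.get()
print(result)                       # ('result', 6)
```

The worker never exposes its memory; the only way to learn its state is to send it a message and receive a reply, which is exactly the property that distinguishes a distributed program from a shared-memory parallel program.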
Internet of Things
• The traditional Internet connects machines to machines or web pages to web pages. The
concept of the IoT was introduced in 1999 at MIT.


• The IoT refers to the networked interconnection of everyday objects, tools, devices, or
computers. One can view the IoT as a wireless network of sensors that interconnect all
things in our daily life.
• It allows objects to be sensed and controlled remotely across existing network
infrastructure

SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING


• Distributed and cloud computing systems are built over a large number of autonomous
computer nodes.
• These node machines are interconnected by SANs, LANs, or WANs in a hierarchical
manner. With today’s networking technology, a few LAN switches can easily connect
hundreds of machines as a working cluster.
• A WAN can connect many local clusters to form a very large cluster of clusters.
Clusters of Cooperative Computers
A computing cluster consists of interconnected stand-alone computers which work
cooperatively as a single integrated computing resource.
• In the past, clustered computer systems have demonstrated impressive results in handling
heavy workloads with large data sets.
Cluster Architecture

Figure 3.2 Clusters of Servers


Figure 3.2 shows the architecture of a typical server cluster built around a low-latency,
high bandwidth interconnection network. This network can be as simple as a SAN (e.g., Myrinet)
or a LAN (e.g., Ethernet).
• To build a larger cluster with more nodes, the interconnection network can be built with
multiple levels of Gigabit Ethernet, or InfiniBand switches.
• Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable
clusters with an increasing number of nodes. The cluster is connected to the Internet via a
virtual private network (VPN) gateway.
• The gateway IP address locates the cluster. The system image of a computer is decided
by the way the OS manages the shared cluster resources.
Most clusters have loosely coupled node computers. All resources of a server node are
managed by their own OS. Thus, most clusters have multiple system images as a result of having
many autonomous nodes under different OS control.
Single-System Image (SSI)
• An ideal cluster should merge multiple system images into a single-system image (SSI).
• Cluster designers desire a cluster operating system or some middleware to support SSI at
various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes.
An SSI is an illusion created by software or hardware that presents a collection of resources as
one integrated, powerful resource. SSI makes the cluster appear like a single machine to the user.
A cluster with multiple system images is nothing but a collection of independent computers.
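The SSI illusion can be sketched as middleware that aggregates per-node resources and exposes one logical machine. The class and node names here are hypothetical, invented only for illustration:

```python
class Node:
    """One stand-alone computer in the cluster, with its own resources."""
    def __init__(self, name, cpus, mem_gb):
        self.name, self.cpus, self.mem_gb = name, cpus, mem_gb

class SingleSystemImage:
    """Presents a collection of nodes as one integrated resource."""
    def __init__(self, nodes):
        self.nodes = nodes

    @property
    def cpus(self):                 # the user sees the cluster total,
        return sum(n.cpus for n in self.nodes)   # not the individual nodes

    @property
    def mem_gb(self):
        return sum(n.mem_gb for n in self.nodes)

cluster = SingleSystemImage([Node("n1", 8, 32), Node("n2", 8, 32),
                             Node("n3", 16, 64)])
print(cluster.cpus, cluster.mem_gb)   # 32 128 -- one "machine" to the user
```

A real SSI layer must of course also unify scheduling, memory, and I/O, which is why full SSI is expensive to achieve; the sketch only captures the illusion of a single aggregated resource.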
Hardware, Software, and Middleware Support
• Clusters exploring massive parallelism are commonly known as MPPs. Almost all HPC
clusters in the Top 500 list are also MPPs.
• The building blocks are computer nodes (PCs, workstations, servers, or SMP), special
communication software such as PVM, and a network interface card in each computer
node.
Most clusters run under the Linux OS. The computer nodes are interconnected by a high-
bandwidth network (such as Gigabit Ethernet, Myrinet, or InfiniBand). Special cluster
middleware support is needed to create SSI or high availability (HA). Both sequential and
parallel applications can run on the cluster, and special parallel environments are needed to
facilitate use of the cluster resources. For example, distributed memory has multiple images.
Users may want all distributed memory to be shared by all servers by forming distributed shared


memory (DSM). Many SSI features are expensive or difficult to achieve at various cluster
operational levels. Instead of achieving SSI, many clusters are loosely coupled machines. Using
virtualization, one can build many virtual clusters dynamically, upon user demand.

Cloud Computing over the Internet


• A cloud is a pool of virtualized computer resources.
• A cloud can host a variety of different workloads, including batch-style backend jobs and
interactive and user-facing applications.
• A cloud allows workloads to be deployed and scaled out quickly through rapid
provisioning of virtual or physical machines.
• The cloud supports redundant, self-recovering, highly scalable programming models that
allow workloads to recover from many unavoidable hardware/software failures.
• Finally, the cloud system should be able to monitor resource use in real time to enable
rebalancing of allocations when needed.
a. Internet Clouds
• Cloud computing applies a virtualized platform with elastic resources on demand by
provisioning hardware, software, and data sets dynamically. The idea is to move desktop
computing to a service-oriented platform using server clusters and huge databases at data
centers.
• Cloud computing leverages its low cost and simplicity to benefit both users and
providers.
• Machine virtualization has enabled such cost-effectiveness. Cloud computing intends to
satisfy many user applications simultaneously.

Figure 3.3 Internet Cloud


b. The Cloud Landscape


• The cloud ecosystem must be designed to be secure, trustworthy, and dependable. Some
computer users think of the cloud as a centralized resource pool, while others consider it
a server cluster that practices distributed computing over all its servers.
• Traditionally, a distributed computing system tends to be owned and operated by an
autonomous administrative domain (e.g., a research laboratory or company) for on-
premises computing needs.
• Cloud computing, as an on-demand computing paradigm, resolves or relieves us from
the burden of owning and operating such systems.
Three Cloud service Model in a cloud landscape
Infrastructure as a Service (IaaS)
 This model puts together infrastructures demanded by users—namely servers, storage,
networks, and the data center fabric.
• The user can deploy and run specific applications on multiple VMs running guest OSes.
• The user does not manage or control the underlying cloud infrastructure, but can specify
when to request and release the needed resources.
Platform as a Service (PaaS)
• This model enables the user to deploy user-built applications onto a virtualized cloud platform.
PaaS includes middleware, databases, development tools, and some runtime support such as Web
2.0 and Java.
• The platform includes both hardware and software integrated with specific programming
interfaces.
• The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET). The user
is freed from managing the cloud infrastructure.
Software as a Service (SaaS)
• This refers to browser-initiated application software delivered to thousands of paying cloud
customers. The SaaS model applies to business processes, industry applications, customer
relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and
collaborative applications. On the customer side, there is no upfront investment in servers or
software licensing. On the provider side, costs are rather low, compared with conventional
hosting of user applications.


Figure 3.4 The Cloud Landscape in an application


Internet clouds offer four deployment modes: private, public, managed, and hybrid.
These modes have different security implications. The different SLAs imply that the
security responsibility is shared among all the cloud providers, the cloud resource consumers,
and the third party cloud-enabled software providers. Advantages of cloud computing have been
advocated by many IT experts, industry leaders, and computer science researchers.

Reasons to adopt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing
paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies


 Cloud computing is using the Internet to access someone else's software running
on someone else's hardware in someone else's data center.
 The user sees only one resource (hardware, OS), but virtually uses multiple OS and
hardware resources.
 Cloud architecture makes effective use of virtualization.
 A model of computation and data storage based on "pay as you go" access to
"unlimited" remote data center capabilities.
 A cloud infrastructure provides a framework to manage scalable, reliable, on-demand
access to applications.
 Cloud services provide the "invisible" backend to many of our mobile applications.
 High level of elasticity in consumption.
 Historical roots in today's Internet apps: search, email, social networks, e-commerce
sites, and file storage (Live Mesh, MobileMe).

3.2 Definition

 The National Institute of Standards and Technology (NIST) defines cloud
computing as "a model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be rapidly provisioned and
released with minimal management effort or service provider interaction."

Cloud Computing Architecture


3.2.1 Architecture consists of 3 tiers
3.2.1.1 Cloud Deployment Model
3.2.1.2 Cloud Service Model
3.2.1.3 Essential Characteristics of Cloud Computing
Essential Characteristics 1
3.2.2 On-demand self-service.


3.2.2.1 A consumer can unilaterally provision computing capabilities such as


server time and network storage as needed automatically, without
requiring human interaction with a service provider.

Figure 3.5 Cloud Computing Architecture


Essential Characteristics 2
3.2.3 Broad network access.
3.2.3.1 Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client
platforms (e.g., mobile phones, laptops, and PDAs) as well as other
traditional or cloud-based software services.

Essential Characteristics 3
3.2.4 Resource pooling.
3.2.4.1 The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and
virtual resources dynamically assigned and reassigned according to
consumer demand.
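Resource pooling can be sketched as a pool of VMs dynamically assigned to tenants and reclaimed when released. The class and VM names are hypothetical, invented for this example:

```python
class ResourcePool:
    """Pooled VMs dynamically assigned and reclaimed per tenant demand."""
    def __init__(self, size):
        self.free = [f"vm-{i}" for i in range(size)]
        self.assigned = {}                 # vm -> tenant

    def acquire(self, tenant):
        if not self.free:
            raise RuntimeError("pool exhausted")
        vm = self.free.pop()
        self.assigned[vm] = tenant         # dynamically assigned on demand
        return vm

    def release(self, vm):
        self.assigned.pop(vm)
        self.free.append(vm)               # reassigned to the next consumer

pool = ResourcePool(4)
a = pool.acquire("tenant-A")
b = pool.acquire("tenant-B")               # same pool serves multiple tenants
pool.release(a)                            # capacity returns for reuse
print(len(pool.free))                      # 3
```

The key multi-tenant property is that neither tenant knows, or needs to know, which physical resource it received; assignment and reassignment are entirely the provider's concern.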


Essential Characteristics 4
3.2.5 Rapid elasticity.
3.2.5.1 Capabilities can be rapidly and elastically provisioned - in some
cases automatically - to quickly scale out; and rapidly released to quickly
scale in.
3.2.5.2 To the consumer, the capabilities available for provisioning often
appear to be unlimited and can be purchased in any quantity at any time.
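One simple way rapid elasticity can be automated is a threshold rule: scale out when average utilization is above a target, scale in when it falls below. This is an illustrative sketch (real autoscalers add cooldowns and smoothing); the function and parameter names are invented here:

```python
def autoscale(current, utilization_pct, target_pct=60, min_n=1):
    """Return an instance count that keeps average utilization near target.

    Integer percentages avoid floating-point surprises; the ceiling
    division rounds up so the fleet scales out before it saturates.
    """
    desired = -(-current * utilization_pct // target_pct)   # ceil division
    return max(min_n, desired)

print(autoscale(4, 90))   # 6 -> scale out under heavy load
print(autoscale(4, 15))   # 1 -> scale in when demand drops
```

To the consumer, the result of running such a rule continuously is exactly the appearance of unlimited capacity: instances appear when load rises and disappear when it drops.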

Essential Characteristics 5
3.2.6 Measured service.
3.2.6.1 Cloud systems automatically control and optimize resource usage by
leveraging a metering capability at some level of abstraction appropriate
to the type of service.
3.2.6.2 Resource usage can be monitored, controlled, and reported -
providing transparency for both the provider and consumer of the service.
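Measured service amounts to metering usage per consumer and billing at a published rate. The class, rates, and consumer name below are hypothetical, chosen only to illustrate pay-per-use:

```python
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "gb_stored": 0.02}   # hypothetical prices

class Meter:
    """Records resource usage per consumer for transparent billing."""
    def __init__(self):
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, consumer, resource, amount):
        self.usage[consumer][resource] += amount   # metering at some abstraction

    def bill(self, consumer):
        return sum(RATES[r] * amt for r, amt in self.usage[consumer].items())

m = Meter()
m.record("alice", "cpu_hours", 100)   # pay only for what is used
m.record("alice", "gb_stored", 50)
print(f"${m.bill('alice'):.2f}")      # $6.00
```

Because the same usage records drive both the provider's billing and the consumer's report, metering provides the transparency described above for both sides of the service.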

Cloud Service Models


3.2.7 Cloud Software as a Service (SaaS)
3.2.8 Cloud Platform as a Service (PaaS)
3.2.9 Cloud Infrastructure as a Service (IaaS)
SaaS
3.2.10 SaaS is a licensed software offering on the cloud, paid per use.
3.2.11 SaaS is a software delivery methodology that provides licensed multi-tenant
access to software and its functions remotely as a Web-based service, usually
billed based on usage.
3.2.11.1 Usually a multi-tenant environment
3.2.11.2 Highly scalable architecture
3.2.12 Customers do not invest in software application programs.
3.2.13 The capability provided to the consumer is to use the provider’s applications
running on a cloud infrastructure.
3.2.14 The applications are accessible from various client devices through a thin client
interface such as a web browser (e.g., web-based email).


3.2.15 The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, data or even individual
application capabilities, with the possible exception of limited user specific
application configuration settings.
SaaS providers
3.2.16 Google’s Gmail, Docs, Talk etc
3.2.17 Microsoft’s Hotmail, Sharepoint
3.2.18 Salesforce
3.2.19 Yahoo, Facebook
Infrastructure as a Service (IaaS)
3.2.20 IaaS is the delivery of technology infrastructure (mostly hardware) as an
on-demand, scalable service.
3.2.20.1 Usually billed based on usage
3.2.20.2 Usually a multi-tenant virtualized environment
3.2.20.3 Can be coupled with Managed Services for OS and application support
3.2.20.4 The user can choose the OS, storage, deployed applications, and networking components

Figure 3.6 Cloud Service Model


3.2.21 The capability provided to the consumer is to provision processing, storage,
networks, and other fundamental computing resources.


3.2.22 Consumer is able to deploy and run arbitrary software, which may include
operating systems and applications.
3.2.23 The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage, deployed applications, and
possibly limited control of select networking components (e.g., host firewalls).
IaaS providers
3.2.24 Amazon Elastic Compute Cloud (EC2)
3.2.24.1 Each instance provides 1-20 processors, up to 16 GB RAM, 1.69 TB storage
3.2.25 RackSpace Hosting
3.2.25.1 Each instance provides a 4-core CPU, up to 8 GB RAM, 480 GB storage
3.2.26 Joyent Cloud
3.2.26.1 Each instance provides 8 CPUs, up to 32 GB RAM, 48 GB storage
3.2.27 GoGrid
3.2.27.1 Each instance provides 1-6 processors, up to 15 GB RAM, 1.69 TB storage

Platform as a Service (PaaS)


3.2.28 PaaS provides all of the facilities required to support the complete life cycle of
building, delivering and deploying web applications and services entirely from
the Internet. Typically applications must be developed with a particular platform
in mind
• Multi-tenant environments
• Highly scalable multi-tier architecture
3.2.29 The capability provided to the consumer is to deploy onto the cloud
infrastructure consumer created or acquired applications created using
programming languages and tools supported by the provider.
3.2.30 The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, or storage, but has control over the
deployed applications and possibly application hosting environment configurations.

PaaS providers
3.2.31 Google App Engine
3.2.31.1 Python, Java, Eclipse


3.2.32 Microsoft Azure


3.2.32.1 .Net, Visual Studio
3.2.33 Salesforce
3.2.33.1 Apex, Web wizard
3.2.34 TIBCO,
3.2.35 VMware,
3.2.36 Zoho

Cloud Computing - Opportunities and Challenges


3.2.37 It enables services to be used without any understanding of their infrastructure.
3.2.38 Cloud computing works using economies of scale.
3.2.39 It potentially lowers the outlay expense for start-up companies, as they would
no longer need to buy their own software or servers.
3.2.40 Costs are set by on-demand pricing.
3.2.41 Vendors and service providers recover costs by establishing an ongoing revenue
stream.
3.2.42 Data and services are stored remotely but accessible from "anywhere".

Cloud Computing – Pros


3.2.43 Lower computer costs
3.2.44 Instant software updates:
3.2.44.1 When the application is web-based, updates happen automatically
3.2.45 Improved document format compatibility
3.2.46 Storage capacity:
3.2.46.1 Cloud computing offers virtually limitless storage
3.2.46.2 Increased data reliability

Cloud Computing – Cons


3.2.47 Need for the Internet:
3.2.47.1 A dead Internet connection means no work, and in
areas where Internet connections are few or inherently unreliable, this
could be a deal-breaker.
3.2.47.2 Requires a constant Internet connection


3.2.48 Can be slow:


3.2.48.1 Even with a fast connection, web-based applications can
sometimes be slower than accessing a similar software program on your
desktop PC.
3.2.49 Disparate protocols:
3.2.49.1 Each cloud system uses different protocols and different APIs;
standards are yet to evolve.

3.3 Evolution of Cloud Computing


 Cloud computing leverages dynamic resources to deliver a large number of services to
end users.
 It is a high-throughput computing (HTC) paradigm.
 It enables users to share access to resources from anywhere at any time.

II Hardware Evolution
 In 1930, binary arithmetic was developed, paving the way for computer processing
technology, terminology, and programming languages.
 In 1939, the electronic computer was developed; computations were performed using
vacuum-tube technology.
 In 1941, Konrad Zuse's Z3 was developed, supporting both floating-point and binary
arithmetic.
There are four generations of computers:
 First Generation Computers
 Second Generation Computers
 Third Generation Computers
 Fourth Generation Computers
a.First Generation Computers
Time Period : 1942 to 1955
Technology : Vacuum Tubes
Size : Very Large System
Processing : Very Slow


Examples:
1.ENIAC (Electronic Numerical Integrator and Computer)
2.EDVAC(Electronic Discrete Variable Automatic Computer)

Advantages:
• Made use of vacuum tubes, which was the most advanced technology at that time
• Computations were performed in milliseconds
Disadvantages:
• Very big in size; weight was about 30 tons
• Very costly
• Required high power consumption
• A large amount of heat was generated

b.Second Generation Computers


Time Period : 1956 to 1965.
Technology : Transistors
Size : Smaller
Processing : Faster
o Examples
Honeywell 400
IBM 7094
Advantages
 Less heat than the first generation.
 Assembly language and punch cards were used for input.
 Lower cost than first-generation computers.
 Computations were performed in microseconds.
 Better portability as compared to the first generation.
Disadvantages:
 A cooling system was required.
 Constant maintenance was required.
 Only used for specific purposes


c.Third Generation Computers


Time Period : 1966 to 1975
Technology : ICs (Integrated Circuits)
Size : Small as compared to 2nd generation computers
Processing : Faster than 2nd generation computers
Examples
• PDP-8 (Programmed Data Processor)
• PDP-11
Advantages
• These computers were cheaper as compared to second-generation computers.
• They were fast and reliable.
• ICs not only reduced the size of the computer but also improved its performance.
• Computations were performed in nanoseconds.
Disadvantages
• IC chips are difficult to maintain.
• Highly sophisticated technology was required for the manufacturing of IC chips.
• Air conditioning was required.

d.Fourth Generation Computers


Time Period : 1975 to Till Date
Technology : Microprocessor
Size : Small as compared to third generation computer
Processing : Faster than third generation computer
Examples
• IBM 4341
• DEC 10

Advantages:
 Fastest in computation, and size is reduced as compared to the previous generation of
computers. Heat generated is small.
 Less maintenance is required.


Disadvantages:
 The Microprocessor design and fabrication are very complex.
 Air Conditioning is required in many cases

III Internet Hardware Evolution


 Internet Protocol is the standard communications protocol used by every computer on the
Internet.
 The conceptual foundation for creation of the Internet was significantly developed by three
individuals.
• Vannevar Bush — Memex (1945)
• Norbert Wiener
• Marshall McLuhan
 Licklider laid the foundation for the creation of the ARPANET (Advanced Research
Projects Agency Network).
 Clark deployed a minicomputer called an Interface Message Processor (IMP) at each site.
 The Network Control Program (NCP) was the first networking protocol used on the ARPANET.

Figure 3.7 IMP Architecture


Internet Hardware Evolution


 Establishing a Common Protocol for the Internet
 Evolution of IPv6
 Finding a Common Method to Communicate Using the Internet Protocol
 Building a Common Interface to the Internet
 The Appearance of Cloud Formations: From One Computer to a Grid of Many
a.Establishing a Common Protocol for the Internet
 NCP essentially provided a transport layer consisting of the ARPANET Host-to-Host
Protocol (AHHP) and the Initial Connection Protocol (ICP).
 Application protocols:
o File Transfer Protocol (FTP), used for file transfers
o Simple Mail Transfer Protocol (SMTP), used for sending email
There were four versions of TCP/IP:
• TCP v1
• TCP v2
• TCP v3 and IP v3
• TCP v4 and IP v4
b.Evolution of IPv6
 IPv4 was never designed to scale to global levels.
 To increase the available address space, it would have had to process larger data packets
(i.e., more bits of data).
 To overcome these problems, the Internet Engineering Task Force (IETF) developed IPv6,
which was released in January 1995.
 IPv6 is sometimes called the Next Generation Internet Protocol (IPng) or TCP/IP v6.

c.Finding a Common Method to Communicate Using the Internet Protocol


 In the 1960s, the word hypertext was coined by Ted Nelson.
 In 1962, Engelbart's first project was Augment, and its purpose was to develop computer
tools to augment human capabilities.
 He developed the mouse, Graphical user interface (GUI), and the first working hypertext
system, named NLS (oN-Line System).


 NLS was designed to cross-reference research papers for sharing among geographically
distributed researchers.
 In 1989, the World Wide Web was developed at CERN in Europe by Tim Berners-Lee and Robert Cailliau
d.Building a Common Interface to the Internet
 Berners-Lee developed the first web browser, featuring an integrated editor that could
create hypertext documents.
 Following this initial success, Berners-Lee enhanced the server and browser by adding
support for the FTP (File Transfer protocol)

 Mosaic was the first widely popular web browser available to the general public. Mosaic
added support for graphics, sound, and video clips.
 In October 1994, Netscape released the first beta version of its browser, Mozilla 0.96b,
over the Internet.
 In 1995, Microsoft developed Internet Explorer, which was both a graphical Web
browser and the name for a set of technologies.
 Mozilla Firefox, released in November 2004, became very popular almost immediately.

e.The Appearance of Cloud Formations From One Computer to a Grid of Many


 Two decades ago, computers were clustered together to form a single larger computer in
order to simulate a supercomputer and greater processing power.
 In the mid-1990s, Ian Foster and Carl Kesselman presented their concept of "The Grid."
They used an analogy to the electricity grid, where users could plug in and use a
(metered) utility service.
 A major problem in the clustering model was data residency. Because of the distributed
nature of a grid, computational nodes could be anywhere in the world.


 The Globus Toolkit is an open source software toolkit used for building grid systems and
applications.

Figure 3.9 Evolution


Evolution of Cloud Services
• 1960s: Mainframes and supercomputers
• 1999: salesforce.com, the first milestone of cloud computing
• 2002: Launch of Amazon Web Services
• 2006: Amazon S3 and EC2 launch
• 2008-2009: Google Application Engine and Microsoft Azure

IV. SERVER VIRTUALIZATION


 Virtualization is a method of running multiple independent virtual operating systems on a
single physical computer.
 This approach maximizes the return on investment for the computer.


 Virtualization technology is a way of reducing the majority of hardware acquisition and
maintenance costs, which can result in significant savings for any company.
 Parallel Processing
 Vector Processing
 Symmetric Multiprocessing Systems
 Massively Parallel Processing Systems
a.Parallel Processing
 Parallel processing is performed by the simultaneous execution of program instructions
that have been allocated across multiple processors.
 Objective: running a program in less time.
 The next advancement in parallel processing was multiprogramming.
 In a multiprogramming system, multiple programs are submitted by users, but each is
allowed to use the processor only for a short time.
 This approach is known as "round-robin scheduling" (RR scheduling).
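The round-robin idea can be sketched as a short simulation; the job names, burst times, and quantum below are illustrative assumptions, not values from the text.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin scheduling.

    jobs: dict of job name -> remaining burst time.
    quantum: fixed time slice each job receives per turn.
    Returns the order in which jobs finish.
    """
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            # Job used its full slice; re-join the back of the queue.
            queue.append((name, remaining - quantum))
        else:
            finished.append(name)
    return finished

print(round_robin({"A": 5, "B": 2, "C": 9}, quantum=3))  # prints ['B', 'A', 'C']
```

Each job repeatedly receives one quantum of processor time until its burst is exhausted, which is exactly the fairness property round-robin scheduling is meant to provide.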
b.Vector Processing
 Vector processing was developed to increase processing performance by operating on entire
vectors or matrices of data at once.
 Matrix operations were added to computers to perform arithmetic operations.
 This was valuable in certain types of applications in which data occurred in the form of
vectors or matrices.
 In applications with less well-formed data, vector processing was less valuable.
c.Symmetric Multiprocessing Systems
 Symmetric multiprocessing systems (SMP) were developed to address the problem of
resource management in master/slave models.
 In SMP systems, each processor is equally capable and responsible for managing the
workflow as it passes through the system.
 The primary goal is to achieve sequential consistency
d.Massively Parallel Processing Systems
 A massively parallel processing (MPP) system is a computer system with many independent
arithmetic units, which run in parallel.
 All the processing elements are interconnected to act as one very large computer.


 Early examples of MPP systems were the Distributed Array Processor, the Goodyear
MPP, the Connection Machine, and the Ultracomputer.
 MPP machines are not easy to program, but for certain applications, such as data mining,
they are the best solution

3.4 ELASTICITY IN CLOUD COMPUTING


3.4.1 Elasticity is defined as the ability of a system to add and remove resources (such
as CPU cores, memory, VM and container instances) to adapt to the load
variation in real time.
3.4.2 Elasticity is a dynamic property for cloud computing.
3.4.3 Elasticity is the degree to which a system is able to adapt to workload
changes by provisioning and deprovisioning resources in an autonomic
manner, such that at each point in time the available resources match the current
demand as closely as possible.

Elasticity = Scalability + Automation + Optimization

3.4.4 Elasticity is built on top of scalability.


3.4.5 It can be considered as an automation of the concept of scalability and aims to
optimize at best and as quickly as possible the resources at a given time.
3.4.6 Another term associated with elasticity is efficiency, which characterizes
how efficiently cloud resources can be utilized as the system scales up or down.
3.4.7 Efficiency is the amount of resources consumed for processing a given amount of
work; the lower this amount, the higher the efficiency of the system.
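As a small illustration of this efficiency notion, the sketch below compares two hypothetical systems by work done per unit of resource consumed; the metric direction (higher is better) and the numbers are assumptions made for the example, not a standard formula from the text.

```python
def efficiency(work_done, resources_consumed):
    # Work completed per unit of resource. The notes define efficiency by
    # resources consumed per unit of work (lower is better); this helper
    # inverts that ratio so that a larger value means a more efficient system.
    return work_done / resources_consumed

# Hypothetical: both systems served 1000 requests, but system A used
# 4 VM-hours while system B used 8 VM-hours for the same work.
system_a = efficiency(1000, 4)   # 250 requests per VM-hour
system_b = efficiency(1000, 8)   # 125 requests per VM-hour
```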


3.4.8 Elasticity also introduces a new important factor, which is the speed.
3.4.9 Rapid provisioning and deprovisioning are key to maintaining
an acceptable performance in the context of cloud computing
3.4.10 Quality of service is subjected to a service level agreement

Classification
Elasticity solutions can be arranged in different classes based on
3.4.11 Scope
3.4.12 Policy
3.4.13 Purpose
3.4.14 Method
a.Scope
 Elasticity can be implemented on any of the cloud layers.
 Most commonly, elasticity is achieved on the IaaS level, where the resources to
be provisioned are virtual machine instances.
 Other infrastructure services can also be scaled.
 On the PaaS level, elasticity consists of scaling containers or databases, for instance.
 Finally, both PaaS and IaaS elasticity can be used to implement elastic applications, be it
for private use or in order to be provided as a SaaS.
 The elasticity actions can be applied either at the infrastructure or
application/platform level.
 The elasticity actions perform the decisions made by the elasticity strategy or
management system to scale the resources.
 Google App Engine and Azure elastic pool are examples of elastic Platform as a Service
(PaaS).
 Elasticity actions can be performed at the infrastructure level, where the elasticity
controller monitors the system and takes decisions.
 The cloud infrastructures are based on virtualization technology, which can be
VMs or containers.
 In embedded elasticity, elastic applications are able to adjust their own resources
according to runtime requirements or due to changes in the execution flow.
 There must be knowledge of the source code of the applications.


 Application Map: The elasticity controller must have a complete map of the
application components and instances.
 Code embedded: The elasticity controller is embedded in the application source code.
 The elasticity actions are performed by the application itself.
 While moving the elasticity controller to the application source code eliminates the use
of monitoring systems, there must be a specialized controller for each application.

b.Policy
 Elastic solutions can be either manual or automatic.
 A manual elastic solution provides its users with tools to monitor their
systems and add or remove resources, but leaves the scaling decision to them.
Automatic mode: All the actions are done automatically, and this can be classified into
reactive and proactive modes.
Elastic solutions can be either reactive or predictive.
Reactive mode: The elasticity actions are triggered based on certain thresholds or rules; the
system reacts to the load (workload or resource utilization) and triggers actions to adapt to
changes accordingly.
 An elastic solution is reactive when it scales a posteriori, based on a monitored change in
the system.
 These are generally implemented by a set of Event-Condition-Action rules.
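A reactive, threshold-based policy of this Event-Condition-Action kind can be sketched as follows; the threshold values, step size, and instance bounds are illustrative assumptions, not values prescribed by any cloud platform.

```python
def reactive_scale(current_instances, cpu_utilization,
                   scale_out_at=0.80, scale_in_at=0.30,
                   min_instances=1, max_instances=10):
    """Event-Condition-Action rule reacting to monitored load.

    Event: a new utilization sample arrives.
    Condition: the sample crosses a threshold.
    Action: add or remove one instance, within the allowed bounds.
    """
    if cpu_utilization > scale_out_at and current_instances < max_instances:
        return current_instances + 1   # scale out
    if cpu_utilization < scale_in_at and current_instances > min_instances:
        return current_instances - 1   # scale in
    return current_instances           # no action

assert reactive_scale(3, 0.95) == 4   # high load -> provision one more instance
assert reactive_scale(3, 0.10) == 2   # low load -> deprovision one instance
assert reactive_scale(3, 0.50) == 3   # within thresholds -> unchanged
```

Because the rule only fires after a monitored change, the system always scales a posteriori, which is exactly the reactive behaviour described above.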
Proactive mode: This approach implements forecasting techniques; it anticipates future
needs and triggers actions based on this anticipation.
 A predictive or proactive elasticity solution uses its knowledge of either recent history or
load patterns inferred from longer periods of time in order to predict the upcoming load
of the system and scale according to it.
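A minimal predictive policy can be sketched with a moving-average forecast over recent history; the window size, per-instance capacity, and load figures are assumptions made for illustration.

```python
import math

def predict_load(history, window=3):
    """Forecast the next load sample as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def proactive_scale(history, capacity_per_instance):
    """Provision enough instances for the predicted load before it arrives."""
    forecast = predict_load(history)
    # Ceiling division: enough instances to cover the forecast demand.
    return max(1, math.ceil(forecast / capacity_per_instance))

# Recent load samples of 40, 55, 70 requests/s give a forecast of 55 req/s;
# with 20 req/s per instance, three instances are provisioned in advance.
instances = proactive_scale([40, 55, 70], capacity_per_instance=20)
```

Real predictive autoscalers use richer models (time series, machine learning), but the structure is the same: forecast first, then act on the forecast rather than on the current load.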
c.Purpose
 An elastic solution can have many purposes.
 The first one to come to mind is naturally performance, in which case the focus should be
put on speed.
 Another purpose for elasticity can also be energy efficiency, where using the
minimum amount of resources is the dominating factor.


 Other solutions intend to reduce cost by multiplexing either resource providers or
elasticity methods.
 Elasticity has different purposes, such as improving performance, increasing resource
capacity, saving energy, reducing cost and ensuring availability.
 Looking at the elasticity objectives, there are different perspectives.
 Cloud IaaS providers try to maximize profit by minimizing the resources used
while offering a good Quality of Service (QoS).
 PaaS providers seek to minimize the cost they pay to the cloud.
 The customers (end users) seek to increase their Quality of Experience (QoE) and
to minimize their payments.
 QoE is the degree of delight or annoyance of the user of an application or service.

d.Method
 Vertical elasticity changes the amount of resources linked to existing instances
on-the-fly.
 This can be done in two manners.
 The first method consists of explicitly redimensioning a virtual machine instance, i.e.,
changing the quota of physical resources allocated to it.
 This is however poorly supported by common operating systems, as they fail to take into
account changes in CPU or memory without rebooting, thus resulting in service
interruption.
 The second vertical scaling method involves VM migration: moving a virtual machine
instance to another physical machine with a different overall load changes its available
resources.


 Horizontal scaling is the process of adding/removing instances, which may be located at
different locations.
 Load balancers are used to distribute the load among the different instances.
 Vertical scaling is the process of modifying resource (CPU, memory, storage or
both) size for an instance at run time.
 It gives more flexibility for cloud systems to cope with varying workloads.

Migration
 Migration can also be considered as a needed action to further allow vertical
scaling when there are not enough resources on the host machine.
 It is also used for other purposes, such as migrating a VM to a less loaded physical
machine just to guarantee its performance.
 Several types of migration are deployed, such as live migration and non-live migration.
 Live migration has two main approaches:
 post-copy
 pre-copy
 Post-copy migration suspends the migrating VM, copies minimal processor state to
the target host, resumes the VM and then begins fetching memory pages from the source.
 In the pre-copy approach, the memory pages are copied while the VM is running on the
source.
 If some pages are changed (called dirty pages) during the memory copy process, they are
recopied; this repeats until the number of recopied pages is no greater than the number of
newly dirtied pages, at which point the source VM is stopped.
 The remaining dirty pages are copied to the destination VM.
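The pre-copy loop can be sketched as below; the dirty-page sets and stop condition are a simplified model of what a hypervisor's dirty-page tracking would provide, not any particular hypervisor's API.

```python
def pre_copy_migrate(memory_rounds):
    """Simplified pre-copy live-migration loop.

    memory_rounds is a list of dirty-page sets: memory_rounds[i] holds the
    pages dirtied while round i was being copied to the target host.
    The loop keeps re-copying dirty pages while the dirty set shrinks;
    once it stops shrinking, the VM is suspended and the remaining dirty
    pages are copied at the destination.
    Returns (pages copied while the VM ran, pages copied after suspension).
    """
    transferred = 0
    remaining = set()
    prev_size = None
    for dirty in memory_rounds:
        remaining = set(dirty)
        if prev_size is not None and len(remaining) >= prev_size:
            break                      # not converging: suspend the VM
        transferred += len(remaining)  # re-copy while the VM still runs
        prev_size = len(remaining)
    return transferred, remaining      # `remaining` is copied after the stop

# Dirty sets shrink 4 -> 2, then grow to 3: the VM is stopped and the last
# three dirty pages are copied at the destination.
live, after_stop = pre_copy_migrate([{1, 2, 3, 4}, {2, 3}, {2, 3, 4}])
```

The trade-off this models is the one described in the text: pre-copy keeps the VM running during most of the transfer, at the cost of re-copying pages that keep getting dirtied.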

Architecture
 The architecture of elasticity management solutions can be either centralized
or decentralized.
 A centralized architecture has only one elasticity controller, i.e., the auto-scaling
system that provisions and deprovisions resources.


 In decentralized solutions, the architecture is composed of many elasticity controllers
or application managers, which are responsible for provisioning resources for
different cloud-hosted platforms.

Provider
 Elastic solutions can be applied to a single or multiple cloud providers.
 A single cloud provider can be either public or private, with one or multiple regions
or datacenters.
 Multiple clouds in this context means more than one cloud provider.
 It includes hybrid clouds that can be private or public, in addition to federated clouds
and cloud bursting.
 Most elasticity solutions support only a single cloud provider.

3.5 On-demand Provisioning.


3.5.1 Resource Provisioning means the selection, deployment, and run-time
management of software (e.g., database server management systems, load
balancers) and hardware resources (e.g., CPU, storage, and network) for ensuring
guaranteed performance for applications.
3.5.2 Resource Provisioning is an important and challenging problem in large-scale
distributed systems such as Cloud computing environments.
3.5.3 There are many resource provisioning techniques, both static and dynamic,
each one having its own advantages and also some challenges.
3.5.4 These resource provisioning techniques used must meet Quality of Service
(QoS) parameters like availability, throughput, response time, security,
reliability etc., and thereby avoiding Service Level Agreement (SLA) violation.
3.5.5 Over provisioning and under provisioning of resources must be avoided.
3.5.6 Another important constraint is power consumption.
3.5.7 The ultimate goal of the cloud user is to minimize cost by renting the resources
and from the cloud service provider’s perspective to maximize profit by
efficiently allocating the resources.


3.5.8 In order to achieve the goal, the cloud user has to request cloud service provider
to make a provision for the resources either statically or dynamically.
3.5.9 So that the cloud service provider will know how many instances of the
resources and what resources are required for a particular application.
3.5.10 By provisioning the resources, the QoS parameters like availability, throughput,
security, response time, reliability, performance etc must be achieved without
violating SLA
There are two types
 Static Provisioning
 Dynamic Provisioning
Static Provisioning
3.5.11 For applications that have predictable and generally unchanging
demands/workloads, it is possible to use "static provisioning" effectively.
3.5.12 With advance provisioning, the customer contracts with the provider for services.
3.5.13 The provider prepares the appropriate resources in advance of start of service.
3.5.14 The customer is charged a flat fee or is billed on a monthly basis.
Dynamic Provisioning
3.5.15 In cases where demand by applications may change or vary, “dynamic
provisioning" techniques have been suggested whereby VMs may be
migrated on-the-fly to new compute nodes within the cloud.
3.5.16 The provider allocates more resources as they are needed and removes them
when they are not.
3.5.17 The customer is billed on a pay-per-use basis.
3.5.18 When dynamic provisioning is used to create a hybrid cloud, it is sometimes
referred to as cloud bursting.
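The cloud-bursting idea can be sketched as splitting demand between fixed private capacity and on-demand public-cloud capacity; the capacity and demand figures are hypothetical.

```python
def provision(demand, private_capacity):
    """Hybrid 'cloud bursting': serve load on the private cloud first, and
    burst only the excess demand to a public cloud, billed pay-per-use."""
    private_used = min(demand, private_capacity)
    burst = max(0, demand - private_capacity)
    return private_used, burst

assert provision(80, 100) == (80, 0)     # demand fits on the private cloud
assert provision(140, 100) == (100, 40)  # 40 units burst to the public cloud
```

This is why dynamic provisioning suits variable workloads: the organization sizes its private cloud for the steady base load and pays the public provider only during peaks.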
Parameters for Resource Provisioning
3.5.19 Response time
3.5.20 Minimize Cost
3.5.21 Revenue Maximization
3.5.22 Fault tolerant
3.5.23 Reduced SLA Violation
3.5.24 Reduced Power Consumption


Response time: The resource provisioning algorithm designed must take minimal time to
respond when executing the task.
Minimize Cost: From the Cloud user point of view cost should be minimized.
Revenue Maximization: This is to be achieved from the Cloud Service Provider’s view.
Fault tolerant: The algorithm should continue to provide service in spite of failure of nodes.
Reduced SLA Violation: The algorithm designed must be able to reduce SLA violation.
Reduced Power Consumption: VM placement & migration techniques must lower power
consumption
Dynamic Provisioning Types
1. Local On-demand Resource Provisioning
2. Remote On-demand Resource Provisioning
Local On-demand Resource Provisioning
1. The Engine for the Virtual Infrastructure
The OpenNebula Virtual Infrastructure Engine
• OpenNebula creates a distributed virtualization layer
• Extend the benefits of VM Monitors from one to multiple resources
• Decouple the VM (service) from the physical location
• Transform a distributed physical infrastructure into a flexible and elastic virtual
infrastructure, which adapts to the changing demands of the VM (service) workloads


Separation of Resource Provisioning from Job Management


• New virtualization layer between the service and the infrastructure layers
• Seamless integration with the existing middleware stacks.
• Completely transparent to the computing service and so end users

Cluster Partitioning
• Dynamic partition of the infrastructure
• Isolate workloads (several computing clusters)
• Dedicated HA partitions

Benefits for Existing Grid Infrastructures


• The virtualization of the local infrastructure supports a virtualized alternative to
contribute resources to a Grid infrastructure
• Simpler deployment and operation of new middleware distributions
• Lower operational costs
• Easy provision of resources to more than one infrastructure
• Easy support for VO-specific worker nodes
Performance partitioning between local and grid clusters


Other Tools for VM Management


• VMware DRS, Platform Orchestrator, IBM Director, Novell ZENworks, Enomalism,
Xenoserver
• Advantages:
• Open-source (Apache license v2.0)
• Open and flexible architecture to integrate new virtualization technologies
• Support for the definition of any scheduling policy (consolidation, workload
balance, affinity, SLA)
• LRM-like CLI and API for the integration of third-party tools

Remote on-Demand Resource Provisioning


Access to Cloud Systems
• Provision of virtualized resources as a service
VM Management Interfaces
The processes involved are
• Submission
• Control
• Monitoring


Infrastructure Cloud Computing Solutions


• Commercial Cloud: Amazon EC2
• Scientific Cloud: Nimbus (University of Chicago)
• Open-source Technologies
• Globus VWS (Globus interfaces)
• Eucalyptus (Interfaces compatible with Amazon EC2)
• OpenNebula (Engine for the Virtual Infrastructure)
On-demand Access to Cloud Resources
• Supplement local resources with cloud resources to satisfy peak or fluctuating demands

3.6 CLOUD REFERENCE ARCHITECTURE


Definitions
3.6.1.1 A model of computation and data storage based on “pay as
you go” access to “unlimited” remote data center capabilities.
3.6.1.2 A cloud infrastructure provides a framework to manage scalable,
reliable, on-demand access to applications.
3.6.1.3 Cloud services provide the “invisible” backend to many of our mobile
applications. High level of elasticity in consumption.
NIST Cloud Definition:
The National Institute of Standards and Technology (NIST) defines cloud computing as a
"pay-per-use model for enabling available, convenient and on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction."

Architecture
3.6.1.4 Architecture consists of 3 tiers
3.6.1.4.1 Cloud Deployment Model
3.6.1.4.2 Cloud Service Model
3.6.1.4.3 Essential Characteristics of Cloud Computing .


Essential Characteristics 1
3.6.1.5 On-demand self-service.
3.6.1.5.1 A consumer can unilaterally provision computing
capabilities such as server time and network storage as
needed automatically, without requiring human interaction
with a service provider.
Essential Characteristics 2
3.6.1.6 Broad network access.
3.6.1.6.1 Capabilities are available over the network and accessed
through standard mechanisms that promote use by
heterogeneous thin or thick client platforms (e.g., mobile
phones, laptops, and PDAs) as well as other traditional or
cloud-based software services.
Essential Characteristics 3
3.6.1.7 Resource pooling.
3.6.1.7.1 The provider’s computing resources are pooled to serve
multiple consumers using a multi-tenant model, with
different physical and virtual resources dynamically
assigned and reassigned according to consumer demand.
Essential Characteristics 4
3.6.1.8 Rapid elasticity.
3.6.1.8.1 Capabilities can be rapidly and elastically provisioned - in
some cases automatically - to quickly scale out; and
rapidly released to quickly scale in.
3.6.1.8.2 To the consumer, the capabilities available for
provisioning often appear to be unlimited and can be
purchased in any quantity at any time.
Essential Characteristics 5
3.6.1.9 Measured service.
3.6.1.9.1 Cloud systems automatically control and optimize
resource usage by leveraging a metering capability at
some level of abstraction appropriate to the type of
service.
Resource usage can be monitored, controlled, and reported - providing transparency for both
the provider and consumer of the service.

3.6.1 NIST (National Institute of Standards and Technology) Background


The goal is to accelerate the federal government’s adoption of secure and effective cloud
computing to reduce costs and improve services.


Cloud Computing Reference Architecture:

Actors in Cloud Computing

Interactions between the Actors in Cloud Computing


Example Usage Scenario 1:


 A cloud consumer may request service from a cloud broker instead of contacting
a cloud provider directly.
 The cloud broker may create a new service by combining multiple services or by
enhancing an existing service.
Usage Scenario- Cloud Brokers
 In this example, the actual cloud providers are invisible to the cloud consumer.
 The cloud consumer interacts directly with the cloud broker.

Example Usage Scenario 2


 Cloud carriers provide the connectivity and transport of cloud services from cloud
providers to cloud consumers.
 A cloud provider participates in and arranges for two unique service level agreements
(SLAs), one with a cloud carrier (e.g. SLA2) and one with a cloud consumer (e.g.
SLA1).
Usage Scenario for Cloud Carriers
 A cloud provider arranges service level agreements (SLAs) with a cloud carrier.
 Request dedicated and encrypted connections to ensure the cloud services.

Example Usage Scenario 3


• For a cloud service, a cloud auditor conducts independent assessments of the
operation and security of the cloud service implementation.


• The audit may involve interactions with both the Cloud Consumer and the Cloud
Provider.

Cloud Consumer
 The cloud consumer is the principal stakeholder for the cloud computing service.
 A cloud consumer represents a person or organization that maintains a
business relationship with, and uses the service from, a cloud provider.
The cloud consumer may be billed for the service provisioned, and needs to arrange
payments accordingly.
Example Services Available to a Cloud Consumer

 The consumers of SaaS can be organizations that provide their members with
access to software applications, end users or software application administrators.
 SaaS consumers can be billed based on the number of end users, the time of use,
the network bandwidth consumed, the amount of data stored or the duration of stored
data.

 Cloud consumers of PaaS can employ the tools and execution resources provided
by cloud providers to develop, test, deploy and manage applications.
 PaaS consumers can be application developers or application testers who run and test
applications in cloud-based environments.
 PaaS consumers can be billed according to processing, database storage and network
resources consumed.
 Consumers of IaaS have access to virtual computers, network-accessible storage
and network infrastructure components.
 The consumers of IaaS can be system developers, system administrators and
IT managers.
 IaaS consumers are billed according to the amount or duration of the
resources consumed, such as CPU hours used by virtual computers and the volume and
duration of data stored.
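The pay-per-use IaaS billing described above can be sketched as a simple computation; the rates are hypothetical, not any provider's real prices.

```python
def iaas_bill(cpu_hours, gb_months_stored,
              rate_per_cpu_hour=0.05, rate_per_gb_month=0.02):
    # Bill = CPU hours consumed by virtual computers, plus the volume and
    # duration of data stored (expressed here in GB-months).
    # Both rates are illustrative assumptions.
    return cpu_hours * rate_per_cpu_hour + gb_months_stored * rate_per_gb_month

# One VM running for a 720-hour month, plus 50 GB stored for that month.
bill = iaas_bill(720, 50)
```

The same metered-billing structure applies to SaaS and PaaS consumers; only the billed quantities (users, bandwidth, database storage) change.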
Cloud Provider
� A cloud provider is a person, an organization;
� It is the entity responsible for making a service available to interested parties.
� A Cloud Provider acquires and manages the computing infrastructure required for
providing the services.
� Runs the cloud software that provides the services.
Makes arrangement to deliver the cloud services to the Cloud Consumers through network
access.

Cloud Provider - Major Activities


Cloud Auditor
 A cloud auditor is a party that can perform an independent examination of cloud
service controls.
 Audits are performed to verify conformance to standards through review of objective
evidence.
 A cloud auditor can evaluate the services provided by a cloud provider in terms
of security controls, privacy impact, performance, etc.
Cloud Broker
 Integration of cloud services can be too complex for cloud consumers to manage.
 A cloud consumer may request cloud services from a cloud broker, instead of
contacting a cloud provider directly.
 A cloud broker is an entity that manages the use, performance and delivery of
cloud services, and negotiates relationships between cloud providers and cloud
consumers.
Services of cloud broker
Service Intermediation:
 A cloud broker enhances a given service by improving some specific capability
and providing value-added services to cloud consumers.
Service Aggregation:
 A cloud broker combines and integrates multiple services into one or more
new services.
 The broker provides data integration and ensures the secure data movement
between the cloud consumer and multiple cloud providers.
Service Arbitrage:
 Service arbitrage is similar to service aggregation except that the services
being aggregated are not fixed.
 Service arbitrage means a broker has the flexibility to choose services from
multiple agencies.
Eg: The cloud broker can use a credit-scoring service to measure and select an agency with
the best score.
Cloud Carrier
 A cloud carrier acts as an intermediary that provides connectivity and transport of
cloud services between cloud consumers and cloud providers.


 Cloud carriers provide access to consumers through networks.
 The distribution of cloud services is normally provided by network and
telecommunication carriers or a transport agent.
 A transport agent refers to a business organization that provides physical transport of
storage media such as high-capacity hard drives and other access devices.
Scope of Control between Provider and Consumer
The Cloud Provider and Cloud Consumer share the control of resources in a cloud system

 The application layer includes software applications targeted at end users or
programs.
 The applications are used by SaaS consumers, or installed/managed/maintained by PaaS
consumers, IaaS consumers and SaaS providers.
 The middleware layer provides software building blocks (e.g., libraries, database, and
Java virtual machine) for developing application software in the cloud.
 The middleware is used by PaaS consumers, installed/managed/maintained by IaaS
consumers or PaaS providers, and hidden from SaaS consumers.
 The OS layer includes the operating system and drivers, and is hidden from
SaaS consumers and PaaS consumers.
 An IaaS cloud allows one or multiple guest OSs to run virtualized on a single physical
host.
The IaaS consumers should assume full responsibility for the guest OS, while the IaaS
provider controls the host OS.


3.7 Cloud Deployment Model


 Public Cloud
 Private Cloud
 Hybrid Cloud
 Community Cloud
Public cloud

 A public cloud is one in which the cloud infrastructure and computing resources are
made available to the general public over a public network.
 A public cloud is meant to serve a multitude (huge number) of users, not a single
customer.
 A fundamental characteristic of public clouds is multitenancy.
 Multitenancy allows multiple users to work in a software environment at the same
time, each with their own resources.
 Built over the Internet (i.e., the service provider offers resources, applications and
storage to customers over the internet) and can be accessed by any user.
 Owned by service providers and accessible through a subscription.
 Best option for small enterprises, which are able to start their businesses without
large up-front (initial) investment.
 By renting the services, customers are able to dynamically upsize or downsize their
IT according to the demands of their business.
 Services are offered on a price-per-use basis.
 Promotes standardization and preserves capital investment.
 Public clouds have geographically dispersed datacenters to share the load of users
and better serve them according to their locations.
 The provider is in control of the infrastructure.


Examples:
o Amazon EC2 is a public cloud that provides Infrastructure as a Service
o Google AppEngine is a public cloud that provides Platform as a Service
o SalesForce.com is a public cloud that provides software as a service.
Advantage
 Offers unlimited scalability - on-demand resources are available to meet your
business needs.
 Lower costs - no need to purchase hardware or software, and you pay only for the
services you use.
 No maintenance - the service provider provides the maintenance.
 Offers reliability: a vast number of resources are available, so failure of a system will
not interrupt service.
 Services like SaaS, PaaS and IaaS are easily available on a public cloud platform, as it
can be accessed from anywhere through any Internet-enabled device.
 Location independent - the services can be accessed from any location.
Disadvantages
• No control over privacy or security.
• Cannot be used for sensitive applications (government and military agencies
will not consider the public cloud).
• Lacks complete flexibility (since it is dependent on the provider).
• No stringent (strict) protocols regarding data management.

Private Cloud
• Cloud services are used by a single organization and are not exposed to the public.
• Services are always maintained on a private network, and the hardware and
software are dedicated to a single organization.
• A private cloud is physically located either
  o on the organization's premises [on-site private cloud], or
  o outsourced to a third party [outsourced private cloud].
• It may be managed either by
  o the cloud consumer organization, or
  o a third party.
• Private clouds are used by
  o government agencies
  o financial institutions
  o mid-size to large organizations.
• On-site private cloud
• Outsourced private cloud

• Supposed to deliver a more efficient and convenient cloud service.
• Offers higher efficiency, resiliency (the ability to recover quickly), security, and
privacy.
• Customer information protection: in-house security is easier to maintain and
rely on.
• Follows the organization's own standard procedures and operations (whereas in a
public cloud, the service provider's standard procedures and operations are
followed).
Advantages
• Offers greater security and privacy.
• The organization has control over its resources.
• Highly reliable.
• Saves money by virtualizing the resources.
Disadvantages
• Expensive when compared to the public cloud.
• Requires IT expertise to maintain the resources.

Hybrid Cloud
• Built with both public and private clouds.
• It is a heterogeneous cloud, resulting from the combination of a private and a
public cloud.
• The private cloud is used when
  o sensitive applications must be kept inside the organization's network
  o business-critical operations, such as financial reporting, are performed.
• The public cloud is used when
  o other services can be kept outside the organization's network
  o a high volume of data must be handled
  o lower-security needs are involved, such as web-based email (Gmail, Yahoo
Mail, etc.).
• Resources or services are temporarily leased for the time required and then
released. This practice is also known as cloud bursting.
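Cloud bursting can be sketched as a placement rule: serve load from the private cloud until its capacity is exhausted, then temporarily lease public-cloud capacity for the overflow. The capacity figure below is illustrative:

```python
# Sketch of cloud bursting: prefer private capacity, overflow to the public cloud.
PRIVATE_CAPACITY = 100  # illustrative: units of load the private cloud can absorb

def place_load(total_load: int) -> dict:
    """Split the incoming load between private and temporarily leased public resources."""
    private = min(total_load, PRIVATE_CAPACITY)
    public = max(0, total_load - PRIVATE_CAPACITY)  # the "burst" portion, leased on demand
    return {"private": private, "public": public, "bursting": public > 0}

print(place_load(80))   # fits entirely in the private cloud: no bursting
print(place_load(150))  # 100 units served privately, 50 leased from the public cloud
```

When the load drops back under the private capacity, the leased public resources are released and billing for them stops.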

Fig. 3.10: Hybrid Cloud

Advantages
• It is scalable.
• Offers better security.
• Flexible: additional resources are availed in the public cloud when needed.
• Cost-effective: we have to pay for extra resources only when needed.
• Control: the organization can maintain a private infrastructure for sensitive
applications.

Disadvantages
• Infrastructure dependency.
• Possibility of a security breach (violation) through the public cloud.

Difference between Public, Private, and Hybrid Clouds

Tenancy
o Public: Multi-tenancy; the data of multiple organizations is stored in a shared
environment.
o Private: Single tenancy; the data of a single organization is stored in the cloud.
o Hybrid: Data stored in the public cloud is multi-tenant; data stored in the
private cloud is single-tenant.

Exposed to the public
o Public: Yes; anyone can use the public cloud services.
o Private: No; only the organization itself can use the private cloud services.
o Hybrid: Services on the private cloud can be accessed only by the organization's
users; services on the public cloud can be accessed by anyone.

Data center location
o Public: Anywhere on the Internet.
o Private: Inside the organization's network.
o Hybrid: The private cloud is inside the organization's network; the public cloud
is anywhere on the Internet.

Cloud service management
o Public: The cloud service provider manages the services.
o Private: The organization's own administrators manage the services.
o Hybrid: The organization manages the private cloud; the Cloud Service
Provider (CSP) manages the public cloud.

Hardware components
o Public: The CSP provides all the hardware.
o Private: The organization provides the hardware.
o Hybrid: Private cloud: the organization provides the resources. Public cloud:
the CSP provides them.

Expenses
o Public: Less cost.
o Private: Expensive when compared to the public cloud.
o Hybrid: Cost is required for setting up the private cloud.

3.8 Cloud Service Models

• Software as a Service (SaaS)
• Platform as a Service (PaaS)
• Infrastructure as a Service (IaaS)

These models are offered based on various SLAs (Service Level Agreements)
between providers and users. An SLA in cloud computing covers:
o service availability
o performance
o data protection
o security
Software as a Service (SaaS) (a complete software offering on the cloud)
• SaaS is a licensed software offering on the cloud, billed on a pay-per-use basis.
• SaaS is a software delivery methodology that provides licensed multi-tenant access
to software and its functions remotely as a web-based service.
  ◦ Usually billed based on usage
  ◦ Usually a multi-tenant environment
  ◦ Highly scalable architecture
• Customers do not invest in software application programs.
• The capability provided to the consumer is to use the provider's applications
running on a cloud infrastructure.
• The applications are accessible from various client devices through a thin-client
interface such as a web browser (e.g., web-based email).
• The consumer does not manage or control the underlying cloud infrastructure,
including network, servers, operating systems, storage, data, or even individual
application capabilities, with the possible exception of limited user-specific
application configuration settings.
• On the customer side, there is no upfront investment in servers or software
licensing.
• It is a "one-to-many" software delivery model, whereby an application is shared
across multiple users.
• Characteristics of an Application Service Provider (ASP):
  o The product sold to the customer is application access.
  o The application is centrally managed by the service provider.
  o The service is delivered to many customers (one-to-many).
  o Services are delivered on contract.
E.g., Gmail and Google Docs, Microsoft SharePoint, and CRM (Customer
Relationship Management) software.
• SaaS providers:
  ◦ Google: Gmail, Docs, Talk, etc.
  ◦ Microsoft: Hotmail, SharePoint
  ◦ Salesforce
  ◦ Yahoo
  ◦ Facebook
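The "one-to-many" multi-tenant model described above can be sketched as a single shared application that keys every record by tenant, so customers share the software but never see each other's data. This is a toy illustration, not any vendor's actual design:

```python
# Toy multi-tenant SaaS store: one shared application, per-tenant data isolation.
class MultiTenantStore:
    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id: str, key: str):
        # A tenant can only ever see its own partition of the data.
        return self._data.get(tenant_id, {}).get(key)

store = MultiTenantStore()  # a single instance serves every customer
store.put("acme", "contact", "alice@acme.example")
store.put("globex", "contact", "bob@globex.example")
print(store.get("acme", "contact"))   # alice@acme.example
print(store.get("acme", "secret"))    # None: tenants cannot read other tenants' keys
```

The provider upgrades and manages this one application centrally, which is what makes the one-to-many delivery model economical.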
Infrastructure as a Service (IaaS) (hardware offerings on the cloud)
IaaS is the delivery of technology infrastructure (mostly hardware) as an
on-demand, scalable service.
◦ Usually billed based on usage
◦ Usually a multi-tenant virtualized environment
◦ Can be coupled with managed services for OS and application support
◦ The user can choose the OS, storage, deployed applications, and networking
components
◦ The capability provided to the consumer is to provision processing, storage,
networks, and other fundamental computing resources.
◦ The consumer is able to deploy and run arbitrary software, which may include
operating systems and applications.
◦ The consumer does not manage or control the underlying cloud infrastructure,
but has control over the operating systems, storage, and deployed applications.

• IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload
partitioning, application isolation, sandboxing, and hardware tuning.
• Sandboxing: a program is set aside from other programs in a separate
environment, so that if errors or security issues occur, those issues will not spread
to other areas of the computer.
• Hardware tuning: improving the performance of the system.
• The user works on multiple VMs running guest OSes.
• The service is performed by rented cloud infrastructure.
• The user does not manage or control the cloud infrastructure, but can specify
when to request and release the needed resources.

IaaS providers
• Amazon Elastic Compute Cloud (EC2)
  ◦ Each instance provides 1-20 processors, up to 16 GB RAM, and 1.69 TB storage
• Rackspace Hosting
  ◦ Each instance provides a 4-core CPU, up to 8 GB RAM, and 480 GB storage
• Joyent Cloud
  ◦ Each instance provides 8 CPUs, up to 32 GB RAM, and 48 GB storage
• GoGrid
  ◦ Each instance provides 1-6 processors, up to 15 GB RAM, and 1.69 TB storage
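The IaaS pattern of requesting and releasing resources on demand can be sketched with a toy provisioner. This is a simulation; real providers expose the same idea through web APIs (for example, Amazon EC2's):

```python
import itertools

# Toy IaaS provisioner: the consumer asks for VMs on demand and releases them
# when finished, paying only while each instance exists (a simulation, not a real API).
class IaaSProvider:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.instances = {}             # instance_id -> flavor
        self._ids = itertools.count(1)  # monotonically increasing instance ids

    def provision(self, flavor: str) -> int:
        if len(self.instances) >= self.capacity:
            raise RuntimeError("provider out of capacity")
        instance_id = next(self._ids)
        self.instances[instance_id] = flavor
        return instance_id

    def release(self, instance_id: int) -> None:
        del self.instances[instance_id]  # billing for this instance stops here

provider = IaaSProvider(capacity=2)
vm = provider.provision("2vcpu-8gb")
print(len(provider.instances))  # 1 running instance
provider.release(vm)
print(len(provider.instances))  # 0: resources returned to the shared pool
```

The consumer never touches the underlying hardware; it only decides when instances exist, which mirrors the "request and release" control described above.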

Platform as a Service (PaaS) (a development platform)

• PaaS provides all of the facilities required to support the complete life cycle of
building, delivering, and deploying web applications and services entirely from the
Internet.
• Typically, applications must be developed with a particular platform in mind.
  • Multi-tenant environments
  • Highly scalable multi-tier architecture
• The capability provided to the consumer is to deploy onto the cloud infrastructure
consumer-created or acquired applications, created using programming languages
and tools supported by the provider.
• The consumer does not manage or control the underlying cloud infrastructure,
including network, servers, operating systems, or storage.
• The consumer has control over the deployed applications and possibly over the
application-hosting environment configurations.

Customers are provided with an execution platform for developing applications.
The execution platform includes the operating system, the programming-language
execution environment, the database, the web server, hardware, etc.
This acts as middleware, on top of which applications are built.
The user is freed from managing the cloud infrastructure.
Application management is the core functionality of the middleware.
The middleware provides the runtime (execution) environment, and developers
design their applications in that execution environment.
Developers need not be concerned about hardware (physical or virtual), operating
systems, and other resources.
The PaaS core middleware manages the resources and the scaling of applications
on demand.
PaaS offers either:
o an execution environment together with hardware resources (infrastructure), or
o software installed on the user's premises.
PaaS: the service provider provides the execution environment and the hardware
resources (infrastructure).

Characteristics of PaaS
Runtime framework: executes end-user code according to the policies set by the
user and the provider.
Abstraction: PaaS helps to deploy (install) and manage applications on the cloud.
Automation: automates the process of deploying applications to the infrastructure;
additional resources are provided when needed.
Cloud services: helps developers simplify the creation and delivery of cloud
applications.
PaaS providers
• Google App Engine
  ◦ Python, Java, Eclipse
• Microsoft Azure
  ◦ .NET, Visual Studio
• Salesforce
  ◦ Apex, Web wizard
• TIBCO
• VMware
• Zoho
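The division of labour in PaaS, where the developer supplies only application code while the platform's middleware handles deployment and scaling, can be sketched as follows. The platform class and its scaling rule are invented for illustration:

```python
# Toy PaaS: developers register a handler; the platform owns deployment and scaling.
class ToyPlatform:
    def __init__(self):
        self.apps = {}      # app name -> handler function supplied by the developer
        self.replicas = {}  # app name -> number of running instances

    def deploy(self, name, handler):
        # The developer never touches servers, OS, or storage: code only.
        self.apps[name] = handler
        self.replicas[name] = 1

    def handle(self, name, request):
        # The middleware scales the app on demand (a crude rule for illustration).
        if request.get("load", 0) > 100 * self.replicas[name]:
            self.replicas[name] += 1
        return self.apps[name](request)

platform = ToyPlatform()
platform.deploy("greeter", lambda req: f"hello, {req['user']}")
print(platform.handle("greeter", {"user": "divya", "load": 250}))  # hello, divya
print(platform.replicas["greeter"])  # the platform added a replica under load: 2
```

The developer's entire contribution is the one-line handler; everything else in the request path belongs to the platform, which is the PaaS abstraction in miniature.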
Cloud Computing – Services
• Software as a Service (SaaS)
• Platform as a Service (PaaS)
• Infrastructure as a Service (IaaS)

Category: PaaS-I
o Description: An execution platform is provided along with hardware resources
(infrastructure).
o Product type: Middleware + Infrastructure
o Vendors and products: Force.com, LongJump

Category: PaaS-II
o Description: An execution platform is provided with additional components for
development.
o Product type: Middleware + Infrastructure, or Middleware alone
o Vendors and products: Google App Engine

Category: PaaS-III
o Description: A runtime environment for developing any kind of application.
o Product type: Middleware + Infrastructure, or Middleware alone
o Vendors and products: Microsoft Azure

3.9 Architectural Design Challenges

Challenge 1: Service Availability and Data Lock-in Problem
Service Availability
Service availability in the cloud might be affected by:
• a single point of failure
• distributed denial of service (DDoS) attacks
Single point of failure
o Depending on a single service provider might result in failure.
o Even if a single service provider has multiple data centres located in different
geographic regions, they may share common software infrastructure and
accounting systems, so one failure can affect all of them.
Solution:
o Using multiple cloud providers gives more protection from failures and provides
High Availability (HA).
o Multiple cloud providers also guard against the loss of all data.
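The multiple-provider solution can be sketched as a failover loop: try each provider in turn and serve from the first one that responds. The provider names and the fetch function below are placeholders:

```python
# Sketch of avoiding a single point of failure by spreading across providers.
def fetch_with_failover(providers, fetch):
    """Try providers in order; return the first successful response."""
    errors = {}
    for name in providers:
        try:
            return fetch(name)
        except ConnectionError as exc:  # this provider is down: try the next one
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {list(errors)}")

# Simulated back-ends: the first provider is down, the second answers.
def fake_fetch(name):
    if name == "provider-a":
        raise ConnectionError("provider-a unreachable")
    return f"data from {name}"

print(fetch_with_failover(["provider-a", "provider-b"], fake_fetch))
# data from provider-b
```

The service stays available as long as any one provider in the list is reachable, which is the High Availability argument for not depending on a single vendor.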
Distributed denial of service (DDoS) attacks
o Cyber criminals attack target websites and online services, making the services
unavailable to users.
o A DDoS attack tries to overwhelm the service by generating more traffic than the
server or network can accommodate.
Solution:
o Some SaaS providers offer the ability to defend against DDoS attacks by using
quick scale-ups.
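The quick scale-up defence can be sketched as a rule that adds serving capacity whenever incoming traffic exceeds what the current servers can absorb. The per-server capacity figure is illustrative:

```python
# Sketch of the "quick scale-up" DDoS response: add capacity to absorb a traffic spike.
REQUESTS_PER_SERVER = 1000  # illustrative per-server capacity (requests/second)

def servers_needed(request_rate: int) -> int:
    """Scale out to cover the observed request rate; never scale below one server."""
    required = -(-request_rate // REQUESTS_PER_SERVER)  # ceiling division
    return max(1, required)

print(servers_needed(800))     # normal load: 1 server suffices
print(servers_needed(250000))  # a spike (possibly a DDoS): scale out to 250 servers
```

Scale-ups keep the service available during a spike, but the extra capacity costs money, so real deployments combine them with traffic filtering rather than relying on them alone.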
Customers cannot easily extract their data and programs from one site to run on
another.
Solution:
o Standardize APIs among service providers so that customers can deploy (install)
services and data across multiple cloud providers.

Data Lock-in
Data lock-in is a situation in which a customer using the services of one provider
cannot move to another service provider, because the technologies used by the first
provider are incompatible with those of the others. This makes the customer
dependent on a single vendor and unable to use the services of another vendor.
Solution:
o Standardize technologies among service providers so that customers can easily
move from one service provider to another.

Challenge 2: Data Privacy and Security Concerns

Cloud services are prone to attacks because they are accessed through the Internet.
Security is provided by:
o storing encrypted data in the cloud
o firewalls and filters
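"Storing encrypted data in the cloud" means encrypting on the client before upload, so the provider only ever holds ciphertext. The sketch below uses a toy XOR keystream purely to show the round trip; a real system would use a vetted cipher such as AES:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy keystream derived from the key: NOT a real cipher, illustration only."""
    for counter in count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying the same operation twice restores the plaintext.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

secret = b"customer record #42"
ciphertext = xor_crypt(secret, b"tenant-key")   # this is what the cloud would store
restored = xor_crypt(ciphertext, b"tenant-key")  # decrypt after downloading
print(restored)  # b'customer record #42'
```

Because the key never leaves the customer, a breach at the provider exposes only unreadable ciphertext.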
Cloud environment attacks include:
o guest hopping
o hijacking
o VM rootkits
Guest hopping: virtual machine hyper jumping (VM jumping) is an attack method
that exploits (makes use of) a hypervisor's weakness that allows one virtual
machine (VM) to be accessed from another.
Hijacking: a type of network security attack in which the attacker takes control of
a communication.
VM rootkit: a collection of malicious (harmful) computer software designed to
enable access to a computer that is not otherwise allowed.
A man-in-the-middle (MITM) attack is a form of eavesdropping (spying) in which
the communication between two users is monitored and modified by an
unauthorized party.
o A man-in-the-middle attack may take place during VM migration [virtual
machine (VM) migration: a VM is moved from one physical host to another].
Passive attacks steal sensitive data or passwords.
Active attacks may manipulate (control) kernel data structures, which can cause
major damage to cloud servers.

Challenge 3: Unpredictable Performance and Bottlenecks

Multiple VMs can share CPUs and main memory in cloud computing, but I/O
sharing is problematic.
Internet applications continue to become more data-intensive (handling huge
amounts of data).
Handling huge amounts of data (data-intensive workloads) is a bottleneck in the
cloud environment.
Weak servers that do not handle data transfers properly must be removed from
the cloud environment.

Challenge 4: Distributed Storage and Widespread Software Bugs

The database is always growing in cloud applications, so there is a need to create a
storage system that meets this growth. This demands the design of efficient
distributed SANs (Storage Area Networks of storage devices).
Data centres must provide:
o scalability
o data durability
o high availability (HA)
o data consistency
A bug is an error in software. Debugging must be done at the scale of the data
centre.

Challenge 5: Cloud Scalability, Interoperability and Standardization

Scalability
Cloud resources are scalable. Cost increases when storage and network bandwidth
are scaled up (increased).
Interoperability
The Open Virtualization Format (OVF) describes an open, secure, portable,
efficient, and extensible format for the packaging and distribution of VMs. OVF
defines a transport mechanism for VMs that can be applied to different
virtualization platforms.
Standardization
Cloud standardization should give a virtual machine the ability to run on any
virtualization platform.

Challenge 6: Software Licensing and Reputation Sharing

Cloud providers can use both pay-for-use and bulk-use licensing schemes to widen
their business coverage.
Cloud providers must create reputation-guarding services similar to "trusted
e-mail" services.
Cloud providers want legal liability to remain with the customer, and vice versa.