
IT5603- Distributed and Cloud Computing

UNIT II INTRODUCTION TO CLOUD COMPUTING

Introduction to Cloud Computing – Evolution of Cloud Computing – Cloud Characteristics –

Elasticity in Cloud – On-demand Provisioning – NIST Cloud Computing Reference

Architecture– Architectural Design Challenges – Deployment Models: Public, Private and

Hybrid Clouds – Service Models: IaaS – PaaS – SaaS – Benefits of Cloud Computing.

Objectives of Cloud Computing

 To improve data backup
 Full availability
 On-demand service
 Pay per use
 To reduce the overhead of multiple requests from clients
 To increase data security

CLOUD COMPUTING (INTERNET COMPUTING)

The term cloud refers to a network or the internet (next-generation internet computing)
 It is computing based on the internet (delivery of computing services over the internet)
 The main advantage of internet computing is that it delivers resources (documents, images, audio, video or software) to the cloud customer at the right time
 Cloud can provide services over public or private networks (WAN, LAN or VPN)

 CLOUD: a Common, Location-independent, Online Utility that is available on Demand


CLOUD COMPUTING

Cloud computing means storing and accessing data and programs over the internet, instead of on the user's hard drive.
It provides IT infrastructure solutions at low cost.
CLOUD STORAGE SYSTEM (ONLINE STORAGE SYSTEM)

An online storage system enables the end user (client) to store large amounts of data on a remote server over the internet.
Once a client posts data to the server, the user can access that data at any time, from anywhere, through the internet (portability).
It offers online data storage, platforms, infrastructure and applications.
Cloud computing is a combination of software- and hardware-based computing resources delivered as a network service.
Examples of cloud storage:
Web e-mail providers such as Gmail, Yahoo, etc.
Online drives such as Box, Google Drive, Dropbox, SkyDrive, etc.
POPULARITY OF CLOUD SERVICE

Reduces the complexity of networks
No need to buy software licenses
Scalability, reliability and efficiency
Data/information in the cloud is not easily lost
Customization

IMPORTANCE OF CLOUD COMPUTING

To store huge amounts of data (petabytes), which is normally not possible with a single PC
To increase the availability of data
To provide secure access to applications and data held away from the user's networked device
To provide reliable access to customers' personal data while servicing them
APPLICATION EXAMPLES

Social networking sites: Facebook, Twitter, LinkedIn, etc.
E-mail sites: Gmail, Yahoo Mail, Yandex Mail, etc.
Search engines: Google, Bing, Yahoo, etc.
Personal cloud storage (public cloud): Dropbox, Google Drive, Box, SkyDrive, etc.
Business cloud storage (public cloud): Microsoft, Amazon, Google, etc.
BENEFITS OF CLOUD COMPUTING

Increased storage capacity (virtually unlimited data capacity)
Increased data reliability
Backup and recovery
Improved performance
Pay only for the service used
Access from anywhere in the world
Resources are shared
Performance and scalability
Fewer maintenance issues
Instant software updates
Better security
Fast implementation
Device independence
DISADVANTAGES OF CLOUD COMPUTING

It needs a constant internet connection
It can be slow
It does not work well with low-speed connections
Stored data can be lost
Stored data might not be secure

1.1 INTRODUCTION
EVOLUTION OF DISTRIBUTED COMPUTING
Grids enable access to shared computing power and storage capacity from your desktop.
Clouds enable access to leased computing power and storage capacity from your desktop.
• Grids are an open source technology. Resource users and providers alike can understand
and contribute to the management of their grid
• Clouds are a proprietary technology. Only the resource provider knows exactly how
their cloud manages data, job queues, security requirements and so on.
• The concept of grids was proposed in 1995. The Open Science Grid (OSG) started in 1995, and the EDG (European Data Grid) project began in 2001.
• In the late 1990s, Oracle and EMC offered early private cloud solutions. However, the term cloud computing didn't gain prominence until 2007.
SCALABLE COMPUTING OVER THE INTERNET
Instead of using a centralized computer to solve computational problems, a parallel and
distributed computing system uses multiple computers to solve large-scale problems over the
Internet. Thus, distributed computing becomes data-intensive and network-centric.
The Age of Internet Computing
o The speed of high-performance computing (HPC) applications is no longer the optimal measure of system performance.
o The emergence of computing clouds instead demands high-throughput computing (HTC) systems built with parallel and distributed computing technologies.
o We have to upgrade data centers using fast servers, storage systems, and high-bandwidth networks.
The Platform Evolution
o From 1950 to 1970, a handful of mainframes, including the IBM 360 and CDC 6400
o From 1960 to 1980, lower-cost minicomputers such as the DEC PDP 11 and VAX
Series
o From 1970 to 1990, we saw widespread use of personal computers built with VLSI
microprocessors.
o From 1980 to 2000, massive numbers of portable computers and pervasive devices
appeared in both wired and wireless applications

o Since 1990, the use of both HPC and HTC systems hidden in clusters, grids, or
Internet clouds has proliferated

On the HPC side, supercomputers (massively parallel processors, or MPPs) are gradually being replaced by clusters of cooperative computers out of a desire to share computing resources. A cluster is often a collection of homogeneous compute nodes that are physically connected in close range to one another.
On the HTC side, peer-to-peer (P2P) networks are formed for distributed file sharing and content delivery applications. A P2P system is built over many client machines (a concept we will discuss further in Chapter 5). Peer machines are globally distributed in nature. P2P, cloud computing, and web service platforms are more focused on HTC applications than on HPC applications. Clustering and P2P technologies lead to the development of computational grids or data grids.
For many years, HPC systems have emphasized raw speed performance. The speed of HPC systems increased from Gflops in the early 1990s to Pflops by 2010.
The development of market-oriented high-end computing systems is undergoing a
strategic change from an HPC paradigm to an HTC paradigm. This HTC paradigm pays
more attention to high-flux computing. The main application for high-flux computing
is in Internet searches and web services by millions or more users simultaneously. The
performance goal thus shifts to measure high throughput or the number of tasks
completed per unit of time. HTC technology needs to not only improve in terms of
batch processing speed, but also address the acute problems of cost, energy savings,
security, and reliability at many data and enterprise computing centers.
Advances in virtualization make it possible to see the growth of Internet clouds as a
new computing paradigm. The maturity of radio-frequency identification (RFID),
Global Positioning System (GPS), and sensor technologies has triggered the
development of the Internet of Things (IoT). These new paradigms are only briefly
introduced here.
The high-technology community has argued for many years about the precise
definitions of centralized computing, parallel computing, distributed computing, and
cloud computing. In general, distributed computing is the opposite of centralized
computing. The field of parallel computing overlaps with distributed computing to a
great extent, and cloud computing overlaps with distributed, centralized, and parallel
computing.

Terms

Centralized computing
This is a computing paradigm by which all computer resources are centralized in
one physical system. All resources (processors, memory, and storage) are fully shared and
tightly coupled within one integrated OS. Many data centers and supercomputers are
centralized systems, but they are used in parallel, distributed, and cloud computing
applications.

• Parallel computing
In parallel computing, all processors are either tightly coupled with centralized
shared memory or loosely coupled with distributed memory. Inter processor
communication is accomplished through shared memory or via message passing.
A computer system capable of parallel computing is commonly known as a parallel
computer. Programs running in a parallel computer are called parallel programs. The
process of writing parallel programs is often referred to as parallel programming.
• Distributed computing This is a field of computer science/engineering that studies
distributed systems. A distributed system consists of multiple autonomous computers, each
having its own private memory, communicating through a computer network. Information
exchange in a distributed system is accomplished through message passing. A computer
program that runs in a distributed system is known as a distributed program. The process
of writing distributed programs is referred to as distributed programming.
• Cloud computing An Internet cloud of resources can be either a centralized or a
distributed computing system. The cloud applies parallel or distributed computing, or both.
Clouds can be built with physical or virtualized resources over large data centers that are
centralized or distributed. Some authors consider cloud computing to be a form of utility
computing or service computing. As an alternative to the preceding terms, some in the high-tech community prefer the term concurrent computing or concurrent programming. These terms typically refer to the union of parallel computing and distributed computing, although biased practitioners may interpret them differently.
• Ubiquitous computing refers to computing with pervasive devices at any place and time
using wired or wireless communication. The Internet of Things (IoT) is a networked
connection of everyday objects including computers, sensors, humans, etc. The IoT is
supported by Internet clouds to achieve ubiquitous computing with any object at any place
and time. Finally, the term Internet computing is even broader and covers all computing
paradigms over the Internet. This book covers all the aforementioned computing
paradigms, placing more emphasis on distributed and cloud computing and their working
systems, including the clusters, grids, P2P, and cloud systems.
Internet of Things
• The traditional Internet connects machines to machines or web pages to web pages. The concept of the IoT was introduced in 1999 at MIT.

• The IoT refers to the networked interconnection of everyday objects, tools, devices, or
computers. One can view the IoT as a wireless network of sensors that interconnect all
things in our daily life.
• It allows objects to be sensed and controlled remotely across existing network
infrastructure

SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING


• Distributed and cloud computing systems are built over a large number of autonomous
computer nodes.
• These node machines are interconnected by SANs, LANs, or WANs in a hierarchical
manner. With today’s networking technology, a few LAN switches can easily connect
hundreds of machines as a working cluster.
• A WAN can connect many local clusters to form a very large cluster of clusters.
Clusters of Cooperative Computers
A computing cluster consists of interconnected stand-alone computers which work
cooperatively as a single integrated computing resource.
• In the past, clustered computer systems have demonstrated impressive results in handling
heavy workloads with large data sets.
Cluster Architecture

Figure 1.2 Clusters of Servers

Figure 1.2 shows the architecture of a typical server cluster built around a low-latency, high-bandwidth interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g., Ethernet).
• To build a larger cluster with more nodes, the interconnection network can be built with
multiple levels of Gigabit Ethernet, or InfiniBand switches.
• Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable
clusters with an increasing number of nodes. The cluster is connected to the Internet via a
virtual private network (VPN) gateway.
• The gateway IP address locates the cluster. The system image of a computer is decided by
the way the OS manages the shared cluster resources.
Most clusters have loosely coupled node computers. All resources of a server node are
managed by their own OS. Thus, most clusters have multiple system images as a result of having
many autonomous nodes under different OS control.
1.3.1.2 Single-System Image (SSI)
• Ideal cluster should merge multiple system images into a single-system image (SSI).
• Cluster designers desire a cluster operating system or some middleware to support SSI at
various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes.
An SSI is an illusion created by software or hardware that presents a collection of resources as one
integrated, powerful resource. SSI makes the cluster appear like a single machine to the user. A
cluster with multiple system images is nothing but a collection of independent computers.
1.3.1.3 Hardware, Software, and Middleware Support
• Clusters exploring massive parallelism are commonly known as MPPs. Almost all HPC
clusters in the Top 500 list are also MPPs.
• The building blocks are computer nodes (PCs, workstations, servers, or SMP), special
communication software such as PVM, and a network interface card in each computer
node.
Most clusters run under the Linux OS. The computer nodes are interconnected by a high-
bandwidth network (such as Gigabit Ethernet, Myrinet, InfiniBand, etc.). Special cluster
middleware supports are needed to create SSI or high availability (HA). Both sequential and
parallel applications can run on the cluster, and special parallel environments are needed to
facilitate use of the cluster resources. For example, distributed memory has multiple images. Users
may want all distributed memory to be shared by all servers by forming distributed shared

memory (DSM). Many SSI features are expensive or difficult to achieve at various cluster
operational levels. Instead of achieving SSI, many clusters are loosely coupled machines. Using
virtualization, one can build many virtual clusters dynamically, upon user demand.

Cloud Computing over the Internet


• A cloud is a pool of virtualized computer resources.
• A cloud can host a variety of different workloads, including batch-style backend jobs and
interactive and user-facing applications.
• A cloud allows workloads to be deployed and scaled out quickly through rapid
provisioning of virtual or physical machines.
• The cloud supports redundant, self-recovering, highly scalable programming models that
allow workloads to recover from many unavoidable hardware/software failures.
• Finally, the cloud system should be able to monitor resource use in real time to enable
rebalancing of allocations when needed.
a. Internet Clouds
• Cloud computing applies a virtualized platform with elastic resources on demand by provisioning hardware, software, and data sets dynamically. The idea is to move desktop computing to a service-oriented platform using server clusters and huge databases at data centers.
• Cloud computing leverages its low cost and simplicity to benefit both users and providers.
• Machine virtualization has enabled such cost-effectiveness. Cloud computing intends to
satisfy many user applications simultaneously.

Figure 1.3 Internet Cloud

b. The Cloud Landscape
• The cloud ecosystem must be designed to be secure, trustworthy, and dependable. Some computer users think of the cloud as a centralized resource pool, while others consider the cloud to be a server cluster which practices distributed computing over all the servers.
• Traditionally, a distributed computing system tends to be owned and operated by an autonomous administrative domain (e.g., a research laboratory or company) for on-premises computing needs.
• Cloud computing as an on-demand computing paradigm resolves or relieves us from these
problems.
Three cloud service models in the cloud landscape:
Infrastructure as a Service (IaaS)
 This model puts together the infrastructure demanded by users, namely servers, storage, networks, and the data center fabric.
• The user can deploy and run multiple VMs running guest OSes with specific applications.
• The user does not manage or control the underlying cloud infrastructure, but can specify when to request and release the needed resources.
Platform as a Service (PaaS)
• This model enables the user to deploy user-built applications onto a virtualized cloud platform.
PaaS includes middleware, databases, development tools, and some runtime support such as Web
2.0 and Java.
• The platform includes both hardware and software integrated with specific programming
interfaces.
• The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET). The user
is freed from managing the cloud infrastructure.
Software as a Service (SaaS)
• This refers to browser-initiated application software delivered over the Internet to thousands of paying cloud customers. The SaaS model applies to business processes, industry applications, customer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and collaborative applications. On the customer side, there is no upfront investment in servers or software licensing. On the provider side, costs are rather low compared with conventional hosting of user applications.


Figure 1.4 The Cloud Landscape in an application


Internet clouds offer four deployment modes: private, public, managed, and hybrid. These modes have different security implications. The different SLAs imply that security responsibility is shared among all the cloud providers, the cloud resource consumers, and the third-party cloud-enabled software providers. The advantages of cloud computing have been advocated by many IT experts, industry leaders, and computer science researchers.

Reasons to adopt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing
paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies

Cloud computing is using the internet to access someone else's software running on someone else's hardware in someone else's data center.
The user sees only one resource (hardware, OS) but virtually uses multiple OS and hardware resources.
Cloud architecture makes effective use of virtualization.
A model of computation and data storage based on "pay as you go" access to "unlimited" remote data center capabilities.
A cloud infrastructure provides a framework to manage scalable, reliable, on-demand access to applications.
Cloud services provide the "invisible" backend to many of our mobile applications.
High level of elasticity in consumption.
Historical roots in today's Internet apps: search, email, social networks, e-commerce sites, and file storage (Live Mesh, MobileMe).

1.2 Definition

The National Institute of Standards and Technology (NIST) defines cloud computing as a "pay-per-use model for enabling available, convenient and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

Cloud Computing Architecture


Architecture consists of 3 tiers
◦ Cloud Deployment Model
◦ Cloud Service Model
◦ Essential Characteristics of Cloud Computing
Essential Characteristics 1
On-demand self-service.

◦ A consumer can unilaterally provision computing capabilities such as server time
and network storage as needed automatically, without requiring human interaction
with a service provider.

Figure 1.5 Cloud Computing Architecture


Essential Characteristics 2
Broad network access.
◦ Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, laptops, and PDAs) as well as other traditional or cloud-based software services.

Essential Characteristics 3
Resource pooling.
◦ The provider’s computing resources are pooled to serve multiple consumers using
a multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand.

Essential Characteristics 4
Rapid elasticity.
◦ Capabilities can be rapidly and elastically provisioned - in some cases
automatically - to quickly scale out; and rapidly released to quickly scale in.
◦ To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be purchased in any quantity at any time.

Essential Characteristics 5
Measured service.
◦ Cloud systems automatically control and optimize resource usage by leveraging a
metering capability at some level of abstraction appropriate to the type of service.
◦ Resource usage can be monitored, controlled, and reported - providing
transparency for both the provider and consumer of the service.
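A measured service meters what each consumer uses and bills for it. The small sketch below, with purely illustrative (hypothetical) rates and a made-up function name, shows the idea of pay-per-use billing derived from metered usage:

    def monthly_bill(vm_hours, gb_stored, gb_transferred,
                     rate_vm=0.05, rate_storage=0.02, rate_net=0.09):
        """Charge for one consumer, computed from metered usage."""
        return (vm_hours * rate_vm            # compute time consumed
                + gb_stored * rate_storage    # storage occupied
                + gb_transferred * rate_net)  # network traffic

    # 720 VM-hours, 100 GB stored, 50 GB transferred -> 36 + 2 + 4.5 = 42.5
    print(monthly_bill(720, 100, 50))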

Cloud Service Models


Cloud Software as a Service (SaaS)
Cloud Platform as a Service (PaaS)
Cloud Infrastructure as a Service (IaaS)
SaaS
SaaS is a licensed software offering on the cloud, paid per use.
SaaS is a software delivery methodology that provides licensed multi-tenant access to software and its functions remotely as a web-based service.
◦ Usually billed based on usage
◦ Usually a multi-tenant environment
◦ Highly scalable architecture
Customers do not invest in software application programs.
The capability provided to the consumer is to use the provider’s applications running on a
cloud infrastructure.
The applications are accessible from various client devices through a thin client interface
such as a web browser (e.g., web-based email).

The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, data or even individual application
capabilities, with the possible exception of limited user specific application configuration
settings.
SaaS providers
Google's Gmail, Docs, Talk, etc.
Microsoft's Hotmail, SharePoint
Salesforce
Yahoo, Facebook
Infrastructure as a Service (IaaS)
IaaS is the delivery of technology infrastructure (mostly hardware) as an on-demand, scalable service.
◦ Usually billed based on usage
◦ Usually a multi-tenant virtualized environment
◦ Can be coupled with managed services for OS and application support
◦ Users can choose their OS, storage, deployed applications, and networking components

Figure 1.6 Cloud Service Model


The capability provided to the consumer is to provision processing, storage, networks,
and other fundamental computing resources.

Consumer is able to deploy and run arbitrary software, which may include operating
systems and applications.
The consumer does not manage or control the underlying cloud infrastructure but has
control over operating systems, storage, deployed applications, and possibly limited control
of select networking components (e.g., host firewalls).
IaaS providers
Amazon Elastic Compute Cloud (EC2)
◦ Each instance provides 1-20 processors, up to 16 GB RAM, and 1.69 TB storage
RackSpace Hosting
◦ Each instance provides a 4-core CPU, up to 8 GB RAM, and 480 GB storage
Joyent Cloud
◦ Each instance provides 8 CPUs, up to 32 GB RAM, and 48 GB storage
GoGrid
◦ Each instance provides 1-6 processors, up to 15 GB RAM, and 1.69 TB storage
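For illustration, the short sketch below uses the AWS SDK for Python (boto3) to provision a single EC2 instance on demand and release it again. It assumes AWS credentials are already configured; the AMI ID and instance type are placeholders, not part of the notes above.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request (provision) one instance when it is needed ...
    resp = ec2.run_instances(ImageId="ami-0123456789abcdef0",   # placeholder AMI
                             InstanceType="t3.micro",
                             MinCount=1, MaxCount=1)
    instance_id = resp["Instances"][0]["InstanceId"]

    # ... and release it when it is no longer needed (pay per use).
    ec2.terminate_instances(InstanceIds=[instance_id])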

Platform as a Service (PaaS)


PaaS provides all of the facilities required to support the complete life cycle of building,
delivering and deploying web applications and services entirely from the Internet.
Typically, applications must be developed with a particular platform in mind.
• Multi-tenant environments
• Highly scalable multi-tier architecture
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer
created or acquired applications created using programming languages and tools supported
by the provider.
The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage, but has control over the deployed
applications and possibly application hosting environment configurations.

PaaS providers
Google App Engine
◦ Python, Java, Eclipse

Microsoft Azure
◦ .Net, Visual Studio
Salesforce
◦ Apex, Web wizard
TIBCO,
VMware,
Zoho
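As a minimal sketch of what a PaaS consumer actually writes, the snippet below is a small web application of the kind deployed to platforms such as Google App Engine. Flask is assumed as a declared dependency; the provider supplies the runtime, servers and scaling, so no infrastructure code appears.

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        # Application logic only: the platform handles servers, OS and scaling.
        return "Hello from a PaaS-hosted application"

    # A small deployment descriptor (e.g., App Engine's app.yaml) selects the
    # runtime; the consumer never configures servers or operating systems.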

Cloud Computing - Opportunities and Challenges


It enables services to be used without any understanding of their infrastructure.
Cloud computing works using economies of scale.
It potentially lowers the outlay expense for startup companies, as they would no longer need to buy their own software or servers.
Costs are based on on-demand pricing.
Vendors and service providers recoup costs by establishing an ongoing revenue stream.
Data and services are stored remotely but are accessible from "anywhere".

Cloud Computing – Pros


Lower computer costs
Instant software updates:
◦ When the application is web-based, updates happen automatically
Improved document format compatibility
Unlimited storage capacity:
◦ Cloud computing offers virtually limitless storage
Increased data reliability

Cloud Computing – Cons


Need for the Internet:
◦ A dead Internet connection means no work, and in areas where Internet connections are few or inherently unreliable, this could be a deal-breaker.
◦ Requires a constant Internet connection

Can be slow:
◦ Even with a fast connection, web-based applications can sometimes be slower
than accessing a similar software program on your desktop PC.
Disparate protocols:
◦ Each cloud system uses different protocols and different APIs; standards are yet to evolve.

1.3 Evolution of Cloud Computing


 Cloud computing leverages dynamic resources to deliver a large number of services to end users.
 It is a high-throughput computing (HTC) paradigm.
 It enables users to share access to resources from anywhere at any time.

II Hardware Evolution
 In 1930, binary arithmetic was developed, paving the way for computer processing technology, terminology, and programming languages.
 In 1939, the electronic computer was developed; computations were performed using vacuum-tube technology.
 In 1941, Konrad Zuse's Z3 was developed; it supported both floating-point and binary arithmetic.
There are four generations
 First Generation Computers
 Second Generation Computers
 Third Generation Computers
 Fourth Generation Computers
a.First Generation Computers
Time Period : 1942 to 1955
Technology : Vacuum Tubes
Size : Very Large System
Processing : Very Slow

Examples:
1. ENIAC (Electronic Numerical Integrator and Computer)
2. EDVAC (Electronic Discrete Variable Automatic Computer)

Advantages:
• Made use of vacuum tubes, which were the most advanced technology at the time
• Computations were performed in milliseconds
Disadvantages:
• Very big in size; weighed about 30 tons
• Very costly
• Required high power consumption
• Generated a large amount of heat

b.Second Generation Computers


Time Period : 1956 to 1965.
Technology : Transistors
Size : Smaller
Processing : Faster
o Examples
Honeywell 400
IBM 7094
Advantages
 Less heat than the first generation.
 Assembly language and punch cards were used for input.
 Lower cost than first-generation computers.
 Computations were performed in microseconds.
 Better portability compared to the first generation.
Disadvantages:
 A cooling system was required.
 Constant maintenance was required.
 Only used for specific purposes.

c.Third Generation Computers

Time Period : 1966 to 1975


Technology : ICs (Integrated Circuits)

Size : Small as compared to 2nd generation computers


Processing : Faster than 2nd generation computers
Examples
• PDP-8 (Programmed Data Processor)
• PDP-11
Advantages
• These computers were cheaper compared to second-generation computers.
• They were fast and reliable.
• ICs not only reduced the size of the computer but also improved its performance.
• Computations were performed in nanoseconds.
Disadvantages
• IC chips are difficult to maintain.
• Highly sophisticated technology is required for the manufacturing of IC chips.
• Air conditioning is required.

d.Fourth Generation Computers


Time Period : 1975 to Till Date
Technology : Microprocessor

Size : Small as compared to third generation computer


Processing : Faster than third generation computer
Examples
• IBM 4341
• DEC 10

Advantages:
 Fastest in computation; size was reduced compared to the previous generation of computers; the heat generated is small.
 Less maintenance is required.
Disadvantages:
 Microprocessor design and fabrication are very complex.
 Air conditioning is required in many cases.

III Internet Hardware Evolution


 Internet Protocol is the standard communications protocol used by every computer on the
Internet.
 The conceptual foundation for the creation of the Internet was significantly developed by three individuals:
• Vannevar Bush - MEMEX (1930)
• Norbert Wiener
• Marshall McLuhan
 Licklider laid the foundation for the creation of the ARPANET (Advanced Research Projects Agency Network).
 Clark deployed a minicomputer called an Interface Message Processor (IMP) at each site.
 The Network Control Program (NCP) was the first networking protocol used on the ARPANET.

Figure 1.7 IMP Architecture

Internet Hardware Evolution
 Establishing a Common Protocol for the Internet
 Evolution of IPv6
 Finding a Common Method to Communicate Using the Internet Protocol
 Building a Common Interface to the Internet
 The Appearance of Cloud Formations From One Computer to a Grid of Many
a.Establishing a Common Protocol for the Internet
 NCP essentially provided a transport layer consisting of the ARPANET Host-to-Host Protocol (AHHP) and the Initial Connection Protocol (ICP).
 Application protocols
o File Transfer Protocol (FTP), used for file transfers,
o Simple Mail Transfer Protocol (SMTP), used for sending email
Four versions of TCP/IP:
• TCP v1
• TCP v2
• TCP v3 and IP v3
• TCP v4 and IP v4
b. Evolution of IPv6
 IPv4 was never designed to scale to global levels.
 To increase the available address space, it had to process large data packets (i.e., more bits of data).
 To overcome these problems, the Internet Engineering Task Force (IETF) developed IPv6, which was released in January 1995.
 IPv6 is sometimes called the Next Generation Internet Protocol (IPNG) or TCP/IP v6.

c.Finding a Common Method to Communicate Using the Internet Protocol


 In the 1960s, the word hypertext was coined by Ted Nelson.
 In 1962, Engelbart's first project was Augment, and its purpose was to develop computer
tools to augment human capabilities.
 He developed the mouse, Graphical user interface (GUI), and the first working hypertext
system, named NLS (oN-Line System).

 NLS was designed to cross-reference research papers for sharing among geographically
distributed researchers.
 In the 1980s, the Web was developed in Europe by Tim Berners-Lee and Robert Cailliau.
d. Building a Common Interface to the Internet
 Berners-Lee developed the first web browser, featuring an integrated editor that could create hypertext documents.
 Following this initial success, Berners-Lee enhanced the server and browser by adding support for FTP (File Transfer Protocol).

Figure 1.8 First Web Browser


 Mosaic was the first widely popular web browser available to the general public. Mosaic supported graphics, sound, and video clips.
 In October 1994, Netscape released the first beta version of its browser, Mozilla 0.96b, over the Internet.
 In 1995, Microsoft Internet Explorer was developed; the name refers to both a graphical web browser and a set of technologies.
 Mozilla Firefox, released in November 2004, became very popular almost immediately.

e.The Appearance of Cloud Formations From One Computer to a Grid of Many


 Two decades ago, computers were clustered together to form a single larger computer in order to simulate a supercomputer and harness greater processing power.
 In the early 1990s, Ian Foster and Carl Kesselman presented their concept of "The Grid." They used an analogy to the electricity grid, where users could plug in and use a (metered) utility service.
 A major problem in the clustering model was data residency: because of the distributed nature of a grid, computational nodes could be anywhere in the world.

 The Globus Toolkit is an open source software toolkit used for building grid systems and
applications

Figure 1.9 Evolution


Evolution of Cloud Services
1960 - Mainframes and supercomputers
1999 - salesforce.com: the first milestone of cloud computing
2002 - Launch of Amazon Web Services
2006 - Amazon launches S3 and EC2
2008-2009 - Google App Engine and Microsoft Azure

IV. SERVER VIRTUALIZATION


 Virtualization is a method of running multiple independent virtual operating systems on a
single physical computer.
 This approach maximizes the return on investment for the computer.

 Virtualization technology is a way of reducing most hardware acquisition and maintenance costs, which can result in significant savings for any company.
Key processing advances on the path to virtualization include:
 Parallel Processing
 Vector Processing
 Symmetric Multiprocessing Systems
 Massively Parallel Processing Systems
a. Parallel Processing
 Parallel processing is performed by the simultaneous execution of program instructions that have been allocated across multiple processors.
 Objective: running a program in less time.
 The next advancement in parallel processing was multiprogramming.
 In a multiprogramming system, multiple programs are submitted by users, but each is allowed to use the processor only for a short time.
 This approach is known as "round-robin scheduling" (RR scheduling); a small sketch follows below.
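A minimal, illustrative sketch of round-robin scheduling (the burst times and quantum are made-up numbers):

    from collections import deque

    def round_robin(burst_times, quantum=2):
        """Simulate RR scheduling; returns (program id, finish time) pairs."""
        queue = deque(enumerate(burst_times))   # ready queue of (pid, remaining)
        order, clock = [], 0
        while queue:
            pid, remaining = queue.popleft()
            run = min(quantum, remaining)       # each program runs one quantum
            clock += run
            if remaining - run > 0:
                queue.append((pid, remaining - run))   # not finished: requeue
            else:
                order.append((pid, clock))             # finished at time `clock`
        return order

    print(round_robin([5, 3, 8]))   # [(1, 9), (0, 12), (2, 16)]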
b.Vector Processing
 Vector processing was developed to increase processing performance by operating in a
multitasking manner.
 Matrix operations were added to computers to perform arithmetic operations.
 This was valuable in certain types of applications in which data occurred in the form of
vectors or matrices.
 In applications with less well-formed data, vector processing was less valuable.
c.Symmetric Multiprocessing Systems
 Symmetric multiprocessing systems (SMP) was developed to address the problem of
resource management in master/slave models.
 In SMP systems, each processor is equally capable and responsible for managing the
workflow as it passes through the system.
 The primary goal is to achieve sequential consistency
d. Massively Parallel Processing Systems
 A massively parallel processing (MPP) system is a computer system with many independent arithmetic units which run in parallel.
 All the processing elements are interconnected to act as one very large computer.

 Early examples of MPP systems were the Distributed Array Processor, the Goodyear MPP, the Connection Machine, and the Ultracomputer.
 MPP machines are not easy to program, but for certain applications, such as data mining, they are the best solution.

1.4 ELASTICITY IN CLOUD COMPUTING


Elasticity is defined as the ability of a system to add and remove resources (such as CPU
cores, memory, VM and container instances) to adapt to the load variation in real time.
Elasticity is a dynamic property for cloud computing.
Elasticity is the degree to which a system is able to adapt to workload changes by
provisioning and deprovisioning resources in an autonomic manner, such that at each point
in time the available resources match the current demand as closely as possible.

Elasticity = Scalability + Automation + Optimization

Elasticity is built on top of scalability.
It can be considered an automation of the concept of scalability, aiming to optimize resource use as well and as quickly as possible at any given time.
Another term associated with elasticity is efficiency, which characterizes how efficiently cloud resources are utilized as the system scales up or down.
Efficiency is the amount of resources consumed for processing a given amount of work; the lower this amount, the higher the efficiency of the system.

Elasticity also introduces an important new factor: speed.
Rapid provisioning and deprovisioning are key to maintaining acceptable performance in the context of cloud computing.
Quality of service is subject to a service level agreement (SLA).

Classification
Elasticity solutions can be arranged in different classes based on
Scope
Policy
Purpose
Method
a.Scope
Elasticity can be implemented on any of the cloud layers.
Most commonly, elasticity is achieved on the IaaS level, where the resources to be
provisioned are virtual machine instances.
Other infrastructure services can also be scaled
On the PaaS level, elasticity consists in scaling containers or databases for instance.
Finally, both PaaS and IaaS elasticity can be used to implement elastic applications, be it
for private use or in order to be provided as a SaaS
The elasticity actions can be applied either at the infrastructure or application/platform
level.
The elasticity actions perform the decisions made by the elasticity strategy or
management system to scale the resources.
Google App Engine and Azure elastic pool are examples of elastic Platform as a Service
(PaaS).
Elasticity actions can be performed at the infrastructure level where the elasticity
controller monitors the system and takes decisions.
The cloud infrastructures are based on the virtualization technology, which can be VMs
or containers.
In embedded elasticity, elastic applications are able to adjust their own resources according to runtime requirements or changes in the execution flow.
This requires knowledge of the applications' source code.

Application Map: The elasticity controller must have a complete map of the application
components and instances.
Code embedded: The elasticity controller is embedded in the application source code, and the elasticity actions are performed by the application itself.
While moving the elasticity controller into the application source code eliminates the need for external monitoring systems, a specialized controller must be written for each application.

b.Policy
Elastic solutions can be either manual or automatic.
A manual elastic solution would provide their users with tools to monitor their systems
and add or remove resources but leaves the scaling decision to them.
Automatic mode: All the actions are done automatically, and this could be classified into
reactive and proactive modes.
Elastic solutions can be either reactive or predictive
Reactive mode: The elasticity actions are triggered based on certain thresholds or rules; the system reacts to the load (workload or resource utilization) and triggers actions to adapt to changes accordingly.
An elastic solution is reactive when it scales a posteriori, based on a monitored change in
the system.
These are generally implemented by a set of Event-Condition-Action rules.
Proactive mode: This approach implements forecasting techniques, anticipates the future needs
and triggers actions based on this anticipation.

A predictive or proactive elasticity solution uses its knowledge of either recent history or
load patterns inferred from longer periods of time in order to predict the upcoming load of
the system and scale according to it.
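A minimal sketch of a reactive rule of the Event-Condition-Action kind described above (the thresholds, instance bounds and utilisation samples are illustrative; a proactive controller would feed a forecast instead of the measured value into the same decision):

    def decide(cpu_util, instances, min_n=1, max_n=10):
        """One reactive scaling decision based on monitored CPU utilisation."""
        if cpu_util > 0.80 and instances < max_n:   # event + condition
            return instances + 1                    # action: scale out
        if cpu_util < 0.30 and instances > min_n:
            return instances - 1                    # action: scale in
        return instances                            # within bounds: no change

    n = 2
    for sample in [0.85, 0.90, 0.75, 0.20, 0.15]:   # monitored load over time
        n = decide(sample, n)
    print(n)   # instance count evolves 2 -> 3 -> 4 -> 4 -> 3 -> 2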
c.Purpose
An elastic solution can have many purposes.
The first one to come to mind is naturally performance, in which case the focus should be
put on their speed.
Another purpose for elasticity can also be energy efficiency, where using the minimum
amount of resources is the dominating factor.

Other solutions intend to reduce the cost by multiplexing either resource providers or
elasticity methods
Elasticity has different purposes such as improving performance, increasing resource
capacity, saving energy, reducing cost and ensuring availability.
Once we look to the elasticity objectives, there are different perspectives.
Cloud IaaS providers try to maximize profit by minimizing the resources used while offering a good Quality of Service (QoS).
PaaS providers seek to minimize the cost they pay to the cloud.
The customers (end-users) search to increase their Quality of Experience (QoE) and to
minimize their payments.
QoE is the degree of delight or annoyance of the user of an application or service

d.Method
Vertical elasticity changes the amount of resources linked to existing instances on the fly.
This can be done in two manners.
The first method consists of explicitly re-dimensioning a virtual machine instance, i.e., changing the quota of physical resources allocated to it.
This is, however, poorly supported by common operating systems, as they fail to take into account changes in CPU or memory without rebooting, thus resulting in service interruption.
The second vertical scaling method involves VM migration: moving a virtual machine instance to another physical machine with a different overall load changes its available resources. A sketch of the first method follows below.
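A hedged sketch of the first method (re-dimensioning a running VM) using the libvirt Python bindings. It assumes a local KVM/QEMU host, a domain named "web-01", and guest support for CPU hot-plug and memory ballooning; otherwise a reboot, and thus a service interruption, would be needed.

    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.lookupByName("web-01")       # hypothetical running domain
    dom.setVcpus(4)                         # raise the vCPU quota on the fly
    dom.setMemory(4 * 1024 * 1024)          # new memory target in KiB (4 GiB)
    conn.close()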

Horizontal scaling is the process of adding/removing instances, which may be located at
different locations.
Load balancers are used to distribute the load among the different instances.
Vertical scaling is the process of modifying the amount of resources (CPU, memory, storage, or a combination) assigned to an instance at run time.
It gives cloud systems more flexibility to cope with varying workloads.

Migration
Migration can also be considered a necessary action to allow further vertical scaling when there are not enough resources on the host machine.
It is also used for other purposes, such as migrating a VM to a less loaded physical machine just to guarantee its performance.
Several types of migration are deployed, such as live migration and non-live migration.
Live migration has two main approaches
post-copy
pre-copy
Post-copy migration suspends the migrating VM, copies minimal processor state to the
target host, resumes the VM and then begins fetching memory pages from the source.
In the pre-copy approach, the memory pages are copied while the VM is running on the source.
If some pages are changed during the copy process (called dirty pages), they are recopied in further rounds until the set of remaining dirty pages is small enough, or until a round limit is reached and the source VM is stopped.
The remaining dirty pages are then copied to the destination VM. A simplified sketch of this loop follows below.
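An illustrative sketch of the pre-copy loop; the vm object, its page-tracking methods, and the transfer callbacks are hypothetical stand-ins for hypervisor internals.

    def pre_copy_migrate(vm, copy_pages, stop_and_copy,
                         threshold=64, max_rounds=10):
        """Pre-copy live migration: iterate while the VM keeps running."""
        pages = set(vm.all_pages())          # round 1: copy every memory page
        for _ in range(max_rounds):
            copy_pages(pages)                # VM still executes on the source
            pages = set(vm.dirty_pages())    # pages rewritten during the copy
            if len(pages) <= threshold:      # dirty set small enough to stop
                break
        stop_and_copy(pages)                 # brief stop: transfer the remainder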

Architecture
The architecture of the elasticity management solutions can be either centralized or
decentralized.
Centralized architecture has only one elasticity controller, i.e., the auto scaling system
that provisions and deprovisions resources.

In decentralized solutions, the architecture is composed of many elasticity controllers or
application managers, which are responsible for provisioning resources for different cloud-
hosted platforms

Provider
Elastic solutions can be applied to a single or multiple cloud providers.
A single cloud provider can be either public or private with one or multiple regions or
datacenters.
Multiple clouds in this context means more than one cloud provider.
It includes hybrid clouds that can be private or public, in addition to the federated clouds
and cloud bursting.
Most of the elasticity solutions support only a single cloud provider

1.5 On-demand Provisioning.


Resource Provisioning means the selection, deployment, and run-time management of
software (e.g., database server management systems, load balancers) and hardware
resources (e.g., CPU, storage, and network) for ensuring guaranteed performance for
applications.
Resource Provisioning is an important and challenging problem in the large-scale
distributed systems such as Cloud computing environments.
There are many resource provisioning techniques, both static and dynamic, each having its own advantages as well as challenges.
The resource provisioning technique used must meet Quality of Service (QoS) parameters such as availability, throughput, response time, security, and reliability, thereby avoiding Service Level Agreement (SLA) violations.
Over-provisioning and under-provisioning of resources must be avoided.
Another important constraint is power consumption.
The ultimate goal of the cloud user is to minimize cost by renting the resources and from
the cloud service provider’s perspective to maximize profit by efficiently allocating the
resources.

To achieve this goal, the cloud user has to request that the cloud service provider provision the resources either statically or dynamically, so that the provider knows what resources, and how many instances of them, are required for a particular application.
By provisioning the resources, QoS parameters such as availability, throughput, security, response time, reliability and performance must be achieved without violating the SLA.
There are two types
 Static Provisioning
 Dynamic Provisioning
Static Provisioning
For applications that have predictable and generally unchanging demands/workloads, it is
possible to use “static provisioning" effectively.
With advance provisioning, the customer contracts with the provider for services.
The provider prepares the appropriate resources in advance of start of service.
The customer is charged a flat fee or is billed on a monthly basis.
Dynamic Provisioning
In cases where demand by applications may change or vary, “dynamic provisioning"
techniques have been suggested whereby VMs may be migrated on-the-fly to new compute
nodes within the cloud.
The provider allocates more resources as they are needed and removes them when they are
not.
The customer is billed on a pay-per-use basis.
When dynamic provisioning is used to create a hybrid cloud, it is sometimes referred to as
cloud bursting.
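A small sketch of the dynamic-provisioning decision behind cloud bursting (the provider-side calls are hypothetical placeholders): private capacity is used first, the overflow is provisioned in a public cloud, and unused public instances are released so the customer only pays for what is used.

    def plan_capacity(demand, private_capacity, public_in_use,
                      provision_public, release_public):
        """Decide how much public-cloud capacity to hold for the current demand."""
        overflow = max(0, demand - private_capacity)     # work the private cloud cannot absorb
        if overflow > public_in_use:
            provision_public(overflow - public_in_use)   # burst: scale out to the public cloud
        elif overflow < public_in_use:
            release_public(public_in_use - overflow)     # demand fell: release and stop paying
        return overflow                                  # new public-cloud footprint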
Parameters for Resource Provisioning
Response time
Minimize Cost
Revenue Maximization
Fault tolerant
Reduced SLA Violation
Reduced Power Consumption

Response time: The resource provisioning algorithm designed must take minimal time to
respond when executing the task.
Minimize Cost: From the Cloud user point of view cost should be minimized.
Revenue Maximization: This is to be achieved from the Cloud Service Provider’s view.
Fault tolerant: The algorithm should continue to provide service in spite of failure of nodes.
Reduced SLA Violation: The algorithm designed must be able to reduce SLA violation.

Reduced Power Consumption: VM placement & migration techniques must lower power
consumption

Dynamic Provisioning Types


1. Local On-demand Resource Provisioning
2. Remote On-demand Resource Provisioning
Local On-demand Resource Provisioning
1. The Engine for the Virtual Infrastructure
The OpenNebula Virtual Infrastructure Engine
• OpenNebula creates a distributed virtualization layer
• Extends the benefits of VM monitors from one resource to multiple resources
• Decouples the VM (service) from its physical location
• Transforms a distributed physical infrastructure into a flexible and elastic virtual infrastructure, which adapts to the changing demands of the VM (service) workloads

Separation of Resource Provisioning from Job Management
• A new virtualization layer between the service and the infrastructure layers
• Seamless integration with existing middleware stacks
• Completely transparent to the computing service and thus to end users

Cluster Partitioning
• Dynamic partition of the infrastructure
• Isolate workloads (several computing clusters)
• Dedicated HA partitions

Benefits for Existing Grid Infrastructures


• The virtualization of the local infrastructure supports a virtualized alternative to
contribute resources to a Grid infrastructure
• Simpler deployment and operation of new middleware distributions
• Lower operational costs
• Easy provision of resources to more than one infrastructure
• Easy support for VO-specific worker nodes
• Performance partitioning between local and grid clusters

Other Tools for VM Management
• VMware DRS, Platform Orchestrator, IBM Director, Novell ZENworks, Enomalism,
Xenoserver
• Advantages:
• Open-source (Apache license v2.0)
• Open and flexible architecture to integrate new virtualization technologies
• Support for the definition of any scheduling policy (consolidation, workload
balance, affinity, SLA)
• LRM-like CLI and API for the integration of third-party tools

Remote on-Demand Resource Provisioning


Access to Cloud Systems
• Provision of virtualized resources as a service
VM Management Interfaces
The processes involved are
• Submission
• Control
• Monitoring

Infrastructure Cloud Computing Solutions
• Commercial Cloud: Amazon EC2
• Scientific Cloud: Nimbus (University of Chicago)
• Open-source Technologies
• Globus VWS (Globus interfaces)
• Eucalyptus (Interfaces compatible with Amazon EC2)
• OpenNebula (Engine for the Virtual Infrastructure)
On-demand Access to Cloud Resources
• Supplement local resources with cloud resources to satisfy peak or fluctuating demands

NIST CLOUD REFERENCE ARCHITECTURE


Definitions
□ A model of computation and data storage based on "pay as you go" access to "unlimited" remote data center capabilities.
□ A cloud infrastructure provides a framework to manage scalable, reliable, on-demand access to applications.
□ Cloud services provide the "invisible" backend to many of our mobile applications.
□ High level of elasticity in consumption.
NIST Cloud Definition:

The National Institute of Standards and Technology (NIST) defines cloud computing as a

"pay-per-use model for enabling available, convenient and on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction."
Architecture
□ Architecture consists of 3 tiers
◦ Cloud Deployment Model
◦ Cloud Service Model
◦ Essential Characteristics of Cloud Computing .

Essential Characteristics 1
□ On-demand self-service.
◦ A consumer can unilaterally provision computing capabilities such as server
time and network storage as needed automatically, without requiring human
interaction with a service provider.
Essential Characteristics 2
□ Broad network access.
◦ Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, laptops, and PDAs) as well as other traditional or cloud-based software services.
Essential Characteristics 3
□ Resource pooling.
◦ The provider’s computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand.
Essential Characteristics 4
□ Rapid elasticity.
◦ Capabilities can be rapidly and elastically provisioned - in some cases
automatically - to quickly scale out; and rapidly released to quickly scale in.
◦ To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be purchased in any quantity at any time.
Essential Characteristics 5
□ Measured service.
◦ Cloud systems automatically control and optimize resource usage by leveraging
a metering capability at some level of abstraction appropriate to the type of
service.
Resource usage can be monitored, controlled, and reported - providing transparency for both
the provider and consumer of the service.

NIST (National Institute of Standards and Technology Background)


The goal is to accelerate the federal government’s adoption of secure and effective cloud
computing to reduce costs and improve services.

Cloud Computing Reference Architecture:

Actors in Cloud Computing

Interactions between the Actors in Cloud Computing

Example Usage Scenario 1:
□ A cloud consumer may request service from a cloud broker instead of contacting a
cloud provider directly.
□ The cloud broker may create a new service by combining multiple services or by
enhancing an existing service.
Usage Scenario- Cloud Brokers
□ In this example, the actual cloud providers are invisible to the cloud consumer.
□ The cloud consumer interacts directly with the cloud broker.

Example Usage Scenario 2


□ Cloud carriers provide the connectivity and transport of cloud services from cloud
providers to cloud consumers.
□ A cloud provider participates in and arranges for two unique service level agreements
(SLAs), one with a cloud carrier (e.g. SLA2) and one with a cloud consumer (e.g.
SLA1).
Usage Scenario for Cloud Carriers
 A cloud provider arranges service level agreements (SLAs) with a cloud carrier.
 Request dedicated and encrypted connections to ensure the cloud services.

Example Usage Scenario 3


• For a cloud service, a cloud auditor conducts independent assessments of the
operation and security of the cloud service implementation.
• The audit may involve interactions with both the Cloud Consumer and the Cloud
Provider.

Cloud Consumer
□ The cloud consumer is the principal stakeholder for the cloud computing service.
□ A cloud consumer represents a person or organization that maintains a business
relationship with, and uses the service from a cloud provider.
The cloud consumer may be billed for the service provisioned, and needs to arrange
payments accordingly.
Example Services Available to a Cloud Consumer

□ The consumers of SaaS can be organizations that provide their members with access
to software applications, end users or software application administrators.
□ SaaS consumers can be billed based on the number of end users, the time of use, the
network bandwidth consumed, the amount of data stored or duration of stored data.
□ Cloud consumers of PaaS can employ the tools and execution resources provided by
cloud providers to develop, test, deploy and manage the applications.
□ PaaS consumers can be application developers or application testers who run and test
applications in cloud-based environments.
□ PaaS consumers can be billed according to processing, database storage and network resources consumed.
□ Consumers of IaaS have access to virtual computers, network-accessible storage &
network infrastructure components.
□ The consumers of IaaS can be system developers, system administrators and IT
managers.
□ IaaS consumers are billed according to the amount or duration of the resources
consumed, such as CPU hours used by virtual computers, volume and duration of data
stored.
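As a rough illustration of such pay-per-use billing, the short Python sketch below totals a hypothetical monthly IaaS bill from metered CPU-hours, stored gigabytes and transferred gigabytes. The rates are made-up placeholders; real prices vary by provider and region.

# Hypothetical pay-per-use bill for an IaaS consumer (all rates are placeholders).
CPU_HOUR_RATE = 0.05       # $ per virtual-machine CPU-hour
STORAGE_GB_MONTH = 0.02    # $ per GB stored per month
TRANSFER_GB_RATE = 0.09    # $ per GB transferred out

def monthly_bill(cpu_hours, storage_gb, transfer_gb):
    return (cpu_hours * CPU_HOUR_RATE
            + storage_gb * STORAGE_GB_MONTH
            + transfer_gb * TRANSFER_GB_RATE)

# Example: 720 CPU-hours, 500 GB stored, 100 GB transferred out -> 55.0
print(round(monthly_bill(720, 500, 100), 2))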
Cloud Provider
□ A cloud provider is a person or an organization;
□ It is the entity responsible for making a service available to interested parties.
□ A Cloud Provider acquires and manages the computing infrastructure required for
providing the services.
□ Runs the cloud software that provides the services.
Makes arrangements to deliver the cloud services to the Cloud Consumers through network access.

Cloud Provider - Major Activities

Cloud Auditor
□ A cloud auditor is a party that can perform an independent examination of cloud
service controls.
□ Audits are performed to verify conformance to standards through review of objective
evidence.
□ A cloud auditor can evaluate the services provided by a cloud provider in terms of
security controls, privacy impact, performance, etc.
Cloud Broker
□ Integration of cloud services can be too complex for cloud consumers to manage.
□ A cloud consumer may request cloud services from a cloud broker, instead of
contacting a cloud provider directly.
□ A cloud broker is an entity that manages the use, performance and delivery of cloud services, and negotiates relationships between cloud providers and cloud consumers.
Services of cloud broker
Service Intermediation:
□ A cloud broker enhances a given service by improving some specific capability and
providing value-added services to cloud consumers.
Service Aggregation:
□ A cloud broker combines and integrates multiple services into one or more new
services.
□ The broker provides data integration and ensures the secure data movement between
the cloud consumer and multiple cloud providers.
Service Arbitrage:
□ Service arbitrage is similar to service aggregation except that the services being
aggregated are not fixed.
□ Service arbitrage means a broker has the flexibility to choose services from multiple
agencies.
E.g.: The cloud broker can use a credit-scoring service to measure and select the agency with the best score.
Cloud Carrier
□ A cloud carrier acts as an intermediary that provides connectivity and transport of
cloud services between cloud consumers and cloud providers.

□ Cloud carriers provide access to consumers through network.
□ The distribution of cloud services is normally provided by network and
telecommunication carriers or a transport agent
□ A transport agent refers to a business organization that provides physical transport of
storage media such as high-capacity hard drives and other access devices.
Scope of Control between Provider and Consumer
The Cloud Provider and Cloud Consumer share the control of resources in a cloud system

□ The application layer includes software applications targeted at end users or programs.
The applications are used by SaaS consumers, or installed/managed/maintained by PaaS
consumers, IaaS consumers and SaaS providers.
□ The middleware layer provides software building blocks (e.g., libraries, database, and
Java virtual machine) for developing application software in the cloud.
□ Used by PaaS consumers, installed/ managed/ maintained by IaaS consumers or PaaS
providers, and hidden from SaaS consumers.
□ The OS layer includes operating system and drivers, and is hidden from SaaS
consumers and PaaS consumers.
□ An IaaS cloud allows one or multiple guest OS to run virtualized on a single physical
host.
The IaaS consumers should assume full responsibility for the guest OS, while the IaaS
provider controls the host OS.

Cloud Deployment Model
□ Public Cloud
□ Private Cloud
□ Hybrid Cloud
□ Community Cloud
Public cloud

□ A public cloud is one in which the cloud infrastructure and computing resources are
made available to the general public over a public network.
□ A public cloud is meant to serve a multitude (huge number) of users, not a single customer.
□ A fundamental characteristic of public clouds is multitenancy.
□ Multitenancy allows multiple users to work in a software environment at the same time, each with their own resources.
□ Built over the Internet (i.e., the service provider offers resources, applications and storage to customers over the internet) and can be accessed by any user.
□ Owned by service providers and accessible through a subscription.
□ Best option for small enterprises, which are able to start their businesses without a large up-front (initial) investment.
□ By renting the services, customers can dynamically upsize or downsize their IT according to the demands of their business.
□ Services are offered on a price-per-use basis.
□ Promotes standardization, preserves capital investment
□ Public clouds have geographically dispersed datacenters to share the load of users and
better serve them according to their locations
□ Provider is in control of the infrastructure

Examples:
o Amazon EC2 is a public cloud that provides Infrastructure as a Service
o Google AppEngine is a public cloud that provides Platform as a Service
o SalesForce.com is a public cloud that provides software as a service.
Advantage
□ Offers unlimited scalability – on demand resources are available to meet your
business needs.
□ Lower costs—no need to purchase hardware or software and you pay only for the
service you use.
□ No maintenance - Service provider provides the maintenance.
□ Offers reliability: a vast number of resources is available, so the failure of one system will not interrupt the service.
□ Services like SaaS, PaaS, IaaS are easily available on Public Cloud platform as it can
be accessed from anywhere through any Internet enabled devices.
□ Location independent – the services can be accessed from any location
Disadvantage
□ No control over privacy or security
□ Cannot be used for sensitive applications (government and military agencies will not consider the public cloud)
□ Lacks complete flexibility (since it depends on the provider)
□ No stringent (strict) protocols regarding data management

Private Cloud
□ Cloud services are used by a single organization, which are not exposed to the public
□ Services are always maintained on a private network and the hardware and software
are dedicated only to single organization
□ Private cloud is physically located at
o Organization's premises [On-site private clouds] (or)
o Outsourced (given) to a third party [Outsourced private clouds]
□ It may be managed either by
o the Cloud Consumer organization (or)
o a third party
□ Private clouds are used by

 government agencies
 financial institutions
 Mid size to large-size organisations.
□ On-site private clouds

Out-sourced Private Cloud


□ Supposed to deliver a more efficient and convenient cloud
□ Offers higher efficiency, resiliency (ability to recover quickly), security, and privacy
□ Customer information protection: In-house security is easier to maintain and rely on.
o Follows its own (private organization's) standard procedures and operations (whereas in a public cloud the standard procedures and operations of the service provider are followed)
Advantage
□ Offers greater Security and Privacy
□ Organization has control over resources
□ Highly reliable
□ Saves money by virtualizing the resources
Disadvantage
□ Expensive when compared to public cloud
□ Requires IT Expertise to maintain resources.

Hybrid Cloud
□ Built with both public and private clouds
□ It is a heterogeneous cloud resulting from a private and public clouds.
□ Private cloud is used for
o sensitive applications that are kept inside the organization's network
o business-critical operations like financial reporting
□ Public cloud is used for
o other services that are kept outside the organization's network
o high volumes of data
o lower-security needs such as web-based email (Gmail, Yahoo Mail, etc.)
□ The resources or services are temporarily leased for the time required and then
released. This practice is also known as cloud bursting.
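The placement decision behind cloud bursting can be pictured as a simple rule: sensitive work stays on the private cloud, and non-sensitive overflow is leased from the public cloud only while private capacity is exhausted. This is a conceptual Python sketch; the capacity figure and workload fields are hypothetical.

# Conceptual cloud-bursting placement rule (capacity and workload fields are hypothetical).
PRIVATE_CAPACITY = 100   # units of work the private cloud can absorb

def place_workload(workload, current_private_load):
    if workload["sensitive"]:
        return "private"   # sensitive applications stay inside the organization's network
    if current_private_load + workload["size"] <= PRIVATE_CAPACITY:
        return "private"   # enough in-house capacity, no need to burst
    return "public"        # burst: temporarily lease public cloud resources

# Example: a non-sensitive job of size 30 arrives while the private cloud carries a load of 90
print(place_workload({"sensitive": False, "size": 30}, current_private_load=90))   # "public"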

Fig: Hybrid Cloud

Advantage
□ It is scalable
□ Offers better security
□ Flexible-Additional resources are availed in public cloud when needed
□ Cost-effectiveness - we have to pay for extra resources only when needed.
□ Control - the organization can maintain a private infrastructure for sensitive applications

Disadvantage
□ Infrastructure Dependency
□ Possibility of a security breach (violation) through the public cloud

Difference between Public, Private and Hybrid Clouds

Tenancy
  Public: Multi-tenancy - the data of multiple organizations is stored in a shared environment.
  Private: Single tenancy - the data of a single organization is stored in the cloud.
  Hybrid: Data stored in the public cloud is multi-tenant; data stored in the private cloud is single-tenant.

Exposed to the Public
  Public: Yes - anyone can use the public cloud services.
  Private: No - only the organization itself can use the private cloud services.
  Hybrid: Services on the private cloud can be accessed only by the organization's users; services on the public cloud can be accessed by anyone.

Data Center Location
  Public: Anywhere on the Internet.
  Private: Inside the organization's network.
  Hybrid: The private cloud is present in the organization's network; the public cloud is anywhere on the Internet.

Cloud Service Management
  Public: The cloud service provider manages the services.
  Private: The organization has its own administrators managing the services.
  Hybrid: The organization manages the private cloud; the Cloud Service Provider (CSP) manages the public cloud.

Hardware Components
  Public: The CSP provides all the hardware.
  Private: The organization provides the hardware.
  Hybrid: The organization provides the private cloud resources; the cloud service provider provides the public cloud resources.

Expenses
  Public: Less cost.
  Private: Expensive when compared to the public cloud.
  Hybrid: Cost is required for setting up the private cloud.
Cloud Service Models
□ Software as a Service (SaaS)
□ Platform as a Service (PaaS)
□ Infrastructure as a Service (IaaS)

These models are offered based on various SLAs between providers and users
SLA of cloud computing covers
o service availability
o performance
o data protection
o security
Software as a Service (SaaS) (Complete software offering on the cloud)
□ SaaS is a licensed software offering on the cloud, paid per use
□ SaaS is a software delivery methodology that provides licensed multi-tenant access to software and its functions remotely as a Web-based service.
◦ Usually billed based on usage
◦ Usually a multi-tenant environment
◦ Highly scalable architecture
□ Customers do not invest on software application programs.
□ The capability provided to the consumer is to use the provider’s applications running
on a cloud infrastructure.
□ The applications are accessible from various client devices through a thin client
interface such as a web browser (e.g., web-based email).
□ The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, data or even individual application
capabilities, with the possible exception of limited user specific application
configuration settings.
□ On the customer side, there is no upfront investment in servers or software licensing.
□ It is a "one-to-many" software delivery model, whereby an application is shared across multiple users
□ Characteristics of an Application Service Provider (ASP)
o Product sold to customer is application access.
o Application is centrally managed by Service Provider.
o Service delivered is one-to-many customers
o Services are delivered on the contract
E.g. Gmail and Google Docs, Microsoft SharePoint, and CRM software (Customer Relationship Management)
□ SaaS providers
□ Google’s Gmail, Docs, Talk etc
□ Microsoft’s Hotmail, Sharepoint
□ SalesForce,
□ Yahoo
□ Facebook
Infrastructure as a Service (IaaS) (Hardware offerings on the cloud)
IaaS is the delivery of technology infrastructure (mostly hardware) as an on-demand, scalable service.
◦ Usually billed based on usage
◦ Usually multi tenant virtualized environment
◦ Can be coupled with Managed Services for OS and application support
◦ User can choose his OS, storage, deployed app, networking components
◦ The capability provided to the consumer is to provision processing, storage,
networks, and other fundamental computing resources.
◦ Consumer is able to deploy and run arbitrary software, which may include
operating systems and applications.
◦ The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage and deployed applications.

□ IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload partitioning, application isolation, sandboxing, and hardware tuning
□ Sandboxing: A program is set aside from other programs in a separate environment
so that if errors or security issues occur, those issues will not spread to other areas on
the computer.
□ Hardware tuning: To improve the performance of system
□ The user works on multiple VMs running guest OSes
□ the service is performed by rented cloud infrastructure
□ The user does not manage or control the cloud infrastructure, but can specify when to
request and release the needed resources.
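As an illustration of how an IaaS consumer requests and releases a virtual machine programmatically, the sketch below uses the AWS SDK for Python (boto3) to launch and then terminate an Amazon EC2 instance. The AMI ID, instance type and region are placeholders, and configured AWS credentials are assumed.

# Sketch: provisioning and releasing a VM on an IaaS cloud (Amazon EC2 via boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",           # the consumer chooses the CPU/RAM sizing
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Release the resource when it is no longer needed (pay only for what was used).
ec2.terminate_instances(InstanceIds=[instance_id])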

IaaS providers
□ Amazon Elastic Compute Cloud (EC2)
◦ Each instance provides 1-20 processors, up to 16 GB RAM, 1.69 TB storage
□ RackSpace Hosting
◦ Each instance provides 4 core CPU, up to 8 GB RAM, 480 GB storage
□ Joyent Cloud
◦ Each instance provides 8 CPUs, up to 32 GB RAM, 48 GB storage
□ Go Grid
◦ Each instance provides 1-6 processors, up to 15 GB RAM, 1.69 TB storage

Platform as a Service (PaaS) (Development platform)


□ PaaS provides all of the facilities required to support the complete life cycle of building,
delivering and deploying web applications and services entirely from the Internet.
□ Typically applications must be developed with a particular platform in mind
• Multi tenant environments
• Highly scalable multi tier architecture
□ The capability provided to the consumer is to deploy onto the cloud infrastructure
consumer created or acquired applications created using programming languages and
tools supported by the provider.
□ The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage.
Consumers have control over the deployed applications and possibly over the application hosting environment configurations.

Customers are provided with execution platform for developing applications.


Execution platform includes operating system, programming language execution
environment, database, web server, hardware etc.
This acts as middleware on top of which applications are built
The user is freed from managing the cloud infrastructure

Application management is the core functionality of the middleware
Provides runtime(execution) environment
Developers design their applications in the execution environment.
Developers need not concern about hardware (physical or virtual), operating systems, and
other resources.
PaaS core middleware manages the resources and scaling of applications on demand.
PaaS offers
o Execution environment and hardware resources (infrastructure) (or)
o software is installed on the user premises
PaaS: Service Provider provides Execution environment and hardware resources
(infrastructure)

Characteristics of PaaS
Runtime framework: Executes end-user code according to the policies set by the user and
the provider.
Abstraction: PaaS helps to deploy (install) and manage applications on the cloud.
Automation: Automates the process of deploying applications to the infrastructure; additional resources are provided when needed.
Cloud services: Helps developers simplify the creation and delivery of cloud applications.
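To make this concrete, the sketch below shows the kind of artifact a PaaS consumer deploys to a platform such as Google App Engine: only application code (here using Flask, a common Python choice) plus a small configuration file, while the provider supplies the runtime, scaling and underlying infrastructure. The filenames and runtime version are illustrative.

# main.py - minimal application code a PaaS consumer would deploy (illustrative).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from a PaaS-hosted application!"

# A typical App Engine deployment also includes a small app.yaml, roughly:
#     runtime: python39
# After `gcloud app deploy`, the platform runs and scales the app; the consumer
# manages no servers, operating systems or storage.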
PaaS providers
□ Google App Engine
◦ Python, Java, Eclipse
□ Microsoft Azure
◦ .Net, Visual Studio
□ Sales Force
◦ Apex, Web wizard
□ TIBCO,
□ VMware,
□ Zoho
Cloud Computing – Services
 Software as a Service - SaaS
 Platform as a Service - PaaS
 Infrastructure as a Service - IaaS

Category: PaaS-I
  Description: Execution platform is provided along with hardware resources (infrastructure)
  Product Type: Middleware + Infrastructure
  Vendors and Products: Force.com, Longjump

Category: PaaS-II
  Description: Execution platform is provided with additional components
  Product Type: Middleware + Infrastructure, Middleware
  Vendors and Products: Google App Engine

Category: PaaS-III
  Description: Runtime environment for developing any kind of application
  Product Type: Middleware + Infrastructure, Middleware
  Vendors and Products: Microsoft Azure
Architectural Design Challenges


Challenge 1 : Service Availability and Data Lock-in Problem
Service Availability

Service Availability in Cloud might be affected because of


Single Point of Failure
Distributed Denial of Service
Single Point of Failure
o Depending on single service provider might result in failure.
o In case of single service providers, even if company has multiple data centres
located in different geographic regions, it may have common software
infrastructure and accounting systems.
Solution:
o Multiple cloud providers may provide more protection from failures and provide High Availability (HA).
o Using multiple cloud providers helps guard against the loss of all data.
Distributed Denial of service (DDoS) attacks.
o Cyber criminals attack target websites and online services and make those services unavailable to users.
o DDoS tries to make services unavailable to users by directing more traffic at the server or network than it can accommodate.

Solution:
o Some SaaS providers provide the opportunity to defend against DDoS attacks by using
quick scale-ups.
Customers cannot easily extract their data and programs from one site to run on another.
Solution:
o Have standardization among service providers so that customers can deploy (install)
services and data across multiple cloud providers.

Data Lock-in
It is a situation in which a customer using service of a provider cannot be moved to another
service provider because technologies used by a provider will be incompatible with other
providers.
This makes a customer dependent on a vendor for services and makes customer unable to
use service of another vendor.
Solution:
o Have standardization (in technologies) among service providers so that customers can
easily move from a service provider to another.

Challenge 2: Data Privacy and Security Concerns


Cloud services are prone to attacks because they are accessed through internet.
Security is given by
o Storing the encrypted data in to cloud.
o Firewalls, filters.
Cloud environment attacks include
o Guest hopping
o Hijacking
o VM rootkits.
Guest Hopping: Virtual machine hyper jumping (VM jumping) is an attack method that exploits (makes use of) a hypervisor's weakness to allow a virtual machine (VM) to be accessed from another.
Hijacking: Hijacking is a type of network security attack in which the attacker takes
control of a communication

VM Rootkit: a collection of malicious (harmful) software designed to enable access to a computer that is not otherwise allowed.
A man-in-the-middle (MITM) attack is a form of eavesdropping (spying) where communication between two users is monitored and modified by an unauthorized party.
o Man-in-the-middle attack may take place during VM migrations [virtual machine (VM)
migration - VM is moved from one physical host to another host].
Passive attacks steal sensitive data or passwords.
Active attacks may manipulate (control) kernel data structures which will cause major
damage to cloud servers.

Challenge 3: Unpredictable Performance and Bottlenecks


Multiple VMs can share CPUs and main memory in cloud computing, but I/O sharing is
problematic.
Internet applications continue to become more data-intensive (handles huge amount of
data).
Handling huge amount of data (data intensive) is a bottleneck in cloud environment.
Weak servers that do not handle data transfers properly must be removed from the cloud environment.

Challenge 4: Distributed Storage and Widespread Software Bugs


The database is always growing in cloud applications.
There is a need to create a storage system that meets this growth.
This demands the design of efficient distributed SANs (Storage Area Network of Storage
devices).
Data centres must meet
o Scalability
o Data durability
o HA(High Availability)
o Data consistence
Bug refers to errors in software.
Debugging must be done in data centres.

Challenge 5: Cloud Scalability, Interoperability and Standardization
Cloud Scalability
Cloud resources are scalable. Cost increases when storage and network bandwidth are scaled up (increased).
Interoperability
Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and
extensible format for the packaging and distribution of VMs.
OVF defines a transport mechanism for VM, that can be applied to different virtualization
platforms
Standardization
Cloud standardization, should have ability for virtual machine to run on any virtual
platform.

Challenge 6: Software Licensing and Reputation Sharing


Cloud providers can use both pay-for-use and bulk-use licensing schemes to widen the
business coverage.
Cloud providers must create reputation-guarding services similar to the "trusted e-mail" services.
Cloud providers want legal liability to remain with the customer, and vice versa.

Cloud Storage
Storing your data on the storage of a cloud service provider rather than on a local system.
Data stored on the cloud are accessed through Internet.
Cloud Service Provider provides Storage as a Service

Storage as a Service
□ Third-party provider rents space on their storage to cloud users.
□ Customers move to cloud storage when they lack the budget for having their own storage.
□ Storage service providers take responsibility for current backup, replication, and disaster recovery needs.
□ Small and medium-sized businesses can make use of Cloud Storage
□ Storage is rented from the provider using a
o cost-per-gigabyte-stored (or)

o cost-per-data-transferred
□ The end user doesn’t have to pay for infrastructure (resources), they have to pay only for
how much they transfer and save on the provider’s storage.

Providers
□ Google Docs allows users to upload documents, spreadsheets, and presentations to
Google’s data servers.
□ Those files can then be edited using a Google application.
□ Web email providers like Gmail, Hotmail, and Yahoo! Mail, store email messages on
their own servers.
□ Users can access their email from computers and other devices connected to the Internet.
□ Flickr and Picasa host millions of digital photographs; users can create their own online photo albums.
□ YouTube hosts millions of user-uploaded video files.
□ Hostmonster and GoDaddy store files and data for many client web sites.
□ Facebook and MySpace are social networking sites and allow members to post pictures
and other content. That content is stored on the company’s servers.
□ MediaMax and Strongspace offer storage space for any kind of digital data.

Data Security
□ To secure data, most systems use a combination of techniques:
o Encryption
o Authentication
o Authorization

Encryption
o Algorithms are used to encode information. To decode the information keys are required.
Authentication processes
o This requires a user to create a name and password.
Authorization practices
o The client lists the people who are authorized to access information stored on the cloud
system.
If information is stored on the cloud, the head of the IT department might have complete and free access to everything.
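A common safeguard is to encrypt data on the client side before it ever reaches the provider's storage. The sketch below uses the Fernet recipe from the Python cryptography library; key management and the actual upload step are omitted and would depend on the chosen provider.

# Client-side encryption before uploading to cloud storage (illustrative sketch).
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # in practice the key must be stored and managed securely
cipher = Fernet(key)

plaintext = b"confidential business record"
ciphertext = cipher.encrypt(plaintext)       # this ciphertext is what gets uploaded

# Later, after downloading the object back from the cloud:
assert cipher.decrypt(ciphertext) == plaintext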

Reliability
□ Service providers give reliability for data through redundancy (maintaining multiple copies of data).
Reputation is important to cloud storage providers. If there is a perception that the provider is
unreliable, they won’t have many clients.
Advantages
□ Cloud storage providers balance server loads.
□ Move data among various datacenters, ensuring that information is stored close to where it is used and is thereby available quickly.
□ It allows to protect the data in case there’s a disaster.
□ Some products are agent-based and the application automatically transfers
information to the cloud via FTP
Cautions
□ Don’t commit everything to the cloud, but use it for a few, noncritical purposes.
□ Large enterprises might have difficulty with vendors like Google or Amazon.
□ Forced to rewrite solutions for their applications.

□ Lack of portability.

Theft (Disadvantage)
□ User data could be stolen or viewed by those who are not authorized to see it.
□ Whenever user data leaves their own datacenter, there is a risk from a security point of view.
□ If users store data on the cloud, they should make sure the data is encrypted and that data in transit is secured with technologies like SSL.

Cloud Storage Providers
Amazon Simple Storage Service (S3)
□ The best-known cloud storage service is Amazon’s Simple Storage Service (S3),
launched in 2006.
□ Amazon S3 is designed to make computing easier for developers.
□ Amazon S3 provides an interface that can be used to store and retrieve any amount of
data, at any time, from anywhere on the Web.
□ Amazon S3 is intentionally built with a minimal feature set that includes the
following functionality:
• Write, read, and delete objects containing from 1 byte to 5 gigabytes of data
each.
The number of objects that can be stored is unlimited.
• Each object is stored and retrieved via a unique developer-assigned key.
• Objects can be made private or public, and rights can be assigned to specific
users.
• Uses standards-based REST and SOAP interfaces designed to work with any
Internet-development toolkit.
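These operations map directly onto calls in the S3 API. The sketch below uses boto3 to write, read and delete an object under a developer-assigned key; the bucket name and key are placeholders and configured credentials are assumed.

# Sketch: basic S3 object operations with boto3 (bucket name and key are placeholders).
import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "reports/2024/summary.txt"

# Write an object (from 1 byte up to 5 GB per object in the classic S3 model).
s3.put_object(Bucket=bucket, Key=key, Body=b"hello cloud storage")

# Read it back via its developer-assigned key.
data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(data)

# Delete the object when it is no longer needed.
s3.delete_object(Bucket=bucket, Key=key)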

Design Requirements
Amazon built S3 to fulfill the following design requirements:
• Scalable Amazon S3 can scale in terms of storage, request rate, and users to support an
unlimited number of web-scale applications.
 Reliable Store data durably, with 99.99 percent availability. Amazon says it does not
allow any downtime.
• Fast Amazon S3 was designed to be fast enough to support high-performance applications.
Server-side latency must be insignificant relative to Internet latency. Any performance
bottlenecks can be fixed by simply adding nodes to the system.
• Inexpensive Amazon S3 is built from inexpensive commodity hardware components. As a
result, frequent node failure is the norm and must not affect the overall system. It must be
hardware-agnostic, so that savings can be captured as Amazon continues to drive down
infrastructure costs.
• Simple Building highly scalable, reliable, fast, and inexpensive storage is difficult. Doing so
in a way that makes it easy to use for any application anywhere is more difficult. Amazon S3
must do both.

Design Principles
Amazon used the following principles of distributed system design to meet Amazon S3
requirements:
• Decentralization It uses fully decentralized techniques to remove scaling bottlenecks and
single points of failure.
• Autonomy The system is designed such that individual components can make decisions
based on local information.
• Local responsibility Each individual component is responsible for achieving its
consistency; this is never the burden of its peers.
• Controlled concurrency Operations are designed such that no or limited concurrency
control is required.
• Failure toleration The system considers the failure of components to be a normal mode of
operation and continues operation with no or minimal interruption.
• Controlled parallelism Abstractions used in the system are of such granularity that
parallelism can be used to improve performance and robustness of recovery or the introduction
of new nodes.
• Small, well-understood building blocks Do not try to provide a single service that does
everything for everyone, but instead build small components that can be used as building blocks
for other services.
• Symmetry Nodes in the system are identical in terms of functionality, and require no or
minimal node-specific configuration to function.
• Simplicity The system should be made as simple as possible, but no simpler.

How S3 Works
Amazon keeps its lips pretty tight about how S3 works, but according to Amazon, S3’s
design aims to provide scalability, high availability, and low latency at commodity costs. S3
stores arbitrary objects at up to 5GB in size, and each is accompanied by up to 2KB of
metadata. Objects are organized by buckets. Each bucket is owned by an AWS account and
the buckets are identified by a unique, user-assigned key.

Buckets and objects are created, listed, and retrieved using either a REST-style or SOAP
interface.
Objects can also be retrieved using the HTTP GET interface or via BitTorrent. An
access control list restricts who can access the data in each bucket. Bucket names and keys are
formulated so that they can be accessed using HTTP. Requests are authorized using an access
control list associated with each bucket and object, for instance:
http://s3.amazonaws.com/examplebucket/examplekey
http://examplebucket.s3.amazonaws.com/examplekey
The Amazon AWS Authentication tools allow the bucket owner to create an
authenticated URL with a set amount of time that the URL will be valid.
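In boto3, such a time-limited authenticated URL is produced as a presigned URL. The sketch below is illustrative; the bucket name and key are placeholders.

# Sketch: creating a time-limited (presigned) URL for an S3 object with boto3.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "examplebucket", "Key": "examplekey"},
    ExpiresIn=3600,   # the URL remains valid for one hour
)
print(url)   # anyone holding this URL can fetch the object until it expires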

Peer to Peer Architecture


In the common client-server architecture, multiple clients will communicate with a central
server. A peer-to-peer (P2P) architecture consists of a decentralized network of peers - nodes
that are both clients and servers. P2P networks distribute the workload between peers, and all
peers contribute and consume resources within the network without the need for a centralized
server. However, not all peers are necessarily equal. Super peers may have more resources and
can contribute more than they consume. Edge peers do not contribute any resources, they only
consume from the network. In its purest form, P2P architecture is completely decentralized.
However, in application, sometimes there is a central tracking server layered on top of the P2P
network to help peers find each other and manage the network. Here's a simple example of a small P2P network.

Applications
P2P architecture works best when there are lots of active peers in an active network, so new peers joining the network can easily find other peers to connect to. If a large number of peers drop out of the network, there are still enough remaining peers to pick up the slack. If there are only a few peers, there are fewer resources available overall. For example, in a P2P file-sharing application, the more popular a file is (which means that lots of peers are sharing the file), the faster it can be downloaded. P2P works best if the workload is split into small chunks that can be reassembled later. This way, a large number of peers can work simultaneously on one task and each peer has less work to do. In the case of P2P file-sharing, a file can be broken down so that a peer can download many chunks of the file from different peers at the same time, as sketched in the example below.
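The chunking idea can be sketched in a few lines of Python: a file is split into fixed-size chunks that different peers could serve independently, and the downloader reassembles them in order. This is a toy illustration, not a real P2P protocol.

# Toy illustration of chunked file sharing: split, fetch out of order, reassemble.
CHUNK_SIZE = 4   # bytes per chunk (tiny, for illustration only)

def split_into_chunks(data, size=CHUNK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(chunks):
    return b"".join(chunks)

file_data = b"peer-to-peer file sharing demo"
chunks = list(enumerate(split_into_chunks(file_data)))   # (index, chunk) pairs

chunks.reverse()   # pretend the chunks arrived from different peers, out of order
restored = reassemble([chunk for _, chunk in sorted(chunks)])
print(restored == file_data)   # True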
Some uses of P2P architecture:
● File sharing
● Instant messaging
● Voice Communication
● Collaboration
● High Performance Computing
Some examples of P2P architecture:
● Napster - it was shut down in 2001 since they used a centralized tracking server
● BitTorrent - popular P2P file-sharing protocol, usually associated with piracy
● Skype - it used to use proprietary hybrid P2P protocol, now uses client-server model after
Microsoft’s acquisition
● Bitcoin - P2P cryptocurrency without a central monetary authority
Advantages/Change Resilience
P2P networks have many advantages. For example, there is no central server to maintain and
to pay for (disregarding tracking servers), so this type of networks can be more economical.
That also means there is no need for a network operating system, thus lowering cost even
further. Another advantage would be there is no single point of failure, unless in the very
unlikely case that the network is very small. P2P networks are very resilient to the change in
peers; if one peer leaves, there is minimal impact on the overall network. If a large group of
peers join the network at once, the network can handle the increased load easily. Due to its
decentralized nature, P2P networks can survive attacks fairly well since there is no centralized
server.
Disadvantages
P2P networks introduce many security concerns. If one peer is infected with a virus and uploads a chunk of a file that contains the virus, it can quickly spread to other peers. Also, if there are many peers in the network, it can be difficult to ensure they have the proper permissions to access the network if a peer is sharing a confidential file. P2P networks often contain a large number of users who utilize resources shared by other nodes, but who do not share anything themselves. These types of free riders are called leechers. Although being hard to shut down is counted as an advantage, it can also be a disadvantage if it is used to facilitate illegal and immoral activities. Furthermore, the widespread use of mobile devices has made many companies switch to other architectures. With many people using mobile devices that are not always on, it can be difficult for users to contribute to the network without draining battery life and using up mobile data. In a client-server architecture, clients don't need to contribute any of their resources.
Non-Functional Properties
● Scalability - the network is very easy to scale up
● Efficiency - the network uses available resources of peers, who normally wouldn’t
contribute anything in a typical client-server architecture
● Adaptability - the network can be used for a variety of use cases, and it can easily start up another network if one is taken down, which means there is no single point of failure
