Unit II Introduction to Cloud Computing
Hybrid Clouds – Service Models: IaaS – PaaS – SaaS – Benefits of Cloud Computing.
Full availability
On-demand service
The term cloud refers to a network or internet (Next generation internet computing)
It is computing based on the internet (Delivery of computing service over the internet)
The main advantage of Internet computing is that it delivers resources (documents, images,
audio, video, or software) to the cloud customer at the right time.
The cloud can provide services over a network (i.e., over public or private networks such as
WAN, LAN, or VPN).
CLOUD COMPUTING
It means storing and accessing data and programs over the Internet, instead of on the
user computer's hard drive.
It provides IT infrastructure solutions at low cost.
CLOUD STORAGE SYSTEM (ONLINE STORAGE SYSTEM)
An online storage system enables the end user (client) to store large amounts of data
on a remote server over the Internet.
Once a client posts data to the server, that user can access the data at any time, from
anywhere, through the Internet (portability).
It offers online data storage, platform, infrastructure and applications
Cloud computing is a combination of software- and hardware-based computing resources
delivered as a network service.
Examples of Cloud Storage
Web e-mail providers such as Gmail, Yahoo Mail, etc.
Need for cloud storage:
To store huge amounts of data (petabytes), which is normally not possible with a single PC
To increase the availability of data
To provide secure access, from any networked device, to applications and data that are stored remotely
To provide customers with reliable access to their personal data while servicing them
1.1 INTRODUCTION
EVOLUTION OF DISTRIBUTED COMPUTING
Grids enable access to shared computing power and storage capacity from your desktop.
Clouds enable access to leased computing power and storage capacity from your desktop.
• Grids are an open source technology. Resource users and providers alike can understand
and contribute to the management of their grid
• Clouds are a proprietary technology. Only the resource provider knows exactly how
their cloud manages data, job queues, security requirements and so on.
• The concept of grids was proposed in 1995. The Open Science Grid (OSG) started in 2005, and
the EDG (European Data Grid) project began in 2001.
• In the late 1990s, Oracle and EMC offered early private cloud solutions. However, the
term cloud computing didn't gain prominence until 2007.
SCALABLE COMPUTING OVER THE INTERNET
Instead of using a centralized computer to solve computational problems, a parallel and
distributed computing system uses multiple computers to solve large-scale problems over the
Internet. Thus, distributed computing becomes data-intensive and network-centric.
The Age of Internet Computing
o The speed of high-performance computing (HPC) applications is no longer the optimal measure of
system performance
o The emergence of computing clouds instead demands high-throughput computing (HTC)
systems built with parallel and distributed computing technologies
o We have to upgrade data centers using fast servers, storage systems, and high-bandwidth
networks.
The Platform Evolution
o From 1950 to 1970, a handful of mainframes, including the IBM 360 and CDC 6400
o From 1960 to 1980, lower-cost minicomputers such as the DEC PDP 11 and VAX
Series
o From 1970 to 1990, we saw widespread use of personal computers built with VLSI
microprocessors.
o From 1980 to 2000, massive numbers of portable computers and pervasive devices
appeared in both wired and wireless applications
o Since 1990, the use of both HPC and HTC systems hidden in clusters, grids, or
Internet clouds has proliferated
P2P, cloud, and web service platforms focus more on HTC applications than on HPC applications. Clustering and P2P technologies lead to
the development of computational grids or data grids.
For many years, HPC systems have emphasized raw speed performance. The speed of
HPC systems increased from Gflops in the early 1990s to Pflops in 2010.
The development of market-oriented high-end computing systems is undergoing a
strategic change from an HPC paradigm to an HTC paradigm. This HTC paradigm pays
more attention to high-flux computing. The main application for high-flux computing
is in Internet searches and web services by millions or more users simultaneously. The
performance goal thus shifts to measure high throughput or the number of tasks
completed per unit of time. HTC technology needs to not only improve in terms of
batch processing speed, but also address the acute problems of cost, energy savings,
security, and reliability at many data and enterprise computing centers.
Advances in virtualization make it possible to see the growth of Internet clouds as a
new computing paradigm. The maturity of radio-frequency identification (RFID),
Global Positioning System (GPS), and sensor technologies has triggered the
development of the Internet of Things (IoT). These new paradigms are only briefly
introduced here.
The high-technology community has argued for many years about the precise
definitions of centralized computing, parallel computing, distributed computing, and
cloud computing. In general, distributed computing is the opposite of centralized
computing. The field of parallel computing overlaps with distributed computing to a
great extent, and cloud computing overlaps with distributed, centralized, and parallel
computing.
Terms
Centralized computing
This is a computing paradigm by which all computer resources are centralized in
one physical system. All resources (processors, memory, and storage) are fully shared and
tightly coupled within one integrated OS. Many data centers and supercomputers are
centralized systems, but they are used in parallel, distributed, and cloud computing
applications.
• Parallel computing
In parallel computing, all processors are either tightly coupled with centralized
shared memory or loosely coupled with distributed memory. Inter processor
communication is accomplished through shared memory or via message passing.
A computer system capable of parallel computing is commonly known as a parallel
computer. Programs running in a parallel computer are called parallel programs. The
process of writing parallel programs is often referred to as parallel programming (a small message-passing sketch follows this list of terms).
• Distributed computing This is a field of computer science/engineering that studies
distributed systems. A distributed system consists of multiple autonomous computers, each
having its own private memory, communicating through a computer network. Information
exchange in a distributed system is accomplished through message passing. A computer
program that runs in a distributed system is known as a distributed program. The process
of writing distributed programs is referred to as distributed programming.
• Cloud computing An Internet cloud of resources can be either a centralized or a
distributed computing system. The cloud applies parallel or distributed computing, or both.
Clouds can be built with physical or virtualized resources over large data centers that are
centralized or distributed. Some authors consider cloud computing to be a form of utility
computing or service computing . As an alternative to the preceding terms, some in the
high-tech community prefer the term concurrent computing or concurrent programming.
These terms typically refer to the union of parallel computing and distributed computing,
although biased practitioners may interpret them differently.
• Ubiquitous computing refers to computing with pervasive devices at any place and time
using wired or wireless communication. The Internet of Things (IoT) is a networked
connection of everyday objects including computers, sensors, humans, etc. The IoT is
supported by Internet clouds to achieve ubiquitous computing with any object at any place
and time. Finally, the term Internet computing is even broader and covers all computing
paradigms over the Internet. This book covers all the aforementioned computing
paradigms, placing more emphasis on distributed and cloud computing and their working
systems, including the clusters, grids, P2P, and cloud systems.
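To make the message-passing idea above concrete, here is a minimal, illustrative Python sketch (not from the source) in which parallel worker processes exchange data with the main program through queues; the worker function and chunk sizes are arbitrary examples.

from multiprocessing import Process, Queue

def worker(task_id, in_q, out_q):
    # receive one chunk of work via message passing and send back a partial result
    data = in_q.get()
    out_q.put((task_id, sum(data)))

if __name__ == "__main__":
    in_q, out_q = Queue(), Queue()
    chunks = [list(range(0, 100)), list(range(100, 200)), list(range(200, 300))]
    procs = [Process(target=worker, args=(i, in_q, out_q)) for i in range(len(chunks))]
    for p in procs:
        p.start()
    for chunk in chunks:
        in_q.put(chunk)                      # send work to the workers
    partials = [out_q.get() for _ in procs]  # collect partial results as messages
    for p in procs:
        p.join()
    print("total =", sum(value for _, value in partials))

The same pattern scales from one multicore machine (parallel computing) to multiple networked machines (distributed computing) when the queues are replaced by network messages.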
Internet of Things
• The traditional Internet connects machines to machines or web pages to web pages. The
concept of the IoT was introduced in 1999 at MIT .
• The IoT refers to the networked interconnection of everyday objects, tools, devices, or
computers. One can view the IoT as a wireless network of sensors that interconnect all
things in our daily life.
• It allows objects to be sensed and controlled remotely across existing network
infrastructure
Figure 1.2 shows the architecture of a typical server cluster built around a low-latency, high-
bandwidth interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a
LAN (e.g., Ethernet).
• To build a larger cluster with more nodes, the interconnection network can be built with
multiple levels of Gigabit Ethernet, or InfiniBand switches.
• Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable
clusters with an increasing number of nodes. The cluster is connected to the Internet via a
virtual private network (VPN) gateway.
• The gateway IP address locates the cluster. The system image of a computer is decided by
the way the OS manages the shared cluster resources.
Most clusters have loosely coupled node computers. All resources of a server node are
managed by their own OS. Thus, most clusters have multiple system images as a result of having
many autonomous nodes under different OS control.
1.3.1.2 Single-System Image(SSI)
• Ideal cluster should merge multiple system images into a single-system image (SSI).
• Cluster designers desire a cluster operating system or some middleware to support SSI at
various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes.
An SSI is an illusion created by software or hardware that presents a collection of resources as one
integrated, powerful resource. SSI makes the cluster appear like a single machine to the user. A
cluster with multiple system images is nothing but a collection of independent computers.
1.3.1.3 Hardware, Software, and Middleware Support
• Clusters exploring massive parallelism are commonly known as MPPs. Almost all HPC
clusters in the Top 500 list are also MPPs.
• The building blocks are computer nodes (PCs, workstations, servers, or SMP), special
communication software such as PVM, and a network interface card in each computer
node.
Most clusters run under the Linux OS. The computer nodes are interconnected by a high-
bandwidth network (such as Gigabit Ethernet, Myrinet, InfiniBand, etc.). Special cluster
middleware supports are needed to create SSI or high availability (HA). Both sequential and
parallel applications can run on the cluster, and special parallel environments are needed to
facilitate use of the cluster resources. For example, distributed memory has multiple images. Users
may want all distributed memory to be shared by all servers by forming distributed shared
memory (DSM). Many SSI features are expensive or difficult to achieve at various cluster
operational levels. Instead of achieving SSI, many clusters are loosely coupled machines. Using
virtualization, one can build many virtual clusters dynamically, upon user demand.
b. The Cloud Landscape
• The cloud ecosystem must be designed to be secure, trustworthy, and dependable. Some
computer users think of the cloud as a centralized resource pool. Others consider the cloud
to be a server cluster which practices distributed computing over all the servers.
Traditionally, a distributed computing system tends to be owned and operated by an
autonomous administrative domain (e.g., a research laboratory or company) for on-
premises computing needs.
• Cloud computing as an on-demand computing paradigm resolves or relieves us from these
problems.
Three cloud service models in the cloud landscape
Infrastructure as a Service (IaaS)
This model puts together infrastructures demanded by users—namely servers, storage,
networks, and the data center fabric.
• The user can deploy and run multiple VMs running guest operating systems for specific applications.
• The user does not manage or control the underlying cloud infrastructure, but can specify
when to request and release the needed resources.
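As a hedged illustration of requesting and releasing IaaS resources, the sketch below uses the boto3 client for Amazon EC2; the AMI ID is a placeholder and credentials/region are assumed to be configured already, so treat it as a sketch rather than a provider-recommended workflow.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request (provision) one small VM from a placeholder machine image.
resp = ec2.run_instances(ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
                         InstanceType="t3.micro",
                         MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]
print("provisioned", instance_id)

# ... deploy guest applications on the VM, without managing the underlying hardware ...

# Release the resource when it is no longer needed, so billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])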
Platform as a Service (PaaS)
• This model enables the user to deploy user-built applications onto a virtualized cloud platform.
PaaS includes middleware, databases, development tools, and some runtime support such as Web
2.0 and Java.
• The platform includes both hardware and software integrated with specific programming
interfaces.
• The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET). The user
is freed from managing the cloud infrastructure.
Software as a Service (SaaS)
• This refers to browser-initiated application software over thousands of paid cloud customers.
The SaaS model applies to business processes, industry applications, customer relationship
management (CRM), enterprise resource planning (ERP), human resources (HR), and
collaborative applications. On the customer side, there is no upfront investment in servers or
software licensing. On the provider side, costs are rather low, compared with conventional hosting
of user applications.
Reasons to adapt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing
paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies
Cloud computing is using the internet to access someone else's software running on
someone else's hardware in someone else's data center.
The user sees only one resource (hardware, OS) but virtually uses multiple OS and hardware
resources.
Cloud architecture effectively uses virtualization
A model of computation and data storage based on “pay as you go” access to “unlimited”
remote data center capabilities
A cloud infrastructure provides a framework to manage scalable, reliable, on-demand
access to applications
Cloud services provide the “invisible” backend to many of our mobile applications
High level of elasticity in consumption
Historical roots in today’s Internet apps
Search, email, social networks, e-com sites
File storage (Live Mesh, Mobile Me)
1.2 Definition
“The National Institute of Standards and Technology (NIST) defines cloud computing as
a "pay-per-use model for enabling available, convenient and on- demand network
access to a shared pool of configurable computing resources (e.g., networks, servers,
storage, applications and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction."
Essential Characteristics 1
On-demand self-service.
◦ A consumer can unilaterally provision computing capabilities such as server time
and network storage as needed automatically, without requiring human interaction
with a service provider.
Essential Characteristics 2
Broad network access.
◦ Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, laptops, and PDAs).
Essential Characteristics 3
Resource pooling.
◦ The provider’s computing resources are pooled to serve multiple consumers using
a multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand.
Essential Characteristics 4
Rapid elasticity.
◦ Capabilities can be rapidly and elastically provisioned - in some cases
automatically - to quickly scale out; and rapidly released to quickly scale in.
◦ To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be purchased in any quantity at any time.
Essential Characteristics 5
Measured service.
◦ Cloud systems automatically control and optimize resource usage by leveraging a
metering capability at some level of abstraction appropriate to the type of service.
◦ Resource usage can be monitored, controlled, and reported - providing
transparency for both the provider and consumer of the service.
Software as a Service (SaaS)
The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, data or even individual application
capabilities, with the possible exception of limited user specific application configuration
settings.
SaaS providers
Google’s Gmail, Docs, Talk etc
Microsoft’s Hotmail, Sharepoint
SalesForce,
Yahoo, Facebook
Infrastructure as a Service (IaaS)
IaaS is the delivery of technology infrastructure ( mostly hardware) as an on demand,
scalable service
◦ Usually billed based on usage
◦ Usually multi tenant virtualized environment
◦ Can be coupled with Managed Services for OS and application support
◦ User can choose his OS, storage, deployed app, networking components
Consumer is able to deploy and run arbitrary software, which may include operating
systems and applications.
The consumer does not manage or control the underlying cloud infrastructure but has
control over operating systems, storage, deployed applications, and possibly limited control
of select networking components (e.g., host firewalls).
IaaS providers
Amazon Elastic Compute Cloud (EC2)
◦ Each instance provides 1-20 processors, up to 16 GB RAM, 1.69 TB storage
RackSpace Hosting
◦ Each instance provides a 4-core CPU, up to 8 GB RAM, 480 GB storage
Joyent Cloud
◦ Each instance provides 8 CPUs, up to 32 GB RAM, 48 GB storage
Go Grid
◦ Each instance provides 1-6 processors, up to 15 GB RAM, 1.69 TB storage
PaaS providers
Google App Engine
◦ Python, Java, Eclipse
Microsoft Azure
◦ .Net, Visual Studio
Sales Force
◦ Apex, Web wizard
TIBCO,
VMware,
Zoho
Can be slow:
◦ Even with a fast connection, web-based applications can sometimes be slower
than accessing a similar software program on your desktop PC.
Disparate Protocols:
◦ Each cloud system uses different protocols and different APIs – standards are yet to
evolve.
II Hardware Evolution
In the 1930s, binary arithmetic was developed, paving the way for electronic
computer processing technology, terminology, and programming languages.
• In 1939, the electronic computer was developed;
computations were performed using vacuum-tube technology.
• In 1941, Konrad Zuse's Z3 was developed;
it supported both floating-point and binary arithmetic.
There are four generations
First Generation Computers
Second Generation Computers
Third Generation Computers
Fourth Generation Computers
a.First Generation Computers
Time Period : 1942 to 1955
Technology : Vacuum Tubes
Size : Very Large System
Processing : Very Slow
Examples:
1.ENIAC (Electronic Numerical Integrator and Computer)
2.EDVAC(Electronic Discrete Variable Automatic Computer)
Advantages:
• It made use of vacuum tubes, which were the advanced technology at that time
• Computations were performed in milliseconds.
Disadvantages:
• Very big in size; the weight was about 30 tons.
• Very costly.
• Required high power consumption.
• A large amount of heat was generated.
c.Third Generation Computers
Advantages:
Faster in computation, and size was reduced compared to the previous generation of
computers. Heat generated was small.
Less maintenance is required.
Disadvantages:
The Microprocessor design and fabrication are very complex.
Air Conditioning is required in many cases
Internet Hardware Evolution
Establishing a Common Protocol for the Internet
Evolution of Ipv6
Finding a Common Method to Communicate Using the Internet Protocol
Building a Common Interface to the Internet
The Appearance of Cloud Formations From One Computer to a Grid of Many
a.Establishing a Common Protocol for the Internet
NCP essentially provided a transport layer consisting of the ARPANET Host-to-Host
Protocol (AHHP) and the Initial Connection Protocol (ICP)
Application protocols
o File Transfer Protocol (FTP), used for file transfers,
o Simple Mail Transfer Protocol (SMTP), used for sending email
Four versions of TCP/IP
• TCP v1
• TCP v2
• TCP v3 and IP v3,
• TCP v4 and IP v4
b.Evolution of IPv6
IPv4 was never designed to scale to global levels.
To increase the available address space, it had to process larger data packets (i.e., more bits of
data).
To overcome these problems, the Internet Engineering Task Force (IETF) developed IPv6,
which was released in January 1995.
IPv6 is sometimes called the Next Generation Internet Protocol (IPng) or TCP/IP v6.
c.Finding a Common Method to Communicate Using the Internet Protocol
NLS (Engelbart's oN-Line System) was designed to cross-reference research papers for sharing among
geographically distributed researchers.
In 1989-1990, the Web was developed in Europe by Tim Berners-Lee and Robert Cailliau
d.Building a Common Interface to the Internet
Berners-Lee developed the first web browser, featuring an integrated editor that could
create hypertext documents.
Following this initial success, Berners-Lee enhanced the server and browser by adding
support for the FTP (File Transfer protocol)
e.The Appearance of Cloud Formations From One Computer to a Grid of Many
The Globus Toolkit is an open source software toolkit used for building grid systems and
applications
Virtualization technology is a way of reducing the majority of hardware acquisition and
maintenance costs, which can result in significant savings for any company.
Parallel Processing
Vector Processing
Symmetric Multiprocessing Systems
Massively Parallel Processing Systems
a.Parallel Processing
Parallel processing is performed by the simultaneous execution of program instructions
that have been allocated across multiple processors.
Objective: running a program in less time.
The next advancement in parallel processing was multiprogramming.
In a multiprogramming system, multiple programs are submitted by users, but each is allowed
to use the processor only for a short time.
This approach is known as "round-robin scheduling" (RR scheduling).
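A minimal sketch (illustrative only, with made-up job names and times) of the round-robin time slicing described above:

from collections import deque

def round_robin(jobs, quantum):
    # jobs: list of (name, remaining_time); each job runs for at most one quantum per turn
    queue, clock = deque(jobs), 0
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        clock += ran
        if remaining > ran:
            queue.append((name, remaining - ran))   # unfinished job goes to the back of the queue
        else:
            print(f"{name} finished at time {clock}")

round_robin([("job1", 5), ("job2", 3), ("job3", 8)], quantum=2)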
b.Vector Processing
Vector processing was developed to increase processing performance by operating in a
multitasking manner.
Matrix operations were added to computers to perform arithmetic operations.
This was valuable in certain types of applications in which data occurred in the form of
vectors or matrices.
In applications with less well-formed data, vector processing was less valuable.
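To show what well-formed data in the form of vectors or matrices buys, here is a small illustrative NumPy sketch (array sizes are arbitrary): a single whole-array operation replaces an explicit element-by-element loop.

import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

c = 2.0 * a + b              # one vector operation applied to every element at once
m = np.random.rand(500, 500)
v = m @ c[:500]              # matrix-vector product, the kind of workload vector units target
print(c[:3], v.shape)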
c.Symmetric Multiprocessing Systems
Symmetric multiprocessing systems (SMP) were developed to address the problem of
resource management in master/slave models.
In SMP systems, each processor is equally capable and responsible for managing the
workflow as it passes through the system.
The primary goal is to achieve sequential consistency
d.Massively Parallel Processing Systems
A massively parallel processing (MPP) system is a computer system with many independent
arithmetic units, which run in parallel.
All the processing elements are interconnected to act as one very large computer.
Early examples of MPP systems were the Distributed Array Processor, the Goodyear
MPP, the Connection Machine, and the Ultracomputer.
MPP machines are not easy to program, but for certain applications, such as data mining,
they are the best solution
Elasticity
Elasticity also introduces an important new factor: speed.
Rapid provisioning and deprovisioning are key to maintaining acceptable
performance in the context of cloud computing.
Quality of service is subject to a service-level agreement (SLA).
Classification
Elasticity solutions can be arranged in different classes based on
Scope
Policy
Purpose
Method
a.Scope
Elasticity can be implemented on any of the cloud layers.
Most commonly, elasticity is achieved on the IaaS level, where the resources to be
provisioned are virtual machine instances.
Other infrastructure services can also be scaled
On the PaaS level, elasticity consists of scaling containers or databases, for instance.
Finally, both PaaS and IaaS elasticity can be used to implement elastic applications, be it
for private use or in order to be provided as a SaaS
The elasticity actions can be applied either at the infrastructure or application/platform
level.
The elasticity actions perform the decisions made by the elasticity strategy or
management system to scale the resources.
Google App Engine and Azure elastic pool are examples of elastic Platform as a Service
(PaaS).
Elasticity actions can be performed at the infrastructure level where the elasticity
controller monitors the system and takes decisions.
The cloud infrastructures are based on the virtualization technology, which can be VMs
or containers.
In embedded elasticity, elastic applications are able to adjust their own resources
according to runtime requirements or due to changes in the execution flow.
This requires knowledge of the applications' source code.
Application Map: The elasticity controller must have a complete map of the application
components and instances.
Code embedded: The elasticity controller is embedded in the application source code.
The elasticity actions are performed by the application itself.
While moving the elasticity controller into the application source code eliminates the need for
external monitoring systems, a specialized controller is required for each application.
b.Policy
Elastic solutions can be either manual or automatic.
A manual elastic solution would provide their users with tools to monitor their systems
and add or remove resources but leaves the scaling decision to them.
Automatic mode: All the actions are done automatically, and this could be classified into
reactive and proactive modes.
Elastic solutions can be either reactive or predictive
Reactive mode: The elasticity actions are triggered based on certain thresholds or rules, the system
reacts to the load (workload or resource utilization) and triggers actions to adapt changes
accordingly.
An elastic solution is reactive when it scales a posteriori, based on a monitored change in
the system.
These are generally implemented by a set of Event-Condition-Action rules.
Proactive mode: This approach implements forecasting techniques, anticipates the future needs
and triggers actions based on this anticipation.
A predictive or proactive elasticity solution uses its knowledge of either recent history or
load patterns inferred from longer periods of time in order to predict the upcoming load of
the system and scale according to it.
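The difference between the two policies can be sketched in a few lines of Python (illustrative only; the thresholds, metric, and capacity figures are made up):

# Reactive: an Event-Condition-Action rule applied to a monitored metric.
def reactive_policy(cpu_util, instances, upper=0.80, lower=0.30):
    if cpu_util > upper:                       # condition: overload detected
        return instances + 1                   # action: scale out
    if cpu_util < lower and instances > 1:     # condition: underload detected
        return instances - 1                   # action: scale in
    return instances

# Proactive: forecast the next load from recent history and scale ahead of it.
def proactive_policy(request_history, capacity_per_instance=100.0):
    forecast = sum(request_history[-3:]) / 3   # naive moving-average forecast
    return max(1, int(forecast // capacity_per_instance) + 1)

print(reactive_policy(0.90, instances=2))      # -> 3 (scale out after the load arrives)
print(proactive_policy([250, 300, 320]))       # -> 3 (scale before the load arrives)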
c.Purpose
An elastic solution can have many purposes.
The first one to come to mind is naturally performance, in which case the focus should be
put on the speed of the scaling actions.
Another purpose for elasticity can also be energy efficiency, where using the minimum
amount of resources is the dominating factor.
Other solutions intend to reduce the cost by multiplexing either resource providers or
elasticity methods
Elasticity has different purposes such as improving performance, increasing resource
capacity, saving energy, reducing cost and ensuring availability.
Once we look to the elasticity objectives, there are different perspectives.
Cloud IaaS providers try to maximize the profit by minimizing the resources while
offering a good Quality of Service (QoS),
PaaS providers seek to minimize the cost they pay to the cloud (IaaS) provider.
The customers (end-users) search to increase their Quality of Experience (QoE) and to
minimize their payments.
QoE is the degree of delight or annoyance of the user of an application or service
d.Method
Vertical elasticity changes the amount of resources linked to existing instances on the fly.
This can be done in two manners.
The first method consists of explicitly resizing a virtual machine instance, i.e.,
changing the quota of physical resources allocated to it.
This is however poorly supported by common operating systems as they fail to take into
account changes in CPU or memory without rebooting, thus resulting in service
interruption.
The second vertical scaling method involves VM migration: moving a virtual machine
instance to another physical machine with a different overall load changes its available
resources
Horizontal scaling is the process of adding/removing instances, which may be located at
different locations.
Load balancers are used to distribute the load among the different instances.
Vertical scaling is the process of modifying resources (CPU, memory, storage or both)
size for an instance at run time.
It gives more flexibility for the cloud systems to cope with the varying workloads
Migration
Migration can also be considered a necessary action to allow further vertical scaling
when there are not enough resources on the host machine.
It is also used for other purposes, such as migrating a VM to a less loaded physical
machine just to guarantee its performance.
Several types of migration are deployed, such as live migration and non-live migration.
Live migration has two main approaches
post-copy
pre-copy
Post-copy migration suspends the migrating VM, copies minimal processor state to the
target host, resumes the VM and then begins fetching memory pages from the source.
In the pre-copy approach, the memory pages are copied while the VM is running on the source.
If some pages are changed during the copy process (dirty pages), they are recopied in further
rounds until the set of newly dirtied pages stops shrinking (or a threshold is reached); the source
VM is then stopped.
The remaining dirty pages will be copied to the destination VM.
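The pre-copy loop can be summarized with a toy Python model (purely illustrative; the MockVM class and the page-dirtying rate are invented for this sketch, not a real hypervisor interface):

import random

class MockVM:
    # Toy stand-in for a running VM whose memory keeps being dirtied while it runs.
    def __init__(self, n_pages):
        self.n_pages = n_pages
    def pages_dirtied_during(self, copied):
        # assume roughly a third of the pages copied in a round get dirtied again
        return random.sample(range(self.n_pages), k=max(1, copied // 3))

def pre_copy_migrate(vm, stop_threshold=20, max_rounds=10):
    to_copy = list(range(vm.n_pages))             # round 0: copy every page; the VM keeps running
    for rnd in range(max_rounds):
        print(f"round {rnd}: copying {len(to_copy)} pages while the VM runs on the source")
        to_copy = vm.pages_dirtied_during(len(to_copy))
        if len(to_copy) <= stop_threshold:        # dirty set is small enough to stop
            break
    print(f"stop-and-copy: pause the VM, copy the last {len(to_copy)} dirty pages, resume on the target")

pre_copy_migrate(MockVM(1000))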
Architecture
The architecture of the elasticity management solutions can be either centralized or
decentralized.
Centralized architecture has only one elasticity controller, i.e., the auto scaling system
that provisions and deprovisions resources.
In decentralized solutions, the architecture is composed of many elasticity controllers or
application managers, which are responsible for provisioning resources for different cloud-
hosted platforms
Provider
Elastic solutions can be applied to a single or multiple cloud providers.
A single cloud provider can be either public or private with one or multiple regions or
datacenters.
Multiple clouds in this context means more than one cloud provider.
It includes hybrid clouds that can be private or public, in addition to the federated clouds
and cloud bursting.
Most of the elasticity solutions support only a single cloud provider
Resource Provisioning
In order to achieve the goal, the cloud user has to request the cloud service provider to
provision the resources either statically or dynamically,
so that the cloud service provider will know how many instances of the resources, and
which resources, are required for a particular application.
By provisioning the resources, QoS parameters like availability, throughput, security,
response time, reliability, performance, etc. must be achieved without violating the SLA.
There are two types
Static Provisioning
Dynamic Provisioning
Static Provisioning
For applications that have predictable and generally unchanging demands/workloads, it is
possible to use “static provisioning" effectively.
With advance provisioning, the customer contracts with the provider for services.
The provider prepares the appropriate resources in advance of start of service.
The customer is charged a flat fee or is billed on a monthly basis.
Dynamic Provisioning
In cases where demand by applications may change or vary, “dynamic provisioning"
techniques have been suggested whereby VMs may be migrated on-the-fly to new compute
nodes within the cloud.
The provider allocates more resources as they are needed and removes them when they are
not.
The customer is billed on a pay-per-use basis.
When dynamic provisioning is used to create a hybrid cloud, it is sometimes referred to as
cloud bursting.
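A tiny sketch of the cloud-bursting placement decision (the capacity figure and function are hypothetical): demand is served from the private cloud while capacity lasts, and the overflow is provisioned dynamically in a public cloud.

LOCAL_CAPACITY = 20   # VMs the private datacenter can host (made-up figure)

def place_request(active_local_vms, needed_vms):
    # Serve demand locally while capacity allows; burst the overflow to the public cloud.
    free_local = LOCAL_CAPACITY - active_local_vms
    local = min(max(free_local, 0), needed_vms)
    burst = needed_vms - local
    return {"private_cloud": local, "public_cloud": burst}

print(place_request(active_local_vms=18, needed_vms=5))   # {'private_cloud': 2, 'public_cloud': 3}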
Parameters for Resource Provisioning
Response time
Minimize Cost
Revenue Maximization
Fault tolerant
Reduced SLA Violation
Reduced Power Consumption
Response time: The resource provisioning algorithm designed must take minimal time to
respond when executing the task.
Minimize Cost: From the Cloud user point of view cost should be minimized.
Revenue Maximization: This is to be achieved from the Cloud Service Provider’s view.
Fault tolerant: The algorithm should continue to provide service in spite of failure of nodes.
Reduced SLA Violation: The algorithm designed must be able to reduce SLA violation.
Reduced Power Consumption: VM placement & migration techniques must lower power
consumption
Separation of Resource Provisioning from Job Management
• New virtualization layer between the service and the infrastructure layers
• Seamless integration with the existing middleware stacks.
• Completely transparent to the computing service and thus to end users
Cluster Partitioning
• Dynamic partition of the infrastructure
• Isolate workloads (several computing clusters)
• Dedicated HA partitions
Infrastructure Cloud Computing Solutions
• Commercial Cloud: Amazon EC2
• Scientific Cloud: Nimbus (University of Chicago)
• Open-source Technologies
• Globus VWS (Globus interfaces)
• Eucalyptus (Interfaces compatible with Amazon EC2)
• OpenNebula (Engine for the Virtual Infrastructure)
On-demand Access to Cloud Resources
• Supplement local resources with cloud resources to satisfy peak or fluctuating demands
The National Institute of Standards and Technology (NIST) defines cloud computing as a
"pay-per-use model for enabling available, convenient and on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction."
Architecture
□ Architecture consists of 3 tiers
◦ Cloud Deployment Model
◦ Cloud Service Model
◦ Essential Characteristics of Cloud Computing .
Essential Characteristics 1
□ On-demand self-service.
◦ A consumer can unilaterally provision computing capabilities such as server
time and network storage as needed automatically, without requiring human
interaction with a service provider.
Essential Characteristics 2
□ Broad network access.
◦ Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, laptops, and PDAs) as well as other traditional or
cloudbased software services.
Essential Characteristics 3
□ Resource pooling.
◦ The provider’s computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand.
Essential Characteristics 4
□ Rapid elasticity.
◦ Capabilities can be rapidly and elastically provisioned - in some cases
automatically - to quickly scale out; and rapidly released to quickly scale in.
◦ To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be purchased in any quantity at any time.
Essential Characteristics 5
□ Measured service.
◦ Cloud systems automatically control and optimize resource usage by leveraging
a metering capability at some level of abstraction appropriate to the type of
service.
Resource usage can be monitored, controlled, and reported - providing transparency for both
the provider and consumer of the service.
Cloud Computing Reference Architecture:
Example Usage Scenario 1:
□ A cloud consumer may request service from a cloud broker instead of contacting a
cloud provider directly.
□ The cloud broker may create a new service by combining multiple services or by
enhancing an existing service.
Usage Scenario- Cloud Brokers
□ In this example, the actual cloud providers are invisible to the cloud consumer.
□ The cloud consumer interacts directly with the cloud broker.
Cloud Consumer
□ The cloud consumer is the principal stakeholder for the cloud computing service.
□ A cloud consumer represents a person or organization that maintains a business
relationship with, and uses the service from a cloud provider.
The cloud consumer may be billed for the service provisioned, and needs to arrange
payments accordingly.
Example Services Available to a Cloud Consumer
□ The consumers of SaaS can be organizations that provide their members with access
to software applications, end users or software application administrators.
□ SaaS consumers can be billed based on the number of end users, the time of use, the
network bandwidth consumed, the amount of data stored or duration of stored data.
□ Cloud consumers of PaaS can employ the tools and execution resources provided by
cloud providers to develop, test, deploy and manage the applications.
□ PaaS consumers can be application developers or application testers who run and test
applications in cloud-based environments.
□ PaaS consumers can be billed according to processing, database storage and network
resources consumed.
□ Consumers of IaaS have access to virtual computers, network-accessible storage &
network infrastructure components.
□ The consumers of IaaS can be system developers, system administrators and IT
managers.
□ IaaS consumers are billed according to the amount or duration of the resources
consumed, such as CPU hours used by virtual computers, volume and duration of data
stored.
Cloud Provider
□ A cloud provider is a person or an organization;
□ It is the entity responsible for making a service available to interested parties.
□ A Cloud Provider acquires and manages the computing infrastructure required for
providing the services.
□ Runs the cloud software that provides the services.
Makes arrangement to deliver the cloud services to the Cloud Consumers through network
access.
Cloud Auditor
□ A cloud auditor is a party that can perform an independent examination of cloud
service controls.
□ Audits are performed to verify conformance to standards through review of objective
evidence.
□ A cloud auditor can evaluate the services provided by a cloud provider in terms of
security controls, privacy impact, performance, etc.
Cloud Broker
□ Integration of cloud services can be too complex for cloud consumers to manage.
□ A cloud consumer may request cloud services from a cloud broker, instead of
contacting a cloud provider directly.
□ A cloud broker is an entity that manages the use, performance and delivery of cloud
services. Negotiates relationships between cloud providers and cloud consumers.
Services of cloud broker
Service Intermediation:
□ A cloud broker enhances a given service by improving some specific capability and
providing value-added services to cloud consumers.
Service Aggregation:
□ A cloud broker combines and integrates multiple services into one or more new
services.
□ The broker provides data integration and ensures the secure data movement between
the cloud consumer and multiple cloud providers.
Services of cloud broker
Service Arbitrage:
□ Service arbitrage is similar to service aggregation except that the services being
aggregated are not fixed.
□ Service arbitrage means a broker has the flexibility to choose services from multiple
agencies.
E.g., the cloud broker can use a credit-scoring service to measure and select an agency with
the best score.
Cloud Carrier
□ A cloud carrier acts as an intermediary that provides connectivity and transport of
cloud services between cloud consumers and cloud providers.
□ Cloud carriers provide access to consumers through network.
□ The distribution of cloud services is normally provided by network and
telecommunication carriers or a transport agent
□ A transport agent refers to a business organization that provides physical transport of
storage media such as high-capacity hard drives and other access devices.
Scope of Control between Provider and Consumer
The Cloud Provider and Cloud Consumer share the control of resources in a cloud system
Cloud Deployment Model
□ Public Cloud
□ Private Cloud
□ Hybrid Cloud
□ Community Cloud
Public cloud
□ A public cloud is one in which the cloud infrastructure and computing resources are
made available to the general public over a public network.
□ A public cloud is meant to serve a multitude (huge number) of users, not a single
customer.
□ A fundamental characteristic of public clouds is multitenancy.
□ Multitenancy allows multiple users to work in a software environment at the same
time, each with their own resources.
□ Built over the Internet (i.e., the service provider offers resources, applications and storage to
the customers over the internet) and can be accessed by any user.
□ Owned by service providers and are accessible through a subscription.
□ Best option for small enterprises, which are able to start their businesses without
large up-front (initial) investment.
□ By renting the services, customers are able to dynamically upsize or downsize their
IT according to the demands of their business.
□ Services are offered on a price-per-use basis.
□ Promotes standardization and preserves capital investment
□ Public clouds have geographically dispersed datacenters to share the load of users and
better serve them according to their locations
□ Provider is in control of the infrastructure
Examples:
o Amazon EC2 is a public cloud that provides Infrastructure as a Service
o Google AppEngine is a public cloud that provides Platform as a Service
o SalesForce.com is a public cloud that provides software as a service.
Advantage
□ Offers unlimited scalability – on demand resources are available to meet your
business needs.
□ Lower costs—no need to purchase hardware or software and you pay only for the
service you use.
□ No maintenance - Service provider provides the maintenance.
□ Offers reliability: Vast number of resources are available so failure of a system will
not interrupt service.
□ Services like SaaS, PaaS, IaaS are easily available on Public Cloud platform as it can
be accessed from anywhere through any Internet enabled devices.
□ Location independent – the services can be accessed from any location
Disadvantage
□ No control over privacy or security
□ Cannot be used for sensitive applications (government and military agencies
will not consider the public cloud)
□ Lacks complete flexibility (since it is dependent on the provider)
□ No stringent (strict) protocols regarding data management
Private Cloud
□ Cloud services are used by a single organization, which are not exposed to the public
□ Services are always maintained on a private network and the hardware and software
are dedicated only to single organization
□ A private cloud is physically located either
on the organization's premises [on-site private cloud], or
outsourced (given) to a third party [outsourced private cloud]
□ It may be managed either by
the cloud consumer organization, or
by a third party
□ Private clouds are used by
government agencies
financial institutions
Mid size to large-size organisations.
Hybrid Cloud
□ Built with both public and private clouds
□ It is a heterogeneous cloud resulting from the combination of private and public clouds.
□ Private cloud are used for
sensitive applications are kept inside the organization’s network
business-critical operations like financial reporting
□ Public Cloud are used when
Other services are kept outside the organization’s network
high-volume of data
Lower-security needs such as web-based email (Gmail, Yahoo Mail, etc.)
□ The resources or services are temporarily leased for the time required and then
released. This practice is also known as cloud bursting.
Fig: Hybrid Cloud
Advantage
□ It is scalable
□ Offers better security
□ Flexible-Additional resources are availed in public cloud when needed
□ Cost-effectiveness—we have to pay for extra resources only when needed.
□ Control - Organisation can maintain a private infrastructure for sensitive application
Disadvantage
□ Infrastructure Dependency
□ Possibility of a security breach (violation) through the public cloud
Difference in management:
◦ Public cloud: the Cloud Service Provider (CSP) manages the cloud.
◦ Private cloud: managed by the consumer organization itself or by a contracted third party.
◦ Hybrid cloud: management is shared between the organization (private part) and the CSP (public part).
Cloud Service Models
□ Software as a Service (SaaS)
□ Platform as a Service (PaaS)
□ Infrastructure as a Service (IaaS)
These models are offered based on various SLAs between providers and users
SLA of cloud computing covers
o service availability
o performance
o data protection
o security
Software as a Service (SaaS) (Complete software offering on the cloud)
□ SaaS is a licensed software offering on the cloud, paid per use
□ SaaS is a software delivery methodology that provides licensed multi-tenant access to
software and its functions remotely as a Web-based service.
◦ Usually billed based on usage
◦ Usually a multi-tenant environment
◦ Highly scalable architecture
□ Customers do not invest on software application programs.
□ The capability provided to the consumer is to use the provider’s applications running
on a cloud infrastructure.
□ The applications are accessible from various client devices through a thin client
interface such as a web browser (e.g., web-based email).
□ The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, data or even individual application
capabilities, with the possible exception of limited user specific application
configuration settings.
□ On the customer side, there is no upfront investment in servers or software licensing.
□ It is a "one-to-many" software delivery model, whereby an application is shared
across multiple users
□ Characteristics of an Application Service Provider (ASP)
o Product sold to customer is application access.
o Application is centrally managed by Service Provider.
o Service delivered is one-to-many customers
o Services are delivered on the contract
E.g. Gmail and Docs, Microsoft SharePoint, and CRM (Customer Relationship
Management) software
□ SaaS providers
□ Google’s Gmail, Docs, Talk etc
□ Microsoft’s Hotmail, Sharepoint
□ SalesForce,
□ Yahoo
□ Facebook
Infrastructure as a Service (IaaS) ( Hardware offerings on the cloud)
IaaS is the delivery of technology infrastructure (mostly hardware) as an on demand,
scalable service .
◦ Usually billed based on usage
◦ Usually multi tenant virtualized environment
◦ Can be coupled with Managed Services for OS and application support
◦ User can choose his OS, storage, deployed app, networking components
◦ The capability provided to the consumer is to provision processing, storage,
networks, and other fundamental computing resources.
◦ Consumer is able to deploy and run arbitrary software, which may include
operating systems and applications.
◦ The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage and deployed applications.
IaaS providers
□ Amazon Elastic Compute Cloud (EC2)
◦ Each instance provides 1-20 processors, up to 16 GB RAM, 1.69 TB storage
□ RackSpace Hosting
◦ Each instance provides a 4-core CPU, up to 8 GB RAM, 480 GB storage
□ Joyent Cloud
◦ Each instance provides 8 CPUs, up to 32 GB RAM, 48 GB storage
□ Go Grid
◦ Each instance provides 1-6 processors, up to 15 GB RAM, 1.69 TB storage
Platform as a Service (PaaS)
Application management is the core functionality of the middleware
Provides runtime(execution) environment
Developers design their applications in the execution environment.
Developers need not concern about hardware (physical or virtual), operating systems, and
other resources.
PaaS core middleware manages the resources and scaling of applications on demand.
PaaS offers either
o an execution environment and hardware resources (infrastructure), or
o software that is installed on the user premises
PaaS: the service provider provides the execution environment and hardware resources
(infrastructure)
Characteristics of PaaS
Runtime framework: Executes end-user code according to the policies set by the user and
the provider.
Abstraction: PaaS helps to deploy (install) and manage applications on the cloud.
Automation: Automates the process of deploying applications to the infrastructure,
additional resources are provided when needed.
Cloud services: helps developers to simplify the creation and delivery of cloud
applications.
PaaS providers
□ Google App Engine
◦ Python, Java, Eclipse
□ Microsoft Azure
◦ .Net, Visual Studio
□ Sales Force
◦ Apex, Web wizard
□ TIBCO,
□ VMware,
□ Zoho
Cloud Computing – Services
Software as a Service - SaaS
Platform as a Service - PaaS
Infrastructure as a Service - IaaS
Category: PaaS-I
Description: Execution platform is provided along with hardware resources (infrastructure)
Product Type: Middleware + Infrastructure
Vendors and Products: Force.com, Longjump

Category: PaaS-II
Description: Execution platform is provided with additional components
Product Type: Middleware + Infrastructure; Middleware
Vendors and Products: Google App Engine
Solution:
o Some SaaS providers provide the opportunity to defend against DDoS attacks by using
quick scale-ups.
Customers cannot easily extract their data and programs from one site to run on another.
Solution:
o Have standardization among service providers so that customers can deploy (install)
services and data across multiple cloud providers.
Data Lock-in
It is a situation in which a customer using service of a provider cannot be moved to another
service provider because technologies used by a provider will be incompatible with other
providers.
This makes a customer dependent on a vendor for services and makes customer unable to
use service of another vendor.
Solution:
o Have standardization (in technologies) among service providers so that customers can
easily move from a service provider to another.
VM Rootkit: a collection of malicious (harmful) computer software, designed to enable
access to a computer that is not otherwise allowed.
A man-in-the-middle (MITM) attack is a form of eavesdropping (spying) where
communication between two users is monitored and modified by an unauthorized party.
o Man-in-the-middle attack may take place during VM migrations [virtual machine (VM)
migration - VM is moved from one physical host to another host].
Passive attacks steal sensitive data or passwords.
Active attacks may manipulate (control) kernel data structures which will cause major
damage to cloud servers.
Challenge 5: Cloud Scalability, Interoperability and Standardization
Cloud Scalability
Cloud resources are scalable. Cost increases when storage and network bandwidth are
scaled up (increased).
Interoperability
Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and
extensible format for the packaging and distribution of VMs.
OVF defines a transport mechanism for VMs that can be applied to different virtualization
platforms
Standardization
Cloud standardization should give a virtual machine the ability to run on any virtualization
platform.
Cloud Storage
Storing your data on the storage of a cloud service provider rather than on a local system.
Data stored on the cloud are accessed through Internet.
Cloud Service Provider provides Storage as a Service
Storage as a Service
□ Third-party provider rents space on their storage to cloud users.
□ Customers move to cloud storage when they lack the budget for having their own storage.
□ Storage service providers take the responsibility for current backups, replication,
and disaster recovery needs.
□ Small and medium-sized businesses can make use of Cloud Storage
□ Storage is rented from the provider using a
o cost-per-gigabyte-stored (or)
o cost-per-data-transferred
□ The end user doesn’t have to pay for infrastructure (resources), they have to pay only for
how much they transfer and save on the provider’s storage.
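To illustrate the two pricing styles just mentioned, a small worked example (the rates are invented for the example, not any provider's real prices):

def storage_bill(gb_stored, gb_transferred,
                 rate_per_gb_stored=0.02, rate_per_gb_transferred=0.09):
    # cost-per-gigabyte-stored plus cost-per-data-transferred
    return gb_stored * rate_per_gb_stored + gb_transferred * rate_per_gb_transferred

# 500 GB kept on the provider's storage and 120 GB transferred in a month:
print(storage_bill(500, 120))   # 500*0.02 + 120*0.09 = 20.8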
5.2 Providers
□ Google Docs allows users to upload documents, spreadsheets, and presentations to
Google’s data servers.
□ Those files can then be edited using a Google application.
□ Web email providers like Gmail, Hotmail, and Yahoo! Mail, store email messages on
their own servers.
□ Users can access their email from computers and other devices connected to the Internet.
□ Flickr and Picasa host millions of digital photographs; users can create their own online
photo albums.
□ YouTube hosts millions of user-uploaded video files.
□ Hostmonster and GoDaddy store files and data for many client web sites.
□ Facebook and MySpace are social networking sites and allow members to post pictures
and other content. That content is stored on the company’s servers.
□ MediaMax and Strongspace offer storage space for any kind of digital data.
Data Security
□ To secure data, most systems use a combination of techniques:
o Encryption
o Authentication
o Authorization
Encryption
o Algorithms are used to encode information. To decode the information keys are required.
Authentication processes
o This requires a user to create a name and password.
Authorization practices
o The client lists the people who are authorized to access information stored on the cloud
system.
For information stored on the cloud, for example, the head of the IT department might have complete and
free access to everything.
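A minimal sketch of the encryption technique on the client side, using the third-party cryptography package's Fernet recipe (key management is deliberately oversimplified here): data is encrypted before it ever reaches the provider.

from cryptography.fernet import Fernet   # third-party package: pip install cryptography

key = Fernet.generate_key()              # the client keeps this key; losing it means losing the data
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: account 42, balance 1000")
# ...upload ciphertext to the cloud storage provider here...
plaintext = f.decrypt(ciphertext)        # only a holder of the key can decode the information
print(plaintext)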
Reliability
□ Service Providers gives reliability for data through redundancy (maintaining multiple
copies of data).
Reputation is important to cloud storage providers. If there is a perception that the provider is
unreliable, they won’t have many clients.
Advantages
□ Cloud storage providers balance server loads.
□ Data is moved among various datacenters, ensuring that information is stored close to where
it is used and is thereby available quickly.
□ It allows to protect the data in case there’s a disaster.
□ Some products are agent-based and the application automatically transfers
information to the cloud via FTP
Cautions
□ Don’t commit everything to the cloud, but use it for a few, noncritical purposes.
□ Large enterprises might have difficulty with vendors like Google or Amazon.
□ Forced to rewrite solutions for their applications.
□ Lack of portability.
Theft (Disadvantage)
□ User data could be stolen or viewed by those who are not authorized to see it.
□ Whenever user data leaves the user's own datacenter, there is a risk of trouble from a
security point of view.
□ If users store data on the cloud, they should make sure the data is encrypted and that data in
transit is secured with technologies like SSL.
Design Requirements
Amazon built S3 to fulfill the following design requirements:
• Scalable Amazon S3 can scale in terms of storage, request rate, and users to support an
unlimited number of web-scale applications.
• Reliable Store data durably, with 99.99 percent availability. Amazon says it does not
allow any downtime.
• Fast Amazon S3 was designed to be fast enough to support high-performance applications.
Server-side latency must be insignificant relative to Internet latency. Any performance
bottlenecks can be fixed by simply adding nodes to the system.
• Inexpensive Amazon S3 is built from inexpensive commodity hardware components. As a
result, frequent node failure is the norm and must not affect the overall system. It must be
hardware-agnostic, so that savings can be captured as Amazon continues to drive down
infrastructure costs.
• Simple Building highly scalable, reliable, fast, and inexpensive storage is difficult. Doing so
in a way that makes it easy to use for any application anywhere is more difficult. Amazon S3
must do both.
Design Principles
Amazon used the following principles of distributed system design to meet Amazon S3
requirements:
• Decentralization It uses fully decentralized techniques to remove scaling bottlenecks and
single points of failure.
• Autonomy The system is designed such that individual components can make decisions
based on local information.
• Local responsibility Each individual component is responsible for achieving its
consistency; this is never the burden of its peers.
• Controlled concurrency Operations are designed such that no or limited concurrency
control is required.
• Failure toleration The system considers the failure of components to be a normal mode of
operation and continues operation with no or minimal interruption.
• Controlled parallelism Abstractions used in the system are of such granularity that
parallelism can be used to improve performance and robustness of recovery or the introduction
of new nodes.
• Small, well-understood building blocks Do not try to provide a single service that does
everything for everyone, but instead build small components that can be used as building blocks
for other services.
• Symmetry Nodes in the system are identical in terms of functionality, and require no or
minimal node-specific configuration to function.
• Simplicity The system should be made as simple as possible, but no simpler.
How S3 Works
Amazon keeps its lips pretty tight about how S3 works, but according to Amazon, S3’s
design aims to provide scalability, high availability, and low latency at commodity costs. S3
stores arbitrary objects at up to 5GB in size, and each is accompanied by up to 2KB of
metadata. Objects are organized by buckets. Each bucket is owned by an AWS account and
the buckets are identified by a unique, user-assigned key.
Buckets and objects are created, listed, and retrieved using either a REST-style or SOAP
interface.
Objects can also be retrieved using the HTTP GET interface or via BitTorrent. An
access control list restricts who can access the data in each bucket. Bucket names and keys are
formulated so that they can be accessed using HTTP. Requests are authorized using an access
control list associated with each bucket and object, for instance:
http://s3.amazonaws.com/examplebucket/examplekey
http://examplebucket.s3.amazonaws.com/examplekey
The Amazon AWS Authentication tools allow the bucket owner to create an
authenticated URL with a set amount of time that the URL will be valid.
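A hedged boto3 sketch of the operations described above (the bucket and key names are the placeholders from the text, and AWS credentials are assumed to be configured already):

import boto3

s3 = boto3.client("s3")
bucket, key = "examplebucket", "examplekey"      # placeholder names from the text

s3.create_bucket(Bucket=bucket)                  # each bucket is owned by an AWS account
s3.put_object(Bucket=bucket, Key=key, Body=b"hello from S3")

# Authenticated URL that stays valid for a set amount of time (here, one hour).
url = s3.generate_presigned_url("get_object",
                                Params={"Bucket": bucket, "Key": key},
                                ExpiresIn=3600)
print(url)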
Applications
P2P architecture works best when there are lots of active peers in an active network, so new
peers joining the network can easily find other peers to connect to. If a large number of peers
drop out of the network, there are still enough remaining peers to pick up the slack. If there are
only a few peers, there are fewer resources available overall. For example, in a P2P file-sharing
application, the more popular a file is (meaning that lots of peers are sharing it), the faster it can
be downloaded.
P2P works best if the workload is split into small chunks that can be reassembled later. This way,
a large number of peers can work simultaneously on one task and each peer has less work to do.
In the case of P2P file-sharing, a file can be broken down so that a peer can download many
chunks of the file from different peers at the same time.
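A short sketch of the chunking idea (sizes and data are arbitrary): the file is split into fixed-size pieces that different peers could serve, and the pieces can be reassembled in any order of arrival.

CHUNK_SIZE = 256 * 1024   # 256 KB per chunk (arbitrary choice)

def split_into_chunks(data, size=CHUNK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(indexed_chunks):
    # chunks may arrive from different peers out of order, so they are keyed by index
    return b"".join(chunk for _, chunk in sorted(indexed_chunks))

data = b"x" * (1024 * 1024)                        # stand-in for a 1 MB shared file
indexed = list(enumerate(split_into_chunks(data)))
assert reassemble(indexed[::-1]) == data           # order-independent reassembly
print(len(indexed), "chunks")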
Some uses of P2P architecture:
● File sharing
● Instant messaging
● Voice Communication
● Collaboration
● High Performance Computing
Some examples of P2P architecture:
● Napster - it was shut down in 2001 since they used a centralized tracking server
● BitTorrent - popular P2P file-sharing protocol, usually associated with piracy
● Skype - it used to use proprietary hybrid P2P protocol, now uses client-server model after
Microsoft’s acquisition
● Bitcoin - P2P cryptocurrency without a central monetary authority
Advantages/Change Resilience
P2P networks have many advantages. For example, there is no central server to maintain and
to pay for (disregarding tracking servers), so this type of networks can be more economical.
That also means there is no need for a network operating system, thus lowering cost even
further. Another advantage would be there is no single point of failure, unless in the very
unlikely case that the network is very small. P2P networks are very resilient to the change in
peers; if one peer leaves, there is minimal impact on the overall network. If a large group of
peers join the network at once, the network can handle the increased load easily. Due to its
decentralized nature, P2P networks can survive attacks fairly well since there is no centralized
server.
Disadvantages
P2P networks introduce many security concerns. If one peer is infected with a virus and uploads
a chunk of the file that contains the virus, it can quickly spread to other peers. Also, if there
are many peers in the network, it can be difficult to ensure they have the proper permissions to
access the network if a peer is sharing a confidential file. P2P networks often contain a large
number of users who utilize resources shared by other nodes, but who do not share anything
themselves. These types of free riders are called leechers. Although being hard to shut down
is counted as an advantage, it can also be a disadvantage if it is used to facilitate illegal and
immoral activities. Furthermore, the widespread use of mobile devices has made many
companies switch to other architectures. With many people using mobile devices that are
not always on, it can be difficult for users to contribute to the network without draining battery
life and using up mobile data. In a client-server architecture, clients don't need to contribute
any of their resources.
Non-Functional Properties
● Scalability - the network is very easy to scale up
● Efficiency - the network uses available resources of peers, who normally wouldn’t
contribute anything in a typical client-server architecture
● Adaptability - the network can be used for a variety of use cases, and it can easily start
up another network if one is taken down, which means there is no single point of failure