Cloud Computing

ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY AND SCIENCES
(UGC AUTONOMOUS)
SANGIVALASA-531162
2020-2024
BACHELOR OF TECHNOLOGY
IN
Submitted by
ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY
AND SCIENCES
(Affiliated to Andhra University)
CERTIFICATE
REVIEWER
Mr. S. Ratan Kumar
Associate Professor
Department of CSE (AI&ML, DS)
ANITS

Dr. K. Selvani Deepthi
Head of the Department
Department of CSE (AI&ML, DS)
ANITS
Acknowledgement
We also thank all the staff members of the CSE (AI & ML, DS)
department for their valuable advice. We also thank the supporting staff
for providing resources as and when required.
Contents
Introduction to Computing....................................................................................................................6
Trends in Computing.............................................................................................................................6
Distributed Computing/System.............................................................................................................7
Why Distributed Computing..............................................................................................................8
Distributed applications.....................................................................................................................8
Grid Computing.....................................................................................................................................9
Electrical Power Grid Analogy............................................................................................................9
Need of Grid Computing....................................................................................................................9
Type of Grids...................................................................................................................................10
Grid Components.............................................................................................................................10
Cluster Computing...............................................................................................................................11
Types of Clusters..............................................................................................................................11
Cluster Components........................................................................................................................11
Key Operational Benefits of Clustering............................................................................................12
Utility Computing................................................................................................................................13
Utility Computing Payment Models.................................................................................................14
Risks in a Utility Computing World..................................................................................................14
Cloud Computing.................................................................................................................................15
Essential Characteristics..................................................................................................................15
Cloud Characteristics.......................................................................................................................16
Common Characteristics..................................................................................................................16
Cloud Services Models.........................................................................................................................16
Software as a Service (SaaS)....................................................................................................16
Cloud Infrastructure as a Service (IaaS)...................................................................................17
Platform as a Service (PaaS).....................................................................................................17
Types of Cloud (Deployment Models).................................................................................................17
Cloud and Virtualization......................................................................................................................18
Virtual Machines..................................................................................................................................18
Cloud-Sourcing....................................................................................................................................18
Dew Computing...................................................................................................................................19
Characteristics of Dew Computing...................................................................................................19
Fog Computing....................................................................................................................................21
What is Fog Computing?..................................................................................................................21
Key Characteristics of Fog Computing.............................................................................................21
Architecture of Fog Computing.......................................................................................................21
Serverless Computing..........................................................................................................................23
What is Serverless Computing?.......................................................................................................23
Key Characteristics...........................................................................................................................23
Components of Serverless Architecture..........................................................................................23
Advantages of Serverless Computing..............................................................................................23
Use Cases.........................................................................................................................................24
Sustainable Computing........................................................................................................................25
Why Sustainable Computing ?.........................................................................................................25
Challenges.......................................................................................................................................25
Cloud Migration and Container Virtualization with Docker.................................................................26
Cloud Migration...............................................................................................................................26
Types of Cloud Migration:...............................................................................................................26
Benefits of Cloud Migration.............................................................................................................26
Challenges in Cloud Migration.........................................................................................................26
Container-Based Virtualization........................................................................................................27
Docker: A Popular Containerization Platform..................................................................................27
Conclusion...........................................................................................................................................28
Course completion certificate:
Cloud Computing
Course Objectives:
1. Cloud Service Models: To familiarize learners with different cloud service models
such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software
as a Service (SaaS), including their features, benefits, and use cases.
2. Deployment Models: To explore various cloud deployment models, including public,
private, hybrid, and community clouds, and understand their differences, advantages,
and considerations.
3. Virtualization: To introduce participants to virtualization technologies and their role
in enabling cloud computing environments for resource optimization and scalability.
4. Cloud Security: To educate learners about security challenges and best practices in
cloud computing, including data protection, encryption, identity management, and
compliance with regulatory requirements.
Course Outcomes:
1. Understanding Cloud Computing: Participants will gain a deep understanding of
cloud computing concepts, principles, and terminology, allowing them to comprehend
the fundamental aspects of cloud-based technologies.
2. Ability to Evaluate Cloud Services: Upon completion of the course, learners will be
able to assess different cloud service models and deployment options to determine the
most suitable solution for specific business requirements.
3. Enhanced Security Knowledge: Learners will acquire knowledge of cloud security
best practices and techniques, enabling them to implement robust security measures to
protect data and applications in cloud environments.
Introduction to Computing
• The ACM Computing Curricula 2005 defined "computing" as "In a general way, we can
define computing to mean any goal-oriented activity requiring, benefiting from, or creating
computers. Thus, computing includes designing and building hardware and software systems
for a wide range of purposes; processing, structuring, and managing various kinds of
information; doing scientific studies using computers; making computer systems behave
intelligently; creating and using communications and entertainment media; finding and
gathering information relevant to any particular purpose, and so on. The list is virtually
endless, and the possibilities are vast."
Trends in Computing
• Distributed Computing
• Grid Computing
• Cluster Computing
• Utility Computing
• Cloud Computing
Distributed Computing/System
Distributed computing is a field of computing science that studies distributed system.
Distributed systems are used to solve computational problems. Example of Distributed
system is Wikipedia. There are several autonomous computational entities, each of which has
its own local memory. The entities communicate with each other by message passing. The
processors communicate with one another through various communication lines, such as
high-speed buses or telephone lines. Each processor has its own local memory.
Examples of distributed systems include the Internet, ATM (bank) machines, and
intranets/workgroups.
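The message-passing model described above can be illustrated with a minimal sketch (not part of the original report): two Python processes, each keeping its own local state, exchange messages through queues. The process roles and message contents are illustrative assumptions.

from multiprocessing import Process, Queue


def worker(inbox: Queue, outbox: Queue) -> None:
    # An autonomous entity: keeps local state and communicates only via messages.
    local_memory = {"processed": 0}                  # local memory, not shared
    task = inbox.get()                               # receive a message
    local_memory["processed"] += 1
    outbox.put(f"done: {task} (count={local_memory['processed']})")  # reply


if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put("task-1")                          # coordinator sends a message
    print(from_worker.get())                         # ...and waits for the reply
    p.join()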
Why Distributed Computing
Fault tolerance
1. When one or more nodes fail, the system as a whole can keep working, with only a
loss of performance.
2. The status of each node needs to be checked.
Resource sharing
1. Each user can share the computing power and storage resources of the system
with other users.
Load sharing
1. Dispatching tasks across the nodes spreads the load over the whole
system.
Easy to expand
1. Adding nodes should take as little time as possible, ideally none.
Performance
1. Computing-intensive: the task consumes a lot of computing time, for example estimating
the value of Pi with a Monte Carlo simulation (a small sketch follows below).
2. Data-intensive: the task deals with a large amount of data or very large files, for example
Facebook or LHC (Large Hadron Collider) experimental data processing.
The nature of the application determines which of these dominates.
Robustness
1. No SPOF (Single Point of Failure): if a node fails, other nodes can execute the
task that was running on the failed node.
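The computing-intensive example above can be sketched in a few lines of Python; the sample count is an arbitrary illustrative choice.

import random


def estimate_pi(samples: int = 1_000_000) -> float:
    # Estimate Pi by sampling random points in the unit square.
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:                     # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples                    # 4 * (inside / total) approximates Pi


if __name__ == "__main__":
    print(estimate_pi())                             # prints roughly 3.141...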
Distributed applications
Distributed applications consist of a set of processes that are distributed across a network of
machines and work together as an ensemble to solve a common problem. In the past these were
mostly "client-server" applications, with resource management centralized at the server.
"Peer-to-peer" computing represents a movement towards more "truly" distributed applications.
Grid Computing
Grid computing is a form of networking. Unlike conventional networks that focus on
communication among devices, grid computing harnesses the unused processing cycles of all
computers in a network to solve problems too intensive for any stand-alone machine. Grid
computing enables the virtualization of distributed computing and data resources such as
processing, network bandwidth and storage capacity to create a single system image, granting
users and applications seamless access to vast IT capabilities. Just as an Internet user views a
unified instance of content via the Web, a grid user essentially sees a single, large virtual
computer.
Electrical Power Grid Analogy
Users (or electrical appliances) get access to electricity through wall sockets with no care or
consideration for where or how the electricity is actually generated. “The power grid” links
together power plants of many different kinds. “The Grid" links together computing
resources (PCs, workstations, servers, storage elements) and provides the mechanism needed
to access them.
Type of Grids
Computational Grid: These grids provide secure access to a huge pool of shared
processing power, suitable for high-throughput applications and computation-intensive
computing.
Data Grid: Data grids provide an infrastructure to support data storage, data
discovery, data handling, data publication, and data manipulation of large volumes of
data actually stored in various heterogeneous databases and file systems.
Collaboration Grid: With the advent of the Internet, there has been an increased demand
for better collaboration. Such advanced collaboration is possible using the grid. For
instance, persons from different companies in a virtual enterprise can work on
different components of a CAD project without even disclosing their proprietary
technologies.
Network Grid: A Network Grid provides fault-tolerant and high-performance
communication services. Each grid node works as a data router between two
communication points, providing data-caching and other facilities to speed up the
communications between such points.
Utility Grid: This is the ultimate form of the Grid, in which not only data and
computation cycles are shared but software or just about any resource is shared. The
main services provided through utility grids are software and special equipment. For
instance, the applications can be run on one machine and all the users can send their
data to be processed to that machine and receive the result back.
Grid Components
Cluster Computing
A cluster is a type of parallel or distributed computer system, which consists of a collection
of inter-connected stand-alone computers working together as a single integrated computing
resource. Key components of a cluster include multiple standalone computers (PCs,
Workstations, or SMPs), operating systems, high-performance interconnects, middleware,
parallel programming environments, and applications. Clusters are usually deployed to
improve speed and/or reliability over what a single computer provides, while typically
being much more cost-effective than a single computer of comparable speed or reliability.
In a typical cluster:
Network: a faster, closer connection than a typical network (LAN)
Low-latency communication protocols
Looser coupling than in an SMP (symmetric multiprocessor)
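To make the idea of a parallel programming environment concrete, here is a minimal single-machine sketch in Python: a pool of worker processes stands in for cluster nodes, each handling an independent chunk of a larger job. A real cluster would use MPI or a batch scheduler; the function and data here are illustrative assumptions.

from multiprocessing import Pool


def process_chunk(chunk: range) -> int:
    # Work done independently on one "node": sum one slice of the input.
    return sum(chunk)


if __name__ == "__main__":
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(processes=4) as pool:                  # four workers stand in for four nodes
        partial_sums = pool.map(process_chunk, chunks)
    print(sum(partial_sums))                         # combine the partial results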
Types of Clusters
Cluster Components
Basic building blocks of clusters are broken down into multiple categories:
Cluster Nodes
Cluster Network
Network Characterization
Key Operational Benefits of Clustering
System availability: clusters offer inherently high system availability due to the redundancy of
hardware, operating systems, and applications.
Hardware fault tolerance: redundancy for most system components (e.g., disk RAID),
covering both hardware and software.
OS and application reliability: clusters run multiple copies of the OS and applications, and
this redundancy tolerates individual failures.
Scalability: Clusters can easily scale up or down by adding or removing nodes as needed.
This scalability feature ensures that organizations can accommodate growing workloads or
adjust resources based on demand fluctuations without experiencing downtime or
performance degradation.
Utility Computing
Utility Computing is purely a concept which cloud computing practically implements. Utility
computing is a service provisioning model in which a service provider makes computing
resources and infrastructure management available to the customer as needed, and charges
them for specific usage rather than a flat rate. This model has the advantage of a low or no
initial cost to acquire computer resources; instead, computational resources are essentially
rented. The word utility is used to make an analogy to other services, such as electrical
power, that seek to meet fluctuating customer needs, and charge for the resources based on
usage rather than on a flat-rate basis. This approach is sometimes known as pay-per-use.
"Utility computing" has usually envisioned some form of virtualization so that the amount of
storage or computing power available is considerably larger than that of a single time-sharing
computer.
Utility computing is a paradigm in which computing resources are provided to users on-
demand, much like traditional utility services such as electricity or water. In this model, users
access computing resources, such as processing power, storage, and applications, via the
internet or a network, paying only for the resources they consume on a metered basis. This
pay-as-you-go approach offers flexibility and scalability, allowing organizations to scale
resources up or down according to fluctuating demands without the need for substantial
upfront investments in infrastructure. Utility computing enables cost-effective resource
utilization, as organizations only pay for what they use, eliminating the need to maintain
excess capacity. Additionally, it offers agility and responsiveness, enabling rapid deployment
of resources to support business initiatives or address sudden spikes in workload. Overall,
utility computing provides a flexible and cost-efficient approach to accessing computing
resources, empowering organizations to focus on innovation and business growth while
outsourcing their IT infrastructure needs to specialized providers.
Utility Computing Payment Models
Utility computing can be billed under several models:
• Flat rate
• Tiered
• Subscription
• Metered
• Pay as you go
• Standing charges
Providers offer different pricing models to different customers based on factors such as scale,
commitment, and payment frequency, but the principle of utility computing remains the same:
the pricing model is simply the provider's expression of the cost of provisioning the resources
plus a profit margin.
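As a concrete illustration of metered, pay-as-you-go pricing combined with a standing charge and tiered rates, here is a minimal sketch; every rate, tier boundary, and the standing charge are hypothetical numbers chosen only for illustration.

def monthly_bill(cpu_hours: float, storage_gb: float) -> float:
    STANDING_CHARGE = 5.00                           # flat fee per month
    CPU_RATE = 0.04                                  # price per CPU-hour
    STORAGE_TIERS = [(100, 0.10), (400, 0.08), (float("inf"), 0.05)]  # tiered GB rates

    bill = STANDING_CHARGE + cpu_hours * CPU_RATE    # metered compute usage
    remaining = storage_gb
    for tier_size, rate in STORAGE_TIERS:            # cheaper rates at higher tiers
        used = min(remaining, tier_size)
        bill += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(bill, 2)


print(monthly_bill(cpu_hours=320.0, storage_gb=250.0))   # pay only for what was used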
Risks in a Utility Computing World
• Data backup
• Data security
• Partner competency
• Defining SLAs
• Getting value from chargeback
Cloud Computing
The US National Institute of Standards and Technology (NIST) defines cloud computing as "a
model for enabling ubiquitous, convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers, storage, applications, and services)
that can be rapidly provisioned and released with minimal management effort or service
provider interaction."
Essential Characteristics
On-demand self-service
A consumer can unilaterally provision computing capabilities, such as server time and
network storage, as needed automatically without requiring human interaction with each
service provider.
Broad network access
Capabilities are available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets,
laptops, and workstations).
Resource pooling
The provider’s computing resources are pooled to serve multiple consumers using a multi-
tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand.
Multi-Tenancy
Cloud computing environments support multiple users or tenants sharing the same
infrastructure resources. This multi-tenancy model allows providers to achieve greater
resource utilization and efficiency, while also enabling cost-sharing among users.
Cloud providers implement robust security measures to protect data and ensure the
confidentiality, integrity, and availability of resources. Additionally, cloud services offer
built-in redundancy and disaster recovery capabilities to ensure high availability and
reliability of services.
Cloud Characteristics
Measured Service
Cloud systems automatically control and optimize resource use by leveraging a metering
capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored,
controlled, and reported, providing transparency for both the provider and consumer of the
utilized service.
Rapid elasticity
Capabilities can be elastically provisioned and released, in some cases automatically, to scale
rapidly outward and inward commensurate with demand. To the consumer, the capabilities
available for provisioning often appear to be unlimited and can be appropriated in any
quantity at any time.
Common Characteristics
Massive Scale
Resilient Computing
Homogeneity
Geographic Distribution
Virtualization
Service Orientation
Low Cost Software
Advanced Security
Platform as a Service (PaaS)
1. The capability provided to the consumer is to deploy onto the cloud infrastructure
consumer-created or acquired applications created using programming languages,
libraries, services, and tools supported by the provider.
2. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, or storage, but has control over the
deployed applications and possibly configuration settings for the application-
hosting environment.
Types of Cloud (Deployment Models)
Virtual Machines
VM technology allows multiple virtual machines to run on a single physical machine.
Cloud-Sourcing
Why is it becoming important?
Concerns:
Dew Computing
Dew Computing is an emerging area within the field of computing, which has garnered
interest due to its potential to further enhance the capabilities of cloud and fog computing
architectures. It focuses on the client-side processing and storage capabilities of devices,
aiming to leverage the underutilized resources available in client devices to augment cloud
services.
Dew Computing is characterized by its emphasis on the local processing and storage
capabilities of devices. Unlike cloud computing, which relies on centralized servers for
processing and storage, Dew Computing advocates for a decentralized approach, utilizing the
computing power of client devices. Key characteristics include:
Resource Utilization: Maximizes the use of available resources on client devices to reduce
reliance on remote servers.
Architecture
The architecture of Dew Computing involves several layers, each contributing to the
seamless integration of local and cloud resources. These layers include:
1. Dew Layer: The foundational layer, consisting of client devices that provide local
computing and storage capabilities.
2. Fog Layer (optional): Acts as an intermediary between the Dew and Cloud layers,
facilitating edge computing processes closer to end-users.
3. Cloud Layer: The centralized servers that provide extensive computing resources and
services accessible over the Internet.
Benefits
Dew Computing offers several benefits over traditional cloud computing models:
Applications
Dew Computing can be applied across various domains to improve efficiency, reliability, and
performance. Potential applications include:
1. Smart Home Systems: Enhancing privacy and responsiveness of smart home devices
by processing data locally.
2. Healthcare Monitoring: Local processing of health data to ensure patient privacy and
immediate responsiveness.
3. Internet of Things (IoT): Reducing latency and enhancing the functionality of IoT
devices by enabling local data processing and decision-making.
4. Content Delivery Networks (CDNs): Augmenting CDNs by caching content on client
devices, reducing bandwidth usage and improving content delivery speeds.
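As a minimal sketch of the local-first idea behind the CDN example above, the snippet below serves content from a cache kept on the client device and falls back to the cloud only on a miss; fetch_from_cloud() and the URL are hypothetical placeholders rather than a real API.

local_cache: dict[str, bytes] = {}                   # "dew" storage on the client device


def fetch_from_cloud(url: str) -> bytes:
    # Placeholder for a real network request (e.g. via urllib or requests).
    return f"content of {url}".encode()


def get_content(url: str) -> bytes:
    if url in local_cache:                           # local hit: no bandwidth used
        return local_cache[url]
    data = fetch_from_cloud(url)                     # miss: fetch once from the cloud
    local_cache[url] = data                          # keep it for later local/offline use
    return data


print(get_content("https://example.com/video"))      # fetched from the cloud
print(get_content("https://example.com/video"))      # served from the local cache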
Challenges
While Dew Computing offers promising benefits, it also faces several challenges, including
security concerns, the need for standardization, and the management of distributed resources.
Future research directions may focus on developing secure protocols for Dew Computing
environments, creating standards for interoperability, and designing efficient resource
management algorithms.
Fog Computing
Fog computing, a term coined by Cisco Systems, refers to a decentralized computing
infrastructure in which data, compute, storage, and applications are distributed in the most
logical, efficient place between the data source and the cloud. This concept aims to bring the
advantages of cloud computing closer to where data is created and acted upon. By doing so, it
helps in reducing latency, enhancing data bandwidth, and improving security and privacy
aspects of a computing infrastructure.
Fog computing extends cloud computing and services to the edge of the network, similar to
how fog consists of water droplets that are close to the ground, hence the name. It involves
the use of edge devices to carry out a substantial amount of computation, storage, and
communication locally, with the results routed over the internet backbone.
Key Characteristics of Fog Computing
1. Low Latency and Faster Response Time: By processing data closer to the source,
fog computing can significantly reduce latency and improve response times.
2. Improved Security: Local data processing can enhance data security and privacy by
reducing the amount of data transmitted over the network.
3. Geographical Distribution: Unlike centralized cloud computing, fog computing
encompasses multiple distributed nodes that can operate independently.
4. Scalability: Fog computing supports vertically and horizontally scalable services and
applications.
5. Mobility Support: Offers services to mobile devices, including dynamic location-
based services.
Architecture of Fog Computing
Fog computing architecture is hierarchical and distributed, designed to work at the network's
edge. It consists of three primary layers:
Cloud Layer
The topmost layer, responsible for global management, long-term analytics, and storage. It
oversees the entire fog network, offering services and resources that are not time-sensitive.
Fog Layer
This layer includes the fog nodes themselves, which can be deployed on devices like
industrial controllers, switches, routers, embedded servers, and video surveillance cameras.
These nodes provide the compute, storage, and networking services to the end devices.
Edge Layer
The bottom layer consists of the end devices or sensors that generate the data. These could be
IoT devices, mobile phones, industrial machines, and other gadgets that collect and initially
process the data before sending it upward through the fog layer for further processing, or
hand it to the fog layer for immediate action.
Fog computing finds application in various fields due to its ability to process data quickly
and efficiently. Some of its prominent advantages include:
1. Reduction in Bandwidth Needs: By processing data locally, fog computing
significantly reduces the need for bandwidth.
2. Real-time Data Processing and Analysis: Critical for applications requiring instant
decision-making.
3. Enhanced Security: By keeping sensitive data local, fog computing minimizes the risk
of data breaches.
4. Scalability and Flexibility: Enables businesses to scale up or down without significant
changes to the core infrastructure.
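To make points 1 and 2 above concrete, here is a minimal sketch of a fog node that aggregates raw sensor readings locally and forwards only a small summary to the cloud; the readings, threshold, and send_to_cloud() are hypothetical placeholders.

from statistics import mean


def send_to_cloud(summary: dict) -> None:
    # Placeholder for an upload over the internet backbone.
    print("uploading summary:", summary)


def fog_node(readings: list[float], alert_threshold: float = 80.0) -> None:
    if max(readings) > alert_threshold:              # real-time local decision, low latency
        print("local alert: threshold exceeded")
    summary = {"count": len(readings),
               "avg": round(mean(readings), 2),
               "max": max(readings)}                 # a few bytes instead of every reading
    send_to_cloud(summary)


fog_node([71.2, 73.5, 69.8, 82.1, 74.0])             # e.g. temperature samples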
Challenges
While fog computing offers numerous benefits, it also faces several challenges:
1. Security and Privacy Concerns: Although fog computing can enhance security, the
distributed nature also introduces new vulnerabilities.
2. Complexity in Management: Managing a distributed network requires sophisticated
tools and protocols.
3. Interoperability: With many devices and platforms, ensuring seamless communication
can be challenging.
Serverless Computing
Serverless computing is a cloud-computing execution model in which the cloud provider
dynamically manages the allocation and provisioning of servers. This model allows
developers to build and run applications and services without having to manage
infrastructure. Here's an in-depth look into serverless computing, its components, advantages,
use cases, and challenges.
Serverless computing, despite its name, does not eliminate servers from the computing
environment. Instead, it abstracts the server management aspect away from the application
developers. The cloud provider automatically handles the deployment, maintenance, and
scaling of servers, enabling developers to focus solely on writing code for individual
functions that execute in response to events.
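A minimal sketch of such an event-driven function is shown below, written in the style of an AWS Lambda handler. The (event, context) signature follows the common convention, while the event fields used here are illustrative assumptions.

import json


def handler(event: dict, context: object = None) -> dict:
    # Runs only when an event invokes it; the developer manages no server.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Local simulation of a single invocation (in the cloud, the provider does this):
print(handler({"name": "cloud"}))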
Key Characteristics
Components of Serverless Architecture
Advantages of Serverless Computing
1. With serverless computing, you only pay for the execution time of your functions,
potentially leading to significant cost savings compared to traditional cloud service
models.
2. The cloud provider manages servers, databases, and application logic layers, which
significantly reduces the operational burden on development teams.
3. Serverless applications can scale automatically with the volume of incoming requests
without any manual intervention.
Use Cases
Challenges
Cold Starts: The initial invocation of a serverless function can suffer from latency due
to the time taken to provision resources.
Monitoring and Debugging: Traditional monitoring and debugging tools may not be
directly applicable to serverless applications, requiring new approaches.
Vendor Lock-in: Using proprietary features from cloud providers can lead to
difficulties in migrating applications between platforms.
Sustainable Computing
Sustainable computing, often termed green computing, encompasses a wide range of
practices, strategies, and technologies aimed at reducing the environmental impact of
computing. This field focuses on designing, manufacturing, using, and disposing of
computers, servers, and associated subsystems—such as monitors, printers, storage devices,
and networking and communications systems—efficiently and effectively with minimal or no
impact on the environment.
Energy-efficient software also contributes by reducing the energy consumption of the
hardware running it. Cloud computing and virtualization can contribute to sustainability by
optimizing resource usage. These technologies allow physical resources to be shared among
multiple users or applications, leading to more efficient use of energy and hardware.
Challenges
Cloud Migration
Cloud migration is the process of moving digital business operations into the cloud. It’s akin
to a physical move, except it involves the shifting of data, applications, and IT processes from
some data centres to other data centres, instead of packing up and moving physical goods.
The goal often includes moving from on-premises or legacy infrastructure to the cloud or
moving from one cloud environment to another. Cloud migration enables organizations to
scale, reduce costs, and improve efficiency by leveraging the cloud's flexible and scalable
nature.
Types of Cloud Migration
1. Lift-and-Shift
This approach involves moving applications and data from an on-premises data center
to the cloud with minimal or no modifications. It's quick and cost-effective but doesn't
take full advantage of cloud-native features.
2. Refactoring
In refactoring, applications are modified or rebuilt before they are moved to the cloud
to better align with cloud-native capabilities. This method is more time-consuming
and expensive but offers improved scalability and performance in the cloud
environment.
3. Replatforming
Replatforming involves making a few cloud optimizations to realize a tangible benefit
without changing the core architecture of the application. It strikes a balance between
lift-and-shift and refactoring by enabling some cloud benefits while avoiding the
complexity of a full refactor.
Benefits of Cloud Migration
1. Cloud environments allow businesses to easily scale their resources up or down based
on demand, providing flexibility and efficiency.
2. Migrating to the cloud can reduce operational costs by eliminating the need for
physical hardware and maintenance. Organizations pay only for the resources they
use.
3. Cloud providers invest heavily in security technologies and expertise, offering a level
of security that may be challenging for individual organizations to achieve on their
own.
4. Cloud environments often come with built-in disaster recovery capabilities, ensuring
data is backed up and can be restored quickly in the event of a loss.
Challenges in Cloud Migration
1. Ensuring data security and compliance with regulations during and after migration is a
significant concern for many organizations.
2. The complexity of an organization’s existing IT infrastructure can make migration a
challenging process, requiring careful planning and execution.
Container-Based Virtualization
1. Efficiency
Containers require fewer system resources than traditional virtual machines
because they share the host system's kernel.
2. Portability
Containers encapsulate everything an application needs to run. This makes it
easy to move the containers between different environments while retaining
full functionality.
3. Scalability
Containers can be easily created, destroyed, started, and stopped, making it
simple to scale applications up or down as needed.
4. Isolation
Each container operates independently and does not interfere with others,
providing a secure and stable environment for applications.
Docker: A Popular Containerization Platform
Docker is an open-source platform that automates the deployment, scaling, and management
of applications within containers. It has become synonymous with containerization, providing
tools to help developers build, deploy, and run applications more efficiently.
Docker Components
1. Docker Engine
The core part of Docker, responsible for creating and running Docker
containers.
2. Docker Images
Blueprints for creating Docker containers, including the application and its
dependencies.
3. Docker Containers
The runtime instances of Docker images, where the applications and services
run in an isolated environment.
4. Docker Hub
A registry service provided by Docker for finding and sharing container
images with your team.
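The components above can also be driven from code. The sketch below uses the Docker SDK for Python (installed with pip install docker); it assumes a local Docker Engine is running and that the alpine image is reachable on Docker Hub.

import docker

client = docker.from_env()                           # talk to the local Docker Engine
client.images.pull("alpine:latest")                  # fetch an image from Docker Hub
output = client.containers.run(                      # start a container from that image
    "alpine:latest", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())                       # prints: hello from a container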
Benefits of Docker
1. Simplified Configuration
Docker simplifies the process of configuring applications to run in different
environments.
2. Application Isolation
Docker ensures that applications are isolated in their containers, increasing
security.
3. Rapid Deployment
Docker containers can be created and started in seconds, leading to faster
deployment times.
4. Environment Consistency
Docker containers provide consistency across different environments,
reducing the "it works on my machine" syndrome.
Conclusion
In conclusion, the course on Cloud Computing has provided a comprehensive understanding
of the fundamental principles, technologies, and applications of cloud computing. Through
detailed lectures, practical examples, and hands-on exercises, participants have gained
insights into the key concepts such as virtualization, scalability, elasticity, and service
models. The course has equipped learners with the knowledge and skills needed to leverage
cloud computing to address various business challenges, improve operational efficiency, and
drive innovation. Moreover, by exploring real-world case studies and industry best practices,
participants have gained practical insights into the implementation, management, and security
aspects of cloud-based solutions. As technology continues to evolve, the knowledge gained
from this course will empower participants to navigate the complexities of the cloud
computing landscape, adapt to emerging trends, and make informed decisions to harness the
full potential of cloud technologies in their respective domains. Overall, the course on Cloud
Computing has been instrumental in fostering a deeper understanding of this transformative
technology and its implications for organizations across diverse sectors.