
ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY

AND SCIENCES
(UGC AUTONOMOUS)

(Permanently Affiliated to AU, Approved by AICTE and Accredited by NBA & NAAC with ‘A’ Grade)

SANGIVALASA-531162,

Bheemunipatnam Mandal, Visakhapatnam District

2020-2024

“Data Science Internship”


An Industrial Training report submitted in partial fulfilment of the
requirements for the award of the degree of

BACHELOR OF TECHNOLOGY

IN

Computer Science and Engineering (AI & ML, DS)

Submitted by

KEDARISETTY HEMA SRI – 320126551028

ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY
AND SCIENCES
(Affiliated to Andhra University)

SANGIVALASA, VISAKHAPATNAM -531162


2020-2024

CERTIFICATE

This is to certify that the “Data Science Internship Report” submitted by
KEDARISETTY HEMA SRI (320126551028) is work done by her and
submitted during the 2023-2024 academic year, in partial fulfilment of the
requirements for the award of the degree of BACHELOR OF TECHNOLOGY
in COMPUTER SCIENCE AND ENGINEERING, at Quantam AI Systems
(BDM), Visakhapatnam, Andhra Pradesh.

REVIEWER

Mr. S. Ratan Kumar
Associate Professor
Department of CSE (AI&ML, DS)
ANITS

Dr. K. Selvani Deepthi
Head of the Department
Department of CSE (AI&ML, DS)
ANITS

Acknowledgement

An endeavour over a long period can be successful only with the advice
and support of many well-wishers. We take this opportunity to
express our gratitude and appreciation to all of them.

We owe our tributes to Prof. K. S. Deepthi, Head of the Department,
Computer Science & Engineering (AI & ML, DS), ANITS, for her
valuable support and guidance during the period of the internship.

We wish to express our sincere thanks and gratitude to NPTEL, which
offered the course, for helping us analyze the problems associated with
our internship and for guiding us throughout the project.

We express our warm and sincere thanks for the encouragement,
untiring guidance and the confidence they have shown in us. We are
immensely indebted for their valuable guidance throughout our
internship.

We also thank all the staff members of the CSE (AI & ML, DS)
department for their valuable advice. We also thank the supporting staff
for providing resources as and when required.

Kedarisetty Hema Sri 320126551028

Contents
Introduction to Computing....................................................................................................................6
Trends in Computing.............................................................................................................................6
Distributed Computing/System.............................................................................................................7
Why Distributed Computing..............................................................................................................8
Distributed applications.....................................................................................................................8
Grid Computing.....................................................................................................................................9
Electrical Power Grid Analogy............................................................................................................9
Need of Grid Computing....................................................................................................................9
Type of Grids...................................................................................................................................10
Grid Components.............................................................................................................................10
Cluster Computing...............................................................................................................................11
Types of Clusters..............................................................................................................................11
Cluster Components........................................................................................................................11
Key Operational Benefits of Clustering............................................................................................12
Utility Computing................................................................................................................................13
Utility Computing Payment Models.................................................................................................14
Risks in a Utility Computing World..................................................................................................14
Cloud Computing.................................................................................................................................15
Essential Characteristics..................................................................................................................15
Cloud Characteristics.......................................................................................................................16
Common Characteristics..................................................................................................................16
Cloud Services Models.........................................................................................................................16
 Software as a Service (SaaS)....................................................................................................16
 Cloud Infrastructure as a Service (IaaS)...................................................................................17
 Platform as a Service (PaaS).....................................................................................................17
Types of Cloud (Deployment Models).................................................................................................17
Cloud and Virtualization......................................................................................................................18
Virtual Machines..................................................................................................................................18
Cloud-Sourcing....................................................................................................................................18
Dew Computing...................................................................................................................................19
Characteristics of Dew Computing...................................................................................................19
Fog Computing....................................................................................................................................21
What is Fog Computing?..................................................................................................................21

Key Characteristics of Fog Computing.............................................................................................21
Architecture of Fog Computing.......................................................................................................21
Serverless Computing..........................................................................................................................23
What is Serverless Computing?.......................................................................................................23
Key Characteristics...........................................................................................................................23
Components of Serverless Architecture..........................................................................................23
Advantages of Serverless Computing..............................................................................................23
Use Cases.........................................................................................................................................24
Sustainable Computing........................................................................................................................25
Why Sustainable Computing ?.........................................................................................................25
Challenges.......................................................................................................................................25
Cloud Migration and Container Virtualization with Docker.................................................................26
Cloud Migration...............................................................................................................................26
Types of Cloud Migration:...............................................................................................................26
Benefits of Cloud Migration.............................................................................................................26
Challenges in Cloud Migration.........................................................................................................26
Container-Based Virtualization........................................................................................................27
Docker: A Popular Containerization Platform..................................................................................27
Conclusion...........................................................................................................................................28

Course completion certificate:

Cloud Computing

About the Course:

The Cloud Computing course offered by NPTEL (National Program on Technology
Enhanced Learning) is designed to provide participants with a comprehensive understanding
of cloud computing concepts, technologies, and applications. The main purpose of the course
is to familiarize learners with key aspects of cloud computing, including its definition,
characteristics, and benefits. It covers a range of topics such as cloud service models (IaaS,
PaaS, SaaS), deployment models (public, private, hybrid, community), virtualization, cloud
architecture, security, storage, networking, and popular cloud platforms like Amazon Web
Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). The course includes
lectures, readings, assignments, quizzes, and hands-on exercises to ensure a practical
understanding of cloud computing principles. It is suitable for students, professionals, and
anyone interested in learning about cloud computing technologies and their applications.
Participants have the opportunity to earn a certificate upon successful completion of the
course, enhancing their credentials and career prospects in the field of cloud computing.

Course Objectives:

1. Cloud Service Models: To familiarize learners with different cloud service models
such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software
as a Service (SaaS), including their features, benefits, and use cases.
2. Deployment Models: To explore various cloud deployment models, including public,
private, hybrid, and community clouds, and understand their differences, advantages,
and considerations.
3. Virtualization: To introduce participants to virtualization technologies and their role
in enabling cloud computing environments for resource optimization and scalability.
4. Cloud Security: To educate learners about security challenges and best practices in
cloud computing, including data protection, encryption, identity management, and
compliance with regulatory requirements.

Course Outcomes:
1. Understanding Cloud Computing: Participants will gain a deep understanding of
cloud computing concepts, principles, and terminology, allowing them to comprehend
the fundamental aspects of cloud-based technologies.
2. Ability to Evaluate Cloud Services: Upon completion of the course, learners will be
able to assess different cloud service models and deployment options to determine the
most suitable solution for specific business requirements.
3. Enhanced Security Knowledge: Learners will acquire knowledge of cloud security
best practices and techniques, enabling them to implement robust security measures to
protect data and applications in cloud environments.

Introduction to Computing
• The ACM Computing Curricula 2005 defined "computing" as "In a general way, we can
define computing to mean any goal-oriented activity requiring, benefiting from, or creating
computers. Thus, computing includes designing and building hardware and software systems
for a wide range of purposes; processing, structuring, and managing various kinds of
information; doing scientific studies using computers; making computer systems behave
intelligently; creating and using communications and entertainment media; finding and
gathering information relevant to any particular purpose, and so on. The list is virtually
endless, and the possibilities are vast."

Trends in Computing
• Distributed Computing
• Grid Computing
• Cluster Computing
• Utility Computing
• Cloud Computing

Early computing was performed on a single processor. Uni-processor computing can be
called centralized computing. Centralized computing refers to a computing model in which
all processing, storage, and control are concentrated in a single location or a limited number
of locations. In this architecture, users access resources and applications through terminals or
thin clients connected to the central server or mainframe. Centralized computing offers
advantages such as easier management, centralized data storage and backup, and enhanced
security through controlled access. However, it also poses challenges like single point of
failure, network dependency, and potential scalability issues. Despite these drawbacks,
centralized computing remains a prevalent model in various industries, particularly in large
enterprises where centralized control and management are valued over distributed systems.

Distributed Computing/System
Distributed computing is a field of computing science that studies distributed system.
Distributed systems are used to solve computational problems. Example of Distributed
system is Wikipedia. There are several autonomous computational entities, each of which has
its own local memory. The entities communicate with each other by message passing. The
processors communicate with one another through various communication lines, such as
high-speed buses or telephone lines. Each processor has its own local memory.
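A minimal sketch of these ideas, with two concurrent entities that keep private local state and interact only by message passing (threads and queues stand in for networked nodes; this is illustrative, not a real distributed runtime):

```python
import threading
import queue

def node(inbox, outbox):
    # One autonomous entity: private local state, reachable only via messages.
    local_state = {}
    while True:
        msg = inbox.get()
        local_state["sum"] = sum(msg)
        outbox.put(local_state["sum"])   # reply with a message, not shared memory

inbox, outbox = queue.Queue(), queue.Queue()
threading.Thread(target=node, args=(inbox, outbox), daemon=True).start()

def ask(data):
    inbox.put(data)      # send a message to the node...
    return outbox.get()  # ...and wait for its reply message

print(ask([1, 2, 3, 4]))  # prints 10
```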

Example Distributed Systems

 Internet
 ATM (bank) machines
 Intranets/Workgroups

Computers in a Distributed System

 Workstations: Computers used by end-users to perform computing
 Server Systems: Computers which provide resources and services
 Personal Assistance Devices: Handheld computers connected to the system via a
wireless communication link.

Common properties of Distributed Computing

 Fault tolerance

1. When one or more nodes fail, the whole system can still work, with only reduced
performance.
2. The status of each node needs to be monitored.

 Each node plays a partial role

1. Each computer has only a limited, incomplete view of the system.
2. Each computer may know only one part of the input.

 Resource sharing

1. Each user can share the computing power and storage resources in the system
with other users.

 Load sharing

1. Dispatching tasks across the nodes helps spread the load over the whole
system.

 Easy to expand

1. Adding nodes should take as little time as possible, ideally none.

 Performance

1. Parallel computing can be considered a subset of distributed computing.

Why Distributed Computing

 Nature of application
 Performance
1. Computing intensive: tasks that consume a lot of computation time, for example
computing the value of Pi using Monte Carlo simulation.
2. Data intensive: tasks that deal with a large amount or large size of files, for example
Facebook or LHC (Large Hadron Collider) experimental data processing.
 Robustness: no SPOF (Single Point of Failure), and other nodes can execute the
same task that was running on a failed node.
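The Monte Carlo Pi computation mentioned above is a natural fit for distribution because every sample is independent. A minimal Python sketch (the chunks run sequentially here, but each one could run on a separate node, with only the hit counts sent back):

```python
import random

def count_hits(n_samples, seed):
    # One independent unit of work: count points landing inside the unit circle.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

# Simulate dispatching 4 independent tasks (on a real grid/cluster these
# would execute simultaneously on separate machines).
chunks = [count_hits(50_000, seed) for seed in range(4)]
pi_estimate = 4.0 * sum(chunks) / 200_000
print(round(pi_estimate, 2))  # close to 3.14
```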

Distributed applications

Applications that consist of a set of processes that are distributed across a network of
machines and work together as an ensemble to solve a common problem. In the past, mostly
it is “client-server”. Resource management centralized at the server. “Peer to Peer”
computing represents a movement towards more “truly” distributed applications

Clients invoke individual servers

A typical distributed application based on peer processes

Grid Computing
Grid computing is a form of networking. Unlike conventional networks that focus on
communication among devices, grid computing harnesses the unused processing cycles of all
computers in a network to solve problems too intensive for any stand-alone machine. Grid
computing enables the virtualization of distributed computing and data resources such as
processing, network bandwidth and storage capacity to create a single system image, granting
users and applications seamless access to vast IT capabilities. Just as an Internet user views a
unified instance of content via the Web, a grid user essentially sees a single, large virtual
computer.

Grid Computing is a computing infrastructure that provides dependable, consistent, pervasive
and inexpensive access to computational capabilities.

1. Share more than information: data, computing power, and applications in a
dynamic, multi-institutional environment of virtual organizations.
2. Efficient use of resources at many institutes; people from many institutions
working to solve a common problem (a virtual organisation).
3. Join local communities.
4. Interactions with the underlying layers must be transparent and seamless to the
user.

Electrical Power Grid Analogy

Users (or electrical appliances) get access to electricity through wall sockets with no care or
consideration for where or how the electricity is actually generated. “The power grid” links
together power plants of many different kinds. “The Grid" links together computing
resources (PCs, workstations, servers, storage elements) and provides the mechanism needed
to access them.

Need of Grid Computing

Today’s science and research are based on computations, data analysis, data visualization and
collaborations. Computer simulations and modelling are more cost-effective than experimental
methods. Scientific and engineering problems are becoming more complex, and users need
more accurate, precise solutions to their problems in the shortest possible time. Data
visualization is becoming very important. Exploiting underutilized resources is a further
motivation.

Type of Grids

 Computational Grid: These grids provide secure access to huge pool of shared
processing power suitable for high throughput applications and computation intensive
computing.
 Data Grid: Data grids provide an infrastructure to support data storage, data
discovery, data handling, data publication, and data manipulation of large volumes of
data actually stored in various heterogeneous databases and file systems.
 Collaboration Grid: With the advent of the Internet, there has been an increased demand
for better collaboration. Such advanced collaboration is possible using the grid. For
instance, persons from different companies in a virtual enterprise can work on
different components of a CAD project without even disclosing their proprietary
technologies.
 Network Grid: A Network Grid provides fault-tolerant and high-performance
communication services. Each grid node works as a data router between two
communication points, providing data-caching and other facilities to speed up the
communications between such points.
 Utility Grid: This is the ultimate form of the Grid, in which not only data and
computation cycles are shared but software or just about any resource is shared. The
main services provided through utility grids are software and special equipment. For
instance, the applications can be run on one machine and all the users can send their
data to be processed to that machine and receive the result back.
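The utility-grid scenario above (users send their data to one machine for processing and receive the result back) can be sketched with Python's standard-library XML-RPC. The service name and the sum-of-squares payload are invented for illustration:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def process(data):
    # Stand-in for the shared application hosted on the grid machine.
    return sum(x * x for x in data)

# The "grid machine": binds an ephemeral port and serves in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(process, "process")
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "user" anywhere on the network submits data and receives the result back.
port = server.server_address[1]
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.process([1, 2, 3])
print(result)  # 14
```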

Grid Components

Cluster Computing
A cluster is a type of parallel or distributed computer system, which consists of a collection
of inter-connected stand-alone computers working together as a single integrated computing
resource. Key components of a cluster include multiple standalone computers (PCs,
Workstations, or SMPs), operating systems, high-performance interconnects, middleware,
parallel programming environments, and applications. Clusters are usually deployed to
improve speed and/or reliability over that provided by a single computer, while typically
being much more cost-effective than a single computer of comparable speed or reliability.

Cluster computing involves the interconnected arrangement of multiple computers, or nodes,
to work together as a single system. These nodes are typically connected through a high-speed
network and operate collaboratively to perform tasks. The primary goal of cluster
computing is to achieve high performance, scalability, and fault tolerance. By distributing
workloads across multiple nodes, cluster computing enables parallel processing, allowing
complex tasks to be divided into smaller, manageable units that can be executed
simultaneously. This parallelism results in faster processing times and increased
computational power, making cluster computing ideal for applications requiring intensive
computational resources, such as scientific simulations, data analytics, and rendering.
Additionally, cluster computing offers redundancy and fault tolerance through the replication
of data and resources across multiple nodes, ensuring system reliability and availability.
Moreover, cluster systems can be easily scaled by adding or removing nodes, allowing
organizations to adapt to changing workload demands efficiently. Overall, cluster computing
represents a powerful and flexible computing paradigm that enables organizations to harness
the collective resources of multiple machines for enhanced performance and reliability.
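As a sketch of the parallel decomposition described above, the following hypothetical example splits one task into smaller units of work executed simultaneously; a thread pool stands in for cluster nodes here:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a compute-heavy unit of work on one node.
    return sum(x * x for x in chunk)

data = list(range(1, 101))
# Divide the task into 4 smaller, manageable units.
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

# Dispatch the units to run concurrently, then combine partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
print(total)  # same answer as the sequential sum of squares
```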

In a typical cluster:

 Network: Faster, closer connection than a typical network (LAN)
 Low latency communication protocols
 More loosely coupled than an SMP

Types of Clusters

 High Availability or Failover Clusters
 Load Balancing Clusters
 Parallel/Distributed Processing Clusters

Cluster Components

Basic building blocks of clusters are broken down into multiple categories:

 Cluster Nodes
 Cluster Network
 Network Characterization

Key Operational Benefits of Clustering

System availability: clusters offer inherent high system availability due to the redundancy of
hardware, operating systems, and applications.

Hardware fault tolerance: redundancy for most system components (e.g., disk RAID),
including both hardware and software.

OS and application reliability: clusters run multiple copies of the OS and applications, and
achieve higher reliability through this redundancy.

Scalability: Clusters can easily scale up or down by adding or removing nodes as needed.
This scalability feature ensures that organizations can accommodate growing workloads or
adjust resources based on demand fluctuations without experiencing downtime or
performance degradation.

High performance: By distributing workloads across multiple nodes, clustering enables
parallel processing, significantly enhancing computational speed and performance. This
allows organizations to execute complex tasks more efficiently, leading to faster data
processing, analysis, and application performance.

Overall, the operational benefits of clustering contribute to improved performance, reliability,
scalability, and cost-effectiveness, making it a preferred choice for organizations seeking to
optimize their IT infrastructure and enhance business agility.

Utility Computing
Utility Computing is purely a concept which cloud computing practically implements. Utility
computing is a service provisioning model in which a service provider makes computing
resources and infrastructure management available to the customer as needed, and charges
them for specific usage rather than a flat rate. This model has the advantage of a low or no
initial cost to acquire computer resources; instead, computational resources are essentially
rented. The word utility is used to make an analogy to other services, such as electrical
power, that seek to meet fluctuating customer needs, and charge for the resources based on
usage rather than on a flat-rate basis. This approach is sometimes known as pay-per-use.
"Utility computing" has usually envisioned some form of virtualization so that the amount of
storage or computing power available is considerably larger than that of a single time-sharing
computer.

Utility computing is a paradigm in which computing resources are provided to users on-
demand, much like traditional utility services such as electricity or water. In this model, users
access computing resources, such as processing power, storage, and applications, via the
internet or a network, paying only for the resources they consume on a metered basis. This
pay-as-you-go approach offers flexibility and scalability, allowing organizations to scale
resources up or down according to fluctuating demands without the need for substantial
upfront investments in infrastructure. Utility computing enables cost-effective resource
utilization, as organizations only pay for what they use, eliminating the need to maintain
excess capacity. Additionally, it offers agility and responsiveness, enabling rapid deployment
of resources to support business initiatives or address sudden spikes in workload. Overall,
utility computing provides a flexible and cost-efficient approach to accessing computing
resources, empowering organizations to focus on innovation and business growth while
outsourcing their IT infrastructure needs to specialized providers.

Utility Computing is:

 Pay-for-use Pricing Business Model
 Data Center Virtualization and Provisioning
 Solves Resource Utilization Problem
 Web Services Delivery
 Automation

Utility Computing Example

On-Demand Cyber Infrastructure

Utility Computing Payment Models

Utility computing supports the same range of charging models as other utility providers
(gas, electricity, telecommunications, water, television broadcasting):

 Flat rate
 Tiered
 Subscription

 Metered
 Pay as you go
 Standing charges

Providers offer different pricing models to different customers based on factors such as scale,
commitment and payment frequency, but the principle of utility computing remains: the
pricing model is simply the provider's expression of the cost of provisioning the resources
plus a profit margin.
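As an illustration of metered, tiered charging, a hypothetical rate schedule might be computed as follows (the tier sizes and per-unit rates are invented for the example):

```python
def metered_charge(units, tiers=((100, 0.10), (400, 0.08), (None, 0.05))):
    """Charge for `units` of metered usage across successive tiers.

    Each tier is (size, rate); a size of None means 'all remaining units'.
    Rates fall as usage grows, mimicking volume discounts.
    """
    total, remaining = 0.0, units
    for size, rate in tiers:
        take = remaining if size is None else min(remaining, size)
        total += take * rate
        remaining -= take
        if remaining == 0:
            break
    return round(total, 2)

print(metered_charge(50))   # 5.0 (all usage in the first tier)
print(metered_charge(600))  # 47.0 = 100*0.10 + 400*0.08 + 100*0.05
```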

Risks in a Utility Computing World

 Data Backup
 Data Security
 Partner Competency
 Defining SLA
 Getting value from charge back

Cloud Computing
The US National Institute of Standards and Technology (NIST) defines cloud computing as:

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction.”

Essential Characteristics

On-demand self-service

A consumer can unilaterally provision computing capabilities, such as server time and
network storage, as needed automatically without requiring human interaction with each
service provider.

Broad network access

Capabilities are available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets,
laptops, and workstations).

Resource pooling

The provider’s computing resources are pooled to serve multiple consumers using a multi-
tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand.

Multi-Tenancy

Cloud computing environments support multiple users or tenants sharing the same
infrastructure resources. This multi-tenancy model allows providers to achieve greater
resource utilization and efficiency, while also enabling cost-sharing among users.

Security and Reliability

Cloud providers implement robust security measures to protect data and ensure the
confidentiality, integrity, and availability of resources. Additionally, cloud services offer
built-in redundancy and disaster recovery capabilities to ensure high availability and
reliability of services.

Cloud Characteristics

Measured Service

Cloud systems automatically control and optimize resource use by leveraging a metering
capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored,
controlled, and reported, providing transparency for both the provider and consumer of the
utilized service.

Rapid elasticity

Capabilities can be elastically provisioned and released, in some cases automatically, to scale
rapidly outward and inward commensurate with demand. To the consumer, the capabilities
available for provisioning often appear to be unlimited and can be appropriated in any
quantity at any time.
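A toy sketch of such an elasticity rule, scaling capacity out under heavy load and back in when demand falls (the thresholds and step sizes are invented for illustration; real providers expose this behaviour through autoscaling services):

```python
def scale(instances, load_per_instance, target=70, step=1, minimum=1):
    # Scale out when per-instance load exceeds the target utilization.
    if load_per_instance > target:
        return instances + step
    # Scale in when load drops well below target, respecting a floor.
    if load_per_instance < target / 2 and instances > minimum:
        return instances - step
    return instances  # otherwise hold steady

fleet = 2
for load in (90, 85, 60, 20, 10):  # % CPU per instance over time
    fleet = scale(fleet, load)
print(fleet)  # back to 2 after demand falls
```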

Common Characteristics

 Massive Scale
 Resilient Computing
 Homogeneity

 Geographic Distribution
 Virtualization
 Service Orientation
 Low Cost Software
 Advanced Security

Cloud Services Models

 Software as a Service (SaaS)

1. The capability provided to the consumer is to use the provider’s applications


running on a cloud infrastructure. The applications are accessible from various
client devices through either a thin client interface, such as a web browser (e.g.,
web-based email), or a program interface.
2. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual
application capabilities, with the possible exception of limited user-specific
application configuration settings.

E.g., Google Sheets

 Cloud Infrastructure as a Service (IaaS)

1. The capability provided to the consumer is to provision processing, storage,
networks, and other fundamental computing resources.
2. The consumer can deploy and run arbitrary software.

E.g., Amazon Web Services and FlexiScale.

 Platform as a Service (PaaS)

1. The capability provided to the consumer is to deploy onto the cloud infrastructure
consumer-created or acquired applications created using programming languages,
libraries, services, and tools supported by the provider.
2. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, or storage, but has control over the
deployed applications and possibly configuration settings for the application-
hosting environment.
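The dividing line between the three service models is which layers of the stack the consumer manages versus the provider. The sketch below encodes that split as data; the four layer names are a simplified, illustrative breakdown of the stack:

```python
# Who manages each layer of the stack under each service model.
# "provider" = cloud provider, "consumer" = cloud customer.
RESPONSIBILITY = {
    "SaaS": {"application": "provider", "runtime": "provider",
             "os": "provider", "hardware": "provider"},
    "PaaS": {"application": "consumer", "runtime": "provider",
             "os": "provider", "hardware": "provider"},
    "IaaS": {"application": "consumer", "runtime": "consumer",
             "os": "consumer", "hardware": "provider"},
}

def consumer_managed(model):
    """Layers the consumer is responsible for under a given model."""
    return sorted(layer for layer, owner in RESPONSIBILITY[model].items()
                  if owner == "consumer")

print(consumer_managed("SaaS"))  # []
print(consumer_managed("PaaS"))  # ['application']
print(consumer_managed("IaaS"))  # ['application', 'os', 'runtime']
```

In every model the physical hardware stays with the provider; what varies is how far up the stack the consumer's control extends.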

Types of Cloud (Deployment Models)

 Private cloud - The cloud infrastructure is operated solely for an organization.
E.g. Windows Server Hyper-V.
 Community cloud - The cloud infrastructure is shared by several organizations and
supports a specific goal.
 Public cloud - The cloud infrastructure is made available to the general public.
E.g. Google Docs, Google Sheets.
 Hybrid cloud - The cloud infrastructure is a composition of two or more clouds
(private, community, or public). E.g. cloud bursting for load balancing between
clouds.

Cloud and Virtualization


Virtual Workspaces:

An abstraction of an execution environment that can be made dynamically available to authorized clients by using well-defined protocols. A workspace specifies a resource quota (e.g. CPU, memory share) and a software configuration (e.g. operating system).

Implement on Virtual Machines (VMs):

 Abstraction of a physical host machine.
 A hypervisor intercepts and emulates instructions from VMs and allows management of VMs.
 Examples: VMware, Xen, KVM.

Provide infrastructure API:

 Plug-ins to hardware/support structures

Virtual Machines
VM technology allows multiple virtual machines to run on a single physical machine.

Advantages of virtual machines:

 Run operating systems where the physical hardware is unavailable.
 Easier to create new machines, back up machines, etc.
 Software testing using “clean” installs of operating systems and software.
 Emulate more machines than are physically available.
 Timeshare lightly loaded systems on one host.
 Debug problems (suspend and resume the problem machine).
 Easy migration of virtual machines (with or without shutdown).
 Run legacy systems.

Cloud-Sourcing
Why is it becoming important?

 Using high-scale/low-cost providers.
 Any time/place access via web browser.
 Rapid scalability; incremental cost and load sharing.

Concerns:

 Performance, reliability, and SLAs.
 Control of data and service parameters.
 Application features and choices.

Dew Computing
Dew Computing is an emerging area within the field of computing, which has garnered
interest due to its potential to further enhance the capabilities of cloud and fog computing
architectures. It focuses on the client-side processing and storage capabilities of devices,
aiming to leverage the underutilized resources available in client devices to augment cloud
services.

Characteristics of Dew Computing

Dew Computing is characterized by its emphasis on the local processing and storage
capabilities of devices. Unlike cloud computing, which relies on centralized servers for
processing and storage, Dew Computing advocates for a decentralized approach, utilizing the
computing power of client devices. Key characteristics include:

Local Autonomy: Devices in a Dew Computing environment can operate independently, even when disconnected from the Internet.
Collaboration with Cloud Services: It complements cloud services by providing local
processing and storage solutions, enhancing performance and reliability.

Resource Utilization: Maximizes the use of available resources on client devices to reduce
reliance on remote servers.

Architecture

The architecture of Dew Computing involves several layers, each contributing to the
seamless integration of local and cloud resources. These layers include:

1. Dew Layer: The foundational layer, consisting of client devices that provide local
computing and storage capabilities.
2. Fog Layer (optional): Acts as an intermediary between the Dew and Cloud layers,
facilitating edge computing processes closer to end-users.
3. Cloud Layer: The centralized servers that provide extensive computing resources and
services accessible over the Internet.
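The defining behaviour of the dew layer is local-first operation with opportunistic synchronization: reads and writes always succeed against local storage, and a sync step pushes pending changes up to the cloud layer whenever connectivity exists. A minimal sketch of that pattern (the class and method names are hypothetical, for illustration only):

```python
class DewNode:
    """Local-first store: reads/writes work offline; sync when online."""

    def __init__(self):
        self.local = {}      # dew layer: always available on the device
        self.cloud = {}      # stands in for the remote cloud layer
        self.pending = []    # keys written but not yet synchronized

    def write(self, key, value):
        self.local[key] = value      # succeeds even with no connectivity
        self.pending.append(key)

    def read(self, key):
        return self.local[key]       # served locally: no network round-trip

    def sync(self, online):
        """Push pending writes to the cloud if a connection exists."""
        if not online:
            return 0
        for key in self.pending:
            self.cloud[key] = self.local[key]
        pushed, self.pending = len(self.pending), []
        return pushed

node = DewNode()
node.write("sensor/temp", 21.5)      # works while disconnected
print(node.read("sensor/temp"))      # 21.5, served locally
print(node.sync(online=False))       # 0 -> nothing pushed yet
print(node.sync(online=True))        # 1 -> the change reaches the cloud
```

This is the "Collaboration with Cloud Services" characteristic in miniature: the device never blocks on the cloud, and the cloud eventually converges with local state.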

Benefits

Dew Computing offers several benefits over traditional cloud computing models:

1. By processing data locally, it significantly reduces the latency involved in data transmission to and from cloud servers.
2. Local processing of sensitive data minimizes the risk of data breaches associated with
centralized storage.
3. The autonomy of client devices ensures that applications remain operational even in
the event of Internet outages.
4. Utilizes the computing power of a vast number of client devices, potentially reducing
the need for large-scale data centres.

Applications

Dew Computing can be applied across various domains to improve efficiency, reliability, and
performance. Potential applications include:

1. Smart Home Systems: Enhancing privacy and responsiveness of smart home devices
by processing data locally.
2. Healthcare Monitoring: Local processing of health data to ensure patient privacy and
immediate responsiveness.
3. Internet of Things (IoT): Reducing latency and enhancing the functionality of IoT
devices by enabling local data processing and decision-making.
4. Content Delivery Networks (CDNs): Augmenting CDNs by caching content on client
devices, reducing bandwidth usage and improving content delivery speeds.

Challenges

While Dew Computing offers promising benefits, it also faces several challenges, including
security concerns, the need for standardization, and the management of distributed resources.
Future research directions may focus on developing secure protocols for Dew Computing
environments, creating standards for interoperability, and designing efficient resource
management algorithms.

Fog Computing
Fog computing, a term coined by Cisco Systems, refers to a decentralized computing
infrastructure in which data, compute, storage, and applications are distributed in the most
logical, efficient place between the data source and the cloud. This concept aims to bring the
advantages of cloud computing closer to where data is created and acted upon. By doing so, it
helps in reducing latency, enhancing data bandwidth, and improving security and privacy
aspects of a computing infrastructure.

What is Fog Computing?

Fog computing extends cloud computing and services to the edge of the network; just as fog consists of water droplets close to the ground, fog nodes sit close to the data source, hence the name. It uses edge devices to carry out a substantial amount of computation, storage, and communication locally, routing only what is necessary over the Internet backbone.

Key Characteristics of Fog Computing

1. Low Latency and Faster Response Time: By processing data closer to the source,
fog computing can significantly reduce latency and improve response times.

2. Improved Security: Local data processing can enhance data security and privacy by
reducing the amount of data transmitted over the network.
3. Geographical Distribution: Unlike centralized cloud computing, fog computing
encompasses multiple distributed nodes that can operate independently.
4. Scalability: Fog computing supports vertically and horizontally scalable services and
applications.
5. Mobility Support: Offers services to mobile devices, including dynamic location-
based services.

Architecture of Fog Computing

Fog computing architecture is hierarchical and distributed, designed to work at the network's
edge. It consists of three primary layers:

Cloud Layer

The topmost layer, responsible for global management, long-term analytics, and storage. It
oversees the entire fog network, offering services and resources that are not time-sensitive.

Fog Layer

This layer includes the fog nodes themselves, which can be deployed on devices like
industrial controllers, switches, routers, embedded servers, and video surveillance cameras.
These nodes provide the compute, storage, and networking services to the end devices.

Edge Layer

The bottom layer consists of the end devices or sensors that generate the data. These could be IoT devices, mobile phones, industrial machines, and other gadgets that collect and initially process data before sending it upward to the fog layer for further processing or for immediate action.
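The bandwidth and latency argument can be made concrete: instead of forwarding every raw sensor reading to the cloud, a fog node aggregates readings locally and sends only a compact summary upstream. A sketch under simplified assumptions (fixed-rate readings and a simple count/min/max/mean summary):

```python
def summarize(readings):
    """Fog-node aggregation: reduce raw readings to one summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# One minute of 1 Hz temperature readings from an edge sensor
# (synthetic integer values, purely for illustration).
raw = [20 + i % 5 for i in range(60)]

summary = summarize(raw)
print(summary["count"], summary["min"], summary["max"])  # 60 20 24
print(summary["mean"])                                   # 22.0

# Instead of 60 values, only 4 numbers cross the uplink to the cloud.
print(len(summary))  # 4
```

Sixty values become four, a 15x reduction on the uplink, and the summary is available locally with no cloud round-trip, which is exactly the low-latency benefit described above.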

Applications of Fog Computing

Fog computing finds its application in various fields due to its ability to process data quickly
and efficiently. Some of the prominent applications include:

1. Smart Grids: For real-time analysis and management of utility operations.
2. Healthcare: For real-time patient monitoring and data analysis, enhancing
telemedicine and remote healthcare services.
3. Smart Cities: In traffic management systems, public safety, and waste management by
processing data from various sensors and devices distributed across the city.
4. Industrial IoT (IIoT): In manufacturing for predictive maintenance, supply chain
management, and safety management.

Benefits of Fog Computing

1. Reduction in Bandwidth Needs: By processing data locally, fog computing
significantly reduces the need for bandwidth.
2. Real-time Data Processing and Analysis: Critical for applications requiring instant
decision-making.
3. Enhanced Security: By keeping sensitive data local, fog computing minimizes the risk
of data breaches.
4. Scalability and Flexibility: Enables businesses to scale up or down without significant
changes to the core infrastructure.

Challenges

While fog computing offers numerous benefits, it also faces several challenges:

1. Security and Privacy Concerns: Although fog computing can enhance security, the
distributed nature also introduces new vulnerabilities.
2. Complexity in Management: Managing a distributed network requires sophisticated
tools and protocols.
3. Interoperability: With many devices and platforms, ensuring seamless communication
can be challenging.

Serverless Computing
Serverless computing is a cloud-computing execution model in which the cloud provider
dynamically manages the allocation and provisioning of servers. This model allows
developers to build and run applications and services without having to manage
infrastructure. Here's an in-depth look into serverless computing, its components, advantages,
use cases, and challenges.

What is Serverless Computing?

Serverless computing, despite its name, does not eliminate servers from the computing
environment. Instead, it abstracts the server management aspect away from the application
developers. The cloud provider automatically handles the deployment, maintenance, and
scaling of servers, enabling developers to focus solely on writing code for individual
functions that execute in response to events.

Key Characteristics

 Event-driven: Functions are executed in response to specific triggers or events.
 Auto-scaling: Resources automatically scale based on the application's needs.
 Pay-as-you-go pricing model: Charges are based on the actual amount of resources consumed by applications, rather than on pre-purchased units of capacity.

Components of Serverless Architecture

1. Functions as a Service (FaaS)
 FaaS is the cornerstone of serverless computing. Developers deploy individual functions written in a language supported by the provider, which are then executed, scaled, and billed in response to specific events.
2. Backend as a Service (BaaS)
 BaaS components are third-party services that serverless applications can use,
including databases, authentication services, and storage systems. These
managed services are integrated into the serverless application through APIs.
3. Event Sources
 Event sources trigger the execution of serverless functions. These can be
HTTP requests, file uploads to a storage service, database operations, or in-
app activities, among others.
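In the FaaS model the unit of deployment is a single handler that the platform invokes once per event. The shape below mirrors the common `handler(event, context)` convention used by providers such as AWS Lambda, but the event fields here are illustrative assumptions, not a real platform's schema:

```python
import json

def handler(event, context=None):
    """A FaaS-style function: runs only when the platform delivers an event.

    `event` carries the trigger payload (here, an HTTP-like request);
    `context` would carry runtime metadata and is unused in this sketch.
    """
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, simulating the platform delivering an HTTP trigger:
resp = handler({"queryStringParameters": {"name": "cloud"}})
print(resp["statusCode"])  # 200
print(resp["body"])        # {"message": "hello, cloud"}
```

Everything outside this function, process startup, scaling to zero or to thousands of copies, and billing per invocation, is the provider's job.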

Advantages of Serverless Computing

1. With serverless computing, you only pay for the execution time of your functions,
potentially leading to significant cost savings compared to traditional cloud service
models.
2. The cloud provider manages servers, databases, and application logic layers, which
significantly reduces the operational burden on development teams.
3. Serverless applications can scale automatically with the volume of incoming requests
without any manual intervention.
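Pay-as-you-go billing for serverless is typically metered in memory-time (GB-seconds) plus a small per-request fee. The function below estimates a monthly bill under assumed illustrative rates; the prices are placeholders, not any provider's actual price list:

```python
def monthly_cost(invocations, avg_duration_s, memory_gb,
                 price_per_gb_s=0.00002, price_per_request=0.0000002):
    """Estimated monthly FaaS bill: compute time plus request fees.

    Rates are illustrative placeholders, not real provider pricing.
    """
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * price_per_gb_s + invocations * price_per_request

# 1M invocations per month, 200 ms each, 0.5 GB of memory:
cost = monthly_cost(1_000_000, 0.2, 0.5)
print(round(cost, 2))  # 2.2
```

The key property is that idle time costs nothing: with zero invocations the bill is zero, in contrast to a provisioned server that bills around the clock.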

Use Cases

 Web Applications: Building web applications without having to manage the underlying infrastructure.
 APIs: Creating scalable APIs that run in serverless functions.
 Data Processing: Executing code in response to events in a data stream or changes in
a database.
 IoT Applications: Managing the backend for IoT applications, where functions can
process data from devices.

Challenges

 Cold Starts: The initial invocation of a serverless function can suffer from latency due
to the time taken to provision resources.
 Monitoring and Debugging: Traditional monitoring and debugging tools may not be
directly applicable to serverless applications, requiring new approaches.
 Vendor Lock-in: Using proprietary features from cloud providers can lead to
difficulties in migrating applications between platforms.

Sustainable Computing
Sustainable computing, often termed as green computing, encompasses a wide range of
practices, strategies, and technologies aimed at reducing the environmental impact of
computing. This field focuses on designing, manufacturing, using, and disposing of
computers, servers, and associated subsystems—such as monitors, printers, storage devices,
and networking and communications systems—efficiently and effectively with minimal or no
impact on the environment.

Why Sustainable Computing?

Energy efficiency is at the core of sustainable computing. It involves optimizing computing resources to perform tasks while consuming the least amount of power possible. Techniques include power management features in hardware and software, efficient cooling systems, and the use of energy-efficient hardware components. Incorporating renewable energy sources, such as solar or wind power, to run data centres and computing infrastructures is another strategy to reduce the carbon footprint of computing operations.

Resource efficiency in sustainable computing involves designing hardware that is durable, easy to repair, and upgradeable, reducing the need to frequently replace equipment. It also encompasses the use of environmentally friendly materials in the manufacturing process. Proper recycling and disposal of electronic waste (e-waste) are crucial to minimizing the environmental impact of computing. Sustainable computing advocates for effective e-waste management practices, including recycling programs and regulations to ensure responsible e-waste handling.

Software plays a significant role in the overall energy consumption of computing systems. Energy-efficient software is designed to require less computational power, thereby reducing the energy consumption of the hardware running it. Cloud computing and virtualization can contribute to sustainability by optimizing resource usage. These technologies allow for the sharing of physical resources among multiple users or applications, leading to more efficient use of energy and hardware.

Challenges

1. Technological Challenges: The rapid pace of technological advancement poses challenges to sustainable computing, as newer technologies may not always be more energy-efficient or environmentally friendly. Ongoing research and innovation are required to overcome these challenges.
2. Policy and Regulation: Governments and international organizations play a
significant role in promoting sustainable computing through policies and regulations.
These may include standards for energy efficiency, requirements for the use of
renewable energy, and guidelines for e-waste management.
3. Awareness and Education: Raising awareness and educating stakeholders, including
manufacturers, businesses, and consumers, about the importance of sustainable
computing is crucial for its widespread adoption. Efforts should focus on
demonstrating the environmental and economic benefits of sustainable computing
practices.

Cloud Migration and Container Virtualization with Docker

Cloud Migration

Cloud migration is the process of moving digital business operations into the cloud. It is akin to a physical move, except that what shifts is data, applications, and IT processes from one data centre to another, rather than physical goods.
The goal often includes moving from on-premises or legacy infrastructure to the cloud or
moving from one cloud environment to another. Cloud migration enables organizations to
scale, reduce costs, and improve efficiency by leveraging the cloud's flexible and scalable
nature.

Types of Cloud Migration:

1. Lift-and-Shift
This approach involves moving applications and data from an on-premises data center
to the cloud with minimal or no modifications. It's quick and cost-effective but doesn't
take full advantage of cloud-native features.
2. Refactoring
In refactoring, applications are modified or rebuilt before they are moved to the cloud
to better align with cloud-native capabilities. This method is more time-consuming
and expensive but offers improved scalability and performance in the cloud
environment.
3. Replatforming
Replatforming involves making a few cloud optimizations to realize a tangible benefit
without changing the core architecture of the application. It strikes a balance between
lift-and-shift and refactoring by enabling some cloud benefits while avoiding the
complexity of a full refactor.

Benefits of Cloud Migration

1. Cloud environments allow businesses to easily scale their resources up or down based
on demand, providing flexibility and efficiency.
2. Migrating to the cloud can reduce operational costs by eliminating the need for
physical hardware and maintenance. Organizations pay only for the resources they
use.
3. Cloud providers invest heavily in security technologies and expertise, offering a level
of security that may be challenging for individual organizations to achieve on their
own.
4. Cloud environments often come with built-in disaster recovery capabilities, ensuring
data is backed up and can be restored quickly in the event of a loss.

Challenges in Cloud Migration

1. Ensuring data security and compliance with regulations during and after migration is a
significant concern for many organizations.
2. The complexity of an organization’s existing IT infrastructure can make migration a
challenging process, requiring careful planning and execution.

Container-Based Virtualization

What is Container-Based Virtualization?

Container-based virtualization, also known as containerization, is a lightweight form of virtualization that allows multiple isolated instances of an operating system (containers) to run on a single control host. Containers share the host system's kernel but package their application code, libraries, and dependencies into a single executable package.

Key Features of Containers

1. Efficiency
 Containers require fewer system resources than traditional virtual machines because they share the host system’s kernel instead of each running a full guest operating system.
2. Portability
 Containers encapsulate everything an application needs to run. This makes it
easy to move the containers between different environments while retaining
full functionality.
3. Scalability
 Containers can be easily created, destroyed, started, and stopped, making it
simple to scale applications up or down as needed.
4. Isolation
 Each container operates independently and does not interfere with others,
providing a secure and stable environment for applications.

Docker: A Popular Containerization Platform

Docker is an open-source platform that automates the deployment, scaling, and management
of applications within containers. It has become synonymous with containerization, providing
tools to help developers build, deploy, and run applications more efficiently.

Docker Components

1. Docker Engine
 The core part of Docker, responsible for creating and running Docker
containers.
2. Docker Images
 Blueprints for creating Docker containers, including the application and its
dependencies.
3. Docker Containers
 The runtime instances of Docker images, where the applications and services
run in an isolated environment.
4. Docker Hub
 A registry service provided by Docker for finding and sharing container
images with your team.
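These components fit together through a Dockerfile: a text recipe that Docker Engine builds into an image, which then runs as a container and can be shared via a registry such as Docker Hub. A minimal illustrative example for a Python web application follows; the file names, base-image tag, and port are assumptions, not a prescribed setup:

```dockerfile
# Base image pulled from Docker Hub (the tag is an assumption)
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code
COPY . .

# Document the port the app listens on, and the process to start
EXPOSE 8000
CMD ["python", "app.py"]
```

Building and running it exercises the components above: `docker build -t myapp .` has Docker Engine produce an image, `docker run -p 8000:8000 myapp` starts a container from that image, and tagging plus `docker push` would publish it to a registry.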

Benefits of Docker

1. Simplified Configuration
 Docker simplifies the process of configuring applications to run in different
environments.
2. Application Isolation
 Docker ensures that applications are isolated in their containers, increasing
security.
3. Rapid Deployment
 Docker containers can be created and started in seconds, leading to faster
deployment times.
4. Environment Consistency
 Docker containers provide consistency across different environments,
reducing the "it works on my machine" syndrome.

Conclusion
In conclusion, the course on Cloud Computing has provided a comprehensive understanding
of the fundamental principles, technologies, and applications of cloud computing. Through

detailed lectures, practical examples, and hands-on exercises, participants have gained
insights into the key concepts such as virtualization, scalability, elasticity, and service
models. The course has equipped learners with the knowledge and skills needed to leverage
cloud computing to address various business challenges, improve operational efficiency, and
drive innovation. Moreover, by exploring real-world case studies and industry best practices,
participants have gained practical insights into the implementation, management, and security
aspects of cloud-based solutions. As technology continues to evolve, the knowledge gained
from this course will empower participants to navigate the complexities of the cloud
computing landscape, adapt to emerging trends, and make informed decisions to harness the
full potential of cloud technologies in their respective domains. Overall, the course on Cloud
Computing has been instrumental in fostering a deeper understanding of this transformative
technology and its implications for organizations across diverse sectors.
