Cloud Computing Notes (Unit 1 to 5)

MEERUT INSTITUTE OF ENGINEERING &

TECHNOLOGY MEERUT

Course Content
For
Cloud Computing (KCS-713)

B.Tech IV Year
CSE, IT, CS-IT, CSE(AI), CSE(AI&ML), CSE(DS) and CSE(IOT)

Prepared By:

Mr. Ajay Kumar Sah


Department: CSE-IoT

Program: B.Tech

DR. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY


LUCKNOW



COLLEGE VISION AND MISSION

Vision
To be an outstanding institution in the country imparting technical education, providing need-
based, value-based and career-based programs and producing self-reliant, self-sufficient
technocrats capable of meeting new challenges.

Mission
The mission of the institute is to educate young aspirants in various technical fields to fulfill the
global requirement of human resources by providing sustainable quality education, training and an
invigorating environment, besides molding them into skilled, competent and socially responsible
citizens who will lead the building of a powerful nation.



DEPARTMENT VISION AND MISSION

Vision

To become a prominent department in the nation which provides quality education,
keeping pace with rapidly changing technologies, and to create technical
graduates of global standards, who develop capabilities of accepting new
challenges in the field of Information Technology.

Mission
M1: To provide quality education in the core and applied areas of information
technology, and develop students from all socio-economic levels into globally
competent professionals.

M2: To impart professional ethics, social responsibilities, moral values and
entrepreneurial skills to the students.

M3: To invigorate students' skills so that they deploy their potential in research and
development, and inculcate the habit of lifelong learning.



Program Educational Objectives, Program Outcomes, Program
Specific Outcome, Course Outcomes and Mapping with POs

Program Educational Objectives


PEO 1: Students will have successful careers in IT and allied sectors with high quality technical
skills for global competence.

PEO 2: To bring the physical, analytical and computational approaches of IT to solve real
world Engineering problems and provide innovative solutions by applying appropriate models,
tools and evaluations.

PEO 3: Be adaptable to rapidly changing technological advancements through continuous
learning and research to meet the diversified needs of the industry.

PEO 4: Students to imbibe professional attitudes, team spirit, effective communication and
contribute ethically to the needs of the society with moral values.

PEO 5: Encourage students for higher studies and entrepreneurial skills by imparting the
quality of lifelong learning in emerging technologies and work in multidisciplinary roles and
capacities.

Program Outcomes

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.

2. Problem analysis: Identify, formulate, review research literature, and analyse complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for public health and safety, and cultural, societal, and environmental
considerations.



4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities
with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.

7. Environment and sustainability: Understand the impact of professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for, sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one's own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.



Program Specific Outcomes

1. PSO 1: Ability to understand, apply and analyze computational concepts in the areas related
to algorithms, machine learning, multimedia, web designing, Data Science, and networking on
systems having different degrees of complexity.

2. PSO 2: Ability to apply standard practices and methodologies in software development and
project management using learned concepts and skills to deliver a quality product.

3. PSO 3: Ability to employ contemporary computer languages, environments and platforms
towards enriched career opportunities and zeal for higher studies.



Meerut Institute of Engineering & Technology, Meerut

Course Name (Code): Cloud Computing (KCS-713)


Topics/lectures are arranged in the same sequence as they are to be taught in class; the planned and actual delivery dates are maintained in the hard copy.

Lecture plan (each row: Lecture No., CO, Topic, Reference Material; S. No. equals Lecture No.):

1. CO1: Introduction to Cloud Computing (T1/T2/T3)
2. CO1: Evolution of Cloud Computing (T1/T2/T3)
3. CO1: Underlying Principles of Parallel and Distributed Computing (T1/T2/T3)
4. CO1: Characteristics of Cloud (T1/T2/T3)
5. CO1: Cloud Elasticity (T1/T2/T3)
6. CO1: On-Demand Computing (T1/T2/T3)
7. CO2: Cloud Enabling Technology: Service Oriented Architecture (T1/T2)
8. CO2: Characteristics of Contemporary SOA (T1/T2)
9. CO2: Primitive SOA (T1/T2)
10. CO2: SOA Characteristics (T1/T2)
11. CO2: REST and Systems of Systems (T1/T2)
12. CO2: Web Services (T1/T2)
13. CO2: Web Service Architecture (T1/T2)
14. CO2: What is Virtualization (T1/T2)
15. CO2: Types of Virtualization (T1/T2)
16. CO2: Implementation Levels of Virtualization Structure (T1/T2)
17. CO2: Virtualization of CPU (T1/T2)
18. CO2: Virtualization of Memory and I/O Devices (T1/T2)
19. CO3: Cloud Computing Architecture Overview: Frontend, Backend (T1/T2/T3)
20. CO3: Layered Cloud Architecture Design (T1/T2/T3)
21. CO3: NIST Cloud Computing Reference Architecture (T1/T2/T3)
22. CO3: Cloud Deployment Models (T1/T2)
23. CO3: Community Cloud (T1/T2)
24. CO3: Cloud Storage as a Service (T1/T2)
25. CO4: Inter-Cloud Resource Management: Resource Provisioning (T1/T2/T3)
26. CO4: Resource Provisioning Methods (T1/T2/T3)
27. CO4: Global Exchange of Cloud Resources (T1/T2/T3)
28. CO4: Security Overview and Definition (T1/T2/T3)
29. CO4: Cloud Security Challenges (T1/T2/T3)
30. CO4: Software-as-a-Service Security (T1/T2/T3)
31. CO4: Security Governance (T1/T2/T3)
32. CO4: Virtual Machine Security (T1/T2/T3)
33. CO4: IAM, Security Standards (T1/T2/T3)
34. CO5: Cloud Technologies and Advancements: Hadoop (T1/T2/T3)
35. CO5: MapReduce (T1/T2/T3)
36. CO5: VirtualBox, Google App Engine (T1/T2/T3)
37. CO5: Programming Environment for Google App Engine (T1/T2/T3)
38. CO5: OpenStack (T1/T2/T3)
39. CO5: Federation in the Cloud and Four Levels of Federation (T1/T2/T3)
40. CO5: Federation Services and Applications, Future of Federation (T1/T2/T3)



UNIT-I
1. INTRODUCTION TO CLOUD COMPUTING

Lecture-1:
1.1 What is Cloud Computing?

Cloud computing means storing and accessing data and programs on remote servers hosted
on the internet instead of on the computer's hard drive or a local server. Cloud computing
is also referred to as Internet-based computing: it is a technology in which resources are provided
as a service through the Internet to the user. The data that is stored can be files, images,
documents, or any other storable data.

Some operations which can be performed with cloud computing are –

 Storage, backup, and recovery of data
 Delivery of software on demand
 Development of new applications and services
 Streaming videos and audio

Nowadays, cloud computing is adopted by every company, whether it is an MNC or a startup,
and many are still migrating towards it because of the cost savings, lower maintenance, and the
increased data capacity provided by servers maintained by the cloud providers. One
more reason for this drastic shift from on-premises servers to cloud providers is the
'pay as you go' model, i.e., you only have to pay for the service you are using. The
disadvantage of an on-premises server is that even if the server is not in use, the company
still has to pay for it.

1.2 Why Learn Cloud Computing?


Over the past decade, cloud computing has played an increasing role in helping organizations
operate. Even more non-technical jobs are transitioning to cloud platforms to improve
operations and lower costs. The global market for cloud computing grew in the decade between
2010 and 2020 from $24 billion to $156 billion, a 635% jump. Currently, more than 90% of
organizations use the cloud. This trend is expected to continue. Many top organizations like
Microsoft, Amazon, and Google are contributing to this expansion. For those interested in a
career in cloud computing, many options are available in both small and large organizations.



1.3 Where is cloud computing used?
Organizations of every type, size, and industry are using the cloud for a wide variety of use cases, such
as data backup, disaster recovery, email, virtual desktops, software development and testing, big data
analytics, and customer-facing web applications. For example, healthcare companies are using the cloud
to develop more personalized treatments for patients. Financial services companies are using the cloud to
power real-time fraud detection and prevention. And video game makers are using the cloud to deliver
online games to millions of players around the world.

Lecture-2:
2. Evolution of Cloud Computing
Cloud computing is all about renting computing services. This idea first came about in the 1950s. Five technologies
played a vital role in making cloud computing what it is today: distributed systems and their peripherals,
virtualization, Web 2.0, service orientation, and utility computing.

Fig-1: Evolution of Cloud Computing



2.1 Distributed Systems:
It is a composition of multiple independent systems but all of them are depicted as a single
entity to the users. The purpose of distributed systems is to share resources and also use them
effectively and efficiently. Distributed systems possess characteristics such as scalability,
concurrency, continuous availability, heterogeneity, and independence in failures. But the main
problem with this system was that all the systems were required to be present at the same
geographical location. Thus, to solve this problem, distributed computing led to three more
types of computing: mainframe computing, cluster computing, and grid
computing.
2.2 Mainframe computing:
Mainframes which first came into existence in 1951 are highly powerful and reliable computing
machines. These are responsible for handling large data such as massive input- output
operations. Even today these are used for bulk processing tasks such as online transactions etc.
These systems have almost no downtime with high fault tolerance. After distributed computing,
these increased the processing capabilities of the system. But these were very expensive. To
reduce this cost, cluster computing came as an alternative to mainframe technology.

2.3 Cluster computing:

In the 1980s, cluster computing came as an alternative to mainframe computing. Each machine in
the cluster was connected to the others by a high-bandwidth network. Clusters were far
cheaper than mainframe systems and equally capable of high computation.
Also, new nodes could easily be added to the cluster if required. Thus, the problem of
cost was solved to some extent, but the problem of geographical restrictions still
persisted. To solve this, the concept of grid computing was introduced.

2.4 Grid computing:


In the 1990s, the concept of grid computing was introduced. It means that different systems were
placed at entirely different geographical locations and these all were connected via the internet.
These systems belonged to different organizations and thus the grid consisted of heterogeneous
nodes. Although it solved some problems, new problems emerged as the distance between
the nodes increased. The main problem encountered was the low availability of high-bandwidth
connectivity, along with other network-associated issues. Thus, cloud computing is
often referred to as the "successor of grid computing".



2.5 Virtualization:

It was introduced nearly 40 years back. It refers to the process of creating a virtual layer over
the hardware which allows the user to run multiple instances simultaneously on the hardware.
It is a key technology used in cloud computing; it is the base on which major cloud computing
services such as Amazon EC2, VMware vCloud, etc. work. Hardware virtualization is still
one of the most common types of virtualization.

2.6 Web 2.0:

It is the interface through which the cloud computing services interact with the clients. It is
because of Web 2.0 that we have interactive and dynamic web pages. It also increases
flexibility among web pages. Popular examples of web 2.0 include Google Maps, Facebook,
Twitter, etc. Needless to say, social media is possible because of this technology only. It gained
major popularity in 2004.

2.7 Service orientation:

It acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable
applications. Two important concepts were introduced in this computing model. These were
Quality of Service (QoS) which also includes the SLA (Service Level Agreement) and
Software as a Service (SaaS).

2.8 Utility computing:

It is a computing model that defines service provisioning techniques for compute services
along with other major services such as storage, infrastructure, etc., which are
provisioned on a pay-per-use basis.



Lecture: 3
3. Underlying Principles of Parallel and Distributed Computing
3.1 PARALLEL COMPUTING:

In parallel computing, multiple processors perform multiple tasks assigned to them
simultaneously. Memory in parallel systems can either be shared or distributed. Parallel
computing provides concurrency and saves time and money.

3.2 DISTRIBUTED COMPUTING:

In distributed computing, we have multiple autonomous computers which appear to the user as a
single system. In distributed systems there is no shared memory, and computers communicate
with each other through message passing. In distributed computing, a single task is divided
among different computers.

Difference between Parallel Computing and Distributed Computing:

Sr. No. | Parallel Computing | Distributed Computing
1 | Many operations are performed simultaneously. | System components are located at different locations.
2 | A single computer is required. | Uses multiple computers.
3 | Multiple processors perform multiple operations. | Multiple computers perform multiple operations.
4 | It may have shared or distributed memory. | It has only distributed memory.
5 | Processors communicate with each other through a bus. | Computers communicate with each other through message passing.
6 | Improves system performance. | Improves system scalability, fault tolerance and resource sharing capabilities.
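As a minimal illustration of the parallel model in the table above, the following Python sketch (the worker function and numbers are hypothetical) performs one task simultaneously on several processors of a single computer:

```python
# Parallel computing sketch: one machine, several processors, one shared task pool.
from multiprocessing import Pool

def square(n):
    # Stand-in for any CPU-bound piece of the overall task.
    return n * n

if __name__ == "__main__":
    # Four worker processes perform operations simultaneously on one computer.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In the distributed model there is no shared memory to hand `range(10)` through; each autonomous computer would instead receive its share of the work as a message (for example over sockets or an RPC framework) and return its partial result the same way.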



Lecture: 4
4. CHARACTERISTICS OF CLOUD

There are many characteristics of Cloud computing here are few of them:

 On-demand self-service: Cloud computing services do not require any human
administrators; users themselves are able to provision, monitor and manage computing
resources as needed.
 Broad network access: The computing services are generally provided over standard
networks and heterogeneous devices.
 Rapid elasticity: The computing services should have IT resources that are able to scale
out and in quickly and on an as-needed basis. Whenever the user requires a service it is
provided, and it is scaled in as soon as the requirement is over.
 Resource pooling: The IT resources (e.g., networks, servers, storage, applications, and
services) present are shared across multiple applications and tenants in an
uncommitted manner. Multiple clients are provided service from the same physical
resource.
 Measured service: The resource utilization is tracked for each application and tenant;
this provides both the user and the resource provider with an account of what has been
used. This is done for various reasons, such as monitoring, billing and effective use of
resources.
 Multi-tenancy: Cloud computing providers can support multiple tenants (users or
organizations) on a single set of shared resources.
 Virtualization: Cloud computing providers use virtualization technology to abstract
underlying hardware resources and present them as logical resources to users.
 Resilient computing: Cloud computing services are typically designed with redundancy
and fault tolerance in mind, which ensures high availability and reliability.
 Flexible pricing models: Cloud providers offer a variety of pricing models, including
pay-per-use, subscription-based, and spot pricing, allowing users to choose the option
that best suits their needs.



 Security: Cloud providers invest heavily in security measures to protect their
users' data and ensure the privacy of sensitive information.
 Automation: Cloud computing services are often highly automated, allowing
users to deploy and manage resources with minimal manual intervention.
 Sustainability: Cloud providers are increasingly focused on sustainable practices,
such as energy-efficient data centers and the use of renewable energy sources, to
reduce their environmental impact.

Fig-2: Cloud Service

Lecture: 5
5.1 CLOUD ELASTICITY:

Elasticity refers to the ability of a cloud to automatically expand or shrink its
infrastructural resources in response to sudden rises and falls in demand, so that the workload can
be managed efficiently. This elasticity helps to minimize infrastructural costs. It is not
applicable to all kinds of environments; it is helpful only in scenarios where
resource requirements fluctuate up and down suddenly for a specific time interval. It is not
practical where a persistent resource infrastructure is required to handle a heavy
workload.



Elasticity is vital for mission-critical or business-critical applications, where any compromise
in performance may lead to huge business loss. Thus, elasticity comes into the picture where
additional resources are provisioned for such applications to meet the performance requirements.

It works in such a way that when the number of user accesses increases, applications are automatically
provisioned with the additional computing, storage and network resources such as CPU,
memory, storage or bandwidth, and when there are fewer users it automatically reduces
those as per requirement.
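A minimal sketch of this behaviour, with hypothetical thresholds and instance counts rather than any specific provider's autoscaling API:

```python
# Threshold-based elasticity sketch: grow on load spikes, shrink when idle.
def autoscale(current_instances, cpu_utilization,
              scale_out_at=0.80, scale_in_at=0.30,
              min_instances=1, max_instances=10):
    """Return the instance count for the next monitoring interval."""
    if cpu_utilization > scale_out_at:
        # Sudden rise in demand: provision an extra instance, up to the cap.
        return min(current_instances + 1, max_instances)
    if cpu_utilization < scale_in_at:
        # Demand has dropped: release an instance, down to the floor.
        return max(current_instances - 1, min_instances)
    return current_instances  # demand is steady; do nothing

print(autoscale(2, 0.92))  # -> 3 (scale out during a spike)
print(autoscale(3, 0.10))  # -> 2 (scale in when traffic falls)
```

A real cloud evaluates a rule like this every monitoring interval and attaches or detaches CPU, memory, storage or bandwidth accordingly.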

5.2 CLOUD SCALABILITY:

Cloud scalability is used to handle the growing workload where good performance is also
needed to work efficiently with software or applications. Scalability is commonly used where
the persistent deployment of resources is required to handle the workload statically.

5.3 Types of Scalability:

5.3.1 Vertical Scalability (Scale-up) –

In this type of scalability, we increase the power of existing resources in the working
environment in an upward direction.

Fig-3: Vertical Scaling



5.3.2 Horizontal Scalability:

In this kind of scaling, the resources are added in a horizontal row.

Fig-4: Horizontal Scaling


5.3.3 Diagonal Scalability –

It is a mixture of both Horizontal and Vertical scalability where the resources are added both
vertically and horizontally.
Fig-5: Diagonal Scaling

Sr. No. | Cloud Elasticity | Cloud Scalability
1 | Elasticity is used just to meet sudden rises and falls in the workload for a small period of time. | Scalability is used to meet a static increase in the workload.
2 | Elasticity is used to meet dynamic changes, where the resource needs can increase or decrease. | Scalability is always used to address the increase in workload in an organization.
3 | Elasticity is commonly used by small companies whose workload and demand increase only for a specific period of time. | Scalability is used by giant companies whose customer circle persistently grows, in order to do their operations efficiently.
4 | It is short-term planning, adopted just to deal with an unexpected increase in demand or seasonal demands. | It is long-term planning, adopted just to deal with an expected increase in demand.



Lecture: 6
6. WHAT IS ON-DEMAND COMPUTING?

On-demand computing (ODC) is a delivery model in which computing resources are made
available to the user as needed. The resources may be maintained within the user's enterprise
or made available by a cloud service provider. The term cloud computing is often used as a
synonym for on-demand computing when the services are provided by a third party -- such as
a cloud hosting organization.

The on-demand business computing model was developed to overcome the challenge of
enterprises meeting fluctuating demands efficiently. Because an enterprise's demand for
computing resources can be unpredictable at times, maintaining sufficient resources to meet
peak requirements can be costly. And cutting costs by only maintaining minimal resources
means there are likely insufficient resources to meet peak loads. The on-demand model
provides an enterprise with the ability to scale computing resources up or down whenever
needed, with the click of a button.

The model is characterized by three attributes: scalability, pay-per-use and self-service.
Whether the resource is an application program that helps team members collaborate or
provides additional storage, the computing resources are elastic, metered and easy to obtain.
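The pay-per-use attribute can be made concrete with a small metered-billing calculation; the rates below are invented for illustration and are not any provider's real pricing:

```python
# Hypothetical pay-per-use bill: the customer pays only for measured usage.
RATE_PER_VCPU_HOUR = 0.05   # invented rate, USD per vCPU-hour
RATE_PER_GB_MONTH  = 0.02   # invented rate, USD per GB stored per month

def monthly_bill(vcpu_hours, gb_months):
    return vcpu_hours * RATE_PER_VCPU_HOUR + gb_months * RATE_PER_GB_MONTH

# 3 VMs with 2 vCPUs each, run only for a 40-hour demand spike, plus 500 GB stored:
print(f"${monthly_bill(3 * 2 * 40, 500):.2f}")  # -> $22.00
```

With on-premises hardware the company would pay for the peak capacity all month; here the compute charge stops the moment the instances are released.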

When an organization pairs with a third party to provide on-demand computing, it either
subscribes to the service or uses a pay-per-use model. The third party then provides computing
resources whenever needed, including when the organization is working on temporary
projects, has expected or unexpected workloads, or has long-term computing requirements.
For example, a retail organization could use on-demand computing to scale up its online
services, providing additional computing resources during a high-volume time such as Black
Friday.

On-demand computing normally provides computing resources such as storage capacity, or
hardware and software applications. The service itself is provided using methods including
virtualization, computer clusters and distributed computing.



6.1 How does cloud computing provide on-demand functionality?

Cloud computing is a general term for anything that involves delivering hosted services over
the internet. These services are divided into different types of cloud computing resources and
applications.

o IaaS provides virtualized computing resources over the internet.

o SaaS is a software distribution model where a cloud provider hosts applications and
makes them available to users over the internet.

o DaaS is a form of cloud computing where a third party hosts the back end of a virtual
desktop infrastructure.

o PaaS is a model in which a third-party provider hosts customer applications on
their infrastructure. Hardware and software tools are delivered to users over the
internet.

o Managed hosting services are an IT provisioning and cloud server hosting model
where a service provider leases dedicated servers and associated hardware to a single
customer and manages those systems on the customer's behalf.

o Cloud storage is a service model where data is transmitted and stored securely on
remote storage systems, where it is maintained, managed, backed up and made
available to users over a network.

o Cloud backup is a strategy for sending a copy of a file or database to a secondary
location for preservation in case of equipment failure.

6.2 Benefits of on-demand computing

On-demand computing offers the following benefits:

6.3 Flexibility to meet fluctuating demands.

Users can quickly increase or decrease their computing resources as needed -- either
short-term or long-term.



6.4 Removes the need to purchase, maintain and upgrade hardware.

The cloud service organization managing the on-demand services handles resources such as
servers and hardware, system updates and maintenance.

6.5 User friendly.

Many on-demand computing services in the cloud are user friendly, enabling most users to
easily acquire additional computing resources without any help from their IT department.
This can help to improve business agility.

6.6 Cut costs.

Saves money because organizations don't have to purchase hardware or software to meet
peaks in demand. Organizations also don't have to worry about updating or maintaining those
resources.



UNIT-II
7. Cloud Enabling Technology: Service Oriented Architecture
Lecture: 7

7.1 What is SOA?

SOA is a style of software design in which services are provided to other application components
through a communication protocol over a network. The basic
principle of SOA is independence from specific technologies, products, and vendors. Each service in an SOA
embodies the code and data integrations required to execute a complete, discrete business function (e.g.,
checking a customer’s credit, calculating a monthly loan payment, or processing a mortgage application).
The service interfaces provide loose coupling, meaning they can be called with little or no knowledge of
how the integration is implemented underneath. The services are exposed using standard network
protocols—such as SOAP (simple object access protocol)/HTTP or JSON/HTTP—to send requests to
read or change data. The services are published in a way that enables developers to quickly find them and
reuse them to assemble new applications.
These services can be built from scratch but are often created by exposing functions from legacy systems
of record as service interfaces.
Service-Oriented Architecture (SOA) is an architectural style that supports service-orientation. SOA is
an architecture that publishes services in the form of XML interface. Applications built using an SOA
style deliver functionality as services, which can be used or reused when building applications or
integrating within the enterprise or trading partners.
 SOA are based on a mesh of software services
 Each service implements one action, such as filling out an online application for an account,
viewing an online bank-statement, or placing an online booking or airline ticket order

7.2 Why do we need SOA?

SOA can help organizations streamline processes so that they can do business more efficiently, and adapt
to changing needs and competition, enabling the software as a service concept. eBay for example, is
opening up its web services API for its online auction. The goal is to drive developers to make money
around the eBay platform. Through the new APIs, developers can build custom applications that link to
the online auction site and allow applications to submit items for sale. Such applications are typically
aimed at sellers, since buyers must still head to ebay.com to bid on items. This type of strategy, however,
will increase the customer base for eBay.
In this way, SOA represents an important stage in the evolution of application development and
integration over the last few decades. Before SOA emerged in the late 1990s, connecting an application
to data or functionality housed in another system required complex point-to-point integration—
integration that developers had to recreate, in part or whole, for each new development project. Exposing
those functions through SOA eliminates the need to recreate the deep integration every time.
7.3 Where is SOA used?
Developers use SOA to reuse services in different systems or combine several independent services to
perform complex tasks.
For example, multiple business processes in an organization require the user authentication
functionality. Instead of rewriting the authentication code for all business processes, you can create a
single authentication service and reuse it for all applications.
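A minimal sketch of such a shared service, assuming Python with Flask (the endpoint name, credential store and token scheme are placeholders, not production code):

```python
# One reusable authentication service that every business process can call.
from flask import Flask, request, jsonify
import secrets

app = Flask(__name__)
USERS = {"alice": "wonderland"}   # hypothetical credential store
TOKENS = {}                       # tokens issued so far, kept in memory

@app.route("/auth", methods=["POST"])
def authenticate():
    body = request.get_json(force=True)
    if USERS.get(body.get("username")) == body.get("password"):
        token = secrets.token_hex(16)      # opaque token returned to the caller
        TOKENS[token] = body["username"]
        return jsonify({"token": token}), 200
    return jsonify({"error": "invalid credentials"}), 401

if __name__ == "__main__":
    app.run(port=5000)
```

Each business process simply POSTs credentials to /auth instead of embedding its own authentication code, which is exactly the reuse described above.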
Lecture: 8

7.4 Primitive SOA: - SOA is a constantly growing field with various vendors developing SOA
products regularly. A baseline service-oriented architecture that is suitable to be realized by any vendor is
known as the primitive SOA. Baseline SOA, common SOA and core SOA are some of the other terms
used to refer to the primitive SOA. Application of service-orientation principles to software solutions
produces services and these are the basic unit of logic in the SOA. These services can exist autonomously,
but they are certainly not isolated. Services maintain certain common and standard features, yet they can
be evolved and extended independently. Services can be combined to create other services. Services are
aware of other services only through service descriptions and therefore can be considered loosely
coupled. Services communicate using autonomous messages that are intelligent enough to self-govern
their own parts of logic. The most important (primitive) SOA design principles are loose coupling, service
contract, autonomy, abstraction, reusability, composability, statelessness and discoverability.

Fig-6: Primitive SOA



7.5 Contemporary SOA: - Contemporary SOA is the classification that is used to represent the
extensions to the primitive SOA implementations in order to further achieve the goals of service-
orientation. In other words, contemporary SOA is used to take the primitive SOA to a target SOA state
that the organizations would like to have in the future. But, as SOA (in general) evolves with time, the
primitive SOA is expanded by inheriting the attributes of contemporary SOA.
the growth of the primitive SOA by introducing new features, and then these features are adapted by the
primitive SOA model making its horizon larger than before. For all these reasons, contemporary SOA is
also referred to as future state SOA, target SOA or extended SOA.

Fig-7: Contemporary SOA



Lecture-9:
7.6 Characteristics of Contemporary SOA
a) SOA increases quality of service.

b) SOA is fundamentally autonomous.

c) SOA supports vendor diversity.

d) SOA fosters intrinsic interoperability.

e) SOA promotes discovery.

f) SOA promotes federation.

g) SOA supports a service-oriented business modeling paradigm.

h) SOA implements layers of abstraction.

i) SOA promotes loose coupling throughout the enterprise.

j) SOA promotes organizational agility.

k) SOA emphasizes extensibility.

l) SOA is an evolution.



7.7 Difference between Primitive SOA and Contemporary SOA

Contemporary SOA and primitive SOA differ on the purpose they stand for within the context of SOA.
Primitive SOA is the baseline service-oriented architecture while, contemporary SOA is used to represent
the extensions to the primitive SOA. Primitive SOA provides a guideline to be realized by all vendors,
whereas Contemporary SOA expands the SOA horizon by adding new features to primitive SOA.
Currently, Contemporary SOA focuses on securing content of messages, improving reliability through
delivery status notifications, enhancing XML/SOAP processing and transaction processing to account for
task failure.

7.8 Service-Oriented Business and Government


Every business and government organization is engaged in delivering services. Here are some examples:

a. Bank- Savings accounts, checking accounts, credit cards, safety deposit boxes, consumer
loans, mortgages, credit verification.

b. Travel agency- Holiday planning, business travel, travel insurance, annual summary of business
travel expenditures.

c. Insurance agency- Car insurance, home insurance, health insurance, accident assessment.

d. Retail store- In-store shopping, online shopping, catalog shopping, credit cards, extended
warranties, repair services.

e. Lawyer's office- Legal advice, wills preparation, business incorporation, bankruptcy proceedings.

f. Hospital- Emergency medical care, in-patient services, out-patient services, chronic pain management.

g. Department of transportation- Driver testing and licensing, vehicle licensing, license administration, vehicle inspections and emissions testing.

h. Department of human services- Benefits disbursement and administration, child support services and case management.

i. Police department- Law enforcement, community education.



Fig-8: Using SOA to align business and information technology



Lecture-10:
7.9 SOA Characteristics
The primary characteristics that should go into the design, implementation, and management of
services are as follows:

 Loosely coupled.
 Well-defined service contracts.
 Meaningful to service requesters.
 Standards-based.
A service should also possess as many of the following secondary characteristics as possible in order
to deliver the greatest business and technical benefits:

 Predictable service-level agreements.


 Dynamic, discoverable, metadata-driven.
 Design service contracts with related services in mind.
 Implementation independent of other services.
 Consider the need for compensating transactions.
 Design for multiple invocation styles.
 Stateless.
 Design services with performance in mind.

7.10 Primary Characteristics


7.10.1 Loosely Coupled Services

The notion of designing services to be loosely coupled is the most important, the most far reaching, and
the least understood service characteristic. Loose coupling is a broad term that actually refers to several
different elements of a service, its implementation, and its usage.

7.10.2 Interface coupling refers to the coupling between service requesters and service providers.
Interface coupling measures the dependencies that the service provider imposes on the service requester:
the fewer the dependencies, the looser the coupling. Ideally, the service requester should be able to use a
service solely based on the published service contract and service-level agreement (see the next section),
and under no circumstances should the service requester require information about the internal
implementation of the service (for example, requiring that one of the input parameters be a SQL command
because the service provider uses an RDBMS as a data store). Another way of saying this is that the
interface should encapsulate all implementation details and make them opaque to service requesters.

7.10.3 Technology coupling measures the extent to which a service depends on a particular technology,
product, or development platform (operating systems, application servers, packaged applications, and
middleware platforms). For instance, if an organization standardizes on J2EE for implementing all
services and requires all service requesters and service providers to use JNDI to look up user and role
information, then the service is tightly coupled to the J2EE platform, which limits the extent to which
diverse service requesters can access these services and the extent to which the service can be outsourced
to a third-party provider.
Process coupling measures the extent to which a service is tied to a particular business process. Ideally,
a service should not be tied to a single business process so that it can be reused across many different
processes and applications. However, there are exceptions. For instance, sometimes it is important to
define a service contract for a piece of business functionality (e.g., Photocopy-Check) that is only used
in one business process so that you have the option of non-invasively substituting another
implementation in the future. However, in this case, don't expect the service to be reusable across different
processes and applications.

7.11 Well-Defined Service Contracts

Every service should have a well-defined interface called its service contract that clearly defines the
service's capabilities and how to invoke the service in an interoperable fashion, and that clearly separates
the service's externally accessible interface from the service's technical implementation. In this context,
WSDL provides the basis for service contracts; however, a service contract goes well beyond what can be
defined in WSDL to include document metadata, security metadata, and policy metadata using the WS-
Policy family of specifications. It is important that the service contract is defined based on knowledge of
the business domain and is not simply derived from the service's implementation.

Furthermore, changing a service contract is generally much more expensive than modifying the
implementation of a service because changing a service contract might require changing hundreds or
thousands of service requesters, while modifying the implementation of a service does not usually have
such far reaching effects. As a corollary, it is important to have a formal mechanism for extending and
versioning service contracts to manage these dependencies and costs.

7.12 Meaningful to the Service Requester

Services and service contracts must be defined at a level of abstraction that makes sense to service
requesters. An appropriate level of abstraction will:

 Capture the essence of the business service being provided without unnecessarily restricting future
uses or implementations of the service.
 Use a business-oriented vocabulary drawn from the business service domain to define the business
service and the input and output documents of the business service.
 Avoid exposing technical details such as internal structures or conventions to service requesters.

An abstract interface promotes substitutability; that is, the interface captures a business theme and is
independent of a specific implementation, which allows a new service provider to be substituted for an
existing service provider as necessary without affecting any of the service requesters. In this way,
defining abstract interfaces that are meaningful to service requesters promotes loose coupling.
7.13 Technical Benefits of a Service-Oriented Architecture

Services that possess the characteristics discussed earlier deliver the following technical benefits:

 Efficient development.
 More reuse.
 Simplified maintenance.
 Incremental adoption.
 Graceful evolution.

Efficient Development

An SOA promotes modularity because services are loosely coupled. This modularity has positive
implications for the development of composite applications because:

 After the service contracts have been defined (including the service-level data models), each
service can be designed and implemented separately by the developers who best understand the particular
functionality. In fact, the developers working on a service have no need to interact with or even know
about the developers working on the other business services.
 Service requesters can be designed and implemented based solely on the published service
contracts without any need to contact the developers who created the service provider and without
access to the source code that implements the service provider (as long as the developers have access to
information about the semantics of the service; for example, the service registry may provide a link to
comprehensive documentation about the semantics of the service).
7.14 Advantages and Disadvantages of SOA
7.14.1 Advantages

1. Maintenance is Easy – Editing and updating any service implemented under the SOA architecture is easy.
You don't need to update your whole system: the service is maintained by a third party, and any amendment
to it won't affect your system. In most cases the previous API keeps working as it did before.

2. Improved Code Quality – As services run independently of our system, they have their own kind of
code; therefore our code avoids redundancy and becomes cleaner and less error-prone.

3. Platform Independence – Services communicate with other applications through a common
language, which means they are independent of the platform on which the application is running. Services can provide
APIs in different languages, e.g., PHP, JavaScript, etc.



4. Scalable – If any service gets many users, it can easily be scaled by attaching additional
servers. This keeps the service available to its users at all times.

5. Reliable – Services are typically small in size compared to a full-fledged application, so
it is easier to debug and test independent services.

6. Same Directory Structure – Services keep the same directory structure, so consumers can access
the service from the same directory every time. Even if a service changes its
location, the directory remains the same. This is very helpful for consumers.

7. Independent of Other Services – Services built using SOA principles are independent of each
other, so a service can be used by multiple applications at the same time.

7.14.2 Disadvantages
1. High Bandwidth Server – A web service sends and receives messages and data frequently, so it
can easily reach a high number of requests per day. Running a web service therefore requires a
high-speed server with plenty of bandwidth.

2. Extra Overhead – In SOA, every input is validated before it is sent to the service. If
you are using multiple services, this overloads your system with extra computation.

3. High Cost – It is expensive in terms of human resources, development, and technology.



Lecture-11:
8.1 What is REST?

The acronym REST stands for REpresentational State Transfer. The term was originally coined by Roy
Fielding, one of the principal authors of the HTTP specification. REpresentational State Transfer, or REST, is a
design pattern for interacting with resources stored in a server. Each resource has an identity, a data type,
and supports a set of actions. REST is a simple way to organize interactions between independent systems.
It has been growing in popularity since 2005, and inspires the design of services such as the Twitter API.
This is due to the fact that REST allows you to interact with minimal overhead with clients as diverse as
mobile phones and other websites. In theory, REST is not tied to the web, but it is almost always
implemented as such, and was inspired by HTTP. As a result, REST can be used wherever HTTP can.
The RESTful design pattern is normally used in combination with HTTP, the language of the internet. In
this context the resource's identity is its URI, the data type is its Media Type, and the actions are made up
of the standard HTTP methods (GET, PUT, POST, and DELETE). The HTTP POST method is used for
creating a resource, GET is used to query it, PUT is used to change it, and DELETE is used to destroy it.
These correspond to create, read, update, and delete (CRUD) operations, respectively.
The most common RESTful architecture involves a shared data model that is used across these four
operations. This data model defines the input to the POST method (create), the output of the GET
method (read) and the input to the PUT method (replace). A fifth HTTP method called HEAD is
sometimes supported by RESTful web services. This method is equivalent to GET, except that it returns
only HTTP headers and no body data. It is sometimes used to test the existence of a resource. Not all
RESTful APIs support the HEAD method. There are a number of other verbs too, but they are used less frequently.
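A minimal sketch of the four CRUD calls using Python's requests library (the customers URI and field names are placeholders, following the http://www.example.com examples used later in this section):

```python
# CRUD against a hypothetical customers resource.
import requests

base = "http://www.example.com/customers"

r = requests.post(base, json={"name": "Asha"})   # create -> expect 201
new_uri = r.headers.get("Location")              # URI of the newly created resource

r = requests.get(f"{base}/12345")                # read   -> expect 200 (JSON or XML)
customer = r.json()

r = requests.put(f"{base}/12345",                # update/replace -> expect 200 or 204
                 json={"name": "Asha K"})

r = requests.delete(f"{base}/12345")             # delete -> expect 200 or 204
print(r.status_code)
```

Note how the same data model (the customer representation) flows through the POST input, the GET output and the PUT input, as described above.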
8.2 Why do we need REST?
Representational State Transfer (REST) is a set of guidelines that ensure high quality in applications like
web services by emphasizing simplicity, performance, and scalability. RESTful web services follow a
client-server architecture and use a stateless communication protocol such as HTTP. They are designed based
on four principles: resource identification through URIs, a uniform interface with four operations (PUT,
GET, POST, and DELETE), self-descriptive messages, and stateful interactions through hyperlinks.
8.3 Where is REST used?
Representational State Transfer (REST) is an architectural style for designing networked applications that is
commonly used in cloud computing. REST APIs (Application Programming Interfaces) allow software
applications to communicate with each other over the internet. In cloud computing, REST APIs are used to
interact with cloud services and resources such as virtual machines, databases, and storage.
Here are some examples of REST APIs in cloud computing:
 Cloud storage and analytics: customers can use the REST API to get analysis results and reports, run
data migrations, and search for data across their storage.
 Ordering food through an app: multiple REST API calls are used, such as one to check the menu,
another to place the order, and another to update the delivery status.



8.4 This style of service differs from Request-Response style web services:
 Request-Response services start interaction with an application, whereas RESTful services typically
interact with data (referred to as 'resources').
 Request-Response services involve application-defined 'operations', but RESTful services avoid
application-specific concepts.
 Request-Response services have different data formats for each message, but RESTful services typically
share a data format across different HTTP methods.

8.4.1 The POST verb is most often utilized to create new resources. In particular, it is used to
create subordinate resources, that is, resources subordinate to some other (e.g., parent) resource. In other words, when
creating a new resource, POST to the parent, and the service takes care of associating the new resource with
the parent, assigning an ID (new resource URI), etc.
On successful creation, return HTTP status 201 along with a Location header containing a link to the newly
created resource.
POST is neither safe nor idempotent; POST is therefore the recommended verb for non-idempotent resource requests.
Making two identical POST requests will most likely result in two resources containing the same
information.
Examples:

 POST http://www.example.com/customers
 POST http://www.example.com/customers/12345/orders



8.4.2 The HTTP GET method is used to read (or retrieve) a representation of a resource. In the
"happy" (non-error) path, GET returns a representation in XML or JSON and an HTTP response code
of 200 (OK). In an error case, it most often returns a 404 (NOT FOUND) or 400 (BAD REQUEST).
According to the design of the HTTP specification, GET (along with HEAD) requests are used only to read
data and not to change it. Therefore, when used this way, they are considered safe. That is, they can be called
without risk of data modification or corruption: calling a GET once has the same effect as calling it 10 times,
or not at all. Additionally, GET (and HEAD) is idempotent, which means that making multiple identical
requests ends up having the same result as a single request.
Do not expose unsafe operations via GET; it should never modify any resources on the server.
Examples:

 GET http://www.example.com/customers/12345
 GET http://www.example.com/customers/12345/orders
 GET http://www.example.com/buckets/sample

8.4.3 PUT is most often utilized for update capabilities, PUT-ing to a known resource URI with
the request body containing the newly updated representation of the original resource.
However, PUT can also be used to create a resource in the case where the resource ID is chosen by the
client instead of by the server; in other words, when the PUT is to a URI that contains the value of a non-
existent resource ID. Again, the request body contains a resource representation. Many feel this is
convoluted and confusing. Consequently, this method of creation should be used sparingly, if at all.
Alternatively, use POST to create new resources and provide the client-defined ID in the body
representation, presumably to a URI that doesn't include the ID of the resource (see POST above).
On successful update, return 200 (or 204 if not returning any content in the body) from a PUT. If using
PUT for create, return HTTP status 201 on successful creation. A body in the response is optional;
providing one consumes more bandwidth. It is not necessary to return a link via a Location header in the
creation case since the client already set the resource ID.
PUT is not a safe operation, in that it modifies (or creates) state on the server, but it is idempotent. In other
words, if you create or update a resource using PUT and then make that same call again, the resource is
still there and still has the same state as it did with the first call.
If, for instance, calling PUT on a resource increments a counter within the resource, the call is no longer
idempotent. Sometimes that happens and it may be enough to document that the call is not idempotent.
However, it is recommended to keep PUT requests idempotent. It is strongly recommended to use POST for
non-idempotent requests.
Examples:

 PUT http://www.example.com/customers/12345
 PUT http://www.example.com/customers/12345/orders/98765
 PUT http://www.example.com/buckets/secret_stuff
8.4.4 PATCH is used for modify capabilities. The PATCH request only needs to contain the
changes to the resource, not the complete resource.
This resembles PUT, but the body contains a set of instructions describing how a resource currently residing
on the server should be modified to produce a new version. This means that the PATCH body should not
just be a modified part of the resource, but should be expressed in some kind of patch language like JSON Patch or XML Patch.
PATCH is neither safe nor idempotent. However, a PATCH request can be issued in such a way as to be
idempotent, which also helps prevent bad outcomes from collisions between two PATCH requests on the
same resource in a similar time frame. Collisions from multiple PATCH requests may be more dangerous
than PUT collisions because some patch formats need to operate from a known base-point, or else they
will corrupt the resource. Clients using this kind of patch application should use a conditional request such
that the request will fail if the resource has been updated since the client last accessed it. For
example, the client can use a strong ETag in an If-Match header on the PATCH request.
Examples:

 PATCH http://www.example.com/customers/12345
 PATCH http://www.example.com/customers/12345/orders/98765
 PATCH http://www.example.com/buckets/secret_stuff

8.4.5 DELETE is pretty easy to understand. It is used to delete a resource identified by a URI.
On successful deletion, return HTTP status 200 (OK) along with a response body, perhaps the
representation of the deleted item (which often demands too much bandwidth), or a wrapped response (see Return
Values below). Alternatively, return HTTP status 204 (NO CONTENT) with no response body. In other
words, a 204 status with no body, or a JSEND-style response with HTTP status 200, are the recommended
responses.
HTTP-spec-wise, DELETE operations are idempotent. If you DELETE a resource, it is removed.
Repeatedly calling DELETE on that resource ends up the same: the resource is gone. If calling DELETE,
say, decrements a counter (within the resource), the DELETE call is no longer idempotent. As mentioned
previously, usage statistics and measurements may be updated while still considering the service
idempotent as long as no resource data is changed. Using POST for non-idempotent resource requests is
recommended.
There is a caveat about DELETE, however. Calling DELETE on a resource a second time will often return
a 404 (NOT FOUND) since it was already removed and is therefore no longer findable. This, in some
opinions, makes DELETE operations no longer idempotent; however, the end state of the resource is the
same. Returning a 404 is acceptable and communicates accurately the status of the call.
Examples:

 DELETE http://www.example.com/customers/12345
 DELETE http://www.example.com/customers/12345/orders
 DELETE http://www.example.com/bucket/sample



Lecture-12:

9. WEB SERVICE

A Web Service can be defined by following ways:


 A web service is any piece of software that makes itself available over the internet and uses a
standardized XML messaging system
 Web services are application components, self-contained and self-describing
 It exposes the existing function on the internet

 It is a collection of open protocols and standards used for exchanging data between applications or
systems
 Web services can be discovered using UDDI
 XML is the basis for Web services

Fig-9: Web Services Defined

9.1 Types of Web Services


There are mainly two types of web services.

1. SOAP web services.
2. RESTful web services.

9.2 Web Services Behavioral Characteristics


 XML-based: Web services use XML at the data representation and data transportation layers. Using
XML, they can communicate with any OS and any technology, so web-service-based applications
are highly interoperable at their core level. (XML is platform independent and language
independent.)



 Loosely Coupled:
 Ability to be synchronous or asynchronous:
o Synchronicity refers to the binding of the client to the execution of the service.
o In synchronous invocations, the client blocks and waits for the service to complete its
operation before continuing.
o Asynchronous operations allow a client to invoke a service and then execute other functions.
o Asynchronous clients retrieve their result at a later point in time, while synchronous clients
receive their result when the service has completed.
o Asynchronous capability is a key factor in enabling loosely coupled systems.
 Supports document exchange:
 One of the key advantages of XML is its generic way of representing not only data, but also complex
documents. These documents can be simple, such as when representing a current address, or they
can be complex, representing an entire book or RFQ. Web services support the transparent exchange
of documents to facilitate business integration.
 Supports RPC: web services also support remote procedure calls (RPCs), letting clients invoke procedures, functions, and methods on remote objects using an XML-based protocol.
9.3 Components of Web Services
The basic Web services platform is XML + HTTP. All standard Web Services work using the following components:

 SOAP (Simple Object Access Protocol)


 UDDI (Universal Description, Discovery and Integration)
 WSDL (Web Services Description Language)
9.4 SOAP

SOAP is a protocol for accessing a Web Service.

 SOAP stands for Simple Object Access Protocol


 SOAP is a communication protocol
 SOAP is a format for sending messages
 SOAP can exchange complete documents or call a remote procedure
 SOAP is designed to communicate via Internet
 SOAP is platform & language independent
 SOAP is based on XML
 SOAP is simple and extensible
 SOAP allows you to get around firewalls
 SOAP is a W3C standard



A SOAP message is an ordinary XML document containing the following elements.

Envelope: (Mandatory) - Defines the start and the end of the message.
Header: (Optional)- Contains any optional attributes of the message used in processing the message,
either at an intermediary point or at the ultimate end point.
Body: (Mandatory) - Contains the XML data comprising the message being sent.
Fault: (Optional) - Provides information about errors that occurred while processing the message.
All these elements are declared in the default namespace for the SOAP envelope.

The SOAP envelope indicates the start and the end of the message so that the receiver knows when an entire message has been received. It solves the problem of knowing when you're done receiving a message and are ready to process it. The SOAP envelope is therefore basically a packaging mechanism.

SOAP Envelope element can be explained as:

 Every SOAP message has a root Envelope element.


 Envelope element is mandatory part of SOAP Message.
 Every Envelope element must contain exactly one Body element.
 If an Envelope contains a Header element, it must contain no more than one, and it must appear as
the first child of the Envelope, before the Body.
 The envelope changes when SOAP versions change.
 The SOAP envelope is specified using the ENV namespace prefix and the Envelope element.
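To make the structure concrete, here is a minimal, hand-built SOAP 1.1 message sent from Python with the requests library. The endpoint, the SOAPAction value and the body contents are illustrative assumptions; a real service publishes these details in its WSDL:

import requests

# Envelope (mandatory) wraps an optional Header and a mandatory Body.
soap_message = """<?xml version="1.0"?>
<ENV:Envelope xmlns:ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <ENV:Header/>
  <ENV:Body>
    <GetCustomer xmlns="http://example.com/customers">
      <CustomerId>12345</CustomerId>
    </GetCustomer>
  </ENV:Body>
</ENV:Envelope>"""

resp = requests.post(
    "http://www.example.com/soap",                 # hypothetical endpoint
    data=soap_message.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "GetCustomer"},          # hypothetical action
)
print(resp.status_code)
print(resp.text)  # the reply is itself an Envelope, possibly containing a Fault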



Lecture-13:

9.5 Web Service Architecture-

Fig10:- Web Service Architecture

9.5.1 Service provider: - From a business perspective, this is the owner of the service. From an architectural perspective, this is the platform that hosts access to the service.

9.5.2 Service requestor: - From a business perspective, this is the business that requires certain functions to be satisfied. From an architectural perspective, this is the application that is looking for and invoking or initiating an interaction with a service.

9.5.3 Service registry: - This is a searchable registry of service descriptions where service providers
publish their service descriptions. Service requestors find services and obtain binding information (in the
service descriptions) for services during development for static binding or during execution for dynamic
binding. For statically bound service requestors, the service registry is an optional role in the architecture,
because a service provider can send the description directly to service requestors.

9.6 Operations in Web Service Architecture

For an application to take advantage of Web Services, three behaviors must take place: publication of service descriptions, lookup (finding) of service descriptions, and binding or invoking of services based on the service description. These behaviors can occur singly or iteratively. In detail, these operations are:

 Publish: to be accessible, a service description must be published so that the service requestor can find it.
 Find: the service requestor retrieves a service description directly or queries the service registry for the type of service required.
 Bind: the service requestor invokes or initiates an interaction with the service at runtime, using the binding details in the service description to locate, contact, and invoke the service.

9.7 Web Service Advantages-

The advantages of Web services are numerous, as shown in the list below:
 Web services integrate easily into an information system or merchant platform
 Their components are reusable
 Their interoperability makes it possible to link several systems together
 They permit a reduction of coupling between systems
 They offer merchants an extended functional scope: Import, Inventory, Order Management, Pricing, After-Sales...
 They connect heterogeneous systems
 They interconnect middleware or allow it to be installed
 They allow servers and machines to communicate
 They require reduced computing power
 They allow multi-user use without disturbing sources
 Components are easy to update
 Maintenance is low
 They are not tied to any operating system or programming language

Lecture-14:
10.1 What is Virtualization
Virtualization uses software to create an abstraction layer over computer hardware that allows the hardware elements
of a single computer—processors, memory, storage and more—to be divided into multiple virtual computers,
commonly called virtual machines (VMs). Each VM runs its own operating system (OS) and behaves like an
independent computer, even though it is running on just a portion of the actual underlying computer hardware.

Virtualization is the process of creating a software-based, or virtual, representation of something, such as virtual
applications, servers, storage and networks. It is the single most effective way to reduce IT expenses while boosting
efficiency and agility for all size businesses.

e.g. - Virtualization presents a logical view of the underlying resources. In a real-world scenario, when a user opens the My Computer icon, several hard drive partitions appear, say Local Disk (C:), Local Disk (D:), Local Disk (E:) and so on, even though they may all reside on a single physical disk.

10.2 Why Need of Virtualization?


There are five major needs of virtualization which are described below:

10.2.1 ENHANCED PERFORMANCE-


Currently, the end user system, i.e. the PC, is sufficiently powerful to fulfill all the basic computation requirements of the user, with various additional capabilities that are rarely used. Most of these systems have sufficient resources to host a virtual machine manager and run a virtual machine with acceptable performance.

10.2.2 LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES-


The limited use of resources leads to under-utilization of hardware and software. Because users' PCs are capable enough to fulfill their regular computational needs, they sit idle much of the time, even though they could run 24/7 without interruption. The efficiency of IT infrastructure could be increased by using these resources after hours for other purposes. This environment is attainable with the help of virtualization.

10.2.3 SHORTAGE OF SPACE- The regular requirement for additional capacity, whether memory, storage or compute power, causes data centers to grow rapidly. Companies like Google, Microsoft and Amazon develop their infrastructure by building data centers as per their needs. Most enterprises, however, cannot afford to build another data center to accommodate additional resource capacity. This has led to the diffusion of a technique known as server consolidation.

10.2.4 ECO-FRIENDLY INITIATIVES-


At this time, corporations are actively seeking various methods to minimize the expenditure on power consumed by their systems. Data centers are major power consumers: maintaining data center operations needs a continuous power supply, and a good amount of energy is needed to keep the equipment cool enough to function well. Server consolidation therefore reduces the power consumed and the cooling impact by lowering the number of servers. Virtualization provides a sophisticated method of server consolidation.
10.2.5 ADMINISTRATIVE COSTS-
Furthermore, the rising demand for capacity translates into more servers in a data center and a significant increase in administrative costs. Hardware monitoring, server setup and updates, defective hardware replacement, server resource monitoring, and backups are common system administration tasks, and they are personnel-intensive: administrative costs grow with the number of servers. Virtualization decreases the number of servers required for a given workload and hence reduces the cost of administrative staff.

Fig-11: Major Needs of Virtualization

10.3 Where is virtualization used?


 Efficient resource use
Virtualization allows you to use hardware resources more efficiently by creating virtual servers on the
same computer system. You can then return servers to the pool as needed, which frees up space and
saves money on electricity, cooling, and generators.
 Resource sharing
Virtualization allows you to share resources among multiple workloads. For example, you can run
Windows and Linux virtual machines side by side on the same physical server.
 Access control
Virtualization allows you to set up access controls to secure resources.
 Application virtualization
Virtualization allows you to run programs in a virtual environment, which reduces the need for local
installations. For example, an administrator can install an application on a server, and anyone with access
to the server can run it as if it were installed on their device. This provides users with benefits like
portability, cross-platform operation, and the ability to run multiple instances of the application.
 Network virtualization
Virtualization allows you to create virtual networks that are separate from each other. For example, you
can create a virtual LAN (VLAN), which is a subsection of a local area network (LAN) that combines
network devices into one group, regardless of their physical location.



Fig11:- Traditional & Virtual Architecture
10.4 What is a virtual machine?
A virtual machine is a computer file, typically called an image, which behaves like an actual computer. In other
words, creating a computer within a computer. It runs in a window, much like any other programme, giving the
end user the same experience on a virtual machine as they would have on the host operating system itself. The virtual
machine is sandboxed from the rest of the system, meaning that the software inside a virtual machine cannot escape
or tamper with the computer itself. This produces an ideal environment for testing other operating systems including
beta releases, accessing virus-infected data, creating operating system backups and running software or applications
on operating systems for which they were not originally intended.

Multiple virtual machines can run simultaneously on the same physical computer. For servers, the multiple operating
systems run side-by-side with a piece of software called a hypervisor to manage them, while desktop computers
typically employ one operating system to run the other operating systems within its programme windows. Each
virtual machine provides its own virtual hardware, including CPUs, memory, hard drives, network interfaces and
other devices. The virtual hardware is then mapped to the real hardware on the physical machine which saves costs
by reducing the need for physical hardware systems along with the associated maintenance coststhat go with it, plus
reduces power and cooling demand.

10.5 How does virtualization work?


Virtualization creates several virtual machines (also known as virtual computers, virtual instances, virtual versions
or VMs) from one physical machine using software called a hypervisor. Because these virtual machines perform
just like physical machines while only relying on the resources of one computer system, virtualization allows IT
organizations to run multiple operating systems on a single server (also known as a host). During these operations, the hypervisor allocates computing resources to each virtual computer as needed. This makes IT operations much
more efficient and cost-effective. Flexible resource allocation like this made virtualization the foundation of cloud
computing.

Virtualization methods can change based on the user’s operating system. For example, Linux machines offer a
unique open-source hypervisor known as the kernel-based virtual machine (KVM). Because KVM is part of Linux,
it allows the host machine to run multiple VMs without a separate hypervisor. However, KVM is not supported by
all IT solution providers and requires Linux expertise in order to implement it.
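As a small illustration of KVM in practice, the libvirt Python bindings can enumerate the virtual machines a Linux host is running. This read-only sketch assumes the libvirt-python package is installed and a local qemu/KVM hypervisor is reachable:

import libvirt  # requires the libvirt-python bindings and a running libvirtd

# Open a read-only connection to the local qemu/KVM hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

# Each libvirt domain corresponds to one VM managed by the KVM stack.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")

conn.close()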

10.5.1 The virtualization process follows the steps listed below:

o Hypervisors detach the physical resources from their physical environments.


o Resources are taken and divided, as needed, from the physical environment to the various virtual
environments.
o System users work with and perform computations within the virtual environment.
o Once the virtual environment is running, a user or program can send an instruction that requires extra resources from the physical environment. In response, the hypervisor relays the message to the physical system and stores the changes. This process happens at almost native speed.



Lecture: 15

11. Types of virtualization


To this point we’ve discussed server virtualization, but many other IT infrastructure elements can be virtualized to deliver significant advantages to IT managers in particular and the enterprise as a whole. In this section, we'll cover the following types of virtualization:

 Desktop virtualization
 Network virtualization
 Storage virtualization
 Data virtualization
 Application virtualization
 Data center virtualization
 CPU virtualization
 GPU virtualization
 Linux virtualization
 Cloud virtualization

11.1 Desktop virtualization


Desktop virtualization lets you run multiple desktop operating systems, each in its own VM on the same
computer.

There are two types of desktop virtualization:

 Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and streams them to users who log in on thin client devices. In this way, VDI lets an organization provide its users access to a variety of OSs from any device, without installing OSs on any device.
 Local desktop virtualization runs a hypervisor on a local computer, enabling the user to run one or
more additional OSs on that computer and switch from one OS to another as needed without changing
anything about the primary OS.

11.2 Network virtualization

Network virtualization uses software to create a "view" of the network that an administrator can use to manage the network from a single console. It abstracts hardware elements and functions (e.g., connections, switches, routers) into software running on a hypervisor. The network administrator can modify and control these elements without touching the underlying physical components, which dramatically simplifies network management.

Types of network virtualization include software-defined networking (SDN), which virtualizes the hardware that controls network traffic routing (the "control plane"), and network function virtualization (NFV), which virtualizes one or more hardware appliances that provide a specific network function (e.g., a firewall, load balancer, or traffic analyzer), making those appliances easier to configure, provision, and manage.
11.3 Storage virtualization
Storage virtualization enables all the storage devices on the network— whether they’re installed on individual
servers or standalone storage units—to be accessed and managed as a single storage device. Specifically, storage virtualization amasses all blocks of storage into a single shared pool from which they can be assigned
to any VM on the network as needed. Storage virtualization makes it easier to provision storage for VMs and
makes maximum use of all available storage on the network.

11.4 Data virtualization

Modern enterprises store data from multiple applications, using multiple file formats, in multiple locations,
ranging from the cloud to on-premise hardware and software systems. Data virtualization lets any application
access all of that data—irrespective of source, format, or location.

Data virtualization tools create a software layer between the applications accessing the data and the systems
storing it. The layer translates an application’s data request or query as needed and returns results that can span
multiple systems. Data virtualization can help break down data silos when other types of integration aren’t
feasible, desirable, or affordable.

11.5 Application virtualization

Application virtualization runs application software without installing it directly on the user’s OS. This differs
from complete desktop virtualization (mentioned above) because only the application runs in a virtual
environment—the OS on the end user’s device runs as usual. There are three types of application virtualization:

 Local application virtualization: The entire application runs on the endpoint device but runs in a
runtime environment instead of on the native hardware.
 Application streaming: The application lives on a server which sends small components of the
software to run on the end user's device when needed.
 Server-based application virtualization: The application runs entirely on a server that sends only its
user interface to the client device.

11.6 Data center virtualization

Data center virtualization abstracts most of a data center’s hardware into software, effectively enabling an
administrator to divide a single physical data center into multiple virtual data centers for different clients.

Each client can access its own infrastructure as a service (IaaS), which would run on the same underlying
physical hardware. Virtual data centers offer an easy on-ramp into cloud-based computing, letting a company
quickly set up a complete data center environment without purchasing infrastructure hardware.

11.7 CPU virtualization

CPU (central processing unit) virtualization is the fundamental technology that makes hypervisors, virtual
machines, and operating systems possible. It allows a single CPU to be divided into multiple virtual CPUs for
use by multiple VMs.

At first, CPU virtualization was entirely software-defined, but many of today’s processors include extended
instruction sets that support CPU virtualization, which improves VM performance.



11.8 GPU virtualization
A GPU (graphical processing unit) is a special multi-core processor that improves overall computing
performance by taking over heavy-duty graphic or mathematical processing. GPU virtualization lets multiple
VMs use all or some of a single GPU’s processing power for faster video, artificial intelligence (AI), and other
graphic- or math-intensive applications.

 Pass-through GPUs make the entire GPU available to a single guest OS.
 Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs) for use by server-
based VMs.

11.9 Linux virtualization


Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which supports Intel and
AMD’s virtualization processor extensions so you can create x86-based VMs from within a Linux host OS.

As an open source OS, Linux is highly customizable. You can create VMs running versions of Linux tailored
for specific workloads or security-hardened versions for more sensitive applications.

11.10 Cloud virtualization

As noted above, the cloud computing model depends on virtualization. By virtualizing servers, storage, and
other physical data center resources, cloud computing providers can offer a range of services to customers,
including the following:

 Infrastructure as a service (IaaS): Virtualized server, storage, and network resources that you can
configure based on your requirements.
 Platform as a service (PaaS): Virtualized development tools, databases, and other cloud-based
services you can use to build your own cloud-based applications and solutions.
 Software as a service (SaaS): Software applications you use on the cloud. SaaS is the cloud-based
service most abstracted from the hardware.



Lecture: 16

12. Implementation Levels of Virtualization

Virtualization is not that easy to implement. A computer typically runs an OS configured for its particular hardware, and running a different OS on the same hardware is not directly feasible.

To tackle this, there exists the hypervisor. The hypervisor acts as a bridge between the virtual OS and the hardware, enabling the smooth functioning of the instance. There are five levels of virtualization commonly used in the industry. These are as follows:

I. Instruction Set Architecture Level (ISA)


II. Hardware Abstraction Level (HAL)
III. Operating System Level
IV. Library Level
V. Application Level

Fig12:- Five Levels of Virtualization.

12.1 Instruction Set Architecture Level (ISA)


In ISA, virtualization works through ISA emulation. This is helpful for running heaps of legacy code that was originally written for different hardware configurations. Such code can be run on a virtual machine through an ISA emulator. Binary code that might otherwise need additional layers to run can now run on an x86 machine or, with some tweaking, even on x64 machines. ISA emulation thus makes the virtual machine hardware-agnostic. Basic emulation, though, requires an interpreter, which interprets the source code and converts it to a hardware-readable format for processing.

12.2 Hardware Abstraction Level (HAL)

As the name suggests, this level performs virtualization at the hardware level. It uses a bare-metal hypervisor for its functioning. This level creates the virtual machine and manages the hardware through virtualization. It enables virtualization of each hardware component, such as I/O devices, processors and memory. This way, multiple users can use the same hardware with numerous instances of virtualization at the same time. IBM pioneered this approach in the 1960s with CP/CMS, which later evolved into the VM/370 system. It is well suited to cloud-based infrastructure; thus, it is no surprise that Xen hypervisors use HAL to run Linux and other OSs on x86-based machines.

12.3 Operating System Level

At the operating system level, the virtualization model creates an abstraction layer between the applications and the OS. It works like an isolated container on the physical server and operating system that utilizes its hardware and software, and each of these containers functions like a real server. This level of virtualization comes in handy when the number of users is high and no one is willing to share hardware: every user gets their own virtual environment with dedicated virtual hardware resources, so no conflicts arise.

12.4 Library Level

OS system calls are lengthy and cumbersome, which is why applications often opt instead for APIs from user-level libraries. Most of the APIs provided by systems are rather well documented; hence, library-level virtualization is preferred in such scenarios. Library-interface virtualization is made possible by API hooks, which control the communication link from the system to the applications. Some tools available today, such as vCUDA and WINE, have successfully demonstrated this technique.
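As a toy illustration of the API-hook idea (not how vCUDA or WINE are actually implemented), a user-level library call can be interposed in Python by replacing the original function with a wrapper, so that application calls are redirected through a shim:

import time

_real_sleep = time.sleep  # keep a reference to the original library call

def hooked_sleep(seconds):
    # The hook sits on the communication link between application and
    # library: here it just logs and shortens the call, standing in for
    # redirection to a virtualized backend.
    print(f"[hook] intercepted sleep({seconds})")
    _real_sleep(min(seconds, 0.1))

time.sleep = hooked_sleep  # install the hook

time.sleep(5)  # application code now runs through the shim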

12.5 Application Level

Application-level virtualization comes in handy when you wish to virtualize only an application; it does not virtualize an entire platform or environment. On an operating system, an application works as one process, hence it is also known as process-level virtualization. It is generally useful when running virtual machines for high-level languages: the application sits on top of the virtualization layer, which is above the application program, and the application program, in turn, resides in the operating system. Programs written in high-level languages and compiled for an application-level virtual machine can run fluently here. Even though there are five levels of virtualization, an enterprise does not need to use all of them; which level it prefers depends on what the company is working on. Companies tend to use virtual machines for development and testing of cross-platform applications. With cloud-based applications on the rise, virtualization has become a must-have for enterprises across the globe.

12.6 Virtualization Structures

Fig13:- Structures of Virtualization.

Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is
inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible
for converting portions of the real hardware into virtual hardware. Therefore, different operating systems such
as Linux and Windows can run on the same physical machine.



Lecture: 17
13. Virtualization of CPU –Memory – I/O Devices

13.1 CPU Virtualization- CPU Virtualization emphasizes running programs and instructions through a virtual machine, giving the feeling of working on a physical workstation. All the operations are handled by an emulator that controls how the software runs. Nevertheless, CPU Virtualization does not merely act as an emulator: the emulator performs the same way as a normal computer machine does, replicating the same copy of data and generating the same output just like a physical machine. The emulation function offers great portability and facilitates working on a single platform while acting as if working on multiple platforms.

Types of CPU Virtualization


1. Software-Based CPU Virtualization
2. Hardware-Assisted CPU Virtualization
3. Virtualization and Processor-Specific Behavior
4. Performance Implications of CPU Virtualization

13.1.1 Software-Based CPU Virtualization


This CPU Virtualization is software-based: application code executes directly on the processor, while privileged code is translated first, and the translated code executes on the processor. This translation is known as Binary Translation (BT). Translated code is larger and slower to execute than the original. Consequently, guest programs consisting mostly of unprivileged code run smoothly and fast, whereas programs with a significant privileged-code component, such as frequent system calls, run at a slower rate in the virtual environment.

13.1.2 Hardware-Assisted CPU Virtualization


Certain processors provide hardware assistance to support CPU Virtualization. Here, the guest uses a different mode of execution known as guest mode, and guest code mainly runs in guest mode. The best part of hardware-assisted CPU Virtualization is that no binary translation is required, so system calls run faster than expected. However, workloads that require frequent updates of page tables must exit from guest mode to root mode, which eventually slows down the performance and efficiency of the program.

13.1.3 Virtualization and Processor-Specific Behavior


Despite the virtualization layer, the virtual machine still allows the guest to detect the processor model on which the system runs. Processor models differ in the features they offer, and applications generally utilize such features. In such cases, vMotion cannot be used to migrate virtual machines running on feature-rich processors to hosts that lack those features; this situation is handled by Enhanced vMotion Compatibility.

13.1.4 Performance Implications of CPU Virtualization


CPU Virtualization adds an amount of overhead that depends on the workload and the virtualization technique used. Applications that depend mainly on CPU power, spending most of their time executing instructions, feel this overhead the most: the extra work increases overall processing time and results in an overall degradation of performance under CPU Virtualization.



Lecture:18
13.2 Memory Virtualization- It introduces a way to decouple memory from the server to provide a
shared, distributed or networked function.

It enhances performance by providing greater memory capacity without any addition to the main memory. To achieve this, a portion of the disk drive serves as an extension of the main memory.
Implementations –
 Application-level integration – Applications running on connected computers directly connect to the
memory pool through an API or the file system.

Fig14:- Application-level integration.

 Operating System-Level Integration – The operating system first connects to the memory pool and makes
that pooled memory available to applications.

Fig15:- Operating System-Level integration.



13.3 I/O Virtualization:
I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical
hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation,
para-virtualization, and direct I/O. Full device emulation is the first approach for I/O virtualization. Generally,
this approach emulates well-known, real-world devices.

Fig16:- Virtualization Layer.

I/O virtualization provides a foothold for many innovative and beneficial enhancements of the logical I/O
devices. The ability to interpose on the I/O stream in and out of a VM has been widely exploited in both research
papers and commercial virtualization systems.

One useful capability enabled by I/O virtualization is device aggregation, where multiple physical devices can be combined into a single, more capable logical device that is exported to the VM. Examples include combining multiple disk storage devices exported as a single larger disk, and network channel bonding, where multiple network interfaces are combined to appear as a single faster network interface.

New features can be added to existing systems by interposing and transforming virtual I/O requests,
transparently enhancing unmodified software with new capabilities. For example, a disk write can be
transformed into replicated writes to multiple disks, so that the system can tolerate disk-device failures.
Similarly, by logging and tracking the changes made to a virtual disk, the virtualization layer can offer a time-
travel feature, making it possible to move a VM’s file system backward to an earlier point in time. This
functionality is a key ingredient of the snapshot and undo features found in many desktop virtualization systems.
Many I/O virtualization enhancements are designed to improve system security. A simple example is running
an encryption function over the I/O to and from a disk to implement transparent disk encryption. Interposing on
network traffic allows virtualization layers to implement advanced networking security, such as firewalls and
intrusion-detection systems employing deep packet inspection.

13.4 Disaster Recovery:-


Virtual disaster recovery is a combination of storage and server virtualization that helps create more effective means of disaster recovery and backup. It is now popular in many enterprise systems because of the many ways it helps mitigate risk.
The general idea of virtual disaster recovery is that combining server and storage virtualization allows companies to store backups in places that are not tied to their own physical location. This protects data and systems from fires, floods and other types of natural disasters, as well as other emergencies. Many vendor systems feature redundant designs with availability zones, so that if data in one zone is compromised, another zone can keep backups alive.



UNIT: - III
14. CLOUD ARCHITECTURE, SERVICES AND STORAGE

Lecture: 19
14.1 What is Cloud Computing Architecture?
Cloud architecture refers to how various cloud technology components, such as hardware, virtual
resources, software capabilities, and virtual network systems interact and connect to create cloud
computing environments. It acts as a blueprint that defines the best way to strategically combine
resources to build a cloud environment for a specific business need.
14.2 Why is cloud computing architecture important?
The cloud computing architecture is designed in such a way that:
 It solves latency issues and improves data processing requirements
 It reduces IT operating costs and gives good accessibility to data and digital tools
 It helps businesses to easily scale up and scale down their cloud resources
 It has a flexibility feature which gives businesses a competitive advantage
 It results in better disaster recovery and provides high security
 It automatically updates its services
 It encourages remote working and promotes team collaboration
14.3 Where is cloud computing mostly used?
Organizations of every type, size, and industry are using the cloud for a wide variety of use cases,
such as data backup, disaster recovery, email, virtual desktops, software development and testing,
big data analytics, and customer-facing web applications. For example, healthcare companies are
using the cloud to develop more personalized treatments for patients. Financial services companies
are using the cloud to power real-time fraud detection and prevention. And video game makers are
using the cloud to deliver online games to millions of players around the world.

The cloud architecture is divided into 2 parts i.e.


1. Frontend
2. Backend

The below figure represents an internal architectural view of cloud computing.



Fig17:- Architecture of Cloud Computing.

14.4 Architecture of Cloud Computing


Architecture of cloud computing is the combination of both SOA (Service Oriented Architecture) and EDA (Event Driven Architecture). Client infrastructure, application, service, runtime cloud, storage, infrastructure, management and security are all components of the cloud computing architecture.
14.4.1 Frontend:
The frontend of the cloud architecture refers to the client side of the cloud computing system. This means it contains all the user interfaces and applications that the client uses to access cloud computing services/resources, for example, a web browser used to access the cloud platform.

 Client Infrastructure – Client Infrastructure is a part of the frontend component. It contains the applications and user interfaces required to access the cloud platform. In other words, it provides a GUI (Graphical User Interface) to interact with the cloud.
14.4.2 Backend:
Backend refers to the cloud itself which is used by the service provider. It contains the resources
as well as manages the resources and provides security mechanisms. Along with this, it includes
huge storage, virtual applications, virtual machines, traffic control mechanisms, deployment
models, etc.



 Application –
The application in the backend refers to the software or platform that the client accesses, i.e. it provides the service in the backend as per the client's requirements.
 Service –
Service in the backend refers to the three major types of cloud-based services: SaaS, PaaS and IaaS. It also manages which type of service the user accesses.
 Runtime Cloud –
The runtime cloud in the backend provides the execution and runtime platform/environment for the virtual machines.
 Storage –
Storage in the backend provides a flexible and scalable storage service and management of stored data.
 Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software components of the cloud, including servers, storage, network devices and virtualization software.
 Security –
Security in the backend refers to the implementation of different security mechanisms to secure cloud resources, systems, files, and infrastructure for end-users.
 Internet –
The Internet connection acts as the medium, or bridge, between frontend and backend, establishing the interaction and communication between them.
 Database –
The database in the backend provides storage for structured data, such as SQL and NoSQL databases. Examples of database services include Amazon RDS, Microsoft Azure SQL Database and Google Cloud SQL.
 Networking –
Networking in the backend refers to services that provide networking infrastructure for applications in the cloud, such as load balancing, DNS and virtual private networks.
14.5 Benefits of Cloud Computing Architecture:
1. Makes overall cloud computing system simpler.
2. Improves data processing requirements.
3. Helps in providing high security.
4. Makes it more modularized.
5. Results in better disaster recovery.
6. Gives good user accessibility.
7. Reduces IT operating costs.
8. Provides high level reliability.
9. Scalability.
Lecture: 20
14.6 LAYERED CLOUD ARCHITECTURE DESIGN
It is possible to organize all the concrete realizations of cloud computing into a layered view covering the entire stack, from hardware appliances to software systems.

Utilizing cloud resources provides the “computer horsepower” needed to deliver services. This layer is frequently implemented using a data center with dozens or even millions of stacked nodes. Because it can be constructed from a range of resources, including clusters and even networked PCs, cloud infrastructure can be heterogeneous in character. The infrastructure can also include database systems and other storage services.

The core middleware, whose goals are to create an optimal runtime environment for
applications and to best utilize resources, manages the physical infrastructure. Virtualization
technologies are employed at the bottom of the stack to ensure runtime environment
modification, application isolation, sandboxing, and service quality. At this level, hardware
virtualization is most frequently utilized. The distributed infrastructure is exposed as a
collection of virtual computers via hypervisors, which control the pool of available resources.
By adopting virtual machine technology, it is feasible to precisely divide up hardware resources
like CPU and memory as well as virtualize particular devices to accommodate user and
application needs.

Fig17:- Layers of Cloud Computing.



14.6.1 Application Layer
1. The application layer, which is at the top of the stack, is where the actual cloud apps
are located. Cloud applications, as opposed to traditional applications, can take
advantage of the automatic-scaling functionality to gain greater performance,
availability, and lower operational costs.
2. This layer consists of different Cloud Services which are used by cloud users. Users
can access these applications according to their needs. Applications are divided into
Execution layers and Application layers.
3. In order for an application to transfer data, the application layer determines whether communication partners are available. Whether enough cloud resources are accessible for the required communication is also decided at the application layer. Applications must cooperate in order to communicate, and the application layer is in charge of this.

4. The application layer, in particular, is responsible for processing IP traffic handling protocols like Telnet and FTP. Other examples of application layer systems include web browsers, the SNMP protocol, and HTTP or its secured variant, HTTPS.
14.6.2 Platform Layer
1. The operating system and application software make up this layer.
2. Users should be able to rely on the platform to provide them with scalability, dependability, and security protection. It gives users a space to create their apps, test operational processes, and keep track of execution outcomes and performance, and it serves as the foundation on which SaaS applications are implemented.
3. The objective of this layer is to deploy applications directly on virtual machines.
4. Operating systems and application frameworks make up the platform layer, which is built on top of the infrastructure layer. The platform layer's goal is to lessen the difficulty of deploying programs directly into VM containers.
5. By way of illustration, Google App Engine functions at the platform layer to provide API support for implementing the storage, databases, and business logic of ordinary web apps.
14.6.3 Infrastructure Layer
1. It is a layer of virtualization where physical resources are divided into a collection of
virtual resources using virtualization technologies like Xen, KVM, and VMware.
2. This layer serves as the Central Hub of the Cloud Environment, where resources are
constantly added utilizing a variety of virtualization techniques.



3. A base upon which to create the platform layer. Constructed using the virtualized
network, storage, and computing resources. Give users the flexibility they want.
4. Automated resource provisioning is made possible by virtualization, which also
improves infrastructure management.
5. The infrastructure layer sometimes referred to as the virtualization layer, partitions the
physical resources using virtualization technologies like Xen, KVM, Hyper-V, and
VMware to create a pool of compute and storage resources.

6. The infrastructure layer is crucial to cloud computing, since virtualization technologies are the only ones that can provide many vital capabilities, like dynamic resource assignment.

14.6.4 Datacenter Layer

1. In a cloud environment, this layer is responsible for Managing Physical Resources such
as servers, switches, routers, power supplies, and cooling systems.
2. Providing end users with services requires all resources to be available and managed
in data centers.
3. Physical servers connect through high-speed devices such as routers and switches to
the data center.
4. In software application design, the separation of business logic from the persistent data it manipulates is well established, because the same data can be used in numerous ways to support numerous use cases and so cannot be locked into a single application. With the introduction of microservices, the requirement has arisen for this data to become a service.
5. A single database used by many microservices creates very close coupling; it becomes hard to deploy new or evolving services independently if they need database modifications that may affect other services. Breaking complex service interdependencies requires a data layer containing many databases, each serving a single microservice or perhaps a few closely related microservices.



Lecture: 21

14.7 NIST CLOUD COMPUTING REFERENCE ARCHITECTURE


(The National Institute of Standards and Technology (NIST) developed this document in furtherance
of its statutory responsibilities under the Federal Information Security Management Act (FISMA) of
2002, Public Law 107-347.)
NIST Cloud Computing reference architecture defines five major actors:
 Cloud Provider
 Cloud Carrier
 Cloud Broker
 Cloud Auditor
 Cloud Consumer
Each actor is an entity (a person or an organization) that participates in a transaction or process and/or performs tasks in cloud computing. The five major actors defined in the NIST cloud computing reference architecture are described below:

14.7.1 Cloud Provider:


A person or organization that delivers cloud services to cloud consumers or end-users. It offers various
components of cloud computing. Cloud computing consumers purchase a growing variety of
cloud services from cloud service providers. There are various categories of cloud-based
services mentioned below:
 IaaS Providers: In this model, the cloud service providers offer infrastructure components
that would exist in an on-premises data center. These components consist of servers,
networking, and storage as well as the virtualization layer.
 SaaS Providers: In Software as a Service (SaaS), vendors provide a wide sequence of
business technologies, such as Human resources management (HRM) software, customer
relationship management (CRM) software, all of which the SaaS vendor hosts and provides
services through the internet.
 PaaS Providers: In Platform as a Service (PaaS), vendors offer cloud infrastructure and
services that users can access to perform various functions. In PaaS, services and products are
mostly utilized in software development. PaaS providers offer more services than IaaS
providers: they provide the operating system and middleware, along with the application
stack, on top of the underlying infrastructure.



14.7.2 Cloud Carrier:
The mediator who provides connectivity and transport of cloud services between cloud service providers and cloud consumers. It allows access to the services of the cloud through Internet networks, telecommunication, and other access devices. Network and telecom carriers or a transport agent can provide the distribution. A consistent level of service is ensured when cloud providers set up Service Level Agreements (SLAs) with a cloud carrier. In general, the carrier may be required to offer dedicated and encrypted connections.
14.7.3 Cloud Broker:
An organization or a unit that manages the performance, use, and delivery of cloud services by
enhancing specific capability and offers value-added services to cloud consumers. It combines
and integrates various services into one or more new services. They provide service arbitrage
which allows flexibility and opportunistic choices. There are major three services offered by a
cloud broker:
 Service Intermediation.
 Service Aggregation.
 Service Arbitrage.
14.7.4 Cloud Auditor:
An entity that can conduct independent assessment of cloud services, security, performance,
and information system operations of the cloud implementations. The services that are
provided by Cloud Service Providers (CSP) can be evaluated by service auditors in terms of
privacy impact, security control, and performance, etc. The Cloud Auditor can make an assessment of the security controls in the information system to determine the extent to which the controls are implemented correctly, operating as planned, and producing the desired outcome with respect to meeting the security requirements of the system. There are three major roles of Cloud
Auditor which are mentioned below:
 Security Audit.
 Privacy Impact Audit.
 Performance Audit.
14.7.5 Cloud Consumer:
A cloud consumer is the end-user who browses or utilizes the services provided by Cloud
Service Providers (CSP), sets up service contracts with the cloud provider. The cloud consumer
pays per use of the service provisioned. Measured services utilized by the consumer. In this, a
set of organizations having mutual regulatory constraints performs a security and risk
assessment for each use case of Cloud migrations and deployments.
Cloud consumers use Service-Level Agreement (SLAs) to specify the technical performance
requirements to be fulfilled by a cloud provider. SLAs can cover terms concerning the quality
of service, security, and remedies for performance failures. A cloud provider may also list in
the SLAs a set of limitations or boundaries, and obligations that cloud consumers must accept.
In a mature market environment, a cloud consumer can freely pick a cloud provider with better pricing and more favorable terms. Typically, a cloud provider's public pricing policy and SLAs are non-negotiable, although a cloud consumer who expects to have substantial usage might be able to negotiate better contracts.

Fig-18: Block Diagram of Cloud Stakeholders as per NIST

Lecture: 22
15.1 CLOUD DEPLOYMENT MODELS
The selection of a cloud deployment model will depend on any number of factors and may well
be heavily influenced by your organization’s risk appetite, cost, compliance, regulatory
requirements, legal obligations, and other internal business decisions and strategy.

Fig-19. Cloud Deployment Model

15.2 Public Cloud:


The public cloud makes it possible for anybody to access systems and services; it may be less secure because it is open to everyone. The public cloud is one in which cloud infrastructure services are provided over the internet to the general public or major industry groups. The infrastructure in this cloud model is owned by the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting that allows customers and users to easily access systems and services, with service providers supplying services to a variety of customers. In this arrangement, storage, backup and retrieval services are given for free, as a subscription, or on a per-user basis. Google App Engine is one example.



Fig.20 Public Cloud
Advantages of the Public Cloud Model
 Minimal Investment: Because it is a pay-per-use service, there is no substantial
upfront fee, making it excellent for enterprises that require immediate access to
resources.
 No setup cost: The entire infrastructure is fully subsidized by the cloud service
providers, thus there is no need to set up any hardware.
 Infrastructure Management is not required: Using the public cloud does not
necessitate infrastructure management.
 No maintenance: The maintenance work is done by the service provider (not users).
 Dynamic Scalability: To fulfill your company’s needs, on-demand resources are
accessible.
Disadvantages of the Public Cloud Model
 Less secure: Public cloud is less secure as resources are public so there is no
guarantee of high-level security.
 Low customization: It is accessed by many public so it can’t be customized
according to personal requirements.
15.3 Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment model. It's a one-to-one environment for a single user (customer); there is no need to share the hardware with anyone else. The distinction between private and public clouds lies in how all of the hardware is handled. It is also called the “internal cloud” and it refers to the ability to access systems and services within a given boundary or organization. The cloud platform is implemented in a cloud-based secure environment that is protected by powerful firewalls



and under the supervision of an organization’s IT department. The private cloud gives greater
flexibility of control over cloud resources.

Fig.21 Private Cloud


Advantages of the Private Cloud Model
 Better Control: You are the sole owner of the property. You gain complete command
over service integration, IT operations, policies, and user behavior.

 Data Security and Privacy: It’s suitable for storing corporate information to which only authorized staff have access. By segmenting resources within the same infrastructure, improved access control and security can be achieved.
 Supports Legacy Systems: This approach is designed to work with legacy systems that
are unable to access the public cloud.
 Customization: Unlike a public cloud deployment, a private cloud allows a company to
tailor its solution to meet its specific needs.
Disadvantages of the Private Cloud Model
 Less scalable: Private clouds can only be scaled within a certain range, as there are fewer clients.
 Costly: Private clouds are more costly as they provide personalized facilities.
15.4 Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a safe
environment while taking advantage of the public cloud’s cost savings. Organizations can
move data and applications between different clouds using a combination of two or more
cloud deployment methods, depending on their needs.
Fig.22 Hybrid Cloud

Advantages of the Hybrid Cloud Model


 Flexibility and control: Businesses with more flexibility can design personalized
solutions that meet their particular needs.

 Cost: Because public clouds provide scalability, you’ll only be responsible for paying
for the extra capacity if you require it.
 Security: Because data is properly separated, the chances of data theft by attackers are
considerably reduced.
Disadvantages of the Hybrid Cloud Model
 Difficult to manage: Hybrid clouds are difficult to manage as it is a combination of
both public and private cloud. So, it is complex.
 Slow data transmission: Data transmission in the hybrid cloud takes place through the
public cloud so latency occurs.











Lecture: 23

15.5 COMMUNITY CLOUD


It allows systems and services to be accessible by a group of organizations. It is a distributed system created by integrating the services of different clouds to address the specific needs of a community, industry, or business. The infrastructure of the community cloud may be shared between organizations that have shared concerns or tasks. It is generally managed by a third party or by the combination of one or more organizations in the community.

Fig.23 Community Cloud


Advantages of the Community Cloud Model
 Cost Effective: It is cost-effective because the cloud is shared by multiple
organizations or communities.
 Security: Community cloud provides better security.
 Shared resources: It allows you to share resources, infrastructure, etc. with multiple
organizations.
 Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of the Community Cloud Model
 Limited Scalability: Community cloud is relatively less scalable as many organizations
share the same resources according to their collaborative interests.
 Rigid in customization: As data and resources are shared among different organizations according to their mutual interests, an organization that wants changes tailored to its own needs cannot make them, because doing so would impact the other organizations.
Overall Analysis of Cloud Deployment Models
The overall analysis of these models with respect to different factors is described below.

Factors                     | Public Cloud   | Private Cloud                                  | Community Cloud                                | Hybrid Cloud
Initial Setup               | Easy           | Complex, requires a professional team to setup | Complex, requires a professional team to setup | Complex, requires a professional team to setup
Scalability and Flexibility | High           | High                                           | Fixed                                          | High
Cost-Comparison             | Cost-Effective | Costly                                         | Distributed cost among members                 | Between public and private cloud
Reliability                 | Low            | Low                                            | High                                           | High
Data Security               | Low            | High                                           | High                                           | High
Data Privacy                | Low            | High                                           | High                                           | High



15.6 SERVICES OF CLOUD COMPUTING
Cloud Computing helps in rendering several services according to roles, companies, etc.
Cloud computing models are explained below.
 Infrastructure as a service (IaaS)
 Platform as a service (PaaS)
 Software as a service (SaaS)

Fig.24 Cloud Service Model

15.6.1 Infrastructure as a service (IaaS)


Infrastructure as a Service (IaaS) delivers computer infrastructure on an external basis to support operations. Generally, IaaS provides access to networking equipment, devices, databases, and web servers.
Infrastructure as a Service (IaaS) helps large organizations and enterprises manage and build their IT platforms. This infrastructure is flexible according to the needs of the client.
Advantages of IaaS
 IaaS is cost-effective as it eliminates capital expenses.
 The IaaS provider typically secures the underlying infrastructure better than most
organizations could on their own.
 IaaS provides remote access.
Disadvantages of IaaS
 In IaaS, users have to secure their own data and applications.
 Cloud computing is not accessible in some regions of the world.
Cloud Computing (KCS-713) 67 | P a g e
15.6.2 PLATFORM AS A SERVICE (PAAS)
Platform as a Service (PaaS) is a type of cloud computing that helps developers to build
applications and services over the Internet by providing them with a platform.
PaaS lets organizations retain control over their business applications while the provider manages
the underlying platform.

Advantages of PaaS
 PaaS is simple and very much convenient for the user as it can be accessed via a web
browser.
 PaaS efficiently manages the application lifecycle, from building and testing to deployment
and updates.
Disadvantages of PaaS
 PaaS offers limited control over the infrastructure: users have less control over the
environment and cannot make certain customizations.
 PaaS has a high dependence on the provider.
15.7 SOFTWARE AS A SERVICE (SAAS)
Software as a Service (SaaS) is a cloud computing model in which services and applications are
delivered over the Internet. SaaS applications are also called Web-Based Software or Hosted
Software.
SaaS accounts for around 60 percent of cloud solutions, which is why it is the model most
preferred by companies.

Advantages of SaaS
 SaaS applications and their data can be accessed from anywhere over the Internet.
 SaaS provides easy access to features and services.
Disadvantages of SaaS
 SaaS solutions have limited customization, which means they have some restrictions
within the platform.
 SaaS gives users little control over their own data.
 Because SaaS solutions are cloud-hosted, they require a stable internet connection to work
properly.



Lecture: 24
15.8 CLOUD STORAGE AS A SERVICE:
Cloud Storage as a Service (STaaS) provides on-demand storage resources over the
internet. It abstracts the complexities of storage infrastructure, offering a scalable and cost-
effective solution for storing and managing data.
15.8.1 Advantages of Cloud Storage:
Scalability:
Cloud storage can easily scale up or down based on demand, allowing organizations to
pay for only the storage they use.
Cost Efficiency:
Organizations can avoid the upfront costs of purchasing and maintaining physical
hardware, paying only for the storage resources consumed.
Accessibility:
Data stored in the cloud can be accessed from anywhere with an internet connection,
facilitating remote access and collaboration.
Redundancy and Reliability:
Cloud storage providers often implement redundant storage mechanisms, ensuring data
durability and high availability.
Data Security:
Cloud storage services implement robust security measures, including encryption and
access controls, to protect stored data.
Automatic Updates and Maintenance:
Cloud storage providers handle infrastructure updates and maintenance, relieving users
from these operational tasks.
Cloud Storage Providers - S3 (Amazon Simple Storage Service):
Amazon S3 (Simple Storage Service) is a popular cloud storage service provided by
Amazon Web Services (AWS). It offers object storage with a simple web interface for
storing and retrieving data.
15.8.2 Key Features of Amazon S3:
Object Storage:
Amazon S3 allows users to store and retrieve any amount of data as objects, each consisting
of data, a key, and metadata.
Scalability:
S3 provides virtually unlimited storage capacity, and it scales automatically to handle
growing amounts of data.



Data Durability and Availability:
S3 achieves high durability by storing data across multiple locations and availability
zones, ensuring high availability and reliability.
Security Features:
S3 supports data encryption in transit and at rest, access control policies, and integration
with AWS Identity and Access Management (IAM) for fine-grained access control.
Versioning:
S3 supports versioning, allowing users to preserve, retrieve, and restore every version of
every object stored in a bucket.
Lifecycle Management:
Users can define lifecycle policies to automatically transition objects between storage
classes or delete them when they are no longer needed.
In summary, Amazon S3 is a powerful and versatile cloud storage solution that addresses
many architectural challenges, providing a scalable, secure, and feature-rich platform for
storing and managing data in the cloud.
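
To make the object model above concrete, here is a minimal sketch using the boto3 Python SDK.
It assumes AWS credentials are already configured and that the bucket name "example-bucket"
exists; both are illustrative placeholders, not part of the notes above.

import boto3

s3 = boto3.client("s3")

# Store an object: data (Body) + key + user-defined metadata
s3.put_object(Bucket="example-bucket",
              Key="notes/unit3.txt",
              Body=b"cloud storage demo",
              Metadata={"course": "KCS-713"})

# Turn on versioning so every overwrite of a key is preserved
s3.put_bucket_versioning(Bucket="example-bucket",
                         VersioningConfiguration={"Status": "Enabled"})

# Retrieve the object
obj = s3.get_object(Bucket="example-bucket", Key="notes/unit3.txt")
print(obj["Body"].read())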



UNIT-IV
Overview of Resource Management and Security in Cloud
Lecture: 25

What is Inter Cloud Resource Management


The "inter-cloud", or "cloud of clouds", is a theoretical model for cloud computing services that
combines numerous separate clouds into a single fluid mass for on-demand operations. Simply put,
the inter-cloud ensures that a cloud can utilize resources beyond its own range through existing
agreements with other cloud service providers, since any single cloud has limits to its physical
resources and geographic reach.

Types of Inter-Cloud Resource Management


Federation Clouds: A federation cloud is a kind of inter-cloud where several cloud service
providers willingly link their cloud infrastructures together to exchange resources. Cloud service
providers in the federation trade resources in an open manner. With the aid of this inter-cloud
technology, private cloud portfolios, as well as government clouds (those utilized and owned by
non-profits or the government), can cooperate.
Multi-Cloud: A client or service makes use of numerous independent clouds in a multi-cloud. A
multi-cloud ecosystem lacks voluntarily shared infrastructure across cloud service providers; it is
the client's or their agents' obligation to manage resource provisioning and scheduling. This
strategy is used to consume assets from both public and private cloud portfolios. Multi-clouds
come in two kinds: services and libraries.

Topologies used In Inter Cloud Architecture


1. Peer-to-Peer Inter-Cloud Federation: Clouds work together directly, though they may also use
distributed entities as directories or brokers. Clouds communicate and negotiate directly without
intermediaries. RESERVOIR (Resources and Services Virtualization without Barriers) is an
example of a peer-to-peer inter-cloud federation project.

Cloud Computing (KCS-713) 71 | P a g e


2. Centralized Inter-Cloud Federation: In the cloud, resource sharing is carried out or facilitated
by a central body. The central entity serves as a registry for the available cloud resources. The
inter-cloud initiatives Dynamic Cloud Collaboration (DCC), and Federated Cloud Management
leverage centralized inter-cloud federation.

3. Multi-Cloud Service: Clients access various clouds through a service, which the cloud client
hosts either internally or externally. These services include broker components. The inter-cloud
initiatives OPTIMUS, Contrail, MOSAIC, STRATOS, and commercial cloud management
solutions leverage multi-cloud services.

4. Multi-Cloud Libraries: Clients use a uniform cloud API as a library to create their own
brokers. Inter-clouds that employ libraries make it easier to use clouds consistently. Examples of
multi-cloud libraries include jclouds (Java), Apache Libcloud (Python), and Apache Deltacloud
(Ruby); a short Libcloud sketch follows.
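
As a hedged illustration of the library approach, the sketch below uses Apache Libcloud's uniform
compute API to list virtual machines on EC2. The access keys are placeholders, and the same
driver pattern works for other supported providers.

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Obtain a provider-specific driver behind a uniform interface
cls = get_driver(Provider.EC2)
driver = cls("ACCESS-KEY-ID", "SECRET-KEY", region="us-east-1")  # placeholder credentials

# The same list_nodes() call works across all supported clouds
for node in driver.list_nodes():
    print(node.name, node.state)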



Lecture: 26
What is Resource Provisioning?

The allocation of resources and services from a cloud provider to a customer is known as resource
provisioning in cloud computing, sometimes called cloud provisioning. Resource provisioning is
the process of choosing, deploying, and managing software (like load balancers and database server
management systems) and hardware resources (including CPU, storage, and networks) to assure
application performance.

To effectively utilize resources without violating the SLA while meeting QoS requirements,
static/dynamic provisioning and static/dynamic allocation of resources must be chosen based on
application needs. Both over-provisioning and under-provisioning of resources must be prevented.
Power usage is another significant constraint: care should be taken to reduce power consumption
and heat dissipation and to optimize VM placement, with techniques in place to avoid excess
power consumption.

Therefore, the ultimate objective of a cloud user is to rent resources at the lowest possible cost,
while the objective of a cloud service provider is to maximize profit by effectively distributing
resources.

Importance of Cloud Provisioning:

 Scalability: The ability to scale resources up and down as demand fluctuates is one of the
major benefits of cloud computing
 Speed: Users can quickly spin up multiple machines as needed, without waiting for an IT
administrator
 Savings: The pay-as-you-go model allows enormous cost savings, facilitated by provisioning
or removing resources according to demand



Challenges of Cloud Provisioning:
Complex management: Cloud providers have to use various tools and techniques to actively
monitor the usage of resources
Policy enforcement: Organizations have to ensure that users cannot access resources they
shouldn't.
Cost: With automated provisioning, costs can climb quickly if proper checks are not put in place;
alerts about reaching cost thresholds are required.

Tools for Cloud Provisioning:


 Google Cloud Deployment Manager
 IBM Cloud Orchestrator
 AWS CloudFormation
 Microsoft Azure Resource Manager

Types of Cloud Provisioning:

Static Provisioning or Advance Provisioning: Static provisioning works well for applications
with known and typically constant demands or workloads. In this case, the cloud provider allots
the customer a fixed set of resources, which the client can then use as required.
Dynamic Provisioning or On-demand Provisioning: With dynamic provisioning, the provider
adds resources as they are needed and removes them when they are no longer required. It follows a
pay-per-use model: clients are billed only for the exact resources they consume, as and when the
cloud service provider allots them. A control-loop sketch of this model follows below.

Self-service provisioning or user self-provisioning: In user self-provisioning, sometimes referred


to as cloud self-service, the customer uses a web form to acquire resources from the cloud provider,
sets up a customer account, and pays with a credit card. Shortly after, resources are made accessible
for consumer use.
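
The dynamic model above can be pictured as a simple control loop. The Python sketch below is
only an illustration: current_load(), add_vm(), and remove_vm() are hypothetical helpers, not calls
from any real cloud SDK, and the thresholds are arbitrary example values.

SCALE_UP = 0.80    # add capacity above 80% average utilization
SCALE_DOWN = 0.30  # release capacity below 30% average utilization

def autoscale(vms):
    load = current_load(vms)              # hypothetical: average utilization of the VM pool
    if load > SCALE_UP:
        vms.append(add_vm())              # provision on demand; per-use billing starts
    elif load < SCALE_DOWN and len(vms) > 1:
        remove_vm(vms.pop())              # de-provision; billing for that VM stops
    return vms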

Lecture: 27

What is Global Exchange of Cloud Resources?


Cloud Exchange (CEx) serves as a market maker, bringing service providers and users together. It
was proposed by the University of Melbourne as part of its inter-cloud (Cloudbus) architecture. It
supports brokering and exchange of cloud resources for scaling applications across multiple
clouds. It aggregates the infrastructure demands from application brokers and evaluates them
against the available supply. It supports the trading of cloud services based on competitive
economic models such as commodity markets and auctions.

Entities of the Global exchange of cloud resources.


Market directory
A market directory is an extensive database of resources, providers, and participants using the
resources. Participants can use the market directory to find providers or customers with suitable
offers.
Auctioneers
Auctioneers periodically clear bids and asks from market participants. They sit between providers
and customers and grant the resources available in the global exchange of cloud resources to the
highest-bidding customer.
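
A toy Python illustration of the clearing step follows; the consumer names and bid values are
invented for the example, and real exchanges use far richer economic models.

# Each consumer submits a bid price for a resource share (values are examples)
bids = {"consumer-A": 0.12, "consumer-B": 0.20, "consumer-C": 0.15}  # $/CPU-hour

# The auctioneer grants the share to the highest bidder
winner = max(bids, key=bids.get)
print(f"Resource granted to {winner} at {bids[winner]:.2f} $/CPU-hour")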

Brokers
Brokers mediate between consumers and providers by buying capacity from the provider and sub-
leasing these to the consumers. They must select consumers whose apps will provide the most utility.
Brokers may also communicate with resource providers and other brokers to acquire or trade
resource shares. To make decisions, these brokers are equipped with a negotiating module informed
by the present conditions of the resources and the current demand.

Service-level agreements (SLAs)


The service-level agreement (SLA) spells out the details of the service to be provided in terms of
metrics that have been agreed upon by all parties, as well as penalties for failing to meet the
expectations.

The consumer participates in the utility market via a resource management proxy that chooses a set
of brokers based on their offering. SLAs are formed between the consumer and the brokers, which
bind the latter to offer the guaranteed resources. After that, the customer either runs their
environment on the leased resources or uses the provider's interfaces to scale their applications.

Providers
A provider has a price-setting mechanism that determines the current price for their source based on
market conditions, user demand, and the current degree of utilization of the resource.

Based on an initial estimate of utility, an admission-control mechanism at a provider's end selects the
auctions to participate in or to negotiate with the brokers.

Resource management system


The resource management system provides functionalities such as advance reservations that enable
guaranteed provisioning of resource capacity.

Lecture: 28

What is Security in Cloud Computing?


Cloud computing is one of the most in-demand technologies of the current time, and organizations
from small to large have started using cloud services. Different cloud deployment models are
available, and cloud services are provided as per requirement; internally and externally, security is
maintained to keep the cloud system safe. Cloud computing security, or cloud security, is an
important concern that refers to the act of protecting cloud environments, data, information, and
applications against unauthorized access, DDoS attacks, malware, hackers, and similar attacks.

Planning of security in Cloud Computing:


As security is a major concern in cloud implementation, an organization has to plan for security
based on several factors. The three main factors on which cloud security planning depends are
given below.
 Select the resources that can be moved to the cloud and assess their sensitivity and risk.
 The type of cloud is to be considered.
 The risk in the deployment of the cloud depends on the types of cloud and service models.
Types of Cloud Computing Security Controls:

There are four types of cloud computing security controls, namely:


1. Deterrent Controls: Deterrent controls are designed to discourage attackers from targeting a
cloud system. They come in handy when there are insider attackers.
2. Preventive Controls: Preventive controls make the system resilient to attacks by eliminating
vulnerabilities in it.
3. Detective Controls: It identifies and reacts to security threats and control. Some examples of
detective control software are Intrusion detection software and network security monitoring
tools.
4. Corrective Controls: In the event of a security attack these controls are activated. They limit
the damage caused by the attack.

Importance of cloud security:

 Centralized security: Cloud security centralizes protection. Managing all devices and
endpoints individually is not an easy task, and cloud security helps do it in one place. This
enhances traffic analysis and web filtering and means fewer policy and software updates.
 Reduced costs: Investing in cloud computing and cloud security results in less expenditure on
hardware and less manpower in administration.
 Reduced administration: It makes the organization easier to administer, without manual
security configuration and constant security updates.
 Reliability: Cloud security services are very reliable, and the cloud can be accessed from
anywhere, with any device, given proper authorization.

Lecture: 29

Security Issues in Cloud Computing:


There is no doubt that cloud computing provides various advantages, but there are also some
security issues, described below.
1. Data Loss –
Data loss, also known as data leakage, is one of the issues faced in cloud computing. Our
sensitive data is in the hands of somebody else, and we do not have full control over our
database. So, if the security of the cloud service is breached by hackers, they may gain
access to our sensitive data or personal files.

2. Interference of Hackers and Insecure API’s –


Talking about the cloud and its services means talking about the Internet, and the easiest way to
communicate with the cloud is through APIs. It is therefore important to protect the interfaces and
APIs used by external users. In addition, a few cloud services are available in the public domain,
and these are a vulnerable part of cloud computing because third parties may access them. Through
these services, hackers may be able to hack or harm our data.



3. User Account Hijacking –
Account Hijacking is the most serious security issue in Cloud Computing. If somehow the Account
of User or an Organization is hijacked by a hacker then the hacker has full authority to perform
Unauthorized Activities.

4. Changing Service Provider –


Vendor lock-in is also an important security issue in cloud computing. Many organizations
face problems when shifting from one vendor to another. For example, if an organization
wants to shift from AWS Cloud to Google Cloud Services, it faces problems such as
migrating all of its data, adapting to the two providers' different techniques and functions,
and reconciling differences in pricing between AWS and Google Cloud.

5. Lack of Skill –
Working with the cloud, shifting to another service provider, needing an extra feature, or
figuring out how to use a feature are the main problems in IT companies that lack skilled
employees. Working with cloud computing therefore requires skilled people.

6. Denial of Service (DoS) attack –


This type of attack occurs when the system receives too much traffic. DoS attacks mostly
target large organizations such as the banking and government sectors. When a DoS attack
occurs, data may be lost, and recovering it requires a great amount of money and time.
7. Shared Resources: Cloud computing relies on a shared infrastructure. If one customer’s data
or applications are compromised, it may potentially affect other customers sharing the same
resources, leading to a breach of confidentiality or integrity.
8. Compliance and Legal Issues: Different industries and regions have specific regulatory
requirements for data handling and storage. Ensuring compliance with these regulations can be
challenging when data is stored in a cloud environment that may span multiple jurisdictions.
9. Data Encryption: While data in transit is often encrypted, data at rest can be susceptible to
breaches. It’s crucial to ensure that data stored in the cloud is properly encrypted to prevent
unauthorized access.
10. Insider Threats: Employees or service providers with access to cloud systems may misuse
their privileges, intentionally or unintentionally causing data breaches. Proper access controls
and monitoring are essential to mitigate these threats.

Lecture: 30

What is Software-as-a-Service Security?

Software-as-a-service (SaaS) is an on-demand, cloud-based software delivery model that enables


organizations to subscribe to the applications they need without hosting them in-house. SaaS is one
of several categories of cloud subscription services, including platform-as-a-service and
infrastructure-as-a-service. SaaS has become increasingly popular because it saves organizations
from needing to purchase servers and other infrastructure or maintain an in-house support staff.
Instead, the SaaS provider hosts the software and provides its security and maintenance. Some
well-known SaaS applications include Microsoft 365, Salesforce.com, Cisco Webex, Box, and
Adobe Creative Cloud. Most enterprise software vendors also offer cloud versions of their
applications, such as Oracle Financials Cloud.
Benefits of SaaS

 On-demand and scalable resources


Organizations can purchase additional storage, end-user licenses, and features for their
applications on an as-needed basis.
 Fast implementation
Organizations can subscribe almost instantly to a SaaS application and provision employees,
unlike on-premises applications that require more time.
 Easy upgrades and maintenance
The SaaS provider handles patches and updates, often without the customer being aware of it.
 No infrastructure or staff costs
Organizations avoid paying for in-house hardware and software licenses with perpetual
ownership. They also do not need on-site IT staff to maintain and support the application.
This enables even small organizations to use enterprise-level applications that would be
costly for them to implement.
SaaS security
SaaS providers handle much of the security for a cloud application. The SaaS provider is responsible
for securing the platform, network, applications, operating system, and physical infrastructure.
However, providers are not responsible for securing customer data or user access to it. Some
providers offer a bare minimum of security, while others offer a wide range of SaaS security options.
Below are SaaS security practices that organizations can adopt to protect data in their SaaS
applications.

 Detect rogue services and compromised accounts: Organizations can use tools, such as
Cloud Access Security Brokers (CASB) to audit their networks for unauthorized cloud
services and compromised accounts.
 Apply identity and access management (IAM): A role-based identity and access
management solution can ensure that end users do not gain access to more resources than
they require for their jobs. IAM solutions use processes and user access policies to determine
what files and applications a particular user can access. An organization can apply role-based
permissions to data so that end users will see only the data they’re authorized to view.
 Encrypt cloud data: Data encryption protects both data at rest (in storage) and data in transit
between the end user and the cloud or between cloud applications. Government regulations
usually require encryption of sensitive data, which includes financial information, healthcare
data, and personally identifiable information (PII). While a SaaS vendor may provide some
type of encryption, an organization can enhance data security by applying its own
encryption, such as by implementing a CASB (see the sketch after this list).
 Enforce data loss prevention (DLP): DLP software monitors for sensitive data within SaaS
applications or outgoing transmissions of sensitive data and blocks the transmission. DLP
software detects and prevents sensitive data from being downloaded to personal devices and
blocks malware or hackers from attempting to access and download data.
 Monitor collaborative sharing of data: Collaboration controls can detect granular
permissions on files that are shared with other users, including users outside the organization
who access the file through a web link. Employees may inadvertently or intentionally share
confidential documents through email, team spaces, and cloud storage sites such as Dropbox.
 Check provider’s security: An audit of a SaaS provider can include checks on its
compliance with data security and privacy regulations, data encryption policies, employee
security practices, cyber security protection, and data segregation policies.
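
As a hedged illustration of applying one's own encryption before data leaves the organization, the
Python sketch below uses the cryptography package's Fernet recipe (symmetric, authenticated
encryption). The key handling is deliberately simplified and is not a production design.

from cryptography.fernet import Fernet

# Generate and keep a symmetric key (in practice, store it in a key manager)
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt sensitive data client-side before uploading it to a SaaS application
token = f.encrypt(b"patient-id: 12345")   # ciphertext is safe to store in the cloud

# Only holders of the key can decrypt what the provider stores
print(f.decrypt(token))                   # b'patient-id: 12345'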
Lecture: 31

What is Cloud Governance?


Cloud governance is a set of rules and policies adopted by companies that run services in the cloud.
The goal of cloud governance is to enhance data security, manage risk, and enable the smooth
operation of cloud systems.

The cloud makes it easier than ever for teams within the organization to develop their own systems
and deploy assets with a single click. While this promotes innovation and productivity, it can also
cause issues like:

 Poor integration between cloud systems, even within the same organization
 Duplication of effort or data between different parts of the organization
 Lack of alignment between cloud systems and business goals
 New security issues—for example, the risk of deploying cloud systems with weak or lacking access
control

Cloud governance ensures that asset deployment, system integration, data security, and other aspects
of cloud computing are properly planned, considered, and managed. It is highly dynamic, because
cloud systems can be created and maintained by different groups in the organization, involve third-
party vendors, and can change on a daily basis. Cloud governance initiatives ensure this complex
environment meets organizational policies, security best practices and compliance obligations.

Why is Cloud Governance Important?


Here are a few ways cloud governance can benefit an organization running critical services in the
cloud.
Improves Cloud Resource Management

Cloud governance can help break down cloud systems into individual accounts that represent
departments, projects, or cost centers within the organization. This is a best practice recommended
by many cloud providers. Segregating cloud workloads into separate accounts improves cost
control and visibility and limits the business impact of security issues.

Cloud Governance Model Principles

The following five principles are a good starting point for building your cloud governance model:

1. Compliance with policies and standards—cloud usage standards must be consistent with
regulations and compliance standards used by your organization and others in your industry.
2. Alignment with business objectives—cloud strategy should be an integral part of the overall
business and IT strategy. All cloud systems and policies should demonstrably support
business goals.
3. Collaboration—there should be clear agreements between owners and users of cloud
infrastructure, and other stakeholders in the relevant organizational units, to ensure they make
appropriate and mutually beneficial use of cloud resources.



4. Change management—all changes to a cloud environment must be implemented in a
consistent and standardized manner, subject to the appropriate controls.
5. Dynamic response—cloud governance should rely on monitoring and cloud automation to
dynamically respond to events in the cloud environment.

Lecture: 32
What is Virtual Machine Security in Cloud?

The term "virtualized security", sometimes known as "security virtualization", describes security
solutions that are software-based and designed to operate in a virtualized IT environment. This is
distinct from conventional hardware-based network security, which is static and relies on
equipment such as conventional switches, routers, and firewalls.
Virtualized security is flexible and adaptive, in contrast to hardware-based security. It can be
deployed anywhere on the network and is frequently cloud-based, so it is not bound to a specific
device.
In cloud computing, where operators construct workloads and applications on demand, virtualized
security enables security services and functions to move around with those on-demand-created
workloads. This is crucial for virtual machine security, for example when isolating multitenant
setups in public cloud settings. Because data and workloads move around a complex ecosystem
involving several providers, virtualized security's flexibility is also useful for securing hybrid and
multi-cloud settings.

Types of Hypervisors
Type-1 Hypervisor
A Type-1 hypervisor runs directly on bare hardware and therefore has no host operating system.
Examples include LynxSecure, RTS Hypervisor, Oracle VM, Sun xVM Server, and VirtualLogix
VLX.

Type-2 Hypervisor
A Type-2 hypervisor is a software interface that simulates the hardware a system typically
communicates with, running on top of a host operating system. Examples include KVM, Microsoft
Hyper-V, VMware Fusion, Virtual Server 2005 R2, Windows Virtual PC, and VMware
Workstation 6.0.
Type I Virtualization
In this design, the Virtual Machine Monitor (VMM) sits directly above the hardware and monitors
all interactions between the VMs and the hardware. On top of the VMM is a management VM that
handles the other guest VMs and most of the hardware interactions. The Xen system is a common
illustration of this kind of virtualization design.

Type II Virtualization
Architectures of this type, such as VMware Player, run the VMM as an application within the host
operating system (OS). The host OS is responsible for the I/O drivers and for guest VM
management.



Benefits of Virtualized Security
Virtualized security is now practically required to meet the intricate security requirements of a
virtualized network, and it is also more adaptable and effective than traditional physical security.

Cost-Effectiveness: Cloud computing’s virtual machine security enables businesses to keep their
networks secure without having to significantly raise their expenditures on pricey proprietary
hardware. Usage-based pricing for cloud-based virtualized security services can result in significant
savings for businesses that manage their resources effectively.
Flexibility: It is essential in a virtualized environment that security operations can follow workloads
wherever they go. A company is able to profit fully from virtualization while simultaneously
maintaining data security thanks to the protection it offers across various data centers, in multi-cloud,
and hybrid-cloud environments.
Operational Efficiency: Virtualized security can be deployed more quickly and easily than
hardware-based security because it does not require IT teams to set up and configure several
hardware appliances. Instead, they can quickly scale security systems through centralized
software. Using security technology also allows security-related duties to be automated, which
frees up time for IT employees.
Regulatory Compliance: Virtual machine security in cloud computing is a requirement for
enterprises that need to maintain regulatory compliance because traditional hardware-based security
is static and unable to keep up with the demands of a virtualized network.

Lecture: 33

What Is Identity and Access Management (IAM)?

Identity and Access Management (IAM) is a combination of policies and technologies that allows
organizations to identify users and provide the right form of access as and when required. There
has been a burst of new applications in the market, and the requirement for organizations to use
these applications has increased drastically. The services and resources you want to access can be
specified in IAM. IAM itself does not provide any replication or backup. IAM can be used for
many purposes, such as controlling individual and group access to your AWS resources. With IAM
policies, managing permissions for your workforce and systems to ensure least-privilege
permissions becomes easier. AWS IAM is a global service.

Components of Identity and Access Management (IAM)

1. Users
2. Roles
3. Groups
4. Policies
New applications being created over the cloud, on mobile, and on-premises can hold sensitive and
regulated information. It is no longer acceptable or feasible to just create an identity server and
provide access based on requests. In current times, an organization should be able to track the flow
of information and provide least-privileged access as and when required. With a large workforce
and new applications being added every day, this becomes quite difficult to do manually, so
organizations concentrate on managing identity and its access with the help of IAM tools. It is very
difficult for a single tool to manage everything, but there are multiple IAM tools in the market that
help organizations with the services given below.
IAM Identities Classified As
1. IAM Users
2. IAM Groups
3. IAM Roles

Root User: The root user is created automatically and granted unrestricted rights. We can create an
admin user with fewer powers for day-to-day control of the entire Amazon account.

IAM Users: We can utilize IAM users to access the AWS Console; their administrative
permissions differ from those of the root user, and we can keep track of their login information.
Example
With the aid of IAM users, we can give a specific person access to a service available in the
Amazon dashboard with only a limited set of permissions, such as read-only access. Say user-1
should have read-only access to an EC2 instance and no additional permissions, such as create,
delete, or update. By creating an IAM user for user-1 and attaching a read-only policy to it, we can
allow access to the EC2 instance with only the required permissions, as sketched below.
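
A minimal boto3 sketch of the user-1 example follows. It assumes the caller already has
administrative credentials configured; the user name is illustrative, while
AmazonEC2ReadOnlyAccess is a real AWS-managed policy.

import boto3

iam = boto3.client("iam")

# Create the IAM user
iam.create_user(UserName="user-1")

# Attach read-only EC2 permissions: user-1 can describe instances
# but cannot create, delete, or update them (least privilege)
iam.attach_user_policy(
    UserName="user-1",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)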

Benefits of IAM Systems


 Enhanced Security: IAM prevents unauthorized access to sensitive data and systems,
minimizing what unauthorized personnel can reach.
 Improved Compliance: It guarantees that the organization complies with legal requirements
concerning access control and the tracking of activities performed by users.
 Increased Productivity: It automates user and access management processes, minimizing
manual operations and providing faster access to the required resources.
 Reduced Risk: Strict access protocols reduce insider risk and data loss.
 Centralized Management: It consolidates identity and access control and enforces the same
policies across different systems.

IAM Technologies and Tools


 Single Sign-On (SSO): Lets a user log in once and use multiple applications, while adding
security to the services. Examples: Okta and Microsoft Azure AD.
 Multi-Factor Authentication (MFA): Requires verifying your account in two or more ways
to boost its security. Examples: Duo Security and Google Authenticator.
 Role-Based Access Control (RBAC): Secures the system based on employees' roles, so each
user has the least privilege needed to access the system. Example: IBM Security Identity
Manager.
 Privileged Access Management (PAM): Controls and monitors how highly privileged
computing resources are obtained and maintained. Examples: CyberArk, BeyondTrust.



What are Cloud Security Standards?
Due to the various security dangers facing the cloud, it was essential to establish guidelines for
how work is done there. Cloud security standards offer a thorough framework for how cloud
security is upheld with regard to both the user and the service provider.
 Cloud security standards provide a roadmap for businesses transitioning from a traditional
approach to a cloud-based approach by providing the right tools, configurations, and policies
required for security in cloud usage.
 It helps to devise an effective security strategy for the organization.
 It also supports organizational goals like privacy, portability, security, and interoperability.
 Certification with cloud security standards increases trust and gives businesses a competitive
edge.

Need for Cloud Security Standards

 Ensure cloud computing is an appropriate environment: Organizations need to make


sure that cloud computing is the appropriate environment for the applications as security and
mitigating risk are the major concerns.
 To ensure that sensitive data is safe in the cloud: Organizations need a way to make sure
that the sensitive data is safe in the cloud while remaining compliant with standards and
regulations.
 No existing clear standard: Cloud security standards are essential because earlier there was
no clear standard defining what constitutes a secure cloud environment, making it difficult
for cloud providers and cloud users alike to know what must be done to ensure a secure
environment.
 Need for a framework that addresses all aspects of cloud security: Businesses need to
adopt a framework that addresses all aspects of cloud security in a unified way.



UNIT-V
Lecture: 34

What is Hadoop in the Cloud?


Hadoop is an open-source software framework for storing large amounts of data and processing it
in a distributed computing environment. The framework is written mainly in Java, with some
native code in C and shell scripts. It is designed to handle big data and is based on the MapReduce
programming model, which allows for the parallel processing of large datasets.
Cloud platforms like AWS, Azure, and Google Cloud offer Hadoop-based services (Amazon EMR,
Azure HDInsight, Google Dataproc) allowing users to deploy and manage Hadoop clusters without
dealing with infrastructure setup.

Hadoop has two main components:


 HDFS (Hadoop Distributed File System): This is the storage component of Hadoop, which
allows large amounts of data to be stored across multiple machines. It is designed to work
with commodity hardware, which makes it cost-effective (see the sketch after this list).
 YARN (Yet Another Resource Negotiator): This is the resource management component of
Hadoop, which manages the allocation of resources (such as CPU and memory) for
processing the data stored in HDFS
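
As a hedged illustration of talking to HDFS from Python, the sketch below uses the third-party
hdfs package (a WebHDFS client); the NameNode URL, port, user, and file path are illustrative
assumptions.

from hdfs import InsecureClient

# Connect to the NameNode's WebHDFS endpoint (9870 is the Hadoop 3 default)
client = InsecureClient("http://namenode:9870", user="hadoop")

# Write a small file into HDFS; its blocks are replicated across DataNodes
client.write("/user/hadoop/demo.txt", data=b"stored across the cluster")

# Read it back
with client.read("/user/hadoop/demo.txt") as reader:
    print(reader.read())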

Features of Hadoop:
1. It is fault-tolerant.
2. It is highly available.
3. Its programming model is easy.
4. It has huge, flexible storage.
5. It is low-cost.

Serverless Hadoop:
Evolution toward serverless computing models (e.g., AWS Lambda, Azure Functions) has
influenced the development of serverless Hadoop services, allowing users to execute Hadoop tasks
without managing underlying infrastructure.
Hadoop and Big Data Integration:
Cloud-based Hadoop solutions integrate with various big data technologies and analytics tools,
facilitating efficient data processing, analysis, and visualization.
Elasticity and Scalability:
Cloud-based Hadoop platforms provide scalability and elasticity, enabling users to dynamically
adjust cluster sizes based on workload demands without provisioning or managing physical
infrastructure.
Security and Compliance:
Advancements in cloud security have led to improved security features and compliance
certifications for Hadoop deployments on cloud platforms, ensuring data protection and regulatory
adherence.
Managed Services and Automation:
Cloud providers offer managed Hadoop services with automated provisioning, monitoring, and
maintenance, simplifying the management of Hadoop clusters and reducing administrative
overhead.



Integration with Other Cloud Services:
Hadoop in the cloud integrates seamlessly with other cloud services like storage (e.g., Amazon S3,
Azure Blob Storage), databases, and machine learning tools for a comprehensive data ecosystem.

Hybrid and Multi-Cloud Deployments:


Organizations utilize hybrid and multi-cloud strategies, leveraging both on-premises and cloud-
based Hadoop environments, enabling flexibility, scalability, and redundancy.
Cost Optimization and Resource Utilization:
Cloud-based Hadoop services offer pay-as-you-go pricing models, allowing cost optimization by
scaling resources based on demand, avoiding upfront infrastructure costs.
Advancements in Hadoop Ecosystem:
Continuous enhancements and contributions to the Hadoop ecosystem (Hive, Spark, HBase, etc.)
further improve performance, efficiency, and functionality in cloud environments.
Hadoop's integration with cloud technology has expanded its capabilities, making big data
processing more accessible, scalable, and cost-effective for organizations seeking to harness the
power of large-scale data analytics and processing.

Lecture: 35

What is MapReduce?
MapReduce is a data processing tool used to process data in parallel in a distributed form. It was
developed in 2004, on the basis of the paper titled "MapReduce: Simplified Data Processing on
Large Clusters" published by Google.
MapReduce is a paradigm with two phases: the mapper phase and the reducer phase. In the mapper,
the input is given in the form of key-value pairs. The output of the mapper is fed to the reducer as
input, and the reducer runs only after the mapper is over. The reducer also takes input in key-value
format, and the output of the reducer is the final output.

MapReduce Architecture:



Components of MapReduce Architecture:

Client: The MapReduce client is the one who brings the Job to the MapReduce for processing.
There can be multiple clients available that continuously send jobs for processing to the Hadoop
MapReduce Manager.
Job: The MapReduce job is the actual work the client wants done; it comprises many smaller tasks
that the client wants to process or execute.
Hadoop MapReduce Master: It divides the particular job into subsequent job-parts.
Job-Parts: The task or sub-jobs that are obtained after dividing the main job. The result of all the
job-parts combined to produce the final output.
Input Data: The data set that is fed to the MapReduce for processing.
Output Data: The final result is obtained after the processing.

In MapReduce, we have a client. The client submits a job of a particular size to the Hadoop
MapReduce Master. The MapReduce Master divides this job into equivalent job-parts, which are
then made available to the Map and Reduce tasks. The Map and Reduce tasks contain the program
logic for the use case the particular company is solving; the developer writes this logic to fulfill the
industry requirement. The input data is fed to the Map task, and the Map generates intermediate
key-value pairs as its output. These key-value pairs are then fed to the Reducer, and the final output
is stored on HDFS. Any number of Map and Reduce tasks can be made available for processing
the data, as required. The Map and Reduce logic is written in an optimized way so that time and
space complexity stay low. A small worked example follows.
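
The sketch below illustrates the two phases on the classic word-count problem in plain Python
(real Hadoop jobs are usually written in Java or run through Hadoop Streaming); the shuffle step
that groups intermediate pairs by key is simulated with a dictionary.

from collections import defaultdict

def mapper(document):
    # Map phase: emit an intermediate (key, value) pair for every word
    for word in document.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # Reduce phase: aggregate all values that share the same key
    return (word, sum(counts))

docs = ["the cloud stores data", "the cloud scales"]

# Shuffle: group intermediate pairs by key before reducing
groups = defaultdict(list)
for doc in docs:
    for word, count in mapper(doc):
        groups[word].append(count)

print([reducer(w, c) for w, c in groups.items()])
# [('the', 2), ('cloud', 2), ('stores', 1), ('data', 1), ('scales', 1)]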
Usage of MapReduce
 It can be used in various application like document clustering, distributed sorting, and web
link-graph reversal.
 It can be used for distributed pattern-based searching.
 We can also use MapReduce in machine learning.
 It was used by Google to regenerate Google's index of the World Wide Web.
 It can be used in multiple computing environments such as multi-cluster, multi-core, and
mobile environment.
Lecture: 36

What is a Virtual Box?


Oracle Corporation develops VirtualBox, also known as VB. It acts as a hypervisor for x86
machines. It was originally created by Innotek GmbH, which made it publicly available in 2007;
Innotek was then acquired by Sun Microsystems in 2008, and since Oracle's acquisition of Sun it
has been developed by Oracle and is referred to as Oracle VM VirtualBox. VirtualBox comes in a
variety of flavors, depending on the operating system for which it is configured. VirtualBox on
Ubuntu is more common, although VirtualBox for Windows is also popular. With the introduction
of Android phones, VirtualBox for Android has emerged as the new face of virtual machines on
smartphones.



Use of Virtual Box
In general, VirtualBox is a software virtualization program that may be run as an application on
any operating system, which is one of its numerous advantages. It supports the installation of
additional operating systems, known as guest OSes, and can then set up and administer guest
virtual machines, each with its own operating system and virtual environment. VirtualBox runs on
several host operating systems, including Windows XP, Windows 7, Linux, Windows Vista, Mac
OS X, Solaris, and OpenSolaris. Windows, Linux, OS/2, BSD, Haiku, and other guest operating
systems are supported in various versions and derivatives.

It can be used in the following projects:


 Software portability
 Application development
 System testing and debugging
 Network simulation
 General computing

What is Google App Engine (GAE)?


Google App Engine is a scalable runtime environment mostly used to run Web applications. These
applications scale dynamically as demand changes over time because of Google's vast computing
infrastructure. App Engine makes it easier to develop scalable, high-performance Web apps
because it offers a secure execution environment in addition to a number of services, and Google's
applications scale up and down in response to shifting demand. These services include cron tasks,
communications, scalable data stores, work queues, and in-memory caching.

The App Engine SDK facilitates the testing and deployment of applications by emulating the
production runtime environment, allowing developers to design and test applications on their own
PCs. When an application is finished, developers can quickly migrate it to App Engine, put quotas
in place to control the costs generated, and make the program available to everyone. Python, Java,
and Go are among the languages currently supported.
Features of App Engine
Runtimes and Languages
To create an application for an app engine, you can use Go, Java, PHP, or Python. You can develop
and test an app locally using the SDK’s deployment toolkit. Each language’s SDK and nun time are
unique. Your program is run in a:
 Java Run Time Environment version 7
 Python Run Time environment version 2.7
 PHP runtime’s PHP 5.4 environment
 Go runtime 1.2 environment
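
As a hedged sketch, here is the kind of minimal Python Web app that runs in the App Engine
standard environment (modern runtimes commonly pair a small Flask app like this with an
app.yaml descriptor, omitted here; file names and the port are illustrative).

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine routes incoming HTTP requests to handlers like this one
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine runs the app itself
    app.run(host="127.0.0.1", port=8080)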



Advantages of Google App Engine
The Google App Engine has a lot of benefits that can help you advance your app ideas. This
comprises:
1. Infrastructure for Security: The Internet infrastructure that Google uses is arguably the
safest in the entire world. Since the application data and code are hosted on extremely secure
servers, there has rarely been any kind of illegal access to date.
2. Faster Time to Market: For every organization, getting a product or service to market
quickly is crucial. When it comes to quickly releasing the product, encouraging the
development and maintenance of an app is essential. A firm can grow swiftly with Google
Cloud App Engine’s assistance.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the app to
users because there is no hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and update the applications are
included in Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google App Engine
enable developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success of any software. When
using the Google app engine to construct apps, you may access technologies like GFS, Big
Table, and others that Google uses to build its own apps.
7. Performance and Reliability: Among international brands, Google ranks among the top
ones. Therefore, you must bear that in mind while talking about performance and reliability.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or even do it
yourself. The money you save might be put toward developing other areas of your company.
9. Platform Independence: Since the app engine platform only has a few dependencies, you
can easily relocate all of your data to another environment.

Lecture: 37

Programming Environment for Google AppEngine


The App Engine standard environment is based on container instances running on Google's
infrastructure. Containers are preconfigured with one of several available runtimes.

The standard environment makes it easy to build and deploy an application that runs reliably even
under heavy load and with large amounts of data.

Google App Engine (GAE) supports a number of programming languages for building applications,
including:
Go, Java, PHP, Python, .NET, Node.js, Ruby, and C#.
GAE also supports other languages through custom runtimes. GAE provides four runtime
environments, one for each of the supported programming languages.
GAE is suitable for applications that need to scale quickly in response to traffic spikes, or that are
intended to run for free or at a low cost. GAE offers a secure, sandboxed environment for
applications to run in. It also has the following features:



 Automatic scaling: GAE automatically scales applications based on incoming load.
 Background threads: GAE supports background threads.
 In-place security patches: GAE includes automatic in-place security patches.
 Access to Google Cloud APIs and services: GAE allows developers to access many Google
Cloud APIs and services, including Cloud Storage, Cloud SQL, and Google Tasks.

In the standard environment, applications run in a sandbox using the runtime environment of one
of the languages supported by GAE. The environment is suitable for applications that need to scale
rapidly (up or down) in response to sudden or extreme traffic spikes. It can also be used for
applications that are intended to run for free or at very low cost.
The standard GAE environment offers seconds-level instance startup times and deployment times,
supports background threads and can be scaled to zero. It includes automatic in-place
security patches and allows developers to access many Google Cloud application programming
interfaces (APIs) and services, including Cloud Storage, Cloud SQL and Google Tasks.
The GAE flexible environment automatically scales apps up or down while also balancing the load.
It allows developers to customize the runtimes provided for the supported languages or provide their
own runtime by supplying a custom Docker image or Dockerfile.
The environment is suitable for many kinds of apps, including apps that do the following:
 Receive consistent traffic.
 Experience regular traffic fluctuations.
 Run in a Docker container with a custom runtime or source code written in other
programming languages.
 Use frameworks with native code.
 Access Google Cloud project resources residing in the Google Compute Engine network.

Lecture: 38

What is OpenStack?
It is a free, open-standard cloud computing platform that first came into existence on July 21,
2010. It began as a joint project of Rackspace Hosting and NASA to make cloud computing more
ubiquitous. It is deployed as Infrastructure-as-a-Service (IaaS) in both public and private clouds,
where virtual resources are made available to users. The software platform consists of interrelated
components that control multi-vendor hardware pools of processing, storage, and networking
resources throughout a data center. In OpenStack, the tools used to build this platform are referred
to as "projects". These projects handle a large number of services, including compute, networking,
and storage services. Unlike plain virtualization, in which resources such as RAM and CPU are
abstracted from the hardware using hypervisors, OpenStack uses a number of APIs to abstract
those resources so that users and administrators can interact directly with the cloud services.

OpenStack components
Apart from the various projects which constitute the OpenStack platform, there are nine major
services, namely Nova, Neutron, Swift, Cinder, Keystone, Glance, Horizon, Ceilometer, and Heat.
Below are basic definitions of these components to give us an idea of what each one does.



1. Nova (compute service): It manages the compute resources like creating, deleting, and
handling the scheduling. It can be seen as a program dedicated to the automation of resources
that are responsible for the virtualization of services and high-performance computing.
2. Neutron (networking service): It is responsible for connecting all the networks across
OpenStack. It is an API driven service that manages all networks and IP addresses.
3. Swift (object storage): It is an object storage service with high fault-tolerance capabilities,
used to store and retrieve unstructured data objects with the help of a RESTful API. Being a
distributed platform, it also provides redundant storage within clustered servers and can
successfully manage petabytes of data.
4. Cinder (block storage): It is responsible for providing persistent block storage that is made
accessible using an API (self- service). Consequently, it allows users to define and manage the
amount of cloud storage required.
5. Keystone (identity service provider): It is responsible for all types of authentications and
authorizations in the OpenStack services. It is a directory-based service that uses a central
repository to map the correct services with the correct user.
6. Glance (image service provider): It is responsible for registering, storing, and retrieving
virtual disk images from the complete network. These images are stored in a wide range of
back-end systems.
7. Horizon (dashboard): It is responsible for providing a web-based interface for OpenStack
services. It is used to manage, provision, and monitor cloud resources.
8. Ceilometer (telemetry): It is responsible for metering and billing of services used. Also, it is
used to generate alarms when a certain threshold is exceeded.
9. Heat (orchestration): It is used for on-demand service provisioning with auto-scaling of cloud
resources. It works in coordination with Ceilometer. A short usage sketch of the OpenStack
APIs follows this list.
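
As a hedged sketch of the API-driven design, the Python code below uses the openstacksdk library
to boot a VM through Nova; the cloud name and resource names are illustrative, and the call
assumes a clouds.yaml file with valid credentials.

import openstack

# Keystone authenticates the connection using credentials from clouds.yaml
conn = openstack.connect(cloud="mycloud")

# Glance supplies the image, Nova the flavor, Neutron the network
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Ask Nova to create and schedule the server
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the VM has booted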

Features of OpenStack
 Modular architecture: OpenStack is designed with a modular architecture that enables
users to deploy only the components they need. This makes it easier to customize and scale
the platform to meet specific business requirements.
 Multi-tenancy support: OpenStack provides multi-tenancy support, which enables multiple
users to access the same cloud infrastructure while maintaining security and isolation
between them. This is particularly important for cloud service providers who need to offer
services to multiple customers.
 Open-source software: OpenStack is an open-source software platform that is free to use
and modify. This enables users to customize the platform to meet their specific requirements,
without the need for expensive proprietary software licenses.
 Distributed architecture: OpenStack is designed with a distributed architecture that enables
users to scale their cloud infrastructure horizontally across multiple physical servers. This
makes it easier to handle large workloads and improve system performance.
 API-driven: OpenStack is API-driven, which means that all components can be accessed
and controlled through a set of APIs. This makes it easier to automate and integrate with
other tools and services.
 Comprehensive dashboard: OpenStack provides a comprehensive dashboard that enables
users to manage their cloud infrastructure and resources through a user-friendly web
interface. This makes it easier to monitor and manage cloud resources without the need for
specialized technical skills.
 Resource pooling: OpenStack enables users to pool computing, storage, and networking
resources, which can be dynamically allocated and de-allocated based on demand. This
enables users to optimize resource utilization and reduce waste.



Lecture: 39

What is Cloud Federation?


Cloud federation, also known as federated cloud, is the deployment and management of several
external and internal cloud computing services to match business needs. It is a multinational cloud
system that integrates private, community, and public clouds into scalable computing platforms. A
federated cloud is created by connecting the cloud environments of different cloud providers using
a common standard.

The architecture of Federated Cloud:


The architecture of Federated Cloud consists of three basic components:
1. Cloud Exchange
The Cloud Exchange acts as a mediator between the cloud coordinator and the cloud broker. The
demands of the cloud broker are mapped by the cloud exchange to the available services provided
by the cloud coordinator. The cloud exchange keeps track of the current cost, demand patterns, and
available cloud providers, and this information is periodically updated by the cloud coordinator.
2. Cloud Coordinator
The cloud coordinator assigns cloud resources to remote users based on the quality of service they
demand and the credits they hold in the cloud bank. The cloud enterprises and their membership
are managed by the cloud coordinator.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator and analyzes the service-level agreements
and the resources offered by the various cloud providers in the cloud exchange. The cloud broker
then finalizes the most suitable deal for its client; a hypothetical sketch of this interaction follows.
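
To make the division of labor concrete, here is a small, purely hypothetical Python sketch of the
exchange/broker interaction. The Offer fields, provider names, prices, and QoS values are invented
for illustration and do not correspond to any real federation API.

# Hypothetical model: cloud coordinators publish offers to the exchange, and
# a broker asks the exchange for the cheapest offer meeting the client's QoS.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str           # cloud coordinator publishing the offer
    vcpus: int              # resources offered
    price_per_hour: float   # current price tracked by the exchange
    availability: float     # advertised QoS (e.g., uptime fraction)

class CloudExchange:
    """Mediator holding the current offers of all participating providers."""
    def __init__(self):
        self.offers = []

    def publish(self, offer: Offer):                        # called by coordinators
        self.offers.append(offer)

    def match(self, vcpus: int, min_availability: float):   # called by brokers
        candidates = [o for o in self.offers
                      if o.vcpus >= vcpus and o.availability >= min_availability]
        return min(candidates, key=lambda o: o.price_per_hour, default=None)

exchange = CloudExchange()
exchange.publish(Offer("provider-A", 8, 0.40, 0.999))
exchange.publish(Offer("provider-B", 8, 0.35, 0.995))

# The broker finalizes the most suitable deal for its client.
best = exchange.match(vcpus=8, min_availability=0.999)
print(best)   # Offer(provider='provider-A', vcpus=8, price_per_hour=0.4, availability=0.999)

A real exchange would additionally track demand patterns and receive periodic updates from the
coordinators, as described above.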



Properties of Federated Cloud:
1. In a federated cloud, users can interact with the architecture either centrally or in a
decentralized manner. In centralized interaction, the user interacts with a broker that
mediates between them and the organization; decentralized interaction permits the user
to interact directly with the clouds in the federation.
2. A federated cloud can serve various niches, both commercial and non-commercial.
3. The visibility of a federated cloud helps the user understand how the several clouds in
the federated environment are organized.
4. A federated cloud can be monitored in two ways: MaaS (Monitoring as a Service)
provides information that helps the user track contracted services, while global
monitoring helps maintain the federated cloud itself.
5. The providers who participate in the federation publish their offers to a central entity.
The user interacts with this central entity to check prices and propose an offer.
6. Marketed objects such as infrastructure, software, and platform services pass through
the federation when consumed in the federated cloud.

Benefits of Federated Cloud:


1. It minimizes the consumption of energy.
2. It increases reliability.
3. It minimizes the time and cost of providers due to dynamic scalability.
4. It connects various cloud service providers globally. The providers may buy and sell
services on demand.
5. It provides easy scaling up of resources.

Federated Cloud technologies:

The technologies that support cloud federation and its services are:
1. OpenNebula
It is a cloud computing platform for managing heterogeneous distributed data center
infrastructures. It emphasizes interoperability, leveraging existing information technology
assets, protecting investments, and exposing application programming interfaces (APIs).
2. Aneka coordinator
The Aneka coordinator is a composition of Aneka services and Aneka peer components
(network architectures) that give the cloud the ability and performance to interact with other
cloud services.
3. Eucalyptus
Eucalyptus pools computational, storage, and network resources that can be measured and
scaled up or down as application workloads change. It is an open-source framework that
provisions storage, network, and other computational resources to build a cloud environment.
Levels of Cloud Federation
Cloud Federation stack
Each level of the cloud federation poses unique problems and operates at a different layer of the IT
stack, so different strategies and technologies are needed at each one. Taken together, the solutions
to the problems encountered at these levels form a reference model for a cloud federation.

Conceptual Level
The conceptual level addresses the challenges of presenting a cloud federation as an advantageous
alternative to renting services from a single cloud provider. At this level, it is crucial to identify the
new opportunities that a federated environment brings in comparison to a single-provider solution
and to explicitly describe the benefits, for service providers and service users, of joining a
federation.

At this level, the following factors need attention:


 The reasons cloud providers would want to join a federation.
 Motivations for service consumers to leverage a federation.
 Advantages for service providers in renting their services to other providers.
 Obligations of providers once they join a federation.
 Trust agreements between providers.
 Transparency toward consumers.
The incentives of service providers and customers joining a federation stand out among these factors
as being the most important.

Logical and Operational Level


The obstacles in creating a framework that allows the aggregation of providers from various
administrative domains within the context of a single overlay infrastructure, or cloud federation, are
identified and addressed at the logical and operational level of a federated cloud.
Policies and guidelines for cooperation are established at this level. Additionally, this is the layer
where choices are made regarding how and when to use a service from another provider that is being
leased or leveraged. The operational component characterizes and molds the dynamic behavior of the
federation as a result of the decisions made by the individual providers, while the logical component
specifies the context in which agreements among providers are made and services are negotiated.

At this level, MOCC (market-oriented cloud computing) is put into practice and becomes a reality.
At this stage, it is crucial to deal with the following difficulties:
 How should a federation be represented?
 How should a cloud service, a cloud provider, or an agreement be modeled and represented?
 How should the regulations and standards that permit providers to join a federation be
defined?
 What procedures are in place to resolve disputes between providers?
 What obligations does each supplier have to the other?
 When should consumers and providers utilize the federation?
 What categories of services are more likely to be rented than purchased?
 Which percentage of the resources should be leased, and how should we value the resources
that are leased?
Infrastructure Level
The infrastructure level deals with the technological difficulties of making different cloud
computing systems work together seamlessly. It addresses the technical barriers that keep cloud
computing systems belonging to different administrative domains from interoperating. These
barriers can be removed by using standardized protocols and interfaces.

The following concerns should be addressed at this level:


 What types of standards ought to be applied?
 How should interfaces and protocols be created to work together?
 Which technologies should be used for collaboration?
 How can we design platform components, software systems, and services that support
interoperability?
Only open standards and interfaces allow for interoperability and composition among different
cloud computing vendors. Additionally, each layer of the Cloud Computing Reference Model has
significantly different interfaces and protocols, so interoperability must be addressed layer by
layer; the sketch below illustrates the idea at the compute layer.
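
As a rough illustration of what such open interfaces enable, the Apache Libcloud Python library
exposes one compute abstraction over many providers. The credentials, region, and project values
below are placeholder assumptions, and each driver accepts provider-specific arguments; running
this would require valid accounts with both providers.

# Rough illustration of interoperability through a common API: Apache Libcloud
# offers one compute abstraction over many providers, so the same client code
# can address distinct administrative domains. All credentials are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
gce = get_driver(Provider.GCE)("service-account@example.iam.gserviceaccount.com",
                               "key.pem", project="demo-project")

# The same calls work against both clouds through the shared abstraction.
for driver in (ec2, gce):
    for node in driver.list_nodes():
        print(driver.type, node.name, node.state)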

Lecture: 40

Services of Cloud Federation


Active Directory Federation Services (ADFS)
Microsoft developed the Single Sign-On (SSO) system known as (ADFS). It serves as a component
of Windows Server operating systems, giving users authenticated access to programs through Active
Directory that cannot use Integrated Windows Authentication (IWA) (AD).
ADFS manages authentication through a proxy service located between Active Directory and the
intended application. Access is granted via a Federated Trust, which connects ADFS and the target
application, so users no longer need to validate their identity directly on the federated application
in order to log on.
The authentication process typically follows these four phases:
 The user accesses a URL that the ADFS service has provided.
 The user is then verified by the AD service of the company through the ADFS service.
 The ADFS service then gives the user an authentication claim after successful authentication.
 The target application then receives this claim from the user’s browser and decides whether
to grant or deny access based on the established Federated Trust. A simplified sketch of this
claim flow appears below.
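
The claim flow can be sketched in a few lines of Python. This is a deliberately simplified
illustration: real ADFS issues SAML or JWT tokens signed with the federation server's certificate,
whereas here a shared HMAC key stands in for the Federated Trust, and all names are hypothetical.

# Illustrative sketch only: an HMAC over the claim body stands in for the
# certificate-based signature a real ADFS deployment would use.
import base64, hashlib, hmac, json

FEDERATION_KEY = b"shared-trust-secret"   # hypothetical stand-in for the Federated Trust

def issue_claim(user, groups):
    """Phases 2-3: ADFS authenticates the user against AD and issues a signed claim."""
    body = base64.urlsafe_b64encode(json.dumps({"sub": user, "groups": groups}).encode())
    sig = hmac.new(FEDERATION_KEY, body, hashlib.sha256).hexdigest()
    return body + b"." + sig.encode()

def accept_claim(token):
    """Phase 4: the target application verifies the claim and grants or denies access."""
    body, _, sig = token.partition(b".")
    expected = hmac.new(FEDERATION_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None                        # signature invalid: deny access
    return json.loads(base64.urlsafe_b64decode(body))

claims = accept_claim(issue_claim("alice@example.com", ["Finance"]))
print(claims)   # {'sub': 'alice@example.com', 'groups': ['Finance']}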

Cloud-based Single Sign-On and Identity Federation without ADFS


Applications can delegate user authentication duties to a different system through a process known
as identity federation. By delegating access for all of your applications through a single federation
system, you can achieve single sign-on, where users only need to log in once to access any number
of their applications. But because federation enables organizations to centralize the access
management function, it is far more significant than single sign-on alone: user experience, security,
application onboarding, service logging and monitoring, operational efficiency in IT, and many
other areas may all benefit.



Radiant One Cloud Federation Service: Your On-Premises IdP
The newest addition to the Radiant One package is the Cloud Federation Service (CFS), which is
powered by identity virtualization. Together with Radiant One FID, CFS isolates your external and
cloud applications from the complexity of your identity systems by delegating the work of
authenticating against all of your identity stores to a single common virtual layer.


Future of Federation
Federated cloud computing is expected to be a major part of the future of cloud computing, with the
potential to improve performance, reduce costs, and increase flexibility:
 Democratization: Federated cloud computing can help businesses connect with customers,
partners, and employees worldwide.
 Performance: Federated cloud computing can improve performance by sharing computing
assets, servers, and facilities between multiple cloud service providers.
 Cost: Federated cloud computing can reduce costs by partially subcontracting computing
resources and facilities from nearby cost-efficient regions.
 Flexibility: Federated cloud computing can increase flexibility by allowing organizations to
use and move between different cloud services as needed.
 Sustainability: Federated cloud computing can support sustainability by considering the
CO2 emission factor when choosing a location for resources.
 Enhanced Interoperability: As organizations increasingly adopt multi-cloud strategies,
there will be a stronger push for interoperability standards. This will allow seamless
integration and communication between different cloud services, making it easier to
manage resources across diverse environments.

 Decentralized Identity and Access Management: The rise of decentralized identity
solutions will empower users with more control over their data and access rights.
Federation will evolve to support these models, ensuring secure and efficient identity
management across multiple cloud platforms.

 Data Sovereignty and Compliance: With stricter regulations around data privacy and
sovereignty, cloud federation will enable organizations to distribute data across regions
while complying with local laws. This will help organizations maintain control over their
data while benefiting from global cloud resources.

 Hybrid and Multi-Cloud Architectures: Businesses are increasingly adopting hybrid and
multi-cloud models to avoid vendor lock-in and optimize costs. Federation will be crucial
in managing resources, workloads, and data across these diverse environments, facilitating
better resource allocation and redundancy.



 Federated Learning and AI: In the realm of artificial intelligence, federated learning will
allow organizations to train models on decentralized data sources without compromising
privacy. This will enable collaborative AI development across federated cloud
environments while keeping sensitive data secure.

 Improved Security Protocols: As federated systems expand, there will be a heightened
focus on security. Advanced encryption, zero-trust architectures, and continuous
monitoring will be essential to protect data and applications in a federated cloud setup.

 Automation and Orchestration: Automation tools will play a vital role in managing
federated environments. Orchestration platforms will enable dynamic resource allocation
and workload management across multiple clouds, improving efficiency and reducing
operational overhead.

 Edge Computing Integration: The rise of edge computing will necessitate federated cloud
solutions that can manage resources and workloads across both centralized and edge
environments. This will enhance the performance and scalability of applications that
require real-time processing.

 Community-driven Development: As more organizations adopt cloud federation, there
will be a growing emphasis on community-driven development of standards and tools.
Open-source initiatives will likely flourish, fostering collaboration and innovation.

 Sustainability Considerations: As environmental concerns grow, federated cloud systems
will need to prioritize sustainability. This may involve optimizing resource usage, reducing
energy consumption, and enabling carbon-aware computing practices across federated
environments.

Overall, the future of federation in cloud computing will be characterized by greater flexibility,
enhanced collaboration, and a stronger focus on security and compliance, enabling organizations to
harness the full potential of cloud technologies.



IMPORTANT QUESTIONS
(CO1)
Question 1: What is the difference between cloud computing and distributed
computing?

Question 2: Why is cloud computing required? List the five characteristics of cloud computing.

Question 3: Differentiate between parallel computing and grid computing.

Question 4: What is distributed computing?

Question 5: What is Utility Computing?


(CO2)
Question 1: What are the different techniques used for implementation of hardware
virtualization? Explain them in detail.
Question 2: Write steps to ensure virtual machine security in cloud computing.
Question 3: Illustrate web services in detail. Why are web services required? Differentiate
between APIs and web services.
Question 4: Define virtualization. Demonstrate the implementation levels of virtualization.
Question 5: Explain virtualization of CPU, memory, and I/O devices in detail.

(CO3)
Question 1: Explain Cloud Computing reference model with diagram.
Question 2: What are the different security challenges in cloud computing? Discuss
each in brief.
Question 3: List the Layer used in layered cloud architecture.
Question 4: What do you mean by cloud Storage? Describe its types.
Question 5: Illustrate NIST cloud computing reference architecture in details.

(CO4)
Question 1: What is load balancing? What are the advantages of load balancing?
Question 2: Explain the following challenges in cloud: i) Security, ii) Data lock-in and
Standardization, iii) Fault tolerance and Disaster recovery.
Question 3: Why is cloud management important?
Question 4: Explain utility computing.
Question 5: What do you mean by third-party cloud services? Give suitable examples.

(CO5)
Question 1: Take a suitable example and explain the concept of MapReduce.
Question 2: Give a suitable definition of cloud federation stack and explain it in detail.
Question 3: What do you mean by Google App Engine (GAE) and Open stack?
Question 4: What do you mean by Hadoop and its History? Why is it important? Illustrate
Hadoop architecture.

