Unit IV

Unit IV discusses cloud enabling technologies, focusing on Service-Oriented Architecture (SOA), web services, and virtualization. SOA allows applications to utilize network services, promoting reusability and easy maintenance, while web services facilitate data exchange across different platforms using standards like SOAP and WSDL. Virtualization techniques, including hardware and operating system virtualization, enable efficient resource sharing and management through hypervisors and various implementation levels.


UNIT IV CLOUD ENABLING TECHNOLOGIES

Service Oriented Architecture – Web Services – Basics of Virtualization – Emulation – Types of Virtualization – Implementation levels of Virtualization – Virtualization structures – Tools & Mechanisms – Virtualization of CPU, Memory & I/O Devices – Desktop Virtualization – Server Virtualization – Google App Engine – Amazon AWS – Federation in the Cloud.

SERVICE ORIENTED ARCHITECTURE


Service-Oriented Architecture (SOA) is an architectural approach in which applications make use of services available in the network. In this architecture, services are combined to form applications and are invoked through communication calls over the internet.
 SOA allows users to combine a large number of facilities from existing
services to form applications.
 SOA encompasses a set of design principles that structure system
development and provide means for integrating components into a coherent
and decentralized system.
 SOA-based computing packages functionalities into a set of interoperable services, which can be integrated into different software systems belonging to separate business domains.
There are two major roles within Service-Oriented Architecture:
1. Service provider: The service provider is the maintainer of the service and the organization that makes available one or more services for others to use. To advertise services, the provider can publish them in a registry, together with a service contract that specifies the nature of the service, how to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry and develop the required client components to bind and use the service.

Services might aggregate information and data retrieved from other services, or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
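As a toy illustration of orchestration (the service names and data here are invented, not from the text), a single orchestrator acts as the point of control, calling independent services in order and composing their results into one workflow:

```python
# Hypothetical sketch of service orchestration: one central coordinator
# invokes independent services in sequence and aggregates their results.

def inventory_service(item):
    # Stand-in for a remote inventory lookup service.
    return {"item": item, "in_stock": True}

def pricing_service(item):
    # Stand-in for a remote pricing service.
    return {"item": item, "price": 9.99}

def order_orchestrator(item):
    """Single point of control: composes two services into one workflow."""
    stock = inventory_service(item)
    if not stock["in_stock"]:
        return {"status": "rejected"}
    quote = pricing_service(item)
    return {"status": "accepted", "total": quote["price"]}

print(order_orchestrator("widget"))  # {'status': 'accepted', 'total': 9.99}
```

In choreography, by contrast, there would be no `order_orchestrator`: each service would react to messages from its peers directly.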

Principles of SOA:
1. Standardized service contract: Specified through one or more service
description documents.
2. Loose coupling: Services are designed as self-contained components that maintain relationships minimizing dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and
description documents. They hide their logic, which is encapsulated within
their implementation.
4. Reusability: Designed as components, services can be reused more
effectively, thus reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from
a service consumer point of view, there is no need to know about their
implementation.
6. Discoverability: Services are defined by description documents that
constitute supplemental metadata through which they can be effectively
discovered. Service discovery provides an effective means for utilizing
third-party resources.
7. Composability: Using services as building blocks, sophisticated and
complex operations can be implemented. Service orchestration and
choreography provide solid support for composing services and achieving
business goals.
Advantages of SOA:
 Service reusability: In SOA, applications are made from existing services.
Thus, services can be reused to make many applications.

 Easy maintenance: As services are independent of each other they can be


updated and modified easily without affecting other services.
 Platform independent: SOA allows making a complex application by
combining services picked from different sources, independent of the
platform.
 Availability: SOA facilities are easily available to anyone on request.
 Reliability: SOA applications are more reliable because it is easier to debug small services than huge bodies of code.
 Scalability: Services can run on different servers within an environment; this increases scalability.
Disadvantages of SOA:
 High overhead: A validation of input parameters is performed whenever services interact; this decreases performance because it increases load and response time.
 High investment: A huge initial investment is required for SOA.
 Complex service management: When services interact, they exchange messages to perform tasks. The number of messages may run into millions, and handling such a large number of messages becomes a cumbersome task.

WEB SERVICES
A web service is a collection of open protocols and standards used for
exchanging data between applications or systems. Software applications written
in various programming languages and running on various platforms can use
web services to exchange data over computer networks like the Internet in a
manner similar to inter-process communication on a single computer. This
interoperability (e.g., between Java and Python, or Windows and Linux
applications) is due to the use of open standards.
Components of Web Services
The basic web services platform is XML + HTTP. All the standard web services work using the following components −
 SOAP (Simple Object Access Protocol)
 UDDI (Universal Description, Discovery and Integration)
 WSDL (Web Services Description Language)

How Does a Web Service Work?


A web service enables communication among various applications by using open standards such as HTTP, XML, WSDL, and SOAP. A web service takes the help of −
 XML to tag the data
 SOAP to transfer a message
 WSDL to describe the availability of the service.
 You can build a Java-based web service on Solaris that is accessible from
your Visual Basic program that runs on Windows.
 You can also use C# to build new web services on Windows that can be
invoked from your web application that is based on JavaServer Pages (JSP)
and runs on Linux.
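The "XML tags the data, SOAP transfers the message" idea can be sketched in a few lines. The operation name `GetPrice` and element names below are invented for illustration; a real service would define them in its WSDL contract:

```python
# Sketch of a SOAP message: the request data is tagged in XML and
# wrapped in a standard SOAP envelope, so any platform that can parse
# XML can read it back. Only the SOAP namespace is real here; the
# "GetPrice" operation is hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(item):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "GetPrice")    # hypothetical operation
    ET.SubElement(call, "Item").text = item   # XML tags the data
    return ET.tostring(env, encoding="unicode")

msg = build_soap_request("coffee")
# The receiving side, on any platform, parses the same message back.
parsed = ET.fromstring(msg)
print(parsed.find(".//Item").text)  # coffee
```

The Java-on-Solaris and C#-on-Windows pairings above work precisely because both ends exchange such platform-neutral XML documents rather than language-specific objects.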
Web Service Roles
There are three major roles within the web service architecture −
Service Provider
This is the provider of the web service. The service provider implements
the service and makes it available on the Internet.
Service Requestor
This is any consumer of the web service. The requestor utilizes an existing
web service by opening a network connection and sending an XML request.
Service Registry
This is a logically centralized directory of services. The registry provides
a central place where developers can publish new services or find existing ones.
It therefore serves as a centralized clearing house for companies and their
services.
Web Service Protocol Stack
A second option for viewing the web service architecture is to examine
the emerging web service protocol stack. The stack is still evolving, but
currently has four main layers.
Service Transport
This layer is responsible for transporting messages between applications.
Currently, this layer includes Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), File Transfer Protocol (FTP), and newer protocols such as Blocks Extensible Exchange Protocol (BEEP).
XML Messaging
This layer is responsible for encoding messages in a common XML
format so that messages can be understood at either end. Currently, this layer
includes XML-RPC and SOAP.
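XML-RPC, the simpler of the two, can be demonstrated end to end with the Python standard library alone. This is a minimal sketch, not part of the original text: the server runs in a background thread on a loopback port chosen by the OS, and the call travels as an XML document over HTTP:

```python
# Minimal XML-RPC round trip: the XML messaging layer encodes the call
# "add(2, 3)" as XML, the service transport layer carries it over HTTP.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]

t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

client = ServerProxy(f"http://127.0.0.1:{port}/")
result = client.add(2, 3)   # marshalled to XML, sent via HTTP, decoded
print(result)               # 5
server.shutdown()
```

Because both request and response are plain XML, the client and server could just as well be written in different languages.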
Service Description
This layer is responsible for describing the public interface to a specific
web service. Currently, service description is handled via the Web Service
Description Language (WSDL).
Service Discovery
This layer is responsible for centralizing services into a common registry
and providing easy publish/find functionality. Currently, service discovery is
handled via Universal Description, Discovery, and Integration (UDDI).

BASICS OF VIRTUALIZATION
Definition:
Virtualization is a technique that allows sharing a single physical instance of an application or resource among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.
Virtualization Concept:
Creating a virtual machine over an existing operating system and hardware is referred to as hardware virtualization. Virtual machines provide an environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the virtual machine is referred to as the guest machine. This virtual machine is managed by software or firmware known as the hypervisor.
Hypervisor:
The hypervisor is a firmware or low-level program that acts as a Virtual
Machine Manager. There are two types of hypervisor:
Type 1 hypervisor executes on a bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors. The following diagram shows the Type 1 hypervisor.
The Type 1 hypervisor does not have any host operating system because it is installed on a bare system.
Type 2 hypervisor is a software interface that emulates the devices with which a system normally interacts. Containers, KVM, Microsoft Hyper-V, VMware Fusion, Virtual Server 2005 R2, Windows Virtual PC, and VMware Workstation 6.0 are examples of Type 2 hypervisors. The following diagram shows the Type 2 hypervisor.

EMULATION
Emulation, as the name suggests, is a technique in which a virtual machine simulates complete hardware in software. Many virtualization techniques were developed in, or inherited from, the emulation technique. It is very useful when designing software for various systems, as it allows us to use the current platform to access an older application, data set, or operating system.
In computing, an emulator is hardware or software that enables one device (the host) to function like another system (the guest). It is a practical way to execute the hardware and software of another system. Emulation brings greater overhead, but it also has its benefits: it is relatively inexpensive, easily accessible, and allows us to run programs that have become redundant on available systems.
An emulator translates the CPU instructions written for one architecture and executes them successfully on another architecture. Emulation systems can be accessed remotely by anyone and are simple to use. It is an excellent capability for embedded and OS development, since it works without affecting the underlying OS. Regardless of the host's capabilities, emulation can usually manage the size of the design under test (DUT).
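The core of the idea, a host interpreting a guest's instructions entirely in software, can be shown with a toy emulator. The two-register guest ISA below is invented for illustration:

```python
# Toy emulation sketch: the Python "host" interprets instructions of a
# made-up guest ISA, keeping the guest's registers purely in software.
def emulate(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "mov":      # mov reg, immediate
            regs[args[0]] = args[1]
        elif op == "add":    # add dst, src
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unknown guest instruction: {op}")
    return regs

guest_code = [("mov", "r0", 4), ("mov", "r1", 5), ("add", "r0", "r1")]
print(emulate(guest_code))  # {'r0': 9, 'r1': 5}
```

Interpreting every instruction this way is what makes emulation flexible but slow, which is the overhead the text mentions; real emulators such as QEMU recover speed with dynamic binary translation.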

TYPES OF VIRTUALIZATION
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
After virtualization of the hardware system, we can install different operating systems on it and run different applications on those OSes.
Usage: Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling a physical server.

2) Operating System Virtualization:


When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware system, it is known as operating system virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different OS platforms.

3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple servers on demand and for load balancing.

4) Storage Virtualization:
Storage virtualization is the process of grouping physical storage from multiple network storage devices so that it looks like a single storage device. Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for backup and recovery purposes.
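The "many devices, one logical view" idea can be sketched as a mapping from logical block numbers to (device, offset) pairs. The class and block sizes below are illustrative, not a real volume manager:

```python
# Sketch of storage virtualization: logical blocks are mapped onto
# several physical devices, which appear to the user as one device.
class LogicalVolume:
    def __init__(self, devices, device_size):
        self.devices = devices          # e.g. a list of bytearrays
        self.device_size = device_size  # blocks per physical device

    def _locate(self, block):
        # Map a logical block number to (physical device, offset).
        return (self.devices[block // self.device_size],
                block % self.device_size)

    def write(self, block, value):
        dev, off = self._locate(block)
        dev[off] = value

    def read(self, block):
        dev, off = self._locate(block)
        return dev[off]

# Two 4-block physical "devices" look like one 8-block volume.
vol = LogicalVolume([bytearray(4), bytearray(4)], 4)
vol.write(6, 7)        # silently lands on the second physical device
print(vol.read(6))     # 7
```

The user addresses logical block 6; only the `_locate` layer knows it actually lives at offset 2 of the second device, which is exactly the indirection that makes backup and migration transparent.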
IMPLEMENTATION LEVELS OF VIRTUALIZATION
It is not simple to set up virtualization. Your computer runs on an operating system that is configured for particular hardware, and it is not feasible or easy to run a different operating system on that same hardware. To do this, you need a hypervisor: a bridge between the hardware and the virtual operating system that allows smooth functioning.
Regarding the implementation levels of virtualization in cloud computing, there are a total of five levels that are commonly used. Let us now look closely at each of these levels.
1) Instruction Set Architecture Level (ISA)
ISA virtualization works through ISA emulation. It is used to run legacy code that was written for a different hardware configuration; such code runs on a virtual machine using the ISA. With this, binary code that originally needed additional layers to run becomes capable of running on x86 machines, and it can also be tweaked to run on x64 machines. With ISA, it is possible to make the virtual machine hardware agnostic.
For basic emulation, an interpreter is needed, which interprets the source code and converts it into a hardware format that can be read and processed. This is the first of the five implementation levels of virtualization in cloud computing.

2) Hardware Abstraction Level (HAL)


True to its name, HAL performs virtualization at the level of the hardware. It makes use of a hypervisor for its functioning. At this level, the virtual machine is formed, and it manages the hardware through the process of virtualization. HAL allows the virtualization of each hardware component, such as the input-output devices, the memory, and the processor.
Multiple users can use the same hardware and run multiple virtualization instances at the very same time. This is mostly used in cloud-based infrastructure.
3) Operating System Level
At the level of the operating system, the virtualization model creates an abstract layer between the operating system and the applications. This is an isolated container on the operating system and the physical server, which makes use of the software and hardware. Each of these containers then functions as a server.
This virtualization level is used when there are several users and no one wants to share hardware. Every user gets his own virtual environment with a dedicated virtual hardware resource, so there is no question of any conflict.

4) Library Level
The operating system is cumbersome, which is why applications often make use of APIs exposed by user-level libraries. These APIs are well documented, and this is why the library virtualization level is preferred in such scenarios. API hooks make it possible, as they control the communication link from the application to the system.

5) Application Level
Application-level virtualization is used when there is a desire to virtualize only one application; it is the last of the implementation levels of virtualization in cloud computing. One does not need to virtualize the entire platform environment.
It is generally used when running virtual machines that use high-level languages: the application sits above the virtualization layer, which in turn sits on the operating system. It lets high-level language programs, compiled for the application-level virtual machine, run seamlessly.
VIRTUALIZATION STRUCTURES / TOOLS &
MECHANISMS

In general, there are three typical classes of VM architecture. Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible for converting portions of the real hardware into virtual hardware. Therefore, different operating systems such as Linux and Windows can run on the same physical machine simultaneously. Depending on the position of the virtualization layer, there are several classes of VM architectures, namely the hypervisor architecture, para-virtualization, and host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor); both perform the same virtualization operations.

1) Hypervisor and Xen Architecture


The hypervisor supports hardware-level virtualization on bare-metal devices like CPU, memory, disk, and network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a micro-kernel architecture like Microsoft Hyper-V, or a monolithic hypervisor architecture like VMware ESX for server virtualization.
The Xen Architecture
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0.
The core components of a Xen system are the hypervisor, kernel, and applications. The organization of the three components is important. Like other virtualization systems, many guest OSes can run on top of the hypervisor. However, not all guest OSes are created equal, and one in particular controls the others. The guest OS which has control ability is called Domain 0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots, without any file system drivers being available. Domain 0 is designed to access hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U domains).
2) Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization. Full virtualization does not need to modify the host OS; it relies on binary translation to trap and virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used, and a virtualization software layer is built between the host OS and the guest OS. These two classes of VM architecture are introduced next.

Full Virtualization
With full virtualization, noncritical instructions run on the hardware
directly while critical instructions are discovered and replaced with traps into
the VMM to be emulated by software. Both the hypervisor and VMM
approaches are considered full virtualization. Why are only critical instructions
trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or
threaten the security of the system, but critical instructions do. Therefore,
running noncritical instructions on hardware not only can promote efficiency,
but also can ensure system security.
Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies.
VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive
instructions. When these instructions are identified, they are trapped into the VMM,
which emulates the behavior of these instructions. The method used in this emulation
is called binary translation. Therefore, full virtualization
combines binary translation and direct execution. The guest OS is completely
decoupled from the underlying hardware. Consequently, the guest OS is
unaware that it is being virtualized.
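The scan-and-rewrite step can be sketched as follows. The instruction names are illustrative only (real binary translation works on machine code, not mnemonics):

```python
# Toy sketch of binary translation: the VMM scans the guest's
# instruction stream, rewrites sensitive instructions into traps to be
# emulated in software, and passes noncritical instructions through
# for direct execution on the hardware.
SENSITIVE = {"cli", "hlt"}        # illustrative instruction names

def translate(instruction_stream):
    translated = []
    for ins in instruction_stream:
        if ins in SENSITIVE:
            translated.append(("trap_to_vmm", ins))  # emulate in VMM
        else:
            translated.append(("direct", ins))       # run on hardware
    return translated

print(translate(["mov", "cli", "add"]))
```

Only `cli` is rewritten; `mov` and `add` run natively, which is why full virtualization combines binary translation with direct execution instead of interpreting everything.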
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of
the host OS. This host OS is still responsible for managing the hardware. The
guest OSes are installed and run on top of the virtualization layer. Dedicated
applications may run on the VMs. Certainly, some other applications can also
run with the host OS directly. This host-based architecture has some distinct
advantages, as enumerated next. First, the user can install this VM architecture
without modifying the host OS. The virtualizing software can rely on the host
OS to provide device drivers and other low-level services. This will simplify the
VM design and ease its deployment.

3) Para-Virtualization with Compiler Support

Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs, requiring substantial OS modifications in user applications. Performance degradation is a critical issue of a virtualized system: no one wants to use a VM if it is much slower than a physical machine. The virtualization layer can be inserted at different positions in a machine's software stack. Para-virtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel.
Para-Virtualization Architecture
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and the OS. According to the x86 ring definition, the virtualization layer should also be installed at Ring 0, but different instructions at Ring 0 may cause some problems. Para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate directly with the hypervisor or VMM. However, when the guest OS kernel is modified for virtualization, it can no longer run on the hardware directly.

KVM (Kernel-Based VM)


KVM is a Linux para-virtualization system and has been part of the Linux kernel since version 2.6.20. Memory management and scheduling activities are carried out by the existing Linux kernel; KVM does the rest, which makes it simpler than a hypervisor that controls the entire machine. KVM is a hardware-assisted para-virtualization tool, which improves performance and supports unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
Para-Virtualization with Compiler Support
Unlike the full virtualization architecture, which intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization handles these instructions at compile time. The guest OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture.
VIRTUALIZATION OF CPU, MEMORY & I/O
DEVICES
To support virtualization, processors such as the x86 employ a special
running mode and instructions, known as hardware-assisted virtualization. In
this way, the VMM and guest OS run in different modes and all sensitive
instructions of the guest OS and its applications are trapped in the VMM. To
save processor states, mode switching is completed by hardware. For the x86
architecture, Intel and AMD have proprietary technologies for hardware-assisted
virtualization.

1) Hardware Support for Virtualization


Modern operating systems and processors permit multiple processes to
run simultaneously. If there is no protection mechanism in a processor, all
instructions from different processes will access the hardware directly and cause
a system crash. Therefore, all processors have at least two modes, user mode and
supervisor mode, to ensure controlled access of critical hardware. Instructions
running in supervisor mode are called privileged instructions. Other
instructions are unprivileged instructions. In a virtualized environment, it is
more difficult to make OSes and applications run correctly because there are
more layers in the machine stack. The VMware Workstation is a VM software
suite for x86 and x86-64 computers. This software suite allows users to set up
multiple x86 and x86-64 virtual computers and to use one or more of these VMs
simultaneously with the host operating system. The VMware Workstation
assumes the host-based virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts. Actually, Xen modifies Linux as the lowest and most privileged layer, or a hypervisor.
One or more guest OS can run on top of the hypervisor. KVM (Kernel-based
Virtual Machine) is a Linux kernel virtualization infrastructure. KVM can
support hardware-assisted virtualization and paravirtualization by using the Intel
VT-x or AMD-v and VirtIO framework, respectively.
2) CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority
of the VM instructions are executed on the host processor in native mode. Thus,
unprivileged instructions of VMs run directly on the host machine for higher
efficiency. Other critical instructions should be handled carefully for correctness
and stability. The critical instructions are divided into three categories:
privileged instructions, control- sensitive instructions, and behavior-sensitive
instructions. Privileged instructions execute in a privileged mode and will be
trapped if executed outside this mode. Control-sensitive instructions attempt to
change the configuration of resources used. Behavior-sensitive instructions have
different behaviors depending on the configuration of resources, including the
load and store operations over the virtual memory.

A CPU architecture is virtualizable if it supports the ability to run the VM's privileged and unprivileged instructions in the CPU's user mode while the VMM runs in supervisor mode. When the privileged instructions, including control- and behavior-sensitive instructions, of a VM are executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for hardware access from different VMs, to guarantee the correctness and stability of the whole system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be naturally virtualized because all control- and behavior-sensitive instructions are privileged instructions.
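This trap-and-emulate behavior can be sketched with made-up instruction names (real traps are raised by the CPU, modeled here as an exception):

```python
# Sketch of trap-and-emulate CPU virtualization: the guest runs
# deprivileged in user mode, so executing a privileged instruction
# raises a trap that the VMM catches and emulates in software.
class TrapToVMM(Exception):
    pass

PRIVILEGED = {"out", "lidt"}      # illustrative instruction names

def cpu_execute(ins, mode):
    if ins in PRIVILEGED and mode == "user":
        raise TrapToVMM(ins)      # hardware-style trap
    return f"executed {ins}"

def vmm_run(ins):
    try:
        return cpu_execute(ins, mode="user")   # guest in user mode
    except TrapToVMM as trap:
        # The VMM emulates the instruction on the VM's behalf.
        return f"emulated {trap.args[0]} in VMM"

print(vmm_run("add"))   # executed add
print(vmm_run("out"))   # emulated out in VMM
```

An architecture is virtualizable in this sense only if every sensitive instruction actually traps when run in user mode; classic x86 failed this test, which is why binary translation and hardware extensions were needed.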
3) Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a traditional execution environment, the operating system maintains mappings of virtual memory to machine
memory using page tables, which is a one-stage mapping from virtual memory
to machine memory. All modern x86 CPUs include a memory management
unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual
memory performance. However, in a virtual execution environment, virtual
memory virtualization involves sharing the physical system memory in RAM
and dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the
guest OS and the VMM, respectively: virtual memory to physical memory and
physical memory to machine memory. Furthermore, MMU virtualization should
be supported, which is transparent to the guest OS. The guest OS continues to
control the mapping of virtual addresses to the physical memory addresses of
VMs. But the guest OS cannot directly access the actual machine memory. The
VMM is responsible for mapping the guest physical memory to the actual
machine memory.
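The two-stage mapping can be shown with two lookup tables. The page numbers are arbitrary examples; real systems use multi-level page tables and hardware assists such as shadow or nested page tables:

```python
# Sketch of two-stage memory virtualization: the guest OS maps virtual
# pages to guest-"physical" pages, and the VMM maps guest-physical
# pages to actual machine pages. Page numbers are arbitrary.
guest_page_table = {0: 5, 1: 7}   # guest virtual -> guest physical
vmm_p2m_table = {5: 42, 7: 13}    # guest physical -> machine

def translate_address(virtual_page):
    guest_physical = guest_page_table[virtual_page]  # stage 1: guest OS
    machine = vmm_p2m_table[guest_physical]          # stage 2: VMM
    return machine

print(translate_address(0))  # 42
```

The guest believes page 5 is real memory; only the VMM's table knows it actually lives in machine page 42, which is how the guest is kept from touching machine memory directly.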

4) I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between
virtual devices and the shared physical hardware. At the time of this writing,
there are three ways to implement I/O virtualization: full device emulation, para-
virtualization, and direct I/O. Full device emulation is the first approach for I/O
virtualization. Generally, this approach emulates well-known, real-world
devices. All the functions of a device or bus infrastructure, such as device
enumeration, identification, interrupts, and DMA, are replicated in software.
This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. The full device emulation approach is shown in the figure.
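Full device emulation can be sketched as a software model of a device living in the VMM, with the guest's trapped I/O requests forwarded to it. The block-device interface below is invented for illustration:

```python
# Sketch of full device emulation: the guest's I/O request is trapped
# by the VMM and handled by a software replica of a real device.
class EmulatedDisk:
    """Software model of a simple block device, living in the VMM."""
    def __init__(self):
        self.blocks = {}

    def handle(self, request):
        op, block, *data = request
        if op == "write":
            self.blocks[block] = data[0]
            return "ack"
        return self.blocks.get(block)    # "read"

def guest_io(vmm_device, request):
    # In a real system this access would trap into the VMM; here the
    # trap is modeled as a direct call into the emulated device.
    return vmm_device.handle(request)

disk = EmulatedDisk()
guest_io(disk, ("write", 3, b"data"))
print(guest_io(disk, ("read", 3)))  # b'data'
```

Every request crossing the guest/VMM boundary this way is what makes full emulation simple and compatible, but also the slowest of the three I/O virtualization approaches.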

5) Virtualization in Multi-Core Processors


Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core processor. Though multicore processors are claimed to have higher performance by integrating multiple processor cores in a single chip, multi-core virtualization has raised new challenges for computer architects, compiler constructors, system designers, and application programmers. There are mainly two difficulties: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores, which is a very complex problem.
Concerning the first challenge, new programming models, languages, and
libraries are needed to make parallel programming easier. The second challenge
has spawned research involving scheduling algorithms and resource
management policies. Yet these efforts cannot balance well among performance,
complexity, and other issues. What is worse, as technology scales, a new
challenge called dynamic heterogeneity is emerging, mixing fat CPU cores and thin GPU cores on the same chip, which further complicates multi-core or many-core resource management.
DESKTOP VIRTUALIZATION
Desktop virtualization is a technology that lets users simulate a workstation load to access a desktop from a connected device, remotely or locally. This separates the desktop environment and its applications from the physical client device used to access it. Desktop virtualization is a key element of digital workspaces and depends on application virtualization.

Deployment models for desktop virtualization


There are three typical deployment models for desktop virtualization:
1) Local Desktop Virtualization
Local desktop virtualization means the operating system runs on a client device using hardware virtualization, and all processing and workloads occur on local hardware. This type of desktop virtualization works well when users do not need a continuous network connection and can meet application computing requirements with local system resources. However, because this requires processing to be done locally, you cannot use local desktop virtualization to share VMs or resources across a network with thin clients or mobile devices.
2) Remote Desktop Virtualization
Remote desktop virtualization is a common use of virtualization that
operates in a client/server computing environment. It allows users to run
operating systems and applications from a server inside a data center while all
user interactions take place on a client device, such as a laptop, thin client, or
smartphone. As a result, IT departments have more centralized control over
applications and desktops and can maximize the organization’s investment in IT
hardware through remote access to shared computing resources.
3) Desktop-as-a-Service (DaaS)
In desktop as a service (DaaS), VMs are hosted on a cloud-based backend
by a third-party provider. DaaS is readily scalable, can be more flexible than
on-premises solutions, and generally deploys faster than many other desktop
virtualization options.

Like other types of cloud desktop virtualization, DaaS shares many of the
general benefits of cloud computing, including support for fluctuating workloads
and changing storage demands, usage-based pricing, and the ability to make
applications and data accessible from almost any internet-connected device. The
chief drawback to DaaS is that features and configurations are not always as
customizable as required.
Virtual desktop infrastructure
A popular type of desktop virtualization is virtual desktop infrastructure
(VDI). VDI is a variant of the client/server model of desktop virtualization
that uses host-based VMs to deliver persistent and non-persistent virtual
desktops to all kinds of connected devices. With a persistent virtual desktop,
each user has a unique desktop image that they can customize with apps and
data, knowing it will be saved for future use. A non-persistent virtual desktop
infrastructure allows users to access a virtual desktop from an identical pool
when they need it; once the user logs out of a non-persistent VDI desktop, it
reverts to its unaltered state. Advantages of virtual desktop infrastructure include
improved security and centralized desktop management across an organization.
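The persistent/non-persistent distinction above can be sketched in a few lines of code. The toy model below (all names and contents are illustrative, not from any real VDI product) shows a non-persistent desktop reverting to the golden image on logout, while a persistent one keeps user changes:

```python
# Toy model of VDI desktops: non-persistent desktops revert to the golden
# image on logout; persistent desktops keep user customizations.

GOLDEN_IMAGE = {"apps": ["browser"]}

class VirtualDesktop:
    def __init__(self, persistent):
        self.persistent = persistent
        self.state = dict(GOLDEN_IMAGE)

    def install_app(self, app):
        # Build a new app list so the golden image is never mutated.
        self.state = {"apps": self.state["apps"] + [app]}

    def logout(self):
        if not self.persistent:
            self.state = dict(GOLDEN_IMAGE)  # revert to unaltered state

d1 = VirtualDesktop(persistent=True)
d2 = VirtualDesktop(persistent=False)
for d in (d1, d2):
    d.install_app("editor")
    d.logout()
print(d1.state["apps"])  # ['browser', 'editor']
print(d2.state["apps"])  # ['browser']
```

The key design point mirrored here is that the pool's master image stays read-only; only per-user deltas differ between the two modes.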
Benefits of desktop virtualization
1) Resource Management:
Desktop virtualization helps IT departments get the most out of their hardware
investments by consolidating most of their computing in a data centre.
Organizations can then issue lower-cost computers and devices to end users
because most of the intensive computing work takes place in the data centre. By
minimizing how much computing is needed at the endpoint devices, IT
departments can save money by buying less costly machines.
2) Remote work:
Desktop virtualization helps IT admins support remote workers by giving IT
central control over how desktops are virtually deployed across an
organization’s devices. Rather than manually setting up a new desktop for
each user, desktop virtualization allows IT to simply deploy a ready-to-go
virtual desktop to that user’s device. Now the user can interact with the
operating system and applications on that desktop from any location and the
employee experience will be the same as if they were working locally. Once
the user is finished using this virtual desktop, they can log off and return that
desktop image to the shared pool.
3) Security:
Desktop virtualization software provides IT admins centralized security control
over which users can access which data and which applications. If a user’s
permissions change because they leave the company, desktop virtualization
makes it easy for IT to quickly remove that user’s access to their persistent
virtual desktop and all its data—instead of having to manually uninstall
everything from that user’s devices. And because all company data lives inside
the data center rather than on each machine, a lost or stolen device does not pose
the same data risk. If someone steals a laptop that uses desktop virtualization,
there is no company data on the actual machine and hence less risk of a breach.
SERVER VIRTUALIZATION
Server virtualization is the partitioning of a physical server into a number
of small virtual servers, each running its own operating system. These operating
systems are known as guest operating systems, and they run on another
operating system known as the host operating system. Each guest running in this
manner is unaware of any other guests running on the same host. Different
virtualization techniques are employed to achieve this transparency.
Types of Server virtualization:
1) Hypervisor
A hypervisor or VMM (virtual machine monitor) is a layer that exists
between the operating system and the hardware. It provides the necessary
services and features for the smooth running of multiple operating systems.
It identifies traps, responds to privileged CPU instructions, and handles the
queuing, dispatching, and returning of hardware requests. A host operating
system also runs on top of the hypervisor to administer and manage the virtual
machines.
2) Para Virtualization
It is based on Hypervisor. Much of the emulation and trapping overhead
in software implemented virtualisation is handled in this model. The guest
operating system is modified and recompiled before installation into the virtual
machine.
Due to the modification in the Guest operating system, performance is
enhanced as the modified guest operating system communicates directly with
the hypervisor and emulation overhead is removed.
Example : Xen primarily uses Para virtualisation, where a customised Linux
environment is used to support the administrative environment known as domain
0.
Advantages:
 Easier
 Enhanced Performance
 No emulation overhead
Limitations:
Requires modification to guest operating system

3) Full Virtualization
It is very similar to para virtualization. The hypervisor can emulate the
underlying hardware when necessary: it traps the machine operations the
operating system uses to perform I/O or to modify the system status. After
trapping, these operations are emulated in software, and the status codes
returned are consistent with what the real hardware would deliver. This is why
an unmodified operating system is able to run on top of the hypervisor.
Example: VMware ESX Server uses this method. A customized Linux version
known as the Service Console is used as the administrative operating system. It
is not as fast as para virtualization.

Advantages:
 No modification to Guest operating system required.

Limitations:
 Complex
 Slower due to emulation
 Installation of new device driver difficult.

4) Hardware Assisted Virtualization


It is similar to full virtualization and para virtualization in terms of
operation, except that it requires hardware support. Much of the hypervisor
overhead due to trapping and emulating I/O operations and status instructions
executed within a guest OS is handled by relying on the hardware extensions of
the x86 architecture.
An unmodified OS can be run, as the hardware support for virtualization is
used to handle hardware access requests, privileged and protected operations,
and communication with the virtual machine.
Examples: AMD-V (Pacifica) and Intel VT (Vanderpool) provide hardware
support for virtualization.
Advantages:
 No modification to guest operating system required.
 Very less hypervisor overhead
Limitations:
 Hardware support Required
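Both hardware-assisted and kernel-level virtualization (below) depend on these CPU extensions. As a small illustration, on a Linux host their presence can be read from /proc/cpuinfo, where the "vmx" flag indicates Intel VT and "svm" indicates AMD-V. This is a sketch for Linux only; other operating systems expose this differently.

```python
# Check whether a Linux host's CPU advertises hardware virtualization
# support: "vmx" marks Intel VT, "svm" marks AMD-V in /proc/cpuinfo.

def has_hw_virtualization(cpuinfo_text):
    """Return True if the cpuinfo text lists the Intel VT or AMD-V flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(has_hw_virtualization(f.read()))
    except FileNotFoundError:
        print("Not a Linux host; /proc/cpuinfo unavailable")
```

If the function returns False, a hypervisor such as KVM cannot use hardware-assisted mode on that machine.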

5) Kernel level Virtualization


Instead of using a hypervisor, it runs a separate version of the Linux
kernel and sees the associated virtual machine as a user-space process on the
physical host. This makes it easy to run multiple virtual machines on a single
host. A device driver is used for communication between the main Linux
kernel and the virtual machine. Processor support is required for
virtualization (Intel VT or AMD-V). A slightly modified QEMU process is
used as the display and execution container for the virtual machines. In many
ways, kernel-level virtualization is a specialized form of server virtualization.
Examples: User-Mode Linux (UML) and Kernel Virtual Machine (KVM)
Advantages:
 No special administrative software required.
 Very less overhead
Limitations:
 Hardware Support Required

6) System Level or OS Virtualization


Runs multiple, logically distinct environments on a single instance of the
operating system kernel. This is also called the shared-kernel approach, as all
virtual machines share a common kernel of the host operating system. It is
based on the change-root concept, "chroot". chroot starts during boot-up: the
kernel uses a root filesystem to load drivers and perform other early-stage
system initialization tasks; it then switches to another root file system using the
chroot command to mount an on-disk file system as its final root file system,
and continues system initialization and configuration within that file system.
The chroot mechanism of system-level virtualization is an extension of this
concept. It enables the system to start virtual servers with their own sets of
processes, which execute relative to their own file system root directories.
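The chroot mechanism above can be sketched as follows. This is only illustrative: the virtual-server root path and init program are hypothetical, and actually entering a chroot requires root privileges, so the sketch merely builds the command line such a tool would run.

```python
# Sketch: build the command that starts a virtual server whose processes
# see new_root as "/". Paths are hypothetical; running chroot needs root.

def build_chroot_command(new_root, init_program="/bin/sh"):
    """Return the argv list for starting init_program rooted at new_root."""
    return ["chroot", new_root, init_program]

print(" ".join(build_chroot_command("/srv/vps1", "/sbin/init")))
# chroot /srv/vps1 /sbin/init
```

Each virtual server started this way sees only the file tree below its own root directory, which is the isolation property system-level virtualization builds on.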

The main difference between system-level and server virtualization is
whether different operating systems can run on different virtual systems. If
all virtual servers must share the same copy of the operating system, it is
system-level virtualization; if different servers can have different operating
systems (including different versions of a single operating system), it is server
virtualization.
Examples: FreeVPS, Linux Vserver and OpenVZ are some examples.

Advantages:
 Significantly lighter weight than complete machines (including a kernel)
 Can host many more virtual servers
 Enhanced Security and isolation
Limitations:
 Kernel or driver problem can take down all virtual servers.

GOOGLE APP ENGINE


Google App Engine (often referred to as GAE or simply App Engine) is a cloud
computing platform as a service for developing and hosting web applications in
Google-managed data centers. Applications are sandboxed and run across multiple
servers. App Engine offers automatic scaling for web applications: as the number of
requests for an application increases, App Engine automatically allocates more
resources to the web application to handle the additional demand.
Google App Engine primarily supports Go, PHP, Java, Python, Node.js,
.NET, and Ruby applications, although it can also support other languages via
"custom runtimes". The service is free up to a certain level of consumed
resources, but only in the standard environment, not in the flexible environment.
Fees are charged for additional storage, bandwidth, or instance hours
required by the application. It was first released as a preview version in April
2008 and came out of preview in September 2011.

App Engine architecture

[Figure: the App Engine architecture in cloud computing]

Services provided by App Engine


Services provided by App Engine include:
 Platform as a Service (PaaS) to build and deploy scalable
applications
 Hosting in fully managed data centers
 A fully managed, flexible environment platform for managing application
servers and infrastructure
 Support for popular development languages and developer
tools
Google App Engine as a PaaS (Platform as a Service)
Google App Engine in cloud computing is a PaaS (Platform as a Service)
model, i.e., it provides a platform for developers to build scalable applications
on the Google Cloud Platform. The best thing about GAE is its ability to
manage the built applications in Google’s data centers.
This way, organizations have only one job to master: building
applications on the cloud. For the rest, the App Engine provides the
platform and manages the applications.
Major Features of Google App Engine in Cloud Computing
Some of the prominent Google App Engine features include:
1. Collection of Development Languages and Tools
The App Engine supports numerous programming languages and offers the
flexibility to import libraries and frameworks through Docker containers. You
can develop and test an app locally using the SDK, which contains tools for
deploying apps. Every language has its own SDK and runtime.
Some of the languages offered include Python, PHP, .NET, Java,
Ruby, C#, Go, and Node.js.
2. Fully Managed
Google allows you to add your web application code to the platform while
managing the infrastructure for you. The engine ensures that your web apps are
secure and running and saves them from malware and threats by enabling the
firewall.
3. Effective Diagnostic Services
Cloud Monitoring and Cloud Logging help run app scans to identify
bugs. The app reporting document helps developers fix bugs
immediately.
4. Traffic Splitting
The App Engine automatically routes incoming traffic to different
versions of the app as part of A/B testing. You can plan consecutive
increments based on which version of the app works best.
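As a sketch of how this looks in practice: each deployed version of a service is described by an app.yaml file, and traffic can then be split between versions. The runtime, service name, scaling bounds, and version names below are illustrative placeholders, not values from this text.

```yaml
# Minimal app.yaml for an App Engine standard-environment service
# (a sketch; runtime and scaling bounds are illustrative).
runtime: python39
service: default

automatic_scaling:
  min_instances: 0
  max_instances: 10
```

After deploying two versions (say v1 and v2, hypothetical names), a command along the lines of `gcloud app services set-traffic default --splits v1=0.5,v2=0.5` routes half of the incoming requests to each version, which is the mechanism behind the A/B testing described above.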

Benefits of Google App Engine for Websites


Adopting the App Engine is a smart decision for your organization;
it allows you to innovate and stay valuable. Here is why Google App Engine
is a preferable choice for building applications:
1. All Time Availability
When you develop and deploy your web applications on the cloud, you
enable remote access for your applications. Considering the impact of COVID-
19 on businesses, Google App Engine is the right choice that lets the developers
develop applications remotely, while the cloud service manages the
infrastructure needs.
2. Ensure Faster Time to Market
For your web applications to succeed, ensuring faster time to market is
imperative, as the requirements are likely to change if the launch is delayed.
Using Google App Engine is as easy as it gets for developers. The diverse tool
repository and other functionalities reduce Google Cloud application
development and testing time, which, in turn, ensures a faster launch for the
MVP and consecutive releases.

3. Easy to Use Platform


The developers only need to write code. With zero configuration and
server management, you eliminate the burden of managing and deploying
code. Google App Engine is an easy-to-use platform, which offers the
flexibility to focus on other concurrent web applications and processes. The best
part is that GAE automatically handles traffic increases through patching,
provisioning, and monitoring.
4. Diverse Set of APIs
Google App Engine has several built-in APIs and services that allow
developers to build robust and feature-rich apps. These include:
Access to the application log
Blobstore, for serving large data objects
Google App Engine Cloud Storage
SSL support
PageSpeed Service
Google Cloud Endpoints, for mobile applications
URL Fetch API, Users API, Memcache API, Channel API, XMPP API,
Files API
5. Increased Scalability
Scalability is synonymous with growth — an essential factor that assures
success and competitive advantage. The good news is that the Google App
Engine cloud development platform is automatically scalable. Whenever the
traffic to the web application increases, GAE automatically scales up the
resources, and vice-versa.

6. Improved Savings
With Google App Engine, you do not have to spend extra on server
management for the app. The Google Cloud service handles the backend
processes.
Also, Google App Engine pricing is flexible, as the resources can scale
up or down based on the app’s usage. The resources automatically scale with
how the app performs in the market, thus ensuring fair pricing in the
end.

7. Smart Pricing
The major concern of organizations is how much Google App Engine
costs. For convenience, Google App Engine has both a daily and a monthly
billing cycle:
Daily: you are charged daily for the resources you use.
Monthly: all daily charges are totaled, taxes are added (if applicable), and
the amount is debited from your payment method.
Also, App Engine has a dedicated billing dashboard, the “App Engine
Dashboard”, to view and manage your account and subsequent billings.

AMAZON AWS
Amazon Web Services is a platform that offers flexible, reliable, scalable,
easy-to-use, and cost-effective cloud computing solutions.
AWS is a comprehensive, easy-to-use computing platform offered by
Amazon. The platform is developed with a combination of infrastructure as a
service (IaaS), platform as a service (PaaS), and packaged software as a service
(SaaS) offerings.

History of AWS
 2002- AWS services launched
 2006- Launched its cloud products
 2012- Holds first customer event
 2015- Reveals revenue of $4.6 billion
 2016- Surpasses $10 billion revenue target
 2016- Releases Snowball
 2019- Offers nearly 100 cloud services
Important AWS Services
Amazon Web Services offers a wide range of global cloud-based products for
different business purposes. The products include storage, databases, analytics,
networking, mobile, development tools, and enterprise applications, with a
pay-as-you-go pricing model. Their services include the following:
AWS compute services, migration, storage, security services, database
services, analytics, management services, IoT, application services,
deployment management, developer tools, mobile services, business
productivity, artificial intelligence, game development, etc.
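To make the pay-as-you-go model concrete, here is a small sketch that estimates a monthly bill from metered usage. All rates are made-up placeholders, not real AWS prices.

```python
# Pay-as-you-go billing sketch: you pay only for metered usage.
# All unit rates below are hypothetical, not real AWS prices.

RATES = {
    "instance_hours": 0.05,    # $ per compute hour
    "storage_gb_month": 0.02,  # $ per GB-month stored
    "egress_gb": 0.09,         # $ per GB transferred out
}

def monthly_bill(usage):
    """Sum each metered quantity times its unit rate."""
    return round(sum(RATES[k] * v for k, v in usage.items()), 2)

print(monthly_bill({"instance_hours": 720,
                    "storage_gb_month": 100,
                    "egress_gb": 10}))
# 720*0.05 + 100*0.02 + 10*0.09 = 38.9
```

The point of the model is the absence of any fixed up-front term: dropping a metered quantity to zero drops its charge to zero, which is what "no up-front or long-term commitments" means in the advantages list below.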

Applications of AWS services


Amazon Web Services are widely used for various computing purposes,
such as:
 Web site hosting
 Application hosting/SaaS hosting
 Media Sharing (Image/ Video)
 Mobile and Social Applications
 Content delivery and Media Distribution
 Storage, backup, and disaster recovery
 Development and test environments
 Academic Computing
 Search Engines
 Social Networking

Companies using AWS


 Instagram
 Zoopla
 Smugmug
 Pinterest
 Netflix
 Dropbox

Advantages of AWS
Following are the pros of using AWS services:
 AWS allows organizations to use already familiar programming
models, operating systems, databases, and architectures.
 It is a cost-effective service that allows you to pay only for what you use,
without any up-front or long-term commitments.
 You do not need to spend money on running and maintaining data
centers.
 Offers fast deployments.
 You can easily add or remove capacity.
 You get cloud access quickly, with limitless capacity.
 Total cost of ownership is very low compared to any
private/dedicated servers.
 Offers centralized billing and management.
 Offers hybrid capabilities.
 Allows you to deploy your application in multiple regions around the
world with just a few clicks.
Disadvantages of AWS
 If you need more immediate or intensive assistance, you'll have to opt for
paid support packages.
 Amazon Web Services may have some common cloud computing issues
when you move to the cloud, for example downtime, limited control, and
backup protection.
 AWS sets default limits on resources, which differ from region to region.
These resources consist of images, volumes, and snapshots.
 Hardware-level changes may happen to your application, which may not
offer the best performance and usage of your applications.

FEDERATION IN THE CLOUD


Cloud federation, also known as federated cloud, is the deployment and
management of several external and internal cloud computing services to match
business needs. It is a multi-national cloud system that integrates private,
community, and public clouds into scalable computing platforms. A federated
cloud is created by connecting the cloud environments of different cloud
providers using a common standard.

The architecture of Federated Cloud:


The architecture of Federated Cloud consists of three basic components:
1. Cloud Exchange
The cloud exchange acts as a mediator between the cloud coordinator and
the cloud broker. The demands of the cloud broker are mapped by the cloud
exchange to the available services provided by the cloud coordinator. The
cloud exchange keeps track of the current cost, demand patterns, and available
cloud providers, and this information is periodically refreshed by the cloud
coordinator.

2. Cloud Coordinator
The cloud coordinator assigns the resources of the cloud to the remote
users based on the quality of service they demand and the credits they have in
the cloud bank. The cloud enterprises and their membership are managed by the
cloud controller.

3. Cloud Broker
The cloud broker interacts with the cloud coordinator and analyzes the
service-level agreements and the resources offered by several cloud providers in
the cloud exchange. The cloud broker finalizes the most suitable deal for its client.
Properties of Federated Cloud:
1. In the federated cloud, the users can interact with the architecture either
centrally or in a decentralized manner. In centralized interaction, the user
interacts with a broker that mediates between them and the organization.
Decentralized interaction permits the user to interact directly with the
clouds in the federation.
2. Federated cloud can be practiced with various niches like commercial and
non-commercial.

3. The visibility of a federated cloud helps the user interpret the
organization of the several clouds in the federated environment.
4. A federated cloud can be monitored in two ways. MaaS (Monitoring as a
Service) provides information that helps the user track contracted
services. Global monitoring aids in maintaining the federated cloud.
5. The providers who participate in the federation publish their offers to a
central entity. The user interacts with this central entity to verify the prices
and propose an offer.
6. The marketing objects like infrastructure, software, and platform have to
pass through federation when consumed in the federated cloud.
Benefits of Federated Cloud:
1. It minimizes the consumption of energy.
2. It increases reliability.
3. It minimizes the time and cost of providers due to dynamic scalability.
4. It connects various cloud service providers globally. The providers may
buy and sell services on demand.
5. It provides easy scaling up of resources.
Challenges in Federated Cloud:
1. In cloud federation, it is common to have more than one provider for
processing incoming demands. In such cases, there must be a scheme to
distribute the incoming demands equally among the cloud service
providers.
2. Increasing requests in cloud federation have resulted in a more
heterogeneous infrastructure, making interoperability an area of concern.
It becomes a challenge for cloud users to select relevant cloud service
providers, and it therefore ties them to a particular cloud service provider.
3. A federated cloud means constructing a seamless cloud environment that
can interact with people, different devices, several application interfaces,
and other entities.
Federated Cloud technologies:
The technologies that aid the cloud federation and cloud services are:
1. OpenNebula
It is a cloud computing platform for managing heterogeneous distributed
data center infrastructures. It supports interoperability, leverages existing
information technology assets, protects existing investments, and provides
application programming interfaces (APIs).
2. Aneka coordinator
The Aneka coordinator is a composition of the Aneka services and Aneka
peer components (network architectures), which give the cloud the ability and
performance to interact with other cloud services.
3. Eucalyptus
Eucalyptus pools computational, storage, and network resources that can
be scaled up or down as application workloads change. It is an open-source
framework that provides storage, network, and many other computational
resources for accessing the cloud environment.
