Unit IV
Services might aggregate information and data retrieved from other services or
create workflows of services to satisfy the request of a given service consumer.
Principles of SOA:
1. Standardized service contract: Specified through one or more service
description documents.
2. Loose coupling: Services are designed as self-contained components that
maintain relationships minimizing dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and
description documents. They hide their logic, which is encapsulated within
their implementation.
4. Reusability: Designed as components, services can be reused more
effectively, thus reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from
a service consumer point of view, there is no need to know about their
implementation.
6. Discoverability: Services are defined by description documents that
constitute supplemental metadata through which they can be effectively
discovered. Service discovery provides an effective means for utilizing
third-party resources.
7. Composability: Using services as building blocks, sophisticated and
complex operations can be implemented. Service orchestration and
choreography provide solid support for composing services and achieving
business goals
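The composability principle above can be sketched in code. In the following Python sketch, all service names and data are invented for illustration: a composite service aggregates the results of two other services into one response, as described at the start of this unit.

```python
# Hypothetical sketch of service composition. The two "remote" services
# are stand-in functions; in a real SOA they would be network calls.

def inventory_service(item_id):
    # Stand-in for a remote inventory service.
    stock = {"A1": 5, "B2": 0}
    return {"item": item_id, "in_stock": stock.get(item_id, 0)}

def shipping_service(item_id):
    # Stand-in for a remote shipping-estimate service.
    return {"item": item_id, "days": 2 if item_id == "A1" else 7}

def order_status_service(item_id):
    # Composite service: aggregates the two services into one response.
    inv = inventory_service(item_id)
    ship = shipping_service(item_id)
    return {
        "item": item_id,
        "available": inv["in_stock"] > 0,
        "estimated_days": ship["days"],
    }

print(order_status_service("A1"))
```

The consumer only sees the composite service's contract; the two underlying services stay hidden behind it, which is exactly the abstraction and loose-coupling principles at work.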
Advantages of SOA:
Service reusability: In SOA, applications are made from existing services.
Thus, services can be reused to make many applications.
WEB SERVICES
A web service is a collection of open protocols and standards used for
exchanging data between applications or systems. Software applications written
in various programming languages and running on various platforms can use
web services to exchange data over computer networks like the Internet in a
manner similar to inter-process communication on a single computer. This
interoperability (e.g., between Java and Python, or Windows and Linux
applications) is due to the use of open standards.
Components of Web Services
The basic web services platform is XML + HTTP. All the standard web
services work using the following components −
SOAP (Simple Object Access Protocol)
UDDI (Universal Description, Discovery and Integration)
WSDL (Web Services Description Language)
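As a hedged illustration of the SOAP component, the following Python sketch builds and parses a minimal SOAP 1.1 envelope using only the standard library. The operation name, namespace, and parameters are invented; a real service would define them in its WSDL.

```python
# Illustrative sketch only: build a minimal SOAP 1.1 envelope and check
# it is well-formed XML. Operation and namespace are invented examples.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(operation, params, ns="http://example.com/stock"):
    # Serialize the parameters as child elements of the operation element.
    body_parts = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        "<soap:Body>"
        f'<{operation} xmlns="{ns}">{body_parts}</{operation}>'
        "</soap:Body></soap:Envelope>"
    )

envelope = build_envelope("GetStockPrice", {"StockName": "ACME"})
root = ET.fromstring(envelope)          # parses: the envelope is well-formed
body = root.find(f"{{{SOAP_NS}}}Body")
print(body[0].tag)                      # the namespaced operation element
```

In practice the envelope would be POSTed over HTTP to the service endpoint; the point here is only the open, XML-based message structure that makes the exchange language- and platform-neutral.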
BASICS OF VIRTUALIZATION
Definition:
Virtualization is a technique that allows sharing a single physical instance
of an application or resource among multiple organizations or tenants
(customers). It does so by assigning a logical name to a physical resource and
providing a pointer to that physical resource on demand.
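The logical-name-to-physical-resource idea can be sketched as follows. This is an illustration, not a real virtualization layer, and the resource names are invented:

```python
# Sketch: a logical name maps to a physical resource, and the mapping
# can be redirected without the consumer noticing. Names are invented.
class VirtualResource:
    def __init__(self, mapping):
        self._mapping = dict(mapping)   # logical name -> physical resource

    def resolve(self, logical_name):
        # The "pointer to the physical resource on demand".
        return self._mapping[logical_name]

    def remap(self, logical_name, new_physical):
        # Move the resource; consumers keep using the logical name.
        self._mapping[logical_name] = new_physical

pool = VirtualResource({"db-volume": "/dev/sda1"})
print(pool.resolve("db-volume"))        # /dev/sda1
pool.remap("db-volume", "/dev/sdb1")    # migrated behind the scenes
print(pool.resolve("db-volume"))        # /dev/sdb1
```

The tenant only ever sees the logical name, which is what lets one physical instance be shared or moved among tenants transparently.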
Virtualization Concept:
Creating a virtual machine over existing operating system and hardware is
referred as Hardware Virtualization. Virtual Machines provide an environment
that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host
machine, and the virtual machine is referred to as the guest machine. The virtual
machine is managed by software or firmware known as the hypervisor.
Hypervisor:
The hypervisor is a firmware or low-level program that acts as a Virtual
Machine Manager. There are two types of hypervisor:
Type 1 hypervisor executes on a bare system. LynxSecure, RTS
Hypervisor, Oracle VM, Sun xVM Server, and VirtualLogic VLX are examples of
Type 1 hypervisors. The following diagram shows the Type 1 hypervisor.
The Type 1 hypervisor does not have any host operating system because
it is installed on a bare system.
Type 2 hypervisor is a software interface that emulates the devices with
which a system normally interacts. Containers, KVM, Microsoft Hyper V,
VMWare Fusion, Virtual Server 2005 R2, Windows Virtual PC and VMWare
workstation 6.0 are examples of Type 2 hypervisor. The following diagram
shows the Type 2 hypervisor.
EMULATION
Emulation, as the name suggests, is a technique in which a virtual machine
simulates the complete hardware in software. Many virtualization
techniques were developed in or inherited from the emulation technique. It is
very useful when designing software for various systems, as it allows us to
use the current platform to access an older application, data, or operating system.
In computing, an emulator is hardware or software that enables one
device (named the host) to function like another system (named the guest),
so that the guest's software can be executed on the host. Emulation
brings greater overhead, but it also has its benefits: it is relatively inexpensive,
easily accessible, and allows us to run programs that are no longer supported
on the available system.
An emulator translates the CPU instructions required for one architecture
and executes them successfully on another architecture. Emulation systems
can be accessed remotely by anyone and are very simple to use. Because it
does not affect the underlying OS, emulation is an excellent capability for
embedded and OS development. Without considering the host's capabilities,
emulation will usually manage the size of the design under test (DUT).
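A toy sketch of the instruction-translation idea: the Python interpreter below "emulates" an invented two-register guest machine by re-executing its instructions in software on the host. The instruction set is entirely made up for illustration; a real emulator translates an actual CPU's instruction set.

```python
# Toy emulator: interpret "guest" instructions for an invented machine
# one at a time on the host, the core loop of any software emulator.
def emulate(program):
    regs = {"A": 0, "B": 0}
    for op, *args in program:
        if op == "LOAD":            # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":           # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

result = emulate([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)])
print(result)   # {'A': 5, 'B': 3}
```

The interpret-one-instruction-at-a-time loop is also why emulation carries the overhead mentioned above: every guest instruction costs many host instructions.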
TYPES OF VIRTUALIZATION
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
Hardware virtualization is when the virtual machine software or virtual
machine manager (VMM) is installed directly on the hardware system.
The main job of the hypervisor is to control and monitor the processor,
memory and other hardware resources.
3) Server Virtualization:
Server virtualization is when the virtual machine software or virtual
machine manager (VMM) is installed directly on the server system.
Usage:
Server virtualization is done because a single physical server can be
divided into multiple servers on the demand basis and for balancing the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from
multiple network storage devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
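A minimal sketch of the grouping idea, with invented device names and sizes: several physical devices sit behind one logical volume that reports their combined capacity.

```python
# Sketch of storage virtualization: physical devices are pooled behind
# a single logical device. Device names and sizes are invented.
class LogicalVolume:
    def __init__(self, devices):
        # devices: {device_name: capacity_gb}
        self.devices = dict(devices)

    @property
    def capacity_gb(self):
        # The pool appears as one device of combined size.
        return sum(self.devices.values())

    def add_device(self, name, capacity_gb):
        # Capacity grows transparently for the consumer.
        self.devices[name] = capacity_gb

vol = LogicalVolume({"nas1": 500, "nas2": 250})
print(vol.capacity_gb)        # 750
vol.add_device("nas3", 250)
print(vol.capacity_gb)        # 1000
```

Because consumers address only the logical volume, devices can be added or replaced for back-up and recovery without changing what the consumer sees.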
IMPLEMENTATION LEVELS OF VIRTUALIZATION
It is not simple to set up virtualization. Your computer runs on an
operating system that gets configured on some particular hardware. It is not
feasible or easy to run a different operating system using the same hardware.
To do this, you will need a hypervisor. The hypervisor is a bridge between the
hardware and the virtual operating system, which allows smooth functioning.
Talking of the Implementation levels of virtualization in cloud computing,
there are a total of five levels that are commonly used. Let us now look closely at
each of these levels of virtualization implementation in cloud computing.
1)Instruction Set Architecture Level (ISA)
ISA virtualization can work through ISA emulation. This is used to run
many legacy codes that were written for a different configuration of hardware.
These codes run on any virtual machine using the ISA. With this, a binary
code that originally needed some additional layers to run is now capable of
running on the x86 machines. It can also be tweaked to run on the x64 machine.
With ISA, it is possible to make the virtual machine hardware agnostic.
For basic emulation, an interpreter is needed, which interprets the
source code and then converts it into a hardware format that can be read
and processed. This is one of the five
implementation levels of virtualization in cloud computing.
3) Library Level
Working with the operating system directly is cumbersome, so applications
make use of the APIs exported by user-level libraries. These APIs are
documented well, and this is why the library virtualization level is preferred in
these scenarios. API hooks make it possible, as they control the link of
communication from the application to the system.
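An API hook can be sketched in Python by wrapping a library function. The "library" call and the paths below are invented stand-ins; the point is that the application keeps calling the same API while the hook redirects the communication link underneath it.

```python
# Sketch of library-level virtualization via an API hook: calls to a
# library function are intercepted and redirected; the application
# itself is unchanged. The function and paths are invented examples.
def real_open_file(path):
    # Stand-in for the real library API.
    return f"data from {path}"

def make_hook(original, redirect):
    # Wrap the original API; transparently redirect selected paths.
    def hooked(path):
        return original(redirect.get(path, path))
    return hooked

open_file = make_hook(real_open_file, {"/etc/app.conf": "/virtual/app.conf"})
print(open_file("/etc/app.conf"))   # data from /virtual/app.conf
print(open_file("/tmp/log"))        # data from /tmp/log
```

Real library-level systems (e.g. WINE-style API translation) apply the same interception idea to an entire documented API surface rather than one function.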
4) Application Level
The application-level virtualization is used when there is a desire to
virtualize only one application and is the last of the implementation levels of
virtualization in cloud computing. One does not need to virtualize the entire
environment of the platform.
This is generally used when you run virtual machines that use high-level
languages. The virtualization layer itself runs as an application program on
top of the operating system, and the application sits above this layer.
It lets high-level-language programs compiled for the application level of
the virtual machine run seamlessly.
VIRTUALIZATION STRUCTURES / TOOLS &
MECHANISMS
Full Virtualization
With full virtualization, noncritical instructions run on the hardware
directly while critical instructions are discovered and replaced with traps into
the VMM to be emulated by software. Both the hypervisor and VMM
approaches are considered full virtualization. Why are only critical instructions
trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or
threaten the security of the system, but critical instructions do. Therefore,
running noncritical instructions on hardware not only can promote efficiency,
but also can ensure system security.
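The split described above can be sketched as follows. The instruction names and the "critical" set are invented for illustration; a real VMM classifies actual privileged/sensitive CPU instructions.

```python
# Sketch of full virtualization's split: noncritical instructions run
# "directly", critical ones trap into a VMM routine for emulation.
CRITICAL = {"OUT", "HLT"}   # invented set of hardware-touching instructions

def run(instructions):
    trace = []
    for ins in instructions:
        if ins in CRITICAL:
            # Critical: trapped and emulated in software by the VMM.
            trace.append(f"trap->VMM emulates {ins}")
        else:
            # Noncritical: executes directly on the hardware.
            trace.append(f"direct: {ins}")
    return trace

for line in run(["MOV", "ADD", "OUT", "HLT"]):
    print(line)
```

Only the small critical subset pays the software-emulation cost, which is why trapping just those instructions keeps the overhead of binary translation bounded.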
Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies.
VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive
instructions. When these instructions are identified, they are trapped into the VMM,
which emulates the behavior of these instructions. The method used in this emulation
is called binary translation. Therefore, full virtualization
combines binary translation and direct execution. The guest OS is completely
decoupled from the underlying hardware. Consequently, the guest OS is
unaware that it is being virtualized.
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of
the host OS. This host OS is still responsible for managing the hardware. The
guest OSes are installed and run on top of the virtualization layer. Dedicated
applications may run on the VMs. Certainly, some other applications can also
run with the host OS directly. This host-based architecture has some distinct
advantages, as enumerated next. First, the user can install this VM architecture
without modifying the host OS. The virtualizing software can rely on the host
OS to provide device drivers and other low-level services. This will simplify the
VM design and ease its deployment.
4) I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between
virtual devices and the shared physical hardware. At the time of this writing,
there are three ways to implement I/O virtualization: full device emulation, para-
virtualization, and direct I/O. Full device emulation is the first approach for I/O
virtualization. Generally, this approach emulates well-known, real-world
devices. All the functions of a device or bus infrastructure, such as device
enumeration, identification, interrupts, and DMA, are replicated in software.
This software is located in the VMM and acts as a virtual device. The I/O access
requests of the guest OS are trapped in the VMM which interacts with the I/O
devices. The full device emulation approach is shown in Figure
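A minimal sketch of full device emulation: a software "disk" inside the VMM serves the guest's trapped read/write requests from a buffer rather than real hardware. The interface, operation names, and sizes are invented for illustration.

```python
# Sketch: the VMM exposes a software disk to the guest; trapped guest
# I/O requests are served entirely from a buffer held by the VMM.
class EmulatedDisk:
    def __init__(self, size):
        self._blocks = bytearray(size)   # the whole device lives in software

    def handle_io(self, op, offset, data=b"", length=0):
        # Entry point the VMM calls after trapping a guest I/O request.
        if op == "write":
            self._blocks[offset:offset + len(data)] = data
            return len(data)
        if op == "read":
            return bytes(self._blocks[offset:offset + length])
        raise ValueError(f"unknown I/O op: {op}")

disk = EmulatedDisk(64)
disk.handle_io("write", 8, data=b"guest")
print(disk.handle_io("read", 8, length=5))   # b'guest'
```

Because every request passes through software like this, full device emulation is the slowest of the three approaches, which is what motivates para-virtualized and direct I/O alternatives.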
Like other types of cloud desktop virtualization, DaaS shares many of the
general benefits of cloud computing, including support for fluctuating workloads
and changing storage demands, usage-based pricing, and the ability to make
applications and data accessible from almost any internet-connected device. The
chief drawback to DaaS is that features and configurations are not always as
customizable as required.
Virtual desktop infrastructure
A popular type of desktop virtualization is virtual desktop infrastructure
(VDI). VDI is a variant of the client-server model of desktop virtualization
which uses host-based VMs to deliver persistent and non-persistent virtual
desktops to all kinds of connected devices. With a persistent virtual desktop,
each user has a unique desktop image that they can customize with apps and
data, knowing it will be saved for future use. A non-persistent virtual desktop
infrastructure allows users to access a virtual desktop from an identical pool
when they need it; once the user logs out of a non-persistent VDI, it reverts to
its unaltered state. Some of the advantages of virtual desktop infrastructure are
improved security and centralized desktop management across an organization.
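The persistent vs. non-persistent behaviour can be sketched as follows; the pool image and app names are invented:

```python
# Sketch: a non-persistent desktop reverts to the pristine pool image on
# logout, while a persistent one keeps the user's changes.
PRISTINE = {"apps": ["browser"]}   # invented pool image

class VirtualDesktop:
    def __init__(self, persistent):
        self.persistent = persistent
        self.state = {"apps": list(PRISTINE["apps"])}

    def install(self, app):
        self.state["apps"].append(app)

    def logout(self):
        if not self.persistent:
            # Non-persistent: revert to the unaltered pool image.
            self.state = {"apps": list(PRISTINE["apps"])}

d = VirtualDesktop(persistent=False)
d.install("editor")
d.logout()
print(d.state["apps"])   # ['browser']

p = VirtualDesktop(persistent=True)
p.install("editor")
p.logout()
print(p.state["apps"])   # ['browser', 'editor']
```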
Benefits of desktop virtualization
1) Resource Management:
Desktop virtualization helps IT departments get the most out of their hardware
investments by consolidating most of their computing in a data centre. Desktop
virtualization then allows organizations to issue lower-cost computers and
devices to end users because most of the intensive computing work takes place
in the data centre. By minimizing how much computing is needed at the endpoint
devices for end users, IT departments can save money by buying less costly machines.
2) Remote work:
Desktop virtualization helps IT admins support remote workers by giving IT
central control over how desktops are virtually deployed across an
organization’s devices. Rather than manually setting up a new desktop for
each user, desktop virtualization allows IT to simply deploy a ready-to-go
virtual desktop to that user’s device. Now the user can interact with the
operating system and applications on that desktop from any location and the
employee experience will be the same as if they were working locally. Once
the user is finished using this virtual desktop, they can log off and return that
desktop image to the shared pool.
3) Security:
Desktop virtualization software provides IT admins centralized security control
over which users can access which data and which applications. If a user’s
permissions change because they leave the company, desktop virtualization
makes it easy for IT to quickly remove that user’s access to their persistent
virtual desktop and all its data—instead of having to manually uninstall
everything from that user’s devices. And because all company data lives inside
the data center rather than on each machine, a lost or stolen device does not pose
the same data risk. If someone steals a laptop using desktop virtualization, there
is no company data on the actual machine and hence less risk of a breach.
SERVER VIRTUALIZATION
Server Virtualization is the partitioning of a physical server into a number
of small virtual servers, each running its own operating system. These operating
systems are known as guest operating systems, and they run on another
operating system known as the host operating system. Each guest running in this
manner is unaware of any other guests running on the same host. Different
virtualization techniques are employed to achieve this transparency.
Types of Server virtualization:
1) Hypervisor
A hypervisor or VMM (virtual machine monitor) is a layer that exists
between the operating system and hardware. It provides the necessary services
and features for the smooth running of multiple operating systems.
It identifies traps, responds to privileged CPU instructions and handles
queuing, dispatching and returning the hardware requests. A host operating
system also runs on top of the hypervisor to administer and manage the virtual
machines.
2) Para Virtualization
It is based on the hypervisor. Much of the emulation and trapping overhead
of software-implemented virtualisation is avoided in this model. The guest
operating system is modified and recompiled before installation into the virtual
machine.
Due to the modification in the Guest operating system, performance is
enhanced as the modified guest operating system communicates directly with
the hypervisor and emulation overhead is removed.
Example : Xen primarily uses Para virtualisation, where a customised Linux
environment is used to support the administrative environment known as domain
0.
Advantages:
Easier
Enhanced Performance
No emulation overhead
Limitations:
Requires modification to guest operating system
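The difference between the paravirtualized path (a direct hypercall from a modified guest) and the trap-and-emulate path (for an unmodified guest) can be sketched as follows; the call and instruction names are invented for illustration:

```python
# Sketch: a paravirtualized guest makes one direct hypercall, while an
# unmodified guest's privileged instruction must be trapped, then
# emulated. The step counts are illustrative only.
class Hypervisor:
    def __init__(self):
        self.steps = []

    def hypercall(self, what):
        # Paravirtualized guest: direct call, no trap/decode step.
        self.steps.append(f"hypercall:{what}")

    def trap(self, instruction):
        # Full virtualization: trap first, then emulate in software.
        self.steps.append(f"trap:{instruction}")
        self.steps.append(f"emulate:{instruction}")

hv = Hypervisor()
hv.hypercall("update_page_table")   # modified (paravirtualized) guest
hv.trap("MOV CR3")                  # unmodified guest
print(hv.steps)
```

The paravirtualized path skips the trap/decode step entirely, which is the source of the "no emulation overhead" advantage listed above, at the cost of having to modify the guest OS.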
3) Full Virtualization
It is very much similar to Para virtualisation. It can emulate the
underlying hardware when necessary. The hypervisor traps the machine
operations used by the operating system to perform I/O or modify the system
status. After trapping, these operations are emulated in software and the status
codes are returned very much consistent with what the real hardware would
deliver. This is why unmodified operating system is able to run on top of the
hypervisor.
Example : VMWare ESX server uses this method. A customised Linux version
known as Service Console is used as the administrative operating system. It is not as
fast as Para virtualisation.
Advantages:
No modification to Guest operating system required.
Limitations:
Complex
Slower due to emulation
Installation of new device drivers is difficult.
Advantages:
Significantly lighter weight than complete machines (including a kernel)
Can host many more virtual servers
Enhanced Security and isolation
Limitations:
A kernel or driver problem can take down all virtual servers.
6. Improved Savings
With Google App Engine, you do not have to spend extra on server
management of the app. The Google Cloud service is good at handling the
backend process.
Also, Google App Engine pricing is flexible as the resources can scale
up/down based on the app’s usage. The resources automatically scale up/down
based on how the app performs in the market, thus ensuring honest pricing in the
end.
7. Smart Pricing
The major concern of organizations revolves around how much
Google App Engine costs. For your convenience, Google App Engine has a daily
and a monthly billing cycle, i.e.,
Daily: You will be charged daily for the resources you use
Monthly: All the daily charges are calculated and added to the taxes (if
applicable) and debited from your payment method
Also, the App Engine has a dedicated billing dashboard, “App Engine
Dashboard” to view and manage your account and subsequent billings.
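The daily-then-monthly cycle can be sketched as simple arithmetic. The daily charges and the tax rate below are invented figures; real App Engine pricing varies by product and region.

```python
# Hedged sketch of the billing cycle: daily charges accumulate, and at
# month end they are summed with tax added. All numbers are invented.
def monthly_bill(daily_charges, tax_rate=0.0):
    subtotal = sum(daily_charges)
    return round(subtotal * (1 + tax_rate), 2)

days = [1.20, 0.80, 1.50]                  # each day's resource usage charge
print(monthly_bill(days, tax_rate=0.18))   # 3.50 * 1.18 = 4.13
```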
AMAZON AWS
Amazon web service is a platform that offers flexible, reliable, scalable,
easy-to-use and cost-effective cloud computing solutions.
AWS is a comprehensive, easy-to-use computing platform offered by
Amazon. The platform is developed with a combination of infrastructure as a
service (IaaS), platform as a service (PaaS) and packaged software as a service
(SaaS) offerings.
History of AWS
2002 - AWS services launched
2006 - Launched its cloud products
2012 - Held its first customer event
2015 - Revealed revenues of $4.6 billion
2016 - Surpassed the $10 billion revenue target
2016 - Released Snowball
2019 - Offered nearly 100 cloud services
Important AWS Services
Amazon Web Services offers a wide range of different business purpose
global cloud-based products. The products include storage, databases, analytics,
networking, mobile, development tools, enterprise applications, with a
pay-as-you-go pricing model. Their services are as follows:
AWS Compute Services, Migration, Storage, Security Services, Database
services, Analytics, Management services, IoT, Application services,
Deployment management, Developer tools, Mobile services, Business
Productivity, Artificial Intelligence, Game Development etc…
Advantages of AWS
Following are the pros of using AWS services:
AWS allows organizations to use the already familiar programming
models, operating systems, databases, and architectures.
It is a cost-effective service that allows you to pay only for what you use,
without any up-front or long-term commitments.
You will not be required to spend money on running and maintaining data
centers.
Offers fast deployments
You can easily add or remove capacity.
You get quick access to the cloud with virtually limitless capacity.
Total Cost of Ownership is very low compared to any
private/dedicated servers.
Offers Centralized Billing and management
Offers Hybrid Capabilities
Allows you to deploy your application in multiple regions around the
world with just a few clicks
Disadvantages of AWS
If you need more immediate or intensive assistance, you'll have to opt for
paid support packages.
Amazon Web Services may have some common cloud computing issues
when you move to a cloud. For example, downtime, limited control, and
backup protection.
AWS sets default limits on resources which differ from region to region.
These resources consist of images, volumes, and snapshots.
2. Cloud Coordinator
The cloud coordinator assigns the resources of the cloud to the remote
users based on the quality of service they demand and the credits they have in
the cloud bank. The cloud enterprises and their membership are managed by the
cloud controller.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator, analyzes the
Service-level agreement and the resources offered by several cloud providers in
cloud exchange. The cloud broker finalizes the most suitable deal for their client.
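The broker's selection step can be sketched as follows; the providers, prices, and SLA fields are invented for illustration:

```python
# Sketch of the cloud broker's job: from several provider offers, pick
# the cheapest one that still meets the client's SLA requirement.
def best_deal(offers, required_uptime):
    eligible = [o for o in offers if o["uptime"] >= required_uptime]
    return min(eligible, key=lambda o: o["price"]) if eligible else None

offers = [
    {"provider": "cloudA", "price": 10, "uptime": 99.9},
    {"provider": "cloudB", "price": 8,  "uptime": 99.0},
    {"provider": "cloudC", "price": 12, "uptime": 99.99},
]
print(best_deal(offers, required_uptime=99.5)["provider"])   # cloudA
```

Here cloudB is cheapest but fails the SLA, so the broker settles on cloudA, mirroring how a real broker weighs the SLA against the resources offered in the cloud exchange.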
Properties of Federated Cloud:
1. In the federated cloud, the users can interact with the architecture either
centrally or in a decentralized manner. In centralized interaction, the user
interacts with a broker to mediate between them and the organization.
Decentralized interaction permits the user to interact directly with the
clouds in the federation.
2. Federated cloud can be practiced with various niches like commercial and
non-commercial.