Cloud Computing (KCS-713)



Unit-2
 Service Oriented Architecture

 Service-Oriented Architecture (SOA) is a style of software design where services are
provided to the other components by application components, through a communication
protocol over a network. Its principles are independent of vendors and other technologies.

 In service-oriented architecture, a number of services communicate with each other in
one of two ways: by passing data, or by two or more services coordinating an activity.
This is just one definition of Service-Oriented Architecture.

 SOA makes software components reusable with the help of common communication
standards (basic protocols, syntax and semantics, software and hardware architecture) in
such a way that they can be rapidly incorporated into new applications without any deep
integration (of data, applications, APIs, and devices across the IT organization).

 Each service in SOA embodies the code and data integrations required to execute a
complete business function (e.g. checking a customer’s credit, calculating a monthly loan
payment).

 Service interfaces in SOA provide loose coupling (callers have little or no knowledge of
how the integration is implemented).

 Simple protocols like HTTP and SOAP (Simple Object Access Protocol) are used to send
requests to read or change data.

 SOA emerged in the late 1990s.

 Benefits
 Greater business agility.
 Ability to leverage legacy functionality (functionality built on one platform can be
extended and reused on others).
 Improved collaboration between business & IT.

 Examples of SOA
 By 2010, SOA implementations had been successfully deployed in the following organizations:
 Delaware Electric turned to SOA to integrate systems that previously did not talk to each
other.
 Cisco adopted SOA to make sure that its product ordering experience was consistent
across all products and channels, by exposing ordering processes as services that Cisco
divisions and business partners could incorporate into their websites.

 Characteristics of Service-Oriented Architecture

While the defining concepts of Service-Oriented Architecture vary from company to company,
there are six key tenets that overarch the broad concept of Service-Oriented Architecture. These
core values include:
 Business value
 Strategic goals
 Intrinsic inter-operability
 Shared services
 Flexibility
 Evolutionary refinement
Each of these core values can be seen on a continuum from older format distributed computing to
Service-Oriented Architecture to cloud computing (something that is often seen as an offshoot of
Service-Oriented Architecture).

 Service-Oriented Architecture Patterns



 There are three roles in each of the Service-Oriented Architecture building blocks: the
service provider; the service broker, service registry, or service repository; and the
service requester/consumer.
 The service provider works in conjunction with the service registry, deciding why and
how the services are offered, such as security, availability, what to charge, and more.
This role also determines the service category and whether any trading agreements are
needed.
 The service broker makes information regarding the service available to those requesting
it. The scope of the broker is determined by whoever implements it.
 The service requester locates entries in the broker registry and then binds them to the
service provider. They may or may not be able to access multiple services; that depends
on the capability of the service requester.

 Service Oriented Architecture – REST

 Representational state transfer (REST) is a software architectural style that defines a


set of constraints to be used for creating Web services. Web services that conform to the

REST architectural style, called RESTful Web services, provide interoperability between
computer systems on the internet. RESTful Web services allow the requesting systems to
access and manipulate textual representations of Web resources by using a uniform and
predefined set of stateless operations. Other kinds of Web services, such as SOAP Web
services, expose their own arbitrary sets of operations.

 "Web resources" were first defined on the World Wide Web as documents or files
identified by their URLs. However, today they have a much more generic and abstract
definition that encompasses every thing, entity, or action that can be identified, named,
addressed, handled, or performed, in any way whatsoever, on the Web. In a RESTful
Web service, requests made to a resource's URI will elicit a response with
a payload formatted in HTML, XML, JSON, or some other format. The response can
confirm that some alteration has been made to the resource state, and the response can
provide hypertext links to other related resources. When HTTP is used, as is most
common, the operations (HTTP methods) available are GET, HEAD, POST, PUT,
PATCH, DELETE, CONNECT, OPTIONS and TRACE.

 By using a stateless protocol and standard operations, RESTful systems aim for fast
performance, reliability, and the ability to grow by reusing components that can be
managed and updated without affecting the system as a whole, even while it is running.
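As a small illustration of these ideas, the sketch below (plain Python, not any real web framework; all resource names and data are invented for the example) models a RESTful service as a uniform, predefined set of stateless operations on URI-named resources:

```python
# Illustrative in-memory "RESTful" dispatcher: resources are identified by URI
# and manipulated only through a uniform, predefined set of operations.
resources = {"/books/1": {"title": "Cloud Computing", "pages": 21}}

def handle(method, uri, body=None):
    """Serve one request; no client session is stored between calls."""
    if method == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if method == "PUT":                    # create or fully replace a resource
        resources[uri] = body
        return (200, body)
    if method == "DELETE":
        return (204, resources.pop(uri, None))
    return (405, None)                     # method not allowed

print(handle("GET", "/books/1"))
# (200, {'title': 'Cloud Computing', 'pages': 21})
```

A real service would speak HTTP over a network; the point here is only that every interaction goes through the same small set of operations, whatever the resource is.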

The formal REST constraints are as follows:

 Client-server architecture

The principle behind the client-server constraints is the separation of concerns.


Separating the user interface concerns from the data storage concerns improves the
portability of the user interfaces across multiple platforms. It also improves scalability by
simplifying the server components. Perhaps most significant to the Web is that the
separation allows the components to evolve independently, thus supporting the Internet-
scale requirement of multiple organizational domains.

 Statelessness

The client-server communication is constrained by no client context being stored on the


server between requests. Each request from any client contains all the information
necessary to service the request, and the session state is held in the client. The session
state can be transferred by the server to another service such as a database to maintain a
persistent state for a period and allow authentication. The client begins sending requests
when it is ready to make the transition to a new state. While one or more requests are
outstanding, the client is considered to be in transition. The representation of each
application state contains links that can be used the next time the client chooses to initiate
a new state-transition.
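A minimal sketch of the statelessness constraint (illustrative names, no real protocol): every request carries all the information needed to serve it, so the server keeps no per-client session between requests.

```python
# Each request is self-contained: the auth token and paging parameters travel
# with the request itself, so any server replica could answer it.
VALID_TOKENS = {"abc123": "alice"}          # illustrative credential store
ITEMS = ["item-%d" % i for i in range(10)]  # the collection being paged

def handle_request(request):
    user = VALID_TOKENS.get(request.get("token"))
    if user is None:
        return {"status": 401}              # nothing remembered from before
    page, size = request["page"], request["page_size"]
    return {"status": 200, "user": user,
            "items": ITEMS[page * size:(page + 1) * size]}

print(handle_request({"token": "abc123", "page": 1, "page_size": 3})["items"])
# ['item-3', 'item-4', 'item-5']
```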

 Cacheability

As on the World Wide Web, clients and intermediaries can cache responses. Responses
must, implicitly or explicitly, define themselves as either cacheable or non-cacheable to
prevent clients from providing stale or inappropriate data in response to further requests.
Well-managed caching partially or completely eliminates some client-server interactions,
further improving scalability and performance.
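The cacheability constraint can be sketched as a toy client-side cache (function and flag names are invented for the example): each response declares itself cacheable or not, and cacheable responses are reused without contacting the server again.

```python
origin_hits = {"count": 0}          # how often the "server" is actually contacted

def origin_fetch(uri):
    """Stand-in for the origin server; it labels each of its responses."""
    origin_hits["count"] += 1
    return {"body": "data for " + uri, "cacheable": uri != "/account"}

cache = {}

def get(uri):
    """Serve from cache when the origin marked the response cacheable."""
    if uri in cache:
        return cache[uri]
    resp = origin_fetch(uri)
    if resp["cacheable"]:
        cache[uri] = resp
    return resp

get("/books"); get("/books")        # second call is a cache hit
get("/account"); get("/account")    # marked non-cacheable: origin hit both times
print(origin_hits["count"])         # 3
```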

 Layered system

A client cannot ordinarily tell whether it is connected directly to the end server, or to an
intermediary along the way. If a proxy or load balancer is placed between the client and
server, it won't affect their communications and there won't be a need to update the client
or server code. Intermediary servers can improve system scalability by enabling load
balancing and by providing shared caches. Security can also be added as a layer on top
of the web services, clearly separating business logic from security logic. Adding
security as a separate layer enforces security policies. Finally, a layered system also
means that a server can call multiple other servers to generate a response to the client.

 Code on demand (optional)

Servers can temporarily extend or customize the functionality of a client by transferring


executable code: for example, compiled components such as Java applets, or client-side
scripts such as JavaScript.

 Uniform interface

The uniform interface constraint is fundamental to the design of any RESTful system. It
simplifies and decouples the architecture, which enables each part to evolve
independently. The four constraints for this uniform interface are:

 Resource identification in requests


Individual resources are identified in requests, for example using URIs in RESTful Web
services. The resources themselves are conceptually separate from the representations
that are returned to the client. For example, the server could send data from its database
as HTML, XML or as JSON—none of which are the server's internal representation.
 Resource manipulation through representations
When a client holds a representation of a resource, including any metadata attached, it
has enough information to modify or delete the resource's state.
 Self-descriptive messages
Each message includes enough information to describe how to process it. For example,
which parser to invoke can be specified by a media type.
 Hypermedia as the engine of application state (HATEOAS)
Having accessed an initial URI of the application, a client should be able to use the
links provided by the server to dynamically discover all the other actions and resources
available to it.
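To make the resource-versus-representation distinction concrete, the toy function below returns the same record as JSON or as simple XML depending on the requested media type (the XML shape is invented for the example; neither is the server's internal representation):

```python
import json

RESOURCE = {"id": 1, "name": "loan-service"}  # conceptually separate from any format

def represent(resource, media_type):
    """Render one resource in the representation the client asked for."""
    if media_type == "application/json":
        return json.dumps(resource)
    if media_type == "application/xml":
        fields = "".join("<%s>%s</%s>" % (k, v, k) for k, v in resource.items())
        return "<resource>%s</resource>" % fields
    raise ValueError("unsupported media type: " + media_type)

print(represent(RESOURCE, "application/json"))
# {"id": 1, "name": "loan-service"}
```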

 Cloud-Enabling Technology
 Broadband networks and internet architecture
 Data center technology
 Virtualization technology
 Web technology
 Multitenant technology

1. Broadband networks & Internet architecture


 All clouds must be connected to a network

 The Internet’s largest backbone networks, established and deployed by ISPs (internet
service providers), are interconnected by core routers.
Two fundamental components:
 Connectionless packet switching
o End-to-end (sender-receiver pair) data flows are divided into packets of a limited size
o Packets are processed through network switches and routers, then queued and
forwarded from one intermediary node to the next
 Router-based interconnectivity
o A router is a device that is connected to multiple networks, through which it
forwards packets
o Each packet is individually processed
o Multiple alternative network routes can be used
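The packet-switching bullets above can be sketched in a few lines (the packet size and address are arbitrary illustrative values): a flow is divided into limited-size packets that may travel different routes, and the receiver reassembles them by sequence.

```python
MTU = 4  # maximum payload bytes per packet (illustrative; real MTUs are ~1500)

def packetize(data, dest):
    """Divide an end-to-end data flow into individually addressed packets."""
    return [{"dest": dest, "seq": i, "payload": data[i:i + MTU]}
            for i in range(0, len(data), MTU)]

def reassemble(packets):
    """Packets may arrive out of order over different routes; sort by sequence."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("hello, cloud!", dest="10.0.0.7")
pkts.reverse()                       # simulate out-of-order arrival
print(reassemble(pkts))              # hello, cloud!
```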

Internet Reference Model

2. Data Center Technology


 A data center is a facility used to house computer systems and associated components,
such as telecommunications and storage systems
o Virtualization
o Standardization and Modularity
o Automation
o Remote Operation and Management

Virtualization

Standardization and Modularity


 Data centers are built upon standardized commodity hardware and designed with modular
architecture.

3. Virtualization technology

Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".

In other words, virtualization is a technique which allows sharing a single physical instance of a
resource or an application among multiple customers and organizations. It does so by assigning a
logical name to a physical resource and providing a pointer to that physical resource when
demanded.

Creation of a virtual machine over existing operating system and hardware is known as
Hardware Virtualization. A virtual machine provides an environment that is logically separated
from the underlying hardware.

The machine on which the virtual machine is created is known as the Host Machine, and the
virtual machine itself is referred to as the Guest Machine.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on
the hardware system, it is known as hardware virtualization. The main job of the hypervisor is to
control and monitor the processor, memory, and other hardware resources. After virtualization
of the hardware system, we can install different operating systems on it and run different
applications on those OSs.

Usage:

Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.

2) Operating System Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.

Usage:

Operating System Virtualization is mainly used for testing the applications on different platforms
of OS.

3) Server Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on
the server system, it is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided into multiple servers
on the demand basis and for balancing the load.

4) Storage Virtualization:

Storage virtualization is the process of grouping physical storage from multiple network
storage devices so that it looks like a single storage device. Storage virtualization is also
implemented by using software applications.

Usage:

Storage virtualization is mainly done for back-up and recovery purposes.

 How does virtualization work in cloud computing?

 Virtualization plays a very important role in cloud computing. Normally, cloud users
share the data present in the cloud, such as applications, but with the help of
virtualization they actually share the underlying infrastructure.
 The main usage of virtualization technology is to provide applications in standard
versions to cloud users; when the next version of an application is released, the cloud
provider has to provide the latest version to its cloud users, which is practically
difficult because it is more expensive.
 To overcome this problem, we use virtualization technology: the servers and the
software applications required are maintained by third parties, and the cloud providers
pay them on a monthly or annual basis.

 Data Virtualization

Data virtualization is the process of retrieving data from various resources without knowing its
type or the physical location where it is stored. It collects heterogeneous data from different
resources and allows data users across the organization to access this data according to their
work requirements. This heterogeneous data can be accessed using any application, such as web
portals, web services, e-commerce, Software as a Service (SaaS), and mobile applications.

We can use Data Virtualization in the field of data integration, business intelligence, and cloud
computing.

Advantages of Data Virtualization

There are the following advantages of data virtualization -

o It allows users to access the data without worrying about where it physically resides.
o It offers better customer satisfaction, retention, and revenue growth.
o It provides various security mechanisms that allow users to safely store their personal and
professional information.
o It reduces costs by removing data replication.
o It provides a user-friendly interface to develop customized views.

o It provides various simple and fast deployment resources.


o It increases business user efficiency by providing data in real-time.
o It is used to perform tasks such as data integration, business integration, Service-Oriented
Architecture (SOA) data services, and enterprise search.

Disadvantages of Data Virtualization


o It can create availability issues, because availability is maintained by third-party providers.
o It requires a high implementation cost.
o It can also create scalability issues.
o Although it saves time during the implementation phase of virtualization, it consumes
more time to generate the appropriate result.

Uses of Data Virtualization

There are the following uses of Data Virtualization -

1. Analyze performance

Data virtualization is used to analyze the performance of the organization compared to previous
years.

2. Search and discover interrelated data

Data Virtualization (DV) provides a mechanism to easily search the data which is similar and
internally related to each other.

3. Agile Business Intelligence

It is one of the most common uses of Data Virtualization. It is used in agile reporting and real-
time dashboards that require timely aggregation, analysis, and presentation of relevant data from
multiple resources. Both individuals and managers use this to monitor performance, which helps
in daily operational decision processes such as sales, support, finance, logistics, legal, and
compliance.

4. Data Management

Data virtualization provides a secure centralized layer to search, discover, and govern the unified
data and its relationships.

 Data Virtualization Tools

There are the following Data Virtualization tools -



1. Red Hat JBoss data virtualization

Red Hat JBoss data virtualization is a good choice for developers and those who are using
microservices and containers. It is written in Java.

2. TIBCO data virtualization

TIBCO helps administrators and users to create a data virtualization platform for accessing
multiple data sources and data sets. It provides a built-in transformation engine to combine
non-relational and unstructured data sources.

3. Oracle data service integrator

It is a very popular and powerful data integration tool which mainly works with Oracle
products. It allows organizations to quickly develop and manage data services to access a single
view of data.

4. SAS Federation Server

SAS Federation Server provides scalable, multi-user, and standards-based data access
technologies for accessing data from multiple data services. It mainly focuses on securing data.

5. Denodo

Denodo is one of the best-known data virtualization tools. It allows organizations to minimize
network traffic load and improve response time for large data sets. It is suitable for both small
and large organizations.

 Industries that use Data Virtualization


o Communication & Technology
In Communication & Technology industry, data virtualization is used to increase revenue
per customer, create a real-time ODS for marketing, manage customers, improve
customer insights, and optimize customer care, etc.
o Finance
In the field of finance, DV is used to improve trade reconciliation, empower data
democracy, address data complexity, and manage fixed-income risk.
o Government
In the government sector, DV is used for protecting the environment.
o Healthcare
Data virtualization plays a very important role in the field of healthcare. In healthcare,
DV helps to improve patient care, drive new product innovation, accelerate M&A
synergies, and provide more efficient claims analysis.
o Manufacturing
In manufacturing industry, data virtualization is used to optimize a global supply chain,
optimize factories, and improve IT assets utilization.

 Hardware Virtualization

 Previously, there was a "one-to-one relationship" between physical servers and operating
systems. Servers often had low CPU, memory, and networking utilization, so under this
model the costs of doing business increased: the physical space, amount of power, and
hardware required meant that costs kept adding up.
 The hypervisor manages the shared physical resources of the hardware between the
guest operating systems and the host operating system. The physical resources become
abstracted versions in standard formats, regardless of the hardware platform. The
abstracted hardware is represented as actual hardware, so the virtualized operating
system sees these resources as physical entities.
 Virtualization means abstraction. Hardware virtualization is accomplished by
abstracting the physical hardware layer by use of a hypervisor or VMM (Virtual Machine
Monitor).
 When the virtual machine software, virtual machine manager (VMM), or hypervisor
software is installed directly on the hardware system, it is known as hardware virtualization.
 The main job of the hypervisor is to control and monitor the processor, memory, and
other hardware resources.
 After virtualization of the hardware system, we can install different operating systems on
it and run different applications on those OSs.

Usage of Hardware Virtualization

Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.

Advantages of Hardware Virtualization

The main benefits of hardware virtualization are more efficient resource utilization, lower overall
costs as well as increased uptime and IT flexibility.

1) More Efficient Resource Utilization:

Physical resources can be shared among virtual machines, and unused resources allocated to one
virtual machine can be used by other virtual machines if the need exists.

2) Lower Overall Costs Because Of Server Consolidation:

It is now possible for multiple operating systems to co-exist on a single hardware platform, so
the number of servers, the rack space, and the power consumption all drop significantly.

3) Increased Uptime Because Of Advanced Hardware Virtualization Features:

The modern hypervisors provide highly orchestrated operations that maximize the abstraction of
the hardware and help to ensure maximum uptime. These functions help to migrate a running
virtual machine from one host to another dynamically, as well as to maintain a running copy of a
virtual machine on another physical host in case the primary host fails.

4) Increased IT Flexibility:

Hardware virtualization enables quick deployment of server resources in a managed and
consistent way. As a result, IT can adapt quickly and provide the business with the resources it
needs in good time.

 Software Virtualization

 Managing applications and their distribution becomes a typical task for IT departments.
Installation mechanisms differ from application to application. Some programs require
certain helper applications or frameworks, and these may conflict with existing
applications.
 Software virtualization is just like hardware virtualization, but it abstracts the software
installation procedure and creates virtual software installations.
 Virtualized software is an application that is "installed" into its own self-contained
unit.
 Examples of software virtualization are VMware software, VirtualBox, etc. In the next
pages, we are going to see how to install a Linux OS and a Windows OS on the VMware
application.

Advantages of Software Virtualization

1) Client Deployments Become Easier:

By copying a file to a workstation or linking a file on a network, we can easily install virtual
software.

2) Easy to manage:

Managing updates becomes a simpler task: you update in one place and deploy the updated
virtual application to all clients.

3) Software Migration:

Without software virtualization, moving from one software platform to another takes much time
to deploy and impacts end-user systems. With the help of a virtualized software environment,
the migration becomes easier.

 Server Virtualization

 Server Virtualization is the process of dividing a physical server into several virtual
servers, called virtual private servers. Each virtual private server can run independently.
 The concept of server virtualization is widely used in IT infrastructure to minimize costs
by increasing the utilization of existing resources.

Types of Server Virtualization

1. Hypervisor

In server virtualization, the hypervisor plays an important role. It is a layer between
the operating system (OS) and the hardware. There are two types of hypervisors:

o Type 1 hypervisor (also known as bare-metal or native hypervisors)
o Type 2 hypervisor (also known as hosted or embedded hypervisors)

The hypervisor is mainly used to perform tasks such as allocating physical hardware
resources (CPU, RAM, etc.) to several smaller independent virtual machines, called "guests", on
the host machine.
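The allocation task described above can be sketched as simple bookkeeping (the class and resource figures are illustrative, not any real hypervisor's interface): physical CPU and RAM are carved out per guest, and a request that exceeds what is free is refused.

```python
class Hypervisor:
    """Toy model of a hypervisor allocating host resources to guest VMs."""
    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.guests = {}

    def create_guest(self, name, cpus, ram_gb):
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            return False                 # not enough physical resources left
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.guests[name] = (cpus, ram_gb)
        return True

hv = Hypervisor(cpus=8, ram_gb=32)
hv.create_guest("web", cpus=2, ram_gb=8)
hv.create_guest("db", cpus=4, ram_gb=16)
print(hv.free_cpus, hv.free_ram)     # 2 8
```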

2. Full Virtualization

Full Virtualization uses a hypervisor to directly communicate with the CPU and physical server.
It provides the best isolation and security mechanism to the virtual machines.

The biggest disadvantage of using hypervisor in full virtualization is that a hypervisor has its
own processing needs, so it can slow down the application and server performance.

VMware ESX server is the best example of full virtualization.

3. Para Virtualization

Para virtualization is quite similar to full virtualization. Its advantages are that it is easier to
use, offers enhanced performance, and does not require emulation overhead. Xen and UML
primarily use para virtualization.

The difference between full and para virtualization is that, in para virtualization, the hypervisor
does not need as much processing power to manage the guest OS.

4. Operating System Virtualization

Operating system virtualization is also called system-level virtualization. It is a server
virtualization technology that divides one operating system into multiple isolated user spaces
called virtual environments. The biggest advantage of this kind of server virtualization is that it
reduces the use of physical space, so it saves money.

Linux OS Virtualization and Windows OS Virtualization are the types of Operating System
virtualization. FreeVPS, OpenVZ, and Linux Vserver are some examples of System-Level
Virtualization.

5. Hardware Assisted Virtualization

Hardware-assisted virtualization was introduced by AMD and Intel. It is also known
as hardware virtualization, AMD virtualization (AMD-V), and Intel virtualization (Intel VT).
It is designed to increase the performance of the processor. The advantage of using hardware-
assisted virtualization is that it requires less hypervisor overhead.

6. Kernel-Level Virtualization

Kernel-level virtualization is one of the most important types of server virtualization. It is
an open-source virtualization technology that uses the Linux kernel as a hypervisor. The
advantage of kernel virtualization is that it does not require any special administrative software
and has very little overhead.

User Mode Linux (UML) and the Kernel-based Virtual Machine (KVM) are examples of kernel
virtualization.

Advantages of Server Virtualization

There are the following advantages of Server Virtualization -

1. Independent Restart

In server virtualization, each virtual server can be restarted independently without affecting the
working of the other virtual servers.

2. Low Cost

Server Virtualization can divide a single server into multiple virtual private servers, so it reduces
the cost of hardware components.

3. Disaster Recovery

Disaster recovery is one of the best advantages of server virtualization. In server
virtualization, data can be moved easily and quickly from one server to another, and it can be
stored and retrieved from anywhere.

4. Faster deployment of resources

Server virtualization allows us to deploy our resources in a simpler and faster way.

5. Security

It allows users to store their sensitive data inside the data centers.

Disadvantages of Server Virtualization

There are the following disadvantages of Server Virtualization -



1. The biggest disadvantage of server virtualization is that when the host server goes offline,
all the websites hosted by that server also go down.
2. It is difficult to measure the performance of virtualized environments.
3. It requires a large amount of RAM.
4. It is difficult to set up and maintain.
5. Some core applications and databases do not support virtualization.
6. It requires extra hardware resources.

Uses of Server Virtualization

A list of uses of server virtualization is given below -

o Server Virtualization is used in the testing and development environment.


o It improves the availability of servers.
o It allows organizations to make efficient use of resources.
o It reduces redundancy without purchasing additional hardware components.

 Storage Virtualization

 As we know, there has traditionally been a strong link between the physical host and its
locally installed storage devices. However, that paradigm has been changing drastically;
local storage is almost no longer needed.
 As technology progresses, more advanced storage devices are coming to the market
that provide more functionality and make local storage obsolete.
 Storage virtualization is a major component of storage servers, in the form of functional
RAID levels and controllers.
 Operating systems and applications with device access can access the disks directly by
themselves for writing.
 The controllers configure the local storage in RAID groups and present the storage to the
operating system depending upon the configuration. The storage is thus abstracted,
and the controller determines how to write the data or retrieve the requested data for
the operating system.

Storage virtualization is becoming more and more important in various other forms:

 File servers: The operating system writes the data to a remote location with no need to
understand how to write to the physical media.
 WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested
blocks at LAN speed, while not impacting the WAN performance.
 SAN and NAS: Storage is presented over the Ethernet network to the operating system.
NAS presents the storage as file operations (like NFS). SAN technologies present the
storage as block-level storage (like Fibre Channel). The operating system sends its
instructions as if the storage were a locally attached device.

 Storage Tiering: Using the storage pool concept as a stepping stone, storage tiering
analyzes the most commonly used data and places it on the highest-performing storage
pool. The least-used data is placed on the weakest-performing storage pool.

This operation is done automatically without any interruption of service to the data consumer.
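The tiering decision can be sketched as a ranking by access count (the pool size, object names, and counts are illustrative):

```python
def tier(access_counts, fast_capacity):
    """Place the most-used objects on the fast pool, the rest on the slow pool."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return set(ranked[:fast_capacity]), set(ranked[fast_capacity:])

counts = {"logs": 3, "videos": 120, "backups": 1, "db": 500}
fast, slow = tier(counts, fast_capacity=2)
print(sorted(fast))    # ['db', 'videos']
```

A real array would re-run this analysis periodically and migrate blocks between pools without interrupting the data consumer.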

Advantages of Storage Virtualization


1. Data is stored in more convenient locations away from the specific host. In the case of
a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and
disaster recovery.
3. By abstracting the storage level, IT operations become more flexible in how
storage is provided, partitioned, and protected.

 CPU Virtualization

 A VM is a duplicate of an existing computer system in which a majority of the VM


instructions are executed on the host processor in native mode. Thus, unprivileged
instructions of VMs run directly on the host machine for higher efficiency. Other critical
instructions should be handled carefully for correctness and stability.
 The critical instructions are divided into three categories: privileged instructions, control-
sensitive instructions, and behavior-sensitive instructions.
 Privileged instructions execute in a privileged mode and will be trapped if executed
outside this mode. Control-sensitive instructions attempt to change the configuration of
resources used.
 Behavior-sensitive instructions have different behaviors depending on the configuration
of resources, including the load and store operations over the virtual memory.
 CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor
mode.
 When the privileged instructions including control- and behavior-sensitive instructions of
a VM are executed, they are trapped in the VMM. In this case, the VMM acts as a unified
mediator for hardware access from different VMs to guarantee the correctness and
stability of the whole system.
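The trap-and-emulate behavior described above can be sketched as a toy dispatcher; the instruction names and the VMM handler below are illustrative, not real x86 semantics.

```python
# Illustrative set of critical instructions that must trap to the VMM.
PRIVILEGED = {"LGDT", "HLT", "MOV_CR3"}

class TrapToVMM(Exception):
    """Raised when a guest executes a privileged instruction in user mode."""

def execute_in_user_mode(instr):
    # Unprivileged instructions run directly on the host CPU.
    if instr in PRIVILEGED:
        raise TrapToVMM(instr)
    return f"{instr} executed natively"

def vmm_run(instructions):
    results = []
    for instr in instructions:
        try:
            results.append(execute_in_user_mode(instr))
        except TrapToVMM as trap:
            # The VMM mediates hardware access on behalf of the VM.
            results.append(f"{trap.args[0]} emulated by VMM")
    return results

print(vmm_run(["ADD", "MOV_CR3", "SUB"]))
# ['ADD executed natively', 'MOV_CR3 emulated by VMM', 'SUB executed natively']
```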
 However, not all CPU architectures are virtualizable. RISC CPU architectures can be
naturally virtualized because all control- and behavior-sensitive instructions are
privileged instructions.
 On the contrary, x86 CPU architectures were not originally designed to support
virtualization, because about 10 sensitive instructions, such as SGDT and SMSW, are
not privileged instructions. When these instructions execute in a virtualized
environment, they are not trapped by the VMM.
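The virtualizability criterion above (every sensitive instruction must also be privileged, so it traps) can be expressed as a simple set-inclusion check; the instruction sets below are illustrative stand-ins, not real ISAs.

```python
# Sketch of the classic virtualizability test: an architecture is
# naturally virtualizable when all sensitive instructions are a subset
# of the privileged (trapping) instructions.
def virtualizable(sensitive, privileged):
    return sensitive <= privileged   # set inclusion

# RISC-like case: every sensitive instruction traps.
risc_like = virtualizable(sensitive={"CFG_WRITE"},
                          privileged={"CFG_WRITE", "HALT"})

# x86-like case: SGDT and SMSW are sensitive but do not trap.
x86_like = virtualizable(sensitive={"SGDT", "SMSW"},
                         privileged={"HALT"})

print(risc_like, x86_like)   # True False
```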
Hardware-Assisted CPU Virtualization

This technique attempts to simplify virtualization because full virtualization and
para-virtualization are complicated. Intel and AMD add an additional privilege level
(some people call it Ring -1) to x86 processors. Therefore, operating systems can still
run at Ring 0 while the hypervisor runs at Ring -1. All the privileged and sensitive
instructions are trapped in the hypervisor automatically. This technique removes the
difficulty of implementing the binary translation required by full virtualization, and it
lets the operating system run in VMs without modification.

 Memory Virtualization
 Virtual memory virtualization is similar to the virtual memory support provided by
modern operating systems.
 In a traditional execution environment, the operating system maintains mappings
of virtual memory to machine memory using page tables, which is a one-stage mapping
from virtual memory to machine memory.
 All modern x86 CPUs include a memory management unit (MMU) and a translation look
aside buffer (TLB) to optimize virtual memory performance.
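The role of the TLB mentioned above can be sketched as a small cache in front of the page table; the page numbers and table sizes are illustrative assumptions.

```python
# Toy TLB: caches recent virtual-to-machine translations so repeated
# accesses skip the slow page-table walk. Mappings are illustrative.
page_table = {0: 42, 1: 19, 2: 7}   # virtual page -> machine page
tlb = {}

def lookup(vpage):
    if vpage in tlb:
        return tlb[vpage], "TLB hit"
    mpage = page_table[vpage]   # slow path: walk the page table
    tlb[vpage] = mpage          # cache the translation for next time
    return mpage, "TLB miss"

print(lookup(0))   # (42, 'TLB miss')
print(lookup(0))   # (42, 'TLB hit')
```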
 However, in a virtual execution environment, virtual memory virtualization involves
sharing the physical system memory in RAM and dynamically allocating it to
the physical memory of the VMs.
 That means a two-stage mapping process should be maintained by the guest OS and the
VMM, respectively: virtual memory to physical memory and physical memory to
machine memory.
 Furthermore, MMU virtualization should be supported, which is transparent to the guest
OS. The guest OS continues to control the mapping of virtual addresses to the physical
memory addresses of VMs. But the guest OS cannot directly access the actual machine
memory. The VMM is responsible for mapping the guest physical memory to the actual
machine memory.
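The two-stage mapping described above can be made concrete with two small lookup tables; all page numbers here are illustrative.

```python
# Sketch of two-stage address translation: the guest OS maps guest
# virtual pages to guest "physical" pages, and the VMM maps guest
# physical pages to actual machine pages. Page numbers are made up.
guest_page_table = {0: 7, 1: 3}     # guest virtual -> guest physical
vmm_page_table   = {7: 42, 3: 19}   # guest physical -> machine

def translate(gv_page):
    gp_page = guest_page_table[gv_page]   # stage 1: maintained by guest OS
    m_page = vmm_page_table[gp_page]      # stage 2: maintained by VMM
    return m_page

print(translate(0))   # 42
```

In practice, performing both stages on every access is too slow, which is why real VMMs collapse them into shadow page tables or rely on hardware-assisted nested paging.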
 I/O Virtualization

 I/O virtualization involves managing the routing of I/O requests between virtual devices and
the shared physical hardware. At the time of this writing, there are three ways to implement
I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device
emulation is the first approach; generally, it emulates well-known, real-world devices.

 All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA, are replicated in software. This software is located in
the VMM and acts as a virtual device. The I/O access requests of the guest OS are
trapped in the VMM, which interacts with the I/O devices.
 A single hardware device can be shared by multiple VMs that run concurrently.
However, software emulation runs much slower than the hardware it emulates.
 The para-virtualization method of I/O virtualization is typically used in Xen. It is also
known as the split driver model, consisting of a frontend driver and a backend driver. The
frontend driver runs in Domain U and the backend driver runs in Domain 0. They
interact with each other via a block of shared memory.
 The frontend driver manages the I/O requests of the guest OSes, and the backend driver
is responsible for managing the real I/O devices and multiplexing the I/O data of different
VMs. Although para-I/O-virtualization achieves better device performance than full
device emulation, it comes with a higher CPU overhead.
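The split driver model can be sketched with a queue standing in for the shared-memory ring between the two domains; the function names and request strings are illustrative, not Xen's actual interfaces.

```python
from collections import deque

# Sketch of Xen's split-driver model: a frontend driver in Domain U
# places I/O requests into a shared ring, and a backend driver in
# Domain 0 drains the ring and services requests against the real
# device. The deque stands in for the shared-memory ring.
shared_ring = deque()

def frontend_submit(request):
    """Domain U: forward a guest I/O request into the shared ring."""
    shared_ring.append(request)

def backend_service():
    """Domain 0: drain the ring and multiplex requests onto the device."""
    completed = []
    while shared_ring:
        req = shared_ring.popleft()
        completed.append(f"device handled {req}")
    return completed

frontend_submit("read block 5")
frontend_submit("write block 9")
print(backend_service())
# ['device handled read block 5', 'device handled write block 9']
```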
 Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-
native performance without high CPU costs. However, current direct I/O virtualization
implementations focus on networking for mainframes. There are a lot of challenges for
commodity hardware devices.
 For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary
memory locations) that can function incorrectly or even crash the whole system.
 Since software-based I/O virtualization requires a very high overhead of device
emulation, hardware-assisted I/O virtualization is critical. Intel VT-d supports the
remapping of I/O DMA transfers and device-generated interrupts.
 The architecture of VT-d provides the flexibility to support multiple usage models that
may run unmodified, special-purpose, or “virtualization-aware” guest OSes.

 How Does Virtualization Simplify Disaster Recovery?


 When it comes to backup and disaster recovery, virtualization changes everything by
consolidating the entire server environment, along with all the workstations and other
systems into a single virtual machine.

 A virtual machine is effectively a single file that contains everything, including your
operating systems, programs, settings, and files. At the same time, you’ll be able to
use your virtual machine the same way you use a local desktop.

 Virtualization greatly simplifies disaster recovery, since it does not require rebuilding
a physical server environment. Instead, you can move your virtual machines over to
another system and access them as normal.

 Factor in cloud computing, and you have the complete flexibility of not having to
depend on in-house hardware at all. Instead, all you’ll need is a device with internet
access and a remote desktop application to get straight back to work as though
nothing happened.

Virtual disaster recovery planning and testing


Virtual infrastructures can be complex. In a recovery situation, that complexity can be an issue,
so it's important to have a comprehensive DR plan. A virtual disaster recovery plan has many
similarities to a traditional DR plan. An organization should:

 Decide which systems and data are the most critical for recovery, and document them.
 Get management support for the DR (disaster recovery) plan.
 Complete a risk assessment and business impact analysis to outline possible risks and
their potential impacts.
 Document the steps needed for recovery.
 Define RTOs (recovery time objectives) and RPOs (recovery point objectives).
 Test the plan.
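As a small illustration of the RTO/RPO step above, a script can check whether a backup schedule satisfies a target RPO; the interval and objective values below are made up for the example.

```python
def meets_rpo(backup_interval_hours, rpo_hours):
    # Worst-case data loss equals the time between backups, so the
    # schedule satisfies the RPO only if the backup interval does not
    # exceed the recovery point objective.
    return backup_interval_hours <= rpo_hours

print(meets_rpo(backup_interval_hours=24, rpo_hours=4))   # False
print(meets_rpo(backup_interval_hours=1, rpo_hours=4))    # True
```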
