Cloud Resource Virtualization


Introduction
• Three classes of fundamental abstractions (interpreters, memory, and communication links) are necessary to describe the operation of a computing system.
• The physical realization of each one of these abstractions,
such as processors that transform information, primary and
secondary memory for storing information, and
communication channels that allow different systems to
communicate with one another, can vary in terms of
bandwidth, latency, reliability, and other physical
characteristics.
• Software systems such as operating systems are
responsible for the management of the system resources –
the physical implementations of the three abstractions.


• Resource management for a community of users with a wide range of applications running under different operating systems is a very difficult problem.
• Resource management becomes even more complex
when resources are oversubscribed and users are
uncooperative.
• Resource management is affected by internal factors,
such as the heterogeneity of the hardware and
software systems, the ability to approximate the global
state of the system and to redistribute the load, the
failure rates of different components, and many other
factors.

Virtualization
Virtualization simulates the interface to a physical object by any one of
four means:
• Multiplexing. Create multiple virtual objects from one instance of a
physical object. For example, a processor is multiplexed among a
number of processes or threads.
• Aggregation. Create one virtual object from multiple physical
objects. For example, a number of physical disks are aggregated
into a RAID disk.
• Emulation. Construct a virtual object from a different type of physical object. For example, a physical disk emulates a random access memory.
• Multiplexing and emulation. Examples: virtual memory with paging multiplexes real memory and disk, and a virtual address emulates a real address; TCP emulates a reliable bit pipe and multiplexes a physical communication channel and a processor.
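As a minimal sketch of the multiplexing case above (the function name and numbers are illustrative, not from the text), the following Python fragment time-slices one physical "processor" among several runnable tasks in round-robin fashion, which is essentially what an OS scheduler does when it multiplexes a CPU among processes or threads:

    from collections import deque

    def multiplex(tasks, quantum, max_steps):
        # tasks: dict mapping task name -> remaining work units.
        # quantum: work units a task may consume before being preempted.
        ready = deque(tasks)                     # run queue of task names
        schedule = []                            # which task held the CPU at each step
        while ready and len(schedule) < max_steps:
            name = ready.popleft()               # dispatch the next runnable task
            slice_ = min(quantum, tasks[name])
            schedule.extend([name] * slice_)     # the task "owns" the processor for its slice
            tasks[name] -= slice_
            if tasks[name] > 0:                  # not finished: requeue at the tail
                ready.append(name)
        return schedule

    # Three "threads" share a single physical processor but each sees its own virtual one.
    print(multiplex({"A": 5, "B": 3, "C": 4}, quantum=2, max_steps=20))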


• Virtualization abstracts the underlying resources and simplifies their use, isolates users from one another, and supports replication, which, in turn, increases the elasticity of the system.
• Virtualization is a critical aspect of cloud computing, equally
important to the providers and consumers of cloud services, and
plays an important role in:
• System security because it allows isolation of services
running on the same hardware.
• Performance and reliability because it allows applications
to migrate from one platform to another.
• The development and management of services offered by a
provider.
• Performance isolation.

• In a cloud computing environment a VMM runs on the physical hardware and exports hardware-level abstractions to one or more guest operating systems.
• A guest OS interacts with the virtual hardware in the same way it
would interact with the physical hardware, but under the watchful
eye of the VMM which traps all privileged operations and mediates
the interactions of the guest OS with the hardware.
• For example, a VMM can control I/O operations to two virtual disks
implemented as two different sets of tracks on a physical disk.
• New services can be added without the need to modify an
operating system.
• There are side effects of virtualization, notably the performance
penalty and the hardware costs.
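The virtual-disk example above can be made concrete with a small sketch (class names and sizes are hypothetical): the VMM maps each virtual disk onto a disjoint region of one physical disk and checks every access, so a guest can never reach the other guest's blocks:

    class PhysicalDisk:
        def __init__(self, blocks):
            self.data = [b""] * blocks

    class VirtualDisk:
        # A guest-visible disk backed by a fixed, disjoint slice of the physical disk.
        def __init__(self, physical, start, size):
            self.physical, self.start, self.size = physical, start, size

        def _check(self, block):
            if not 0 <= block < self.size:        # the "VMM" rejects out-of-range I/O
                raise ValueError("access outside the virtual disk")

        def write(self, block, payload):
            self._check(block)
            self.physical.data[self.start + block] = payload

        def read(self, block):
            self._check(block)
            return self.physical.data[self.start + block]

    disk = PhysicalDisk(blocks=200)
    vdisk1 = VirtualDisk(disk, start=0, size=100)    # first set of "tracks"
    vdisk2 = VirtualDisk(disk, start=100, size=100)  # second, disjoint set
    vdisk1.write(0, b"guest-1 data")
    vdisk2.write(0, b"guest-2 data")                 # lands at physical block 100
    print(disk.data[0], disk.data[100])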


Layering and Virtualization


• A common approach to managing system complexity is
to identify a set of layers with well-defined interfaces
among them.
• The interfaces separate different levels of abstraction.
• Layering minimizes the interactions among the
subsystems and simplifies the description of the
subsystems.
• Each subsystem is abstracted through its interfaces
with the other subsystems.
• Thus, we are able to design, implement, and modify
the individual subsystems independently.


• The first interface we discuss is the instruction set architecture (ISA) at the
boundary of the hardware and the software.
• The next interface is the application binary interface (ABI), which allows
the ensemble consisting of the application and the library modules to
access the hardware.
• The ABI does not include privileged system instructions; instead it invokes
system calls.
• Finally, the application program interface (API) defines the set of
instructions the hardware was designed to execute and gives the
application access to the ISA. It includes HLL library calls, which often
invoke system calls.
• A process is the abstraction for the code of an application at execution
time; a thread is a lightweight process.
• The ABI is the projection of the computer system seen by the process, and
the API is the projection of the system from the perspective of the HLL
program.
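A rough, hedged way to see the API/ABI distinction from a running program (assuming a POSIX-like host): the print call goes through the language's library (the API level), while os.write invokes the write system call directly (the ABI level); the ISA level is the machine code the kernel and the process ultimately execute:

    import os

    # API level: a high-level library call; the language runtime and its libraries
    # eventually issue a system call on the program's behalf.
    print("hello via the API (library call)")

    # ABI level: invoke the write() system call directly on file descriptor 1
    # (standard output), bypassing the HLL library's formatting.
    os.write(1, b"hello via the ABI (system call)\n")

    # ISA level: the system call is ultimately carried out by machine instructions
    # (e.g., a trap/syscall instruction); that layer is not visible from portable code.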

• Computer systems are fairly complex, and their operation is best understood when we consider a model similar to the one in Figure 5.1, which shows the interfaces among the software components and the hardware.
• The hardware consists of one or more multicore
processors, a system interconnect (e.g., one or more
buses), a memory translation unit, the main memory, and
I/O devices, including one or more networking interfaces.
• Applications written mostly in high-level languages (HLL)
often call library modules and are compiled into object
code.


• It is possible to compile an HLL program for a VM environment, as shown in Figure 5.2, where portable code is produced and distributed and then converted by binary translators to the ISA of the host system.
• A dynamic binary translation converts blocks of guest instructions from the portable code to host instructions and leads to a significant performance improvement, because such blocks are cached and reused.
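The benefit of caching translated blocks can be illustrated with a toy model (the names are made up, not any real translator): each guest block is translated once, stored in a code cache keyed by its guest address, and reused on every later execution:

    class DynamicBinaryTranslator:
        # Toy model: translate guest code blocks on first use, then reuse them.
        def __init__(self):
            self.code_cache = {}       # guest block address -> translated host block
            self.translations = 0      # how many blocks were actually translated

        def translate(self, guest_block):
            # Stand-in for real translation: map each guest instruction to a host one.
            self.translations += 1
            return ["host_" + insn for insn in guest_block]

        def execute(self, address, guest_block):
            if address not in self.code_cache:           # miss: translate and cache
                self.code_cache[address] = self.translate(guest_block)
            return self.code_cache[address]              # hit: reuse the cached host code

    dbt = DynamicBinaryTranslator()
    loop_body = ["load", "add", "store", "branch"]
    for _ in range(1000):                                # a hot loop re-executes one block
        dbt.execute(0x400000, loop_body)
    print("blocks translated:", dbt.translations)        # 1, despite 1000 executions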


Virtual machine monitors


• A virtual machine monitor (VMM), also called a hypervisor, is the software that
securely partitions the resources of a computer system into one or more virtual
machines.
• A guest operating system is an operating system that runs under the control of a
VMM rather than directly on the hardware.
• The VMM runs in kernel mode, whereas a guest OS runs in user mode. Sometimes
the hardware supports a third mode of execution for the guest OS.
• VMMs allow several operating systems to run concurrently on a single hardware
platform.
• VMMs enforce isolation among these systems, thus enhancing security. A VMM
controls how the guest operating system uses the hardware resources.
• The events occurring in one VM do not affect any other VM running under the
same VMM.
• At the same time, the VMM enables:
– Multiple services to share the same platform.
– The movement of a server from one platform to another, the so-called live migration.
– System modification while maintaining backward compatibility with the original system.

• When a guest OS attempts to execute a privileged instruction, the VMM traps the operation and enforces the correctness and safety of the operation.
• The VMM guarantees the isolation of the individual VMs, and thus
ensures security and encapsulation, a major concern in cloud computing.
• The VMM monitors system performance and takes corrective action to avoid performance degradation; for example, the VMM may swap out a VM (copy all pages of that VM from real memory to disk and make the real memory frames available for paging by other VMs) to avoid thrashing.
• A VMM virtualizes the CPU and memory. For example, the VMM traps
interrupts and dispatches them to the individual guest operating systems.
If a guest OS disables interrupts, the VMM buffers such interrupts until the
guest OS enables them. The VMM maintains a shadow page table for each
guest OS and replicates any modification made by the guest OS in its own
shadow page table.


• This shadow page table points to the actual page frame and is used by the
hardware component called the memory management unit (MMU) for
dynamic address translation. Memory virtualization has important
implications on performance.
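A simplified sketch of the shadow-page-table idea follows (hypothetical structures, ignoring permissions and TLB details): the guest maintains its own virtual-to-guest-physical mapping, the VMM privately maps guest-physical pages to machine frames, and every trapped guest update is replicated into the shadow table that the MMU actually walks:

    class ShadowPaging:
        # Simplified shadow page table kept in sync with a guest page table.
        def __init__(self, guest_to_machine):
            self.guest_pt = {}            # guest virtual page -> guest physical page (guest's view)
            self.g2m = guest_to_machine   # guest physical page -> machine frame (VMM-private)
            self.shadow_pt = {}           # guest virtual page -> machine frame (walked by the MMU)

        def guest_map(self, vpage, gpage):
            # The guest OS updates its page table; the VMM traps the update and
            # replicates it in the shadow table with the real machine frame.
            self.guest_pt[vpage] = gpage
            self.shadow_pt[vpage] = self.g2m[gpage]

        def translate(self, vpage):
            # Dynamic address translation uses only the shadow table.
            return self.shadow_pt[vpage]

    vmm = ShadowPaging(guest_to_machine={0: 7, 1: 3, 2: 12})
    vmm.guest_map(vpage=0x10, gpage=1)
    print(vmm.translate(0x10))            # machine frame 3, not guest physical page 1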
• VMMs use a range of optimization techniques. For example, VMware systems avoid page duplication among different virtual machines: they maintain only one copy of a shared page and use copy-on-write policies. Xen, in contrast, imposes total isolation of the VMs and does not allow page sharing.
• VMMs control the virtual memory management and decide what pages to swap out; for example, when the VMware ESX server wants to swap out pages, it uses a balloon process inside a guest OS and requests it to allocate more pages to itself, thus swapping out pages of some of the processes running under that VM.
• Then it forces the balloon process to relinquish control of the free page
frames.
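The ballooning mechanism can be sketched as a toy model (the class and method names are illustrative, not the real ESX interface): inflating the balloon makes the guest OS give up page frames, which the VMM can then hand to other VMs; deflating returns them:

    class GuestOS:
        def __init__(self, total_pages):
            self.free_pages = total_pages

        def allocate(self, n):
            # Grant pages from the guest's free pool; a real guest that ran short
            # would swap out pages of its own processes to satisfy the request.
            granted = min(n, self.free_pages)
            self.free_pages -= granted
            return granted

    class BalloonDriver:
        # Pseudo-device inside the guest, controlled by the VMM.
        def __init__(self, guest):
            self.guest = guest
            self.held = 0                          # page frames pinned by the balloon

        def inflate(self, n):
            self.held += self.guest.allocate(n)    # frames the VMM may now give to other VMs
            return self.held

        def deflate(self, n):
            released = min(n, self.held)           # give the frames back to the guest
            self.held -= released
            self.guest.free_pages += released

    guest = GuestOS(total_pages=1024)
    balloon = BalloonDriver(guest)
    print("frames reclaimed for the VMM:", balloon.inflate(256))
    balloon.deflate(128)
    print("guest free pages:", guest.free_pages)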

Virtual machines
• A virtual machine (VM) is an isolated
environment that appears to be a whole
computer but actually only has access to a
portion of the computer resources.
• Each VM appears to be running on the bare
hardware, giving the appearance of multiple
instances of the same computer, though all
are supported by a single physical system.


• We distinguish two types of VM:
– process VMs and
– system VMs.
• A process VM is a virtual platform created for an
individual process and destroyed once the
process terminates.
– Virtually all operating systems provide a process VM
for each one of the applications running, but the more
interesting process VMs are those that support
binaries compiled on a different instruction set.


• A system VM supports an operating system together with many user processes.
• When the VM runs under the control of a
normal OS and provides a platform-
independent host for a single application, we
have an application virtual machine (e.g., Java
Virtual Machine [JVM]).

• A system virtual machine provides a complete system; each VM can run its own OS, which in turn can run multiple applications.
• Systems such as Linux Vserver, OpenVZ (Open VirtualiZation), FreeBSD Jails, and Solaris Zones, based on Linux, FreeBSD, and Solaris, respectively, implement operating system-level virtualization technologies.


• Operating system-level virtualization allows a physical server to run multiple isolated operating system instances, subject to several constraints; the instances are known as containers, virtual private servers (VPSs), or virtual environments (VEs).
• For example, OpenVZ requires both the host
and the guest OS to be Linux distributions.

Types of Virtualization
• Traditional. Also called a "bare metal" VMM: a thin software layer that runs directly on the host machine hardware; its main advantage is performance. Examples: VMware ESX and ESXi servers, Xen, OS370, and Denali.
• Hybrid. The VMM shares the hardware with the existing OS. Example: VMware Workstation.
• Hosted. The VM runs on top of an existing OS.
– The main advantage of this approach is that the VM is easier to build and install.
– Another advantage of this solution is that the VMM could use several components of the host
OS, such as the scheduler, the pager, and the I/O drivers, rather than providing its own.
– A price to pay for this simplicity is the increased overhead and associated performance
penalty; indeed, the I/O operations, page faults, and scheduling requests from a guest OS are
not handled directly by the VMM. Instead, they are passed to the host OS.
– Performance, as well as the challenges of supporting complete isolation of VMs, makes this solution less attractive for servers in a cloud computing environment.


Performance and security isolation


• Performance isolation is a critical condition for quality-of-service (QoS) guarantees
in shared computing environments.
• If the run-time behavior of an application is affected by other applications running concurrently and thus competing for CPU cycles, cache, main memory, and disk and network access, it is rather difficult to predict its completion time.
• Processor virtualization presents multiple copies of the same processor or core on
multicore systems.
• Processor emulation presents a model of another hardware system; instructions are "emulated" in software, much more slowly than under virtualization.
• In the traditional view there is a performance penalty, because an OS is considerably more heavyweight than a process and the overhead of context switching is larger.
• A VMM executes directly on the hardware a subset of frequently used machine
instructions generated by the application and emulates privileged instructions,
including device I/O requests.
• The subset of the instructions executed directly by the hardware includes
arithmetic instructions, memory access, and branching instructions.

• Operating systems use the process abstraction not only for resource sharing but also to support isolation.
• The software running on a virtual machine has the
constraints of its own dedicated hardware; it can only
access virtual devices emulated by the software.
• This layer of software has the potential to provide a
level of isolation nearly equivalent to the isolation
presented by two different physical systems. Thus, the
virtualization can be used to improve security in a
cloud computing environment.


Full virtualization and paravirtualization

• A set of sufficient conditions for a computer architecture to support virtualization and allow a VMM to operate efficiently:
– A program running under the VMM should exhibit a
behavior essentially identical to that demonstrated
when the program runs directly on an equivalent
machine.
– The VMM should be in complete control of the
virtualized resources.
– A statistically significant fraction of machine
instructions must be executed without the
intervention of the VMM.

• Another way to identify an architecture suitable for a virtual machine is to distinguish two classes of machine instructions (see the sketch after this list):
– sensitive instructions, which require special
precautions at execution time, and innocuous
instructions, which are not sensitive.
– In turn, sensitive instructions can be:
• Control sensitive, which are instructions that attempt to
change either the memory allocation or the privileged
mode.
• Mode sensitive, which are instructions whose behavior is
different in the privileged mode.
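The classification can be made concrete with a toy instruction table (the entries are illustrative, not a full ISA description): an architecture is easy to virtualize by trap-and-emulate when every sensitive instruction also traps in user mode, which is essentially the classical Popek-Goldberg condition; the x86 POPF instruction is the standard counterexample, since it is sensitive but does not trap:

    # Toy instruction table; the entries are illustrative, not a full ISA description.
    INSTRUCTIONS = {
        "add":     {"sensitive": False, "traps_in_user_mode": False},  # innocuous
        "load":    {"sensitive": False, "traps_in_user_mode": False},  # innocuous
        "set_cr3": {"sensitive": True,  "traps_in_user_mode": True},   # control sensitive, traps
        "popf":    {"sensitive": True,  "traps_in_user_mode": False},  # mode sensitive, silently differs
    }

    def innocuous(name):
        return not INSTRUCTIONS[name]["sensitive"]

    def trap_and_emulate_friendly(table):
        # Every sensitive instruction must trap in user mode so the VMM can intercept it.
        return all(attrs["traps_in_user_mode"] for attrs in table.values() if attrs["sensitive"])

    print([name for name in INSTRUCTIONS if innocuous(name)])  # executed directly on the hardware
    print(trap_and_emulate_friendly(INSTRUCTIONS))             # False: popf is sensitive but never traps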


– To handle nonvirtualizable machine instructions, one could resort to two strategies:
• Binary translation. The VMM monitors the execution
of guest operating systems; nonvirtualizable
instructions executed by a guest operating system are
replaced with other instructions.
• Paravirtualization. The guest operating system is
modified to use only instructions that can be
virtualized.
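The second strategy, paravirtualization, can be sketched as follows (the hypercall names and classes are hypothetical): instead of executing a nonvirtualizable privileged instruction and relying on the VMM to catch it, the modified guest kernel asks the VMM explicitly through a hypercall:

    class Hypervisor:
        # Toy VMM exposing an explicit hypercall interface to modified guests.
        def hypercall(self, operation, **args):
            # The VMM validates and performs the privileged operation on the
            # guest's behalf; no trap-and-emulate is needed.
            if operation == "update_page_table":
                return f"mapped vpage {args['vpage']} -> frame {args['frame']}"
            if operation == "mask_interrupts":
                return "interrupts masked for this virtual CPU"
            raise ValueError(f"unknown hypercall {operation!r}")

    class ParavirtualizedGuest:
        def __init__(self, vmm):
            self.vmm = vmm

        def map_page(self, vpage, frame):
            # A fully virtualized guest would execute a privileged MMU instruction here;
            # the paravirtualized kernel is modified to call the VMM instead.
            return self.vmm.hypercall("update_page_table", vpage=vpage, frame=frame)

    guest = ParavirtualizedGuest(Hypervisor())
    print(guest.map_page(vpage=0x10, frame=42))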

Full & Para Virtualization


• There are two basic approaches to processor
virtualization:
– full virtualization, in which each virtual machine runs on an exact copy
of the actual hardware.
– paravirtualization, in which each virtual machine runs on a slightly modified copy of the actual hardware.


• The reasons that paravirtualization is often adopted are:
– (i) some aspects of the hardware cannot be virtualized
– (ii) to improve performance
– (iii) to present a simpler interface.
• VMware VMMs are examples of full virtualization.
• Xen and Denali are based on paravirtualization.

• Full virtualization requires a virtualizable architecture; the hardware is fully exposed to the guest OS, which runs unchanged, and this ensures that this direct execution mode is efficient.
• On the other hand, paravirtualization is done because
some architectures such as x86 are not easily
virtualizable.
• Paravirtualization demands that the guest OS be
modified to run under the VMM; furthermore, the
guest OS code must be ported for individual hardware
platforms.


Cache Virtualization
• In some cases an application running under a VM
performs better than one running under a classical OS.
This is the case of a policy called cache isolation.
• The cache is generally not partitioned equally among
processes running under a classical OS, since one
process may use the cache space better than the other.
– For example, in the case of two processes, one write-
intensive and the other read-intensive, the cache may be
aggressively filled by the first.
• Under the cache isolation policy the cache is divided
between the VMs and it is beneficial to run workloads
competing for cache in two different VMs.
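A hedged sketch of the cache-isolation policy (way-partitioning is only one possible mechanism; the sizes are illustrative): each VM is confined to its own subset of cache ways, so a write-intensive workload in one VM cannot evict the working set of a read-intensive workload in another:

    class PartitionedCache:
        # Toy set-associative cache whose ways are divided between two VMs.
        def __init__(self, ways, partition):
            self.ways = [None] * ways     # one block per way, for simplicity
            self.partition = partition    # VM name -> list of way indices it may use

        def access(self, vm, block):
            allowed = self.partition[vm]
            if block in (self.ways[w] for w in allowed):
                return "hit"
            victim = allowed[hash(block) % len(allowed)]   # evict only within the VM's own ways
            self.ways[victim] = block
            return "miss"

    cache = PartitionedCache(ways=8, partition={"vm_write": [0, 1, 2, 3],
                                                "vm_read":  [4, 5, 6, 7]})
    cache.access("vm_read", "r0")                          # read-intensive VM loads its block
    for i in range(100):
        cache.access("vm_write", f"w{i}")                  # aggressive write traffic in the other VM
    print(cache.access("vm_read", "r0"))                   # hit: untouched by vm_write's traffic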

Hardware support for virtualization


• The problems faced by virtualization of the x86
architecture:
• Ring deprivileging. The VMM forces the guest operating system and the applications to run at a privilege level greater than 0. Two solutions are then possible:
– The (0/1/3) mode, in which the VMM, the guest OS, and the applications run at privilege levels 0, 1, and 3, respectively.
– The (0/3/3) mode, in which the VMM, a guest OS, and the applications run at privilege levels 0, 3, and 3, respectively.


– Ring aliasing. Problems created when a guest OS is forced to run at a privilege level other than the one it was originally designed for.
– Address space compression. A VMM uses parts of
the guest address space to store several system
data structures, such as the interrupt-descriptor
table and the global-descriptor table. Such data
structures must be protected, but the guest
software must have access to them.

• Nonfaulting access to privileged state. The instructions LGDT, LIDT, LLDT, and LTR, which load the registers GDTR, IDTR, LDTR, and TR, can only be executed by software running at privilege level 0, because these registers point to data structures that control the CPU operation. However, the corresponding instructions that store from these registers (SGDT, SIDT, SLDT, and STR) do not fault when executed at a privilege level other than 0; a guest OS executing one of them reads the actual register contents, and the VMM gets no opportunity to intercept the access and present the values the guest expects.
• Guest system calls. Two instructions, SYSENTER and SYSEXIT, support low-latency
system calls. The first causes a transition to privilege level 0, whereas the second
causes a transition from privilege level 0 and fails if executed at a level higher than
0. The VMM must then emulate every guest execution of either of these
instructions, which has a negative impact on performance.
• Interrupt virtualization. In response to a physical interrupt, the VMM generates a "virtual interrupt" and delivers it later to the target guest OS. But every OS has the ability to mask interrupts; thus the virtual interrupt can be delivered to the guest OS only when the interrupt is not masked. Keeping track of all guest OS attempts to mask interrupts greatly complicates the VMM and increases the overhead.
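The delivery rule can be sketched as follows (simplified, hypothetical structures): the VMM queues virtual interrupts per guest and delivers them only while that guest has interrupts enabled; observing every guest mask/unmask operation is precisely the bookkeeping that adds overhead:

    from collections import deque

    class VirtualCPU:
        # Per-guest interrupt state tracked by the VMM (simplified).
        def __init__(self, name):
            self.name = name
            self.interrupts_masked = False
            self.pending = deque()          # virtual interrupts waiting for delivery

        def set_mask(self, masked):
            # Every guest attempt to mask or unmask interrupts must be observed by the VMM.
            self.interrupts_masked = masked

    class VMM:
        def physical_interrupt(self, vcpu, vector):
            vcpu.pending.append(vector)     # turn the physical interrupt into a virtual one

        def deliver(self, vcpu):
            delivered = []
            while vcpu.pending and not vcpu.interrupts_masked:
                delivered.append(vcpu.pending.popleft())
            return delivered

    vmm, vcpu = VMM(), VirtualCPU("guest-0")
    vcpu.set_mask(True)                     # guest OS disables interrupts
    vmm.physical_interrupt(vcpu, vector=33)
    print(vmm.deliver(vcpu))                # []: the interrupt stays buffered
    vcpu.set_mask(False)                    # guest re-enables interrupts
    print(vmm.deliver(vcpu))                # [33]: delivered once unmasked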


– Access to hidden state. Elements of the system state (e.g., descriptor caches for segment registers) are hidden; there is no mechanism for saving and restoring the hidden components when there is a context switch from one VM to another.
– Ring compression. Paging and segmentation are the two
mechanisms to protect VMM code from being overwritten by a
guest OS and applications. Systems running in 64-bit mode can
only use paging, but paging does not distinguish among
privilege levels 0, 1, and 2, so the guest OS must run at privilege
level 3, the so-called (0/3/3) mode. Privilege levels 1 and 2
cannot be used; thus the name ring compression.
– Frequent access to privileged resources increases VMM
overhead. The task-priority register (TPR) is frequently used by a
guest OS. The VMM must protect the access to this register and
trap all attempts to access it. This can cause a significant
performance degradation.
