Cloud Resource Virtualization
Introduction
• Three classes of fundamental abstractions (interpreters, memory, and communication links) are necessary to describe the operation of a computing system.
• The physical realization of each one of these abstractions,
such as processors that transform information, primary and
secondary memory for storing information, and
communication channels that allow different systems to
communicate with one another, can vary in terms of
bandwidth, latency, reliability, and other physical
characteristics.
• Software systems such as operating systems are
responsible for the management of the system resources –
the physical implementations of the three abstractions.
Virtualization
Virtualization simulates the interface to a physical object by any one of
four means:
• Multiplexing. Create multiple virtual objects from one instance of a physical object. For example, a processor is multiplexed among a number of processes or threads; see the sketch after this list.
• Aggregation. Create one virtual object from multiple physical
objects. For example, a number of physical disks are aggregated
into a RAID disk.
• Emulation. Construct a virtual object from a different type of physical object. For example, a physical disk emulates a random access memory.
• Multiplexing and emulation. Examples: virtual memory with paging multiplexes real memory and disk, and a virtual address emulates a real address; TCP emulates a reliable bit pipe and multiplexes a physical communication channel and a processor.
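Below is a minimal C sketch of the first of these mechanisms, multiplexing: one physical processor (here, simply the running program) is time-sliced in round-robin fashion among several virtual CPUs. The names (vcpu, run_slice) and the slice sizes are invented for illustration; a real VMM dispatches guest contexts with hardware support and preemption.

/* A minimal sketch of multiplexing: one physical "CPU" (this process)
 * is time-sliced among several virtual contexts in round-robin order.
 * The structure and names (vcpu, run_slice) are illustrative only. */
#include <stdio.h>

#define NUM_VCPUS 3
#define SLICES    6

struct vcpu {
    int id;
    long work_done;   /* progress made so far by this virtual CPU */
};

/* Let one virtual CPU use the physical CPU for a fixed "time slice". */
static void run_slice(struct vcpu *v)
{
    for (int i = 0; i < 1000; i++)
        v->work_done++;
    printf("vcpu %d ran a slice, total work = %ld\n", v->id, v->work_done);
}

int main(void)
{
    struct vcpu vcpus[NUM_VCPUS] = { {0, 0}, {1, 0}, {2, 0} };

    /* Round-robin dispatch: each virtual object sees "its own" CPU,
     * but all of them are backed by the same physical instance. */
    for (int s = 0; s < SLICES; s++)
        run_slice(&vcpus[s % NUM_VCPUS]);

    return 0;
}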
• The first interface we discuss is the instruction set architecture (ISA), at the boundary of the hardware and the software; it defines the set of instructions the hardware was designed to execute.
• The next interface is the application binary interface (ABI), which allows the ensemble consisting of the application and the library modules to access the hardware.
• The ABI does not include privileged system instructions; instead it invokes
system calls.
• Finally, the application program interface (API) gives the application access to the ISA; it includes HLL library calls, which often invoke system calls (the sketch after this list contrasts the API and ABI levels).
• A process is the abstraction for the code of an application at execution
time; a thread is a lightweight process.
• The ABI is the projection of the computer system seen by the process, and
the API is the projection of the system from the perspective of the HLL
program.
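As a concrete illustration of the difference between the API and ABI levels, the short C program below (assuming a Linux system) prints a line twice: once through the printf() library call, which belongs to the API, and once by invoking the write system call directly through the ABI via syscall().

/* Sketch contrasting the API and ABI levels, assuming Linux.
 * printf() is an API-level HLL library call; underneath, the C library
 * eventually issues the write() system call through the ABI. The second
 * call below bypasses the library and invokes the system call directly. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* API level: a high-level library call. */
    printf("hello via the API (libc)\n");

    /* ABI level: the same effect via a raw system call number. */
    const char msg[] = "hello via the ABI (syscall)\n";
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

    return 0;
}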
• The VMM maintains a shadow page table for each guest page table; the shadow page table points to the actual page frames and is used by the hardware component called the memory management unit (MMU) for dynamic address translation (see the sketch after this list). Memory virtualization has important implications for performance.
• VMMs use a range of optimization techniques; for example, VMware systems avoid page duplication among different virtual machines: they maintain only one copy of a shared page and use copy-on-write policies, whereas Xen imposes total isolation of the VMs and does not allow page sharing.
• VMMs control virtual memory management and decide which pages to swap out; for example, when the VMware ESX server wants to swap out pages, it uses a balloon process inside a guest OS and requests it to allocate more pages to itself, thus forcing the guest OS to swap out pages of some of the processes running under that VM. The VMM then forces the balloon process to relinquish control of the free page frames.
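The toy C sketch below illustrates the idea behind shadow paging under simplifying assumptions (tiny four-entry tables and invented names such as guest_pt and vmm_pmap): the guest page table maps guest-virtual to guest-physical pages, the VMM maps guest-physical to host-physical frames, and the shadow page table used by the MMU composes the two mappings.

/* A toy sketch of shadow paging, using invented names and tiny tables.
 * The guest OS maintains guest-virtual -> guest-physical mappings; the
 * VMM maintains guest-physical -> host-physical mappings; the shadow
 * page table used by the MMU composes the two so that a guest-virtual
 * page maps straight to a host page frame. */
#include <stdio.h>

#define PAGES 4
#define INVALID -1

static int guest_pt[PAGES] = { 2, 0, 3, 1 };   /* guest virtual -> guest physical */
static int vmm_pmap[PAGES] = { 7, 5, 6, 4 };   /* guest physical -> host physical */
static int shadow_pt[PAGES];                   /* guest virtual -> host physical  */

/* Rebuild the shadow table after the guest edits its page table; a real
 * VMM does this lazily, trapping guest writes to write-protected tables. */
static void rebuild_shadow(void)
{
    for (int gv = 0; gv < PAGES; gv++) {
        int gp = guest_pt[gv];
        shadow_pt[gv] = (gp == INVALID) ? INVALID : vmm_pmap[gp];
    }
}

int main(void)
{
    rebuild_shadow();
    for (int gv = 0; gv < PAGES; gv++)
        printf("guest virtual page %d -> host frame %d\n", gv, shadow_pt[gv]);
    return 0;
}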
Virtual machines
• A virtual machine (VM) is an isolated
environment that appears to be a whole
computer but actually only has access to a
portion of the computer resources.
• Each VM appears to be running on the bare
hardware, giving the appearance of multiple
instances of the same computer, though all
are supported by a single physical system.
Types of Virtualization
• Traditional. The VMM, also called a "bare metal" VMM, is a thin software layer that runs directly on the host machine hardware; its main advantage is performance. Examples: VMware ESX and ESXi servers, Xen, OS370, and Denali.
• Hybrid. The VMM shares the hardware with the existing OS. Example: VMware Workstation.
• Hosted. The VM runs on top of an existing OS.
– The main advantage of this approach is that the VM is easier to build and install.
– Another advantage of this solution is that the VMM could use several components of the host
OS, such as the scheduler, the pager, and the I/O drivers, rather than providing its own.
– A price to pay for this simplicity is the increased overhead and the associated performance penalty; indeed, I/O operations, page faults, and scheduling requests from a guest OS are not handled directly by the VMM but are instead passed to the host OS (the sketch after this list illustrates this I/O path).
– The performance penalty, as well as the challenge of supporting complete isolation of VMs, makes this solution less attractive for servers in a cloud computing environment.
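The sketch below illustrates the hosted-VMM I/O path under the assumption that the guest disk is backed by an ordinary file on the host; the file name guest-disk.img and the helper guest_read_sector() are hypothetical. A guest sector read is serviced by a plain pread() call, that is, by the host OS's scheduler, page cache, and drivers rather than by a driver inside the VMM.

/* Sketch of the hosted-VMM I/O path: a guest block read is not serviced
 * by the VMM's own driver but is forwarded to the host OS as an ordinary
 * file read on a disk-image file. File name and sector size are assumed. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR_SIZE 512

/* Handle a guest "read sector" request by reusing the host's I/O stack. */
static ssize_t guest_read_sector(int image_fd, long sector, void *buf)
{
    /* pread() goes through the host OS scheduler, page cache and drivers. */
    return pread(image_fd, buf, SECTOR_SIZE, (off_t)sector * SECTOR_SIZE);
}

int main(void)
{
    int fd = open("guest-disk.img", O_RDONLY);   /* hypothetical disk image */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    char sector[SECTOR_SIZE];
    if (guest_read_sector(fd, 0, sector) == SECTOR_SIZE)
        printf("read guest sector 0 through the host OS\n");

    close(fd);
    return 0;
}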
Cache Virtualization
• In some cases an application running under a VM performs better than one running under a classical OS. This is the case with a policy called cache isolation.
• The cache is generally not partitioned equally among
processes running under a classical OS, since one
process may use the cache space better than the other.
– For example, in the case of two processes, one write-
intensive and the other read-intensive, the cache may be
aggressively filled by the first.
• Under the cache isolation policy the cache is divided between the VMs, and it is beneficial to run workloads competing for cache in two different VMs (the toy simulation after this list illustrates the effect).
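The toy C simulation below, with made-up sizes and access patterns, models a direct-mapped cache first shared by a write-intensive streaming workload and a read-intensive workload with a small working set, and then the same cache statically partitioned between two VMs. In this toy model the reader's hit rate collapses in the shared case and stays near 100% when each workload has its own partition.

/* Toy model of cache isolation: a direct-mapped cache shared by a
 * streaming (write-intensive) workload and a small-working-set
 * (read-intensive) workload, versus the same cache split between two
 * VMs. Sizes and access patterns are made up for illustration. */
#include <stdio.h>

#define LINES 64

static long tags[LINES];

/* Direct-mapped lookup restricted to the cache lines [set_lo, set_hi). */
static int access_line(long addr, int set_lo, int set_hi)
{
    int sets = set_hi - set_lo;
    int idx  = set_lo + (int)(addr % sets);
    if (tags[idx] == addr) return 1;      /* hit  */
    tags[idx] = addr;                     /* miss: fill the line */
    return 0;
}

/* Run both workloads against their cache regions; report the reader's hit rate. */
static void run(const char *label, int lo_r, int hi_r, int lo_w, int hi_w)
{
    long hits = 0, refs = 0;
    for (int i = 0; i < LINES; i++) tags[i] = -1;
    for (long t = 0; t < 100000; t++) {
        for (int k = 0; k < 4; k++)                    /* write-intensive streaming workload */
            access_line(1000000 + 4 * t + k, lo_w, hi_w);
        hits += access_line(t % 16, lo_r, hi_r);       /* read-intensive, 16-line working set */
        refs++;
    }
    printf("%s: reader hit rate %.1f%%\n", label, 100.0 * hits / refs);
}

int main(void)
{
    run("shared cache     ", 0, LINES, 0, LINES);            /* both compete for all lines   */
    run("partitioned cache", 0, LINES / 2, LINES / 2, LINES); /* each workload gets half      */
    return 0;
}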
Challenges of x86 Virtualization
• Nonfaulting access to privileged state. The instructions LGDT, LIDT, LLDT, and LTR, which load the registers GDTR, IDTR, LDTR, and TR, can only be executed by software running at privilege level 0, because these registers point to data structures that control the CPU operation. However, the corresponding store instructions (SGDT, SIDT, SLDT, and STR) are not privileged: they execute without faulting at any privilege level, so a guest OS can read this privileged state directly and the VMM gets no opportunity to intercept the access and present virtualized register contents (see the sketch after this list).
• Guest system calls. Two instructions, SYSENTER and SYSEXIT, support low-latency
system calls. The first causes a transition to privilege level 0, whereas the second
causes a transition from privilege level 0 and fails if executed at a level higher than
0. The VMM must then emulate every guest execution of either of these
instructions, which has a negative impact on performance.
• Interrupt virtualization. In response to a physical interrupt, the VMM generates a "virtual interrupt" and delivers it later to the target guest OS. But every OS has the ability to mask interrupts; thus the virtual interrupt could only be delivered to the guest OS when the interrupt is not masked. Keeping track of all guest OS attempts to mask interrupts greatly complicates the VMM and increases the overhead.
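A classic demonstration of the first problem, nonfaulting access to privileged state, is the user-mode SIDT read sketched below. It assumes an x86-64 Linux machine, a GCC or Clang compiler, and a CPU on which UMIP is not enabled; under those assumptions the program reads the IDTR base from privilege level 3 without causing a fault, so a VMM relying solely on trap-and-emulate never sees the access.

/* Sketch of "nonfaulting access to privileged state" on x86, assuming a
 * GCC/Clang build on x86-64 Linux with UMIP disabled: SIDT stores the
 * contents of the privileged IDTR register yet executes at user level
 * without faulting, so unprivileged code can observe state it should
 * not see, and the VMM receives no trap it could intercept. */
#include <stdio.h>
#include <stdint.h>

struct __attribute__((packed)) idtr_desc {
    uint16_t limit;
    uint64_t base;
};

int main(void)
{
    struct idtr_desc idtr = { 0, 0 };

    /* Executes silently at privilege level 3 on CPUs without UMIP. */
    __asm__ volatile ("sidt %0" : "=m"(idtr));

    printf("IDT base read from user mode: 0x%llx (limit 0x%x)\n",
           (unsigned long long)idtr.base, idtr.limit);
    return 0;
}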