CC Module2 Part1
VIRTUALIZATION
Virtualization is a large umbrella of technologies and concepts that provide an abstract
environment, whether virtual hardware or an operating system, in which to run applications.
Virtualization technologies have gained renewed interest recently due to the confluence of
several phenomena:
Underutilized hardware and software resources
Nowadays, the average end-user desktop PC is powerful enough to meet almost all the needs of
everyday computing, with extra capacity that is rarely used. Almost all of these PCs have enough
resources to host a virtual machine manager and execute a virtual machine with acceptable
performance.
Hardware and software underutilization is occurring due to (1) increased performance and
computing capacity, and (2) the effect of limited or sporadic use of resources. Computers today
are so powerful that in most cases only a fraction of their capacity is used by an application or
the system. Moreover, if we consider the IT infrastructure of an enterprise, many computers are
only partially utilized whereas they could be used without interruption on a 24/7/365 basis. For
example, desktop PCs mostly devoted to office automation tasks and used by administrative staff
are only used during work hours, remaining completely unused overnight. Using these resources
for other purposes after hours could improve the efficiency of the IT infrastructure. To
transparently provide such a service, it would be necessary to deploy a completely separate
environment, which can be achieved through virtualization.
Lack of space
The continuous need for additional capacity, whether storage or compute power, makes data
centers grow quickly. Companies such as Google and Microsoft expand their infrastructures by
building data centers as large as football fields that are able to host thousands of nodes. This
condition, along with hardware underutilization, has led to the diffusion of a technique called
server consolidation, for which virtualization technologies are fundamental.
Greening initiative
Companies are increasingly looking for ways to reduce the amount of energy they
consume and to reduce their carbon footprint. Data centers are one of the major power
consumers; maintaining a data center operation not only involves keeping servers on, but a great
deal of energy is also consumed in keeping them cool. Infrastructures for cooling have a
significant impact on the carbon footprint of a data center. Hence, reducing the number of
servers through server consolidation will definitely reduce the impact of cooling and power
consumption of a data center. Virtualization technologies can provide an efficient way of
consolidating servers.
Rise of administrative costs
The increased demand for additional capacity, which translates into more servers in a data center,
is also responsible for a significant increase in administrative costs. Computers—in particular,
servers—do not operate all on their own, but they require care and feeding from system
administrators. Common system administration tasks include hardware monitoring, defective
hardware replacement, server setup and updates, server resources monitoring, and backups.
These are labor-intensive operations, and the higher the number of servers that have to be
managed, the higher the administrative costs. Virtualization can help reduce the number of
required servers for a given workload, thus reducing the cost of the administrative personnel.
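The consolidation argument above can be made concrete with a small sketch. Server consolidation is essentially a bin-packing problem: each workload uses only a fraction of one server's capacity, so many lightly loaded servers can be folded onto a few well-utilized hosts. The utilization figures below are hypothetical, and first-fit-decreasing is just one common packing heuristic:

```python
# Sketch: server consolidation as bin packing (hypothetical utilization figures).
# Each workload consumes a fraction of one physical server's capacity;
# first-fit-decreasing packs them onto as few hosts as possible.

def consolidate(loads, capacity=1.0):
    """Return a list of hosts, each a list of workload loads packed onto it."""
    hosts = []
    for load in sorted(loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host fits: open a new one
    return hosts

# Ten servers, each running at only 10-30% utilization...
workloads = [0.15, 0.2, 0.3, 0.25, 0.15, 0.2, 0.1, 0.3, 0.25, 0.1]
hosts = consolidate(workloads)
print(f"{len(workloads)} workloads fit on {len(hosts)} hosts")  # 10 -> 2
```

Eight of the ten machines could be powered off, which is exactly the saving in energy, cooling, and administration that the text describes.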
Virtualization is a broad concept that refers to the creation of a virtual version of something,
whether hardware, a software environment, storage, or a network. In a virtualized environment
there are three major components: guest, host, and virtualization layer. The guest represents the
system component that interacts with the virtualization layer rather than with the host, as would
normally happen. The host represents the original environment where the guest is supposed to be
managed. The virtualization layer is responsible for recreating the same or a different
environment where the guest will operate.
Increased security: The ability to control the execution of a guest in a completely transparent
manner opens new possibilities for delivering a secure, controlled execution environment. The
virtual machine represents an emulated environment in which the guest is executed. All the
operations of the guest are generally performed against the virtual machine, which then translates
and applies them to the host. Resources exposed by the host can then be hidden or simply
protected from the guest. Sensitive information that is contained in the host can be naturally
hidden without the need to install complex security policies.
Managed execution: Virtualization of the execution environment allows not only increased
security but also a wider range of features to be implemented. In particular, sharing,
aggregation, emulation, and isolation are the most relevant features.
Aggregation: Not only is it possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process. A group of separate hosts
can be tied together and represented to guests as a single virtual host. This function is naturally
implemented in middleware for distributed computing.
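Aggregation can be pictured as the inverse of sharing: several physical hosts are pooled and presented to the guest as one larger virtual host. The following sketch is purely illustrative (the class and node names are invented), but it shows the essential idea of exposing summed resources behind a single interface:

```python
# Sketch: resource aggregation - a group of separate physical hosts is
# presented to guests as a single virtual host. Names are illustrative only.

class PhysicalHost:
    def __init__(self, name, cpus, memory_gb):
        self.name = name
        self.cpus = cpus
        self.memory_gb = memory_gb

class AggregatedHost:
    """Presents several hosts as one pool of CPU and memory."""
    def __init__(self, hosts):
        self.hosts = hosts

    @property
    def cpus(self):
        return sum(h.cpus for h in self.hosts)

    @property
    def memory_gb(self):
        return sum(h.memory_gb for h in self.hosts)

cluster = AggregatedHost([
    PhysicalHost("node-1", cpus=8, memory_gb=32),
    PhysicalHost("node-2", cpus=8, memory_gb=32),
    PhysicalHost("node-3", cpus=16, memory_gb=64),
])
print(f"Guest sees one host with {cluster.cpus} CPUs, {cluster.memory_gb} GB")
```

A real distributed-computing middleware would additionally schedule guest work across the underlying nodes; here only the aggregated view is sketched.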
Portability: Portability in cloud computing refers to the ability to transfer applications and data
between cloud computing environments, enabling cloud services migration from one cloud
provider to another or between public and private clouds.
Virtualization covers a wide range of emulation techniques that are applied to different areas of
computing. A classification of these techniques helps us better understand their characteristics
and use.
The first classification is based on the service or entity that is being emulated.
Among these categories, execution virtualization constitutes the oldest, most popular, and most
developed area. In particular we can divide these execution virtualization techniques into two
major categories by considering the type of host they require.
Process-level techniques are implemented on top of an existing operating system, which has full
control of the hardware. System-level techniques are implemented directly on hardware and do
not require—or require a minimum of support from—an existing operating system. Within these
two categories we can list various techniques that offer the guest a different type of virtual
computation environment: bare hardware, operating system resources, low-level programming
language, and application libraries.
Execution virtualization can be implemented directly on top of the hardware by the operating
system, an application, or libraries. Execution virtualization includes all techniques that aim to
emulate an execution environment that is separate from the one hosting the virtualization layer.
Modern computing systems can be expressed in terms of the reference model described in
Figure. At the bottom layer, the model for the hardware is expressed in terms of the Instruction
Set Architecture (ISA), which defines the instruction set for the processor, registers, memory,
and interrupt management. The ISA is the interface between hardware and software, and it is
important to the operating system (OS) developer (System ISA) and developers of applications
that directly manage the underlying hardware (User ISA).
The instruction set exposed by the hardware is divided into different security classes that
define who can execute each instruction. The first distinction can be made between privileged and
nonprivileged instructions. Nonprivileged instructions are those instructions that can be used
without interfering with other tasks because they do not access shared resources. This category
contains, for example, all the floating-point, fixed-point, and arithmetic instructions. Privileged
instructions are those that are executed under specific restrictions and are mostly used for
sensitive operations.
A possible implementation features a hierarchy of privileges (see Figure) in the form of ring-
based security: Ring 0, Ring 1, Ring 2, and Ring 3, where Ring 0 is the most privileged level and
Ring 3 the least privileged. Ring 0 is used by the kernel of the OS, Rings 1 and 2 are used by
OS-level services, and Ring 3 is used by user applications. Recent systems support only two
levels, with Ring 0 for supervisor mode and Ring 3 for user mode.
Figure: Security Rings and Privileged mode
All current systems support at least two different execution modes: supervisor mode and
user mode. The first denotes an execution mode in which all instructions can be executed
without restriction. This mode, also called master mode or kernel mode, is generally used by
the operating system (or the hypervisor) to perform sensitive operations on hardware-level
resources. In user mode, access to machine-level resources is restricted.
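The interplay of the two modes can be illustrated with a toy model. Nonprivileged instructions run in any mode, while a privileged instruction issued in user mode raises a trap — the very event a hypervisor relies on to intercept sensitive operations. The instruction names below are invented for illustration:

```python
# Sketch: a toy CPU with supervisor and user execution modes.
# Privileged instructions succeed only in supervisor mode (Ring 0);
# in user mode (Ring 3) they raise a trap instead of executing.

PRIVILEGED = {"halt", "set_page_table", "enable_interrupts"}  # illustrative names

class Trap(Exception):
    """Raised when a privileged instruction is issued in user mode."""

class ToyCPU:
    def __init__(self):
        self.mode = "user"  # start in the least privileged mode

    def execute(self, instruction):
        if instruction in PRIVILEGED and self.mode != "supervisor":
            raise Trap(f"privileged instruction '{instruction}' in user mode")
        return f"executed {instruction}"

cpu = ToyCPU()
print(cpu.execute("add"))        # nonprivileged: runs in any mode
try:
    cpu.execute("halt")          # privileged: traps in user mode
except Trap as t:
    print("trap:", t)
cpu.mode = "supervisor"          # kernel/hypervisor context
print(cpu.execute("halt"))       # now permitted
```

On real hardware the trap transfers control to the kernel or hypervisor, which decides how to handle the offending instruction.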
Hardware-level virtualization
In this model, the guest is represented by the operating system, the host by the physical computer
hardware, the virtual machine by its emulation, and the virtual machine manager by the
hypervisor (see Figure). Hardware-level virtualization is also called system virtualization.
Figure: A hardware virtualization reference model
It recreates a hardware environment in which guest operating systems are installed. There are
two major types of hypervisor: Type I and Type II (see Figure).
Type I hypervisors run directly on top of the hardware. Therefore, they take the place of the
operating system and interact directly with the ISA interface exposed by the underlying
hardware, and they emulate this interface in order to allow the management of guest operating
systems. This type of hypervisor is also called a native virtual machine since it runs natively on
hardware.
Type II hypervisors require the support of an operating system to provide virtualization
services. They run as programs managed by a host operating system, which interacts with the
hardware on their behalf; for this reason, they are also called hosted virtual machines.
A virtual machine manager is internally organized as described in Figure 2.8. Three main
modules, dispatcher, allocator, and interpreter, coordinate their activity in order to emulate the
underlying hardware. The dispatcher constitutes the entry point of the monitor and reroutes the
instructions issued by the virtual machine instance to one of the two other modules. The allocator
is responsible for deciding the system resources to be provided to the VM: whenever a virtual
machine tries to execute an instruction that results in changing the machine resources associated
with that VM, the allocator is invoked by the dispatcher. The interpreter module consists of
interpreter routines. These are executed whenever a virtual machine executes a privileged
instruction: a trap is triggered and the corresponding routine is executed.
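The three-module organization described above can be sketched in code. In this minimal model (module and instruction names are illustrative, not from any real hypervisor), the dispatcher is the entry point that routes each trapped instruction either to the allocator, when it would change the resources assigned to the VM, or to an interpreter routine for other privileged instructions:

```python
# Sketch of the dispatcher/allocator/interpreter organization of a VMM.
# Instruction names and return values are illustrative only.

RESOURCE_CHANGING = {"set_memory_limit", "attach_disk"}  # handled by allocator
PRIVILEGED = {"halt", "io_out"}                          # handled by interpreter

class VMM:
    def __init__(self):
        self.log = []  # record of which module handled which instruction

    def dispatcher(self, vm_id, instruction):
        """Entry point of the monitor: reroutes each trapped instruction."""
        if instruction in RESOURCE_CHANGING:
            return self.allocator(vm_id, instruction)
        if instruction in PRIVILEGED:
            return self.interpreter(vm_id, instruction)
        return "execute directly"  # nonprivileged: no VMM intervention needed

    def allocator(self, vm_id, instruction):
        """Decides the system resources to be provided to the VM."""
        self.log.append((vm_id, "allocator", instruction))
        return "resources reassigned"

    def interpreter(self, vm_id, instruction):
        """Interpreter routine run when a privileged instruction traps."""
        self.log.append((vm_id, "interpreter", instruction))
        return "emulated on behalf of the guest"

vmm = VMM()
print(vmm.dispatcher("vm0", "add"))          # execute directly
print(vmm.dispatcher("vm0", "attach_disk"))  # resources reassigned
print(vmm.dispatcher("vm0", "halt"))         # emulated on behalf of the guest
```

A real VMM performs this routing in response to hardware traps rather than explicit calls, but the division of responsibility among the three modules is the same.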
Three properties of a virtual machine manager have to be satisfied:
• Equivalence. A guest running under the control of a virtual machine manager should exhibit the
same behavior as when it is executed directly on the physical host.
• Resource control. The virtual machine manager should be in complete control of virtualized
resources.