
Virtualization

Module 1- Unit 2
Contents
• Introduction and benefits of Virtualization
• Implementation levels of Virtualization

What is virtualization

Traditional vs Virtual

Virtualization Architecture

Hypervisor

Virtual Machines

Properties of Virtual Machines

Benefits of Virtualization

Implementation levels of virtualization

VMM Design Requirements and Providers

• There are three requirements for a VMM.
• First, a VMM should provide an environment for programs that is essentially
identical to the original machine.
• Second, programs run in this environment should show, at worst, only minor
decreases in speed.
• Third, a VMM should be in complete control of the system resources. Any
program run under a VMM should behave exactly as it would when run directly
on the original machine.
• Two exceptions to this requirement are permitted: differences caused by the
availability of system resources and differences caused by timing dependencies.
The former arises when more than one VM is running on the same machine.

Hardware level Virtualization

• Hardware virtualization is the method used to create virtual versions of
physical desktops and operating systems.
• It uses a virtual machine manager (VMM), called a hypervisor, to provide
abstracted hardware to multiple guest operating systems, which can then share the
physical hardware resources more efficiently.
• Hardware virtualization offers many benefits, such as better performance and
lower costs.

What are the components of hardware virtualization?
• The hardware layer, or virtualization host, contains the physical server components such as CPU,
memory, network, and disk drives. This is the physical hardware on which virtualization takes
place. It requires an x86-based system with one or more CPUs to run all supported guest operating
systems.
• The hypervisor creates a virtualization layer that runs between the OS and the server hardware,
allowing many instances of an operating system or different operating systems to run in parallel on
a single machine. Hypervisors isolate the underlying computer hardware (the host machine) from
the virtual machines that use its resources.
• Virtual machines are software emulations of a computing hardware environment and provide the
functionalities of a physical computer. Virtual machines themselves consist of virtual hardware, a
guest operating system, and guest software or applications.

How does hardware virtualization work?
• Hardware virtualization enables a single physical machine to function as multiple
machines by creating simulated environments.
• The physical host uses software called a hypervisor that creates an abstraction layer
between the software and hardware and manages the shared physical hardware resources
between the guest and host operating systems.
• The hypervisor connects directly to the hardware and enables it to be split into multiple
distinct environments or virtual machines. These VMs use the resources of the physical
host, including CPU, memory, and storage, which are allocated to the guests as needed.
• When done for server platforms, hardware virtualization is called server virtualization.
Hardware virtualization makes it possible to use a physical machine’s full capacity and,
by isolating VMs from one another, to protect against malware.

Properties of hardware level virtualization

• It allows multiple OSes and applications to run simultaneously
• It requires no system reboot or dual-boot setup
• It gives the appearance of having multiple separate machines, each of
which can be used as a normal system
• The degree of isolation is high
• Implementation is less risky and maintenance is easy

Issues in hardware level virtualization
• A lot of time is spent on installation and administration of the virtual
system before testing or running the applications
• When the physical and virtual OS are the same, this kind of virtualization
duplicates effort and reduces efficiency
• To eliminate these issues, virtualization can be implemented at the OS level

Virtualization at OS level
• It involves sharing both the hardware and the base OS
• The physical machine is separated from the logical structure by a separate
virtualization layer that can be compared with a VMM.
• This layer is built on top of the base OS to give the user access to multiple
machines, each isolated from the others and running independently
• This level of support is known as middleware support for virtualization
• OS-level virtualization keeps the OS, application-specific data structures,
user-level libraries, environment settings, and other requisites separate.
• Thus, an application is unable to distinguish between the real and virtual
environments
• The key idea is that the virtual environment remains indistinguishable from
the real one.
• Virtualization replicates the operating environment established on the physical
machine to provide a virtual environment (VE) for applications, creating a
partition for each virtual system whenever demanded.
• Operating environments are separated from the physical machine as well as from
each other.

Virtualization structure

• Virtualization is achieved through software called a VMM or hypervisor
• This software is used in two ways, forming two different structures:
• Hosted virtualization
• Bare-metal virtualization

Hosted Virtualization vs Bare-Metal Virtualization

Hosted Virtualization

Hosted Virtualization
• In this architecture, a base operating system (such as Windows) is first installed. A piece of
software called a hypervisor or virtual machine monitor (VMM) is installed on top of the host OS,
and allows users to run various guest operating systems within their own application windows.
• In the hosted virtualization architecture, each virtual machine (guest operating system) commonly
only has access to a limited subset of I/O devices. The host operating system retains ownership of
the physical I/O connected to a given computer, and the virtual machine monitor (VMM) provides
an emulated view of the actual hardware (when possible) to each virtual machine (VM). Because
the VMM does not have knowledge of most non-generic I/O devices such as PCI data acquisition
cards, it does not present these emulated devices to VMs. Only generic devices like network
interface cards and CD-ROM drives are emulated.
• The key point to remember is that I/O requests are ultimately passed through the host OS in a
hosted virtualization architecture.
• One benefit of using a hosted virtualization architecture is ease of installation and configuration.
For example, the VMware Workstation software can be set up in minutes by running a basic
installer in Windows. Once installed, an engineer can create several virtual machines that run
different operating systems – all on the same physical computer. In addition, VMMs that use
hosted virtualization commonly run on a wide variety of PCs. Since a host operating system
provides drivers for communicating with low-level hardware, VMM software can be installed on
most computers without customization.
• As mentioned above, hosted virtualization architectures are not capable of
emulating or providing passthrough to many PCI I/O devices. In addition, since
I/O requests from virtual machines must be directed through a host OS,
performance can be degraded. Another drawback to hosted virtualization is the
lack of support for real-time operating systems. Because the underlying host OS
dictates scheduling amongst its applications and the VMM, it is usually not
possible to run a real-time OS inside of a VM deterministically when using hosted
virtualization.
• In light of the benefits that hosted virtualization provides, it is commonly used for
testing beta software (to eliminate the need for a dedicated testing machine), or
to run legacy applications. Hosted virtualization also provides quick support for
running different operating systems on one PC, which can be useful for engineers
who need frequent access to various applications written for these different
operating systems.

Bare-Metal Virtualization
• The second common architecture for virtualization is bare-metal. In this architecture, a
VMM (also called hypervisor) is installed that communicates directly with system
hardware rather than relying on a host operating system. See the illustration below for a
graphical view of this architecture.
• Bare-metal virtualization solutions provide a number of options for I/O access from
virtual machines. The reader should note that because bare-metal virtualization does not
rely on a host operating system, a hypervisor using this architecture can communicate
with I/O devices directly. For I/O devices to be shared between virtual machines (e.g.,
Ethernet adapters, hard drives), the hypervisor software must contain a low-level driver to
communicate with the device. It must also be able to emulate each shared device for
guest virtual machines.
• In addition to the improved I/O performance possible when partitioning devices such as
PCI data acquisition boards between individual VMs, bare-metal virtualization
architectures have the benefit of supporting real-time operating systems. Since they do
not rely on an underlying host operating system, bare-metal hypervisors can implement
features to bound interrupt latency and enable deterministic performance.

• This means that engineers using bare-metal virtualization can run real-time and
general purpose operating systems in parallel on the same processing hardware.
Bare-metal virtualization does, however, have some drawbacks that should be
considered. Any drivers needed to support various hardware platforms must be
included in the hypervisor, in addition to drivers for devices that will be shared
amongst virtual machines. Furthermore, since bare-metal hypervisors are not
installed on top of a host OS, they are typically more difficult to install and
configure than a hosted solution.
• The low-level nature of bare-metal hypervisors and the I/O access they provide
makes them useful for deployed applications that use multiple operating systems.
Specifically, applications that must provide real-time data processing and provide
access to general purpose OS services (such as a graphical user interface) can
benefit from bare-metal virtualization.

Binary Translation with Full Virtualization

• Depending on the implementation technology, hardware virtualization can be classified into two
categories: full virtualization and host-based virtualization.
• Full virtualization does not need to modify the guest OS. It relies on binary translation to trap and
to virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and
their applications consist of noncritical and critical instructions.
• With full virtualization, noncritical instructions run on the hardware directly, while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
• Only critical instructions are trapped into the VMM, because binary-translating every
instruction would incur a large performance overhead.
• Noncritical instructions do not control hardware or threaten the security of the system, but critical
instructions do.
• Therefore, running noncritical instructions on hardware not only promotes efficiency, but also
ensures system security.

• This approach was implemented by VMware and many other software companies. As shown in
Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive instructions. When
these instructions are identified, they are trapped into the VMM, which emulates the behavior of
these instructions. The method used in this emulation is called binary translation. Therefore, full
virtualization combines binary translation and direct execution. The guest OS is completely
decoupled from the underlying hardware. Consequently, the guest OS is unaware that it is being
virtualized.
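
To make the scan-and-patch idea concrete, here is a toy C sketch. It is not VMware's actual translator: the one-byte instruction set, opcode values, and emulation handler are all invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy one-byte ISA, invented for this sketch only. */
#define OP_ADD   0x01  /* noncritical: runs directly on hardware  */
#define OP_SMSW  0xEE  /* sensitive: must not run unvirtualized   */
#define OP_TRAP  0xCC  /* trap opcode the translator patches in   */

/* Binary translation pass: scan the guest's code and replace every
 * sensitive instruction with a trap into the VMM. Noncritical
 * instructions are left untouched so they execute at native speed. */
static void translate(uint8_t *code, size_t len, uint8_t *orig) {
    for (size_t i = 0; i < len; i++) {
        orig[i] = code[i];
        if (code[i] == OP_SMSW)
            code[i] = OP_TRAP;
    }
}

/* When a trap fires at runtime, the VMM emulates the original
 * instruction in software instead of letting it touch real state. */
static void vmm_emulate(uint8_t op, size_t pc) {
    if (op == OP_SMSW)
        printf("VMM: emulating sensitive instruction at %zu\n", pc);
}

int main(void) {
    uint8_t guest[] = { OP_ADD, OP_SMSW, OP_ADD };
    uint8_t orig[sizeof guest];
    translate(guest, sizeof guest, orig);

    for (size_t pc = 0; pc < sizeof guest; pc++) {
        if (guest[pc] == OP_TRAP)
            vmm_emulate(orig[pc], pc);  /* critical path: trap to VMM */
        /* else: direct execution on the hardware */
    }
    return 0;
}
```

The point is the division of labor: noncritical instructions execute untouched, while the few critical ones are rewritten once so that every later execution traps into the VMM.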

• Hypervisor mode
• The x86 family of CPUs provides a range of protection levels, also known as rings, in which
code can execute. Ring 0 has the highest privilege, and it is in this ring that the
operating system kernel normally runs. Code executing in ring 0 is said to be running in
system space, kernel mode, or supervisor mode. All other code, such as applications
running on the operating system, operates in less privileged rings, typically ring 3.
• Under hypervisor virtualization a program known as a hypervisor (also known as a type 1
Virtual Machine Monitor or VMM) runs directly on the hardware of the host system in
ring 0. The task of this hypervisor is to handle resource and memory allocation for the
virtual machines in addition to providing interfaces for higher level administration and
monitoring tools.
• Clearly, with the hypervisor occupying ring 0 of the CPU, the kernels for any guest
operating systems running on the system must run in less privileged CPU rings. 

• Unfortunately, most operating system kernels are written explicitly to run in
ring 0 for the simple reason that they need to perform tasks that are only
available in that ring, such as the ability to execute privileged CPU
instructions and directly manipulate memory.
• A number of different solutions to this problem have been devised in recent
years; one of them, para-virtualization, is described next.

Para-Virtualization with Compiler Support
• Para-virtualization needs to modify the guest operating system’s kernel.
• A para-virtualized VM provides special APIs, the use of which requires substantial
modifications to user applications.
• Performance degradation is a critical issue for a virtualized system. No one wants to use a VM if
it is much slower than a physical machine.
• The virtualization layer can be inserted at different positions in a machine’s software stack.
Para-virtualization attempts to reduce the virtualization overhead, and thus improve
performance, by modifying only the guest OS kernel.
• Para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate
directly with the hypervisor or VMM. However, once the guest OS kernel is modified for
virtualization, it can no longer run on the hardware directly (refer to the diagram on the next slide).
• Nonvirtualizable instructions: these can modify resource state without the VMM seeing and
handling it, which is dangerous. Alternatively, such an instruction executed in user mode within
the guest operating system has a different effect than when executed in system mode.
• Para-virtualization typically involves replacing any privileged operations that can only run in
ring 0 of the CPU with calls to the hypervisor (known as hypercalls). The hypervisor in turn
performs the task on behalf of the guest kernel.
Para-Virtualization with Compiler Support

• Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive
instructions at runtime, para-virtualization handles these instructions at compile time.
• The guest OS kernel is modified to replace the privileged and sensitive instructions with
hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture.
• The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies that
the guest OS may not be able to execute some privileged and sensitive instructions.
• The privileged instructions are implemented by hypercalls to the hypervisor. After replacing the
instructions with hypercalls, the modified guest OS emulates the behavior of the original guest OS.
On a UNIX system, a system call involves an interrupt or service routine. The hypercalls apply a
dedicated service routine in Xen.
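
The sketch below models this substitution in plain C. Everything in it is invented for illustration (the hypercall numbers, the dispatcher, and the guest function); a real para-virtualized kernel on Xen would enter the hypervisor through a trap or a hypercall page, not a direct function call.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hypercall numbers, invented for this sketch. */
enum hypercall { HC_SET_PAGE_TABLE = 1, HC_DISABLE_IRQS = 2 };

/* Stand-in for the hypervisor's hypercall entry point. In a real
 * system this is a trap into ring 0; here it is just a function. */
static long hypervisor_dispatch(enum hypercall nr, uint64_t arg) {
    switch (nr) {
    case HC_SET_PAGE_TABLE:
        printf("hypervisor: validating and installing page table %#llx\n",
               (unsigned long long)arg);
        return 0;
    case HC_DISABLE_IRQS:
        printf("hypervisor: masking virtual interrupts for this VM\n");
        return 0;
    }
    return -1;
}

/* Para-virtualized guest kernel code: instead of executing a privileged
 * page-table switch directly (which would fault or silently misbehave
 * outside ring 0), the modified kernel asks the hypervisor to do it. */
static void guest_switch_address_space(uint64_t new_page_table) {
    hypervisor_dispatch(HC_SET_PAGE_TABLE, new_page_table);
}

int main(void) {
    guest_switch_address_space(0x1000);
    return 0;
}
```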

Virtualization of CPU
• A VM is a duplicate of an existing computer system in which a majority of the VM
instructions are executed on the host processor in native mode.
• Thus, unprivileged instructions of VMs run directly on the host machine for higher
efficiency. Other critical instructions should be handled carefully for correctness and
stability.
• The critical instructions are divided into three categories: privileged
instructions, control-sensitive instructions, and behavior-sensitive instructions.
• Privileged instructions execute in a privileged mode and will be trapped if executed
outside this mode.
• Control-sensitive instructions attempt to change the configuration of resources used.
• Behavior-sensitive instructions have different behaviors depending on the configuration
of resources, including the load and store operations over the virtual memory.
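
A small, self-contained C sketch of the three categories follows; the mnemonic-to-category mapping is a rough illustration rather than an authoritative x86 table.

```c
#include <stdio.h>
#include <string.h>

/* The three categories of critical instructions described above. */
enum critical_kind {
    NOT_CRITICAL,
    PRIVILEGED,         /* traps if executed outside privileged mode  */
    CONTROL_SENSITIVE,  /* attempts to change resource configuration  */
    BEHAVIOR_SENSITIVE, /* behavior depends on resource configuration */
};

/* Rough, illustrative mapping of a few x86 mnemonics to categories. */
static enum critical_kind classify(const char *insn) {
    if (strcmp(insn, "LIDT") == 0)    return PRIVILEGED;         /* loads IDT       */
    if (strcmp(insn, "MOV_CR3") == 0) return CONTROL_SENSITIVE;  /* switches paging */
    if (strcmp(insn, "SMSW") == 0)    return BEHAVIOR_SENSITIVE; /* reads CR0 state */
    return NOT_CRITICAL;
}

int main(void) {
    const char *stream[] = { "ADD", "SMSW", "MOV_CR3", "LIDT" };
    for (int i = 0; i < 4; i++)
        printf("%-8s -> category %d\n", stream[i], classify(stream[i]));
    return 0;
}
```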

• A CPU architecture is virtualizable if it supports the ability to run the VM’s
privileged and unprivileged instructions in the CPU’s user mode while the VMM
runs in supervisor mode.
• When the privileged instructions including control- and behavior-sensitive
instructions of a VM are executed, they are trapped in the VMM.
• In this case, the VMM acts as a unified mediator for hardware access from
different VMs to guarantee the correctness and stability of the whole system.
• However, not all CPU architectures are virtualizable. RISC CPU architectures can
be naturally virtualized because all control and behavior-sensitive instructions are
privileged instructions.
• On the contrary, x86 CPU architectures are not primarily designed to support
virtualization. This is because about 10 sensitive instructions, such
as SGDT and SMSW, are not privileged instructions. When these instructions
execute in virtualization, they cannot be trapped in the VMM.

Hardware Assisted CPU Virtualization
• This technique attempts to simplify virtualization because full or paravirtualization is complicated.
• Intel and AMD added an extra privilege level (often called Ring -1) to x86 processors.
Therefore, operating systems can still run at Ring 0 while the hypervisor runs at
Ring -1.
• All the privileged and sensitive instructions are trapped in the hypervisor automatically. This
technique removes the difficulty of implementing binary translation of full virtualization. It also
lets the operating system run in VMs without modification.
• Generally, hardware-assisted virtualization should have high efficiency. However, since the
transition from the hypervisor to the guest OS incurs high overhead switches between processor
modes, it sometimes cannot outperform binary translation.
• Hence, virtualization systems such as VMware now use a hybrid approach, in which a few tasks
are offloaded to the hardware but the rest is still done in software. In addition, para-virtualization
and hardware-assisted virtualization can be combined to improve the performance further.
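
As a concrete sketch of hardware-assisted CPU virtualization, the minimal user-space VMM below uses the Linux KVM API, which exposes Intel VT-x / AMD-V. It assumes a Linux host with /dev/kvm available and omits all error handling. The guest is a single hlt instruction; the processor itself forces the exit back to the VMM, with no binary translation and no guest modification.

```c
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

    /* Guest "OS": a single privileged hlt instruction. */
    uint8_t code[] = { 0xf4 };
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof code);
    struct kvm_userspace_memory_region region = {
        .slot = 0, .guest_phys_addr = 0x1000,
        .memory_size = 0x1000, .userspace_addr = (uint64_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
    struct kvm_run *run = mmap(NULL, ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0UL),
                               PROT_READ | PROT_WRITE, MAP_SHARED, vcpufd, 0);

    /* Point the vCPU at the guest code (16-bit real mode). */
    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    /* The CPU runs the guest directly until a sensitive event -- here
     * the hlt -- forces a hardware-assisted exit back to the VMM. */
    ioctl(vcpufd, KVM_RUN, 0UL);
    if (run->exit_reason == KVM_EXIT_HLT)
        printf("guest hlt trapped: control returned to the VMM\n");
    return 0;
}
```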

Memory Virtualization
• Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems.
• In a traditional execution environment, the operating system maintains mappings of virtual
memory to machine memory using page tables, which is a one-stage mapping from virtual memory to
machine memory.
• All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer
(TLB) to optimize virtual memory performance.
• However, in a virtual execution environment, virtual memory virtualization involves sharing the physical
system memory in RAM and dynamically allocating it to the physical memory of the VMs.
• That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
• Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses of
VMs.
• But the guest OS cannot directly access the actual machine memory. The VMM is responsible for
mapping the guest physical memory to the actual machine memory.

• Since each page table of the guest OSes has a separate page table in the VMM corresponding to it,
the VMM page table is called the shadow page table.
• Nested page tables add another layer of indirection to virtual memory. The MMU already handles
virtual-to-physical translations as defined by the OS. Then the physical memory addresses are
translated to machine addresses using another set of page tables defined by the hypervisor. Since
modern operating systems maintain a set of page tables for every process, the shadow page tables
will get flooded. Consequently, the performance overhead and cost of memory will be very high.
• VMware uses shadow page tables to perform virtual-memory-to-machine-memory address
translation.
• Processors use TLB hardware to map the virtual memory directly to the machine memory to avoid
the two levels of translation on every access.
• When the guest OS changes the virtual memory to a physical memory mapping, the VMM updates
the shadow page tables to enable a direct lookup. The AMD Barcelona processor has featured
hardware-assisted memory virtualization since 2007. It provides hardware assistance to the two-
stage address translation in a virtual execution environment by using a technology called nested
paging.
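
A toy C model of the two-stage mapping and the shadow table that collapses it: the page numbers and table sizes are invented, and a real MMU walks multi-level page tables while the VMM intercepts guest page-table writes via traps.

```c
#include <stdio.h>

#define PAGES 4

/* Stage 1: guest page table, maintained by the guest OS
 * (guest-virtual page -> guest-physical page). */
static int guest_pt[PAGES]   = { 2, 0, 3, 1 };

/* Stage 2: VMM table (guest-physical page -> machine page). */
static int machine_pt[PAGES] = { 7, 5, 6, 4 };

/* Shadow page table: the VMM precomposes both stages so hardware can
 * translate guest-virtual -> machine in a single lookup. */
static int shadow_pt[PAGES];

static void rebuild_shadow(void) {
    for (int v = 0; v < PAGES; v++)
        shadow_pt[v] = machine_pt[guest_pt[v]];
}

int main(void) {
    rebuild_shadow();
    for (int v = 0; v < PAGES; v++)
        printf("guest-virtual %d -> guest-physical %d -> machine %d "
               "(shadow: %d)\n",
               v, guest_pt[v], machine_pt[guest_pt[v]], shadow_pt[v]);

    /* When the guest OS edits guest_pt, the VMM must intercept the
     * change and rebuild the affected shadow entries -- the overhead
     * that hardware-assisted nested paging removes. */
    guest_pt[1] = 3;
    rebuild_shadow();
    return 0;
}
```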

I/O Virtualization

• I/O virtualization involves managing the routing of I/O requests between virtual
devices and the shared physical hardware. At the time of this writing, there are
three ways to implement I/O virtualization: full device emulation, para-
virtualization, and direct I/O. Full device emulation is the first approach for I/O
virtualization. Generally, this approach emulates well-known, real-world devices.
• All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA, are replicated in software. This software is
located in the VMM and acts as a virtual device. The I/O access requests of the
guest OS are trapped in the VMM which interacts with the I/O devices. The full
device emulation approach is shown in Figure 3.14
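
Continuing the minimal KVM sketch from the hardware-assisted CPU virtualization section, the fragment below shows the trap-and-emulate loop of full device emulation. The "device" is hypothetical: a write-only serial port at I/O port 0x3f8 whose entire emulation is a putchar.

```c
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Run loop for the minimal KVM VMM sketched earlier: each guest I/O
 * access traps out of guest mode to this handler, where software plays
 * the role of the device (here, a byte-wide "serial port" at 0x3f8). */
static int vmm_run_loop(int vcpufd, struct kvm_run *run) {
    for (;;) {
        ioctl(vcpufd, KVM_RUN, 0UL);
        switch (run->exit_reason) {
        case KVM_EXIT_IO:  /* guest executed in/out: emulate the device */
            if (run->io.direction == KVM_EXIT_IO_OUT && run->io.port == 0x3f8)
                putchar(*((char *)run + run->io.data_offset));
            break;
        case KVM_EXIT_HLT: /* guest halted: stop this vCPU */
            return 0;
        }
    }
}
```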

• A single hardware device can be shared by multiple VMs that run concurrently. However, software
emulation runs much slower than the hardware it emulates [10,15].
• The para-virtualization method of I/O virtualization is typically used in Xen. It is also known as
the split driver model consisting of a frontend driver and a backend driver.
• The frontend driver is running in Domain U and the backend driver is running in Domain 0. They
interact with each other via a block of shared memory.
• The frontend driver manages the I/O requests of the guest OSes and the backend driver is
responsible for managing the real I/O devices and multiplexing the I/O data of different VMs.
Although para-I/O-virtualization achieves better device performance than full device emulation, it
comes with a higher CPU overhead.
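
A minimal C sketch of the split driver idea: one shared-memory ring carrying requests from the frontend (Domain U) to the backend (Domain 0). The structure layout, slot count, and function names are invented; Xen's real implementation uses grant tables, ring macros, and event channels.

```c
#include <stdio.h>

#define RING_SLOTS 8

/* One block of memory shared between the frontend (Domain U) and the
 * backend (Domain 0). The frontend enqueues I/O requests; the backend
 * dequeues them, drives the real device, and could post responses via
 * a second, mirror-image ring. */
struct shared_ring {
    unsigned head;                /* next slot the backend consumes */
    unsigned tail;                /* next slot the frontend fills   */
    char     req[RING_SLOTS][32]; /* in-flight I/O request records  */
};

/* Frontend driver (guest side): queue a request for the backend. */
static int frontend_submit(struct shared_ring *r, const char *req) {
    if (r->tail - r->head == RING_SLOTS) return -1;  /* ring full */
    snprintf(r->req[r->tail % RING_SLOTS], 32, "%s", req);
    r->tail++;                    /* real code would notify Domain 0 */
    return 0;
}

/* Backend driver (Domain 0 side): service requests on the device. */
static void backend_service(struct shared_ring *r) {
    while (r->head != r->tail)
        printf("backend: issuing '%s' to the physical device\n",
               r->req[r->head++ % RING_SLOTS]);
}

int main(void) {
    struct shared_ring ring = { 0 };
    frontend_submit(&ring, "read block 42");
    frontend_submit(&ring, "write block 7");
    backend_service(&ring);
    return 0;
}
```

In a real system the two sides run in different domains and map the same machine pages; here they share one address space only to keep the sketch runnable.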

• Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs.
• However, current direct I/O virtualization implementations focus on networking for mainframes.
There are a lot of challenges for commodity hardware devices.
• For example, when a physical device is reclaimed (required by workload migration) for later
reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory
locations) that can function incorrectly or even crash the whole system. Since software-based I/O
virtualization requires a very high overhead of device emulation, hardware-assisted I/O
virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-
generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage
models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.

• Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key
idea of SV-IO is to harness the rich resources of a multicore processor. All tasks
associated with virtualizing an I/O device are encapsulated in SV-IO. It provides virtual
devices and an associated access API to VMs and a management API to the VMM. SV-IO
defines one virtual interface (VIF) for every kind of virtualized I/O device, such as virtual
network interfaces, virtual block devices (disk), virtual camera devices, and others. The
guest OS interacts with the VIFs via VIF device drivers. Each VIF consists of two message
queues. One is for outgoing messages to the devices and the other is for incoming messages
from the devices. In addition, each VIF has a unique ID for identifying it in SV-IO.
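
The VIF description maps naturally onto a small data structure. This is only a sketch; the field names and queue type are invented, not the SV-IO paper's definitions.

```c
#include <stdint.h>

/* A bounded queue of I/O messages; payload format is device-specific. */
struct msg_queue {
    uint32_t head, tail;
    void    *msgs[64];
};

/* One virtual interface (VIF) per virtualized device instance, as in
 * the SV-IO design: two message queues plus a unique identifier. */
struct vif {
    uint32_t         id;       /* unique ID identifying this VIF in SV-IO */
    enum { VIF_NET, VIF_BLOCK, VIF_CAMERA } kind;
    struct msg_queue outgoing; /* messages from the guest to the device   */
    struct msg_queue incoming; /* messages from the device to the guest   */
};
```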

Virtualization in Multicore processors
• Virtualizing a multi-core processor is relatively more complicated than virtualizing a unicore
processor. Though multicore processors are claimed to have higher performance by
integrating multiple processor cores in a single chip, multi-core virtualization has raised
some new challenges for computer architects, compiler constructors, system designers, and
application programmers. There are mainly two difficulties: application programs must
be parallelized to use all cores fully, and software must explicitly assign tasks to the cores,
which is a very complex problem.
• Concerning the first challenge, new programming models, languages, and libraries are
needed to make parallel programming easier. The second challenge has spawned research
involving scheduling algorithms and resource management policies. Yet these efforts
cannot balance well among performance, complexity, and other issues. What is worse, as
technology scales, a new challenge called dynamic heterogeneity is emerging, mixing
fat CPU cores and thin GPU cores on the same chip, which further complicates
multi-core or many-core resource management. The dynamic heterogeneity of hardware
infrastructure mainly comes from less reliable transistors and increased complexity in
using the transistors.

Physical vs Virtual Processor cores

• Wells, et al. [74] proposed a multicore virtualization method to allow hardware designers
to get an abstraction of the low-level details of the processor cores. This technique
alleviates the burden and inefficiency of managing hardware resources by software. It is
located under the ISA and remains unmodified by the operating system or VMM
(hypervisor). Figure 3.16 illustrates the technique of a software-visible VCPU moving
from one core to another and temporarily suspending execution of a VCPU when there
are no appropriate cores on which it can run.

Virtual Hierarchy

• The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape. Instead of
supporting time-sharing jobs on one or a few cores, we can use the abundant cores in a space-sharing manner, where
single-threaded or multithreaded jobs are simultaneously assigned to separate groups of cores for long time
intervals. This idea was originally suggested by Marty and Hill [39]. To optimize for space-shared workloads,
they propose using virtual hierarchies to overlay a coherence and caching hierarchy onto a physical
processor. Unlike a fixed physical hierarchy, a virtual hierarchy can adapt to fit how the work is space-shared
for improved performance and performance isolation.
• Today’s many-core CMPs use a physical hierarchy of two or more cache levels that statically determine the
cache allocation and mapping. A virtual hierarchy is a cache hierarchy that can adapt to fit the workload or
mix of workloads [39]. The hierarchy’s first level locates data blocks close to the cores needing them for
faster access, establishes a shared-cache domain, and establishes a point of coherence for faster
communication. When a miss leaves a tile, it first attempts to locate the block (or sharers) within the first
level. The first level can also provide isolation between independent workloads. A miss at the L1 cache can
invoke an L2 access.

• The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to three clusters of
virtual cores: namely VM0 and VM3 for database workload, VM1 and VM2 for web server workload, and
VM4–VM7 for middleware workload. The basic assumption is that each workload runs in its own VM.
However, space sharing applies equally within a single operating system. Statically distributing the directory
among tiles can do much better, provided operating systems or hypervisors carefully map virtual pages to
physical frames. Marty and Hill suggested a two-level virtual coherence and caching hierarchy that
harmonizes with the assignment of tiles to the virtual clusters of VMs.

• Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two
levels. Each VM operates in an isolated fashion at the first level. This minimizes
both miss access time and performance interference with other workloads or VMs.
Moreover, the shared resources of cache capacity, interconnect links, and miss
handling are mostly isolated between VMs. The second level maintains a globally
shared memory. This facilitates dynamically repartitioning resources without costly
cache flushes. Furthermore, maintaining globally shared memory minimizes
changes to existing system software and allows virtualization features such as
content-based page sharing. A virtual hierarchy adapts to space-shared workloads
like multiprogramming and server consolidation. Figure 3.17 shows a case study
focused on consolidated server workloads in a tiled architecture. This many-core
mapping scheme can also optimize for space-shared multiprogrammed workloads
in a single-OS environment.
• Operating system virtualization inserts a virtualization layer inside an operating
system to partition a machine’s physical resources.
• It enables multiple isolated VMs within a single operating system kernel. This
kind of VM is often called a virtual execution environment
(VE), Virtual Private System (VPS), or simply container.
• From the user’s point of view, VEs look like real servers. This means a VE has its
own set of processes, file system, user accounts, network interfaces with IP
addresses, routing tables, firewall rules, and other personal settings.
• Although VEs can be customized for different people, they share the same
operating system kernel. Therefore, OS-level virtualization is also called single-
OS image virtualization.

• This refers to an abstraction layer between the traditional OS and user applications.
• OS-level virtualization creates isolated containers on a single physical server and allows the OS
instances to utilize the hardware and software in data centers.
• The containers behave like real servers.
• OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware
resources among a large number of mutually distrusting users.
• It is also used, to a lesser extent, for consolidating server hardware by moving services on separate hosts into
containers or VMs on one server.

• With operating-system virtualization, or containerization, it is possible to run
programs within containers, to which only parts of these resources are allocated.
• A program that expects to perceive the whole computer, once run inside a
container, can only see the allocated resources and believes them to be all that is
available.
• Several containers can be formed on each operating system, and a subset of the
computer’s resources is allocated to each of them.
• Each container may include many computer programs. These programs may run
in parallel or separately, and may even interact with each other.
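
As a minimal illustration of this resource partitioning on Linux, containers are assembled from kernel namespaces. The sketch below (the container name is an example; real container runtimes add mount/network namespaces and cgroups for resource limits) starts a process that sees its own hostname and its own PID numbering. It must be run as root.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)
static char child_stack[STACK_SIZE];

/* Entry point of the "container": because of CLONE_NEWUTS and
 * CLONE_NEWPID it has a private hostname and is PID 1 in its own
 * process-ID namespace, invisible to other containers on the host. */
static int container_main(void *arg) {
    (void)arg;
    sethostname("container0", strlen("container0"));  /* example name */
    char host[64];
    gethostname(host, sizeof host);
    printf("inside: hostname=%s pid=%d\n", host, (int)getpid());
    return 0;
}

int main(void) {
    /* Creating new namespaces requires root (CAP_SYS_ADMIN). */
    pid_t pid = clone(container_main, child_stack + STACK_SIZE,
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    printf("outside: the host hostname and PID view are unchanged\n");
    return 0;
}
```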

Features of OS level virtualization

• Resource isolation: Operating system-based virtualization provides a high level of resource
isolation, which allows each container to have its own set of resources, including CPU, memory,
and I/O bandwidth.
• Lightweight: Containers are lightweight compared to traditional virtual machines, as they share the same host
operating system, resulting in faster startup and lower resource usage.
• Portability: Containers are highly portable, making it easy to move them from one environment to another without
needing to modify the underlying application.
• Scalability: Containers can be easily scaled up or down based on the application requirements, allowing applications to
be highly responsive to changes in demand.
• Security: Containers provide a high level of security by isolating the containerized application from the host operating
system and other containers running on the same system.
• Reduced overhead: Containers incur less overhead than traditional virtual machines, as they do not need to emulate a
full hardware environment.
• Easy management: Containers are easy to manage, as they can be started, stopped, and monitored using simple
commands.

• Advantages of Operating System-Based Virtualization:
• Resource efficiency: Operating system-based virtualization allows for greater resource efficiency,
as containers do not need to emulate a complete hardware environment, which reduces resource
overhead.
• High scalability: Containers can be quickly and easily scaled up or down depending on the
demand, which makes it easy to respond to changes in the workload.
• Easy management: Containers can be managed through simple commands, which makes it easy
to deploy and maintain large numbers of containers.
• Reduced costs: Operating system-based virtualization can significantly reduce costs, as it requires
fewer resources and less infrastructure than traditional virtual machines.
• Faster deployment: Containers can be deployed quickly, reducing the time required to launch
new applications or update existing ones.
• Portability: Containers are highly portable, making it easy to move them from one environment to
another without requiring changes to the underlying application.

• Disadvantages of Operating System-Based Virtualization:
• Security: Operating system-based virtualization may pose security risks as containers share
the same host operating system, which means that a security breach in one container could
potentially affect all other containers running on the same system.
• Limited Isolation: Containers may not provide complete isolation between applications,
which can lead to performance degradation or resource contention.
• Complexity: Operating system-based virtualization can be complex to set up and manage,
requiring specialized skills and knowledge.
• Dependency Issues: Containers may have dependency issues with other containers or the host
operating system, which can lead to compatibility issues and hinder deployment.
• Limited Hardware Access: Containers may have limited access to hardware resources, which
can limit their ability to perform certain tasks or applications that require direct hardware
access. 

Xen Hypervisor
• Xen is an open source hypervisor program developed at Cambridge University. Xen is a
micro-kernel hypervisor.
• A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable
components are outside the hypervisor.
• A monolithic hypervisor implements all the aforementioned functions, including those of the
device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller
than that of a monolithic hypervisor.
• Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated
for the deployed VM to use.
• Xen does not include any device drivers natively. It just provides a mechanism by which a guest
OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is
kept rather small. Xen provides a virtual environment located between the hardware and the OS.

• The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important.
• Like other virtualization systems, many guest OSes can run on top of the hypervisor.
• However, not all guest OSes are created equal, and one in particular controls the others.
• The guest OS, which has control ability, is called Domain 0, and the others are called Domain U.
• Domain 0 is a privileged guest OS of Xen.
• It is first loaded when Xen boots without any file system drivers being available.
• Domain 0 is designed to access hardware directly and manage devices.
• Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for
the guest domains (the Domain U domains).

Xen terminology

Xen Architecture

• For example, Xen is based on Linux and its security level is C2. Its management VM is named Domain 0,
which has the privilege to manage other VMs implemented on the same host. If Domain 0 is compromised,
the hacker can control the entire system. So, in the VM system, security policies are needed to improve the
security of Domain 0. Domain 0, behaving as a VMM, allows users to create, copy, save, read, modify, share,
migrate, and roll back VMs as easily as manipulating a file, which flexibly provides tremendous benefits for
users. Unfortunately, it also brings a series of security problems during the software life cycle and data
lifetime.
