
Unit 3: Virtualization Technology

Introduction
In computing, virtualization (or virtualisation) is the act of creating a virtual (rather than actual)
version of something, including virtual computer hardware platforms, storage devices, and
computer network resources. Virtualization began in the 1960s as a method of logically dividing
the system resources provided by mainframe computers between different applications. Since
then, the meaning of the term has broadened. Virtualization technology has transformed
hardware into software: it allows multiple Operating Systems (OSs) to run as virtual machines
(Figure 1). Each copy of an operating system is installed into a virtual machine.

Figure 1: Virtualization Scenario

Figure 1 shows a scenario in which a VMware hypervisor, also called a Virtual Machine
Manager (VMM), is installed on a physical device. On top of that layer, six OSs run multiple
applications; these can all be the same kind of OS or different kinds.

Why Virtualize
1. Share same hardware among independent users- Degrees of Hardware parallelism increases.
2. Reduced Hardware footprint through consolidation- Eases management and energy usage.
3. Sandbox/migrate applications- Flexible allocation and utilization.
4. Decouple applications from underlying Hardware- Allows Hardware upgrades without
impacting an OS image.

Virtualization makes sharing of resources much easier and increases the degree of
hardware-level parallelism: the same physical hardware is shared among independent units.
If multiple OSs run on the same physical hardware, different users can work on different OSs,
giving much more processing capability overall. Consolidating VMs also reduces the hardware
footprint: less hardware is wasted, management becomes easier, and less energy is consumed
than if a large number of separate hardware machines had been purchased. Virtualization also
provides sandboxing capabilities and allows applications to be migrated, which enables
flexible allocation and utilization of resources. Finally, decoupling applications from the
underlying hardware allows hardware upgrades without impacting any particular OS image.
Virtualization raises abstraction. Abstraction means hiding inner details from a user, and
virtualization enhances this capability. It is similar to how virtual memory operates: to give
access to a larger address space, the physical memory mapping is hidden by the OS using
paging. It is also similar to hardware emulators, where code for one architecture is allowed to
run on a different physical device; virtual devices such as the central processing unit, memory,
or network interface cards are presented instead. No attention to the hardware details of a
particular machine is required, and this confinement of hardware details is what raises the
level of abstraction through virtualization.
There are certain requirements for virtualization. First is the efficiency property: all
innocuous instructions are executed directly by the hardware. The resource control property
means that it must be impossible for programs to directly affect system resources. Finally,
the equivalence property states that a program running under a virtual machine manager
(hypervisor) performs in a manner indistinguishable from the same program running directly
on the hardware.

Before and After Virtualization


Before virtualization, a single physical infrastructure was used to run a single OS and its
applications, which resulted in underutilization of resources (Figure 2). The nonshared nature
of the hardware forced organizations to buy new hardware to meet their additional
computing needs. For example, if an organization wanted to experiment with or simulate a new
idea, it had to use separate dedicated systems for different experiments. So, to complete
their research work successfully, organizations tended to buy new hardware, increasing both
CapEx and OpEx. Sometimes, an organization that could not invest more in additional
resources was unable to carry out valuable experiments for lack of resources. So, people
started thinking about sharing a single infrastructure for multiple purposes, in the form of
virtualization.

Figure 2: Before Virtualization


After virtualization was introduced, different OSs and applications were able to share a single
physical infrastructure (Figure 3). Virtualization reduces the huge amounts invested in
buying additional resources and has become a key driver in the IT industry, especially in
cloud computing. Note that cloud computing and virtualization are not the same; there are
significant differences between the two technologies.
Virtual Machine (VM): A VM involves an isolated guest OS installation within a normal host
OS. From the user's perspective, a VM is a software platform that, like a physical computer,
runs OSs and applications. VMs possess hardware virtually.

Figure 3: Post Virtualization Scenario

Factors Driving the Need of Virtualization

Increased Performance and Computing Capacity: PCs today have immense computing power.
The average end-user desktop PC is powerful enough to meet almost all the needs of everyday
computing, with extra capacity that is rarely used. Almost all of these PCs have resources
enough to host a VMM and execute a VM with acceptable performance. The same consideration
applies to the high-end side of the PC market, where supercomputers can provide immense
compute power that can accommodate the execution of hundreds or thousands of VMs.


Underutilized Hardware and Software Resources: Hardware and software underutilization
occurs for two reasons: increased performance and computing capacity, and limited or sporadic
use of resources. Computers today are so powerful that in most cases only a fraction of their
capacity is used by an application or the system. Moreover, considering the IT infrastructure
of an enterprise, many computers are only partially utilized even though they could be used
without interruption on a 24/7/365 basis. For example, desktop PCs devoted mostly to office
automation tasks and used by administrative staff are busy only during work hours, remaining
completely unused overnight. Using these resources for other purposes after hours could
improve the efficiency of the IT infrastructure. To transparently provide such a service, it
would be necessary to deploy a completely separate environment, which can be achieved
through virtualization.
Lack of Space: The continuous need for additional capacity, whether storage or compute power,
makes data centers grow quickly. Companies such as Google and Microsoft expand their
infrastructures by building data centers as large as football fields that are able to host
thousands of nodes. Although this is viable for IT giants, in most cases enterprises cannot
afford to build another data center to accommodate additional resource capacity. This
condition, along with hardware underutilization, has led to the diffusion of a technique called
server consolidation, for which virtualization technologies are fundamental.
Greening Initiatives: Recently, companies are increasingly looking for ways to reduce the amount
of energy they consume and to reduce their carbon footprint. Data centers are one of the major
power consumers; they contribute consistently to the impact that a company has on the
environment. Maintaining a data center operation not only involves keeping servers on, but a great
deal of energy is also consumed in keeping them cool. Infrastructures for cooling have a significant
impact on the carbon footprint of a data center. Hence, reducing the number of servers through
server consolidation will definitely reduce the impact of cooling and power consumption of a data
center. Virtualization technologies can provide an efficient way of consolidating servers.
Rise of Administrative Costs: Power consumption and cooling costs have now become higher
than the cost of the IT equipment itself. Moreover, the increased demand for additional
capacity, which translates into more servers in a data center, is also responsible for a
significant increase in administrative costs. Computers, in particular servers, do not operate
all on their own; they require care and feeding from system administrators. Common system
administration tasks include hardware monitoring, defective hardware replacement, server
setup and updates, server resource monitoring, and backups. These are labor-intensive
operations, and the higher the number of servers that have to be managed, the higher the
administrative costs. Virtualization can help reduce the number of servers required for a
given workload, thus reducing the cost of the administrative personnel.


Features of Virtualization
Virtualization Raises Abstraction
o Similar to Virtual Memory: To access a larger address space, the physical memory
mapping is hidden by the OS using paging.
o Similar to Hardware Emulators: Code for one architecture is allowed to run on a different
physical device; virtual devices such as the CPU, memory, NIC, etc. are presented.
o No need to worry about the physical hardware details.
Virtualization Requirements
o Efficiency Property: All innocuous instructions are executed directly by the hardware.
o Resource Control Property: It must be impossible for programs to directly affect
system resources.
o Equivalence Property: A program running under a VMM performs in a manner
indistinguishable from one running directly on hardware, except for timing and resource
availability.

Virtualized Environments
Virtualization is a broad concept that refers to the creation of a virtual version of
something, whether hardware, a software environment, storage, or a network. In a
virtualized environment, there are three major components (Figure 4):

o Guest: Represents the system component that interacts with the virtualization layer
rather than with the host, as would normally happen.
o Host: Represents the original environment where the guest is supposed to be managed.
o Virtualization Layer: Responsible for recreating the same or a different environment
where the guest will operate.
Figure 4: Virtualized Environment

In the case of hardware virtualization, the guest is represented by a system image comprising
an OS and installed applications. These are installed on top of virtual hardware that is
controlled and managed by the virtualization layer, also called the VMM. The host is instead
represented by the physical hardware, and in some cases the OS, that defines the environment
where the VMM is running. In the case of network virtualization, applications and users
interact with a virtual network, such as a virtual private network (VPN), which is managed by
specific software (a VPN client) using the physical network available on the node. VPNs are
useful for creating the illusion of being within a different physical network and thus
accessing resources in it that would otherwise not be available. The virtual environment is
created by means of a software program. The ability to use software to emulate a wide variety
of environments creates a lot of opportunities that were previously less attractive because of
the excessive overhead introduced by the virtualization layer.

How Does Virtualization Work


To virtualize an infrastructure, a virtualization layer is installed, using either a bare-metal
or a hosted hypervisor architecture. It is important to understand how virtualization actually
works. First, a virtual layer is installed on the systems; there are two prominent
virtualization architectures, bare-metal and hosted.
In a hosted architecture, a host OS is installed first, and then a piece of software called a
hypervisor, VM monitor, or Virtual Machine Manager (VMM) is installed on top of the host OS
(Figure 5). The VMM allows users to run different guest OSs, each within its own application
window of the hypervisor. Examples of such hypervisors are Oracle's VirtualBox, Microsoft
Virtual PC, and VMware Workstation.

Figure 5: Hosted vs Bare-Metal Virtualization

Did you Know?


VMware Server is a free application that is supported on both Windows and Linux OSs.

In a bare-metal architecture, the hypervisor or VMM is installed directly on the bare-metal
hardware; there is no intermediate OS. The VMM communicates directly with the system
hardware and does not rely on any host OS. VMware ESXi and Microsoft Hyper-V are hypervisors
used for bare-metal virtualization.

A. Hosted Virtualization Architecture


A hosted virtualization architecture requires an OS (Windows or Linux) installed on the computer.
The virtualization layer is installed as an application on the OS.

Figure 6 illustrates the hosted virtualization architecture. At the lowest layer is the
shared hardware, with a host OS running on it. On the host OS runs a VMM, which creates a
virtual layer enabling different OSs to run concurrently. So the stack is: hardware, then an
operating system, then a hypervisor; different virtual machines run on that virtual layer, and
each virtual machine can run the same or a different kind of OS.
Figure 6: Hosted Virtualization Architecture

Advantages of Hosted Architecture


Ease of installation and configuration
Unmodified Host OS & Guest OS
Run on a wide variety of PCs
Disadvantages of Hosted Architecture
Performance degradation
Lack of support for real-time OSs

B. Bare-Metal Virtualization Architecture

In a bare-metal architecture, there is underlying hardware but no underlying OS. A VMM is
installed directly on the hardware, and multiple VMs run on that hardware unit. As illustrated
in Figure 7, the shared hardware runs a VMM on which multiple VMs execute multiple OSs
simultaneously.
Advantages of Bare-Metal Architecture
Improved I/O performance
Supports Real-time OS

Disadvantages of Bare-Metal Architecture


Difficult to install & configure
Depends upon hardware platform

Figure 7: Bare-Metal Virtualization Scenario

Types of Virtualization
Virtualization covers a wide range of emulation techniques that are applied to different areas
of computing. A classification of these techniques helps us better understand their
characteristics and use. Before discussing virtualization techniques, it is important to know
about protection rings in OSs. Protection rings are used to isolate the OS from untrusted
user applications. The OS can be protected with different privilege levels (Figure 8).

Figure 8: Protection Rings in OSs


Protection Rings in OSs
In the protection ring architecture, the rings are arranged in hierarchical order from ring 0 to
ring 3. Ring 0 contains the most privileged programs, and ring 3 contains the least privileged
ones. Normally, the highly trusted OS instructions run in ring 0, which has unrestricted access
to physical resources. Ring 3 contains the untrusted user applications, which have restricted
access to physical resources. The other two rings (rings 1 and 2) are allotted for device
drivers. The protection ring architecture restricts the misuse of resources and malicious
behavior of untrusted user-level programs. For example, a user application in ring 3 cannot
directly access any physical resources, as it is at the least privileged level, but the kernel
of the OS at ring 0 can, as it is at the most privileged level. Depending on the type of
virtualization, the hypervisor and guest OS run at different privilege levels. Normally, the
hypervisor runs at the most privileged level, and the guest OS runs at a less privileged level
than the hypervisor.
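The ring model described above can be sketched in a few lines of code. This is a toy model, not a real CPU interface: the instruction names and ring assignments are illustrative. It shows the key behavior that later sections build on: innocuous instructions run directly, while a sensitive instruction issued from a less privileged ring raises a trap that a ring-0 monitor (kernel or hypervisor) services on the caller's behalf (trap-and-emulate).

```python
# Toy model of protection rings: each instruction carries the lowest
# ring number (highest privilege) required to execute it. Executing a
# privileged instruction from a less privileged ring raises a trap.

class PrivilegeTrap(Exception):
    pass

REQUIRED_RING = {
    "load": 3,            # innocuous: any ring may run it directly
    "add": 3,
    "io_out": 1,          # device-driver level: ring 1 or ring 0
    "set_page_table": 0,  # sensitive: ring 0 only
}

def execute(instr, current_ring):
    """Run instr at current_ring, or raise PrivilegeTrap on a violation."""
    if current_ring > REQUIRED_RING[instr]:
        raise PrivilegeTrap(f"{instr} attempted from ring {current_ring}")
    return f"{instr} executed at ring {current_ring}"

# A ring-3 application runs innocuous instructions directly...
print(execute("add", 3))
# ...but a sensitive instruction traps, and the ring-0 monitor
# emulates it on the application's behalf (trap-and-emulate).
try:
    execute("set_page_table", 3)
except PrivilegeTrap:
    print(execute("set_page_table", 0))
```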
There are four virtualization techniques, namely:
Full Virtualization (Hardware-Assisted Virtualization / Binary Translation)
Para-Virtualization or OS-Assisted Virtualization
Hybrid Virtualization
OS-Level Virtualization
Full Virtualization: The VM simulates hardware to allow an unmodified guest OS to be run in
isolation. There are two types of full virtualization in the enterprise market:
software-assisted and hardware-assisted. In both cases, the guest OS's source code is not
modified.
Software-assisted full virtualization is also called Binary Translation (BT). It relies
entirely on binary translation to trap and virtualize the execution of sensitive,
non-virtualizable instructions, emulating the hardware using software instruction sets. It is
often criticized for performance issues caused by the binary translation. Software that falls
under this category includes:
VMware workstation (32Bit guests)
Virtual PC
VirtualBox (32-bit guests)
VMware Server
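The core idea of binary translation can be illustrated with a minimal sketch: the VMM scans each basic block of guest instructions and rewrites the sensitive ones into safe emulation calls, while innocuous instructions pass through to run directly on the hardware. The instruction set here is a stand-in (real BT operates on x86 machine code), and `vmm_emulate` is a made-up name for the hypervisor's emulation routine.

```python
# Sketch of binary translation: sensitive instructions in a guest
# basic block are rewritten into hypervisor emulation calls; the
# rest pass through unchanged to run directly on the hardware.

SENSITIVE = {"cli", "sti", "popf"}  # x86 instructions BT must intercept

def translate(block):
    """Rewrite one basic block of guest 'instructions'."""
    out = []
    for instr in block:
        if instr in SENSITIVE:
            out.append(f"vmm_emulate({instr})")  # replaced with a safe call
        else:
            out.append(instr)                    # runs directly on hardware
    return out

guest_block = ["mov", "cli", "add", "popf"]
print(translate(guest_block))
# → ['mov', 'vmm_emulate(cli)', 'add', 'vmm_emulate(popf)']
```

This per-block rewriting is the source of BT's runtime overhead, which hardware-assisted virtualization later removed.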
Hardware-assisted full virtualization eliminates binary translation and instead interacts
directly with the hardware, using the virtualization extensions that have been integrated into
x86 processors since 2005 (Intel VT-x and AMD-V). These extensions allow a virtual context to
execute privileged guest instructions directly on the processor, even though it is virtualized.
Several enterprise products support hardware-assisted full virtualization and fall under
hypervisor type 1 (bare metal), such as:
VMware ESXi /ESX
KVM
Hyper-V
Xen
Para-Virtualization: Para-virtualization works differently from full virtualization: it does
not need to simulate the hardware for the VMs. The hypervisor is installed on a physical
server (host) and a guest OS is installed into the environment. The virtual guests are aware
that they have been virtualized, unlike in full virtualization (where the guest does not know
it has been virtualized), and take advantage of this to use hypervisor functions directly. The
guest source code is modified so that sensitive operations communicate with the host, and the
guest OS requires extensions to make API calls to the hypervisor.
Comparatively, in full virtualization guests issue hardware calls, but in para-virtualization
guests communicate directly with the host (hypervisor) using drivers. Products that support
para-virtualization include:
Xen (Figure 9)
IBM LPAR
Oracle VM for SPARC (LDOM)
Oracle VM for X86 (OVM)
However, due to the architectural differences between Windows-based and Linux-based guests, a
Windows OS cannot be para-virtualized on Xen; Xen para-virtualizes Linux guests by modifying
the kernel. VMware ESXi, by contrast, does not require kernel modification for either Linux or
Windows guests.

Figure 9: Xen Supports both Full-Virtualization and Para-Virtualization
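The para-virtualization idea above, a guest kernel calling the hypervisor's API instead of issuing a trapping instruction, can be sketched as follows. This is a conceptual illustration only: the hypercall name and the registry mechanism are invented for the example, not a real Xen interface.

```python
# Sketch of para-virtualization: the modified guest kernel replaces a
# sensitive operation (e.g. a page-table update) with a direct call
# into the hypervisor's API (a "hypercall"), avoiding a trap.

HYPERCALLS = {}

def hypercall(name):
    """Register a hypervisor-side handler under a hypercall name."""
    def register(fn):
        HYPERCALLS[name] = fn
        return fn
    return register

@hypercall("update_page_table")
def _update_page_table(args):
    # Hypervisor validates and applies the update on the guest's behalf.
    return f"hypervisor updated page table with {args}"

def guest_kernel_mmu_update(args):
    # An unmodified kernel would execute a privileged instruction here;
    # the para-virtualized kernel calls the hypervisor instead.
    return HYPERCALLS["update_page_table"](args)

print(guest_kernel_mmu_update({"page": 4, "frame": 9}))
```

Contrast this with the binary-translation sketch earlier: there the VMM rewrites the guest; here the guest is rewritten ahead of time to cooperate.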

Hybrid Virtualization (Hardware Virtualized with PV Drivers): In hardware-assisted full
virtualization, the guest OSs are unmodified, so many VM traps occur, causing high CPU
overhead that limits scalability. Para-virtualization is a complex method in which the guest
kernel needs to be modified to inject the API. Therefore, because of these issues with full
and para-virtualization, engineers came up with hybrid para-virtualization, a combination of
the two: the VM uses para-virtualized drivers for specific hardware (where full virtualization
is a bottleneck, especially with I/O- and memory-intensive workloads) and full virtualization
for everything else. The following products support hybrid virtualization:
Oracle VM for x86
Xen
VMware ESXi

OS-Level Virtualization: The host OS kernel allows multiple isolated user spaces, also known
as instances or containers. Unlike other virtualization technologies, there is very little or
no overhead, since the host OS kernel is used directly for execution. Oracle Solaris Zones is
one of the famous containers in the enterprise market. Other containers include:
Linux LXC
Docker
AIX WPAR

Processor Virtualization: This allows VMs to share virtual processors that are abstracted from
the physical processors of the underlying infrastructure (Figure 10). The virtualization layer
abstracts the physical processors into a pool of virtual processors that is shared by the VMs.
The virtualization layer is normally a hypervisor, but processor virtualization can also be
achieved with distributed servers.

Figure 10: Processor Virtualization
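The pooling described above can be sketched as a mapping from many vCPUs onto fewer physical CPUs. This is a minimal round-robin assignment for illustration; a real hypervisor scheduler also time-slices and accounts for load, affinity, and priorities. All names are invented for the example.

```python
# Sketch of processor virtualization: many virtual CPUs share a small
# pool of physical CPUs. Here vCPUs are assigned round-robin.
from itertools import cycle

def schedule(vcpus, physical_cpus):
    """Map each vCPU to a physical CPU, round-robin."""
    assignment = {}
    pcpu = cycle(physical_cpus)
    for v in vcpus:
        assignment[v] = next(pcpu)
    return assignment

# Eight vCPUs (two per VM, four VMs) share just two physical cores.
vcpus = [f"vm{i}-vcpu{j}" for i in range(4) for j in range(2)]
print(schedule(vcpus, ["pcpu0", "pcpu1"]))
```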

Memory Virtualization: Another important resource virtualization technique is memory
virtualization (Figure 11). The process of providing a virtual main memory to the VMs is known
as memory (or main memory) virtualization. In main memory virtualization, the physical main
memory is mapped to virtual main memory, as in the virtual memory concept of most OSs.
The main idea is to map virtual page numbers to physical page numbers. All modern x86
processors support main memory virtualization, and it can also be achieved using hypervisor
software. Normally, in virtualized data centers, the unused main memory of the different
servers is consolidated into a virtual main memory pool that can be given to the VMs.
Figure 11: Memory Virtualization
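The page-number mapping just described becomes a two-level translation under virtualization: the guest OS maps guest-virtual pages to guest-"physical" pages, and the hypervisor maps those onto host-physical pages. The sketch below uses toy page numbers; real hardware does this per page with nested/extended page tables.

```python
# Sketch of the two-level page mapping behind memory virtualization.

guest_page_table = {0: 7, 1: 3, 2: 9}    # guest-virtual -> guest-physical
host_page_table  = {7: 21, 3: 14, 9: 2}  # guest-physical -> host-physical

def translate(gva_page):
    """Translate a guest-virtual page number to a host-physical one."""
    gpa = guest_page_table[gva_page]  # first level: guest OS paging
    hpa = host_page_table[gpa]        # second level: hypervisor mapping
    return hpa

print(translate(1))  # → 14
```

The guest only ever sees the first table, which is what lets the hypervisor relocate or consolidate the VM's memory transparently.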

Storage Virtualization: A form of resource virtualization in which multiple physical storage
disks are abstracted as a pool of virtual storage disks for the VMs (Figure 12). The
virtualized storage is normally called logical storage.

Figure 12: Storage Virtualization

Storage virtualization is mainly used for maintaining a backup or replica of the data that are stored
on the VMs. It can be further extended to support the high availability of the data. It efficiently
utilizes the underlying physical storage. Other advanced storage virtualization techniques are
storage area networks (SAN) and network-attached storage (NAS).
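A minimal sketch of the abstraction above: several physical disks are presented as one logical volume, with logical block addresses mapped onto (disk, offset) pairs by simple concatenation. Disk names and sizes are illustrative; real storage virtualization layers add striping, replication, and thin provisioning on top of this basic idea.

```python
# Sketch of storage virtualization: physical disks pooled into one
# logical volume; logical block addresses map to (disk, offset).

disks = [("disk0", 100), ("disk1", 50), ("disk2", 150)]  # (name, blocks)

def logical_capacity():
    """Total size of the logical volume, in blocks."""
    return sum(size for _, size in disks)

def locate(lba):
    """Map a logical block address to the backing physical disk."""
    for name, size in disks:
        if lba < size:
            return (name, lba)
        lba -= size
    raise ValueError("address beyond logical volume")

print(logical_capacity())  # → 300
print(locate(120))         # → ('disk1', 20)
```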
Network Virtualization: A type of resource virtualization in which the physical network is
abstracted to create a virtual network (Figure 13). Normally, physical network components such
as routers, switches, and Network Interface Cards (NICs) are controlled by the virtualization
software to provide virtual network components. A virtual network is a single software-based
entity that contains the network hardware and software resources. Network virtualization can
be achieved within an internal network or by combining many external networks, and it enables
communication between the VMs that share the physical network. Different types of network
access can be given to the VMs, such as bridged networking, network address translation (NAT),
and host-only networking.
Figure 13: Network Virtualization
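Of the access modes just listed, NAT is the easiest to sketch: the host rewrites each VM's private source address and port to its own address and a unique port, remembering the mapping so replies can be routed back. Addresses and ports below are illustrative, and the class is a toy, not a real NAT implementation.

```python
# Sketch of NAT-mode VM networking: outbound VM traffic is rewritten
# to the host's address, and the mapping is kept for return traffic.

HOST_IP = "203.0.113.5"  # example host address (documentation range)

class Nat:
    def __init__(self):
        self.next_port = 40000
        self.table = {}  # (vm_ip, vm_port) -> host_port

    def outbound(self, vm_ip, vm_port):
        """Rewrite a VM source endpoint to a (host_ip, host_port) pair."""
        key = (vm_ip, vm_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (HOST_IP, self.table[key])

    def inbound(self, host_port):
        """Route a reply arriving on host_port back to the right VM."""
        for (vm_ip, vm_port), p in self.table.items():
            if p == host_port:
                return (vm_ip, vm_port)
        raise KeyError("no mapping for this port")

nat = Nat()
print(nat.outbound("192.168.56.10", 5001))  # → ('203.0.113.5', 40000)
print(nat.inbound(40000))                   # → ('192.168.56.10', 5001)
```

Bridged mode, by contrast, would give each VM its own address on the physical network, with no rewriting at all.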

Data Virtualization: Data virtualization offers the ability to retrieve data without knowing
its type or the physical location where it is stored (Figure 14). It aggregates heterogeneous
data from different sources into a single logical/virtual volume of data. This logical data can
be accessed from any application, such as web services, e-commerce applications, web portals,
Software-as-a-Service (SaaS) applications, and mobile applications. It hides the type and
location of the data from the applications that access it and ensures a single point of access
by aggregating data from the different sources. It is mainly used in data integration,
business intelligence, and cloud computing.

Figure 14: Data Virtualization
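The aggregation idea can be sketched by exposing two differently shaped sources through one logical view, so a consuming application never sees where or how each record is stored. The sources here (an in-memory dict standing in for a database, and a CSV stream standing in for a file) and all record values are invented for the example.

```python
# Sketch of data virtualization: heterogeneous sources are merged
# into a single logical view that applications query uniformly.
import csv
import io

sql_like = {"alice": {"city": "Pune"}, "bob": {"city": "Delhi"}}
csv_like = io.StringIO("name,city\ncarol,Mumbai\n")

def load_all():
    """Aggregate every source into one logical view keyed by name."""
    csv_like.seek(0)  # stream source rewound so the view can be rebuilt
    view = {k: dict(v) for k, v in sql_like.items()}
    for row in csv.DictReader(csv_like):
        view[row["name"]] = {"city": row["city"]}
    return view

view = load_all()
print(sorted(view))           # → ['alice', 'bob', 'carol']
print(view["carol"]["city"])  # → Mumbai
```

An application using `view` has a single point of access; swapping a source for a web service or SaaS API would not change the consuming code.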

Application Virtualization: Application virtualization is the enabling technology for the SaaS
model of cloud computing; it lets users use an application without installing any software or
tools on their machine (Figure 15). The complexity of installing client tools or other
supporting software is thereby reduced. Normally, the applications are developed and hosted on
a central server. The hosted application is then virtualized, and each user is given a
separate, isolated virtual copy to access.
Figure 15: Application Virtualization

Pros of Virtualization
Increased Security
The ability to control the execution of a guest in a completely transparent manner opens new
possibilities for delivering a secure, controlled execution environment. A VM represents an
emulated environment in which the guest is executed. All operations of the guest are generally
performed against the VM, which then translates and applies them to the host. By default, the
file system exposed by the virtual computer is completely separated from that of the host
machine. This makes it the perfect environment for running applications without affecting
other users in the environment.
Managed Execution
Virtualization of the execution environment not only allows increased security; a wider range
of features can also be implemented, such as:

o Sharing: Virtualization allows the creation of separate computing environments within the
same host, thereby making it possible to fully exploit the capabilities of a powerful host
that would otherwise be underutilized.
o Aggregation: A group of separate hosts can be tied together and represented to guests as a
single virtual host. This function is naturally implemented in middleware for distributed
computing, with a classical example represented by cluster management software, which
harnesses the physical resources of a homogeneous group of machines and represents them as
a single resource.
o Emulation: Guest programs are executed within an environment that is controlled by the
virtualization layer, which ultimately is a program. This allows for controlling and tuning the
environment that is exposed to the guests.
o Isolation: Virtualization allows guests, whether they are OSs, applications, or other
entities, to be provided with a completely separate environment in which they are executed.
The guest program performs its activity by interacting with an abstraction layer, which
provides access to the underlying resources.
Portability
Concept of portability applies in different ways according to the specific type of virtualization
considered. In the case of a hardware virtualization solution, the guest is packaged into a virtual
image that, in most cases, can be safely moved and executed on top of different virtual machines.
o In the case of programming-level virtualization, as implemented by the JVM or the .NET
runtime, the binary code representing application components (jars or assemblies) can be run
without any recompilation on any implementation of the corresponding virtual machine.
o This makes the application development cycle more flexible and application deployment very
straightforward: one version of the application, in most cases, is able to run on different
platforms with no changes.
o Portability allows having your own system always with you and ready to use as long as the
required VMM is available. This requirement is, in general, less stringent than having all the
applications and services you need available to you anywhere you go.
More Efficient Use of Resources
Multiple systems can securely coexist and share the resources of the underlying host without
interfering with each other. This is a prerequisite for server consolidation, which allows the
number of active physical resources to be adjusted dynamically according to the current load
of the system, thus creating the opportunity to save energy and to have less impact on the
environment.
Cons of Virtualization
Virtualization also has downsides. The most evident is the performance decrease of guest
systems that results from the intermediation performed by the virtualization layer. In
addition, sub-optimal use of the host, because of the abstraction layer introduced by the
virtualization management software, can lead to very inefficient utilization of the host or a
degraded user experience. Less evident, but perhaps more dangerous, are the implications for
security, which are mostly due to the ability to emulate a different execution environment.
Performance Degradation: Performance is definitely one of the major concerns with
virtualization technology. Since virtualization interposes an abstraction layer between the
guest and the host, the guest can experience increased latencies. For instance, in the case of
hardware virtualization, where the intermediary emulates a bare machine on top of which an
entire system can be installed, the causes of performance degradation can be traced back to
the overhead introduced by the following activities:

o Maintaining the status of virtual processors


o Support of privileged instructions (trap and simulate privileged instructions)
o Support of paging within VM
o Console functions
Inefficiency and Degraded User Experience: Virtualization can sometimes lead to an inefficient
use of the host. In particular, some specific features of the host cannot be exposed by the
abstraction layer and so become inaccessible. In the case of hardware virtualization, this can
happen with device drivers: a VM may simply provide a default graphics card that maps only a
subset of the features available on the host. In the case of programming-level VMs, some
features of the underlying OS may become inaccessible unless specific libraries are used. For
example, in the first version of Java the support for graphics programming was very limited
and the look and feel of applications was very poor compared with native applications. These
issues were resolved by providing a new framework, called Swing, for designing user
interfaces, and further improvements were made by integrating support for the OpenGL libraries
into the software development kit.
Security Holes and New Threats: Virtualization opens the door to a new and unexpected form of
phishing. The capability of emulating a host in a completely transparent manner has led the
way to malicious programs designed to extract sensitive information from the guest. In the
case of hardware virtualization, malicious programs can preload themselves before the
operating system and act as a thin virtual machine manager toward it. The operating system can
then be controlled and manipulated to extract sensitive information of interest to third
parties.

Software Licensing Considerations: This is becoming less of a problem as more software vendors
adapt to the increased adoption of virtualization, but it is important to check with your
vendors to understand clearly how they view software use in a virtualized environment.
Possible Learning Curve: Implementing and managing a virtualized environment requires IT staff
with expertise in virtualization. On the user side, a typical virtual environment operates
similarly to a non-virtual one. However, some applications do not adapt well to a virtualized
environment; this is something your IT staff will need to be aware of and address prior to
converting.

Summary
Virtualization opens the door to a new and unexpected form of phishing. The capability of
emulating a host in a completely transparent manner led the way to malicious programs that
are designed to extract sensitive information from the guest.
Virtualization raises abstraction. Abstraction pertains to hiding of the inner details from a
particular user. Virtualization helps in enhancing or increasing the capability of abstraction.
Virtualization enables sharing of resources much more easily; it helps increase the degree of
hardware-level parallelism, with the same hardware unit shared among different kinds of
independent units.
In protection ring architecture, the rings are arranged in hierarchical order from ring 0 to ring 3.
The Ring 0 contains the programs that are most privileged, and ring 3 contains the programs
that are least privileged.
In a bare-metal architecture, the hypervisor or VMM is installed directly on the bare-metal
hardware; there is no intermediate OS. The VMM communicates directly with the system hardware
without relying on any host OS.
Para-virtualization works differently from full virtualization: it does not need to simulate
the hardware for the VMs. The hypervisor is installed on a physical server (host) and a guest
OS is installed into the environment.
Software-assisted full virtualization is also called Binary Translation (BT); it relies
entirely on binary translation to trap and virtualize the execution of sensitive,
non-virtualizable instructions.
Memory virtualization is an important resource virtualization technique. In main memory
virtualization, the physical main memory is mapped to virtual main memory, as in the virtual
memory concept of most OSs.
