UNIT III-Notes - CC

Virtualization technology allows the creation of virtual versions of hardware, operating systems, and storage, enabling multiple operating systems to run on a single physical machine. It includes various types such as hardware, desktop, software, memory, storage, data, and network virtualization, each serving different purposes and implementations. The document discusses the architecture, advantages, and mechanisms of virtualization, including hypervisors, para-virtualization, and I/O virtualization methods.

Uploaded by Aradhana Yadav

UNIT III

VIRTUALIZATION TECHNOLOGY

Introduction:
In computing, virtualization is the creation of a virtual (rather than actual) version of something,
such as a hardware platform, operating system (OS), storage device, or network resources. A
physical computer is clearly a complete and actual machine, both subjectively (from the user's
point of view) and objectively (from the hardware system administrator's point of view). A virtual
machine, by contrast, is subjectively a complete machine but objectively merely a set of files and
running programs on an actual, physical machine.
With virtualization, several operating systems can be run in parallel on a single central
processing unit. It differs from multitasking, which involves running several programs on the
same OS. The goal of virtualization is to centralize administrative tasks while improving
scalability and overall hardware-resource utilization. Virtualization underpins autonomic
computing and utility computing, in which computing power is seen as a utility that clients can
pay for only as needed.
Virtual Machine: a software representation of a real machine that provides an operating
environment which can run/host a guest OS.
Guest OS: an OS running in a virtual machine environment.
The virtualization software layer is also called a hypervisor or VMM (Virtual Machine Monitor):
a middleware layer between the underlying hardware and the VMs in the system.

Types of Virtualization

Hardware: It refers to the creation of a virtual machine that acts like a real computer with an
operating system. Software executed on virtual machines is separated from the underlying
hardware resources. Here, the host machine is the actual machine on which the virtualization
takes place, and the guest machine is the virtual machine. The words host and guest are used to
distinguish the software that runs on the actual machine from the software that runs on the virtual
machine.
Different types of hardware virtualization include:
1. Full virtualization: Almost complete simulation of the actual hardware to allow software,
which typically consists of a guest operating system, to run unmodified.
2. Partial virtualization: Some, but not all, of the target environment is simulated. Some guest
programs, therefore, may need modifications to run in this virtual environment.
3. Para-virtualization: A hardware environment is not simulated; however, the guest programs
are executed in their own isolated domains, as if they are running on a separate system. Guest
programs need to be specifically modified to run in this environment.

Desktop: Desktop virtualization is the concept of separating the logical desktop from the
physical machine. One form of desktop virtualization, virtual desktop infrastructure (VDI), can
be thought of as a more advanced form of hardware virtualization. Rather than interacting with a
host computer directly via a keyboard, mouse, and monitor, the user interacts with the host
computer through another desktop computer or a mobile device over the Internet.
Another form, session virtualization, allows multiple users to connect and log into a shared but
powerful computer over the network and use it simultaneously. Each user is given a desktop and
a personal folder in which to store files.

Software
• Operating system-level virtualization, hosting of multiple virtualized environments within a
single OS instance.
• Application virtualization and workspace virtualization, the hosting of individual applications
in an environment separated from the underlying OS. Application virtualization is closely
associated with the concept of portable applications.
• Service virtualization, emulating the behavior of dependent system components that are needed
to exercise an application under test (AUT) for development or testing purposes. Rather than
virtualizing entire components, it virtualizes only specific slices of dependent behavior critical to
the execution of development and testing tasks.

Memory
Memory virtualization aggregates RAM resources from networked systems into a single memory
pool. Virtual memory gives an application program the impression that it has contiguous
working memory, isolating it from the underlying physical memory implementation.

Storage
It is the process of completely abstracting logical storage from physical storage. It enables
distributed file systems through parallel processing.

Data
It refers to the presentation of data as an abstract layer, independent of underlying database
systems, structures, and storage. Database virtualization is the decoupling of the database layer,
which lies between the storage and application layers within the overall application stack.

Network
It creates a virtualized network addressing space within or across network subnets.

IMPLEMENTATION LEVELS OF VIRTUALIZATION IN CLOUD COMPUTING

Virtualization is a computer architecture technology by which multiple virtual machines (VMs)
are multiplexed in the same hardware machine. The purpose of a VM is to enhance resource
sharing by many users and improve computer performance in terms of resource utilization and
application flexibility. The idea is to separate the hardware from the software to yield better
system efficiency.
A traditional computer runs with a host operating system specially tailored for its hardware
architecture, as shown in Figure. After virtualization, different user applications managed by
their own operating systems (guest OS) can run on the same hardware, independent of the host
OS. This is often done by adding additional software, called a virtualization layer as shown in
Figure.

Virtualization can be implemented at various operational levels, as given below:


 Instruction set architecture (ISA) level
 Hardware level
 Operating system level
 Library support level
 Application level

Instruction Set Architecture Level (ISA):


At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. The basic emulation method is through code interpretation. An interpreter program
interprets the source instructions to target instructions one by one. One source instruction may
require tens or hundreds of native target instructions to perform its function. Obviously, this
process is relatively slow. For better performance, dynamic binary translation is desired. This
approach translates basic blocks of dynamic source instructions to target instructions. The basic
blocks can also be extended to program traces or super blocks to increase translation efficiency.
A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific software
translation layer to the compiler.
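The two emulation methods above (one-by-one interpretation versus translating and caching whole basic blocks) can be sketched as follows. This is a minimal, illustrative model only, not a real emulator; the instruction names and the idea of caching a Python closure in place of emitted native code are assumptions for the sake of the sketch.

```python
# Code interpretation: execute source instructions one at a time (slow path).
def interpret(program, regs):
    for op, *args in program:
        if op == "MOV":                    # MOV dst, imm
            dst, imm = args
            regs[dst] = imm
        elif op == "ADD":                  # ADD dst, a, b
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
    return regs

# Dynamic binary translation: translate a basic block once, cache the
# result, and reuse it on every later execution of the same block.
translation_cache = {}

def run_block(block, regs):
    key = tuple(block)
    if key not in translation_cache:
        # Stand-in for emitting native target code for the block.
        translation_cache[key] = lambda r, _b=block: interpret(_b, r)
    return translation_cache[key](regs)
```

Repeated executions of the same block hit the cache, which is where the performance advantage over pure interpretation comes from.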

Hardware Abstraction Level (HAL):


It is performed right on top of the bare hardware and generates a virtual hardware environment
for a VM. The process manages the underlying hardware through virtualization. The idea is to
virtualize a computer's resources, such as its processors, memory, and I/O devices, so that the
hardware utilization rate may be upgraded for multiple concurrent users.

Operating System Level:


OS-level virtualization creates isolated containers on a single physical server, and these OS
instances utilize the hardware and software in data centers. The containers behave like real
servers. OS-level virtualization is commonly used to create virtual hosting environments that
allocate hardware resources among a large number of mutually distrusting users.

Library Support Level:


Virtualization with library interfaces is possible by controlling the communication link between
applications and the rest of a system through API hooks. The software tool WINE has
implemented this approach to support Windows applications on top of UNIX hosts. Another
example is the vCUDA which allows applications executing within VMs to leverage GPU
hardware acceleration.

User-Application Level:
On a traditional OS, an application often runs as a process. Therefore, application-level
virtualization is also known as process-level virtualization. The most popular approach is to
deploy high level language (HLL) VMs. In this scenario, the virtualization layer exports an
abstraction of a VM that can run programs written and compiled to a particular abstract machine
definition. Any program written in the HLL and compiled for this VM will be able to run on it.
The Microsoft .NET CLR and Java Virtual Machine (JVM) are two good examples of this class
of VM. Other forms of application-level virtualization are known as application isolation,
application sandboxing, or application streaming. The process involves wrapping the application
in a layer that is isolated from the host OS and other applications. The result is an application
that is much easier to distribute and remove from user workstations.
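The HLL-VM idea above can be sketched as a toy stack machine: any program compiled to the VM's abstract instruction set runs wherever the VM itself runs, in the spirit of the JVM and .NET CLR. The opcodes below are invented for illustration; they are not real JVM or CLR bytecode.

```python
# A toy stack-machine VM: programs are lists of (opcode, operand) tuples
# for an invented abstract instruction set.
def run(bytecode):
    stack = []
    for instr in bytecode:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]                     # top of stack is the result
```

For example, `run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)])` computes (2 + 3) * 4.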

Virtualization Support at the OS Level:


It is slow to initialize a hardware-level VM because each VM creates its own image from scratch,
and storage of such images is also slow. OS-level virtualization provides a feasible solution for
these hardware-level virtualization issues. OS virtualization inserts a virtualization layer inside
an operating system to partition a machine’s physical resources. It enables multiple isolated VMs
within a single operating system kernel. This kind of VM is often called a virtual execution
environment (VE). This VE has its own set of processes, file system, user accounts, network
interfaces with IP addresses, routing tables, firewall rules, and other personal settings.

Advantages:
 VMs at the operating system level have minimal startup/shutdown costs, low resource
requirements, and high scalability.
 It is possible for a VM and its host environment to synchronize state changes when
necessary.

VIRTUALIZATION STRUCTURE/TOOLS AND MECHANISMS:

Before virtualization, the operating system manages the hardware. After virtualization, a
virtualization layer is inserted between the hardware and the OS. In such a case, the
virtualization layer is responsible for converting portions of the real hardware into virtual
hardware. Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely
1. Hypervisor architecture
2. Para-virtualization
3. Host-based virtualization
1. Hypervisor and Xen Architecture
Depending on the functionality, a hypervisor can assume a micro-kernel architecture or a
monolithic hypervisor architecture. A micro-kernel hypervisor includes only the basic and
unchanging functions (such as physical memory management and processor scheduling). The
device drivers and other changeable components are outside the hypervisor. A monolithic
hypervisor implements all the aforementioned functions, including those of the device drivers.
Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a
monolithic hypervisor.
The Xen Architecture
Xen is an open source hypervisor program developed at Cambridge University. Xen is a
microkernel hypervisor, which separates the policy from the mechanism. It implements all the
mechanisms, leaving the policy to be handled by Domain 0, as shown in Figure. Xen does not
include any device drivers natively. It just provides a mechanism by which a guest OS can have
direct access to the physical devices.

Like other virtualization systems, many guest OSes can run on top of the hypervisor. The guest
OS that has control ability (the privileged guest OS) is called Domain 0, and the others are called
Domain U. Domain 0 is loaded first when Xen boots, without any file system drivers being
available. Domain 0 is designed to access hardware directly and manage devices.

2. Binary Translation with Full Virtualization


Depending on implementation technologies, hardware virtualization can be classified into two
categories: full virtualization and host-based virtualization.
Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Noncritical
instructions do not control hardware or threaten the security of the system, but critical
instructions do. Therefore, running noncritical instructions on hardware not only can promote
efficiency, but also can ensure system security.
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS. This host
OS is still responsible for managing the hardware. The guest OSes are installed and run on top of
the virtualization layer. Dedicated applications may run on the VMs. Certainly, some other
applications can also run with the host OS directly. This host based architecture has some distinct
advantages, as enumerated next. First, the user can install this VM architecture without
modifying the host OS. Second, the host-based approach appeals to many host machine
configurations.
3. Para-Virtualization with Compiler Support
It needs to modify the guest operating systems. A para-virtualized VM provides special APIs
requiring substantial OS modifications in user applications. Performance degradation is a critical
issue of a virtualized system. Figure illustrates the concept of a para-virtualized VM architecture.
The guest OSes are para-virtualized. They are assisted by an intelligent compiler to replace the
nonvirtualizable OS instructions with hypercalls. The traditional x86 processor offers four instruction
execution rings: Rings 0, 1, 2, and 3. The lower the ring number, the higher the privilege of
instruction being executed. The OS is responsible for managing the hardware and the privileged
instructions to execute at Ring 0, while user-level applications run at Ring 3.
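The compiler pass described above can be sketched as a simple rewrite over the guest's instruction stream: each nonvirtualizable privileged instruction is replaced with an explicit hypercall into the hypervisor. The instruction and hypercall names below are invented for illustration.

```python
# Map from nonvirtualizable privileged instructions to the hypercalls
# that replace them (names are illustrative assumptions).
NONVIRTUALIZABLE = {
    "POPF":    "HYPERCALL_set_flags",
    "MOV_CR3": "HYPERCALL_set_page_table",
}

def paravirtualize(instructions):
    """Rewrite a guest instruction stream for a para-virtualized VM."""
    return [NONVIRTUALIZABLE.get(op, op) for op in instructions]
```

Ordinary instructions pass through unchanged; only the privileged ones are rewritten, which is why the guest OS kernel must be modified while user applications can mostly run as-is.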
Although para-virtualization reduces the overhead, it incurs problems with compatibility and
portability, because it must support the unmodified OS as well. Second, the cost is high,
because deep OS kernel modifications may be required. Finally, the performance advantage of
para-virtualization varies greatly due to workload variations.

VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES:


To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run
in different modes and all sensitive instructions of the guest OS and its applications are trapped
in the VMM. To save processor states, mode switching is completed by hardware.
Hardware Support for Virtualization
Modern operating systems and processors permit multiple processes to run simultaneously. If
there is no protection mechanism in a processor, all instructions from different processes will
access the hardware directly and cause a system crash. Therefore, all processors have at least two
modes, user mode and supervisor mode, to ensure controlled access of critical hardware.
Instructions running in supervisor mode are called privileged instructions. Other instructions are
unprivileged instructions. In a virtualized environment, it is more difficult to make OSes and
applications run correctly because there are more layers in the machine stack. Figure shows the
hardware support by Intel.
CPU Virtualization
Unprivileged instructions of VMs run directly on the host machine for higher efficiency. Other
critical instructions should be handled carefully for correctness and stability. The critical
instructions are divided into three categories: privileged instructions, control-sensitive
instructions, and behavior-sensitive instructions. Privileged instructions execute in a privileged
mode and will be trapped if executed outside this mode. Control-sensitive instructions attempt to
change the configuration of resources used. Behavior-sensitive instructions have different
behaviors depending on the configuration of resources, including the load and store operations
over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode.
When the privileged instructions including control- and behavior-sensitive instructions of a VM
are executed, they are trapped in the VMM. RISC CPU architectures can be naturally virtualized
because all control- and behavior-sensitive instructions are privileged instructions. By contrast,
x86 CPU architectures were not primarily designed to support virtualization.
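The trap-and-emulate condition above can be sketched as follows for a virtualizable CPU: unprivileged guest instructions run directly on the hardware, while privileged and sensitive instructions executed in user mode trap into the VMM. The opcode sets below are illustrative assumptions (on a real pre-VT x86, the sensitive `POPF` notoriously does not trap, which is exactly why x86 fails this condition).

```python
# Illustrative instruction classification for a virtualizable CPU.
PRIVILEGED = {"HLT", "LGDT"}
SENSITIVE  = {"POPF"}              # example of a sensitive instruction

def vmm_emulate(op):
    return "emulated:" + op        # VMM emulates the trapped instruction

def execute(op, user_mode=True):
    if user_mode and (op in PRIVILEGED or op in SENSITIVE):
        return vmm_emulate(op)     # trap into the VMM running in supervisor mode
    return "direct:" + op          # unprivileged: runs on hardware at native speed
```

Because the common case (unprivileged instructions) takes the direct path, most guest code runs at native speed; only the rare privileged/sensitive instructions pay the trap cost.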
Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization because full or paravirtualization is
complicated. Intel and AMD add an additional privilege mode (some people
call it Ring -1) to x86 processors. Therefore, operating systems can still run at Ring 0 and the
hypervisor can run at Ring -1. All the privileged and sensitive instructions are trapped in the
hypervisor automatically. This technique removes the difficulty of implementing binary
translation of full virtualization. It also lets the operating system run in VMs without
modification.
Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional environment, the OS maintains a page table for mappings of
virtual memory to machine memory, which is a one-stage mapping. All modern x86 CPUs
include a memory management unit (MMU) and a translation lookaside buffer (TLB) to
optimize virtual memory performance. However, in a virtual execution environment, virtual
memory virtualization involves sharing the physical system memory in RAM and dynamically
allocating it to the physical memory of the VMs. A two-stage mapping process should be
maintained by the guest OS and the VMM, respectively: virtual memory to physical memory and
physical memory to machine memory. The VMM is responsible for mapping the guest physical
memory to the actual machine memory for each guest OS.
Since each page table of the guest OSes has a separate page table in the VMM corresponding to
it, the VMM page table is called the shadow page table. VMware uses shadow page tables to
perform virtual-memory-to-machine-memory address translation. Processors use TLB hardware
to map the virtual memory directly to the machine memory to avoid the two levels of translation
on every access. When the guest OS changes the virtual memory to a physical memory mapping,
the VMM updates the shadow page tables to enable a direct lookup.
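The two-stage mapping and the shadow page table's role can be sketched as follows. The guest page table maps virtual to guest-"physical" pages, the VMM maps guest-physical to machine pages, and the shadow table caches the composed mapping so each access needs one lookup instead of two. All page numbers are invented for illustration.

```python
# Stage 1 (maintained by the guest OS): virtual page -> guest physical page.
guest_pt = {0: 7, 1: 3}

# Stage 2 (maintained by the VMM): guest physical page -> machine page.
vmm_pt = {7: 42, 3: 19}

# Shadow page table: the composed virtual -> machine mapping, which the
# VMM rebuilds whenever the guest changes its own page table.
shadow_pt = {v: vmm_pt[p] for v, p in guest_pt.items()}

def translate(virtual_page):
    return shadow_pt[virtual_page]   # direct virtual-to-machine lookup
```

This is the lookup the TLB hardware effectively caches, avoiding the two levels of translation on every memory access.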
I/O virtualization
It involves managing the routing of I/O requests between virtual devices and the shared physical
hardware.
There are three ways to implement I/O virtualization:
1. Full device emulation
2. Para-virtualization
3. Direct I/O

Full device emulation


All the functions of a device, such as device enumeration, identification, interrupts, and DMA,
are replicated in software. This software is located in the VMM and acts as a virtual device. The
I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices.

Para-virtualization
It is a split driver model consisting of a frontend driver and a backend driver. The frontend driver
is running in Domain U and the backend driver is running in Domain 0. They interact with each
other via a block of shared memory. The frontend driver manages the I/O requests of the guest
OSes and the backend driver is responsible for managing the real I/O devices and multiplexing
the I/O data of different VMs. Although para-I/O-virtualization achieves better device
performance than full device emulation, it comes with a higher CPU overhead.

Direct I/O virtualization


It lets the VM access devices directly. It can achieve close-to-native performance without high
CPU costs. However, current direct I/O virtualization implementations focus on networking for
mainframes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO). The key idea is to
harness the rich resources of a multicore processor. All tasks associated with virtualizing an I/O
device are encapsulated in SV-IO. SV-IO defines one virtual interface (VIF) for every kind of
virtualized I/O device, such as virtual network interfaces, virtual block devices (disk), virtual
camera devices, and others. The guest OS interacts with the VIFs via VIF device drivers. Each
VIF consists of two message queues. One is for outgoing messages to the devices and the other is
for incoming messages from the devices. In addition, each VIF has a unique ID for identifying it
in SV-IO.
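The VIF structure described above (a unique ID plus two message queues) can be sketched as a small class. The class and method names are invented for illustration; they are not a real SV-IO API.

```python
from collections import deque

class VIF:
    """A virtual interface: one per virtualized I/O device in SV-IO."""
    def __init__(self, vif_id, kind):
        self.vif_id = vif_id        # unique ID identifying the VIF in SV-IO
        self.kind = kind            # e.g. "network", "block", "camera"
        self.outgoing = deque()     # queue of messages to the device
        self.incoming = deque()     # queue of messages from the device

    def send(self, msg):            # called via the guest's VIF device driver
        self.outgoing.append(msg)

    def receive(self):
        return self.incoming.popleft() if self.incoming else None
```

The guest OS talks only to the VIF through its driver; the SV-IO runtime (typically on a dedicated core) drains the outgoing queue and fills the incoming one.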

Virtualization in Multi-Core Processors


Multi-core processors achieve higher performance by integrating multiple processor cores on a
single chip. However, multi-core virtualization has raised new challenges for computer
architects, compiler constructors, system designers, and application programmers. Application
programs must be parallelized to use all cores fully, and software must explicitly assign tasks to
the cores, which is a very complex problem. Concerning the first challenge, new programming
models, languages, and libraries are needed to make parallel programming easier. The second
challenge has spawned research involving scheduling algorithms and resource management
policies.

Virtual Hierarchy
A virtual hierarchy is a cache hierarchy that can adapt to fit the workload or mix of workloads.
The hierarchy’s first level locates data blocks close to the cores needing them for faster access,
establishes a shared-cache domain, and establishes a point of coherence for faster
communication. The first level can also provide isolation between independent workloads. A
miss at the L1 cache can invoke the L2 access.

VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT:

A physical cluster is a collection of servers interconnected by a physical network such as a LAN,
whereas virtual clusters have VMs that are interconnected logically by a virtual network across
several physical networks. We will study three critical design issues of virtual clusters: live
migration of VMs, memory and file migrations, and dynamic deployment of virtual clusters.

VMs in a virtual cluster have the following interesting properties:


 The virtual cluster nodes can be either physical or virtual machines.
 The purpose of using VMs is to consolidate multiple functionalities on the same server.
 VMs can be colonized (replicated) in multiple servers for the purpose of promoting
distributed parallelism, fault tolerance, and disaster recovery.
 The size (number of nodes) of a virtual cluster can grow or shrink dynamically.
 The failure of any physical nodes may disable some VMs installed on the failing nodes.
But the failure of VMs will not pull down the host system.

Virtual clusters can be built based on application partitioning or customization. The most
important issue is to determine how to store those images in the system efficiently. There are common installations
for most users or applications, such as operating systems or user-level programming libraries.
These software packages can be preinstalled as templates (called template VMs). With these
templates, users can build their own software stacks. New OS instances can be copied from the
template VM.
Fast Deployment and Effective Scheduling
Deployment means two things: to construct and distribute software stacks (OS, libraries,
applications) to a physical node inside clusters as fast as possible, and to quickly switch runtime
environments from one user’s virtual cluster to another user’s virtual cluster. If one user finishes
using his system, the corresponding virtual cluster should shut down or suspend quickly to save
the resources to run other VMs for other users.
High-Performance Virtual Storage
Basically, there are four steps to deploy a group of VMs onto a target cluster: preparing the disk
image, configuring the VMs, choosing the destination nodes, and executing the VM deployment
command on every host. Many systems use templates to simplify the disk image preparation
process. A template is a disk image that includes a preinstalled operating system with or without
certain application software. Templates could implement the COW (Copy on Write) format. A new
COW backup file is very small and easy to create and transfer.
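The copy-on-write idea behind template images can be sketched as follows: a new VM's disk is a small private overlay on a shared, read-only template, so it is cheap to create and transfer. The class, block numbers, and block contents are invented for illustration.

```python
class CowDisk:
    """A VM disk as a copy-on-write overlay over a shared template image."""
    def __init__(self, template):
        self.template = template    # shared base image, never written
        self.overlay = {}           # this VM's private modified blocks

    def read(self, block):
        # Prefer the overlay; fall back to the shared template.
        return self.overlay.get(block, self.template.get(block))

    def write(self, block, data):
        self.overlay[block] = data  # writes go to the overlay only

template = {0: "kernel", 1: "libs"}
vm_disk = CowDisk(template)
vm_disk.write(1, "patched-libs")    # template block 1 stays untouched
```

Many VMs can share one template this way; each overlay holds only the blocks that VM has changed, which is why a new COW backing file is small.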
Live VM Migration Steps
There are four ways to manage a virtual cluster. First, we can use a guest-based manager, by
which the cluster manager resides on a guest system; in this case, multiple VMs form a virtual
cluster. Second, we can build a cluster manager on the host systems; the host-based manager
supervises the guest systems and can restart a guest system on another physical machine. The
third way is to use an independent cluster manager on both the host and guest systems. Finally,
we can use an integrated cluster manager on the guest and host systems; this manager must be
designed to distinguish between virtualized resources and physical resources.
A VM can be in one of the following four states.
 An inactive state is defined by the virtualization platform, under which the VM is not
enabled.
 An active state refers to a VM that has been instantiated at the virtualization platform to
perform a real task.
 A paused state corresponds to a VM that has been instantiated but disabled to process a
task or paused in a waiting state.
 A VM enters the suspended state if its machine file and virtual resources are stored back
to the disk.
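The four states above can be sketched as a transition table. Which transitions a given virtualization platform actually allows is an assumption here, chosen to match the state descriptions (instantiate, pause/resume, suspend to disk and restore).

```python
# Illustrative VM state machine over the four states described above.
TRANSITIONS = {
    "inactive":  {"active"},                            # instantiate the VM
    "active":    {"paused", "suspended", "inactive"},
    "paused":    {"active"},                            # resume processing
    "suspended": {"active"},                            # restore from disk
}

def change_state(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError("illegal transition: %s -> %s" % (current, target))
    return target
```

For example, a suspended VM cannot go directly to paused; it must first be made active again.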

Live migration of a VM from one machine to another consists of the following six steps:
Steps 0 and 1: Start migration
This step makes preparations for the migration, including determining the migrating VM and the
destination host.
Step 2: Transfer memory
Since the whole execution state of the VM is stored in memory, sending the VM’s memory to the
destination node ensures continuity of the service provided by the VM. All of the memory data is
transferred.
Step 3: Suspend the VM and copy the last portion of the data
The migrating VM's execution is suspended when the last round's memory data is transferred.
Steps 4 and 5: Commit and activate the new host
After all the needed data is copied, the VM reloads its state on the destination host, recovers the
execution of its programs, and the service provided by this VM continues.
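The memory-transfer phase of these steps follows the pre-copy approach: copy all pages first, then iteratively re-copy the pages dirtied during each round; when the dirty set is small enough, suspend the VM for the final stop-and-copy. The round structure and threshold below are illustrative assumptions.

```python
def precopy_migrate(pages, dirty_per_round, threshold=2):
    """Simulate pre-copy rounds; pages is {page_no: contents}."""
    transferred = dict(pages)            # round 0: send all memory
    rounds = 1
    for dirtied in dirty_per_round:      # pages written during each round
        if len(dirtied) <= threshold:    # dirty set small: stop and copy
            break
        transferred.update({p: pages[p] for p in dirtied})
        rounds += 1
    return transferred, rounds
```

If the guest dirties pages faster than they can be re-copied, the rounds never shrink; that runaway transfer volume is exactly the pre-copy weakness the CR/TR-Motion and post-copy strategies mentioned later try to address.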

Migration of Memory, Files, and Network Resources


When one system migrates to another physical node, we should consider the following issues:
Memory migration can be in a range of hundreds of megabytes to a few gigabytes in a typical
system today, and it needs to be done in an efficient manner. The Internet Suspend-Resume
(ISR) technique exploits temporal locality as memory states are likely to have considerable
overlap in the suspended and the resumed instances of a VM. To exploit temporal locality, each
file in the file system is represented as a tree of small subfiles. A copy of this tree exists in both
the suspended and resumed VM instances.

File System Migration


Migration requires a location-independent view of the file system that is available on all hosts. A
simple way to achieve this is to provide each VM with its own virtual disk, to which the file
system is mapped, and to transport the contents of this virtual disk along with the other states of
the VM. A distributed file system is used in ISR, serving as a transport mechanism for
propagating a suspended VM state. The actual file systems themselves are not mapped onto the
distributed file system.

Network Migration
To enable remote systems to locate and communicate with a VM, each VM must be assigned a
virtual IP address known to other entities. This address can be distinct from the IP address of the
host machine where the VM is currently located. Each VM can also have its own distinct virtual
MAC address. The VMM maintains a mapping of the virtual IP and MAC addresses to their
corresponding VMs. Live migration is a key feature of system virtualization technologies. Here,
we focus on VM migration within a cluster environment where a network-accessible storage
system, such as storage area network (SAN) or network attached storage (NAS), is employed.
Only memory and CPU status needs to be transferred from the source node to the target node. In
fact, these issues with the precopy approach are caused by the large amount of transferred data
during the whole migration process. A checkpointing/recovery and trace/replay approach (CR/
TR-Motion) is proposed to provide fast VM migration. Another strategy of postcopy is
introduced for live migration of VMs. Here, all memory pages are transferred only once during
the whole migration process and the baseline total migration time is reduced.
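The VMM's address mapping described above can be sketched as a simple lookup table from virtual IP and MAC addresses to VMs; because the mapping travels with the VMM's state, a migrated VM keeps its addresses on the new host. The addresses and VM names below are invented.

```python
# VMM-maintained mapping: virtual IP / virtual MAC -> owning VM.
vmm_map = {}

def register(vm, virtual_ip, virtual_mac):
    vmm_map[virtual_ip] = vm
    vmm_map[virtual_mac] = vm

def locate(address):
    return vmm_map.get(address)    # which VM, if any, owns this address

register("vm1", "10.0.0.5", "02:00:00:00:00:01")
```

Remote systems keep addressing "10.0.0.5" before and after migration; only the VMM's view of where that VM runs changes.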

HYPERVISOR VMware:
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate
the resources on various pieces of hardware. The program which provides partitioning,
isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware
virtualization technique that allows multiple guest operating systems (OS) to run on a single
host system at the same time. A hypervisor is sometimes also called a virtual machine manager
(VMM).
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating system. It
has direct access to hardware resources. Examples of Type 1 hypervisors include VMware
ESXi, Citrix XenServer, and Microsoft Hyper-V hypervisor.

TYPE-2 Hypervisor:

A host operating system runs on the underlying host system. It is also known as a "Hosted
Hypervisor". Such hypervisors don't run directly on the underlying hardware; rather, they run as
applications on a host system (physical machine). Basically, the software is installed on an
operating system, and the hypervisor asks the operating system to make hardware calls.
Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted
hypervisors are often found on endpoints like PCs. The Type 2 hypervisor is very useful for
engineers and security analysts (for checking malware, malicious source code, and newly
developed applications).

KVM:

Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into
Linux®. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to
run multiple, isolated virtual environments called guests or virtual machines (VMs). KVM is part
of Linux. KVM was first announced in 2006 and merged into the mainline Linux kernel (version
2.6.20) a year later. Because KVM is part of existing Linux code, it immediately benefits from
every new Linux feature, fix, and advancement without additional engineering.

KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some operating
system-level components—such as a memory manager, process scheduler, input/output (I/O)
stack, device drivers, security manager, a network stack, and more—to run VMs. KVM has all
these components because it’s part of the Linux kernel. Every VM is implemented as a regular
Linux process, scheduled by the standard Linux scheduler, with dedicated virtual hardware like a
network card, graphics adapter, CPU(s), memory, and disks.
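KVM relies on the CPU's hardware virtualization extensions, Intel VT-x (the `vmx` flag) or AMD-V (the `svm` flag), which Linux exposes in /proc/cpuinfo. The sketch below scans cpuinfo-style text for those flags; the helper name `has_hw_virt` is illustrative, not a KVM API.

```python
# Sketch: check whether a CPU supports the hardware virtualization
# extensions KVM relies on, by scanning /proc/cpuinfo-style text for
# the Intel VT-x ("vmx") or AMD-V ("svm") CPU flags.
# has_hw_virt() is a hypothetical helper name, not part of KVM itself.

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists vmx (Intel) or svm (AMD)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__":
    # On a real Linux host you would read the actual file instead:
    # with open("/proc/cpuinfo") as f: print(has_hw_virt(f.read()))
    sample = "flags\t\t: fpu vme de pse tsc msr vmx ept"
    print(has_hw_virt(sample))  # True: the vmx flag is present
```

On a machine where this returns True and the kvm kernel modules are loaded, the /dev/kvm device node is what userspace tools (such as QEMU) open to create VMs.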

SERVER VIRTUALIZATION:
Server virtualization is the process of dividing a physical server into multiple unique and isolated
virtual servers by means of a software application. Each virtual server can run its own operating
systems independently.

Key Benefits of Server Virtualization:

 Higher server availability
 Cheaper operating costs
 Eliminate server complexity
 Increased application performance
 Deploy workloads quicker
Three Kinds of Server Virtualization:

Full Virtualization: Full virtualization uses a hypervisor, a type of software that directly
communicates with a physical server's disk space and CPU. The hypervisor monitors the
physical server's resources and keeps each virtual server independent and unaware of the other
virtual servers. It also relays resources from the physical server to the correct virtual server as it
runs applications. The biggest limitation of using full virtualization is that a hypervisor has its
own processing needs. This can slow down applications and impact server performance.

Para-Virtualization: Unlike full virtualization, para-virtualization involves the entire network
working together as a cohesive unit. Since each operating system on the virtual servers is aware
of the others in para-virtualization, the hypervisor does not need to use as much processing
power to manage the operating systems.

OS-Level Virtualization: Unlike full and para-virtualization, OS-level virtualization does not
use a hypervisor. Instead, the virtualization capability, which is part of the physical server's
operating system, performs all the tasks of a hypervisor. However, all the virtual servers must
run that same operating system in this server virtualization method.

Server virtualization is a cost-effective way to provide web hosting services and effectively
utilize existing resources in IT infrastructure. Without server virtualization, servers only use a
small part of their processing power. This results in servers sitting idle because the workload is
distributed to only a portion of the network’s servers. Data centers become overcrowded with
underutilized servers, causing a waste of resources and power.
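The core idea of full virtualization described above, a hypervisor that carves one physical server's resources into isolated virtual servers and refuses requests it cannot back, can be sketched as a toy model. The class and method names below are illustrative, not any vendor's API.

```python
# Toy model of server virtualization: a hypervisor partitions one
# physical server's CPU cores and RAM among isolated virtual servers.
# Hypervisor/create_vm are illustrative names, not a real product's API.

class Hypervisor:
    def __init__(self, total_cores: int, total_ram_gb: int):
        self.free_cores = total_cores
        self.free_ram_gb = total_ram_gb
        self.vms = {}  # name -> (cores, ram_gb); each VM runs independently

    def create_vm(self, name: str, cores: int, ram_gb: int) -> bool:
        """Carve out a virtual server; refuse if resources are exhausted."""
        if cores > self.free_cores or ram_gb > self.free_ram_gb:
            return False  # the physical server cannot back this VM
        self.free_cores -= cores
        self.free_ram_gb -= ram_gb
        self.vms[name] = (cores, ram_gb)
        return True

if __name__ == "__main__":
    hv = Hypervisor(total_cores=16, total_ram_gb=64)
    print(hv.create_vm("web", 4, 16))   # True
    print(hv.create_vm("db", 8, 32))    # True
    print(hv.create_vm("big", 8, 32))   # False: only 4 cores remain
```

A real hypervisor adds scheduling, overcommit, and isolation enforcement on top of this bookkeeping, but the allocation constraint is the same: virtual servers together cannot exceed the physical host's capacity.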

DESKTOP VIRTUALIZATION:

Desktop virtualization is a method of simulating a user workstation so it can be accessed from a
remotely connected device. By abstracting the user desktop in this way, organizations can allow
users to work from virtually anywhere with a network connection, using any desktop, laptop,
tablet, or smartphone to access enterprise resources without regard to the device or operating
system employed by the remote user.

Remote desktop virtualization is also a key component of digital workspaces. Virtual
desktop workloads run on desktop virtualization servers, which typically execute on virtual
machines (VMs) either at on-premises data centers or in the public cloud.

Since the user device is basically a display, keyboard, and mouse, a lost or stolen device
presents a reduced risk to the organization. All user data and programs exist on the desktop
virtualization server, not on client devices.
Remote desktop virtualization is typically based on a client/server model, where the
organization’s chosen operating system and applications run on a server located either in the
cloud or in a data center. In this model all interactions with users occur on a local device of the
user’s choosing, reminiscent of the so-called ‘dumb’ terminals popular on mainframes and early
Unix systems.

Benefits of Desktop Virtualization:


Resource Utilization: Since IT resources for desktop virtualization are concentrated in a data
center, resources are pooled for efficiency. The need to push OS and application updates to
end-user devices is eliminated, and virtually any desktop, laptop, tablet, or smartphone can be used
to access virtualized desktop applications. IT organizations can thus deploy less powerful and
less expensive client devices, since they are basically only used for input and output.

Remote Workforce Enablement: Since each virtual desktop resides in central servers, new user
desktops can be provisioned in minutes and become instantly available for new users to access.
Additionally, IT support resources can focus on issues on the virtualization servers with little
regard to the actual end-user device being used to access the virtual desktop. Finally, since all
applications are served to the client over a network, users have the ability to access their business
applications virtually anywhere there is internet connectivity. If a user leaves the organization,
the resources that were used for their virtual desktop can then be returned to centrally pooled
infrastructure.

Security: IT professionals rate security as their biggest challenge year after year. By removing
OS and application concerns from user devices, desktop virtualization enables centralized
security control, with hardware security needs limited to virtualization servers, and an emphasis
on identity and access management with role-based permissions that limit users only to those
applications and data they are authorized to access. Additionally, if an employee leaves an
organization there is no need to remove applications and data from user devices; any data on the
user device is ephemeral by design and does not persist when a virtual desktop session ends.

Types of Desktop Virtualization:
The three most popular types of desktop virtualization are Virtual desktop infrastructure (VDI),
Remote desktop services (RDS), and Desktop-as-a-Service (DaaS).

VDI simulates the familiar desktop computing model as virtual desktop sessions that run on VMs
either in an on-premises data center or in the cloud. Organizations that adopt this model manage the
desktop virtualization server as they would any other application server on-premises. Since all
end-user computing is moved from users back into the data center, the initial deployment of
servers to run VDI sessions can be a considerable investment, tempered by eliminating the need
to constantly refresh end-user devices.

RDS is often used where a limited number of applications need to be virtualized, rather than a full
Windows, Mac, or Linux desktop. In this model applications are streamed to the local device,
which runs its own OS. Because only applications are virtualized, RDS systems can offer a higher
density of users per VM.
DaaS shifts the burden of providing desktop virtualization to service providers, which greatly
alleviates the IT burden in providing virtual desktops. Organizations that wish to move IT
expenses from capital expense to operational expenses will appreciate the predictable monthly
costs that DaaS providers base their business model on.
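The VDI lifecycle described above, desktops provisioned in minutes from centrally pooled infrastructure and returned to the pool when a user leaves, can be sketched as a small model. `DesktopPool` and its methods are hypothetical names used only for illustration.

```python
# Sketch of the VDI provisioning idea: virtual desktops are drawn from
# centrally pooled infrastructure and returned when a user leaves.
# DesktopPool is a hypothetical illustration, not a vendor API.

class DesktopPool:
    def __init__(self, capacity: int):
        self.capacity = capacity   # how many sessions the servers can host
        self.assigned = {}         # user -> desktop session id
        self._next_id = 1

    def provision(self, user: str):
        """Assign a new virtual desktop session if capacity remains."""
        if user in self.assigned or len(self.assigned) >= self.capacity:
            return None
        desktop_id = f"vd-{self._next_id}"
        self._next_id += 1
        self.assigned[user] = desktop_id
        return desktop_id

    def release(self, user: str) -> None:
        """User leaves: return their desktop's resources to the pool."""
        self.assigned.pop(user, None)
```

Note that released capacity is immediately reusable, which mirrors the benefit described above: when an employee leaves, their desktop's resources go back to the shared pool rather than sitting on a decommissioned device.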

NETWORK VIRTUALIZATION:

Network Virtualization (NV) refers to abstracting network resources that were traditionally
delivered in hardware to software. NV can combine multiple physical networks to one virtual,
software-based network, or it can divide one physical network into separate, independent virtual
networks.
Network virtualization software allows network administrators to move virtual machines across
different domains without reconfiguring the network. The software creates a network overlay
that can run separate virtual network layers on top of the same physical network fabric.

Network virtualization is rewriting the rules for the way services are delivered, from the
software-defined data center (SDDC), to the cloud, to the edge. This approach moves networks
from static, inflexible, and inefficient to dynamic, agile, and optimized. Modern networks must
keep up with the demands of cloud-hosted, distributed apps and the increasing threats of
cybercriminals, while delivering the speed and agility needed for faster time to market for
applications. With network virtualization, provisioning the infrastructure to support a new
application no longer takes days or weeks; apps can be deployed or updated in minutes for rapid
time to value.

Benefits of Network Virtualization:

Network virtualization helps organizations achieve major advances in speed, agility, and security
by automating and simplifying many of the processes that go into running a data center network
and managing networking and security in the cloud. Here are some of the key benefits of
network virtualization:

 Reduce network provisioning time from weeks to minutes
 Achieve greater operational efficiency by automating manual processes
 Place and move workloads independently of physical topology
 Improve network security within the data center

One example of network virtualization is the virtual LAN (VLAN). A VLAN is a subsection of a
local area network (LAN) created with software that combines network devices into one group,
regardless of physical location. VLANs can improve the speed and performance of busy
networks and simplify changes or additions to the network.

VMware NSX Data Center is a network virtualization platform that delivers networking and
security components like firewalling, switching, and routing that are defined and consumed in
software. NSX takes an architectural approach built on scale-out network virtualization that
delivers consistent, pervasive connectivity and security for apps and data wherever they reside,
independent of underlying physical infrastructure.
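The VLAN mechanism mentioned above is defined by IEEE 802.1Q: membership is carried in a 4-byte tag (TPID 0x8100 followed by a priority/VLAN-ID field) inserted into the Ethernet frame right after the destination and source MAC addresses. A minimal sketch of that tag insertion:

```python
# Sketch of IEEE 802.1Q VLAN tagging: a 4-byte tag (TPID 0x8100 plus a
# 16-bit TCI holding priority and VLAN ID) is spliced into an Ethernet
# frame after the two 6-byte MAC addresses (the first 12 bytes).

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag carrying vlan_id into an Ethernet frame."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID is a 12-bit field (0-4095)")
    tci = (priority << 13) | vlan_id          # PCP(3) | DEI(1)=0 | VID(12)
    tag = b"\x81\x00" + tci.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]      # splice after the MACs

if __name__ == "__main__":
    # Dummy frame: 12 bytes of MACs, EtherType 0x0800 (IPv4), payload.
    frame = bytes(12) + b"\x08\x00" + b"payload"
    tagged = tag_frame(frame, vlan_id=100)
    print(tagged[12:16].hex())  # 81000064 -> TPID 0x8100, VID 100
```

Switches use the 12-bit VID to keep traffic from different virtual networks separate even though it crosses the same physical ports, which is exactly the "divide one physical network into independent virtual networks" case described above.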

DATA CENTER VIRTUALIZATION:

Data center virtualization systems are built with hypervisors, the special kind of software used to
set up and manage virtual machines (VMs). Hypervisors from top virtualization companies,
including Citrix, VMware, IBM, Microsoft, VirtualBox, XenServer, and others, allow users to
create VMs in the cloud, in a mixed environment, or on-premises. Large data centers have lots of
computing power and use VMs to spread out resources efficiently and get the most out of them.
When VMs are built, they are given all the parts of a real data center, including memory, storage,
and even operating systems.

Virtual machines (VMs) run as a simulated layer on top of a real data server or system, and each
VM is kept separate from the others. Type 1 hypervisors, also called "bare-metal" hypervisors,
work directly on the hardware. Type 2 hypervisors, also called "hosted" hypervisors, work on an
operating system.

Users can set up one or more virtual data centers by using hypervisors to manage virtual
machines (VMs) effectively, making the best use of resources. Once the VMs are running, data
center virtualization can virtualize many resources, including servers, storage, networking,
security, and management.

Making virtual copies of physical data center resources is called data center virtualization. This
includes all the hardware and software tools: servers, data storage devices, networks, operating
systems, applications, and platforms. Many digital businesses are looking for ways to save on
buying and maintaining hardware and software while using the newest technologies. With data
center virtualization technology, companies can fulfill these goals and gain many other benefits.
Here are some of the virtualized data center providers:

 Cisco UCS
 VMware vSphere
 Citrix
 Microsoft Azure
 Wipro
