CCDT Unit 4
What is Virtualization?
Virtualization is a technology that you can use to create virtual representations of servers, storage, networks, and other
physical machines. Virtual software mimics the functions of physical hardware to run multiple virtual machines
simultaneously on a single physical machine. Businesses use virtualization to use their hardware resources efficiently and get
greater returns from their investment. It also powers cloud computing services that help organizations manage infrastructure
more efficiently.
By using virtualization, you can interact with any hardware resource with greater flexibility. Physical servers consume
electricity, take up storage space, and need maintenance. You are often limited by physical proximity and network design if
you want to access them. Virtualization removes all these limitations by abstracting physical hardware functionality into
software. You can manage, maintain, and use your hardware infrastructure like an application on the web.
Virtualization example
Consider a company that needs servers for three functions:
The email application requires more storage capacity and a Windows operating system.
The customer-facing application requires a Linux operating system and high processing power to handle large
volumes of website traffic.
The internal business application requires iOS and more internal memory (RAM).
To meet these requirements, the company sets up three different dedicated physical servers for each application. The
company must make a high initial investment and perform ongoing maintenance and upgrades for one machine at a time.
The company also cannot optimize its computing capacity. It pays 100% of the servers’ maintenance costs but uses only a
fraction of their storage and processing capacities.
With virtualization, the company creates three digital servers, or virtual machines, on a single physical server. It specifies the
operating system requirements for the virtual machines and can use them like the physical servers. However, the company
now has less hardware and fewer related expenses.
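The savings in the example above can be made concrete with a small back-of-the-envelope calculation. All the figures below (utilization, per-server cost) are assumed for illustration, not real pricing:

```python
# Illustrative cost comparison: three dedicated servers vs. three VMs on one host.
# All numbers here are assumed for the example, not real pricing.

dedicated_servers = 3
utilization_per_server = 0.25          # each app uses ~25% of its server
cost_per_server = 4000                 # assumed annual cost per physical server

# Without virtualization: pay for 3 machines, use only a fraction of each.
cost_dedicated = dedicated_servers * cost_per_server
effective_use_dedicated = utilization_per_server   # 75% of each server sits idle

# With virtualization: one host runs all three workloads as VMs.
cost_virtualized = 1 * cost_per_server
effective_use_virtualized = dedicated_servers * utilization_per_server  # ~75% used

savings = cost_dedicated - cost_virtualized
print(f"Dedicated: ${cost_dedicated}, Virtualized: ${cost_virtualized}, saved ${savings}")
```

The same hardware budget now carries three times the useful work, which is the "greater returns from their investment" claim in numbers.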
Infrastructure as a service
The company can go one step further and use a cloud instance or virtual machine from a cloud computing provider such as
AWS. AWS manages all the underlying hardware, and the company can request server resources with varying
configurations. All the applications run on these virtual servers without the users noticing any difference. Server
management also becomes easier for the company’s IT team.
What is virtualization?
To properly understand Kernel-based Virtual Machine (KVM), you first need to understand some basic concepts
in virtualization. Virtualization is a process that allows a computer to share its hardware resources with multiple digitally
separated environments. Each virtualized environment runs within its allocated resources, such as memory, processing
power, and storage. With virtualization, organizations can switch between different operating systems on the same server
without rebooting.
A virtual machine is a software-defined computer that runs on a physical computer with a separate operating system and
computing resources. The physical computer is called the host machine and virtual machines are guest machines. Multiple
virtual machines can run on a single physical machine. Virtual machines are abstracted from the computer hardware by a
hypervisor.
Hypervisor
The hypervisor is a software component that manages multiple virtual machines in a computer. It ensures that each virtual
machine gets the allocated resources and does not interfere with the operation of other virtual machines. There are two types
of hypervisors.
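The hypervisor's bookkeeping role described above can be sketched in a few lines: each virtual machine receives its allocated share, and no guest can claim resources that would interfere with another. The class and method names below are illustrative, not a real hypervisor API:

```python
# Minimal sketch of hypervisor resource allocation: grants are tracked so
# that the sum of allocations never exceeds the host's physical capacity.

class Hypervisor:
    def __init__(self, total_memory_mb):
        self.total_memory_mb = total_memory_mb
        self.allocations = {}          # vm_name -> memory granted (MB)

    def create_vm(self, name, memory_mb):
        used = sum(self.allocations.values())
        if used + memory_mb > self.total_memory_mb:
            # Refusing the request is how one VM is kept from
            # interfering with the others' allocated resources.
            raise MemoryError(f"not enough memory for {name}")
        self.allocations[name] = memory_mb
        return name

    def free_memory(self):
        return self.total_memory_mb - sum(self.allocations.values())

host = Hypervisor(total_memory_mb=16384)
host.create_vm("mail", 4096)
host.create_vm("web", 8192)
print(host.free_memory())   # 4096 MB left for further guests
```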
Type 1 hypervisor
A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor program installed directly on the computer’s hardware instead
of on top of an operating system. Therefore, type 1 hypervisors have better performance and are commonly used by enterprise
applications. KVM converts the Linux kernel into a type 1 hypervisor that can host multiple virtual machines.
Type 2 hypervisor
Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system. Type 2 hypervisors are suitable
for end-user computing.
Efficient resource use
Virtualization improves the utilization of hardware resources in your data center. For example, instead of running one server on one
computer system, you can create a virtual server pool on the same computer system by using and returning servers to the
pool as required. Having fewer underlying physical servers frees up space in your data center and saves money on electricity,
generators, and cooling appliances.
Automated IT management
Now that physical computers are virtual, you can manage them by using software tools. Administrators create deployment
and configuration programs to define virtual machine templates. You can duplicate your infrastructure repeatedly and
consistently and avoid error-prone manual configurations.
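The template idea above can be sketched briefly: a template is defined once and cloned repeatedly, so every instance gets an identical, error-free configuration. The template fields and package names are made up for the illustration:

```python
# Sketch of template-driven VM deployment: duplicate infrastructure
# repeatedly and consistently instead of configuring each machine by hand.

import copy

web_server_template = {
    "os": "Linux",
    "cpus": 2,
    "memory_mb": 4096,
    "packages": ["nginx", "certbot"],
}

def deploy_from_template(template, name):
    vm = copy.deepcopy(template)       # each clone is fully independent
    vm["name"] = name
    return vm

fleet = [deploy_from_template(web_server_template, f"web-{i}") for i in range(3)]
print([vm["name"] for vm in fleet])    # ['web-0', 'web-1', 'web-2']
```

Because every instance is a deep copy of one vetted template, a later change to one clone cannot silently drift into the others.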
Faster disaster recovery
When events such as natural disasters or cyberattacks negatively affect business operations, regaining access to IT
infrastructure and replacing or fixing a physical server can take hours or even days. By contrast, the process takes minutes
with virtualized environments. This prompt response significantly improves resiliency and facilitates business continuity so
that operations can continue as scheduled.
Virtualization uses specialized software, called a hypervisor, to create several cloud instances or virtual machines on one
physical computer.
After you install virtualization software on your computer, you can create one or more virtual machines. You can access the
virtual machines in the same way that you access other applications on your computer. Your computer is called the host, and
the virtual machine is called the guest. Several guests can run on the host. Each guest has its own operating system, which
can be the same or different from the host operating system.
From the user’s perspective, the virtual machine operates like a typical server. It has settings, configurations, and installed
applications. Computing resources, such as central processing units (CPUs), Random Access Memory (RAM), and storage
appear the same as on a physical server. You can also configure and update the guest operating systems and their
applications as necessary without affecting the host operating system.
You can use virtualization technology to get the functions of many different types of physical infrastructure and all the
benefits of a virtualized environment. You can go beyond virtual machines to create a collection of virtual resources in your
virtual environment.
Server virtualization
Server virtualization is a process that partitions a physical server into multiple virtual servers. It is an efficient and cost-
effective way to use server resources and deploy IT services in an organization. Without server virtualization, physical
servers use only a small amount of their processing capacities, which leaves devices idle.
Storage virtualization
Storage virtualization combines the functions of physical storage devices such as network attached storage (NAS) and
storage area network (SAN). You can pool the storage hardware in your data center, even if it is from different vendors or of
different types. Storage virtualization uses all your physical data storage and creates a large unit of virtual storage that you
can assign and control by using management software. IT administrators can streamline storage activities, such as archiving,
backup, and recovery, because they can combine multiple network storage devices virtually into a single storage device.
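The pooling step described above can be sketched as follows: capacity from devices of different vendors and types is combined into one virtual unit, and the administrator assigns volumes from the pool. The device data is invented for the example:

```python
# Sketch of storage virtualization: heterogeneous physical devices are
# pooled into a single virtual storage unit managed through software.

physical_devices = [
    {"vendor": "A", "type": "NAS", "capacity_gb": 500},
    {"vendor": "B", "type": "SAN", "capacity_gb": 2000},
    {"vendor": "C", "type": "NAS", "capacity_gb": 1500},
]

class VirtualStoragePool:
    def __init__(self, devices):
        # The pool hides vendor and device type; only capacity matters here.
        self.capacity_gb = sum(d["capacity_gb"] for d in devices)
        self.assigned_gb = 0

    def assign(self, size_gb):
        if self.assigned_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.assigned_gb += size_gb
        return size_gb

pool = VirtualStoragePool(physical_devices)
pool.assign(1200)                            # carve a volume from the pool
print(pool.capacity_gb, pool.assigned_gb)    # 4000 1200
```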
Network virtualization
Any computer network has hardware elements such as switches, routers, and firewalls. An organization with offices in
multiple geographic locations can have several different network technologies working together to create its enterprise
network. Network virtualization is a process that combines all of these network resources to centralize administrative tasks.
Administrators can adjust and control these elements virtually without touching the physical components, which greatly
simplifies network management.
Software-defined networking
Software-defined networking (SDN) centralizes traffic routing by separating routing management (the control plane) from data forwarding (the data plane) in the
physical environment. For example, you can program your system to prioritize your video call traffic over application traffic
to ensure consistent call quality in all online meetings.
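The video-call example above can be sketched as a priority scheduling policy: a programmable controller decides the order in which packets are sent, so call traffic leaves before ordinary application traffic. The traffic classes and priority values are assumed for the illustration:

```python
# Sketch of an SDN-style forwarding policy: a software-defined priority
# table decides send order, independent of the physical switches.

import heapq

PRIORITY = {"video_call": 0, "application": 1}   # lower number = sent first

def schedule(packets):
    """Return packets in the order a priority-aware controller sends them."""
    # Arrival index breaks ties so equal-priority packets keep FIFO order.
    queue = [(PRIORITY[kind], i, kind) for i, kind in enumerate(packets)]
    heapq.heapify(queue)
    return [kind for _, _, kind in (heapq.heappop(queue) for _ in range(len(queue)))]

arrivals = ["application", "video_call", "application", "video_call"]
print(schedule(arrivals))   # ['video_call', 'video_call', 'application', 'application']
```

Changing the policy is a one-line edit to the table, which is the point of moving routing decisions into software.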
Network function virtualization technology combines the functions of network appliances, such as firewalls, load balancers,
and traffic analyzers that work together, to improve network performance.
Data virtualization
Modern organizations collect data from several sources and store it in different formats. They might also store data in
different places, such as in a cloud infrastructure and an on-premises data center. Data virtualization creates a software layer
between this data and the applications that need it. Data virtualization tools process an application’s data request and return
results in a suitable format. Thus, organizations use data virtualization solutions to increase flexibility for data integration
and support cross-functional data analysis.
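The software layer described above can be sketched as follows: two sources hold data in different formats (JSON "in the cloud", CSV "on premises"), and one query function returns everything in a uniform shape. The records and field names are invented for the example:

```python
# Sketch of a data virtualization layer: applications query one interface
# and never see that the underlying sources differ in format and location.

import json

cloud_source = json.dumps([{"id": 1, "name": "Ana"}])   # JSON in the cloud
onprem_source = "2,Ben\n3,Cho"                          # CSV on-premises

def query_customers():
    """Return all customer records as a single list of dicts."""
    records = json.loads(cloud_source)
    for line in onprem_source.splitlines():
        cid, name = line.split(",")
        records.append({"id": int(cid), "name": name})
    return records

print(query_customers())
```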
Application virtualization
Application virtualization abstracts the functions of applications so that they can run on operating systems other than the operating systems
for which they were designed. For example, users can run a Microsoft Windows application on a Linux machine without
changing the machine configuration. To achieve application virtualization, follow these practices:
Application streaming – Users stream the application from a remote server, so it runs only on the end user's device
when needed.
Server-based application virtualization – Users can access the remote application from their browser or client
interface without installing it.
Local application virtualization – The application code is shipped with its own environment to run on all operating
systems without changes.
Desktop virtualization
Most organizations have nontechnical staff that use desktop operating systems to run common business applications. For
instance, you might have the following staff:
A customer service team that requires a desktop computer with Windows 10 and customer-relationship
management software
A marketing team that requires Windows Vista for sales applications
You can use desktop virtualization to run these different desktop operating systems on virtual machines, which your teams
can access remotely. This type of virtualization makes desktop management efficient and secure, saving money on desktop
hardware. The following are types of desktop virtualization.
Virtual desktop infrastructure runs virtual desktops on a remote server. Your users can access them by using client devices.
In local desktop virtualization, you run the hypervisor on a local computer and create a virtual computer with a different
operating system. You can switch between your local and virtual environment in the same way you can switch between
applications.
What is cloud computing?
Cloud computing is the on-demand delivery of computing resources over the internet with pay-as-you-go pricing. Instead of
buying, owning, and maintaining a physical data center, you can access technology services, such as computing power,
storage, and databases, as you need them from a cloud provider.
Virtualization technology makes cloud computing possible. Cloud providers set up and maintain their own data centers.
They create different virtual environments that use the underlying hardware resources. You can then program your system to
access these cloud resources by using APIs. Your infrastructure needs can be met as a fully managed service.
Setting up virtualization is not simple. Your computer's operating system is configured for the particular hardware it runs on,
so running a different operating system directly on the same hardware is neither feasible nor easy.
To do this, you need a hypervisor. What is the role of the hypervisor? It acts as a bridge between the hardware and the
virtual operating system, allowing both to function smoothly.
There are five commonly used implementation levels of virtualization in cloud computing. Let us now look closely at each of
these levels.
1) Instruction Set Architecture (ISA) Level
ISA virtualization works through ISA emulation. It is used to run legacy code written for a different hardware
configuration; such code runs on a virtual machine that emulates the original ISA. With this approach, binary code that
originally needed additional layers to run can execute on x86 machines, and it can also be adapted to run on x64 machines.
With ISA emulation, the virtual machine becomes hardware agnostic.
For basic emulation, an interpreter is needed: it reads the source instructions and converts them into a format that the host
hardware can execute.
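The interpreter idea at this level can be sketched with a toy example. The "guest" ISA below (LOAD/ADD/HALT) is invented purely for illustration; a real emulator would decode actual machine instructions:

```python
# Minimal sketch of ISA-level emulation: an interpreter carries out
# "guest" instructions on the host, so legacy code never needs the
# original hardware.

def emulate(program):
    """Interpret a list of (opcode, operand) pairs; return the accumulator."""
    acc = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return acc

# A legacy program written for the imaginary ISA:
legacy_program = [("LOAD", 10), ("ADD", 32), ("HALT", 0)]
print(emulate(legacy_program))   # 42
```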
2) Hardware Abstraction Layer (HAL) Level
True to its name, HAL-level virtualization is performed at the level of the hardware and makes use of a hypervisor. The
virtual machine is formed at this level, and the virtualization process manages the hardware, allowing each hardware
component (the input/output devices, the memory, the processor, and so on) to be virtualized.
Multiple users can thereby share the same hardware through multiple virtualization instances running at the same time.
This level is mostly used in cloud-based infrastructure.
3) Operating System Level
At the level of the operating system, the virtualization model creates an abstraction layer between the operating system and
the applications. Each environment is an isolated container on the operating system and the physical server, sharing the
underlying software and hardware while functioning as its own server.
This level is used when several users must run workloads on shared hardware without interfering with one another. Each
user gets an isolated virtual environment with dedicated virtual hardware resources, so there is no question of any conflict.
4) Library Level
Going through the full operating system is often cumbersome, so applications instead use APIs exported by user-level
libraries. These APIs are well documented, which is why the library virtualization level is preferred in these scenarios. API
hooks make this possible by controlling the communication link between the application and the system.
5) Application Level
Application-level virtualization is used when only a single application, rather than the entire platform environment, needs to
be virtualized. It is the last of the implementation levels of virtualization in cloud computing.
This level is generally used for applications written in high-level languages: the application runs on top of a virtualization
layer (a high-level-language virtual machine), which in turn runs on the operating system. Programs compiled to the virtual
machine's instruction set then run seamlessly on any host that provides the virtual machine.
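The high-level-language VM idea can be sketched with a tiny stack machine. The bytecode format below is invented for the illustration; real examples of this level are runtimes such as the JVM:

```python
# Sketch of application-level virtualization: compiled "bytecode" runs on
# a small stack machine, so the same program runs on any host that has
# the virtual machine.

def run_bytecode(code):
    """Execute stack-machine bytecode: PUSH n, ADD, MUL."""
    stack = []
    for op in code:
        if op[0] == "PUSH":
            stack.append(op[1])
        elif op[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 compiled to bytecode:
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run_bytecode(program))   # 20
```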
Conclusion
There are in total five implementation levels of virtualization in cloud computing. However, not every enterprise uses all of
them. The level used depends on how the company works and on its preference for a given level of virtualization. Companies
use virtual machines to develop and test across multiple platforms. Cloud-based applications are on the rise, making
virtualization a must-have for enterprises worldwide.
In general, there are three typical classes of VM architecture. Figure 3.1 shows the architectures of a machine before and
after virtualization. Before virtualization, the operating system manages the hardware. After virtualization, a virtualization
layer is inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible for
converting portions of the real hardware into virtual hardware. Therefore, different operating systems such as Linux and
Windows can run on the same physical machine, simultaneously. Depending on the position of the virtualization layer, there
are several classes of VM architectures, namely the hypervisor architecture, para-virtualization, and host-based
virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform the same
virtualization operations.
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices like CPU, memory, disk and
network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer
is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications.
Depending on the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V. Or it can
assume a monolithic hypervisor architecture like the VMware ESX for server virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management and
processor scheduling). The device drivers and other changeable components are outside the hypervisor. A monolithic
hypervisor implements all the aforementioned functions, including those of the device drivers. Therefore, the size of the
hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor. Essentially, a hypervisor must
be able to convert physical devices into virtual resources dedicated for the deployed VM to use.
The Xen architecture
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-kernel hypervisor, which
separates the policy from the mechanism. The Xen hypervisor implements all the mechanisms, leaving the policy to be
handled by Domain 0, as shown in Figure 3.5. Xen does not include any device drivers natively [7]. It just provides a
mechanism by which a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is
kept rather small. Xen provides a virtual environment located between the hardware and the OS. A number of vendors are in
the process of developing commercial Xen hypervisors, among them are Citrix XenServer [62] and Oracle VM [42].
The core components of a Xen system are the hypervisor, kernel, and applications. The organization of the three
components is important. Like other virtualization systems, many guest OSes can run on top of the hypervisor. However, not
all guest OSes are created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain 0, and the others are called Domain
U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without any file system drivers being
available. Domain 0 is designed to access hardware directly and manage devices. Therefore, one of the responsibilities of
Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U domains).
For example, Xen is based on Linux and its security level is C2. Its management VM is named Domain 0, which has the
privilege to manage other VMs implemented on the same host. If Domain 0 is compromised, the hacker can control the
entire system. So, in the VM system, security policies are needed to improve the security of Domain 0. Domain 0, behaving
as a VMM, allows users to create, copy, save, read, modify, share, migrate, and roll back VMs as easily as manipulating a
file, which flexibly provides tremendous benefits for users. Unfortunately, it also brings a series of security problems during
the software life cycle and data lifetime.
Traditionally, a machine’s lifetime can be envisioned as a straight line where the current state of the machine is a point
that progresses monotonically as the software executes. During this time, configuration changes are made, software is
installed, and patches are applied. In such an environment, the VM state is akin to a tree: At any point, execution can go
into N different branches where multiple instances of a VM can exist at any point in this tree at any given time. VMs are
allowed to roll back to previous states in their execution (e.g., to fix configuration errors) or rerun from the same point many
times (e.g., as a means of distributing dynamic content or circulating a “live” system image).
Depending on implementation technologies, hardware virtualization can be classified into two categories: full
virtualization and host-based virtualization. Full virtualization does not need to modify the host OS. It relies on binary
translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and their
applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used.
A virtualization software layer is built between the host OS and guest OS. These two classes of VM architecture are
introduced next.
Full virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical instructions are discovered and
replaced with traps into the VMM to be emulated by software. Both the hypervisor and VMM approaches are considered full
virtualization. Why are only critical instructions trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security of the system, but critical
instructions do. Therefore, running noncritical instructions on hardware not only can promote efficiency, but also can ensure
system security.
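The split described above can be sketched as a classification pass: noncritical instructions run "directly", while critical ones are trapped into the VMM to be emulated in software. The instruction names and the critical set are illustrative stand-ins, not real x86 semantics:

```python
# Sketch of trap-and-emulate in full virtualization: only instructions
# that could control hardware or threaten system security are trapped;
# everything else runs at native speed.

CRITICAL = {"write_cr3", "disable_interrupts"}   # assumed sensitive instructions

def execute_stream(instructions):
    """Return (direct, trapped) lists showing how each instruction was handled."""
    direct, trapped = [], []
    for ins in instructions:
        if ins in CRITICAL:
            trapped.append(ins)        # VMM emulates this in software
        else:
            direct.append(ins)         # runs on hardware directly
    return direct, trapped

stream = ["add", "mov", "write_cr3", "mul", "disable_interrupts"]
direct, trapped = execute_stream(stream)
print(direct, trapped)
```

Because only two of the five instructions here are trapped, most of the stream avoids the expensive emulation path, which is exactly the efficiency argument made above.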
This approach was implemented by VMware and many other software companies. As shown in Figure 3.6, VMware puts the
VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and identifies the privileged, control-
and behavior-sensitive instructions. When these instructions are identified, they are trapped into the VMM, which emulates
the behavior of these instructions. The method used in this emulation is called binary translation. Therefore, full
virtualization combines binary translation and direct execution. The guest OS is completely decoupled from the underlying
hardware. Consequently, the guest OS is unaware that it is being virtualized.
The performance of full virtualization may not be ideal, because it involves binary translation, which is rather time-
consuming. In particular, the full virtualization of I/O-intensive applications is a big challenge. Binary translation
employs a code cache to store translated hot instructions to improve performance, but it increases the cost of memory usage.
At the time of this writing, the performance of full virtualization on the x86 architecture is typically 80 percent to 97 percent
that of the host machine.
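The code cache mentioned above can be sketched as memoization: translating a block is the expensive step, so hot blocks are translated once and the result is reused. The "translation" below is a stand-in string, not real binary translation:

```python
# Sketch of a binary-translation code cache: repeated (hot) guest blocks
# are translated only once, trading memory for translation time.

translation_cache = {}
translations_performed = 0

def translate_block(block):
    """Translate a guest code block, reusing the cache when possible."""
    global translations_performed
    if block not in translation_cache:
        translations_performed += 1                     # the costly step
        translation_cache[block] = f"host<{block}>"     # stand-in translation
    return translation_cache[block]

for block in ["loop_body", "loop_body", "loop_body", "epilogue"]:
    translate_block(block)
print(translations_performed)   # 2: 'loop_body' translated once, then cached
```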
Host-based virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS. This host OS is still responsible for
managing the hardware. The guest OSes are installed and run on top of the virtualization layer. Dedicated applications may
run on the VMs. Certainly, some other applications
can also run with the host OS directly. This host-based architecture has some distinct advantages, as enumerated next. First,
the user can install this VM architecture without modifying the host OS. The virtualizing software can rely on the host OS to
provide device drivers and other low-level services. This will simplify the VM design and ease its deployment.
Second, the host-based approach appeals to many host machine configurations. Compared to the hypervisor/VMM
architecture, the performance of the host-based architecture may also be low. When an application requests hardware access,
it involves four layers of mapping which downgrades performance significantly. When the ISA of a guest OS is different
from the ISA of the underlying hardware, binary translation must be adopted. Although the host-based architecture has
flexibility, the performance is too low to be useful in practice.
Para-virtualization
Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs requiring
substantial OS modifications in user applications. Performance degradation is a critical issue of a virtualized system. No one
wants to use a VM if it is much slower than using a physical machine. The virtualization layer can be inserted at different
positions in a machine software stack. However, para-virtualization attempts to reduce the virtualization overhead, and thus
improve performance by modifying only the guest OS kernel.
Figure 3.7 illustrates the concept of a paravirtualized VM architecture. The guest operating systems are para-virtualized.
They are assisted by an intelligent compiler to replace the nonvirtualizable OS instructions by hypercalls as illustrated in
Figure 3.8. The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower the ring
number, the higher the privilege of instruction being executed. The OS is responsible for managing the hardware and the
privileged instructions to execute at Ring 0, while user-level applications run at Ring 3. The best example of para-
virtualization is the KVM to be described below.
Although para-virtualization reduces the overhead, it has incurred other problems. First, its compatibility and portability
may be in doubt, because it must support the unmodified OS as well. Second, the cost of maintaining para-virtualized OSes
is high, because they may require deep OS kernel modifications. Finally, the performance advantage of para-virtualization
varies greatly due to workload variations. Compared with full virtualization, para-virtualization is relatively easy and more
practical. The main problem in full virtualization is its low performance in binary translation. To speed up binary translation
is difficult. Therefore, many virtualization products employ the para-virtualization architecture. The popular Xen, KVM, and
VMware ESX are good examples.
KVM (Kernel-Based Virtual Machine)
KVM is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel. Memory management and scheduling
activities are carried out by the existing Linux kernel. The KVM does the rest, which makes it simpler than the hypervisor
that controls the entire machine. KVM is a hardware-assisted para-virtualization tool, which improves performance and
supports unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive instructions at runtime,
para-virtualization handles these instructions at compile time. The guest OS kernel is modified to replace the privileged and
sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture.
The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies that the guest OS may not be
able to execute some privileged and sensitive instructions. The privileged instructions are implemented by hypercalls to the
hypervisor. After replacing the instructions with hypercalls, the modified guest OS emulates the behavior of the original
guest OS. On a UNIX system, a system call involves an interrupt or service routine. The hypercalls apply a dedicated
service routine in Xen.
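The hypercall mechanism can be sketched as follows: where native kernel code would execute a privileged instruction, the modified guest kernel calls into the hypervisor, which performs the operation on the guest's behalf through a dedicated service routine. All class, method, and hypercall names below are invented for the illustration:

```python
# Sketch of para-virtualized hypercalls: privileged operations are
# requested from the hypervisor rather than executed directly.

class TinyHypervisor:
    def __init__(self):
        self.page_tables = {}

    def hypercall(self, name, *args):
        # Dedicated service routines, loosely analogous to Xen's
        # hypercall handlers.
        if name == "update_page_table":
            guest, mapping = args
            self.page_tables.setdefault(guest, {}).update(mapping)
            return "ok"
        raise ValueError(f"unknown hypercall {name}")

class ParavirtGuest:
    def __init__(self, name, hypervisor):
        self.name = name
        self.hv = hypervisor

    def map_page(self, virtual, physical):
        # Where native code would execute a privileged instruction,
        # the modified guest kernel issues a hypercall instead.
        return self.hv.hypercall("update_page_table", self.name, {virtual: physical})

hv = TinyHypervisor()
guest = ParavirtGuest("domU1", hv)
print(guest.map_page(0x1000, 0x9000))   # ok
```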
Example 3.3 VMware ESX Server for Para-Virtualization
VMware pioneered the software market for virtualization. The
company has developed virtualization tools for desktop systems and servers as well as virtual infrastructure for large data
centers. ESX is a VMM or a hypervisor for bare-metal x86 symmetric multiprocessing (SMP) servers. It accesses hardware
resources such as I/O directly and has complete resource management control. An ESX-enabled server consists of four
components: a virtualization layer, a resource manager, hardware interface components, and a service console, as shown in
Figure 3.9. To improve performance, the ESX server employs a para-virtualization architecture in which the VM kernel
interacts directly with the hardware without involving the host OS.
The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and disk controllers, and human
interface devices. Every VM has its own set of virtual hardware resources. The resource manager allocates CPU, memory,
disk, and network bandwidth and maps them to the virtual hardware resource set of each VM created. Hardware interface
components are the device drivers and the
VMware ESX Server File System. The service console is responsible for booting the system, initiating the execution of the
VMM and resource manager, and relinquishing control to those layers. It also facilitates the process for system
administrators.
What is a hypervisor?
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs).
A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory
and processing.
Benefits of hypervisors
There are several benefits to using a hypervisor that hosts multiple virtual machines:
Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers.
Efficiency: Running several virtual machines on one physical machine makes more efficient use of that server. It is
more cost- and energy-efficient to run several virtual machines on one physical machine than to run multiple
underutilized physical machines for the same task.
Flexibility: Bare-metal hypervisors allow operating systems and their associated applications to run on a variety of
hardware types, because the hypervisor separates the OS from the underlying hardware so that the software no longer
relies on specific devices or drivers.
Portability: Hypervisors allow multiple operating systems to reside on the same physical server (host machine).
Because the virtual machines that the hypervisor runs are independent from the physical machine, they are portable.
IT teams can shift workloads and allocate networking, memory, storage, and processing resources across multiple
servers as needed, moving from machine to machine or platform to platform. When an application needs more
processing power, the virtualization software allows it to seamlessly access additional machines.
Why use a hypervisor?
Hypervisors make it possible to use more of a system’s available resources and provide greater IT mobility since the
guest VMs are independent of the host hardware. This means they can be easily moved between different servers.
Because multiple virtual machines can run off of one physical server with a hypervisor, a hypervisor reduces:
Space
Energy
Maintenance requirements
Types of hypervisors
There are two main hypervisor types, referred to as “Type 1” (or “bare metal”) and “Type 2” (or “hosted”). A type 1
hypervisor acts like a lightweight operating system and runs directly on the host’s hardware, while a type 2 hypervisor runs
as a software layer on top of an operating system, like other computer programs do.
The most commonly deployed type of hypervisor is the type 1 or bare-metal hypervisor, where virtualization software is
installed directly on the hardware where the operating system is normally installed. Because bare-metal hypervisors are
isolated from the attack-prone operating system, they are extremely secure. In addition, they generally perform better and
more efficiently than hosted hypervisors. For these reasons, most enterprise companies choose bare-metal hypervisors.
While bare-metal hypervisors run directly on the computing hardware, hosted hypervisors run on top of the operating system
(OS) of the host machine. Although hosted hypervisors run within the OS, additional (and different) operating systems can
be installed on top of the hypervisor. The downside of hosted hypervisors is that latency is higher than bare-metal
hypervisors. This is because communication between the hardware and the hypervisor must pass through the extra layer of
the OS. Hosted hypervisors are sometimes known as client hypervisors because they are most often used with end users and
for software testing, where the higher latency is less of a concern.
Container vs hypervisor
Containers and hypervisors are both involved in making applications faster and more efficient, but they achieve this
in different ways.
Hypervisors:
Allow an operating system to run independently from the underlying hardware through the use of virtual
machines.
Can run multiple operating systems on top of one server (bare-metal hypervisor) or be installed on top of one
host operating system (hosted hypervisor).
Containers:
Allow applications to run independently of an operating system.
Can run on any operating system—all they need is a container engine to run.
Are extremely portable since in a container, an application has everything it needs to run.
Hypervisors and containers are used for different purposes. Hypervisors are used to create and run virtual machines (VMs),
which each have their own complete operating systems, securely isolated from the others. In contrast to VMs, containers
package up just an app and its related services. This makes them more lightweight and portable than VMs, so they are often
used for fast, flexible application development and deployment.
VMware
VMware's hypervisor technology centers on the ESXi hypervisor and the vSphere virtualization platform:
1. ESXi Hypervisor:
ESXi is VMware's enterprise-grade bare-metal hypervisor, designed to run directly on physical server
hardware without the need for a host operating system.
It is a type-1 hypervisor, meaning it has direct access to hardware resources and provides high
performance and resource efficiency.
ESXi is optimized for stability, security, and scalability, making it suitable for a wide range of use cases,
from small businesses to large enterprises and data centers.
2. vSphere Virtualization Platform:
vSphere is VMware's comprehensive virtualization platform, built around the ESXi hypervisor.
It includes additional components such as vCenter Server for centralized management, vSAN for
software-defined storage, and NSX for network virtualization.
vSphere provides advanced features such as live migration (VMotion), high availability (HA),
distributed resource scheduler (DRS), and fault tolerance (FT), enhancing flexibility, availability, and
performance.
3. CPU Virtualization:
VMware employs various techniques for CPU virtualization to efficiently share physical CPU resources
among multiple VMs:
Binary Translation: VMware primarily uses binary translation for CPU virtualization. This
involves intercepting CPU instructions from the guest operating system and translating them
into instructions that can be executed on the host CPU.
Hardware Virtualization Extensions: VMware supports hardware virtualization extensions
(Intel VT-x or AMD-V), which offload certain virtualization tasks to the CPU hardware,
improving performance and efficiency.
These techniques allow VMware to achieve high performance, compatibility, and scalability across a
wide range of hardware platforms.
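The trap-and-emulate idea behind both binary translation and the hardware extensions can be illustrated with a toy dispatcher (the instruction names and trap behavior here are hypothetical, not VMware internals):

```python
# Toy trap-and-emulate dispatcher (illustrative only). Unprivileged
# instructions run "directly" on the host CPU; privileged ones trap to
# the hypervisor, which emulates them on the guest's behalf.

PRIVILEGED = {"HLT", "OUT", "RDMSR"}  # hypothetical privileged opcodes

def execute_guest(instructions):
    trace = []
    for op in instructions:
        if op in PRIVILEGED:
            # The hypervisor intercepts and emulates the instruction.
            trace.append(f"TRAP: emulated {op}")
        else:
            # Safe instructions execute directly, at native speed.
            trace.append(f"DIRECT: {op}")
    return trace

trace = execute_guest(["MOV", "ADD", "OUT", "HLT"])
```

Hardware extensions such as VT-x move this intercept-and-dispatch step into the CPU itself, which is why they reduce overhead.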
4. Resource Management:
VMware provides robust resource management capabilities, allowing administrators to allocate CPU,
memory, storage, and network resources to VMs based on workload requirements.
Features like DRS (Distributed Resource Scheduler) automatically balance resource utilization across
hosts in a vSphere cluster, ensuring optimal performance and availability.
Resource pools and resource limits enable granular control over resource allocation, helping to prioritize
critical workloads and prevent resource contention.
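As a rough sketch of what a DRS-style scheduler does, the following greedy heuristic places each VM on the host with the most free CPU (host names, capacities, and demands are hypothetical; VMware's actual algorithm also weighs memory, affinity rules, and migration cost):

```python
# Greedy DRS-style placement sketch (illustrative, not VMware's algorithm).

def place_vms(hosts, vms):
    """hosts: {name: capacity_mhz}; vms: [(name, demand_mhz)] -> {vm: host}."""
    free = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms, key=lambda v: -v[1]):  # biggest VMs first
        host = max(free, key=free.get)                   # least-loaded host
        if free[host] < demand:
            raise RuntimeError(f"no host can fit {vm}")
        free[host] -= demand
        placement[vm] = host
    return placement

placement = place_vms({"esx1": 10000, "esx2": 8000},
                      [("web", 4000), ("db", 6000), ("cache", 2000)])
```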
5. Security:
VMware hypervisor technology includes built-in security features to protect virtualized environments
from threats and vulnerabilities.
Features such as Secure Boot, Trusted Platform Module (TPM) support, and hypervisor lockdown mode
enhance platform integrity and prevent unauthorized access.
VMware's security solutions, such as vSphere Security Hardening Guide and vSphere Trust Authority,
provide best practices and tools for securing virtualized infrastructure.
6. Compatibility and Ecosystem:
VMware hypervisor technology is widely compatible with various operating systems, applications, and
hardware platforms.
VMware provides extensive support for guest operating systems, including Windows, Linux, and other
popular OSes, ensuring broad compatibility and interoperability.
The VMware ecosystem includes a vast array of third-party integrations, management tools, and
solutions from VMware partners, enhancing the capabilities and flexibility of VMware-based
environments.
Overall, VMware's hypervisor technology, exemplified by ESXi and the vSphere platform, offers a powerful and feature-
rich solution for virtualizing CPU resources and building resilient, scalable, and secure virtualized infrastructure. With its
robust features, extensive ecosystem, and proven track record, VMware remains a leading choice for organizations seeking
to leverage the benefits of virtualization.
KVM
Kernel-based Virtual Machine (KVM) is widely used in cloud computing environments due to its open-source nature,
performance, scalability, and flexibility. Let's explore how KVM is utilized in cloud computing:
- KVM is often used as the underlying hypervisor in IaaS cloud platforms, where customers can provision and manage
virtual machines on demand.
- Cloud providers leverage KVM to create virtualized environments where users can deploy VMs with customizable
CPU, memory, storage, and network configurations.
- Examples of IaaS providers utilizing KVM include OpenStack-based clouds like OpenStack Compute (Nova) and public
cloud platforms like Google Compute Engine (GCE) and Oracle Cloud Infrastructure (OCI).
- KVM provides strong isolation between virtual machines, ensuring that each VM operates independently of others, which
is essential for secure multi-tenancy.
- Cloud providers can securely host multiple customers (tenants) on the same physical infrastructure, with each tenant's
workloads isolated from the others.
- KVM leverages hardware virtualization extensions (Intel VT-x or AMD-V) available in modern CPUs to achieve high,
near-native performance.
- It can take advantage of hardware acceleration features to optimize virtualization performance, including support for
nested virtualization.
- KVM supports a wide range of hardware architectures, making it suitable for deploying cloud infrastructure on diverse
hardware platforms.
- KVM integrates seamlessly with various cloud management platforms and orchestration frameworks for provisioning,
monitoring, and lifecycle management of virtual machines.
- Cloud management platforms like OpenStack, Apache CloudStack, and oVirt provide comprehensive management
interfaces and APIs for deploying and managing KVM-based virtual machines and resources.
- It also supports software-defined storage (SDS) solutions for managing storage resources in a virtualized environment.
- Cloud providers can customize and automate the deployment and management of KVM-based virtual machines using
scripting tools, configuration management systems, and infrastructure as code (IaC) frameworks.
- Automation enables rapid provisioning, scaling, and orchestration of virtualized resources, improving operational
efficiency and reducing manual effort.
In summary, KVM plays a crucial role in cloud computing by providing a robust, open-source virtualization solution that
powers infrastructure as a service (IaaS) offerings. Its performance, scalability, and integration capabilities make it well-
suited for building and managing cloud infrastructure while delivering agility, efficiency, and cost-effectiveness to cloud
providers and their customers.
Xen
Xen is a powerful open-source hypervisor that is widely used for virtualization in cloud computing environments, data
centers, and enterprise IT infrastructures. Xen provides efficient CPU virtualization capabilities, enabling the creation and
management of virtual machines (VMs) with strong isolation and performance. Let's explore how Xen virtualizes the CPU:
1. Paravirtualization:
- Xen pioneered the concept of paravirtualization, where the guest operating system is modified to be aware of the
hypervisor.
- In paravirtualization, the guest OS is not fully virtualized; instead, it cooperates with the hypervisor to achieve efficient
CPU virtualization.
- Xen's paravirtualization approach eliminates the need for binary translation of CPU instructions, resulting in minimal
virtualization overhead and near-native performance.
2. CPU Scheduling:
- Xen's CPU scheduler is responsible for allocating CPU resources among virtual machines running on the same physical
host.
- The scheduler uses various algorithms and policies to prioritize CPU access based on factors such as CPU utilization,
scheduling weights, and caps.
- Xen supports features like CPU pinning, which allows administrators to assign specific physical CPU cores to individual
VMs for predictable performance.
3. Hardware-Assisted Virtualization:
- While Xen initially relied solely on paravirtualization, it later added support for hardware-assisted virtualization (HVM)
using Intel VT-x and AMD-V extensions.
- HVM allows Xen to run unmodified guest operating systems, providing broader compatibility and ease of migration from
non-virtualized environments.
- Xen can seamlessly switch between paravirtualization and hardware virtualization based on guest OS support and
configuration, optimizing performance and compatibility.
4. Memory Management:
- Xen's CPU virtualization is closely tied to its memory management subsystem, which handles memory allocation,
address translation, and protection for guests.
- Xen utilizes techniques such as shadow page tables to efficiently map guest virtual addresses to physical addresses,
keeping translation overhead low.
- Efficient memory management is essential for achieving good CPU performance in virtualized environments by reducing
memory access latency.
5. Performance Monitoring and Optimization:
- Xen provides tools and utilities for monitoring and optimizing CPU performance in virtualized environments.
- Administrators can use performance monitoring tools to identify CPU bottlenecks, analyze resource utilization, and tune
scheduler parameters accordingly.
- Techniques such as CPU overcommitment and dynamic resource allocation allow Xen to efficiently utilize CPU
resources while ensuring high performance and responsiveness for virtualized workloads.
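The proportional-share idea behind Xen's credit scheduler can be sketched as follows (simplified to a single accounting period with no caps or boosting; the domain names and weights are hypothetical):

```python
# Proportional-share CPU allocation in the spirit of Xen's credit
# scheduler (illustrative sketch; real credit accounting runs
# continuously and handles caps, boosting, and load balancing).

def share_cpu(weights, total_ms):
    """Split total_ms of CPU time in proportion to each domain's weight."""
    total_weight = sum(weights.values())
    return {dom: total_ms * w // total_weight for dom, w in weights.items()}

slices = share_cpu({"dom0": 512, "guest1": 256, "guest2": 256}, total_ms=1000)
# dom0, with twice the weight, receives twice the CPU time of each guest.
```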
In summary, Xen offers efficient CPU virtualization capabilities through its innovative paravirtualization approach,
combined with support for hardware-assisted virtualization. By leveraging these technologies and providing advanced
features for CPU scheduling, memory management, and performance optimization, Xen enables the creation of scalable,
high-performance virtualized environments suitable for a wide range of use cases in cloud computing, data centers, and
enterprise IT infrastructures.
Memory
When discussing memory in the context of virtualization, several aspects come into play, including memory allocation,
management, optimization, and performance monitoring. Let's explore each of these aspects:
1. Memory Allocation:
- Memory allocation in virtualization involves assigning physical memory resources to virtual machines (VMs) based on
their requirements.
- Hypervisors like VMware, KVM, and Xen manage memory allocation by dynamically allocating and deallocating
memory to VMs as needed.
- Techniques such as memory ballooning, memory overcommitment, and transparent page sharing help optimize memory
utilization and allow more VMs to run than physical memory alone would permit.
2. Memory Management:
- Memory management in virtualized environments includes tasks like memory paging, mapping virtual addresses to
physical addresses, and reclaiming unused memory.
- Hypervisors implement memory management techniques to ensure isolation between VMs and prevent one VM from
accessing another VM's memory.
- Techniques such as shadow page tables, nested paging, and memory introspection enhance memory management and
security in virtualized environments.
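A shadow page table can be pictured as the composition of the guest's own page table with the hypervisor's physical-to-machine map, so one lookup resolves the full translation (toy single-page-size mappings, hypothetical addresses):

```python
# Shadow page table sketch: compose guest-virtual -> guest-physical
# with guest-physical -> machine (host) mappings into one table.

PAGE = 0x1000
guest_pt = {0x1000: 0x4000}   # guest virtual page -> guest physical page
p2m      = {0x4000: 0x9000}   # guest physical page -> machine page

# The shadow table is the composition of the two mappings.
shadow = {gv: p2m[gp] for gv, gp in guest_pt.items() if gp in p2m}

def translate(vaddr):
    frame = shadow[vaddr & ~(PAGE - 1)]   # look up the page frame
    return frame | (vaddr & (PAGE - 1))   # keep the in-page offset

addr = translate(0x1234)
```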
3. Memory Optimization:
- Memory optimization techniques aim to improve memory utilization, reduce overhead, and enhance performance in
virtualized environments.
- Technologies such as memory compression, memory deduplication, and memory ballooning help optimize memory
usage and reduce the physical memory footprint of VMs.
- Memory optimization features are often configurable and can be tailored to specific workload characteristics and resource
constraints.
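Memory deduplication (transparent page sharing) can be sketched as hashing page contents and keeping one physical copy per distinct hash (the page contents are illustrative; real implementations also byte-compare pages after a hash match):

```python
# Transparent-page-sharing sketch: identical pages across VMs are
# detected by a content hash and stored only once.

import hashlib

def dedupe(pages):
    """pages: list of bytes -> (unique page store, per-page index)."""
    store, index = {}, []
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)   # keep only the first copy
        index.append(digest)
    return store, index

pages = [b"\x00" * 4096, b"kernel" * 100, b"\x00" * 4096]
store, index = dedupe(pages)
# Three logical pages, but only two physical copies are kept.
```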
4. Memory Performance Monitoring:
- Memory performance monitoring involves tracking memory usage, latency, and throughput to identify performance
bottlenecks.
- Hypervisors provide tools and utilities for monitoring memory performance, including performance counters, metrics,
and dashboards.
- Administrators use memory performance monitoring tools to identify issues such as memory contention, resource
exhaustion, and memory leaks, and take corrective actions to improve system performance.
5. Memory Overcommitment:
- Memory overcommitment is a technique used by hypervisors to allocate more memory to VMs than physically available
on the host.
- Hypervisors employ memory overcommitment techniques such as memory ballooning, page sharing, and memory
compression to reclaim and redistribute memory among VMs.
- Memory overcommitment allows for better resource utilization and flexibility in managing virtualized environments but
requires careful monitoring to avoid host memory exhaustion and performance degradation.
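The degree of overcommitment is commonly expressed as the ratio of memory granted to VMs versus the host's physical memory (the figures below are hypothetical):

```python
# Memory overcommitment sketch: total memory granted to VMs can exceed
# the host's physical RAM; the ratio quantifies by how much.

def overcommit_ratio(vm_memory_mb, host_memory_mb):
    return sum(vm_memory_mb) / host_memory_mb

ratio = overcommit_ratio([8192, 8192, 4096], host_memory_mb=16384)
# 20480 MB granted on a 16384 MB host -> ratio 1.25
```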
I/O Devices
In virtualized environments, Input/Output (I/O) devices play a crucial role in facilitating communication between virtual
machines (VMs) and physical hardware. Here's a detailed look at I/O devices in virtualization:
- Virtual I/O devices emulate physical hardware interfaces within virtual machines, allowing VMs to communicate with
the underlying hardware through the hypervisor.
- Common virtual I/O devices include virtual network adapters, virtual disk controllers, virtual SCSI controllers, virtual
serial ports, and virtual USB controllers.
- Hypervisors provide virtual device drivers to VMs, enabling them to interact with virtual I/O devices as if they were
physical devices.
- Some hypervisors support passthrough or direct assignment of physical I/O devices to VMs, allowing VMs to access
hardware directly for near-native performance.
- Common passthrough devices include network interface cards (NICs), storage controllers, GPUs, and USB devices.
- Hypervisors employ various device virtualization technologies to optimize I/O performance and resource utilization in
virtualized environments.
- Technologies such as para-virtualization, SR-IOV (Single Root I/O Virtualization), and VirtIO enhance I/O performance,
scalability, and efficiency.
- Para-virtualized drivers optimize I/O operations by offloading processing tasks to the hypervisor or host OS, reducing
emulation overhead.
- Hypervisors implement techniques to optimize I/O performance and reduce latency in virtualized environments.
- Features such as I/O batching, I/O scheduling, and I/O caching help improve I/O throughput and responsiveness for VMs.
- Storage technologies like thin provisioning, deduplication, and caching further optimize I/O performance and reduce
storage consumption.
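I/O batching can be illustrated by coalescing writes to adjacent disk blocks into fewer, larger requests, a toy model of what a hypervisor's I/O stack does before submitting work to the device (block numbers are illustrative):

```python
# I/O batching sketch: adjacent block writes collapse into runs, so the
# device sees fewer, larger requests.

def batch_writes(blocks):
    """blocks: sorted block numbers -> list of (start, length) runs."""
    runs = []
    for b in blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            start, length = runs[-1]
            runs[-1] = (start, length + 1)   # extend the current run
        else:
            runs.append((b, 1))              # start a new run
    return runs

requests = batch_writes([10, 11, 12, 40, 41, 99])
# Six writes collapse into three requests.
```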
- Hypervisors provide management tools and interfaces for configuring, monitoring, and managing virtual I/O devices in
virtualized environments.
- Administrators can use management consoles, APIs, and command-line tools to configure virtual I/O devices, allocate
bandwidth, and monitor utilization.
- Advanced features like hot-add and hot-remove of virtual devices allow for dynamic reconfiguration of I/O resources
without downtime.
- Hypervisors enforce security and isolation between VMs by implementing robust I/O virtualization and access control
mechanisms.
- Features such as device isolation, I/O resource limits, and device passthrough policies help prevent unauthorized access
to devices and data.
- Hypervisors also provide secure boot and integrity verification mechanisms to protect against malware and unauthorized
modification of device firmware and drivers.
Virtual Cluster and Resource Management
Virtual cluster and resource management are essential components of cloud computing and virtualized environments. They
involve organizing and optimizing resources to ensure efficient utilization, high availability, and scalability. Here's a detailed
overview of virtual cluster and resource management:
1. Virtual Cluster:
- A virtual cluster is a logical grouping of virtualized resources, such as virtual machines (VMs), storage, and networking,
that behaves like a physical cluster.
- Virtual clusters provide a flexible and scalable infrastructure for deploying and managing applications, services, and
workloads in cloud computing environments.
- Technologies such as Kubernetes, Docker Swarm, and Apache Mesos enable the creation and management of virtual
clusters, providing features like resource scheduling, load balancing, and fault tolerance.
2. Resource Management:
- Resource management involves allocating and optimizing resources, such as CPU, memory, storage, and network
bandwidth, to meet the needs of virtualized workloads.
- Hypervisors and cloud management platforms implement resource management features to ensure fair distribution,
efficient utilization, and high performance of resources.
- Techniques such as dynamic resource allocation, resource pooling, and overcommitment help optimize resource
utilization and improve scalability in virtualized environments.
3. Resource Scheduling:
- Resource scheduling involves assigning and prioritizing resources to virtualized workloads based on factors such as
workload characteristics, performance requirements, and resource availability.
- Schedulers, such as the Distributed Resource Scheduler (DRS) in VMware vSphere, dynamically allocate CPU and
memory resources to VMs based on workload demand and system constraints.
- Advanced scheduling algorithms, including load balancing, affinity rules, and priority-based scheduling, help optimize
resource allocation and improve workload performance in virtualized environments.
4. High Availability and Fault Tolerance:
- High availability (HA) and fault tolerance (FT) mechanisms ensure continuous availability and reliability of virtualized
workloads by mitigating failures and minimizing downtime.
- Technologies such as VM clustering, live migration, and automatic failover enable rapid recovery and seamless failover
of virtualized workloads in the event of hardware or software failures.
- Hypervisor-based features like VMware HA and Microsoft Hyper-V Replica provide built-in mechanisms for ensuring
high availability and fault tolerance in virtualized environments.
5. Scalability and Elasticity:
- Virtual cluster and resource management solutions support scalability and elasticity by dynamically scaling resources up
or down based on workload demand.
- Autoscaling mechanisms automatically adjust resource allocation to match changing workload requirements, ensuring
optimal performance and cost efficiency.
- Cloud-native platforms like Kubernetes and OpenStack leverage containerization and orchestration to enable elastic
scaling and efficient resource utilization in virtualized environments.
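The proportional scale-out rule used by autoscalers such as Kubernetes' Horizontal Pod Autoscaler can be sketched like this (integer percentages and a simplified rule; real autoscalers add tolerances and stabilization windows):

```python
# Proportional autoscaling sketch: scale the replica count by the
# ratio of observed to target utilization, clamped to sane bounds.

def desired_replicas(current, observed_pct, target_pct, max_replicas=10):
    wanted = -(-current * observed_pct // target_pct)   # ceiling division
    return max(1, min(wanted, max_replicas))

scale_up = desired_replicas(current=4, observed_pct=90, target_pct=60)
scale_down = desired_replicas(current=4, observed_pct=30, target_pct=60)
```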
6. Monitoring and Optimization:
- Continuous monitoring and optimization of virtual cluster and resource usage are essential for maintaining performance,
efficiency, and cost-effectiveness.
- Monitoring tools provide real-time visibility into resource utilization, performance metrics, and capacity planning,
helping administrators identify bottlenecks and optimize resource allocation.
- Optimization techniques, such as workload consolidation, right-sizing, and performance tuning, help improve resource
utilization and reduce costs in virtualized environments.
Server Virtualization
Server virtualization is the partitioning of a physical server into a number of smaller virtual servers, each running its own
operating system.
These operating systems are known as guest operating systems. These are running on another operating system known as
the host operating system. Each guest running in this manner is unaware of any other guests running on the same host.
Different virtualization techniques are employed to achieve this transparency.
Types of Server virtualization :
1. Hypervisor –
A Hypervisor or VMM(virtual machine monitor) is a layer that exists between the operating system and hardware. It
provides the necessary services and features for the smooth running of multiple operating systems.
It identifies traps, responds to privileged CPU instructions, and handles queuing, dispatching, and returning the hardware
requests. A host operating system also runs on top of the hypervisor to administer and manage the virtual machines.
2. Para Virtualization –
It is based on Hypervisor. Much of the emulation and trapping overhead in software implemented virtualization is
handled in this model. The guest operating system is modified and recompiled before installation into the virtual
machine.
Due to the modification in the Guest operating system, performance is enhanced as the modified guest operating system
communicates directly with the hypervisor and emulation overhead is removed.
Example: Xen primarily uses Paravirtualization, where a customized Linux environment is used to support the
administrative environment known as domain 0.
Advantages:
Easier
Enhanced Performance
No emulation overhead
Limitations:
Requires modification to a guest operating system
3. Full Virtualization –
It is very much similar to Paravirtualization. It can emulate the underlying hardware when necessary. The hypervisor
traps the machine operations used by the operating system to perform I/O or modify the system status. After trapping,
these operations are emulated in software and the status codes are returned very much consistent with what the real
hardware would deliver. This is why an unmodified operating system is able to run on top of the hypervisor.
Example: VMWare ESX server uses this method. A customized Linux version known as Service Console is used as the
administrative operating system. It is not as fast as Paravirtualization.
Advantages:
No modification to the Guest operating system is required.
Limitations:
Complex
Slower due to emulation
Installation of the new device driver is difficult.
4. Hardware-Assisted Virtualization –
It is similar to Full Virtualization and Paravirtualization in terms of operation except that it requires hardware support.
Much of the hypervisor overhead due to trapping and emulating I/O operations and status instructions executed within a
guest OS is dealt with by relying on the hardware extensions of the x86 architecture.
Unmodified OS can be run as the hardware support for virtualization would be used to handle hardware access requests,
privileged and protected operations, and to communicate with the virtual machine.
Examples: AMD-V (codenamed Pacifica) and Intel VT (codenamed Vanderpool) provide hardware support for virtualization.
Advantages:
No modification to a guest operating system is required.
Very low hypervisor overhead
Limitations:
Hardware support Required
5. Kernel level Virtualization –
Instead of using a hypervisor, it runs a separate version of the Linux kernel and sees the associated virtual machine as a
user-space process on the physical host. This makes it easy to run multiple virtual machines on a single host. A device
driver is used for communication between the main Linux kernel and the virtual machine.
Processor support is required for virtualization (Intel VT or AMD-V). A slightly modified QEMU process is used as the
display and execution container for the virtual machines. In many ways, kernel-level virtualization is a specialized form
of server virtualization.
Examples: User-Mode Linux (UML) and Kernel-based Virtual Machine (KVM)
Advantages:
No special administrative software is required.
Very low overhead
Limitations:
Hardware Support Required
6. System Level or OS Virtualization –
Runs multiple but logically distinct environments on a single instance of the operating system kernel. Also called shared
kernel approach as all virtual machines share a common kernel of host operating system. Based on the change root
concept “chroot”.
chroot starts during bootup. The kernel uses root filesystems to load drivers and perform other early-stage system
initialization tasks. It then switches to another root filesystem using the chroot command to mount an on-disk file system as
its final root filesystem and continue system initialization and configuration within that file system. The chroot
mechanism of system-level virtualization is an extension of this concept. It enables the system to start virtual servers with
their own set of processes that execute relative to their own filesystem root directories.
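The path confinement that chroot provides can be mimicked in a few lines (real chroot is a kernel facility; this sketch only reproduces the path arithmetic, and `/srv/vps1` is a hypothetical virtual root):

```python
# chroot-style path confinement sketch: resolve a guest path against a
# virtual root and reject any path that escapes it.

import posixpath

def resolve(root, guest_path):
    joined = posixpath.normpath(
        posixpath.join(root, guest_path.lstrip("/")))
    if joined != root and not joined.startswith(root.rstrip("/") + "/"):
        raise ValueError("path escapes the virtual root")
    return joined

inside = resolve("/srv/vps1", "/etc/../etc/hosts")  # stays inside the root
```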
The main difference between system-level and server virtualization is whether different operating systems can be run on
different virtual systems. If all virtual servers must share the same copy of the operating system it is system-level
virtualization and if different servers can have different operating systems ( including different versions of a single
operating system) it is server virtualization.
Examples: FreeVPS, Linux Vserver, and OpenVZ are some examples.
Advantages:
Significantly more lightweight than complete machines (including a kernel)
Can host many more virtual servers
Enhanced Security and isolation
Virtualizing an operating system usually has little to no overhead.
Live migration is possible with OS Virtualization.
It can also leverage dynamic container load balancing between nodes and clusters.
On OS virtualization, the file-level copy-on-write (CoW) method is possible, making it easier to back up
data, more space-efficient, and easier to cache than block-level copy-on-write schemes.
Limitations:
Kernel or driver problems can take down all virtual servers.
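The file-level copy-on-write scheme mentioned under the advantages above can be sketched as a shared base image plus a per-container delta (an illustrative model, not any specific product's on-disk format):

```python
# File-level copy-on-write sketch: a container shares the template's
# files until it writes one; only that file is then copied.

class CowFS:
    def __init__(self, base):
        self.base = base      # shared, read-only template
        self.delta = {}       # this container's modified files

    def read(self, name):
        return self.delta.get(name, self.base.get(name))

    def write(self, name, data):
        self.delta[name] = data   # copy-on-write: base stays untouched

template = {"/etc/motd": "welcome", "/bin/sh": "ELF..."}
vps = CowFS(template)
vps.write("/etc/motd", "vps1 banner")
```

Because unmodified files stay in the shared base, backups and caching only need to cover each container's small delta.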
Desktop Virtualization
In cloud computing, a standard desktop environment is typically delivered through virtual desktop infrastructure (VDI).
Let's explore the concept:
1. **Overview**:
- Virtual desktop infrastructure (VDI) refers to the practice of hosting desktop
environments on remote servers and delivering them to end-users over a network.
- Instead of running desktop operating systems and applications locally on individual physical machines, users access
virtual desktops hosted in centralized data centers or cloud environments.
2. **Components**:
- **Virtual Desktops**: Virtual desktops are instances of desktop operating systems (such as Windows or Linux)
running on virtual machines in the cloud or data center.
- **Hypervisor or VDI Broker**: A hypervisor or VDI broker manages and orchestrates virtual desktop instances,
allocating resources and connecting users to their desktop environments.
- **User Access Devices**: Users access virtual desktops using thin clients, laptops, desktop computers, or mobile
devices connected to the network. They use remote desktop protocols (RDP), HTML5 clients, or client software to
establish connections to their virtual desktops.
3. **Benefits**:
- **Centralized Management**: IT administrators can centrally manage and maintain virtual desktops, including
provisioning, patching, and updating, leading to improved efficiency and reduced administrative overhead.
- **Resource Consolidation**: Virtual desktop infrastructure allows for better utilization of computing resources by
consolidating desktop workloads onto shared servers, resulting in cost savings and improved resource efficiency.
- **Flexibility and Scalability**: Virtual desktops offer flexibility for users to access their desktop environments from
any location and device with internet connectivity. Additionally, the infrastructure can scale up or down dynamically to
meet changing demand.
- **Enhanced Security**: Centralized control and data storage in the data center or cloud environment enhance security
and data protection, reducing the risk of data loss or theft from endpoint devices.
4. **Use Cases**:
- **Remote Work**: Virtual desktops enable remote workers to access their desktop environments and corporate
applications securely from anywhere, using any device.
- **BYOD (Bring Your Own Device)**: Virtual desktops support the BYOD trend by allowing users to access their
desktop environments from personal devices while maintaining security and compliance.
- **Task Workers**: Virtual desktops are suitable for task-oriented workers who require access to standardized desktop
environments and applications.
5. **Challenges**:
- **Network Dependency**: Virtual desktop performance relies heavily on network connectivity, and poor network
conditions can impact user experience.
- **Licensing Costs**: Desktop operating system licenses, virtualization software, and infrastructure costs can
contribute to the overall expense of deploying virtual desktops.
- **User Experience**: Ensuring a seamless and responsive user experience across diverse devices and network
conditions can be challenging.
In summary, virtual desktop infrastructure (VDI) offers a flexible and centralized
approach to desktop computing, enabling organizations to deliver desktop environments efficiently, securely, and cost-
effectively to end-users.
Network Virtualization:
Network virtualization abstracts physical network hardware and resources to create multiple virtual networks that operate
independently of each other. Here's a breakdown:
Virtual Networks: Virtual networks are created by dividing physical network infrastructure into logical segments
using software-defined networking (SDN) technologies.
Virtual Switches and Routers: Network virtualization relies on virtual switches and routers to manage traffic
within virtual networks, enabling communication between virtual machines and physical resources.
Overlay Networks: Overlay networks encapsulate network traffic within virtual tunnels, enabling communication
across physical network boundaries and data centers.
Network Segmentation and Isolation: Virtual networks provide segmentation and isolation to improve security
and performance, allowing administrators to define policies and access controls at the network level.
Benefits: Network virtualization offers benefits such as increased flexibility, scalability, and agility in managing
network resources. It simplifies network provisioning, reduces hardware costs, and enables dynamic workload
placement and migration.
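The overlay encapsulation described above can be sketched with a VXLAN-like header that carries a 24-bit virtual network identifier (VNI); the layout is simplified (real VXLAN frames also ride inside UDP/IP on the physical underlay):

```python
# Overlay encapsulation sketch: wrap an inner frame in an 8-byte
# VXLAN-like header so traffic from different virtual networks can
# share the same physical network.

import struct

def encapsulate(vni, inner_frame):
    # Header: flags word (I bit set), then the VNI in the top 24 bits.
    header = struct.pack("!II", 0x08000000, vni << 8)
    return header + inner_frame

def decapsulate(packet):
    _, vni_field = struct.unpack("!II", packet[:8])
    return vni_field >> 8, packet[8:]

pkt = encapsulate(5001, b"tenant-ethernet-frame")
vni, frame = decapsulate(pkt)
```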
Benefits of Data Center Virtualization
Scalability: Unlike physical servers, which need extensive and, at times, expensive sourcing and time management, virtual
data centers are relatively simple, quick, and inexpensive to set up. They can be added in response to rapid rises in demand
for processing and other resources, or downsized when they are no longer necessary—something that is not possible with
bare-metal servers.
Enhanced functionality: Virtualized resources decentralize the notion of the modern office. Before virtualization,
everything from common tasks and daily interactions to in-depth analytics and data storage happened at the server level and
could only be accessed from one location. Virtualized resources can be accessed from anywhere with a strong enough
Internet connection. For example, employees can access data and other applications from remote locations, making
productivity possible outside the office. Virtualized servers also enable versatile collaboration and sharing opportunities
through cloud-based applications like video conferencing, word processing, and other content creation tools.
Cost savings: Data center virtualization eliminates the higher management and maintenance associated with physical
servers, which are typically outsourced to third-party providers. And unlike their physical counterparts, virtual servers are
often offered as part of a consumption-based model, meaning companies only pay for what they use. By contrast, whether
physical servers are used or not, companies still incur costs for their maintenance. And the additional functionality that
virtualized data centers offer can reduce other business expenses like travel costs.
HPE also offers services like HPE GreenLake that help companies and institutions deploy and allocate virtualized, cloud-
based resources for their specific workloads and business needs. This managed solution is based on a pay-per-use model,
giving enterprises a low-cost way to scale IT to meet demand, and puts less strain on IT administrators by letting them focus
on business innovation.
Companies considering data center virtualization that see finances as a hurdle to adoption can work with HPE Financial
Services to find ways to pay for the solutions they need.