Unit 1 CC
• Consolidate servers - Multiple VMs can run as servers on a single physical host, which lets
organizations reduce server sprawl by concentrating more workloads onto fewer machines.
• Enable workload migration - The flexibility and portability that VMs provide are key to
increasing the velocity of migration initiatives.
• Create development and test environments - VMs can serve as isolated environments for
testing and development that include full functionality but have no impact on the surrounding
infrastructure.
• Support DevOps - VMs can easily be turned off or on, migrated, and adapted, providing
maximum flexibility for development.
• Create a hybrid environment - VMs provide the foundation for creating a cloud environment
alongside an on-premises one, bringing flexibility without abandoning legacy systems.
2. THE TWO TYPES OF VIRTUAL MACHINES
Types of Virtual Machines:
1. System Virtual Machine:
A system virtual machine provides a complete system platform and supports the execution of a complete
guest operating system. Tools such as VirtualBox provide an environment in which an OS can be
installed in full. A virtual machine monitor divides the hardware of the real machine among the
simulated machines, and each simulated machine runs its own programs and processes on its share of
that hardware.
A system virtual machine is fully virtualized to substitute for a physical machine. A system platform
supports the sharing of a host computer’s physical resources between multiple virtual machines, each
running its own copy of the operating system. This virtualization process relies on a hypervisor, which
can run on bare hardware, such as VMware ESXi, or on top of an operating system.
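As a minimal sketch of this arrangement, the snippet below asks a hypervisor which guest VMs it currently manages, using the libvirt Python bindings. It assumes libvirt-python is installed and a local KVM/QEMU hypervisor is available; the connection URI 'qemu:///system' is an assumption and differs for other hypervisors.

```python
# Minimal sketch: querying a hypervisor for its guests with libvirt.
# Assumes libvirt-python and a local KVM/QEMU hypervisor; the URI is an
# assumption and varies by hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")       # connect to the hypervisor
try:
    for dom in conn.listAllDomains():       # every guest this hypervisor manages
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```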
2. Process Virtual Machine:
A process virtual machine, unlike a system virtual machine, does not provide the facility to install a
complete guest operating system. Instead, it creates a virtual environment for the application or program
being run, and that environment is destroyed as soon as the application exits. Some applications run
directly on the main OS, while programs that require a different OS run inside process virtual machines
created for them only for as long as those programs are running.
Example – the Wine software on Linux helps run Windows applications.
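As a small illustration of this behaviour, the sketch below launches a Windows program on Linux through Wine from Python; the binary name 'app.exe' is a hypothetical placeholder, and it assumes the wine command is installed on the host.

```python
# Sketch of the process-VM idea: a Windows program runs on Linux through
# Wine, and the translated environment lasts only as long as the process.
# 'app.exe' is a hypothetical placeholder; assumes 'wine' is installed.
import subprocess

result = subprocess.run(["wine", "app.exe"], capture_output=True, text=True)
print("exit code:", result.returncode)
# Once the process exits, nothing like a full guest OS remains behind.
```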
4. TAXONOMY OF VIRTUALIZATION
Virtual machines are broadly classified into two types: System Virtual Machines (also known
as Virtual Machines) and Process Virtual Machines (also known as Application Virtual
Machines). The classification is based on their usage and degree of similarity to the associated
physical machine. The system VM mimics the whole system hardware stack and allows the
execution of a whole operating system. The process VM, on the other hand, provides a layer on
top of an operating system that replicates the programming environment for the execution of
specific processes.
A Process Virtual Machine, also known as an application virtual machine, operates as a regular
program within a host OS and supports a single process. It is formed when the process begins
and deleted when it terminates. Its goal is to create a platform-independent programming
environment that abstracts away features of the underlying hardware or operating system,
allowing a program to run on any platform. With Linux, for example, Wine software aids in
the execution of Windows applications.
A System Virtual Machine, such as VirtualBox, offers a full system platform that allows the
operation of a whole operating system (OS).
Virtual Machines are used to distribute and designate suitable system resources to software
(which might be several operating systems or an application), and the software is restricted to
the resources provided by the VM. The actual software layer that allows virtualization is the
Virtual Machine Monitor (also known as Hypervisor). Hypervisors are classified into two
groups based on their relationship to the underlying hardware. Native VM is a hypervisor that
takes direct control of the underlying hardware, whereas hosted VM is a different software
layer that runs within the operating system and so has an indirect link with the underlying
hardware.
The system VM abstracts the Instruction Set Architecture, which differs slightly from that of
the actual hardware platform. The primary benefits of system VM include consolidation (it
allows multiple operating systems to coexist on a single computer system with strong isolation
from each other), application provisioning, maintenance, high availability, and disaster
recovery, as well as sandboxing, faster reboot, and improved debugging access.
The process VM enables conventional application execution inside the underlying operating
system to support a single process. To support the execution of numerous applications
associated with numerous processes, we can construct numerous instances of process VM. The
process VM is formed when the process starts and terminates when the process is terminated.
The primary goal of process VM is to provide platform independence (in terms of development
environment), which implies that applications may be executed in the same way on any of the
underlying hardware and software platforms. The process VM, as opposed to the system VM,
abstracts a high-level programming language environment. Although a process VM is built
around an interpreter, it can achieve speed comparable to that of compiled programming
languages by using a just-in-time compilation mechanism.
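To illustrate the caching idea behind just-in-time compilation, here is a toy sketch in plain Python: an expression is "translated" (compiled) the first time it is executed and the compiled form is cached, so repeated executions skip re-translation. This only illustrates the principle, not how a real process VM is implemented.

```python
# Toy sketch of the JIT idea: translate once, cache, reuse.
_code_cache = {}

def run(expr, env):
    code = _code_cache.get(expr)
    if code is None:                        # first execution: compile ("translate")
        code = compile(expr, "<vm>", "eval")
        _code_cache[expr] = code            # keep the translated form
    return eval(code, {}, env)              # later calls reuse the cached code

print(run("x * x + 1", {"x": 7}))           # compiled on first use -> 50
print(run("x * x + 1", {"x": 9}))           # served from the cache  -> 82
```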
5. VIRTUALIZATION ARCHITECTURE
• The virtual application services help in application management, and the virtual
infrastructure services can help in infrastructure management.
• Both services are integrated into the virtual data center or operating system. Virtual
services can be used on any platform and programming environment. These services
can be accessed from the local cloud or an external cloud. In return, cloud users pay
a monthly or annual fee to the third party.
• This fee is paid to the third party for providing the cloud services, and the provider in turn
delivers applications in different forms according to the needs of cloud end users.
• A hypervisor separates the operating system from the underlying hardware. It allows
the host computer to run multiple virtual machines simultaneously and share the same
computer resources.
Hosted Architecture
In this type of configuration, the host operating system is installed on the hardware first,
and the virtualization software is then installed on top of it. That software is a hosted
hypervisor, on which the guest operating systems (the VMs) are installed to complete the
virtualization architecture. Once the hypervisor is in place, applications can be installed
and run on a virtual machine just as if they were installed on a physical machine.
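As a sketch of the hosted model, the snippet below starts a guest from an already-running host OS using QEMU, which in this hosted usage runs like any other application on the host. The disk image name 'guest-disk.img' and the resource sizes are hypothetical placeholders, and it assumes QEMU is installed.

```python
# Sketch of a hosted setup: the host OS is already running, and the
# hypervisor (QEMU here) is launched as an ordinary application.
# 'guest-disk.img' and the sizes below are placeholders for illustration.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-m", "2048",             # 2 GiB of RAM for the guest
    "-smp", "2",              # two virtual CPUs
    "-hda", "guest-disk.img", # the guest's virtual disk
])
```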
• Kernel-level virtualization: This runs a separate version of the Linux kernel and allows
multiple virtual servers to run on a single host. It uses a device driver to communicate
between the main Linux kernel and the virtual machines. This virtualization is a special
form of server virtualization.
• Array-based storage virtualization: The most common form of storage virtualization, in
which a storage array serves as the primary storage controller and runs virtualization
software. This allows the array to share storage resources with other arrays and to present
various physical storage types that can be used as storage tiers.
7. NETWORK VIRTUALIZATION
Network Virtualization is a process of logically grouping physical networks and making them
operate as single or multiple independent networks called Virtual Networks.
The general architecture of network virtualization (NV) is as follows. The hypervisor is used to
create a virtual switch and to configure virtual networks on it. Third-party software can be
installed on the hypervisor to replace its native networking functionality. A hypervisor allows
various VMs to run, each working optimally, on a single piece of computer hardware.
1. Physical Network
• Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
• Grants connectivity among physical servers running a hypervisor, between physical
servers and storage systems and between physical servers and clients.
2. VM Network
Virtual LAN (VLAN), network overlays, and network virtualization platforms such as VMware NSX.
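As a small sketch of how a VM network is layered on the physical network, the commands below create a Linux bridge (acting as a virtual switch) and attach a VLAN sub-interface to it using standard iproute2 commands. The interface names (eth0, br0) and VLAN ID 100 are assumptions for illustration, and the commands require root privileges.

```python
# Sketch: a virtual switch (Linux bridge) plus a VLAN on a physical NIC.
# Interface names and the VLAN ID are assumptions; needs root privileges.
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

sh("ip", "link", "add", "name", "br0", "type", "bridge")          # virtual switch
sh("ip", "link", "add", "link", "eth0", "name", "eth0.100",
   "type", "vlan", "id", "100")                                    # VLAN 100 on eth0
sh("ip", "link", "set", "eth0.100", "master", "br0")               # attach VLAN to the bridge
sh("ip", "link", "set", "br0", "up")
sh("ip", "link", "set", "eth0.100", "up")
```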
8. LEVELS OF VIRTUALIZATION IMPLEMENTATION
• A traditional computer runs with a host operating system specially tailored for its
hardware architecture.
• After virtualization, the VMs run in the upper layer, where applications execute with their
own guest OS over the virtualized CPU, memory, and I/O resources. The main function of the
virtualization software layer is to virtualize the physical hardware of a host machine into
virtual resources to be used exclusively by the VMs.
The virtualization software creates the abstraction of VMs by interposing a virtualization layer
at various levels of a computer system. Common virtualization layers include:
• Instruction set architecture (ISA) level
• Hardware level
• Operating system level
• Library support level
• Application level
1.1 Instruction Set Architecture (ISA) Level
At the ISA level, virtualization is performed by emulating a given ISA with the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the help
of ISA emulation. With this approach, it is possible to run a large amount of legacy binary code
written for various processors on any given new hardware host machine. Instruction set
emulation leads to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program interprets
the source instructions to target instructions one by one. One source instruction may require
tens or hundreds of native target instructions to perform its function. Obviously, this process is
relatively slow. For better performance, dynamic binary translation is desired. This approach
translates basic blocks of dynamic source instructions to target instructions. The basic blocks
can also be extended to program traces or super blocks to increase translation efficiency.
Instruction set emulation requires binary translation and optimization. A virtual instruction set
architecture (V-ISA) thus requires adding a processor-specific software translation layer to the
compiler.
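The following toy sketch illustrates the difference in spirit between pure interpretation and dynamic binary translation: a tiny invented "source ISA" is interpreted instruction by instruction, while a translation cache keyed by basic-block address stands in for translating a block once and reusing it. The instruction set and addresses are invented purely for illustration.

```python
# Toy sketch of ISA emulation: interpret source instructions one by one,
# and cache a "translated" form per basic block so repeats are cheap.
def interpret(block, regs):
    for op, dst, src in block:               # one source instruction at a time
        if op == "mov":
            regs[dst] = src
        elif op == "add":
            regs[dst] += regs[src]
    return regs

translation_cache = {}                        # block address -> callable

def run_block(addr, block, regs):
    fn = translation_cache.get(addr)
    if fn is None:                            # first visit: "translate" the block
        fn = lambda r, b=block: interpret(b, r)
        translation_cache[addr] = fn
    return fn(regs)                           # later visits reuse the translation

regs = run_block(0x400000,
                 [("mov", "r1", 5), ("mov", "r2", 7), ("add", "r1", "r2")],
                 {"r1": 0, "r2": 0})
print(regs["r1"])                             # -> 12
```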
1.2 Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware. On the one hand,
this approach generates a virtual hardware environment for a VM. On the other hand, the
process manages the underlying hardware through virtualization. The idea is to virtualize a
computer’s resources, such as its processors, memory, and I/O devices, in order to improve the
hardware utilization rate as multiple users work concurrently. The idea was pioneered in IBM’s
mainframe virtualization work of the 1960s and implemented in the IBM VM/370. More
recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or
other guest OS applications.
1.3 Operating System Level
OS-level virtualization refers to an abstraction layer between the traditional OS and user
applications. It creates isolated containers on a single physical server, with the OS instances
utilizing the hardware and software in data centers. The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users. It is also used, to a
lesser extent, to consolidate server hardware by moving services on separate hosts into
containers or VMs on one server.
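To see the container idea in action, the sketch below starts a command in its own PID namespace with the util-linux unshare tool, so it gets an isolated process table while still sharing the host kernel. It needs root privileges, and the command run inside ('ps ax') is just for illustration.

```python
# Sketch of OS-level isolation: a new PID namespace with a fresh /proc.
# Requires root; 'ps ax' inside the namespace shows only a few processes.
import subprocess

subprocess.run([
    "unshare", "--fork", "--pid", "--mount-proc",
    "ps", "ax",
])
```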
1.4 Library Support Level
Most applications use APIs exported by user-level libraries rather than lengthy system calls to
the OS. Since most systems provide well-documented APIs, such an interface becomes another
candidate for virtualization. Virtualization with library interfaces is possible by controlling the
communication link between applications and the rest of the system through API hooks. The
WINE software tool has implemented this approach to support Windows applications on top of
UNIX hosts. Another example is vCUDA, which allows applications executing within VMs to
leverage GPU hardware acceleration.
1.5 User-Application Level
Virtualization at this level virtualizes an application as a VM; the most popular approach is to
deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an
application program on top of the operating system and exports an abstraction of a VM that can
run programs written and compiled for a particular abstract machine definition. Any program
written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET
CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.
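The CPython interpreter is itself a VM of this kind: Python source is compiled to platform-independent bytecode that the VM executes, much as Java source compiles to JVM bytecode. The standard dis module makes that bytecode visible; the exact opcodes shown vary by Python version.

```python
# The CPython process VM: source is compiled to portable bytecode, and the
# VM executes that bytecode regardless of the underlying CPU or OS.
import dis

def area(r):
    return 3.14159 * r * r

dis.dis(area)   # prints VM instructions such as LOAD_FAST and a multiply op
```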
9. VIRTUALIZATION STRUCTURE
In general, there are three typical classes of VM architecture. Consider the architecture of a
machine before and after virtualization. Before virtualization, the
operating system manages the hardware. After virtualization, a virtualization layer is
inserted between the hardware and the operating system. In such a case, the
virtualization layer is responsible for converting portions of the real hardware into
virtual hardware. Therefore, different operating systems such as Linux and Windows
can run on the same physical machine, simultaneously. Depending on the position of
the virtualization layer, there are several classes of VM architectures, namely
the hypervisor architecture, para-virtualization, and host-based virtualization.
The hypervisor is also known as the VMM (Virtual Machine Monitor). They both
perform the same virtualization operations.
The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems,
many guest OSes can run on top of the hypervisor. However, not all guest OSes are
created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain
0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is
first loaded when Xen boots without any file system drivers being available. Domain 0
is designed to access hardware directly and manage devices. Therefore, one of the
responsibilities of Domain 0 is to allocate and map hardware resources for the guest
domains (the Domain U domains).
With full virtualization, noncritical instructions run on the hardware directly while
critical instructions are discovered and replaced with traps into the VMM to be
emulated by software. Both the hypervisor and VMM approaches are considered full
virtualization. Why are only critical instructions trapped into the VMM? This is because
binary translation can incur a large performance overhead. Noncritical instructions do
not control hardware or threaten the security of the system, but critical instructions do.
Therefore, running noncritical instructions on hardware not only can promote
efficiency, but also can ensure system security.
This approach was implemented by VMware and many other software companies.
VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive
instructions. When these instructions are identified, they are trapped into the VMM,
which emulates the behavior of these instructions. The method used in this emulation
is called binary translation. Therefore, full virtualization combines binary translation
and direct execution. The guest OS is completely decoupled from the underlying
hardware. Consequently, the guest OS is unaware that it is being virtualized.
The performance of full virtualization may not be ideal, because it involves binary
translation, which is rather time-consuming. In particular, full virtualization of I/O-
intensive applications is really a big challenge. Binary translation employs a code
cache to store translated hot instructions to improve performance, but this increases the
cost of memory usage. At the time of this writing, the performance of full virtualization
on the x86 architecture is typically 80 to 97 percent that of the host machine.
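The toy sketch below captures the trap-and-emulate idea described above: instructions classified as noncritical "run directly", while critical ones are trapped and emulated by the VMM. The instruction names and the critical set are invented purely for illustration and do not correspond to a real ISA.

```python
# Toy sketch of trap-and-emulate: only critical instructions reach the VMM.
CRITICAL = {"hlt", "out", "cli"}              # pretend these touch privileged state

def vmm_emulate(instr, vm_state):
    vm_state["traps"] += 1                    # the VMM emulates the effect in software
    print(f"VMM trap: emulating '{instr}'")

def execute(stream, vm_state):
    for instr in stream:
        if instr in CRITICAL:
            vmm_emulate(instr, vm_state)      # trap into the VMM
        else:
            pass                              # noncritical: runs directly on hardware

state = {"traps": 0}
execute(["mov", "add", "cli", "mov", "out"], state)
print("instructions trapped:", state["traps"])   # -> 2
```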
1. CPU Virtualization:
CPU virtualization involves dividing the physical CPU into multiple virtual CPUs, allowing
multiple operating systems or processes to run simultaneously. This is achieved through
techniques such as full virtualization (described below) and para-virtualization:
- Full virtualization: In this approach, a software layer called a hypervisor or virtual machine
monitor (VMM) is installed directly on the physical hardware. The hypervisor intercepts and
manages all CPU instructions and resources, allowing multiple guest operating systems to run
without modifications. Each guest OS operates under the assumption that it has full control of
the CPU.
2. Memory Virtualization:
Memory virtualization allows the allocation of virtual memory to different operating systems
or processes running on a virtualized environment. It involves mapping the virtual addresses
used by guest operating systems to physical memory on the host machine. The virtualization
layer manages memory allocation and ensures isolation between different virtual machines.
- Shadow Paging: The hypervisor maintains a shadow page table that maps the guest OS's
virtual addresses to physical memory addresses. Whenever a guest OS performs memory
operations, the hypervisor intercepts and translates the addresses accordingly (a toy sketch of
both mechanisms follows after this list).
- Ballooning: The hypervisor can reclaim memory from idle or less active guest OSes by using
a balloon driver installed on the guest OS. The balloon driver requests memory from the guest
OS, freeing up the physical memory for other virtual machines.
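The toy sketch below makes the two mechanisms concrete: a dictionary stands in for the shadow page table that maps a guest's virtual pages straight to host physical frames, and a small balloon function mimics the balloon driver "inflating" inside a guest so the hypervisor can reuse the backing memory. All identifiers and numbers are invented for illustration.

```python
# Toy shadow page table: (guest, guest virtual page) -> host physical frame.
shadow_page_table = {}

def map_page(guest_id, gva_page, host_frame):
    shadow_page_table[(guest_id, gva_page)] = host_frame

def translate(guest_id, gva_page):
    return shadow_page_table[(guest_id, gva_page)]   # hypervisor-side lookup

map_page("vm1", 0x10, 0x8FA2)
print(hex(translate("vm1", 0x10)))                   # -> 0x8fa2

# Toy balloon driver: "inflating" pins guest pages so the hypervisor can
# hand the backing host memory to other VMs.
guest_free_pages = 1000
balloon_size = 0

def inflate_balloon(pages):
    global guest_free_pages, balloon_size
    reclaimed = min(pages, guest_free_pages)
    guest_free_pages -= reclaimed
    balloon_size += reclaimed
    return reclaimed                                  # pages now reusable elsewhere

print(inflate_balloon(300))                           # -> 300
```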
3. I/O Device Virtualization:
I/O device virtualization allows virtual machines to access and use physical I/O devices, such
as network adapters, disk drives, and graphics cards. There are various techniques to achieve
device virtualization:
- Emulation: The hypervisor emulates the behavior of the physical devices, allowing the guest
OS to interact with them as if they were directly connected. This approach incurs performance
overhead as each I/O request goes through the hypervisor.
- Pass-through or Direct Assignment: In this approach, a physical I/O device is assigned
directly to a specific virtual machine, bypassing the hypervisor in the data path. The guest
OS can then communicate with the device directly, providing near-native performance.
However, this requires hardware support for the feature, and the device is dedicated to a
single virtual machine.
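As a toy sketch of the emulation approach, the class below plays the role of a hypervisor-side device model: the guest's I/O requests are served entirely in software rather than by real hardware. The class and method names are invented for illustration.

```python
# Toy device emulation: a software model of a disk inside the hypervisor.
class EmulatedDisk:
    def __init__(self):
        self.blocks = {}                       # block number -> data

    def write(self, block, data):
        self.blocks[block] = data              # every request is an extra software hop

    def read(self, block):
        return self.blocks.get(block, b"\x00" * 512)

disk = EmulatedDisk()
disk.write(7, b"guest data")
print(disk.read(7))                            # the guest sees an ordinary disk
# With pass-through, the guest would instead talk to the physical device
# directly, avoiding this software layer and its overhead.
```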
Virtual Clusters:
A virtual cluster is a logical grouping of virtual machines (VMs) or containers that work
together to provide specific computing services or run applications. Unlike physical clusters,
virtual clusters exist in a software defined environment and leverage virtualization technologies
to efficiently allocate and manage resources.
• Virtual Machines (VMs) : These are individual instances running within a virtual cluster.
VMs emulate physical computers and have their own operating systems and applications.
Multiple VMs can coexist on a single physical server.
• Hypervisor : The hypervisor, also known as a Virtual Machine Monitor (VMM), is the
software or firmware layer that manages and allocates physical resources (CPU, memory,
storage, etc.) to VMs. It ensures that multiple VMs can run on the same physical hardware
without interference.
Benefits of Virtual Clusters:
• Improved Resource Utilization : Virtual clusters allow efficient use of physical resources
by running multiple VMs on the same hardware, reducing resource wastage.
• Scalability : You can easily scale a virtual cluster up or down by adding or removing VMs,
adapting to changing workload demands.
• Isolation and Security : VMs within a virtual cluster are isolated from one another,
enhancing security and preventing resource conflicts.
• Disaster Recovery : Virtual clusters can be configured for high availability and redundancy,
ensuring continuity of services in case of hardware failures.
Resource Management in Virtual Clusters: Resource management in virtual clusters
involves efficiently allocating, monitoring, and optimizing physical resources to ensure the
performance and stability of VMs and applications.
• Resource Allocation : Resources such as CPU, memory, storage, and network bandwidth are
allocated to VMs based on their requirements. Allocation can be static or dynamic, and
priorities can be set to ensure critical workloads receive necessary resources.
• Resource Optimization : To optimize resource usage, virtual clusters can employ techniques
like load balancing, dynamic resource allocation, and auto scaling. These strategies ensure that
resources are used efficiently and that workloads are distributed evenly.
Common challenges in resource management include the following (a toy allocation sketch
follows after this list):
• Overallocation : Allocating more resources than physically available can lead to resource
contention and decreased performance.
• Bottlenecks : Resource limitations in CPU, memory, or storage can create bottlenecks that
slow down applications.
• Resource Contention : Multiple VMs competing for the same resources can result in
contention, impacting performance.
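The toy allocation sketch below places VM requests against a single host's capacity and refuses requests that would overallocate it, illustrating how overallocation leads to contention. The capacities and requests are invented numbers for illustration only.

```python
# Toy resource allocation for a virtual cluster host.
host = {"cpus": 16, "mem_gb": 64}
allocated = {"cpus": 0, "mem_gb": 0}

def allocate(vm_name, cpus, mem_gb):
    if (allocated["cpus"] + cpus > host["cpus"]
            or allocated["mem_gb"] + mem_gb > host["mem_gb"]):
        print(f"{vm_name}: request refused -- would overallocate the host")
        return False
    allocated["cpus"] += cpus
    allocated["mem_gb"] += mem_gb
    print(f"{vm_name}: placed ({allocated['cpus']}/{host['cpus']} CPUs in use)")
    return True

allocate("web-vm", 4, 8)      # placed
allocate("db-vm", 8, 32)      # placed
allocate("batch-vm", 8, 32)   # refused: 4 + 8 + 8 CPUs would exceed 16
```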
Representative platforms for building and managing virtual clusters include:
➢ Kubernetes
➢ OpenStack
➢ VMware vSphere
➢ Apache Mesos