
UNIT 1

VIRTUALIZATION AND VIRTUALIZATION INFRASTRUCTURE


1. What is a Virtual Machine?
A Virtual Machine (VM) is a compute resource that uses software instead of a physical computer to
run programs and deploy apps. One or more virtual “guest” machines run
on a physical “host” machine. Each virtual machine runs its own operating system and functions
separately from the other VMs, even when they are all running on the same host. This means that, for
example, a macOS virtual machine can run on a physical PC.
A virtual machine (VM) is a digital version of a physical computer. Virtual machine software can run
programs and operating systems, store data, connect to networks, and do other computing functions,
and requires maintenance such as updates and system monitoring.
A VM is a virtualized instance of a computer that can perform almost all of the same functions as a
computer, including running applications and operating systems.
Virtual machines run on a physical machine and access computing resources from software called a
hypervisor. The hypervisor abstracts the physical machine’s resources into a pool that can be
provisioned and distributed as needed, enabling multiple VMs to run on a single physical machine.
More recently, public cloud services are using virtual machines to provide virtual application
resources to multiple users at once, for more cost-efficient and flexible compute.
How multiple virtual machines work
Multiple VMs can be hosted on a single physical machine, often a server, and then managed using
virtual machine software. This allows computing resources (compute, storage, network) to be
distributed among VMs as needed, increasing overall efficiency. This architecture provides the
basic building blocks for the advanced virtualized resources we use today, including cloud computing.
What are virtual machines used for?
VMs are the basic building blocks of virtualized computing resources and play a primary role in creating
applications, tools, and environments, whether online or on-premises. Here are a few of
the more common enterprise functions of virtual machines:

• Consolidate servers - VMs can be set up as servers that host other VMs, which lets
organizations reduce sprawl by concentrating more resources onto a single physical machine.

• Enable workload migration - The flexibility and portability that VMs provide are key to
increasing the velocity of migration initiatives.

• Create development and test environments - VMs can serve as isolated environments for
testing and development that include full functionality but have no impact on the surrounding
infrastructure.

• Improve disaster recovery and business continuity - Replicating systems in cloud
environments using VMs can provide an extra layer of security and certainty. Cloud
environments can also be continuously updated.

• Support DevOps - VMs can easily be turned off or on, migrated, and adapted, providing
maximum flexibility for development.
• Create a hybrid environment - VMs provide the foundation for creating a cloud environment
alongside an on-premises one, bringing flexibility without abandoning legacy systems.
2. THE TWO TYPES OF VIRTUAL MACHINES
Types of Virtual Machines :
1. System Virtual Machine:
A system virtual machine provides a complete system platform and supports the execution of a
complete virtual operating system. Tools such as VirtualBox provide an environment in which an OS
can be installed in full. The hardware of the real machine is shared between the simulated operating
systems by a virtual machine monitor, and programs and processes then run separately on the
hardware allocated to each simulated machine.
A system virtual machine is fully virtualized to substitute for a physical machine. A system platform
supports the sharing of a host computer’s physical resources between multiple virtual machines, each
running its own copy of the operating system. This virtualization process relies on a hypervisor, which
can run on bare hardware, such as VMware ESXi, or on top of an operating system.

2. Process Virtual Machine : A process virtual machine, unlike a system virtual machine, does not
allow a complete virtual operating system to be installed. Instead, it creates a virtual environment for
a single application or program, and this environment is destroyed as soon as the application exits.
Some applications run directly on the main OS, while others run inside process virtual machines
created just for them; when a program requires a different OS environment, the process virtual
machine provides it only for as long as that program is running. Example – Wine software in Linux
helps to run Windows applications.

3. Interpretation – Binary Translation

Interpretation is a technique that allows software applications to be executed on different
hardware architectures or operating systems without recompilation. The software is run
through an interpreter that reads the code and executes it, translating it into machine code on
the fly. The interpreter allows the application to be executed without any modification and can
run on any platform that has the interpreter installed.
Interpretation is often used in cloud computing to run applications on virtual machines. Virtual
machines can be set up to mimic different hardware architectures or operating systems,
allowing applications to be run on platforms that they were not designed for. The interpreter
provides a layer of abstraction between the application and the underlying hardware, allowing
the application to be run on different platforms.
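As a rough illustration (not any particular product's implementation), the sketch below shows the core loop of an interpreter in Python: it reads one guest instruction at a time and carries out its effect on a simulated register file, so the guest code never has to be recompiled for the host. The tiny LOAD/ADD/PRINT/HALT instruction set is invented purely for the example.

    # A minimal sketch of an interpreter loop (hypothetical 4-instruction guest ISA).
    # Each guest instruction is decoded and executed one at a time on the host.

    def interpret(program):
        regs = {}          # simulated guest registers
        pc = 0             # guest program counter
        while pc < len(program):
            op, *args = program[pc]
            if op == "LOAD":           # LOAD reg, constant
                reg, value = args
                regs[reg] = value
            elif op == "ADD":          # ADD dst, src1, src2
                dst, a, b = args
                regs[dst] = regs[a] + regs[b]
            elif op == "PRINT":        # PRINT reg
                print(regs[args[0]])
            elif op == "HALT":
                break
            else:
                raise ValueError(f"unknown opcode {op}")
            pc += 1

    # Example guest program: compute 2 + 3 and print the result.
    interpret([("LOAD", "r1", 2), ("LOAD", "r2", 3),
               ("ADD", "r0", "r1", "r2"), ("PRINT", "r0"), ("HALT",)])

Because every guest instruction passes through this decode-and-dispatch loop, interpretation is flexible but slow, which is what motivates binary translation.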
Binary Translation:
Binary translation is a technique that allows software applications compiled for one hardware
architecture or operating system to be executed on another platform without modification. The
binary code is translated into machine code for the target platform, allowing the application to
be run on the new platform. Binary translation can be done statically or dynamically.
Static binary translation involves translating the entire application before it is executed. This is
done by analyzing the binary code and generating equivalent code for the target platform. The
translated code is then stored on disk and executed when needed. Static binary translation is
often used to run applications on platforms that are not supported by the application's original
architecture.
Dynamic binary translation involves translating code as it is executed. This technique is often
used in virtual machines to allow applications to be run on different hardware architectures or
operating systems. The virtual machine uses a dynamic binary translator to translate the
application's code into machine code for the underlying hardware.
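The sketch below (a simplified, hypothetical model, not VMware's or QEMU's actual code) illustrates the dynamic approach: the first time a block of guest instructions is reached it is translated into a host-executable form and stored in a translation cache; later executions of the same block reuse the cached translation instead of re-translating it.

    # A simplified sketch of dynamic binary translation with a code cache.
    # Guest "instructions" are translated block by block into host-executable
    # Python callables; translated blocks are cached and reused.

    translation_cache = {}   # guest block address -> translated host function

    def translate_block(guest_block):
        # "Translate" each guest instruction into a host operation (a lambda here).
        host_ops = []
        for op, *args in guest_block:
            if op == "MOV":
                dst, value = args
                host_ops.append(lambda regs, d=dst, v=value: regs.__setitem__(d, v))
            elif op == "ADD":
                dst, a, b = args
                host_ops.append(lambda regs, d=dst, x=a, y=b: regs.__setitem__(d, regs[x] + regs[y]))

        def run(regs):
            for host_op in host_ops:
                host_op(regs)
        return run

    def execute_block(address, guest_block, regs):
        if address not in translation_cache:            # translate only on first use
            translation_cache[address] = translate_block(guest_block)
        translation_cache[address](regs)                 # reuse the cached translation

    regs = {}
    block = [("MOV", "r1", 10), ("MOV", "r2", 32), ("ADD", "r0", "r1", "r2")]
    execute_block(0x400000, block, regs)   # translated and cached
    execute_block(0x400000, block, regs)   # served from the code cache
    print(regs["r0"])                      # 42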
Interpretation and binary translation are both important techniques used in cloud computing to
run software applications on different hardware architectures or operating systems.
Interpretation provides a layer of abstraction between the application and the hardware,
allowing it to be run on different platforms without modification. Binary translation allows
applications to be run on platforms that they were not originally designed for, by translating
the binary code into machine code for the target platform. Both techniques are essential for
running applications in cloud environments, where hardware and software configurations can
vary widely.

4. TAXONOMY OF VIRTUALIZATION
Virtual machines are broadly classified into two types: System Virtual Machines (also known
as Virtual Machines) and Process Virtual Machines (also known as Application Virtual
Machines). The classification is based on their usage and degree of similarity to the linked
physical machine. The system VM mimics the whole system hardware stack and allows for the
execution of a whole operating system. A process VM, on the other hand, provides a layer to an
operating system that is used to replicate the programming environment for the execution of
specific processes.
A Process Virtual Machine, also known as an application virtual machine, operates as a regular
program within a host OS and supports a single process. It is formed when the process begins
and deleted when it terminates. Its goal is to create a platform-independent programming
environment that abstracts away features of the underlying hardware or operating system,
allowing a program to run on any platform. With Linux, for example, Wine software aids in
the execution of Windows applications.
A System Virtual Machine, such as VirtualBox, offers a full system platform that allows the
operation of a whole operating system (OS).
Virtual Machines are used to distribute and designate suitable system resources to software
(which might be several operating systems or an application), and the software is restricted to
the resources provided by the VM. The actual software layer that allows virtualization is the
Virtual Machine Monitor (also known as Hypervisor). Hypervisors are classified into two
groups based on their relationship to the underlying hardware. Native VM is a hypervisor that
takes direct control of the underlying hardware, whereas hosted VM is a different software
layer that runs within the operating system and so has an indirect link with the underlying
hardware.
The system VM abstracts the Instruction Set Architecture, which differs slightly from that of
the actual hardware platform. The primary benefits of system VM include consolidation (it
allows multiple operating systems to coexist on a single computer system with strong isolation
from each other), application provisioning, maintenance, high availability, and disaster
recovery, as well as sandboxing, faster reboot, and improved debugging access.
The process VM enables conventional application execution inside the underlying operating
system to support a single process. To support the execution of numerous applications
associated with numerous processes, we can construct numerous instances of process VM. The
process VM is formed when the process starts and terminates when the process is terminated.
The primary goal of process VM is to provide platform independence (in terms of development
environment), which implies that applications may be executed in the same way on any of the
underlying hardware and software platforms. A process VM, as opposed to a system VM, abstracts
high-level programming languages. Although Process VM is built using an interpreter, it
achieves comparable speed to compiler-based programming languages using a just-in-time
compilation mechanism.
5. VIRTUALIZATION ARCHITECTURE

Virtualization Architecture is defined as a model that describes the concept of virtualization.


The use of virtualization is important in cloud computing. In cloud computing, end users share
data through applications hosted in the cloud; with virtualization, they can also share the underlying
IT infrastructure itself.
Virtualization architecture includes virtual application services and virtual infrastructure
services.

• The virtual application services help in application management, and the virtual
infrastructure services can help in infrastructure management.

• Both services are integrated into the virtual data center or operating system. Virtual
services can be used on any platform and programming environment. These services
can be accessed from the local cloud or external cloud. In return, cloud users must pay
a monthly or annual fee to the third party.

• This fee is paid to the third party for providing cloud services; the provider in turn
delivers applications in different forms according to the needs of cloud end users.

• A hypervisor separates the operating system from the underlying hardware. It allows
the host computer to run multiple virtual machines simultaneously and share the same
computer resources.

Types of Virtualization Architectures


There are two main types of virtualization architectures: hosted and bare metal.

Hosted Architecture
In this type of configuration, the host operating system is installed on the hardware
first, and then the virtualization software (a hosted hypervisor) is installed on top of it.
Guest operating systems and their VMs are then installed on the hypervisor to set up the
virtualization architecture. Once the hypervisor is in place, applications can be installed
and run in a virtual machine as if they were installed on the physical machine.

Bare Metal Architecture


In this architecture, the hypervisor is installed directly on the hardware, not on top of
an operating system, and the hypervisor and virtual machines are configured directly on
that infrastructure. Bare metal virtualization architecture is designed for applications
that need real-time access or perform some form of data processing.
6. STORAGE VIRTUALIZATION
• Storage virtualization is a process of pooling physical storage devices so that IT
may address a single “virtual” storage unit. It offered considerable economic and
operational savings over bare metal storage but is now mostly overshadowed by
the cloud paradigm.
• In storage virtualization, functionality such as RAID levels is provided by storage
controllers, which are an important component of storage servers. Applications and
operating systems on the device can access the disks directly for reads and writes.
The controllers configure local storage into RAID groups, and the operating system
sees the storage according to that configuration. Because the storage is abstracted,
however, the controller is in charge of figuring out how to write or retrieve the data
that the operating system requests.
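As a hedged illustration of the pooling idea (not any vendor's implementation), the sketch below concatenates several physical devices into one virtual address space: the pool translates a virtual block number into a (device, physical block) pair, so applications see a single large unit while the virtualization layer decides where the data actually lives. Device names and sizes are invented for the example.

    # A minimal sketch of storage pooling: several physical devices are presented
    # as one contiguous virtual block device.

    class StoragePool:
        def __init__(self, devices):
            # devices: list of (name, size_in_blocks)
            self.devices = devices
            self.total_blocks = sum(size for _, size in devices)

        def map_block(self, virtual_block):
            """Translate a virtual block number to (device name, physical block)."""
            if not 0 <= virtual_block < self.total_blocks:
                raise IndexError("virtual block out of range")
            offset = virtual_block
            for name, size in self.devices:
                if offset < size:
                    return name, offset
                offset -= size

    pool = StoragePool([("disk0", 1000), ("disk1", 500), ("disk2", 2000)])
    print(pool.total_blocks)        # 3500 blocks seen as one virtual unit
    print(pool.map_block(1200))     # ('disk1', 200) -- actually lives on the second disk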

Types of Storage Virtualization

Below are some types of Storage Virtualization.

• Kernel-level virtualization: It runs a separate version of the Linux kernel. Kernel-level
virtualization allows running multiple servers on a single host. It uses a device driver to
communicate between the main Linux kernel and the virtual machines. This virtualization is a
special form of server virtualization.

• Hypervisor Virtualization: A hypervisor is a layer between the operating system and the
hardware. With the help of a hypervisor, multiple operating systems can run on the same
hardware. Moreover, it provides the features and services necessary for each OS to work properly.
• Hardware-assisted Virtualization: This type of virtualization requires hardware support.
It is similar to full virtualization and para-virtualization. Here, an unmodified OS can run
because the hardware support for virtualization is used to handle hardware access requests
and protected operations.
• Para-virtualization: It is based on a hypervisor that handles emulation and trapping in
software. Here, the guest operating system is modified before it is installed on the virtual
machine. The modified system communicates directly with the hypervisor, which improves
performance.
• Full virtualization: This virtualization is similar to para-virtualization. In it, the
hypervisor traps the machine operations that the operating system uses to perform its work.
After trapping the operations, it emulates them in software and returns the status codes.

Methods of Storage Virtualization

• Network-based storage virtualization: The most popular type of virtualization used by
businesses is network-based storage virtualization. All of the storage devices in an FC
or iSCSI SAN are connected to a network device, such as a smart switch or specially
designed server, which displays the network's storage as a single virtual pool.

• Host-based storage virtualization: Host-based storage virtualization is software-based
and most often seen in HCI systems and cloud storage. In this type of virtualization, the
host, or a hyper-converged system made up of multiple hosts, presents virtual drives of
varying capacity to the guest machines, whether they are VMs in an enterprise
environment, physical servers or computers accessing file shares or cloud storage.

• Array-based storage virtualization: Another popular approach is for a storage array to
serve as the main storage controller and run virtualization software. This allows the
array to share storage resources with other arrays and present various physical storage
types that can be used as storage tiers.

7. NETWORK VIRTUALIZATION

Network Virtualization is a process of logically grouping physical networks and making them
operate as single or multiple independent networks called Virtual Networks.
Figure: general architecture of network virtualization.

Tools for Network Virtualization :

1. Physical switch OS – the operating system of the physical switch must itself provide the
functionality of network virtualization.
2. Hypervisor – uses built-in networking or third-party software to provide the
functionalities of network virtualization.
The basic function of the OS is to provide the application or the executing process with a
simple set of instructions. System calls generated by the OS and executed through the
libc library are comparable to the service primitives provided at the interface between the
application and the network through the SAP (Service Access Point).

The hypervisor is used to create a virtual switch and to configure virtual networks on it.
Third-party software can be installed onto the hypervisor to replace its native networking
functionality. A hypervisor allows various VMs to work optimally on a single piece of
computer hardware.
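The following sketch is a deliberately simplified model (not the API of any real hypervisor) of the essential behavior of a virtual switch: it learns which VM port each MAC address sits behind and forwards frames only to the right port, flooding when the destination is unknown. Port and MAC names are invented for illustration.

    # A minimal sketch of a learning virtual switch inside a hypervisor.
    # Ports are identified by VM name; frames are (src_mac, dst_mac, payload).

    class VirtualSwitch:
        def __init__(self, ports):
            self.ports = ports                # e.g. ["vm1", "vm2", "vm3"]
            self.mac_table = {}               # learned: mac -> port

        def forward(self, in_port, frame):
            src_mac, dst_mac, payload = frame
            self.mac_table[src_mac] = in_port          # learn where the source lives
            if dst_mac in self.mac_table:
                out_ports = [self.mac_table[dst_mac]]  # deliver to the known port only
            else:
                out_ports = [p for p in self.ports if p != in_port]  # flood
            for port in out_ports:
                print(f"deliver {payload!r} to {port}")

    vswitch = VirtualSwitch(["vm1", "vm2", "vm3"])
    vswitch.forward("vm1", ("aa:aa", "bb:bb", "hello"))   # unknown destination -> flooded
    vswitch.forward("vm2", ("bb:bb", "aa:aa", "reply"))   # delivered only to vm1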

Functions of Network Virtualization :

• It enables the functional grouping of nodes in a virtual network.


• It enables the virtual network to share network resources.
• It allows communication between nodes in a virtual network without routing of frames.
• It restricts management traffic.
• It enforces routing for communication between virtual networks.

Network Virtualization in Virtual Data Center :

1. Physical Network

• Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
• Grants connectivity among physical servers running a hypervisor, between physical
servers and storage systems and between physical servers and clients.

2. VM Network

• Consists of virtual switches.


• Provides connectivity to hypervisor kernel.
• Connects to the physical network.
• Resides inside the physical server.
Applications of Network Virtualization :

• Network virtualization may be used in the development of application testing to mimic
real-world hardware and system software.
• It helps us to integrate several physical networks into a single network, or separate a
single physical network into multiple logical networks.
• In the field of application performance engineering, network virtualization allows the
simulation of connections between applications, services, dependencies, and end-users
for software testing.
• It helps us to deploy applications in a quicker time frame, thereby supporting a faster
go-to-market.
• Network virtualization helps the software testing teams to derive actual results with
expected instances and congestion issues in a networked environment.

Examples of Network Virtualization :

Virtual LAN (VLAN), Network Overlays, Network Virtualization Platform: VMware NSX .

Advantages of Network Virtualization :

• Improves manageability
• Enhances performance
• Reduces CAPEX
• Enhances security
• Improves utilization

Disadvantages of Network Virtualization :

• It needs to manage IT in the abstract.
• It needs to coexist with physical devices in a cloud-integrated hybrid environment.
• Increased complexity.
• Upfront cost.
• Possible learning curve.
8. IMPLEMENTATION LEVELS OF VIRTUALIZATION

• A traditional computer runs with a host operating system specially tailored for its
hardware architecture.

• After virtualization, different user applications managed by their own operating
systems (guest OS) can run on the same hardware, independent of the host OS. This is
often done by adding additional software, called a virtualization layer. This
virtualization layer is known as the hypervisor or virtual machine monitor (VMM).

• In the VMs, applications run with their own guest OS over the virtualized CPU,
memory, and I/O resources. The main function of the software layer for virtualization
is to virtualize the physical hardware of a host machine into virtual resources to be
used exclusively by the VMs.

The Virtualization software creates the abstraction of VMs by interposing a virtualization layer
at various levels of a computer system.
Common virtualization layers include the instruction set architecture (ISA) level, hardware
level, operating system level, library support level, and application level.

1.1 Instruction Set Architecture Level

At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the help
of ISA emulation. With this approach, it is possible to run a large amount of legacy binary code
written for various processors on any given new hardware host machine. Instruction set
emulation leads to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program interprets
the source instructions to target instructions one by one. One source instruction may require
tens or hundreds of native target instructions to perform its function. Obviously, this process is
relatively slow. For better performance, dynamic binary translation is desired. This approach
translates basic blocks of dynamic source instructions to target instructions. The basic blocks
can also be extended to program traces or super blocks to increase translation efficiency.
Instruction set emulation requires binary translation and optimization. A virtual instruction set
architecture (V-ISA) thus requires adding a processor-specific software translation layer to the
compiler.

1.2 Hardware Abstraction Level

Hardware-level virtualization is performed right on top of the bare hardware. On the one hand,
this approach generates a virtual hardware environment for a VM. On the other hand, the
process manages the underlying hardware through virtualization. The idea is to virtualize a
computer’s resources, such as its processors, memory, and I/O devices. The intention is to
upgrade the hardware utilization rate by multiple users concurrently. The idea was
implemented in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor has been
applied to virtualize x86-based machines to run Linux or other guest OS applications.

1.3 Operating System Level

This refers to an abstraction layer between the traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server, with the OS instances
utilizing the hardware and software in data centers. The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users. It is also used, to a
lesser extent, in consolidating server hardware by moving services on separate hosts into
containers or VMs on one server.

1.4 Library Support Level

Most applications use APIs exported by user-level libraries rather than using lengthy system
calls by the OS. Since most systems provide well-documented APIs, such an interface becomes
another candidate for virtualization. Virtualization with library interfaces is possible by
controlling the communication link between applications and the rest of a system through API
hooks. The software tool WINE has implemented this approach to support Windows
applications on top of UNIX hosts. Another example is vCUDA, which allows applications
executing within VMs to leverage GPU hardware acceleration.
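A hedged, language-level illustration of the API-hook idea: the sketch below wraps a library function in Python so that calls made by an application are intercepted and redirected through a compatibility layer. This is conceptually what WINE does at the Windows API boundary, although WINE itself is implemented very differently (in C, against the Win32 ABI). All function names here are hypothetical.

    # A minimal sketch of library-level virtualization via API hooks:
    # the application keeps calling the same API name, but the call is
    # intercepted and served by a compatibility layer instead.

    import functools

    def guest_read_file(path):
        """Stand-in for an API the application was written against."""
        raise NotImplementedError("native implementation not available on this host")

    def host_read_file(path):
        """Host-side implementation that actually satisfies the request."""
        return f"contents of {path} (served by the host)"

    def hook(api_func, replacement):
        """Return a wrapper that redirects calls to the replacement implementation."""
        @functools.wraps(api_func)
        def wrapper(*args, **kwargs):
            print(f"[hook] intercepted call to {api_func.__name__}{args}")
            return replacement(*args, **kwargs)
        return wrapper

    # Install the hook; the application code calling guest_read_file() does not change.
    guest_read_file = hook(guest_read_file, host_read_file)
    print(guest_read_file("C:\\data\\report.txt"))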
1.5 User-Application Level

Virtualization at the application level virtualizes an application as a VM. On a traditional OS,
an application often runs as a process. Therefore, application-level virtualization is also known
as process-level virtualization. The most popular approach is to deploy high-level language
(HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the
operating system, and the layer exports an abstraction of a VM that can run programs written
and compiled to a particular abstract machine definition. Any program written in the HLL and
compiled for this VM will be able to run on it. The Microsoft .NET CLR and the Java Virtual
Machine (JVM) are two good examples of this class of VM.
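As a rough sketch of a high-level-language VM (the bytecode format is invented, not real JVM or CLR opcodes), the code below executes a small stack-based bytecode program: any program compiled to this abstract machine can run wherever the VM itself runs, which is the portability argument behind the JVM and the .NET CLR.

    # A minimal sketch of a stack-based HLL virtual machine.
    # The bytecode format here is invented for illustration only.

    def run_bytecode(code):
        stack = []
        for op, *args in code:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack.pop())

    # (2 + 3) * 4 -- the same bytecode runs on any host that has this VM installed.
    run_bytecode([("PUSH", 2), ("PUSH", 3), ("ADD",),
                  ("PUSH", 4), ("MUL",), ("PRINT",)])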

Other forms of application-level virtualization are known as application isolation, application
sandboxing, or application streaming. The process involves wrapping the
application in a layer that is isolated from the host OS and other applications. The result is an
application that is much easier to distribute and remove from user workstations. An example is
the LANDesk application virtualization platform which deploys software applications as self-
contained, executable files in an isolated environment without requiring installation, system
modifications, or elevated security privileges.

9. VIRTUALIZATION STRUCTURE

In general, there are three typical classes of VM architecture, distinguished by how a
machine is organized before and after virtualization. Before virtualization, the
operating system manages the hardware. After virtualization, a virtualization layer is
inserted between the hardware and the operating system. In such a case, the
virtualization layer is responsible for converting portions of the real hardware into
virtual hardware. Therefore, different operating systems such as Linux and Windows
can run on the same physical machine, simultaneously. Depending on the position of
the virtualization layer, there are several classes of VM architectures, namely
the hypervisor architecture, para-virtualization, and host-based virtualization.
The hypervisor is also known as the VMM (Virtual Machine Monitor). They both
perform the same virtualization operations.

1. The Xen Architecture

Xen is an open source hypervisor program developed by Cambridge University. Xen is
a micro-kernel hypervisor, which separates the policy from the mechanism. The Xen
hypervisor implements all the mechanisms, leaving the policy to be handled by Domain
0. Xen does not include any device drivers natively. It just provides a mechanism by
which a guest OS can have direct access to the physical devices. As a result, the size of
the Xen hypervisor is kept rather small. Xen provides a virtual environment located
between the hardware and the OS. A number of vendors are in the process of developing
commercial Xen hypervisors, among them are Citrix XenServer and Oracle VM .

The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems,
many guest OSes can run on top of the hypervisor. However, not all guest OSes are
created equal, and one in particular controls the others. The guest OS, which has
control ability, is called Domain
0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is
first loaded when Xen boots without any file system drivers being available. Domain 0
is designed to access hardware directly and manage devices. Therefore, one of the
responsibilities of Domain 0 is to allocate and map hardware resources for the guest
domains (the Domain U domains).

2. Binary Translation with Full Virtualization

2.1 Full Virtualization

With full virtualization, noncritical instructions run on the hardware directly while
critical instructions are discovered and replaced with traps into the VMM to be
emulated by software. Both the hypervisor and VMM approaches are considered full
virtualization. Why are only critical instructions trapped into the VMM? This is because
binary translation can incur a large performance overhead. Noncritical instructions do
not control hardware or threaten the security of the system, but critical instructions do.
Therefore, running noncritical instructions on hardware not only can promote
efficiency, but also can ensure system security.
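A hedged sketch of the trap-and-emulate idea described above (the instruction names and the critical/noncritical split are invented for illustration): noncritical instructions execute directly, while critical ones trap into a VMM routine that emulates their effect on the virtual machine's state.

    # A minimal sketch of full virtualization by trap-and-emulate.
    # The instruction set and the critical/noncritical classification are illustrative.

    CRITICAL = {"HLT", "OUT", "LOAD_CR3"}     # instructions that touch privileged hardware state

    def vmm_emulate(instruction, vm_state):
        """The VMM emulates the effect of a critical instruction in software."""
        op, *args = instruction
        if op == "OUT":
            port, value = args
            vm_state["io_log"].append((port, value))   # emulated device access
        elif op == "LOAD_CR3":
            vm_state["page_table_root"] = args[0]      # emulated control register write
        elif op == "HLT":
            vm_state["halted"] = True

    def run_guest(instructions, vm_state):
        for instruction in instructions:
            op = instruction[0]
            if op in CRITICAL:
                vmm_emulate(instruction, vm_state)     # trap into the VMM
            else:
                pass                                   # noncritical: run directly, no VMM cost

    state = {"io_log": [], "page_table_root": None, "halted": False}
    run_guest([("ADD",), ("OUT", 0x3F8, "A"), ("LOAD_CR3", 0x1000), ("HLT",)], state)
    print(state)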

2.2 Binary Translation of Guest OS Requests Using a VMM

This approach was implemented by VMware and many other software companies.
VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive
instructions. When these instructions are identified, they are trapped into the VMM,
which emulates the behavior of these instructions. The method used in this emulation
is called binary translation. Therefore, full virtualization combines binary translation
and direct execution. The guest OS is completely decoupled from the underlying
hardware. Consequently, the guest OS is unaware that it is being virtualized.

The performance of full virtualization may not be ideal, because it involves binary
translation, which is rather time-consuming. In particular, the full virtualization of I/O-
intensive applications is really a big challenge. Binary translation employs a code
cache to store translated hot instructions to improve performance, but it increases the
cost of memory usage. At the time of this writing, the performance of full virtualization
on the x86 architecture is typically 80 percent to 97 percent that of the host machine.

2.3 Host-Based Virtualization

An alternative VM architecture is to install a virtualization layer on top of the host OS.
This host OS is still responsible for managing the hardware. The guest OSes are
installed and run on top of the virtualization layer. Dedicated applications may run on
the VMs. Certainly, some other applications can also run with the host OS directly.
This host-based architecture has some distinct advantages, as enumerated next.
First, the user can install this VM architecture without modifying the host OS. The
virtualizing software can rely on the host OS to provide device drivers and other low-
level services. This will simplify the VM design and ease its deployment.

Second, the host-based approach appeals to many host machine configurations.


Compared to the hypervisor/VMM architecture, the performance of the host-based
architecture may also be low. When an application requests hardware access, it involves
four layers of mapping which downgrades performance significantly. When the ISA of
a guest OS is different from the ISA of the underlying hardware, binary translation must
be adopted. Although the host-based architecture has flexibility, the performance is too
low to be useful in practice.

10. VIRTUALIZATION OF CPU, MEMORY AND I/O DEVICES


Virtualization is the process of creating virtual instances of physical resources, such as CPU,
memory, and I/O devices.

Here's an overview of how virtualization works for each of these components:

1. CPU Virtualization:

CPU virtualization involves dividing the physical CPU into multiple virtual CPUs, allowing
multiple operating systems or processes to run simultaneously. This can be achieved through
techniques such as:

- Full virtualization: In this approach, a software layer called a hypervisor or virtual machine
monitor (VMM) is installed directly on the physical hardware. The hypervisor intercepts and
manages all CPU instructions and resources, allowing multiple guest operating systems to run
without modifications. Each guest OS operates under the assumption that it has full control of
the CPU.

2. Memory Virtualization:

Memory virtualization allows the allocation of virtual memory to different operating systems
or processes running on a virtualized environment. It involves mapping the virtual addresses
used by guest operating systems to physical memory on the host machine. The virtualization
layer manages memory allocation and ensures isolation between different virtual machines.

Memory virtualization techniques include:

- Shadow Paging: The hypervisor maintains a shadow page table that maps the guest OS's
virtual addresses to the physical memory addresses. Whenever a guest OS performs memory
operations, the hypervisor intercepts and translates the addresses accordingly (a sketch of this
mapping follows this list).

- Ballooning: The hypervisor can reclaim memory from idle or less active guest OSes by using
a balloon driver installed on the guest OS. The balloon driver requests memory from the guest
OS, freeing up the physical memory for other virtual machines.
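As a minimal sketch of the shadow paging technique listed above (all page numbers are purely illustrative), the hypervisor composes the guest's own page table (guest virtual to guest "physical") with its machine mapping (guest "physical" to host physical) into a shadow table that maps guest virtual pages directly to host physical pages.

    # A minimal sketch of shadow paging: compose two mappings into one shadow table.
    # Page numbers are purely illustrative.

    guest_page_table = {0: 5, 1: 9, 2: 3}        # guest virtual page -> guest "physical" page
    machine_map      = {5: 100, 9: 212, 3: 87}   # guest "physical" page -> host physical page

    def build_shadow_table(guest_pt, machine_map):
        """The hypervisor rebuilds this table whenever the guest edits its page table."""
        return {gva: machine_map[gpa] for gva, gpa in guest_pt.items()}

    shadow_table = build_shadow_table(guest_page_table, machine_map)
    print(shadow_table)        # {0: 100, 1: 212, 2: 87} -- guest virtual -> host physical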

3. I/O Device Virtualization:

I/O device virtualization allows virtual machines to access and use physical I/O devices, such
as network adapters, disk drives, and graphics cards. There are various techniques to achieve
device virtualization:

- Emulation: The hypervisor emulates the behavior of the physical devices, allowing the guest
OS to interact with them as if they were directly connected. This approach incurs performance
overhead, as each I/O request goes through the hypervisor (a small sketch of this appears after
this list).
- Pass-through or Direct Assignment: In this approach, the hypervisor assigns a physical I/O
device directly to a specific virtual machine, bypassing the hypervisor's involvement. The guest
OS can then communicate with the device directly, providing near-native performance.
However, this requires that the hardware supports this feature, and the device is dedicated to a
single virtual machine.
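The sketch below is a hedged illustration of device emulation (the "NIC" and its register names are invented): the guest writes to what it believes are device registers, each access traps into the hypervisor, and the hypervisor mimics the device's behavior in software.

    # A minimal sketch of I/O device emulation: the guest's register accesses
    # trap into the hypervisor, which mimics the device in software.

    class EmulatedNIC:
        def __init__(self):
            self.registers = {"TX_BUFFER": None, "TX_START": 0, "STATUS": 0}
            self.wire = []                          # frames "sent" onto the virtual network

        def write_register(self, name, value):      # called by the hypervisor on a trapped write
            self.registers[name] = value
            if name == "TX_START" and value == 1:
                self.wire.append(self.registers["TX_BUFFER"])   # emulate the transmit
                self.registers["STATUS"] = 1                    # signal completion to the guest

        def read_register(self, name):              # called on a trapped read
            return self.registers[name]

    nic = EmulatedNIC()
    nic.write_register("TX_BUFFER", b"hello vm network")   # guest programs the "device"
    nic.write_register("TX_START", 1)                      # guest kicks off the transfer
    print(nic.read_register("STATUS"), nic.wire)           # 1 [b'hello vm network']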

11. VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

Virtual Clusters:

A virtual cluster is a logical grouping of virtual machines (VMs) or containers that work
together to provide specific computing services or run applications. Unlike physical clusters,
virtual clusters exist in a software defined environment and leverage virtualization technologies
to efficiently allocate and manage resources.

Key Components of Virtual Clusters:

• Virtual Machines (VMs) : These are individual instances running within a virtual cluster.
VMs emulate physical computers and have their own operating systems and applications.
Multiple VMs can coexist on a single physical server.

• Hypervisor : The hypervisor, also known as a Virtual Machine Monitor (VMM), is the
software or firmware layer that manages and allocates physical resources (CPU, memory,
storage, etc.) to VMs. It ensures that multiple VMs can run on the same physical hardware
without interference.

• Cluster Manager : A cluster manager or orchestration software is responsible for managing
VMs within the virtual cluster. It can handle tasks like VM provisioning, load balancing,
scaling, and resource allocation.

Benefits of Virtual Clusters:

• Improved Resource Utilization : Virtual clusters allow efficient use of physical resources
by running multiple VMs on the same hardware, reducing resource wastage.

• Scalability : You can easily scale a virtual cluster up or down by adding or removing VMs,
adapting to changing workload demands.

• Isolation and Security : VMs within a virtual cluster are isolated from one another,
enhancing security and preventing resource conflicts.

• Disaster Recovery : Virtual clusters can be configured for high availability and redundancy,
ensuring continuity of services in case of hardware failures.
Resource Management in Virtual Clusters: Resource management in virtual clusters
involves efficiently allocating, monitoring, and optimizing physical resources to ensure the
performance and stability of VMs and applications.

• Resource Allocation : Resources such as CPU, memory, storage, and network bandwidth are
allocated to VMs based on their requirements. Allocation can be static or dynamic, and
priorities can be set to ensure critical workloads receive necessary resources.

• Resource Monitoring : Real-time monitoring of resource usage is crucial to detect
bottlenecks and performance issues. Monitoring tools track metrics like CPU utilization,
memory usage, and disk I/O to identify potential problems.

• Resource Optimization : To optimize resource usage, virtual clusters can employ techniques
like load balancing, dynamic resource allocation, and auto scaling. These strategies ensure that
resources are used efficiently and that workloads are distributed evenly (a simple placement
sketch follows this list).
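As a hedged sketch (a toy first-fit scheduler, not how Kubernetes or vSphere actually place workloads), the code below allocates VMs to hosts while respecting each host's remaining CPU and memory, which is the core of the allocation problem described above. Host names, VM names, and capacities are illustrative.

    # A minimal first-fit placement sketch for resource allocation in a virtual cluster.

    hosts = [
        {"name": "host1", "cpu": 16, "mem": 64},
        {"name": "host2", "cpu": 8,  "mem": 32},
    ]
    vms = [
        {"name": "web",   "cpu": 4, "mem": 8},
        {"name": "db",    "cpu": 8, "mem": 32},
        {"name": "cache", "cpu": 6, "mem": 16},
    ]

    def place(vms, hosts):
        placement = {}
        for vm in vms:
            for host in hosts:                       # first host with enough spare capacity
                if host["cpu"] >= vm["cpu"] and host["mem"] >= vm["mem"]:
                    host["cpu"] -= vm["cpu"]         # reserve the resources
                    host["mem"] -= vm["mem"]
                    placement[vm["name"]] = host["name"]
                    break
            else:
                placement[vm["name"]] = None         # no host can fit this VM (overallocation risk)
        return placement

    print(place(vms, hosts))
    # {'web': 'host1', 'db': 'host1', 'cache': 'host2'}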

Challenges in Resource Management

• Overallocation : Allocating more resources than physically available can lead to resource
contention and decreased performance.

• Bottlenecks : Resource limitations in CPU, memory, or storage can create bottlenecks that
slow down applications.

• Resource Contention : Multiple VMs competing for the same resources can result in
contention, impacting performance.

Tools and Technologies for Resource Management:

➢ Kubernetes
➢ OpenStack
➢ VMware vSphere
➢ Apache Mesos
