Module 3: Virtualization, Its Types and Levels

Virtualization in cloud computing refers to the process of creating virtual versions of physical resources, allowing multiple customers to share a single resource efficiently. It enhances resource optimization, security, and flexibility while enabling the use of hypervisors to manage virtual machines. Various types of virtualization, including hardware, network, storage, and memory virtualization, contribute to improved performance and cost savings in cloud environments.

What is Virtualization in Cloud Computing?

Virtualization in cloud computing is the process of decoupling a service from its physical delivery and creating a virtual version of it.

Instead of using the original computing resource directly, specialized software creates a virtual (software-defined) version of it.

This technique allows a single resource to serve multiple customers instead of building separate systems for each.

It accomplishes this by giving a physical resource (such as a storage device) a logical name and, upon request, supplying a pointer to that physical resource.

Using this technique, you can easily switch between different virtual environments to access hardware resources such as operating systems, storage devices, memory, and network resources.

Importance of Virtualization in Cloud Computing

In cloud computing, virtualization is a cornerstone technology that shapes the landscape by enhancing efficiency, flexibility, and resource optimization. Some key points that illustrate its importance:

 Virtualization influences many aspects of cloud computing, including infrastructure management and resource allocation.
 Its central role is the improved resource utilization it brings to cloud environments.
 It provides isolated environments for virtual machines, enhancing security by separating one machine from another.

Basic Concepts of Virtualization

Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a desktop, a storage device, an operating system or network resources".

In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource when it is demanded.

What is the concept behind virtualization?

The creation of a virtual machine on top of an existing operating system and hardware is known as hardware virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the host machine, and the virtual machine itself is referred to as the guest machine.

In more practical terms, imagine you have 3 physical servers with individual
dedicated purposes. One is a mail server, another is a web server, and the last
one runs internal legacy applications. Each server is being used at about 30%
capacity—just a fraction of their running potential. But since the legacy apps
remain important to your internal operations, you have to keep them and the
third server that hosts them, right?

Traditionally, yes. It was often easier and more reliable to run individual tasks
on individual servers: 1 server, 1 operating system, 1 task. It wasn’t easy to
give 1 server multiple brains. But with virtualization, you can split the mail
server into 2 unique ones that can handle independent tasks so the legacy
apps can be migrated. It’s the same hardware, you’re just using more of it
more efficiently.

Keeping security in mind, you could split the first server again so it could
handle another task—increasing its use from 30%, to 60%, to 90%. Once you
do that, the now empty servers could be reused for other tasks or retired
altogether to reduce cooling and maintenance costs.

How does virtualization work in cloud computing?

Virtualization plays a very important role in cloud computing. Normally, cloud users share the data and applications hosted in the cloud, but with virtualization they actually share the underlying infrastructure.

One common use of virtualization is to provide applications in standard versions to cloud users. When a new version of an application is released, the cloud provider has to deliver that latest version to all of its users, which is not always practical because upgrading every system individually is expensive.

Virtualization helps overcome this problem: the servers and software applications required by cloud providers are maintained by a third party, and the providers pay for them on a monthly or annual basis.

Software called hypervisors separate the physical resources from the virtual
environments—the things that need those resources. Hypervisors can sit on
top of an operating system (like on a laptop) or be installed directly onto
hardware (like a server), which is how most enterprises virtualize. Hypervisors
take your physical resources and divide them up so that virtual environments
can use them.

Resources are partitioned as needed from the physical environment to the


many virtual environments. Users interact with and run computations within
the virtual environment (typically called a guest machine or virtual machine).
The virtual machine functions as a single data file. And like any digital file, it
can be moved from one computer to another, opened in either one, and be
expected to work the same.

When the virtual environment is running and a user or program issues an
instruction that requires additional resources from the physical environment,
the hypervisor relays the request to the physical system and caches the
changes—which all happens at close to native speed (particularly if the request
is sent through an open source hypervisor based on KVM, the Kernel-based
Virtual Machine).
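To make the hypervisor's role concrete, here is a minimal sketch that queries a local KVM/QEMU hypervisor through the libvirt Python bindings and lists its guest machines. This assumes the libvirt-python package is installed and a local qemu:///system hypervisor is running; adjust the connection URI for your own setup.

```python
# Minimal sketch: querying a local KVM/QEMU hypervisor through libvirt.
# Assumes the libvirt-python package is installed and a qemu:///system
# hypervisor is available; the URI is an assumption about your setup.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}, max memory {dom.maxMemory()} KiB")
finally:
    conn.close()
```

Each domain reported here is a guest machine whose CPU, memory, and devices are carved out of the host's physical resources by the hypervisor.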

Conclusion

In essence, virtualization means running multiple operating systems on a single machine while sharing all of its hardware resources. It provides a pool of IT resources that can be shared across the business for greater benefit.

What is a hypervisor?

A hypervisor, also known as a virtual machine monitor (VMM), is a piece of software that allows us to build and run virtual machines (VMs).

A hypervisor allows a single host computer to support multiple virtual machines (VMs) by sharing resources, including memory and processing.

What is the use of a hypervisor?

Hypervisors make better use of a system's available resources and provide greater IT versatility, because the guest VMs are independent of the host hardware; this is one of the hypervisor's major benefits.

In practice, this means VMs can be quickly moved between servers. Because a hypervisor allows several virtual machines to operate on a single physical server, it helps reduce:

o The space required by servers
o The energy they use
o The maintenance requirements of the servers.

Kinds of hypervisors

There are two types of hypervisors: "Type 1" (also known as "bare metal") and
"Type 2" (also known as "hosted"). A type 1 hypervisor functions as a light
operating system that operates directly on the host's hardware, while a type 2
hypervisor functions as a software layer on top of an operating system, similar
to other computer programs.

Since they are isolated from the attack-prone operating system, bare-metal
hypervisors are extremely stable.

Furthermore, they are usually faster and more powerful than hosted
hypervisors. For these purposes, the majority of enterprise businesses opt for
bare-metal hypervisors for their data centre computing requirements.

Hosted hypervisors run inside an OS, and additional (and different) guest operating systems can be run on top of them.

Hosted hypervisors have higher latency than bare-metal hypervisors, which is a major disadvantage. This is because communication between the hardware and the hypervisor must pass through the extra layer of the OS.

The Type 1 hypervisor

The Type 1 hypervisor is also known as a native or bare-metal hypervisor.

It replaces the host operating system, and the hypervisor schedules VM services directly on the hardware.

The Type 1 hypervisor is commonly used in enterprise data centres and other server-based environments.

Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM was merged into the Linux kernel in 2007, so any reasonably recent Linux kernel already includes it.
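Because KVM lives inside the kernel, a quick way to see whether this kind of Type 1 virtualization is available on a Linux host is to check for the /dev/kvm device. This is only a small sketch; it assumes a Linux host, and the device appears only when the kvm module is loaded and the CPU exposes hardware virtualization extensions (Intel VT-x / AMD-V).

```python
# Quick check for KVM availability on a Linux host.
# /dev/kvm exists only when the kvm kernel module is loaded and the CPU
# provides hardware virtualization extensions (Intel VT-x / AMD-V).
import os

if os.path.exists("/dev/kvm"):
    print("KVM is available: kernel-level (Type 1) virtualization is possible.")
else:
    print("KVM is not available on this host.")
```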

The Type 2 hypervisor

The Type 2 hypervisor, also known as a hosted hypervisor, is a software layer or framework that runs on top of a traditional operating system.

It operates by separating the guest and host operating systems. The host operating system schedules VM services, which are then executed on the hardware.

Individual users who wish to run multiple operating systems on a personal computer typically use a Type 2 hypervisor.

With this type of hypervisor, the guest virtual machines run as ordinary processes on the host operating system.

Hardware acceleration technology improves the processing speed of both


bare-metal and hosted hypervisors, allowing them to build and handle virtual
resources more quickly.

On a single physical computer, all types of hypervisors will operate multiple


virtual servers for multiple tenants. Different businesses rent data space on
various virtual servers from public cloud service providers. One server can host
multiple virtual servers, each of which is running different workloads for
different businesses.

What is a cloud hypervisor?

Hypervisors are a key component of the technology that enables cloud


computing since they are a software layer that allows one host device to
support several virtual machines at the same time.

Hypervisors allow IT to retain control over a cloud environment's


infrastructure, processes, and sensitive data while making cloud-based
applications accessible to users in a virtual environment.

Increased emphasis on creative applications is being driven by digital
transformation and increasing consumer expectations. As a result, many
businesses are transferring their virtual computers to the cloud.

Having to rewrite any existing application for the cloud, on the other hand, will
eat up valuable IT resources and create infrastructure silos.

As part of a virtualization platform, a hypervisor also helps applications migrate rapidly to the cloud.

As a result, businesses can take advantage of the cloud's many benefits, such as lower hardware costs, improved accessibility, and increased scalability, for a quicker return on investment.

Benefits of hypervisors

Using a hypervisor to host several virtual machines has many advantages:

o Speed: Hypervisors allow virtual machines to be created almost instantly, unlike bare-metal servers. This makes provisioning resources for complex workloads much simpler.
o Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical machine allow for more effective use of that physical server.
o Flexibility: Because the hypervisor separates the OS from the underlying hardware, the software no longer depends on particular hardware devices or drivers; bare-metal hypervisors let operating systems and their applications run on a variety of hardware types.
o Portability: Multiple operating systems can run on the same physical
server thanks to hypervisors (host machine). The hypervisor's virtual
machines are portable because they are separate from the physical
computer.

As an application requires more computing power, virtualization software


allows it to access additional machines without interruption.

Types of virtualization in cloud computing

1. Hardware Virtualization

Hardware virtualization involves partitioning a physical server into multiple
virtual servers, each running its own operating system and applications.

It enables resource optimization and consolidation of server hardware.

 reduces hardware costs,


 improves server utilization rates
 simplifies disaster recovery planning.
 provides flexibility for deploying and scaling applications.

Application

Commonly used in data centres and cloud computing environments to


efficiently allocate computing resources.

Examples

 VirtualBox,
 OpenVZ,
 VMware Workstation, etc.
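As a small illustration of hardware virtualization tooling, the sketch below shells out to VirtualBox's VBoxManage command-line tool from Python to list registered VMs and register a new, empty one. It assumes VirtualBox is installed with VBoxManage on the PATH; the VM name "demo-vm" and OS type are placeholders, not a full provisioning script.

```python
# Sketch: listing and creating VirtualBox VMs by shelling out to the
# VBoxManage CLI. Assumes VirtualBox is installed and VBoxManage is on
# the PATH; "demo-vm" is an illustrative placeholder name.
import subprocess

# List the VMs registered with this VirtualBox installation.
subprocess.run(["VBoxManage", "list", "vms"], check=True)

# Register a new (empty) virtual machine with a 64-bit Linux OS type.
subprocess.run(
    ["VBoxManage", "createvm", "--name", "demo-vm",
     "--ostype", "Linux_64", "--register"],
    check=True,
)
```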

Hardware virtualization is of three types:

Full Virtualization: The hypervisor completely simulates the underlying hardware, so an unmodified guest operating system and its applications run exactly as they would on a physical machine, unaware that they are virtualized.

Partial Virtualization: Only some system resources are virtualized, while others are accessed directly on the physical hardware. This can improve performance but may require software modifications to function properly.

Para-Virtualization: Unlike full virtualization, para-virtualization makes the guest operating system aware of the virtual environment. The guest OS is modified so that, instead of relying on fully emulated hardware, it interacts with the hypervisor directly, giving better performance and lower overhead at the cost of requiring OS modifications.

How It Works

 The guest OS is modified to communicate with the hypervisor, reducing
the need for complex hardware emulation.
 This leads to faster performance and lower overhead compared to full
virtualization.

Pros & Cons

 Better performance than full virtualization
 More efficient resource usage
 Requires OS modification, so it may not support all operating systems

Example

 Xen Hypervisor uses para-virtualization to run modified guest OS


versions for better speed and efficiency.

2. Network Virtualization

Network virtualization abstracts the physical network infrastructure, allowing


multiple virtual networks to run on the same physical network hardware.

It enables the

 creation of isolated network segments,


 improving security and
 simplifying network management.

Network virtualization
 enhances scalability,
 agility,
 resource optimization in data centres.

Network virtualization is vital in cloud computing, data centre consolidation,


and creating virtual private networks (VPNs).

VLAN is one example of this type.

Can be

 Internal
 External

Internal vs. External Network Virtualization

a. Internal Network Virtualization

 Used within a single physical system or data centre to create multiple


isolated virtual networks.
 Helps in optimizing network resources, improving security, and
managing traffic efficiently within an organization.
 Example: VLANs (Virtual Local Area Networks) and VXLAN (Virtual
Extensible LAN) allow segmentation of a network within a data centre.

b. External Network Virtualization

 Combines multiple physical networks into a single virtualized network,


allowing seamless communication between different systems.
 Helps in load balancing, disaster recovery, and integrating cloud or
remote networks.
 Example: SD-WAN (Software-Defined Wide Area Network) enables
secure, optimized connectivity between different locations.
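Building on the VLAN example above, here is a hedged sketch of internal network virtualization at the command level: creating an 802.1Q VLAN sub-interface on a Linux host with the iproute2 "ip" tool. It assumes a Linux host with iproute2, root privileges, and that "eth0" and VLAN ID 100 are placeholders for your own interface and ID.

```python
# Sketch: creating an 802.1Q VLAN sub-interface on Linux to segment a
# physical NIC, illustrating internal network virtualization.
# Assumes iproute2 is installed and the script runs as root; "eth0" and
# VLAN ID 100 are placeholders.
import subprocess

subprocess.run(
    ["ip", "link", "add", "link", "eth0", "name", "eth0.100",
     "type", "vlan", "id", "100"],
    check=True,
)
subprocess.run(["ip", "link", "set", "eth0.100", "up"], check=True)
```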

3. Storage Virtualization

Storage virtualization abstracts multiple physical storage devices into a single


logical storage unit.

It allows for

 efficient storage management,


 scalability, and
 improved data availability.

Storage virtualization

 simplifies storage provisioning,


 improves data redundancy, and
 enables features like snapshot backups and
 dynamic allocation of storage resources.

It’s widely used in data centres

 to pool and manage storage resources,
 making storage systems more efficient and resilient.

Some simple examples of this type are LUNs (logical unit numbers), RAID groups, logical volumes (LVs), etc.

Can be

 Block type
 File Type

Block Type Virtualization

 Definition: Virtualizes the physical storage into blocks, where each block
acts like a separate disk. The data is stored in fixed-size chunks (blocks),
and the storage is managed as if it’s a single large disk.
 Example: Storage Area Networks (SANs), where disks are presented to
the system as individual blocks of storage.
 Benefits: Provides high performance and flexibility, as it allows for
precise control of storage allocation.

File Type Virtualization

 Definition: Virtualizes storage at the file level, meaning files are treated
as units and can be accessed without worrying about the underlying
storage details.
 Example: Network Attached Storage (NAS), where files are stored and
accessed over a network, without needing to know where the physical
storage is located.
 Benefits: Easier to manage and access files remotely, and it simplifies
sharing files across different systems.
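As a concrete block-level example, Linux's LVM pools several physical disks into a volume group and carves logical volumes out of it. The sketch below is illustrative only: it assumes the lvm2 tools are installed, root privileges, and that /dev/sdb and /dev/sdc are spare placeholder disks (do not run it against disks holding data).

```python
# Sketch: pooling two physical disks into one logical volume with LVM,
# a common form of block-level storage virtualization on Linux.
# Assumes lvm2 is installed and /dev/sdb, /dev/sdc are spare placeholder
# disks; the names data_vg and data_lv are illustrative.
import subprocess

cmds = [
    ["pvcreate", "/dev/sdb", "/dev/sdc"],              # mark disks as physical volumes
    ["vgcreate", "data_vg", "/dev/sdb", "/dev/sdc"],   # pool them into a volume group
    ["lvcreate", "-L", "100G", "-n", "data_lv", "data_vg"],  # carve out a logical volume
]
for cmd in cmds:
    subprocess.run(cmd, check=True)
```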

4. Memory Virtualization

Memory virtualization is a technology that abstracts physical memory from


applications and operating systems, allowing multiple virtual machines (VMs)
or processes to share a common pool of memory efficiently.

It enables the dynamic allocation of memory resources, ensuring better


performance, isolation, and flexibility in computing environments.

Memory virtualization is commonly implemented in virtualized environments
such as cloud computing, data centres, and enterprise IT infrastructures.

It is often managed by hypervisors like VMware ESXi, Microsoft Hyper-V, and


KVM.
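One technique hypervisors use here is memory ballooning: the hypervisor asks a guest to give back RAM so it can be handed to other VMs. The sketch below adjusts a guest's memory target through libvirt. It assumes libvirt-python, a running KVM/QEMU hypervisor, and an existing domain named "guest1" (a placeholder) with the balloon driver loaded.

```python
# Sketch: shrinking a running guest's memory balloon through libvirt so
# the host can hand the reclaimed RAM to other VMs.
# Assumes libvirt-python is installed, qemu:///system is running, and a
# domain named "guest1" (placeholder) exists with a balloon driver.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("guest1")
    # info() returns [state, maxMem, memory, nrVirtCpu, cpuTime]; memory is in KiB.
    print("current memory:", dom.info()[2], "KiB")
    dom.setMemory(1024 * 1024)  # request a 1 GiB balloon target (value in KiB)
finally:
    conn.close()
```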

Benefits of Memory Virtualization

1. Efficient Resource Utilization


o Dynamically allocates memory to applications based on demand,
preventing underutilization or over-provisioning.
2. Cost Savings
o Reduces the need for excessive physical RAM by allowing multiple
VMs to share memory efficiently.
3. Improved Performance
o Techniques such as memory deduplication and ballooning
optimize memory usage, reducing memory bottlenecks.
4. Scalability
o Easily scales up or down based on workload requirements without
physical hardware changes.
5. Fault Isolation
o Ensures memory isolation between VMs, preventing one
application from interfering with another’s memory space.
6. Better Disaster Recovery and Migration
o Supports live migration of VMs between hosts without downtime,
ensuring business continuity.

Applications of Memory Virtualization

1. Cloud Computing
o Public and private cloud providers use memory virtualization to
optimize memory usage across multiple tenants.
2. Enterprise IT Environments
o Large organizations use it to maximize memory efficiency in
virtualized data centres.
3. High-Performance Computing (HPC)
o Enhances performance in scientific simulations and AI workloads
by optimizing memory distribution.
4. Virtual Desktop Infrastructure (VDI)
o Helps multiple virtual desktops share physical memory while
maintaining performance.

5. Embedded Systems & IoT
o Used in resource-constrained devices to manage memory
efficiently.
6. Disaster Recovery & High Availability Systems
o Enables seamless VM migration in case of hardware failures.

Memory virtualization plays a crucial role in modern computing, enabling


businesses to improve efficiency, reduce costs, and enhance flexibility.

5. Software Virtualization

What is Software Virtualization?

Software virtualization is a technology that allows multiple virtual


environments to run on a single physical machine. It enables users to create,
manage, and run multiple operating systems, applications, or virtual instances
without requiring dedicated hardware.

Benefits of Software Virtualization

1. Cost Efficiency – Reduces hardware costs by allowing multiple


applications or OS instances to share a single physical server.
2. Resource Optimization – Maximizes the use of CPU, memory, and
storage by dynamically allocating resources.
3. Isolation & Security – Virtual machines (VMs) and containers run
independently, preventing one failure from affecting others.
4. Scalability & Flexibility – Easily scale applications and infrastructure up or
down based on demand.
5. Easier Software Testing & Deployment – Developers can test
applications in different environments without affecting the main
system.
6. Disaster Recovery & Backup – Quick recovery and cloning of virtual
environments in case of system failure.

Applications of Software Virtualization

 Server Virtualization – Running multiple virtual servers on a single


physical machine (e.g., VMware ESXi, Microsoft Hyper-V).
 Desktop Virtualization – Running virtual desktops that users can access
remotely (e.g., Citrix Virtual Apps, Microsoft Remote Desktop).

 Application Virtualization – Running applications in isolated
environments without installation (e.g., VMware ThinApp, Microsoft
App-V).
 Cloud Computing – Enabling cloud services like AWS, Google Cloud, and
Microsoft Azure to run multiple virtualized workloads.
 Software Testing & Development – Creating virtual environments for
testing software without affecting the production system.

Examples of Software Virtualization Tools

1. VMware Workstation/ESXi – Enterprise and personal-use virtualization


solutions.
2. Microsoft Hyper-V – A hypervisor for running multiple operating
systems.
3. Oracle VirtualBox – A free and open-source virtualization tool for
desktop OS virtualization.
4. Docker – A containerization platform for running applications in isolated
environments.
5. Citrix Virtual Apps & Desktops – Provides virtualized applications and
desktops for enterprises.
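Docker, listed above, can be driven programmatically. The sketch below uses the Docker SDK for Python to run a command inside an isolated container. It assumes the "docker" Python package is installed and a local Docker daemon is running; the alpine image is pulled automatically on first use.

```python
# Sketch: running an application in an isolated container with the
# Docker SDK for Python. Assumes the "docker" package is installed and a
# local Docker daemon is running.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine:latest", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())
```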

6. Data Virtualization

Data virtualization abstracts data from its physical location, format, and
structure, making it appear as a single, unified data source.

It allows organizations to access and analyze data from diverse sources without
the need for complex data movement and transformation.

It provides a logical view of data from various sources, including databases,


cloud storage, and APIs.

Data virtualization

 simplifies data access,


 integration, and management.

Data virtualization is invaluable for business intelligence, data analytics, and


real-time data integration across the enterprise.

Some examples are Data Warehouses, Data Lakes, Packaged apps, etc.
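As a toy illustration of the idea (one logical view over several physical sources), the sketch below combines a CSV file and a SQLite table into a single unified DataFrame with pandas. The file name "sales.csv", database "orders.db", and table "orders" are placeholders; real data virtualization platforms do this at much larger scale and without copying the data.

```python
# Toy illustration of data virtualization: presenting one unified view
# over two different sources (a CSV file and a SQLite database) so the
# consumer never cares where each row physically lives.
# "sales.csv", "orders.db", and "orders" are placeholder names.
import sqlite3
import pandas as pd

csv_part = pd.read_csv("sales.csv")                          # source 1: flat file
with sqlite3.connect("orders.db") as con:
    db_part = pd.read_sql_query("SELECT * FROM orders", con)  # source 2: database

unified_view = pd.concat([csv_part, db_part], ignore_index=True)
print(unified_view.head())
```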

7. Desktop Virtualization

Desktop virtualization separates

the desktop environment (including the operating system and applications)


from the physical client device.

Users can access their virtual desktops from various devices, including PCs,
laptops, and thin clients.

Desktop virtualization

 centralizes desktop management,


 enhances security by keeping data off endpoint devices, and
 facilitates remote work and
 Bring Your Own Device (BYOD) policies.

It’s commonly used in businesses to provide remote access to employees, in


educational institutions for lab environments, and for software testing and
development.

Most popular examples of this type are

 Virtual desktop infrastructure (VDI),


 Desktop-as-a-Service (DaaS), and
 Remote desktop services (RDS).

8. Application Virtualization

Application virtualization is a technology that allows applications to run


independently of the underlying operating system.

Instead of being installed directly on a user’s device, applications are


encapsulated in a virtual environment, making them portable and isolated
from other applications.

Application virtualization

 simplifies software management and


 reduces conflicts between different applications.
 enables easy deployment of applications across multiple devices and
operating systems, enhancing flexibility and compatibility.

Applications

 running legacy applications on modern systems,


 streamlining software updates, and
 isolating applications for security purposes.

An example of this type is virtualizing Microsoft PowerPoint so that it can be used on Ubuntu, delivered through a web browser such as Opera rather than installed locally.

Case Study on Virtualization: A Brief Overview

Background

XYZ Solutions, a mid-sized IT company, struggled with underutilized servers,


high operational costs, and complex software deployment. Additionally, data
access, storage, and memory usage were becoming bottlenecks. To optimize IT
infrastructure, improve efficiency, and cut costs, the company adopted
virtualization across multiple areas.

Virtualization Types and Implementation

1. Hardware Virtualization

 Replaced multiple physical servers with virtual machines (VMs) using


VMware ESXi and Microsoft Hyper-V.
 Result: Improved server utilization, reduced hardware costs, and easier
management.

2. Desktop Virtualization

 Employees were provided with virtual desktops via Citrix Virtual Apps
and Desktops.
 Result: Secure remote access, centralized management, and reduced
hardware dependency.

3. Storage Virtualization

 Implemented SAN (Storage Area Network) virtualization to pool storage


resources across multiple devices.
 Result: Increased storage efficiency, simplified backup, and reduced
storage redundancy.

4. Network Virtualization
 Used SDN (Software-Defined Networking) to create virtual networks for
better traffic management.
 Result: Improved network security, flexibility, and reduced reliance on
physical hardware.

5. Application Virtualization

 Deployed applications through Microsoft App-V, allowing software to


run without installation on local machines.
 Result: Reduced compatibility issues, easier updates, and improved
security.

6. Data Virtualization

 Used Denodo Platform to provide real-time access to data from multiple


sources without duplication.
 Result: Faster data retrieval, improved decision-making, and lower
storage needs.

7. Software Virtualization

 Implemented containerization using Docker and Kubernetes, allowing


applications to run independently of the underlying OS.
 Result: Increased portability, faster deployment, and reduced OS
dependency.

8. Memory Virtualization

 Adopted RAM virtualization using Intel VT and VMware vSphere to pool


memory resources across servers.
 Enabled swap space and ballooning techniques to allocate memory
dynamically across virtual machines.
 Result: Optimized memory usage, prevented server crashes due to
memory shortages, and improved application performance.

Outcomes & Benefits

 Cost Savings: Lower hardware expenses, energy savings, and reduced maintenance costs.
 Scalability: IT resources can be easily expanded without major infrastructure changes.
 Security: Centralized data management enhanced security and compliance.
 Efficiency: Improved resource utilization, reduced downtime, and better overall performance.
 Flexibility: Seamless access to applications, data, and software across devices.
 Performance Boost: Memory virtualization prevented bottlenecks and improved speed.

Virtualization levels in cloud computing

Setting up virtualization is not an easy task as your computer functions on


operating systems configured to run on particular hardware types.

Thus, running different operating system types using the same hardware
proves to be difficult.

For this, we need a hypervisor that acts as the bridge between your hardware
and virtual operating system, allowing smoother functioning.

Virtualization can be implemented at five levels in cloud computing. Let’s look at them:

1. Instruction Set Architecture Level (ISA)


 Instruction SET
 Emulator
 Mapping of Instructions

Here’s a brief explanation of the three concepts at the Instruction Set


Architecture (ISA) Level:

1. Instruction Set

 Definition: A collection of instructions that a CPU can execute. It defines


the set of operations (like arithmetic, data transfer, or logic operations)
the processor can perform.
 Example: x86 and ARM are popular instruction sets used in different
types of processors.

2. Emulator

 Definition: A software tool that mimics the behavior of hardware (or
another computer system), enabling software to run on platforms it
wasn’t designed for.
 Example: QEMU and Bochs are emulators that can simulate a different
CPU architecture for running software that isn't compatible with the
host machine.

3. Mapping of Instructions

 Definition: The process of translating instructions from one architecture


to another. It involves converting instructions from a high-level language
into machine code that the processor can understand and execute.
 Example: In cross-compilation, a source program is compiled for a
different target architecture, and its instructions are mapped
accordingly.
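To make ISA-level emulation tangible, the sketch below uses QEMU's user-mode emulator to run an ARM binary on an x86 host, translating its instructions on the fly. It assumes the qemu-user package is installed and that "./hello_arm" is an existing, statically linked ARM executable (a placeholder name).

```python
# Sketch: ISA-level emulation with QEMU's user-mode emulator, which
# translates ARM instructions so an ARM binary can run on an x86 host.
# Assumes the qemu-user package is installed and "./hello_arm" is an
# existing ARM executable (placeholder name).
import subprocess

result = subprocess.run(["qemu-arm", "./hello_arm"],
                        capture_output=True, text=True)
print(result.stdout)
```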

2. Hardware Abstraction Level (HAL)

Problem at ISA Level:

At the ISA level, some instructions may be directly accessible by user-level


software, which poses security risks and can allow user programs to interfere
with system-level operations.

For example, if an application can access hardware directly or modify system


settings, it can lead to system instability or security breaches.

How HAL Helps:

I.HAL classifies instructions as privileged and non-privileged.

Privileged vs. Non-Privileged Instructions

Privileged Instructions:

These are instructions that can only be executed by the kernel or


supervisor mode of the CPU (i.e., the OS or system-level software).

Examples: Instructions for accessing hardware directly, changing


memory management settings, or managing system interrupts.

Non-Privileged Instructions:

These instructions can be executed by user-level applications without
risking system security or stability.

Examples: Basic arithmetic, data movement, and branching operations.

II. HAL serves as a bridge between the OS and the hardware and ensures that
privileged instructions can only be executed by trusted software (like the OS
kernel) while user applications are restricted to non-privileged instructions.

HAL Mechanism:

 When an application tries to execute a privileged instruction, HAL


intercepts this request and traps it, passing control to the OS (kernel)
where the operation can be safely handled.
 HAL ensures that privileged operations are only executed in a controlled,
secure environment, and user applications are kept from directly
interacting with sensitive hardware resources.

Example:

When a program needs to access I/O devices or modify memory protection, it


cannot directly issue privileged instructions.

Instead, HAL ensures that only the OS can interact with these resources,
maintaining system integrity.

In Summary:

HAL ensures that privileged instructions (which control critical system


functions) are not accessible by user applications.

It abstracts hardware details and enforces privilege separation between the OS


and user-level programs, making the system more secure and stable.
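The trap mechanism described above can be modelled in a few lines. The toy Python sketch below is purely illustrative (real hypervisors do this in hardware and microcode, not in Python): non-privileged instructions execute directly, while privileged ones are trapped and handed to a trusted kernel-level handler. The instruction names are made up for the example.

```python
# Toy model of HAL-style privilege separation: non-privileged instructions
# run directly, while privileged ones are trapped and handled by trusted
# (kernel-level) code. Purely illustrative; instruction names are made up.
PRIVILEGED = {"IO_ACCESS", "SET_PAGE_TABLE", "DISABLE_INTERRUPTS"}

def kernel_handler(instr):
    print(f"[kernel] safely emulating privileged instruction: {instr}")

def execute(instr, user_mode=True):
    if user_mode and instr in PRIVILEGED:
        # Trap: control transfers to the OS/hypervisor instead of the hardware.
        kernel_handler(instr)
    else:
        print(f"[user] executing {instr} directly on the CPU")

execute("ADD R1, R2")   # non-privileged: runs directly
execute("IO_ACCESS")    # privileged: trapped and emulated
```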

3. OS-Level Virtualization

Problems of HAL

1. Delays are introduced by the time spent classifying instructions as privileged or non-privileged.
2. Hardware Dependency:

HAL abstracts the hardware for the OS, but each OS still has to be
compatible with specific hardware. OS-level virtualization solves this by
allowing the same OS to run multiple virtual environments without
needing to worry about hardware differences between each
environment.

3. Inefficient Resource Allocation:

HAL handles basic resource management but can't easily manage


resources for multiple virtual environments. OS-level virtualization
improves this by providing efficient sharing and management of CPU,
memory, and storage across multiple isolated containers.

4. Overhead of Full Virtualization:

Full virtualization requires emulating the entire hardware for each virtual
machine, which adds significant overhead. OS-level virtualization (e.g.,
containers) avoids this by sharing the OS kernel, making it more
lightweight and efficient.

What OS level Virtualization does

OS-level virtualization allows multiple isolated environments (such as


containers) to run on a single operating system kernel.

Unlike traditional virtualization, where each virtual machine runs its own OS,
OS-level virtualization uses the same OS for all environments, but each
environment behaves as if it has its own separate system.

Key Points:

Efficient: It shares the host OS's resources, making it lightweight.

Isolation: Each environment (container) is isolated from others, ensuring


security and stability.

Fast: Containers can be quickly created, started, and stopped.

Example:

Docker is a popular tool that uses OS-level virtualization to run multiple


containers on a single OS.
Limitation of OS-level virtualization: all environments must belong to the same OS family, because they share the host kernel. (If 1,000 VMs are created, HAL-level virtualization becomes time-consuming, so we share at the OS level instead; the trade-off is that every environment runs the same OS.)
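The kernel feature underneath this shared-kernel model is namespaces. The sketch below uses the Linux "unshare" tool to start a process in its own PID namespace, so "ps" inside it sees only itself, the same isolation idea containers build on. It assumes a Linux host with util-linux installed and root privileges.

```python
# Sketch: the kernel feature beneath OS-level virtualization. "unshare"
# starts a process in its own PID namespace on the shared host kernel, so
# "ps" inside it sees only itself -- the same isolation containers use.
# Assumes a Linux host with util-linux installed and root privileges.
import subprocess

subprocess.run(
    ["unshare", "--pid", "--fork", "--mount-proc", "ps", "aux"],
    check=True,
)
```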

4. Library Level / Programming Level / API Level

The OS-level limitation of requiring the same OS can be overcome by library-level virtualization, which works through APIs.

Through these APIs, guest software can use the services of the host system.

Library-level virtualization uses APIs to abstract OS-level functions (like file I/O, memory management, etc.) and provides an environment for applications to run without needing a full OS.

It enables applications to run as if they have their own virtualized environment while still sharing the host OS resources.

To overcome OS-level limitations, library-level virtualization can be used by


leveraging APIs to provide an abstraction layer that mimics the behavior of an
operating system. This allows applications to run without needing a full OS,
providing a form of virtualization that doesn’t require separate physical or
virtual machines.

Here’s how library-level virtualization with APIs can address OS-level


limitations:

Abstraction of System Resources: A library can provide a set of APIs that


abstract system calls, such as file I/O, memory management, networking, and
process control, which are typically handled by the operating system. These
libraries provide a user-level API that acts as an intermediary, making it easier
to develop applications without worrying about underlying OS details.

Isolation: A virtualized environment can be created where each application or


service operates as if it has its own isolated operating environment, even
though they are running on the same OS. This is similar to how containers
work, but at the library level. The API can manage resource allocation, control
access, and provide a level of process isolation, similar to what an OS would
provide.

Cross-Platform Compatibility: Library-level virtualization allows applications to use a standardized API across different platforms, enabling them to run on multiple OSes without modification. This bypasses OS-specific differences by relying on a common set of APIs, similar to how Java provides platform independence through the JVM.

User-Level Virtualization: Instead of relying on OS-level features such as


virtualization or hypervisors, the library can manage the resource sharing and
allocation on the user level. It can mimic multi-threading, memory isolation,
and scheduling, making it behave like a separate environment without needing
an OS.

Efficiency: Library-level virtualization is typically more lightweight than full OS-


level virtualization (like VMs), as it does not require a hypervisor or an
additional operating system layer. The application interacts with a specialized
library (or set of libraries) that makes resource management decisions on
behalf of the application, thus minimizing overhead.

Examples:

User-level threads: Libraries such as pthreads or libc can provide multi-


threading functionalities on top of an OS that doesn't have native support.

Containerization: Technologies like Docker use libraries (e.g., libcontainer) to


offer OS-level virtualization but can run within the same kernel and OS.

libvirt: An abstraction layer for virtualization that provides a unified API to


interact with different hypervisors and virtual machines.

5. Application Level

 Application-level virtualization is the last implementation level of


virtualization in cloud computing. It is used in case we need to virtualize
only one application like SAP
 It is generally used in the case of running virtual machines that function
on high-level languages and support high-level program compilation
such that the virtual machine runs smoothly.
 Application-level virtualization isolates the application from the
underlying system, packaging it with its dependencies. This allows the
app to run independently of the host OS and libraries, ensuring
compatibility across different systems and preventing dependency
conflicts.
 Examples of this level are JVM, .net, etc.

Case Study: Virtualization in a Multi-Tier Web Application

Scenario: A company builds a web application with three layers: a frontend, a


backend, and a database. They need to deploy it on multiple environments for
development, testing, and production while minimizing conflicts and
maximizing resource utilization.

1.Hardware-Level Virtualization (e.g., Virtual Machines):

 Challenge: The development, testing, and production environments


need different OS configurations and software versions.
 Solution: The company uses virtual machines (VMs) for each
environment. Each VM runs a separate instance of the OS with all the
required software installed. This isolates the environments and ensures
consistency.
 Result: Full OS isolation, but higher resource overhead compared to
other approaches.

2. Library-Level Virtualization:

 Challenge: The backend service needs specific versions of libraries, but


the OS on which the application is running has different versions.
 Solution: Using a library virtualization framework (e.g., Docker), the backend service can access the specific versions of libraries it needs, without interfering with the host system.
 Result: The backend works with the correct libraries, and the application
doesn't face version conflicts, but it still shares the underlying OS
resources.

3. Application-Level Virtualization:

 Challenge: The frontend of the application requires specific


configurations and dependencies, but you need to deploy the app across
different OS environments.
 Solution: Application virtualization tools like Docker containers package
the frontend with its dependencies into a portable unit that can run
across various platforms without modification.
 Result: The frontend runs consistently on all environments, with no
dependency issues, and can be easily deployed or scaled.

4. Combination of All Levels:

 Solution: The company combines all levels of virtualization:
o Hardware level virtualization: VMs for isolated environments.
o Library-level virtualization: Docker for managing dependencies.
o Application-level virtualization: Containerized frontend for
portability.
 Result: The entire multi-tier web application is isolated, flexible, and
portable across different systems and environments.

Conclusion:

By using all levels of virtualization, the company ensures:

 Hardware-level isolation for environments,


 Library-level dependency management,
 Application portability for consistent performance across systems.

This approach maximizes resource efficiency, scalability, and maintainability.

Case Study: Deploying a Simple Web Application

Scenario: A developer needs to deploy a simple web application that consists


of a frontend and a backend on a local machine and then on the cloud,
ensuring consistency and minimizing conflicts.

1. OS-Level Virtualization (e.g., Virtual Machines):

 Challenge: The developer needs a development environment that


mimics the production server.
 Solution: The developer uses a virtual machine (VM) to create a local
environment running the same OS and software as the cloud production
server.
 Result: The developer can test the application in an isolated
environment that closely matches production, though it consumes more
system resources.

2. Library-Level Virtualization:

 Challenge: The backend uses a specific version of a database client
library, but the OS has a different version installed.
 Solution: The developer uses a library-level virtualization tool (e.g.,
Docker) to package the backend service along with its required database
client libraries, ensuring it works as expected without conflicts.
 Result: The backend service runs with the correct version of the library,
and the developer doesn't have to worry about global library conflicts.

3. Application-Level Virtualization:

 Challenge: The frontend has multiple dependencies (e.g., JavaScript


frameworks) that must work across different OS environments.
 Solution: The developer uses application-level virtualization (e.g., Docker
containers) to package the frontend with all its dependencies, ensuring
it runs seamlessly on any platform, from local machines to the cloud.
 Result: The frontend is isolated and portable, and it behaves consistently
across different systems.

4. Combination of All Levels:

 Solution: The developer combines all levels:


o VM for testing production-like environments,
o Docker for packaging the backend with its libraries,
o Docker containers for packaging the frontend.
 Result: The application is consistent across environments, and the
developer can deploy it easily on both local and cloud servers.

Review Benefits of Virtualization in Cloud Computing

Virtualization brings forth a plethora of benefits that redefine how businesses


operate and innovate.

Most of them stem from the features of virtualization. Research shows that
adopting virtualization can lead to a reduction of 70% and 30% in capital and
operational expenditure respectively.

Let’s explore some key advantages:

 Virtualization allows for the efficient utilization of hardware resources by


running multiple virtual machines on a single physical server. This
maximizes resource usage and minimizes waste
 Virtual machines can be quickly provisioned and de-provisioned,
enabling businesses to scale up or down based on demand. This
flexibility accelerates response times and enhances user experience.
 By consolidating multiple virtual machines on fewer physical servers,
organizations can reduce hardware and energy costs. Additionally, the
ability to run multiple operating systems on a single server reduces the
need for diverse hardware setups.
 Virtualization simplifies disaster recovery by allowing snapshots and easy
migration of virtual machines. This aids in creating robust backup and
recovery strategies.

Limitations of Virtualization in Cloud Computing

While virtualization offers a plethora of benefits, it also has limitations:

 Virtualization introduces a level of resource overhead due to the need


for virtualization layers and management. This can impact performance
and resource utilization.
 Managing a virtualized environment can be complex, especially as the
number of virtual machines and components increases. Effective
management tools and strategies are required.
 Shared resources in a virtualized environment can lead to security
concerns. Breaches in one virtual machine could potentially impact
others if proper isolation measures are not in place.
 While virtualization optimizes hardware utilization, it might not utilize
resources to their full potential, especially for workloads that demand
maximum performance.
 Organizations need to consider compatibility and portability when
adopting virtualization solutions to avoid vendor lock-in.

Virtualization security in cloud computing

Virtualization security in cloud computing, or “virtualized security”, refers to security solutions designed to work inside a virtualized IT environment; they are mainly software-based.

 It differs from traditional network security, which is hardware-based, largely static, and runs on devices such as firewalls, routers, and switches.
 Virtualized security is flexible, dynamic, deployable from anywhere, and cloud-based.
 It allows security services and functions to move around with dynamically created workloads.
 The flexibility of virtualized security is important for securing hybrid and
multi-cloud environments.
 According to Allied market research, global virtualization security market
size is projected to reach $6.29 billion by 2030.

Multi-Tenant Architecture

Virtualization’s impact on cloud computing extends to multi-tenant


architectures, where multiple users (tenants) share the same resources while
maintaining isolation.

Tenants are given the ability to customize parts of the application such as the
colour of the UI or business rules without changing its code.
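A tiny illustration of this kind of per-tenant customization: each tenant carries its own configuration (a UI colour, a business rule) while all tenants share the same application code. The tenant names and settings below are made up for the example.

```python
# Toy illustration of multi-tenant customization: each tenant gets its own
# configuration (UI colour, a business rule), while all tenants share the
# same application code. Tenant names and settings are made up.
TENANT_CONFIG = {
    "acme":   {"ui_colour": "#004488", "discount_rate": 0.10},
    "globex": {"ui_colour": "#AA2200", "discount_rate": 0.05},
}

def price_for(tenant, base_price):
    cfg = TENANT_CONFIG[tenant]          # per-tenant settings, no code change
    return base_price * (1 - cfg["discount_rate"])

print(price_for("acme", 100.0))    # 90.0
print(price_for("globex", 100.0))  # 95.0
```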

This approach is highly efficient, as evidenced by the fact that the multi-tenant
data centre market is expected to grow by 11.36% by 2028.
