
Topic 13

Enabling Technologies
Virtualization


Virtualization is a process that allows for more efficient utilization of physical computer hardware and is
the foundation of cloud computing.

Virtualization uses software to create an abstraction layer over computer hardware that allows the
hardware elements of a single computer—processors, memory, storage and more—to be divided into
multiple virtual computers, commonly called virtual machines (VMs). Each VM runs its own operating
system (OS) and behaves like an independent computer, even though it is running on just a portion of
the actual underlying computer hardware.

It follows that virtualization enables more efficient utilization of physical computer hardware and allows
a greater return on an organization’s hardware investment.

Today, virtualization is a standard practice in enterprise IT architecture. It is also the technology that
drives cloud computing economics. Virtualization enables cloud providers to serve users with their
existing physical computer hardware; it enables cloud users to purchase only the computing resources
they need when they need them, and to scale those resources cost-effectively as their workloads grow.

Benefits of Virtualization

Virtualization brings several benefits to data center operators and service providers:

• Resource efficiency: Before virtualization, each application server required its own dedicated
physical CPU—IT staff would purchase and configure a separate server for each application they
wanted to run. (IT preferred one application and one operating system (OS) per computer for
reliability reasons.) Invariably, each physical server would be underused. In contrast, server
virtualization lets you run several applications—each on its own VM with its own OS—on a
single physical computer (typically an x86 server) without sacrificing reliability. This enables
maximum utilization of the physical hardware’s computing capacity.

• Easier management: Replacing physical computers with software-defined VMs makes it easier
to use and manage policies written in software. This allows you to create automated IT service
management workflows. For example, automated deployment and configuration tools enable
administrators to define collections of virtual machines and applications as services, in software
templates. This means that they can install those services repeatedly and consistently without
cumbersome, time-consuming, and error-prone manual setup. Admins can use virtualization
security policies to mandate certain security configurations based on the role of the virtual
machine. Policies can even increase resource efficiency by retiring unused virtual machines to
save on space and computing power.

• Minimal downtime: OS and application crashes can cause downtime and disrupt user
productivity. Admins can run multiple redundant virtual machines alongside each other and
failover between them when problems arise. Running multiple redundant physical servers is
more expensive.

• Faster provisioning: Buying, installing, and configuring hardware for each application is time-
consuming. Provided that the hardware is already in place, provisioning virtual machines to run
all your applications is significantly faster. You can even automate it using management software
and build it into existing workflows.

What is a virtual machine (VM)?


A virtual machine is the software implementation of a machine (i.e., a computer) that executes programs
like a physical machine. Virtualization allows an organization to create multiple virtual machines—each
with its own operating system (OS) and applications—on a single physical machine.

A system virtual machine provides a complete system platform that supports the execution of a
complete operating system.

A process virtual machine is designed to run a single program, which means that it supports a single
process. An essential characteristic of a virtual machine is that the software running inside it is limited
to the resources and abstractions provided by the virtual machine—it cannot break out of its virtual
world.

A virtual machine can’t interact directly with a physical computer, however. Instead, it needs a
lightweight software layer called a hypervisor to coordinate with the physical hardware upon which it
runs.

Advantages of VMs are:

▪ Multiple OS environments can coexist on the same computer, in strong isolation from each other.
▪ The virtual machine can provide an instruction set architecture (ISA) that is somewhat different
from that of the real machine.
▪ Application provisioning, maintenance, high availability and disaster recovery.
Disadvantages of VMs are:

▪ A virtual machine is less efficient than a real machine because it accesses the hardware indirectly.
▪ With multiple VMs running concurrently on the same physical host, each VM may exhibit varying
and unstable performance (speed of execution, not results), which can depend heavily on the
workload imposed on the system by other VMs, unless proper techniques are used for temporal
isolation among virtual machines.

What is a hypervisor?
The hypervisor is a thin software layer that allows multiple operating systems to run alongside each
other and share the same physical computing resources. These operating systems come as the
aforementioned virtual machines (VMs)—virtual representations of a physical computer—and
the hypervisor assigns each VM its own portion of the underlying computing power, memory, and
storage. This prevents the VMs from interfering with each other.

There are two types of hypervisors:

• Type 1 or “bare-metal” hypervisors interact with the underlying physical resources, replacing
the traditional operating system altogether. They most commonly appear in virtual server
scenarios.
• Type 2 hypervisors run as an application on an existing OS. Most commonly used on endpoint
devices to run alternative operating systems, they carry a performance overhead because they
must use the host OS to access and coordinate the underlying hardware resources.

Types of virtualization
• Desktop virtualization
• Network virtualization
• Storage virtualization
• Data virtualization
• Application virtualization
• Data center virtualization
• CPU virtualization
• GPU virtualization
• Linux virtualization
• Cloud virtualization

Desktop virtualization

Desktop virtualization lets you run multiple desktop operating systems, each in its own VM on the same
computer.
There are two types of desktop virtualization:

• Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and streams
them to users who log in on thin client devices. In this way, VDI lets an organization provide its users
access to a variety of operating systems from any device, without installing those operating systems on any device.
• Local desktop virtualization runs a hypervisor on a local computer, enabling the user to run one or more
additional OSs on that computer and switch from one OS to another as needed without changing anything
about the primary OS.

Network virtualization

Network virtualization uses software to create a “view” of the network that an administrator can use to
manage the network from a single console. It abstracts hardware elements and functions (e.g.,
connections, switches, routers) into software running on a hypervisor. The network administrator can
modify and control these elements without touching the underlying physical components, which
dramatically simplifies network management.

Types of network virtualization include software-defined networking (SDN), which virtualizes the
hardware that controls network traffic routing (called the “control plane”), and network function
virtualization (NFV), which virtualizes one or more hardware appliances that provide a specific
network function (e.g., a firewall, load balancer, or traffic analyzer), making those appliances easier to
configure, provision, and manage.

Storage virtualization

Storage virtualization enables all the storage devices on the network—whether they’re installed on
individual servers or standalone storage units—to be accessed and managed as a single storage device.
Specifically, storage virtualization aggregates all blocks of storage into a single shared pool from which
they can be assigned to any VM on the network as needed. Storage virtualization makes it easier to
provision storage for VMs and makes maximum use of all available storage on the network.
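
The pooling idea can be shown with a toy model. The Python sketch below is purely illustrative (the class, method, and device names are invented for this example): it aggregates the capacity of several backing devices into one pool and carves out virtual volumes for VMs, which never see which physical device their blocks come from.

```python
# Toy model of a storage pool: capacity from several devices is aggregated and
# volumes are carved out of the combined pool. All names are illustrative only.
class StoragePool:
    def __init__(self, devices):
        # devices: mapping of device name -> capacity in GB
        self.free_gb = sum(devices.values())
        self.volumes = {}

    def allocate(self, vm_name, size_gb):
        """Assign a virtual volume of size_gb to a VM, if pooled capacity remains."""
        if size_gb > self.free_gb:
            raise ValueError("not enough pooled capacity")
        self.free_gb -= size_gb
        self.volumes[vm_name] = size_gb

pool = StoragePool({"server1-disk": 500, "server2-disk": 500, "san-unit": 2000})
pool.allocate("web-vm", 100)
pool.allocate("db-vm", 750)
print(pool.free_gb)   # 2150 GB still unassigned across all devices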

Data virtualization

Modern enterprises store data from multiple applications, using multiple file formats, in multiple
locations, ranging from the cloud to on-premises hardware and software systems. Data virtualization lets
any application access all of that data—irrespective of source, format, or location.

Data virtualization tools create a software layer between the applications accessing the data and the
systems storing it. The layer translates an application’s data request or query as needed and returns
results that can span multiple systems. Data virtualization can help break down data silos when other
types of integration aren’t feasible, desirable, or affordable.
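
A minimal sketch of that translation layer, assuming two hypothetical back ends (a SQLite database and a CSV export; the file names and schemas are invented for this illustration), might look like the following. The point is only that the application asks one layer for "customers" and never deals with formats or locations itself.

```python
# Illustrative data-virtualization layer: one query interface in front of two
# differently formatted sources. File names and schemas are hypothetical.
import csv
import sqlite3

def customers_from_sqlite(path="sales.db"):
    conn = sqlite3.connect(path)
    rows = conn.execute("SELECT name, country FROM customers").fetchall()
    conn.close()
    return [{"name": n, "country": c} for n, c in rows]

def customers_from_csv(path="crm_export.csv"):
    with open(path, newline="") as f:
        return [{"name": r["name"], "country": r["country"]} for r in csv.DictReader(f)]

def get_customers():
    """Single entry point: callers never see where or how the data is stored."""
    return customers_from_sqlite() + customers_from_csv()
```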

Application virtualization

Application virtualization runs application software without installing it directly on the user’s OS. This
differs from complete desktop virtualization (mentioned above) because only the application runs in a
virtual environment—the OS on the end user’s device runs as usual. There are three types of application
virtualization:

• Local application virtualization: The entire application runs on the endpoint device but runs in a
runtime environment instead of on the native hardware.
• Application streaming: The application lives on a server, which sends small components of the software
to run on the end user's device when needed.
• Server-based application virtualization: The application runs entirely on a server that sends only its user
interface to the client device.

Data center virtualization

Data center virtualization abstracts most of a data center’s hardware into software, effectively enabling
an administrator to divide a single physical data center into multiple virtual data centers for different
clients.

Each client can access its own infrastructure as a service (IaaS), which would run on the same
underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-based computing,
letting a company quickly set up a complete data center environment without purchasing infrastructure
hardware.

CPU virtualization

CPU (central processing unit) virtualization is the fundamental technology that makes hypervisors,
virtual machines, and operating systems possible. It allows a single CPU to be divided into multiple
virtual CPUs for use by multiple VMs.

At first, CPU virtualization was entirely software-defined, but many of today’s processors include
extended instruction sets that support CPU virtualization, which improves VM performance.
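
On Linux, those hardware extensions show up as CPU feature flags: vmx for Intel VT-x and svm for AMD-V. The short Python sketch below (an assumption: a Linux x86 host exposing the standard /proc/cpuinfo interface) simply reads those flags and reports whether hardware-assisted CPU virtualization is available.

```python
# Check host support for hardware-assisted CPU virtualization (Linux x86 only):
# "vmx" = Intel VT-x, "svm" = AMD-V.
def virtualization_extensions(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split())
                return {"Intel VT-x": "vmx" in flags, "AMD-V": "svm" in flags}
    return {}

print(virtualization_extensions())
```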

GPU virtualization

A GPU (graphical processing unit) is a special multi-core processor that improves overall computing
performance by taking over heavy-duty graphic or mathematical processing. GPU virtualization lets
multiple VMs use all or some of a single GPU’s processing power for faster video, artificial intelligence
(AI), and other graphic- or math-intensive applications.

• Pass-through GPUs make the entire GPU available to a single guest OS.
• Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs) for use by server-based
VMs.

Linux virtualization

Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which supports Intel
and AMD’s virtualization processor extensions so you can create x86-based VMs from within a Linux
host OS.
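
If the libvirt management library and its Python bindings are installed (an assumption; package names vary by distribution), a minimal sketch for talking to a local KVM host looks like this. It only connects to the hypervisor and lists the VMs it knows about.

```python
# Minimal libvirt sketch: connect to the local KVM/QEMU hypervisor and list VMs.
# Assumes libvirt and its Python bindings ("libvirt-python") are installed.
import libvirt

conn = libvirt.open("qemu:///system")   # local system-level KVM connection
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```
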
Cloud virtualization

As noted above, the cloud computing model depends on virtualization. By virtualizing servers, storage,
and other physical data center resources, cloud computing providers can offer a range of services to
customers, including the following:

• Infrastructure as a service (IaaS): Virtualized server, storage, and network resources you can configure
based on your requirements.
• Platform as a service (PaaS): Virtualized development tools, databases, and other cloud-based services
you can use to build your own cloud-based applications and solutions.
• Software as a service (SaaS): Software applications you use on the cloud. SaaS is the cloud-based
service most abstracted from the hardware.

Different Virtualization Techniques


These are the different virtualization techniques currently used in the market:
• Guest Operating system virtualization
• Shared Kernel Virtualization
• Kernel Level Virtualization
• Hypervisor Virtualization

1. Guest Operating System Virtualization


• This is the simplest and easiest way to do virtualization.
• In this approach, the host operating system contains the virtualization software.
• The host OS can be anything, such as Windows, macOS, or Linux, and the virtualization software
runs just like any other application on that OS.
• The virtualization software takes care of all the virtualization tasks and supports running the
guest operating systems.
• One can run multiple operating systems using that virtualization software; it handles details such as
memory management, resource management, and hard disk partitioning. A short sketch of driving such
software from the host appears after this list.
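
For example, with a hosted virtualization product such as VirtualBox already installed (an assumption), its command-line front end can be driven from any ordinary program running on the host OS. The Python sketch below only asks VirtualBox to list the guest VMs it manages, which illustrates that the virtualization software is just another application on the host.

```python
# Hosted (guest OS) virtualization in practice: the virtualization software is
# just another application on the host OS. This sketch assumes VirtualBox is
# installed and calls its standard "VBoxManage" command-line tool.
import subprocess

result = subprocess.run(
    ["VBoxManage", "list", "vms"],      # lists VMs registered with VirtualBox
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```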

2. Shared Kernel Virtualization


• This technique does not require additional configuration of the host OS or hardware.
• In this approach, the virtualization application uses the same hardware as the host
operating system and runs inside the host OS.
• The advantage in this case is that no extra hardware or software configuration is required,
so there is no additional cost.
• But since it is the cheapest way of doing virtualization, there can be performance issues due to the
high level of abstraction.
• Tools such as VMware and VirtualBox are commonly used for virtualization.
• The main drawback of this technique is operating system compatibility: a Windows guest cannot run
with this method on a Linux host, and a guest built for Linux kernel version 2.6 will not work if the
host OS runs version 2.4.
• These techniques include Linux VServer, Solaris Zones, containers, etc.

3. Kernel Level Virtualization


• In this virtualization technique, the guest operating system runs its own kernel, unlike in shared
kernel virtualization.
• There can be multiple guest operating systems, each with its own kernel.
• But the guest operating system kernel should have a configuration similar to the host kernel;
otherwise, there will be compatibility issues.
• Kernel-level virtualization techniques include User-mode Linux and kernel-based virtual machines (KVM).


4. Hypervisor Virtualization
• In this virtualization, the hypervisor program runs directly on the hardware, in the CPU privilege
level known as ring 0—the highest level of privilege granted by the CPU hardware to any software.
• Normally, only the operating system has the privilege to run in ring 0; here the hypervisor runs in
ring 0 instead, and is called a type 1 VMM (virtual machine monitor). As the name suggests, it monitors
all the guest operating systems installed in the virtual machines and provides interfaces for
higher-level administration and monitoring.
• Because the hypervisor occupies ring 0, the kernel of a guest operating system does not get the
privileges it expects. To address this issue, hypervisors use several approaches, described below:

a. Paravirtualization
• In this technique, the guest operating system kernel makes its privileged requests directly to the
hypervisor, which handles those calls and completes the work on the guest's behalf.
• These calls from the guest kernel to the hypervisor are known as hypercalls (a toy model of this
call flow appears after this list).
b. Full virtualization
• In this case, the guest operating system runs unmodified and is given the illusion of complete
control over the hardware.
• The hypervisor controls and monitors the guest's privileged calls; it provides CPU emulation to
handle them and adjust the privilege levels.
• This approach is less efficient and degrades system performance compared with paravirtualization.
c. Hardware Virtualization
• With newer CPUs, Intel and AMD have added hardware virtualization extensions that provide an extra
privilege level above ring 0, allowing the hypervisor to run there and take control of the
guest operating systems.
• This eliminates the overhead of CPU emulation.
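
The division of labour in paravirtualization can be shown with a toy model (purely illustrative Python; real hypercalls are privileged CPU-level operations, not function calls, and every name here is invented). The "guest kernel" never touches the "hardware" directly; it hands each request to the hypervisor, which validates and performs it.

```python
# Toy model of paravirtualization: the guest kernel issues hypercalls instead
# of privileged instructions. All names are invented for illustration.
class Hypervisor:
    def __init__(self):
        self.pages_in_use = 0

    def hypercall(self, guest, operation, amount):
        """Validate and perform a privileged operation on behalf of a guest."""
        if operation == "map_memory":
            self.pages_in_use += amount
            return f"{amount} pages mapped for {guest}"
        raise ValueError(f"unknown hypercall: {operation}")

class GuestKernel:
    def __init__(self, name, hypervisor):
        self.name = name
        self.hv = hypervisor

    def allocate_memory(self, pages):
        # Instead of a privileged instruction, the paravirtualized kernel
        # explicitly asks the hypervisor to do the work (a "hypercall").
        return self.hv.hypercall(self.name, "map_memory", pages)

hv = Hypervisor()
print(GuestKernel("guest-1", hv).allocate_memory(64))
```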



• The hypervisor also runs a management console, which helps monitor the guest operating
systems running on the host and allows the system administrator to manage the virtual
machines.
• Hypervisors available in the market include Microsoft Hyper-V, Xen, VMware ESX Server, etc.
What is containerization?
• Containerization is the packaging of software code with just the operating system (OS) libraries
and dependencies required to run the code to create a single lightweight executable—called a
container—that runs consistently on any infrastructure. More portable and resource-efficient
than virtual machines (VMs), containers have become the de facto compute units of
modern cloud-native applications.
• Containerization allows developers to create and deploy applications faster and more securely.
With traditional methods, code is developed in a specific computing environment and, when
transferred to a new location, often produces bugs and errors—for example, when a developer
transfers code from a desktop computer to a VM, or from a Linux to a Windows operating
system. Containerization eliminates this problem by bundling the application code together with
the related configuration files, libraries, and dependencies required for it to run. This single
package of software, or “container,” is abstracted away from the host operating system; it therefore
stands alone and becomes portable—able to run across any platform or cloud, free of such issues.
• The concept of containerization and process isolation is actually decades old, but the emergence
in 2013 of the open source Docker Engine—an industry standard for containers with simple
developer tools and a universal packaging approach—accelerated the adoption of this
technology. Today, organizations increasingly use containerization to create new
applications and to modernize existing applications for the cloud (a minimal container-launch
sketch appears after this list).
• Containers are often referred to as “lightweight,” meaning they share the machine’s operating
system kernel and do not require the overhead of associating an operating system within each
application. Containers are inherently smaller in capacity than a VM and require less start-up
time, allowing far more containers to run on the same compute capacity as a single VM. This
drives higher server efficiencies and, in turn, reduces server and licensing costs.
• Perhaps most important, containerization allows applications to be “written once and run
anywhere.” This portability speeds development, prevents cloud vendor lock-in, and offers other
notable benefits such as fault isolation, ease of management, simplified security, and more.
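
As a concrete, hedged illustration, assuming the Docker Engine is running and its Python SDK (the docker package) is installed, the sketch below runs a small packaged image. The same call behaves the same way on any host with a container runtime, which is the portability point made above.

```python
# Minimal containerization sketch: run a packaged image with Docker's Python SDK.
# Assumes the Docker Engine is running and the "docker" package is installed.
import docker

client = docker.from_env()
# Pulls the lightweight "alpine" image if needed, runs one command in a
# container, and removes the container afterwards.
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```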

Benefits of Containerization

Containerization offers significant benefits to developers and development teams. Among these are the
following:

• Portability: A container creates an executable package of software that is abstracted away from
(not tied to or dependent upon) the host operating system, and hence, is portable and able to run
uniformly and consistently across any platform or cloud.
• Agility: The open source Docker Engine for running containers launched the industry standard for
containers, with simple developer tools and a universal packaging approach that works on both
Linux and Windows operating systems. The container ecosystem has since shifted to engines managed
by the Open Container Initiative (OCI). Software developers can continue using agile
or DevOps tools and processes for rapid application development and enhancement.
• Speed: Containers are often referred to as “lightweight,” meaning they share the machine’s
operating system (OS) kernel and are not bogged down with this extra overhead. Not only does
this drive higher server efficiencies, it also reduces server and licensing costs while speeding up
start-times as there is no operating system to boot.
• Fault isolation: Each containerized application is isolated and operates independently of others.
The failure of one container does not affect the continued operation of any other containers.
Development teams can identify and correct any technical issues within one container without
any downtime in other containers.
• Efficiency: Software running in containerized environments shares the machine’s OS kernel, and
application layers within a container can be shared across containers. Thus, containers are
inherently smaller in capacity than a VM and require less start-up time, allowing far more
containers to run on the same compute capacity as a single VM. This drives higher server
efficiencies, reducing server and licensing costs.
• Ease of management: A container orchestration platform automates the installation, scaling,
and management of containerized workloads and services. Container orchestration platforms can
ease management tasks such as scaling containerized apps, rolling out new versions of apps, and
providing monitoring, logging, and debugging, among other functions. Kubernetes, perhaps the
most popular container orchestration system available, is an open source technology (originally
open-sourced by Google, based on their internal project called Borg) that automates Linux
container functions. Kubernetes works with many container engines, such as Docker,
but it also works with any container system that conforms to the Open Container Initiative (OCI)
standards for container image formats and runtimes (a minimal sketch using the Kubernetes Python
client appears after this list).
• Security: The isolation of applications as containers inherently prevents malicious code in one
container from affecting other containers or the host system. Additionally, security
permissions can be defined to automatically block unwanted components from entering
containers or to limit communications with unnecessary resources.
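
To make the orchestration point concrete, here is a hedged sketch using the official Kubernetes Python client (assumptions: the kubernetes package is installed and a kubeconfig for an existing cluster is available). It only lists the pods the orchestrator is currently managing; the orchestrator itself handles the scaling, rollout, and restart work described above.

```python
# Minimal sketch with the official Kubernetes Python client: list the pods the
# orchestrator is currently managing. Assumes a valid kubeconfig is present.
from kubernetes import client, config

config.load_kube_config()               # use the local kubeconfig credentials
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```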

Virtualization vs. containerization


Containers are often compared to virtual machines (VMs) because both technologies enable significant
compute efficiencies by allowing multiple types of software (Linux- or Windows-based) to be run in a
single environment. However, container technology is proving to deliver significant benefits over and
above those of virtualization and is quickly becoming the technology favored by IT professionals.

• Virtualization technology allows multiple operating systems and software applications to run
simultaneously and share the resources of a single physical computer. For example, an IT
organization can run both Windows and Linux or multiple versions of an operating system, along
with multiple applications on the same server. Each application and its related files, libraries, and
dependencies, including a copy of the operating system (OS), are packaged together as a VM.
With multiple VMs running on a single physical machine, it’s possible to achieve significant
savings in capital, operational, and energy costs.
• Containerization, on the other hand, uses compute resources even more efficiently. A container
creates a single executable package of software that bundles application code together with all of
the related configuration files, libraries, and dependencies required for it to run. Unlike VMs,
however, containers do not bundle in a copy of the OS. Instead, the container runtime engine is
installed on the host system’s operating system, becoming the conduit through which all
containers on the computing system share the same OS.
