Virtualization
Enabling Technologies
Virtualization
Virtualization is a process that allows for more efficient utilization of physical computer hardware and is
the foundation of cloud computing.
Virtualization uses software to create an abstraction layer over computer hardware that allows the
hardware elements of a single computer—processors, memory, storage and more—to be divided into
multiple virtual computers, commonly called virtual machines (VMs). Each VM runs its own operating
system (OS) and behaves like an independent computer, even though it is running on just a portion of
the actual underlying computer hardware.
It follows that virtualization enables more efficient utilization of physical computer hardware and allows
a greater return on an organization’s hardware investment.
Today, virtualization is a standard practice in enterprise IT architecture. It is also the technology that
drives cloud computing economics. Virtualization enables cloud providers to serve users with their
existing physical computer hardware; it enables cloud users to purchase only the computing resources
they need, when they need them, and to scale those resources cost-effectively as their workloads grow.
Benefits of Virtualization
Virtualization brings several benefits to data center operators and service providers:
• Resource efficiency: Before virtualization, each application server required its own dedicated
physical CPU—IT staff would purchase and configure a separate server for each application they
wanted to run. (IT preferred one application and one operating system (OS) per computer for
reliability reasons.) Invariably, each physical server would be underused. In contrast, server
virtualization lets you run several applications—each on its own VM with its own OS—on a
single physical computer (typically an x86 server) without sacrificing reliability. This enables
maximum utilization of the physical hardware’s computing capacity.
• Easier management: Replacing physical computers with software-defined VMs makes it easier
to use and manage policies written in software. This allows you to create automated IT service
management workflows. For example, automated deployment and configuration tools enable
administrators to define collections of virtual machines and applications as services, in software
templates. This means that they can install those services repeatedly and consistently without
cumbersome, time-consuming, and error-prone manual setup. Admins can use virtualization
security policies to mandate certain security configurations based on the role of the virtual
machine. Policies can even increase resource efficiency by retiring unused virtual machines to
save on space and computing power.
• Minimal downtime: OS and application crashes can cause downtime and disrupt user
productivity. Admins can run multiple redundant virtual machines alongside each other and
failover between them when problems arise. Running multiple redundant physical servers is
more expensive.
• Faster provisioning: Buying, installing, and configuring hardware for each application is time-
consuming. Provided that the hardware is already in place, provisioning virtual machines to run
all your applications is significantly faster. You can even automate it using management software
and build it into existing workflows.
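As a sketch of that kind of automation, the snippet below uses the libvirt Python bindings to provision and boot a VM from a software template. Everything here is an assumption for illustration: it presumes libvirt-python is installed, a KVM/QEMU hypervisor is reachable at qemu:///system, and a disk image already exists at the path shown; the domain XML is a deliberately minimal, hypothetical template.

```python
# A minimal provisioning sketch using the libvirt Python bindings.
# Assumptions: libvirt-python is installed, a KVM/QEMU hypervisor is
# reachable at qemu:///system, and the disk image path below exists.
import libvirt

# Hypothetical software template for a small VM; the name, memory size,
# and disk path are illustrative only.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the VM from the template
dom.create()                           # boot it
print("Provisioned and started:", dom.name())
conn.close()
```

Because the template is just text, it can be stored in version control and reused to stamp out identical VMs on demand, which is the workflow the bullet above describes.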
A system virtual machine provides a complete system platform which supports the execution of a
complete operating system.
A process virtual machine is designed to run a single program, which means that it supports a single
process. An essential characteristic of a virtual machine is that the software running inside it is limited
to the resources and abstractions provided by the virtual machine - it cannot break out of its virtual
world.
A virtual machine can’t interact directly with a physical computer, however. Instead, it needs a
lightweight software layer called a hypervisor to coordinate with the physical hardware upon which it
runs.
Advantages of VMs are:
▪ Multiple OS environments can co-exist on the same computer, in strong isolation from each other.
▪ The virtual machine can provide an instruction set architecture (ISA) that is somewhat different
from that of the real machine.
▪ Application provisioning, maintenance, high availability and disaster recovery.
Disadvantages of VMs are:
▪ A virtual machine is less efficient than a real machine because it accesses the hardware indirectly.
▪ When multiple VMs run concurrently on the same physical host, each VM may exhibit varying and
unstable performance (speed of execution, not results), which depends heavily on the workload
imposed on the system by the other VMs, unless proper techniques are used for temporal isolation
among virtual machines.
What is a hypervisor?
The hypervisor is a thin software layer that allows multiple operating systems to run alongside each
other and share the same physical computing resources. These operating systems come as the
aforementioned virtual machines (VMs)—virtual representations of a physical computer—and
the hypervisor assigns each VM its own portion of the underlying computing power, memory, and
storage. This prevents the VMs from interfering with each other.
• Type 1 or “bare-metal” hypervisors interact with the underlying physical resources, replacing
the traditional operating system altogether. They most commonly appear in virtual server
scenarios.
• Type 2 hypervisors run as an application on an existing OS. Most commonly used on endpoint
devices to run alternative operating systems, they carry a performance overhead because they
must use the host OS to access and coordinate the underlying hardware resources.
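To make the hypervisor's role concrete, here is a small read-only sketch using the libvirt Python bindings to ask a local hypervisor which VMs it is running and what portion of memory and vCPUs each has been assigned. It assumes libvirt-python and a KVM/QEMU host at qemu:///system.

```python
# A read-only sketch that asks the hypervisor (via libvirt) which VMs
# exist and what portion of resources each has been assigned.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
for dom in conn.listAllDomains():
    # info() returns: state, max memory (KiB), current memory (KiB),
    # number of vCPUs, and cumulative CPU time (ns).
    state, max_kib, cur_kib, vcpus, cpu_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), "
          f"{cur_kib // 1024} of {max_kib // 1024} MiB, "
          f"active={bool(dom.isActive())}")
conn.close()
```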
Types of virtualization
• Desktop virtualization
• Network virtualization
• Storage virtualization
• Data virtualization
• Application virtualization
• Data center virtualization
• CPU virtualization
• GPU virtualization
• Linux virtualization
• Cloud virtualization
Desktop virtualization
Desktop virtualization lets you run multiple desktop operating systems, each in its own VM on the same
computer.
There are two types of desktop virtualization:
• Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and streams
them to users who log in on thin client devices. In this way, VDI lets an organization provide its users
access to a variety of OSs from any device, without installing an OS on any device.
• Local desktop virtualization runs a hypervisor on a local computer, enabling the user to run one or more
additional OSs on that computer and switch from one OS to another as needed without changing anything
about the primary OS.
Network virtualization
Network virtualization uses software to create a “view” of the network that an administrator can use to
manage the network from a single console. It takes hardware elements and functions (e.g.,
connections, switches, routers, etc.) and abstracts them into software running on a hypervisor. The
network administrator can modify and control these elements without touching the underlying physical
components, which dramatically simplifies network management.
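As an illustration of managing network elements purely in software, the sketch below uses the pyroute2 library (a Linux-only assumption, and it requires root privileges) to build a virtual switch and a virtual cable without touching physical hardware. The interface names are invented for the example.

```python
# A software-defined networking sketch using pyroute2 (Linux-only,
# requires root). Interface names are invented for the example.
from pyroute2 import IPRoute

ipr = IPRoute()
# Create a virtual switch (a Linux bridge) entirely in software.
ipr.link('add', ifname='br-demo', kind='bridge')
# Create a virtual cable: a veth pair, one end intended for a VM.
ipr.link('add', ifname='veth-host', peer='veth-guest', kind='veth')

br = ipr.link_lookup(ifname='br-demo')[0]
host_end = ipr.link_lookup(ifname='veth-host')[0]
ipr.link('set', index=host_end, master=br)   # "plug" the cable into the switch
ipr.link('set', index=br, state='up')        # enable the virtual ports
ipr.link('set', index=host_end, state='up')
ipr.close()
```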
Storage virtualization
Storage virtualization enables all the storage devices on the network— whether they’re installed on
individual servers or standalone storage units—to be accessed and managed as a single storage device.
Specifically, storage virtualization aggregates all the blocks of storage into a single shared pool from which
they can be assigned to any VM on the network as needed. Storage virtualization makes it easier to
provision storage for VMs and makes maximum use of all available storage on the network.
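The pooling idea can be captured in a toy model. The class below is not a real storage stack; it only illustrates how blocks from several devices are absorbed into one shared pool and handed to VMs that never see device boundaries. All names and sizes are invented.

```python
# A toy model (not a real storage stack) of how storage virtualization
# pools blocks from many devices and hands them out to VMs on demand.
class StoragePool:
    def __init__(self):
        self.free_blocks = []   # (device, block_number) pairs
        self.allocations = {}   # vm_name -> list of assigned blocks

    def add_device(self, device_name, num_blocks):
        # Absorb a device's blocks into the single shared pool.
        self.free_blocks += [(device_name, b) for b in range(num_blocks)]

    def allocate(self, vm_name, num_blocks):
        # Assign blocks to a VM; it never sees which device they came from.
        if num_blocks > len(self.free_blocks):
            raise RuntimeError("pool exhausted")
        blocks = [self.free_blocks.pop() for _ in range(num_blocks)]
        self.allocations.setdefault(vm_name, []).extend(blocks)
        return blocks

pool = StoragePool()
pool.add_device("server1-ssd", 1000)
pool.add_device("san-array", 5000)
pool.allocate("web-vm", 1200)   # transparently spans both devices
```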
Data virtualization
Modern enterprises store data from multiple applications, using multiple file formats, in multiple
locations, ranging from the cloud to on-premise hardware and software systems. Data virtualization lets
any application access all of that data—irrespective of source, format, or location.
Data virtualization tools create a software layer between the applications accessing the data and the
systems storing it. The layer translates an application’s data request or query as needed and returns
results that can span multiple systems. Data virtualization can help break down data silos when other
types of integration aren’t feasible, desirable, or affordable.
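A toy sketch of that translation layer: one query function fronts two heterogeneous sources (an in-memory SQLite database and a CSV feed), and the caller receives uniform rows without knowing which system answered. The table, fields, and data are invented for illustration.

```python
# A toy illustration of a data-virtualization layer: one query interface
# in front of heterogeneous sources. Names and data are made up.
import csv, io, sqlite3

# Source 1: a relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")

# Source 2: a CSV feed (standing in for a file- or cloud-based source).
CSV_FEED = "id,name\n3,Linus\n4,Edsger\n"

def query_customers():
    """Return customers from every backend in one uniform format."""
    rows = [{"id": r[0], "name": r[1]}
            for r in db.execute("SELECT id, name FROM customers")]
    rows += [{"id": int(r["id"]), "name": r["name"]}
             for r in csv.DictReader(io.StringIO(CSV_FEED))]
    return rows   # the caller never sees which system answered

print(query_customers())
```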
Application virtualization
Application virtualization runs application software without installing it directly on the user’s OS. This
differs from complete desktop virtualization (mentioned above) because only the application runs in a
virtual environment—the OS on the end user’s device runs as usual. There are three types of application
virtualization:
• Local application virtualization: The entire application runs on the endpoint device but runs in a
runtime environment instead of on the native hardware.
• Application streaming: The application lives on a server which sends small components of the software
to run on the end user's device when needed.
• Server-based application virtualization: The application runs entirely on a server that sends only its user
interface to the client device.
Data center virtualization
Data center virtualization abstracts most of a data center’s hardware into software, effectively enabling
an administrator to divide a single physical data center into multiple virtual data centers for different
clients.
Each client can access its own infrastructure as a service (IaaS), which would run on the same
underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-based computing,
letting a company quickly set up a complete data center environment without purchasing infrastructure
hardware.
CPU virtualization
CPU (central processing unit) virtualization is the fundamental technology that makes hypervisors,
virtual machines, and operating systems possible. It allows a single CPU to be divided into multiple
virtual CPUs for use by multiple VMs.
At first, CPU virtualization was entirely software-defined, but many of today’s processors include
extended instruction sets that support CPU virtualization, which improves VM performance.
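On Linux, you can check for those extended instruction sets directly: /proc/cpuinfo advertises the vmx flag for Intel VT-x and the svm flag for AMD-V. A minimal sketch (Linux-only assumption):

```python
# A sketch (Linux-only) that checks whether the CPU advertises hardware
# virtualization extensions: 'vmx' for Intel VT-x, 'svm' for AMD-V.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x available")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("No hardware virtualization extensions detected")
```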
GPU virtualization
A GPU (graphics processing unit) is a special multi-core processor that improves overall computing
performance by taking over heavy-duty graphic or mathematical processing. GPU virtualization lets
multiple VMs use all or some of a single GPU’s processing power for faster video, artificial intelligence
(AI), and other graphic- or math-intensive applications.
• Pass-through GPUs make the entire GPU available to a single guest OS.
• Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs) for use by server-based
VMs.
Linux virtualization
Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which supports Intel
and AMD’s virtualization processor extensions so you can create x86-based VMs from within a Linux
host OS.
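A quick way to verify this on a Linux host is to check for the /dev/kvm device node, which the kernel exposes when the KVM module is loaded. A minimal sketch (Linux-only assumption):

```python
# A quick sketch (Linux-only) to check whether KVM is usable: the kernel
# exposes the hypervisor through the /dev/kvm device node.
import os

if os.path.exists("/dev/kvm"):
    usable = os.access("/dev/kvm", os.R_OK | os.W_OK)
    print("KVM present; usable by this user:", usable)
else:
    print("KVM not available (module not loaded or no hardware support)")
```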
Cloud virtualization
As noted above, the cloud computing model depends on virtualization. By virtualizing servers, storage,
and other physical data center resources, cloud computing providers can offer a range of services to
customers, including the following:
• Infrastructure as a service (IaaS): Virtualized server, storage, and network resources you can
configure based on your requirements.
• Platform as a service (PaaS): Virtualized development tools, databases, and other cloud-based services
you can use to build your own cloud-based applications and solutions.
• Software as a service (SaaS): Software applications you use on the cloud. SaaS is the cloud-based
service most abstracted from the hardware.
Virtualization techniques
OS-level virtualization
• The main drawback of this technique is operating system compatibility: the guest cannot run a
different OS from the host (for example, Windows on a Linux host), nor even a different kernel
version (for example, a Linux 2.6 guest on a Linux 2.4 host).
• Examples of this technique include Linux-VServer, Solaris Zones, containers, etc.
a. Paravirtualization
• In this technique, system calls are made by the kernel of the guest operating system and are
handled directly by the hypervisor, which in turn completes all the tasks.
• The calls the guest operating system kernel makes to the hypervisor are called hypercalls.
b. Full virtualization
• In this case, the guest operating system runs unmodified, under the illusion that it has complete
control over the underlying hardware.
• The hypervisor generally controls and monitors the calls; it provides a CPU emulation to handle
and modify the privileges.
• This approach is less efficient, however, and degrades system performance compared to
paravirtualization (the toy sketch after this list contrasts the two paths).
c. Hardware Virtualization
• With their latest CPUs, Intel and AMD have added hardware extensions (Intel VT-x and AMD-V)
that provide an extra privilege level beneath ring 0 (sometimes called “ring -1”), which allows the
hypervisor to run there and take control of the guest operating system.
• This eliminates the overhead of the CPU emulation.
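The difference between the paravirtualized and fully virtualized paths above can be modeled in a few lines. The class below performs no real virtualization; it is a toy that only contrasts a guest that calls the hypervisor explicitly (hypercalls) with an unmodified guest whose privileged instructions must be trapped and emulated.

```python
# A toy model (pure illustration, no real virtualization) contrasting the
# two code paths described above.
class Hypervisor:
    def hypercall(self, op, *args):
        # Paravirtualization: the guest kernel asks the hypervisor directly.
        print(f"hypercall handled: {op}{args}")

    def trap(self, instruction):
        # Full virtualization: a privileged instruction is intercepted,
        # then emulated on the guest's behalf (the slower path).
        print(f"trapped and emulated: {instruction}")

hv = Hypervisor()
hv.hypercall("set_page_table", 0x1000)   # modified (paravirtualized) guest
hv.trap("mov cr3, 0x1000")               # unmodified (fully virtualized) guest
```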
Benefits of Containerization
Containerization offers significant benefits to developers and development teams. Among these are the
following:
• Portability: A container creates an executable package of software that is abstracted away from
(not tied to or dependent upon) the host operating system, and hence, is portable and able to run
uniformly and consistently across any platform or cloud.
• Agility: The open source Docker Engine for running containers set the industry standard for
containers with simple developer tools and a universal packaging approach that works on both
Linux and Windows operating systems. The container ecosystem has shifted to engines managed
by the Open Container Initiative (OCI). Software developers can continue using agile
or DevOps tools and processes for rapid application development and enhancement.
• Speed: Containers are often referred to as “lightweight,” meaning they share the machine’s
operating system (OS) kernel and are not bogged down with this extra overhead. Not only does
this drive higher server efficiencies, it also reduces server and licensing costs while speeding up
start-times as there is no operating system to boot (a sketch after this list demonstrates this).
• Fault isolation: Each containerized application is isolated and operates independently of others.
The failure of one container does not affect the continued operation of any other containers.
Development teams can identify and correct any technical issues within one container without
any downtime in other containers.
• Efficiency: Software running in containerized environments shares the machine’s OS kernel, and
application layers within a container can be shared across containers. Thus, containers are
inherently smaller in capacity than a VM and require less start-up time, allowing far more
containers to run on the same compute capacity as a single VM. This drives higher server
efficiencies, reducing server and licensing costs.
• Ease of management: A container orchestration platform automates the installation, scaling,
and management of containerized workloads and services. Container orchestration platforms can
ease management tasks such as scaling containerized apps, rolling out new versions of apps, and
providing monitoring, logging and debugging, among other functions. Kubernetes, perhaps the
most popular container orchestration system available, is an open source technology (originally
open-sourced by Google, based on its internal project called Borg) that automates the deployment,
scaling, and management of Linux containers. Kubernetes works with many container engines, such
as Docker, but it also works with any container system that conforms to the Open Container Initiative
(OCI) standards for container image formats and runtimes.
• Security: The isolation of applications as containers inherently prevents the invasion of
malicious code from affecting other containers or the host system. Additionally, security
permissions can be defined to automatically block unwanted components from entering
containers or limit communications with unnecessary resources.
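The speed point above is easy to demonstrate. The sketch below uses the Docker SDK for Python (an assumption: the docker package is installed and a Docker engine is running locally) to start a container, run a command, and clean up; because no operating system boots, this completes in roughly a second.

```python
# A minimal sketch using the Docker SDK for Python (assumes the 'docker'
# package is installed and a Docker engine is running locally). The
# container shares the host kernel, so it starts almost instantly.
import docker

client = docker.from_env()                 # talk to the local engine
output = client.containers.run(
    "alpine:latest",                       # small image, no bundled kernel
    "echo hello from a container",
    remove=True,                           # clean up after exit
)
print(output.decode().strip())
```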
Containers vs. virtual machines
• Virtualization technology allows multiple operating systems and software applications to run
simultaneously and share the resources of a single physical computer. For example, an IT
organization can run both Windows and Linux or multiple versions of an operating system, along
with multiple applications on the same server. Each application and its related files, libraries, and
dependencies, including a copy of the operating system (OS), are packaged together as a VM.
With multiple VMs running on a single physical machine, it’s possible to achieve significant
savings in capital, operational, and energy costs.
• Containerization, on the other hand, uses compute resources even more efficiently. A container
creates a single executable package of software that bundles application code together with all of
the related configuration files, libraries, and dependencies required for it to run. Unlike VMs,
however, containers do not bundle in a copy of the OS. Instead, the container runtime engine is
installed on the host system’s operating system, becoming the conduit through which all
containers on the computing system share the same OS.
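That sharing can be observed directly: the kernel version reported inside a container matches the host's, because the container engine is only a conduit to the single shared kernel. A minimal sketch (again assuming the docker Python package and a running Docker engine):

```python
# The kernel version reported inside an Alpine container is the host's
# kernel, because all containers share the one host OS kernel.
import platform
import docker

client = docker.from_env()
in_container = client.containers.run("alpine:latest", "uname -r", remove=True)
print("host kernel:     ", platform.release())
print("container kernel:", in_container.decode().strip())   # same value
```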