Module 1-1
Understanding Virtualization
Virtualization brought the ability to condense multiple physical servers into one
server that would run many virtual machines, allowing that physical server to run at
a much higher rate of utilization. This condensing of servers is called consolidation,
as illustrated in the figure below. A measure of consolidation is called the consolidation
ratio and is calculated by counting the number of VMs on a server; for
example, a server that has eight VMs running on it has a consolidation ratio of 8:1.
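As a quick illustration of that arithmetic, here is a small Python sketch (the host names and VM counts are made up) that computes consolidation ratios for an example inventory.

```python
# Hypothetical inventory: physical hosts and the number of VMs each one runs.
hosts = {
    "host-01": 8,
    "host-02": 12,
    "host-03": 5,
}

for host, vm_count in hosts.items():
    # Consolidation ratio = number of VMs per physical server, expressed as N:1.
    print(f"{host}: consolidation ratio {vm_count}:1")

# Average across the whole environment.
average = sum(hosts.values()) / len(hosts)
print(f"Average consolidation ratio: {average:.1f}:1")
```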
Types of Hypervisors:
Type 2 hypervisors are also less reliable because there are more points of failure:
anything that affects the availability of the underlying operating system also can
impact the hypervisor and the guests it supports.
Type 2 hypervisors are easy to install and deploy because much of the hardware
configuration work, such as networking and storage, has already been covered by
the operating system.
Type 2 hypervisors are not as efficient as Type 1 hypervisors because of this extra
layer between the hypervisor itself and the hardware. Every time a virtual machine
performs a disk read, a network operation, or any other hardware interaction, it
hands that request off to the hypervisor, just as in a Type 1 hypervisor environment.
Unlike that environment, the Type 2 hypervisor must then itself hand off the request
to the host operating system, which handles the I/O requests. The operating system then
passes the results back to the hypervisor, which returns them to the guest, adding extra steps and latency to every operation.
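To make the extra hop concrete, the toy Python model below (not real hypervisor code; the layer names are just labels) counts the hand-offs a guest I/O request makes on its way to the hardware in each case.

```python
# Toy model of the layers an I/O request crosses on its way down to the hardware.
TYPE1_PATH = ["guest VM", "hypervisor", "hardware"]
TYPE2_PATH = ["guest VM", "hypervisor", "host OS", "hardware"]

def trace(path):
    """Print each hand-off in the request path and return the hop count."""
    hops = 0
    for source, target in zip(path, path[1:]):
        print(f"  {source} -> {target}")
        hops += 1
    return hops

for name, path in [("Type 1", TYPE1_PATH), ("Type 2", TYPE2_PATH)]:
    print(f"{name} hypervisor I/O path:")
    hops = trace(path)
    print(f"  total hand-offs: {hops}")
```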
Type 1 hypervisor: It runs directly on the physical hardware, which is why it is also known as a bare-metal hypervisor. It has direct access to the underlying physical host's resources, e.g., CPU, RAM, storage, and network interfaces. Enterprise data centres and cloud service providers use Type 1 hypervisors.
Type 2 hypervisor: It runs on top of a host operating system, which is why it is also known as a hosted hypervisor. When a Type 2 hypervisor needs to communicate with the underlying hardware or access hardware resources, it must go through the host OS first. Type 2 hypervisors are more common among end users, i.e., they are used on personal computers.
Role of a Hypervisor
The hypervisor abstracts the physical layer and presents this abstraction for
virtualized servers or virtual machines to use. A hypervisor is installed directly onto a
server, without any operating system between it and the physical devices. Virtual
machines are then instantiated, or booted. From the virtual machine’s view, it can
see and work with a number of hardware resources. The hypervisor becomes the
interface between the hardware devices on the physical server and the virtual
devices of the virtual machines. The hypervisor presents only a subset of the
physical resources to each individual virtual machine.
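A minimal Python sketch of that idea follows, assuming invented host sizes and VM requests: the host owns a fixed pool of CPU and RAM, and each virtual machine is handed only a slice of it. (Real hypervisors can also overcommit resources, which this simple model does not attempt.)

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A physical server whose resources the hypervisor carves up among VMs."""
    cpus: int
    ram_gb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name, cpus, ram_gb):
        used_cpus = sum(vm["cpus"] for vm in self.vms)
        used_ram = sum(vm["ram_gb"] for vm in self.vms)
        # Present only a subset of what is physically there; refuse anything beyond it.
        if used_cpus + cpus > self.cpus or used_ram + ram_gb > self.ram_gb:
            raise RuntimeError(f"not enough free resources for {name}")
        self.vms.append({"name": name, "cpus": cpus, "ram_gb": ram_gb})

host = Host(cpus=32, ram_gb=256)
host.create_vm("web-01", cpus=4, ram_gb=16)   # this VM sees 4 CPUs and 16 GB, nothing more
host.create_vm("db-01", cpus=8, ram_gb=64)
print(host.vms)
```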
Virtual desktops run on servers in the data center. The applications that users
connect to are also in the data center, running on servers.
Virtual desktops are accessed through thin clients, which are more reliable and
less expensive than PCs. Thin clients have life spans of 7 to 10 years, so they can be
refreshed less frequently, and they use only between 5 and 10 percent of the
electricity of a PC.
If a thin client does break, the user can replace it themselves instead of relying on a
specialized hardware engineer. The virtual desktop, where all of the data
is kept, is not affected by the hardware failure. Because the data never leaves the
data center, the risk that a lost or stolen device will cause security issues is also
reduced.
Today PCs routinely run antivirus applications that help protect their data
from malware and other threats. Virtualization allows new methods of protection. Rather
than loading anti-malware software on each individual virtual desktop, there are now
virtual appliances that reside on each host and protect all of the virtual desktops that
run there.
Computer programs, or applications, can also be virtualized. There are two main
reasons for application virtualization. The first is ease of deployment: every time a
new version of an application becomes available, the company, if it decides to
upgrade to that newer version, has to push out a copy to all of its PCs.
The second reason has to do with how different applications interact with each
other. Even simple upgrades such as Adobe Acrobat Reader or Mozilla Firefox can
become problematic. Some types of application virtualization can mitigate or even
prevent this issue by encapsulating the entire program and its process.
Types of Virtualization:
Desktop Virtualization:
It allows the user's OS to be hosted remotely on a server in the data center, so the
user can access their desktop virtually, from any location and on a different machine.
Users who want specific operating systems other than Windows Server will need to
have a virtual desktop.
Server Virtualization:
In server virtualization, a single physical server is divided into multiple partitions. These
partitions are instances of a powerful physical server, often lying in a remote location,
that act like standalone servers; they are also called virtual servers.
Server virtualization allows for flexible scalability because, depending on their need,
users can request variable configurations of storage, computing power, RAM, etc.
from the physical server.
The process of virtualizing a server begins with installing a hypervisor on it.
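As a hedged example of working with an already-installed hypervisor, the sketch below uses the libvirt Python bindings against a local QEMU/KVM host to define and boot a virtual server. The VM name and sizes are invented, and the domain definition is deliberately minimal; a real one would also need disk, network, and other device elements.

```python
import libvirt

# Minimal, incomplete domain definition; real VMs also need disk and network devices.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

# Connect to the local system hypervisor managed by libvirt.
conn = libvirt.open("qemu:///system")

# defineXML registers the virtual server with the hypervisor; create() boots it.
domain = conn.defineXML(DOMAIN_XML)
domain.create()

print([dom.name() for dom in conn.listAllDomains()])
conn.close()
```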
Storage Virtualization
It allows for centralized management of all the storage devices by masking their
individual hardware/software configurations.
It enables users to scale their storage capacity on-demand.
It allows organizations to manage large amounts of crucial data by allocating it to a
single location.
Backing up, recycling and dropping data is much easier when consolidated at a single
storage location.
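The toy Python sketch below (invented device names and sizes, no real storage API) illustrates the masking idea: several backing devices are pooled behind one interface, and callers allocate capacity without knowing which device actually serves it.

```python
class StoragePool:
    """Aggregates several backing devices behind a single logical pool."""

    def __init__(self, devices):
        # devices: mapping of device name -> free capacity in GB.
        self.devices = dict(devices)

    def total_free_gb(self):
        return sum(self.devices.values())

    def allocate(self, size_gb):
        """Carve capacity out of whichever devices have space; the caller never sees them.

        (A real implementation would roll back on failure; this sketch does not.)
        """
        remaining = size_gb
        for name in self.devices:
            take = min(self.devices[name], remaining)
            self.devices[name] -= take
            remaining -= take
            if remaining == 0:
                return True
        return False  # not enough space across the whole pool

pool = StoragePool({"array-a": 500, "array-b": 250, "nvme-shelf": 250})
pool.allocate(600)           # spans multiple devices transparently
print(pool.total_free_gb())  # 400 GB left in the pool
```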
Data Virtualization:
Data virtualization is when data is managed to allow the user to change or access
data without needing to know exactly where it’s stored or what format it’s in. Data is
aggregated without moving or changing the original data, so it can be quickly
accessed from any device.
Data virtualization works by separating the collected data from its underlying data
logic. A virtualization layer, called a data virtualization tool, acts as a mediator
between the source and the front-end usage of the data.
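Here is a minimal Python sketch of that mediator idea, with hypothetical source names and a toy query interface rather than any particular data virtualization product: the front end asks the virtualization layer for a dataset, and the layer fetches it from whichever source holds it, without copying or moving the original data.

```python
# Toy "sources" in different formats; in reality these would be databases, APIs, files, etc.
SQL_SOURCE = {"customers": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]}
CSV_SOURCE = {"orders": [{"order_id": 10, "customer_id": 1, "total": 42.0}]}

class DataVirtualizationLayer:
    """Mediator between data sources and front-end consumers."""

    def __init__(self):
        # Catalog maps a logical dataset name to (source, native key).
        self.catalog = {
            "customers": (SQL_SOURCE, "customers"),
            "orders": (CSV_SOURCE, "orders"),
        }

    def query(self, dataset):
        # The consumer never needs to know where or how the data is stored.
        source, key = self.catalog[dataset]
        return source[key]

layer = DataVirtualizationLayer()
print(layer.query("customers"))
print(layer.query("orders"))
```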
Network Virtualization
Network virtualization combines the available network resources, hardware and software, into a single software-based virtual network that can be provisioned and managed independently of the underlying physical network.
Virtualization and Cloud Computing
Virtualization is the engine that will drive cloud computing by turning the data center,
what used to be a hands-on, people-intensive process, into a self-managing, highly
scalable, highly available pool of easily consumable resources.
Cloud computing creates the concept of a virtual data center, a construct that
contains everything a physical data center would. This virtual data center, deployed
in the cloud, offers resources on an as-needed basis, much like a power company
provides electricity.
These new models of computing will dramatically simplify the delivery of new
applications and allow companies to accelerate their deployments without sacrificing
scalability, resiliency, or availability.