Virtualisation


In computing, virtualisation refers to the act of creating a virtual version of something, including virtual computer hardware platforms, storage devices, and computer network resources. Virtualisation began in the 1960s as a method of logically dividing the system resources provided by mainframe computers between different applications. Since then, the meaning of the term has broadened. Hardware virtualisation or platform virtualisation refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.
In hardware virtualisation, the host machine is the physical machine on which the virtualisation takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor.
Different types of hardware virtualization include:
Full virtualization – almost complete simulation of the actual hardware to allow software environments, including a guest operating system and its apps, to run unmodified.
Paravirtualization – the guest apps are executed in their own isolated domains, as if they are running on a separate system, but a hardware environment is not simulated. Guest programs need to be specifically modified to run in this environment.
Hardware-assisted virtualization is a way of improving the overall efficiency of virtualization. It involves CPUs that provide support for virtualization in hardware, and other hardware components that help improve the performance of a guest environment.
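On many systems, the presence of these CPU virtualization extensions can be checked directly. As a hedged illustration, the following sketch assumes a Linux host, where /proc/cpuinfo lists the vmx (Intel VT-x) or svm (AMD-V) feature flags:

```python
# Sketch: detect hardware-assisted virtualization support on a Linux host.
# Assumes /proc/cpuinfo exposes CPU feature flags (Linux-specific).

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                # vmx = Intel VT-x, svm = AMD-V
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Hardware-assisted virtualization:", has_hw_virtualization())
```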
Hardware virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization.

With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS. Using virtualization, an enterprise can better manage updates and rapid changes to the operating system and applications without disrupting the user. "Ultimately, virtualization dramatically improves the efficiency and availability of resources and applications in an organization. Instead of relying on the old model of "one server, one application" that leads to underutilized resources, virtual resources are dynamically applied to meet business needs without any excess fat" (ConsonusTech).
Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but their domains of use differ.

Snapshots
A snapshot is a state of a virtual machine, and generally its storage devices, at an exact point in time. A snapshot enables the virtual machine's state at the time of the snapshot to be restored later, effectively undoing any changes that occurred afterwards. This capability is useful as a backup technique, for example, prior to performing a risky operation.
Virtual machines frequently use virtual disks for their storage; in a very simple example, a 10-gigabyte hard disk drive is simulated with a 10-gigabyte flat file. Any requests by the VM for a location on its physical disk are transparently translated into an operation on the corresponding file. Once such a translation layer is present, however, it is possible to intercept the operations and send them to different files, depending on various criteria. Every time a snapshot is taken, a new file is created and used as an overlay for its predecessors. New data is written to the topmost overlay; reading existing data, however, needs the overlay hierarchy to be scanned, resulting in accessing the most recent version. Thus, the entire stack of snapshots is virtually a single coherent disk; in that sense, creating snapshots works similarly to the incremental backup technique.
Other components of a virtual machine can also be included in a snapshot, such as the contents of its random-access memory (RAM), BIOS settings, or its configuration settings. The "save state" feature in video game console emulators is an example of such snapshots.
Restoring a snapshot consists of discarding or disregarding all overlay layers that are added after that snapshot, and directing all new changes to a new overlay.
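The overlay mechanism can be modelled in a few lines of code. The following Python sketch is a toy model, not any real hypervisor's disk format: writes go to the topmost overlay, reads scan the stack from newest to oldest, and restoring a snapshot discards the overlays above it:

```python
# Toy model of snapshot overlays: each snapshot adds a new overlay;
# writes go to the topmost overlay, reads scan newest-to-oldest.

class SnapshotDisk:
    def __init__(self):
        self.overlays = [{}]               # base layer; maps block number -> data

    def write(self, block, data):
        self.overlays[-1][block] = data    # new data goes to the topmost overlay

    def read(self, block):
        for layer in reversed(self.overlays):  # most recent version wins
            if block in layer:
                return layer[block]
        return None                        # unallocated block

    def snapshot(self):
        self.overlays.append({})           # freeze current state under a new overlay
        return len(self.overlays) - 1      # snapshot id

    def restore(self, snap_id):
        # discard overlays added after the snapshot, then direct
        # all new changes to a fresh overlay
        self.overlays = self.overlays[:snap_id]
        self.overlays.append({})

disk = SnapshotDisk()
disk.write(0, "v1")
snap = disk.snapshot()
disk.write(0, "v2")
disk.restore(snap)
print(disk.read(0))  # -> "v1"
```

Real copy-on-write formats such as qcow2 implement the same idea at the level of disk clusters rather than whole-block dictionaries, but the overlay-scan logic is the same.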

Migration (virtualization)
The snapshots described above can be moved to another host machine with its own hypervisor; when the VM is temporarily stopped, snapshotted, moved, and then resumed on the new host, this is known as migration. If the older snapshots are kept in sync regularly, this operation can be quite fast, and allows the VM to provide uninterrupted service while its prior physical host is, for example, taken down for physical maintenance.
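Keeping the destination "in sync" before the final switch-over is commonly done by iterative pre-copy: memory is copied while the VM keeps running, pages dirtied in the meantime are re-copied, and the VM is paused only for the small final delta. A toy Python sketch of the idea; the ToyVM class and its dirty-page log are illustrative, not any real hypervisor's interface:

```python
# Toy model of pre-copy live migration. A "VM" is a dict of memory pages
# plus a dirty-page log; everything here is illustrative, not a real API.

class ToyVM:
    def __init__(self, pages):
        self.pages = dict(pages)
        self.dirty = set(pages)        # initially, every page must be copied

    def write(self, page, data):       # the guest workload dirties pages
        self.pages[page] = data
        self.dirty.add(page)

def live_migrate(vm, dest, threshold=1):
    # Iteratively copy dirty pages while the VM keeps running; pause only
    # when the remaining delta is small, then transfer the final delta.
    while len(vm.dirty) > threshold:
        batch, vm.dirty = vm.dirty, set()
        for p in batch:
            dest[p] = vm.pages[p]
        vm.write(0, "still-running")   # simulate the guest dirtying a page
    # "pause" the VM: copy the final delta, after which dest takes over
    for p in vm.dirty:
        dest[p] = vm.pages[p]
    vm.dirty.clear()

vm = ToyVM({0: "a", 1: "b", 2: "c"})
dest = {}
live_migrate(vm, dest)
print(dest)   # all pages transferred; the guest resumes on the destination
```

Real hypervisors additionally bound the number of pre-copy rounds and fall back to a plain stop-and-copy when the guest dirties pages faster than they can be transferred.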

Failover
Similar to the migration mechanism described above, failover allows the VM to continue operations if the host fails. Generally, it occurs when migration has stopped working; in this case, the VM continues operation from the last-known coherent state, rather than the current state, based on whatever materials the backup server was last provided with.
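The failover decision itself can be sketched minimally, assuming a heartbeat from the primary host and a callable that resumes the VM from the last coherent snapshot; all names here are hypothetical:

```python
# Toy failover loop: the backup host resumes the VM from the last coherent
# snapshot it received if the primary stops sending heartbeats.
# All names are illustrative; real high-availability stacks differ.

import time

def monitor(primary_alive, last_snapshot, start_vm, timeout=3.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if primary_alive():               # heartbeat received: reset the timer
            deadline = time.monotonic() + timeout
        time.sleep(0.5)
    # primary presumed dead: resume from the last-known coherent state,
    # not from the (lost) current state
    start_vm(last_snapshot())

# Demo: a primary that never answers fails over after ~1 second.
monitor(lambda: False, lambda: "snapshot-42", print, timeout=1.0)
```
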
Nested virtualization
Nested virtualization refers to the ability to run a virtual machine within another, with this general concept extendable to an arbitrary depth. In other words, nested virtualization refers to running one or more hypervisors inside another hypervisor. The nature of a nested guest virtual machine need not be homogeneous with its host virtual machine; for example, application virtualization can be deployed within a virtual machine created by using hardware virtualization.
Nested virtualization becomes more necessary as widespread operating systems gain built-in hypervisor functionality, which in a virtualized environment can be used only if the surrounding hypervisor supports nested virtualization; for example, Windows 7 is capable of running Windows XP applications inside a built-in virtual machine. Furthermore, moving already existing virtualized environments into a cloud, following the Infrastructure as a Service (IaaS) approach, is much more complicated if the destination IaaS platform does not support nested virtualization.
The way nested virtualization can be implemented on a particular computer architecture depends on supported hardware-assisted virtualization capabilities. If a particular architecture does not provide the hardware support required for nested virtualization, various software techniques are employed to enable it. Over time, more architectures gain the required hardware support; for example, since the Haswell microarchitecture (announced in 2013), Intel has included VMCS shadowing as a technology that accelerates nested virtualization.
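Whether the surrounding hypervisor supports nested virtualization is platform-specific. As one hedged example, Linux's KVM exposes this as a module parameter; the sketch below assumes an Intel CPU (the kvm_intel module; AMD systems use kvm_amd instead):

```python
# Sketch: check whether KVM on a Linux host has nested virtualization
# enabled. Assumes Intel hardware (the kvm_intel module); on AMD the
# analogous path uses kvm_amd. Returns False if KVM is not loaded.

def kvm_nested_enabled(vendor="intel"):
    path = f"/sys/module/kvm_{vendor}/parameters/nested"
    try:
        with open(path) as f:
            return f.read().strip() in ("Y", "1")
    except FileNotFoundError:
        return False

print("Nested virtualization enabled:", kvm_nested_enabled())
```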

Desktop virtualization
Desktop virtualization is the concept of separating the logical desktop from the physical machine.
One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device by means of a network connection, such as a LAN, wireless LAN or even the Internet. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users. Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each is given a desktop and a personal folder in which they store their files. With multiseat configuration, session virtualization can be accomplished using a single PC with multiple monitors, keyboards, and mice connected.
As organizations continue to virtualize and converge their data center environment, client architectures also continue to evolve in order to take advantage of the predictability, continuity, and quality of service delivered by their converged infrastructure. For example, companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing. Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data. For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to more quickly respond to the changing needs of the user and business.
Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network. They may lack significant hard disk storage space, RAM or even processing power, but many organizations are beginning to look at the cost benefits of eliminating "thick client" desktops that are packed with software (and require software licensing fees) and making more strategic investments.
Desktop virtualization simplifies software versioning and patch management: the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over what applications the user is allowed to access on the workstation.
Moving virtualized desktops into the cloud creates hosted virtual desktops (HVDs), in which the desktop images are centrally managed and maintained by a specialist hosting firm. Benefits include scalability and the reduction of capital expenditure, which is replaced by a monthly operational cost.

Host-based virtualisation
A host-based virtual machine is an instance of a desktop operating system that runs on a centralized server. Access and control are provided to the user by a client device connected over a network. Multiple host-based virtual machines can run on a single server. With a host-based virtual machine, data is contained on the server, server resources can be allocated to users as needed, users can work from a variety of clients in different locations, and all of the virtual machines can be managed centrally. However, the client device must always be connected to the server in order to access the virtual machine, and when one single server is compromised, many users can be affected.
Host-based virtual machines are conceptually similar to Windows Terminal Server environments, except host-based virtual machines have one virtual machine for each user, whereas Terminal Server has many users sharing the same instance of Windows. Host-based virtual machine is another term for virtual desktop infrastructure (VDI), though usage of the term VDI has grown to include client-based virtual machines as well as host-based virtual machines.

Bare metal virtualisation


A hypervisor, also known as a virtual machine monitor or VMM, is a type of virtualization software that supports the creation and management of virtual machines (VMs) by separating a computer's software from its hardware. Hypervisors translate requests between the physical and virtual resources, making virtualization possible. When a hypervisor is installed directly on the hardware of a physical machine, between the hardware and the operating system (OS), it is called a bare metal hypervisor. Some bare metal hypervisors are embedded into the firmware at the same level as the motherboard basic input/output system (BIOS). This is necessary for some systems to enable the operating system on a computer to access and use virtualization software.
Because the bare metal hypervisor separates the OS from the underlying hardware, the software no longer relies on or is limited to specific hardware devices or drivers. This means bare metal hypervisors allow operating systems and their associated applications to run on a variety of types of hardware. They also allow multiple operating systems and virtual machines (guest machines) to reside on the same physical server (host machine). Because the virtual machines are independent of the physical machine, they can move from machine to machine or platform to platform, shifting workloads and allocating networking, memory, storage, and processing resources across multiple servers according to needs. For example, when an application needs more processing power, it can seamlessly access additional machines through the virtualization software. This results in greater cost and energy efficiency and better performance, using fewer physical machines.
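The "translate requests" role can be pictured as trap-and-emulate: a privileged operation attempted by a guest traps into the hypervisor, which applies it to that guest's virtual copy of the resource rather than to the real hardware. A deliberately simplified Python sketch (no real VMM is this simple, and the operation names are illustrative):

```python
# Toy "trap-and-emulate" dispatcher: privileged guest operations trap to
# the hypervisor, which applies them to that guest's virtual state instead
# of the real hardware. Entirely illustrative.

class Hypervisor:
    def __init__(self):
        self.vms = {}                          # per-guest virtual device state

    def create_vm(self, name):
        self.vms[name] = {"interrupts_enabled": True, "io_ports": {}}

    def trap(self, vm, op, *args):
        # Each privileged op is redirected to the guest's own virtual copy
        # of the resource, keeping guests isolated from the real hardware.
        state = self.vms[vm]
        if op == "cli":                        # guest disables *its* interrupts
            state["interrupts_enabled"] = False
        elif op == "out":                      # guest writes to a virtual I/O port
            port, value = args
            state["io_ports"][port] = value
        else:
            raise ValueError(f"unhandled privileged op: {op}")

hv = Hypervisor()
hv.create_vm("guest-a")
hv.create_vm("guest-b")
hv.trap("guest-a", "cli")                      # affects only guest-a's state
hv.trap("guest-b", "out", 0x3F8, 0x41)
print(hv.vms["guest-a"]["interrupts_enabled"])  # False
print(hv.vms["guest-b"]["io_ports"])            # {1016: 65}
```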

Major Features of Bare Metal


Bare metal servers are dedicated to one client and are never physically shared with more than one customer. If that client chooses to run a virtualized platform on top of it, they create a multitenant environment themselves. Bare metal is often the most streamlined way to command resources. With bare metal, clients can avoid what is known as the "noisy neighbor effect" that is present in the hypervisor environment. These servers can also run equally well in individually owned data centers or co-location facilities held by IT service providers/IaaS providers. A business also has the option to rent a bare metal server easily on a subscription from a managed service provider.
The primary advantage of a bare metal environment is its separation. The system does not need to run inside of any other operating system, yet it still provides all of the necessary services to the virtual environments.

Benefits Of Bare Metal


Without the use of bare metal, tenants receive isolation and security within the traditional hypervisor infrastructure. However, the "noisy neighbor" effect may still exist: if one physical server is overloaded with requests or consumption from one of the tenants on the server, isolation becomes a disadvantage. The bare metal environment completely avoids this situation. Bare metal also gives administrators the option to increase resources through the ability to add new hardware.
Lower overhead costs – Virtualization platforms incur more overhead than bare metal, where no hypervisor layer takes processing power from the server. With less overhead, the responsiveness and the overall speed of a solution will improve. Bare metal also allows for more hardware customization, which can improve speed and responsiveness.
Cost effective for data transfer – Bare metal providers often offer much more cost-effective approaches to outbound data transfer. Dedicated server environments could potentially provide several terabytes of free data transfer. All else being equal, virtualized environments would not be able to match these initial offers. However, these scenarios are dependent upon server offers and partnerships and are never guaranteed.
Flexible deployment – Server configurations can be incredibly precise. Depending on your workload, you may be able to mix bare metal and virtual environments.
QoS – Quality of Service guarantees often work to eliminate the "noisy neighbor" problem in the bare metal environment. This can be considered a financial advantage as well as a technical one: if something goes wrong, a client has someone to hold accountable on paper. However, as with any SLA, this may vary on a case-by-case basis.
Greater security – Organizations that are very security sensitive may worry about falling short of regulatory compliance standards in a hypervisor multitenant environment; this is one of the most common reasons that some companies are reluctant to move to cloud computing. Bare metal servers make it very possible to create an entirely physical separation of resources. Remember, virtualization does not mean less security by default. Security is an incredibly complex and broad topic, and there are many factors involved.

Benefits Of Bare Metal Hypervisors


You may not need the elite performance of a single-tenant bare metal server. Your company may be able to better utilize resources by using a hypervisor. Hypervisors have many benefits, even when compared to the highly efficient and scalable bare-metal solution. Choose a hypervisor when you have a dynamic workload and you do not need absolute cutting-edge performance. Workloads that need to be spun up and run for only a short period before they are turned off are perfect for this environment.

Backup and protection – Virtual machines are much easier to secure than traditional applications. Before a traditional application can be backed up, it must first be paused. This process is very time consuming, and it may cause substantial downtime for the app. A virtual machine's memory space can be captured quickly and easily using a snapshot tool, and the snapshot can then be saved to disk in a matter of moments. Every snapshot that is taken can be recalled, providing recovery and restoration of lost or damaged data to a user on demand.
Improved hardware utilization – A bare metal server may only play host to a single application and operating system. A hypervisor uses much more of the available resources from the network to host multiple VM instances. Each of these instances can run an entirely independent application and operating system on the same physical system.
Improved mobility – The structure of the VM makes it very mobile because it is an independent entity separate from the underlying hardware. A VM can be migrated between any remote or local virtual servers that have enough available resources. This can be done at any point in time with effectively no disruption. This occurs so often that it has a buzzword: live migration. That said, a virtual machine can be moved to the same hypervisor environment on a different underlying infrastructure as long as it can run the hypervisor. Ultimate mobility is achieved with containerization.
Adequate security – Virtual instances created by a hypervisor are isolated logically from each other, even if they are not separated physically. Although they may be on the same physical server, they do not have any fundamental knowledge of each other. If one is attacked or suffers an error, the problem does not move directly to another. Although the noisy neighbor effect may occur, hypervisors are incredibly secure even though they are not physically dedicated to a single client.
