Experiment 3
Aim: To study and implement bare-metal virtualization using Xen.
Objective: To understand the concept of hypervisors, their types, and their functions in Cloud Computing.
Theory:
Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".
In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when it is demanded.
What is the concept behind the Virtualization?
Creating a virtual machine on top of existing operating system and hardware is known as hardware virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the virtual machine itself is referred to as the guest machine.
What is a hypervisor?
A hypervisor, also known as a virtual machine monitor or VMM, is a piece of software that allows us to build and run virtual machines (VMs).
A hypervisor allows a single host computer to support multiple VMs by sharing resources such as memory and processing.
What is the use of a hypervisor?
Hypervisors allow more of a system's available resources to be used and provide greater IT versatility, because the guest VMs are independent of the host hardware; this is one of the major benefits of the hypervisor.
In other words, VMs can be quickly moved between servers. Because a hypervisor allows several virtual machines to operate on a single physical server, it also helps reduce the amount of physical hardware, space, and energy required.
Kinds of hypervisors
There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also known as "hosted"). A Type 1 hypervisor functions as a lightweight operating system that runs directly on the host's hardware, while a Type 2 hypervisor runs as a software layer on top of an operating system, like other computer programs.
Since they are isolated from the attack-prone operating system, bare-metal hypervisors are
extremely stable.
Furthermore, they are usually faster and more powerful than hosted hypervisors. For these reasons, the majority of enterprise businesses opt for bare-metal hypervisors for their data center computing requirements.
Hosted hypervisors, by contrast, run inside the OS and can host additional (and different) operating systems on top of it.
A major disadvantage of hosted hypervisors is their higher latency compared with bare-metal hypervisors, because communication between the hardware and the hypervisor must pass through the extra layer of the OS.
The Type 1 hypervisor
The Type 1 hypervisor is also known as the native or bare-metal hypervisor.
It replaces the host operating system, and the hypervisor schedules VM services directly on the hardware.
The Type 1 hypervisor is commonly used in enterprise data centers and other server-based environments.
Examples include Xen, KVM, Microsoft Hyper-V, and VMware ESXi (vSphere). KVM was merged into the Linux kernel in 2007, so any reasonably recent Linux kernel already includes it.
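The following is a minimal sketch of how a running Type 1 installation can be inspected from its control domain. It assumes a Xen host where the xl toolstack is available, or a XenServer host with the xe CLI; the commands only query state and change nothing.
# xl info (hypervisor version, total and free memory, CPU topology)
# xl list (running domains; Domain-0 is the control domain)
# xe host-list (on XenServer: hosts registered in the pool)
# xe vm-list (on XenServer: VMs known to the pool)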
The Type 2 hypervisor
The Type 2 hypervisor, also known as a hosted hypervisor, is a software layer or framework that runs on a traditional operating system.
It operates by separating the guest and host operating systems: the host operating system schedules VM services, which are then executed on the hardware.
Individual users who wish to run multiple operating systems on a personal computer typically use a Type 2 hypervisor.
Common examples of this type are Oracle VirtualBox and VMware Workstation.
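As a rough illustration of a hosted hypervisor, the commands below create and start a VM with Oracle VirtualBox's VBoxManage tool. This assumes VirtualBox is installed on the host OS; the VM name demo-vm is purely illustrative, and the VM is empty (no disk or installation media attached).
VBoxManage createvm --name demo-vm --register (define and register a new VM)
VBoxManage modifyvm demo-vm --memory 1024 --cpus 1 (give it 1 GB RAM and one virtual CPU)
VBoxManage startvm demo-vm --type headless (boot it without opening a GUI window)
VBoxManage list runningvms (confirm the VM is running on top of the host OS)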
Hardware acceleration technology improves the processing speed of both bare-metal and
hosted hypervisors, allowing them to build and handle virtual resources more quickly.
On a single physical computer, both types of hypervisors can operate multiple virtual servers for multiple tenants. Different businesses rent data space on various virtual servers from public cloud service providers. One server can host multiple virtual servers, each of which runs different workloads for different businesses.
Benefits of hypervisors
Using a hypervisor to host several virtual machines has many advantages:
● Speed: Hypervisors allow virtual machines to be created almost instantly, unlike bare-metal servers. This makes provisioning resources for complex workloads much simpler.
● Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical machine allow for more efficient use of that physical server.
● Flexibility: Because the hypervisor separates the OS from the underlying hardware, the software no longer depends on specific hardware devices or drivers, so bare-metal hypervisors enable operating systems and their related applications to run on a variety of hardware types.
● Portability: Hypervisors allow multiple operating systems to run on the same physical server (host machine). The hypervisor's virtual machines are portable because they are independent of the physical hardware.
As an application requires more computing power, virtualization software allows it to access
additional machines without interruption.
Difference between Type-1 and Type-2 Hypervisors:
A Type-1 (bare-metal) hypervisor runs directly on the host hardware, is typically used in enterprise data centers, and has lower latency; examples are Xen, KVM, Microsoft Hyper-V, and VMware ESXi. A Type-2 (hosted) hypervisor runs on top of a host operating system, is typically used on personal computers, and has higher latency because calls pass through the host OS; examples are Oracle VirtualBox and VMware Workstation.
What is Cloud Scaling?
In cloud computing, scaling is the process of adding or removing compute, storage, and
network services to meet the demands a workload makes for resources in order to maintain
availability and performance as utilization increases. Scaling generally refers to adding or
reducing the number of active servers (instances) being leveraged against your workload’s
resource demands. Scaling up and scaling out refer to two dimensions across which
resources—and therefore, capacity—can be added.
What Factors Impact Cloud Resource Demands?
The demands of your cloud workloads for computational resources are usually determined
by:
● The length of time jobs have waited in the server queue (back-end, time-based)
Scale Up (Vertical Scaling)
Scaling up means resizing a workload onto a single larger server with more CPU and memory.
Benefits of Scaling Up
● Vertical scaling minimizes operational overhead because there is only one server to manage. There is no need to distribute the workload and coordinate among multiple servers.
● Vertical scaling is best used for applications that are difficult to distribute. For example, when a relational database is distributed, the system must accommodate transactions that can change data across multiple servers. Major relational databases can be configured to run on multiple servers, but it is often easier to scale vertically.
Vertical Scaling Limitations
● There are upper boundaries on the amount of memory and CPU that can be allocated to a single instance, and there are connectivity ceilings for each underlying physical host.
● Even if an instance has sufficient CPU and memory, some of those resources may sit idle at times, and you will continue to pay for those unused resources. A command-line sketch of scaling up a single VM is shown after this list.
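As a rough sketch of scaling up on a XenServer host such as the one used in this experiment, the commands below resize a single VM by giving it more vCPUs and memory. The VM name web01 is hypothetical, <uuid> stands for the UUID returned by the first command, and the parameter names assume XenServer's xe CLI.
# xe vm-list name-label=web01 params=uuid (look up the VM's UUID)
# xe vm-shutdown uuid=<uuid> (halt the VM before resizing it)
# xe vm-param-set uuid=<uuid> VCPUs-max=4 (raise the vCPU ceiling)
# xe vm-param-set uuid=<uuid> VCPUs-at-startup=4 (boot with 4 vCPUs)
# xe vm-memory-limits-set uuid=<uuid> static-min=1GiB dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB (scale memory up to 4 GB)
# xe vm-start uuid=<uuid> (boot the resized VM)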
Scale Out (Horizontal Scaling)
Instead of resizing an application to a bigger server, scaling out splits the workload across
multiple servers that work in parallel.
Benefits of Scaling Out
● Applications that can run within a single machine, like many websites, are well suited to horizontal scaling because there is little need to coordinate tasks between servers. For example, a retail website might have peak periods, such as around the end-of-year holidays. During those times, additional servers can easily be committed to handle the additional traffic.
● Many front-end applications and microservices can leverage horizontal scaling. Horizontally scaled applications can adjust the number of servers in use according to the workload demand patterns. A sketch of scaling out a VM on XenServer is shown after this list.
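The following is a minimal sketch of scaling out on the same XenServer host: instead of resizing web01, an additional copy is brought online so the two instances can share the load. The names web01 and web02 are hypothetical, the source VM is assumed to be shut down before cloning, and distributing traffic across the copies (for example with a load balancer) is outside the scope of this sketch.
# xe vm-clone uuid=<uuid-of-web01> new-name-label=web02 (create a second instance from the first)
# xe vm-start vm=web02 (bring the new instance online alongside web01)
# xe vm-list params=name-label,power-state (confirm both instances are running)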
Horizontal Scaling Limitations
10. The motherboard we are installing on has two Ethernet ports, both of which are supported by XenServer. Choose the one you wish to use for the management network; you can change this later. Here we choose the networking settings for our management network.
11. Select the time zone.
12. Specify the NTP server address to set your time.
13. Press Enter to start the installation of XenServer.
14. This is the loading screen and console of the newly installed XenServer.
Step 2: Connect XenCenter to XenServer
1. Download XenCenter, the management utility, by entering the XenServer IP address as a URL in a browser. Install XenCenter and open it from the Windows Start menu on Machine 2.
4. Enter the IP address of the XenServer, enter the user login credentials, and click Add.
5. Once you click Add, it will ask you to configure a master password for all the XenServers.
a) # mkdir /mnt/myusb
b) # ls -l /dev/sdb1 (check the device node for the removable disk)
c) # mount -t vfat -o rw,users /dev/sdb1 /mnt/myusb
d) # cd /mnt/myusb
e) # ls (list the contents of the pen drive)
f) # cp ubuntu-16.04.5-desktop-i386.iso /var/ISO_images
g) Reboot or shut down XenServer from XenCenter or from the console of XenServer (see the sketch after this list).
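As an optional follow-up, the ISO copied to /var/ISO_images can be made visible in XenCenter by publishing that directory as a local ISO storage repository. This is a sketch assuming XenServer's built-in iso SR type; the name label is arbitrary.
# xe sr-create name-label="Local ISO library" type=iso content-type=iso device-config:location=/var/ISO_images device-config:legacy_mode=true (publish the directory as an ISO SR)
# xe sr-list content-type=iso (verify the new SR; the ISO should now be selectable when creating a VM in XenCenter)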