
Akshay Jagiasi C14 62

Experiment 3
Aim: To study and implement bare-metal virtualization using Xen.
Objective: To understand the concept of hypervisors, their types, and their functions in Cloud
Computing.
Theory:
Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".
In other words, virtualization is a technique that allows a single physical instance of a
resource or an application to be shared among multiple customers and organizations. It does
this by assigning a logical name to a physical resource and providing a pointer to that
physical resource when it is demanded.
What is the concept behind Virtualization?
The creation of a virtual machine over existing operating system and hardware is known as
Hardware Virtualization. A virtual machine provides an environment that is logically
separated from the underlying hardware.
The machine on which the virtual machine is created is known as the Host Machine, and the
virtual machine itself is referred to as the Guest Machine.
What is a hypervisor?
A hypervisor, also known as a virtual machine monitor or VMM, is a piece of software that
allows us to build and run virtual machines (VMs).
A hypervisor allows a single host computer to support multiple virtual machines by sharing
resources such as memory and processing.
What is the use of a hypervisor?
Hypervisors make more of a system's available resources usable and provide greater IT
versatility, because the guest VMs are independent of the host hardware; this is one of the
major benefits of the hypervisor.
In other words, VMs can be quickly moved between servers. Because a hypervisor allows
several virtual machines to operate on a single physical server, it helps us to reduce:

● Space requirements

● Energy use

● Maintenance requirements of the server

Kinds of hypervisors
There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2"
(also known as "hosted"). A Type 1 hypervisor functions as a light operating system that
runs directly on the host's hardware, while a Type 2 hypervisor runs as a software layer on
top of an operating system, like other computer programs.
Since they are isolated from the attack-prone operating system, bare-metal hypervisors are
extremely stable.
Furthermore, they are usually faster and more powerful than hosted hypervisors. For these
reasons, the majority of enterprise businesses opt for bare-metal hypervisors for their data
center computing requirements.
Hosted hypervisors, in contrast, run inside an OS and can host additional (and different)
operating systems on top of it.
Hosted hypervisors have higher latency than bare-metal hypervisors, which is a major
disadvantage. This is because communication between the hardware and the hypervisor must
pass through the OS's extra layer.
The Type 1 hypervisor
The Type 1 hypervisor is also known as a native or bare-metal hypervisor.
It replaces the host operating system, and the hypervisor schedules VM services directly onto
the hardware.
The Type 1 hypervisor is commonly used in enterprise data centers and other server-based
environments.
Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM has been
integrated into the Linux kernel since 2007, so any reasonably recent Linux kernel already
includes it.
The Type 2 hypervisor
Also known as a hosted hypervisor, the Type 2 hypervisor is a software layer or framework
that runs on a traditional operating system.
It operates by separating the guest and host operating systems. The host operating system
schedules VM services, which are then executed on the hardware.
Individual users who wish to operate multiple operating systems on a personal computer
should use a Type 2 hypervisor.
Common examples of this type of hypervisor include VMware Workstation and Oracle
VirtualBox.
Hardware acceleration technology improves the processing speed of both bare-metal and
hosted hypervisors, allowing them to create and manage virtual resources more quickly.
On a single physical computer, both types of hypervisors can operate multiple virtual servers
for multiple tenants. Different businesses rent data space on various virtual servers from
public cloud service providers. One server can host multiple virtual servers, each of which is
running different workloads for different businesses.
Benefits of hypervisors
Using a hypervisor to host several virtual machines has many advantages:

● Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal
servers. This makes provisioning resources for complex workloads much simpler.
● Efficiency: Hypervisors that run multiple virtual machines on the resources of a single
physical machine allow for more effective use of that physical server.
● Flexibility: Since the hypervisor separates the OS from the underlying hardware,
programs no longer rely on particular hardware devices or drivers; bare-metal
hypervisors enable operating systems and their related applications to run on a
variety of hardware types.
● Portability: Hypervisors allow multiple operating systems to run on the same physical
server (host machine). The hypervisor's virtual machines are portable because they
are independent of the physical computer.
As an application requires more computing power, virtualization software allows it to access
additional machines without interruption.
Difference between Type-1 and Type-2 Hypervisors (summarized from the sections above):

● Placement: Type-1 runs directly on the host hardware; Type-2 runs on top of a host
operating system.
● Performance: Type-1 has lower latency and is generally faster; Type-2 incurs the
overhead of the extra OS layer.
● Stability and security: Type-1 is isolated from the attack-prone OS; Type-2 depends
on the host OS.
● Typical use: Type-1 suits enterprise data centers and server environments; Type-2
suits individual users running multiple operating systems on a personal computer.
What is Cloud Scaling?
In cloud computing, scaling is the process of adding or removing compute, storage, and
network services to meet the demands a workload makes for resources in order to maintain
availability and performance as utilization increases. Scaling generally refers to adding or
reducing the number of active servers (instances) being leveraged against your workload’s
resource demands. Scaling up and scaling out refer to two dimensions across which
resources—and therefore, capacity—can be added.
What Factors Impact Cloud Resource Demands?
The demands of your cloud workloads for computational resources are usually determined
by:

● The number of incoming requests (front-end traffic)

● The number of jobs in the server queue (back-end, load-based)

● The length of time jobs have waited in the server queue (back-end, time-based)
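As a rough sketch, the three demand signals above can be combined into a single scale-out check. The threshold values and the function name below are illustrative assumptions, not figures from any cloud provider:

```python
def needs_more_capacity(incoming_rps, queue_depth, max_wait_s,
                        rps_limit=500, queue_limit=100, wait_limit=5.0):
    """Return True if any demand signal exceeds its (assumed) threshold.

    incoming_rps : front-end traffic, in requests per second
    queue_depth  : back-end, load-based signal (jobs waiting in the queue)
    max_wait_s   : back-end, time-based signal (oldest job's wait, seconds)
    """
    return (incoming_rps > rps_limit
            or queue_depth > queue_limit
            or max_wait_s > wait_limit)

print(needs_more_capacity(200, 30, 2.0))   # False: all signals below threshold
print(needs_more_capacity(200, 30, 9.5))   # True: time-based signal trips
```

A real controller would smooth these signals over a time window before acting, so that a momentary spike does not trigger scaling.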

Scaling Up & Scaling Out


Scaling up refers to making an infrastructure component more powerful—larger or faster—so
it can handle more load, while scaling out means spreading a load out by adding additional
components in parallel.
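The two dimensions can be contrasted with a little capacity arithmetic. The per-server throughput and peak-demand figures below are made-up illustrative numbers, not benchmarks:

```python
import math

# Illustrative assumptions: one baseline server handles 300 requests/second;
# the workload peaks at 1000 requests/second.
per_server_capacity = 300
peak_demand = 1000

# Scaling out: add baseline servers in parallel until capacity covers demand.
servers_needed = math.ceil(peak_demand / per_server_capacity)

# Scaling up: keep one server but make it bigger; its capacity must grow by
# the same factor on a single box.
scale_up_factor = peak_demand / per_server_capacity

print(servers_needed)              # 4 parallel baseline servers
print(round(scale_up_factor, 2))   # or one server about 3.33x as powerful
```

The arithmetic is trivial, but it shows the trade-off: scale-out adds coordination across four machines, while scale-up demands a single machine several times more powerful, which eventually hits a hardware ceiling.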
Scale Up (Vertical Scaling)
Scaling up is the process of resizing a server (or replacing it with another server) to give
it more or fewer CPUs, memory, or network capacity.
Benefits of Scaling Up

● Vertical scaling minimizes operational overhead because there is only one server to
manage. There is no need to distribute the workload and coordinate among multiple
servers.
● Vertical scaling is best used for applications that are difficult to distribute. For
example, when a relational database is distributed, the system must accommodate
transactions that can change data across multiple servers. Major relational databases
can be configured to run on multiple servers, but it’s often easier to vertically scale.
Vertical Scaling Limitations

● There are upper boundaries for the amount of memory and CPU that can be allocated
to a single instance, and there are connectivity ceilings for each underlying physical
host.
● Even if an instance has sufficient CPU and memory, some of those resources may sit
idle at times, and you will continue to pay for those unused resources.
Scale Out (Horizontal Scaling)
Instead of moving an application to a bigger server, scaling out splits the workload across
multiple servers that work in parallel.
Benefits of Scaling Out

● Applications that can sit within a single machine—like many websites—are well-
suited to horizontal scaling because there is little need to coordinate tasks between
servers. For example, a retail website might have peak periods, such as around the
end-of-year holidays. During those times, additional servers can be easily committed
to handle the additional traffic.
● Many front-end applications and microservices can leverage horizontal scaling.
Horizontally-scaled applications can adjust the number of servers in use according to
the workload demand patterns.
Horizontal Scaling Limitations

● The main limitation of horizontal scaling is that it often requires the application to be
architected with scale-out in mind in order to support the distribution of workloads
across multiple servers.
What is Cloud Autoscaling?
Autoscaling (sometimes spelled auto scaling or auto-scaling) is the process of automatically
increasing or decreasing the computational resources delivered to a cloud workload based on
need. The primary benefit of autoscaling, when configured and managed properly, is that
your workload gets exactly the cloud computational resources it requires (and no more or
less) at any given time. You pay only for the server resources you need, when you need
them.
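The autoscaling idea above can be sketched in a few lines of Python. The utilization thresholds, instance bounds, and function name are illustrative assumptions, not any provider's defaults:

```python
def autoscale(instances, cpu_utilization, scale_out_at=0.80, scale_in_at=0.30,
              min_instances=1, max_instances=10):
    """One evaluation step of a simple threshold-based autoscaler.

    Between the two thresholds the fleet is left alone; this gap (hysteresis)
    prevents the controller from flapping between adding and removing servers.
    """
    if cpu_utilization > scale_out_at and instances < max_instances:
        return instances + 1   # add a server to absorb the load
    if cpu_utilization < scale_in_at and instances > min_instances:
        return instances - 1   # remove a server to stop paying for idle capacity
    return instances           # within the comfortable band: do nothing

print(autoscale(3, 0.92))  # 4 (scale out)
print(autoscale(3, 0.10))  # 2 (scale in)
print(autoscale(3, 0.50))  # 3 (no change)
```

In a real deployment this step would run periodically against averaged metrics, and the min/max bounds would cap both cost and capacity.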
Load balancing in Cloud Computing
Cloud load balancing is defined as the method of splitting workloads and computing
resources in a cloud computing environment. It enables enterprises to manage workload or
application demands by distributing resources among numerous computers, networks, or
servers. Cloud load balancing includes handling the circulation of workload traffic and
demands that exist over the Internet.
Traffic on the Internet is growing rapidly, at roughly 100% of present traffic annually.
Hence, the workload on servers is growing quickly, which leads to overloading, mainly of
popular web servers. There are two elementary solutions to overcome the problem of server
overloading:
● First is a single-server solution, in which the server is upgraded to a
higher-performance server. However, the new server may also be overloaded
soon, demanding another upgrade. Moreover, the upgrading process is arduous
and expensive.
● Second is a multiple-server solution, in which a scalable service system is built on a
cluster of servers. It is more cost-effective, as well as more scalable, to build a server
cluster system for network services.
Load balancing is beneficial with almost any type of service, such as HTTP, SMTP, DNS,
FTP, and POP/IMAP. It also raises reliability through redundancy. The balancing service is
provided by a dedicated hardware device or program. Cloud-based server farms can attain
more precise scalability and availability using server load balancing.
Load balancing solutions can be categorized into two types –

● Software-based load balancers: Software-based load balancers run on standard
hardware (desktops, PCs) and standard operating systems.
● Hardware-based load balancers: Hardware-based load balancers are dedicated boxes
which include Application-Specific Integrated Circuits (ASICs) adapted for a
particular use. ASICs allow high-speed forwarding of network traffic and are
frequently used for transport-level load balancing, because hardware-based load
balancing is faster than software solutions.

Major Examples of Load Balancers –

● Direct Routing Request Dispatching Technique: This approach to request
dispatching is similar to the one implemented in IBM’s Net Dispatcher. A real server
and the load balancer share the virtual IP address. The load balancer takes an
interface configured with the virtual IP address, accepts request packets, and directly
routes the packets to the selected servers.
● Dispatcher-Based Load Balancing Cluster: A dispatcher performs smart load
balancing by using server availability, workload, capability, and other user-defined
criteria to determine where to send a TCP/IP request. The dispatcher module of a
load balancer can split HTTP requests among various nodes in a cluster. The
dispatcher splits the load among many servers in a cluster, so the services of various
nodes appear as a single virtual service on a single IP address; consumers interact
with it as if it were a single server, without any information about the back-end
infrastructure.
● Linux Virtual Server: It is an open-source, enhanced load balancing solution used to
build highly scalable and highly available network services such as HTTP, POP3,
FTP, SMTP, media and caching, and Voice over Internet Protocol (VoIP). It is a
simple and powerful product made for load balancing and fail-over. The load
balancer itself is the primary entry point of server cluster systems and can run IP
Virtual Server (IPVS), which implements transport-layer load balancing in the Linux
kernel, also known as Layer-4 switching.
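The dispatcher idea described above can be sketched with a minimal round-robin picker in Python. This is a toy illustration, not IPVS or Net Dispatcher code; the class name and server addresses are made up for the example:

```python
import itertools

class RoundRobinDispatcher:
    """Minimal sketch of the dispatcher idea: clients see one virtual service
    while requests are spread across back-end nodes in turn. Real dispatchers
    such as IPVS also weigh server load, capability, and health."""

    def __init__(self, servers):
        # itertools.cycle yields the server list endlessly, in order
        self._cycle = itertools.cycle(servers)

    def pick_server(self):
        return next(self._cycle)

lb = RoundRobinDispatcher(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.pick_server() for _ in range(5)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```

Round-robin is the simplest policy; swapping in least-connections or weighted selection only changes the `pick_server` logic, while clients still see a single entry point.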
Output:
1. Install XenServer: Insert the bootable CD into the CD-ROM drive (or use a bootable
pen drive) and make it the first boot device in the BIOS.

2. Press F2 for advanced options and enable Virtualization Technology in the BIOS.

3. Save and reboot.

4. We begin by choosing the keymap, i.e., the keyboard layout.

5. Press Enter to load the device drivers.

6. Press Enter to accept the End User License Agreement.

7. Select the appropriate disk on which you want to install XenServer.

8. Select the appropriate installation media.

9. Press Yes to select additional packages for installation; otherwise press No.

10. The motherboard we are installing on has two Ethernet ports, both of which are
supported by XenServer. Choose the one you wish to use for the management network
– you can change this later. Here we choose the networking settings for our
management network.
11. Select the time zone.

12. Specify the NTP server address to set the time.
13. Press Enter to start the installation of XenServer.
14. This is the loading screen and console of the newly installed XenServer.
Step 2: Connect XenCenter to XenServer
1. Download XenCenter, a management utility, by opening the XenServer IP address as a
URL in a browser. Install XenCenter and open it from the Windows Start menu on
Machine 2.

2. Once you open Citrix XenCenter, it looks as below:


3. To connect to the XenServer host configured earlier, click Add a Server.

4. Enter the IP address of the XenServer, enter the user login credentials, and click Add.
5. Once you have clicked Add, it will ask you to configure a master password for all
the XenServers.

6. XenServer is now added to XenCenter.


Step 3: Create Local ISO Storage
Before creating a virtual machine, we have to create a storage repository, which is simply a
shared directory on the XenServer host that holds all the ISO files required to install an
operating system on XenServer.
Using the local command shell:
i. First view the Xen directory structure: # df -h
ii. Make a folder to store ISO images: # mkdir /var/ISO_images
iii. Download ISO files using the wget command, for example:
# wget http://releases.ubuntu.com/16.04/ubuntu-16.04.6-desktop-amd64.iso
OR
USING A PEN DRIVE (the pen drive needs to be mounted)

a) # mkdir /mnt/myusb
b) # ls -l /dev/sdb1 (to check the device node for the removable disk)
c) # mount -t vfat -o rw,users /dev/sdb1 /mnt/myusb
d) # cd /mnt/myusb
e) # ls (list the contents of the pen drive)
f) # cp ubuntu-16.04.5-desktop-i386.iso /var/ISO_images
g) Reboot or shut down XenServer from XenCenter or from the XenServer console.

Step 4: Installation of a Virtual Machine from XenCenter


i. Right-click the XenServer icon in XenCenter and select New VM.
ii. Select a VM template.

iii. Name the virtual machine.


iv. Locate the operating system installation media to select the appropriate OS ISO file.
v. Allocate CPU and memory to the VM.
vi. Select networking.
vii. Finish.

Conclusion: Thus, we have successfully created a virtual machine on XenServer using the
XenCenter tool.
