
System Administration and Maintenance

Level 4 (IT)

Prepared By:
Eng. Rasha A. Al-Arasi
Lecture 1:
- Virtualization
- Nano Server
Virtualization
Virtualization is technology that lets you create useful IT services using
resources that are traditionally bound to hardware. It allows you to use a
physical machine’s full capacity by distributing its capabilities among
many users or environments.
In more practical terms, imagine you have 3 physical servers with
individual dedicated purposes. One is a mail server, another is a web
server, and the last one runs internal legacy applications. Each server is
being used at about 30% capacity—just a fraction of their running
potential. But since the legacy apps remain important to your internal
operations, you have to keep them and the third server that hosts them,
right?
Virtualization
Traditionally, yes. It was often easier and more reliable to run
individual tasks on individual servers: 1 server, 1 operating system, 1
task. It wasn’t easy to give 1 server multiple brains. But with
virtualization, you can split the mail server into 2 unique ones that can
handle independent tasks so the legacy apps can be migrated. It’s the
same hardware, you’re just using more of it more efficiently.

Keeping security in mind, you could split the first server again
so it could handle another task—increasing its use from 30%,
to 60%, to 90%. Once you do that, the now empty servers
could be reused for other tasks or retired altogether to reduce
cooling and maintenance costs.
History Of Virtualization
While virtualization technology can be traced back to the 1960s, it
wasn’t widely adopted until the early 2000s. The technologies that
enabled virtualization—like hypervisors—were developed decades ago
to give multiple users simultaneous access to computers that performed
batch processing. Batch processing was a popular computing style in
the business sector that ran routine tasks thousands of times very
quickly (like payroll).
But, over the next few decades, other solutions to the many users/single
machine problem grew in popularity while virtualization didn’t. One of
those other solutions was time-sharing, which isolated users within
operating systems—inadvertently leading to other operating systems
like UNIX, which eventually gave way to Linux®. All the while,
virtualization remained a largely unadopted, niche technology.
History Of Virtualization
Fast forward to the 1990s. Most enterprises had physical servers and
single-vendor IT stacks, which didn’t allow legacy apps to run on a
different vendor’s hardware. As companies updated their IT
environments with less-expensive commodity servers, operating
systems, and applications from a variety of vendors, they were bound to
underused physical hardware—each server could only run 1 vendor-
specific task.
This is where virtualization really took off. It was the natural solution to
2 problems: companies could partition their servers and run legacy apps
on multiple operating system types and versions. Servers started being
used more efficiently (or not at all), thereby reducing the costs
associated with purchase, set up, cooling, and maintenance.
Virtualization’s widespread applicability helped reduce vendor lock-in
and made it the foundation of cloud computing. It’s so prevalent across
enterprises today that specialized virtualization management software is
often needed to help keep track of it all.
Introduction
What is Virtualization?
Virtualization is a broad umbrella of technologies and concepts that are meant to provide an abstract environment (virtual hardware or an operating system) in which to run applications.
It is the "creation of a virtual (rather than actual) version of something, such as a server, desktop, a storage device, an operating system or network resources".
The idea is to separate the hardware from the software to yield better system efficiency.
How does virtualization work?
• Software called hypervisors separate the physical resources
from the virtual environments—the things that need those
resources. Hypervisors can sit on top of an operating system
(like on a laptop) or be installed directly onto hardware (like a
server), which is how most enterprises virtualize.
• Hypervisors take your physical resources and divide them up
so that virtual environments can use them.

• Resources are partitioned as needed from the physical


environment to the many virtual environments. Users interact
with and run computations within the virtual environment
(typically called a guest machine or virtual machine).
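As a rough illustration of this guest/hypervisor relationship, the sketch below (assuming a Linux host running the libvirt daemon, with the libvirt-python bindings installed) connects to a local KVM/QEMU hypervisor and lists the guest machines it currently manages; each entry corresponds to one virtual environment carved out of the host's physical resources.

```python
# Minimal sketch: enumerate the guests a local hypervisor is managing.
# Assumes a Linux host with libvirtd running and the libvirt-python
# package installed (pip install libvirt-python).
import libvirt

# Read-only connection to the system-level QEMU/KVM hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
    print(f"{dom.name()}: {running}")

conn.close()
```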
How does virtualization work?
• The virtual machine functions as a single data file. And like any digital file,
it can be moved from one computer to another, opened in either one,
and be expected to work the same.
• When the virtual environment is running and a user or program issues an instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes—which all happens at close to native speed (particularly if the request is sent through an open source hypervisor based on KVM, the Kernel-based Virtual Machine).
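As a small, Linux-specific illustration of the KVM point, the sketch below checks whether the CPU advertises hardware virtualization extensions and whether the /dev/kvm device node that KVM-based hypervisors rely on is present.

```python
# Minimal sketch: check whether this Linux host can run KVM guests at
# close to native speed, i.e. the CPU advertises hardware virtualization
# extensions (Intel VT-x "vmx" or AMD-V "svm") and /dev/kvm exists.
import os

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

has_extensions = "vmx" in cpuinfo or "svm" in cpuinfo
has_kvm_device = os.path.exists("/dev/kvm")

print("CPU virtualization extensions:", "yes" if has_extensions else "no")
print("KVM device node (/dev/kvm):", "yes" if has_kvm_device else "no")
```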
Types of virtualization
Data virtualization
• Data that’s spread all over can be consolidated into a single
source. Data virtualization allows companies to treat data as a
dynamic supply—providing processing capabilities that can
bring together data from multiple sources,
easily accommodate new data sources,
and transform data according to user needs.
Data virtualization tools sit in front of multiple data sources and allow them to be treated as a single source, delivering the needed data—in the required form—at the right time to any application or user.
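To make the idea concrete, here is a minimal sketch of the pattern (not any particular data virtualization product): two separate sources, a CSV export and a SQLite database, are presented to the consumer as one combined view. The file names and column names are hypothetical.

```python
# Minimal sketch of the data virtualization pattern: two separate sources
# (a CSV export and a SQLite database) are presented as one combined view.
# File names and column names are hypothetical.
import sqlite3
import pandas as pd

# Source 1: a flat-file export from one system.
csv_customers = pd.read_csv("crm_export.csv")  # columns: id, name, email

# Source 2: a relational database from another system.
with sqlite3.connect("billing.db") as conn:
    db_customers = pd.read_sql_query("SELECT id, name, email FROM customers", conn)

# The single "virtual" source that applications actually query.
unified = pd.concat([csv_customers, db_customers], ignore_index=True)
unified = unified.drop_duplicates(subset="id")
print(unified.head())
```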
Desktop virtualization
• Easily confused with operating system virtualization—which
allows you to deploy multiple operating systems on a single
machine—desktop virtualization allows a central administrator
(or automated administration tool) to deploy simulated
desktop environments to hundreds of physical
machines at once. Unlike traditional
desktop environments that are physically
installed, configured, and updated
on each machine, desktop virtualization
allows admins to perform mass configurations,
updates, and security checks on all virtual desktops.
Server virtualization
• Servers are computers designed to process a high volume of
specific tasks really well so other computers—like laptops and
desktops—can do a variety of other tasks.
Virtualizing a server lets it do more of those specific functions and involves partitioning it so that the components can be used to serve multiple functions.
Operating system virtualization
• Operating system virtualization happens at the kernel—the central task manager of an operating system. It’s a useful way to run Linux and Windows environments side-by-side.
Enterprises can also push virtual operating systems to
computers, which:
• Reduces bulk hardware costs, since
the computers don’t require such
high out-of-the-box capabilities.
• Increases security, since all virtual
instances can be monitored and isolated.
• Limits time spent on IT services like
software updates.
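Containers are a common kernel-level form of this idea. The minimal sketch below (assuming Docker Engine is running and the docker Python SDK is installed) starts an isolated instance and then lists what the host is monitoring; it illustrates the isolation and monitoring points above rather than any specific enterprise deployment.

```python
# Minimal sketch: run an isolated, kernel-level instance with Docker.
# Assumes Docker Engine is running and the Python SDK is installed
# (pip install docker). The image tag is just an example.
import docker

client = docker.from_env()

# Each container shares the host kernel but gets its own isolated
# user space; this one prints its hostname and is removed afterwards.
output = client.containers.run("alpine:3.19", "hostname", remove=True)
print(output.decode().strip())

# The host can monitor every instance it is currently running.
for container in client.containers.list():
    print(container.name, container.status)
```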
Network functions virtualization
• Network functions virtualization (NFV) separates a network's key
functions (like directory services, file sharing, and IP
configuration) so they can be distributed among environments.
Once software functions are independent of the physical machines
they once lived on,
specific functions can be packaged
together into a new network and assigned
to an environment. Virtualizing networks
reduces the number of physical components—
like switches, routers, servers, cables,
and hubs—that are needed to create multiple, independent networks,
and it’s particularly popular in the telecommunications industry.
The architecture of a computer system

The main function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used exclusively by the VMs.
Data Center Physical Infrastructure
Data Center Virtual Infrastructure
Virtualization Increases Hardware Utilization
Key Properties of Virtual Machines
• Run multiple operating systems on one physical machine.
• Divide system resources between virtual machines.
Virtualized Environment
In a virtualized environment there are three major components:
• Guest: represents the system component that interacts with the virtualization layer rather than with the host.
• Host: represents the original environment where the guest is supposed to be managed.
• Virtualization layer: responsible for recreating the same or a different environment where the guest will operate.
Virtualization Reference Model
Hypervisor
A hypervisor, also called a virtual machine manager, is a program that
allows multiple operating systems to share a single hardware host.
Each operating system appears to have the host's processor, memory,
and other resources all to itself. However, the hypervisor is actually
controlling the host processor and resources, allocating what is needed
to each operating system in turn and making sure that the guest
operating systems (called virtual machines) cannot disrupt each other.
It is implemented as a layer between the real hardware and a traditional OS, and it manages the HW resources of a computing system (acting as a traditional OS would).
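A minimal sketch of this allocation role, again assuming a libvirt-managed KVM host with the libvirt-python bindings installed, asks the hypervisor what it has handed out to each guest:

```python
# Minimal sketch: ask a libvirt-managed hypervisor what it has allocated
# to each guest (vCPUs and memory), illustrating how the VMM hands out
# slices of the host's resources. Assumes libvirt-python on a KVM host.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")

# getInfo()[1] is the host's total memory in MiB.
host_mem_mib = conn.getInfo()[1]

for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), "
          f"{mem_kib // 1024} MiB of {host_mem_mib} MiB host memory")

conn.close()
```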
VMM Design Requirements and Providers
VMM Requirements:
• A VMM should provide an environment for programs that is essentially identical to the original machine.
• Programs run in this environment should show, at worst, only minor decreases in speed.
• The VMM should be in complete control of the system resources (any program run under a VMM should exhibit behavior identical to what it exhibits when run directly on the original machine).
VMM Resource Control
• The VMM is responsible for allocating hardware resources to programs.
• It is not possible for a program to access any resource not explicitly allocated to it.
• It is possible, under certain circumstances, for the VMM to regain control of resources already allocated, as sketched below.
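As a sketch of that last point, reclaiming memory previously granted to a guest, the example below uses libvirt's memory balloon call; the guest name "web01" is hypothetical, and a read-write, privileged connection is assumed.

```python
# Minimal sketch: the VMM reclaiming memory it previously granted to a
# guest, using libvirt's balloon mechanism. The guest name "web01" is
# hypothetical; a read-write connection and sufficient privileges are
# assumed.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")

# Shrink the guest's current allocation to 1 GiB (value is in KiB);
# the guest cannot use memory the VMM has not granted it.
dom.setMemory(1024 * 1024)

conn.close()
```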
Types of Hypervisors
Type 1, or bare-metal, hypervisor
Sits directly on the bare computer hardware (CPU, memory, etc.).
All guest OSes are a layer above the hypervisor, so the hypervisor is the first layer over the hardware.
Examples are Microsoft Hyper-V and VMware ESXi.
Type 2, or hosted, hypervisor
Does not run on the bare-metal hardware but on top of a host operating system.
The hypervisor is the second layer over the hardware; the guest OSes run a layer above the hypervisor and so form the third layer.
Examples are Oracle VirtualBox and VMware Workstation.
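Whichever type is in use, a guest can often tell which hypervisor it is running on. A minimal sketch for a Linux guest reads the DMI strings the hypervisor exposes; the vendor strings mentioned in the comments are typical values, not guarantees.

```python
# Minimal sketch: from inside a Linux guest, read the DMI strings the
# hypervisor exposes. Typical values are "Microsoft Corporation" under
# Hyper-V, "VMware, Inc." under ESXi/Workstation, and "QEMU" under KVM;
# on bare metal these show the real manufacturer instead.
from pathlib import Path

def dmi(field: str) -> str:
    path = Path("/sys/class/dmi/id") / field
    return path.read_text().strip() if path.exists() else "unknown"

print("System vendor:", dmi("sys_vendor"))
print("Product name :", dmi("product_name"))
```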
Types of Hypervisors
Major VMM and Hypervisor Providers
What is Hyper-V?
• Hyper-V is the hardware virtualization role in Windows
Server 2016.
• The hypervisor controls access to hardware.
• Hardware drivers are installed in the host operating
system.
• Many guest operating systems are supported:
– Windows Server 2008 SP2 or newer
– Windows Vista SP2 or newer
– Linux
– FreeBSD
• Hyper-V is covered in detail in the next Lecture.
What’s new since Windows Server 2012 was released?
New features and improvements introduced in Windows Server 2016:
• Nano Server
• Containers
• Docker support
• Rolling upgrades for Hyper-V and storage clusters
• Hot add/remove virtual memory & network adapters
• Nested virtualization
• PowerShell Direct
• Shielded virtual machines
• Windows Defender
• Storage Spaces Direct
• Storage Replica
• Remote Desktop Services
• Microsoft Passport
• Azure AD Join support
• Privileged Access Management
