Lecture 1: Virtualization
Level 4 (IT)
Prepared By:
Eng. Rasha A. Al-Arasi
Lecture 1:
- Virtualization
- Nano server
Virtualization
Virtualization is technology that lets you create useful IT services using
resources that are traditionally bound to hardware. It allows you to use a
physical machine’s full capacity by distributing its capabilities among
many users or environments.
In more practical terms, imagine you have 3 physical servers with
individual dedicated purposes. One is a mail server, another is a web
server, and the last one runs internal legacy applications. Each server is
being used at about 30% capacity—just a fraction of their running
potential. But since the legacy apps remain important to your internal
operations, you have to keep them and the third server that hosts them,
right?
Traditionally, yes. It was often easier and more reliable to run
individual tasks on individual servers: 1 server, 1 operating system, 1
task. It wasn’t easy to give 1 server multiple brains. But with
virtualization, you can split the mail server into 2 unique ones that can
handle independent tasks so the legacy apps can be migrated. It’s the
same hardware; you’re just using more of it more efficiently.
Keeping security in mind, you could split the first server again
so it could handle another task—increasing its use from 30%,
to 60%, to 90%. Once you do that, the now empty servers
could be reused for other tasks or retired altogether to reduce
cooling and maintenance costs.
History Of Virtualization
While virtualization technology can be traced back to the 1960s, it
wasn’t widely adopted until the early 2000s. The technologies that
enabled virtualization—like hypervisors—were developed decades ago
to give multiple users simultaneous access to computers that performed
batch processing. Batch processing was a popular computing style in
the business sector that ran routine tasks thousands of times very
quickly (like payroll).
But, over the next few decades, other solutions to the many users/single
machine problem grew in popularity while virtualization didn’t. One of
those other solutions was time-sharing, which isolated users within
operating systems—inadvertently leading to other operating systems
like UNIX, which eventually gave way to Linux®. All the while,
virtualization remained a largely unadopted, niche technology.
Fast forward to the 1990s. Most enterprises had physical servers and
single-vendor IT stacks, which didn’t allow legacy apps to run on a
different vendor’s hardware. As companies updated their IT
environments with less-expensive commodity servers, operating
systems, and applications from a variety of vendors, they were bound to
underused physical hardware—each server could only run 1 vendor-
specific task.
This is where virtualization really took off. It was the natural solution to
2 problems: companies could partition their servers and run legacy apps
on multiple operating system types and versions. Servers started being
used more efficiently (or not at all), thereby reducing the costs
associated with purchase, setup, cooling, and maintenance.
Virtualization’s widespread applicability helped reduce vendor lock-in
and made it the foundation of cloud computing. It’s so prevalent across
enterprises today that specialized virtualization management software is
often needed to help keep track of it all.
Introduction
What is Virtualization?
Virtualization is a large umbrella of technologies and
concepts that are meant to provide an abstract environment
(virtual hardware or an operating system) to run applications.
It is the "creation of a virtual (rather than actual) version of
something, such as a server, a desktop, a storage device, an
operating system, or network resources".
The idea is to separate the hardware from the software to
yield better system efficiency.
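On real hardware this separation is usually assisted by CPU virtualization extensions. As a minimal sketch (assuming a Linux host where /proc/cpuinfo is readable; the file path and output wording are illustrative), the snippet below checks whether the processor advertises Intel VT-x (the vmx flag) or AMD-V (the svm flag), which hypervisors build on:

```python
# Minimal sketch: detect hardware virtualization support on a Linux host.
# Assumes /proc/cpuinfo is readable; Intel VT-x shows up as "vmx",
# AMD-V as "svm" in the CPU flags line.

def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx", "svm"} & flags
    return set()

found = virtualization_flags()
if found:
    print("Hardware virtualization supported:", ", ".join(sorted(found)))
else:
    print("No vmx/svm flag found; a hypervisor would need software techniques")
```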
How does virtualization work?
• Software called hypervisors separates the physical resources
from the virtual environments—the things that need those
resources. Hypervisors can sit on top of an operating system
(like on a laptop) or be installed directly onto hardware (like a
server), which is how most enterprises virtualize.
• Hypervisors take your physical resources and divide them up
so that virtual environments can use them.
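To make this concrete, the sketch below uses the libvirt Python bindings to ask a hypervisor what physical resources it owns and which virtual machines are drawing on them. It assumes libvirt-python is installed and a local QEMU/KVM hypervisor is reachable at the qemu:///system URI; openReadOnly, getInfo, listAllDomains, name, and info are standard libvirt calls.

```python
# Minimal sketch, assuming libvirt-python and a local QEMU/KVM hypervisor.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only hypervisor connection

# The physical resources the hypervisor divides up:
model, mem_mb, cpus, *_ = conn.getInfo()
print(f"Host: {cpus} CPUs, {mem_mb} MB RAM ({model})")

# The virtual environments (domains) sharing those resources:
for dom in conn.listAllDomains():
    state, max_mem_kb, _, vcpus, _ = dom.info()
    print(f"  VM {dom.name()}: {vcpus} vCPUs, {max_mem_kb // 1024} MB")

conn.close()
```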
VMM Requirements:
• The VMM should provide an environment for programs that is
essentially identical to the original machine.
• Programs run in this environment should show, at worst,
only minor decreases in speed.
• The VMM should be in complete control of the system
resources (any program run under the VMM should exhibit
behavior identical to what it would show when run on the
original machine directly).
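A classic way to satisfy all three requirements at once is trap-and-emulate. The toy model below (pure Python; every instruction name is invented for illustration) shows the idea: unprivileged guest instructions run directly, which gives equivalence and near-native speed, while privileged instructions trap to the VMM, which emulates them against virtual state it alone controls.

```python
# Toy trap-and-emulate model (hypothetical instruction set, illustration only).
PRIVILEGED = {"SET_TIMER", "IO_OUT"}   # instructions the guest may not run directly

class VMM:
    def __init__(self):
        self.virtual_timer = 0         # per-guest state owned by the VMM

    def run(self, guest_program):
        for instr, arg in guest_program:
            if instr in PRIVILEGED:
                self.trap(instr, arg)              # resource control: VMM steps in
            else:
                print(f"direct:  {instr} {arg}")   # equivalence: runs as on bare metal

    def trap(self, instr, arg):
        if instr == "SET_TIMER":
            self.virtual_timer = arg               # emulated against virtual state
            print(f"trapped: SET_TIMER -> virtual timer = {arg}")
        elif instr == "IO_OUT":
            print(f"trapped: IO_OUT {arg} mediated by the VMM")

VMM().run([("ADD", 1), ("SET_TIMER", 100), ("ADD", 2), ("IO_OUT", "0x3F8")])
```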
VMM Resource Control