Virtualization Power Point Presentation
What is Hardware?
What is Software?
What is an OS?
Abstract view of a Computer System?
What is a Server?
What is Virtual Memory?
The world is getting smarter
[Diagram: two virtual containers on shared x86 hardware, each running its own guest OS (SUSE Linux, Red Hat Linux, Windows XP, Windows 2003)]
Virtualization again…
x86 server deployments introduced new IT challenges:
• Low server infrastructure utilization (10-18%)
• Increasing physical infrastructure costs (facilities, power, cooling, etc.)
• Increasing IT management costs (configuration, deployment, updates, etc.)
• Insufficient failover and disaster protection
The solution for all these problems was to virtualize x86 platforms
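The utilization argument can be made concrete with a quick back-of-envelope calculation. This is an illustrative sketch; the 70% target utilization and the function name are assumptions, not figures from the source:

```python
import math

# Back-of-envelope consolidation estimate: stack many low-utilization
# workloads onto fewer hosts run at a higher target utilization.
def servers_after_consolidation(n_servers: int,
                                avg_utilization: float,
                                target_utilization: float = 0.70) -> int:
    """Physical hosts needed once total demand is packed up to the target."""
    total_demand = n_servers * avg_utilization
    return max(1, math.ceil(total_demand / target_utilization))

# 100 servers averaging 15% utilization collapse onto about 22 hosts.
print(servers_after_consolidation(100, 0.15))
```

The model ignores peak correlation and failover headroom, but it shows why 10-18% utilization made consolidation so attractive.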
Evolution of Virtualization
Computing Infrastructure - Virtualization
• It matches the benefits of high hardware utilization with running several operating systems (applications) in separate virtualized environments
– Each application runs in its own operating system
– Each operating system does not know it is sharing the underlying hardware with others
• Isolation
• Manageability
• Server consolidation
o Run a web server and a mail server on the same physical server
• Easier development
o Develop critical operating system components (file system, disk driver) without affecting computer stability
Hypervisor
• In computing, a hypervisor (also: virtual machine monitor) is a virtualization platform that allows multiple operating systems to run on a host computer at the same time. The term usually refers to an implementation using full virtualization.
Two types of hypervisors
• Definitions
– Hypervisor (or VMM – Virtual Machine Monitor) is a software layer that allows several virtual machines to run on a physical machine
– The physical OS and hardware are called the Host
– The virtual machine OS and applications are called the Guest
Type 1: VMware ESX, Microsoft Hyper-V, Xen
Type 2: VMware Workstation, Microsoft Virtual PC, Sun VirtualBox, QEMU, KVM
Hypervisor
• Hypervisors are currently classified in two types:
– Type 1 hypervisor (or Type 1 virtual machine monitor) is software that runs directly on a given hardware platform (as an operating system control program). A "guest" operating system thus runs at the second level above the hardware.
• The classic type 1 hypervisor was CP/CMS, developed at IBM in the 1960s, ancestor of IBM's current z/VM. More recent examples are Xen, VMware's ESX Server, and Sun's Hypervisor (released in 2005).
– Type 2 hypervisor (or Type 2 virtual machine monitor) is software that runs within an operating system environment. A "guest" operating system thus runs at the third level above the hardware.
• Examples include VMware Server and Microsoft Virtual Server.
Bare-metal or hosted?
• Bare-metal
– Has complete control over hardware
– Doesn’t have to “fight” an OS
• Hosted
– Avoid code duplication: need not code a process scheduler or memory management system – the OS already does that
– Can run native processes alongside VMs
– Familiar environment – how much CPU and memory does a VM take?
• A combination: mostly hosted, but some parts are inside the OS kernel for performance reasons
Limitations
• Not all applications are specifically designed to be virtualization-friendly.
• This means that some parts of your company's technology stack may not be candidates for virtualization.
• KVM – a Linux-based open source hypervisor. First introduced into the Linux kernel in February 2007, it is now a mature hypervisor and is probably the most widely deployed open source hypervisor in an open source environment. KVM is used in products such as Red Hat Enterprise Virtualization (RHEV).
• Xen – an open source hypervisor which originated in a 2003 Cambridge University research project. It runs on Linux (though being a Type 1 hypervisor, more properly one might say that its dom0 host runs on Linux, which in turn runs on Xen). It was originally supported by XenSource Inc, which was acquired by Citrix Inc in 2007.
• VMware – is not a hypervisor but the name of a company, VMware Inc. Our experience with VMware involves its vSphere product. vSphere uses VMware's ESXi hypervisor. VMware's hypervisor is very mature and extremely stable.
• Hyper-V – a commercial hypervisor provided by Microsoft. While excellent for running Windows, being a hypervisor it will run any operating system supported by the hardware platform.
Challenges
Assuring Compatibility and Efficiency
• When it comes to virtualization, one of the biggest obstacles to overcome is preparing a robust infrastructure and making sure that all its underlying components (CPU, storage devices, operating systems, network, etc.) are compatible, efficient, and will provide sufficient performance without affecting the workflow.
• Making sure that all these are in place may take a lot of time and may require special skills. It also depends on how modern your equipment is.
• If you do not have the skills readily available in-house, you may need to acquire them or engage a consultant beforehand and establish a preset list of mandatory changes and improvements to assure a smooth virtualization process.
The Network
• Virtualizing servers before making sure that the network infrastructure can handle it is a risky undertaking.
• After the process is complete the network will be under a lot more strain, and making sure that it can sustain the added traffic is critical.
• Once the virtualization process is complete and problems arise, it will be very difficult to tell whether performance issues originate in the network or at the server end.
Computational and storage capacity
• Virtual machines generate I/O requests at an increased frequency, and since virtualization means creating more servers, disks may have trouble keeping up with the increased workload.
• It is not uncommon to find that some applications are actually slower when run in virtual machines, even if memory management techniques like page sharing are in place.
• This is because larger blocks of data have increased priority, which means that smaller I/O requests have to wait before being processed.
• A viable solution for this kind of issue is workload analysis before virtualization, so that you can estimate future hardware and network usage and prepare accordingly.
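A workload analysis of the kind described above can be sketched as a simple capacity check. The function name, the IOPS figures, and the 20% headroom are illustrative assumptions, not measured values:

```python
# Pre-virtualization workload check: do the combined peak I/O demands
# of the candidate servers, plus a safety headroom, fit the planned host?
def fits_on_host(peak_iops: list, host_iops_capacity: int,
                 headroom: float = 0.2) -> bool:
    combined = sum(peak_iops)
    return combined * (1 + headroom) <= host_iops_capacity

print(fits_on_host([800, 1200, 500], 4000))  # True: 2500 * 1.2 <= 4000
print(fits_on_host([2000, 2000], 4000))      # False: 4000 * 1.2 > 4000
```

In practice you would feed this with measured peaks per server, and run the same check for CPU, memory, and network bandwidth.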
Pitfalls
Mismatching Servers
• This aspect is commonly overlooked, especially by smaller companies that don't invest sufficient funds in their IT infrastructure and prefer to build it from bits and pieces.
• This usually leads to simultaneous virtualization of servers with different chip technology (AMD and Intel).
• Frequently, migration of virtual machines between them won't be possible, and server restarts will be the only solution.
• This is a major hindrance and effectively means losing the benefits of live migration and virtualization.
Creating Too many Virtual Machines per Server
• One of the great things about virtual machines is that they can be easily created and migrated from server to server according to needs.
• However, this can also create problems, because IT staff members may get carried away and deploy more virtual machines than a server can handle.
• This will lead to a loss of performance that can be quite difficult to spot.
• A practical way to work around this is to have policies in place regarding VM limits and to make sure that employees adhere to them.
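Such a VM-limit policy can be expressed as a simple admission check run before each deployment. This is a hedged sketch; the class names and the 2x CPU overcommit ratio are assumptions chosen for illustration:

```python
# Sketch of a VM-per-host admission policy: reject a new VM once the
# host's committed vCPUs or memory would exceed configured limits.
class Host:
    def __init__(self, vcpus: int, mem_gb: int, cpu_overcommit: float = 2.0):
        self.vcpus, self.mem_gb = vcpus, mem_gb
        self.cpu_overcommit = cpu_overcommit
        self.vms = []  # list of (vcpus, mem_gb) per placed VM

    def can_place(self, vcpus: int, mem_gb: int) -> bool:
        used_cpu = sum(v for v, _ in self.vms) + vcpus
        used_mem = sum(m for _, m in self.vms) + mem_gb
        # allow CPU overcommit (time-sliced), but never overcommit memory
        return (used_cpu <= self.vcpus * self.cpu_overcommit
                and used_mem <= self.mem_gb)

    def place(self, vcpus: int, mem_gb: int) -> bool:
        if self.can_place(vcpus, mem_gb):
            self.vms.append((vcpus, mem_gb))
            return True
        return False

host = Host(vcpus=16, mem_gb=64)
print(host.place(8, 32))   # True
print(host.place(8, 32))   # True  (memory now fully committed)
print(host.place(8, 32))   # False (would need 96 GB on a 64 GB host)
```

Real schedulers (DRS, OpenStack placement) do far more, but the point is the same: the policy is enforced by code, not by staff discipline alone.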
Misplacing Applications
• A virtualized infrastructure is more complex than a traditional one, and with a number of applications deployed, losing track of applications is a distinct possibility.
• Within a physical server infrastructure, keeping track of all the apps and the machines running them isn't a difficult task.
• However, once you add a significant number of virtual machines to the equation, things can get messy, and app patching, software licensing, and updating can turn into painfully long processes.
Features of VM
CLONE:
• A clone is a copy of an existing virtual machine.
• With clones, you can make many copies of a virtual machine from a single installation and configuration process.
• Clones are useful when you must deploy many identical virtual machines to a group.
• A clone is an exact copy of your existing VM, but it gives you the option to change the name of the destination VM as well as its resources.
• When you clone a virtual machine, you create a copy of the entire virtual machine, including its settings, any configured virtual devices, installed software, and other contents of the virtual machine's disks.
• You also have the option to use guest operating system customization to change some of the properties of the clone, such as the computer name and networking settings.
Snapshots
Snapshots create additional VMDK files and consume disk space, so take snapshots with care and always delete them after completing your testing.
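The snapshot lifecycle described above (take, test, then revert or delete) can be modeled in a few lines. This toy model only mimics the behavior; real snapshots store changes as deltas in extra VMDK files rather than full copies:

```python
import copy

# Toy snapshot/revert model: a snapshot captures VM state so that
# later experimental changes can be discarded.
class VM:
    def __init__(self):
        self.disk = {"os": "installed"}
        self._snapshots = []

    def take_snapshot(self):
        self._snapshots.append(copy.deepcopy(self.disk))

    def revert(self):
        self.disk = self._snapshots.pop()

    def delete_snapshot(self):
        self._snapshots.pop()  # frees the extra space once testing is done

vm = VM()
vm.take_snapshot()
vm.disk["patch"] = "experimental"   # risky change under test
vm.revert()                         # back to the pre-patch state
print("patch" in vm.disk)           # False
```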
Template
A template is a master copy of a virtual machine that can be used to create and provision virtual machines.
Templates cannot be powered on or edited, and are more difficult to alter than an ordinary virtual machine.
A template offers a more secure way of preserving a virtual machine configuration that you want to deploy many times.
VM Templates – a copy of a pre-installed VM containing all the software and configuration settings that would make the VM work when deployed.
Template
Templates are pre-configured VMs used for multiple deployments. Say you have to deploy a W2K8 R2 server 20 times; in this case the best approach would be to create a master copy of W2K8 R2 with all the basic setup and create a template of it.
The configuration file of a template VM will be *.vmtx and not *.vmx; that way you can identify the VM in your datastore as a template VM.
The template typically includes a specified operating system and a configuration that provides virtual counterparts to hardware components. Optionally, a template can include an installed guest operating system and a set of applications.
• Import and export: VMs and templates can be exchanged in Open Virtualization Format, either as a directory of files (.ovf) or as a single archive (.ova).
What is the use of privileged instruction
• A class of instructions, usually including storage protection setting, interrupt handling, timer control, input/output, and special processor-status-setting instructions, that can be executed only when the computer is in a special privileged mode that is generally available to an operating or executive system, but not to user programs.
• A machine code instruction that may only be executed when the processor is running in supervisor mode.
• A trap is an exception in a user process, caused for example by division by zero or invalid memory access. It is also the usual way to invoke a kernel routine (a system call), because those run with a higher privilege than user code.
FULL VIRTUALIZATION (What is it?)
• It is a virtualization technique used to provide a virtual machine environment which is a complete simulation of the underlying hardware.
• All operating systems and applications which can run natively on the hardware can also be run in the virtual machine.
• Responsible for hosting and managing virtual machines and running the guest OS.
• Hosted
• Bare metal
Virtualization Challenges (x86)
• The OS kernel is designed to run at ring 0 to execute code directly on the hardware and handle privileged instructions.
• The hypervisor also provides hypercall interfaces for other critical kernel operations such as memory management, interrupt handling, and timekeeping.
Full vs. Paravirtualization
• Paravirtualization is different from full virtualization, where the unmodified OS does not know it is virtualized and sensitive OS calls are trapped using binary translation.
• The performance advantage of paravirtualization over full virtualization can vary greatly depending on the workload.
• As paravirtualization cannot support unmodified operating systems (e.g. Windows 2000/XP), its compatibility and portability are poor.
• Paravirtualization can also introduce significant support and maintainability issues in production environments, as it requires deep OS kernel modifications.
• The open source Xen project is an example of paravirtualization that virtualizes the processor and memory using a modified Linux kernel and virtualizes the I/O using custom guest OS device drivers.
Types of Clone: Full and Linked
• There are two types of clone:
• The Full Clone – an independent copy of a virtual machine that shares nothing with the parent virtual machine after the cloning operation. Ongoing operation of a full clone is entirely separate from the parent virtual machine.
• The Linked Clone – a copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space and allows multiple virtual machines to use the same software installation.
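The disk-sharing difference between the two clone types can be sketched as copy-on-write: reads on a linked clone fall through to the parent's blocks, while writes land in a private delta. Class and function names here are illustrative, not any vendor's API:

```python
# Copy-on-write sketch of linked vs. full clones at the disk-block level.
class Disk:
    def __init__(self, blocks=None, parent=None):
        self.blocks = dict(blocks or {})  # this disk's own (delta) blocks
        self.parent = parent              # backing disk, if any

    def read(self, addr):
        if addr in self.blocks:
            return self.blocks[addr]
        return self.parent.read(addr) if self.parent else None

    def write(self, addr, data):
        self.blocks[addr] = data          # COW: only the delta grows

def full_clone(disk: Disk) -> Disk:
    return Disk(blocks=disk.blocks)       # independent copy of every block

def linked_clone(disk: Disk) -> Disk:
    return Disk(parent=disk)              # shares the parent's blocks

base = Disk({0: "bootloader", 1: "kernel"})
clone = linked_clone(base)
clone.write(1, "patched-kernel")
print(clone.read(1), base.read(1))  # patched-kernel kernel
print(clone.read(0))                # bootloader (served by the parent)
```

This also shows why a linked clone is disabled without its parent: unmodified blocks only exist on the parent disk.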
Difference Between Full Clone and Linked Clone
The Full Clone:
• A full clone is an independent virtual machine, with no need to access the parent.
The Linked Clone:
• A linked clone must have continued access to the parent. Without access to the parent, a linked clone is disabled. See Linked Clone and Access to the Parent Virtual Machine.
• In brief, all files available on the parent at the moment of the snapshot continue to remain available to the linked clone.
• Ongoing changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the parent.
Benefits of Full Clones
• Full clones do not require an ongoing connection to the parent virtual machine.
• Overall performance of a full clone is the same as a never-cloned virtual machine, while a linked clone trades potential performance degradation for guaranteed conservation of disk space.
• If you are focused on performance, prefer a full clone over a linked clone.
Benefits of Linked Clones
• Linked clones are created swiftly. A full clone can take several minutes if the files involved are large.
• A linked clone lowers the barrier to creating new virtual machines, so you might swiftly and easily create a unique virtual machine for each task you have.
• If a group of people needs to access the same virtual disks, they can easily pass around clones with references to those virtual disks.
• For example, a support team can reproduce a bug in a linked clone and then just email that linked clone to development. This is feasible only when a virtual machine isn't gigabytes in size.
Server Virtualization
• "Server virtualization in general terms lets you take a single physical device and install (and run simultaneously) two or more OS environments that are potentially different and have different identities, application stacks, and so on."
Typical Server Model
Virtualized Server Model
VMware By the Numbers
Founded 1998
2006 Revenue $709 M
Number of Employees 2,500+
Number of VMware Infrastructure Customers 20,000+
Number of Users 4+ million
Number of Channel Partners 3,000+
Number of VMware Certified Professionals 10,000+
Who Uses VMware?
100% of the Fortune 100
The Challenge
Virtualization Technology Overview
Old Model:
Traditional x86 Architecture
• Single OS image per machine
• Software and hardware tightly coupled
• Multiple applications often conflict
• Underutilized resources
$40,000 / rack
Source: IDC
What is Virtualization?
Without Virtualization vs. With Virtualization
[Diagram: application / operating system / hardware stack, shown without and with a virtualization layer]
• Partitioning
• Isolation – fault and security isolation at the hardware level; advanced resource controls preserve performance
Key Properties of Virtual Machines
• Partitioning
• Hardware independence
Before VMware: 1,000 servers. After VMware: 80 servers. Savings: $5,816 per server removed.
* Note: Savings include estimated cost of VMware licenses, Support and Subscription
The Enterprise PC Challenge
IT: Are We Having Fun Yet?
ESX Server
• Performance
• Stability
• Scalability
• Cross-platform support
Non-Disruptive Capacity on Demand
Instant Provisioning in a Virtualized Environment
Physical
Virtual
• Provisioning time reduced to minutes, not days to weeks!
From server boot to running VMs in minutes
3i
1. Power on server and boot into hypervisor
2. Configure admin password
3. (optional) Modify network configuration
4. Connect VI Client to IP address, or manage with VirtualCenter
VMware VMotion
• Live migration of virtual machines
• Zero downtime
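Live migration of a running VM is commonly implemented with an iterative pre-copy of memory pages. This is a simplified sketch of that general mechanism, not VMware's actual implementation; all names and thresholds are illustrative:

```python
# Simplified pre-copy live migration: copy memory while the VM keeps
# running, re-sending pages dirtied during each round, until the
# remaining set is small enough for a brief final stop-and-copy.
def precopy_migrate(memory: dict, dirty_per_round,
                    max_rounds: int = 10, stop_threshold: int = 2) -> dict:
    dest = {}
    pending = set(memory)                 # round 1: everything is pending
    for round_no in range(max_rounds):
        for page in pending:
            dest[page] = memory[page]     # copy while the VM still runs
        # pages the guest dirtied during this round must be re-sent
        pending = set(dirty_per_round(round_no)) & set(memory)
        if len(pending) <= stop_threshold:
            break
    for page in pending:                  # final, brief stop-and-copy
        dest[page] = memory[page]
    return dest

mem = {page: f"data{page}" for page in range(8)}
# the guest keeps re-dirtying pages 0 and 1; the loop converges anyway
dest = precopy_migrate(mem, lambda r: [0, 1])
print(dest == mem)  # True
```

The "zero downtime" claim rests on that final stop-and-copy window being milliseconds long because almost everything was transferred in advance.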
VMware DRS
• Dynamic and intelligent allocation of hardware resources
• Ensure optimal alignment between business and IT
[Diagram: business demand mapped onto a resource pool]
Ensure High Availability with VMware HA
• VMware HA automatically restarts virtual machines when a physical server fails
Distributed Power Management (coming soon)
NEW!
Minimize power consumption while guaranteeing service levels
Storage VMotion
– Storage-independent live migration of virtual machine disks
• Zero downtime to virtual machines
• LUN independent
• Supported for Fibre Channel SANs
• Examples of Virtualization
– Virtual drives
– Virtual memory
– Virtual machines
– Virtual servers
History
• 1960s Machines
– Did not scale well
– Extremely expensive
– Cost efficiency was desired
• RAM
– No set amount for RAM
– Estimate minimum amounts of RAM and upgrade based on performance
Virtualization Hardware
• Storage
– Local storage on servers is limited
– Allow for 20% extra storage space for VM files and server snapshots
– Storage Networks (highly recommended)
• Storage Area Network (SAN) – Large data transfers
• Network Attached Storage (NAS) – File-based data storage
Pros and Cons of Server Virtualization
• Pros
– Cost
• Fewer physical servers
• Less server space (consolidation of servers)
• Lower energy costs
• Less maintenance
– Efficient Administration
• Easier management, management through one machine
• Single point of failure
• Smaller IT staff
Pros and Cons of Server Virtualization
• Pros
– Growth and Scalability
• Upgrading one server upgrades them all
• Easy growth
• Less hardware complications
– Security
• Single server security maintenance
• Hypervisor software often provides security benefits
– Legacy Servers
• Upgrading servers to a virtual setup from old systems
• Goes hand-in-hand with scalability
Pros and Cons of Server Virtualization
• Cons
– Slow Performance
• High stress on single machine
• Longer processing times
• More network bottlenecking
Save Investments:
• Companies that want to run hundreds or thousands of virtual servers can use the available space in an efficient, effective, and significant manner without difficulty.
• It is an excellent solution for companies that cannot afford their own dedicated virtual private server.
Decrease Power Consumption:
• This is the second most notable benefit of server virtualization: it significantly decreases the servers' energy utilization and cooling requirements.
Top 6 Reasons To Invest In Server Virtualization
Redundancy:
• It provides a strong safety measure: if one virtual server fails due to any trouble, you will have another server running the same application at hand.
High Security & Recovery:
• You will get substantial security and recovery benefits due to the isolated environment it provides.
Citrix XenApp
Introduction to XenApp
• Ed Iacobucci founded Citrix in 1989 in Texas, then moved it to Florida.
• Citrix developed MultiWin technology, later licensed to Microsoft, which became the basis of Terminal Server.
Terminal server
• A hardware device or server that enables one or more terminals to connect to a local area network (LAN) or the Internet without the need for each terminal to have a network interface card (NIC) or modem.
• Terminals can be PCs, printers, IBM 3270 emulators, or other devices with an RS-232 / RS-423 serial port interface.
• Citrix provides the ability to access published desktops or applications through a web interface or a client tool.
• Citrix provides the ability to deliver a single application to a user's desktop where it looks like a local installation, yet the process runs on the Citrix server.
• Citrix gives users access to Windows, web, and legacy applications, as well as other information, from anywhere, on any device, over any connection.
• Access is given via a central server; the user's keystrokes are transferred from the user's workstation to the server.
Different Versions of Citrix :
MetaFrame XP
MetaFrame XP with Feature Release 1
MetaFrame XP with Feature Release 2
MetaFrame XP with Feature Release 3
MetaFrame XP Presentation Server with Feature Release 3
MetaFrame Presentation Server 3
Presentation Server 4
Presentation Server 4.5
Presentation Server 4.5 with Service Pack 1
XenApp 5
XenApp 6
XenApp 6.5
What is XenApp ?
• Terminal Server is a server-based computing model which allows multiple simultaneous users to log in and run applications on a centralized server.
Security
Desktop Models
Web Based | Virtualized/Cloud | Device Centric | App Based
1. Legacy applications
2. Application collaboration
3. Separation of environments
4. Consumer choice
5. Familiarity
6. Data synchronization
[Diagram: desktop layers – OS, Data, Apps, Settings, Preferences]
Virtual Desktops
Traditional VDI Reality
• Operationally intensive
• Difficult to maintain
• Is it strategic?
• Do you want to be building and managing data centers?
Cloud Based Desktops
• Background
– Only virtual desktop solution built from the ground up for the cloud/DaaS delivery model
– Founded 2007; global footprint
• Who we are
– Software company delivering cloud-hosted virtual desktop infrastructure (VDI)
– Optimized, tested, and deployed with the world's largest companies and service providers
• Solutions include
– Desktone Cloud for enterprises
– Desktone Cloud Platform for service providers
Cloud-Based Desktops (DaaS)
Business Benefits
How it Works
• Client manages: IT, shared resources
• Desktone delivers: high-performance network; secure & compliant; centralized management & reporting; provisioning on demand; personalized desktops; bring your own licenses; remote display; access anywhere
Easy to Try & Easy to Buy
1. Drastically Reduce Cost
2. Easy to try
Features (desktop tiers)
• CPU cores: 1 | 1 | 2 | 4 | Custom
• OS type: Windows 7 Enterprise | Windows XP Pro, Windows 7 Enterprise, Linux | Windows XP Pro, Windows 7 Pro or Enterprise, Linux | Windows XP Pro, Windows 7 Pro or Enterprise, Linux | Windows XP Pro, Windows 7 Pro or Enterprise, Linux
• Display protocol: RDP | RDP, RGS, Citrix Receiver | RDP, RGS, Citrix Receiver | RDP, RGS, Citrix Receiver | RDP, RGS, Citrix Receiver
End User Login
• VDI was supposed to solve problems, but for most it has introduced other issues, especially cost and complexity
• No hidden costs
Data Virtualization
• Data virtualization is any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data.
• For example, once a photo has been uploaded to Facebook, you can retrieve it without having to know its new file path.
• In fact, you will have absolutely no idea where Facebook is storing your photo, because Facebook software has an abstraction layer that hides that technical information.
• This abstraction layer is what some vendors mean when they use the term data virtualization.
• The term data virtualization, however, simply means that the technical information about the data has been hidden.
• Data integration and query logic: what data has been requested, how to process it, and where that logic exists.
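The Facebook example reduces to a tiny abstraction-layer sketch: callers work with a logical ID, and the physical placement stays hidden behind the layer. All class and path names here are illustrative, not any real system's API:

```python
# Data-virtualization sketch: the caller never sees where data lives.
class PhotoStore:
    def __init__(self):
        self._backend = {}  # physical path -> bytes (opaque to callers)
        self._index = {}    # logical id -> physical path
        self._next = 0

    def upload(self, data: bytes) -> int:
        photo_id = self._next
        self._next += 1
        # placement is an internal detail the abstraction layer hides
        path = f"/shard{photo_id % 4}/obj{photo_id}"
        self._backend[path] = data
        self._index[photo_id] = path
        return photo_id

    def fetch(self, photo_id: int) -> bytes:
        # callers need only the logical id, never the storage path
        return self._backend[self._index[photo_id]]

store = PhotoStore()
pid = store.upload(b"kitten.jpg")
print(store.fetch(pid) == b"kitten.jpg")  # True
```

The store can later reshard or move objects freely, as long as the logical-ID interface stays stable — which is exactly the point of hiding the technical details.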
Remote Desktop Virtualization
• All applications and data used remain on the remote system, with only display, keyboard, and mouse information communicated with the local client device, which may be a conventional PC/laptop, a thin client device, a tablet, or even a smartphone.
• A common implementation of this approach involves hosting multiple desktop operating system instances on a server hardware platform running a hypervisor.
Secure, mobile access to applications. Many SMBs allow employees to work remotely or off hours from their own devices, but provisioning those devices can be difficult and costly, and allowing employees to do it themselves can open up the company to security risks.
• However, virtualized desktops allow employees to access even high-performance applications by enabling hardware-based GPU sharing through a secure connection from any device, even on high-latency, low-bandwidth networks such as hotel room WiFi.
Flexibility. Using desktop virtualization allows enterprises to provision just a few types of desktops to its users, reducing the need to configure desktops for each employee.
• Additionally, because virtual desktops can be provisioned so quickly, it's easier for the company to onboard new hires with just a few mouse clicks.
• The right virtualization solution will allow an administrator to personalize and manage desktops through a single interface, eliminating the need to drill down to individual desktops.
Benefits of Desktop Virtualization
Ease of maintenance. Virtualized desktops also allow for easier desktop maintenance.
• At the end of the day, when the employee logs off from her computer, the desktop can be reset, removing any downloaded software or customizations that she may have added to her computer.
• This not only prevents software and customizations from slowing down the machine but also provides an easy way to troubleshoot: if the system freezes, the employee can simply reboot and have the desktop restored.
Desktop security. A common problem that most SMBs face is employees downloading software or other potentially risky items (like PowerPoint presentations featuring cute kittens, and malware).
• Desktop virtualization allows the administrator to set permissions, preventing documents carrying Trojan horses from residing on the system. This provides, quite simply, peace of mind, as well as easing maintenance costs.
Benefits of Desktop Virtualization
Reduced costs. All of these benefits end up at the same place: reduced costs for the business.
• Because the software licensing requirements are smaller, there are cost savings on applications alone.
• Companies also save money in the IT department, as fewer staff are needed to manage desktops and troubleshoot user problems.
• SMBs also save money on major support issues like removing malware.
FlexCast
• Different types of workers across the enterprise need different types of desktops.
• XenDesktop can meet all these requirements in a single solution with its unique Citrix FlexCast delivery technology.
• With FlexCast, your IT group can deliver virtual desktops and applications tailored to meet the performance, security, and flexibility requirements of each individual user.
Task Workers
• Task workers perform a set of well-defined tasks.
• They access a small set of applications and have limited access on their PCs.
• Since these workers interact with your customers, partners, and employees, they have access to your most critical data.
• XenDesktop enables IT to provide a standard desktop and applications while keeping data secure.
Knowledge Workers
• Traditional office workers perform their work in the office.
• These workers expect access to their data and applications wherever they are.
Connection flow:
• The agent registers with the controller to make the desktop available for connection to a user device, and verifies incoming requests from those devices before establishing a connection.
• The online plug-in runs on the user device and manages the display of the user desktop.
• The plug-in requests the desktop from the controller, which sends the connection details about the assigned desktop back to the plug-in.
• The online plug-in then initiates a desktop session with the VD agent running on the assigned desktop.
VDI IN A BOX
• Citrix VDI-in-a-Box installs on a single server.
• The software works with any hypervisor and is simpler to install and work with than enterprise-grade virtual desktop infrastructure (VDI) platforms such as VMware View or Citrix XenDesktop.
• VDI-in-a-Box costs less than Citrix XenDesktop, but it does not come with rights to XenApp or XenServer Enterprise.
Operating-System Virtualization
• Operating-system-level virtualization is a server-virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one.
• These instances run on top of an existing host operating system and provide a set of libraries that applications interact with, giving them the illusion that they are running on a machine dedicated to their use. The instances are known as containers, virtual private servers, or virtual environments.
• It extends a standard operating system so that it can run different applications handled by multiple users on a single computer at a time.
• The operating systems do not interfere with each other even though they are on the same computer.
Operating-System Virtualization
• In OS virtualization, the operating system is altered so that it operates like several different, individual systems.
• System administrators may also use it, to a lesser extent, for consolidating server hardware by moving services on separate hosts into containers on one server.
• The goal of the KVM project was to create a modern hypervisor that builds on the experience of previous generations of technologies and leverages the modern hardware available today.
• KVM is implemented as a loadable kernel module that converts the Linux kernel into a bare metal hypervisor.
• There are two key design principles that the KVM project adopted that have helped it mature rapidly into a stable and high performance hypervisor and overtake other open source hypervisors.
• Firstly, because KVM was designed after the advent of hardware-assisted virtualization, it did not have to implement features that were provided by hardware.
• The KVM hypervisor requires Intel VT-x or AMD-V enabled CPUs and leverages those features to virtualize the CPU.
• By requiring hardware support rather than optimizing with it if available, KVM was able to design an optimized hypervisor solution without the "baggage" of supporting legacy hardware or requiring modifications to the guest operating system.
• Secondly, the KVM team applied a tried and true adage: "don't reinvent the wheel".
• There are many components that a hypervisor requires in addition to the ability to virtualize the CPU and memory, for example: a memory manager, a process scheduler, an I/O stack, device drivers, a security manager, a network stack, etc.
• In fact, a hypervisor is really a specialized operating system, differing from its general purpose peers only in that it runs virtual machines rather than applications.
• Since the Linux kernel already includes the core features required by a hypervisor, and has been hardened into a mature and stable enterprise platform by over 15 years of support and development, it is more efficient to build on that base rather than writing all the required components, such as a memory manager and scheduler, from the ground up.
• For example, the Linux kernel has a mature and proven memory manager, including support for NUMA and large scale systems, whereas the Xen hypervisor has needed to build this support from scratch. Likewise, features like power management, which are already mature and field proven in Linux, had to be re-implemented in the Xen hypervisor.
• Another key decision made by the KVM team was to incorporate KVM into the upstream Linux kernel.
• The KVM code was submitted to the Linux kernel community in December 2006 and was accepted into the 2.6.20 kernel in January 2007.
• At that point KVM became a core part of Linux and is able to inherit key features from the Linux kernel.
• By contrast, the patches required to build the Linux Domain0 for Xen are still not part of the Linux kernel and require vendors to create and maintain a fork of the Linux kernel.
• This has led to an increased burden on distributors of Xen, who cannot easily leverage the features of the upstream kernel.
• Any new feature, bug fix, or patch added to the upstream kernel must be back-ported to work with the Xen patch sets.
QEMU
• QEMU (short for Quick Emulator) is a free and open-source hosted hypervisor.
• When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g., an ARM board) on a different machine (e.g., your own PC).
• When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.
• QEMU is a hosted virtual machine monitor.
• QEMU can also be used purely for CPU emulation for user-level processes, allowing applications compiled for one architecture to be run on another.
Licensing
• QEMU was written by Fabrice Bellard and is free software, mainly licensed under the GNU General Public License (GPL).
• Various parts are released under the BSD license, the GNU Lesser General Public License (LGPL), or other GPL-compatible licenses.
• There is an option to use the proprietary FMOD library when running on Microsoft Windows, which, if used, disqualifies the use of a single open source software license.
User-mode emulation
• In this mode QEMU runs single Linux or Mac OS X programs that were compiled for a different instruction set.
• System calls are thunked for 32/64-bit mismatches.
• Fast cross-compilation and cross-debugging are the main targets for user-mode emulation.
System emulation
• In this mode QEMU emulates a full computer system, including peripherals.
• It can be used to provide virtual hosting of several virtual computers on a single computer.
• QEMU can boot many guest operating systems, including Linux, Solaris, Microsoft Windows, DOS, and BSD.
• It supports emulating several instruction sets, including x86, MIPS, 32-bit ARMv7, ARMv8, PowerPC, SPARC, ETRAX CRIS and MicroBlaze.
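A system-emulation VM is usually started from the command line; as a hedged sketch, the helper below assembles such an invocation from a script. The flags shown (-m, -smp, -hda, -enable-kvm) are standard QEMU options, but the binary name assumes an x86-64 guest and the disk path is an arbitrary example:

```python
# Assemble a qemu-system-x86_64 command line for a full-system guest.
def qemu_command(disk: str, memory_mb: int = 2048, cpus: int = 2,
                 use_kvm: bool = True) -> list[str]:
    cmd = ["qemu-system-x86_64",
           "-m", str(memory_mb),   # guest RAM in MiB
           "-smp", str(cpus),      # number of virtual CPUs
           "-hda", disk]           # primary disk image
    if use_kvm:
        # Hand guest execution to KVM instead of emulating the CPU.
        cmd.append("-enable-kvm")
    return cmd

if __name__ == "__main__":
    print(" ".join(qemu_command("guest.qcow2")))
```

Without -enable-kvm, QEMU falls back to pure emulation of the guest CPU, which is exactly the distinction drawn in the KVM Hosting section below.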
KVM Hosting
• Here QEMU deals with the setting up and migration of KVM images.
• It is still involved in the emulation of hardware, but the execution of the guest is done by KVM as requested by QEMU.
Xen Hosting
• QEMU is involved only in the emulation of hardware.
• The execution of the guest is done within Xen and is totally hidden from QEMU.
Features
• QEMU can save and restore the state of the virtual machine with all programs running.
• Guest operating systems do not need patching in order to run inside QEMU.
• The virtual network cards can also connect to network cards of other instances of QEMU or to local TAP interfaces.
• Network connectivity can also be achieved by bridging a TUN/TAP interface used by QEMU with a non-virtual Ethernet interface on the host OS, using the host OS's bridging features.
• QEMU integrates several services to allow the host and guest systems to communicate.
• It can also boot Linux kernels without a bootloader.
• QEMU does not require administrative rights to run, unless additional kernel modules for improving speed are used (like KQEMU), or when some modes of its network connectivity model are utilized.
QEMU supports several disk image formats, including raw images and its native qcow2 format, as well as formats used by other hypervisors such as VMDK and VDI.
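As a rough sketch, assuming the qemu-img utility that ships with QEMU is installed, an image can be created in one format and converted to another; the file names and size below are arbitrary:

```python
# Create a qcow2 image and convert it to raw using qemu-img,
# skipping gracefully if qemu-img is not installed on this host.
import os
import shutil
import subprocess
import tempfile

def demo():
    if shutil.which("qemu-img") is None:
        print("qemu-img not found; skipping")
        return
    with tempfile.TemporaryDirectory() as d:
        qcow2 = os.path.join(d, "disk.qcow2")
        raw = os.path.join(d, "disk.raw")
        # qemu-img create -f qcow2 disk.qcow2 64M
        subprocess.run(["qemu-img", "create", "-f", "qcow2", qcow2, "64M"],
                       check=True)
        # qemu-img convert -O raw disk.qcow2 disk.raw
        subprocess.run(["qemu-img", "convert", "-O", "raw", qcow2, raw],
                       check=True)
        print("raw image size:", os.path.getsize(raw))

if __name__ == "__main__":
    demo()
```

The qcow2 format grows on demand and supports snapshots, while raw is a flat byte-for-byte image; conversion between them is a common administrative task.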
KQEMU
• QEMU's optional KQEMU accelerator module worked by running user-mode code (and optionally some kernel code) directly on the host computer's CPU, and by using processor and peripheral emulation only for kernel-mode and real-mode code.
• KQEMU could execute code from many guest OSes even if the host CPU did not support hardware-assisted virtualization.
Qemu:
• QEMU is a complete and standalone piece of software of its own.
• Mainly it works by a special 'recompiler' that transforms binary code written for a given processor into another one (say, to run MIPS code on a PPC Mac, or ARM in an x86 PC).
KQemu:
• In the specific case where both source and target are the same architecture (like the common case of x86 on x86), QEMU still has to parse the code to remove any 'privileged instructions' and replace them with context switches.
• In that case, userspace QEMU still allocates all the RAM for the emulated machine and loads the code.
• The difference is that instead of recompiling the code, it calls KQEMU to scan/patch/execute it.
• This is a lot faster than plain QEMU because most code runs unchanged, but it still has to transform ring-0 code (most of the code in the VM's kernel), so performance still suffers.
Virtualization Comes in Many Forms
[Figure: multiple applications sharing virtualized physical memory]
Storage Virtualization
• The figure illustrates a virtualized storage environment.
• At the top are four servers, each of which has one virtual volume assigned, which is currently in use by an application.
• These virtual volumes are mapped to the actual storage in the arrays, as shown at the bottom of the figure.
What are the innovations and fundamentals associated with storage?
Storage devices have evolved from tapes to hard drives to RAID hard drives, increasing capacity and resiliency, accessed through interfaces such as SCSI.
[Figure: SNIA storage virtualization taxonomy. Where it is done: host-based, network-based, or storage device/subsystem-based virtualization. How it is implemented: in-band or out-of-band virtualization.]
• The SNIA (Storage Networking Industry Association) storage virtualization taxonomy provides a systematic classification of storage virtualization, with three levels defining what, where, and how storage can be virtualized.
• The first level of the storage virtualization taxonomy addresses “what” is created.
Storage Virtualization Configuration
[Figure: (a) out-of-band and (b) in-band configurations, each showing servers, a virtualization appliance, a storage network, and storage arrays]
(a) In an out-of-band implementation, the virtualized environment configuration is stored external to the data path.
(b) The in-band implementation places the virtualization function in the data path.
• In an out-of-band implementation, the virtualized environment configuration is stored external to the data path.
• The configuration is stored on the virtualization appliance, configured external to the storage network that carries the data.
• This configuration is also called split-path because the control and data paths are split (the control path runs through the appliance, the data path does not).
• This configuration enables the environment to process data at network speed with only minimal latency added for translation of the virtual configuration to the physical storage.
• The data is not cached at the virtualization appliance beyond what would normally occur in a typical SAN configuration.
• Since the virtualization appliance is hardware-based and optimized for Fibre Channel communication, it can be scaled significantly. In addition, because the data is unaltered in an out-of-band implementation, many of the existing array features and functions can be utilized in addition to the benefits provided by virtualization.
• The in-band implementation places the virtualization function in the data path, as shown in the figure.
• General-purpose servers or appliances handle the virtualization and function as a translation engine for the virtual configuration to the physical storage.
• While processing, data packets are often cached by the appliance and then forwarded to the appropriate target.
• An in-band implementation is software-based, and data storing and forwarding through the appliance results in additional latency.
• It introduces a delay in the application response time because the data remains in the network for some time before being committed to disk.
• In terms of infrastructure, the in-band architecture increases complexity and adds a new layer of virtualization (the appliance), while limiting the ability to scale the storage infrastructure.
Block-Level Storage Virtualization
• Ties together multiple independent storage arrays.
• Without it, each host knows exactly where its file-level resources are located.
• Underutilized storage resources and capacity problems result because files are bound to a specific file server.
• It is necessary to move files from one server to another for performance reasons or when the file server fills up.
• Moving files across the environment is not easy and requires downtime for the file servers. Moreover, hosts and applications need to be reconfigured with the new path, making it difficult for storage administrators to improve storage efficiency while maintaining the required service level.
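The translation at the heart of block-level virtualization can be sketched as a toy model: a virtual volume stitched together from extents on independent arrays, with the virtualization layer mapping each virtual block address to an (array, physical address) pair. The class, array names, and extent layout are purely illustrative:

```python
# Toy model of block-level storage virtualization: translate a
# virtual LBA (logical block address) on a virtual volume to the
# backing (array, physical LBA) pair.
from dataclasses import dataclass

@dataclass
class Extent:
    array: str      # backing storage array
    start_lba: int  # first physical block of this extent
    length: int     # number of blocks in this extent

class VirtualVolume:
    def __init__(self, extents):
        self.extents = extents

    def translate(self, virtual_lba: int):
        """Walk the extent list until the offset falls inside one."""
        offset = virtual_lba
        for ext in self.extents:
            if offset < ext.length:
                return ext.array, ext.start_lba + offset
            offset -= ext.length
        raise ValueError("LBA beyond end of virtual volume")

vol = VirtualVolume([Extent("arrayA", 1000, 100),
                     Extent("arrayB", 0, 100)])
# Block 150 falls in the second extent, 50 blocks in:
print(vol.translate(150))  # -> ('arrayB', 50)
```

Because hosts address only the virtual volume, the extents behind it can be moved or rebalanced across arrays without reconfiguring the hosts, which is exactly the flexibility file-bound storage lacks.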
VMware vSphere
• VMware vSphere leverages the power of virtualization to transform datacenters into simplified cloud computing infrastructures and enables IT organizations to deliver flexible and reliable IT services.
• Your existing applications see dedicated resources, but your servers can be managed as a pool of resources.
• VMware vSphere virtualizes and aggregates the underlying physical hardware resources across multiple systems.
1) Infrastructure Services
2) Application Services
3) VMware vCenter Server
Infrastructure Services
• VMware vCompute: the VMware capabilities that abstract away underlying disparate server resources. vCompute services aggregate these resources across many discrete servers and assign them to applications.
• VMware vStorage: the set of technologies that enables the most efficient use and management of storage in virtual environments.
• VMware vNetwork: the set of technologies that simplify and enhance networking in virtual environments.
• Application Services: the set of services provided to ensure availability, security, and scalability for applications.
• Clients: users can access the VMware vSphere datacenter through clients such as the vSphere Client, or through Web Access in a Web browser.
• VMware vSphere Client - allows users to remotely connect to ESXi or vCenter Server from any Windows PC.
• VMware vSphere Web Client - allows users to remotely connect to vCenter Server from a variety of Web browsers and operating systems (OSes).
• vSphere Virtual Machine File System (VMFS) - provides a high-performance cluster file system for ESXi VMs.
• vSphere Virtual SMP - allows a single virtual machine to use multiple physical processors at the same time.
• vSphere vMotion - allows live migration for powered-on virtual machines in the same data center.
• vSphere Storage vMotion - allows virtual disks or configuration files to be moved to a new datastore while a VM is running.
• vSphere High Availability (HA) - allows virtual machines to be restarted on other available servers.
• vSphere Distributed Resource Scheduler (DRS) - divides and balances computing capacity for VMs dynamically across collections of hardware resources.
• vSphere Storage DRS - divides and balances storage capacity and I/O across collections of datastores dynamically.
• vSphere Distributed Switch (VDS) - allows VMs to maintain network configurations as the VMs migrate across multiple hosts.
• VMware ESX and VMware ESXi: a virtualization layer run on physical servers that abstracts processor, memory, storage, and resources into multiple virtual machines.
• VMware ESX 4.0 contains a built-in service console. It is available as an installable CD-ROM boot image.
• VMware ESXi 4.0 does not contain a service console. It is available in two forms: VMware ESXi 4.0 Embedded and VMware ESXi 4.0 Installable.
• ESXi 4.0 Embedded is firmware that is built into a server’s physical hardware.
• ESXi 4.0 Installable is software that is available as an installable CD-ROM boot image. You install the ESXi 4.0 Installable software onto a server’s hard drive.
Physical Topology of vSphere Datacenter
• Computing servers: industry-standard x86 servers that run ESX/ESXi on the bare metal.
• ESX/ESXi software provides resources for and runs the virtual machines.
• You can group a number of similarly configured x86 servers with connections to the same network and storage subsystems to provide an aggregate set of resources in the virtual environment, called a cluster.
• Storage networks and arrays: Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware vSphere to meet different datacenter storage needs.
• The storage arrays are connected to and shared between groups of servers through storage area networks.
• This arrangement allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.
• IP networks: each computing server can have multiple Ethernet network interface cards (NICs) to provide high-bandwidth and reliable networking to the entire VMware vSphere datacenter.
From Server Virtualization to the Software-Defined Datacenter
[Figure: timeline from workloads virtualized via server virtualization (2008, provisioning in days/weeks) to the software-defined datacenter (2012, minutes/hours) and beyond (seconds), with servers, storage/availability, networking, security, and management/monitoring delivered as SDDC services]
• However, in many data centers, the benefits of server virtualization have “stalled”, a source of frustration to IT executives.
BEYOND VIRTUALIZATION
• This stall often occurs as the rapid expansion of virtual server deployments threatens to overload storage and data network facilities, resulting in over-provisioning of storage capacity and sharply increased administration workloads.
SDDC
• In a sense, the SDDC is simply the logical extension of server virtualization.
• The SDDC does the same for all of the resources needed to host an application, including storage, networking, and security.
SDDC
• In the past, each new application required a dedicated server, which could take up to 10 weeks to deploy.
• Worse, provisioning these physical resources consumes a great deal of IT time, which would be better spent on strategic initiatives.
• In a very real sense, the full potential of server virtualization cannot be realized when other resources remain physical.
SDDC
• In the SDDC, all resources are virtualized so they can be automatically deployed, with little or no human involvement.
• Meeting and exceeding these expectations requires an automated infrastructure that can provision resources in minutes, not weeks, so that key applications are up and running quickly and delivering business value.
• In the SDDC, resources are deployed automatically from pools, speeding the time to application rollout and providing an unprecedented degree of flexibility in the data center architecture.
• As a result, the organization has the agility to respond quickly to changes in the marketplace and gain competitive advantage.
Minimize IT Spend
• In the SDDC, all of these functions are performed by software running on commodity x86 servers.
• Instead of being locked in to a vendor’s hardware, IT managers can buy commodity servers in quantity through a competitive bid process.
• This shift not only saves money, but also avoids situations where problems in the vendor’s manufacturing process or supply chain result in delivery delays and impact data center operations.
Unmatched Efficiency and Resiliency
• The SDDC provides a flexible and stable platform for any application, including innovative services such as high-performance computing, big data (Hadoop), and latency-sensitive applications.
• Changes are made and workloads balanced by adjusting the software layer rather than the hardware.
• When a failure occurs, the management software in the SDDC automatically redirects workloads to other servers anywhere in the datacenter, minimizing service-level recovery time and avoiding outages.
Comparison of Different Hypervisors
Introduction
• To find the best hypervisor technology, first decide whether you want a hosted or bare-metal virtualization hypervisor.
• The choice of hypervisor does not only apply to an enterprise’s private data center: different cloud services make use of different virtualization platforms.
• Amazon EC2, the largest infrastructure cloud, uses Xen as a hypervisor, but Microsoft Azure uses Hyper-V and VMware partners use ESX. Recently, Google launched its own IaaS cloud that uses KVM as a hypervisor.
• Once you choose the type of hypervisor that fits your needs, you need to choose the best hypervisor technology.
Bare-metal virtualization hypervisors
• VMware has the most mature hypervisor technology by far, offering advanced features and scalability.
• The vendor does offer a free version of ESXi, but it’s very limited and has none of the advanced features of the paid editions.
• VMware also offers lower-cost bundles that can make hypervisor technology more affordable for small infrastructures.
Bare-metal virtualization hypervisors
Microsoft Hyper-V
• Hyper-V lacks many of the advanced features that VMware’s broad product line provides.
• But with its tight Windows integration, Microsoft’s hypervisor technology may be the best hypervisor for organizations that don’t require a lot of bells and whistles.
Bare-metal virtualization hypervisors
Citrix XenServer
• The core hypervisor technology is free, but like VMware’s free ESXi, it has almost no advanced features.
• Citrix has several paid editions of XenServer that offer advanced management, automation, and availability features.
Oracle VM
• Oracle VM is Oracle’s homegrown hypervisor technology based on open source Xen.
• If you want hypervisor support and product updates, though, it will cost you.
• One advantage of Oracle VM, though, is that it’s certified with most of Oracle’s other products and therefore includes no-hassle support.
Hosted virtualization hypervisors
VMware Workstation/Fusion/Player
• VMware Player can only run a single virtual machine (VM) and does not allow you to create VMs.
• VMware Workstation is a more robust hypervisor with some advanced features, such as record-and-replay and VM snapshot support, making it well suited for developers who need sandbox environments and snapshots, or for labs and demonstration purposes.
• VMware Fusion is the Mac version of Workstation, which only costs $89 but lacks some of the features and abilities of Workstation.
• This hypervisor technology is better suited for running Windows and Linux on Macs.
Hosted virtualization hypervisors
VMware Server
• VMware Server is a free, hosted virtualization hypervisor that’s very similar to VMware Workstation.
• However, VMware Server lacks some of the features of Workstation and only supports a single snapshot per VM.
• VMware has halted development on Server since 2009, but it works well as a no-frills hosted hypervisor and is an easy alternative to using the free version of ESXi.
Hosted virtualization hypervisors
Oracle VM VirtualBox
• Despite being a free, hosted product with a very small footprint, VirtualBox shares many features with VMware vSphere and Microsoft Hyper-V.
Red Hat KVM
• Red Hat’s Kernel-based Virtual Machine (KVM) has qualities of both a hosted and a bare-metal virtualization hypervisor.
• KVM turns the Linux kernel itself into a hypervisor, so VMs have direct access to the physical hardware.
• This hypervisor technology is not free, however, and while KVM has enterprise features and scalability, it lacks some of the more advanced features and application programming interfaces that VMware and Microsoft offer.
Hosted virtualization hypervisors
Parallels Desktop
• Parallels is known for its popular Parallels Desktop for Mac hypervisor, which is very similar to VMware Fusion.
• Parallels also has a desktop version of its hypervisor technology that runs on both Windows and Linux.
• Plus, it has a more powerful edition called Parallels Server for Mac, which has greater scalability and more advanced features.
• Parallels’ hypervisors are also pretty mature, having been first launched in 2005. They offer a very low-cost, feature-rich hosted hypervisor that can be used for a variety of purposes.
Introduction to Virtual Machines
• The key to managing complexity in computer systems is their division into levels of abstraction separated by well-defined interfaces.
• The details of a hard disk, for example, that it is composed of sectors and tracks, are abstracted by the operating system so the disk appears to application software as a set of variable-sized files.
• An application programmer can then create, write, and read files, without knowledge of the way the hard disk is constructed and organized.
Introduction to Virtual Machines
• The levels of abstraction are arranged in a hierarchy, with lower levels implemented in hardware and higher levels in software.
• In the hardware levels, all the components are physical, have real properties, and their interfaces are defined so that the various parts can be physically connected.
• In the software levels, components are logical, with fewer restrictions based on physical characteristics.
• We are most concerned with the abstraction levels that are at or near the hardware/software boundary.
• These are the levels where software is separated from the machine on which it runs.
Introduction to Virtual Machines
• From the perspective of the operating system, a machine is largely composed of hardware, including one or more processors that run a specific instruction set, some real memory, and I/O devices.
• From the perspective of application programs, the machine is a combination of the operating system and those portions of the hardware accessible through user-level binary instructions.
Introduction to Virtual Machines
• Let us now turn to the other aspect of managing complexity: the use of well-defined interfaces.
• Well-defined interfaces allow computer design tasks to be decoupled so that teams of hardware and software designers can work more or less independently.
• The instruction set is one such interface. Processor designers, say at Intel, develop microprocessors that implement the Intel IA-32 instruction set, while software engineers at Microsoft develop compilers that map high-level languages to the same instruction set.
• As long as both groups satisfy the instruction set specification, compiled software will execute correctly on a machine incorporating an IA-32 microprocessor.
Introduction to Virtual Machines
• As the Intel/Microsoft example suggests, well-defined interfaces permit development of interacting computer subsystems at different companies, and at different times, sometimes years apart.
• Application software developers do not need to be aware of detailed changes inside the operating system, and hardware and software can be upgraded according to different schedules.
• Software can run on different platforms implementing the same instruction set.
Introduction to Virtual Machines
• Despite their many advantages, well-defined interfaces can also be confining. Subsystems and components designed to specifications for one interface will not work with those designed for another.
• There are processors with different instruction sets (e.g., Intel IA-32 and IBM PowerPC), and there are different operating systems (e.g., Windows and Linux).
Introduction to Virtual Machines
• Many operating systems are developed for a specific system architecture, e.g., for a uniprocessor or a shared-memory multiprocessor, and are designed to manage hardware resources directly.
• The implicit assumption is that the hardware resources of a system are managed by a single operating system.
• This binds all hardware resources into a single entity under a single management regime.
• And this, in turn, limits the flexibility of the system, not only in terms of available software (as discussed above), but also in terms of security and failure isolation, especially when the system is shared by multiple users or groups of users.
Introduction to Virtual Machines
• Virtualization provides a way of relaxing the above constraints and increasing flexibility.
• When a system (or subsystem), e.g., a processor, memory, or I/O device, is virtualized, its interface and all resources visible through the interface are mapped onto the interface and resources of a real system actually implementing it.
• Consequently, the real system is transformed so that it appears to be a different, virtual system, or even a set of multiple virtual systems.
• Formally, virtualization involves the construction of an isomorphism that maps a virtual guest system to a real host.
Introduction to Virtual Machines
• Consider again the example of a hard disk. In some applications, it may be desirable to partition a single large hard disk into a number of smaller virtual disks.
• The virtual disks are mapped to a real disk by implementing each of the virtual disks as a single large file on the real disk.
• Virtualizing software provides a mapping between virtual disk contents and real disk contents (the function V in the isomorphism) using the file abstraction as an intermediate step.
• Each of the virtual disks is given the appearance of having a number of logical tracks and sectors (although fewer than in the large disk).
• A write to a virtual disk (the function e in the isomorphism) is mirrored by a file write and a corresponding real disk write in the host system (the function e' in the isomorphism).
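A minimal sketch of this virtual-disk isomorphism, with each virtual disk backed by a single file on the real disk; the sector size and file name are arbitrary assumptions:

```python
# Each virtual disk is a file on the real disk; the mapping
# (virtual sector n) -> (byte offset n * SECTOR in the backing file)
# plays the role of the function V, and a guest write (e) becomes a
# file write on the host (e').
import os

SECTOR = 512  # bytes per virtual sector

class VirtualDisk:
    def __init__(self, backing_file: str, sectors: int):
        self.path = backing_file
        self.sectors = sectors
        if not os.path.exists(backing_file):
            with open(backing_file, "wb") as f:
                f.truncate(sectors * SECTOR)  # sparse backing file

    def write_sector(self, n: int, data: bytes):
        assert 0 <= n < self.sectors and len(data) == SECTOR
        with open(self.path, "r+b") as f:
            f.seek(n * SECTOR)  # V: virtual sector -> file offset
            f.write(data)       # e': the mirrored host write

    def read_sector(self, n: int) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(n * SECTOR)
            return f.read(SECTOR)

vd = VirtualDisk("vdisk0.img", sectors=128)
vd.write_sector(5, b"\xab" * SECTOR)
assert vd.read_sector(5) == b"\xab" * SECTOR
```

The guest sees an ordinary disk of 128 sectors; the host sees an ordinary file, which is exactly the two sides of the isomorphism.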
Introduction to Virtual Machines
• The concept of virtualization can be applied not only to subsystems such as disks, but to an entire machine.
• For example, virtualizing software installed on an Apple Macintosh can provide a Windows/IA-32 virtual machine capable of running PC application programs.
• In general, a virtual machine can circumvent real machine compatibility constraints and hardware resource constraints to enable a higher degree of software portability and flexibility.
Introduction to Virtual Machines
• There is a wide variety of virtual machines that provide an equally wide variety of benefits.
• A large multiprocessor server can be divided into smaller, virtual servers while retaining the ability to balance the use of hardware resources across the system.
Introduction to Virtual Machines
• Virtual machines can also employ emulation techniques to support cross-platform software compatibility.
• For example, a platform implementing the PowerPC instruction set can be converted into a virtual platform running the IA-32 instruction set. Consequently, software written for one platform will run on the other.
• This compatibility can be provided either at the system level (e.g., to run Windows OS on a Macintosh) or at the program or process level (e.g., to run Excel on a Sun Solaris/SPARC platform).
• In addition to emulation, virtual machines can also provide dynamic, on-the-fly optimization of program binaries.
• Finally, through emulation, virtual machines can enable new, proprietary instruction sets, e.g., incorporating VLIWs, while supporting programs in an existing, standard instruction set.
Introduction to Virtual Machines
• Virtual machines have been investigated and built by operating system developers, language designers, compiler developers, and hardware designers.
• Although each application of virtual machines has its unique characteristics, there are also underlying concepts and technologies that are common across the spectrum of virtual machines.
• Because the various virtual machine architectures and underlying technologies have been developed by different groups, it is especially important to unify this body of knowledge and understand the base technologies that cut across the various forms of virtual machines.
• The goals are to describe the family of virtual machines in a unified way and to discuss the common underlying technologies.
• There are two parts of an ISA that are important in the definition of virtual machines.
• The first part includes those aspects of the ISA that are visible to an application program. This is the user ISA.
• The second part includes those aspects that are visible only to supervisor software, such as the operating system, which is responsible for managing hardware resources. This is the system ISA. Of course, the supervisor software can also employ all the elements of the user ISA.
• In Figure 4, the ABI includes the user ISA only, while the ISA interface consists of both the user and system ISA.
• The Application Binary Interface (ABI) provides a program with access to the hardware resources and services available in a system and has two major components.
• The first is the set of all user instructions; system instructions are not included in the ABI.
• At the ABI level, all application programs interact with the shared hardware resources indirectly, by invoking the operating system via a system call interface, which is the second component of the ABI.
• System calls provide a specific set of operations that an operating system may perform on behalf of a user program.
• The Application Programming Interface (API) is usually defined with respect to a high-level language (HLL).
• A key element of an API is a standard library (or libraries) that an application calls to invoke various services available on the system, including those provided by the operating system.
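The API/ABI distinction can be made concrete in a short sketch: the same bytes can reach a file descriptor through a high-level library call (the API) or through a thin wrapper over the underlying system call, which is part of the ABI. The messages below are arbitrary:

```python
# API level vs ABI level for writing to standard output.
import os
import sys

# API level: call a standard-library routine; stdio buffers and
# formats the text and eventually issues the system call for us.
print("via the API")

# ABI level: invoke the write system call on stdout's file
# descriptor directly (os.write wraps the write(2) system call).
sys.stdout.flush()  # keep ordering relative to buffered stdio
msg = b"via the system call interface\n"
n = os.write(1, msg)
assert n == len(msg)
```

The program-level virtual machines discussed later intercept exactly one of these two boundaries: the ABI for process VMs, the API for HLL VMs such as Java.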
Major Program Interfaces
• ISA interface: supports all conventional software.
[Figure: application software invokes the operating system through system calls and uses the user ISA directly; the operating system uses both the system ISA and the user ISA]
Virtualization
• An isomorphism from guest to host:
– Map guest state to host state (the function V)
– Implement “equivalent” functions
[Figure: a guest transition e(Si) from state Si to Sj corresponds, under the mapping V, to a host transition e'(Si') from V(Si) = Si' to V(Sj) = Sj']
Virtualization
• Similar to abstraction, except that in virtualization the details are not necessarily hidden.
[Figure: virtual disks implemented as files on a real disk]
The “Machine”
• Different perspectives on what the machine is:
– To the operating system: the execution hardware, including memory translation, the system interconnect, I/O devices, main memory, and networking.
– To application programs: the user ISA plus OS calls, accessed through libraries.
[Figure: application programs and libraries layered above the execution hardware]
Virtual Machines
• Add virtualizing software to a host platform and support a guest process or system on a Virtual Machine (VM).
[Figure: the virtualizing software (VMM) runs on the host hardware and presents a virtual machine to the guest]
The Family of Virtual Machines
• Lots of things are called “virtual machines”: IBM VM/370, Java, VMware.
• As characterized by the isomorphism described earlier, the process of virtualization consists of two parts:
• 1) the mapping of virtual resources or state, e.g., registers, memory, or files, to real resources in the underlying machine, and
• 2) the use of real machine instructions and/or system calls to carry out the actions specified by virtual machine instructions and/or system calls, e.g., emulation of the virtual machine ABI or ISA.
Process VMs
• Just as there is a process perspective and a system perspective of machines, there are also process-level and system-level virtual machines.
• As the name suggests, a process virtual machine is capable of supporting an individual process.
• In process VMs, the virtualizing software is placed at the ABI interface, on top of the OS/hardware combination.
• The virtualizing software emulates both user-level instructions and operating system calls.
Process VMs
• We usually refer to the underlying platform as the host, and the software that runs in the VM environment as the guest.
• The real platform that corresponds to a virtual machine, i.e., the real machine being emulated by the virtual machine, is referred to as the native machine.
• The name given to the virtualizing software depends on the type of virtual machine being implemented.
• In process VMs, virtualizing software is often referred to as the runtime, which is short for “runtime software”.
• The runtime is created to support a guest process and runs on top of an operating system. The VM supports the guest process as long as the guest process executes, and terminates support when the guest process terminates.
Process VMs
• Execute application binaries with an ISA different from the hardware platform
• Couple at the ABI level via a Runtime System
• Examples: IA-32 EL, FX!32
[Figure: Guest Application Process on a Virtual Machine; the Runtime (virtualizing software) sits over the OS and Hardware of the Host Machine]
System Virtual Machines
• A system virtual machine provides a complete system environment.
• This environment can support an operating system along with its potentially many user processes.
• It provides a guest operating system with access to underlying hardware resources, including networking, I/O, and, on the desktop, a display and graphical user interface.
• The VM supports the operating system as long as the system environment is alive.
• Virtualizing software is placed between the underlying hardware machine and conventional software.
• In this particular example, virtualizing software emulates the hardware ISA so that conventional software “sees” a different ISA than the one supported by hardware.
• In many system VMs the guest and host run the same ISA, however. In system VMs, the virtualizing software is often referred to as the Virtual Machine Monitor (VMM), a term coined when the VM concept was first developed in the late 1960s.
System Virtual Machines
• In this example (Figure 8a), one ISA is emulated by another. Virtualizing software can enhance emulation with optimization, by taking implementation-specific information into consideration as it performs emulation.
• Virtualizing software can also provide resource replication, for example by giving a single hardware platform the appearance of multiple platforms (Figure 8b), each capable of running a complete operating system and/or a set of applications.
• Finally, the virtual machine functions can be composed (Figure 8c) to form a wide variety of architectures, freed of many of the traditional compatibility and resource constraints.
Process VMs
• Process-level VMs provide user applications with a virtual ABI environment.
• The combination of the OS call interface and the user instruction set forms the machine that executes a user process.
• The operating system timeshares the hardware and manages underlying resources to make this possible.
• In effect, the operating system provides a replicated process-level virtual machine for each of the concurrently executing applications.
Emulators and Dynamic Binary Translators
• A more challenging problem for process-level virtual machines is to support program binaries compiled to a different instruction set than the one executed by the host’s hardware, i.e., to emulate one instruction set on hardware designed for another.
• The example illustrates the Digital FX!32 system (Hookway and Herdeg 1997).
• The FX!32 system can run Intel IA-32 application binaries compiled for Windows NT on an Alpha hardware platform also running Windows NT.
• An interpreter program executing on the target ISA fetches, decodes, and emulates the execution of individual source instructions.
• This can be a relatively slow process, requiring tens of native target instructions for each source instruction interpreted.
• For better performance, binary translation is typically used.
• With binary translation, blocks of source instructions are converted to target instructions that perform equivalent functions.
• There can be a relatively high overhead associated with the translation process, but once a block of instructions is translated, the translated instructions can be cached and repeatedly executed much faster than they can be interpreted.
• Because binary translation is the most important feature of this type of process virtual machine, such VMs are sometimes called dynamic binary translators.
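The translate-once, execute-many flow described above can be sketched as a toy. This is an illustration only, not the FX!32 implementation: the “source ISA” here is an invented two-instruction language, and `translate_block` stands in for a real binary translator.

```python
# Toy sketch of a dynamic binary translator with a translation cache.
# The "source ISA" is invented for illustration; real systems (e.g.,
# FX!32) translate actual IA-32 blocks into Alpha code.

def translate_block(block):
    """'Translate' a block of source instructions into a callable."""
    ops = []
    for op, arg in block:
        if op == "add":
            ops.append(lambda acc, n=arg: acc + n)
        elif op == "mul":
            ops.append(lambda acc, n=arg: acc * n)
    def translated(acc):
        for f in ops:
            acc = f(acc)
        return acc
    return translated

translation_cache = {}  # block id -> translated code, kept for reuse

def execute(blocks, acc=0):
    """Execute source blocks; each block pays translation cost once."""
    for block_id, block in blocks:
        if block_id not in translation_cache:      # slow path: translate
            translation_cache[block_id] = translate_block(block)
        acc = translation_cache[block_id](acc)     # fast path: cached code
    return acc

# Hot loop: block "b1" is translated once but executed three times.
program = [("b1", [("add", 2), ("mul", 3)])] * 3
print(execute(program, acc=1))  # 105
```

The cache is what makes translation pay off: the per-block translation overhead is amortized over every subsequent execution of that block.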
Same-ISA Binary Optimizers
• Most dynamic binary translators not only translate from source to target code, but they also perform some code optimizations.
• This leads naturally to virtual machines where the instruction sets used by the host and the guest are the same, and optimization of a program binary is the primary purpose of the virtual machine.
• Thus, same-ISA dynamic binary optimizers are implemented in a manner very similar to emulating virtual machines, including staged optimization and software caching of optimized code.
High Level Language Virtual Machines
Traditional HLL VM
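The slide’s diagram did not survive extraction. As a minimal illustration of the HLL VM idea, a portable virtual ISA executed by a stack-based interpreter, here is a toy sketch; the opcodes are invented for illustration and are not real JVM bytecodes.

```python
# Toy sketch of an HLL VM core: a stack-based bytecode interpreter,
# loosely in the style of the JVM's operand-stack model.

def run(bytecode):
    """Execute a list of (opcode, operand) pairs on an operand stack."""
    stack = []
    for op, arg in bytecode:
        if op == "push":          # push a constant
            stack.append(arg)
        elif op == "add":         # pop two, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":         # pop two, push product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, expressed as platform-independent "virtual ISA" code:
program = [("push", 2), ("push", 3), ("add", None),
           ("push", 4), ("mul", None)]
print(run(program))  # 20
```

The same bytecode runs unchanged on any host that provides the interpreter, which is the portability argument behind traditional HLL VMs such as the JVM.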
Co-Designed VMs
• Perform both translation and optimization
• VM provides the interface between standard-ISA software and the implementation ISA
• Primary goal is performance or power efficiency
[Figure: x86 Apps and Windows running over a co-designed VM on VLIW hardware; multiple (apps, OS) stacks over ISA 2]
Composition: Example
[Figure: composed VM stack, top to bottom: Java application, JVM, Linux x86, VMware, Windows x86, Code Morphing, Crusoe VLIW]
Summary (Taxonomy)
• VM type: Process or System
• Host/Guest ISA: same or different
[Figure: taxonomy tree splitting Process VMs and System VMs]
Disaster Recovery (DR)
• DR is the coordinated process of restoring systems, data, and the infrastructure required to support key ongoing business operations in the event of a disaster.
• It is the process of restoring and/or resuming business operations from a consistent copy of the data.
• After all recoveries are completed, the data is validated to ensure that it is correct.
Hot site:
• A site equipped with all the required hardware, operating system, application, and network support needed to perform business operations, where the equipment is available and running at all times.
Cold site:
• A site to which an enterprise’s operations can be moved in the event of a disaster.
• It has minimum IT infrastructure and environmental facilities in place, but they are not activated.
Cluster:
• A group of servers and other necessary resources, coupled to operate as a single system.
• Clusters ensure high availability and load balancing.
• Typically, in failover clusters, one server runs an application and updates the data, and the other is kept as a standby to take over completely when required. In more sophisticated clusters, multiple servers may access data, while typically one server is kept as a standby.
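The standby-takes-over behavior of a failover cluster can be sketched as follows. This is a minimal illustration, not any real clustering product: the class name, heartbeat interval, and missed-heartbeat threshold are all assumptions made for the example.

```python
# Minimal sketch of failover-cluster logic: the standby node promotes
# itself to active when the active node misses too many heartbeats.
# Threshold and names are illustrative, not from a real product.

MISSED_LIMIT = 3           # heartbeats missed before failover triggers

class StandbyNode:
    def __init__(self):
        self.missed = 0
        self.role = "standby"

    def on_heartbeat(self):
        self.missed = 0            # active node is alive; reset counter

    def on_timeout(self):
        self.missed += 1
        if self.missed >= MISSED_LIMIT and self.role == "standby":
            self.role = "active"   # take over the application and data

node = StandbyNode()
node.on_heartbeat()        # active node healthy
for _ in range(3):         # active node goes silent
    node.on_timeout()
print(node.role)  # active
```

Real failover clusters add fencing and shared-storage arbitration on top of this basic heartbeat logic, so that a network partition cannot leave two nodes both acting as active.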
RTO and RPO
Recovery Point Objective (RPO):
• Point in time to which systems and data must be recovered after an outage
• Amount of data loss that a business can endure
Recovery Time Objective (RTO):
• Time within which systems, applications, or functions must be recovered after an outage
• Amount of downtime that a business can endure and survive
[Figure: recovery strategy ladders from Weeks down to Minutes. RPO side: Tape Backup, Periodic Replication, Asynchronous Replication. RTO side: Tape Restore, Disk Restore, Manual Migration]
RPO
• For example, if the RPO is six hours, backups or replicas must be made at least once every six hours.
• The figure shows various RPOs and their corresponding ideal recovery strategies.
• An organization may plan for an appropriate BC technology solution on the basis of the RPO it sets.
• For example, if the RPO is 24 hours, backups are created on an offsite tape drive every midnight.
• The corresponding recovery strategy is to restore data from the set of last backup tapes. Similarly, for zero RPO, data is mirrored synchronously to a remote site.
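The six-hour example above amounts to a simple check: the newest recovery point must be no older than the RPO. A minimal sketch (function and variable names are assumptions for illustration):

```python
# Sketch: checking whether the latest recovery point satisfies an RPO.
# If the RPO is six hours, a backup or replica must exist within the
# last six hours; otherwise more data could be lost than allowed.

from datetime import datetime, timedelta

def rpo_satisfied(last_backup: datetime, rpo: timedelta,
                  now: datetime) -> bool:
    """True if the newest recovery point is no older than the RPO."""
    return now - last_backup <= rpo

now = datetime(2024, 1, 1, 12, 0)
# Backup 5 hours old: within a 6-hour RPO.
print(rpo_satisfied(datetime(2024, 1, 1, 7, 0), timedelta(hours=6), now))
# Backup 7 hours old: violates a 6-hour RPO.
print(rpo_satisfied(datetime(2024, 1, 1, 5, 0), timedelta(hours=6), now))
```

Monitoring systems typically run exactly this comparison on a schedule and alert when a backup job has been missed long enough to breach the RPO.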
RTO
• For example, if the RTO is two hours, use a disk backup because it enables a faster restore than a tape backup.
• However, for an RTO of one week, tape backup will most likely meet requirements.
• A few examples of RTOs and the recovery strategies to ensure data availability are listed below.
• The backup copy is used when the primary copy is lost or corrupted.
[Figure: incremental backup schedule across several weeks (Su M T W T F S)]
• Storage node
– Responsible for writing data to the backup device
• Backup device
– Stores backup data
[Figure: Application Server/Backup Client and Backup Server/Storage Node connected to a backup device]
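The incremental backup schedule shown above boils down to selecting only what changed since the last backup. A minimal sketch (file paths and timestamps are invented for illustration, with mtimes as plain integers rather than real filesystem stats):

```python
# Sketch of incremental backup selection: a full backup copies
# everything; each incremental copies only files modified since the
# previous backup. Files are modeled as a path -> mtime dict.

def incremental_backup(files, last_backup_time):
    """Return the subset of files modified after the last backup."""
    return {path: mtime for path, mtime in files.items()
            if mtime > last_backup_time}

files = {"/etc/hosts": 100, "/var/log/app.log": 250, "/home/a.txt": 180}
# A full backup at t=0 captures all three files; an incremental run
# at t=200 captures only what changed afterwards.
print(incremental_backup(files, last_backup_time=200))
# {'/var/log/app.log': 250}
```

Restoring from incrementals requires the last full backup plus every incremental since, which is why restore (RTO) is slower even though each backup window is shorter.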
Classic Data Center
Backup and Restore Operation
• Backup operation
– Backup server initiates a scheduled backup
• Instructs the storage node to load backup media and instructs clients to send backup data to the storage node
• Storage node sends backup data to the backup device and media information to the backup server
• Backup server updates the catalog and records the status
• Restore operation
– Backup client initiates the restore
• Backup server scans the backup catalog to identify the data to be restored and the client that will receive the data
• Backup server instructs the storage node to load backup media
• Storage node restores the backup data to the client and sends metadata to the backup server
Backup Optimization: Deduplication
Deduplication: technology that conserves storage capacity and/or network traffic by eliminating duplicate data.
• When duplicate data is detected, it is not retained; instead, a “data pointer” is modified so that the storage system references an exact copy of that data already stored on disk.
• Deduplication can occur close to where the data is created, which is often referred to as “source-based deduplication”.
• It can also occur close to where the data is stored, which is commonly called “target-based deduplication”.
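The pointer-based mechanism described above can be sketched with content hashing. This is a generic illustration, not any vendor’s implementation: data is split into fixed-size chunks, each chunk is identified by its SHA-256 digest, and a duplicate chunk is stored as a pointer rather than a second copy.

```python
# Generic sketch of deduplicated storage: fixed-size chunks keyed by
# content hash. A duplicate chunk costs only a pointer (its digest),
# not a second copy of the data.

import hashlib

CHUNK_SIZE = 4   # tiny for illustration; real systems use KB-sized chunks

store = {}       # digest -> chunk bytes; each unique chunk kept once

def write(data: bytes) -> list[str]:
    """Store data, returning the list of chunk pointers (digests)."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:        # new data: retain the chunk
            store[digest] = chunk
        pointers.append(digest)        # duplicate: pointer only
    return pointers

def read(pointers: list[str]) -> bytes:
    """Reassemble the original data by following the chunk pointers."""
    return b"".join(store[p] for p in pointers)

ptrs = write(b"ABCDABCDABCD")          # three identical 4-byte chunks
print(len(ptrs), len(store))           # 3 pointers, 1 stored chunk
```

Run at the client, this logic gives source-based deduplication (only new digests cross the network); run at the backup device, it gives target-based deduplication.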
Benefits of Deduplication
• By eliminating redundant data, far less infrastructure is required to hold the backup images
– Lowers infrastructure costs
• In source-based deduplication, data is deduplicated at the start of the backup process, before it is transmitted to the backup environment.
• Source-based deduplication can radically reduce the amount of backup data sent over networks during backup processes.
• Source-based deduplication increases the overhead on the backup client, which impacts backup performance.
Where Does Deduplication Occur?
Source-based deduplication:
• Data is deduplicated at the source (backup client)
• Backup client sends only new, unique segments across the network to the backup device
• Reduced storage capacity and network bandwidth requirements, but increased overhead on the backup client
Target-based deduplication:
• Data is deduplicated at the target (backup device)
• Backup client sends native data to the backup device
• Increased network bandwidth and storage capacity requirements
[Figure: in both configurations, the backup client sends data over a SAN to the backup device]
Business Continuity in VDC
Backup Operation
• A backup operation in a VDC environment often requires backing up the VM state.
• The VM state includes the state of its virtual disks, memory (i.e., RAM), network configuration, and power state (on, off, or suspended).
• A virtual disk includes all the information typically backed up (OS, applications, and data).
Image-Based Backup
• Creates a copy of the guest OS, its data, VM state, and configurations
• The backup is saved as a single file, an “image”
• Backup server creates the backup copies and offloads backup processing from the hypervisor
• Restores directly at the VM level only
• Operates at the hypervisor level
• Mounts the image on the backup server
[Figure: physical server and SAN storage backed up via a backup server to backup disk/tape]
Backup Considerations in a VDC
• Reduced computing resources
– Existence of multiple VMs running on the same physical machine leaves fewer resources available for the backup process
• Complex VM configurations
– A backup agent running on a VM has no access to VM configuration files
– It is not possible for a backup agent running at the hypervisor level to access storage directly attached to a VM using RDM
Backup Optimization: Deduplication
• Backup images of VM disk files are candidates for deduplication in a VDC environment
• Deduplication types and methods are the same as those employed in a CDC
Restoring a VM
• Restore the VM to a required state using the backup
– Selection of the restore point depends upon the RPO
• Steps for the restore process
– Selection of the VM and virtual disks to restore from backup
– Selection of the destination
– Configuration settings
• Restoring a VM may take significantly fewer steps, compared to recovering a physical machine
[Figure: restoring a physical machine requires Configure Hardware, Install OS, Configure OS, Install applications/data from backup, and Start; restoring a virtual machine is a “single-step automated recovery”: Restore VM, then Power on VM]
Puppet
• A system administrator’s job primarily consists of configuring, deploying, and maintaining server machines.
• Some of these tasks are very challenging and interesting, but most of the daily routine consists of boring and repetitive tasks.
• Almost all system administrators try to get rid of those repetitive, boring tasks by scripting and automating them. But there are issues with scripting and automation as well.
• Scripts that are custom-made to solve or automate a task are seldom documented, published, or announced.
• The main disadvantage of this is that when it comes to a larger infrastructure with different platforms to deploy and manage, these scripts do not serve the purpose.
• In other words, Puppet can be used for the entire life of a server, from bootstrapping it to shredding and giving it up.
• To give you an overview: you can define a distinct configuration for each and every host using Puppet, and continuously check and confirm that the required configuration is in place and has not been altered (if altered, Puppet will revert to the required configuration) on the host.
• Puppet keeps the configuration of your hosts in check, and can be used in one shot to configure a machine from scratch (installing packages, editing and configuring files, creating and managing required users, etc.).
• The main added advantage is that you can manage the configuration of almost all the open-source tools available out there using Puppet.
Who made Puppet and who supports it?
• Puppet was made by Luke Kanies. It is written in the Ruby language. Puppet is currently supported by Puppet Labs (Luke Kanies is the CEO of Puppet Labs). Puppet is licensed under GPLv2.
• If they are different, our software may not work the way we expect it to behave.
• Virtual servers and physical servers have the same software and OS configuration, and even the deployment software is repeated in a consistent manner.
• When the environment is repeated and consistent, we can start to build on the environment, modify it, and make changes without worrying about compatibility issues from one system to another, or on one server over time.
• AWS allows us to create an environment where we can adjust our server infrastructure to the incoming traffic of a website.
• When we see our website traffic increasing, we can auto-provision brand new servers to handle more and more web traffic.
• AWS allows us to scale, but it is Puppet that drives the installation and configuration of software on those web servers, keeping those web servers consistent even after software updates.
Why Puppet
• Puppet is one choice among many others: Ansible, Salt, Chef, and more.
• The Puppet language is declarative (it describes the end state of the system; resources that are defined can depend on each other).
• The Puppet runtime uses a resource abstraction layer to ensure we can define a resource and not be concerned about the individual commands used to create that resource at run time.
• E.g., a definition of a web directory can apply on Linux, Mac, and Windows.
• A second snippet would identify the local network host, so the virtual host on the web app uses the local IP.
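The two snippets referenced above did not survive extraction. As an illustrative sketch only (the path, owner, mode, and hostname are assumptions, not the original slide’s code), declarative Puppet resources for a web directory and a local host entry might look like:

```puppet
# Illustrative sketch, not the original slide's snippet. Puppet's
# resource abstraction layer maps this declaration to the right
# commands on each platform at run time (owner/mode values shown
# here are Linux-flavored).
file { '/var/www/site':
  ensure => directory,
  owner  => 'www-data',
  mode   => '0755',
}

# Hypothetical host entry pointing a local web-app name at the
# loopback IP, as the second snippet is described to do.
host { 'webapp.local':
  ip => '127.0.0.1',
}
```

On each run the agent compares the declared state with the actual state and changes only what differs, which is how Puppet reverts drift back to the required configuration.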