Virtualization Power Point Presentation


VIRTUALIZATION



Objective of this course

To understand the working of a Virtual Machine.

Outcome of this course
1. The different types of Virtualization.
2. When and how Virtualization should be attempted.
3. The inner working of a Virtual Machine and its management.
4. How commercial Virtual Machines are implemented.

SYLLABUS
Unit I: Introduction

Virtualization overview – Benefits – Need of virtualization – Limitations – Traditional vs. Contemporary Virtualization – Pitfalls of virtualization – Hypervisors – Comparing today's Hypervisors – Virtualization considerations for Cloud providers
Unit II: Types of Virtualization

Types of hardware virtualization: Full virtualization – Para virtualization – Desktop virtualization – Server virtualization – Data virtualization – OS level virtualization – Application level virtualization – Comparing Virtualization approaches – Managing heterogeneous virtualization environments – Customizing and modifying virtualization – Advanced virtualization – Case Studies.
Unit III: Virtual Machines

Understanding virtual machines – Taxonomy of virtual machines – Life cycle – Process and system level virtual machines – Emulation – Binary translation techniques – Managing storage for virtual machines – Virtualising storage – Backup and recovery of virtual machines – Applications of virtual machines.
Unit IV: Hypervisors

Building and managing Virtual machines – Xen Hypervisor and its Architecture – VMware vSphere – Kernel Virtual Machine (KVM) – Microsoft Hyper-V – VirtualBox.
Unit V: Automation & Management

Cloud Management reference architecture – Data Center challenges and solutions – Goals of automating virtualization management – Automating the Data Center – Benefits of data center automation – Virtualization for autonomic service provisioning – Virtualization Management – Evaluating virtualization management solutions – Tools for automation: Puppet, Chef.
Before Starting

What is hardware?
What is Software?
What is an OS?
Abstract view of a Computer System?
What is a Server?
What is Virtual Memory?
The world is getting smarter

Smart traffic systems – Intelligent oil field technologies – Smart healthcare – Smart energy grids – Smart water management – Smart weather systems – Smart cities – Smart food systems
Introduction - Virtualization
• Virtualization is a way to run multiple operating systems and user applications on the same hardware
– E.g., run both Windows and Linux on the same laptop
• How is it different from dual-boot?
– Both OSes run simultaneously
– The OSes are completely isolated from each other
What is virtualization?
Virtualization is a broad term (virtual memory, storage, network, etc.)
Focus for this course: platform virtualization
Virtualization basically allows one computer to do the job of multiple computers, by sharing the resources of a single piece of hardware across multiple environments

[Diagram: 'Nonvirtualized' vs virtualized system]
'Nonvirtualized' system: applications A–D run on a single OS that controls all hardware platform resources.
Virtualized system: a virtualization layer on the hardware makes it possible to run multiple Virtual Containers (each with its own OS and applications) on a single physical platform.
Evolution of Virtualization
Computing Infrastructure – 2000

• 1 machine → 1 OS → several applications
• Applications can affect each other
• Big disadvantage: machine utilization is very low, most of the time it is below 25%

[Diagram: four x86 servers running Windows XP, Windows 2003, SUSE and Red Hat at 12%, 15%, 18% and 10% hardware utilization respectively]
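The slide's figures make the consolidation case with simple arithmetic. A minimal sketch (the 70% figure quoted on a later slide also reflects headroom and hypervisor overhead, which this ignores):

```python
# Hardware utilization of the four standalone servers from the slide (percent).
standalone = [12, 15, 18, 10]

# Averaged, each machine is mostly idle.
average = sum(standalone) / len(standalone)

# Stacking the same workloads on one host (ignoring hypervisor overhead),
# the single machine would run at the sum of the individual loads.
consolidated = sum(standalone)

print(f"average standalone utilization: {average}%")  # 13.75%
print(f"one consolidated host: {consolidated}%")      # 55%
```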
Evolution of Virtualization

x86 server deployments introduced new IT challenges:
• Low server infrastructure utilization (10–18%)
• Increasing physical infrastructure costs (facilities, power, cooling, etc.)
• Increasing IT management costs (configuration, deployment, updates, etc.)
• Insufficient failover and disaster protection

The solution to all these problems was to virtualize x86 platforms
Evolution of Virtualization
Computing Infrastructure - Virtualization

• It matches the benefits of high hardware utilization with running several operating
systems (applications) in separated virtualized environments
– Each application runs in its own operating system
– Each operating system does not know it is sharing the underlying hardware
with others

[Diagram: applications A–D, each in its own OS (Windows XP, Windows 2003, SUSE Linux, Red Hat Linux), running side by side on one multi-core, multi-processor x86 server at 70% hardware utilization]


Hardware vs Virtualization
Need for virtualization
 Fully utilize hardware resources

 Running heterogeneous and conflicting environments

 Isolation

 Manageability

 Reduced Power requirements

 Reduced ownership cost

 Server consolidation

o Run a web server and a mail server on the same physical server

 Easier development

o Develop critical operating system components (file system, disk driver) without affecting computer stability
Hypervisor
• In computing, a hypervisor (also: virtual machine monitor) is a virtualization platform that allows multiple operating systems to run on a host computer at the same time. The term usually refers to an implementation using full virtualization.
Two types of hypervisors
• Definitions
– Hypervisor (or VMM – Virtual Machine Monitor) is a software layer that allows several virtual machines to run on a physical machine
– The physical OS and hardware are called the Host
– The virtual machine OS and applications are called the Guest

Type 1 (bare-metal): guest VMs run on the hypervisor, which runs directly on the host hardware.
Examples: VMware ESX, Microsoft Hyper-V, Xen

Type 2 (hosted): guest VMs run on a hypervisor that itself runs as a process on a host OS.
Examples: VMware Workstation, Microsoft Virtual PC, Sun VirtualBox, QEMU, KVM
Hypervisor
• Hypervisors are currently classified in two types:

– A Type 1 hypervisor (or Type 1 virtual machine monitor) is software that runs directly on a given hardware platform (as an operating system control program). A "guest" operating system thus runs at the second level above the hardware.
• The classic Type 1 hypervisor was CP/CMS, developed at IBM in the 1960s, ancestor of IBM's current z/VM. More recent examples are Xen, VMware's ESX Server, and Sun's Hypervisor (released in 2005).

– A Type 2 hypervisor (or Type 2 virtual machine monitor) is software that runs within an operating system environment. A "guest" operating system thus runs at the third level above the hardware.
• Examples include VMware Server and Microsoft Virtual Server.
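Whichever type is used, modern hypervisors rely on hardware virtualization support, which Linux exposes as the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. A small sketch that classifies a flags line (the sample flag strings are illustrative):

```python
def hw_virt_support(cpuinfo_flags: str) -> str:
    """Classify hardware virtualization support from a /proc/cpuinfo
    'flags' line: 'vmx' means Intel VT-x, 'svm' means AMD-V."""
    flags = set(cpuinfo_flags.split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none"

print(hw_virt_support("fpu vme de pse msr vmx sse2"))  # Intel VT-x
print(hw_virt_support("fpu vme de pse msr sse2"))      # none
```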
Bare-metal or hosted?
• Bare-metal
– Has complete control over hardware
– Doesn't have to "fight" an OS
• Hosted
– Avoids code duplication: no need to code a process scheduler or memory management system – the OS already does that
– Can run native processes alongside VMs
– Familiar environment – how much CPU and memory does a VM take?
• A combination: mostly hosted, but some parts run inside the OS kernel for performance reasons
Limitations
• Not all applications are specifically designed to be virtualization-friendly.
• This means that for some aspects of the computer technology within your business, virtualization might not be an available option.

Instant Access to Data

• Since data is essential to your business, it is essential that you only choose virtualization options that offer adequate data protection.
• Not owning your own servers can put your data at risk, and this is not ideal. You do not want your data to be vulnerable.
FOUR MAJOR HYPERVISORS

We compare four major hypervisors – KVM, Xen, VMware and Hyper-V.

• KVM – a Linux-based open source hypervisor. First introduced into the Linux kernel in February 2007, it is now a mature hypervisor and is probably the most widely deployed open source hypervisor in an open source environment. KVM is used in products such as Red Hat Enterprise Virtualization (RHEV).

• Xen – an open source hypervisor which originated in a 2003 Cambridge University research project. It runs on Linux (though being a Type 1 hypervisor, more properly one might say that its dom0 host runs on Linux, which in turn runs on Xen). It was originally supported by XenSource Inc, which was acquired by Citrix Inc in 2007.

• VMware – is not a hypervisor, but the name of a company, VMware Inc. Our experience with VMware involves its vSphere product. vSphere uses VMware's ESXi hypervisor. VMware's hypervisor is very mature and extremely stable.

• Hyper-V – a commercial hypervisor provided by Microsoft. Whilst excellent for running Windows, being a hypervisor it will run any operating system supported by the hardware platform.
Challenges
Assuring Compatibility and Efficiency

• When it comes to virtualization, one of the biggest obstacles to overcome is preparing a robust infrastructure and making sure that all its underlying components – CPU, storage devices, operating systems, network, etc. – are compatible, efficient, and will provide sufficient performance without affecting the workflow.

• Making sure that all these are in place may take a lot of time and may require special skills. It also depends on how modern your equipment is.

• If you do not have the skills readily available in-house, you may need to acquire them or engage a consultant beforehand, and establish a preset list of mandatory changes and improvements in order to assure a smooth virtualization process.
The Network

• Virtualizing servers before making sure that the network infrastructure can handle it is a risky undertaking.

• After the process is complete the network will be under a lot more strain, and making sure that it has what it takes to sustain the added traffic is critical.

• Once the virtualization process is complete and problems arise, it will be very difficult to tell whether the performance issues are linked to the network or to the server end.
Computational and storage capacity

• Besides the obvious differences, virtual machines also generate I/O requests at an increased frequency, and since virtualization means creating more servers, disks may have trouble keeping up with the increased workload.

• It is not uncommon to find that some applications are actually slower when run in virtual machines, even if memory management techniques like page sharing are in place and being used.

• This is because larger blocks of data have increased priority, which means that smaller I/O requests have to wait before being processed.

• A viable solution for this kind of issue is workload analysis before virtualization, so that you are able to estimate approximate future hardware and network usage and prepare accordingly.
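Page sharing, mentioned above, deduplicates identical memory pages across VMs by content. A toy model of the idea (a sketch, not any vendor's implementation):

```python
import hashlib

def shared_store(vms):
    """Keep one copy of each distinct page, keyed by a content hash.
    Identical pages contributed by different VMs are stored only once."""
    store = {}
    for pages in vms.values():
        for page in pages:
            store.setdefault(hashlib.sha256(page).hexdigest(), page)
    return store

vms = {
    "vm1": [b"guest kernel code", b"zeroed page"],
    "vm2": [b"guest kernel code", b"app data"],  # kernel page is shared
}
store = shared_store(vms)
print(f"{len(store)} pages stored instead of 4")  # 3 pages stored instead of 4
```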
Pitfalls
Mismatching Servers

• This aspect is commonly overlooked, especially by smaller companies that don't invest sufficient funds in their IT infrastructure and prefer to build it from several bits and pieces.

• This usually leads to simultaneous virtualization of servers that come with different chip technology (AMD and Intel).

• Frequently, migration of virtual machines between them won't be possible, and server restarts will be the only solution.

• This is a major hindrance and actually means losing the benefits of live migration and virtualization.
Creating Too Many Virtual Machines per Server

• One of the great things about virtual machines is that they can be easily created and migrated from server to server according to needs.

• However, this can also create problems, because IT staff members may get carried away and deploy more virtual machines than a server can handle.

• This will actually lead to a loss of performance that can be quite difficult to spot.

• A practical way to work around this is to have some policies in place regarding VM limits and to make sure that employees adhere to them.
Misplacing Applications

• A virtualized infrastructure is more complex than a traditional one, and with a number of applications deployed, losing track of applications is a distinct possibility.

• Within a physical server infrastructure, keeping track of all the apps and the machines running them isn't a difficult task.

• However, once you add a significant number of virtual machines to the equation, things can get messy, and app patching, software licensing and updating can turn into painfully long processes.
Features of VM
CLONE:
• A clone is a copy of an existing virtual machine.

• The existing virtual machine is called the parent of the clone.

• Installing a guest operating system and applications can be time consuming.

• With clones, you can make many copies of a virtual machine from a single installation and configuration process.

• Clones are useful when you must deploy many identical virtual machines to a group.
Features of VM
CLONE:

• A clone is an exact copy of your existing VM, but it gives you the option to change the name of the destination VM as well as its resources.

• When you clone a virtual machine, you create a copy of the entire virtual machine, including its settings, any configured virtual devices, installed software, and other contents of the virtual machine's disks.

• You also have the option to use guest operating system customization to change some of the properties of the clone, such as the computer name and networking settings.
Snapshots

 A snapshot is an instance in time of a VM that preserves its state. Snapshots are usually used for testing/development purposes, as they allow you to revert to a previous state of the VM.

 Snapshots create additional vmdk files and consume disk space, so take snapshots with care and always delete them after completing your testing.
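The revert semantics described above can be sketched as follows (a toy model; real hypervisors use vmdk delta disks rather than in-memory copies):

```python
import copy

class ToyVM:
    """A snapshot captures the full VM state at a point in time;
    reverting restores exactly that state."""
    def __init__(self):
        self.state = {"disk": [], "powered_on": False}
        self.snapshots = {}

    def snapshot(self, name):
        self.snapshots[name] = copy.deepcopy(self.state)

    def revert(self, name):
        self.state = copy.deepcopy(self.snapshots[name])

vm = ToyVM()
vm.state["disk"].append("baseline install")
vm.snapshot("clean")
vm.state["disk"].append("risky test change")
vm.revert("clean")
print(vm.state["disk"])  # ['baseline install']
```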
Template

 A template is a master copy of a virtual machine that can be used to create and provision virtual machines.

 Templates cannot be powered on or edited, and are more difficult to alter than ordinary virtual machines.

 A template offers a more secure way of preserving a virtual machine configuration that you want to deploy many times.

 VM Templates: a copy of a pre-installed VM containing all the software and configuration settings that would make the VM work when deployed.
Template

 Templates are pre-configured VMs used for multiple deployments. Say you have to deploy a W2K8 R2 Server 20 times: in this case it is best to create a master copy of W2K8 R2 with all the basic setup and create a template of it.

 Thereafter, use this template to deploy your 20 VMs.

 The configuration file of a template VM will be *.vmtx and not *.vmx; that way you can identify the VM in your datastore as a template VM.

 The template typically includes a specified operating system and a configuration that provides virtual counterparts to hardware components. Optionally, a template can include an installed guest operating system and a set of applications.
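The *.vmtx convention above is easy to apply programmatically; a small sketch (the datastore paths are made up for illustration):

```python
def classify_vm_file(path: str) -> str:
    """Per the slide: a template's configuration file ends in .vmtx,
    an ordinary VM's in .vmx."""
    if path.endswith(".vmtx"):
        return "template"
    if path.endswith(".vmx"):
        return "virtual machine"
    return "other"

print(classify_vm_file("datastore1/web01/web01.vmx"))     # virtual machine
print(classify_vm_file("datastore1/w2k8-tpl/w2k8.vmtx"))  # template
```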
• Import and export
– .ovf (Open Virtualization Format: a folder of descriptor and disk files)
– .ova (the same content packaged as a single archive file)
What is the use of privileged instructions?
• A class of instructions – usually including storage protection setting, interrupt handling, timer control, input/output, and special processor status-setting instructions – that can be executed only when the computer is in a special privileged mode that is generally available to an operating or executive system, but not to user programs.

• A machine code instruction that may only be executed when the processor is running in supervisor mode.

• Privileged instructions include operations such as I/O and memory management.

• A trap is an exception in a user process. It is caused, for example, by division by zero or invalid memory access. It is also the usual way to invoke a kernel routine (a system call), because those run with higher privilege than user code.
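The trap mechanism is what makes full virtualization work: when a deprivileged guest executes a privileged instruction, control transfers to the hypervisor, which emulates it. A toy dispatch loop (the instruction set and handler here are illustrative, not real x86 semantics):

```python
# Instructions the (toy) CPU refuses to execute outside supervisor mode.
PRIVILEGED = {"out", "hlt", "set_timer"}

def run_guest(instructions, emulate):
    """Execute guest instructions directly, but trap privileged ones
    to the hypervisor's emulate() handler."""
    log = []
    for insn in instructions:
        if insn in PRIVILEGED:
            log.append(f"TRAP -> hypervisor emulates {insn}")
            emulate(insn)
        else:
            log.append(f"direct execution of {insn}")
    return log

handled = []
log = run_guest(["add", "hlt", "mov"], handled.append)
print(log[1])   # TRAP -> hypervisor emulates hlt
print(handled)  # ['hlt']
```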
FULL VIRTUALIZATION (What is it?)

• It is a virtualization technique used to provide a virtual machine environment which is a complete simulation of the underlying hardware.

• All operating systems and applications which can run natively on the hardware can also run in the virtual machine.

• The guest OS need not be modified.

• The guest OS is not aware of the existence of the VM.

• Each VM is independent of the others.


Types of Full Virtualization
• Hypervisor or Virtual Machine Monitor (VMM)

• SW component that implements the virtual machine hardware abstraction.

• Responsible for hosting and managing virtual machines and running the guest OS.

• HOSTED

• BARE METAL
Virtualization – Challenges (x86)

• The CPU provides 4 protection levels (Ring 0 to Ring 3) for the OS to execute code.

• The OS kernel is designed to run at Ring 0, to execute code directly on the hardware and handle privileged instructions.

• User applications run at Ring 3 (less privileged).

• So where does the hypervisor reside?


Para Virtualization
• Paravirtualization refers to communication between the guest OS and the hypervisor to improve performance and efficiency.

• Paravirtualization involves modifying the OS kernel to replace non-virtualizable instructions with hypercalls that communicate directly with the virtualization layer hypervisor. (Virtualized Resources and Non-Virtualized Resources.)

• The hypervisor also provides hypercall interfaces for other critical kernel operations such as memory management, interrupt handling and timekeeping.
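The hypercall idea can be sketched as a direct call table: the modified guest kernel invokes the hypervisor explicitly instead of executing a sensitive instruction and trapping. The names below are illustrative, not the real Xen hypercall ABI:

```python
class ToyHypervisor:
    """Registry of hypercall handlers the paravirtualized guest may invoke."""
    def __init__(self):
        self.handlers = {}

    def register(self, name, fn):
        self.handlers[name] = fn

    def hypercall(self, name, *args):
        # The guest kernel calls this directly -- no trap needed.
        return self.handlers[name](*args)

hv = ToyHypervisor()
hv.register("update_page_table", lambda vaddr, paddr: f"mapped {vaddr}->{paddr}")
hv.register("set_timer", lambda ns: f"timer armed for {ns} ns")

# Where an unmodified kernel would write a control register and trap,
# the paravirtualized kernel issues a hypercall instead:
print(hv.hypercall("update_page_table", "0x1000", "0x9f000"))  # mapped 0x1000->0x9f000
```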
Full vs Para Virtualization
• Paravirtualization is different from full virtualization, where the unmodified OS does not know it is virtualized and sensitive OS calls are trapped using binary translation.

• The performance advantage of paravirtualization over full virtualization can vary greatly depending on the workload.

• As paravirtualization cannot support unmodified operating systems (e.g. Windows 2000/XP), its compatibility and portability are poor.

• Paravirtualization can also introduce significant support and maintainability issues in production environments, as it requires deep OS kernel modifications.

• The open source Xen project is an example of paravirtualization that virtualizes the processor and memory using a modified Linux kernel and virtualizes the I/O using custom guest OS device drivers.
Types of Clone: Full and Linked
• There are two types of clone:

• The Full Clone — A full clone is an independent copy of a virtual machine that shares nothing with the parent virtual machine after the cloning operation. Ongoing operation of a full clone is entirely separate from the parent virtual machine.

• The Linked Clone — A linked clone is a copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space, and allows multiple virtual machines to use the same software installation.
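The disk-sharing behaviour of a linked clone is copy-on-write; a minimal sketch (block granularity and class names are simplifications for illustration):

```python
class BaseDisk:
    """Read-only parent disk shared by linked clones."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)

    def read(self, n):
        return self.blocks[n]

class LinkedCloneDisk:
    """Reads fall through to the parent unless the clone has rewritten
    the block; writes go only to the clone's own delta."""
    def __init__(self, parent):
        self.parent = parent
        self.delta = {}

    def read(self, n):
        return self.delta.get(n, self.parent.read(n))

    def write(self, n, data):
        self.delta[n] = data

base = BaseDisk({0: "bootloader", 1: "os files"})
clone = LinkedCloneDisk(base)
clone.write(1, "patched os files")
print(clone.read(0))  # bootloader        (shared with the parent)
print(clone.read(1))  # patched os files  (clone's private copy)
print(base.read(1))   # os files          (parent is untouched)
```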
Difference Between Full Clone and Linked Clone
The Full Clone:

• A full clone is an independent virtual machine, with no need to access the parent.

• A linked clone must have continued access to the parent. Without access to the parent, a linked clone is disabled. See Linked Clone and Access to the Parent Virtual Machine.

The Linked Clone:

• A linked clone is made from a snapshot of the parent.

• In brief, all files available on the parent at the moment of the snapshot continue to remain available to the linked clone.

• Ongoing changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the parent.
Benefits of Full Clones
• Full clones do not require an ongoing connection to the parent virtual machine.

• Overall performance of a full clone is the same as a never-cloned virtual machine, while a linked clone trades potential performance degradation for a guaranteed conservation of disk space.

• If you are focused on performance, you should prefer a full clone over a linked clone.
Benefits of Linked Clones
• Linked clones are created swiftly. A full clone can take several minutes if the files involved are large.

• A linked clone lowers the barriers to creating new virtual machines, so you might swiftly and easily create a unique virtual machine for each task you have.

• Another benefit of linked clones is that they are easier to share.

• If a group of people needs to access the same virtual disks, then the people can easily pass around clones with references to those virtual disks.

• For example, a support team can reproduce a bug in a linked clone and then just email that linked clone to development. This is feasible only when a virtual machine isn't gigabytes in size.
Server Virtualization
• "Server virtualization in general terms lets you take a single physical device and install (and run simultaneously) two or more OS environments that are potentially different and have different identities, application stacks, and so on."
Typical Server Model
Virtualized Server Model
VMware By the Numbers

Founded: 1998
2006 Revenue: $709M
Number of Employees: 2,500+
Number of VMware Infrastructure Customers: 20,000+
Number of Users: 4+ million
Number of Channel Partners: 3,000+
Number of VMware Certified Professionals: 10,000+

Who Uses VMware?
100% of the Fortune 100
The Challenge
Virtualization Technology Overview

Old Model: Traditional x86 Architecture
• Single OS image per machine
• Software and hardware tightly coupled
• Multiple applications often conflict
• Underutilized resources

 The old model is challenging!
State of Infrastructure Today – Physical

Server Sprawl: 38M physical servers by 2010 – a 700% increase in 15 years; $140bn in excess server capacity – a 3-year supply.

Power & Cooling: 50c spent on power and cooling for every $1 spent on servers; $29bn in power and cooling industry-wide.

Space Crunch: $1,000/sq ft; $2,400/server; $40,000/rack.

Operating Cost: $8 in maintenance for every $1 spent on new infrastructure; 20–30:1 server-to-admin ratio.

Source: IDC
What is Virtualization?
[Diagram: without virtualization, a single application/OS stack runs directly on hardware; with virtualization, multiple such stacks share the hardware through a virtualization layer]

VMware provides hardware virtualization that presents a complete x86 platform to the virtual machine.
It allows multiple applications to run in isolation within virtual machines on the same physical machine.
Virtualization provides direct access to the hardware resources to give you much greater performance than software emulation.
Virtualization Increases Hardware Utilization
[Diagram: before VMware vs after VMware]

Virtualization enables consolidation of workloads from underutilized servers onto a single server to safely achieve higher utilization.
Key Properties of Virtual Machines
• Partitioning
 Run multiple operating systems on one physical machine
 Divide system resources between virtual machines
• Isolation
 Fault and security isolation at the hardware level
 Advanced resource controls preserve performance
• Encapsulation
 Entire state of the virtual machine can be saved to files
 Move and copy virtual machines as easily as moving and copying files
• Hardware Independence
 Provision or migrate any virtual machine to any similar or different physical server
State of Infrastructure with Virtualization

                       BEFORE VMware   AFTER VMware   SAVINGS
Servers                1000            80             $5,816 (per server removed)
HBAs                   500             160            $290
SAN Switches           22              8              n/a
Network Switches       84              10             $296
Power (kWh)            407             52             $759
Cooling (kWh)          509             64             $949
Real Estate (sq ft)    2053            257            $431 (3yr)
Total Savings (over 3 years)                          $8,541* (per server)

* Note: Savings include estimated cost of VMware licenses, Support and Subscription
The Enterprise PC Challenge

IT: Are We Having Fun Yet?
ESX Server
[Diagram: virtual machines running on ESX Server on a physical server]

• Deploy multiple virtual machines on a single physical server
• Market leading:
 Performance
 Stability
 Scalability
 Cross-platform support
Non-Disruptive Capacity on Demand
Instant Provisioning in a Virtualized Environment

Physical: configure hardware → install OS → configure OS & IP address → assign network → configure tools → test apps. 20–40 hrs of work; 4–6 week lead time.

Virtual: deploy from template → power on VM. <1 hr of work; 1–2 days lead time.

• Provisioning time reduced to minutes, not days to weeks!
From server boot to running VMs in minutes (3i)

1. Power on server and boot into hypervisor
2. Configure admin password
3. (optional) Modify network configuration
4. Connect VI Client to IP address, or manage with VirtualCenter
VMware VMotion

73% of VMware customers have implemented VMotion in production

• Live migration of virtual machines
• Zero downtime
VMware DRS

67% of VMware customers use DRS in production

• Dynamic and intelligent allocation of hardware resources
• Ensures optimal alignment between business demand and IT resource pools
Ensure High Availability with VMware HA
• VMware HA automatically restarts virtual machines when a physical server fails
Distributed Power Management (coming soon)
NEW!
Minimize power consumption while guaranteeing service levels

• Consolidates workloads onto fewer servers when the cluster needs fewer resources
• Places unneeded servers in standby mode
• Brings servers back online as workload needs increase
NEW! Storage VMotion
Storage VMotion minimizes planned downtime for storage

• Storage-independent live migration of virtual machine disks
• Zero downtime to virtual machines
• LUN independent
• Supported for Fibre Channel SANs
• Examples of Virtualization
– Virtual drives
– Virtual memory
– Virtual machines
– Virtual servers
History
• 1960s Machines
– Did not scale well
– Extremely expensive
– Cost efficiency was desired

• IBM-360 Operating System (1964)


– Virtual Memory

• IBM 370 Operating System (1972)


– Virtual Machines
– Used in many mainframe environments
Virtualization Software
• Microsoft Virtual Server (2005)
– Came with Microsoft Server 2003
– Did not scale well with 64 bit systems
– Replaced by Hyper-V

• Microsoft Hyper-V (2008 & 2012)


– Hyper-V is short for Hypervisor
– Free release with Server 2008 and 2012
– Best option for Microsoft based virtualization
Hyper-V Architecture
Virtualization Software
• VMware (Company)
– Releases most popular line of virtualization software
– First company to utilize virtualization on x86 machines
– Software runs on Linux, Windows, and Mac
• vSphere (aka ESX)
– Costly
– High overhead
• VMware Server
– Free
– Not as powerful as ESX
ESX Architecture
Hypervisor
• The Hypervisor is the piece of software that enables virtualization

• It allows the host machine to allocate resources to guest machines
Hypervisor
Type I versus Type II Hypervisor
Virtualization Hardware
• CPU
– At least one CPU core per virtual machine
– Having free cores for high-stress situations is recommended

• RAM
– No set amount for RAM
– Estimate minimum amounts of RAM and upgrade based on performance
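The CPU rule of thumb above is easy to turn into a sizing helper (the spare-core default below is an assumption, not a figure from the slides):

```python
def cores_needed(vm_count, cores_per_vm=1, spare=2):
    """At least one core per VM, plus free cores held back for
    high-stress situations (the spare count is an assumption)."""
    return vm_count * cores_per_vm + spare

print(cores_needed(10))                          # 12
print(cores_needed(4, cores_per_vm=2, spare=1))  # 9
```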
Virtualization Hardware
• Storage
– Local storage on servers is limited
– Allow for 20% extra storage space for VM files and server snapshots
– Storage Networks (highly recommended)
• Storage Area Network (SAN) – Large data transfers
• Network Attached Storage (NAS) – File-based data storage
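The 20% headroom guideline above translates directly into a capacity estimate (a sketch; the disk sizes are illustrative):

```python
def storage_required_gb(vm_disk_sizes_gb, headroom=0.20):
    """Sum the VM disks and allow ~20% extra for VM files and
    server snapshots, per the guideline above."""
    return sum(vm_disk_sizes_gb) * (1 + headroom)

# Three VMs with 100, 250 and 50 GB disks need about 480 GB provisioned.
print(storage_required_gb([100, 250, 50]))
```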
Pros and Cons of Server Virtualization
• Pros
– Cost
• Less physical servers
• Less server space (consolidation of servers)
• Less energy costs
• Less maintenance

– Efficient Administration
• Easier management, management through one machine
• Single point of failure
• Smaller IT staff
Pros and Cons of Server Virtualization
• Pros
– Growth and Scalability
• Upgrading one server upgrades them all
• Easy growth
• Less hardware complications
– Security
• Single server security maintenance
• Hypervisor software often provides security benefits
– Legacy Servers
• Upgrading servers to a virtual setup from old systems
• Goes hand-in-hand with scalability
Pros and Cons of Server Virtualization
• Cons
– Slow Performance
• High stress on single machine
• Longer processing times
• More network bottlenecking

– Single Point of Failure


• Many servers on one host machine
• Hardware or software failures can be critical
• Backup servers will need to be setup
Pros and Cons of Server Virtualization
• Cons
– Cost
• High initial investment
• Software licensing costs
– Security
• All servers through one machine
– Learning curve
• Many different types of software
• Different architecture
Pros and Cons of Dedicated Servers
• Pros
– High Performance
• All resources on server are dedicated
• Can handle high stress scenarios

– Multiple Points of Failure


• Easier to identify problems
• Only one server will fail at a time
Pros and Cons of Dedicated Servers
• Pros
– Price
• Old servers already exist
• No long term investments
• If it’s not broke, don’t fix it

– Small Learning Curve


• Dedicated servers have been around for a long time
• IT staff will not need to learn any new systems if dedicated servers already exist
Pros and Cons of Dedicated Servers
• Cons
– Price
• Long term costs of dedicated servers can add up
• More applications and services = more servers

– Servers not being utilized


• Servers may not be efficient
• Even at peak, some servers may not need all resources
Pros and Cons of Dedicated Servers
• Cons
– Lack of growth and consistency
• Adding servers for more services and applications
• Expanding of physical space with servers
• Software patches and updates will be inconsistent
• Management can be difficult and inconsistent
Top 6 Reasons To Invest In Server Virtualization

Save investments:
• Companies that want to run hundreds or thousands of virtual servers can, without any difficulty, use the maximum amount of space in an efficient, effective and significant manner.

• It is an excellent solution for companies that cannot afford their own dedicated virtual private server.

Decrease power consumption:
• This is the second most notable benefit of server virtualization technology: it decreases servers' energy utilization and cooling requirements significantly.
Maintenance & management:
• Server virtualization helps in making everyday maintenance and management jobs simpler and easier to do.
• It also reduces maintenance expenditure by 30 to 40%.

Reliability & availability:
• By using server virtualization, you will get an enormous boost in reliability and availability that permits servers and applications to be available at all times, supporting the advanced service levels needed for your business's smooth functioning.
Redundancy:
• It provides immense security: if one virtual server fails due to any trouble, you will have another server running the same application at hand.

High security & recovery:
• You will get a high degree of security and recovery benefits due to the isolated environment it provides.
Citrix XenApp
Introduction to XenApp
• Ed Iacobucci founded Citrix in 1989 in Texas, then moved it to Florida.

• Iacobucci was an IBM developer who worked on the OS/2 project.

• Citrix was originally named Citrus.

• Iacobucci's original vision was to build multi-user support for OS/2.

• Citrix's first product was Citrix Multiuser, based on OS/2.

• Citrix developed the MultiWin technology, later licensed to Microsoft, which became the basis of Terminal Server.
Terminal server
• A hardware device or server that enables one or more terminals to connect to a local area network (LAN) or the Internet without each terminal needing its own network interface card (NIC) or modem.

• Terminals can be PCs, printers, IBM 3270 emulators, or other devices with an RS-232/RS-423 serial port interface.

• Terminal servers can often support connections of up to 128 terminal devices.


What is Citrix and what are its uses?

• Citrix is a terminal-server computing environment.

• Citrix provides the ability to access published desktops or applications through a web interface or a client tool.

• Citrix can deliver a single application to a user's desktop so that it looks like a local installation, yet the process runs on the Citrix server.

• Citrix gives users access to Windows, web, legacy, and other applications from anywhere, on any device, over any connection.

• Access is given via a central server: the user's keystrokes and mouse movements are transferred from the workstation to the server.
Different Versions of Citrix :
 MetaFrame XP with feature release 1
 MetaFrame XP with feature release 2
 MetaFrame XP with feature release 3
 MetaFrame XP Presentation Server with feature release 3
 MetaFrame Presentation Server 3
 Presentation Server 4
 Presentation Server 4.5
 Presentation Server 4.5 with service pack1
 XenApp 5
 XenApp 6
 XenApp 6.5
What is XenApp ?

• An extension to Microsoft Windows Remote Desktop Services (formerly Terminal Server or Terminal Services).

• Terminal Server is a server-based computing model that allows multiple simultaneous users to log in and run applications on a centralized server.

• The core technology behind XenApp is ICA (Independent Computing Architecture).

XenApp 6.5 has three editions:
Advanced:
• Smaller networks
• Users can access from any device, anywhere
• High-definition user experience
• Single-instance management
Enterprise:
• VM-hosted applications
• Profile management
• Monitoring and recovery
• Capacity management
Platinum:
• Large networks
• Provisioning services
• SmartAccess-based services
• Access monitoring
• Single sign-on and password management
• Citrix Access Gateway
ZONE
• A zone is a collection of geographically connected servers.

• A farm can contain one zone or multiple zones.


Data Collectors
• Dynamic information of each xen app server in the zone

• It stores the information of server loading the zone

• Which server has how much load

• Published application information

• Connected and disconnected sessions


Desktop Virtualization

©2011 Desktone, Inc. All rights reserved.


Desktop Management Today
Desktop management today is expensive, support-heavy, and insecure: the OS, data, apps, settings, preferences, and security are all tied to the device.

The tipping point for change is here:
• Migration to Windows 7
• New mobile access
• Tighter IT budgets
Desktop Models
Web-based | Virtualized/Cloud | Device-centric | App-based

1. Legacy applications
2. Application collaboration
3. Separation of environments
4. Consumer choice
5. Familiarity
6. Data synchronization

The Promise of Virtual Desktops (VDI)

Virtual desktops  Cost reduction


Centrally managed  IT consolidation
In IT data center  Easier to manage
 Happy users

OS
Data
Apps
Settings
Preferences

Virtual Desktops
Traditional VDI Reality

… but major BARRIERS exist:

Start-up:
• Huge up-front costs
• Many moving parts
• Complex to design & build

Operations:
• Operationally intensive
• Difficult to maintain

Is it strategic?
• Do you want to be building and managing data centers?
Cloud Based Desktops

Why the Switch to Cloud?
For a variety of reasons, cloud technologies are too compelling to ignore.

Dynamically scalable, virtualized resources provided as a service:
• "Utility" billing (pay as you use)
• "Unlimited" processing and storage
• Elasticity to scale up or down
• On-demand, self-service

Speed, cost, and accessibility benefits:
• Maximize revenue
• Reduce cost
• Expedite time to market
• Focus resources
• Do more projects

Cloud Service Models

Software as a Service (SaaS)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

Desktops as a Service (DaaS) spans these layers.

Cloud Virtual Desktops vs. Traditional

Physical desktop:
• Takes 3-5 days for a new system
• Break/fix issues
• Endpoint data loss / security risk
• Vulnerable to power outages
• Hardware and desktop are one and the same
• Up-front capital required
• Hardware warranty cost

Cloud virtual desktop:
• New systems in 90 seconds
• Never breaks
• Data cannot be lost or stolen
• Always on
• Desktop can be accessed from any connected device, from anywhere
• No upfront capital
• No hardware warranty required
Desktone’s Solution

©2009 Desktone, Inc. All rights reserved.


Desktone Overview

• Background
– Founded 2007; global footprint
– The only virtual desktop solution built from the ground up for the cloud/DaaS delivery model

• Who we are
– Software company delivering cloud-hosted virtual desktop infrastructure (VDI)
– Optimized, tested, and deployed with the world's largest companies and service providers

• Solutions include
– Desktone Cloud for enterprises
– Desktone Cloud Platform for service providers
Cloud-Based Desktops (DaaS)

Business benefits of the desktop cloud:
• Easy to manage
• Device independence
• Lower TCO
• Low cash outlay = low risk
• Customer satisfaction

Access from any device, from any location: temporary offices, partners & suppliers, teleworkers, fixed branch offices.

How it Works
Client manages:
• Active Directory
• User data
• End-user devices
• Bring your own licenses

Desktone delivers (IT shared resources, over a network connection/VPN):
• High-performance network
• Secure & compliant
• Centralized management & reporting
• Provisioning on demand
• Storage
• Personalized desktops
• Remote display, accessible anywhere
Easy to Try & Easy to Buy
1. Drastically reduce cost
2. Easy to try

Plans: Free Trial, Pro, Enterprise, Ultimate, Premium

• CPU cores: 1 / 1 / 2 / 4 / Custom
• Memory (GB): 2 / 2 / 4 / 8 / Custom
• Hard disk (GB): 25 / 25 / 25 / 50 / Custom
• OS type: Free Trial – Windows 7 Enterprise; paid tiers – Windows XP Pro, Windows 7 Pro or Enterprise, Linux
• Display protocol: Free Trial – RDP; paid tiers – RDP, RGS, Citrix Receiver

The Desktone Cloud consists of two primary interfaces:

• Desktone Enterprise Center – used by desktop admins to manage the Desktone Cloud
• Desktone Portal – used by end users to access resources on the Desktone Cloud


End-User Access
End Point Devices

1. iPad – "DaaS Mobile Client" available in the iTunes Store
2. Thin clients – all leading vendors are supported
3. Standard PCs – access through their preferred web browser or the DaaS client
4. Google Chromebook – access via HTML5
Demonstration

Enabling Desktops as a Service™

Demo screens:
• End-user login
• End-user self-service
• Administration login
• Dashboard view
• Enterprise service summary
• Gold patterns & images
• Creating virtual desktops
Summary
• Cloud computing is changing the world of IT

• The current desktop management market is ripe for change

• VDI was supposed to solve problems – instead it has introduced other issues for most adopters, especially cost and complexity

• Cloud-hosted desktops have a significant TCO saving over VDI


• No upfront investment in infrastructure

• No complicated infrastructure to configure and build

• Centralized support and management

• No expensive management resources required

• No hidden costs
Data Virtualization
• Data virtualization is any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted or where it is physically located.

Data Virtualization
• You are probably familiar with the concept of data virtualization if you store photos on the social networking site Facebook.
• When you upload a photo to Facebook from your desktop computer, you must provide the upload tool with information about the location of the photo – the photo's file path.

• Once it has been uploaded to Facebook, however, you can retrieve the photo without having to know its new file path.

• In fact, you will have absolutely no idea where Facebook is storing your photo, because Facebook's software has an abstraction layer that hides that technical information.

• This abstraction layer is what some vendors mean when they use the term data virtualization.

• The term data virtualization, however, simply means that the technical information about the data has been hidden.
• Data virtualization also covers data integration and query logic: what data has been requested, how to process it, and where that logic lives.
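The abstraction-layer idea can be sketched in a few lines of code. This is a toy model, not Facebook's actual implementation: the layer (here a hypothetical `DataVirtualizationLayer` class) chooses and hides the physical key, so the caller retrieves data by logical name only.

```python
class DataVirtualizationLayer:
    """Toy abstraction layer: callers use logical names; the layer
    alone knows the physical backend and key (invented for illustration)."""

    def __init__(self):
        self._locations = {}   # logical name -> (backend, physical key)
        self._backends = {}    # backend name -> key/value store

    def add_backend(self, name, store):
        self._backends[name] = store

    def upload(self, logical_name, backend, data):
        # The layer, not the caller, decides the physical key.
        key = f"blob-{len(self._locations):06d}"
        self._backends[backend][key] = data
        self._locations[logical_name] = (backend, key)

    def fetch(self, logical_name):
        # Caller needs no file path, format, or location details.
        backend, key = self._locations[logical_name]
        return self._backends[backend][key]

dv = DataVirtualizationLayer()
dv.add_backend("datacenter-eu", {})
dv.upload("vacation.jpg", "datacenter-eu", b"\xff\xd8jpeg-bytes")
print(dv.fetch("vacation.jpg"))  # b'\xff\xd8jpeg-bytes'
```

After the upload, the caller can no longer tell (and does not need to know) which backend or key holds the photo – exactly the hiding of technical detail the slides describe.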
Remote Desktop Virtualization

• Remote desktop virtualization implementations operate in a client/server computing environment.

• Application execution takes place on a remote operating system, which communicates with the local client device over a network using a remote display protocol through which the user interacts with applications.

• All applications and data remain on the remote system; only display, keyboard, and mouse information is communicated with the local client device, which may be a conventional PC/laptop, a thin client device, a tablet, or even a smartphone.

• A common implementation of this approach involves hosting multiple desktop operating system instances on a server hardware platform running a hypervisor.

• This is generally referred to as "Virtual Desktop Infrastructure" or "VDI". (Note that "VDI" is often used incorrectly to refer to any desktop virtualization implementation.)

• Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server. VDI is a variation on the client/server computing model, sometimes referred to as server-based computing.
Benefits of Desktop Virtualization

Secure, mobile access to applications. Many SMBs allow employees to work remotely or off hours from their own devices, but provisioning those devices can be difficult and costly – and allowing employees to do it themselves can open the company up to security risks.

• However, virtualized desktops allow employees to access even high-performance applications – by enabling hardware-based GPU sharing – through a secure connection from any device, even on high-latency, low-bandwidth networks such as hotel-room WiFi.

Flexibility. Using desktop virtualization allows enterprises to provision just a few types of desktops to their users, reducing the need to configure desktops for each employee.

• Additionally, because virtual desktops can be provisioned so quickly, it's easier for the company to onboard new hires with just a few mouse clicks.

• The right virtualization solution will allow an administrator to personalize and manage desktops through a single interface, eliminating the need to drill down to individual desktops.
Ease of maintenance. Virtualized desktops also allow for easier desktop maintenance.

• At the end of the day, when the employee logs off from her computer, the desktop can be reset, removing any downloaded software or customizations that she may have added.

• This not only prevents software and customizations from slowing down the machine but also provides an easy way to troubleshoot: if the system freezes, the employee can simply reboot and have the desktop restored.

Desktop security. A common problem that most SMBs face is employees downloading software or other potentially risky items (like PowerPoint presentations featuring cute kittens – and malware).

• Desktop virtualization allows the administrator to set permissions, preventing documents carrying Trojan horses from residing on the system. This provides, quite simply, peace of mind, as well as easing maintenance costs.
Reduced costs. All of these benefits end up at the same place: reduced costs for the business.

• Because the software licensing requirements are smaller, there are cost savings on applications alone.

• Companies also save money in the IT department, as fewer staff are needed to manage desktops and troubleshoot user problems.

• SMBs also save money on major support issues like removing malware.
FlexCast
• Different types of workers across the enterprise need different types of desktops.

• Some require simplicity and standardization.

• While others require high performance and personalization.

• XenDesktop can meet all these requirements in a single solution with its unique Citrix FlexCast delivery technology.

• With FlexCast, your IT group can deliver virtual desktops and applications tailored to meet the performance, security, and flexibility requirements of each individual user.
Taskworkers
• Task workers perform a set of well-defined tasks.

• They access a small set of applications and have limited access rights on their PCs.

• Since these workers interact with your customers, partners, and employees, they have access to your most critical data.

• XenDesktop enables IT to provide standard desktops and applications while keeping data secure.
Knowledge Workers
• Traditional office workers perform their work in the office.

• But today's workers work from everywhere.

• These workers expect access to their data and applications wherever they are.

• XenDesktop provides it.


Mobile Workers

• These workers expect the ability to personalize their PCs by installing their own applications and storing their own data.
External Contractors
• An increasing part of your business.

• They need to access your applications and data, yet you have very little control over the devices they use and the locations they work from.

• XenDesktop provides access to the applications that external contractors need while enforcing security policy.
Shared Work station
• The primary challenge is the constant requirement to re-provision desktops with the latest OS and applications as the needs of the organization change.

• XenDesktop provides the tools to provision new environments from a single, easily managed image.
Hosted Desktop Delivery
Controller:

• The controller brokers connection requests from client devices, assigning a desktop to each end user on demand.

• It also manages licensing and the database that contains the persistent configuration information for the site.
Virtual Desktop Agent:

• Runs on each desktop that will be delivered to an end user.

• The agent registers with the controller to make the desktop available for connection to a user device, and verifies incoming requests from those devices before establishing a connection.

• After establishing a connection, it provides the services that manage communication between the virtual desktop and the user.
Citrix Receiver:

• Runs on the user device and manages the online plug-in, which displays the user's desktop.

• The plug-in requests a desktop from the controller, which sends the connection details for the assigned desktop back to the plug-in.

• The online plug-in then initiates a desktop session with the Virtual Desktop Agent running on the assigned desktop.
VDI IN A BOX
• Citrix VDI-in-a-Box installs on a single server.

• Customers can configure their own commodity computer to run as a VDI server – making it easy to simply add hardware when capacity needs increase.

• Administrators can manage Citrix VDI-in-a-Box through a single console.

• The software takes advantage of the HDX protocol technology.

• Citrix Systems Inc. acquired VDI-in-a-Box from Kaviza in May 2011.

• The software works with any hypervisor and is simpler to install and work with than enterprise-grade virtual desktop infrastructure (VDI) platforms such as VMware View or Citrix XenDesktop.

• VDI-in-a-Box costs less than Citrix XenDesktop, but it does not come with rights to XenApp or XenServer Enterprise.
Operating-System Virtualization
• Operating-system-level virtualization is a server-virtualization method where the kernel of an operating system allows multiple isolated user-space instances, instead of just one.
• These instances run on top of an existing host operating system and provide a set of libraries that applications interact with, giving each application the illusion that it is running on a machine dedicated to its use. The instances are known as containers, virtual private servers, or virtual environments.
• The host runs a standard operating system, adapted so that it can run different applications handled by multiple users on a single computer at a time.

• The virtualized instances do not interfere with each other even though they are on the same computer.
Operating-System Virtualization
• In OS virtualization, the operating system is altered so that it operates like several different, individual systems.

• The virtualized environment accepts commands from different users running different applications on the same machine.

• The users and their requests are handled separately by the virtualized operating system.
Uses

• Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually distrusting users.

• System administrators may also use it, to a lesser extent, for consolidating server hardware by moving services on separate hosts into containers on one server.

• Operating-system-level virtualization implementations capable of live migration can be used for dynamic load balancing of containers between nodes in a cluster.
Operating system-level virtualization

• Virtualizing a physical server at the operating-system level enables multiple isolated and secure virtualized servers to run on a single physical server.
• Examples:
– Parallels Workstation
– Linux-VServer, Virtuozzo
– OpenVZ, Solaris Containers
– FreeBSD Jails
– chroot (arguably, as a limited precursor)
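The chroot mechanism listed above captures the simplest piece of this idea: confining each instance's file accesses to its own private root. The sketch below is a toy illustration of that path-confinement principle only – the hypothetical `FakeChroot` class is not a real container runtime (real implementations also isolate processes, users, and networking via kernel namespaces).

```python
import os
import tempfile

class FakeChroot:
    """Toy model of chroot-style isolation: every guest path is resolved
    inside this instance's private root directory, so two instances on
    the same host never see each other's files."""

    def __init__(self, root):
        os.makedirs(root, exist_ok=True)
        self.root = os.path.realpath(root)

    def resolve(self, path):
        # Normalise first to collapse '..' escapes, then anchor the
        # remainder under this instance's root.
        inside = os.path.normpath("/" + path).lstrip("/")
        return os.path.join(self.root, inside)

    def write(self, path, data):
        full = self.resolve(path)
        os.makedirs(os.path.dirname(full), exist_ok=True)
        with open(full, "w") as f:
            f.write(data)

    def read(self, path):
        with open(self.resolve(path)) as f:
            return f.read()

# Two "containers" on one host: same guest path, different real files.
base = tempfile.mkdtemp()
c1 = FakeChroot(os.path.join(base, "container1"))
c2 = FakeChroot(os.path.join(base, "container2"))
c1.write("/etc/hostname", "web01")
c2.write("/etc/hostname", "db01")
print(c1.read("/etc/hostname"))  # web01
```

Each instance believes it owns `/etc/hostname`, yet the two files live in separate directories on the host – the per-instance isolation the slides describe.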
Comparison: containers are thinner than full virtual machines, which gives them better performance.
KVM – KERNEL BASED VIRTUAL MACHINE
• The Kernel-based Virtual Machine (KVM) project represents the latest generation of open source virtualization.

• The goal of the project was to create a modern hypervisor that builds on the experience of previous generations of technologies and leverages the modern hardware available today.

• KVM is implemented as a loadable kernel module that converts the Linux kernel into a bare-metal hypervisor.

• There are two key design principles that the KVM project adopted that helped it mature rapidly into a stable, high-performance hypervisor and overtake other open source hypervisors.
• First, because KVM was designed after the advent of hardware-assisted virtualization, it did not have to implement features that were already provided by hardware.

• The KVM hypervisor requires Intel VT-x or AMD-V enabled CPUs and leverages those features to virtualize the CPU.

• By requiring hardware support rather than merely optimizing for it when available, KVM was able to design an optimized hypervisor solution without the "baggage" of supporting legacy hardware or requiring modifications to the guest operating system.
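That VT-x/AMD-V requirement can be checked from the CPU flags that Linux exposes in `/proc/cpuinfo` (this is roughly what tools like `kvm-ok` do). The helper below is a hypothetical sketch: it parses cpuinfo-style text and reports which extension, if any, the flags advertise.

```python
from typing import Optional

def hw_virt_support(cpuinfo_text: str) -> Optional[str]:
    """Return the hardware-virtualization extension advertised in the
    CPU flags: 'vmx' for Intel VT-x, 'svm' for AMD-V, or None.
    KVM cannot virtualize the CPU without one of these."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "vmx"
    if "svm" in flags:
        return "svm"
    return None

# On a real Linux host you would feed it the live file:
#   hw_virt_support(open("/proc/cpuinfo").read())
sample = "processor : 0\nflags : fpu vme de pse msr vmx sse2\n"
print(hw_virt_support(sample))  # vmx
```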
• Second, the KVM team applied a tried-and-true adage: "don't reinvent the wheel".

• A hypervisor requires many components in addition to the ability to virtualize the CPU and memory – for example, a memory manager, a process scheduler, an I/O stack, device drivers, a security manager, and a network stack.

• In fact, a hypervisor is really a specialized operating system, differing from its general-purpose peers only in that it runs virtual machines rather than applications.

• Since the Linux kernel already includes the core features required by a hypervisor, and has been hardened into a mature and stable enterprise platform by over 15 years of support and development, it is more efficient to build on that base than to write all the required components – memory manager, scheduler, and so on – from the ground up.
• For example, the Linux kernel has a mature and proven memory manager, including support for NUMA and large-scale systems.

• The Xen hypervisor has had to build this support from scratch. Likewise, features like power management, which are already mature and field-proven in Linux, had to be re-implemented in the Xen hypervisor.
• Another key decision made by the KVM team was to incorporate KVM into the upstream Linux kernel.

• The KVM code was submitted to the Linux kernel community in December 2006 and was accepted into the 2.6.20 kernel in January 2007.

• At that point KVM became a core part of Linux, able to inherit key features from the Linux kernel.

• By contrast, the patches required to build the Linux Domain0 for Xen were still not part of the Linux kernel, requiring vendors to create and maintain a fork of the Linux kernel.

• This has led to an increased burden on distributors of Xen, who cannot easily leverage the features of the upstream kernel.

• Any new feature, bug fix, or patch added to the upstream kernel must be back-ported to work with the Xen patch sets.
QEMU
• QEMU (short for Quick Emulator) is a free and open-source hosted hypervisor.

• QEMU is a generic and open source machine emulator and virtualizer.

• When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC).

• By using dynamic translation, it achieves very good performance.

• When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.

• QEMU supports virtualization when executing under the Xen hypervisor or when using the KVM kernel module in Linux.

• When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.
• QEMU is a hosted virtual machine monitor.

• It runs different OSes on the host PC.

• It emulates CPUs through dynamic binary translation and provides a set of device models, enabling it to run a variety of unmodified guest operating systems.

• It can also be used together with KVM in order to run virtual machines at near-native speed.

• QEMU can also be used purely for CPU emulation of user-level processes, allowing applications compiled for one architecture to be run on another.
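The dynamic-binary-translation idea above can be sketched as a toy: each guest instruction is translated into a host operation the first time it is seen, the translation is cached, and later executions reuse the cached host code. The three-instruction guest "ISA" here is invented for illustration and bears no relation to any real architecture or to QEMU's actual TCG internals.

```python
def translate(instr):
    """Translate one toy guest instruction into a host-side callable."""
    op, *args = instr.split()
    if op == "LOAD":            # LOAD reg imm  -> reg = imm
        reg, imm = args[0], int(args[1])
        return lambda regs: regs.__setitem__(reg, imm)
    if op == "ADD":             # ADD dst src   -> dst = dst + src
        dst, src = args
        return lambda regs: regs.__setitem__(dst, regs[dst] + regs[src])
    if op == "DOUBLE":          # DOUBLE reg    -> reg = reg * 2
        reg = args[0]
        return lambda regs: regs.__setitem__(reg, regs[reg] * 2)
    raise ValueError(f"unknown guest instruction: {op}")

def run(program, regs):
    cache = {}                  # translation cache: instruction -> host code
    for instr in program:
        if instr not in cache:  # translate only on first encounter
            cache[instr] = translate(instr)
        cache[instr](regs)      # later hits skip translation entirely
    return regs

regs = run(["LOAD a 3", "LOAD b 4", "ADD a b", "DOUBLE a"], {})
print(regs["a"])  # 14
```

The cache is what makes dynamic translation fast: hot guest code is translated once and thereafter executes as native host code.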
Licensing
• QEMU was written by Fabrice Bellard; it is free software, mainly licensed under the GNU General Public License (GPL).

• Various parts are released under the BSD license, the GNU Lesser General Public License (LGPL), or other GPL-compatible licenses.

• There is an option to use the proprietary FMOD library when running on Microsoft Windows which, if used, disqualifies the use of a single open source software license.

• However, the default is to use DirectSound.


Operating modes

User-mode emulation
• In this mode QEMU runs single Linux or Mac OS X programs that were compiled for a different instruction set.
• System calls are thunked for 32/64-bit mismatches.
• Fast cross-compilation and cross-debugging are the main targets of user-mode emulation.

System emulation
• In this mode QEMU emulates a full computer system, including peripherals.
• It can be used to provide virtual hosting of several virtual computers on a single computer.
• QEMU can boot many guest operating systems, including Linux, Solaris, Microsoft Windows, DOS, and BSD.
• It supports emulating several instruction sets, including x86, MIPS, 32-bit ARMv7, ARMv8, PowerPC, SPARC, ETRAX CRIS, and MicroBlaze.
KVM Hosting
• Here QEMU deals with the setting up and migration of KVM images.
• It is still involved in the emulation of hardware, but the execution of the guest is done by KVM as requested by QEMU.

Xen Hosting
• QEMU is involved only in the emulation of hardware.
• The execution of the guest is done within Xen and is totally hidden from QEMU.
Features
• QEMU can save and restore the state of the virtual machine with all programs running.
• Guest operating systems do not need patching in order to run inside QEMU.

QEMU supports the emulation of various architectures, including:

• IA-32 (x86) PCs


• x86-64 PCs
• MIPS R6 and earlier variants
• Sun's SPARC sun4m
• Sun's SPARC sun4u
• ARM development boards (Integrator/CP and Versatile/PB)
• SH4 SHIX board
• PowerPC (PReP and Power Macintosh)
• ETRAX CRIS
• MicroBlaze
• QEMU can emulate network cards (of various models) which share the host system's connectivity by doing network address translation, effectively allowing the guest to use the same network as the host.

• The virtual network cards can also connect to network cards of other QEMU instances or to local TAP interfaces.

• Network connectivity can also be achieved by bridging a TUN/TAP interface used by QEMU with a non-virtual Ethernet interface on the host OS, using the host OS's bridging features.
• QEMU integrates several services to allow the host and guest systems to communicate.
• It can also boot Linux kernels without a bootloader.

• QEMU does not require administrative rights to run, unless additional kernel modules for improving speed are used (like KQEMU), or when some modes of its network connectivity model are utilized.
QEMU supports the following disk image formats:

• OS X Universal Disk Image Format (.dmg) – Read-only


• Bochs – Read-only
• Linux cloop – Read-only
• Parallels disk image (.hdd, .hds) – Read-only
• QEMU copy-on-write (.qcow2, .qed, .qcow, .cow)
• VirtualBox Virtual Disk Image (.vdi)
• Virtual PC Virtual Hard Disk (.vhd)
• Virtual VFAT
• VMware Virtual Machine Disk (.vmdk)
• Raw images (.img) that contain sector-by-sector contents of a disk
• CD/DVD images (.iso) that contain sector-by-sector contents of an optical disk (e.g. booting live OSes)
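The last two entries are the simplest formats on the list: a raw image is nothing but the byte-for-byte contents of a disk, so it can be created and inspected with ordinary file I/O. The sketch below builds a tiny 1 MiB raw image and writes one 512-byte sector at its on-disk offset, roughly what `qemu-img create -f raw disk.img 1M` followed by a guest sector write would produce (file names are invented for illustration).

```python
import os
import tempfile

SECTOR = 512  # classic disk sector size

path = os.path.join(tempfile.mkdtemp(), "disk.img")

# Create a 1 MiB raw image: just set the file length (sparse where
# the filesystem supports it -- no header, no metadata).
with open(path, "wb") as f:
    f.truncate(1024 * 1024)

# Write sector 3 exactly where it would sit on a real disk.
with open(path, "r+b") as f:
    f.seek(3 * SECTOR)
    f.write(b"A" * SECTOR)

# Read it back: raw format means offset = sector number * sector size,
# with no mapping tables in between.
with open(path, "rb") as f:
    f.seek(3 * SECTOR)
    data = f.read(SECTOR)
print(len(data), os.path.getsize(path))  # 512 1048576
```

Formats like qcow2 add a mapping layer on top of this (copy-on-write tables between guest sector and file offset), which is why they need QEMU's tooling while raw images work with any tool that reads files.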
• KQEMU was a Linux kernel module, also written by Fabrice Bellard, which notably sped up emulation of x86 or x86-64 guests on platforms with the same CPU architecture.

• This worked by running user-mode code (and optionally some kernel code) directly on the host computer's CPU, and by using processor and peripheral emulation only for kernel-mode and real-mode code.

• KQEMU could execute code from many guest OSes even if the host CPU did not support hardware-assisted virtualization.
QEMU:
• QEMU is a complete and standalone piece of software in its own right.

• You use it to emulate machines; it is very flexible and portable.

• Mainly it works through a special 'recompiler' that transforms binary code written for one processor into code for another (say, to run MIPS code on a PowerPC Mac, or ARM code on an x86 PC).
KQEMU:
• In the specific case where both source and target are the same architecture (like the common case of x86 on x86), QEMU still has to parse the code to remove any 'privileged instructions' and replace them with context switches.

• To make this as efficient as possible on x86 Linux, there is a kernel module called KQEMU that handles it.
• Being a kernel module, KQEMU is able to execute most code unchanged, replacing only the lowest-level ring0-only instructions.

• In that case, userspace QEMU still allocates all the RAM for the emulated machine and loads the code.

• The difference is that instead of recompiling the code, it calls KQEMU to scan, patch, and execute it.

• All the peripheral hardware emulation is done in QEMU.

• This is a lot faster than plain QEMU because most code runs unchanged, but KQEMU still has to transform ring0 code (most of the code in the VM's kernel), so performance still suffers.
Virtualization Comes in Many Forms

Virtual memory – each application sees its own logical memory, independent of physical memory.

Virtual networks – each application sees its own logical network, independent of the physical network.

Virtual servers – each application sees its own logical server, independent of physical servers.

Virtual storage – each application sees its own logical storage, independent of physical storage.

Memory Virtualization

Virtual memory: each application sees its own logical memory, independent of physical memory. Applications' virtual address spaces are backed by physical memory, with overflow paged out to swap space on disk.

Benefits of virtual memory:
• Removes physical-memory limits
• Allows multiple applications to run at once
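The mapping behind virtual memory can be sketched as a toy page table: each application addresses its own contiguous virtual pages, and a per-process table maps them to whatever physical frames the system chose. Page size, frame numbers, and process names here are invented for illustration; real MMUs do this translation in hardware with multi-level tables and a TLB.

```python
PAGE_SIZE = 4096  # bytes per page (common default)

def v2p(page_table, vaddr):
    """Translate a virtual address to a physical address via a page table
    mapping virtual page number -> physical frame number."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpage]   # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Two processes both use virtual page 0, but it maps to different frames,
# so each sees "its own" memory independent of physical layout.
proc_a = {0: 7, 1: 2}
proc_b = {0: 3}
print(v2p(proc_a, 100))  # 7 * 4096 + 100 = 28772
print(v2p(proc_b, 100))  # 3 * 4096 + 100 = 12388
```

The same virtual address lands in different physical frames per process – exactly the "logical memory independent of physical memory" property on the slide.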
Network Virtualization

Virtual networks: each application sees its own logical network, independent of the physical network. VLANs A, B, and C share the same physical switches, connected by a VLAN trunk.

Benefits of virtual networks:
• Common network links with the access-control properties of separate links
• Manage logical networks instead of physical networks
• Virtual SANs provide similar benefits for storage-area networks
Server Virtualization

Before server virtualization:
 A single operating system image per machine
 Software and hardware tightly coupled
 Running multiple applications on the same machine often creates conflicts
 Underutilized resources

After server virtualization (apps and operating systems run above a virtualization layer):
 Virtual machines (VMs) break the dependencies between operating system and hardware
 Operating system and application are managed as a single unit by encapsulating them into a VM
 Strong fault and security isolation
 Hardware independence
Storage Virtualization

• The process of presenting a logical view of physical storage resources to hosts
• Logical storage appears and behaves as physical storage directly connected to the host
• Examples of storage virtualization:
– Host-based volume management
– LUN creation
– Tape virtualization
• Benefits of storage virtualization:
– Increased storage utilization
– Adding or deleting storage without affecting an application's availability
– Non-disruptive data migration

(The virtualization layer sits between the servers and the heterogeneous physical storage.)

Storage Virtualization
• The figure illustrates a virtualized storage environment.

• At the top are four servers, each of which has one virtual volume assigned, currently in use by an application.

• These virtual volumes are mapped to the actual storage in the arrays, as shown at the bottom of the figure.

• When I/O is sent to a virtual volume, it is redirected through the virtualization at the storage network layer to the mapped physical array.
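That redirection can be sketched as a toy mapping layer: the host addresses a virtual volume, and the layer forwards each block I/O to the physical array and LUN that back it. The hypothetical `VirtualizationLayer` class, array names, and block numbers below are invented for illustration; real systems do this in SAN switches or array controllers.

```python
class VirtualizationLayer:
    """Toy block-level storage virtualization: hosts see virtual
    volumes; only this layer knows the physical placement."""

    def __init__(self):
        self.map = {}      # virtual volume -> (array, lun)
        self.arrays = {}   # (array, lun) -> {block number: data}

    def provision(self, volume, array, lun):
        self.map[volume] = (array, lun)
        self.arrays[(array, lun)] = {}

    def write(self, volume, block, data):
        # I/O to the virtual volume is redirected to the mapped array.
        self.arrays[self.map[volume]][block] = data

    def read(self, volume, block):
        return self.arrays[self.map[volume]].get(block)

    def migrate(self, volume, new_array, new_lun):
        # Non-disruptive data migration: copy the blocks, then remap.
        # The host keeps addressing the same virtual volume throughout.
        old = self.map[volume]
        self.arrays[(new_array, new_lun)] = dict(self.arrays[old])
        self.map[volume] = (new_array, new_lun)

virt = VirtualizationLayer()
virt.provision("server1-vol", "arrayA", 0)
virt.write("server1-vol", 42, b"payroll")
virt.migrate("server1-vol", "arrayB", 5)   # host is unaware of the move
print(virt.read("server1-vol", 42))  # b'payroll'
```

The `migrate` step shows why the indirection matters: the physical data moved between arrays, yet the host's reads of the virtual volume are unaffected – the non-disruptive migration benefit listed above.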
Why storage virtualization?
Storage Virtualization aims to provide a layer of abstraction to
manage storage and reduce complexity !!!

• Provide continuous availability despite exponential growth (e.g. Facebook: over 55 billion page views a month, 41 million active users)

• Effectively group and manage heterogeneous storage devices and servers (e.g. estimated number of Google servers: 450,000)

• Allocate and manage storage in accordance with the Quality of Service (QoS) associated with the data (Gartner estimates the average data center doubles its storage every 18 to 24 months)

• Cope with mergers and acquisitions (e.g. Microsoft & Yahoo!) and multiple storage software platforms (e.g. IBM, EMC, HP, ...)
What are the innovations and fundamentals associated with storage?

Client-side storage innovations: a variety of storage devices that are smaller, higher capacity, and cheaper have helped end users cope with increasing storage requirements.
What are the innovations and fundamentals associated with storage?
• Server-side storage innovations: a combination of storage device, storage interface, and storage software innovations has helped enterprises cope with the exponential growth of data storage requirements.
• Storage devices have evolved from tapes to hard drives to RAID arrays, increasing capacity and resiliency.
What are the innovations and fundamentals associated with storage?
• Storage interfaces have evolved from SCSI to iSCSI, Fibre Channel (FCP), and InfiniBand to interconnect devices and transport data faster:
SCSI → iSCSI → FCP → InfiniBand
SNIA Storage Virtualization Taxonomy
• What is created: block virtualization, disk virtualization, tape/tape drive/tape library virtualization, file system/file/record virtualization, other device virtualization
• Where it is done: host-based virtualization, network-based virtualization, storage device/storage subsystem virtualization
• How it is implemented: in-band virtualization, out-of-band virtualization

• The SNIA (Storage Networking Industry Association) storage virtualization taxonomy provides a systematic classification of storage virtualization, with three levels defining what, where, and how storage can be virtualized.
• The first level of the taxonomy addresses “what” is created.
• It specifies the types of virtualization: block virtualization, file virtualization, disk virtualization, tape virtualization, or any other device virtualization. Block-level and file-level virtualization are the core focus areas covered later in this module.
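The taxonomy’s three levels can be captured as plain data. A minimal Python sketch, where the category labels simply mirror the SNIA levels listed above:

```python
# The three SNIA taxonomy levels as a small data structure (a sketch;
# labels mirror the taxonomy, the function name is illustrative).

SNIA_TAXONOMY = {
    "what is created": {
        "block", "disk", "tape/tape drive/tape library",
        "file system/file/record", "other device",
    },
    "where it is done": {
        "host-based", "network-based", "storage device/subsystem",
    },
    "how it is implemented": {"in-band", "out-of-band"},
}

def classify(what, where, how):
    """Check a (what, where, how) description against the taxonomy."""
    assert what in SNIA_TAXONOMY["what is created"]
    assert where in SNIA_TAXONOMY["where it is done"]
    assert how in SNIA_TAXONOMY["how it is implemented"]
    return (what, where, how)

# e.g., SAN-level block virtualization with an out-of-band appliance:
classify("block", "network-based", "out-of-band")
```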
Storage Virtualization Requires a Multi-Level Approach
• Server: path management, volume management, replication
• Storage network: path redirection, load balancing (ISL trunking), access control (zoning)
• Storage: volume management (LUNs), access control, replication, RAID
Storage Virtualization Configuration
• (a) Out-of-band: the virtualized environment configuration is stored external to the data path, on an appliance outside the storage network.
• (b) In-band: the virtualization function is placed in the data path, with the appliance between the servers and the storage arrays.
[Figure: servers, virtualization appliance, storage network, and storage arrays in (a) out-of-band and (b) in-band arrangements]
• In an out-of-band implementation, the virtualized environment configuration is stored external to the data path.
• The configuration is stored on the virtualization appliance, which is configured external to the storage network that carries the data.
• This configuration is also called split-path because the control and data paths are split (the control path runs through the appliance; the data path does not).
• This configuration enables the environment to process data at network speed, with only minimal latency added for translation of the virtual configuration to the physical storage.
• The data is not cached at the virtualization appliance beyond what would normally occur in a typical SAN configuration.
• Since the virtualization appliance is hardware-based and optimized for Fibre Channel communication, it can be scaled significantly. In addition, because the data is unaltered in an out-of-band implementation, many of the existing array features and functions can be utilized in addition to the benefits provided by virtualization.
• The in-band implementation places the virtualization function in the data path, as shown in the figure.
• General-purpose servers or appliances handle the virtualization and function as a translation engine from the virtual configuration to the physical storage.
• While processing, data packets are often cached by the appliance and then forwarded to the appropriate target.
• An in-band implementation is software-based, and storing and forwarding data through the appliance results in additional latency.
• It introduces a delay in application response time because the data remains in the network for some time before being committed to disk.
• In terms of infrastructure, the in-band architecture increases complexity and adds a new layer of virtualization (the appliance), while limiting the ability to scale the storage infrastructure.
• An in-band implementation is suitable for static environments with predictable workloads.
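The split-path distinction can be sketched in a few lines of Python. The classes below are hypothetical stand-ins, not real products: the out-of-band appliance resolves the virtual-to-physical mapping and then steps out of the data path, while the in-band appliance stores and forwards every data packet itself.

```python
# Hypothetical sketch of the two implementation styles; all names are
# illustrative, not a real product's API.

class Array:
    """Stand-in for a physical storage array."""
    def __init__(self):
        self.blocks = {}
    def write(self, lba, data):
        self.blocks[lba] = data

class OutOfBandAppliance:
    """Split-path: holds only the mapping; data I/O bypasses it."""
    def __init__(self, mapping):
        self.mapping = mapping            # virtual LBA -> (array, physical LBA)
    def resolve(self, vlba):
        return self.mapping[vlba]         # control path only

class InBandAppliance:
    """In the data path: every packet is stored and forwarded here."""
    def __init__(self, mapping):
        self.mapping = mapping
        self.cache = {}
    def write(self, vlba, data):
        self.cache[vlba] = data           # store-and-forward adds latency
        array, plba = self.mapping[vlba]
        array.write(plba, data)

# Out-of-band: the host resolves the mapping once, then writes directly.
arr_a = Array()
oob = OutOfBandAppliance({0: (arr_a, 42)})
array, plba = oob.resolve(0)
array.write(plba, b"payload")             # data never touches the appliance

# In-band: the appliance performs (and caches) the I/O on the host's behalf.
arr_b = Array()
ib = InBandAppliance({0: (arr_b, 7)})
ib.write(0, b"payload")
```

In both cases the data ends up on the mapped array; the difference is whether the appliance sits on the data path (and so adds latency) or only on the control path.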
Storage Virtualization Challenges
• Scalability
– Ensure storage devices continue to meet performance and capacity requirements
• Functionality
– Virtualized environment must provide the same or better functionality
– Must continue to leverage existing functionality on arrays
• Manageability
– Virtualization device breaks the end-to-end view of the storage infrastructure
– Must integrate with existing management tools
• Support
– Interoperability in a multivendor environment
Block-Level Storage Virtualization
• Ties together multiple independent storage arrays
– Presented to the host as a single storage device
– Mapping used to redirect I/O on this device to the underlying physical arrays
• Deployed in a SAN environment; virtualization is applied at the SAN level
• Non-disruptive data mobility and data migration
• Enables significant cost and resource optimization
[Figure: servers accessing heterogeneous storage arrays through virtualization applied at the SAN level]
File-Level Virtualization
Before file-level virtualization:
 Every NAS device is an independent entity, physically and logically
 Underutilized storage resources
 Downtime caused by data migrations
After file-level virtualization:
 Dependencies between end-user access and data location are broken
 Storage utilization is optimized
 Non-disruptive migrations
[Figure: clients on an IP network accessing file servers and storage arrays directly (before) vs. through a virtualization appliance (after)]
• Before virtualization, each NAS device or file server is physically and logically independent.
• Each host knows exactly where its file-level resources are located.
• Underutilized storage resources and capacity problems result because files are bound to a specific file server.
• It becomes necessary to move files from one server to another for performance reasons or when a file server fills up.
• Moving files across the environment is not easy and requires downtime for the file servers. Moreover, hosts and applications need to be reconfigured with the new path, making it difficult for storage administrators to improve storage efficiency while maintaining the required service level.
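A sketch of how a global namespace removes that path dependency (all names here are illustrative): clients resolve a stable logical path through the appliance, so data can migrate between NAS devices without client reconfiguration or downtime.

```python
# Illustrative sketch of a file-level virtualization namespace.

class GlobalNamespace:
    def __init__(self):
        self.location = {}              # logical path -> (server, physical path)

    def publish(self, logical, server, physical):
        self.location[logical] = (server, physical)

    def resolve(self, logical):
        return self.location[logical]

    def migrate(self, logical, new_server, new_physical):
        # Data moves between file servers, but the logical path clients
        # use never changes -- no client reconfiguration needed.
        self.location[logical] = (new_server, new_physical)

ns = GlobalNamespace()
ns.publish("/projects/report.doc", "nas1", "/vol1/report.doc")
ns.migrate("/projects/report.doc", "nas2", "/vol7/report.doc")
```

After the migration, a client resolving "/projects/report.doc" is transparently directed to the new server.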
VMware vSphere
• VMware vSphere leverages the power of virtualization to transform datacenters into simplified cloud computing infrastructures and enables IT organizations to deliver flexible and reliable IT services.
• vSphere simplifies IT by separating applications and operating systems (OSs) from the underlying hardware.
• Your existing applications see dedicated resources, but your servers can be managed as a pool of resources.
• VMware vSphere virtualizes and aggregates the underlying physical hardware resources across multiple systems.
• It provides pools of virtual resources to the datacenter.
• As a cloud operating system, VMware vSphere manages large collections of infrastructure (such as CPUs, storage, and networking) as a seamless and dynamic operating environment, and also manages the complexity of a datacenter.

• The following component layers make up VMware vSphere:

1) Infrastructure Services
2) Application Services
3) VMware vCenter Server
Infrastructure Services
• VMware vCompute: the VMware capabilities that abstract away the underlying, disparate server resources. vCompute services aggregate these resources across many discrete servers and assign them to applications.
• VMware vStorage: the set of technologies that enables the most efficient use and management of storage in virtual environments.
• VMware vNetwork: the set of technologies that simplify and enhance networking in virtual environments.
• Application Services: the set of services provided to ensure availability, security, and scalability for applications.
• VMware vCenter Server: provides a single point of control of the datacenter.
• It provides essential datacenter services such as access control, performance monitoring, and configuration.
• Clients: users can access the VMware vSphere datacenter through clients such as the vSphere Client or Web Access through a Web browser.
• VMware vSphere Client - allows users to remotely connect to ESXi or vCenter Server from any Windows PC.
• VMware vSphere Web Client - allows users to remotely connect to vCenter Server from a variety of Web browsers and operating systems (OSes).
• VMware vSphere SDKs - provide interfaces for accessing vSphere components.
• vSphere Virtual Machine File System (VMFS) - provides a high-performance cluster file system for ESXi VMs.
• vSphere Virtual SMP - allows a single virtual machine to use multiple physical processors at the same time.
• vSphere vMotion - allows live migration of powered-on virtual machines in the same datacenter.
• vSphere Storage vMotion - allows virtual disks or configuration files to be moved to a new datastore while a VM is running.
• vSphere High Availability (HA) - allows virtual machines to be restarted on other available servers.
• vSphere Distributed Resource Scheduler (DRS) - divides and balances computing capacity for VMs dynamically across collections of hardware resources.
• vSphere Storage DRS - divides and balances storage capacity and I/O dynamically across collections of datastores.
• vSphere Fault Tolerance - provides continuous availability.
• vSphere Distributed Switch (VDS) - allows VMs to maintain network configurations as they migrate across multiple hosts.
• VMware ESX and VMware ESXi: a virtualization layer run on physical servers that abstracts processor, memory, storage, and resources into multiple virtual machines.
• VMware ESX 4.0 contains a built-in service console. It is available as an installable CD-ROM boot image.
• VMware ESXi 4.0 does not contain a service console. It is available in two forms: VMware ESXi 4.0 Embedded and VMware ESXi 4.0 Installable.
• ESXi 4.0 Embedded is firmware that is built into a server’s physical hardware.
• ESXi 4.0 Installable is software that is available as an installable CD-ROM boot image. You install the ESXi 4.0 Installable software onto a server’s hard drive.
Physical Topology of vSphere Datacenter
• Computing servers: industry-standard x86 servers that run ESX/ESXi on the bare metal.
• ESX/ESXi software provides resources for and runs the virtual machines.
• Each computing server is referred to as a standalone host in the virtual environment.
• You can group a number of similarly configured x86 servers with connections to the same network and storage subsystems to provide an aggregate set of resources in the virtual environment, called a cluster.
• Storage networks and arrays: Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware vSphere to meet different datacenter storage needs.
• The storage arrays are connected to and shared between groups of servers through storage area networks.
• This arrangement allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.
• IP networks: each computing server can have multiple Ethernet network interface cards (NICs) to provide high bandwidth and reliable networking to the entire VMware vSphere datacenter.
Workloads Virtualized
• 2008: 25% | 2012: 60% | Future: >90%

The Next Big Thing
• Then: server virtualization. Now: the software-defined datacenter, spanning storage/availability, servers, networking, security, and management/monitoring.
• Time to deliver datacenter services: days/weeks (2008), minutes/hours (2012), seconds (future). Business demands everything now.

SOFTWARE-DEFINED DATACENTER
All infrastructure is virtualized and delivered as a service, and the control of this datacenter is entirely automated by software.
BEYOND VIRTUALIZATION
• One technology, server virtualization, has literally transformed the corporate data center into a more adaptable and efficient platform for business applications.
• More than 50 percent of all applications run on virtual machines.
• Virtualization, an innovation pioneered by VMware, has delivered enormous benefits such as reduced capital spending, greater asset utilization, and enhanced IT productivity.
• Moreover, virtualization is considered an indispensable software component of the corporate IT infrastructure.
• However, in many data centers, the benefits of server virtualization have “stalled,” a source of frustration to IT executives.
BEYOND VIRTUALIZATION
• While virtualization delivers impressive initial productivity boosts, continuing results often do not meet management’s expectations for further improvements in IT asset use and operational efficiency.
• This stall often occurs as the rapid expansion of virtual server deployments threatens to overload storage and data network facilities, resulting in overprovisioning of storage capacity and sharply increased administration workloads.
SDDC
• In a sense, the SDDC is simply the logical extension of server virtualization.
• Analogous to the way that server virtualization dramatically maximizes the deployment of computing power, the SDDC does the same for all of the resources needed to host an application, including storage, networking, and security.
SDDC
• In the past, each new application required a dedicated server, which could take up to 10 weeks to deploy.
• Today, server virtualization allows a virtual machine to be provisioned within minutes.
• However, the other resources needed by the application (storage, network, security) are physical, not virtual, so they take much longer to deploy, on the order of a week or more.
• Worse, provisioning these physical resources consumes a great deal of IT time, which would be better spent on strategic initiatives.
• In a very real sense, the full potential of server virtualization cannot be realized when other resources are physical.
SDDC
• In the SDDC, all resources are virtualized so they can be automatically deployed, with little or no human involvement.
• Applications can be operational in minutes, shortening time to value and dramatically reducing IT staff time spent on application provisioning and deployment.
Benefits
Faster Time to Value
• The data center today is more than just a cost center; it is viewed by business users and executives as a competitive differentiator and strategic asset.
• Meeting and exceeding these expectations requires an automated infrastructure that can provision resources in minutes, not weeks, so that key applications are up and running quickly and delivering business value.
• In the SDDC, resources are deployed automatically from pools, speeding the time to application rollout and providing an unprecedented degree of flexibility in the data center architecture.
• As a result, the organization has the agility to respond quickly to changes in the marketplace and gain competitive advantage.
Minimize IT Spend
• By pooling and intelligently assigning resources, the software-defined data center maximizes the utilization of the physical infrastructure, extending the value of investments.
Eliminate Vendor Lock-in
• Today’s data center features a staggering array of custom hardware in routers, switches, storage controllers, firewalls, intrusion detection, and other components.
• In the SDDC, all of these functions are performed by software running on commodity x86 servers.
• Instead of being locked in to a vendor’s hardware, IT managers can buy commodity servers in quantity through a competitive bid process.
• This shift not only saves money, but also avoids situations where problems in the vendor’s manufacturing process or supply chain result in delivery delays and impact data center operations.
Unmatched Efficiency and Resiliency
• The SDDC provides a flexible and stable platform for any application, including innovative services such as high-performance computing, big data (Hadoop), and latency-sensitive applications.
• Provisioning and management are automated by programmable, policy-based software.
• Changes are made and workloads balanced by adjusting the software layer rather than the hardware.
• When a failure occurs, the management software in the SDDC automatically redirects workloads to other servers anywhere in the datacenter, minimizing service-level recovery time and avoiding outages.
Comparison of Different Hypervisors
Introduction
• To find the best hypervisor technology, first decide whether you want a hosted or bare-metal virtualization hypervisor.
• No single hypervisor always outperforms the others.
• This suggests that effectively managing hypervisor diversity, in order to match applications to the best platform, is important.
Introduction
• There are numerous virtualization platforms, ranging from open-source hypervisors such as KVM and Xen to commercial hypervisors such as VMware vSphere and Microsoft Hyper-V.
• The choice of hypervisor does not only apply to an enterprise’s private data center: different cloud services make use of different virtualization platforms.
• Amazon EC2, the largest infrastructure cloud, uses Xen as a hypervisor, but Microsoft Azure uses Hyper-V and VMware partners use ESX. Recently, Google launched its own IaaS cloud that uses KVM as a hypervisor.
• Once you choose the type of hypervisor that fits your needs, you need to choose the best hypervisor technology.
Bare-metal virtualization hypervisors

VMware ESX and ESXi
• VMware has the most mature hypervisor technology by far, offering advanced features and scalability.
• However, VMware’s bare-metal virtualization hypervisor can be expensive to implement because of its higher licensing costs.
• The vendor does offer a free version of ESXi, but it’s very limited and has none of the advanced features of the paid editions.
• VMware also offers lower-cost bundles that can make hypervisor technology more affordable for small infrastructures.
Bare-metal virtualization hypervisors

Microsoft Hyper-V
• Microsoft Hyper-V has emerged as a serious competitor to VMware ESX and ESXi.
• Hyper-V lacks many of the advanced features that VMware’s broad product line provides.
• But with its tight Windows integration, Microsoft’s hypervisor technology may be the best hypervisor for organizations that don’t require a lot of bells and whistles.
Bare-metal virtualization hypervisors

Citrix XenServer
• Citrix XenServer is a mature platform that began as an open source project.
• The core hypervisor technology is free, but like VMware’s free ESXi, it has almost no advanced features.
• Citrix has several paid editions of XenServer that offer advanced management, automation, and availability features.
• But despite offering a stable bare-metal virtualization hypervisor, Citrix struggles to compete with Microsoft and VMware on hypervisor technology.
Bare-metal virtualization hypervisors

Oracle VM
• Oracle VM is Oracle’s homegrown hypervisor technology based on open source Xen.
• If you want hypervisor support and product updates, though, it will cost you.
• A simple, no-frills hypervisor, Oracle VM lacks many of the advanced features found in other bare-metal virtualization hypervisors.
• As with XenServer, the development cycle of Oracle VM is longer and limited, which makes it hard to compete with VMware and Hyper-V.
• One advantage of Oracle VM, though, is that it’s certified with most of Oracle’s other products and therefore includes no-hassle support.
Hosted virtualization hypervisors

VMware Workstation/Fusion/Player
• VMware Player is a free virtualization hypervisor.
• This hypervisor technology can only run a single virtual machine (VM) and does not allow you to create VMs.
• VMware Workstation is a more robust hypervisor with some advanced features, such as record-and-replay and VM snapshot support, suited to developers who need sandbox environments and snapshots, or to labs and demonstration purposes.
• VMware Fusion is the Mac version of Workstation, which only costs $89 but lacks some of the features and abilities of Workstation.
• This hypervisor technology is better suited for running Windows and Linux on Macs.
Hosted virtualization hypervisors

VMware Server
• VMware Server is a free, hosted virtualization hypervisor that’s very similar to VMware Workstation.
• However, VMware Server lacks some of the features of Workstation and only supports a single snapshot per VM.
• This hypervisor technology is designed to run headless with a network-based administration utility and is optimized for running more server-like workloads.
• VMware has halted development on Server since 2009, but it works well as a no-frills hosted hypervisor and is an easy alternative to using the free version of ESXi.
Hosted virtualization hypervisors

Oracle VM VirtualBox
• Oracle VM VirtualBox is a mature virtualization hypervisor that’s suitable for many needs and use cases.
• VirtualBox hypervisor technology provides reasonable performance and features if you want to virtualize on a budget.
• Despite being a free, hosted product with a very small footprint, VirtualBox shares many features with VMware vSphere and Microsoft Hyper-V.
• Oracle VM VirtualBox provides a decent alternative to more expensive hypervisors for both server and desktop virtualization.
Hosted virtualization hypervisors

Red Hat Enterprise Virtualization
• Red Hat’s Kernel-based Virtual Machine (KVM) has qualities of both a hosted and a bare-metal virtualization hypervisor.
• KVM turns the Linux kernel itself into a hypervisor so VMs have direct access to the physical hardware.
• KVM in Red Hat Enterprise Virtualization offers many enterprise-level features and comes with a Windows-based management server for managing multiple KVM hosts.
• This hypervisor technology is not free, however, and while KVM has enterprise features and scalability, it lacks some of the more advanced features and application programming interfaces that VMware and Microsoft offer.
Hosted virtualization hypervisors

Parallels Desktop
• Parallels is known for its popular Parallels Desktop for Mac hypervisor, which is very similar to VMware Fusion.
• Parallels also has a desktop version of its hypervisor technology that runs on both Windows and Linux.
• Plus, it has a more powerful edition called Parallels Server for Mac, which has greater scalability and more advanced features.
• Parallels’ hypervisors are also pretty mature, having been first launched in 2005. They offer a very low-cost, feature-rich hosted hypervisor that can be used for a variety of purposes.
Introduction to Virtual Machines
Introduction to Virtual Machines
• The key to managing complexity in computer systems is their division into levels of abstraction separated by well-defined interfaces.
• Levels of abstraction allow implementation details at lower levels of a design to be ignored or simplified, thereby simplifying the design of components at higher levels.
• The details of a hard disk, for example that it is composed of sectors and tracks, are abstracted by the operating system so the disk appears to application software as a set of variable-sized files.
• An application programmer can then create, write, and read files without knowledge of the way the hard disk is constructed and organized.
Introduction to Virtual Machines
• The levels of abstraction are arranged in a hierarchy, with lower levels implemented in hardware and higher levels in software.
• In the hardware levels, all the components are physical, have real properties, and their interfaces are defined so that the various parts can be physically connected.
• In the software levels, components are logical, with fewer restrictions based on physical characteristics.
• We are most concerned with the abstraction levels that are at or near the hardware/software boundary.
• These are the levels where software is separated from the machine on which it runs.
Introduction to Virtual Machines
• From the perspective of the operating system, a machine is largely composed of hardware, including one or more processors that run a specific instruction set, some real memory, and I/O devices.
• From the perspective of application programs, the machine is a combination of the operating system and those portions of the hardware accessible through user-level binary instructions.
Introduction to Virtual Machines
• Let us now turn to the other aspect of managing complexity: the use of well-defined interfaces.
• Well-defined interfaces allow computer design tasks to be decoupled so that teams of hardware and software designers can work more or less independently.
• The instruction set is one such interface. Processor designers, say at Intel, develop microprocessors that implement the Intel IA-32 instruction set.
• Meanwhile, software engineers at Microsoft develop compilers that map high-level languages to the same instruction set.
• As long as both groups satisfy the instruction set specification, compiled software will execute correctly on a machine incorporating an IA-32 microprocessor.
Introduction to Virtual Machines
• As the Intel/Microsoft example suggests, well-defined interfaces permit development of interacting computer subsystems at different companies and at different times, sometimes years apart.
• Application software developers do not need to be aware of detailed changes inside the operating system, and hardware and software can be upgraded according to different schedules.
• Software can run on different platforms implementing the same instruction set.
Introduction to Virtual Machines
• Despite their many advantages, well-defined interfaces can also be confining. Subsystems and components designed to specifications for one interface will not work with those designed for another.
• There are processors with different instruction sets (e.g., Intel IA-32 and IBM PowerPC), and there are different operating systems (e.g., Windows and Linux).
Introduction to Virtual Machines
• Many operating systems are developed for a specific system architecture, e.g., for a uniprocessor or a shared-memory multiprocessor, and are designed to manage hardware resources directly.
• The implicit assumption is that the hardware resources of a system are managed by a single operating system.
• This binds all hardware resources into a single entity under a single management regime.
• And this, in turn, limits the flexibility of the system, not only in terms of available software (as discussed above), but also in terms of security and failure isolation, especially when the system is shared by multiple users or groups of users.
Introduction to Virtual Machines
• Virtualization provides a way of relaxing the above constraints and increasing flexibility.
• When a system (or subsystem), e.g., a processor, memory, or I/O device, is virtualized, its interface and all resources visible through the interface are mapped onto the interface and resources of a real system actually implementing it.
• Consequently, the real system is transformed so that it appears to be a different, virtual system, or even a set of multiple virtual systems.
• Formally, virtualization involves the construction of an isomorphism that maps a virtual guest system to a real host.
Introduction to Virtual Machines
• Consider again the example of a hard disk. In some applications, it may be desirable to partition a single large hard disk into a number of smaller virtual disks.
• The virtual disks are mapped to a real disk by implementing each of the virtual disks as a single large file on the real disk.
• Virtualizing software provides a mapping between virtual disk contents and real disk contents (the function V in the isomorphism), using the file abstraction as an intermediate step.
• Each of the virtual disks is given the appearance of having a number of logical tracks and sectors (although fewer than in the large disk).
• A write to a virtual disk (the function e in the isomorphism) is mirrored by a file write and a corresponding real disk write in the host system (the function e′ in the isomorphism).
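The isomorphism can be illustrated with a short Python sketch, assuming a hypothetical VirtualDisk class: a guest’s sector write (e) is mapped through V (sector number to file offset) onto a file write on the host (e′).

```python
# Sketch of a virtual disk implemented as a single file on the real disk
# (class and constant names are illustrative).

import io

SECTOR = 512

class VirtualDisk:
    def __init__(self, backing):
        self.backing = backing                # file-like object on the real disk

    def write_sector(self, n, data):          # e: the guest's virtual-disk write
        assert len(data) == SECTOR
        self.backing.seek(n * SECTOR)         # V: virtual sector -> file offset
        self.backing.write(data)              # e': the mirrored real-disk write

    def read_sector(self, n):
        self.backing.seek(n * SECTOR)
        return self.backing.read(SECTOR)

# In-memory stand-in for the backing file; on a real host this would be
# something like open("disk0.img", "r+b").
disk = VirtualDisk(io.BytesIO())
disk.write_sector(3, b"\xab" * SECTOR)
```

The guest sees tracks and sectors; the host sees only ordinary file I/O.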
Introduction to Virtual Machines
• The concept of virtualization can be applied not only to subsystems such as disks, but to an entire machine.
• A virtual machine (VM) is implemented by adding a layer of software to a real machine to support the desired virtual machine’s architecture.
• For example, virtualizing software installed on an Apple Macintosh can provide a Windows/IA-32 virtual machine capable of running PC application programs.
• In general, a virtual machine can circumvent real-machine compatibility constraints and hardware resource constraints to enable a higher degree of software portability and flexibility.
Introduction to Virtual Machines
• There is a wide variety of virtual machines that provide an equally wide variety of benefits.
• Multiple, replicated virtual machines can be implemented on a single hardware platform to provide individuals or user groups with their own operating system environments. The different system environments (possibly with different operating systems) also provide isolation and enhanced security.
• A large multiprocessor server can be divided into smaller virtual servers while retaining the ability to balance the use of hardware resources across the system.
Introduction to Virtual Machines
• Virtual machines can also employ emulation techniques to support cross-platform software compatibility.
• For example, a platform implementing the PowerPC instruction set can be converted into a virtual platform running the IA-32 instruction set. Consequently, software written for one platform will run on the other.
• This compatibility can be provided either at the system level (e.g., to run the Windows OS on a Macintosh) or at the program or process level (e.g., to run Excel on a Sun Solaris/SPARC platform).
• In addition to emulation, virtual machines can also provide dynamic, on-the-fly optimization of program binaries.
• Finally, through emulation, virtual machines can enable new, proprietary instruction sets, e.g., incorporating VLIWs, while supporting programs in an existing, standard instruction set.
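At its core, emulation interprets guest instructions in host software. A toy sketch, where the three-opcode guest instruction set is invented purely for illustration:

```python
# Toy sketch of emulation: a tiny interpreter mapping a made-up "guest"
# instruction set onto the host (all opcodes are invented).

def emulate(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "li":                   # load immediate: li dest, value
            regs[args[0]] = args[1]
        elif op == "add":                # add src into dest: add dest, src
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unknown guest opcode: {op}")
    return regs

guest = [("li", "r0", 2), ("li", "r1", 40), ("add", "r0", "r1")]
result = emulate(guest)
```

Real emulators use the same principle but speed it up with binary translation, converting blocks of guest instructions into host instructions instead of interpreting one at a time.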
Introduction to Virtual Machines
• Virtual machines have been investigated and built by operating system developers, language designers, compiler developers, and hardware designers.
• Although each application of virtual machines has its unique characteristics, there are also underlying concepts and technologies that are common across the spectrum of virtual machines.
• Because the various virtual machine architectures and underlying technologies have been developed by different groups, it is especially important to unify this body of knowledge and understand the base technologies that cut across the various forms of virtual machines.
• The goals are to describe the family of virtual machines in a unified way and to discuss the common concepts and technologies that underlie them.
• There are two parts of an ISA that are important in the definition of virtual machines.
• The first part includes those aspects of the ISA that are visible to an application program. This will be referred to as the user ISA.
• The second part includes those aspects that are visible only to supervisor software, such as the operating system, which is responsible for managing hardware resources. This is the system ISA.
• Of course, the supervisor software can also employ all the elements of the user ISA.
• The ABI includes the user ISA only; the full ISA interface consists of both the user and system ISA.
• The Application Binary Interface (ABI) provides a program with access to the hardware resources and services available in a system and has two major components.
• The first is the set of all user instructions; system instructions are not included in the ABI.
• At the ABI level, all application programs interact with the shared hardware resources indirectly, by invoking the operating system via a system call interface, which is the second component of the ABI.
• System calls provide a specific set of operations that an operating system may perform on behalf of a user program.
• The Application Programming Interface (API) is usually defined with respect to a high-level language (HLL).
• A key element of an API is a standard library (or libraries) that an application calls to invoke various services available on the system, including those provided by the operating system.
Major Program Interfaces
• ISA interface: supports all conventional software. It consists of the user ISA plus the system ISA, and separates the hardware from the software stack of application software, system calls, and the operating system.
• Application Binary Interface (ABI): supports application software only. It consists of the user ISA plus the system call interface exposed by the operating system.
Introduction 308
Abstraction
• Computer systems are built on levels of abstraction.
• Higher levels of abstraction hide details at lower levels.
• Example: files are an abstraction of a disk.
Introduction 309
Virtualization
• An isomorphism from guest to host:
– Map guest state to host state: V(Si)
– Implement "equivalent" functions: a guest operation e(Si) takes guest state Si to Sj, and a corresponding host operation e'(Si') takes the mapped host state Si' = V(Si) to Sj' = V(Sj)
Introduction 310
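The commuting diagram above can be sketched in a few lines of Python (all names are illustrative, not part of any real VMM): a mapping V from guest state to host state, a guest operation e, and a host operation e' chosen so that mapping then operating gives the same result as operating then mapping.

```python
# Sketch of the virtualization isomorphism (illustrative names, not a real VMM).
# Guest state: a single register value. Host state: that value in a dict slot.

def V(guest_state):
    """Map guest state to its host representation."""
    return {"slot": guest_state}

def e(guest_state):
    """A guest operation: increment the register."""
    return guest_state + 1

def e_prime(host_state):
    """The equivalent host operation on the mapped state."""
    return {"slot": host_state["slot"] + 1}

# The diagram commutes: V(e(Si)) equals e'(V(Si)).
s_i = 41
assert e_prime(V(s_i)) == V(e(s_i))  # both yield {"slot": 42}
```

The point of the sketch is only that virtualization must both map state and implement equivalent functions; a real VMM does this for registers, memory, and I/O state rather than a single value.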
Virtualization
• Similar to abstraction, except that details are not necessarily hidden.
• Example: construct virtual disks as files on a larger disk; map the state and implement the functions.
• VMs do the same thing with the whole "machine".
Introduction 311
The "Machine"
• There are different perspectives on what the "machine" is, depending on who is looking at the stack of application programs, libraries, operating system, execution hardware, memory translation, system interconnect (bus), I/O devices and networking, and main memory:
– the OS developer
– the compiler developer
– the application programmer
Introduction 312
The "Machine"
• OS developer's perspective: the machine is defined by the ISA, the major division between hardware and software.
Introduction 313
The "Machine"
• Compiler developer's perspective: the machine is defined by the ABI, the user ISA plus OS (system) calls.
Introduction 314
The "Machine"
• Application programmer's perspective: the machine is defined by the API, the user ISA plus library calls.
Introduction 315
Virtual Machines
• Add virtualizing software to a host platform to support a guest process or system on a virtual machine (VM).
• Example: in a system virtual machine, the guest (applications plus OS) runs on a virtual machine presented by the VMM (the virtualizing software), which in turn runs on the host hardware (the host "machine").
Introduction 316
The Family of Virtual Machines
• Many things are called "virtual machines": IBM VM/370, Java, VMware.
• Some things not called "virtual machines" are virtual machines, e.g., IA-32 EL.
Introduction 317
• As characterized by the isomorphism described earlier, the process of virtualization consists of two parts:
• 1) the mapping of virtual resources or state, e.g., registers, memory, or files, to real resources in the underlying machine, and
• 2) the use of real machine instructions and/or system calls to carry out the actions specified by virtual machine instructions and/or system calls, e.g., emulation of the virtual machine ABI or ISA.
Process VMs
• Just as there is a process perspective and a system perspective of machines, there are also process-level and system-level virtual machines.
• As the name suggests, a process virtual machine is capable of supporting an individual process.
• In process VMs, the virtualizing software is placed at the ABI interface, on top of the OS/hardware combination.
• The virtualizing software emulates both user-level instructions and operating system calls.
Process VMs
• We usually refer to the underlying platform as the host, and the software that runs in the VM environment as the guest.
• The real platform that corresponds to a virtual machine, i.e., the real machine being emulated by the virtual machine, is referred to as the native machine.
• The name given to the virtualizing software depends on the type of virtual machine being implemented.
• In process VMs, virtualizing software is often referred to as the runtime, which is short for "runtime software".
• The runtime is created to support a guest process and runs on top of an operating system. The VM supports the guest process as long as the guest process executes, and terminates support when the guest process terminates.
Process VMs
• Execute application binaries with an ISA different from that of the hardware platform.
• Couple at the ABI level via a runtime system: the guest application process runs on the runtime (virtualizing software), which runs on the host machine's OS and hardware.
• Examples: IA-32 EL, FX!32.
Introduction 321

System Virtual Machines
• A system virtual machine provides a complete system environment.
• This environment can support an operating system along with its potentially many user processes.
• It provides a guest operating system with access to underlying hardware resources, including networking, I/O, and, on the desktop, a display and graphical user interface.
• The VM supports the operating system as long as the system environment is alive.
• Virtualizing software is placed between the underlying hardware machine and conventional software.
• In this particular example, virtualizing software emulates the hardware ISA so that conventional software "sees" a different ISA than the one supported by hardware.
• In many system VMs, however, the guest and host run the same ISA. In system VMs, the virtualizing software is often referred to as the Virtual Machine Monitor (VMM), a term coined when the VM concept was first developed in the late 1960s.
System Virtual Machines
• Native VM system
– The VMM runs in privileged mode; the guest OS runs in user (non-privileged) mode.
– Example: classic IBM VMs.
• User-mode hosted VM
– The VMM runs as a user application on top of a host OS.
• Dual-mode hosted VM
– Parts of the VMM are privileged, parts non-privileged.
– Example: VMware.
Introduction 324
Emulation
• Emulation adds considerable flexibility by permitting "mix and match" cross-platform software portability.
• In this example (Figure 8a), one ISA is emulated by another. Virtualizing software can enhance emulation with optimization, by taking implementation-specific information into consideration as it performs emulation.
• Virtualizing software can also provide resource replication, for example by giving a single hardware platform the appearance of multiple platforms (Figure 8b), each capable of running a complete operating system and/or a set of applications.
• Finally, the virtual machine functions can be composed (Figure 8c) to form a wide variety of architectures, freed of many of the traditional compatibility and resource constraints.
Process VMs
• Process-level VMs provide user applications with a virtual ABI environment.
• In their various implementations, process VMs can provide replication, emulation, and optimization.
Multiprogramming
• The first and most common virtual machine is so ubiquitous that we don't even think of it as being a virtual machine.
• The combination of the OS call interface and the user instruction set forms the machine that executes a user process.
• Most OSes can simultaneously support multiple user processes through multiprogramming, where each user process is given the illusion of having a complete machine to itself.
• Each process is given its own address space and is given access to a file structure.
• The operating system timeshares the hardware and manages underlying resources to make this possible.
• In effect, the operating system provides a replicated process-level virtual machine for each of the concurrently executing applications.
Emulators and Dynamic Binary Translators
• A more challenging problem for process-level virtual machines is to support program binaries compiled to a different instruction set than the one executed by the host's hardware, i.e., to emulate one instruction set on hardware designed for another.
• Consider an emulating process virtual machine: application programs are compiled for a source ISA, but the hardware implements a different target ISA.
• The operating system may be the same for both the guest process and the host platform, although in other cases the OSes may differ as well.
• An example is the Digital FX!32 system (Hookway and Herdeg 1997), which can run Intel IA-32 application binaries compiled for Windows NT on an Alpha hardware platform also running Windows NT.
• The most straightforward emulation method is interpretation.
• An interpreter program executing the target ISA fetches, decodes, and emulates the execution of individual source instructions.
• This can be a relatively slow process, requiring tens of native target instructions for each source instruction interpreted.
• For better performance, binary translation is typically used.
• With binary translation, blocks of source instructions are converted to target instructions that perform equivalent functions.
• There can be a relatively high overhead associated with the translation process, but once a block of instructions is translated, the translated instructions can be cached and repeatedly executed much faster than they can be interpreted.
• Because binary translation is the most important feature of this type of process virtual machine, they are sometimes called dynamic binary translators.
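The interpretation and translation-with-caching ideas above can be sketched in Python with a toy two-instruction source ISA (all names are illustrative; real translators emit native machine code, not closures):

```python
# Minimal sketch of interpretation vs. binary translation with a code cache.
# The "source ISA" is a toy set of (opcode, operand) pairs.

def interpret(block, state):
    """Fetch, decode, and execute one source instruction at a time (slow path)."""
    for op, arg in block:
        if op == "ADD":
            state += arg
        elif op == "MUL":
            state *= arg
    return state

translation_cache = {}

def translate(block):
    """Convert a block of source instructions into a host function once, then
    cache it so repeated executions skip fetch/decode entirely."""
    key = tuple(block)
    if key not in translation_cache:
        ops = list(block)
        def translated(state):      # stands in for generated target code
            for op, arg in ops:
                state = state + arg if op == "ADD" else state * arg
            return state
        translation_cache[key] = translated
    return translation_cache[key]

block = [("ADD", 3), ("MUL", 2)]
assert interpret(block, 1) == 8               # interpreted result
assert translate(block)(1) == 8               # same result via translation
assert translate(block) is translate(block)   # second lookup hits the cache
```

The cache is what pays back the translation overhead: the block is decoded once, then every later execution runs the translated version directly.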
Same-ISA Binary Optimizers
• Most dynamic binary translators not only translate from source to target code; they also perform some code optimizations.
• This leads naturally to virtual machines where the instruction sets used by the host and the guest are the same, and optimization of a program binary is the primary purpose of the virtual machine.
• Thus, same-ISA dynamic binary optimizers are implemented in a manner very similar to emulating virtual machines, including staged optimization and software caching of optimized code.
High Level Language Virtual Machines
• Raise the level of abstraction:
– use a higher-level virtual ISA
– the OS is abstracted as standard libraries
• A process VM (or API VM).
• Traditional path: HLL program → compiler front-end → intermediate code → compiler back-end → object code (ISA) → loader → memory image.
• HLL VM path: HLL program → compiler → portable code (virtual ISA) → VM loader → virtual memory image → VM interpreter/translator → host instructions.
Introduction 333
Co-Designed VMs
• Perform both translation and optimization.
• The VM provides the interface between standard-ISA software (e.g., x86 applications on Windows) and the implementation ISA (e.g., a VLIW).
• The primary goal is performance or power efficiency.
• Use a proprietary implementation ISA.
• Transmeta Crusoe and IBM Daisy are the best-known examples.
Introduction 334
Composition
• Virtual machines can be composed in stacks: applications and an OS built for one ISA can run on a virtual machine that is itself implemented on a different ISA (ISA 2).
Introduction 335
Composition: Example
• A Java application runs on a JVM, which runs on Linux x86, which runs on VMware, which runs on Windows x86, which runs on Code Morphing software, which runs on a Crusoe VLIW.
Introduction 336
Summary (Taxonomy)
VM type (process or system); host/guest ISA same or different:
• Process VMs, same ISA: multiprogrammed systems, HP Dynamo
• Process VMs, different ISA: IA-32 EL, FX!32, Java VM, MS CLI
• System VMs, same ISA: IBM VM/370, VMware
• System VMs, different ISA: Virtual PC for Mac, Transmeta Crusoe
Introduction 337
Backup and Recovery
• In today's world, continuous access to information is a must for the smooth functioning of business operations.
• The cost of unavailability of information is greater than ever, with outages in key industries costing millions of dollars per hour.
• Business continuity (BC) is an integrated and enterprise-wide process that includes all activities (internal and external to IT) that a business must perform to mitigate the impact of planned and unplanned downtime.
• The goal of a business continuity solution is to ensure the "information availability" required to conduct vital business operations.
BC Terminologies
Disaster Recovery (DR):
• DR is the coordinated process of restoring systems, data, and the infrastructure required to support key ongoing business operations in the event of a disaster.
• It is the process of restoring and/or resuming business operations from a consistent copy of the data.
• After all recoveries are completed, the data is validated to ensure that it is correct.
Hot site:
• A site to which an enterprise's operations can be moved in the event of a disaster.
• It is equipped with all the required hardware, operating system, application, and network support needed to perform business operations, and the equipment is available and running at all times.
Cold site:
• A site to which an enterprise's operations can be moved in the event of a disaster.
• It has minimal IT infrastructure and environmental facilities in place, but they are not activated.
Cluster:
• A group of servers and other necessary resources coupled to operate as a single system.
• Clusters ensure high availability and load balancing.
• Typically, in failover clusters, one server runs an application and updates the data, and another is kept as a standby to take over completely when required. In more sophisticated clusters, multiple servers may access the data, while typically one server is kept as a standby.
RTO and RPO
• Recovery Point Objective (RPO): the point in time to which systems and data must be recovered after an outage; the amount of data loss that a business can endure.
• Recovery Time Objective (RTO): the time within which systems, applications, or functions must be recovered after an outage; the amount of downtime that a business can endure and survive.
RPO versus recovery strategy: weeks (tape backup), days (periodic replication), hours (asynchronous replication), seconds (synchronous replication).
RTO versus recovery strategy: weeks (tape restore), days (disk restore), hours/minutes (manual migration), seconds (global cluster).
341
RPO
• For example, if the RPO is six hours, backups or replicas must be made at least once every six hours.
• The figure shows various RPOs and their corresponding ideal recovery strategies.
• An organization may plan for an appropriate BC technology solution on the basis of the RPO it sets.
• For example, an RPO of 24 hours means that backups are created on an offsite tape drive every midnight. The corresponding recovery strategy is to restore data from the last set of backup tapes.
• Similarly, for a zero RPO, data is mirrored synchronously to a remote site.
RTO
• For example, if the RTO is two hours, then use a disk backup, because it enables a faster restore than a tape backup.
• However, for an RTO of one week, tape backup will most likely meet requirements.
• A few examples of RTOs and the recovery strategies that ensure data availability are listed below:
– RTO of 72 hours: restore from backup tapes at a cold site.
– RTO of 12 hours: restore from tapes at a hot site.
– RTO of 4 hours: use a data vault to a hot site.
– RTO of 1 hour: cluster production servers with controller-based disk mirroring.
– RTO of a few seconds: cluster production servers with bi-directional mirroring, enabling the applications to run at both sites simultaneously.
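The RTO examples above can be expressed as a simple lookup: pick the least demanding (and typically cheapest) strategy whose supported RTO still meets the target. The thresholds and strategy names are taken from the list; the function itself is purely illustrative.

```python
# Illustrative mapping from RTO targets (in hours) to recovery strategies,
# following the slide's examples. Not a product feature, just a lookup.

RTO_STRATEGIES = [
    (72, "Restore from backup tapes at a cold site"),
    (12, "Restore from tapes at a hot site"),
    (4,  "Use a data vault to a hot site"),
    (1,  "Cluster production servers with controller-based disk mirroring"),
    (0,  "Cluster production servers with bi-directional mirroring"),
]

def strategy_for_rto(rto_hours):
    """Return the first (least demanding) strategy whose supported RTO
    fits within the target; stricter targets need stronger strategies."""
    for supported_rto, strategy in RTO_STRATEGIES:
        if rto_hours >= supported_rto:
            return strategy
    return RTO_STRATEGIES[-1][1]

assert strategy_for_rto(72) == "Restore from backup tapes at a cold site"
assert strategy_for_rto(2) == "Cluster production servers with controller-based disk mirroring"
```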
BC Technology Solutions
The following solutions and supporting technologies enable business continuity and uninterrupted data availability:
• Eliminating single points of failure
• Multi-pathing software
• Backup
– Backup/restore
• Replication
– Local replication
– Remote replication
Classic Data Center 344
Backup and Recovery
• A backup is an additional copy of data that can be used for restore and recovery purposes.
• The backup copy is used when the primary copy is lost or corrupted.
• The backup copy can be created by:
– simply copying data (there can be one or more copies)
– mirroring data
• The purposes of backup are:
– disaster recovery
– operational backup
– archival
Classic Data Center 345
Backup Granularity
• Full backup: the entire data set is copied (e.g., every Sunday).
• Cumulative (differential) backup: all data changed since the last full backup is copied, so the amount of data backed up grows each day until the next full backup.
• Incremental backup: only the data changed since the last backup of any kind is copied, so each daily backup stays small.
Classic Data Center 346
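The three granularities can be sketched as set computations over file modification times. This is a toy model (real backup software tracks changes via the backup catalog, not raw timestamps); the file names and times are illustrative.

```python
# Sketch of which files each backup level copies, keyed on modification time.
# Full: everything. Cumulative (differential): changed since the last FULL.
# Incremental: changed since the last backup of ANY kind.

def full(files):
    return set(files)

def cumulative(files, mtimes, last_full):
    return {f for f in files if mtimes[f] > last_full}

def incremental(files, mtimes, last_backup):
    return {f for f in files if mtimes[f] > last_backup}

files = ["a", "b", "c"]
mtimes = {"a": 1, "b": 5, "c": 9}   # "a" untouched since the full backup
last_full, last_backup = 2, 7       # full on day 2, an incremental on day 7

assert full(files) == {"a", "b", "c"}
assert cumulative(files, mtimes, last_full) == {"b", "c"}   # grows until next full
assert incremental(files, mtimes, last_backup) == {"c"}     # only since last backup
```

The trade-off shown: incrementals are small but a restore must replay the full backup plus every incremental, while a cumulative restore needs only the full backup plus the latest cumulative copy.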
Backup Components
• Backup client: sends backup data to the backup server or storage node.
• Backup server: manages backup operations and maintains the backup catalog.
• Storage node: responsible for writing data to the backup device.
• Backup device: stores the backup data.
Classic Data Center 347
Backup and Restore Operation
• Backup operation
– The backup server initiates a scheduled backup.
– It instructs the storage node to load the backup media and instructs clients to send backup data to the storage node.
– The storage node sends the backup data to the backup device and media information to the backup server.
– The backup server updates the catalog and records the status.
• Restore operation
– The backup client initiates the restore.
– The backup server scans the backup catalog to identify the data to be restored and the client that will receive the data.
– The backup server instructs the storage node to load the backup media.
– The storage node restores the backup data to the client and sends metadata to the backup server.
Classic Data Center 348
Backup Optimization: Deduplication
Deduplication: technology that conserves storage capacity and/or network traffic by eliminating duplicate data.
• Can be implemented at:
– file level
– block/chunk level
• Deduplication can be:
– source based (client)
– target based (storage device)
Classic Data Center 349
Deduplication
• Deduplication refers to technology that searches for duplicate data (e.g., blocks or segments) and discards duplicate data when it is located.
• When duplicate data is detected, it is not retained; instead, a "data pointer" is modified so that the storage system references an exact copy of that data already stored on disk.
• Deduplication alleviates the costs associated with keeping multiple copies of the same data.
• It also drastically reduces the storage requirements of a system.

Deduplication
• Some deduplication solutions work at the file level, whereas others work at a lower level, such as a block or variable-chunk level.
• Deduplication can occur close to where the data is created, which is often referred to as "source based deduplication".
• It can also occur close to where the data is stored, which is commonly called "target based deduplication".
Benefits of Deduplication
• By eliminating redundant data, far less infrastructure is required to hold the backup images
– Lowers infrastructure costs
• Reduces the amount of redundant content in the daily backup
– Enables longer retention periods
• Reduces the backup window and enables faster restore
– Less data to be backed up
– Enables creation of daily full backup images
Classic Data Center 352
Source Based Deduplication
• Source based deduplication eliminates redundant data at the source.
• This means that deduplication is performed at the start of the backup process, before the data is transmitted to the backup environment.
• Source based deduplication can radically reduce the amount of backup data sent over networks during backup processes.
• Source based deduplication increases the overhead on the backup client, which impacts backup performance.
Where does Deduplication Occur?
• Source based deduplication: data is deduplicated at the source (backup client). The backup client sends only new, unique segments across the network to the backup device. This reduces storage capacity and network bandwidth requirements but increases overhead on the backup client.
• Target based deduplication: data is deduplicated at the target (backup device). The backup client sends native data to the backup device, which increases network bandwidth and storage capacity requirements.
Classic Data Center 354
Deduplication: Methods
• Single Instance Storage (SIS)
– Detects and removes redundant copies of identical files
– After a file is stored in the SIS system, all other references to the same file refer to the original copy
• Sub-file deduplication
– Identifies and filters repeated data segments stored in files, within a single system and across multiple systems
• Compression
– Reduces file size
– Identifies and removes blank spaces and repeated data chunks
– Can be performed at the source (client) or the target (storage device)
Classic Data Center 355
Backup in a VDC: An Overview
• VM backup includes:
– virtual disks containing system and application data
– configuration data, including network and power state
• Backup options:
– file based
– image based
• Backup optimization:
– deduplication
Business Continuity in VDC 356
Backup Operation
• A backup operation in a VDC environment often requires backing up the VM state.
• The VM state includes the state of its virtual disks, memory (i.e., RAM), network configuration, and power state (on, off, or suspended).
• A virtual disk includes all the information typically backed up (OS, applications, and data).
• As with physical machines, a VM backup needs to be periodically updated with respect to its source, in order to recover from human or technical errors.
Backup in a VDC: Traditional Approaches
• Compute based
– Backup the VM as a physical server: requires installing a backup agent on the VM; can only back up virtual disk data.
– Backup the VM files as flat files: requires installing a backup agent on the hypervisor; cannot back up LUNs directly attached to a VM.
• Array based
– Uses snapshot and cloning techniques.
Business Continuity in VDC 358
Image based Backup
• Creates a copy of the guest OS, its data, the VM state, and its configurations; the backup is saved as a single file, an "image".
• The backup server creates the backup copies and offloads backup processing from the hypervisor.
• Operates at the hypervisor level and mounts the image on the backup server.
• Restores directly at the VM level only.
Business Continuity in VDC 359
Backup Considerations in a VDC
• Reduced computing resources
– The existence of multiple VMs running on the same physical machine leaves fewer resources available for the backup process.
• Complex VM configurations
– A backup agent running on a VM has no access to the VM configuration files.
– It is not possible for a backup agent running at the hypervisor level to access storage directly attached to a VM using RDM.
Business Continuity in VDC 360
Backup Optimization: Deduplication
• Backup images of VM disk files are good candidates for deduplication in a VDC environment.
• The deduplication types and methods are the same as those employed in a CDC.
Business Continuity in VDC 361
Restoring a VM
• Restore the VM to a required state using the backup; selection of the restore point depends on the RPO.
• Steps in the restore process:
– select the VM and virtual disks to restore from backup
– select the destination
– apply the configuration settings
• Restoring a VM may take significantly fewer steps than recovering a physical machine: restoring a physical machine requires configuring hardware, installing the OS, configuring the OS, and installing applications/data from backup, whereas restoring a VM is a "single-step automated recovery" (restore the VM, then power it on).
Business Continuity in VDC 362
Puppet
• A system administrator's job primarily consists of configuring, deploying, and maintaining server machines.
• Some tasks are very challenging and interesting, but most of the daily routine consists of boring and repetitive tasks.
• Almost all system administrators try to get rid of those repetitive, boring tasks by scripting and automating them. But there are issues with scripting and automation as well.
• Scripts that are custom made to solve or automate a task are seldom documented, published, or announced.
• Enhancements to these scripts happen in a random manner.


• Sometimes you go to work the next day to find that something is not working as required because the script was altered by somebody else.
• The main disadvantage is that in a larger infrastructure, with different platforms to deploy and manage, these scripts do not serve the purpose.
• Although there are plenty of proprietary configuration management tools available in the market, you either need to spend a lot of money on them or sometimes end up waiting for their next release for an extra add-on feature that you could have implemented yourself if the source code were available to you.
So what is Puppet?
• Puppet is a configuration management tool that is extremely powerful in deploying, configuring, managing, and maintaining a server machine.
• In other words, Puppet can be used for the entire life of a server, from bootstrapping it to shredding and giving it up.
• To give you an overview: you can define a distinct configuration for each and every host using Puppet, and continuously check and confirm whether the required configuration is in place and has not been altered (if it has been altered, Puppet will revert to the required configuration) on the host.
• Puppet keeps the configuration of your hosts under check, and can be used in one shot to configure a machine from scratch (installing packages, editing configuration files, creating and managing the required users, etc.).
• The main added advantage is that you can manage the configuration of almost all the open-source tools available out there using Puppet.
Who made Puppet and who supports it?
• Puppet was created by Luke Kanies. It is written in the Ruby language. Puppet is currently supported by Puppet Labs (Luke Kanies is the CEO of Puppet Labs). Puppet is licensed under GPLv2.

Which platforms can be managed using Puppet?
• Puppet can be used to manage Unix and most Linux flavors; even Windows platforms can be handled using Puppet. Some of the supported platforms are listed below:
• HP-UX
• BSD
• Mac OS X Server
• Gentoo
• Debian
• Red Hat/CentOS, Fedora
• Mandriva
• Microsoft Windows
PUPPET
• Puppet is a configuration management tool that allows system administrators and developers to manage and maintain the development and deployment of software systems and servers.
Problem of Inconsistency
Puppet aims to solve the problem of consistency:
• Consistency is a complex problem: when we deploy software to web servers, desktops, and computational systems, we should be concerned about how consistent the deployments are.
• The development, QA, and production environments should be similar.
• If they are different, our software may not work the way we expect it to behave.
• Puppet addresses consistency through configuration management.
• Configuration management is the practice of expressing infrastructure as code.
• Virtual servers and physical servers get the same software and OS configuration, and even software deployment happens in a repeatable and consistent manner.
• When the environment is repeatable and consistent, we can start to build on it, modify it, and make changes without worrying about compatibility issues from one system to another, or on one server over time.
• Repetition and consistency are important for managing virtual and cloud infrastructure, where we create and destroy computational resources as we need them.


• Puppet aims to deploy software from the development environment to the production environment in a consistent, repeatable, and scalable manner; this applies to deployments from the data center to cloud-based systems.
• Puppet helps us manage the deployment and creation of servers in environments that involve auto scaling.
• AWS allows us to create an environment where we can adjust our server infrastructure to the incoming traffic of a website.
• When we see our website traffic increasing, we can auto-provision brand new servers to handle more and more web traffic.
• AWS allows us to scale, but it is Puppet that drives the installation and configuration of software on those web servers, keeping them consistent even after software updates.
Why Puppet
• Puppet is one choice among many: Ansible, Salt, Chef, and others.
• The Puppet language is declarative: you describe the end state of the system, and resources that are defined can depend on each other.
• It has a rich set of tools.
• It is backed by Puppet Labs.
• The first snippet identifies a specific file type, a directory, on a computer system, with an owner and a group.
• The Puppet runtime uses a resource abstraction layer to ensure that we can define a resource without being concerned about the individual commands used to create that resource at run time.
• E.g., this definition of the web directory can apply on Linux, Mac, or Windows.
• The second snippet identifies the local network host: it identifies the virtual host of the web app to use the local IP.
• Both are expressed in the Puppet language.
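The idea of declarative, continuously re-checked configuration can be sketched in Python. This is a rough analogue only, not how Puppet is implemented, and all names (the `desired` state, the in-memory `system`) are hypothetical; Puppet expresses the same idea with resources in its own language.

```python
# Rough sketch of declarative desired-state convergence, the idea behind a
# Puppet resource. The in-memory "system" stands in for a real machine.

desired = {"/var/www": {"type": "directory", "owner": "www-data"}}

def converge(system, desired):
    """Bring the system to the desired state; do nothing if already compliant."""
    changes = []
    for path, want in desired.items():
        have = system.get(path)
        if have != want:                 # missing or drifted -> create/revert
            system[path] = dict(want)
            changes.append(path)
    return changes

system = {}
assert converge(system, desired) == ["/var/www"]   # first run creates it
assert converge(system, desired) == []             # second run: nothing to do
system["/var/www"]["owner"] = "root"               # someone alters the host...
assert converge(system, desired) == ["/var/www"]   # ...the drift is reverted
```

The key property shown is idempotence: running the same declaration repeatedly is safe, and each run only corrects drift from the declared end state.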
What are the advantages of Puppet over other tools?
• Most configuration management tools deploy the required configuration on a machine and then leave it as it is, but Puppet keeps verifying the configuration at a specified interval (which you can modify as required).
• Puppet defines the configurations for a host with the help of a language that is very easy to learn and is used only for that purpose.
• Puppet is used by major players in the industry, such as Google and Red Hat.
• It has a large open-source developer base.
• A wide number of platforms are supported.
• It works smoothly even when deployed in a large infrastructure (thousands of hosts to manage).