Virtualization
1. Introduction
In one sense, virtualization is the use of software to emulate or simulate a machine with a
different instruction set or CPU architecture; such emulation or simulation environments help
developers bring up new processors and cross-debug embedded hardware.
It is also a method of partitioning one physical server computer into multiple “virtual”
servers, giving each the appearance and capabilities of running on its own dedicated machine.
Each virtual server functions as a full-fledged server and can be independently rebooted.
Virtual Machine
A virtual machine is a tightly isolated software container that can run its own
operating system and applications as if it were a physical computer. A virtual machine
behaves exactly like a physical computer and contains its own virtual (i.e., software-based)
CPU, RAM, hard disk, and network interface card (NIC).
An operating system can’t tell the difference between a virtual machine and a physical
machine, nor can applications or other computers on a network. Even the virtual machine
thinks it is a “real” computer. Nevertheless, a virtual machine is composed entirely of
software and contains no hardware components whatsoever. As a result, virtual machines
offer a number of distinct advantages over physical hardware.
Virtual machines possess four key characteristics that benefit the user:
• Compatibility: Virtual machines are compatible with all standard x86 computers
• Isolation: Virtual machines are isolated from each other as if physically separated
• Encapsulation: Virtual machines encapsulate a complete computing environment
• Hardware independence: Virtual machines run independently of underlying hardware
Virtual Infrastructure
A virtual infrastructure lets you share the physical resources of multiple machines
across your entire infrastructure, just as a virtual machine lets you share the resources of a
single physical computer across multiple virtual machines for maximum efficiency. Because
resources are shared across many virtual machines and applications, this resource
optimization drives greater flexibility in the organization and lowers capital and operational costs.
2. History of Virtualization
Virtualization is a proven concept that was first developed in the 1960s by IBM as a way
to logically partition large mainframe hardware into separate virtual machines. These
partitions allowed mainframes to "multitask": to run multiple applications and processes at
the same time.
Virtualization was effectively abandoned during the 1980s and 1990s when client-server
applications and inexpensive x86 servers and desktops established the model of distributed
computing. The growth in x86 server and desktop deployments has introduced new IT
infrastructure and operational challenges. These challenges include:
• High-maintenance end-user desktops: Managing and securing enterprise desktops presents
numerous challenges. Controlling a distributed desktop environment and enforcing
management, access, and security policies without impairing users' ability to work effectively
is complex and expensive.
Present Day
Today, computers based on x86 architecture are faced with the same problems of
rigidity and underutilization that mainframes faced in the 1960s.
Today's powerful x86 computer hardware was originally designed to run only a single
operating system and a single application, but virtualization breaks that bond, making it
possible to run multiple operating systems and multiple applications on the same computer at
the same time, increasing the utilization and flexibility of hardware.
Virtualization thereby drives out the cost of IT infrastructure through more efficient use of
available resources.
3. Virtual Machine
A virtual machine was originally defined by Popek and Goldberg as "an efficient,
isolated duplicate of a real machine".
Virtual machines are separated into two major categories, based on their use and
degree of correspondence to any real machine. A system virtual machine provides a
complete system platform which supports the execution of a complete operating system (OS).
A process virtual machine is designed to run a single program, which means that it supports a
single process. An essential characteristic of a virtual machine is that the software running
inside is limited to the resources and abstractions provided by the virtual machine -- it cannot
break out of its virtual world.
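This confinement can be illustrated with a minimal sketch of a process virtual machine: a toy
stack-based interpreter whose "guest" program can touch only the stack and the small
instruction set that the VM exposes, never the host's real memory or devices. The instruction
names here are invented for illustration.

    # Minimal sketch of a process virtual machine: a toy stack-based
    # interpreter. The guest program sees only the stack and the small
    # instruction set provided here; it cannot reach host resources.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack[-1])
            else:
                raise ValueError(f"unknown instruction: {op}")

    # "Guest" program: compute and print 2 + 3.
    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",)])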
System virtual machines (sometimes called hardware virtual machines) allow the
sharing of the underlying physical machine resources between different virtual machines,
each running its own operating system. The software layer providing the virtualization is
called a virtual machine monitor or hypervisor. A hypervisor can run on bare hardware
(Type 1 or native VM) or on top of an operating system (Type 2 or hosted VM).
• multiple OS environments can co-exist on the same computer, in strong isolation from each
other
• the virtual machine can provide an instruction set architecture (ISA) that is somewhat
different from that of the real machine
The guest OSes do not all have to be the same, making it possible to run different OSes on
the same computer (e.g., Microsoft Windows and Linux, or older versions of an OS in order
to support software that has not yet been ported to the latest version).
Process virtual machines have become popular with the Java Virtual Machine (JVM) and the
.NET Framework, which runs on a VM called the Common Language Runtime (CLR).
Techniques
Virtual machines can also perform the role of an emulator, allowing software
applications and operating systems written for another computer processor architecture to be
run.
Some virtual machines emulate hardware that only exists as a detailed specification. For
example:
• Open Firmware allows plug-in hardware to include boot-time diagnostics, configuration code,
and device drivers that will run on any kind of CPU.
This technique allows diverse computers to run any software written to that specification;
only the virtual machine software itself must be written separately for each type of computer
on which it runs.
4. Hypervisor
Classifications
• Type 1 (or native, bare-metal) hypervisors are software systems that run directly on the host's
hardware as a hardware control and guest operating system monitor. A guest operating system
thus runs on another level above the hypervisor.
• Type 2 (or hosted) hypervisors are software applications running within a conventional
operating system environment. If the hypervisor layer is counted as a distinct software
layer, guest operating systems thus run at the third level above the hardware.
There are three properties of interest when analyzing the environment created by a VMM:
Equivalence: A program running under the VMM should exhibit a behavior essentially identical to
that demonstrated when running on an equivalent machine directly.
Resource control: The VMM must be in complete control of the virtualized resources.
Efficiency: A statistically dominant fraction of machine instructions must be executed without VMM
intervention.
In Popek and Goldberg's terminology, a VMM must exhibit all three properties. VMMs in
today's literature are typically assumed to satisfy the equivalence and resource control
properties, so, in a sense, Popek and Goldberg's VMMs are today's efficient VMMs.
The main result of Popek and Goldberg's analysis can then be expressed as follows. For any
conventional third-generation computer, if the set of sensitive instructions is a subset of the
set of privileged instructions, then:
1. it is virtualizable, and
2. a VMM without any timing dependencies can be constructed for it.
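The theorem can be made concrete with a hedged sketch of trap-and-emulate execution:
innocuous instructions run directly, while every sensitive instruction traps to the VMM,
which emulates it against the virtual machine's state. The instruction names and state below
are invented for illustration.

    # Sketch of trap-and-emulate: sensitive instructions trap to the VMM,
    # innocuous ones execute directly. Instruction names are hypothetical.
    SENSITIVE = {"READ_TIMER", "SET_INTERRUPT_MASK"}

    vm_state = {"timer": 42, "interrupt_mask": 0}

    def vmm_emulate(instr, arg):
        # The VMM services the trap against virtualized state, never
        # letting the guest touch the real hardware resource.
        if instr == "READ_TIMER":
            return vm_state["timer"]
        if instr == "SET_INTERRUPT_MASK":
            vm_state["interrupt_mask"] = arg
            return None

    def execute(instr, arg=None):
        if instr in SENSITIVE:
            return vmm_emulate(instr, arg)  # trap into the VMM
        # Innocuous instructions would execute directly on the hardware;
        # here we simply model that as a no-op.
        return None

    print(execute("READ_TIMER"))       # -> 42, serviced by the VMM
    execute("SET_INTERRUPT_MASK", 1)
    print(vm_state["interrupt_mask"])  # -> 1, updated in virtual state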
5. Classification of Virtualization
6. Platform virtualization
Fig (6): VMware Workstation running Ubuntu on Windows, an example of platform virtualization
Concept
The creation and management of virtual machines has been called platform
virtualization, or server virtualization.
Full virtualization
In full virtualization, the virtual machine simulates enough hardware to allow an unmodified
"guest" OS (one designed for the same instruction set) to be run in isolation.
Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support (for example,
Intel VT-x or AMD-V) that facilitates building a virtual machine monitor and allows guest
OSes to be run in isolation.
Partial virtualization
In partial virtualization (also called "address space virtualization"): the virtual machine
simulates multiple instances of much (but not all) of an underlying hardware environment,
particularly address spaces. Such an environment supports resource sharing and process
isolation, but does not allow separate "guest" operating system instances.
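A minimal sketch of address-space virtualization, assuming a simple base-and-limit scheme:
each instance gets its own virtual address space mapped onto a slice of one physical memory,
giving process isolation without a full guest OS.

    # Sketch of address-space virtualization with base-and-limit mapping.
    # Each instance sees addresses 0..limit-1 of a shared physical memory.
    physical_memory = bytearray(1024)

    class AddressSpace:
        def __init__(self, base, limit):
            self.base, self.limit = base, limit

        def store(self, vaddr, value):
            if not 0 <= vaddr < self.limit:
                raise MemoryError("out-of-range access blocked")
            physical_memory[self.base + vaddr] = value

    a = AddressSpace(base=0, limit=512)
    b = AddressSpace(base=512, limit=512)
    a.store(0, 1)      # instance A's address 0 -> physical 0
    b.store(0, 2)      # instance B's address 0 -> physical 512
    # b.store(600, 3)  # would raise: isolation is enforced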
Paravirtualization
In paravirtualization, the virtual machine does not necessarily simulate hardware, but
instead (or in addition) offers a special API that can only be used by modifying the "guest"
OS. This system call to the hypervisor is called a "hypercall" in TRANGO and Xen.
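A hedged sketch of the idea: instead of executing a privileged hardware operation and
trapping, a paravirtualized guest invokes an explicit interface exposed by the hypervisor. The
operation names below are invented for illustration and do not reflect TRANGO's or Xen's
actual hypercall interfaces.

    # Sketch of a paravirtual hypercall interface. A modified guest OS
    # calls hypercall(...) instead of touching privileged hardware.
    # The operation names are hypothetical, not real Xen hypercalls.
    class Hypervisor:
        def __init__(self):
            self.console = []

        def hypercall(self, op, *args):
            if op == "console_write":
                self.console.append(args[0])
            elif op == "set_timer":
                self.next_timer = args[0]
            else:
                raise ValueError(f"unknown hypercall: {op}")

    hv = Hypervisor()
    # Guest kernel code, rewritten to use hypercalls:
    hv.hypercall("console_write", "guest booted")
    hv.hypercall("set_timer", 10)
    print(hv.console)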
Hosted environment
In a hosted environment, applications are hosted on third-party servers and can be invoked
or used from a remote system's environment.
7. Resource virtualization
Resource virtualization is the virtualization of specific system resources, such as storage
volumes, namespaces, and network resources.
Storage Virtualization
Storage virtualization is the pooling of multiple physical storage resources into what
appears to be a single storage resource that is centrally managed. Storage virtualization
automates tedious and extremely time-consuming storage administration tasks. This means
the storage administrator can perform the tasks of backup, archiving, and recovery more
easily and in less time, because the overall complexity of the storage infrastructure is
disguised. Storage virtualization is commonly used in file systems, storage area networks
(SANs), switches and virtual tape systems. Users can implement storage virtualization with
software, hybrid hardware or software appliances. Virtualization hides the physical
complexity of storage from storage administrators and applications, making it possible to
manage all storage as a single resource. In addition to easing the storage management burden,
this approach dramatically improves efficiency and cuts overall costs.
First, it enables the pooling of multiple physical resources into a smaller number of
resources, or even a single resource, which reduces complexity. Many environments have
become complex, which widens the storage management gap; with regard to resources,
pooling is an important way to achieve simplicity. A second advantage of storage
virtualization is that it automates many time-consuming tasks: policy-driven
virtualization tools take people out of the loop of addressing each alert or interrupt in the
storage infrastructure. A third advantage of storage virtualization is that it can be used to
disguise the overall complexity of the infrastructure.
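A hedged sketch of the pooling idea: a virtualization layer presents one logical volume and
maps its blocks onto extents drawn from several physical disks. The disk names and sizes are
invented for illustration.

    # Sketch of storage virtualization: one logical volume mapped onto
    # extents from multiple physical disks. Disk names are hypothetical.
    pool = [("disk_a", 100), ("disk_b", 50), ("disk_c", 150)]  # (disk, blocks)

    def locate(logical_block):
        """Map a logical block number to (physical disk, block offset)."""
        for disk, size in pool:
            if logical_block < size:
                return disk, logical_block
            logical_block -= size
        raise ValueError("block beyond the pooled capacity")

    print(locate(30))    # -> ('disk_a', 30)
    print(locate(120))   # -> ('disk_b', 20)
    print(locate(170))   # -> ('disk_c', 20)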
Network virtualization
Various equipment and software vendors offer network virtualization by combining any
of the following:
• Network hardware, such as switches and network adapters, also known as network interface
cards (NICs)
• Networks, such as virtual LANs (VLANs) and containers such as virtual machines and
Solaris Containers
• Network storage devices
• Network media, such as Ethernet and Fibre Channel
In external network virtualization, one or more local networks are combined
or subdivided into virtual networks, with the goal of improving the efficiency of a large
corporate network or data center. The key components of an external virtual network are the
VLAN and the network switch. Using VLAN and switch technology, the system
administrator can configure systems physically attached to the same local network into
different virtual networks. Conversely, VLAN technology enables the system administrator to
combine systems on separate local networks into a VLAN spanning the segments of a large
corporate network.
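A minimal sketch of the VLAN idea, assuming a simple port-to-VLAN table: two systems can
communicate only if their switch ports carry the same VLAN ID, regardless of physical
attachment. Port names and IDs are invented.

    # Sketch of VLAN-based network virtualization: a switch forwards a
    # frame only between ports assigned to the same VLAN ID.
    port_vlan = {"port1": 10, "port2": 10, "port3": 20, "port4": 20}

    def can_communicate(src_port, dst_port):
        return port_vlan[src_port] == port_vlan[dst_port]

    print(can_communicate("port1", "port2"))  # True: same virtual network
    print(can_communicate("port1", "port3"))  # False: isolated by VLAN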
Some VMMs offer both internal and external network virtualization. The basic approach is a
"network in a box" on a single system, using virtual machines that are managed by hypervisor
software; infrastructure software then connects and combines the networks in multiple boxes
into an external virtualization scenario.
Cluster categorizations
Load-balancing clusters
Load-balancing clusters distribute workload, such as incoming service requests, across
multiple nodes so that no single machine is overwhelmed.
Compute clusters
Compute clusters are used primarily for computational purposes, rather than for handling IO-
oriented operations such as web service or databases. For instance, a cluster might support
computational simulations of weather or vehicle crashes.
Grid computing
Grids are usually compute clusters, but focused more on throughput, like a computing
utility, than on running fewer, tightly coupled jobs. Grids often incorporate heterogeneous
collections of computers, possibly with geographically distributed nodes, sometimes
administered by unrelated organizations.
Grid computing is optimized for workloads which consist of many independent jobs
or packets of work, which do not have to share data between the jobs during the computation
process. Grids serve to manage the allocation of jobs to computers which will perform the
work independently of the rest of the grid cluster. Resources such as storage may be shared
by all the nodes, but intermediate results of one job do not affect other jobs in progress on
other nodes of the grid.
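A hedged sketch of grid-style scheduling: independent jobs are handed to whichever node is
available, and no job depends on another's intermediate results. Node and job names are
invented for illustration.

    # Sketch of grid-style job allocation: independent jobs are assigned
    # to nodes; no intermediate results are shared between jobs.
    from collections import deque

    jobs = deque(f"job{i}" for i in range(6))
    nodes = ["node_a", "node_b", "node_c"]  # possibly heterogeneous

    schedule = {node: [] for node in nodes}
    while jobs:
        for node in nodes:            # round-robin over available nodes
            if jobs:
                schedule[node].append(jobs.popleft())

    for node, assigned in schedule.items():
        print(node, assigned)         # each job runs independently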
9. Application virtualization
Description
• Allows applications to run in environments that do not suit the native application (e.g. Wine
allows Microsoft Windows applications to run on Linux).
• Uses fewer resources than a separate virtual machine.
• Run incompatible applications side-by-side, at the same time and with minimal regression
testing against one another.
• Implement the security principle of least privilege by removing the requirement for end-users
to have Administrator privileges in order to run poorly written applications.
• Applications have to be "packaged" or "sequenced" before they will run in a virtualized way;
a sketch of the underlying redirection idea follows this list.
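A minimal sketch of one common mechanism, assuming file-access interception: the
virtualization layer redirects an application's writes from protected system paths into its own
package directory, so the application runs without Administrator privileges. The paths below
are hypothetical.

    # Sketch of application virtualization via file-path redirection:
    # writes aimed at protected system locations are transparently
    # redirected into the application's private package. Paths are
    # hypothetical.
    PROTECTED = "/system/config/"
    PACKAGE   = "/packages/app1/config/"

    def virtual_open_for_write(path):
        if path.startswith(PROTECTED):
            path = PACKAGE + path[len(PROTECTED):]  # redirect into package
        return path  # a real layer would open the file here

    print(virtual_open_for_write("/system/config/app.ini"))
    # -> /packages/app1/config/app.ini  (no admin rights required)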
Since the software runs on a virtualized equivalent of the original computer, it does not
require recompilation or porting, thus saving time and development resources. However, the
processing overhead of binary translation and call mapping imposes a performance penalty,
when compared to natively-compiled software. For this reason, cross-platform virtualization
may be used as a temporary solution until resources are available to port the software.
Emulation
Simulation
Simulation is the imitation of some real thing, state of affairs, or process. The act of
simulating something generally entails representing certain key characteristics or behaviors of
a selected physical or abstract system.
Computer simulation has become a useful part of modeling many natural systems in
physics, chemistry and biology, and human systems in economics as well as in engineering to
gain insight into the operation of those systems.
In computer science, simulation has some specialized meanings: Alan Turing used
the term "simulation" to refer to what happens when a universal machine executes a state
transition table (in modern terminology, a computer runs a program) that describes the state
transitions, inputs and outputs of a subject discrete-state machine.
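Turing's sense of simulation can be shown with a minimal sketch: a universal routine that
executes whatever state transition table it is given. The table below, a parity checker, is an
invented example.

    # Sketch of Turing-style simulation: a universal routine executing a
    # state transition table. The table encodes a parity checker.
    #   (state, input) -> (next_state, output)
    table = {
        ("even", 0): ("even", "even"),
        ("even", 1): ("odd", "odd"),
        ("odd", 0): ("odd", "odd"),
        ("odd", 1): ("even", "even"),
    }

    def simulate(table, start, inputs):
        state = start
        for symbol in inputs:
            state, output = table[(state, symbol)]
        return output

    print(simulate(table, "even", [1, 1, 0, 1]))  # -> "odd" (three 1s)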
11. Desktop virtualization
Rationale
Advantages
12. Virtualization Software
VMware Workstation
Fig (8): VMware Workstation 6.5 running Ubuntu; the Snapshot Manager in VMware
Workstation 6
VMware Workstation is a virtual machine software suite for x86 and x86-64
computers from VMware, a division of EMC Corporation. This software suite allows users to
set up multiple x86 and x86-64 virtual computers and to use one or more of these virtual
machines simultaneously with the hosting operating system. Each virtual machine instance
can execute its own guest operating system, such as Windows, Linux, BSD variants, or
others. In simple terms, VMware Workstation allows one physical machine to run multiple
operating systems simultaneously.
Microsoft Virtual Server
Virtual machines are created and managed through an IIS web-based interface or
through a Windows client application tool called VMRCplus.
The current version is Microsoft Virtual Server 2005 R2 SP1. New features in R2 SP1
include Linux guest operating system support, Virtual Disk Precompactor, SMP (but not for
the Guest OS), x86-64 (x64) Host OS support (but not Guest OS support), the ability to
mount virtual hard drives on the host OS and additional operating systems including
Windows Vista. It also provides a Volume Shadow Copy writer which enables live backups of
the Guest OS on a Windows Server 2003 or Windows Server 2008 Host. A utility to mount
VHD images is also included since SP1. Officially supported Linux guest operating systems
include Red Hat Enterprise Linux versions 2.1-5.0, Red Hat Linux 9.0, SUSE Linux and
SUSE Linux Enterprise Server versions 9 and 10.
Microsoft Virtual PC
Some guest operating systems, such as certain Linux live CDs, can get past the boot screen of
the Live CD (and function fully) only when using Safe Graphics Mode.
VirtualBox
Supported host operating systems include Linux, Mac OS X, OS/2 Warp, Windows
XP or Vista, and Solaris, while supported guest operating systems include FreeBSD, Linux,
OpenBSD, OS/2 Warp, Windows, and Solaris. According to a 2007 survey, VirtualBox is the
third most popular software package for running Windows programs on Linux desktops.
Xen
Xen is a virtual machine monitor for IA-32, x86, x86-64, IA-64 and PowerPC 970
architectures. It allows several guest operating systems to be executed on the same computer
hardware concurrently. Xen was initially created by the University of Cambridge Computer
Laboratory and is now developed and maintained by the Xen community as free software,
licensed under the GNU General Public License (GPL2).
A Xen system is structured with the Xen hypervisor as the lowest and most privileged
layer. Above this layer are one or more guest operating systems, which the hypervisor
schedules across the physical CPUs. The first guest operating system, called in Xen
terminology "domain 0" (dom0), is booted automatically when the hypervisor boots and
given special management privileges and direct access to the physical hardware. The system
administrator logs into dom0 in order to start any further guest operating systems, called
"domain U" (domU) in Xen terminology.
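To make the dom0/domU split concrete, here is a hedged sketch of a domU configuration in
the classic xm/xl config style; the name, kernel path, volume, and bridge are hypothetical
examples, so the Xen documentation should be consulted for the exact options.

    # Illustrative Xen domU configuration (classic xm/xl config style).
    # All names and paths below are hypothetical examples.
    name   = "guest1"                          # domU name
    memory = 512                               # RAM in MB
    kernel = "/boot/vmlinuz-2.6-xen"           # paravirtualized guest kernel
    disk   = ["phy:/dev/vg0/guest1,xvda,w"]    # physical volume as xvda
    vif    = ["bridge=xenbr0"]                 # NIC attached to a dom0 bridge
    # The administrator would start this guest from dom0, e.g. with
    # "xm create guest1.cfg" (older toolstack) or "xl create guest1.cfg".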
13. Conclusion
• Reduce capital costs by requiring less hardware and lowering operational costs while
increasing your server-to-admin ratio
• Ensure enterprise applications perform with the highest availability and performance
• Build up business continuity through improved disaster recovery solutions and
deliver high availability throughout the datacenter
• Improve desktop management with faster deployment of desktops and fewer support
calls due to application conflicts.