Module 2 Notes
IMPLEMENTATION LEVELS OF VIRTUALIZATION
Definition: Virtualization is a computer architecture technology enabling multiple
virtual machines (VMs) to operate on the same hardware.
Historical Origin: The concept of VMs dates back to the 1960s.
Purpose of VMs:
• Enhances resource sharing among multiple users.
Modern Relevance: Virtualization has gained importance recently due to rising demand
for distributed and cloud computing.
Core Idea: Separates hardware from software for better system efficiency.
Example: Virtual memory expanded users' accessible memory space.
Similar techniques optimize the use of compute engines, networks, and storage.
Levels of Virtualization Implementation
A traditional computer runs with a host operating system specially tailored for its hardware
architecture, as shown in Figure 3.1(a).
Effect of Virtualization:
• Enables different user applications, each with their own operating system (guest OS),
to run on the same hardware.
• Allows operation independently of the host OS.
Virtualization Layer:
• Achieved using additional software called a hypervisor or virtual machine monitor
(VMM).
• The virtualization layer interposes itself at various levels of the computer system.
1. Instruction Set Architecture (ISA) Level.
2. Hardware Abstraction Level (HAL).
3. Operating System Level.
4. Library Support Level.
5. Application Level.
Instruction Set Architecture Level
Benefits:
• Facilitates running old binary code on new hardware.
• Creates virtual ISAs on any hardware.
Emulation Methods:
1. Code Interpretation:
o Interprets source instructions to target instructions one at a time.
o Requires multiple native instructions per source instruction.
o Slower process overall.
2. Dynamic Binary Translation:
o Translates basic blocks of source instructions into target instructions
dynamically.
o Improves performance compared to interpretation.
o Can extend to program traces or super blocks for better efficiency.
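Illustrative sketch (not part of the original notes): a decode-and-dispatch interpreter for a made-up three-instruction source ISA. The opcode names and register layout are assumptions; the point is the structure — each source instruction is fetched and then emulated by several native operations, which is why pure interpretation is slower than direct execution.
```c
/* Minimal decode-and-dispatch interpreter for a hypothetical 3-instruction ISA.
 * Each source instruction is emulated by several native C operations,
 * which is why pure interpretation is slower than direct execution. */
#include <stdio.h>
#include <stdint.h>

enum { OP_LOADI, OP_ADD, OP_HALT };          /* hypothetical source opcodes */

typedef struct { uint8_t op, dst, src1, src2; int32_t imm; } insn_t;

int main(void) {
    int32_t regs[4] = {0};                   /* virtual register file */
    insn_t prog[] = {                        /* "guest" program: r2 = 40 + 2 */
        { OP_LOADI, 0, 0, 0, 40 },
        { OP_LOADI, 1, 0, 0, 2  },
        { OP_ADD,   2, 0, 1, 0  },
        { OP_HALT,  0, 0, 0, 0  },
    };

    for (size_t pc = 0; ; pc++) {            /* fetch-decode-execute loop */
        insn_t i = prog[pc];
        if (i.op == OP_HALT) break;
        switch (i.op) {
        case OP_LOADI: regs[i.dst] = i.imm;                       break;
        case OP_ADD:   regs[i.dst] = regs[i.src1] + regs[i.src2]; break;
        }
    }
    printf("r2 = %d\n", regs[2]);            /* prints 42 */
    return 0;
}
```
A dynamic binary translator would instead translate a whole basic block of such instructions once and cache the result, avoiding the per-instruction dispatch cost on repeated execution.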
Hardware Abstraction Level
Dual Functionality:
• Creates a virtual hardware environment for virtual machines (VMs).
• Manages the underlying hardware through virtualization.
Historical Development:
• Implemented in the IBM VM/370 system in the 1960s.
• Recently, the Xen hypervisor has been used to virtualize x86-based machines for
running Linux or other guest OS applications.
Operating System Level
Definition: OS-level virtualization is an abstraction layer between the operating system (OS)
and user applications.
Functionality:
• Creates isolated containers on a single physical server.
• OS instances in containers utilize hardware and software resources in data centers.
Common Uses:
1. Virtual Hosting:
o Allocates hardware resources among multiple, mutually distrusting users.
2. Server Consolidation:
o Moves services from separate hosts into containers or virtual machines (VMs)
on a single server.
Library Support Level
API-Based Virtualization:
• Applications typically use APIs from user-level libraries rather than direct system
calls to the OS.
• APIs serve as an interface suitable for virtualization.
Method:
• Controls the communication link between applications and the rest of the system through API hooks, intercepting API calls and remapping them.
Examples:
1. WINE:
o Supports Windows applications on UNIX systems using API-based virtualization.
2. vCUDA:
o Enables applications in VMs to utilize GPU hardware acceleration.
User-Application Level
Definition:
• Application-level virtualization virtualizes applications as virtual machines (VMs).
• Applications typically run as processes on traditional operating systems (OS), making
this also known as process-level virtualization.
Popular Approach:
• Utilizes high-level language (HLL) VMs.
• Virtualization layer acts as an application program on top of the OS, exporting a VM
abstraction.
• Programs written and compiled for a specific abstract machine definition can run on
this VM.
• Examples: Microsoft .NET CLR and Java Virtual Machine (JVM).
Benefits:
• Simplifies application distribution and removal from user workstations.
• Example: LANDesk application virtualization platform, which deploys software as
self-contained, executable files without requiring installation or system modifications.
VMM Design Requirements and Providers
Definition:
o Hardware-level virtualization inserts a layer, called the Virtual Machine
Monitor (VMM), between real hardware and operating systems (OS).
Functionality:
o The VMM manages hardware resources and captures all hardware access
processes.
o Acts like a traditional OS, enabling multiple traditional operating systems
(same or different) to run simultaneously on the same hardware.
Key Features of a VMM:
1. Environment:
▪ Provides an environment for programs that is nearly identical to the
original hardware.
2. Performance:
▪ Programs should show only minor speed decreases.
3. Control:
▪ Maintains complete control of hardware resources.
Exceptions to Identical Functionality:
o Differences due to resource availability when multiple VMs are running.
o Timing dependencies caused by the virtualization layer or concurrent VMs.
Resource Requirements:
o The hardware resource requirements (e.g., memory) of each individual VM are reduced.
o The combined resource demands of all VMs may nonetheless exceed the capacity of the real hardware.
Efficiency:
o Efficiency is critical; users won't prefer a VMM if performance is too low.
o A statistically dominant subset of the virtual processor's instructions must execute directly on the real processor to achieve acceptable efficiency.
Traditional Emulators vs. VMM:
o Emulators and simulators offer flexibility but are too slow for practical use.
o VMMs must execute instructions without excessive software intervention to
ensure speed.
Control Over Resources:
1. The VMM allocates hardware resources to programs.
2. Programs cannot access resources that have not been explicitly allocated to them.
3. Under certain circumstances, the VMM can regain control of resources already allocated.
A VMM must satisfy all three of these requirements.
Call for Improvement:
• Significant research and development are needed to address these challenges and enhance support for cloud computing.
Why OS-Level Virtualization?
• Hardware-level virtualization has drawbacks: initializing each VM image from scratch is slow, storing many full VM images wastes space on redundant content, and reducing its performance overhead may even require hardware modifications.
OS-Level Virtualization as a Solution:
• Inserts a virtualization layer inside the operating system to partition physical
resources.
• Creates multiple isolated virtual machines (VMs) within a single OS kernel.
Characteristics of OS-Level VMs:
N
• Referred to as Virtual Execution Environments (VE), Virtual Private Systems
(VPS), or containers.
• VEs share the same operating system kernel but can be customized for different users.
Alternate Name:
VT
• Known as single-OS image virtualization because all VEs use a single shared OS
kernel.
Advantages of OS Extensions:
• OS-level VMs have minimal startup/shutdown costs, low resource requirements, and high scalability.
• An OS-level VM and its host environment can synchronize state changes when necessary.
Disadvantages of OS Extensions:
Main Disadvantage of OS Extensions:
• All OS-level VMs on a single physical machine (container host) must belong to the same operating system family (e.g., a Windows environment cannot run on a Linux-based container host).
Challenge in Cloud Computing:
• Users prefer diverse operating systems (e.g., Windows, Linux), which poses a challenge for OS-level virtualization in cloud environments.
Ways to Provide Resources to OS-Level VMs:
1. Duplicating Resources:
▪ Copies common resources to each VM partition (higher resource cost and overhead).
2. Sharing Resources:
▪ Shares most resources with the host environment, creating private copies for a VM only on demand.
Virtualization on Linux or Windows Platforms
Examples of OS-Level Virtualization Tools:
1. Linux vServer and OpenVZ: Enable Linux platforms to run other platform-based
applications.
2. FVM: Specifically developed for virtualization on the Windows NT platform.
Library-Level Virtualization:
• Also called user-level Application Binary Interface (ABI) or API emulation.
• Creates execution environments for running alien programs on a platform.
• Does not require creating a virtual machine (VM) for an entire operating system.
Key Functions:
1. API Call Interception.
2. API Call Remapping.
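To make interception and remapping concrete, here is a minimal sketch (an assumption of one common Linux technique, not how WINE or vCUDA are actually built): a shared library preloaded with LD_PRELOAD interposes on the C library's open() call and remaps one path. The file names below are hypothetical.
```c
/* Sketch of user-level API interception: build as a shared object and
 * preload it, e.g.  gcc -shared -fPIC -o intercept.so intercept.c -ldl
 *                   LD_PRELOAD=./intercept.so cat /etc/demo-app.conf
 * The path names below are illustrative assumptions. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

typedef int (*real_open_t)(const char *, int, ...);

int open(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {                       /* open() is variadic when creating */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    /* Remap: redirect one well-known path into a per-user sandbox copy. */
    if (strcmp(path, "/etc/demo-app.conf") == 0)
        path = "/tmp/sandbox/demo-app.conf";

    real_open_t real_open = (real_open_t)dlsym(RTLD_NEXT, "open");
    return real_open(path, flags, mode);
}
```
Real library-level virtualization systems intercept far richer API surfaces (e.g., the Win32 API in WINE or the CUDA API in vCUDA), but the hook-and-forward pattern is the same.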
Examples of Library-Level Virtualization Systems:
• Windows Application Binary Interface (WABI).
• lxrun.
• WINE.
• Visual MainWin.
• vCUDA.
After Virtualization:
• A virtualization layer is added between the hardware and the operating system.
• This layer converts real hardware into virtual hardware.
• Enables multiple operating systems (e.g., Linux and Windows) to run on the same
physical machine simultaneously.
Classes of VM Architectures:
1. Hypervisor Architecture:
o Also known as the Virtual Machine Monitor (VMM).
o Performs virtualization operations.
2. Paravirtualization.
3. Host-Based Virtualization.
Hypervisor and Xen Architecture
Role of Hypervisor:
• Supports hardware-level virtualization on bare metal devices (e.g., CPU, memory,
disk, network interfaces).
• Acts as a virtualization layer between physical hardware and the operating system
(OS).
• Provides hypercalls for guest OSes and applications.
• A micro-kernel hypervisor includes only basic, unchanging functions (such as physical memory management and processor scheduling).
o Device drivers and changeable components remain outside the hypervisor.
Xen Architecture:
• An open-source hypervisor that acts as a virtual environment between hardware and OS.
• Commercial versions include Citrix XenServer and Oracle VM.
Core Components:
1. Hypervisor.
2. Kernel.
3. Applications.
Domains in Xen:
• Domain 0 (privileged guest OS):
o Manages hardware access and devices.
o Allocates and maps resources for other domains (Domain U).
o Boots first, without file system drivers.
o Security risks exist if Domain 0 is compromised.
• Domain U (unprivileged guest OS): Runs on resources allocated by Domain 0.
Security:
• Xen is Linux-based with a C2 security level.
• Strong security policies are needed to protect Domain 0.
VM Capabilities:
• Domain 0 acts as a VMM, enabling users to create, save, modify, share, migrate, and
roll back VMs like files.
N
• Rolling back or rerunning VMs allows fixing errors or redistributing content.
VM Execution Model:
• Traditional machine states are linear, while VM states form a tree structure:
o Multiple instances of a VM can exist simultaneously.
o VMs can roll back to previous states or rerun from saved points.
Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two categories:
1. Full Virtualization:
o Does not require modification of the host OS.
o Relies on binary translation to virtualize sensitive, nonvirtualizable
instructions.
o Guest OSes consist of both critical and noncritical instructions.
2. Host-Based Virtualization:
o Utilizes both a host OS and a guest OS.
o Includes a virtualization software layer between the host OS and guest OS.
Full Virtualization:
• Noncritical Instructions: Run directly on the hardware.
• Critical Instructions: Trapped and replaced with software-based emulation by the
Virtual Machine Monitor (VMM).
Why Only Critical Instructions Are Trapped:
• Performance:
o Binary translation of all instructions can lead to high performance overhead.
• Security:
o Critical instructions control hardware and can pose security risks.
o Trapping them ensures system security.
• Efficiency:
o Running noncritical instructions on hardware improves overall efficiency.
Implementation:
• VMware and other companies implement full virtualization.
• The Virtual Machine Monitor (VMM) operates at Ring 0, while the guest OS
operates at Ring 1.
Functionality:
• The VMM scans the instruction stream to identify privileged, control-, and
behavior-sensitive instructions.
• These instructions are trapped and emulated by the VMM using binary translation.
• Full virtualization combines binary translation and direct execution.
• The guest OS is completely decoupled from the hardware and is unaware of the
virtualization.
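A toy sketch of the scan-and-patch idea (illustrative only; real binary translators such as VMware's operate on x86 machine code, not on this made-up byte code): the "VMM" scans a guest basic block, leaves noncritical opcodes untouched for direct execution, and rewrites the one sensitive opcode into a trap that is emulated in software.
```c
/* Toy scan-and-patch translator for a hypothetical guest byte code.
 * Noncritical opcodes are left alone ("direct execution"); the one
 * sensitive opcode is rewritten into a trap handled by emulation. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

enum { G_NOP = 0x01, G_ADD = 0x02, G_SETMODE = 0x7F, G_TRAP = 0xFF };

static void emulate_sensitive(uint8_t op) {
    /* Software emulation of the trapped instruction. */
    printf("VMM: emulating sensitive opcode 0x%02X\n", op);
}

int main(void) {
    uint8_t block[] = { G_NOP, G_ADD, G_SETMODE, G_ADD };   /* guest basic block */
    uint8_t patched[sizeof block];
    memcpy(patched, block, sizeof block);

    /* Translation pass: replace sensitive instructions with traps. */
    for (size_t i = 0; i < sizeof patched; i++)
        if (patched[i] == G_SETMODE)
            patched[i] = G_TRAP;

    /* "Execution" pass: run the patched block. */
    int acc = 0;
    for (size_t i = 0; i < sizeof patched; i++) {
        switch (patched[i]) {
        case G_NOP:  break;                    /* runs directly, no VMM involvement */
        case G_ADD:  acc += 1; break;          /* noncritical work */
        case G_TRAP: emulate_sensitive(block[i]); break;   /* trap into the VMM */
        }
    }
    printf("accumulator = %d\n", acc);
    return 0;
}
```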
Challenges with Full Virtualization:
• Binary translation is time-consuming and increases memory usage (e.g., a translated code cache must be maintained).
Performance Efficiency:
• On the x86 architecture, full virtualization typically achieves 80% to 97% of the host machine's performance.
Host-Based Virtualization
Host-Based VM Architecture:
• A virtualization layer is installed on top of the host OS.
• The host OS remains responsible for managing hardware.
• Guest OSes run on top of the virtualization layer.
• Dedicated applications can run in VMs, while other applications may run directly on
the host OS.
Advantages:
1. Easy installation without modifying the host OS.
2. Simplifies VM design and deployment by relying on the host OS for device drivers
and low-level services.
3. Compatible with a wide range of host machine configurations.
Disadvantages:
• Lower performance compared to hypervisor/VMM architecture.
• Involves four layers of mapping when applications request hardware access,
significantly impacting performance.
• Requires binary translation when the ISA of the guest OS differs from the
underlying hardware ISA, further decreasing efficiency.
Tradeoff:
• While flexible, host-based architectures may suffer from poor performance, making
them less practical for some use cases.
Para-Virtualization with Compiler Support
Definition:
• Para-virtualization modifies the guest OS kernel; a para-virtualized VM provides special APIs (hypercalls) to reduce virtualization overhead.
Virtualization of x86 Processor:
• A virtualization layer is inserted between hardware and the operating system (OS).
• The virtualization layer operates at Ring 0 (highest privilege level).
Para-Virtualization Issues:
1. Compatibility and Portability: May be in doubt, because the system must support the unmodified OS as well.
2. Maintenance Cost: Maintaining para-virtualized OSes is costly due to deep kernel
modifications.
3. Performance Variability: Advantages of para-virtualization vary depending on
workloads.
Comparison with Full Virtualization:
• Para-virtualization is easier and more practical.
• Full virtualization suffers from performance issues, especially in binary translation.
Examples:
• Commonly used virtualization products employing para-virtualization include Xen,
KVM, and VMware ESX.
KVM (Kernel-Based Virtual Machine):
• KVM is part of the Linux kernel; the existing Linux kernel handles memory management and scheduling.
• KVM performs the remaining virtualization tasks, making it simpler than a full
hypervisor.
Key Features:
• Hardware-Assisted Para-Virtualization: Enhances performance.
• Supports unmodified guest OSes, including Windows, Linux, Solaris, and other UNIX
variants.
Para-Virtualization Process:
• Handles privileged and sensitive instructions at compile time rather than runtime.
• Modifies the guest OS kernel to replace such instructions with hypercalls that interact
with the hypervisor or Virtual Machine Monitor (VMM).
Architecture:
• Example: Xen employs a para-virtualization architecture.
C
• The guest OS typically runs at Ring 1, not Ring 0, meaning it cannot execute
privileged instructions directly.
Privileged Instructions:
• Implemented via hypercalls to the hypervisor.
• The modified guest OS emulates the original OS behavior using these hypercalls.
On UNIX Systems:
• A system call generally involves an interrupt or service routine.
• In Xen, hypercalls use a dedicated service routine to fulfill this role.
Hardware Support for Virtualization
Modern x86 processors provide hardware-assisted virtualization, which enables the Virtual Machine Monitor (VMM) and the guest operating system (OS) to operate in separate modes. Sensitive instructions from the guest OS and its applications are intercepted by the VMM. The hardware handles mode switching and saves processor states. Intel (VT-x) and AMD (AMD-V) have developed their own technologies for this on the x86 architecture.
1. Processor Modes:
o Modern processors ensure controlled hardware access using two modes:
▪ User Mode: For unprivileged instructions.
▪ Supervisor Mode: For privileged instructions.
o Without protection mechanisms, processes could cause hardware conflicts and
crashes.
2. Virtualized Environments:
o Virtualization adds complexity due to extra layers in the machine stack.
3. Hardware Virtualization Solutions:
o VMware Workstation:
▪ Software suite for x86 and x86-64 systems.
▪ Allows running multiple virtual machines (VMs) alongside the host
OS.
▪ Uses host-based virtualization.
o Xen Hypervisor:
▪ A bare-metal hypervisor that runs directly on the hardware, using para-virtualization or hardware assistance rather than relying on a host OS.
CPU Virtualization
1. VM Overview:
o A Virtual Machine (VM) is a duplicate of a computer system.
N
o Most VM instructions run natively on the host processor for efficiency.
o Critical instructions are managed carefully for system correctness and stability.
2. Critical Instructions:
o Include privileged, control-sensitive, and behavior-sensitive instructions; behavior-sensitive instructions act differently depending on resource configurations.
3. Virtualizable CPU Architecture:
o A CPU architecture is virtualizable if the VM's instructions can run in the CPU's user mode while the VMM runs in supervisor mode, with critical instructions trapping to the VMM.
Hardware-Assisted CPU Virtualization
1. Challenge in Virtualization:
• Full or paravirtualization is complex, prompting efforts to simplify.
2. Privilege Mode Level (Ring-1):
• Intel and AMD introduced an additional privilege mode level (often called Ring -1) in x86 processors.
• Operating systems continue to run at Ring 0, while the hypervisor runs at Ring -1.
• Privileged and sensitive instructions are automatically trapped in the hypervisor.
3. Benefits:
• Eliminates the need for binary translation in full virtualization.
• Allows operating systems to run in VMs without modification.
4. CPU Enhancements:
• Additional instructions are added to manage VM CPU state.
5. Implementations:
• Technologies like VT-x are used by Xen, VMware, and Microsoft Virtual PC
hypervisors.
6. Hardware-Assisted Virtualization:
N
• Offers high efficiency but incurs overhead due to mode-switching between the
hypervisor and the guest OS.
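As a small, hedged illustration of how this hardware support is exposed to software, the sketch below opens Linux's KVM device node and queries its API version with an ioctl. It assumes a Linux host with VT-x or AMD-V enabled and the kvm module loaded; it does nothing beyond the query.
```c
/* Minimal probe of Linux KVM, the kernel interface built on Intel VT-x / AMD-V.
 * Assumes a Linux host with the kvm module loaded; prints the KVM API version. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm (is hardware virtualization enabled?)");
        return 1;
    }

    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);   /* stable value is 12 */
    if (version < 0) {
        perror("KVM_GET_API_VERSION");
        close(kvm);
        return 1;
    }
    printf("KVM API version: %d\n", version);

    close(kvm);
    return 0;
}
```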
Memory Virtualization
o In a virtualized environment, a two-stage mapping is required: the guest OS maps virtual memory to guest physical memory, and an additional layer maps guest physical memory to machine memory.
o Increases complexity and performance costs.
5. Direct Mapping via TLB:
o VMware uses shadow page tables and TLB hardware to map virtual memory
directly to machine memory, avoiding two-stage translations on every access.
6. Hardware Assistance:
o AMD Barcelona processor (since 2007) introduced nested paging to support
two-stage memory translation, reducing overhead.
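Minimal sketch of why two-stage translation is costly and what a shadow (or nested) mapping buys; the page numbers and table sizes are made-up assumptions. The guest's table maps guest-virtual to guest-physical pages, the VMM's table maps guest-physical to machine pages, and the composed "shadow" table lets one lookup replace two.
```c
/* Sketch of two-stage memory mapping and a composed ("shadow") mapping.
 * All page numbers and table sizes are illustrative assumptions. */
#include <stdio.h>

#define PAGES 8

int guest_pt[PAGES] = { 3, 1, 7, 0, 2, 5, 6, 4 };  /* guest virtual -> guest physical */
int vmm_p2m[PAGES]  = { 6, 2, 0, 5, 7, 1, 3, 4 };  /* guest physical -> machine       */
int shadow_pt[PAGES];                               /* guest virtual  -> machine       */

/* Two lookups per access: the slow path described in the notes. */
int translate_two_stage(int gva_page) {
    int gpa_page = guest_pt[gva_page];
    return vmm_p2m[gpa_page];
}

int main(void) {
    /* The VMM builds the shadow table by composing the two mappings once,
     * so later accesses need a single lookup (which the TLB can then cache). */
    for (int v = 0; v < PAGES; v++)
        shadow_pt[v] = vmm_p2m[guest_pt[v]];

    for (int v = 0; v < PAGES; v++)
        printf("gva page %d -> machine page %d (two-stage result %d)\n",
               v, shadow_pt[v], translate_two_stage(v));
    return 0;
}
```
Nested paging (as in AMD's Barcelona processors) performs the equivalent composition in hardware, avoiding the software cost of maintaining shadow tables.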
I/O Virtualization
1. Definition:
o Manages routing of I/O requests between virtual devices and shared physical
hardware.
2. Methods of Implementation:
o Full Device Emulation:
▪ Emulates real-world devices in software within the VMM.
▪ Slower than actual hardware due to software overhead.
o Para-Virtualization:
▪ Uses a split driver model (frontend in Domain U, backend in Domain
0).
▪ Provides better performance than emulation but with higher CPU
overhead.
o Direct I/O:
▪ VM accesses devices directly, achieving near-native performance.
▪ Challenges exist for commodity hardware, particularly during
workload migration.
3. Hardware-Assisted I/O Virtualization:
o Critical to reduce high overhead from device emulation.
o Intel VT-d supports remapping of DMA transfers and device-generated
interrupts.
4. Self-Virtualized I/O (SV-IO):
o Utilizes multicore processors for virtualizing I/O devices.
o Provides virtual devices with APIs for VMs and the VMM.
o Each Virtual Interface (VIF) has message queues (incoming and outgoing) and a unique ID (see the sketch after this list).
5. Challenges:
o Full emulation is slow, para-virtualization has higher CPU costs, and direct
I/O faces hardware limitations.
o Proper management is needed for reliability during device reassignment and
workload migration.
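Minimal data-structure sketch of an SV-IO-style virtual interface as described above; the struct layout, queue size, and message format are illustrative assumptions, not the actual SV-IO definitions.
```c
/* Illustrative sketch of an SV-IO-style virtual interface (VIF):
 * a unique ID plus one incoming and one outgoing message queue.
 * Layout and sizes are assumptions for illustration only. */
#include <stdio.h>
#include <string.h>

#define QUEUE_SLOTS 4

typedef struct { char payload[32]; } msg_t;

typedef struct {
    msg_t slots[QUEUE_SLOTS];
    int   head, tail, count;
} msg_queue_t;

typedef struct {
    int         vif_id;      /* unique ID of this virtual interface */
    msg_queue_t incoming;    /* requests from the guest to the device */
    msg_queue_t outgoing;    /* completions/data back to the guest    */
} vif_t;

static int enqueue(msg_queue_t *q, const char *text) {
    if (q->count == QUEUE_SLOTS) return -1;            /* queue full */
    strncpy(q->slots[q->tail].payload, text, sizeof q->slots[q->tail].payload - 1);
    q->slots[q->tail].payload[sizeof q->slots[q->tail].payload - 1] = '\0';
    q->tail = (q->tail + 1) % QUEUE_SLOTS;
    q->count++;
    return 0;
}

static int dequeue(msg_queue_t *q, msg_t *out) {
    if (q->count == 0) return -1;                       /* queue empty */
    *out = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_SLOTS;
    q->count--;
    return 0;
}

int main(void) {
    vif_t vif = { .vif_id = 7 };
    enqueue(&vif.incoming, "read block 42");            /* guest posts an I/O request */

    msg_t req;
    if (dequeue(&vif.incoming, &req) == 0) {            /* VIF device driver services it */
        printf("VIF %d handling request: %s\n", vif.vif_id, req.payload);
        enqueue(&vif.outgoing, "block 42 data");        /* completion back to the guest */
    }
    return 0;
}
```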
Virtualization in Multi-Core Processors
Physical versus Virtual Processor Cores: a multicore virtualization method proposed by Wells et al. [74]:
• Multicore Virtualization: Provides a higher-level abstraction of processor cores for
hardware designers.
• Efficient Resource Management: Reduces inefficiency and complexity in managing
hardware resources through software.
• Position in System Architecture: Located beneath the Instruction Set Architecture
(ISA) and remains unaffected by the OS or hypervisor (VMM).
• Dynamic VCPU Allocation:
o A virtual CPU (VCPU) can switch between physical cores as needed.
o If no suitable core is available, the VCPU's execution is temporarily
suspended.
This technique optimizes hardware resource management while maintaining software
transparency.
Virtual Hierarchy
A virtual hierarchy approach proposed by Marty and Hill [39]:
• Many-Core CMPs and Space-Sharing:
o Emerging many-core CMPs can support space-sharing, where jobs are
assigned to separate core groups for long periods.
o This approach replaces time-sharing on fewer cores.
• Virtual Hierarchies:
o Introduced to overlay a coherence and caching hierarchy onto physical
processors.
o Can adapt to workloads, unlike fixed physical hierarchies, improving
performance and isolation.
• Physical vs. Virtual Cache Hierarchies:
o Physical hierarchies have static cache allocation and mapping.
o Virtual hierarchies dynamically adapt to workload needs for:
1. Faster data access near cores.
2. Establishing shared-cache domains and points of coherence.
3. Reducing miss access times.
• Space-Sharing Application:
o Illustrates workload assignments: e.g., database, web servers, middleware in
separate virtual core clusters (VMs).
o Works for multiple VMs or a single OS environment.
• Two-Level Virtual Hierarchy:
o Level 1: Isolates workloads/VMs, minimizing performance interference and
miss times.
o Level 2: Maintains globally shared memory for resource repartitioning without
costly cache flushes.
• Benefits:
o Efficient resource use for space-shared workloads (e.g., multiprogramming,
server consolidation).
o Easier system software integration and supports virtualization features like
content-based page sharing.
VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT
Physical Cluster:
• Collection of interconnected servers via a physical network like LAN.
• Focuses on clustering techniques for physical machines.
Virtual Clusters:
• Built on virtualized platforms instead of physical servers.
• Enable efficient resource sharing and adaptability for various applications.
Key Design Issues:
1. Live Migration of VMs: Moving virtual machines between physical hosts without
downtime.
2. Memory and File Migrations: Ensuring smooth transfer of memory and file systems.
3. Dynamic Deployment: Efficiently allocating and deploying virtual clusters in real-
time.
Challenges with Traditional VM Initialization:
• Administrators must manually configure VMs, causing potential overloading or
underutilization in networks.
Amazon EC2 Example:
• Provides elastic cloud computing power.
• Allows customers to create and manage virtual machines dynamically.
Bridging Mode:
• Supported by platforms like XenServer and VMware ESX Server.
• Enables VMs to appear as individual hosts on a network.
• Facilitates seamless communication through virtual network interface cards and
automated configuration.
o Resource scheduling, load balancing, server consolidation, and fault tolerance
are essential techniques.
• Efficiency in VM Image Storage:
o Efficient storage of large numbers of VM images is crucial.
o Preinstalled software packages (templates) allow users to build custom
software stacks for specific needs.
• Host and Guest Systems:
o Physical machines are host systems, and VMs are guest systems.
o Hosts and guests may run different operating systems.
• Dynamic Boundaries:
o Virtual cluster boundaries can change as VMs are added, removed, or
migrated.
Fast Deployment and Effective Scheduling
• Fast Deployment Capabilities:
o Construct and distribute software stacks (OS, libraries, applications) to physical nodes inside the cluster as quickly as possible.
o Quickly switch runtime environments from one user's virtual cluster to another's.
o Live migration of VMs enables workload transfers but can introduce
significant overhead, negatively impacting cluster utilization, throughput, and
quality of service (QoS).
• Design Challenges for Green Computing:
o Develop migration strategies to implement energy-efficient solutions without
degrading cluster performance.
• Load Balancing in Virtual Clusters:
o Achieved using load indexes and user login frequency.
o Automatic scale-up and scale-down mechanisms increase resource utilization
and reduce system response time.
o Mapping VMs to appropriate physical nodes enhances performance.
• Dynamic Load Adjustment:
o Live migration is used to redistribute workloads when cluster nodes are
imbalanced.
Customization of VMs:
• Template VMs can be distributed to physical hosts for customization.
• Existing software packages reduce customization time and ease switching of virtual
environments.
• Efficient disk space management is crucial, achieved through reducing duplicate
blocks using hash values.
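Illustrative sketch of the duplicate-block idea using a simple (non-cryptographic) FNV-1a hash over fixed-size blocks; the block size, hash choice, and in-memory index are assumptions, and a real system would use a collision-resistant digest and verify block contents before sharing them.
```c
/* Sketch: detect duplicate disk blocks by hashing each fixed-size block.
 * FNV-1a and the tiny linear index are assumptions for illustration;
 * production deduplication would use a cryptographic hash plus byte compare. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 16
#define MAX_BLOCKS 64

static uint64_t fnv1a(const unsigned char *data, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

int main(void) {
    /* A toy "disk image": four blocks, two of which are identical. */
    unsigned char image[4][BLOCK_SIZE] = {
        "kernel-code-blk", "user-data-block", "kernel-code-blk", "empty----------"
    };

    uint64_t seen[MAX_BLOCKS];
    int unique = 0;

    for (int b = 0; b < 4; b++) {
        uint64_t h = fnv1a(image[b], BLOCK_SIZE);
        int duplicate = 0;
        for (int i = 0; i < unique; i++)
            if (seen[i] == h) { duplicate = 1; break; }

        if (duplicate)
            printf("block %d: duplicate, store a reference only\n", b);
        else {
            seen[unique++] = h;
            printf("block %d: new content, store it (hash %016llx)\n",
                   b, (unsigned long long)h);
        }
    }
    printf("stored %d of 4 blocks\n", unique);
    return 0;
}
```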
User Profiles:
• Profiles store data block identification for corresponding VMs.
• New blocks created during data modifications are tracked in user profiles.
Steps for Deploying VMs:
1. Prepare the Disk Image: Use templates for easy creation.
2. Configure the VMs: Assign a name, disk image, network settings, CPU, and memory.
3. Choose Destination Nodes: Select appropriate physical hosts.
4. Execute Deployment Commands: Run the commands on every host.
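For instance, on a host managed through libvirt/KVM (an assumed tool chain, not the only one), steps 2 and 4 can be scripted via the libvirt C API. The domain XML below is a deliberately skeletal placeholder; a usable definition would also declare the disk image and network devices.
```c
/* Sketch: define and start a VM through the libvirt C API (Linux + libvirt assumed).
 * Build with:  gcc deploy.c -o deploy $(pkg-config --cflags --libs libvirt)
 * The domain XML here is a minimal placeholder; a real one also needs
 * <devices> entries for the disk image, network interface, etc. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void) {
    const char *domain_xml =
        "<domain type='kvm'>"
        "  <name>demo-vm</name>"                 /* placeholder VM name   */
        "  <memory unit='MiB'>1024</memory>"     /* placeholder memory    */
        "  <vcpu>1</vcpu>"
        "  <os><type arch='x86_64'>hvm</type></os>"
        "</domain>";

    virConnectPtr conn = virConnectOpen("qemu:///system");   /* local hypervisor */
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }

    /* Create and boot a transient domain from the XML description. */
    virDomainPtr dom = virDomainCreateXML(conn, domain_xml, 0);
    if (dom == NULL) {
        fprintf(stderr, "failed to create the domain\n");
        virConnectClose(conn);
        return 1;
    }
    printf("started VM '%s'\n", virDomainGetName(dom));

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```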
Templates for Deployment:
• Templates include preinstalled operating systems and/or application software.
• Copy-on-Write (COW) format reduces disk space usage and speeds up deployment.
Efficiency in Configuration:
• Use preedited profiles for VMs with the same configurations to simplify the process.
• Automatically assign values for specific items like UUID, VM name, and IP address.
Live Migration of VMs:
o Useful for dynamic resource provisioning upon demand or after node failures.
Live VM Migration Steps:
1. Start Migration: Identify VM and destination host; migration may be triggered
by load balancing.
2. Transfer Memory: Continuously transfer memory, including iterative pre-
copying, while minimizing disruptions.
3. Suspend VM: Pause the VM to transfer final memory and non-memory data
(e.g., CPU and network states); downtime is minimized.
4. Activate on New Host: Recover states and redirect network connections at the
new host; remove VM from the source.
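Toy simulation of the iterative pre-copy behavior in steps 2 and 3 (page counts, dirtying rates, and the stop threshold are made-up assumptions): each round sends the pages dirtied during the previous round, and once the dirty set is small enough the VM is suspended for a short stop-and-copy phase.
```c
/* Toy simulation of iterative pre-copy live migration.
 * Page counts, dirtying behavior, and the stop threshold are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGES          1024
#define STOP_THRESHOLD 16     /* suspend the VM once this few pages remain dirty */
#define MAX_ROUNDS     8

int main(void) {
    unsigned char dirty[PAGES];
    memset(dirty, 1, sizeof dirty);            /* round 0: every page must be sent */
    srand(1);

    int round = 0, dirty_count = PAGES;
    while (dirty_count > STOP_THRESHOLD && round < MAX_ROUNDS) {
        /* Transfer all currently dirty pages while the VM keeps running. */
        int sent = dirty_count;
        memset(dirty, 0, sizeof dirty);

        /* Meanwhile the running VM dirties some pages again (simulated). */
        int redirtied = rand() % (sent / 2 + 1);
        for (int i = 0; i < redirtied; i++)
            dirty[rand() % PAGES] = 1;

        dirty_count = 0;
        for (int i = 0; i < PAGES; i++)
            dirty_count += dirty[i];

        printf("round %d: sent %d pages, %d dirtied during transfer\n",
               ++round, sent, dirty_count);
    }

    /* Stop-and-copy: suspend the VM, send the remaining dirty pages plus CPU
     * and device state, then resume it on the destination host. */
    printf("suspend VM, copy final %d pages, resume on destination\n", dirty_count);
    return 0;
}
```
The round cap mirrors the convergence problem noted later: if the VM dirties pages faster than they can be sent, pre-copy must eventually give up and accept a longer stop-and-copy phase.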
o Enables clustering of inexpensive computers for scalable and reliable
computing.
• Focused on scalable and expressive mechanisms to define clusters for specific
services.
• Involved physical partitioning of cluster nodes for different service types.
Considerations for System Migration:
• Ensuring seamless transition from one physical node to another.
• Addressing potential performance impacts and resource allocation during migration.
Memory Migration
Memory Migration:
• Involves moving a VM's memory instance from one physical host to another.
• Memory size typically ranges from hundreds of megabytes to a few gigabytes.
• The chosen technique depends on the application/workload supported by the guest
OS.
Internet Suspend-Resume (ISR) Technique:
• Exploits temporal locality, as memory states of suspended and resumed VMs are often
similar, differing only in recent changes.
• Represents files as a tree of small subfiles to track modifications efficiently.
• Ensures only modified files are transmitted using caching, minimizing unnecessary
data transfer.
Limitations of ISR:
• Designed for scenarios where live VM migration is unnecessary.
• Leads to higher downtime compared to other advanced live migration techniques.
• A global file system across machines eliminates the need to copy files during
migration since all files are network-accessible.
Distributed File System in ISR:
• Used as a transport mechanism for suspended VM states.
• Local file systems are accessed by the VMM, which explicitly moves VM files for
suspend and resume operations.
• This avoids the complexity of implementing multiple file system calls but requires
local storage for VM virtual disks.
Smart Copying:
• Leverages spatial locality (frequent movement between predictable locations like
home and office).
• Transmits only changes between file systems at suspend and resume locations,
reducing data transfer.
• In cases without locality, much of the state can be synthesized at the resuming site.
Proactive State Transfer:
• Effective when the resuming site can be confidently predicted.
• Focuses on reducing data transfer for operating systems and applications, which
constitute the majority of storage space.
Network Migration
• A migrating VM should maintain all open network connections without relying on forwarding by the original host machine.
• Each VM is assigned a virtual IP address (and possibly a virtual MAC address) known to other entities, independent of the physical host machine.
• The VMM maps these virtual addresses to the corresponding VM and ensures the mappings remain valid as the VM moves between hosts.
• Performance Limitation: Migration daemon uses network bandwidth, causing
performance degradation.
Adaptive Rate Limiting:
• Reduces network impact but prolongs the total migration time significantly.
• Limits the number of precopy iterations due to convergence issues.
CR/TR-Motion (Checkpointing/Recovery and Trace/Replay):
• Transfers execution traces instead of dirty pages, reducing migration time and
downtime.
• Effective if the log replay rate exceeds the log growth rate.
Postcopy Migration:
• Transfers memory pages only once, reducing total migration time.
• Causes higher downtime due to latency in fetching pages before resuming VM
operations.
Memory Compression:
• Uses CPU resources to compress memory pages, significantly reducing transferred
data.
• Decompression is simple, fast, and efficient with minimal overhead.
Live Migration of VM Using Xen
N
• Xen Hypervisor (VMM):
o Allows multiple commodity operating systems to safely share x86 hardware.
Trade-Offs in Compression:
o A single compression algorithm for all memory data is ineffective.
o Different algorithms are needed to handle varying regularities in memory data.
Live Migration System:
o Migration daemons in Dom0 perform the migration tasks.
o Shadow Page Tables in the VMM layer trace changes to memory pages,
flagging them in a dirty bitmap.
o The dirty bitmap is sent to the migration daemon at the start of each precopy
round and then reset.
• Precopy Phase:
o Memory pages flagged in the bitmap are extracted and compressed for
transfer.
o At the destination, the data is decompressed, ensuring continuity of the VM
state.
This approach aims to balance migration efficiency and minimal disruption to the
applications running on VMs.
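Minimal sketch of the bitmap-plus-compression flow just described; the word-wide bitmap, page size, and zero-run-length scheme are illustrative assumptions rather than Xen's actual formats. Pages written since the last round are flagged, the bitmap is snapshotted and reset at the start of a round, and each flagged page is compressed before transfer.
```c
/* Sketch: dirty-bitmap tracking plus simple zero-run compression for one
 * pre-copy round. The page size, bitmap width, and run-length scheme are
 * illustrative assumptions, not Xen's actual formats. */
#include <stdio.h>
#include <stdint.h>

#define PAGES     64
#define PAGE_SIZE 256

static uint64_t dirty_bitmap;                  /* one bit per page */
static unsigned char memory[PAGES][PAGE_SIZE];

static void guest_write(int page, int offset, unsigned char value) {
    memory[page][offset] = value;
    dirty_bitmap |= 1ULL << page;              /* shadow page table would flag this */
}

/* Count how many bytes a zero-run-length encoding of the page would need. */
static int compressed_size(const unsigned char *page) {
    int out = 0;
    for (int i = 0; i < PAGE_SIZE; ) {
        if (page[i] == 0) {
            int run = 0;
            while (i < PAGE_SIZE && page[i] == 0 && run < 255) { run++; i++; }
            out += 2;                          /* marker byte + run length */
        } else {
            out += 2;                          /* marker byte + literal byte */
            i++;
        }
    }
    return out;
}

int main(void) {
    guest_write(3, 10, 0xAB);                  /* the running VM dirties two pages */
    guest_write(9, 200, 0xCD);

    /* Start of a pre-copy round: snapshot the bitmap, then reset it. */
    uint64_t round_bitmap = dirty_bitmap;
    dirty_bitmap = 0;

    for (int p = 0; p < PAGES; p++) {
        if (round_bitmap & (1ULL << p)) {
            int csize = compressed_size(memory[p]);
            printf("page %d: dirty, %d bytes raw -> %d bytes compressed\n",
                   p, PAGE_SIZE, csize);       /* would be sent to the destination */
        }
    }
    return 0;
}
```
Zero-run encoding is only one of several possible choices; as noted above, different memory regions exhibit different regularities, so a single algorithm rarely fits all data.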
Dynamic Deployment of Virtual Clusters
1. Cluster-on-Demand (COD) at Duke University:
o A virtual cluster management system that dynamically allocates servers from a common pool to multiple virtual clusters.
Example 3.10: VIOLIN Project at Purdue University:
• Purpose:
o Utilizes live VM migration to reconfigure virtual clusters for better resource
utilization.
o Supports isolated virtual environments for parallel applications.
• Key Features:
o Enables relocation and scaling of resources in a transparent manner.
o Achieves resource adaptation with minimal overhead (20 seconds out of 1,200
seconds for large tasks).
• Results:
o Demonstrated enhanced resource utilization with less than 1% increase in
execution time.
3.5 VIRTUALIZATION FOR DATA-CENTER AUTOMATION
• Major IT companies (Google, Amazon, Microsoft, etc.) are heavily investing in data
centers.
• Automation enables dynamic allocation of resources (hardware, software, databases)
to millions of users while ensuring QoS and cost-effectiveness.
Virtualization Growth (2006–2011):
• Virtualization market share grew significantly (from $1.04 billion in 2006 to $3.2
billion in 2011).
• Key drivers: mobility, reduced downtime, high availability, backup services, workload
balancing, and expanding client bases.
1. Workload Types:
o Chatty Workloads: Burst and silent periods (e.g., web video services).
o Noninteractive Workloads: Make progress without user interaction once submitted (e.g., high-performance computing jobs).
Advanced Resource Management:
• Dynamic CPU Allocation: Based on VM utilization and QoS metrics.
• Two-Level Management:
o Local controller at VM level.
o Global controller at the server level.
• CMP (Chip Multiprocessor) Optimization:
o Design virtual hierarchies for inter-VM sharing, reassignment, and memory
access optimization.
o VM-aware power budgeting to balance performance and energy saving.
This framework addresses the growing complexity and demands of modern data centers.
1. Definition:
o Traditional storage virtualization: Aggregation and repartitioning of disks at
coarse time scales for physical machines.
o Modern virtual storage: Managed by Virtual Machine Monitors (VMMs) and
guest OSes, involving VM images and application data.
2. Encapsulation and Isolation:
o VMs encapsulate operating systems and isolate them from others, allowing
multiple VMs to run on a single machine.
Parallax:
o A distributed storage system customized for virtualization environments, providing virtual disk images (VDIs) to client VMs.
o Allows live upgrades of block device drivers in active clusters.
3. Implementation:
o VDIs are handled via Xen’s block tap driver and implemented using a tapdisk
library.
o Storage appliance VMs manage block and network access, connecting to
physical hardware.
4. Scalability:
o Suitable for cluster-based environments with a shared administrative domain
for storage management.
o Optimized for dynamic, large-scale data centers.
Parallax demonstrates how innovative storage virtualization solutions can address key
bottlenecks in system virtualization, improving scalability and efficiency.
2. Eucalyptus:
o Platform: Linux.
o Features: Virtual networking and VM management for private clouds.
o Hypervisors: Xen and KVM.
o Public Cloud Interface: EC2.
o License: BSD.
3. OpenNebula:
o Platform: Linux.
o Features: Management of VMs, hosts, virtual networks, scheduling tools, and
dynamic provisioning.
o Hypervisors: Xen and KVM.
o Public Cloud Interface: EC2.
o License: Apache v2.
4. vSphere 4:
o Platforms: Linux and Windows.
o Features: Virtual storage, networking, data protection, dynamic provisioning,
and scalability.
o Hypervisors: VMware ESX and ESXi.
o Public Cloud Interface: VMware vCloud.
o License: Proprietary.
Example 3.12: Eucalyptus:
N
• Open-source system designed for IaaS (Infrastructure as a Service) clouds.
• Supports virtual networking, VM management, and interaction with private and public clouds.
Example 3.13: VMware vSphere 4:
• Commercial cloud operating system designed for data center virtualization and private
clouds.
• Functional Suites:
o Infrastructure Services:
▪ vCompute: ESX, ESXi, and DRS for virtualization.
▪ vStorage: VMFS and thin provisioning.
▪ vNetwork: Distributed switching and networking functions.
o Application Services:
▪ Availability: VMotion, Storage VMotion, High Availability (HA),
Fault Tolerance, Data Recovery.
▪ Security: vShield Zones and VMsafe.
▪ Scalability: DRS (Distributed Resource Scheduler) and Hot Add.
• Interfaces through VMware vCenter to integrate with existing or new applications.
These platforms play critical roles in virtualizing data centers and supporting cloud
operations.
Trust Management in Virtualized Data Centers
1. Role:
o A VMM creates a software layer between operating systems and hardware.
o It enables one or more Virtual Machines (VMs) on a single physical platform.
o Provides secure isolation, with hardware resources accessed only through VMM control.
2. Encapsulation:
o Encapsulates VM states, allowing them to be copied, shared, or removed like
files.
o Raises security concerns if encapsulated states are exposed or reused (e.g.,
random number reuse leading to cryptographic vulnerabilities).
3. Security Risks:
o Compromising the VMM or management VM can endanger the entire system.
o Protocols relying on "freshness" (e.g., session keys, TCP initial sequence
numbers) face risks of hijacking or plaintext exposure.
VM-Based Intrusion Detection Systems (IDS):
1. Types:
o Host-Based IDS (HIDS): Monitors specific systems but can be compromised
if the host system is attacked.
o Network-Based IDS (NIDS): Monitors network traffic but struggles with
detecting fake actions.
2. Virtualization-Based IDS:
o Isolates guest VMs on shared hardware, limiting the impact of attacks on other
VMs.
o Monitors and audits access to hardware/software, merging advantages of
HIDS and NIDS.
o Implemented as either:
▪ A high-privileged VM running on the VMM.
▪ Integrated directly into the VMM for higher privileges.
3. Livewire Architecture:
o Contains a policy engine and policy module for monitoring events across
guest VMs.
o Uses tracing mechanisms (e.g., PTrace) for secure policies.
Additional Security Tools:
1. Honeypots and Honeynets:
o A honeypot is a purposely defective system that simulates an operating system to deceive and monitor attackers; it can be implemented on a physical machine or in a VM.