Module 2 Notes

The document discusses virtualization in cloud computing, detailing its implementation levels, structures, and tools. It covers various virtualization types, including hardware-level, OS-level, and application-level virtualization, along with their benefits and challenges. Additionally, it highlights the role of hypervisors and specific architectures like Xen in managing virtual machines and resources efficiently.

Department of Computer Science and Engineering

CLOUD COMPUTING (BCS601)


Module-02
Virtual Machines and Virtualization of Clusters and Data Centers
➢ Implementation Levels of Virtualization
➢ Virtualization Structures/Tools and Mechanisms
➢ Virtualization of CPU/Memory and I/O devices,
➢ Virtual Clusters and Resource Management
➢ Virtualization for Data Center Automation.

IMPLEMENTATION LEVELS OF VIRTUALIZATION

Definition: Virtualization is a computer architecture technology enabling multiple virtual machines (VMs) to operate on the same hardware.
Historical Origin: The concept of VMs dates back to the 1960s.
Purpose of VMs:
• Enhances resource sharing among multiple users.
• Improves performance through efficient resource utilization and application flexibility.
• Virtualizes hardware (CPU, memory, I/O devices) or software (operating systems, libraries) in various layers.
Modern Relevance: Virtualization has gained importance recently due to rising demand for distributed and cloud computing.
Core Idea: Separates hardware from software for better system efficiency.
Example: Virtual memory expanded users' accessible memory space. Similar techniques optimize the use of compute engines, networks, and storage.

Levels of Virtualization Implementation

A traditional computer runs with a host operating system specially tailored for its hardware architecture, as shown in Figure 3.1(a).
Effect of Virtualization:

• Enables different user applications, each with their own operating system (guest OS), to run on the same hardware.
• Allows operation independently of the host OS.
Virtualization Layer:
• Achieved using additional software called a hypervisor or virtual machine monitor (VMM).
• The virtualization layer interposes itself at various levels of the computer system.
Virtual Machines (VMs):
• Applications run in VMs with their own guest OS.
• VMs operate over virtualized CPU, memory, and I/O resources.
Common Virtualization Layers:
1. ISA Level (Instruction Set Architecture level).
2. Hardware Level.
3. Operating System Level.
4. Library Support Level.
5. Application Level.
Instruction Set Architecture Level

Virtualization at ISA Level:


• Emulates one ISA using the ISA of a host machine.
• Enables running legacy binary code (e.g., MIPS code on x86 hardware).

Benefits:
• Facilitates running old binary code on new hardware.
• Creates virtual ISAs on any hardware.
Emulation Methods:
1. Code Interpretation:
o Interprets source instructions to target instructions one at a time.
o Requires multiple native instructions per source instruction.
o Slower process overall.
2. Dynamic Binary Translation (a sketch follows below):
o Translates basic blocks of source instructions into target instructions dynamically.
o Improves performance compared to interpretation.
o Can extend to program traces or super blocks for better efficiency.

Instruction Set Emulation:
• Relies on binary translation and optimization.
• Requires a virtual ISA (V-ISA).
• Involves adding a processor-specific software translation layer to the compiler.
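The following is a minimal Python sketch of the dynamic binary translation idea: basic blocks of a toy source ISA are translated into host functions, cached, and re-executed natively on later visits. The instruction encoding and opcode names are invented for illustration; real translators (e.g., QEMU) operate on machine code.

def translate_block(code, pc):
    """Translate one basic block (ending at a branch or halt) into a host function."""
    ops = []
    while True:
        instr = code[pc]
        pc += 1
        op = instr[0]
        if op == "li":                       # load immediate: r[dst] = imm
            _, dst, imm = instr
            ops.append(lambda r, d=dst, v=imm: r.__setitem__(d, v))
        elif op == "add":                    # r[dst] = r[a] + r[b]
            _, dst, a, b = instr
            ops.append(lambda r, d=dst, a=a, b=b: r.__setitem__(d, r[a] + r[b]))
        elif op in ("jmp", "halt"):          # a control transfer ends the block
            target = instr[1] if op == "jmp" else None
            def block(r, ops=ops, target=target):
                for f in ops:                # run the whole translated block natively
                    f(r)
                return target                # next guest pc (None stops execution)
            return block

def run(code, regs):
    cache = {}                               # translation cache keyed by guest pc
    pc = 0
    while pc is not None:
        if pc not in cache:                  # translate only on first execution
            cache[pc] = translate_block(code, pc)
        pc = cache[pc](regs)                 # re-use the cached translation afterwards

program = [("li", 0, 5), ("li", 1, 7), ("add", 2, 0, 1), ("halt",)]
r = [0] * 4
run(program, r)
print(r[2])                                  # 12

The translation cache is what separates this approach from pure interpretation: each block pays the translation cost once and then runs at near-native speed on every later visit.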

Hardware Abstraction Level

Definition: Hardware-level virtualization operates directly on top of the bare hardware.

Dual Functionality:
• Creates a virtual hardware environment for virtual machines (VMs).
• Manages the underlying hardware through virtualization.

Objective: Virtualizes computer resources (processors, memory, I/O devices) to enhance hardware utilization for multiple users concurrently.

Historical Development:
• Implemented in the IBM VM/370 system in the 1960s.
• Recently, the Xen hypervisor has been used to virtualize x86-based machines for running Linux or other guest OS applications.
Operating System Level

Definition: OS-level virtualization is an abstraction layer between the operating system (OS)
and user applications.

Functionality:
• Creates isolated containers on a single physical server.
• OS instances in containers utilize hardware and software resources in data centers.

Behavior: Containers function like real servers.

Common Uses:
1. Virtual Hosting:
o Allocates hardware resources among multiple, mutually distrusting users.
2. Server Consolidation:
o Moves services from separate hosts into containers or virtual machines (VMs) on a single server.

Library Support Level

API-Based Virtualization:
• Applications typically use APIs from user-level libraries rather than direct system calls to the OS.
• APIs serve as an interface suitable for virtualization.

Method:
• Virtualization is achieved by controlling communication between applications and the system through API hooks (a sketch of this idea follows the examples below).

Examples:
1. WINE:
o Supports Windows applications on UNIX systems using API-based virtualization.
2. vCUDA:
o Enables applications in VMs to utilize GPU hardware acceleration.
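A minimal Python sketch of the API-hook idea is shown below. It is not how WINE or vCUDA are actually implemented; it only illustrates intercepting an application's library calls and optionally remapping them to host-side implementations. All class and function names here are hypothetical.

import functools

class APIVirtualizer:
    """Wraps a library object; every call is intercepted and can be remapped."""

    def __init__(self, real_library, remap_table):
        self._real = real_library
        self._remap = remap_table          # guest API name -> host implementation

    def __getattr__(self, name):
        target = self._remap.get(name, getattr(self._real, name))

        @functools.wraps(target)
        def hook(*args, **kwargs):
            print(f"[hook] intercepted {name}{args}")   # audit / forwarding point
            return target(*args, **kwargs)               # e.g., forward to host hardware
        return hook

# A stand-in "real" library the host actually provides.
class HostMathLib:
    def add(self, a, b):
        return a + b

# Remap one API call to a host-side implementation (e.g., run on real hardware).
virt_lib = APIVirtualizer(HostMathLib(), {"mul": lambda a, b: a * b})

print(virt_lib.add(2, 3))   # intercepted, then forwarded to the host library
print(virt_lib.mul(4, 5))   # intercepted and remapped to the host implementation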

User-Application Level

Definition:
• Application-level virtualization virtualizes applications as virtual machines (VMs).
• Applications typically run as processes on traditional operating systems (OS), making
this also known as process-level virtualization.
Popular Approach:
• Utilizes high-level language (HLL) VMs.
• Virtualization layer acts as an application program on top of the OS, exporting a VM abstraction.
• Programs written and compiled for a specific abstract machine definition can run on this VM.
• Examples: Microsoft .NET CLR and Java Virtual Machine (JVM).

Other Forms of Application-Level Virtualization:
1. Application Isolation: Wraps applications in a layer isolated from the host OS and other applications.
2. Application Sandboxing: Restricts application interactions for enhanced security.
3. Application Streaming: Streams applications to user devices for on-demand usage.

Benefits:
• Simplifies application distribution and removal from user workstations.
• Example: LANDesk application virtualization platform, which deploys software as self-contained, executable files without requiring installation or system modifications.

VMM Design Requirements and Providers

Definition:
o Hardware-level virtualization inserts a layer, called the Virtual Machine
Monitor (VMM), between real hardware and operating systems (OS).
Functionality:
o The VMM manages hardware resources and captures all hardware access
processes.
o Acts like a traditional OS, enabling multiple traditional operating systems
(same or different) to run simultaneously on the same hardware.
Key Features of a VMM:
1. Environment:
▪ Provides an environment for programs that is nearly identical to the
original hardware.
2. Performance:
▪ Programs should show only minor speed decreases.

3. Control:
▪ Maintains complete control of hardware resources.
Exceptions to Identical Functionality:
o Differences due to resource availability when multiple VMs are running.
o Timing dependencies caused by the virtualization layer or concurrent VMs.
Resource Requirements:
o Individual VM resource needs (e.g., memory) are reduced.
o Total resource usage may exceed the hardware's capacity due to multiple VMs.
Efficiency:
o Efficiency is critical; users won't prefer a VMM if performance is too low.
o A subset of virtual processor instructions must execute directly on the hardware to improve efficiency.
Traditional Emulators vs. VMM:
o Emulators and simulators offer flexibility but are too slow for practical use.
o VMMs must execute instructions without excessive software intervention to ensure speed.
Control Over Resources:
1. The VMM allocates hardware resources to programs.
2. Programs cannot access unallocated resources.
3. The VMM can regain control of allocated resources when needed.
Processor Limitations:
o Some processors, like x86, have limitations (e.g., inability to trap privileged instructions) that make VMM implementation challenging.
o Hardware-Assisted Virtualization modifies hardware to meet VMM requirements.
Virtualization Support at the OS Level

Emergence of Cloud Computing:


• Enabled by virtual machine (VM) technology.
• Shifts hardware and staffing costs of managing computational centers to third-party
providers, similar to a banking model.

Two Major Challenges:


1. Variable Resource Needs:
o Tasks may require fluctuating numbers of physical machines and VM
instances.
o Example: A task might need one CPU at some phases but hundreds of CPUs at
other times.
2. Slow VM Instantiation:
o New VMs are either fresh boots or template replicates, unaware of the current
application state.
o Results in delays and inefficiencies.

Call for Improvement:
• Significant research and development are needed to address these challenges and enhance support for cloud computing.

Why OS-Level Virtualization?

Challenges of Hardware-Level Virtualization:
1. Slow initialization due to each VM creating its image from scratch.
2. Storage issues from considerable repeated content among VM images.
3. Disadvantages include:
o Slow performance.
o Low density.
o The need for para-virtualization to modify the guest OS.
4. May require hardware modifications to reduce performance overhead.
OS-Level Virtualization as a Solution:
• Inserts a virtualization layer inside the operating system to partition physical resources.
• Creates multiple isolated virtual machines (VMs) within a single OS kernel.
Characteristics of OS-Level VMs:
• Referred to as Virtual Execution Environments (VE), Virtual Private Systems (VPS), or containers.
• VEs behave like real servers with their own:
o Processes.
o File systems.
o User accounts.
o Network interfaces (IP addresses, routing tables, firewall rules, etc.).
• VEs share the same operating system kernel but can be customized for different users.
Alternate Name:
• Known as single-OS image virtualization because all VEs use a single shared OS kernel.
Advantages of OS Extensions

Benefits of OS-Level Virtualization Compared to Hardware-Level Virtualization:


1. Minimal startup/shutdown costs, low resource requirements, and high scalability.
2. Ability to synchronize state changes between a VM and its host environment when
needed.
Mechanisms to Achieve Benefits:
1. All OS-level VMs share a single operating system kernel.
2. Virtualization layer allows VM processes to access host machine resources without
modifying them.

Disadvantages of OS Extensions

Main Disadvantage of OS Extensions:
• All OS-level VMs on a single container must use the same operating system family (e.g., Windows cannot run on a Linux-based container).

Challenge in Cloud Computing:
• Users prefer diverse operating systems (e.g., Windows, Linux), which poses a limitation for OS-level virtualization.

Virtual Root Directories:
• Created using commands like chroot in UNIX systems (a sketch follows this list).
• Methods to implement virtual root directories:
1. Duplicating Resources:
▪ Copies common resources to each VM partition.
▪ High resource costs and overhead on the physical machine.
2. On-Demand Copies:
▪ Shares resources with the host environment and creates private copies only as needed.
▪ More resource-efficient.
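A minimal Python sketch of a chroot-style virtual root directory is given below, assuming a prepared root filesystem at a hypothetical path /srv/ve1 and root privileges. Real OS-level virtualization systems (OpenVZ, Linux-VServer) add resource accounting and namespace isolation on top of this basic mechanism.

import os

VE_ROOT = "/srv/ve1"          # per-VE private root, populated on demand (assumed path)

def enter_virtual_root(root):
    os.chroot(root)           # confine this process's filesystem view to root
    os.chdir("/")             # '/' now refers to the VE's private root

if __name__ == "__main__":
    pid = os.fork()
    if pid == 0:                              # child becomes the "container" process
        enter_virtual_root(VE_ROOT)
        os.execv("/bin/sh", ["/bin/sh"])      # shell sees only the VE's files
    os.waitpid(pid, 0)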
OS-Level Virtualization Limitations:
• Incurs performance and scalability challenges compared to hardware-assisted
virtualization.
Concept of OS-Level Virtualization:
• Inserts a virtualization layer inside the OS to partition hardware resources.
• Allows multiple VMs to run applications in isolated environments.
Implementation Requirements:
• Isolated execution environments (VMs) rely on a single OS kernel.
• Access requests from a VM are redirected to the VM's local resource partition on the
physical machine.

Virtualization on Linux or Windows Platforms

Linux Dominance in OS-Level Virtualization:


• Most OS-level virtualization systems are based on Linux.
• Virtualization support for Windows platforms is still under research.
Role of the Linux Kernel:
• Provides an abstraction layer for processes to interact with resources without needing
hardware details.
• New hardware may require a new Linux kernel for compatibility.
• Patched kernels enable extended functionality for some Linux platforms.
Simultaneous VMs:
• Linux platforms not tied to a special kernel can run multiple VMs on the same
hardware.

Examples of OS-Level Virtualization Tools:
1. Linux vServer and OpenVZ: Enable Linux platforms to run other platform-based applications.
2. FVM: Specifically developed for virtualization on the Windows NT platform.

Middleware Support for Virtualization

Library-Level Virtualization:
• Also called user-level Application Binary Interface (ABI) or API emulation.
• Creates execution environments for running alien programs on a platform.
• Does not require creating a virtual machine (VM) for an entire operating system.
Key Functions:
1. API Call Interception.
2. API Call Remapping.
Examples of Library-Level Virtualization Systems:
• Windows Application Binary Interface (WABI).
• lxrun.

• WINE.
• Visual MainWin.
• vCUDA.

3.2 VIRTUALIZATION STRUCTURES/TOOLS AND MECHANISMS

Before Virtualization:
• The operating system directly manages the hardware.

After Virtualization:

• A virtualization layer is added between the hardware and the operating system.
• This layer converts real hardware into virtual hardware.
• Enables multiple operating systems (e.g., Linux and Windows) to run on the same
physical machine simultaneously.

Classes of VM Architectures:
1. Hypervisor Architecture:
o Also known as the Virtual Machine Monitor (VMM).
o Performs virtualization operations.
2. Paravirtualization.
3. Host-Based Virtualization.

Hypervisor and Xen Architecture

Role of Hypervisor:
• Supports hardware-level virtualization on bare metal devices (e.g., CPU, memory, disk, network interfaces).
• Acts as a virtualization layer between physical hardware and the operating system (OS).
• Provides hypercalls for guest OSes and applications.

Types of Hypervisor Architectures:
1. Micro-Kernel Hypervisor:
o Example: Microsoft Hyper-V.
o Includes only basic functions (e.g., memory management, processor scheduling).
o Device drivers and changeable components remain outside the hypervisor.
o Smaller hypervisor code size.
2. Monolithic Hypervisor:
o Example: VMware ESX.
o Implements all functions, including device drivers.
o Larger hypervisor code size.
Functionality:
• Converts physical devices into virtual resources for VMs.

The Xen Architecture


Overview of Xen:
• Open-source hypervisor developed by Cambridge University.
• A microkernel hypervisor separating mechanism (handled by Xen) from policy
(handled by Domain 0).
• Does not include native device drivers; provides mechanisms for guest OSes to access
physical devices.
Key Features:
• Small hypervisor size due to minimal functionality.

• Acts as a virtual environment between hardware and OS.
• Commercial versions include Citrix XenServer and Oracle VM.
Core Components:
1. Hypervisor.
2. Kernel.
3. Applications.
Domains in Xen:
• Domain 0 (privileged guest OS):
o Manages hardware access and devices.
o Allocates and maps resources for other domains (Domain U).
o Boots first, without file system drivers.
o Security risks exist if Domain 0 is compromised.
• Domain U (unprivileged guest OS): Runs on resources allocated by Domain 0.

Security:
• Xen is Linux-based with a C2 security level.
• Strong security policies are needed to protect Domain 0.
VM Capabilities:
• Domain 0 acts as a VMM, enabling users to create, save, modify, share, migrate, and roll back VMs like files.
• Rolling back or rerunning VMs allows fixing errors or redistributing content.
VM Execution Model:
• Traditional machine states are linear, while VM states form a tree structure:
o Multiple instances of a VM can exist simultaneously.
o VMs can roll back to previous states or rerun from saved points.

Binary Translation with Full Virtualization

Hardware Virtualization Categories:
1. Full Virtualization:
o Does not require modification of the host OS.
o Relies on binary translation to virtualize sensitive, nonvirtualizable
instructions.
o Guest OSes consist of both critical and noncritical instructions.
2. Host-Based Virtualization:
o Utilizes both a host OS and a guest OS.
o Includes a virtualization software layer between the host OS and guest OS.

Full Virtualization:
• Noncritical Instructions: Run directly on the hardware.
• Critical Instructions: Trapped and replaced with software-based emulation by the
Virtual Machine Monitor (VMM).
Why Only Critical Instructions Are Trapped (a trap-and-emulate sketch follows this list):
• Performance:
o Binary translation of all instructions can lead to high performance overhead.
• Security:

o Critical instructions control hardware and can pose security risks.
o Trapping them ensures system security.
• Efficiency:
o Running noncritical instructions on hardware improves overall efficiency.
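A minimal Python sketch of the trap-and-emulate split described above: noncritical instructions are left to run directly, while critical instructions are trapped and emulated against per-VM virtual state. The instruction names and the VM-state layout are invented for illustration.

CRITICAL = {"cli", "sti", "write_cr3"}     # instructions that would touch real hardware state

def vmm_emulate(instr, vm_state):
    """Emulate a trapped critical instruction against the VM's virtual state."""
    if instr == "cli":
        vm_state["virtual_if"] = 0         # clear the VM's *virtual* interrupt flag
    elif instr == "sti":
        vm_state["virtual_if"] = 1
    elif instr.startswith("write_cr3"):
        vm_state["virtual_cr3"] = instr.split()[1]   # remap page-table base virtually

def run_guest(instructions, vm_state):
    for instr in instructions:
        opcode = instr.split()[0]
        if opcode in CRITICAL:
            vmm_emulate(instr, vm_state)   # trap into the VMM
        else:
            pass                           # noncritical: would execute directly on hardware

vm = {"virtual_if": 1, "virtual_cr3": None}
run_guest(["mov r1 r2", "cli", "write_cr3 0x1000", "sti"], vm)
print(vm)   # {'virtual_if': 1, 'virtual_cr3': '0x1000'}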

Binary Translation of Guest OS Requests Using a VMM

Implementation:
• VMware and other companies implement full virtualization.
• The Virtual Machine Monitor (VMM) operates at Ring 0, while the guest OS
operates at Ring 1.

Functionality:
• The VMM scans the instruction stream to identify privileged, control-, and behavior-sensitive instructions.
• These instructions are trapped and emulated by the VMM using binary translation.
• Full virtualization combines binary translation and direct execution.
• The guest OS is completely decoupled from the hardware and is unaware of the virtualization.
Challenges with Full Virtualization:
• Binary Translation is time-consuming, impacting performance.
• I/O-Intensive Applications: Virtualizing such workloads poses significant challenges.
• Performance is improved using a code cache for translated instructions, but this increases memory usage.
Performance Efficiency:
• On the x86 architecture, performance typically ranges from 80% to 97% of the host machine's performance.

Host-Based Virtualization

Host-Based VM Architecture:
• A virtualization layer is installed on top of the host OS.
• The host OS remains responsible for managing hardware.
• Guest OSes run on top of the virtualization layer.
• Dedicated applications can run in VMs, while other applications may run directly on
the host OS.

Advantages:
1. Easy installation without modifying the host OS.
2. Simplifies VM design and deployment by relying on the host OS for device drivers and low-level services.
3. Compatible with a wide range of host machine configurations.
Disadvantages:
• Lower performance compared to hypervisor/VMM architecture.
• Involves four layers of mapping when applications request hardware access, significantly impacting performance.
• Requires binary translation when the ISA of the guest OS differs from the underlying hardware ISA, further decreasing efficiency.
Tradeoff:
• While flexible, host-based architectures may suffer from poor performance, making them less practical for some use cases.

Para-Virtualization with Compiler Support


Definition:
• Para-virtualization modifies the guest operating systems.
• It provides special APIs, requiring significant changes to the guest OS kernel and user applications.
Purpose:
• Aims to reduce virtualization overhead and improve performance.
Mechanism:
• The virtualization layer replaces nonvirtualizable OS instructions with hypercalls using an intelligent compiler.
x86 Processor Execution Rings:
• Four rings: Rings 0, 1, 2, and 3 (lower ring numbers have higher privileges).
• Ring 0: Privileged instructions executed by the OS.
• Ring 3: User-level applications run here.
Example of Para-Virtualization:
• KVM (Kernel-based Virtual Machine).

3.2.3.1 Para-Virtualization Architecture

Virtualization of x86 Processor:
• A virtualization layer is inserted between hardware and the operating system (OS).
• The virtualization layer operates at Ring 0 (highest privilege level).
Para-Virtualization Mechanism:
• Replaces nonvirtualizable instructions with hypercalls that interact with the hypervisor or Virtual Machine Monitor (VMM).
• Requires modifications to the guest OS kernel, making it unable to run directly on hardware.
Issues with Para-Virtualization:
1. Compatibility and Portability: Doubts arise as it must support unmodified OSes as well.
2. Maintenance Cost: Maintaining para-virtualized OSes is costly due to deep kernel modifications.
3. Performance Variability: Advantages of para-virtualization vary depending on workloads.
Comparison with Full Virtualization:
• Para-virtualization is easier and more practical.
• Full virtualization suffers from performance issues, especially in binary translation.
Examples:
• Commonly used virtualization products employing para-virtualization include Xen, KVM, and VMware ESX.

KVM (Kernel-Based VM)

KVM (Kernel-based Virtual Machine):


• A Linux para-virtualization system included in the Linux version 2.6.20 kernel.
Functionality:

• Linux Kernel handles memory management and scheduling.
• KVM performs the remaining virtualization tasks, making it simpler than a full
hypervisor.
Key Features:
• Hardware-Assisted Para-Virtualization: Enhances performance.
• Supports unmodified guest OSes, including Windows, Linux, Solaris, and other UNIX
variants.

Para-Virtualization with Compiler Support

Para-Virtualization Process:
• Handles privileged and sensitive instructions at compile time rather than runtime.

• Modifies the guest OS kernel to replace such instructions with hypercalls that interact with the hypervisor or Virtual Machine Monitor (VMM).
Architecture:
• Example: Xen employs a para-virtualization architecture.
• The guest OS typically runs at Ring 1, not Ring 0, meaning it cannot execute privileged instructions directly.
Privileged Instructions:
• Implemented via hypercalls to the hypervisor.
• The modified guest OS emulates the original OS behavior using these hypercalls.
On UNIX Systems:
• A system call generally involves an interrupt or service routine.
• In Xen, hypercalls use a dedicated service routine to fulfill this role (a sketch follows below).
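A minimal Python sketch of the hypercall idea: the para-virtualized guest kernel calls a hypervisor service routine instead of executing the privileged operation itself. Hypercall numbers, handler names, and the page-table representation are invented; real Xen hypercalls go through a trap instruction and a fixed ABI.

HYPERCALL_TABLE = {}

def hypercall(number):
    """Register a hypervisor-side handler for a hypercall number."""
    def register(fn):
        HYPERCALL_TABLE[number] = fn
        return fn
    return register

@hypercall(1)                              # e.g., update a guest page-table entry
def do_mmu_update(vm, vaddr, entry):
    vm["page_table"][vaddr] = entry        # hypervisor validates and applies the change
    return 0

def hypercall_entry(vm, number, *args):
    """Dedicated service routine: dispatch a trapped hypercall to its handler."""
    handler = HYPERCALL_TABLE.get(number)
    return handler(vm, *args) if handler else -1   # -1 for an unknown hypercall

# Guest kernel code path: instead of writing the page table directly (a
# privileged operation), the para-virtualized kernel issues a hypercall.
vm = {"page_table": {}}
status = hypercall_entry(vm, 1, 0x4000, "frame 12, rw")
print(status, vm["page_table"])            # 0 {16384: 'frame 12, rw'}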

VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES

Processors like x86 use hardware-assisted virtualization, which enables the Virtual Machine Monitor (VMM) and the guest operating system (OS) to operate in separate modes. Sensitive instructions from the guest OS and its applications are intercepted by the VMM. The hardware handles mode switching and saves processor states. Intel and AMD have developed their own technologies for this on the x86 architecture.

Hardware Support for Virtualization

1. Processor Modes:
o Modern processors ensure controlled hardware access using two modes:
▪ User Mode: For unprivileged instructions.
▪ Supervisor Mode: For privileged instructions.
o Without protection mechanisms, processes could cause hardware conflicts and
crashes.
2. Virtualized Environments:

o Virtualization adds complexity due to extra layers in the machine stack.
3. Hardware Virtualization Solutions:
o VMware Workstation:
▪ Software suite for x86 and x86-64 systems.
▪ Allows running multiple virtual machines (VMs) alongside the host OS.
▪ Uses host-based virtualization.
o Xen Hypervisor:
▪ Runs on IA-32, x86-64, Itanium, and PowerPC 970 hosts.
▪ Modifies Linux to act as the hypervisor (privileged layer).
▪ Supports multiple guest OSes on top.
o KVM (Kernel-based Virtual Machine):
▪ Linux-based virtualization infrastructure.
▪ Supports hardware-assisted virtualization (Intel VT-x, AMD-V) and paravirtualization using the VirtIO framework.
▪ VirtIO includes components like:
▪ Paravirtual Ethernet card.
▪ Disk I/O controller.
▪ Memory adjustment (balloon device).
▪ VGA graphics interface with VMware drivers.

CPU Virtualization

1. VM Overview:
o A Virtual Machine (VM) is a duplicate of a computer system.
o Most VM instructions run natively on the host processor for efficiency.
o Critical instructions are managed carefully for system correctness and stability.
2. Critical Instruction Categories:
o Privileged Instructions: Can only execute in a privileged mode and are trapped otherwise.
o Control-Sensitive Instructions: Change resource configurations.
o Behavior-Sensitive Instructions: Behave differently depending on resource configurations.
3. Virtualizable CPU Architecture:
o A CPU is virtualizable if VM instructions run in user mode while the Virtual Machine Monitor (VMM) runs in supervisor mode.
o Privileged instructions are trapped in the VMM, ensuring stability and correctness.
4. RISC vs. x86 CPU Architectures:
o RISC: Naturally virtualizable as all sensitive instructions are privileged.
o x86: Not fully virtualizable since some sensitive instructions are not privileged (e.g., SGDT, SMSW).
5. System Calls:
o On UNIX-like systems: System calls trigger the 80h interrupt, handled by the OS kernel.
o On para-virtualized systems (e.g., Xen): The 80h interrupt triggers in the guest OS, while the 82h interrupt is triggered in the hypervisor. The hypervisor processes the call and returns control to the guest OS.
6. Paravirtualization:
o Allows unmodified applications to run in the VM with slight performance penalties.

Hardware-Assisted CPU Virtualization

1. Challenge in Virtualization:
• Full or paravirtualization is complex, prompting efforts to simplify.
2. Privilege Mode Level (Ring-1):
• Intel and AMD introduced a privilege mode level (often called Ring-1) in x86
processors.
• Operating systems run at Ring 0, and the hypervisor runs at Ring-1.
• Privileged and sensitive instructions are automatically trapped in the hypervisor.
3. Benefits:
• Eliminates the need for binary translation in full virtualization.
• Allows operating systems to run in VMs without modification.

4. CPU Enhancements:
• Additional instructions are added to manage VM CPU state.
5. Implementations:
• Technologies like VT-x are used by Xen, VMware, and Microsoft Virtual PC hypervisors.
6. Hardware-Assisted Virtualization:
• Offers high efficiency but incurs overhead due to mode-switching between the hypervisor and the guest OS.
• Sometimes underperforms compared to binary translation.
7. Hybrid Approach:
• Systems like VMware use a hybrid model: some tasks are offloaded to hardware, while others are handled by software.
8. Combining Techniques:
• Paravirtualization and hardware-assisted virtualization are often combined for better performance.
Memory Virtualization

1. Traditional Memory Mapping:


o Operating systems use page tables for one-stage mapping of virtual memory to
machine memory.
o MMU (Memory Management Unit) and TLB (Translation Lookaside Buffer)
enhance performance.
2. Virtual Environment:
o Involves two-stage mapping (a sketch follows this list):
▪ Virtual memory → Physical memory (Guest OS).
▪ Physical memory → Machine memory (VMM).
o MMU virtualization ensures transparency to the guest OS.
3. Shadow Page Tables:
o VMM creates shadow page tables corresponding to guest OS page tables.
o Leads to high memory overhead due to duplication.
4. Nested Page Tables:

o Additional layer maps physical memory to machine memory.
o Increases complexity and performance costs.
5. Direct Mapping via TLB:
o VMware uses shadow page tables and TLB hardware to map virtual memory
directly to machine memory, avoiding two-stage translations on every access.
6. Hardware Assistance:
o AMD Barcelona processor (since 2007) introduced nested paging to support
two-stage memory translation, reducing overhead.
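The two-stage mapping and shadow page table above can be illustrated with a small Python sketch. Page tables are toy dictionaries and the page size is fixed at 4 KiB for illustration; real MMUs walk hardware-defined multi-level page-table structures.

PAGE = 4096

guest_page_table  = {0x0: 0x5, 0x1: 0x9}    # guest virtual page  -> guest physical page (guest OS)
vmm_physical_map  = {0x5: 0x22, 0x9: 0x40}  # guest physical page -> machine page (VMM)
shadow_page_table = {}                       # guest virtual page  -> machine page (cached by VMM)

def translate(guest_vaddr):
    vpn, offset = divmod(guest_vaddr, PAGE)
    if vpn not in shadow_page_table:          # "miss": walk both stages once
        gpn = guest_page_table[vpn]           # stage 1: guest OS page table
        mpn = vmm_physical_map[gpn]           # stage 2: VMM physical-to-machine map
        shadow_page_table[vpn] = mpn          # cache the combined mapping (shadow entry)
    return shadow_page_table[vpn] * PAGE + offset

print(hex(translate(0x0123)))                 # machine address in page 0x22
print(hex(translate(0x1FFF)))                 # machine address in page 0x40

Once a shadow entry exists, the hardware TLB can translate guest virtual addresses to machine addresses in a single step, which is the point of the direct-mapping optimization mentioned above.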

I/O Virtualization

1. Definition:
o Manages routing of I/O requests between virtual devices and shared physical hardware.
2. Methods of Implementation:
o Full Device Emulation:
▪ Emulates real-world devices in software within the VMM.
▪ Slower than actual hardware due to software overhead.
o Para-Virtualization:
▪ Uses a split driver model (frontend in Domain U, backend in Domain
0).
▪ Provides better performance than emulation but with higher CPU
overhead.
o Direct I/O:
▪ VM accesses devices directly, achieving near-native performance.
▪ Challenges exist for commodity hardware, particularly during
workload migration.
3. Hardware-Assisted I/O Virtualization:
o Critical to reduce high overhead from device emulation.

o Intel VT-d supports remapping of DMA transfers and device-generated
interrupts.
4. Self-Virtualized I/O (SV-IO):
o Utilizes multicore processors for virtualizing I/O devices.
o Provides virtual devices with APIs for VMs and the VMM.
o Each Virtual Interface (VIF) has message queues (incoming and outgoing) and a unique ID (a sketch follows this list).
5. Challenges:
o Full emulation is slow, para-virtualization has higher CPU costs, and direct
I/O faces hardware limitations.
o Proper management is needed for reliability during device reassignment and
workload migration.
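A minimal Python sketch of an SV-IO-style virtual interface (VIF): each VIF carries a unique ID plus incoming and outgoing message queues, a guest-side API posts requests, and a host-side routine (which would run in the VMM or on a dedicated core) services them. The class, method names, and queue protocol are invented for illustration.

from collections import deque

class VirtualInterface:
    def __init__(self, vif_id):
        self.vif_id = vif_id
        self.outgoing = deque()        # guest -> device requests
        self.incoming = deque()        # device -> guest completions

    # Guest-facing API
    def send_request(self, request):
        self.outgoing.append(request)

    def poll_completion(self):
        return self.incoming.popleft() if self.incoming else None

    # Host / SV-IO side: service queued requests (e.g., on a spare core)
    def service(self):
        while self.outgoing:
            req = self.outgoing.popleft()
            self.incoming.append({"vif": self.vif_id, "req": req, "status": "done"})

vif = VirtualInterface(vif_id=7)
vif.send_request({"op": "write", "block": 42})
vif.service()                          # would run asynchronously in the VMM
print(vif.poll_completion())           # {'vif': 7, 'req': {...}, 'status': 'done'}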


Virtualization in Multi-Core Processors

Higher Complexity: Virtualizing multi-core processors is inherently more complicated than uni-core ones, despite their higher performance potential.
Challenges in Utilization:
• Parallelization: Programs must be parallelized to fully utilize multi-core processors,
requiring better programming models, languages, and libraries.
• Task Assignment: Explicitly assigning tasks to cores is difficult, spurring research in
scheduling algorithms and resource management, though finding the right balance
between performance and complexity remains elusive.
Emerging Issues:
• Dynamic Heterogeneity: Mixing fat CPU cores with thin GPU cores adds complexity
to resource management, stemming from factors like less reliable transistors and the
challenges of using them efficiently.

Physical versus Virtual Processor Cores

A method proposed by Wells et al. [74]:
• Multicore Virtualization: Provides a higher-level abstraction of processor cores for
hardware designers.
• Efficient Resource Management: Reduces inefficiency and complexity in managing
hardware resources through software.
• Position in System Architecture: Located beneath the Instruction Set Architecture
(ISA) and remains unaffected by the OS or hypervisor (VMM).
• Dynamic VCPU Allocation:
o A virtual CPU (VCPU) can switch between physical cores as needed.
o If no suitable core is available, the VCPU's execution is temporarily
suspended.
This technique optimizes hardware resource management while maintaining software
transparency.


Virtual Hierarchy
A proposal by Marty and Hill [39]:
• Many-Core CMPs and Space-Sharing:
o Emerging many-core CMPs can support space-sharing, where jobs are
assigned to separate core groups for long periods.
o This approach replaces time-sharing on fewer cores.
• Virtual Hierarchies:
o Introduced to overlay a coherence and caching hierarchy onto physical
processors.
o Can adapt to workloads, unlike fixed physical hierarchies, improving
performance and isolation.
• Physical vs. Virtual Cache Hierarchies:
o Physical hierarchies have static cache allocation and mapping.
o Virtual hierarchies dynamically adapt to workload needs for:
1. Faster data access near cores.
2. Establishing shared-cache domains and points of coherence.
3. Reducing miss access times.
• Space-Sharing Application:
o Illustrates workload assignments: e.g., database, web servers, middleware in
separate virtual core clusters (VMs).
o Works for multiple VMs or a single OS environment.
• Two-Level Virtual Hierarchy:
o Level 1: Isolates workloads/VMs, minimizing performance interference and
miss times.
o Level 2: Maintains globally shared memory for resource repartitioning without
costly cache flushes.
• Benefits:
o Efficient resource use for space-shared workloads (e.g., multiprogramming,
server consolidation).
o Easier system software integration and supports virtualization features like content-based page sharing.

3.4 VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

Physical Cluster:
• Collection of interconnected servers via a physical network like LAN.
• Focuses on clustering techniques for physical machines.
Virtual Clusters:

• Built on virtualized platforms instead of physical servers.
• Enable efficient resource sharing and adaptability for various applications.
Key Design Issues:
1. Live Migration of VMs: Moving virtual machines between physical hosts without
downtime.
2. Memory and File Migrations: Ensuring smooth transfer of memory and file systems.
3. Dynamic Deployment: Efficiently allocating and deploying virtual clusters in real-
time.
Challenges with Traditional VM Initialization:
• Administrators must manually configure VMs, causing potential overloading or
underutilization in networks.
Amazon EC2 Example:
• Provides elastic cloud computing power.

• Allows customers to create and manage virtual machines dynamically.

Bridging Mode:
• Supported by platforms like XenServer and VMware ESX Server.
• Enables VMs to appear as individual hosts on a network.
• Facilitates seamless communication through virtual network interface cards and automated configuration.

Physical versus Virtual Clusters

• Definition of Virtual Clusters:


o Built using Virtual Machines (VMs) across distributed servers in one or more physical clusters.
o VMs are interconnected through a virtual network spanning multiple physical networks.
o Virtual clusters have dynamic and distinct boundaries.
• Key Properties:
1. Nodes can be physical or virtual machines.
2. Multiple VMs, running different operating systems, can coexist on the same
physical server.
3. VMs improve server utilization and flexibility by consolidating multiple
functionalities on one server.
4. VMs can be replicated across multiple servers for fault tolerance, distributed
parallelism, and disaster recovery.
5. Virtual clusters can dynamically scale in size (nodes can be added or
removed).
6. Physical node failure may impact VMs hosted there, but VM failure does not
affect the host system.
• Virtual Cluster Management:
o Deployment, monitoring, and managing VMs across large-scale physical
nodes are critical for building high-performance environments.

o Resource scheduling, load balancing, server consolidation, and fault tolerance
are essential techniques.
• Efficiency in VM Image Storage:
o Efficient storage of large numbers of VM images is crucial.
o Preinstalled software packages (templates) allow users to build custom
software stacks for specific needs.
• Host and Guest Systems:
o Physical machines are host systems, and VMs are guest systems.
o Hosts and guests may run different operating systems.
• Dynamic Boundaries:
o Virtual cluster boundaries can change as VMs are added, removed, or
migrated.

Fast Deployment and Effective Scheduling
• Fast Deployment Capabilities:
1. Construct and distribute software stacks (OS, libraries, applications) rapidly to physical nodes in clusters.
2. Quickly switch runtime environments between users' virtual clusters, shutting down or suspending unused clusters to save resources.
• Green Computing Challenges:
o Traditional methods focus only on single workstations, not entire clusters.
o Existing techniques are limited to homogeneous workstations and specific applications.
o Live migration of VMs enables workload transfers but can introduce
significant overhead, negatively impacting cluster utilization, throughput, and
quality of service (QoS).
• Design Challenges for Green Computing:
o Develop migration strategies to implement energy-efficient solutions without
degrading cluster performance.
• Load Balancing in Virtual Clusters:
o Achieved using load indexes and user login frequency.
o Automatic scale-up and scale-down mechanisms increase resource utilization
and reduce system response time.
o Mapping VMs to appropriate physical nodes enhances performance.
• Dynamic Load Adjustment:
o Live migration is used to redistribute workloads when cluster nodes are
imbalanced.

High-Performance Virtual Storage

Customization of VMs:
• Template VMs can be distributed to physical hosts for customization.
• Existing software packages reduce customization time and ease switching of virtual
environments.
• Efficient disk space management is crucial, achieved through reducing duplicate blocks using hash values (a sketch follows the user-profile list below).
User Profiles:
• Profiles store data block identification for corresponding VMs.
• New blocks created during data modifications are tracked in user profiles.
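A minimal Python sketch of hash-based duplicate-block reduction, with a per-VM profile recording which blocks make up each image. The block size and data structures are invented for illustration.

import hashlib

BLOCK_SIZE = 4096
block_store = {}                  # content hash -> block data (each unique block stored once)
user_profiles = {}                # VM name -> ordered list of block hashes

def store_image(vm_name, image_bytes):
    hashes = []
    for i in range(0, len(image_bytes), BLOCK_SIZE):
        block = image_bytes[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)    # duplicate blocks are not stored again
        hashes.append(digest)
    user_profiles[vm_name] = hashes               # profile identifies the VM's blocks

# Two VM images that share most of their content (e.g., the same template OS).
template = b"A" * BLOCK_SIZE * 3
store_image("vm1", template + b"B" * BLOCK_SIZE)  # vm1 adds one modified block
store_image("vm2", template + b"C" * BLOCK_SIZE)  # vm2 adds a different block

print(len(block_store))                            # 3 unique blocks stored, not 8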
Steps for Deploying VMs:
1. Prepare the Disk Image: Use templates for easy creation.
2. Configure the VMs: Assign a name, disk image, network settings, CPU, and memory.

3. Choose Destination Nodes: Select appropriate physical hosts.
4. Execute Deployment Commands: Run the commands on every host.
Templates for Deployment:
• Templates include preinstalled operating systems and/or application software.
• Copy-on-Write (COW) format reduces disk space usage and speeds up deployment.
Efficiency in Configuration:
• Use preedited profiles for VMs with the same configurations to simplify the process.
• Automatically assign values for specific items like UUID, VM name, and IP address.
Host Selection Strategy:
• Match VMs to appropriate physical hosts based on requirements.
• Balance workloads across the entire network.

Live VM Migration Steps and Performance Effects


• Cluster Built with Mixed Nodes:


o Physical nodes can failover to VMs on other hosts, ensuring flexibility in
failover mechanisms.
o VM failures won’t affect the host system, but host node failures impact all
VMs on it.
o VM live migration mitigates issues by transferring VM state files to new hosts.
• Virtual Cluster Management Methods:
1. Guest-Based Manager: Resides on guest systems (e.g., openMosix, Sun’s
cluster Oasis).
2. Host-Based Manager: Supervises guest systems and restarts failed VMs (e.g.,
VMware HA system).
3. Independent Cluster Manager: Handles both host and guest systems but
increases complexity.
4. Integrated Manager: Distinguishes between virtualized and physical resources.
Applications of Virtual Clusters:
o Found in computational grids, cloud platforms, and HPC systems.

o Useful for dynamic resource provisioning upon demand or after node failures.
Live VM Migration Steps:
1. Start Migration: Identify VM and destination host; migration may be triggered
by load balancing.
2. Transfer Memory: Continuously transfer memory, including iterative pre-copying, while minimizing disruptions (a sketch of the precopy loop follows this list).
3. Suspend VM: Pause the VM to transfer final memory and non-memory data
(e.g., CPU and network states); downtime is minimized.
4. Activate on New Host: Recover states and redirect network connections at the
new host; remove VM from the source.
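A minimal Python sketch of the iterative precopy loop: pages are copied in rounds while the VM keeps running, each round resends only the pages dirtied since the last round, and the VM is suspended only for the small final set. Dirty-page tracking and page contents are simulated; a real implementation reads a hardware- or hypervisor-maintained dirty bitmap.

import random

def live_migrate(memory, rounds=5, stop_threshold=8):
    dirty = set(memory)                        # round 1: every page counts as "dirty"
    sent_pages = 0
    for _ in range(rounds):
        if len(dirty) <= stop_threshold:       # small enough to stop-and-copy
            break
        sent_pages += len(dirty)               # copy the current dirty set over the network
        # While that copy happened, the still-running VM dirtied some pages again.
        dirty = set(random.sample(sorted(memory), k=max(1, len(dirty) // 3)))
    # Suspend the VM, send the final dirty pages plus CPU/device state (downtime).
    sent_pages += len(dirty)
    return sent_pages, len(dirty)

memory = range(1000)                           # 1,000 guest pages
total, final = live_migrate(memory)
print(f"pages transferred: {total}, pages copied during downtime: {final}")

If pages are dirtied faster than they can be sent, the dirty set never shrinks; this is the convergence problem that forces implementations to cap the number of precopy iterations.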


Trade-Offs in Live Migration:


o Ensure minimal downtime, low network bandwidth usage, and reasonable
migration time.
o Avoid resource contention with active services (e.g., CPU, bandwidth).
VM States:
1. Inactive: Not enabled.
2. Active: Performing tasks.
3. Paused: Instantiated but waiting.
4. Suspended: Resources saved back to disk.
Performance Example:
o Live migration resulted in minimal downtime (165 ms) and small throughput
impact, critical for dynamic reconfiguration and disaster recovery.
Advantages of VM Technology:
o Simplifies management of physical resources.

o Enables clustering of inexpensive computers for scalable and reliable
computing.

Migration of Memory, Files, and Network Resources


High Initial Costs:
• Clusters require significant investments in space, power conditioning, and cooling
equipment.
Shared Clusters:
• Leasing or sharing clusters helps reduce costs and offers better resource utilization
through economies of scale.
• Multiplexing enhances resource efficiency by accommodating varying demands.
Early Configuration Systems:

• Focused on scalable and expressive mechanisms to define clusters for specific services.
• Involved physical partitioning of cluster nodes for different service types.
Considerations for System Migration:
• Ensuring seamless transition from one physical node to another.
• Addressing potential performance impacts and resource allocation during migration.

Memory Migration

Memory Migration:
• Involves moving a VM's memory instance from one physical host to another.
• Memory size typically ranges from hundreds of megabytes to a few gigabytes.
• The chosen technique depends on the application/workload supported by the guest OS.
Internet Suspend-Resume (ISR) Technique:
• Exploits temporal locality, as memory states of suspended and resumed VMs are often
similar, differing only in recent changes.
• Represents files as a tree of small subfiles to track modifications efficiently.
• Ensures only modified files are transmitted using caching, minimizing unnecessary
data transfer.
Limitations of ISR:
• Designed for scenarios where live VM migration is unnecessary.
• Leads to higher downtime compared to other advanced live migration techniques.

File System Migration


File System Consistency:
1. Each VM needs a location-independent file system view across all hosts.
2. One solution is providing each VM with its own virtual disk mapped to the file
system, but transferring entire disks over a network is inefficient due to high capacity.
Global File System Approach:

• A global file system across machines eliminates the need to copy files during
migration since all files are network-accessible.
Distributed File System in ISR:
• Used as a transport mechanism for suspended VM states.
• Local file systems are accessed by the VMM, which explicitly moves VM files for
suspend and resume operations.
• This avoids the complexity of implementing multiple file system calls but requires
local storage for VM virtual disks.
Smart Copying:
• Leverages spatial locality (frequent movement between predictable locations like
home and office).
• Transmits only changes between file systems at suspend and resume locations,
reducing data transfer.

• In cases without locality, much of the state can be synthesized at the resuming site.
Proactive State Transfer:
• Effective when the resuming site can be confidently predicted.
• Focuses on reducing data transfer for operating systems and applications, which constitute the majority of storage space.

Network Migration

Network Connectivity During Migration:


• Migrating VMs must maintain open network connections without relying on forwarding or redirection mechanisms.
• Each VM is assigned a virtual IP and virtual MAC address distinct from the host machine.
• The VMM maps these virtual addresses to the corresponding VM and ensures the protocol states travel with the VM.


Handling IP Reconfiguration:
• On a single switched LAN, an unsolicited ARP reply from the migrating host
advertises the VM's new location.
• Alternatively, the VM keeps its original Ethernet MAC address, and the network
switch detects the move.
Live Migration Features:
• Moves a VM from one physical node to another while keeping the OS and
applications running.
• Enables online system maintenance, load balancing, reconfiguration, and fault
tolerance.
• Used in enterprise environments for efficient resource management (e.g., server
consolidation, performance isolation).
Precopy Mechanism:
• Memory pages are transferred iteratively during the precopy phase.
• Only dirty pages (modified data) are copied in the final iteration, minimizing service
downtime.

• Performance Limitation: Migration daemon uses network bandwidth, causing
performance degradation.
Adaptive Rate Limiting:
• Reduces network impact but prolongs the total migration time significantly.
• Limits the number of precopy iterations due to convergence issues.
CR/TR-Motion (Checkpointing/Recovery and Trace/Replay):
• Transfers execution traces instead of dirty pages, reducing migration time and
downtime.
• Effective if the log replay rate exceeds the log growth rate.
Postcopy Migration:
• Transfers memory pages only once, reducing total migration time.
• Causes higher downtime due to latency in fetching pages before resuming VM
operations.

Memory Compression:
• Uses CPU resources to compress memory pages, significantly reducing transferred data (a sketch follows this list).
• Decompression is simple, fast, and efficient with minimal overhead.
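A minimal Python sketch of compressing dirty pages before transfer; zlib stands in for the workload-specific algorithms the text mentions, and page contents are simulated.

import zlib

PAGE = 4096

def compress_pages(pages):
    """Compress each dirty page; send the compressed form only if it is smaller."""
    out = []
    for data in pages:
        packed = zlib.compress(data, 1)            # cheap/fast compression level
        if len(packed) < len(data):
            out.append((True, packed))             # flag tells the receiver to decompress
        else:
            out.append((False, data))              # incompressible page sent raw
    return out

def decompress_pages(wire_pages):
    return [zlib.decompress(d) if packed else d for packed, d in wire_pages]

dirty_pages = [b"\x00" * PAGE, bytes(range(256)) * 16]   # zero page vs. patterned page
wire = compress_pages(dirty_pages)
sent = sum(len(d) for _, d in wire)
print(f"raw: {2 * PAGE} bytes, sent: {sent} bytes")
assert decompress_pages(wire) == dirty_pages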

Live Migration of VM Using Xen

• Xen Hypervisor (VMM):
o Allows multiple commodity operating systems to safely share x86 hardware.
o Uses Domain 0 (Dom0) for managing tasks like creating, terminating, or migrating VMs.
o Transfers states between VMs using a send/receive model.
RDMA for Data Communication:
o Reduces reliance on CPU, caches, or context switches.
o Minimizes the impact on guest operating systems and hosted applications during migration.
Trade-Offs in Compression:
o A single compression algorithm for all memory data is ineffective.
o Different algorithms are needed to handle varying regularities in memory data.
Live Migration System:
o Migration daemons in Dom0 perform the migration tasks.
o Shadow Page Tables in the VMM layer trace changes to memory pages,
flagging them in a dirty bitmap.
o The dirty bitmap is sent to the migration daemon at the start of each precopy
round and then reset.
• Precopy Phase:
o Memory pages flagged in the bitmap are extracted and compressed for
transfer.
o At the destination, the data is decompressed, ensuring continuity of the VM
state.
This approach aims to balance migration efficiency and minimal disruption to the
applications running on VMs.

Dynamic Deployment of Virtual Clusters
1. Cluster-on-Demand (COD) at Duke University:
o Objective: Dynamic resource allocation for virtual clusters.
o Results: Enabled resource sharing and dynamic restructuring of virtual clusters using Sun GridEngine.
2. Cellular Disco at Stanford University:
o Objective: Deploy virtual clusters on shared-memory multiprocessor systems.
o Results: VMs were deployed on multiple processors under the Cellular Disco VMM.
3. VIOLIN at Purdue University:
o Objective: Enhance resource utilization through dynamic adaptation.
o Results: Achieved significant adaptation gains with minimal overhead.
4. GRAAL Project at INRIA, France:
o Objective: Test parallel algorithm performance in Xen-enabled virtual clusters.
o Results: Improved execution times by 75% with 30% resource slack.
Example 3.9: COD Project at Duke University:
• Purpose:
o Manages dynamic server allocation for multiple virtual clusters.
o Supports dynamic reconfiguration based on load changes.
• Key Features:
o Resource reservation, adaptive provisioning, and idle resource scavenging.
o Resource policies and templates managed by a configuration database.
• Results:
o Demonstrated dynamic resizing of virtual clusters over time to match user
workloads.
o Enhanced resource utilization in real-life applications.

Example 3.10: VIOLIN Project at Purdue University:
• Purpose:
o Utilizes live VM migration to reconfigure virtual clusters for better resource
utilization.
o Supports isolated virtual environments for parallel applications.
• Key Features:
o Enables relocation and scaling of resources in a transparent manner.
o Achieves resource adaptation with minimal overhead (20 seconds out of 1,200
seconds for large tasks).
• Results:
o Demonstrated enhanced resource utilization with less than 1% increase in
execution time.

3.5 VIRTUALIZATION FOR DATA-CENTER AUTOMATION

• Major IT companies (Google, Amazon, Microsoft, etc.) are heavily investing in data centers.
• Automation enables dynamic allocation of resources (hardware, software, databases) to millions of users while ensuring QoS and cost-effectiveness.
Virtualization Growth (2006–2011):
• Virtualization market share grew significantly (from $1.04 billion in 2006 to $3.2 billion in 2011).
• Key drivers: mobility, reduced downtime, high availability, backup services, workload balancing, and expanding client bases.

Server Consolidation in Data Centers:

1. Workload Types:
o Chatty Workloads: Burst and silent periods (e.g., web video services).
o Noninteractive Workloads: Continuous processing without user input (e.g., high-performance computing).
2. Resource Allocation Challenges:
o Static resource allocation for peak demand leads to underutilized servers and
wasted resources.
o Server consolidation reduces physical server counts to optimize hardware use.
Advantages of Server Virtualization:
• Enhanced Utilization: Consolidates underutilized servers for higher efficiency.
• Agile Provisioning: OS and application images can be cloned and reused easily.
• Lower Costs: Reduces hardware purchases, maintenance, space, and energy costs.
• Improved Continuity: Guest OS crashes don’t affect host OS or other VMs; easier
VM transfers between servers.
Automation in Data Centers:
• Requires resource scheduling, power management, and autonomic management
techniques.
• Scheduling should span multiple levels (VM, server, and data center), though current
techniques handle only one or two levels.

Advanced Resource Management:
• Dynamic CPU Allocation: Based on VM utilization and QoS metrics.
• Two-Level Management:
o Local controller at VM level.
o Global controller at the server level.
• CMP (Chip Multiprocessor) Optimization:
o Design virtual hierarchies for inter-VM sharing, reassignment, and memory
access optimization.
o VM-aware power budgeting to balance performance and energy saving.
This framework addresses the growing complexity and demands of modern data centers.

Virtual Storage Management

1. Definition:
o Traditional storage virtualization: Aggregation and repartitioning of disks at coarse time scales for physical machines.
o Modern virtual storage: Managed by Virtual Machine Monitors (VMMs) and guest OSes, involving VM images and application data.
2. Encapsulation and Isolation:
o VMs encapsulate operating systems and isolate them from others, allowing multiple VMs to run on a single machine.
o Storage systems lag in achieving seamless support, becoming a bottleneck in VM deployment.
3. Storage Challenges:
o Guest OS operates as if using a real hard disk but cannot access it directly.
o Multiple guest OSes contend for hard disk resources, increasing complexity for VMM storage management.
o Primitive operations like volume remapping and checkpointing are cumbersome and sometimes unavailable.
4. Key Solutions:
o Parallax: A distributed storage system designed for virtualization
environments.
o Content Addressable Storage (CAS): Reduces the size of VM images and
supports large-scale VM systems.
Parallax Storage System:
1. Architecture:
o Uses a federation of storage VMs on the same physical hosts as the client
VMs.
o Acts as a block virtualization layer between VMs and physical storage
devices.
o Provides virtual disks (VDIs) accessible transparently across physical hosts.
2. Features:
o Supports paravirtualization and full virtualization techniques.
o Enables storage functionality (e.g., snapshots) at the software level, reducing
dependency on high-end hardware.

o Allows live upgrades of block device drivers in active clusters.
3. Implementation:
o VDIs are handled via Xen’s block tap driver and implemented using a tapdisk
library.
o Storage appliance VMs manage block and network access, connecting to
physical hardware.
4. Scalability:
o Suitable for cluster-based environments with a shared administrative domain
for storage management.
o Optimized for dynamic, large-scale data centers.
Parallax demonstrates how innovative storage virtualization solutions can address key
bottlenecks in system virtualization, improving scalability and efficiency.


Cloud OS for Virtualized Data Centers


Four VI Managers/OSes:
1. Nimbus:
o Platform: Linux.
o Features: VM creation, virtual cluster management, and virtual networking.
o Hypervisors: Xen and KVM.
o Public Cloud Interface: EC2.
o License: Apache v2.
2. Eucalyptus:
o Platform: Linux.

o Features: Virtual networking and VM management for private clouds.
o Hypervisors: Xen and KVM.
o Public Cloud Interface: EC2.
o License: BSD.
3. OpenNebula:
o Platform: Linux.
o Features: Management of VMs, hosts, virtual networks, scheduling tools, and
dynamic provisioning.
o Hypervisors: Xen and KVM.
o Public Cloud Interface: EC2.
o License: Apache v2.
4. vSphere 4:
o Platforms: Linux and Windows.

o Features: Virtual storage, networking, data protection, dynamic provisioning, and scalability.
o Hypervisors: VMware ESX and ESXi.
o Public Cloud Interface: VMware vCloud.
o License: Proprietary.
Example 3.12: Eucalyptus:
• Open-source system designed for IaaS (Infrastructure as a Service) clouds.
• Supports virtual networking, VM management, and interaction with private and public clouds (e.g., AWS EC2 and S3 compatibility).
• Key Components:
1. Instance Manager: Handles VM execution and termination.
2. Group Manager: Schedules and manages virtual instance networks.
3. Cloud Manager: Acts as the main user interface for managing resources and scheduling.
Example 3.13: VMware vSphere 4:
• Commercial cloud operating system designed for data center virtualization and private
clouds.
• Functional Suites:
o Infrastructure Services:
▪ vCompute: ESX, ESXi, and DRS for virtualization.
▪ vStorage: VMFS and thin provisioning.
▪ vNetwork: Distributed switching and networking functions.
o Application Services:
▪ Availability: VMotion, Storage VMotion, High Availability (HA),
Fault Tolerance, Data Recovery.
▪ Security: vShield Zones and VMsafe.
▪ Scalability: DRS (Distributed Resource Scheduler) and Hot Add.
• Interfaces through VMware vCenter to integrate with existing or new applications.
These platforms play critical roles in virtualizing data centers and supporting cloud
operations.


Trust Management in Virtualized Data Centers

1. Role:
o A VMM creates a software layer between operating systems and hardware.
o It enables one or more Virtual Machines (VMs) on a single physical platform.
o Provides secure isolation, with hardware resources accessed only through VMM control.
2. Encapsulation:
o Encapsulates VM states, allowing them to be copied, shared, or removed like
files.

o Raises security concerns if encapsulated states are exposed or reused (e.g.,
random number reuse leading to cryptographic vulnerabilities).
3. Security Risks:
o Compromising the VMM or management VM can endanger the entire system.
o Protocols relying on "freshness" (e.g., session keys, TCP initial sequence
numbers) face risks of hijacking or plaintext exposure.
VM-Based Intrusion Detection Systems (IDS):
1. Types:
o Host-Based IDS (HIDS): Monitors specific systems but can be compromised
if the host system is attacked.
o Network-Based IDS (NIDS): Monitors network traffic but struggles with
detecting fake actions.
2. Virtualization-Based IDS:

o Isolates guest VMs on shared hardware, limiting the impact of attacks on other VMs.
o Monitors and audits access to hardware/software, merging advantages of HIDS and NIDS.
o Implemented as either:
▪ A high-privileged VM running on the VMM.
▪ Integrated directly into the VMM for higher privileges.
3. Livewire Architecture:
o Contains a policy engine and policy module for monitoring events across
guest VMs.
o Uses tracing mechanisms (e.g., PTrace) for secure policies.
Additional Security Tools:
1. Honeypots and Honeynets:
o Simulate systems to attract and monitor attackers.
o Protect real systems while analyzing attack actions.
o Can be implemented as physical or virtual systems.
2. EMC Trusted Zones:
o Built to insulate virtual clusters for multiple tenants.
o Countermeasures include anti-malware, encryption, intrusion detection, and
access segregation.
o Ensures isolation and compliance for tenant-specific virtual infrastructures.
This framework highlights the challenges and solutions for security and intrusion detection in
virtual environments.
