CC Module 2
Cost Efficiency: By running multiple virtual machines on a single physical server, cloud
providers can reduce hardware costs and improve energy efficiency, passing those
savings on to customers.
Support for Diverse Workloads: Different types of virtual machines can run various
operating systems and applications on the same hardware, accommodating a wide range
of workloads and technologies.
Live Migration and Server Consolidation
Live Migration:
Live migration is the process of moving a running virtual machine (VM) from one physical
server to another without interrupting its operation. Imagine you're watching a movie on your
laptop, and you want to move to another room. You can take your laptop with you and keep
watching without pausing the movie. In the same way, live migration lets a VM switch servers
seamlessly, which helps with load balancing, maintenance, or reducing downtime.
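To make the idea concrete, here is a minimal, purely illustrative sketch of the pre-copy approach most hypervisors use for live migration: memory pages are copied in rounds while the VM keeps running, and only the last few dirty pages are transferred during a brief pause. The TinyVM class and page handling below are hypothetical stand-ins, not any vendor's implementation.

```python
# Conceptual pre-copy live migration: copy memory in rounds while the VM runs,
# then pause briefly to transfer the last dirty pages (and, in reality, CPU/device state).
import random

class TinyVM:
    def __init__(self, pages):
        self.memory = {i: f"page-{i}" for i in range(pages)}
        self.running = True

    def run_one_round(self):
        # While running, the guest dirties a random subset of its pages.
        return {random.randrange(len(self.memory)) for _ in range(len(self.memory) // 4)}

def live_migrate(vm, max_rounds=5, stop_threshold=4):
    target_memory = {}
    dirty = set(vm.memory)                   # round 0: every page is "dirty"
    for _ in range(max_rounds):
        for page in dirty:                   # copy while the guest keeps executing
            target_memory[page] = vm.memory[page]
        dirty = vm.run_one_round()           # pages the guest touched during this round
        if len(dirty) <= stop_threshold:     # small enough to finish during a short pause
            break
    vm.running = False                       # brief "stop-and-copy" phase (milliseconds in practice)
    for page in dirty:
        target_memory[page] = vm.memory[page]
    return target_memory                     # the destination host resumes the VM from this state

migrated = live_migrate(TinyVM(pages=32))
print(f"migrated {len(migrated)} pages with only a short final pause")
```

The VM is only paused for the final round, which is why users rarely notice the move.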
Server Consolidation:
Server consolidation is when multiple physical servers are combined into fewer servers by using
virtualization. Think of it like moving several small fish tanks into one larger tank. Instead of
having a lot of separate servers (each with its own resources), you run multiple virtual machines
on a single physical server. This saves space, reduces costs, and makes better use of hardware
resources.
Live migration keeps things running smoothly while moving VMs, and server consolidation helps reduce
the number of physical servers needed by combining workloads onto fewer machines.
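As a rough illustration of the consolidation idea, the sketch below packs VMs onto as few hosts as possible using a simple first-fit-decreasing heuristic. Real placement engines weigh CPU, memory, network, and policy constraints, but the principle is the same; the VM names and sizes here are made up.

```python
# First-fit-decreasing consolidation sketch: place VMs (memory demand in GB) onto as few hosts as possible.
def consolidate(vm_demands, host_capacity_gb):
    hosts = []                                   # each host is a list of (vm_name, demand)
    for name, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(d for _, d in host) + demand <= host_capacity_gb:
                host.append((name, demand))      # fits on an existing host
                break
        else:
            hosts.append([(name, demand)])       # open a new host only when nothing fits
    return hosts

demands = {"web1": 8, "web2": 8, "db": 16, "cache": 4, "batch": 12}
for i, host in enumerate(consolidate(demands, host_capacity_gb=32), start=1):
    print(f"host{i}: {host}")
```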
Cons of Virtualization:
Performance Degradation: Running multiple virtual machines on a single physical server can lead
to performance issues. If too many VMs are active, they may compete for limited resources (like
CPU and memory), slowing down overall performance.
Inefficiency: Sometimes, virtual machines may not use resources as effectively as physical
machines. This can happen if VMs are over-provisioned (allocated more resources than they
need) or under-utilized, leading to wasted resources.
Degraded User Experience: If VMs experience performance issues, users may notice slower
applications or longer response times. This can impact productivity and satisfaction, especially in
environments where quick access is crucial.
Security Holes: Virtualization can introduce new security vulnerabilities. If one VM is
compromised, it might expose others on the same host. Additionally, misconfigurations can
create security gaps that attackers can exploit.
New Threats: Virtualization can lead to different types of attacks, such as hypervisor attacks,
where the software managing VMs is targeted. These attacks can compromise multiple VMs at
once, posing a significant risk.
Paravirtualization
Modified OS: In paravirtualization, the guest operating systems (OS) are modified to work more
efficiently with the hypervisor (the software that manages virtual machines). This means they can
communicate directly with the hypervisor rather than emulating hardware.
Efficiency: Because the guest OS knows it’s running in a virtual environment, it can avoid some
overhead that comes with fully emulated hardware. This leads to better performance and faster
operation.
Hypervisor: The Xen hypervisor sits between the physical hardware and the guest OS. It manages
how resources are shared among the different VMs.
Use Cases: Paravirtualization is often used in environments where performance is critical, such
as data centers, because it allows for efficient resource use and lower latency.
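The difference between trapping on emulated hardware and cooperating with the hypervisor can be pictured with the toy sketch below. Everything in it is hypothetical (real guests and hypervisors do this in machine code); it only contrasts the two interfaces: a fully virtualized guest's privileged instruction must be trapped and emulated, while a paravirtualized guest calls the hypervisor directly.

```python
# Toy contrast between full virtualization (trap-and-emulate) and paravirtualization (hypercall).
class Hypervisor:
    def __init__(self):
        self.page_table = {}
        self.traps_handled = 0
        self.hypercalls_handled = 0

    def handle_trap(self, entry):             # full virtualization path: fault, decode, emulate
        self.traps_handled += 1
        self.page_table.update(entry)

    def hypercall(self, name, entry):         # paravirtualization path: explicit, cooperative call
        self.hypercalls_handled += 1
        if name == "mmu_update":
            self.page_table.update(entry)

hv = Hypervisor()
hv.handle_trap({0x1000: 0x2000})              # unmodified guest: privileged write trapped and emulated
hv.hypercall("mmu_update", {0x3000: 0x4000})  # modified guest: asks the hypervisor directly
print(hv.traps_handled, hv.hypercalls_handled, hv.page_table)
```

The hypercall path avoids the fault-and-decode work, which is where paravirtualization gets its performance advantage.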
Here’s a simple explanation of Xen architecture and guest OS management, broken down into key points:
Xen Architecture
Hypervisor: At the core of Xen is the hypervisor, also called the Xen hypervisor. It manages the
hardware resources and controls how multiple operating systems (OS) interact with the hardware.
Domains: Xen uses the concept of "domains":
Domain 0 (Dom0): This is the privileged domain that has direct access to the hardware. It manages other
domains and handles tasks like device management.
Domain U (DomU): These are the unprivileged domains that run guest operating systems. They rely on
Dom0 for hardware access.
Resource Management: The hypervisor allocates CPU, memory, and other resources to each domain,
ensuring they operate independently and efficiently.
Isolation: Each domain is isolated from others, meaning if one domain crashes or has issues, it doesn’t
affect the others.
Guest OS Management
Modified OS: Guest operating systems can run in Xen. In paravirtualization mode the guest kernel (for
example, a modified Linux kernel) is adapted to work directly with the hypervisor for better performance;
unmodified OSs such as Windows instead run in hardware-assisted (HVM) mode.
Installation: Guest OSs are installed in their own domains (DomU). Each one thinks it has its own
dedicated hardware, even though they share the physical resources.
Communication: The guest OS communicates with the hypervisor for resource requests and
management, allowing for efficient operation and resource allocation.
Snapshots and Cloning: Xen supports features like snapshots (saving the state of a VM) and cloning
(creating copies of VMs) for easy management and backup.
Xen Architecture: Comprises a hypervisor, privileged domain (Dom0), and unprivileged domains
(DomU) for efficient resource management and isolation.
Guest OS Management: Involves running modified OSs in isolated environments, allowing for efficient
communication and resource use.
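In practice, guest domains are usually managed through a toolstack rather than by hand. As a small example, the libvirt Python bindings (assuming the libvirt-python package and a Xen host with the libvirt Xen driver are available; the connection URI is the standard local one) can list domains and their resources roughly like this:

```python
import libvirt  # pip install libvirt-python; requires libvirt with the Xen driver on the host

# Connect to the local Xen hypervisor through libvirt (the management stack runs in Dom0).
conn = libvirt.open("xen:///system")
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name():15s} vCPUs={vcpus} memory={mem_kib // 1024} MiB state={state}")
finally:
    conn.close()
```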
VMware full virtualization is a technique that allows multiple virtual machines (VMs) to run on a single
physical server by fully emulating the underlying hardware. Here’s a concise definition:
Definition: VMware full virtualization is a virtualization method in which the hypervisor (VMware)
provides complete hardware emulation, enabling guest operating systems to run unmodified. Each VM
operates as if it has its own dedicated physical hardware, allowing for isolation, flexibility, and resource
management without requiring changes to the guest OS.
Key Features:
Complete Emulation: The hypervisor replicates all hardware components for each VM.
Unmodified OS: Guest operating systems do not need any modifications to run.
Isolation: Each VM operates independently, ensuring stability and security.
Resource Management: The hypervisor dynamically allocates resources like CPU and memory
among the VMs.
Definition: Full virtualization allows multiple operating systems (OS) to run on a single physical server
as if they are running on separate machines, without needing to change the OS.
Full Virtualization Reference Model
Physical Layer: At the bottom, you have the physical hardware, which includes the CPU,
memory, storage and network components. Example: A physical server.
Hypervisor Layer: Above the physical hardware is the hypervisor (like VMware). The
hypervisor acts as an intermediary between the hardware and the virtual machines (VMs).
Role: It manages the hardware resources and provides full hardware emulation to each VM.
Virtual Machines Layer: On top of the hypervisor, you have multiple virtual machines. Each
VM can run its own operating system independently.
Example: VM1 running Windows, VM2 running Linux, etc.
Guest Operating Systems: Each virtual machine contains a guest operating system, which
interacts with the virtualized hardware provided by the hypervisor.
How It Works
Isolation: Each VM is isolated from others, meaning they can operate independently. If one VM
crashes, the others continue to run smoothly.
Resource Allocation: The hypervisor allocates physical resources (like CPU and memory) to
each VM based on their needs, ensuring efficient use of hardware.
No Modifications Needed: Since full virtualization provides complete hardware emulation, guest
operating systems can be installed and run without any modifications. They behave as if they are
on their own physical machines.
Full virtualization uses a hypervisor to create multiple virtual machines on a single physical server,
allowing each VM to run an unmodified OS. This setup provides isolation, efficient resource
management, and flexibility, making it a powerful solution for various applications.
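One way to picture "complete hardware emulation" is a virtual device: the guest reads and writes what it believes are hardware registers, and the hypervisor intercepts those accesses and implements the behaviour in software. The toy emulated serial port below is only a sketch of that idea (the port numbers match a standard PC COM1 UART; the class itself is hypothetical, not a VMware component).

```python
# Toy emulated serial port: the guest thinks it is poking hardware registers,
# but every access is intercepted and handled in software by the hypervisor.
class EmulatedSerialPort:
    DATA_REGISTER = 0x3F8        # port the guest expects for a standard COM1 UART
    STATUS_REGISTER = 0x3FD      # line status register

    def __init__(self):
        self.output = []

    def io_write(self, port, value):
        if port == self.DATA_REGISTER:
            self.output.append(chr(value))   # "transmit" the byte by buffering it

    def io_read(self, port):
        if port == self.STATUS_REGISTER:
            return 0x20                      # report "transmit buffer empty" so the guest keeps writing
        return 0

# The hypervisor routes the guest's trapped I/O instructions to the emulated device.
device = EmulatedSerialPort()
for ch in "hello from an unmodified guest\n":
    device.io_write(0x3F8, ord(ch))
print("".join(device.output), end="")
```

Because the emulation is faithful, the guest OS needs no changes; it simply drives what looks like real hardware.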
Binary translation is a technique used in virtualization that allows software designed for one type of
hardware to run on a different type of hardware. Here’s a simple breakdown:
Code Conversion: Binary translation involves converting the machine code (binary code) of a program
written for one type of processor into equivalent code that can run on another type of processor.
Dynamic Translation: This process often happens in real-time, meaning the translation occurs while the
program is running. The hypervisor translates instructions on-the-fly, allowing for immediate execution.
Compatibility: It allows guest operating systems and applications that were not designed for the host
environment to run smoothly, for example by rewriting a guest's privileged x86 instructions so an
unmodified OS can run safely on the host CPU, or by translating code compiled for one CPU architecture
so it runs on another.
Performance: While binary translation can introduce some overhead (extra processing time for the
translation), it can be optimized to minimize the impact on performance.
Use in Virtualization: This technique is commonly used in hypervisors that support full virtualization,
helping to bridge the gap between different hardware architectures.
Binary translation is a method that converts the machine code of a program so it can run on different
hardware. This allows for greater flexibility in virtualization by enabling various operating systems and
applications to function on incompatible systems.
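A toy dynamic binary translator makes the idea concrete: it walks a block of "guest" instructions, rewrites the sensitive ones into safe equivalents, caches the result, and reuses the cached translation on later executions. This is only a sketch of the technique with made-up instruction names, not VMware's translator.

```python
# Toy dynamic binary translator with a translation cache.
SENSITIVE = {"CLI": "CALL hv_disable_interrupts", "HLT": "CALL hv_yield_cpu"}
translation_cache = {}

def translate_block(block_addr, guest_code):
    """Translate one basic block, replacing sensitive instructions with safe calls."""
    if block_addr in translation_cache:               # reuse earlier work (the common case)
        return translation_cache[block_addr]
    translated = [SENSITIVE.get(insn, insn) for insn in guest_code[block_addr]]
    translation_cache[block_addr] = translated
    return translated

guest_code = {0x1000: ["MOV AX, 1", "CLI", "ADD AX, 2", "HLT"]}
print(translate_block(0x1000, guest_code))   # first run: translated and cached
print(translate_block(0x1000, guest_code))   # second run: served from the cache
```

The cache is what keeps the translation overhead low: hot code is translated once and then runs at near-native speed.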
2. Virtualization Solutions
a. Desktop Virtualization
This involves hosting desktop environments on central servers and delivering them to users' devices, so
each user works in an isolated desktop that does not depend on their local hardware.
Architecture:
VMware Workstation Architecture
Client Device: The user’s device (laptop, tablet, etc.) connects to a remote server.
Virtual Desktop Infrastructure (VDI): A server hosts multiple virtual desktops.
Each desktop is an isolated environment that looks and feels like a traditional desktop.
Connection Broker: This manages user connections, directing them to their virtual desktop.
Storage: Centralized storage holds user data and applications, making it easy to back up and manage.
1. Host Machine
Physical Computer: This is your actual computer with hardware components like CPU, RAM, and
storage.
Host Operating System: The main operating system installed on your physical machine (e.g., Windows,
Linux).
Hypervisor: VMware Workstation is a Type 2 hypervisor, meaning it runs on top of the host operating
system.
User Interface: This is the graphical interface where you can create and manage virtual machines (VMs).
Virtual Hardware: Each VM simulates a complete computer, including its own CPU, memory, hard
drive, and network interface.
Guest Operating System: Each VM can run a different operating system (like Windows or Linux)
independently of the host OS.
Configuration Files: These files contain settings for each VM, such as how much RAM or CPU it uses.
Virtual Disk Files: These are like virtual hard drives for the VMs, storing the operating system and all
the data for that VM.
5. Networking
Virtual Networks: VMware Workstation allows you to set up different networking options (a toy sketch of
the NAT case follows this list):
NAT: VMs share the host’s IP address to access the internet.
Bridged: VMs connect directly to the physical network.
Host-Only: VMs can communicate with each other and the host, but not with the outside world.
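To see what the NAT option involves, here is a toy sketch of the address translation behind it: outgoing packets from VMs are rewritten to use the host's address and a unique port, and replies are mapped back using that port. The addresses and port numbers are illustrative placeholders, not VMware's virtual network stack.

```python
# Toy NAT table: VMs share the host's IP, distinguished by translated source ports.
HOST_IP = "192.168.1.10"
nat_table = {}          # host_port -> (vm_ip, vm_port)
next_port = 40000

def outbound(vm_ip, vm_port, dest):
    global next_port
    host_port = next_port
    next_port += 1
    nat_table[host_port] = (vm_ip, vm_port)
    return {"src": (HOST_IP, host_port), "dst": dest}   # packet as seen on the physical network

def inbound(host_port):
    return nat_table[host_port]                          # reply is forwarded back to the right VM

pkt = outbound("172.16.0.5", 51515, ("93.184.216.34", 443))
print(pkt, "->", inbound(pkt["src"][1]))
```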
6. Resource Management
CPU and Memory Allocation: You can specify how much CPU and RAM each VM uses based on your
needs.
Snapshots: You can take snapshots of a VM to save its state, allowing you to revert to it later if needed (a
sketch of the copy-on-write idea behind snapshots follows this list).
Clones: Create exact copies of VMs for testing or backup purposes.
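Snapshots are typically implemented with copy-on-write: once a snapshot is taken, changed disk blocks are written to a new layer while the original blocks are preserved, so reverting simply discards the new layer. The sketch below shows the mechanism in miniature; it is not VMware's on-disk format.

```python
# Simplified copy-on-write snapshot of a virtual disk (block number -> data).
class VirtualDisk:
    def __init__(self):
        self.base = {}           # blocks as they were at the last snapshot
        self.delta = {}          # blocks changed since the snapshot

    def write(self, block, data):
        self.delta[block] = data                             # new writes land in the delta layer

    def read(self, block):
        return self.delta.get(block, self.base.get(block))   # prefer the newest copy

    def snapshot(self):
        self.base.update(self.delta)                         # fold current state into the base
        self.delta = {}

    def revert(self):
        self.delta = {}                                       # dropping the delta restores the snapshot

disk = VirtualDisk()
disk.write(0, "original data")
disk.snapshot()
disk.write(0, "changes after the snapshot")
disk.revert()
print(disk.read(0))   # -> "original data"
```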
7. VMware Tools
A set of utilities that improve VM performance and provide better integration between the guest OS and
the host.
VMware Workstation lets you run multiple virtual computers (VMs) on a single physical machine. Each
VM operates like a real computer with its own OS and resources, while the host machine and VMware
software manage everything. This setup is useful for testing software, running different operating systems,
or creating isolated environments without needing extra hardware.
b. Server Virtualization
This involves running multiple virtual servers on a single physical server. Each virtual server operates
independently, allowing better resource utilization.
Architecture:
VMware GSX Architecture
Physical Server (Host): The hardware that runs the virtualization software.
Hypervisor: Software (like VMware or Hyper-V) that creates and manages virtual servers (VMs) on the
host.
Virtual Machines (VMs): Each VM has its own operating system and applications. They share the host's
resources (CPU, RAM, storage).
Management Tools: Software for monitoring and managing the VMs, ensuring they operate smoothly.
VMware GSX Server is a type of virtualization software that allows multiple operating systems to run on
a single physical server. It’s particularly useful for server consolidation and testing.
GSX Server Layer: The core software that handles virtualization. It is installed on top of a host operating
system (Windows or Linux) and manages virtual machines (VMs) from there.
Virtual Machines (VMs): These are the individual instances of operating systems running on the host.
Each VM operates as if it were a separate physical computer.
Virtual Hardware: Each VM has virtualized components, like CPU, memory, disk, and network
interfaces. These components are mapped to the host’s physical hardware.
Management Interface: A user interface (UI) or command-line tool to manage VMs. Allows users to
create, modify, and monitor VMs.
How It Works
Resource Allocation: The GSX Server allocates CPU, memory, and storage from the host to each VM as
needed.
Isolation: Each VM operates independently, meaning that problems in one VM don’t affect others.
Snapshot and Cloning: Users can take snapshots of VMs (saving their state) and clone them (creating
duplicates) for testing or backup.
Use Cases
Testing and Development: Developers can create and test applications in isolated environments.
Server Consolidation: Organizations can run multiple server applications on a single physical server,
saving hardware costs.
VMware GSX Server allows multiple operating systems to run on one physical machine by using a
virtualization layer that manages VMs and their resources efficiently, making it easier to use hardware
resources and run applications in isolated environments.
VMware ESXi is a hypervisor that allows you to run multiple virtual machines (VMs) on a single
physical server. It’s a key component in VMware's virtualization solutions.
Host Machine: The physical server that runs ESXi. Contains the hardware resources (CPU, memory,
storage) needed for VMs.
Hypervisor Layer:
ESXi is a Type 1 hypervisor, meaning it runs directly on the host hardware. It manages the VMs and
allocates resources from the host to them.
Virtual Machines (VMs): Individual operating systems that run on top of ESXi.
Each VM has its own virtualized hardware, including CPU, memory, disk, and network interfaces.
VMware Virtual Hardware: Each VM uses virtual hardware that mimics physical components.
This includes things like virtual CPUs (vCPUs), virtual memory, and virtual disks.
Management Interface: ESXi provides a web-based interface (the VMware Host Client, or the vSphere
Client through vCenter) for managing VMs.
Administrators can create, configure, and monitor VMs from this interface.
How It Works
Resource Management: ESXi allocates the physical server's resources to each VM based on its needs.
Isolation: Each VM is isolated from others, ensuring that issues in one VM do not affect the others.
Scalability: You can run many VMs on a single server, making it easy to scale up or down as needed.
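The host can also be scripted for management and automation. As an example, the pyVmomi library (assuming it is installed, and with the host name and credentials below as placeholders to be replaced) can connect to an ESXi host and list its VMs roughly like this:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect   # pip install pyvmomi
from pyVmomi import vim

# Connect to a standalone ESXi host (placeholder host name and credentials).
context = ssl._create_unverified_context()            # lab-only: skips certificate checks
si = SmartConnect(host="esxi.example.local", user="root", pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)          # e.g. "web01 poweredOn"
    view.Destroy()
finally:
    Disconnect(si)
```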
VMware ESXi is a powerful virtualization platform that runs directly on server hardware, allowing
multiple virtual machines to operate independently. It efficiently manages resources and provides a user-
friendly interface for managing those VMs.
The VMware Cloud Solution Stack is a framework that helps organizations manage and deploy
applications and services in a cloud environment. It consists of multiple layers, each providing specific
capabilities.
VMware Cloud Solution Stack
Infrastructure Layer:
Physical Resources: This includes the physical servers, storage, and networking equipment in data
centers.
Hypervisor: VMware ESXi or other hypervisors that create and manage virtual machines (VMs) on this
hardware.
Virtualization Layer:
Virtual Machines: VMs that run on the hypervisor, allowing multiple operating systems to share the
same physical resources.
Containers: Lightweight, portable units of software that can run applications without the overhead of full
VMs, often managed with tools like Kubernetes.
Management Layer:
vSphere: The primary management platform for virtualized environments, enabling VM deployment,
monitoring, and resource management.
vRealize Suite: A set of tools for managing cloud operations, automation, and performance monitoring.
Platform Services Layer:
Networking: VMware NSX provides virtual networking capabilities, allowing for flexible and secure
network configurations.
Storage: VMware vSAN aggregates storage resources across the physical infrastructure to provide
scalable and high-performance storage for VMs.
Application Layer:
Applications: The actual software and services that run on top of the virtualized infrastructure. This
could include business applications, databases, or web services.
Microservices: Modern applications built using a microservices architecture, which allows for greater
scalability and flexibility.
User Experience Layer:
User Interfaces: Dashboards and management consoles that allow IT administrators and users to interact
with the cloud environment.
APIs: Application Programming Interfaces that enable developers to automate processes and integrate
with other tools.
How It Works
Integration: Each layer of the stack integrates with the others, allowing seamless communication and
management.
Flexibility: Organizations can choose which components to use based on their specific needs, whether
on-premises, hybrid, or fully in the cloud.
Scalability: The stack is designed to easily scale, meaning organizations can add resources as their needs
grow.
Benefits
Cost Efficiency: Optimizes resource usage, reducing hardware costs and improving return on investment.
Simplified Management: Centralized tools make it easier to manage resources and workloads.
Improved Agility: Organizations can deploy applications faster and respond quickly to changing
demands.
The VMware Cloud Solution Stack provides a comprehensive framework for managing cloud
environments, from the physical infrastructure to the applications running on it. Each layer adds specific
capabilities, allowing for flexibility, scalability, and simplified management.
Microsoft Hyper-V is a virtualization technology that allows you to create and run multiple virtual
machines (VMs) on a single physical server. Each VM acts like a separate computer, with its own
operating system and applications.
What is Hyper-V Architecture?
Hyper-V is a Type 1 hypervisor that allows you to create and manage virtual machines (VMs) on a
physical server. Its architecture consists of various components that work together to provide
virtualization services.
Microsoft Hyper-V Architecture
Hypervisor:
Definition: The hypervisor is the core of Hyper-V. It sits directly on the hardware (the physical server)
and manages the VMs.
Function: It allocates resources (like CPU, memory, and storage) to each VM and ensures that they
operate independently and securely.
• Hypercalls interface: This is the entry point for all the partitions for the execution of sensitive
instructions. This is an implementation of the paravirtualization approach already discussed with Xen.
This interface is used by drivers in the partitioned operating system to contact the hypervisor using the
standard Windows calling convention. The parent partition also uses this interface to create child
partitions.
• Memory service routines (MSRs): These are the set of functionalities that control the memory and its
access from partitions. By leveraging hardware-assisted virtualization, the hypervisor uses the
Input/Output Memory Management Unit (I/O MMU or IOMMU) to fast-track access to devices from
partitions by translating virtual memory addresses.
• Advanced programmable interrupt controller (APIC): This component represents the interrupt
controller, which manages the signals coming from the underlying hardware when some event occurs
(timer expired, I/O ready, exceptions and traps). Each virtual processor is equipped with a synthetic
interrupt controller (SynIC), which constitutes an extension of the local APIC. The hypervisor is
responsible for dispatching, when appropriate, the physical interrupts to the synthetic interrupt controllers.
• Scheduler: This component schedules the virtual processors to run on available physical processors.
The scheduling is controlled by policies that are set by the parent partition (a simplified sketch of this idea
appears after this list).
• Address manager: This component is used to manage the virtual network addresses that are allocated
to each guest operating system.
• Partition manager: This component is in charge of performing partition creation, finalization,
destruction, enumeration, and configuration. Its services are available through the hypercalls interface
API previously discussed.
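As a rough picture of what the scheduler component does, the sketch below time-slices a set of virtual processors across a smaller number of physical processors in round-robin order. The vCPU and CPU names are made up, and Hyper-V's real scheduler applies the weight, reserve, and limit policies set from the parent partition rather than plain round-robin.

```python
from itertools import cycle

# Round-robin sketch: assign virtual processors (vCPUs) to physical CPUs, one time slice at a time.
def schedule(vcpus, physical_cpus, time_slices):
    runnable = cycle(vcpus)                      # rotate through vCPUs fairly
    schedule_log = []
    for t in range(time_slices):
        slice_assignment = {cpu: next(runnable) for cpu in physical_cpus}
        schedule_log.append((t, slice_assignment))
    return schedule_log

vcpus = ["Root-vcpu0", "ChildA-vcpu0", "ChildA-vcpu1", "ChildB-vcpu0"]
for t, assignment in schedule(vcpus, physical_cpus=["pCPU0", "pCPU1"], time_slices=3):
    print(f"slice {t}: {assignment}")
```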
Parent Partition:
Definition: The parent partition is the primary VM that runs directly on the hypervisor.
Function: It manages the hypervisor and acts as the control layer for the entire virtualization
environment. This partition can run services, manage VMs, and perform administrative tasks.
Child Partitions:
Definition: Child partitions are the individual VMs that run on the hypervisor.
Function: Each child partition operates its own operating system and applications, functioning like a
separate computer. They rely on the parent partition for management and resource allocation.
Enlightened I/O:
Definition: Enlightened I/O refers to the optimized input/output (I/O) processing used in Hyper-V.
Function: It allows VMs to communicate with the physical hardware more efficiently. This means less
overhead and better performance when accessing storage and network resources.
Synthetic Devices:
Definition: Synthetic devices are virtual hardware components that are designed to work with the
hypervisor.
Function: Unlike traditional emulated devices, synthetic devices take advantage of the hypervisor's
capabilities to provide better performance. Examples include synthetic network adapters and synthetic
storage controllers. They allow VMs to access resources with lower latency and higher throughput.
Resource Management: The hypervisor allocates resources from the physical server to the parent
partition. The parent partition, in turn, manages the child partitions, distributing resources among them as
needed.
I/O Optimization: Child partitions use synthetic devices for I/O operations, which improves
performance. Enlightened I/O ensures that data flows efficiently between VMs and the physical hardware.
Isolation: Each child partition runs independently, meaning if one VM crashes, it doesn’t affect the
others. The hypervisor maintains this isolation.
Management: Administrators can manage all aspects of the Hyper-V environment from the parent
partition, including creating, modifying, and deleting child partitions.
Microsoft Hyper-V architecture consists of a hypervisor that manages the virtualization environment, a
parent partition that controls the hypervisor and VMs, and child partitions that run individual operating
systems. It uses enlightened I/O for efficient data handling and synthetic devices for enhanced
performance. Together, these components enable efficient and scalable virtualization on a physical server.
Virtualization: Hyper-V allows multiple virtual machines (VMs) to run on a single physical
server, maximizing hardware utilization.
Isolation: Each VM operates independently, providing security and isolation for applications
and data.
Resource Allocation: Hyper-V enables efficient allocation of CPU, memory, and storage
resources to VMs based on needs.
Disaster Recovery: Hyper-V supports backup and replication features, facilitating disaster
recovery and business continuity plans.
Development and Testing: Developers can create isolated environments to test applications
without affecting production systems.
Hybrid Cloud Solutions: Hyper-V can integrate on-premises and cloud resources, enabling
hybrid cloud setups for greater flexibility.
Management Tools: It provides tools for monitoring, managing, and optimizing VMs,
improving operational efficiency.
Cost Efficiency: By consolidating servers and reducing hardware needs, Hyper-V helps lower
operational costs.
Support for Multiple OS: Hyper-V can run different operating systems on the same
hardware, allowing for diverse applications in the cloud.
Hyper-V can be considered better than Xen and VMware in certain aspects:
Seamless Integration: Hyper-V is built into Windows Server, making it easy for organizations
already using Microsoft products.
Cost-Effective
No Extra Licensing: Hyper-V comes with Windows Server licenses, which can save costs
compared to separate licenses required for VMware.
User-Friendly Management
Familiar Interface: Hyper-V uses Windows-based management tools, making it easier for
Windows administrators to use.
Live Migration
Zero Downtime: Hyper-V supports live migration of VMs without downtime, allowing seamless
workload balancing.
Snapshot Functionality
Easy Backups: Hyper-V offers simple snapshot capabilities for quick backups and recovery
points.
Integration with Azure: Hyper-V easily integrates with Microsoft Azure, making it ideal for
hybrid cloud solutions.
Enhanced Security: Hyper-V includes features like Shielded VMs and secure boot, providing
robust security options.
Scalability
Large VM Support: Hyper-V can support a large number of VMs and high resource
configurations, suitable for growing businesses.
Simplified Management: Works well with Active Directory for user authentication and
permissions management.