Cloud M2 1) : Centralized Management Tools Allow Administrators To Monitor and Control Virtual Environments Efficiently
M2 1) Significance of Virtualization
1. Improved Resource Utilization: Allows multiple virtual machines to share a single physical
machine, making fuller use of its hardware.
2. Cost Savings: Reduces the need for additional hardware, leading to lower capital and
operational expenses.
3. Scalability: Workloads can grow by provisioning new VMs instead of buying new servers.
4. Enhanced Flexibility and Agility: Provides developers and IT administrators the ability to
quickly deploy and manage workloads.
5. Disaster Recovery and Backup: Virtual machines can be backed up, cloned, or migrated
easily, ensuring business continuity.
6. Isolation and Security: VMs operate independently, enhancing security and preventing faults
in one VM from affecting other applications.
A Virtual Machine Monitor (VMM), also known as a Hypervisor, is a software layer that creates and
manages Virtual Machines (VMs). It allows multiple virtual machines to run on a single physical host
by abstracting hardware resources and distributing them to VMs. Each VM operates as an
independent system with its own operating system (OS) and applications.
The primary role of a VMM is to facilitate virtualization. Here’s how it contributes to virtualization:
1. Resource Management
o The VMM allocates and manages physical resources (CPU, memory, storage, and
network) across multiple VMs (see the sketch after this list).
2. Isolation
o The VMM provides isolation between VMs, ensuring that the failure or compromise of
one VM doesn't affect others.
3. Abstraction of Hardware
o The VMM abstracts the underlying hardware, providing a uniform environment for VMs.
o This allows different operating systems to run on the same physical machine without
compatibility issues.
4. VM Lifecycle Management
o Admins can easily start, stop, and monitor VMs using VMM interfaces.
5. Server Consolidation
o Through consolidation, the VMM allows multiple VMs to run on fewer physical servers,
leading to lower operational costs.
6. Live Migration
o The VMM enables live migration of VMs between physical servers without downtime.
7. Snapshots and Recovery
o The VMM allows the creation of VM snapshots for backup and recovery purposes.
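The resource-management and isolation roles above can be made concrete with a short sketch. The following toy Python model (the `VMM` class and its methods are invented for illustration, not any real hypervisor API) shows a monitor handing out CPU and memory to VMs and rejecting requests that would oversubscribe the host:

```python
class VMM:
    """Toy virtual machine monitor: allocates host resources to VMs."""

    def __init__(self, total_cpus, total_mem_gb):
        self.free_cpus = total_cpus
        self.free_mem_gb = total_mem_gb
        self.vms = {}  # name -> (cpus, mem_gb)

    def create_vm(self, name, cpus, mem_gb):
        # Refuse requests that would oversubscribe the physical host.
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            raise RuntimeError(f"insufficient resources for {name}")
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.vms[name] = (cpus, mem_gb)

    def destroy_vm(self, name):
        # Reclaim the VM's resources; other VMs are unaffected (isolation).
        cpus, mem_gb = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_mem_gb += mem_gb

vmm = VMM(total_cpus=16, total_mem_gb=64)
vmm.create_vm("web", cpus=4, mem_gb=8)
vmm.create_vm("db", cpus=8, mem_gb=32)
print(vmm.free_cpus, vmm.free_mem_gb)  # 4 24
```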
4)
Key Differences

| Aspect           | HPC                                  | HTC                                     |
|------------------|--------------------------------------|-----------------------------------------|
| Computing Model  | Parallel computing (tight coupling)  | Distributed computing (loose coupling)  |
| Time Sensitivity | Requires results quickly             | Can process over longer timeframes      |
In summary, HPC is about maximizing computational power for complex tasks, while HTC is about
maximizing the number of tasks completed over time.
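One way to see the difference in code: an HPC job is one tightly coupled computation whose answer exists only after all workers finish, while an HTC workload is a stream of independent jobs where each completion counts on its own. A minimal Python sketch under that framing (the workloads themselves are made up for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def independent_job(n):
    return n * n  # each job is a complete, self-contained task

if __name__ == "__main__":
    data = list(range(1_000_000))
    with ProcessPoolExecutor() as pool:
        # HPC style: one problem split into coordinated parts;
        # the answer exists only after ALL partial results are combined.
        chunks = [data[i::4] for i in range(4)]
        total = sum(pool.map(partial_sum, chunks))

        # HTC style: many unrelated jobs; each result is useful as soon
        # as it arrives, and throughput over time is the metric.
        results = list(pool.map(independent_job, range(100)))

    print(total, len(results))
```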
M2 7)
1. Before Virtualization
a. Hardware Layer
Physical resources (CPU, memory, storage, network) belonging to a single machine.
b. Operating System Layer
Single OS installed directly on the hardware (also called a Bare Metal system).
c. Application Layer
Applications run directly on the single OS and compete for its resources.
Key Characteristics
Application isolation is challenging; issues in one application can crash the entire system.
Diagram:
----------------------------------
| Applications |
----------------------------------
| Operating System |
----------------------------------
| Physical Hardware |
----------------------------------
2. After Virtualization
a. Hardware Layer
The same physical resources, now shared among multiple virtual machines.
b. Hypervisor Layer
A hypervisor (VMM) sits on the hardware, abstracting resources and allocating them to VMs.
c. Virtual Machine Layer
Each VM has its own virtual CPU, memory, storage, and virtual network interfaces.
d. Application Layer
Applications run inside each VM on its own guest OS.
Key Characteristics
Multiple isolated VMs share one physical machine; a fault in one VM does not bring down the others.
Diagram:
Diagram:
----------------------------------------------------
|  VM 1: Apps + Guest OS  |  VM 2: Apps + Guest OS  |
----------------------------------------------------
|                    Hypervisor                     |
----------------------------------------------------
|                 Physical Hardware                 |
----------------------------------------------------
Comparison Summary

| Aspect                | Before Virtualization          | After Virtualization                |
|-----------------------|--------------------------------|-------------------------------------|
| Resource Utilization  | Low, resources often underused | High, resources dynamically shared  |
| Management Complexity | High for multiple servers      | Easier with centralized management  |
8)
Hypervisors, also known as virtual machine monitors (VMMs), are software, firmware, or hardware
platforms that create and manage virtual machines (VMs). They allow multiple operating systems to
run concurrently on a host machine by abstracting the underlying hardware resources. Hypervisors
play a crucial role in virtualization, enabling efficient resource utilization, isolation, and management
of VMs.
Hypervisors are generally classified into two main types: Type 1 and Type 2.
Type 1 hypervisors run directly on the host's hardware without an underlying operating system. They
have direct access to the physical resources of the machine, which allows for better performance,
scalability, and security. Because they operate at a lower level, Type 1 hypervisors can manage
resources more efficiently and provide better isolation between VMs.
Common Type 1 hypervisors include:
1. **VMware ESXi**: A widely deployed enterprise hypervisor installed directly on server hardware.
2. **Microsoft Hyper-V**: Microsoft's hypervisor used in Windows Server and Azure environments.
3. **Xen**: An open-source hypervisor that supports various operating systems and is often used in
cloud computing environments.
4. **KVM (Kernel-based Virtual Machine)**: A Linux kernel module that turns the Linux kernel into a
Type 1 hypervisor, allowing it to run multiple VMs.
Type 2 hypervisors run on top of a conventional operating system. They rely on the host OS for
resource management and hardware access, which can introduce some overhead. While they are
generally easier to install and use, they may not perform as well as Type 1 hypervisors, especially
under heavy workloads.
Common Type 2 hypervisors include:
1. **VMware Workstation**: A popular desktop virtualization solution that allows users to run
multiple operating systems on a single physical machine.
2. **Oracle VirtualBox**: A free, open-source Type 2 hypervisor available on Windows, Linux, and
macOS.
3. **Parallels Desktop**: A desktop hypervisor commonly used to run Windows VMs on macOS.

Advantages:
- Easier to install and use, making them suitable for individual users and developers.
- Can run on existing operating systems without the need for dedicated hardware.

Disadvantages:
- Less efficient resource management and potential security vulnerabilities due to reliance on the
host OS.
| Aspect        | Type 1 Hypervisor                        | Type 2 Hypervisor                                        |
|---------------|------------------------------------------|----------------------------------------------------------|
| **Use Cases** | Enterprise environments, data centers    | Development, testing, personal use                       |
| **Examples**  | VMware ESXi, Microsoft Hyper-V, Xen, KVM | VMware Workstation, Oracle VirtualBox, Parallels Desktop |
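The practical difference between the two types is the path a VM's hardware access takes. A toy Python sketch (all class names invented for illustration): a Type 1 hypervisor services a VM's disk read directly against the hardware, while a Type 2 hypervisor must go through the host OS first.

```python
class Hardware:
    def read_disk(self, block):
        return f"data@{block}"

class HostOS:
    """A conventional OS that a Type 2 hypervisor runs on top of."""
    def __init__(self, hw):
        self.hw = hw
    def syscall_read(self, block):
        # Extra layer: the host OS mediates every hardware access.
        return self.hw.read_disk(block)

class Type1Hypervisor:
    def __init__(self, hw):
        self.hw = hw
    def vm_read(self, block):
        return self.hw.read_disk(block)           # direct: bare metal

class Type2Hypervisor:
    def __init__(self, host_os):
        self.host_os = host_os
    def vm_read(self, block):
        return self.host_os.syscall_read(block)   # indirect: via host OS

hw = Hardware()
print(Type1Hypervisor(hw).vm_read(7))
print(Type2Hypervisor(HostOS(hw)).vm_read(7))
```

The extra hop through `syscall_read` is where the Type 2 overhead and host-OS dependence described above come from.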
M1 7) (Assignment)
Modern distributed systems are widely used across various sectors due to their ability to provide
scalability, reliability, and performance. Here are some major applications of modern distributed
systems:
1. Cloud Computing
Examples: AWS, Microsoft Azure, Google Cloud
Purpose: Provides on-demand computing resources like storage, databases, and virtual
machines.
Benefits: Offers elastic scalability and pay-as-you-go cost efficiency.
2. E-commerce and Social Media Platforms
Benefits: Ensures low latency, handles high user traffic, and provides personalized
experiences.
3. Content Delivery Networks (CDNs)
Benefits: Efficient data replication across data centers ensures faster content delivery and
fault tolerance.
4. Banking and Financial Systems
Benefits: Ensures high availability, secure transactions, and low latency for global financial
operations.
5. Healthcare Systems
Purpose: Provides remote diagnosis, medical data storage, and real-time health monitoring.
Benefits: Ensures data consistency across hospitals, supports remote patient care, and
enhances data security.
6. Artificial Intelligence and Machine Learning
Purpose: Distributes computational tasks for model training and large-scale AI inference.
Benefits: Reduces training time, supports large-scale data processing, and enables real-time
AI services.
These applications demonstrate how distributed systems drive modern technological advancements
by ensuring scalability, reliability, and real-time data processing.
19)
A Single-System Image (SSI) refers to the illusion created by a cluster of interconnected computers
(nodes) that presents itself as a single unified system to users and applications. Despite consisting of
multiple physical machines, an SSI abstracts the complexity of distributed resources, making them
appear as one logical entity.
Key Features:
Unified Management: Users interact with the system as if it were a single machine,
regardless of the number of nodes.
Transparency: Handles resource management, task scheduling, and file storage without
exposing the underlying distributed nature.
Efficient Resource Utilization: Allocates processing power, memory, and storage dynamically
across nodes.
Benefits of SSI:
1. Simplified Administration:
o Administrators can manage the entire cluster from a single point of control, reducing
complexity and operational overhead.
2. Load Balancing:
o SSI enables dynamic load balancing by distributing tasks across available nodes,
ensuring efficient use of CPU, memory, and storage.
3. Fault Tolerance:
o In case of node failures, SSI ensures seamless failover by redistributing tasks to other
nodes, maintaining system uptime.
4. Ease of Scalability:
o New nodes can be added to the cluster without major configuration changes, as SSI
handles the expansion transparently.
5. Transparency:
o Users and applications are unaware of the physical distribution of nodes, resulting in
a seamless and uniform computing experience.
Examples:
Cloud Computing Services: Platforms like AWS and Azure provide the illusion of a single
virtual machine using SSI, even when resources are spread across multiple servers.
Data Centers: Large data centers use SSI to provide seamless service to end-users, despite
relying on clusters of thousands of physical machines.
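To make the single-system illusion concrete, here is a toy Python cluster (class and method names are invented for illustration): callers submit work through one `run()` entry point, and the least-loaded node is chosen behind the scenes, giving both transparency and load balancing.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.load = 0

    def execute(self, task):
        self.load += 1
        return task()  # run the task on this physical node

class SingleSystemImage:
    """One logical machine backed by many physical nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def run(self, task):
        # Transparency: callers submit work to "the system";
        # load balancing across nodes happens behind the scenes.
        node = min(self.nodes, key=lambda n: n.load)
        return node.execute(task)

cluster = SingleSystemImage([Node("n1"), Node("n2"), Node("n3")])
results = [cluster.run(lambda i=i: i * 2) for i in range(6)]
print(results)                           # [0, 2, 4, 6, 8, 10]
print([n.load for n in cluster.nodes])   # work spread evenly: [2, 2, 2]
```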
20) Grid computing and cloud computing are both distributed computing models, but they differ in
purpose, architecture, and use cases. Here's a comparison to help you understand the differences:
1. Purpose
Grid Computing:
o Designed for large-scale computational tasks that require massive processing power.
o Suitable for tasks like weather forecasting, protein folding simulations, or physics
experiments.
Cloud Computing:
o Designed to deliver on-demand computing resources (servers, storage, applications) as
services over the internet.
2. Architecture
Grid Computing:
o Decentralized: loosely coupled, often heterogeneous machines contributed by multiple
organizations.
Cloud Computing:
o Centralized: resources hosted in large data centers owned and operated by a provider.
3. Resource Management
Grid Computing:
o Jobs are often divided into smaller tasks and run in parallel.
Cloud Computing:
o Resources are provisioned on demand and scaled elastically by the provider.
4. Accessibility
Grid Computing:
o Typically limited to members of the participating research institutions or organizations.
Cloud Computing:
o Available to anyone over the internet through self-service portals and APIs.
5. Cost Model
Grid Computing:
o Resources are usually shared or donated by participating institutions rather than billed
per use.
Cloud Computing:
o Pay-as-you-go pricing model, where users only pay for the resources they use.
6. Examples
Grid Computing:
o Large scientific projects like CERN’s Large Hadron Collider (LHC) or SETI@home.
Cloud Computing:
o Services like Netflix (video streaming on AWS), Dropbox (cloud storage), and Zoom
(video conferencing).
Conclusion
Use Grid Computing when you need to perform large-scale, collaborative research requiring
vast computational power.
Use Cloud Computing for scalable, on-demand services with flexible pricing and ease of
management, suitable for businesses and everyday applications.
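The pay-as-you-go model mentioned under the cost comparison reduces to simple metering. A minimal Python sketch, assuming hypothetical hourly rates (real providers publish their own price sheets):

```python
# Hypothetical per-hour rates; not any provider's real pricing.
RATES = {"vm_small": 0.05, "vm_large": 0.40, "storage_gb": 0.0001}

def monthly_bill(usage_hours):
    """usage_hours: mapping of resource -> hours (or GB-hours) consumed."""
    return sum(RATES[res] * hours for res, hours in usage_hours.items())

# You pay only for what you used: 600 small-VM hours plus
# 50 GB of storage held for a 720-hour month.
print(round(monthly_bill({"vm_small": 600, "storage_gb": 50 * 720}), 2))  # 33.6
```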
1. Shadow Page Tables
Overview: Shadow page tables are maintained by the hypervisor to map guest virtual
memory to host physical memory. They act as a "shadow" of the guest's page tables,
providing a correct view of memory to the guest operating system (OS) while maintaining
control over memory management.
Implementation Process:
o The guest OS maintains its own page tables, which map virtual addresses to guest
physical addresses (GPA).
o The hypervisor, however, maintains shadow page tables that directly map guest
virtual addresses (GVA) to host physical addresses (HPA).
o The CPU uses the shadow page tables instead of the guest’s page tables to perform
address translation.
o Every time the guest modifies its page tables, the hypervisor intercepts and updates
the shadow page tables accordingly.
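A toy Python model of the mechanism just described, with invented addresses: the guest keeps a GVA→GPA table, the hypervisor keeps a GPA→HPA table, and the shadow table caches the composed GVA→HPA mapping, which the hypervisor rebuilds whenever it intercepts a guest page-table write.

```python
guest_pt = {0x1000: 0xA000, 0x2000: 0xB000}   # guest: GVA -> GPA
hyper_map = {0xA000: 0x7000, 0xB000: 0x9000}  # hypervisor: GPA -> HPA
shadow_pt = {}                                # GVA -> HPA, used by the CPU

def sync_shadow():
    # The hypervisor intercepts guest page-table writes and
    # recomputes the direct GVA -> HPA mapping.
    shadow_pt.clear()
    for gva, gpa in guest_pt.items():
        shadow_pt[gva] = hyper_map[gpa]

sync_shadow()
print(hex(shadow_pt[0x1000]))   # 0x7000: one-step lookup at runtime

guest_pt[0x3000] = 0xA000       # guest maps a new page...
sync_shadow()                   # ...which traps to the hypervisor
print(hex(shadow_pt[0x3000]))   # 0x7000
```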
Advantages:
o Provides compatibility with older processors that lack hardware support for memory
virtualization.
Disadvantages:
o High overhead: the hypervisor must intercept every guest page-table update to keep the
shadow tables synchronized, causing frequent VM exits and extra memory consumption.
2. Extended Page Tables (EPT)
Overview: EPT is a hardware-assisted memory virtualization technique in which the CPU's
MMU performs a two-stage address translation directly.
Implementation Process:
o The guest OS maintains its page tables for translating virtual addresses to guest
physical addresses (GPA).
o The hypervisor manages the EPT, which translates guest physical addresses to host
physical addresses (HPA).
1. Guest Virtual Address (GVA) → Guest Physical Address (GPA) (via guest page
tables).
2. Guest Physical Address (GPA) → Host Physical Address (HPA) (via EPT).
o This hardware support eliminates the need for shadow page tables.
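Under the same invented addresses, an EPT-style translation looks like the Python sketch below: no shadow copy is kept, and each translation walks both tables (on real hardware the MMU performs this walk and caches results in the TLB).

```python
guest_pt = {0x1000: 0xA000}    # guest-managed: GVA -> GPA
ept      = {0xA000: 0x7000}    # hypervisor-managed: GPA -> HPA

def translate(gva):
    gpa = guest_pt[gva]  # stage 1: guest page tables
    hpa = ept[gpa]       # stage 2: extended page tables
    return hpa

print(hex(translate(0x1000)))  # 0x7000

# Guest page-table updates need no hypervisor intercept:
guest_pt[0x2000] = 0xA000
print(hex(translate(0x2000)))  # 0x7000, picked up automatically
```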
Advantages:
o Simplifies the hypervisor’s design since it no longer needs to maintain shadow page
tables.
Disadvantages:
o Requires processor support for EPT, so it is unavailable on older hardware.
o TLB misses are costlier, since the hardware must walk both the guest page tables and
the EPT.
Comparison Summary

| Aspect                 | Shadow Page Tables                          | Extended Page Tables (EPT)                  |
|------------------------|---------------------------------------------|---------------------------------------------|
| Hypervisor Involvement | High                                        | Low                                         |
| Memory Overhead        | Higher memory consumption for shadow pages  | Lower memory usage due to direct EPT usage  |
In modern systems, EPT has largely replaced shadow page tables due to its improved
performance and lower management overhead. However, understanding both techniques is
crucial for virtualization administrators and developers working with diverse environments.
1. Virtual Machines (VMs)
A virtual machine is a software emulation of a physical computer. Key features:
1. Hypervisor-Based Virtualization:
o A hypervisor (also called a Virtual Machine Monitor, or VMM) manages multiple VMs
on a single host.
2. Isolation:
o Each VM runs its own OS, applications, and system processes independently.
3. Resource Allocation:
o VMs share CPU, memory, disk, and network resources allocated by the hypervisor.
4. Backup and Migration:
o VMs can be backed up using snapshots and moved across different servers (live
migration).
5. Server Consolidation:
o Helps in server consolidation, reducing the need for multiple physical servers.
2. Virtualization Middleware
Virtualization Middleware is software that sits between the operating system and the
underlying hardware, enabling virtualization by providing an abstraction layer. It helps in
managing VMs, optimizing resource allocation, and automating tasks.
Key functions include:
1. Hypervisor Management: Provides a single interface for creating, monitoring, and
controlling VMs across one or more hypervisors.
2. Resource Optimization: Dynamically balances CPU, memory, and storage allocation
across VMs.
3. Cloud Integration: Connects on-premises virtual environments with public cloud
platforms for hybrid deployments.
Diagram:
+--------------------------------------------------+
|    VM 1    |    VM 2    |    VM 3    |    VM 4    |
+--------------------------------------------------+
|      Virtualization Middleware / Hypervisor      |
+--------------------------------------------------+
|                Physical Hardware                 |
+--------------------------------------------------+
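The abstraction layer can be sketched as a thin adapter: one management call from the administrator, translated for whichever hypervisor sits underneath. The Python driver interfaces below are invented for illustration (the hypervisor names are real products, but these are not their actual APIs).

```python
class Middleware:
    """Uniform management interface over heterogeneous hypervisors."""
    def __init__(self, drivers):
        self.drivers = drivers  # e.g. {"esxi": EsxiDriver(), ...}

    def start_vm(self, host, vm_name):
        # One call for the admin; the driver handles hypervisor specifics.
        return self.drivers[host].start(vm_name)

class EsxiDriver:
    def start(self, vm_name):
        return f"[esxi] powered on {vm_name}"

class KvmDriver:
    def start(self, vm_name):
        return f"[kvm] virsh start {vm_name}"

mw = Middleware({"esxi": EsxiDriver(), "kvm": KvmDriver()})
print(mw.start_vm("esxi", "web01"))
print(mw.start_vm("kvm", "db01"))
```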
Benefits:
1. Improved Resource Utilization: Allows multiple VMs to share the same physical hardware
efficiently.
2. Cost Reduction: Reduces costs associated with physical hardware, power consumption, and
maintenance.
3. Scalability & Flexibility: New VMs can be provisioned quickly as workload demands grow.
4. Security & Isolation: Provides a secure environment where different VMs run independently.
5. Disaster Recovery & Backup: Ensures easy recovery using VM snapshots and live migration.