
UNIT III
VIRTUAL MACHINES AND VIRTUALIZATION OF CLUSTERS AND DATA CENTERS
VIRTUALIZATION
 Virtualization refers to the process of creating a
virtual version of something, such as hardware,
software, or an entire computer system, using
specialized software known as hypervisors or
virtual machine monitors.

 In the context of modern computing, virtualization involves abstracting and decoupling the physical resources of a computer system, such as CPU, memory, storage, and networking, from the underlying hardware, enabling multiple virtual instances to run independently on the same physical infrastructure.
IMPLEMENTATION LEVELS OF
VIRTUALIZATION
 A traditional computer runs with a host operating
system specially tailored for its hardware
architecture, as shown in Figure 3.1(a).
 After virtualization, different user applications
managed by their own operating systems (guest OS)
can run on the same hardware, independent of the
host OS.
 This is often done by adding additional software, called a virtualization layer, as shown in Figure 3.1(b). This virtualization layer is known as the hypervisor or virtual machine monitor (VMM).
IMPLEMENTATION LEVELS OF
VIRTUALIZATION
 Hardware Abstraction Level: Hardware-level virtualization is performed right on top of the bare hardware.
 On the one hand, this approach generates a virtual hardware environment for a VM. On the other hand, the process manages the underlying hardware through virtualization.
 The idea is to virtualize a computer's resources, such as its processors, memory, and I/O devices.
 The intention is to improve the hardware utilization rate by allowing multiple users to share the hardware concurrently.
IMPLEMENTATION LEVELS OF
VIRTUALIZATION

 Operating System Level: This refers to an abstraction layer between the traditional OS and user applications.
 OS-level virtualization creates isolated containers on a single physical server, and these OS instances utilize the hardware and software in data centers.
 The containers behave like real servers. OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware resources among a large number of mutually distrusting users.
IMPLEMENTATION LEVELS OF
VIRTUALIZATION

Library Support Level
 Most applications use APIs exported by user-level libraries rather than using lengthy system calls to the OS.
 Since most systems provide well-documented APIs, such an interface becomes another candidate for virtualization.
 Virtualization with library interfaces is possible by controlling the communication link between applications and the rest of a system through API hooks, as in the sketch below.
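The following Python fragment is a minimal, illustrative sketch (not from the textbook) of the API-hook idea: the name an application would normally call is rebound to a wrapper that intercepts the call. The function names and the one-hour offset are invented for the example; a real library-level layer (e.g., WINE or vCUDA) would translate the call for a different ABI or a remote device instead.

import time

def real_gettime():
    # Stands in for an API function exported by a user-level library.
    return time.time()

def make_hooked(fn, offset_seconds):
    # The hook intercepts the library call and adjusts its result.
    def hooked(*args, **kwargs):
        return fn(*args, **kwargs) + offset_seconds
    return hooked

# "Install" the hook by rebinding the name the application calls.
gettime = make_hooked(real_gettime, offset_seconds=3600)
print("real  :", real_gettime())
print("hooked:", gettime())

The application keeps calling gettime() unchanged; only the interposed layer decides how the call is actually serviced.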
IMPLEMENTATION LEVELS OF
VIRTUALIZATION
 User-Application Level: Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an application often runs as a process. Therefore, application-level virtualization is also known as process-level virtualization.
 The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system, and the layer exports an abstraction of a VM that can run programs written and compiled to a particular abstract machine definition. Any program written in the HLL and compiled for this VM will be able to run on it.
VIRTUALIZATION
 Virtualization is the process of creating virtual instances of
physical resources, such as servers, storage, networks, and
even entire operating systems, within a single physical
machine. These virtual instances, often referred to as
virtual machines (VMs), behave as if they were separate
physical entities, even though they share the underlying
hardware resources.

Benefits of Virtualization:
1. Server Consolidation: Virtualization allows multiple
virtual machines to run on a single physical server,
leading to better resource utilization and reduced
hardware costs.

2. Isolation: Each virtual machine is isolated from others, providing security and ensuring that issues in one VM do not affect others.
VIRTUALIZATION
3. Resource Allocation: Virtualization enables dynamic
allocation of resources like CPU, memory, and storage to
VMs based on demand, improving efficiency and
performance.
4. Hardware Independence: Virtualization abstracts the
underlying hardware, making it easier to migrate VMs
between different physical servers without compatibility
issues.
5. Reduced Downtime: Live migration and fault tolerance
features allow VMs to be moved between hosts without
interruption, minimizing downtime during maintenance
or hardware failures.
6. Development and Testing: Virtual environments
provide a sandbox for software development, testing, and
debugging without affecting the production environment.
7. Energy Efficiency: By consolidating multiple VMs on
fewer physical servers, energy consumption can be
reduced, leading to cost savings and reduced
environmental impact.
VIRTUALIZATION

Comparison of Full Virtualization and Paravirtualization:
1. Full Virtualization:
In full virtualization, guest operating systems run on VMs that are unaware they are virtualized. The hypervisor abstracts the underlying hardware.
 Advantages: Compatibility with unmodified guest OSes, easy migration between different hardware platforms, better isolation.
 Disadvantages: Higher performance overhead due to hardware emulation.
VIRTUALIZATION

2. Paravirtualization:
 In paravirtualization, guest OSes are modified to interact with the hypervisor for better efficiency. Guest OSes are aware of the virtualization layer.
 Advantages: Reduced performance overhead compared to full virtualization, better performance due to optimized communication with the hypervisor.
 Disadvantages: Requires guest OS modification, limited to supported OSes, potential security concerns due to close interaction.
ROLE OF HYPERVISORS IN VIRTUALIZATION:

 Hypervisors are critical components in the virtualization process, acting as a software layer that enables the creation and management of virtual machines (VMs) on a physical host.
 They facilitate the allocation and sharing of physical resources among multiple VMs, while also providing isolation between them.
 The primary role of a hypervisor is to abstract the underlying hardware and present it in a way that VMs can utilize.
Differentiating Type 1 and Type 2 Hypervisors:

Type 1 Hypervisor: Bare-Metal Hypervisor
 Installed directly on physical hardware: Type 1 hypervisors run directly on the host's hardware without the need for a separate operating system.
 High performance: Since they operate directly on the hardware, Type 1 hypervisors generally offer better performance and efficiency.
 Ideal for server virtualization: Type 1 hypervisors are commonly used in data centers for server consolidation and resource optimization.
 Examples: VMware vSphere/ESXi, Microsoft Hyper-V, Xen.
Differentiating Type 1 and Type 2 Hypervisors:

Type 2 Hypervisor: Hosted Hypervisor
 Runs on top of an operating system: Type 2 hypervisors are installed on top of a host operating system, similar to any other software application.
 Moderate performance: Due to the additional layer of the host OS, Type 2 hypervisors might introduce some performance overhead.
 Suited for desktop and development environments: Type 2 hypervisors are often used for testing, development, and running multiple operating systems on a personal computer.
 Examples: Oracle VirtualBox, VMware Workstation, Parallels Desktop.
Differentiating Type 1 and Type 2 Hypervisors:

 Performance: Type 1 hypervisors generally offer better performance since they run directly on the hardware without the overhead of an additional OS layer.
 Resource Management: Type 1 hypervisors have more direct control over resource allocation, making them more suitable for resource-intensive tasks like server virtualization.
 Isolation: Both types provide isolation between VMs, but Type 1 hypervisors offer stronger isolation since they don't rely on an underlying OS.
 Use Cases: Type 1 hypervisors are common in enterprise environments for server consolidation, while Type 2 hypervisors are popular for desktop and development scenarios.
Memory Virtualization:

 Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems.
 In a traditional execution environment, the operating system maintains mappings of virtual memory to main memory using page tables, which is a one-stage mapping from virtual memory to machine memory.
 All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance.
Memory Virtualization:

 Virtual memory virtualization involves sharing the physical system memory in RAM and dynamically allocating it to the physical memory of the VMs.
 That means a two-stage mapping process should be maintained by the guest OS and the VMM, respectively: virtual memory to physical memory, and physical memory to machine memory.
 Furthermore, MMU virtualization should be supported, and it should be transparent to the guest OS. The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of its VM, but the guest OS cannot directly access the actual machine memory.
Memory Virtualization:

 The VMM is responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12 shows the two-level memory mapping procedure.
 Since each page table of the guest OS has a separate page table in the VMM corresponding to it, the VMM page table is called the shadow page table.
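As an illustration only (toy page numbers, Python dictionaries standing in for hardware page tables), the sketch below shows the two-stage mapping and how a shadow page table precomputes the guest-virtual-to-machine translation into a single lookup:

# Stage 1 (maintained by the guest OS): guest virtual page -> guest "physical" page
guest_page_table = {0: 10, 1: 11, 2: 12}
# Stage 2 (maintained by the VMM): guest physical page -> actual machine page
vmm_page_table = {10: 7, 11: 3, 12: 9}

def translate_two_stage(gvpn):
    gppn = guest_page_table[gvpn]      # stage 1 lookup
    return vmm_page_table[gppn]        # stage 2 lookup

# The shadow page table stores the composed mapping, so the hardware MMU can
# translate guest-virtual addresses to machine addresses in one step.
shadow_page_table = {gvpn: vmm_page_table[gppn]
                     for gvpn, gppn in guest_page_table.items()}

assert all(translate_two_stage(p) == shadow_page_table[p] for p in guest_page_table)
print(shadow_page_table)               # {0: 7, 1: 3, 2: 9}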
LIVE MIGRATION:

 Live migration is a critical feature in virtualization that allows a running virtual machine (VM) to be moved from one physical host to another with minimal downtime and service interruption.
 Here's a step-by-step explanation of the process depicted in the diagram:

Step 1: Preparing to Migrate
 Initially, the VM is running on the source host.
 The system administrator initiates the live migration process, specifying the destination host for the VM.
 The destination host prepares to receive the migrated VM by allocating resources and configuring its virtualization environment.
LIVE MIGRATION:

Step 2: Memory Copy Transfer
 The live migration process begins with the transfer of memory pages from the source host to the destination host. This process is iterative and involves copying memory pages from the source to the destination while tracking changes.
 Memory pages that have been modified during the migration process are continuously copied to the destination host.
 Both hosts coordinate to ensure that the memory contents remain consistent during the transfer (1).
LIVE MIGRATION:

Step 3: CPU Synchronization and State Transfer
 Concurrently with memory copying, the source and destination hosts synchronize the CPU state and registers of the VM.
 This synchronization ensures that the VM's execution remains consistent during the migration process.
 CPU state and registers are transferred from the source host to the destination host (2).
LIVE MIGRATION:

Step 4: Network State Transfer
 Network state information, including active connections and network configurations, is transferred from the source host to the destination host.
 This ensures that network connections and services remain uninterrupted when the VM starts running on the destination host.

Step 5: Completion and Activation
 Once memory copying, CPU synchronization, and network state transfer are complete, the VM is effectively running on the destination host.
 The source host marks the VM as "migrated" and releases any resources associated with it.
 The destination host takes over the execution of the VM, and it resumes normal operation with minimal or no downtime.
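The Python sketch below is a rough model of the iterative "pre-copy" behaviour in Steps 2-5. All names, the re-dirty fraction, and the stopping threshold are invented for illustration; real hypervisors track dirtied pages in hardware and also handle non-converging workloads.

def migrate(total_pages=4096, redirty_fraction=0.1, threshold=8, max_rounds=30):
    dirty = total_pages                  # first round: the whole memory image is "dirty"
    transferred = 0
    for round_no in range(1, max_rounds + 1):
        transferred += dirty             # copy the currently dirty pages while the VM keeps running
        # Pages re-dirtied while this round was being copied; shorter rounds
        # dirty fewer pages, so the dirty set shrinks each iteration.
        dirty = int(dirty * redirty_fraction)
        if dirty <= threshold:
            break
    # Stop-and-copy: pause the VM briefly, send the last dirty pages together
    # with the CPU registers and network state, then resume on the destination.
    transferred += dirty
    return round_no, transferred

rounds, pages = migrate()
print(f"converged after {rounds} pre-copy rounds, {pages} page transfers in total")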
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS

 Resource management in virtualized data centers involves efficiently allocating and optimizing the utilization of physical resources such as CPU, memory, and I/O devices among multiple virtual machines (VMs).
 The goal is to ensure that VMs receive the necessary resources to meet their performance requirements while maximizing resource utilization.
 Here's an explanation of resource management along with techniques for CPU, memory, and I/O device allocation and optimization:
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS
1. CPU Resource Management:
 Allocation: CPU resources are allocated to VMs based on their processing needs. This can be done using techniques like proportional sharing, where VMs are assigned shares of CPU time, or reservations, where a minimum amount of CPU is guaranteed to a VM (see the sketch after this list).
 Optimization: CPU affinity and anti-affinity policies can be used to optimize CPU allocation. Affinity ensures that a VM always runs on the same physical CPU core, improving cache efficiency. Anti-affinity prevents certain VMs from running on the same physical core to avoid resource contention.

2. Memory Resource Management:
 Allocation: Memory is allocated to VMs based on their memory demands. Techniques like memory ballooning allow the hypervisor to reclaim memory from one VM and allocate it to another based on need.
 Optimization: Techniques like transparent page sharing (TPS) identify identical memory pages across VMs and share them, reducing memory duplication. Memory compression can be used to further optimize memory usage by compressing memory pages before storing them.
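As a sketch of the proportional-share idea mentioned under CPU allocation (the VM names, share values, and MHz figures are invented; production schedulers such as those in ESXi or Xen are considerably more involved), reservations are granted first and the remaining capacity is split in proportion to shares:

def allocate_cpu(total_mhz, vms):
    """vms: list of dicts with 'name', 'shares', and an optional 'reservation_mhz'."""
    # Grant each VM its guaranteed minimum first.
    alloc = {vm["name"]: vm.get("reservation_mhz", 0) for vm in vms}
    remaining = total_mhz - sum(alloc.values())
    total_shares = sum(vm["shares"] for vm in vms)
    # Split whatever is left in proportion to the configured shares.
    for vm in vms:
        alloc[vm["name"]] += remaining * vm["shares"] / total_shares
    return alloc

vms = [
    {"name": "web",   "shares": 2000, "reservation_mhz": 500},
    {"name": "db",    "shares": 4000, "reservation_mhz": 1000},
    {"name": "batch", "shares": 1000},
]
print(allocate_cpu(total_mhz=8000, vms=vms))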
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS

3. I/O Device Resource Management:
 Allocation: I/O devices are allocated to VMs based on device type and demand. For example, network bandwidth can be allocated to VMs using quality of service (QoS) policies (see the sketch after this list).
 Optimization: Techniques like I/O scheduling prioritize I/O requests from VMs, ensuring fair access to I/O devices. Storage tiering can be used to optimize storage performance by placing frequently accessed data on faster storage media.

4. Dynamic Resource Allocation:
 Resources can be dynamically allocated based on real-time demand. For example, CPU and memory can be hot-added or hot-removed from a VM without downtime.
 Dynamic resource allocation ensures that VMs receive the resources they need when they need them, improving resource utilization.
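One common way to realize the network QoS policy mentioned under I/O allocation is a per-VM token bucket. The sketch below is illustrative only (the rate, burst size, and the simple allow/drop decision are assumptions, not a hypervisor API):

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True           # forward the packet
        return False              # drop or queue it

# e.g. cap one VM at roughly 10 MB/s with a 1 MB burst allowance
vm_limiter = TokenBucket(rate_bytes_per_s=10_000_000, burst_bytes=1_000_000)
print(vm_limiter.allow(1500))     # a typical Ethernet-frame-sized packet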
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS

5. Overcommitment and Balancing:
 Overcommitment involves allocating more resources than physically available, relying on techniques like memory page swapping and memory ballooning to manage overcommitted resources efficiently.
 Resource balancing redistributes resources among VMs to maintain optimal utilization. Live migration can be used to move VMs to different hosts to achieve resource balance.

6. Monitoring and Allocation Policies:
 Resource management relies on continuous monitoring of VM resource usage. Policies and algorithms, such as load balancing and dynamic resource scaling, can automatically adjust resource allocations based on changing workloads.
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS

Explanation of Components in a Virtual Cluster:
 Virtual Cluster (Cluster-1): This represents the entire virtual cluster, which is a collection of virtual machines (VMs) or nodes working together to provide services or run applications.
 VM (Node): These are individual virtual machines within the cluster. Each VM can run applications or services and contributes to the overall workload of the cluster.
 Load Balancer: Load balancers are critical components within the virtual cluster. They distribute incoming network traffic or workloads across multiple VMs or nodes to ensure optimal resource utilization and performance.
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS

How Load Balancing Works within a Virtual Cluster:
 Load balancing within a virtual cluster aims to evenly distribute incoming requests or workloads among the available VMs or nodes to prevent resource bottlenecks and optimize resource utilization. Here's how load balancing works:
 Incoming Requests: When incoming requests or workloads, such as web traffic or application requests, arrive at the virtual cluster, they first reach the load balancer.
 Load Balancer's Decision: The load balancer assesses the current state of the VMs or nodes within the cluster. It considers factors such as CPU usage, memory availability, network latency, and the current number of connections to each VM.
 Load Distribution: Based on its assessment, the load balancer decides which VM or node is the most suitable to handle the incoming request. It then forwards the request to that VM (see the sketch after this list).
CONCEPT OF RESOURCE MANAGEMENT IN VIRTUALIZED DATA
CENTERS

 Dynamic Adjustment: Load balancers continuously monitor the performance of VMs in real time. If a VM becomes overloaded or unavailable, the load balancer automatically adjusts the traffic distribution. For example, it might redirect new requests away from an overloaded VM.
 Session Persistence: In some cases, it's important to maintain session persistence, such as in e-commerce applications. Load balancers can be configured to ensure that all requests from a particular client are sent to the same VM, preserving session state.
 Health Checks: Load balancers periodically perform health checks on VMs to verify their availability and responsiveness. If a VM fails a health check, the load balancer can redirect traffic away from it until it recovers.
 Scalability: Load balancing can also facilitate auto-scaling. When traffic increases, additional VMs can be automatically provisioned to handle the load, and the load balancer ensures that traffic is distributed evenly among the new and existing VMs.
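The following sketch (with invented VM names and metrics) condenses the decision, health-check, and distribution steps above into a simple "least connections among healthy backends" rule; real load balancers support many such policies.

vms = [
    {"name": "vm1", "healthy": True,  "connections": 42, "cpu": 0.70},
    {"name": "vm2", "healthy": True,  "connections": 17, "cpu": 0.55},
    {"name": "vm3", "healthy": False, "connections": 3,  "cpu": 0.10},  # failed its last health check
]

def pick_backend(vms):
    # Skip VMs that failed their health check, then prefer the fewest
    # active connections, breaking ties by CPU usage.
    healthy = [vm for vm in vms if vm["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy backends available")
    return min(healthy, key=lambda vm: (vm["connections"], vm["cpu"]))

def route(request_id, vms):
    target = pick_backend(vms)
    target["connections"] += 1       # account for the new connection
    return target["name"]

print(route("req-1", vms))           # -> vm2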
COMPARE AND CONTRAST THE THREE MAIN SERVICE MODELS: IAAS,
PAAS, AND SAAS.

Definition
 IaaS: Provides virtualized computing resources (e.g., virtual machines, storage, networking) over the internet. Users manage the OS, applications, and data.
 PaaS: Offers a platform and environment for developing, running, and managing applications. Developers focus on coding, while the provider manages the infrastructure and runtime.
 SaaS: Delivers software applications over the internet on a subscription basis. Users access and use software without worrying about underlying infrastructure or maintenance.

User Responsibilities
 IaaS: Users are responsible for managing and configuring the operating system, applications, and data.
 PaaS: Users focus on developing and deploying applications. The platform manages the underlying infrastructure, including OS, runtime, and scaling.
 SaaS: Users simply use the software application. Maintenance, updates, and infrastructure management are the provider's responsibility.

Customization
 IaaS: Highly customizable. Users have control over the virtual machines and can install/configure any software they need.
 PaaS: Less flexibility compared to IaaS, as it focuses on application development within the provided platform.
 SaaS: Limited customization, as users can't modify the software's core functionality.

Development Effort
 IaaS: Requires more effort and expertise from users in managing the OS, applications, and security.
 PaaS: Reduces development effort as the platform abstracts infrastructure management. Developers can concentrate on coding.
 SaaS: Minimal development effort, as users only need to configure and use the software application.

Scalability
 IaaS: Offers scalability by allowing users to provision additional virtual resources as needed.
 PaaS: Provides auto-scaling and load-balancing features, simplifying application scaling.
 SaaS: Typically offers scalability without user intervention. The provider handles resource allocation.

Examples
 IaaS: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
 PaaS: Google App Engine, Microsoft Azure App Service, and Heroku.
 SaaS: Gmail, Salesforce, Dropbox, and Zoom.

Use Cases
 IaaS: Suitable for users who need full control over infrastructure and want to run various software stacks. Common for development, testing, and hosting servers and applications.
 PaaS: Ideal for developers and teams focused on building and deploying applications without managing infrastructure. Common for web and mobile app development.
 SaaS: Ideal for end-users and businesses looking for ready-made software solutions without the need for installation, maintenance, and infrastructure management. Common for email, CRM, collaboration, and productivity tools.
TYPES OF CLOUD

Public Cloud
 Public cloud is open to all to store and access
information via the Internet using the pay-per-usage
method.
 In public cloud, computing resources are managed and
operated by the Cloud Service Provider (CSP). The
CSP looks after the supporting infrastructure and
ensures that the resources are accessible to and
scalable for the users.

 Example: Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise, Google App Engine, Microsoft Windows Azure Services Platform.
CHARACTERISTICS OF PUBLIC CLOUD

 Accessibility: Public cloud services are available to anyone with an internet connection. Users can access their data and programs at any time and from anywhere.
 Shared Infrastructure: Several users share the infrastructure in public
cloud settings. Cost reductions and effective resource use are made possible
by this.
 Scalability: By using the public cloud, users can easily adjust the resources
they need based on their requirements, allowing for quick scaling up or
down.
 Pay-per-Usage: When using the public cloud, payment is based on usage,
so users only pay for the resources they actually use. This helps optimize
costs and eliminates the need for upfront investments.
 Managed by Service Providers: Cloud service providers manage and
maintain public cloud infrastructure. They handle hardware maintenance,
software updates, and security tasks, relieving users of these
responsibilities.
 Reliability and Redundancy: Public cloud providers ensure high
reliability by implementing redundant systems and multiple data centers.
By doing this, the probability of losing data and experiencing service
disruptions is reduced.
 Security Measures: Public cloud providers implement robust security
measures to protect user data. These include encryption, access controls, and
regular security audits.
PRIVATE CLOUD

 Private cloud is also known as an internal cloud or corporate cloud.
 It is used by organizations to build and manage their own data centers, either internally or through a third party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.
PRIVATE CLOUD
The private cloud has the following key characteristics:
 Exclusive Use: Private cloud is dedicated to a single organization, ensuring the resources and services are tailored to its needs. It is like having a personal cloud environment exclusively for that organization.
 Control and Security: Private cloud offers organizations higher control and security than public cloud options. Organizations have more control over data governance, access controls, and security measures.
 Customization and Flexibility: Private cloud allows organizations to customize the infrastructure according to their specific requirements. They can configure resources, networks, and storage to optimize performance and efficiency.
 Scalability and Resource Allocation: The private cloud can scale and allocate resources. According to demand, businesses may scale their infrastructure up or down, effectively using their resources.
PRIVATE CLOUD

 Performance and Dependability: Private clouds give businesses more control over the underlying infrastructure, improving performance and dependability.
 Compliance and Regulatory Requirements: Organizations may more easily fulfill certain compliance and regulatory standards using the private cloud. It provides the freedom to put in place strong security measures, follow data residency laws, and comply with industry-specific norms.
 Hybrid Cloud Integration: Private cloud can be integrated with public cloud services, forming a hybrid cloud infrastructure. This integration allows organizations to leverage the benefits of both private and public clouds.
HYBRID CLOUD

 Hybrid Cloud is a combination of the public cloud and the private cloud.
 Hybrid cloud is partially secure because the services running on the public cloud can be accessed by anyone, while the services running on the private cloud can be accessed only by the organization's users.
 Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS Office on the Web and OneDrive), Amazon Web Services.
CHARACTERISTICS OF HYBRID CLOUD
 Integration of Public and Private Clouds: Hybrid cloud seamlessly integrates public and private clouds, allowing organizations to leverage the advantages of both. It provides a unified platform where workloads and data can be deployed and managed across both environments.
 Flexibility and Scalability: Hybrid cloud offers flexibility in resource allocation and scalability. Organizations can dynamically scale their infrastructure by utilizing additional resources from the public cloud while maintaining control over critical workloads on the private cloud.
 Enhanced Security and Control: Hybrid cloud allows organizations to maintain higher security and control over their sensitive data and critical applications. Private cloud components provide a secure and dedicated environment, while public cloud resources can be used for non-sensitive tasks, ensuring a balanced approach to data protection.
 Cost Optimization: Hybrid cloud enables organizations to optimize costs by utilizing the cost-effective public cloud for non-sensitive workloads while keeping mission-critical applications and data on the private cloud. This approach allows for efficient resource allocation and cost management.
CHARACTERISTICS OF HYBRID CLOUD
 Data and Application Portability: Organizations can move workloads and data between public and private clouds as needed with a hybrid cloud. This portability offers agility and the ability to adapt to changing business requirements, ensuring optimal performance and responsiveness.
 Compliance and Regulatory Requirements: Hybrid cloud helps organizations address compliance and regulatory requirements more effectively. Sensitive data and applications can be kept within the private cloud, ensuring compliance with industry-specific regulations, while leveraging the public cloud for other non-sensitive operations.
 Disaster Recovery and Business Continuity: Hybrid cloud facilitates robust disaster recovery and business continuity strategies. Organizations can replicate critical data and applications between the private and public clouds, ensuring redundancy and minimizing the risk of data loss or service disruptions.
VIRTUALIZATION FOR DATA-CENTER AUTOMATION

Server Consolidation in Data Centers:
 In data centers, a large number of heterogeneous workloads can run on servers at various times. These heterogeneous workloads can be roughly divided into two categories: chatty workloads and noninteractive workloads.
 Chatty workloads may burst at some point and return to a silent state at some other point. A web video service is an example of this, whereby a lot of people use it at night and few people use it during the day. Noninteractive workloads do not require people's efforts to make progress after they are submitted. High-performance computing is a typical example of this.
VIRTUALIZATION FOR DATA-CENTER AUTOMATION

 Therefore, it is common that most servers in data centers are underutilized. A large amount of hardware, space, power, and management cost of these servers is wasted.
 Server consolidation is an approach to improve the low utility ratio of hardware resources by reducing the number of physical servers.
 Among several server consolidation techniques, such as centralized and physical consolidation, virtualization-based server consolidation is the most powerful. Data centers need to optimize their resource management.
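A minimal sketch of virtualization-based consolidation viewed as a bin-packing problem (toy CPU loads and a first-fit-decreasing heuristic are assumptions for illustration; production placement also weighs memory, I/O, and affinity constraints):

def consolidate(vm_loads, server_capacity):
    servers = []                                  # each server is a list of VM loads
    for load in sorted(vm_loads, reverse=True):   # place the largest VMs first
        for srv in servers:
            if sum(srv) + load <= server_capacity:
                srv.append(load)                  # fits on an existing server
                break
        else:
            servers.append([load])                # otherwise open a new physical server
    return servers

# ten underutilized workloads, expressed as fractions of one server's CPU
vm_loads = [0.30, 0.10, 0.25, 0.40, 0.05, 0.20, 0.35, 0.15, 0.10, 0.30]
placement = consolidate(vm_loads, server_capacity=1.0)
print(f"{len(vm_loads)} VMs consolidated onto {len(placement)} servers: {placement}")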
