
Unit 4- Resource Management Monitoring

1.1 CPU & Memory Resource Allocation:


1) Paging:
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. Retrieving processes from secondary storage into main memory in the form of pages is known as paging. The basic purpose of paging is to divide each process into pages.

The mapping between logical pages and physical page frames is maintained by the page table, which the memory management unit uses to translate logical addresses into physical addresses. Each page table entry maps a logical page number to a physical page frame number, letting the operating system keep track of where every logical address (used by programs) lives in physical memory.

Why is Paging Used for Memory Management?

Paging is a memory management technique that addresses common challenges in allocating and managing memory efficiently. Here is why paging is needed as a memory management technique:

 Memory isn’t always available in a single block: Programs often need more memory than is available in a single contiguous block. Paging breaks memory into smaller, fixed-size pieces, making it easier to allocate scattered free spaces.
 Process size can increase or decrease: Since programs don’t need to occupy contiguous memory, they can grow dynamically without having to be moved.

How Does Paging Work?

Paging is a method used by operating systems to manage memory efficiently. In paging, physical memory is divided into fixed-size blocks called page frames, and the process’s logical address space is divided into blocks of the same size called pages.
When a process requests memory, the operating system allocates one or more page
frames to the process and maps the process’s logical pages to the physical page
frames. When a program runs, its pages are loaded into any available frames in the
physical memory.
This approach prevents fragmentation issues by keeping memory allocation uniform.
Each program has a page table, which the operating system uses to keep track of
where each page is stored in physical memory. When a program accesses data, the
system uses this table to convert the program’s address into a physical memory
address.
Paging allows for better memory use and makes memory easier to manage. It also supports virtual memory, letting parts of programs be stored on disk and loaded into memory only when needed. This way, even large programs can run without fitting entirely into main memory.

 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 × 2^20 words = 2^27 words, then Logical Address = log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 × 2^20 words = 2^24 words, then Physical Address = log2(2^24) = 24 bits

The mapping from virtual to physical address is done by the Memory Management
Unit (MMU) which is a hardware device and this mapping is known as the paging
technique.

 The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size

Example
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

Number of Frames = Physical Address Space / Frame Size = 4 K / 1 K = 4 = 2^2
Number of Pages = Logical Address Space / Page Size = 8 K / 1 K = 8 = 2^3

[Figure: Block diagram of paging]


The address generated by the CPU is divided into:

1. Page number (p): the number of bits required to represent a page in the Logical Address Space.
2. Page offset (d): the number of bits required to represent a particular word within a page.

A Physical Address is divided into two main parts:

1. Frame number (f): the number of bits required to represent a frame in the Physical Address Space.
2. Frame offset (d): the number of bits required to represent a particular word within a frame.
So, a physical address in this scheme may be represented as follows:

Physical Address = (Frame Number << Number of Bits in Frame Offset) + Frame Offset

where "<<" represents a bitwise left shift operation.

Advantages of Paging:
Eliminates External Fragmentation: Paging divides memory into fixed-size blocks
(pages and frames), so processes can be loaded wherever there is free space in
memory. This prevents wasted space due to fragmentation.

Efficient Memory Utilization: Since pages can be placed in non-contiguous memory locations, even small free spaces can be utilized, leading to better memory allocation.

Supports Virtual Memory: Paging enables the implementation of virtual memory, allowing processes to use more memory than physically available by swapping pages between RAM and secondary storage.

Ease of Swapping: Individual pages can be moved between physical memory and
disk (swap space) without affecting the entire process, making swapping faster and
more efficient.

Improved Security and Isolation: Each process works within its own set of pages,
preventing one process from accessing another’s memory space.

Disadvantages of Paging:
Internal Fragmentation: If the size of a process is not a perfect multiple of the page
size, the unused space in the last page results in internal fragmentation.

Increased Overhead: Maintaining the Page Table requires additional memory and
processing. For large processes, the page table can grow significantly, consuming
valuable memory resources.
Page Table Lookup Time: Accessing memory requires translating logical
addresses to physical addresses using the page table. This additional step increases
memory access time, although Translation Lookaside Buffers (TLBs) can help
reduce the impact.

I/O Overhead During Page Faults: When a required page is not in physical memory
(page fault), it needs to be fetched from secondary storage, causing delays and
increased I/O operations.

Complexity in Implementation: Paging requires sophisticated hardware and software support, including the Memory Management Unit (MMU) and algorithms for page replacement, which add complexity to the system.

2. Segmentation:
A process is divided into segments: chunks that are not necessarily all the same size. Segmentation gives the user’s view of the process, which paging does not provide; here the user’s view is mapped to physical memory.

Types of Segmentation in Operating Systems:


Virtual Memory Segmentation: Each process is divided into a number of segments,
but the segmentation is not done all at once. This segmentation may or may not take
place at the run time of the program.

Simple Segmentation: Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table called the Segment Table stores information about all such segments.

What is a Segment Table?

It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:

 Base Address: It contains the starting physical address where the segments
reside in memory.

 Segment Limit: Specifies the length of the segment. The offset in a logical address must be less than this limit, otherwise the reference is invalid.
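
To make the base-and-limit mechanics concrete, here is a minimal Python sketch of segment-table translation; the three-segment table and its values are invented for illustration:

```python
# Minimal sketch of segmented address translation (hypothetical segment table).
# A logical address is two-dimensional: (segment number, offset).

# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {
    0: (1400, 1000),   # e.g. code segment
    1: (6300, 400),    # e.g. stack segment
    2: (4300, 1100),   # e.g. heap segment
}

def translate(segment: int, offset: int) -> int:
    """Map a (segment, offset) logical address to a one-dimensional physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                  # protection check: offset must be < limit
        raise MemoryError("segmentation fault: offset exceeds segment limit")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
try:
    translate(1, 500)      # offset 500 exceeds limit 400
except MemoryError as e:
    print(e)
```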
Advantages of Segmentation in Operating System
Reduced Internal Fragmentation: Segmentation can reduce internal fragmentation compared to fixed-size paging, as segments can be sized according to the actual needs of a process. However, internal fragmentation can still occur if a segment is allocated more space than it actually uses.

Segment Table consumes less space in comparison to Page table in paging.

The user specifies the segment size, whereas, in paging, the hardware determines
the page size.

Segmentation is a method that can be used to segregate data from security operations.

Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments can be of variable size, and processes can be designed to have multiple segments, allowing for more fine-grained memory allocation.

Sharing: Segmentation allows for sharing of memory segments between processes. This can be useful for inter-process communication or for sharing code libraries.

Protection: Segmentation provides a level of protection between segments, preventing one process from accessing or modifying another process’s memory segment. This can help increase the security and stability of the system.

Disadvantages of Segmentation in Operating System:

External Fragmentation: As processes are loaded and removed from memory, the free memory space is broken into little pieces, causing external fragmentation. This is a notable difference from paging, which avoids external fragmentation entirely.
Overhead is associated with keeping a segment table for each process.

Due to the need for two memory accesses, one for the segment table and the other
for main memory, access time to retrieve the instruction increases.

Fragmentation: As mentioned, segmentation can lead to external fragmentation as memory becomes divided into smaller segments. This can lead to wasted memory and decreased performance.

Overhead: Using a segment table can increase overhead and reduce performance.
Each segment table entry requires additional memory, and accessing the table to
retrieve memory locations can increase the time needed for memory operations.

Complexity: Segmentation can be more complex to implement and manage than paging. In particular, managing multiple segments per process can be challenging, and the potential for segmentation faults can increase as a result.

3. Virtual Memory:
 The main objective of virtual memory is to support multiprogramming. The main advantage virtual memory provides is that a running process does not need to be entirely in memory.

 Programs can be larger than the available physical memory. Virtual memory provides an abstraction of main memory, eliminating concerns about storage limitations.

 A memory hierarchy, consisting of a computer system’s memory and a disk, enables a process to operate with only some portions of its address space in RAM, allowing more processes to be in memory.
How Does Virtual Memory Work?
Virtual Memory is a technique that is implemented using both hardware and
software. It maps memory addresses used by a program, called virtual addresses,
into physical addresses in computer memory.

 All memory references within a process are logical addresses that are
dynamically translated into physical addresses at run time. This means that a
process can be swapped in and out of the main memory such that it occupies
different places in the main memory at different times during the course of
execution.

 A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.

If these characteristics are present, it is not necessary that all the pages or segments be present in main memory during execution; the required pages are loaded into memory whenever needed. Virtual memory is implemented using Demand Paging or Demand Segmentation.
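
The following sketch illustrates demand paging in miniature: pages are loaded only on first access, and a missing page triggers a (simulated) fetch from backing storage. The tiny "disk" and all values are invented for illustration:

```python
# Minimal demand-paging sketch: pages load into memory only on first access.

backing_store = {0: "page-0 data", 1: "page-1 data", 2: "page-2 data"}  # simulated disk
memory = {}          # resident pages: page number -> contents
page_faults = 0

def access(page: int) -> str:
    """Return a page's contents, loading it from 'disk' on a page fault."""
    global page_faults
    if page not in memory:                       # page fault: page is not resident
        page_faults += 1
        memory[page] = backing_store[page]       # fetch from secondary storage
    return memory[page]

access(0)
access(1)
access(0)             # hit: page 0 is already resident, no fault
print(page_faults)    # 2: only the first touch of each page faults
```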

Difference Between Virtual Memory & Physical Memory:

 Physical memory (RAM) is the actual hardware memory installed in the system; virtual memory is an abstraction that combines RAM with disk space.
 Physical memory size is limited by the installed RAM; virtual memory lets a process use an address space larger than physical memory.
 Physical memory is accessed directly by the hardware; virtual addresses must first be translated to physical addresses by the MMU.
1.2 Resource Pool & DRS (Distributed Resource Scheduler):
1) DRS:
Q. What is a VMware DRS Cluster?

A cluster is a group of hosts connected to each other with special software that
makes them elements of a single system. At least two hosts (also called nodes) must
be connected to create a cluster. When hosts are added to the cluster, their resources
become the cluster’s resources and are managed by the cluster.

The most common types of VMware vSphere clusters are High Availability (HA)
and Distributed Resource Scheduler (DRS) clusters. HA clusters are designed to
provide high availability of virtual machines and services running on them; if a host
fails, they immediately restart the virtual machines on another ESXi host. DRS clusters
provide load balancing among ESXi hosts.

Q. How Does the DRS Cluster Work?

Distributed Resource Scheduler (DRS) is a type of VMware vSphere cluster that provides load balancing by migrating VMs from a heavily loaded ESXi host to another host that has enough computing resources, all while the VMs are still running. This approach is used to prevent overloading of ESXi hosts. Virtual machines can have
approach is used to prevent overloading of ESXi hosts. Virtual machines can have
uneven workloads at different times, and if an ESXi host is overloaded, performance
of all VMs running on that host is reduced. The VMware DRS cluster helps in this
situation by providing automatic VM migration.

For this reason, DRS is usually used in addition to HA, combining failover with load balancing. In case of failover, the virtual machines are restarted by HA on other ESXi hosts, and DRS, being aware of the available computing resources, provides recommendations for VM placement. vMotion technology is used for this live migration of virtual machines, which is transparent to users and applications.
VMware DRS features/benefits:
Optimization without manual intervention: DRS continuously monitors resource utilization and makes automatic changes, helping organizations optimize their infrastructure without manual intervention. This saves time and reduces the risk of human error, while also ensuring that virtual machines are always running on the most appropriate host.
Flexibility: Distributed Resource Scheduler can be configured to operate in either manual or automatic mode, allowing organizations to choose the level of control they want over resource allocation. It also includes features like affinity rules and anti-affinity rules, which allow administrators to specify which virtual machines should or should not run on specific hosts in a cluster.
Predictive DRS: VMware DRS includes predictive DRS, which uses historical performance data to predict future resource requirements and proactively move virtual machines to the most appropriate host. This can help organizations avoid performance issues before they occur and ensure that virtual machines always have access to the resources they need.
Load Balancing: It is the feature that optimizes the utilization of computing resources
(CPU and RAM). Utilization of processor and memory resources by each VM, as well
as the load level of each ESXi host within the cluster, is continuously monitored. The
DRS checks the resource demands of VMs and determines whether there is a better host for the VM to be placed on. If there is such a host, the DRS makes a recommendation to migrate the VM in automatic or manual mode, depending on your settings. The DRS generates these recommendations every 5 minutes, if they are necessary. The figure below illustrates the DRS performing VM migration for load-balancing purposes.
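
DRS's real placement algorithm is proprietary; the Python sketch below only illustrates the general idea of such a balancing pass, greedily recommending one migration from the most loaded host to the least loaded one. Hosts, VM demands, and the imbalance threshold are all invented:

```python
# Conceptual sketch of a DRS-style load-balancing pass (not VMware's algorithm).

hosts = {                      # hypothetical cluster: host -> {vm: CPU demand in MHz}
    "esxi-01": {"db-vm": 9000, "web-vm": 7000},
    "esxi-02": {"app-vm": 2000},
}
IMBALANCE_THRESHOLD = 3000     # assumed tolerance before recommending a migration

def recommend_migration(hosts: dict):
    """Return (vm, source, target) if moving one VM would reduce the imbalance."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    source = max(load, key=load.get)           # most loaded host
    target = min(load, key=load.get)           # least loaded host
    if load[source] - load[target] <= IMBALANCE_THRESHOLD:
        return None                            # cluster is balanced enough
    # Pick the VM on the busy host whose move best evens out the load.
    gap = (load[source] - load[target]) / 2
    vm = min(hosts[source], key=lambda v: abs(hosts[source][v] - gap))
    return vm, source, target

print(recommend_migration(hosts))  # ('web-vm', 'esxi-01', 'esxi-02')
```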

Distributed Power Management (DPM) is a power-saving feature that compares the capacity of cluster resources to the resources utilized by VMs within the cluster. If there are enough free resources in the cluster, then DPM recommends migrating the VMs from lightly loaded ESXi hosts and powering off those hosts. If the cluster needs more resources, wake-up packets are sent to power hosts back on.
Affinity Rules: These allow you some control over the placement of VMs on hosts. There are two types of rules that allow keeping VMs together or separated:

 affinity or anti-affinity rules between individual VMs;
 affinity or anti-affinity rules between groups of VMs and groups of ESXi hosts.

Let’s explore how these rules work with examples.

1. Suppose you have a database server running on one VM, a web server running on a second VM, and an application server running on a third VM. Because these servers interact with each other, the three VMs would ideally be kept together on one ESXi host to prevent overloading the network. In this case, we would select the “Keep Virtual Machines Together” (affinity) option.

2. If you have an application-level cluster deployed within VMs in a DRS cluster, you may want to ensure the appropriate level of redundancy for the application-level cluster (this provides additional availability). In this case, you could create an anti-affinity rule and select the “Separate Virtual Machines” option. Similarly, you can use this approach when one VM is a main domain controller and the second is a replica of that domain controller (Active Directory-level replication is used for domain controllers). If the ESXi host with the main domain controller VM fails, users can connect to the replicated domain controller VM, as long as the latter is running on a separate ESXi host.

3. An affinity rule between a VM and an ESXi host might be set, in particular, for licensing reasons. As you know, in a VMware DRS cluster, virtual machines can migrate between hosts. Many software licensing policies (for database software, for example) require you to buy a license for every host on which the software runs, even if there is only one VM running the software within the cluster. Thus, you should prevent such a VM from migrating to different hosts and costing you more licenses. You can accomplish this by applying an affinity rule: the VM with the database software must run only on the selected host for which you have a license. In this case, you should select the “Virtual Machines to Hosts” option. Choose “Must Run on Host” and then input the host with the license. (Alternatively, you could select “Must Not Run on Hosts in Group” and specify all unlicensed hosts.)
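
As a toy illustration of how such rules constrain placement (a conceptual sketch, not VMware's implementation; all rule, VM, and host names are invented):

```python
# Conceptual sketch: filtering candidate hosts with affinity/anti-affinity rules.

rules = [
    {"type": "vm_host_affinity", "vm": "db-vm", "must_run_on": {"esxi-licensed"}},
    {"type": "vm_anti_affinity", "vms": {"dc-primary", "dc-replica"}},
]

def candidate_hosts(vm: str, all_hosts: set, placement: dict) -> set:
    """Return the hosts where `vm` may run, given the current VM placement."""
    candidates = set(all_hosts)
    for rule in rules:
        if rule["type"] == "vm_host_affinity" and rule["vm"] == vm:
            candidates &= rule["must_run_on"]          # "Must Run on Host"
        elif rule["type"] == "vm_anti_affinity" and vm in rule["vms"]:
            for other in rule["vms"] - {vm}:           # "Separate Virtual Machines"
                candidates.discard(placement.get(other))
    return candidates

hosts = {"esxi-01", "esxi-02", "esxi-licensed"}
placement = {"dc-primary": "esxi-01"}
print(candidate_hosts("db-vm", hosts, placement))       # {'esxi-licensed'}
print(candidate_hosts("dc-replica", hosts, placement))  # esxi-01 excluded
```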

2) Resource Pool:
Resource pools are used for flexible resource management of ESXi hosts in the
DRS cluster. You can set processor and memory limits for each resource pool, then add
virtual machines to them. For example, you could create one resource pool with high
resource limits for developers’ virtual machines, a second pool with normal limits for
testers’ virtual machines, and a third pool with low limits for other users. vSphere lets
you create child and parent resource pools.
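
Below is a minimal conceptual sketch of that idea: resource pools as containers with CPU and memory limits that VMs are placed into, arranged in a parent/child hierarchy. Names and limits are invented, and real vSphere pools also have shares and reservations, which this sketch omits:

```python
# Conceptual sketch of a resource-pool hierarchy (not the vSphere API).
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    name: str
    cpu_limit_mhz: int
    mem_limit_mb: int
    vms: list = field(default_factory=list)       # (vm name, cpu MHz, mem MB)
    children: list = field(default_factory=list)  # child resource pools

    def add_vm(self, vm: str, cpu: int, mem: int):
        used_cpu = sum(c for _, c, _ in self.vms)
        used_mem = sum(m for _, _, m in self.vms)
        if used_cpu + cpu > self.cpu_limit_mhz or used_mem + mem > self.mem_limit_mb:
            raise ValueError(f"{self.name}: pool limit exceeded")
        self.vms.append((vm, cpu, mem))

root = ResourcePool("cluster", 40000, 131072)          # parent pool
dev = ResourcePool("developers", 20000, 65536)         # child: high limits
test = ResourcePool("testers", 10000, 32768)           # child: normal limits
root.children += [dev, test]

dev.add_vm("dev-vm-01", 4000, 8192)
test.add_vm("test-vm-01", 2000, 4096)
```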
Summary on DRS & Resource Pool:

In VMware, a "DRS" refers to the Distributed Resource Scheduler, a feature


that automatically balances workloads across a cluster of hosts, while a "resource
pool" is a logical container within that cluster where you can group virtual machines
and allocate specific amounts of CPU and memory to them, allowing DRS to effectively
manage resource distribution based on your defined priorities within the
pool; essentially, DRS uses resource pools to optimize resource allocation across the
cluster.

*************************************************************************************
1.3 Monitoring Performance & Alarms:

Data centre administrators are compelled not only to respond to emergent issues but also to predict and prevent them. Monitoring events and crafting alarms empower administrators to detect irregularities, pinpoint bottlenecks, and identify potential hazards early, before they escalate into critical problems for your VMware vSphere infrastructure.

This proactive methodology contributes to maintaining high availability, optimizing performance, and ensuring the well-being of the entire system. Organizations expect proactive management, along with seamless operations and optimized resource utilization.

For administrators, it becomes very easy to detect irregularities, bottlenecks, and potential hazards related to storage, disks, VM health, network health, data centre health, etc.
Q. What are the key VMware metrics to monitor?

1. CPU Usage:

A constantly capped-out CPU or an unusually high trend can lead you toward
identifying a problem with your hardware or software. Keep an eye on these metrics:

 Overall CPU usage — Unless a VM is actively running intensive programs or being hit by heavy traffic, spending hours at maximum CPU is unusual, and it isn’t good for the health of your server either. Keep an eye on VMs with high CPU usage for no apparent reason.

 CPU ready time — This measures when a VM is ready to use the CPU but cannot access physical resources. High values indicate resource contention and over-subscribed resources.

 CPU wait time — Similar to CPU ready, this occurs when a process or thread has to wait in a queue to gain access to the CPU. Look out for long I/O wait times that can degrade performance.
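
For context, vSphere reports CPU ready as a summation in milliseconds per sampling interval (20 seconds for real-time charts), and a common rule of thumb converts it to a percentage. A minimal sketch follows; the 5% alert threshold is an assumption to tune per workload:

```python
# Convert a vSphere CPU-ready summation (ms per interval) to a percentage.
# Real-time performance charts sample over 20-second intervals.

def cpu_ready_percent(ready_ms: float, interval_s: int = 20) -> float:
    return (ready_ms / (interval_s * 1000)) * 100

READY_ALERT_THRESHOLD = 5.0          # assumed threshold; tune per workload

sample_ms = 1600                     # e.g. 1600 ms of ready time in a 20 s window
pct = cpu_ready_percent(sample_ms)
print(f"CPU ready: {pct:.1f}%")      # 8.0% -> above the assumed 5% threshold
if pct > READY_ALERT_THRESHOLD:
    print("Investigate CPU contention on this host")
```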

2. Memory Usage:

If you’ve ever maxed out the memory on your personal computer, you know
what happens. Everything grinds to a halt, programs crash, and the entire OS can
freeze. Now imagine that happening to your customers.

It’s essential to track memory usage to avoid memory overcommitment and crashes. Here are the metrics worth tracking:

 Overall memory usage — As with CPU usage, resource-intensive programs can eat up memory, although the cause may also be a memory leak.

 Memory ballooning — This reclaims memory from VMs that aren’t using it at
the cost of VM performance. High ballooning degrades overall performance and
indicates memory pressure.

 Memory swapping — Similarly, memory swapping is used to handle low memory but significantly impacts performance and indicates an issue if it happens frequently.

3. Storage Metrics:

Keeping an eye on your IOPS (Input/Output Operations Per Second) gives an accurate indication of disk health.

A failing disk means lost data. Besides the server slowdowns an overused disk
causes, you’ll want to monitor these metrics closely:
 Datastore usage/availability — An unavailable datastore means no VM
provisioning, which could be a disaster in some setups. Also, look out for
excessive data store usage, which leads to poor VM performance.

 Disk usage — Running out of allocated disk space is just as damaging to individual VMs as it is to the entire server.

 IOPS — High IOPS means the storage and CPU are being used excessively,
which leads to VMs fighting over resources and significant slowdown. Very low
IOPS indicates a failing or misconfigured disk.

4. Network Metrics:

Lastly, it is good to monitor each VM’s network usage, since unusually high network throughput can indicate a problem. Keep these metrics in mind:

 Network connectivity — This simple but essential metric tracks the stability of
network access to each VM. If the network is down, there’s a problem.

 Network throughput — This is your bandwidth: how much data each VM transmits. High outliers need to be closely examined.

Q. What are the best practices for VMware performance monitoring? Explain with a suitable example.

a) Set performance thresholds and alerts:

Alerts can quickly surface issues, ping administrators when something goes wrong, and give you or your developers time to check it out.
For example:

1. Consider a scenario: An e-commerce platform experiences a sudden surge in traffic during a flash sale event. The increased load on the virtualized servers can lead to performance degradation or potential outages.

Alarm Requirement: Set CPU and memory utilization thresholds as alarms. When
these thresholds are crossed, administrators receive alerts, allowing them to
proactively allocate resources or spin up additional virtual machines to handle the
increased demand.

2. Consider a scenario: A healthcare institution relies on a virtualized environment for critical patient records and medical applications. Any downtime could impact patient care and regulatory compliance.

Alarm Requirement: Configure an alarm for storage latency. If storage latency exceeds acceptable levels, administrators are immediately notified. This ensures that patient records and medical applications remain accessible and responsive.
3. Consider a scenario: An online education platform relies on virtualized servers to
deliver courses to students globally. Any interruptions in the virtual infrastructure could
lead to students being unable to access learning materials.

Alarm Requirement: Create alarms for disk space utilization. When disk space
reaches critical levels, administrators receive alerts. This ensures that the platform
remains accessible, preventing disruptions to students’ learning experience.
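
A minimal sketch of the threshold-and-alert logic these scenarios describe; metric names, threshold values, and the notify function are all assumptions, and in practice you would configure vCenter alarms or pull real metrics via an API:

```python
# Minimal sketch of threshold-based alerting (hypothetical metrics/thresholds).

thresholds = {                   # metric -> alert threshold (assumed values)
    "cpu_percent": 85,
    "memory_percent": 90,
    "storage_latency_ms": 30,
    "disk_used_percent": 80,
}

def notify(vm: str, metric: str, value: float, limit: float):
    print(f"ALERT {vm}: {metric}={value} exceeds threshold {limit}")

def check_vm(vm: str, metrics: dict):
    """Compare a VM's current metrics against thresholds and raise alerts."""
    for metric, limit in thresholds.items():
        value = metrics.get(metric)
        if value is not None and value > limit:
            notify(vm, metric, value, limit)

check_vm("shop-web-01", {"cpu_percent": 97, "storage_latency_ms": 12})
# -> ALERT shop-web-01: cpu_percent=97 exceeds threshold 85
```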

b) Analyse performance regularly:

Make sure you regularly back up and store this historical data, as it could be helpful in the future.

c) Train your team:

Even skilled developers may not be familiar with the intricacies of running a large VM ecosystem.

Ensuring your IT team has the necessary knowledge to work with VMware is a
valuable investment — and prevents your VM environment from crashing and burning
if something goes wrong.

************************************************************************************************

1.4 Backup Strategies for VMware Virtual Environment:

Q. What is Virtual Machine Backup?

Virtual Machine (VM) backup is the process of creating a secure copy of a
virtual machine’s data, configuration, and system state to safeguard against data loss
caused by disasters, hardware failures, or software corruption. It is a critical
component of data protection strategies for businesses utilizing virtual environments,
ensuring operational continuity and minimizing downtime.

VM backup solutions enable the capture and storage of virtual machine data in a way
that allows quick and efficient restoration when needed. Backups can be created using
various methods, including:

 Full backups, which replicate the entire virtual machine;
 Incremental backups, which save only the changes made since the last backup;
 Differential backups, which save changes made since the last full backup.
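
The difference matters most at restore time, as the following sketch shows; the backup history is invented for illustration:

```python
# Sketch: which backup files a restore needs under each strategy.
# Invented backup history: Sunday full backup, then daily backups Mon-Wed.

history = ["full_sun", "mon", "tue", "wed"]

# Incremental: each backup holds changes since the PREVIOUS backup,
# so restoring Wednesday needs the full plus every increment after it.
incremental_chain = history                     # full_sun + mon + tue + wed

# Differential: each backup holds changes since the LAST FULL backup,
# so restoring Wednesday needs only the full plus Wednesday's differential.
differential_chain = [history[0], history[-1]]  # full_sun + wed

print("incremental restore:", incremental_chain)
print("differential restore:", differential_chain)
```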

Modern VM backup software offers advanced features such as data deduplication, which eliminates duplicates to save storage space; compression, which reduces the size of backup files; and encryption, which secures sensitive data against unauthorized access.
By implementing VM backups, businesses can effectively protect their virtual
environments, ensuring resilience to unexpected disruptions and maintaining
operational stability.

Q. What are the Benefits of VM Backup? How do you choose the right VM backup software?
VM backup ensures:

 business continuity,
 minimized downtime, and
 rapid restoration of operations in the event of data loss or system failure.

This level of protection is essential for maintaining customer trust and operational stability.

When choosing VM backup software, it’s worth considering a few key aspects:

 Hypervisor compatibility: Ensure the solution supports your hypervisor, such as VMware vSphere or Microsoft Hyper-V.
 Scalability and performance: Select software that grows with your business without negatively affecting performance.
 Backup types: Look for a solution that supports full, incremental, and differential backups to flexibly adapt the data protection process to your needs.

When evaluating backup solutions, remember that effective data protection should
combine performance with reliability, ensuring secure data storage and easy access
in case of need. Xopero software Backup meets these needs, helping you optimize
data protection processes for your business.

Q. What are the Best Practices for VMware Backup?

To ensure effective virtual machine backups, follow these best practices:

1. Use reliable VM backup software: Select backup software that supports your
virtualization platform, such as VMware vSphere or Microsoft Hyper-V.
2. Schedule regular backups: Plan backups at regular intervals, such as daily or
weekly, to ensure your virtual machines are consistently protected.
3. Use incremental backups: Employ incremental backups to reduce the volume of
data backed up and minimize the impact on virtual machine performance.
4. Store backups offsite: Protect against physical disasters by storing backups in
external locations, such as cloud storage.
5. Test backups regularly: Regularly test your backups to ensure they are complete
and can be restored in case of a disaster.
6. Utilize data deduplication and compression: Apply deduplication and compression
to reduce the storage space required for backups.
7. Encrypt backups: Encrypt your backups to safeguard them from unauthorized
access.

1.5 Disaster Recovery Planning:

What is a Disaster Recovery Plan (DRP)? And what are its types?

A Disaster Recovery Plan is a procedure to recover data and functionalities when a disaster, either natural or caused by a human mistake, disrupts a system. It is a contingency plan that collects the action protocol and the methodologies that should be used when one or more of a company’s IT systems fail.

The main goal of a DRP is minimizing the impact of downtime by getting mission-critical applications back into operation in the shortest time possible. This allows organizations and workers to start operating again, virtually as usual, until the issue is completely solved.

DRP Types:

 Disaster Recovery at hypervisor level.
 Disaster Recovery at storage level.
 Disaster Recovery at application level.

A Disaster Recovery plan can protect your business against many scenarios, such
as:

 Human mistakes
 Power outages
 System failures
 Faulty updates
 Natural disasters such as floods or earthquakes
 Data centre fires
 Thefts
 Cyberattacks, viruses, and other corruptions (ransomware, for instance)
What is virtual disaster recovery?

Virtual disaster recovery is a type of DR that typically involves replication and enables a user to fail over to virtualized workloads.

For the most efficient virtual disaster recovery, an organization should copy VM workloads off-site on a regular basis. Replication essentially makes a real-time copy of VMs in a separate location, strengthening DR.

Benefits of virtual disaster recovery

Virtualization provides flexibility in disaster recovery. When servers are virtualized, they are encapsulated into VMs, independent from the underlying hardware. An organization does not need the same physical servers at the primary site as at its secondary disaster recovery site.

Other benefits of virtual disaster recovery include ease, efficiency and speed.
Virtualized platforms typically provide high availability in the event of a failure.
Virtualization helps organizations meet recovery time objectives (RTOs) and recovery
point objectives (RPOs), as replication is done as frequently as needed, especially for
critical systems. DR planning and failover testing is also simpler with virtualized
workloads than with a physical setup, making disaster recovery a more attainable
process for organizations that may not have the funds or resources for physical DR.

Consolidating physical servers with virtualization saves money because the virtualized workloads require less power, floor space and maintenance. However, replication can get expensive, depending on how frequently it's done.

Recovery Time Objective (RTO):

On the one hand, the RTO or Recovery Time Objective is the maximum period of time during which you consider it acceptable for your company’s activity to be interrupted. That is to say, the tolerable period of time before downtime starts disrupting your business’s normal activity.

Recovery Point Objective (RPO):

On the other hand, the RPO or Recovery Point Objective is the point in time you are willing to go back to in order to recover your company’s data and functionalities. In other words, it represents the quantity of data a company is willing to lose between the last backup and a contingency.
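
As a quick worked illustration, a backup schedule satisfies an RPO only if the gap between consecutive backups never exceeds it; the timestamps below are invented:

```python
# Sketch: does a backup schedule satisfy the RPO? (invented example values)
from datetime import datetime, timedelta

rpo = timedelta(hours=1)            # business tolerates losing at most 1 h of data

backup_times = [                    # hypothetical recent backup history
    datetime(2024, 5, 1, 9, 0),
    datetime(2024, 5, 1, 10, 0),
    datetime(2024, 5, 1, 11, 30),   # 90-minute gap: RPO violated here
]

worst_gap = max(b - a for a, b in zip(backup_times, backup_times[1:]))
print(f"worst backup gap: {worst_gap}, RPO: {rpo}")
print("RPO met" if worst_gap <= rpo else "RPO violated: back up more frequently")
```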

A virtual disaster recovery plan has many similarities to a traditional DR plan. An organization should do the following:

 decide which systems and data are the most critical for recovery;
 document critical systems and data;
 get management support for the DR plan;
 complete a risk assessment and business impact analysis to outline possible risks and their potential impacts;
 document steps needed for recovery;
 define RTOs and RPOs; and
 test the plan.
