Unit 4-Memory - CPU Resource Allocation
The mapping between logical pages and physical page frames is maintained
by the page table, which the memory management unit (MMU) uses to translate
logical addresses into physical addresses. Each page-table entry maps a logical
page number to a physical frame number. Through the page table, the operating
system keeps track of how logical addresses (used by programs) correspond to
physical addresses (actual locations in memory).
Memory isn’t always available in a single block: programs often need more
memory than is available in one contiguous block. Paging breaks
memory into smaller, fixed-size pieces, making it easier to allocate scattered free
spaces.
Process size can increase or decrease: because programs do not need to occupy
contiguous memory, they can grow dynamically without having to be moved.
If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words
(1 G = 2^30)
If Logical Address Space = 128 M words = 2^7 * 2^20 = 2^27 words, then Logical
Address = log2(2^27) = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M
words (1 M = 2^20)
If Physical Address Space = 16 M words = 2^4 * 2^20 = 2^24 words, then Physical
Address = log2(2^24) = 24 bits
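These relationships can be checked with a few lines of Python. This is a sketch of the arithmetic only; the values are taken from the examples above:

```python
import math

# Address-space size = 2^(address width) words.
logical_bits = 27
logical_space = 2 ** logical_bits          # 2^27 = 2^7 * 2^20 = 128 M words
assert logical_space == 128 * 2 ** 20

# Going the other way: width = log2(address-space size).
physical_space = 16 * 2 ** 20              # 16 M words = 2^24 words
physical_bits = int(math.log2(physical_space))
print(physical_bits)                       # 24
```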
The mapping from virtual to physical address is done by the Memory Management
Unit (MMU) which is a hardware device and this mapping is known as the paging
technique.
Example
Physical Address = 12 bits, then Physical Address Space = 2^12 = 4 K words
Logical Address = 13 bits, then Logical Address Space = 2^13 = 8 K words
Page size = frame size = 1 K words (assumption)
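Using the numbers in this example (13-bit logical address, 12-bit physical address, 1 K-word pages), the MMU's translation step can be sketched as follows. The page-table contents here are invented for illustration:

```python
PAGE_SIZE = 1024                          # 1 K words -> 10-bit offset

# Hypothetical page table: logical page number -> physical frame number.
# 13-bit logical address => 8 pages; 12-bit physical address => 4 frames.
page_table = {0: 2, 1: 0, 2: 3, 3: 1}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE      # high-order bits select the page
    offset = logical_addr % PAGE_SIZE     # low-order bits are copied unchanged
    frame = page_table[page]              # MMU looks up the frame number
    return frame * PAGE_SIZE + offset

# Logical address 1049 = page 1, offset 25 -> frame 0, offset 25 = address 25.
print(translate(1049))   # 25
```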
Advantages of Paging:
Eliminates External Fragmentation: Paging divides memory into fixed-size blocks
(pages and frames), so processes can be loaded wherever there is free space in
memory. This prevents wasted space due to fragmentation.
Ease of Swapping: Individual pages can be moved between physical memory and
disk (swap space) without affecting the entire process, making swapping faster and
more efficient.
Improved Security and Isolation: Each process works within its own set of pages,
preventing one process from accessing another’s memory space.
Disadvantages of Paging:
Internal Fragmentation: If the size of a process is not a perfect multiple of the page
size, the unused space in the last page results in internal fragmentation.
Increased Overhead: Maintaining the Page Table requires additional memory and
processing. For large processes, the page table can grow significantly, consuming
valuable memory resources.
Page Table Lookup Time: Accessing memory requires translating logical
addresses to physical addresses using the page table. This additional step increases
memory access time, although Translation Lookaside Buffers (TLBs) can help
reduce the impact.
I/O Overhead During Page Faults: When a required page is not in physical memory
(page fault), it needs to be fetched from secondary storage, causing delays and
increased I/O operations.
2. Segmentation:
A process is divided into segments. The chunks a program is divided into,
which are not necessarily all the same size, are called segments. Segmentation
reflects the user’s view of the process, which paging does not provide: here the
user’s view is mapped onto physical memory.
Base Address: It contains the starting physical address where the segment
resides in memory.
Segment Limit: It specifies the length of the segment; every offset within the
segment must be less than this limit.
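The base/limit check performed on every segmented memory access can be sketched like this; the segment-table values are hypothetical:

```python
# Hypothetical segment table: segment number -> (base address, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                   # offset must fall within the segment
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset                  # physical address = base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```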
Advantages of Segmentation in Operating System
Reduced Internal Fragmentation: Segmentation can reduce internal fragmentation
compared to fixed-size paging, as segments can be sized according to the actual
needs of a process. However, internal fragmentation can still occur if a segment is
allocated more space than it actually uses.
The user specifies the segment size, whereas, in paging, the hardware determines
the page size.
Disadvantages of Segmentation in Operating System
Increased Access Time: Due to the need for two memory accesses, one for the
segment table and the other for main memory, the time to retrieve an instruction
increases.
Overhead: Using a segment table can increase overhead and reduce performance.
Each segment table entry requires additional memory, and accessing the table to
retrieve memory locations can increase the time needed for memory operations.
3. Virtual Memory:
The main objective of virtual memory is to support multiprogramming. The main
advantage that virtual memory provides is that a running process does not need to
be entirely in memory.
Programs can be larger than the available physical memory. Virtual Memory
provides an abstraction of main memory, eliminating concerns about storage
limitations.
All memory references within a process are logical addresses that are
dynamically translated into physical addresses at run time. This means that a
process can be swapped in and out of the main memory such that it occupies
different places in the main memory at different times during the course of
execution.
A process may be broken into a number of pieces, and these pieces need not be
contiguously located in the main memory during execution. The combination of
dynamic run-time address translation and the use of a page or segment table
permits this.
If these characteristics are present then, it is not necessary that all the pages or
segments are present in the main memory during execution. This means that the
required pages need to be loaded into memory whenever required. Virtual memory
is implemented using Demand Paging or Demand Segmentation.
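The demand-paging idea, loading a page only when it is first touched, can be sketched as follows. This is a minimal illustration, not a real OS implementation; the simulated "disk read" is just a comment:

```python
class DemandPager:
    def __init__(self):
        self.in_memory = {}              # page -> frame, resident pages only
        self.next_frame = 0
        self.page_faults = 0

    def access(self, page):
        if page not in self.in_memory:   # page fault: page is not resident
            self.page_faults += 1
            # ... a real OS would read the page in from swap space here ...
            self.in_memory[page] = self.next_frame
            self.next_frame += 1
        return self.in_memory[page]

pager = DemandPager()
for p in [0, 1, 0, 2, 1]:                # repeated accesses fault only once
    pager.access(p)
print(pager.page_faults)                 # 3: pages 0, 1, 2 loaded on first touch
```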
A cluster is a group of hosts connected to each other with special software that
makes them elements of a single system. At least two hosts (also called nodes) must
be connected to create a cluster. When hosts are added to the cluster, their resources
become the cluster’s resources and are managed by the cluster.
The most common types of VMware vSphere clusters are High Availability (HA)
and Distributed Resource Scheduler (DRS) clusters. HA clusters are designed to
provide high availability of virtual machines and the services running on them; if a
host fails, the cluster immediately restarts its virtual machines on another ESXi host.
DRS clusters provide load balancing among ESXi hosts.
For this reason, DRS is usually used in addition to HA, combining failover with
load balancing. In the case of a failover, the virtual machines are restarted by HA on
other ESXi hosts, and DRS, being aware of the available computing resources,
provides recommendations for VM placement. vMotion technology is used for this
live migration of virtual machines, which is transparent to users and applications.
VMware DRS features/benefits:
Optimization of infrastructure without manual intervention by continuously
monitoring resource utilization and making automatic changes. It helps organizations
optimize their infrastructure without manual intervention. This saves time and reduces
the risk of human error, while also ensuring that virtual machines are always running
on the most appropriate host.
Flexibility. Distributed Resource Scheduler can be configured to operate in either
manual or automatic mode, allowing organizations to choose the level of control they
want over resource allocation. It also includes features like affinity rules and anti-
affinity rules, which allow administrators to specify which virtual machines should or
should not run on specific hosts in a cluster.
VMware DRS includes predictive DRS, which uses historical performance data to
predict future resource requirements and proactively move virtual machines to the
most appropriate host. This can help organizations avoid performance issues before
they occur and ensure that virtual machines always have access to the resources they
need.
Load Balancing: It is the feature that optimizes the utilization of computing resources
(CPU and RAM). Utilization of processor and memory resources by each VM, as well
as the load level of each ESXi host within the cluster, is continuously monitored. The
DRS checks the resource demands of VMs and determines whether there is a better
host for the VM to be placed on. If there is such a host, DRS makes a
recommendation to migrate the VM in automatic or manual mode, depending on your
settings. DRS generates these recommendations every 5 minutes, if they are
necessary. The figure below illustrates the DRS performing VM migration for load
balancing purposes.
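A highly simplified sketch of the kind of recommendation DRS produces is shown below. The host names, load figures, and the selection heuristic are invented for illustration; the real algorithm weighs many more factors:

```python
# Hypothetical CPU load per ESXi host (percent) and the load each VM
# on the busiest host contributes.
hosts = {"esxi-1": 90, "esxi-2": 30, "esxi-3": 45}
vm_load = {"vm-db": 40, "vm-web": 25}

def recommend_migration(hosts, vm_load):
    busiest = max(hosts, key=hosts.get)      # most loaded host
    calmest = min(hosts, key=hosts.get)      # least loaded host
    vm = max(vm_load, key=vm_load.get)       # heaviest VM on the busiest host
    # Recommend a move only if it actually narrows the imbalance.
    if hosts[busiest] - hosts[calmest] > vm_load[vm]:
        return f"migrate {vm} from {busiest} to {calmest}"
    return "no migration recommended"

print(recommend_migration(hosts, vm_load))
# migrate vm-db from esxi-1 to esxi-2
```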
1. Suppose you have a database server running on one VM, a web server running on
a second VM, and an application server running on a third VM. Because these servers
interact with each other, the three VMs would ideally be kept together on one ESXi
host to prevent overloading the network. In this case, we would select the “Keep Virtual
Machines Together” (affinity) option.
2. If you have an application-level cluster deployed within VMs in a DRS cluster, you
may want to ensure the appropriate level of redundancy for the application-level
cluster (this provides additional availability). In this case, you could create an anti-
affinity rule and select the “Separate Virtual Machines” option. Similarly, you can use
this approach when one VM is a main domain controller and the second is a replica of
that domain controller (Active Directory level replication is used for domain controllers).
If the ESXi host with the main domain controller VM fails, users can connect to the
replicated domain controller VM, as long as the latter is running on a separate ESXi
host.
3. An affinity rule between a VM and an ESXi host might be set, in particular, for
licensing reasons. As you know, in a VMware DRS cluster, the virtual machines can
migrate between hosts. Many software licensing policies – such as database software,
for example – require you to buy a license for all hosts on which the software runs,
even if there is only one VM running the software within the cluster. Thus, you should
prevent such a VM from migrating to different hosts and costing you more licenses. You
can accomplish this by applying an affinity rule: the VM with database software must
run only on the selected host for which you have a license. In this case, you should
select the “Virtual Machines to Hosts” option. Choose “Must Run on Host” and then
input the host with the license. (Alternatively, you could select “Must Not Run on Hosts
in Group” and specify all unlicensed hosts.)
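The three rule types above can be sketched as a small placement validator. The VM names, host names, and rules are invented for illustration:

```python
# Hypothetical current placement: VM -> ESXi host.
placement = {"vm-db": "esxi-1", "vm-web": "esxi-1",
             "vm-dc1": "esxi-1", "vm-dc2": "esxi-2"}

rules = [
    ("keep_together", ["vm-db", "vm-web"]),     # affinity (example 1)
    ("separate",      ["vm-dc1", "vm-dc2"]),    # anti-affinity (example 2)
    ("must_run_on",   ["vm-db"], "esxi-1"),     # VM-to-host, licensing (example 3)
]

def violations(placement, rules):
    bad = []
    for rule in rules:
        kind, vms = rule[0], rule[1]
        used_hosts = {placement[vm] for vm in vms}
        if kind == "keep_together" and len(used_hosts) > 1:
            bad.append(rule)                    # VMs drifted apart
        elif kind == "separate" and len(used_hosts) < len(vms):
            bad.append(rule)                    # VMs share a host
        elif kind == "must_run_on" and used_hosts != {rule[2]}:
            bad.append(rule)                    # VM left the licensed host
    return bad

print(len(violations(placement, rules)))        # 0: all three rules satisfied
```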
2) Resource Pool:
Resource pools are used for flexible resource management of ESXi hosts in the
DRS cluster. You can set processor and memory limits for each resource pool, then add
virtual machines to them. For example, you could create one resource pool with high
resource limits for developers’ virtual machines, a second pool with normal limits for
testers’ virtual machines, and a third pool with low limits for other users. vSphere lets
you create child and parent resource pools.
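The parent/child limit structure described above can be sketched as follows; pool names and MHz limits are invented for illustration:

```python
class ResourcePool:
    def __init__(self, name, cpu_limit_mhz, parent=None):
        self.name = name
        self.cpu_limit = cpu_limit_mhz
        self.parent = parent
        self.used = 0

    def allocate(self, mhz):
        # A child may never exceed its own limit or any ancestor's limit.
        pool = self
        while pool:
            if pool.used + mhz > pool.cpu_limit:
                raise RuntimeError(f"limit exceeded in pool {pool.name}")
            pool = pool.parent
        pool = self
        while pool:                      # account usage up the hierarchy
            pool.used += mhz
            pool = pool.parent

root = ResourcePool("cluster", 10000)
dev = ResourcePool("developers", 6000, parent=root)   # high limit
test = ResourcePool("testers", 3000, parent=root)     # normal limit

dev.allocate(4000)       # fine: within both the child and parent limits
test.allocate(2000)
print(root.used)         # 6000 MHz accounted at the cluster root
```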
Summary of DRS & Resource Pool:
*************************************************************************************
1.3 Monitoring Performance & Alarms:
1. CPU Usage:
A constantly capped-out CPU or an unusually high trend can lead you toward
identifying a problem with your hardware or software. Keep an eye on these metrics:
CPU ready time — This measures when a VM is ready to use the CPU but
cannot access physical resources. High metrics indicate resource contention
and over-subscribing resources.
CPU wait time — Similar to CPU ready time, this occurs when a process or thread
has to wait in a queue to gain access to the CPU. Look out for long I/O wait
times that can degrade performance.
2. Memory Usage:
If you’ve ever maxed out the memory on your personal computer, you know
what happens. Everything grinds to a halt, programs crash, and the entire OS can
freeze. Now imagine that happening to your customers.
It’s essential to track memory usage to avoid memory overcommitment and crashing.
Here are the metrics worth tracking:
Memory ballooning — This reclaims memory from VMs that aren’t using it at
the cost of VM performance. High ballooning degrades overall performance and
indicates memory pressure.
3. Storage Metrics:
A failing disk means lost data. Besides the server slowdowns an overused disk
causes, you’ll want to monitor these metrics closely:
Datastore usage/availability — An unavailable datastore means no VM
provisioning, which could be a disaster in some setups. Also, look out for
excessive datastore usage, which leads to poor VM performance.
IOPS — High IOPS means the storage and CPU are being used excessively,
which leads to VMs fighting over resources and significant slowdown. Very low
IOPS indicates a failing or misconfigured disk.
4. Network Metrics:
Lastly, it is good practice to monitor each VM’s network usage, since unusually high
network throughput can indicate a problem. Keep these metrics in mind:
Network connectivity — This simple but essential metric tracks the stability of
network access to each VM. If the network is down, there’s a problem.
Q. What are the best practices for VMware performance monitoring? Explain
with suitable examples.
Alarm Requirement: Set CPU and memory utilization thresholds as alarms. When
these thresholds are crossed, administrators receive alerts, allowing them to
proactively allocate resources or spin up additional virtual machines to handle the
increased demand.
Alarm Requirement: Create alarms for disk space utilization. When disk space
reaches critical levels, administrators receive alerts. This ensures that the platform
remains accessible, preventing disruptions to students’ learning experience.
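The two alarm requirements above can be sketched as a simple threshold check. The threshold values and metric names are invented for illustration; in vSphere these would be configured as alarm definitions:

```python
# Hypothetical alarm thresholds, in percent utilization.
THRESHOLDS = {"cpu": 85, "memory": 90, "datastore": 80}

def check_alarms(metrics):
    """Return an alert message for every metric above its threshold."""
    return [
        f"ALERT: {name} at {value}% (threshold {THRESHOLDS[name]}%)"
        for name, value in metrics.items()
        if value > THRESHOLDS[name]
    ]

alerts = check_alarms({"cpu": 92, "memory": 70, "datastore": 81})
for alert in alerts:
    print(alert)          # fires for cpu and datastore, not for memory
```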
Ensuring your IT team has the necessary knowledge to work with VMware is a
valuable investment — and prevents your VM environment from crashing and burning
if something goes wrong.
************************************************************************************************
VM backup solutions enable the capture and storage of virtual machine data in a way
that allows quick and efficient restoration when needed. Reliable backups support:
business continuity,
minimizing downtime, and
rapid restoration of operations in the event of data loss or system
failure.
This level of protection is essential for maintaining customer trust and operational
stability.
When choosing VM backup software, it’s worth considering a few key aspects:
When evaluating backup solutions, remember that effective data protection should
combine performance with reliability, ensuring secure data storage and easy access
in case of need. Xopero software Backup meets these needs, helping you optimize
data protection processes for your business.
1. Use reliable VM backup software: Select backup software that supports your
virtualization platform, such as VMware vSphere or Microsoft Hyper-V.
2. Schedule regular backups: Plan backups at regular intervals, such as daily or
weekly, to ensure your virtual machines are consistently protected.
3. Use incremental backups: Employ incremental backups to reduce the volume of
data backed up and minimize the impact on virtual machine performance.
4. Store backups offsite: Protect against physical disasters by storing backups in
external locations, such as cloud storage.
5. Test backups regularly: Regularly test your backups to ensure they are complete
and can be restored in case of a disaster.
6. Utilize data deduplication and compression: Apply deduplication and compression
to reduce the storage space required for backups.
7. Encrypt backups: Encrypt your backups to safeguard them from unauthorized
access.
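Point 3 (incremental backups) rests on detecting which parts of a virtual disk changed since the last backup. One common approach, sketched below with an in-memory byte array standing in for the disk, is block-level change detection by hashing:

```python
import hashlib

def block_hashes(disk, block_size=4096):
    """Hash each fixed-size block of a (simulated) virtual disk."""
    return [hashlib.sha256(disk[i:i + block_size]).hexdigest()
            for i in range(0, len(disk), block_size)]

def incremental_backup(disk, previous_hashes):
    """Return the indexes of blocks that changed since the last backup."""
    current = block_hashes(disk)
    changed = [i for i, h in enumerate(current)
               if i >= len(previous_hashes) or h != previous_hashes[i]]
    return changed, current

disk = bytearray(4096 * 4)                 # a 4-block "virtual disk"
_, baseline = incremental_backup(disk, []) # full backup: all blocks are new
disk[4096:4100] = b"edit"                  # modify data inside block 1 only
changed, _ = incremental_backup(disk, baseline)
print(changed)                             # [1]: only block 1 is re-copied
```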
DRP Types:
A Disaster Recovery plan can protect your business against many scenarios, such
as:
Human mistakes
Power outages
System failures
Faulty updates
Natural disasters such as floods or earthquakes
Data centre fires
Thefts
Cyberattacks, virus and other corruptions (ransomware, for instance)
What is virtual disaster recovery?
Virtual disaster recovery uses virtualized workloads and replication, rather than
dedicated physical standby hardware, to restore systems after a failure. Other
benefits of virtual disaster recovery include ease, efficiency and speed.
Virtualized platforms typically provide high availability in the event of a failure.
Virtualization helps organizations meet recovery time objectives (RTOs) and recovery
point objectives (RPOs), as replication is done as frequently as needed, especially for
critical systems. DR planning and failover testing is also simpler with virtualized
workloads than with a physical setup, making disaster recovery a more attainable
process for organizations that may not have the funds or resources for physical DR.
On the one hand, the RTO, or Recovery Time Objective, is the maximum
period of time during which you consider it acceptable for your company’s activity to
be interrupted. That is to say, the tolerable period of time before downtime starts
disrupting your business’s normal activity.
On the other hand, the RPO, or Recovery Point Objective, is the previous point
in time you are willing to go back to in order to recover your company’s data and
functionality. In other words, this represents the quantity of data a company is willing
to lose between the last backup and a contingency.
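Since the worst-case data loss equals the time since the last backup, the RPO directly constrains how often you must back up. A minimal sketch of that check, with invented numbers:

```python
def rpo_compliant(backup_interval_hours, rpo_hours):
    # Worst-case data loss equals the time since the last backup,
    # so the backup interval must not exceed the RPO.
    return backup_interval_hours <= rpo_hours

print(rpo_compliant(24, 4))   # False: daily backups cannot meet a 4-hour RPO
print(rpo_compliant(1, 4))    # True: hourly backups cap worst-case loss at 1 h
```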
When building a DR plan, first decide which systems and data are the most critical
for recovery.
References:
https://www.vinchin.com/vm-tips/vsphere-resource-pool.html
https://www.nakivo.com/blog/what-is-vmware-drs-cluster
https://www.geeksforgeeks.org/virtual-memory-in-operating-system
https://www.cloudthat.com/resources/blog/mastering-proactive-management-monitoring-events-crafting-alarms-in-vmware-vsphere-8-0
https://www.liquidweb.com/blog/vmware-performance-monitoring/
https://xopero.com/blog/en/virtual-machine-backup-a-step-by-step-guide-to-data-protection/