
Virtual Memory

• It is a technique that allows the execution of processes that may not be completely in memory, i.e., programs can be executed even when the program size (logical address space) is greater than the available physical memory.
• If main memory is smaller than logical memory, programs are loaded into main memory through a technique called swapping.
• Virtual memory is the separation of user logical memory from physical memory. This allows an extremely large virtual memory to be provided even when only a small physical memory is available.
Advantages:
• Efficient main memory utilization.
• Ability to execute a program that is only partially in main memory.
• Makes the task of programming much easier.

N.B:
Virtual memory is commonly implemented by demand paging.
Demand Paging
• It is similar to a paging system with swapping.
Criteria:
A page is not loaded into main memory from secondary memory until it is needed. So, a page is loaded into main memory only on demand; hence, it is called demand paging.
Basic Concepts:
• To implement this scheme, we need hardware support to distinguish between the pages that are in memory and the pages that are on disk.
• The valid-invalid bit can be used for this purpose.
• When this bit is set to "valid", it indicates that the associated page is legal and in memory.
• When it is set to "invalid", it indicates that the page is either not valid (i.e., not in the logical address space of the process) or is valid but currently on disk.
• The page-table entry for a page that is brought into memory is marked valid, while the entry for a page that is not currently in memory is marked invalid or holds the address of the page on disk.
• Pages marked as invalid will have no effect if the process never tries to
access that page.
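
The valid-invalid bit can be pictured with a minimal Python sketch (the names PageTableEntry, PageFault and translate are purely illustrative, not taken from any real OS):

```python
# Illustrative sketch: a page-table entry with a valid-invalid bit.
class PageFault(Exception):
    """Raised when a page marked invalid is referenced."""

class PageTableEntry:
    def __init__(self):
        self.valid = False      # False: page not in memory (or not in the address space)
        self.frame = None       # frame number, meaningful only while valid is True
        self.disk_addr = None   # where the page lives on the backing store

def translate(page_table, page_no):
    entry = page_table[page_no]
    if not entry.valid:
        raise PageFault(page_no)   # access to an invalid page traps to the OS
    return entry.frame
```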
Demand paging with a page table when some pages are not in main memory

But, if a process tries to access a page that was not brought into memory, then what happens?

Access to a page marked invalid causes a page-fault trap.
Page fault
• When a process needs to execute a particular page and that page is not available in main memory, this situation is called a page fault.
• To handle a page fault, a series of steps is executed as follows:

1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation (i.e., the disk read) is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to valid to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted
from the beginning.
Steps in handling a page fault
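
A hedged Python sketch of steps 1-6 (handle_page_fault and the process, disk and free_frames objects are illustrative stand-ins; a real handler runs inside the kernel and manipulates hardware page tables directly):

```python
# Illustrative page-fault handler following steps 1-6 above.
# Assumes a free frame is available; victim selection is covered under page replacement.
def handle_page_fault(process, page_no, page_table, free_frames, disk):
    entry = page_table[page_no]
    # 1-2. Validate the reference; terminate the process if it is illegal.
    if page_no not in process.address_space:
        process.terminate()
        return
    # 3. Locate a free frame.
    frame = free_frames.pop()
    # 4. Schedule a disk read to bring the page in; the process blocks meanwhile.
    disk.read(entry.disk_addr, into_frame=frame)
    # 5. Update the page table and mark the entry valid.
    entry.frame = frame
    entry.valid = True
    # 6. Restart the instruction that caused the fault.
    process.restart_instruction()
```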
Pure demand paging
• Pure demand paging is a memory management technique where no
pages of a program are initially loaded into RAM.
• Pages are loaded into memory only when they are accessed during
program execution.
• Here, a page fault is generated whenever an attempt is made to access a page that is not currently in RAM. The process keeps demanding pages until, at some point, every page it needs is resident and it can execute with no more faults. Since no page is ever brought in before it is demanded, the scheme is called pure demand paging.
Performance of demand paging
• There are many steps that occur when servicing a page fault, and some of the steps are optional or variable.
• Suppose that a normal memory access requires 200 nanoseconds and that servicing a page fault takes 8 milliseconds (8,000,000 nanoseconds). With a page-fault rate p (on a scale from 0 to 1), the effective access time is:
effective access time = (1 - p) * ma + p * page-fault time
= (1 - p) * 200 + p * 8,000,000
= 200 + 7,999,800 * p
• Hence, the effective access time grows linearly with the page-fault rate, so it is important to keep the page-fault rate low; otherwise, the effective access time increases sharply.
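
As a quick check of the arithmetic above, a few lines of Python reproduce the same numbers:

```python
# Effective access time (EAT) for demand paging, using the figures above.
ma = 200                  # memory access time in ns
fault_time = 8_000_000    # page-fault service time in ns (8 ms)

def eat(p):
    return (1 - p) * ma + p * fault_time   # = 200 + 7_999_800 * p

print(eat(0.0))      # 200.0 ns: no faults
print(eat(0.001))    # 8199.8 ns: one fault per 1000 accesses slows memory down ~40x
```

Keeping the slowdown below 10% (an effective access time under 220 ns) requires p < 20 / 7,999,800, roughly 0.0000025, i.e. fewer than one fault per 400,000 memory accesses.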
Page Replacement
• In order to make the most use of virtual memory, we load several
processes into memory at the same time. Since we only load the
pages that are actually needed by each process at any given time,
there is room to load many more processes than if we had to load in
the entire process.
• However, memory is also needed for other purposes (such as I/O buffering), and what happens if some process suddenly decides it needs more pages and there aren't any free frames available? In that case, the following steps are performed in the page-fault routine for page replacement (a short sketch follows the list):
• Find the location of the desired page in the backing store.
• Find a free frame:
a) If a free frame exists, use it.
b) Otherwise, use a page-replacement algorithm to select a victim frame.
c) Write the victim page out to disk and change the page and frame tables accordingly.
• Read the desired page into the newly freed frame; change the contents of the page and frame tables.
• Restart the user process.
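
A sketch of steps a)-c) in Python, extending the earlier fault-handler sketch (get_frame, select_victim and the disk object are illustrative placeholders for whatever replacement algorithm and I/O layer are in use):

```python
# Illustrative sketch of steps a)-c) above; select_victim stands in for any
# of the replacement algorithms described in the following sections.
def get_frame(free_frames, page_table, disk, select_victim):
    if free_frames:                       # a) a free frame exists: use it
        return free_frames.pop()
    victim_page = select_victim()         # b) choose a victim page/frame
    victim = page_table[victim_page]
    # c) write the victim page out to disk and update its page-table entry
    #    (a real system can skip this write when a modify/dirty bit shows the
    #    page was never changed while in memory).
    disk.write(victim.disk_addr, from_frame=victim.frame)
    victim.valid = False
    return victim.frame
```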
Page replacement
Implementation of a demand paging system
a) There are two major requirements to implement a successful demand paging
system.
b) We must develop a frame-allocation algorithm and a page-replacement
algorithm. The former centers around how many frames are allocated to each
process ( and to other needs ), and the latter deals with how to select a page for
replacement when there are no free frames available.
c) The overall goal in selecting and tuning these algorithms is to generate the fewest
number of overall page faults. Because disk access is so slow relative to memory
access, even slight improvements to these algorithms can yield large
improvements in overall system performance.
d) Algorithms are evaluated using a given string of memory accesses known as
a reference string.
N.B: As the number of available frames increases, the number of page faults decreases.
Page Replacement Algorithm
FIFO Page Replacement Algorithm
• A simple page replacement strategy is FIFO, i.e. first-in-first-out.
• As new pages are brought in, they are added to the tail of a queue,
and the page at the head of the queue is the next victim.
• Although FIFO is simple and easy, it is not always optimal, or even
efficient.
Example:

In the above example, 20 page requests result in 15 page faults. So, the page fault rate = No. of page faults / No. of page references = 15/20 = 75%.
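
The slide's figure is not reproduced here, but the classic 20-request reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames gives exactly the 15 faults quoted above. A minimal FIFO simulation in Python:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # no free frame: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 faults for this 20-request string
```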
Belady’s anomaly

Generally, as the number of frames available increases, the number of page faults decreases. But for some page replacement algorithms, the page-fault rate may increase as the number of allocated frames increases. This most unexpected result is known as Belady's anomaly. For example, FIFO on the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 incurs 9 faults with 3 frames but 10 faults with 4 frames.
Optimal Page Replacement Algorithm
• This algorithm is simply "Replace the page that will not be used for the longest time in the future."
• It yields the lowest possible page-fault rate of all algorithms, and it does not suffer from Belady's anomaly.
• But it is difficult to implement, because it requires future knowledge of the reference string.
In the above example, 20 page requests result in 9 page faults. So, the page fault rate = No. of page faults / No. of page references = 9/20 = 45%.
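
Using the same 20-request reference string and 3 frames, the optimal policy produces the 9 faults quoted above. A sketch (inefficient but clear, since it rescans the rest of the string on every fault):

```python
def optimal_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # Evict the resident page whose next use lies farthest in the future
            # (or that is never used again).
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float('inf')
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # 9 faults: the minimum possible for this string
```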
LRU Page Replacement Algorithm
• The Least Recently Used (LRU) algorithm replaces the page that has not been used for the longest time, on the assumption that such a page is the one least likely to be used again in the near future.
• LRU is considered a good replacement policy and is often used. The problem is how exactly to implement it. There are two simple approaches commonly used:
• Counters. Every memory access increments a counter, and the current value of this counter is stored in the page-table entry of the page being referenced. Finding the LRU page then involves simply searching the table for the page with the smallest counter value. Note that overflow of the counter must be considered.
• Stack. Another approach is to keep a stack of page numbers: whenever a page is referenced, it is pulled from the middle of the stack and placed on top. The LRU page is then always at the bottom of the stack. Because this requires removing entries from the middle of the stack, a doubly linked list is the recommended data structure.
In the above example, 20 page requests result in 12 page faults. So, the page fault rate = No. of page faults / No. of page references = 12/20 = 60%.
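
For the same reference string and 3 frames, LRU gives the 12 faults quoted above. A simulation sketch that tracks the time of last use for each page:

```python
def lru_faults(refs, nframes):
    frames, last_used, faults = set(), {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                # Evict the resident page whose last use lies furthest in the past.
                lru = min(frames, key=lambda p: last_used[p])
                frames.remove(lru)
            frames.add(page)
        last_used[page] = i
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults with 3 frames
```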
LFU & MFU Page Replacement Algorithms
We can keep a counter of the number of references that have been made to each page and develop two schemes: LFU and MFU.

LFU Page Replacement Algorithm

It requires that the page with the smallest count be replaced, the reasoning being that an actively used page should have a large reference count.
Reference String (3 frames):

Reference: 7  0  2  4  3  1  4  7  2  0  4  3  0  3  2  7
Frame 3:   -  -  2  2  2  1  1  1  2  2  2  3  3  3  3  3
Frame 2:   -  0  0  0  3  3  3  7  7  0  0  0  0  0  2  7
Frame 1:   7  7  7  4  4  4  4  4  4  4  4  4  4  4  4  4
Miss/Hit:  M  M  M  M  M  M  H  M  M  M  H  M  H  H  M  M

In the above example, 16 page requests result in 12 page faults. So, the page fault rate = No. of page faults / No. of page references = 12/16 = 75%.
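
A sketch that reproduces the 12 faults in the table, under two assumptions the slide does not state explicitly: a page's reference count is discarded when it is evicted, and ties are broken in FIFO order:

```python
def lfu_faults(refs, nframes):
    frames, count, loaded_at, faults = set(), {}, {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                # Evict the page with the smallest reference count;
                # break ties in favour of the page loaded earliest (FIFO).
                victim = min(frames, key=lambda p: (count[p], loaded_at[p]))
                frames.remove(victim)
                del count[victim]          # counts here are reset on eviction
            frames.add(page)
            count[page] = 0
            loaded_at[page] = i
        count[page] += 1
    return faults

refs = [7, 0, 2, 4, 3, 1, 4, 7, 2, 0, 4, 3, 0, 3, 2, 7]
print(lfu_faults(refs, 3))   # 12 faults with 3 frames, matching the table above
```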
MFU Page Replacement Algorithm
• In contrast to LFU, the MFU policy replaces the page that has been referenced the most times.
• This approach attempts to maintain pages that are used regularly and
eliminate those that were used frequently in the past but are no
longer used as frequently.
LRU Approximation Page Replacement Algorithm
• Each page has a reference bit.
• All pages are arranged in a circular list (like a clock).
• When a page is referenced, its reference bit is set to 1.
• When a replacement is needed:
• The algorithm scans the pages in a circular manner.
• If it finds a page with a reference bit of 0, it replaces it.
• If the bit is 1, it gives the page a “second chance” by clearing the bit and
moving on.
Second Chance Algorithm
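
A Python sketch of this clock-style second-chance scheme (clock_faults is an illustrative name; in a real system the reference bit is set by the MMU on each access, not by software):

```python
def clock_faults(refs, nframes):
    frames = [None] * nframes        # pages arranged in a circular list of frames
    ref_bit = [0] * nframes          # one reference bit per frame
    where = {}                       # page -> frame index for resident pages
    hand, faults = 0, 0
    for page in refs:
        if page in where:
            ref_bit[where[page]] = 1     # referenced: set the bit
            continue
        faults += 1
        # Advance the hand, giving second chances (clearing bits) until a 0 bit is found.
        while ref_bit[hand] == 1:
            ref_bit[hand] = 0
            hand = (hand + 1) % nframes
        if frames[hand] is not None:
            del where[frames[hand]]      # evict the victim in this frame
        frames[hand] = page
        where[page] = hand
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(clock_faults(refs, 3))   # fault count for the sample string used earlier
```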
Frame Allocation Methods

Minimum Number of Frames


• The allocation of frames is constrained in various ways.
• We cannot allocate more than the total number of available frames.
• We must also allocate at least a minimum number of frames.
• One reason for allocating at least a minimum number of frames involves performance: as the number of frames allocated to each process decreases, the page-fault rate increases, slowing process execution.
• The minimum number of frames is defined by the computer architecture, and the maximum number of frames is defined by the amount of available physical memory.
Allocation Algorithms
• Equal Allocation - If there are m frames available and n processes to
share them, each process gets m/n frames, and the leftovers are kept
in a free-frame buffer pool.
Example: if the system has 48 frames and 9 processes, each process will
get 5 frames. The three frames which are not allocated to any process
can be used as a free-frame buffer pool.
• In systems with processes of varying sizes, it does not make much
sense to give each process equal frames. Allocation of a large number
of frames to a small process will eventually lead to the wastage of a
large number of allocated unused frames.
• Proportional Allocation - Allocate the frames proportionally to the size
of the process, relative to the total size of all processes.
• For a process pi of size si, the number of allocated frames is ai =
(si/S)*m, where S is the sum of the sizes of all the processes and m is
the number of frames in the system.
Example: in a system with 62 frames, if there is a process of 10KB and
another process of 127KB, then the first process will be allocated
(10/137)*62 = 4 frames and the other process will get (127/137)*62 =
57 frames.
• All the processes share the available frames according to their needs,
rather than equally.
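
Both schemes are easy to express directly; the small Python sketch below reproduces the two examples above (frame counts are truncated to whole frames, and any leftover frames would go to the free-frame buffer pool):

```python
def equal_allocation(m, n):
    # Each of the n processes gets m // n frames; the remainder stays in the pool.
    return [m // n] * n

def proportional_allocation(sizes, m):
    # ai = (si / S) * m, truncated to whole frames.
    S = sum(sizes)
    return [(s * m) // S for s in sizes]

print(equal_allocation(48, 9))              # nine processes get 5 frames each; 3 remain
print(proportional_allocation([10, 127], 62))   # [4, 57], as in the example above
```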
Global versus Local Allocation
• The number of frames allocated to a process can also dynamically
change depending on whether we have used global
replacement or local replacement for replacing pages in case of a
page fault.
• Local replacement: When a process needs a page which is not in the
memory, it can bring in the new page and allocate it a frame from its
own set of allocated frames only.
• Advantage: The set of pages in memory for a particular process, and hence its page-fault ratio, is affected by the paging behavior of only that process.
• Disadvantage: A low priority process may hinder a high priority process by
not making its frames available to the high priority process.
• Global replacement: When a process needs a page which is not in the
memory, it can bring in the new page and allocate it a frame from the
set of all frames, even if that frame is currently allocated to some
other process; that is, one process can take a frame from another.
• Advantage: Does not hinder the performance of processes and hence results
in greater system throughput.
• Disadvantage: The page-fault ratio of a process cannot be controlled solely by the process itself. The set of pages in memory for a process depends on the paging behavior of other processes as well.
Thrashing

• If a process cannot maintain its minimum required number of frames, then it must be swapped out, freeing up frames for other processes. This is an intermediate level of CPU scheduling.
• But, what about a process that can keep its minimum, but cannot
keep all of the frames that it is currently using on a regular basis? In
this case it is forced to page out pages that it will need again in the
very near future, leading to large numbers of page faults.
• A process that is spending more time paging than executing is said to
be thrashing.
Cause of Thrashing
• Thrashing occurs when a system spends more time swapping pages in and out of
memory than executing processes, leading to a significant drop in performance.
• If a high-priority process arrives in memory and no frame is vacant at that moment, the process currently occupying a frame is moved to secondary storage, and the freed frame is allotted to the higher-priority process.
• As soon as memory is full, the system begins to take a long time to swap in the required pages. Because most of the processes are then waiting for pages, CPU utilization drops again.
• Hence, a high degree of multiprogramming and a lack of frames are two of the most common reasons for thrashing in an operating system.
N.B: Causes of thrashing:
• High degree of multiprogramming.
• Lack of frames.
• Page replacement policy.
The above diagram illustrates the situation of thrashing, where no useful work would be done by the
CPU and the CPU utilization would fall drastically.
