
UNIT-IV

Memory Management and Virtual Memory - Logical versus Physical Address Space,
Swapping, Contiguous Allocation, Paging, Segmentation, Segmentation with Paging, Demand
Paging, Page Replacement, Page Replacement Algorithms.

1. INTRODUCTION TO MEMORY MANAGEMENT


Main memory is also known as RAM (Random Access Memory). All programs and files
are saved on the secondary storage device (hard disk). Whenever we want to execute a
program or access a file, it must be copied from the secondary storage device into main
memory.
Normally, program execution starts only after the complete program is loaded. Sometimes,
however, only a certain part or routine of the program is loaded into main memory and
execution starts; during execution, code that is not yet in main memory is loaded when it is
needed. This mechanism is called Dynamic Loading, and it enhances performance.

Also, at times one program depends on some other program. In such a case, rather than
loading all the dependent programs at once, the dependent programs are linked to the main
executing program only when required. This mechanism is known as Dynamic Linking.

2. PHYSICAL ADDRESS AND LOGICAL ADDRESS


A Logical Address can be defined as the location of an instruction of a program relative to
the first instruction: the first instruction is at address ‘0’, the next (second) instruction is at
address ‘1’, and so on. Logical addresses are generated by the CPU while a program is running,
and they remain the same every time the program is executed. A logical address does not exist
physically, so it is also known as a Virtual Address. The CPU uses it as a reference to access
the corresponding physical memory location.

A Physical Address can be defined as the exact location in main memory. A physical
address identifies the physical location of the required instruction/data in memory. The user
program generates the logical address, and in order to execute it, the logical address must be
converted into a physical address by the Memory Management Unit (MMU). The physical
address may differ each time the program is executed. The relocation register

Ravindar.M, Asso.Prof, CSE Dept, JITS-KNR


stores the physical address of the first instruction of the program. For example, if the relocation
register contains 14000, the 347th instruction has logical address 346 and physical address 14346.
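As a simple sketch (in Python, using the register value from the example above), relocation-register translation just adds the base address to the logical address:

```python
# Hypothetical relocation-register translation: physical = base + logical.
RELOCATION_REGISTER = 14000  # base physical address of the program

def to_physical(logical_address):
    """Translate a CPU-generated logical address to a physical address."""
    return RELOCATION_REGISTER + logical_address

# The 347th instruction has logical address 346.
print(to_physical(346))  # prints 14346
```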

Differences between Logical and Physical Address in Operating System

 The basic difference between a logical and a physical address is that a logical address is
generated by the CPU with reference to the starting point of a program, whereas a physical
address is a location that exists in the memory unit.
 The logical address does not exist physically in memory, whereas a physical address is a
location in memory that can be accessed physically.
 The logical address is generated by the CPU while the program is running, whereas the
physical address is computed by the Memory Management Unit (MMU).

3. SWAPPING
Swapping is a mechanism in which a process can be swapped (moved) temporarily out of
main memory to secondary storage (disk), making that memory available to other processes. At
some later time, the system swaps the process back from secondary storage into main memory.

Whenever main memory runs out of space, one of the inactive processes is swapped out
to create free space for executing new processes. This improves system throughput and
utilizes the processor effectively.



4. CONTIGUOUS MEMORY ALLOCATION
In contiguous memory allocation each process is stored in a single contiguous block of
memory. Main memory is divided into several partitions. Each partition contains exactly one
process. When a partition is free, a process is selected from the input queue and loaded into it.
The free blocks of memory are known as holes. The set of holes is searched to determine which
hole is best to allocate. The two popular techniques used for contiguous memory allocation are:
i. Static Partitioning or Fixed Size Partitioning
ii. Dynamic Partitioning or Variable Size Partitioning

i. Static Partitioning
 Static partitioning is a fixed size partitioning scheme.
 In this technique, main memory is pre-divided into fixed size partitions.
 The size of each partition is fixed and cannot be changed later.
 Each partition is allowed to store only one process. The fixed size partitions can be of
two types: Equal size or Unequal size partitioning.
 In equal size partitioning, all the partitions have the same size. In unequal size
partitioning, the partitions may have different sizes.
Example:

These partitions are allocated to processes as they arrive. In the equal-size partitioning
scheme, all partitions are the same size, so it makes no difference which partition is allocated to
a process. In the unequal-size partitioning scheme, however, the choice of partition is very
important: when a process is placed in a partition, some memory is wasted if the process size is
less than the partition size. This wastage of memory is called fragmentation. The following
algorithms are used in the unequal-size partitioning scheme.

 First Fit Algorithm


 Best Fit Algorithm
 Worst Fit Algorithm
1. First Fit Algorithm
 This algorithm scans the partitions serially from the beginning.
 When an empty partition that is big enough to store the process is found, it is
allocated to the process.
 Obviously, the partition size has to be greater than or at least equal to the process size.

2. Best Fit Algorithm


 This algorithm first scans all the empty partitions.
 It then allocates the smallest partition that fits the process.
3. Worst Fit Algorithm
 This algorithm first scans all the empty partitions.
 It then allocates the largest partition that fits the process.

Example: Consider that initially there are three free partitions/holes available with sizes 700
KB, 950 KB and 500 KB, as shown in figure (a). If we want to load process P3 of size 450 KB,
the result of each algorithm is shown below.

Pi : partition allocated to process i; remaining partitions are free partitions/holes.

(Figure: main memory with five partitions of 500 KB (P1), 700 KB (free), 300 KB (P2),
950 KB (free) and 500 KB (free). Figure (a): initial state. Figure (b): First Fit places P3
(450 KB) in the 700 KB hole, leaving 250 KB unused. Figure (c): Best Fit places P3 in the
500 KB hole, leaving 50 KB unused. Figure (d): Worst Fit places P3 in the 950 KB hole,
leaving 500 KB unused.)

In the First Fit algorithm, the 700 KB hole is allocated to P3 and the remaining 250 KB is
wasted. In the Best Fit algorithm, the 500 KB hole is allocated to P3 and the remaining 50 KB is
wasted. In the Worst Fit algorithm, the 950 KB hole is allocated to P3 and the remaining 500 KB
is wasted. As the wasted memory is inside the partition, it is called internal fragmentation.

The Best Fit algorithm works best because the space left inside the partition after allocation is
very small, so internal fragmentation is least. The Worst Fit algorithm works worst because the
space left inside the partition after allocation is very large, so internal fragmentation is
maximum.
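The three placement strategies can be sketched as follows (a minimal Python sketch; the hole sizes come from the example above, and the function names are my own):

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough, else None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole large enough, else None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [700, 950, 500]  # free hole sizes in KB, as in figure (a)
print(first_fit(holes, 450))  # 0 -> 700 KB hole, 250 KB wasted
print(best_fit(holes, 450))   # 2 -> 500 KB hole, 50 KB wasted
print(worst_fit(holes, 450))  # 1 -> 950 KB hole, 500 KB wasted
```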



ii. Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process loading.

 Dynamic partitioning is a variable size partitioning scheme.


 It performs the memory allocation at the time of process loading.
 Initially RAM is empty and partitions are made during the run-time according to size of
the process.
 When a process arrives, a partition of size equal to the size of process is created. Then,
that partition is allocated to the process. As a result there is no internal fragmentation.
 The processes arrive and leave the main memory after execution. After a process leaves
the main memory, a hole (free space) is created.
 These holes are allocated to the processes that arrive in future.
 When a new process arrives and no single hole fits it, memory cannot be allocated to it,
even though the process size may be less than the total empty space in memory (the sum
of all hole sizes). This is because the required empty space is not contiguous (it is
scattered across different places).

For example, consider processes = {P1, P2, P3, P4} with memory sizes = {2, 7, 1, 5} MB
respectively. They are loaded into main memory with partition sizes equal to the process sizes,
as shown in figure (a) below.

(Figure (a): P1 (2 MB), P2 (7 MB), P3 (1 MB) and P4 (5 MB) loaded into partitions equal to
their sizes. Figure (b): after P1 and P3 leave, their 2 MB and 1 MB partitions become holes;
with the remaining 1 MB of free space, total free space is 4 MB.)

Suppose processes P1 (2 MB) and P3 (1 MB) complete their execution; they leave the
memory and two holes (free spaces) are created. The total free space is 4 MB



(2+1+1). This is shown in figure (b) above. Suppose a new process P5 of size 3 MB arrives.
The empty space in memory cannot be allocated to P5, even though P5's size is less than the
total empty space (3 MB < 4 MB), because the required 3 MB of space is not contiguous. So,
this memory is not usable now, and it results in External Fragmentation.

Internal Fragmentation
 Internal Fragmentation occurs only in static partitioning.
 It occurs when the space is left inside the fixed partition after allocating the partition
to a process. This space is called internal fragmentation.
 This space can not be allocated to any other process.
 This is because only one process is allowed to store in each partition.

External Fragmentation
 External Fragmentation occurs only in dynamic partitioning.
 It occurs when the total amount of empty space required to store the process is
available in the main memory but the free memory is not contiguous.
 As the space is not contiguous, so the process can not be stored.

The external fragmentation problem can be solved using compaction technique. The
compaction is a technique used to move all processes towards one end of memory and all the
free blocks of memory are moved towards the other end. This is shown in the below diagram.

A huge amount of time is invested in moving all the free spaces to one side, and the CPU
remains idle for all this time. So, although compaction avoids external fragmentation, it makes
the system inefficient.



5. PAGING
 Paging is a fixed-size partitioning technique.
 Paging is a memory management mechanism that allows the physical address space of a
process to be non-contiguous.
 The main memory is partitioned into small, equal-size blocks called frames.
 The user process is also divided into small, equal-size blocks called pages.
 The page size is equal to the frame size; only the last page of a process may be smaller.

When a process needs to be executed, its pages are loaded into any available memory
frames. To start execution, not all of its pages need to be loaded into main memory; a few
pages are enough. After loading the pages, the details of which page is loaded into which frame
are saved in a table called the page table. Each process maintains a separate page table. The
starting address of the page table is saved in the Page Table Base Register (PTBR).
When the processor wants to execute an instruction, it generates a logical address. The
logical address contains two parts: a page number (Pi) and a page offset (d). The page number is
used as an index into the page table to fetch the corresponding frame number (f). The physical
address is obtained by combining the frame number (f) with the offset (d). This is shown in the
below diagram.

(Diagram: logical address [Pi | d] → page table → physical address [f | d])
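A minimal sketch of this translation (in Python, with a hypothetical page size and page table; real hardware does the split with bit operations):

```python
PAGE_SIZE = 1024                       # assumed page/frame size in bytes
page_table = {0: 5, 1: 2, 2: 7}        # hypothetical page -> frame mapping

def translate(logical_address):
    """Split the logical address into (page, offset), look up the frame, recombine."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]           # a missing entry would be a page fault (not modeled)
    return frame * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```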

The advantages of paging are:


 It allows parts of a single process to be stored in a non-contiguous fashion.
 It solves the problem of external fragmentation.

The disadvantages of paging are:


 It suffers from internal fragmentation.
 There is an overhead of maintaining a page table for each process.
 The time taken to fetch the instruction increases since now two memory accesses are
required.



6. SEGMENTATION

 Like Paging, Segmentation is another non-contiguous memory allocation technique.


 Segmentation is a variable size partitioning scheme.
 In segmentation, the process is not divided blindly into fixed-size pages. Rather, the process
is divided into modules. These partitions are called segments.
 The main memory is divided into partitions of unequal size dynamically. The size of
partitions depends on the length of modules.

Example: Consider a program divided into 5 segments.

Segment Table:
 Segment table is a table that stores the information about each segment of the process.
 It has two columns. First column stores the size or length of the segment. Second column
stores the base address or starting address of the segment in the main memory.
 Segment table is stored as a separate segment in the main memory.
 The base address of the segment table is stored in Segment Table Base Register (STBR).

When a process needs to be executed, its segments are loaded into main memory. After
loading the segments, each segment's base address and limit value are saved in the segment
table. The base address is the starting physical address where the segment is stored in main
memory, and the limit specifies the length of the segment. When the processor wants to execute
an instruction, it generates a logical address containing two parts: a segment number (s) and a
segment offset (d). The segment number is used as an index into the segment table to fetch the
corresponding base address. If the segment offset (d) is greater than or equal to the limit, an
addressing error is raised. Otherwise, the physical address is generated. The



physical address is obtained by combining the segment base address with the offset (d). This is
shown in the below diagram.

(Diagram: logical address [s | d] → segment table (check d < limit) → physical address = base + d)
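A minimal sketch of segment-based translation (in Python; the segment table values are hypothetical):

```python
# Hypothetical segment table: segment number -> (limit, base)
segment_table = {0: (1000, 1400), 1: (400, 6300), 2: (1100, 4300)}

def translate(segment, offset):
    """Check the offset against the segment limit, then add the base address."""
    limit, base = segment_table[segment]
    if offset >= limit:
        raise MemoryError("addressing error: offset exceeds segment limit")
    return base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```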

The advantages of segmentation are:

 It allows the program to be divided into modules, which provides better visualization.
 Segment table consumes less space as compared to page table in paging.
 It solves the problem of internal fragmentation.

The disadvantages of segmentation are:

 There is an overhead of maintaining a segment table for each process.


 The time taken to fetch the instruction increases since now two memory accesses are
required.
 Segments of unequal size are not well suited for swapping.
 It suffers from external fragmentation as the free space gets broken down into smaller
pieces with the processes being loaded and removed from the main memory.

7. TRANSLATION LOOKASIDE BUFFER (TLB)


A Translation Lookaside Buffer (TLB) is a memory cache that contains the most
recently/frequently used page table entries. The TLB is used to reduce the time taken to access
the page table again and again. The TLB exploits locality of reference, which means that it
contains only the entries of those pages that are frequently accessed by the CPU.



Given a logical address, the processor first examines the TLB. If the page table entry is present
(TLB hit), the frame number is retrieved and the physical address is formed. If the entry is not
found in the TLB (TLB miss), the page number is used to index the process's page table and
generate the physical address. This is shown in the above diagram.
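The hit/miss flow can be sketched as follows (a Python sketch with a hypothetical page table; a real TLB is a small hardware cache with a limited size and an eviction policy, which is not modeled here):

```python
tlb = {}                                     # small cache: page -> frame
page_table = {i: i + 10 for i in range(8)}   # hypothetical full page table in memory

def lookup(page):
    """Try the TLB first; on a miss, consult the page table and cache the entry."""
    if page in tlb:
        return tlb[page], "TLB hit"
    frame = page_table[page]                 # slower: extra memory access
    tlb[page] = frame                        # cache for future references
    return frame, "TLB miss"

print(lookup(3))  # first access  -> (13, 'TLB miss')
print(lookup(3))  # repeat access -> (13, 'TLB hit')
```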

8. SEGMENTATION WITH PAGING


Both paging and segmentation have their advantages and disadvantages, so it is better to
combine these two schemes to get the best of each. The combined scheme is known as
‘Segmentation with Paging’. In this scheme,

 Process is first divided into segments and then each segment is divided into pages.
 These pages are then stored in the frames of main memory.
 A separate page table exists for each segment to keep track of the frames storing the
pages of that segment.
 Number of entries in the page table of a particular segment = number of pages into which
that segment is divided.
 A segment table exists that keeps track of the frames storing the page tables of segments.
 Number of entries in the segment table of a process = number of segments into which the
process is divided.
 The base address of the segment table is stored in the segment table base register (STBR).

When a process needs to be executed, CPU generates the logical address for each instruction.
The logical address consisting of three parts:



a) Segment Number (S): It specifies the segment from which the CPU wants to read
the data.
b) Page Number (P): It specifies the page of that segment from which the CPU wants
to read the data.
c) Page Offset (d): It specifies the word on that page that the CPU wants to read.

The physical address is computed from the given logical address as follows:

 The segment table base register (STBR) contains the starting address of the segment table.
 For the given segment number, the corresponding entry is found in the segment table.
 The segment table provides the address of the page table belonging to the referred segment.
 For the given page number, the corresponding entry is found in that page table.
 The page table provides the frame number of the required page of the referred segment.
 The frame number is combined with the page offset to get the required physical address.

The below diagram illustrates the above steps of translating logical address into physical address:
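The two-level lookup can also be sketched in code (a Python sketch with hypothetical tables and page size):

```python
PAGE_SIZE = 1024
# Hypothetical two-level tables: segment -> its page table, page -> frame
segment_table = {0: {0: 3, 1: 8}, 1: {0: 5}}

def translate(segment, page, offset):
    """Segment table yields the page table; the page table yields the frame."""
    page_table = segment_table[segment]   # first lookup: which page table
    frame = page_table[page]              # second lookup: which frame
    return frame * PAGE_SIZE + offset

print(translate(0, 1, 20))  # frame 8 -> 8*1024 + 20 = 8212
```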

The advantages of segmented paging are:

 Segment table contains only one entry corresponding to each segment.


 It reduces memory usage.
 The size of page table is limited by the segment size.
 It solves the problem of external fragmentation.

The disadvantages of segmented paging are:

 Segmented paging suffers from internal fragmentation.


 The complexity level is much higher as compared to paging.



9. VIRTUAL MEMORY
Computers have a finite amount of main memory (RAM), so memory can fill up and run out
of free space, especially when multiple programs run at the same time. This problem can be
solved by making a section of the hard disk emulate RAM. This emulated memory on the hard
disk is called “Virtual Memory”.

Virtual memory is a capability of an OS that allows a computer to compensate for
physical memory shortages. Whenever RAM runs out of free space, the OS temporarily transfers
a few pages from RAM to the emulated RAM on the hard disk. Virtual memory is a space
where large programs can store themselves in the form of pages during their execution, and only
the required pages of processes are loaded into main memory. This technique is useful because a
large virtual memory is provided for user programs even when physical memory is very small.

Benefits of having Virtual Memory

 Large programs can be written, as the virtual space available is huge compared to
physical memory.
 More physical memory is available, as programs are stored in virtual memory and
occupy very little space in actual physical memory.

10. DEMAND PAGING


In a real scenario, not all the pages of a process are loaded into main memory at once; a
few pages are sufficient to start executing it. During execution, if a required page is not in main
memory, that page is brought in. This process of loading pages into main memory on demand is
called “Demand Paging”.

Demand paging suggests keeping all the pages of a process in virtual memory until
they are required. In other words, do not load any page into main memory until it is required.
Whenever a page is referenced, it may or may not be in main memory. If the referenced page is
not present in main memory, there is a miss; this is called a page fault. The OS has to transfer
the missed page from virtual memory into one of the free frames of main memory. If main
memory does not have any free frame, then a page replacement algorithm is used to swap one
of the pages in main memory with the required page. When the OS uses demand paging, the
page table contains one extra column, the valid bit. If the page is in main memory, its valid bit
is set to true; otherwise it is set to false. This bit indicates whether the required page is in main
memory or not.

There are cases where no pages are loaded into memory initially; pages are loaded only
when the process demands them by generating page faults. This is called “Pure Demand
Paging”. In ordinary demand paging, some of the pages are loaded into main memory before
execution starts.

“Pre-paging” is a technique used to load pages into main memory before they are needed
to avoid page faults. But identifying the needed pages in advance is very difficult.

11. PAGE REPLACEMENT


When demand paging is used, only certain pages of a process are loaded initially into
memory. This allows more processes to be in main memory at the same time and executed in
parallel. But when a process references a page that is not in main memory, the page has to be
brought in. If no free frame is available in main memory to bring the referenced page in, the
following steps can be taken to deal with this problem:

1. Put the process that needs the missing page in a wait queue, until some other process
finishes its execution and thereby frees some frames.
2. Or, remove some other process completely from memory to free frames.
3. Or, find some pages that are not being used right now, and move them to the disk (virtual
memory) to get free frames. This technique is called page replacement and is most
commonly used. There are good algorithms to carry out page replacement efficiently.

Basic Page Replacement

The following steps are followed by page replacement algorithms to replace the referenced page.

 Find the location of the page referenced by the ongoing process on the disk.
 Find a free frame. If there is a free frame, use it. If there is no free frame, use a page-
replacement algorithm to select an existing frame to be replaced; such a frame is known
as the victim frame.
 Write the page from the victim frame to disk. Change all related page tables to indicate
that this page is no longer in memory.



 Bring in the required page and store it in the freed frame. Adjust all related page and frame
tables to indicate the change.
 Restart the process that was waiting for this page.

12. PAGE REPLACEMENT ALGORITHMS


When there is a page fault, the referenced page must be loaded. If there is no available frame
in memory, then one of the pages is selected for replacement. Page replacement algorithms
decide which page must be swapped out of main memory to create room for the incoming
page. Various page replacement algorithms are:
 FIFO Page Replacement Algorithm
 LRU Page Replacement Algorithm
 Optimal Page Replacement Algorithm

Note: A good page replacement algorithm is one that minimizes the number of page faults.

FIFO Page Replacement Algorithm:


 This is the simplest page replacement algorithm.
 As the name suggests, this algorithm works on the principle of “First in First out”.
 It replaces the oldest page, i.e., the page that has been present in main memory for the longest time.
 It is implemented by keeping track of all the pages in a queue.
 The number of page faults is more as compared to other page replacement algorithms.
Note: A page fault ‘F’ occurs if the referenced page is not in the memory frames.

Problem 1: Consider the following page references: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7,
0, 1. Find the number of page faults when FIFO is implemented. Use 3 frames.

Solution:

No of Page Faults (F) = 15
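The FIFO count can be checked with a short simulation (a Python sketch; `fifo_faults` is my own helper name):

```python
from collections import deque

def fifo_faults(refs, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15, matching Problem 1
print(fifo_faults(refs, 4))  # 10, matching Problem 2
```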

Problem 2: Consider the following page references: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7,
0, 1. Find the number of page faults when FIFO is implemented. Use 4 frames.



Solution:

No of Page Faults (F) = 10

Belady’s anomaly: Generally, when the number of frames increases, the number of page faults
should decrease. But when the FIFO page replacement algorithm is used on certain reference
strings, it is found that increasing the number of frames increases the number of page faults. This
phenomenon is called Belady’s anomaly. For example, for the reference string 3, 2, 1, 0, 3, 2, 4,
3, 2, 1, 0, 4 with 3 frames we get a total of 9 page faults, but if we increase the number of frames
to 4, we get 10 page faults.
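Belady's anomaly can be reproduced with the same kind of simulation (a Python sketch on the reference string above):

```python
def fifo_faults(refs, frame_count):
    """Count page faults under FIFO replacement (list used as a queue)."""
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.pop(0)           # evict the oldest page
            frames.append(page)
    return faults

belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3))  # 9 faults with 3 frames
print(fifo_faults(belady, 4))  # 10 faults with 4 frames: more frames, more faults
```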

Optimal Page Replacement Algorithm:

 This algorithm replaces the page that will not be referred by the CPU in future for the
longest time.
 It is practically impossible to implement this algorithm.
 This is because the pages that will not be used for the longest time in the future cannot be
predicted.
 However, it is the best known algorithm and gives the least number of page faults.
 Hence, it is used as a performance measure criterion for other algorithms.

Problem 1: Consider the following page references: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7,
0, 1. Find the number of page faults when OPTIMAL page replacement is implemented. Use 3 frames.

Solution:

No of Page Faults (F) = 9
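The optimal count can be checked with a short simulation (a Python sketch; note that OPT needs the future of the reference string, which is exactly why it cannot be implemented in a real system):

```python
def optimal_faults(refs, frame_count):
    """Count page faults under OPT: evict the page used farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # A page never referenced again gets distance infinity (best victim).
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9, matching the solution above
```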



LRU Page Replacement Algorithm:

 As the name suggests, this algorithm works on the principle of “Least Recently Used”.
 It replaces the page that has not been referred by the CPU for the longest time.
 This algorithm is just the opposite of the optimal page replacement algorithm: it looks at
the past instead of the future.

Problem 1: Consider the following page references: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7,
0, 1. Find the number of page faults when LRU is implemented. Use 3 frames.

Solution:

No of Page Faults (F) = 12
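The LRU count can likewise be checked by simulation (a Python sketch that keeps the frame list ordered from least to most recently used):

```python
def lru_faults(refs, frame_count):
    """Count page faults under LRU: evict the least recently used page."""
    frames = []      # ordered: least recently used first, most recent last
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)   # refresh: move to most-recent position
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12, matching the solution above
```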

13. THRASHING
Thrashing is a condition or a situation when the system is spending a major portion of its
time in servicing the page faults, but the actual execution or processing done is very negligible.

The basic concept involved is that if a process is allocated too few frames, then there will
be too many and too frequent page faults. As a result, no useful work would be done by the CPU
and the CPU utilization would fall drastically.

Locality Model:
A locality is a set of pages that are actively used together. The locality model states that
as a process executes, it moves from one locality to another. A program is generally composed of
several different localities which may overlap.



For example, when a function is called, it defines a new locality where memory references
are made to the instructions of the function, its local and global variables, etc. Similarly,
when the function is exited, the process leaves this locality.

Techniques to handle thrashing:

1. Working Set Model: This model is based on the above-stated concept of the Locality
Model. The basic principle is that if we allocate enough frames to a process to
accommodate its current locality, it will fault only when it moves to some new locality.
But if the allocated frames are fewer than the size of the current locality, the process is
bound to thrash.

According to this model, based on a parameter A (the number of most recent page
references considered), the working set is defined as the set of pages in the most recent ‘A’
page references. Hence, all the actively used pages end up being part of the working set. The
accuracy of the working set depends on the value of A: if A is too large, the working set may
span several localities; if A is too small, it might not cover the entire current locality.
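The working set at time t can be sketched as a sliding window over the reference string (a Python sketch; the reference string is made up for illustration):

```python
def working_set(refs, t, a):
    """Set of distinct pages among the most recent `a` references ending at time t."""
    return set(refs[max(0, t - a + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 3, 4, 2, 2]
print(working_set(refs, 9, 4))  # references at times 6..9 -> {2, 3, 4}
```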

If the summation of working set sizes of all the processes present in the main memory
exceeds the availability of frames, then some of the processes have to be suspended
(swapped out of memory). Otherwise, i.e., if there are enough extra frames, then some more
processes can be loaded in the memory. This technique prevents thrashing along with
ensuring the highest degree of multiprogramming possible. Thus, it optimizes CPU
utilization.

2. Page Fault Frequency: A more direct approach to handling thrashing uses the
Page-Fault Frequency concept. The problem associated with thrashing is the high page
fault rate, so the idea here is to control the page fault rate.

If the page fault rate is too high, it indicates that the process has too few frames allocated
to it. On the contrary, a low page fault rate indicates that the process has too many frames.
Upper and lower limits can be established on the desired page fault rate as shown in the
below diagram.



If the page fault rate falls below the lower limit, frames can be removed from the process.
Similarly, if the page fault rate exceeds the upper limit, more frames can be allocated to the
process. In other words, the state of the system should be kept within the rectangular region
formed in the given diagram.
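The control rule can be sketched as follows (a Python sketch; the rate bounds and the one-frame adjustment step are hypothetical values for illustration):

```python
LOWER, UPPER = 0.02, 0.10  # hypothetical desired page-fault-rate bounds

def adjust_frames(fault_rate, allocated):
    """Grow or shrink a process's frame allocation to keep its fault rate in range."""
    if fault_rate > UPPER:
        return allocated + 1       # too many faults: give the process more frames
    if fault_rate < LOWER and allocated > 1:
        return allocated - 1       # very few faults: reclaim a frame
    return allocated               # within the desired region: no change

print(adjust_frames(0.15, 4))  # 5 (rate above upper limit)
print(adjust_frames(0.01, 4))  # 3 (rate below lower limit)
```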

If the page fault rate is high with no free frames, then some of the processes can be
suspended and frames allocated to them can be reallocated to other processes. The suspended
processes can then be restarted later.

