
Operating System

Unit – 4
Memory Management

Subject Faculty: Rashmi Rathi Upadhyay


Computer Engineering

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 1


Disclaimer
 It is hereby declared that this content is meant for non-commercial, scholastic and research purposes only.

 Some of the content or images used in this channel's videos may have been obtained through routine Google image searches, and a few of them may be under copyright protection. Any such usage is completely inadvertent.

 It is possible that we have overlooked giving full scholarly credit to the copyright owners. We believe that the non-commercial, educational-only use of the material allows the videos in question to fall under fair use of such content. However, we honor the copyright holders' rights, and a video shall be deleted from our channel in case any such claim is received by us or reported to us.
Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 2
Topics to be covered
 Basics of Memory Management
 Memory partitioning: Fixed and Variable Size Partitioning
 Memory Allocation Strategies (First Fit, Best Fit, and Worst
Fit)
 Swapping and Fragmentation
 Paging and Demand Paging
 Segmentation
 Concepts of Virtual Memory
 Page Replacement Policies (FIFO, LRU, Optimal, Other
Strategies)
 Thrashing
Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 3
Basics of Memory
 There are three important criteria as far as memory is concerned:
1. Size – a larger memory is desirable.
2. Access time – a smaller access time is desirable.
3. Per-unit cost – a lower per-unit cost is desirable.

 Can we have a memory which is large in size, has a very small access time and also has a low per-unit cost?
The answer is No.
 Since all three criteria cannot be met simultaneously, we do not rely on a single memory in a system; a hierarchy of memories is used instead.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 4


Hierarchy of Memory
CPU

Cache Memory

Main Memory

Secondary Memory

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 5


Hierarchy of Memory
Now the OS has two important responsibilities here:
1. Space allocation: the OS decides which process from secondary memory (SM) gets which area in main memory (MM).
2. Address translation, i.e. from logical addresses to physical addresses.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 6


Memory Allocation Techniques

Contiguous memory allocation:
• Fixed partition
• Variable partition

Non-contiguous memory allocation:
• Paging
• Segmentation

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 7


Memory Allocation Techniques
Contiguous Memory allocation

• A simple and old method.
• Each process occupies a contiguous block of main memory.
• When a process is brought into memory, the memory is searched for a chunk of free space large enough to hold the process.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 8


Contiguous M/m allocation: Fixed Partition

Multi programming with fixed partition

• The number of partitions is fixed.
• Here, memory is divided into fixed-size partitions.
• Each partition may contain exactly one process.
• The partitions are not required to all be the same size.
• When a partition is free, a process is selected from the input queue and loaded into the free partition.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 9


Contiguous M/m allocation: Fixed Partition

Multi programming with fixed partition:

• Advantages:
• Implementation is simple.
• Processing overhead is low.
• Disadvantages:
• Limit on process size.
• Degree of multiprogramming is also limited.
• Causes external fragmentation because of contiguous memory allocation.
• Causes internal fragmentation due to the fixed partitioning of memory.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 10


Contiguous M/m allocation: Variable Size Partition

Multi programming with variable/dynamic


partition
• Here memory is not divided into fixed partitions, and the number of partitions is not fixed.
• Only the required amount of memory is allocated to a process, at run time.
• Whenever a process enters the system, a chunk of memory big enough to fit the process is found and allocated, and the remaining unoccupied space is treated as another free partition.
• When a process terminates, it releases the space it occupied; if that freed partition is contiguous with another free partition, the two free partitions can be merged.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 11


Contiguous M/m allocation: Variable Size Partition

Multi programming with variable/dynamic partition:


• Advantages:
• No internal fragmentation.
• No limitation on the number of processes.
• No limitation on process size.
• Disadvantages:
• Causes external fragmentation:
• Memory is allocated when a process enters the system and deallocated when it terminates. These operations may leave small holes in memory.
• Each hole may be so small that no process can be loaded into it.
• Yet the total size of all the holes may be big enough to hold another process.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 12


Memory Allocation Strategies
 In partition allocation, when there is more than one free partition large enough to accommodate a process's request, one partition must be selected.
 To choose a particular partition, a partition allocation method is needed. A partition allocation method is considered better if it avoids internal fragmentation.
 The different memory allocation algorithms are:
1. First Fit
2. Best Fit
3. Worst Fit

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 13


Memory Allocation Strategies
1. First Fit:
 The allocated partition is the first sufficiently large block found from the top of main memory.
 Memory is scanned from the beginning and the first available block that is large enough is chosen; thus the first partition that is large enough is allocated.
2. Best Fit:
 Allocate the process to the smallest sufficient partition among the free partitions.
 It searches the entire list of partitions to find the smallest partition whose size is greater than or equal to the size of the process.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 14


Memory Allocation Strategies
3. Worst Fit:
 Allocate the process to the largest sufficient partition among the free partitions in main memory.
 It is the opposite of the best-fit algorithm.
 It searches the entire list of partitions to find the largest sufficient partition and allocates it to the process. (A small code sketch of all three strategies follows.)
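Illustration (not from the original slides): a minimal Python sketch of the three placement strategies over a list of free-partition sizes. The function name and the simplification that partitions are described only by their sizes are assumptions; the usage example reuses the partition sizes from the exercise a few slides later.

```python
def choose_partition(free_sizes, request, strategy="first"):
    # Indices of all free partitions large enough for the request.
    candidates = [(i, size) for i, size in enumerate(free_sizes) if size >= request]
    if not candidates:
        return None                                    # no partition can hold the process
    if strategy == "first":
        return candidates[0][0]                        # first sufficiently large block from the top
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest block that still fits
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest available block
    raise ValueError("unknown strategy")

free = [100, 500, 200, 300, 600]              # free partition sizes (in K)
print(choose_partition(free, 212, "first"))   # 1 -> the 500K block
print(choose_partition(free, 212, "best"))    # 3 -> the 300K block
print(choose_partition(free, 212, "worst"))   # 4 -> the 600K block
```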

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 15


Variable Size Partitioning
 P1 = 300
 P2 = 25
 P3 = 125
 P4 = 50
 Free partitions: 50, 150, 300, 350, 600

First Fit:
Best Fit:
Worst Fit:

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 16


Fixed Size Partitioning
 P1 = 212K
 P2 = 417K
 P3 = 112K
 P4 = 426K
 Free partitions: 100K, 500K, 200K, 300K, 600K

First Fit:
Best Fit:
Worst Fit:

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 17


Logical to Physical Address Mapping
 Relocation register: a special-purpose register in the CPU which holds the base address of the process in main memory.
 Limit register: holds the size of the process.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 18


Logical to Physical Address Mapping

Example values: Limit register = 100, Relocation register = 500.
A logical address of 52 is first compared with the limit register (52 < 100, so the reference is valid) and then added to the relocation register, giving the physical address 52 + 500 = 552.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 19


Logical to Physical Address Mapping

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 20


Logical to Physical Address Mapping
Question: Check whether the following processes are issuing valid (authorized) memory references or not. (A small validity-check sketch follows the table.)

Process Limit Register Relocation Register


P1 500 1200
P2 275 550
P3 212 880
P4 420 1400
P5 118 200
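Illustration (not from the original slides): a minimal Python sketch of the limit/relocation check. The function name and the logical address used in the example are assumptions, since the slide supplies only the register values.

```python
def translate(logical_addr, limit, relocation):
    # The access is valid only if the logical address is below the limit register;
    # the physical address is then logical address + relocation (base) register.
    if logical_addr >= limit:
        raise MemoryError("trap: addressing error")
    return logical_addr + relocation

# e.g. for P2 (limit = 275, relocation = 550), an assumed logical address of 100:
print(translate(100, 275, 550))  # 650
```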

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 21


Non-Contiguous M/m Allocation
 Non-contiguous memory allocation techniques are basically of two types:
1. Paging
2. Segmentation

 The main disadvantage of dynamic partitioning is external fragmentation.
 Although this can be removed by compaction, as discussed earlier compaction makes the system inefficient.
 That is why the idea of non-contiguous memory allocation was introduced.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 22


Paging – Basic Idea and its Need
 Physical memory (main memory) is divided into fixed-size blocks called frames.
 The logical address space (in secondary memory) is divided into fixed-size blocks called pages.
 Pages and frames are of the same size.
 Whenever a process needs to execute on the CPU, its pages are moved from the hard disk into available frames in main memory.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 23


Paging
Thus the memory-management tasks here are:
 finding a free frame in main memory,
 allocating an appropriate frame to each page,
 keeping track of which page belongs to which frame.
 The OS maintains a table called the page table for each process.
 The page table is indexed by page number and stores the corresponding frame number.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 24


(Figure: page table – page number → frame number)

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 25


Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 26
Non-Contiguous M/m Allocation: Page Allocation in a Paging System

• One free-frame list is maintained.
• When a process arrives in the system to be executed, its size is expressed in terms of a number of pages.
• Each page of the process needs one frame.
• Thus, if a process requires n pages, then n frames must be free in memory.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 27


(Figure: free-frame list before and after allocation)

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 28


Address Mapping in Paging
 Here a logical address (L) is divided into two parts:
1. Page number (p)
2. Offset (d): the actual position within the page.

 Let the page size be 2^n.
 Then page number (p) = L / 2^n (integer division)
 and offset (d) = L % 2^n.
 Physical address (P) = frame number × page size + d
 For a logical address space of size 2^m and page size 2^n, the page number occupies the high-order m − n bits of the logical address and the offset occupies the low-order n bits. (A small translation sketch follows.)
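Illustration (not from the original slides): a minimal Python sketch of the translation above. The function name, the page-table representation (a list indexed by page number) and the example values are assumptions.

```python
def paging_translate(logical_addr, page_table, page_size):
    page_number = logical_addr // page_size   # p = L / 2^n
    offset = logical_addr % page_size         # d = L % 2^n
    frame_number = page_table[page_number]    # page-table lookup
    return frame_number * page_size + offset  # P = frame number * page size + d

# Assumed example: page size 1024 (2^10), page 0 -> frame 5, page 1 -> frame 2.
page_table = [5, 2]
print(paging_translate(1500, page_table, 1024))  # page 1, offset 476 -> 2*1024 + 476 = 2524
```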

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 29


Address Mapping in Paging

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 30


Paging: Hardware support
 Modern operating systems use variations of paging, including:
• Translation look aside buffer
• Hierarchical paging
• Inverted page table

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 31


Translation look aside buffer
 Used to overcome the slower-access problem (the extra page-table reference).
 The TLB is a cache of the page table, implemented in fast associative memory.
 Its cost is high, so its capacity is limited; thus only a subset of the page table is kept in the TLB.
 Each TLB entry contains a page number and the frame number where that page is stored in memory.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 32


TLB - Working
 Whenever a logical address is generated, its page number is searched for in the TLB.
 If the page number is found, it is known as a TLB hit. In this case the corresponding frame number is fetched from the TLB entry and used to form the physical address. The whole task takes only slightly longer than an unmapped memory reference would.
 If a match is not found, it is termed a TLB miss. In this case a memory reference to the page table must be made to get the frame number, and this entry is then added to the TLB.
 If the TLB is full when a new entry is added, some existing entry must be removed (the replacement strategy can range from least recently used (LRU) to random). (A small lookup sketch follows.)
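Illustration (not from the original slides): a minimal Python sketch of a TLB lookup with a simplistic eviction policy. The dict-based TLB, the function name and the capacity value are assumptions.

```python
def tlb_lookup(page, tlb, page_table, capacity=16):
    # tlb is a dict {page: frame} acting as the page-table cache.
    if page in tlb:
        return tlb[page], True        # TLB hit
    frame = page_table[page]          # TLB miss: consult the page table in memory
    if len(tlb) >= capacity:
        tlb.pop(next(iter(tlb)))      # evict some existing entry (policy simplified here)
    tlb[page] = frame                 # add the new translation to the TLB
    return frame, False

tlb, page_table = {}, {0: 5, 1: 2}
print(tlb_lookup(1, tlb, page_table))  # (2, False) -> miss, now cached
print(tlb_lookup(1, tlb, page_table))  # (2, True)  -> hit
```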

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 33


TLB - Working

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 34


TLB
 Effective Access Time (EAT) calculation:
• Q: Given an 80 percent hit ratio in the TLB, 20 nanoseconds to search the TLB and 100 nanoseconds to access main memory, what is the effective access time to find a page?
• A: The hit ratio is 80 percent and the miss ratio is 20 percent.

 Effective access time = H(TLB + MM) + M(TLB + PT + MM)
                        = H(TLB + MM) + M(TLB + 2MM)
                        = 0.8(20 + 100) + 0.2(20 + 100 + 100)
                        = 96 + 44 = 140 nanoseconds
(A small calculation sketch follows.)
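Illustration (not from the original slides): a minimal Python sketch that reproduces the arithmetic above; the function name is an assumption.

```python
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    # EAT = hit * (TLB + memory) + miss * (TLB + page-table access + memory)
    miss_ratio = 1 - hit_ratio
    return hit_ratio * (tlb_ns + mem_ns) + miss_ratio * (tlb_ns + 2 * mem_ns)

print(effective_access_time(0.8, 20, 100))  # 140.0 nanoseconds
```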

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 35


Disadvantages of Paging
1. Additional memory reference
 An extra memory reference is required to read information from the page table.
 Every instruction therefore requires two memory accesses: one for the page table and one for the instruction or data.
2. Size of the page table
 Page tables can be too large to keep in main memory.
 The page table contains an entry for every page in the logical address space, so the larger the process, the larger its page table.
3. Internal fragmentation
 A process size may not be an exact multiple of the page size.
 So some space remains unoccupied in the last page of a process, which results in internal fragmentation.
Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 36
Segmentation
 Segmentation works from the user's point of view: the logical address space of a process is a collection of code, data and stack.
 Here the logical address space of a process is divided into blocks of varying size, called segments.
 Each segment contains a logical unit of the process.
 Whenever a process is to be executed, its segments are moved from secondary storage to main memory.
 Each segment is allocated a chunk of free memory equal in size to that segment.
 The OS maintains one table, known as the segment table, for each process. It records the size of each segment and the location in memory where the segment has been loaded.
Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 37
Segmentation
• The logical address is divided into two parts:
1. Segment number: identifies the segment.
2. Offset: the actual location within the segment. (A small address-mapping sketch follows.)
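Illustration (not from the original slides): a minimal Python sketch of address mapping in segmentation. The function name and the example segment table (base, limit pairs) are assumptions.

```python
def segmentation_translate(segment, offset, segment_table):
    base, limit = segment_table[segment]   # segment table: segment -> (base, limit)
    if offset >= limit:                    # offset must lie within the segment
        raise MemoryError("trap: offset outside segment")
    return base + offset                   # physical address = base + offset

# Assumed example: segment 0 at base 1400 (limit 1000), segment 1 at base 6300 (limit 400).
seg_table = {0: (1400, 1000), 1: (6300, 400)}
print(segmentation_translate(1, 53, seg_table))  # 6353
```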

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 38


Address Mapping in Segmentation

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 39


Address Mapping in Segmentation

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 40


Virtual Memory
 Virtual memory is a technique that allows a process to execute even though it is only partially loaded in main memory.
 The basic idea behind virtual memory is that the combined size of the program, data and stack may exceed the amount of physical memory (main memory) available for it.
 The operating system keeps the parts of the program currently in use in main memory, and the rest on the disk.
 These program-generated addresses are called virtual addresses and form the virtual address space.
 The MMU (Memory Management Unit) maps the virtual addresses onto physical memory addresses.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 41


Advantage of Virtual Memory
 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory available; users would be able to write programs for an extremely large virtual address space.
 Since each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 42


Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 43
Virtual Memory: Hardware and control structures
 Virtual memory involves the separation of the logical memory perceived by the user from physical memory.
 This separation allows an extremely large virtual memory to be provided to programmers even when only a smaller amount of physical memory is available.
 Thus the programmer need not worry about the amount of memory available.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 44


Swapping
 Example: a system has a physical memory of size 32 MB. Now suppose there are 5 processes, each of size 8 MB, that all want to execute simultaneously. How is this possible?
 The solution is to use swapping. Swapping is a technique in which processes are moved between main memory and secondary memory (disk).
 Swapping uses a portion of secondary memory as a backing store, known as the swap area.
 The operation of moving a process from memory to the swap area is called "swap out", and moving it from the swap area back to memory is known as "swap in".

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 45


Swapping

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 46


Thrashing
 CPU utilization is directly linked to the degree of multiprogramming.
 As RAM is limited, the paging concept is used here.
 For example, suppose there are 100 processes and each process is divided into some number of pages.
 The degree of multiprogramming is at its maximum if one page of each process is placed in RAM.
 The maximum degree of multiprogramming is achieved here because every process has one page available in RAM, but this causes the maximum number of page faults.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 47


Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 48
Thrashing
 Due to this, the performance of the system decreases.
 Beyond a certain limit, thrashing occurs.
 To avoid this problem:
i. Increase the main-memory size.
ii. Use the long-term scheduler efficiently.
(Figure: RAM holding one page of each process – P1 page 1, P2 page 1, P3 page 1, P4 page 1, …, P100 page 1)

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 49


Virtual Memory

Virtual memory can be implemented in the following three ways:

1. Demand paging
2. Demand segmentation
3. Segmentation with paging

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 50


Demand Paging
 Demand paging is similar to a paging system with swapping, where processes reside in secondary memory: when we want to execute a process, we swap it into main memory.
 Rather than swapping the entire process into memory, a lazy swapper is used.
 A lazy swapper never swaps a page into memory unless that page will be needed.
 When a process is to be swapped in, the pager brings in only those pages that will be used by the process.
 This avoids reading unused pages and decreases both the swap time and the amount of physical memory needed.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 51


Demand Paging

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 52


Demand Paging- Page Fault
• The page table includes a valid–invalid bit for each page entry.
• If the bit is valid, the page is currently available in memory.
• If it is set to invalid, the page is either invalid or simply not present in main memory.
• If a process tries to access a page that is not in main memory, a page fault occurs.
• The pager generates a trap to the OS, which then tries to swap the page in.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 53


Demand Paging – Page Fault
The following steps are followed to handle a page fault:
1. Check the page table for the process to determine whether the reference is valid or invalid.
2. If the reference is invalid, terminate the process; if the page is valid but not currently in main memory, a trap is generated.
3. The OS determines the location of that page in the swap area.
4. It then uses the free-frame list to find a free frame and schedules a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, the page table is modified and the bit for that page is set to valid.
6. Restart the instruction that caused the fault. (A short sketch of this flow follows.)
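Illustration (not from the original slides): a simplified Python sketch of the six steps above. The dict-based page table, free-frame list and swap area are assumptions made for the example, not a real OS implementation.

```python
physical_memory = {}

def handle_page_fault(page, page_table, free_frames, swap_area):
    entry = page_table.get(page)
    if entry is None:                              # step 2: invalid reference
        raise RuntimeError("invalid reference: terminate process")
    if entry["valid"]:                             # page already in memory
        return entry["frame"]
    data = swap_area[page]                         # step 3: locate page in swap area
    frame = free_frames.pop()                      # step 4: take a free frame
    physical_memory[frame] = data                  #         "disk read" into that frame
    entry["frame"], entry["valid"] = frame, True   # step 5: update the page table
    return frame                                   # step 6: the instruction can restart

page_table = {0: {"valid": False, "frame": None}}
print(handle_page_fault(0, page_table, free_frames=[7], swap_area={0: "page-0 data"}))  # 7
```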

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 54


Demand Paging – Page Fault

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 55


Page Replacement Policies - Need

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 56


Page Replacement Policies - Basics
1. Find the location of the desired page on disk.

2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement algorithm to select a victim frame.
   - Write the victim frame to disk if it is dirty.

3. Bring the desired page into the (newly) free frame; update the page and frame tables.

4. Continue the process by restarting the faulting instruction.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 57
First In First Out (FIFO)
 The simplest page replacement algorithm.
 When a page must be replaced, the oldest page is chosen.
 One FIFO queue is maintained holding all pages currently in memory.
 Replace the page at the front (head) of the queue and add new pages at the rear (tail) of the queue. (A small simulation sketch follows the example below.)

Reference String : 2 3 2 1 5 2 4 5 3 2 5 2
 3 frames (3 pages can be in memory at a time per process)
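Illustration (not from the original slides): a minimal FIFO simulation in Python; the helper name fifo_page_faults is an assumption. Applied to the reference string above with 3 frames, it reports 9 page faults.

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()              # front of the queue = oldest page in memory
    faults = 0
    for page in reference_string:
        if page in frames:
            continue              # page hit: FIFO does not reorder on a hit
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()      # evict the oldest page
        frames.append(page)       # new page joins at the rear (tail)
    return faults

print(fifo_page_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # 9 page faults
```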

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 58




First In First Out (FIFO)
 Reference String : 7,0,1,2,0,3,0,4,2,3,0,3,1,2,0
 3 frames (3 pages can be in memory at a time)

(F = page fault, H = page hit)
Ref: 7  0  1  2  0  3  0  4  2  3  0  3  1  2  0
F1:  7  7  7  2  2  2  2  4  4  4  0  0  0  0  0
F2:     0  0  0  0  3  3  3  2  2  2  2  1  1  1
F3:        1  1  1  1  0  0  0  3  3  3  3  2  2
     F  F  F  F  H  F  F  F  F  F  F  H  F  F  H

Page faults/Page Miss = 12 Page hit = 3


 Miss Ratio = Num of miss/ num of reference => (12/15)*100= 80%
 Hit Ratio = Num of hit/ num of reference => (03/15)*100= 20%

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 60


First In First Out (FIFO)
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 3 frames (3 pages can be in memory at a time per process): 9 page faults
 Find the total page faults for the same reference string using 4 frames:
 10 page faults and 2 page hits – more frames, yet more faults (Belady's anomaly; see the sketch below).
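Using the fifo_page_faults sketch given earlier (an illustration, not part of the slides), the anomaly for this reference string can be reproduced directly:

```python
ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(ref, 3))  # 9 page faults
print(fifo_page_faults(ref, 4))  # 10 page faults: more frames, more faults (Belady's anomaly)
```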

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 61


Belady’s Anomaly in FIFO

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 62


Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 63
Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 64
F1: 3 0 0 0 5 5 5 1 1
F2: 5 5 1 1 1 6 6 6 3
F3: 6 6 6 3 3 3 0 0 5

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 65


First In First Out (FIFO)
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)

 15 page faults

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 66


First In First Out (FIFO)
Advantage

• Very simple.
• Easy to implement.

Disadvantage

• A page fetched into memory a long time ago may have


now fallen out of use.
• This reasoning will often be wrong, because there will
often be regions of program or data that are heavily used
throughout the life of a program.
• Those pages will be repeatedly paged in and out by the
FIFO algorithm.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 67


Least Recently Used (LRU)
It is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few.

Conversely, pages that have not been used for ages will probably remain unused for a long time.

This idea suggests a realizable algorithm: when a page fault occurs, throw out the page that has been unused for the longest time. (A small simulation sketch follows.)
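Illustration (not from the original slides): a minimal LRU simulation in Python; the helper name is an assumption. For Example 1 on the next slide (4 frames) it reports the same 8 page faults as the table.

```python
def lru_page_faults(reference_string, num_frames):
    frames = []                    # ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)    # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)        # the page is now the most recently used
    return faults

print(lru_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 8 page faults
```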

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 68


Least Recently Used (LRU)
 Example 1: Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (4 frames)
 8 page faults
(replace the least recently used page; F = page fault)

Ref: 1,2,3,4  1,2   5   1,2   3    4    5
F1:     1      1    1    1    1    1    5
F2:     2      2    2    2    2    2    2
F3:     3      3    5    5    5    4    4
F4:     4      4    4    4    3    3    3
       4 F   2 hit  F  2 hit  F    F    F

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 69


Least Recently Used (LRU)
 Example 2: Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process; F = page fault)

Ref: 7,0,1  2   0   3   0   4   2   3   0  3,0,3,2  1   2   0   1   7  0,1
F1:    7    2   2   2   2   4   4   4   0     0     1   1   1   1   1   1
F2:    0    0   0   0   0   0   0   3   3     3     3   3   0   0   0   0
F3:    1    1   1   3   3   3   2   2   2     2     2   2   2   2   7   7
      3 F   F  hit  F  hit  F   F   F   F   4 hit   F  hit  F  hit  F 2 hit

 Page faults = 12, page hits = 10

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 70


Least Recently Used (LRU)
Exercises:
1. Reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6
 With 4 frames (4 pages can be in memory at a time per process): 10 page faults
 With 3 frames (3 pages can be in memory at a time per process): 15 page faults
2. Reference string: 0 1 7 2 3 2 7 1 0 3

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 71


Most Recently Used (MRU)
Idea of MRU: replace the page that was most recently used.
 Example
 1. Reference string: 7,0,1,2,0,3,0,4,2,7,3
 2. Reference string: 0 1 7 2 3 2 7 1 0 3
3 frames (3 pages can be in memory at a time per process)

Worked answer for reference string 1 (F = page fault):
Ref: 7,0,1   2    0    3    0    4   2,7   3
F1:    7     7    7    7    7    7    7    3
F2:    0     0    0    3    0    4    4    4
F3:    1     2    2    2    2    2    2    2
      3 F    F   hit   F    F    F  2 hit  F

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 72


Optimal Page Replacement

It is the best page replacement policy.

The Optimal policy selects for replacement the page that will not be used for the longest period of time.

It is impossible to implement (it requires knowledge of the future), but it serves as a standard against which the other algorithms we study are compared. (A small sketch follows.)
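Illustration (not from the original slides): a minimal sketch of the optimal (Belady) policy in Python; the helper name is an assumption. For the 4-frame example a couple of slides later it reports 6 page faults.

```python
def optimal_page_faults(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                          # hit
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]     # when is each resident page next used?
        next_use = [future.index(p) if p in future else float("inf") for p in frames]
        victim = next_use.index(max(next_use))   # evict the page used farthest in the future
        frames[victim] = page
    return faults

print(optimal_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 6 page faults
```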

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 73


Optimal Page Replacement
 For example: 3 frames
 Reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6 (F = page fault)

Ref: 1,2,3  4  2,1  5   6  2,1,2  3   7  6,3  2   1  2,3  6
F1:    1    1   1   1   1    1    3   3   3   3   3   3   6
F2:    2    2   2   2   2    2    2   7   7   2   2   2   2
F3:    3    4   4   5   6    6    6   6   6   6   1   1   1
      3 F   F 2 hit F   F  3 hit  F   F 2 hit F   F 2 hit F

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 74


Optimal Page Replacement
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (check next pages)
 4 frames (F = page fault)

Ref: 1,2,3,4  1,2   5  1,2,3   4
F1:     1      1    1    1     4
F2:     2      2    2    2     2
F3:     3      3    3    3     3
F4:     4      4    5    5     5
       4 F   2 hit  F  3 hit   F

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 75


Optimal Page Replacement

We need an approximation of how likely each frame is to be accessed in the future.

If we base this on past behavior, we get a way to approximate future behavior.

Tracking memory accesses requires hardware support to be efficient.

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 76


Optimal Page Replacement
Advantages:

• Lowest number of page faults.
• Can improve system performance, since fewer page faults mean less swapping.

Disadvantage:

• Very difficult to implement (requires future knowledge).

Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 77


Unit- 4 Memory Management (Rashmi Rathi Upadhyay) 78
