Chapter 4: Memory Management: Part 2: Paging Algorithms and Implementation Issues
[Figure: w(k,t) — pages A through H referenced at times t = 0, 4, 8, 15, 21, 22, 29, 30, 32, with page A referenced again at the end]
Algorithm                    Comment
OPT (Optimal)                Not implementable, but useful as a benchmark
NRU (Not Recently Used)      Crude
FIFO (First-In, First-Out)   Might throw out useful pages
Second chance                Big improvement over FIFO
Clock                        Better implementation of second chance
LRU (Least Recently Used)    Excellent, but hard to implement exactly
NFU (Not Frequently Used)    Poor approximation to LRU
Aging                        Good approximation to LRU, efficient to implement
Working Set                  Somewhat expensive to implement
WSClock                      Implementable version of Working Set
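The aging entry above can be sketched with per-page shift-register counters. This is a minimal sketch, not the slides' implementation; the 8-bit counter width and all names are assumptions. On each clock tick every counter is shifted right and the page's R bit is added at the high end, so recently referenced pages keep large counters.

```python
# Sketch of the aging algorithm (hypothetical names; 8-bit counters assumed).

NUM_BITS = 8

def age_tick(counters, referenced):
    """One clock tick: shift each page's counter right, add the R bit on top."""
    for page in counters:
        counters[page] >>= 1
        if referenced.get(page, False):
            counters[page] |= 1 << (NUM_BITS - 1)

def victim(counters):
    """Evict the page with the smallest counter (the LRU approximation)."""
    return min(counters, key=counters.get)

# Example: page A is referenced every tick, page B never.
counters = {"A": 0, "B": 0}
for _ in range(3):
    age_tick(counters, {"A": True, "B": False})
print(victim(counters))  # B (counter still 0), A's counter is 0b11100000
```

The shift means old references decay: a page referenced long ago but not recently ends up with a smaller counter than one referenced just now, which is why aging approximates LRU.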
FIFO with 3 page frames (9 page faults):
Page referenced: 0 1 2 3 0 1 4 0 1 2 3 4
Youngest page:   0 1 2 3 0 1 4 4 4 2 3 3
                   0 1 2 3 0 1 1 1 4 2 2
Oldest page:         0 1 2 3 0 0 0 1 4 4

FIFO with 4 page frames (10 page faults):
Page referenced: 0 1 2 3 0 1 4 0 1 2 3 4
Youngest page:   0 1 2 3 3 3 4 0 1 2 3 4
                   0 1 2 2 2 3 4 0 1 2 3
                     0 1 1 1 2 3 4 0 1 2
Oldest page:           0 0 0 1 2 3 4 0 1

With this reference string, FIFO faults more often with four frames than with three: Belady's anomaly.
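The two FIFO traces above can be reproduced with a short simulation. A minimal sketch (the function name is mine, not from the slides):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` page frames."""
    queue = deque()   # oldest page at the left
    faults = 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()   # evict the page that arrived first
            queue.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, more faults (Belady's anomaly)
```

Note that a hit does not reorder the queue: FIFO evicts by arrival time, not by recency, which is exactly what makes the anomaly possible.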
Reference string: 0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 1 3 4 1

LRU "stack" after each reference (most recently used page on top; the top row equals the reference string because the referenced page always moves to the top):
0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 1 3 4 1
  0 2 1 3 5 4 6 3 7 4 7 7 3 3 5 3 3 3 1 7 1 3 4
    0 2 1 3 5 4 6 3 3 4 4 7 7 7 5 5 5 3 3 7 1 3
      0 2 1 3 5 4 6 6 6 6 4 4 4 7 7 7 5 5 5 7 7
        0 2 1 1 5 5 5 5 5 6 6 6 4 4 4 4 4 4 5 5
          0 2 2 1 1 1 1 1 1 1 1 6 6 6 6 6 6 6 6
            0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
                0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
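The stack contents above can be regenerated by simulation: on each reference the page is moved (or inserted) to the top of a list. A minimal sketch (names are mine):

```python
def lru_stack_trace(refs):
    """Record the LRU stack after each reference (most recently used on top)."""
    stack, trace = [], []
    for page in refs:
        if page in stack:
            stack.remove(page)
        stack.insert(0, page)      # referenced page moves to the top
        trace.append(list(stack))
    return trace

refs = [0, 2, 1, 3, 5, 4, 6, 3, 7, 4, 7, 3, 3, 5, 5, 3, 1, 1, 1, 7, 1, 3, 4, 1]
trace = lru_stack_trace(refs)
print(trace[7])  # [3, 6, 4, 5, 1, 2, 0] -- the 8th column of the table
```

Each column of the table is one entry of `trace`, read top to bottom.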
CS 1550, cs.pitt.edu (originally modified by Ethan L. Miller), Chapter 4
Stack algorithms
LRU is an example of a stack algorithm.
For stack algorithms:
Any page in memory with m physical pages is also in memory with m+1 physical pages.
Increasing memory size is guaranteed to reduce (or at least not increase) the number of page faults.
Stack algorithms do not suffer from Belady's anomaly.
Distance of a reference == position of the page in the stack before the reference was made.
Distance is ∞ if no reference had been made to the page before.
Distance depends on reference string and paging algorithm: might be different for LRU and optimal (both stack algorithms).
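The distance definition above can be computed for LRU directly from the stack simulation. A minimal sketch (names are mine; `math.inf` stands in for the "no previous reference" case):

```python
import math

def distances(refs):
    """LRU distance string: the page's 1-based position in the stack before
    each reference, or infinity on the first reference to that page."""
    stack, out = [], []
    for page in refs:
        if page in stack:
            out.append(stack.index(page) + 1)
            stack.remove(page)
        else:
            out.append(math.inf)
        stack.insert(0, page)   # referenced page moves to the top
    return out

refs = [0, 2, 1, 3, 5, 4, 6, 3, 7]
print(distances(refs))  # first 7 references and the last are inf; the
                        # repeat of page 3 has distance 4 (4th from the top)
```

Distance strings are useful because the fault count for any memory size m can be read off as the number of references with distance greater than m.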
More address space?
One address space for data, another for code
[Figure: code space and data space as two separate address spaces, each starting at address 0]
Code & data separated
More complex in hardware
Less flexible
CPU must handle instructions & data differently
[Figure: page fault handling with an external pager. The user process and the external pager run in user space; the fault handler and the MMU handler run in kernel space. 1. Page fault; 2. Page needed; 4. Page arrives; 5. Here is page!; 6. Map in page]
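The numbered steps can be sketched as a toy interaction between a faulting process, a kernel fault handler, and a user-level pager. This is only a model of the control flow, not a real kernel API; every name here is invented.

```python
# Toy model of external paging (all names hypothetical).
BACKING_STORE = {7: "contents of page 7"}  # pages held by the external pager
page_table = {}                            # virtual page -> mapped contents

def external_pager(page):
    """User-space pager: fetch the page from backing store (steps 2-5)."""
    return BACKING_STORE[page]

def fault_handler(page):
    """Kernel fault handler: ask the pager, then map the page (steps 1, 6)."""
    contents = external_pager(page)   # 2. page needed ... 5. here is page!
    page_table[page] = contents       # 6. map in page
    return contents

def access(page):
    if page not in page_table:        # 1. page fault
        fault_handler(page)
    return page_table[page]

print(access(7))  # "contents of page 7", faulted in via the pager
```

The point of the split is that replacement policy and backing-store layout live in user space, while only fault delivery and mapping stay in the kernel.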