Lecture 10b
Chapter 8
Segmentation – Virtual Memory Implications
Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
Segment Organization
Each segment table entry contains the starting address of the corresponding segment in main memory and the length of the segment
A present bit is needed to determine whether the segment is already in main memory
A modify bit is needed to determine whether the segment has been modified since it was loaded into main memory
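As a rough illustration of the entry layout above (a sketch, not from the slides; the field names and values are arbitrary assumptions), one entry could be modeled as:

```python
# Minimal sketch (assumption, not from the slides): one segment table entry
# holding the fields listed above; names and widths are purely illustrative.
from dataclasses import dataclass

@dataclass
class SegmentTableEntry:
    base: int               # starting address of the segment in main memory
    length: int             # length of the segment, used for bounds checking
    present: bool = False   # is the segment currently in main memory?
    modified: bool = False  # has the segment been changed since it was loaded?

entry = SegmentTableEntry(base=0x4000, length=8192, present=True)
offset = 100
assert offset < entry.length    # every offset must fall within the segment length
```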
Segment Tables
Combined Paging and Segmentation
Combined Segmentation and Paging
Thrashing
OS Policies for Virtual Memory
Fetch policy
Placement policy
Replacement policy
Resident set management
Cleaning policy
Load control
Fetch Policy
Determines when a page should be brought into memory
Two common alternatives:
Demand paging brings a page into main memory only when a reference is made to a location on that page
• Many page faults when a process is first started
Pre-paging brings in additional pages beyond the one that caused the fault
• More efficient to bring in pages that reside contiguously on the disk (a single seek and rotational delay can cover several pages)
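A minimal sketch of the two alternatives (not from the slides; the BackingStore stub, page_table dictionary, and cluster size of 4 are assumptions made only for illustration):

```python
# Demand paging vs. pre-paging in a toy page-fault handler.
class BackingStore:
    def read(self, page):
        return object()              # stand-in for one page read from disk

PREPAGE_CLUSTER = 4                  # assumed number of contiguous pages per fetch

def handle_fault(page, page_table, store, prepaging=False):
    """Load the faulting page; with pre-paging, also load its disk neighbours."""
    pages = [page]
    if prepaging:
        # Contiguous pages are cheap to read together: a single seek and
        # rotational delay covers the whole cluster.
        pages += [page + i for i in range(1, PREPAGE_CLUSTER)
                  if page + i not in page_table]
    for p in pages:
        page_table[p] = store.read(p)    # page is now resident

table = {}
handle_fault(7, table, BackingStore(), prepaging=True)
print(sorted(table))                 # -> [7, 8, 9, 10]
```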
Replacement Policy
Basic Replacement Algorithms
Optimal policy
Least recently used (LRU) policy
First-in-first-out (FIFO) policy
Clock policy
Basic Replacement Algorithms
Optimal policy
– Selects for replacement the page whose next reference is furthest in the future
– Impossible to implement, because it requires perfect knowledge of future references
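A minimal sketch (not from the slides) of how the optimal victim would be chosen; the fact that it needs the future reference string is exactly why the policy cannot be implemented:

```python
def opt_victim(frames, future_refs):
    """Return the resident page whose next reference is furthest in the future."""
    def next_use(page):
        try:
            return future_refs.index(page)   # distance to the next reference
        except ValueError:
            return float("inf")              # never referenced again
    return max(frames, key=next_use)

# Frames hold pages 2, 3, 5 and the future references are 3 2 3 4 2
print(opt_victim([2, 3, 5], [3, 2, 3, 4, 2]))   # -> 5 (never used again)
```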
Basic Replacement Algorithms
Least Recently Used (LRU)
– Replaces the page that has not been referenced for the longest time
– By the principle of locality, this should be the page least likely to be referenced in the near future
– Each page could be tagged with the time of its last reference, but this would require a great deal of overhead
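A minimal simulation sketch (not from the slides), using an ordered dictionary as the "time of last reference" bookkeeping mentioned in the last bullet; the reference string and frame count are arbitrary:

```python
from collections import OrderedDict

def simulate_lru(refs, n_frames):
    """Return the number of page faults for reference string `refs`."""
    frames = OrderedDict()      # keys ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced: now most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(simulate_lru([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))   # -> 7
```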
Basic Replacement Algorithms
First-in, first-out (FIFO)
– Treats the page frames allocated to a process as a circular buffer
– Pages are removed in round-robin style
– Simplest replacement policy to implement
– The page that has been in memory the longest is replaced
– However, such pages may be needed again very soon
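A minimal simulation sketch (not from the slides), treating the allocated frames as the circular buffer described above; the same arbitrary reference string is used so the fault counts can be compared:

```python
def simulate_fifo(refs, n_frames):
    """Return the number of page faults for reference string `refs`."""
    frames = [None] * n_frames      # circular buffer of allocated frames
    pointer = 0                     # next frame to fill or replace, round-robin
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            frames[pointer] = page                  # replace the oldest page
            pointer = (pointer + 1) % n_frames
    return faults

print(simulate_fifo([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))   # -> 9
```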
Basic Replacement Algorithms
The LRU policy has a low page fault rate, but selecting which page to replace is expensive.
The FIFO policy is simple and fast to run, but its page fault rate is high because it does not take the past reference pattern into account: FIFO does not treat a memory reference as "usage", it only considers when each page arrived.
The clock policy aims to approximate the LRU policy with a much more efficient algorithm.
Associate an additional bit, called the use bit, with each frame.
A use bit of 0 means the page residing in that frame has not been used recently; it is treated as the least recently used (LRU) page and shall be replaced.
A use bit of 1 means the page residing in that frame is not treated as the LRU page; when the pointer passes it, the use bit is reset to 0, so every page gets at most one extra "life cycle" based on its recent usage.
Basic Replacement Algorithms
Clock Policy
The set of frames that are candidates for replacement is treated as a circular buffer, with which an arrow (pointer) is associated.
The arrow keeps track of the next frame to fill or replace, in round-robin fashion.
Associate an additional bit, called the use bit, with each frame.
When a page is first loaded into memory (the arrow is pointing at an empty frame), its use bit is set to 1 and the arrow advances to the next frame.
When a resident page is referenced (no page fault), the use bit of the referenced page is set to 1. Note: the arrow is neither used nor moved during a page reference.
Basic Replacement Algorithms
Clock Policy
When it is time to replace a page (i.e. on a page fault):
If the arrow points at a frame whose use bit is 1, reset the use bit to 0 and advance the arrow to the next frame; keep doing so until the arrow points at a frame whose use bit is 0.
When the arrow points at a frame whose use bit is 0, the page in that frame is replaced, the use bit of the newly loaded page is set to 1, and the arrow advances to the next frame.
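Putting the two slides together, a minimal simulation sketch (not from the slides) of the clock policy, again with the same arbitrary reference string:

```python
def simulate_clock(refs, n_frames):
    """Return the number of page faults for reference string `refs`."""
    frames = [None] * n_frames
    use_bit = [0] * n_frames
    arrow = 0                                   # next frame to consider
    faults = 0
    for page in refs:
        if page in frames:
            use_bit[frames.index(page)] = 1     # reference: set use bit, arrow untouched
            continue
        faults += 1
        # Skip frames whose use bit is 1, clearing the bit as the arrow passes.
        while use_bit[arrow] == 1:
            use_bit[arrow] = 0
            arrow = (arrow + 1) % n_frames
        frames[arrow] = page                    # fill or replace this frame
        use_bit[arrow] = 1                      # newly loaded page gets use bit 1
        arrow = (arrow + 1) % n_frames
    return faults

print(simulate_clock([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))   # -> 8
```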
Basic Replacement Algorithms
Combined Examples
Comparison of Replacement Algorithms
Resident Set Size
How many pages to bring in?
How much memory to allocate to a process?
Two policies:
– Fixed-allocation
• Gives a process a fixed number of frames within which to execute
• When a page fault occurs, one of the pages of that process must be replaced
– Variable-allocation
• The number of frames allocated to a process varies over its lifetime (if the page fault rate is high, additional page frames are given)
Replacement Scope
Local replacement policy: chooses only among the resident pages of the process that generated the page fault (the resident set is the portion of a process that is in main memory)
Global replacement policy: considers all unlocked pages in main memory as candidates for replacement
Fixed-allocation implies a local replacement policy
Variable-allocation can employ either a local or a global replacement policy
Resident Set Management Summary
Fixed Allocation, Local Scope
Decide ahead of time the amount of allocation to give a process
Drawbacks:
If the allocation is too small, there will be a high page fault rate
If the allocation is too large, there will be too few programs in main memory
Variable Allocation, Global Scope
Easiest to implement
Adopted by many operating systems
The operating system keeps a list of free frames
A free frame is added to the resident set of a process when a page fault occurs
If no free frame is available, the OS replaces a page from another process
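A minimal sketch (not from the slides; the frame pool size and the pick_global_victim helper are invented for illustration) of how a fault is handled under this policy:

```python
free_frames = list(range(64))      # assumed pool of free frame numbers
resident_set = {}                  # pid -> set of frames currently held

def on_page_fault(pid, pick_global_victim):
    """Give the faulting process a frame, growing its resident set when possible."""
    if free_frames:
        frame = free_frames.pop()                  # resident set simply grows
    else:
        victim_pid, frame = pick_global_victim()   # steal a frame from any process
        resident_set[victim_pid].discard(frame)
    resident_set.setdefault(pid, set()).add(frame)
    return frame
```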
Variable Allocation, Local Scope
Cleaning Policy
The opposite of the fetch policy: determines when a modified page should be written out to secondary memory
Demand cleaning
A page is written out only when it has been selected for replacement
Pre-cleaning
Pages are written out in batches, before their frames are needed
Cleaning Policy
Best approach uses page buffering
– Replaced pages are placed in two lists
• Modified and unmodified
– Pages in the modified list are periodically written out in batches
– Pages in the unmodified list are either reclaimed if referenced again or lost when their frames are assigned to other pages
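A minimal sketch (not from the slides; the write_batch callback is hypothetical) of the two-list page buffering scheme:

```python
modified_list = []      # replaced pages that still have to be written to disk
unmodified_list = []    # replaced pages whose frames can be reused immediately

def on_replace(page, dirty):
    """Instead of acting at once, queue a replaced page on the appropriate list."""
    (modified_list if dirty else unmodified_list).append(page)

def periodic_flush(write_batch):
    """Write the whole modified list in one batch, then treat those pages as clean."""
    if modified_list:
        write_batch(list(modified_list))        # hypothetical batched disk write
        unmodified_list.extend(modified_list)   # frames are now safe to reuse
        modified_list.clear()
```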
Load Control
Determines the number of processes that will be resident in main memory (the multiprogramming level)
With too few processes, there will be many occasions when all processes are blocked, and much time will be spent swapping
With too many processes, thrashing will occur: on average, the resident set of each process will be inadequate and frequent faulting will occur
Multiprogramming
Process Suspension
Process with smallest resident set
– This process requires the least future effort to reload
Largest process
– Obtains the most free frames
Process with the largest remaining execution window
Linux Memory Management
Contiguous blocks of pages are mapped into contiguous blocks of frames using the buddy system
The kernel maintains a list of contiguous frame groups (a group may consist of 1, 2, 4, 8, 16, or 32 frames)
Page replacement uses a clock-like algorithm, but the use bit is replaced with an age variable
The age variable is incremented each time the page is accessed
Linux periodically decrements the age variables
A page whose age variable equals zero can be replaced
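A minimal sketch (an assumption for illustration, not the actual Linux kernel code) of the age-variable idea described above:

```python
ages = {}   # page -> age counter (plays the role of the use bit)

def on_access(page):
    """Each access to a page increments its age, marking it as recently used."""
    ages[page] = ages.get(page, 0) + 1

def periodic_aging():
    """Periodically age every page downward; pages that reach 0 become candidates."""
    for page in ages:
        ages[page] = max(0, ages[page] - 1)
    return [page for page, age in ages.items() if age == 0]   # replacement candidates
```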