Chapter 8: Virtual Memory (CS 472 Operating Systems, Indiana University - Purdue University Fort Wayne)
Virtual memory
Pages and/or segments are loaded on demand
Allows more processes to be active at a time, since only part of each process is in memory at any moment
One process may even be larger than all of main memory
The resident set is the set of pages or segments currently loaded in memory
Virtual memory is possible because of the principle of locality
Principle of locality
References to program instructions and data within a process tend to cluster
Only a few pages of a process are needed over a short period of time
It is possible to make intelligent guesses about which pages will be needed in the future
This suggests that virtual memory may work efficiently
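As a small C illustration (not from the slides), the two loops below scan the same 4 MB array: the row-major loop keeps successive references clustered on the same page, while the column-major loop lands each successive reference on a different page, so the set of pages needed over a short interval is much larger.

    #include <stdio.h>

    #define ROWS 1024
    #define COLS 1024

    static int a[ROWS][COLS];     /* about 4 MB, spread over many 4 KB pages */

    int main(void)
    {
        long sum = 0;
        int i, j;

        /* Row-major order: successive references stay on the same page,
           so only a handful of pages are needed over any short interval. */
        for (i = 0; i < ROWS; i++)
            for (j = 0; j < COLS; j++)
                sum += a[i][j];

        /* Column-major order: each successive reference lands on a
           different page, so far more pages are touched per unit time. */
        for (j = 0; j < COLS; j++)
            for (i = 0; i < ROWS; i++)
                sum += a[i][j];

        printf("%ld\n", sum);
        return 0;
    }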
Thrashing
Thrashing is excessive page fault activity, particularly when pages are frequently replaced just before they are needed again
The processor spends most of its time swapping pages rather than executing user instructions
Each page table entry holds a frame number, per-page protection bits, a valid bit, and a modify bit; bits 0-25 hold a 26-bit disk address if the page is not loaded
Valid bit (v): if the page is not loaded, generate a page fault and get the disk address from bits 0-25
Modify bit (m): set with the first write to the page in memory; avoids writing a clean page back to disk
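A minimal C sketch of such an entry; the field widths are chosen only to match the 26-bit disk address described above, so the exact layout is an assumption rather than a real architecture.

    #include <stdint.h>

    /* Illustrative page table entry; widths are assumptions. */
    typedef struct {
        uint32_t frame_or_disk : 26;  /* frame number, or the 26-bit disk
                                         address when the page is not loaded */
        uint32_t protection    : 4;   /* per-page protection bits            */
        uint32_t m             : 1;   /* modify bit: set on the first write;
                                         a clean page needs no write-back    */
        uint32_t v             : 1;   /* valid bit: 0 means page fault, and
                                         bits 0-25 give the disk address     */
    } pte_t;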
[Figure: example virtual address space layout, showing user space, stack space, system space, shared, and unused regions]
Note that each logical memory reference now requires two physical references.
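The two references show up directly in a C sketch of the translation step, assuming 4 KB pages and a single-level page table held in memory: the first access fetches the PTE, the second fetches the data at the translated address.

    #include <stdint.h>

    #define PAGE_SHIFT 12                       /* assumes 4 KB pages */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

    /* Physical reference #1 fetches the PTE (here just a frame number)
       from the page table; physical reference #2 then fetches the data
       at the address this function returns. */
    uint32_t translate(const uint32_t *page_table, uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;  /* virtual page number */
        uint32_t offset = vaddr & PAGE_MASK;
        uint32_t frame  = page_table[vpn];      /* memory reference #1 */
        return (frame << PAGE_SHIFT) | offset;  /* address for ref. #2 */
    }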
Note that each logical memory reference now requires three physical references.
A hardware solution is needed to keep address translation from running too slowly.
[Figure: table mapping page numbers (Page #s) to page table entries (PTEs)]
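That hardware solution is the translation lookaside buffer (TLB) mentioned later in these slides. A software sketch of the idea follows; a real TLB is associative lookup hardware, so the linear search here is only illustrative. On a hit, the frame number comes straight from the cached entry and the extra page-table references are avoided.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64

    struct tlb_entry {
        uint32_t vpn;     /* virtual page number (the tag) */
        uint32_t frame;   /* cached frame number           */
        bool     valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* On a hit the translation is complete; on a miss the page tables
       must still be walked and the result cached for next time. */
    bool tlb_lookup(uint32_t vpn, uint32_t *frame)
    {
        int i;
        for (i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;
                return true;      /* TLB hit  */
            }
        }
        return false;             /* TLB miss */
    }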
Other issues
Cache memory
A PTE may be in the TLB, in real memory, or on disk
Instructions and data may be in the processor cache, in real memory, or on disk
Page size
Large pages
more internal fragmentation on the last page
the resident set (fewer, coarser pages) is less able to adapt to the principle of locality, so more page faults
Small pages
larger page tables
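For a sense of scale (a worked example, not from the slides): with a 32-bit virtual address space, 4 KB pages imply up to 2^20 = 1,048,576 page table entries per process, while 64 KB pages imply only 2^16 = 65,536 entries; the price is that a 64 KB page can waste nearly 64 KB to internal fragmentation in the last page.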
Combined paging/segmentation
Paging is transparent to the programmer
Segmentation is visible to the programmer
Each segment is broken into fixed-size pages
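A C sketch of the resulting two-step translation, under stated assumptions (4 KB pages, one page table per segment, and the referenced page already resident): the programmer-visible address names a segment and an offset; the offset is then transparently split into a page number and a byte offset and translated through that segment's page table.

    #include <stdint.h>

    #define PAGE_SHIFT 12                        /* assumes 4 KB pages */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

    struct segment_entry {
        uint32_t *page_table;   /* each segment has its own page table */
        uint32_t  length;       /* segment length, in pages            */
    };

    uint32_t translate(const struct segment_entry *seg_table,
                       uint32_t seg, uint32_t offset)
    {
        const struct segment_entry *s = &seg_table[seg];
        uint32_t page = offset >> PAGE_SHIFT;
        uint32_t byte = offset & PAGE_MASK;

        if (page >= s->length)
            return 0;                    /* length violation: a real MMU would trap */
        /* assumes the page is resident, i.e. the PTE holds a frame number */
        return (s->page_table[page] << PAGE_SHIFT) | byte;
    }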
Fetch policy
Determines when a page should be brought into memory
Demand paging brings a page into main memory only when a reference is made to a location on that page (see the sketch below)
This results in many page faults when a process is first started
Most policies predict future behavior on the basis of past behavior
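A sketch of a demand-paging fault handler; the helper functions are hypothetical placeholders, not a real kernel API. Nothing is loaded until a reference faults on it, which is why a freshly started process takes a burst of faults before the pages it actually uses have each been touched once.

    #include <stdint.h>

    /* Hypothetical helpers, assumed only for this sketch. */
    extern uint32_t allocate_frame(void);                  /* may run the replacement policy */
    extern void     read_page_from_disk(uint32_t disk_addr, uint32_t frame);
    extern void     set_pte(uint32_t vpn, uint32_t frame); /* mark valid, clear modify bit   */

    void page_fault_handler(uint32_t vpn, uint32_t disk_addr)
    {
        uint32_t frame = allocate_frame();
        read_page_from_disk(disk_addr, frame);   /* process blocks during the I/O     */
        set_pte(vpn, frame);                     /* faulting instruction is restarted */
    }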
Replacement policy
Consider only the set of pages available to be swapped out
Ignore locked frames
So, use the principle of locality to improve the chances that the replaced page will not be referenced again in the near future
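One common way to act on that idea is a per-frame use (reference) bit, as in the clock algorithm; the sketch below is illustrative and not taken from the slides. A page whose use bit is still set was referenced recently, so it is skipped (and its bit cleared), and the victim is a frame that has not been referenced since the hand last swept past it.

    #include <stdbool.h>
    #include <stddef.h>

    #define NFRAMES 64

    struct frame {
        bool in_use;    /* frame currently holds a page           */
        bool locked;    /* locked frames are never candidates     */
        bool use_bit;   /* set by the hardware on every reference */
    };

    static struct frame frames[NFRAMES];
    static size_t hand;   /* the clock hand */

    /* Assumes at least one unlocked, in-use frame exists. */
    size_t choose_victim(void)
    {
        for (;;) {
            struct frame *f = &frames[hand];
            size_t candidate = hand;

            hand = (hand + 1) % NFRAMES;
            if (!f->in_use || f->locked)
                continue;                 /* ignore empty and locked frames   */
            if (f->use_bit) {
                f->use_bit = false;       /* recently used: second chance     */
                continue;
            }
            return candidate;             /* not referenced recently: replace */
        }
    }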
Page buffering
A replaced page is added to a pool of frames that have been freed but not yet overwritten
The pool is separated into two lists:
a free page list of unmodified (clean) pages, and a modified page list of pages that still need to be written back to disk
Modified pages can be cleaned in batches and moved to the free page list
This decouples cleaning from replacement
Some page faults can be handled by simple bookkeeping: a faulted page that is still on either list is reclaimed with no disk I/O
Page buffering is best used in conjunction with the FIFO page replacement policy
Used by the VAX/VMS operating system
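A minimal sketch of that bookkeeping, with illustrative names: if the faulted page is still on either list, it is unlinked and its frame handed back to the process, so the fault costs no disk I/O at all.

    #include <stddef.h>

    struct buffered_page {
        size_t vpn;                     /* which virtual page the frame still holds */
        size_t frame;
        struct buffered_page *next;
    };

    static struct buffered_page *free_list;      /* clean: reusable immediately    */
    static struct buffered_page *modified_list;  /* dirty: must be written to disk */

    /* Unlink and return the entry for vpn, or NULL if it is not on the list. */
    static struct buffered_page *scan(struct buffered_page **list, size_t vpn)
    {
        struct buffered_page **p;
        for (p = list; *p != NULL; p = &(*p)->next) {
            if ((*p)->vpn == vpn) {
                struct buffered_page *hit = *p;
                *p = hit->next;
                return hit;
            }
        }
        return NULL;
    }

    /* The "simple bookkeeping" case of a page fault: reclaim from either list. */
    struct buffered_page *reclaim(size_t vpn)
    {
        struct buffered_page *hit = scan(&free_list, vpn);
        return hit != NULL ? hit : scan(&modified_list, vpn);
    }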
Cleaning policy
Demand cleaning
A page is written out only when it has been selected for replacement
Precleaning
Pages are written out (cleaned) in batches
It works best to combine precleaning with page buffering
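A sketch of a precleaning pass; the helper functions are hypothetical, and the point is only that dirty frames are written in one batched disk operation and then, with page buffering, move from the modified list to the free list while staying reclaimable. Demand cleaning would instead write each page at the moment it is chosen for replacement.

    #include <stddef.h>

    /* Hypothetical helpers, assumed only for this sketch. */
    extern size_t take_modified_batch(size_t max, size_t frames_out[]);
    extern void   write_frames_to_disk(const size_t frames[], size_t n);
    extern void   move_to_free_list(const size_t frames[], size_t n);

    void precleaner_pass(void)
    {
        size_t frames[32];
        size_t n = take_modified_batch(32, frames);

        if (n > 0) {
            write_frames_to_disk(frames, n);   /* one batched disk operation */
            move_to_free_list(frames, n);
        }
    }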
Working set
Definition: the working set W(t, Δ) of a process is the set of pages that have been referenced in the previous Δ virtual time units, as of virtual time t
The page fault rate is low if | resident set | ≥ | W(t, Δ) |
Goal: allow as many processes to be active as possible, consistent with a low fault rate
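A small C sketch that computes |W(t, Δ)| from a reference string, treating virtual time as the index into the process's own reference string.

    #include <stdio.h>
    #include <stddef.h>

    /* Number of distinct pages referenced in the delta references before t. */
    size_t working_set_size(const int refs[], size_t t, size_t delta)
    {
        int seen[1024] = {0};          /* assumes page numbers below 1024 */
        size_t start = (t >= delta) ? t - delta : 0;
        size_t count = 0;
        size_t i;

        for (i = start; i < t; i++) {
            if (!seen[refs[i]]) {
                seen[refs[i]] = 1;
                count++;
            }
        }
        return count;
    }

    int main(void)
    {
        int refs[] = {1, 2, 1, 3, 2, 4, 4, 4, 2, 5};

        /* At t = 8 with delta = 4 the window covers pages 2, 4, 4, 4,
           so the working set is {2, 4} and this prints 2. */
        printf("%zu\n", working_set_size(refs, 8, 4));
        return 0;
    }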
Multiprogramming level