
Chapter 7 Virtual Memory p2

The document discusses key aspects of memory management in operating systems, including virtual memory techniques, paging vs segmentation, and algorithms for replacement policies, cleaning policies, and load control. The main types of virtual memory policies are demand paging, which only loads pages when referenced, and prepaging, which loads additional pages to improve efficiency. Common replacement policies discussed are least recently used (LRU), first-in first-out (FIFO), and clock replacement.

Operating System Software

The design of the memory management
portion of an operating system depends on
three fundamental areas of choice:
• whether or not to use virtual memory techniques
• the use of paging or segmentation or both
• the algorithms employed for various aspects of
memory management
Policies for Virtual Memory
 Key issue: performance
 minimize page faults
1- Fetch Policy
 Determines when a page should be brought into
memory
 Two main types: demand paging and prepaging
Demand Paging
 Demand Paging
 only brings pages into main memory when a reference is made
to a location on the page
 many page faults when process is first started
 principle of locality suggests that as more and more pages are
brought in, most future references will be to pages that have
recently been brought in, and page faults should drop to a
very low level
Prepaging
 Prepaging
 pages other than the one demanded by a page fault are
brought in
 exploits the characteristics of most secondary memory devices
 if pages of a process are stored contiguously in secondary
memory (disk) it is more efficient to bring in a number of
pages at one time
 ineffective if extra pages are not referenced
 should not be confused with “swapping” (all pages are moved
out)
2- Placement Policy
 Determines where in real memory a process
piece is to reside
 Important design issue in a segmentation system
(best-fit, first-fit, etc.)
 In pure paging or combined paging with
segmentation, placement is irrelevant (transparent)
because the address-translation hardware performs
its functions with equal efficiency for any frame
3- Replacement Policy
 Deals with the selection of a page in main
memory to be replaced when a new page must be
brought in
 objective is that the page that is removed be the page
least likely to be referenced in the near future

 The more elaborate/sophisticated the replacement
policy, the greater the hardware and software
overhead to implement it
 When a frame is locked the page currently stored in that frame
may not be replaced
 kernel of the OS as well as key control structures are held
in locked frames
 I/O buffers and time-critical areas may be locked into
main memory frames
 locking is achieved by associating a lock bit with each
frame
Algorithms used for the selection of a page to
replace:
• Optimal
• Least recently used (LRU)
• First-in-first-out (FIFO)
• Clock
Optimal Policy
 Selects the page for which the time to the next
reference is the longest (requires perfect knowledge
of future events, so it cannot be implemented in practice)
 In the slides' example, produces three page faults
after the frame allocation has been filled
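Although the optimal policy cannot be implemented online, it can be simulated offline when the full reference string is known, which is how such fault counts are obtained. A minimal Python sketch (the function name and the sample reference string in the usage note are illustrative assumptions, not from the slides):

```python
def simulate_optimal(reference_string, num_frames):
    """Count page faults under the optimal (clairvoyant) policy."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                      # resident: no fault
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)           # free frame available
        else:
            # Evict the resident page whose next reference is furthest
            # in the future (pages never referenced again sort last).
            future = reference_string[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future
                         else len(future) + 1)
            frames[frames.index(victim)] = page
    return faults
```

For example, with the reference string 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 and three frames (a common textbook example, assumed here), this reports 6 faults: 3 to fill the empty frames plus 3 replacements.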
Least Recently Used (LRU)
 Replaces the page that has not been referenced for the longest
time
 By the principle of locality, this should be the page least likely
to be referenced in the near future
 Difficult to implement
 one approach is to tag each page with the time of its last
reference
 this requires a great deal of overhead
LRU Example
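The slide's example figure is not reproduced here, but LRU behaviour can be illustrated with a short Python simulation (a minimal sketch; the function name and the sample reference string are assumptions for illustration):

```python
from collections import OrderedDict

def simulate_lru(reference_string, num_frames):
    """Count page faults under least-recently-used replacement."""
    frames = OrderedDict()                  # key order = recency order
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults
```

With the reference string 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 and three frames, this yields 7 faults, one more than the optimal policy's 6 on the same string.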
First-in-First-out (FIFO)
 Treats page frames allocated to a process as a circular buffer
 Pages are removed in round-robin style
 simple replacement policy to implement
 Page that has been in memory the longest is replaced
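The round-robin behaviour above maps directly onto a queue; a minimal Python sketch (the function name and sample string are assumptions for illustration):

```python
from collections import deque

def simulate_fifo(reference_string, num_frames):
    """Count page faults when frames form a circular (FIFO) buffer."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # replace the longest-resident page
            frames.append(page)
    return faults
```

On the sample string 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 with three frames, FIFO incurs 9 faults, more than LRU's 7 on the same string, because age in memory is a poor proxy for likelihood of future use.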


Clock Policy
 Requires the association of an additional bit with each frame
 referred to as the use bit
 When a page is first loaded in memory or referenced, the use bit
is set to 1
 The set of frames is considered to be a circular buffer (page
frames visualized as laid out in a circle)
 Any frame with a use bit of 1 is passed over by the algorithm

When it comes time to replace a page, the operating system
scans the buffer to find a frame with a use bit set to 0. Each
time it encounters a frame with a use bit of 1, it resets that
bit to 0 and continues on. If any of the frames in the buffer
have a use bit of 0 at the beginning of this process, the first
such frame encountered is chosen for replacement. If all of the
frames have a use bit of 1, then the pointer will make one
complete cycle through the buffer, setting all the use bits to
0, and stop at its original position, replacing the page in
that frame.
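The scan just described can be sketched as a short Python simulation (a minimal illustrative sketch; the function name and the sample string in the usage note are assumptions):

```python
def simulate_clock(reference_string, num_frames):
    """Count page faults under the clock (use-bit) policy."""
    frames = [None] * num_frames          # circular buffer of resident pages
    use_bit = [0] * num_frames
    pointer = 0
    faults = 0
    for page in reference_string:
        if page in frames:
            use_bit[frames.index(page)] = 1   # referenced: set use bit
            continue
        faults += 1
        # Pass over frames whose use bit is 1, clearing the bit as we go;
        # stop at the first frame whose use bit is 0.
        while use_bit[pointer] == 1:
            use_bit[pointer] = 0
            pointer = (pointer + 1) % num_frames
        frames[pointer] = page                # load page, set its use bit
        use_bit[pointer] = 1
        pointer = (pointer + 1) % num_frames  # advance past the new page
    return faults
```

On the sample string 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 with three frames this gives 8 faults, between LRU's 7 and FIFO's 9 on the same string, reflecting clock's role as a cheap approximation to LRU.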
Combined Examples
4- Cleaning Policy
 Concerned with determining when a modified page should be
written out to secondary memory (disk)

Demand Cleaning
a page is written out to secondary memory only when it has been selected for
replacement

Precleaning
Writes modified pages before their page frames are needed; allows the writing
of pages in batches
5- Load Control
 Determines the number of processes
that will be resident in main memory
 referred to as the multiprogramming level
 Critical in effective memory management
 With too few processes, there will be many
occasions when all resident processes are blocked,
and much time will be spent in swapping
 With too many processes, thrashing will occur
