Chapter03-Memory Management
All rights herein belong to PSB Academy and are protected by copyright laws.
Reproduction and distribution without permission is prohibited.
Unless prior approval is obtained from lecturers, students are not allowed to record (audio
or video) lessons. Students are allowed to download and use lesson materials from PSB
Academy (including lecture recordings and presentation slides) only for their personal
revision. Different policies may apply for lesson materials by our academic and industry
partners - please check with your School for more information.
Modern Operating Systems
Fourth Edition
Chapter 3
Memory Management
Figure 3-1. Three simple ways of organizing memory with an operating system and one
user process. Other possibilities also exist.
Figure 3-2. Illustration of the relocation problem. (a) A 16-KB program. (b) Another 16-KB
program. (c) The two programs loaded consecutively into memory.
Figure 3-3. Base and limit registers can be used to give each process a separate address
space.
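As a rough illustration of Figure 3-3, the C sketch below shows the check a base-and-limit MMU performs on every memory reference. The register values and the fault handling are assumptions for the example, not taken from the slides.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-process relocation registers. */
static uint32_t base_reg  = 16384;  /* where the process starts in physical memory */
static uint32_t limit_reg = 16384;  /* size of the process's address space          */

/* Translate a virtual address the way base-and-limit hardware would. */
static uint32_t translate(uint32_t vaddr)
{
    if (vaddr >= limit_reg) {           /* reference outside the process: fault */
        fprintf(stderr, "protection fault at %u\n", (unsigned)vaddr);
        exit(EXIT_FAILURE);
    }
    return vaddr + base_reg;            /* add the base to relocate the reference */
}

int main(void)
{
    printf("virtual 100   -> physical %u\n", (unsigned)translate(100));
    printf("virtual 16383 -> physical %u\n", (unsigned)translate(16383));
    return 0;
}
```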
Figure 3-4. Memory allocation changes as processes come into memory and leave it.
The shaded regions are unused memory
Figure 3-5. (a) Allocating space for a growing data segment. (b) Allocating space for a
growing stack and a growing data segment.
Figure 3-6. (a) A part of memory with five processes and three holes. The tickmarks
show the memory allocation units. The shaded regions (0 in the bitmap) are free. (b) The
corresponding bitmap. (c) The same information as a list.
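To make the bitmap representation of Figure 3-6 concrete, here is a minimal C sketch of a first-fit search for a run of free allocation units; the 32-unit size and the bitmap contents are illustrative assumptions, not the figure's values.

```c
#include <stdint.h>
#include <stdio.h>

#define NUNITS 32               /* allocation units tracked by the bitmap (assumed) */

/* Bit i is 0 if unit i is free, 1 if allocated, matching the convention of Fig. 3-6. */
static uint32_t bitmap = 0xF00FF0FF;   /* example occupancy pattern */

static int bit(int i) { return (bitmap >> i) & 1; }

/* First-fit search: find the first run of n consecutive free units. */
static int find_run(int n)
{
    int run = 0;
    for (int i = 0; i < NUNITS; i++) {
        run = bit(i) ? 0 : run + 1;
        if (run == n)
            return i - n + 1;   /* index of the first unit in the run */
    }
    return -1;                  /* no hole large enough */
}

int main(void)
{
    printf("3 free units start at unit %d\n", find_run(3));
    return 0;
}
```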
Figure 3-8. The position and function of the MMU. Here the MMU is shown as being a
part of the CPU chip because it commonly is nowadays. However, logically it could be a
separate chip and was years ago.
Figure 3-9. The relation between virtual addresses and physical memory addresses is
given by the page table. Every page begins on a multiple of 4096 and ends 4095
addresses higher, so 4K–8K really means 4096–8191 and 8K–12K means 8192–12287.
Figure 3-10. The internal operation of the MMU with 16 4-KB pages.
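The address translation of Figures 3-9 and 3-10 can be sketched in a few lines of C: the upper bits of the virtual address select a page-table entry, the lower 12 bits pass through as the offset. The page-table contents below are illustrative, and the -1 entries stand in for pages that are not present; real hardware uses a present/absent bit instead.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u
#define NPAGES    16            /* 16 virtual pages of 4 KB, as in Fig. 3-10 */

/* Illustrative page table: entry i maps virtual page i to a physical frame,
 * or -1 if the page is not currently in memory (a reference would fault). */
static int page_table[NPAGES] = {
    2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1
};

static uint32_t mmu_translate(uint32_t vaddr)
{
    uint32_t vpage  = vaddr / PAGE_SIZE;   /* upper bits select the page     */
    uint32_t offset = vaddr % PAGE_SIZE;   /* lower 12 bits pass through     */
    if (vpage >= NPAGES || page_table[vpage] < 0) {
        fprintf(stderr, "page fault on virtual address %u\n", (unsigned)vaddr);
        exit(EXIT_FAILURE);
    }
    return (uint32_t)page_table[vpage] * PAGE_SIZE + offset;
}

int main(void)
{
    /* 8192 lies in virtual page 2, which maps to frame 6 -> physical 24576. */
    printf("virtual 8192 -> physical %u\n", (unsigned)mmu_translate(8192));
    return 0;
}
```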
Figure 3-13. (a) A 32-bit address with two page table fields. (b) Two-level page tables.
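A hedged sketch of the two-level lookup in Figure 3-13, assuming the usual split of a 32-bit address into a 10-bit PT1 field, a 10-bit PT2 field, and a 12-bit offset. The table layout and frame numbers are invented for the example, and present bits on the second level are omitted for brevity.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* 32-bit address split as in Fig. 3-13: 10-bit PT1 | 10-bit PT2 | 12-bit offset. */
#define PT1(va)    (((va) >> 22) & 0x3FF)
#define PT2(va)    (((va) >> 12) & 0x3FF)
#define OFFSET(va) ((va) & 0xFFF)

/* A second-level table holds 1024 frame numbers (illustrative layout). */
typedef struct { uint32_t frame[1024]; } pagetable_t;

/* The top-level table holds 1024 pointers to second-level tables;
 * a NULL pointer means the whole 4-MB region is unmapped. */
static pagetable_t *top_level[1024];

static int translate(uint32_t va, uint32_t *pa)
{
    pagetable_t *second = top_level[PT1(va)];
    if (second == NULL)
        return -1;                               /* page fault */
    *pa = second->frame[PT2(va)] * 4096u + OFFSET(va);
    return 0;
}

int main(void)
{
    static pagetable_t pt = { .frame = { [5] = 42 } };   /* page 5 of region 1 -> frame 42 */
    top_level[1] = &pt;

    uint32_t pa;
    uint32_t va = (1u << 22) | (5u << 12) | 0x123;       /* PT1=1, PT2=5, offset=0x123 */
    if (translate(va, &pa) == 0)
        printf("virtual %u -> physical %u\n", (unsigned)va, (unsigned)pa);
    return 0;
}
```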
Figure 3-14. Comparison of a traditional page table with an inverted page table.
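For contrast, an inverted page table keeps one entry per physical frame rather than one per virtual page, so a lookup must find which frame (if any) holds a given (process, virtual page) pair. The sketch below uses a linear search where a real system would use a hash table; the sizes and values are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define NFRAMES 8      /* one entry per physical frame (tiny, illustrative) */

/* Inverted page table: indexed by frame, recording who occupies that frame. */
struct ipte { int used; uint32_t pid, vpage; };
static struct ipte ipt[NFRAMES];

/* Linear search stands in for the hash table a real system would use. */
static int vpage_to_frame(uint32_t pid, uint32_t vpage)
{
    for (int f = 0; f < NFRAMES; f++)
        if (ipt[f].used && ipt[f].pid == pid && ipt[f].vpage == vpage)
            return f;
    return -1;          /* not in memory: page fault */
}

int main(void)
{
    ipt[3] = (struct ipte){ .used = 1, .pid = 7, .vpage = 100 };
    printf("pid 7, page 100 -> frame %d\n", vpage_to_frame(7, 100));
    return 0;
}
```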
Figure 3-15. Operation of second chance. (a) Pages sorted in FIFO order. (b) Page list if
a page fault occurs at time 20 and A has its R bit set. The numbers above the pages are
their load times.
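The second-chance policy of Figure 3-15 is easy to express over a small FIFO queue: the oldest page is evicted only if its R bit is clear; otherwise the bit is cleared and the page is moved to the tail as if newly loaded. The page names and R bits below are illustrative, not the figure's exact state.

```c
#include <stdio.h>

#define NPAGES 4

struct page { char name; int r; };      /* r is the referenced (R) bit */

/* FIFO queue of loaded pages, oldest first (contents are illustrative). */
static struct page fifo[NPAGES] = { {'A',1}, {'B',0}, {'C',0}, {'D',0} };

/* Second chance: give the oldest page another pass if its R bit is set. */
static char pick_victim(void)
{
    for (;;) {
        struct page oldest = fifo[0];
        if (oldest.r == 0)
            return oldest.name;          /* old and unreferenced: evict it */
        /* Referenced: clear R and move the page to the tail of the queue. */
        oldest.r = 0;
        for (int i = 0; i < NPAGES - 1; i++)
            fifo[i] = fifo[i + 1];
        fifo[NPAGES - 1] = oldest;
    }
}

int main(void)
{
    printf("evict page %c\n", pick_victim());   /* A has R=1, so B is evicted */
    return 0;
}
```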
Figure 3-17. The aging algorithm simulates LRU in software. Shown are six pages for five
clock ticks. The five clock ticks are represented by (a) to (e).
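A compact sketch of the aging counters from Figure 3-17: on every clock tick each counter is shifted right one bit and the page's R bit is ORed into the leftmost position, so recently used pages keep large counters. The 8-bit counter width is the usual choice rather than a requirement, and the R-bit patterns used here are made up.

```c
#include <stdint.h>
#include <stdio.h>

#define NPAGES 6     /* six pages, as in Fig. 3-17 */

static uint8_t counter[NPAGES];   /* 8-bit aging counters (width is illustrative) */

/* One clock tick: shift each counter right and add the R bit on the left. */
static void tick(const int r[NPAGES])
{
    for (int p = 0; p < NPAGES; p++)
        counter[p] = (uint8_t)((counter[p] >> 1) | (r[p] ? 0x80 : 0x00));
}

/* The page with the smallest counter is the one chosen for eviction. */
static int victim(void)
{
    int v = 0;
    for (int p = 1; p < NPAGES; p++)
        if (counter[p] < counter[v])
            v = p;
    return v;
}

int main(void)
{
    int ticks[3][NPAGES] = {          /* R bits observed at three clock ticks */
        {1,0,1,0,1,1}, {1,1,0,0,1,0}, {1,1,0,1,0,1}
    };
    for (int t = 0; t < 3; t++)
        tick(ticks[t]);
    printf("evict page %d\n", victim());
    return 0;
}
```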
Figure 3-18. The working set is the set of pages used by the k most recent memory
references. The function w(k, t) is the size of the working set at time t.
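One common way to approximate the working set in software is to compare each page's time of last use against the current virtual time, treating "the k most recent references" as "referenced within the last TAU ticks". In the sketch below the window TAU and the per-page timestamps are assumed values.

```c
#include <stdio.h>

#define NPAGES 8

/* Virtual time of the last reference to each page (illustrative values). */
static int last_use[NPAGES] = { 20, 3, 19, 7, 18, 1, 15, 2 };

/* A page belongs to the working set if it was used within the last TAU
 * ticks of virtual time; TAU approximates "the k most recent references". */
#define TAU 5

static int working_set_size(int now)
{
    int size = 0;
    for (int p = 0; p < NPAGES; p++)
        if (now - last_use[p] <= TAU)
            size++;
    return size;
}

int main(void)
{
    printf("working set size at t = 20 is %d pages\n", working_set_size(20));
    return 0;
}
```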
Figure 3-20. Operation of the WSClock algorithm. (a) and (b) give an example of what
happens when R = 1. (c) and (d) give an example of R = 0.
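A condensed sketch of one WSClock pass, assuming a circular list of in-memory pages with per-page R bits, last-use times, and dirty bits. A page with R = 1 is spared and its bit cleared; an old, clean page outside the working-set window is reclaimed on the spot. A full implementation would also schedule write-backs for old dirty pages, which is omitted here; all values are illustrative.

```c
#include <stdio.h>

#define NPAGES 4
#define TAU    50        /* working-set window in virtual-time units (assumed) */

struct page { int r, last_use, dirty; };

/* Circular list of in-memory pages with a clock hand (values illustrative). */
static struct page ring[NPAGES] = {
    { 1, 2014, 0 }, { 0, 1980, 1 }, { 0, 1213, 0 }, { 1, 2020, 0 }
};
static int hand = 0;

/* One WSClock pass over the ring, returning the frame chosen for reclaim. */
static int pick_victim(int now)
{
    for (int scanned = 0; scanned < 2 * NPAGES; scanned++) {
        struct page *p = &ring[hand];
        if (p->r) {
            p->r = 0;                              /* referenced recently: spare it   */
        } else if (now - p->last_use > TAU && !p->dirty) {
            return hand;                           /* old and clean: reclaim this one */
        }
        /* Old dirty pages would be scheduled for write-back here. */
        hand = (hand + 1) % NPAGES;
    }
    return hand;                                   /* fallback after a full sweep */
}

int main(void)
{
    printf("evict frame %d\n", pick_victim(2204));
    return 0;
}
```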
Algorithm                     Comment
Optimal                       Not implementable, but useful as a benchmark
NRU (Not Recently Used)       Very crude approximation of LRU
FIFO (First-In, First-Out)    Might throw out important pages
Second chance                 Big improvement over FIFO
Clock                         Realistic
LRU (Least Recently Used)     Excellent, but difficult to implement exactly
NFU (Not Frequently Used)     Fairly crude approximation to LRU
Aging                         Efficient algorithm that approximates LRU well
Working set                   Somewhat expensive to implement
WSClock                       Good efficient algorithm
Figure 3-22. Local versus global page replacement. (a) Original configuration. (b) Local
page replacement. (c) Global page replacement.
Figure 3-23. Page fault rate as a function of the number of page frames assigned.
Figure 3-24. (a) One address space. (b) Separate I and D spaces.
Figure 3-25. Two processes sharing the same program sharing its page table.
Figure 3-28. (a) Paging to a static swap area. (b) Backing up pages dynamically.
Figure 3-30. In a one-dimensional address space with growing tables, one table may
bump into another.
Figure 3-31. A segmented memory allows each table to grow or shrink independently of
the other tables.
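To illustrate Figures 3-30 and 3-31, the sketch below translates a (segment, offset) pair through a per-segment base and limit, so each table can grow or shrink by changing only its own limit. The segment names, bases, and limits are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-segment descriptors: each table lives in its own segment. */
struct segment { uint32_t base, limit; };

static struct segment seg_table[] = {
    { 0x00000, 0x4000 },   /* segment 0: symbol table */
    { 0x10000, 0x2000 },   /* segment 1: source text  */
    { 0x20000, 0x1000 },   /* segment 2: constants    */
};

/* A segmented address is a (segment, offset) pair rather than a single number. */
static uint32_t translate(uint32_t seg, uint32_t off)
{
    if (seg >= sizeof seg_table / sizeof seg_table[0] || off >= seg_table[seg].limit) {
        fprintf(stderr, "segmentation fault: seg %u offset %u\n",
                (unsigned)seg, (unsigned)off);
        exit(EXIT_FAILURE);
    }
    return seg_table[seg].base + off;
}

int main(void)
{
    printf("(segment 1, offset 0x100) -> physical 0x%X\n",
           (unsigned)translate(1, 0x100));
    return 0;
}
```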