
© PSB Academy. All rights reserved.

All rights herein belong to PSB Academy and are protected by copyright laws.
Reproduction and distribution without permission is prohibited.

Unless prior approval is obtained from lecturers, students are not allowed to record (audio
or video) lessons. Students are allowed to download and use lesson materials from PSB
Academy (including lecture recordings and presentation slides) only for their personal
revision. Different policies may apply for lesson materials by our academic and industry
partners - please check with your School for more information.
Modern Operating Systems
Fourth Edition

Chapter 3
Memory Management

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Memory
• Paraphrase of Parkinson's Law: "Programs expand to fill the memory available to hold them."
• The average home computer nowadays has 10,000 times more memory than the IBM 7094, the largest computer in the world in the early 1960s.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


No Memory Abstraction

Figure 3-1. Three simple ways of organizing memory with an operating system and one
user process. Other possibilities also exist.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Running Multiple Programs Without a
Memory Abstraction

Figure 3-2. Illustration of the relocation problem. (a) A 16-KB program. (b) Another 16-KB
program. (c) The two programs loaded consecutively into memory.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Base and Limit Registers

Figure 3-3. Base and limit registers can be used to give each process a separate address
space.
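A minimal C sketch (not from the slides) of what the hardware does on every memory reference under this scheme: check the program-generated address against the limit register and, if it is legal, add the base register to relocate it. The function name and values are illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: relocate an address using base and limit registers,
 * the way Fig. 3-3 describes. */
static unsigned translate(unsigned virtual_addr, unsigned base, unsigned limit)
{
    if (virtual_addr >= limit) {            /* address outside the process */
        fprintf(stderr, "fault: address %u exceeds limit %u\n",
                virtual_addr, limit);
        exit(EXIT_FAILURE);                 /* real hardware would trap */
    }
    return virtual_addr + base;             /* relocate by adding the base */
}

int main(void)
{
    /* A process loaded at 16384 with a 16-KB address space. */
    printf("physical = %u\n", translate(100, 16384, 16384));
    return 0;
}
```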

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Swapping (1 of 2)

Figure 3-4. Memory allocation changes as processes come into memory and leave it.
The shaded regions are unused memory.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Swapping (2 of 2)

Figure 3-5. (a) Allocating space for a growing data segment. (b) Allocating space for a
growing stack and a growing data segment.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Memory Management with Bitmaps

Figure 3-6. (a) A part of memory with five processes and three holes. The tickmarks
show the memory allocation units. The shaded regions (0 in the bitmap) are free. (b) The
corresponding bitmap. (c) The same information as a list.
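The cost of the bitmap approach is searching the map for a run of consecutive free allocation units when a process must be brought in. Below is a minimal C sketch of that linear search; the bitmap size and the pre-allocated units are made up for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define UNITS 32   /* allocation units tracked by the bitmap (illustrative) */

/* 1 bit per allocation unit: 0 = free, 1 = in use. */
static uint8_t bitmap[(UNITS + 7) / 8];

static int  test_bit(int i) { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(int i)  { bitmap[i / 8] |= (uint8_t)(1 << (i % 8)); }

/* Find k consecutive free units; return the first unit, or -1 if none.
 * This slow linear scan is what makes bitmaps simple but costly. */
static int find_run(int k)
{
    for (int start = 0; start + k <= UNITS; start++) {
        int len = 0;
        while (len < k && !test_bit(start + len))
            len++;
        if (len == k)
            return start;
        start += len;              /* skip past the unit found in use */
    }
    return -1;
}

int main(void)
{
    memset(bitmap, 0, sizeof bitmap);
    for (int i = 0; i < 5; i++)
        set_bit(i);                /* units 0-4 already allocated */
    printf("run of 3 free units starts at %d\n", find_run(3));
    return 0;
}
```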

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Memory Management with Linked Lists

Figure 3-7. Four neighbor combinations for the terminating process, X.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Memory Management Algorithms
• First fit (see the sketch after this list)
• Next fit
• Best fit
• Worst fit
• Quick fit
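As a concrete illustration of the first of these strategies, here is a minimal first-fit sketch in C over a singly linked list of holes; the struct layout and hole sizes are invented for the example.

```c
#include <stddef.h>
#include <stdio.h>

/* One hole in the free list: start address and length in bytes. */
struct hole {
    size_t start;
    size_t length;
    struct hole *next;
};

/* First fit: take the first hole big enough, shrinking it in place.
 * Returns the allocated start address, or (size_t)-1 if no hole fits. */
static size_t first_fit(struct hole *list, size_t request)
{
    for (struct hole *h = list; h != NULL; h = h->next) {
        if (h->length >= request) {
            size_t addr = h->start;
            h->start  += request;
            h->length -= request;
            return addr;
        }
    }
    return (size_t)-1;
}

int main(void)
{
    /* Holes at address 0 (4 KB) and 10240 (16 KB), as a tiny example. */
    struct hole h2 = { 10240, 16384, NULL };
    struct hole h1 = {     0,  4096, &h2  };
    printf("8 KB request placed at %zu\n", first_fit(&h1, 8192));
    return 0;
}
```

Next fit differs only in resuming the search where the previous one stopped; best fit scans the entire list for the smallest hole that is large enough.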

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Virtual Memory
• There is a need to run programs that are too large to fit in
memory
• Solution adopted in the 1960s, split programs into little
pieces, called overlays
– Kept on the disk, swapped in and out of memory
• Virtual memory : each program has its own address
space, broken up into chunks called pages

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Paging (1 of 3)

Figure 3-8. The position and function of the MMU. Here the MMU is shown as being
a part of the CPU chip because it commonly is nowadays. However, logically it could be a
separate chip and was years ago.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Paging (2 of 3)

Figure 3-9. The relation between virtual addresses and physical memory addresses is
given by the page table. Every page begins on a multiple of 4096 and ends 4095
addresses higher, so 4K–8K really means 4096–8191 and 8K–12K means 8192–12287.
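A minimal C sketch of the translation the figure describes: split the virtual address into a page number and an offset, look the page number up in the page table, and attach the offset to the resulting frame. The table contents are a toy example (chosen so that virtual page 2 maps to frame 6); unmapped pages are marked -1.

```c
#include <stdio.h>

#define PAGE_SIZE 4096         /* 4-KB pages */
#define NUM_PAGES 16           /* 16 virtual pages (64-KB address space) */

/* Toy page table: entry i gives the physical frame holding virtual page i,
 * or -1 if the page is not mapped. Values here are made up. */
static int page_table[NUM_PAGES] = {
    2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1
};

/* Split, look up, and glue the frame number back onto the offset. */
static long translate(unsigned vaddr)
{
    unsigned page   = vaddr / PAGE_SIZE;
    unsigned offset = vaddr % PAGE_SIZE;
    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                          /* would cause a page fault */
    return (long)page_table[page] * PAGE_SIZE + offset;
}

int main(void)
{
    /* Virtual address 8196 lies on page 2, 4 bytes into the page. */
    printf("virtual 8196 -> physical %ld\n", translate(8196));
    return 0;
}
```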

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Paging (3 of 3)

Figure 3-10. The internal operation of the MMU with 16 4-KB pages.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Structure of a Page Table Entry

Figure 3-11. A typical page table entry.
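Page table entry layouts are machine dependent, but the fields the figure names can be sketched as a C bit-field struct. This is an illustrative layout only, not any real hardware's.

```c
#include <stdio.h>

/* Illustrative layout only: the fields below are the ones Fig. 3-11 names. */
struct pte {
    unsigned int frame      : 20;  /* page frame number                      */
    unsigned int present    : 1;   /* 1 = page is in memory                  */
    unsigned int protection : 3;   /* e.g. read / write / execute bits       */
    unsigned int modified   : 1;   /* "dirty" bit, set on any write          */
    unsigned int referenced : 1;   /* set on any access, used by replacement */
    unsigned int nocache    : 1;   /* disable caching (for device memory)    */
};

int main(void)
{
    struct pte e = { .frame = 62, .present = 1, .protection = 0x3,
                     .modified = 0, .referenced = 1, .nocache = 0 };
    printf("frame %u, present %u, referenced %u\n",
           e.frame, e.present, e.referenced);
    return 0;
}
```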

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Speeding Up Paging
Major issues faced:
1. The mapping from virtual address to physical
address must be fast.
2. If the virtual address space is large, the page table
will be large.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Translation Lookaside Buffers
A TLB to speed up paging.

Valid   Virtual Page   Modified   Protection   Page Frame
  1         140            1          RW            31
  1          20            0          RX            38
  1         130            1          RW            29
  1         129            1          RW            62
  1          19            0          RX            50
  1          21            0          RX            45
  1         860            1          RW            14
  1         861            1          RW            75
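In hardware the TLB compares an incoming virtual page number against all entries at once; software can only scan them. Below is a minimal C sketch of such a lookup using the entries from the table above; a miss means the page table must be walked instead.

```c
#include <stdio.h>

/* One software TLB entry, mirroring the columns of the table above. */
struct tlb_entry {
    int         valid;
    unsigned    virtual_page;
    int         modified;
    const char *protection;
    unsigned    page_frame;
};

static struct tlb_entry tlb[] = {
    { 1, 140, 1, "RW", 31 }, { 1,  20, 0, "RX", 38 },
    { 1, 130, 1, "RW", 29 }, { 1, 129, 1, "RW", 62 },
    { 1,  19, 0, "RX", 50 }, { 1,  21, 0, "RX", 45 },
    { 1, 860, 1, "RW", 14 }, { 1, 861, 1, "RW", 75 },
};

/* Return the frame for a virtual page, or -1 on a TLB miss. */
static int tlb_lookup(unsigned vpage)
{
    for (unsigned i = 0; i < sizeof tlb / sizeof tlb[0]; i++)
        if (tlb[i].valid && tlb[i].virtual_page == vpage)
            return (int)tlb[i].page_frame;
    return -1;
}

int main(void)
{
    printf("page 130 -> frame %d\n", tlb_lookup(130));   /* hit: 29  */
    printf("page 999 -> frame %d\n", tlb_lookup(999));   /* miss: -1 */
    return 0;
}
```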

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Multilevel Page Tables

Figure 3-13. (a) A 32-bit address with two page table fields. (b) Two-level page tables.
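A minimal C sketch of how a 32-bit virtual address is carved into the two page table indices and the offset, assuming the 10-bit/10-bit/12-bit split of the figure. The example address in main is just an illustration.

```c
#include <stdio.h>

/* Split a 32-bit virtual address as in Fig. 3-13(a): a 10-bit top-level
 * index (PT1), a 10-bit second-level index (PT2), and a 12-bit offset
 * within a 4-KB page. */
static void split(unsigned vaddr, unsigned *pt1, unsigned *pt2, unsigned *off)
{
    *pt1 = (vaddr >> 22) & 0x3FF;   /* bits 31..22 */
    *pt2 = (vaddr >> 12) & 0x3FF;   /* bits 21..12 */
    *off =  vaddr        & 0xFFF;   /* bits 11..0  */
}

int main(void)
{
    unsigned pt1, pt2, off;
    split(0x00403004u, &pt1, &pt2, &off);   /* an example address */
    printf("PT1=%u PT2=%u offset=%u\n", pt1, pt2, off);
    return 0;
}
```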

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Inverted Page Tables

Figure 3-14. Comparison of a traditional page table with an inverted page table.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Page Replacement Algorithms
• Optimal algorithm
• Not recently used algorithm
• First-in, first-out (FIFO) algorithm
• Second-chance algorithm
• Clock algorithm
• Least recently used (LRU) algorithm
• Working set algorithm
• WSClock algorithm

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Not Recently Used Algorithm

• At a page fault, the system inspects all pages
• Pages fall into four classes based on the current values of their R and M bits (see the sketch after this list):

Class 0: not referenced, not modified.
Class 1: not referenced, modified.
Class 2: referenced, not modified.
Class 3: referenced, modified.

Class 1 arises because a clock interrupt periodically clears the R bit of every page, so a class 3 page whose R bit is cleared becomes class 1.
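A minimal sketch of the classification in C: the class number is simply 2R + M, and NRU evicts a page, chosen at random, from the lowest-numbered nonempty class.

```c
#include <stdio.h>

/* NRU puts each page in class 2*R + M. */
static int nru_class(int referenced, int modified)
{
    return 2 * referenced + modified;
}

int main(void)
{
    printf("R=0 M=0 -> class %d\n", nru_class(0, 0));   /* 0 */
    printf("R=0 M=1 -> class %d\n", nru_class(0, 1));   /* 1 */
    printf("R=1 M=0 -> class %d\n", nru_class(1, 0));   /* 2 */
    printf("R=1 M=1 -> class %d\n", nru_class(1, 1));   /* 3 */
    return 0;
}
```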

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Second-Chance Algorithm

Figure 3-15. Operation of second chance. (a) Pages sorted in FIFO order. (b) Page list if
a page fault occurs at time 20 and A has its R bit set. The numbers above the pages are
their load times.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Clock Page Replacement Algorithm

Figure 3-16. The clock page replacement algorithm.
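A minimal C sketch of one sweep of the clock hand at a page fault, with made-up R bits: a frame whose R bit is set gets the bit cleared and is spared; the first frame found with R clear is evicted.

```c
#include <stdio.h>

#define NFRAMES 8

/* One frame on the circular "clock". */
struct frame {
    int page;        /* which virtual page is loaded here              */
    int referenced;  /* the R bit, set by the hardware on each access  */
};

static struct frame frames[NFRAMES] = {
    {0,1},{1,0},{2,1},{3,1},{4,0},{5,1},{6,0},{7,1}
};
static int hand = 0;   /* the clock hand: next frame to inspect */

/* Sweep until a frame with R = 0 is found; clear R on the way. */
static int clock_evict(void)
{
    for (;;) {
        if (frames[hand].referenced) {
            frames[hand].referenced = 0;          /* second chance */
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;                        /* evict this frame */
        }
    }
}

int main(void)
{
    printf("evict frame %d\n", clock_evict());    /* frame 1: first R = 0 */
    return 0;
}
```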

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Simulating LRU in Software

Figure 3-17. The aging algorithm simulates LRU in software. Shown are six pages for five
clock ticks. The five clock ticks are represented by (a) to (e).
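A minimal C sketch of the aging update, with made-up R bits: at every clock tick each page's counter is shifted right one bit and the page's R bit is inserted as the new leftmost bit; the page with the lowest counter is the replacement candidate.

```c
#include <stdint.h>
#include <stdio.h>

#define NPAGES 6

/* One 8-bit aging counter per page. */
static uint8_t counter[NPAGES];

/* At each clock tick: shift right, then OR the R bit into the top bit. */
static void age(const int r_bits[NPAGES])
{
    for (int i = 0; i < NPAGES; i++)
        counter[i] = (uint8_t)((counter[i] >> 1) | (r_bits[i] << 7));
}

int main(void)
{
    /* R bits observed over two successive ticks (made-up values). */
    int tick1[NPAGES] = {1, 0, 1, 0, 1, 1};
    int tick2[NPAGES] = {1, 1, 0, 0, 1, 0};
    age(tick1);
    age(tick2);
    /* The page with the smallest counter would be replaced on a fault. */
    for (int i = 0; i < NPAGES; i++)
        printf("page %d counter = 0x%02X\n", i, (unsigned)counter[i]);
    return 0;
}
```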

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Working Set Algorithm (1 of 2)

Figure 3-18. The working set is the set of pages used by the k most recent memory
references. The function w(k, t) is the size of the working set at time t.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Working Set Algorithm (2 of 2)

Figure 3-19. The working set algorithm.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


WSClock Algorithm (1 of 2)

Figure 3-20. Operation of the WSClock algorithm. (a) and (b) give an example of what
happens when R = 1.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


WSClock Algorithm (2 of 2)

Figure 3-20. Operation of the WSClock algorithm. (c) and (d) give an example of R = 0.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Summary of Page Replacement Algorithms
Page replacement algorithms discussed in the text.

Algorithm Comment
Optimal Not implementable, but useful as a benchmark
NRU (Not Recently Used) Very crude approximation of LRU
FIFO (First-in, First-out) Might throw out important pages
Second chance Big improvement over FIFO
Clock Realistic
LRU (Least Recently Used) Excellent, but difficult to implement exactly
NFU (Not Frequently Used) Fairly crude approximation to LRU
Aging Efficient algorithm that approximates LRU well
Working set Somewhat expensive to implement
WSClock Good efficient algorithm

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Local versus Global Allocation Policies (1 of
2)

Figure 3-22. Local versus global page replacement. (a) Original configuration. (b) Local
page replacement. (c) Global page replacement.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Local versus Global Allocation Policies (2 of
2)

Figure 3-23. Page fault rate as a function of the number of page frames assigned.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Separate Instruction and Data Spaces

Figure 3-24. (a) One address space. (b) Separate I and D spaces.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Shared Pages

Figure 3-25. Two processes sharing the same program sharing its page table.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Shared Libraries

Figure 3-26. A shared library being used by two processes.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Page Fault Handling (1 of 3)

1. The hardware traps to the kernel, saving the program counter on the stack.
2. An assembly-code routine is started to save the general registers and other volatile information.
3. The operating system discovers that a page fault has occurred and tries to find out which virtual page is needed.
4. Once the virtual address that caused the fault is known, the system checks that the address is valid and that the protection is consistent with the access.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Page Fault Handling (2 of 3)

5. If the frame selected is dirty, the page is scheduled for transfer to disk, and a context switch takes place, suspending the faulting process.
6. As soon as the frame is clean, the operating system looks up the disk address where the needed page is and schedules a disk operation to bring it in.
7. When the disk interrupt indicates that the page has arrived, the page tables are updated to reflect its position, and the frame is marked as being in the normal state.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Page Fault Handling (3 of 3)

8. The faulting instruction is backed up to the state it had when it began, and the program counter is reset to point to it.
9. The faulting process is scheduled, and the operating system returns to the routine that called it.
10. This routine reloads the registers and other state information and returns to user space to continue execution.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Instruction Backup

Figure 3-27. An instruction causing a page fault.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Backing Store

Figure 3-28. (a) Paging to a static swap area. (b) Backing up pages dynamically.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Separation of Policy and Mechanism (1 of 2)

The memory management system is divided into three parts:


1. A low-level MMU handler.
2. A page fault handler that is part of the kernel.
3. An external pager running in user space.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Separation of Policy and Mechanism (2 of 2)

Figure 3-29. Page fault handling with an external pager.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Segmentation (1 of 4)
Examples of tables generated by a compiler:
1. The source text being saved for the printed listing.
2. The symbol table, containing the names and attributes of variables.
3. The table containing the integer and floating-point constants used.
4. The parse tree, containing the syntactic analysis of the program.
5. The stack used for procedure calls within the compiler.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Segmentation (2 of 4)

Figure 3-30. In a one-dimensional address space with growing tables, one table may
bump into another.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Segmentation (3 of 4)

Figure 3-31. A segmented memory allows each table to grow or shrink independently of
the other tables.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Segmentation (4 of 4)
Comparison of paging and segmentation:
• Need the programmer be aware that this technique is being used? Paging: No. Segmentation: Yes.
• How many linear address spaces are there? Paging: 1. Segmentation: Many.
• Can the total address space exceed the size of physical memory? Paging: Yes. Segmentation: Yes.
• Can procedures and data be distinguished and separately protected? Paging: No. Segmentation: Yes.
• Can tables whose size fluctuates be accommodated easily? Paging: No. Segmentation: Yes.
• Is sharing of procedures between users facilitated? Paging: No. Segmentation: Yes.
• Why was this technique invented? Paging: to get a large linear address space without having to buy more physical memory. Segmentation: to allow programs and data to be broken up into logically independent address spaces and to aid sharing and protection.
Copyright © 2014 Pearson Education, Inc. All Rights Reserved
Implementation of Pure Segmentation

Figure 3-33. (a)-(d) Development of checkerboarding. (e) Removal of the checkerboarding by compaction.

Copyright © 2014 Pearson Education, Inc. All Rights Reserved


Copyright

Copyright © 2014 Pearson Education, Inc. All Rights Reserved
