
MODERN OPERATING SYSTEMS

Third Edition

ANDREW S. TANENBAUM

Chapter 3
Memory Management

Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
Memory Management
• Memory (RAM) is an important and scarce resource
• Programs expand to fill the memory available to them
• Programmer's view
– Memory should be private, infinitely large, infinitely fast, nonvolatile…
• Reality
– The best compromise found so far: a memory hierarchy
– Registers, cache, main memory, disk, tape
• Memory manager
– Manages memory efficiently
– Keeps track of free memory, allocates memory to programs…
Memory management
• The memory-management schemes in this chapter range from very simple to highly sophisticated…
No Memory Abstraction
• Early mainframes, early minicomputers, and early personal computers had no memory abstraction
• MOV REGISTER1, 1000
– Here 1000 means the contents of physical memory address 1000 are moved to the register
• Makes it impossible to have two programs in memory at once
No Memory Abstraction

Figure 3-1. Three simple ways of organizing memory with an operating system and one user process.

Multiple problems without abstraction
• IBM 360
– Memory divided into 2-KB blocks, each with a 4-bit protection key
– The PSW also holds a 4-bit protection key
– Hardware traps any attempt to access memory whose protection code differs from the PSW key
Multiple Programs Without Memory Abstraction: Drawback

Figure 3-2. Illustration of the relocation problem.

Drawback of no abstraction
• The core problem: both programs reference absolute physical memory
• What programmers want is a private address space, with addresses local to each program
• IBM 360
– Modified the second program on the fly as it was loaded into memory
– Static relocation
• When a program is loaded at address 16384, the constant 16384 is added to every address
• Slows down loading and requires extra information about which words hold addresses
• Memory without abstraction is still used in embedded and smart card systems
Abstraction: address space
• Physical addresses should not be exposed to programmers
– A buggy program can crash the OS
– Hard to run multiple programs in parallel
• Two problems to solve:
– Protection
– Relocation
• Address space:
– The set of addresses a process can use to address memory
– Each process has its own address space, independent of the others
• How?
Dynamic relocation
• Equip the CPU with two special registers: base and limit
• The program is loaded into a consecutive region of memory
• No relocation during loading
• When the process runs and references an address, the CPU automatically adds the base to that address, and also checks whether the address exceeds the limit register
Base and Limit Registers

Figure 3-3. Base and limit registers can be used to give each
process a separate address space.
Base and Limit Registers

 Disadvantage:
 Need to perform an addition and a comparison on
every memory reference
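The base/limit check just described can be sketched in a few lines; the register values below are illustrative, not taken from the text:

```python
class LimitViolation(Exception):
    """Models the hardware fault raised when an address exceeds the limit."""
    pass

def translate(virtual_addr, base, limit):
    """Translate a virtual address the way base/limit hardware does:
    compare against the limit register, then add the base register."""
    if virtual_addr >= limit:
        raise LimitViolation(f"address {virtual_addr} exceeds limit {limit}")
    return base + virtual_addr

# A process loaded at physical address 16384 with a 16-KB limit:
print(translate(100, base=16384, limit=16384))   # -> 16484
```

This addition and comparison happen on every memory reference, which is exactly the disadvantage noted above.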
Swapping
• Many background server processes run in the system
• Physical memory is not large enough to hold all programs
• Swapping
– Bring each program in, run it for a while, then swap it back out
• Virtual memory
– Run programs even when they are only partially in memory
Swapping (1)

Figure 3-4. Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory.

Swapping
• Problems
– Addresses differ each time a process is swapped in
• Static relocation/dynamic relocation
– Memory holes
• Memory compaction
• Requires CPU time: at 4 bytes per 20 ns, about 5 sec to compact 1 GB
– How much memory to allocate to a program
• Programs tend to grow
• Both the data segment (heap) and the stack
Swapping (2)

Figure 3-5. (a) Allocating space for growing data segment. (b)
Allocating space for growing stack, growing data segment.
Managing free memory
• Two ways to keep track: bitmaps and linked lists
• Bitmap
– Memory is divided into allocation units (from a few words to a few KB)
– Each unit corresponds to one bit in the bitmap
– Hard to find a free run of a given length
Memory Management with Bitmaps

Figure 3-6. (a) A part of memory with five processes and three
holes. The tick marks show the memory allocation units. The
shaded regions (0 in the bitmap) are free. (b) The
corresponding bitmap. (c) The same information as a list.
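The bitmap search described above amounts to a linear scan for a run of zero bits, which is why finding a hole of a given length is slow. A minimal sketch (the bitmap contents are made up for illustration):

```python
def find_free_run(bitmap, k):
    """Scan a bitmap (0 = free unit, 1 = allocated unit) for k consecutive
    free units; return the start index, or -1 if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i      # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0            # run broken by an allocated unit
    return -1

bitmap = [1, 1, 0, 0, 1, 0, 0, 0, 1]
print(find_free_run(bitmap, 3))    # -> 5
```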
Memory Management with Linked Lists

Figure 3-7. Four neighbor combinations for the terminating process, X.
Manage free memory
• Linked list
– Doubly linked list
• How to allocate free memory to programs?
– First fit
• Quick; the beginning of memory is used more often; tends to break up large holes
– Next fit
• Each search starts from where the last one left off
– Best fit
• Searches the entire list; finds the hole closest to the requested size
– Worst fit
• Finds the largest hole
– Quick fit
• Maintains separate lists for some of the more common hole sizes
Exercise
• In a swapping system, memory consists of the following hole sizes in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which hole is taken for successive segment requests of
– 12KB
– 10KB
– 9KB
for first fit? Best fit? Worst fit? And next fit?
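The four strategies above can be simulated with a small sketch. The hole sizes come from the exercise; the shrink-in-place bookkeeping (a chosen hole simply shrinks, holes are never split into list nodes) is a simplifying assumption:

```python
def allocate(holes, requests, strategy):
    """Pick a hole for each request under the given strategy.
    Returns the original size of the hole chosen for each request.
    Assumes every request fits in some hole."""
    holes = holes[:]              # don't mutate the caller's list
    rover = 0                     # where next fit resumes its search
    chosen = []
    for req in requests:
        if strategy == "first":
            idx = next(i for i, h in enumerate(holes) if h >= req)
        elif strategy == "next":
            order = list(range(rover, len(holes))) + list(range(rover))
            idx = next(i for i in order if holes[i] >= req)
        elif strategy == "best":
            idx = min((h, i) for i, h in enumerate(holes) if h >= req)[1]
        elif strategy == "worst":
            idx = max((h, i) for i, h in enumerate(holes) if h >= req)[1]
        chosen.append(holes[idx])
        holes[idx] -= req         # hole shrinks in place
        rover = idx
    return chosen

holes = [10, 4, 20, 18, 7, 9, 12, 15]   # the exercise's holes, in KB
reqs = [12, 10, 9]
for s in ("first", "next", "best", "worst"):
    print(s, allocate(holes, reqs, s))
```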
Virtual Memory
• Manage bloatware
– Programs are too big to fit into memory
– Splitting programs by hand (overlays) is a bad idea
• Virtual memory
– Every program has its own address space
– The address space is divided into chunks called pages
– Each page is a contiguous range of addresses mapped onto physical memory
– Not all pages need to be in physical memory at once
– The OS maps page addresses to physical addresses on the fly
– When a needed page is not in memory, the OS brings it in
– Every page needs relocation
Virtual Memory – Paging (1)

Figure 3-8. The position and function of the MMU – shown as part of the CPU chip (it commonly is nowadays). Logically it could be a separate chip, as it was in years gone by.
Paging (2)

Figure 3-9. The relation between virtual addresses and physical memory addresses, given by the page table.
Paging
• MMU (memory management unit) translation examples, with 4-KB pages:
– CPU: MOV REG, 0 → MMU: MOV REG, 8192
– CPU: MOV REG, 8192 → MMU: MOV REG, 24576
– CPU: MOV REG, 20500 → MMU: MOV REG, 12308
– CPU: MOV REG, 32780 → MMU: page fault
Paging (3)

Figure 3-10. The internal operation of the MMU with


16 4-KB pages.
Page tables
• Virtual address mapping
– A virtual address is split into a virtual page number and an offset
– 16-bit address with a 4-KB page size: 16 pages
– The virtual page number is an index into the page table
• Purpose of the page table
– Map virtual pages onto page frames
Structure of Page Table Entry

Figure 3-11. A typical page table entry.

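The page-table lookup can be sketched as follows. The mapping below (virtual pages 0, 2, and 5 onto frames 2, 6, and 3) mirrors the MMU example a few slides back and is otherwise hypothetical:

```python
PAGE_SIZE = 4096
# Hypothetical page table: virtual page -> page frame.
page_table = {0: 2, 2: 6, 5: 3}

def mmu_translate(vaddr):
    """Split a virtual address into (page number, offset), look the page
    up in the page table, and rebuild the physical address. A missing
    entry models a page fault."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise LookupError(f"page fault on virtual page {vpn}")
    return page_table[vpn] * PAGE_SIZE + offset

print(mmu_translate(0))       # -> 8192
print(mmu_translate(20500))   # -> 12308
```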
Page Table Structure
• Protection
– What kinds of access are permitted
• Modified
– Set when the page is written to (dirty)
• Referenced
– Set when the page is referenced
• Cache disabling
– Prevents data inconsistency (e.g., for pages that map device registers)
Speeding Up Paging
Paging implementation issues:
• The mapping from virtual address to physical address must be fast.
• If the virtual address space is large, the page table will be large (32-bit/64-bit).
• Every process needs its own page table in memory.
Speeding up paging
• Keep the page table in registers?
– No memory accesses needed for the page table during execution
– But unbearably expensive
• Keep the page table entirely in memory?
– Each process has its own page table
– The page table is kept in memory
– How many memory accesses are then needed to perform one logical memory access?
Speed up paging
• Effective memory-access time: the time needed for every data/instruction access
– Twice the memory-access time, cutting performance in half
– One access to the page table, plus one to the data/instruction itself
• Solution:
– A special fast-lookup hardware cache called associative registers or translation look-aside buffers (TLBs)
Translation Lookaside Buffers

Figure 3-12. A TLB to speed up paging.

TLB
• The TLB is usually inside the MMU and consists of a small number of entries
• When a virtual address arrives
– The MMU first checks whether its virtual page number is in the TLB
– If it is, there is no need to consult the page table; if not, one TLB entry is evicted and replaced with the page table entry
Effective Access Time
• Associative lookup = ε time units; memory cycle time = t time units; hit ratio = α
• Effective Access Time (EAT)
EAT = (t + ε)α + (2t + ε)(1 − α) = 2t + ε − tα
• If ε = 20 ns, t = 100 ns, α1 = 80%, α2 = 98%:
– TLB hit: 20 + 100 = 120 ns
– TLB miss: 20 + 100 + 100 = 220 ns
– EAT1 = 120 × 0.8 + 220 × 0.2 = 140 ns
– EAT2 = 120 × 0.98 + 220 × 0.02 = 122 ns
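The EAT formula above in executable form, reproducing both example figures:

```python
def eat(eps, t, alpha):
    """Effective access time with a TLB: a hit costs t + eps, a miss
    costs an extra memory access for the page table, 2t + eps."""
    return (t + eps) * alpha + (2 * t + eps) * (1 - alpha)

print(eat(20, 100, 0.80))   # 80% hit ratio -> 140 ns
print(eat(20, 100, 0.98))   # 98% hit ratio -> 122 ns
```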
Page Table for Large Memory
• Address space: 32-bit
• Page size: 4 KB
• Page numbers: 20 bits, so 1M pages
• At 32 bits per page table entry, the page table needs 4 MB
• Not to mention 64-bit systems
Multilevel page table
• A 32-bit virtual address divided into three parts
– 10-bit PT1, 10-bit PT2, 12-bit offset
• Multilevel page table
– Avoids keeping all the page tables in memory all the time
– Page tables are themselves stored in pages
• Example: a program has a 4-GB address space but needs only 12 MB to run: 4 MB code, 4 MB data, 4 MB stack
Multilevel Page Tables

Figure 3-13. (a) A 32-bit address with two page table fields.
(b) Two-level page tables.
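The two-level split of a 32-bit address can be sketched with a little bit arithmetic; the example address is arbitrary:

```python
def split_32bit(vaddr):
    """Break a 32-bit virtual address into the 10-bit PT1 index,
    10-bit PT2 index, and 12-bit offset used by a two-level table."""
    pt1 = (vaddr >> 22) & 0x3FF    # top 10 bits
    pt2 = (vaddr >> 12) & 0x3FF    # middle 10 bits
    offset = vaddr & 0xFFF         # low 12 bits
    return pt1, pt2, offset

print(split_32bit(0x00403004))     # -> (1, 3, 4)
```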
?
• A computer has 32-bit virtual addresses and 4-KB pages. The program and data together fit in the lowest page (0–4095) and the stack fits in the highest page. How many entries are needed in the page table if traditional paging is used? How many page table entries are needed for 2-level paging, with 10 bits in each part?
Inverted Page Table
• Used when the virtual address space is much larger than physical memory
• Inverted page table: one entry per page frame rather than per page of virtual address space
• Searching is much harder
– TLB
– Hash table
• Inverted page tables are common on 64-bit machines
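A minimal sketch of the idea: one entry per physical frame, plus a hash table from (process, virtual page) to frame so a lookup does not have to scan every frame. All the numbers here are illustrative:

```python
N_FRAMES = 8
frames = [None] * N_FRAMES    # frame number -> (pid, vpn), or None if free
lookup = {}                   # hash table: (pid, vpn) -> frame number

def map_page(pid, vpn, frame):
    """Record that a process's virtual page occupies a physical frame."""
    frames[frame] = (pid, vpn)
    lookup[(pid, vpn)] = frame

def frame_of(pid, vpn):
    """Find the frame holding (pid, vpn); a miss models a page fault."""
    if (pid, vpn) not in lookup:
        raise LookupError("page fault")
    return lookup[(pid, vpn)]

map_page(pid=7, vpn=42, frame=3)
print(frame_of(7, 42))        # -> 3
```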
Inverted Page Tables

Figure 3-14. Comparison of a traditional page table with an inverted page table.
?
• Array A[1024, 1024] of integer; each row is stored in one page
• Program 1
for j := 1 to 1024 do
  for i := 1 to 1024 do
    A[i,j] := 0;
– 1024 × 1024 page faults
• Program 2
for i := 1 to 1024 do
  for j := 1 to 1024 do
    A[i,j] := 0;
– 1024 page faults
Page Replacement Algorithms

• Optimal page replacement algorithm


• Not recently used page replacement
• First-In, First-Out page replacement
• Second chance page replacement
• Clock page replacement
• Least recently used page replacement
• Working set page replacement
• WSClock page replacement

Impact of page fault
• Page fault time = 25 ms; memory access time ma = 100 ns
• With page miss rate p:
EAT = 100(1 − p) + 25×10⁶ × p = 100 + 24,999,900p
• If p = 1/1000, then EAT = 25,099.9 ns
• If we need EAT < 110 ns, then 100 + 24,999,900p < 110,
that is, p < 10/24,999,900 < 10/25,000,000 = 1/2,500,000 = 4×10⁻⁷
• The page fault rate p must be smaller than 4×10⁻⁷
Page replacement
• When a page fault occurs, some page must be evicted from memory
• If the evicted page was modified while in memory, it has to be written back to disk
• If a heavily used page is evicted, it will probably be brought back in again shortly
• How to choose the page to replace?
Optimal Page replacement
• Easy to describe but impossible to implement
– Label each page with the number of instructions that will be executed before that page is next referenced
– Remove the page with the highest label
– The OS cannot know when each page will be referenced next
– Used as a yardstick to compare the performance of realizable algorithms
• (Example trace: 9 page faults, 6 replacements)
Not Recently Used
• Replacement based on page usage
• Pages have R and M bits
– When a process starts up, both bits are 0 for all its pages; periodically, the R bit is cleared
• Four classes can be formed
– Class 0: not referenced, not modified
– Class 1: not referenced, modified
– Class 2: referenced, not modified
– Class 3: referenced, modified
First-In-First-Out (FIFO) Algorithm
• Replace "the oldest one"
• Simple, but with unsatisfactory performance
• (Example trace: 15 page faults, 12 replacements)
2nd chance
• A simple modification to FIFO that avoids throwing out a heavily used page: inspect the R bit
– If R is 0, the page is old and unused: replace it
– If R is 1, give the page a second chance: clear R and move the page to the end of the list
– If all pages have been referenced, it degenerates into FIFO
Second Chance Algorithm

Figure 3-15. Operation of second chance.


(a) Pages sorted in FIFO order.
(b) Page list if a page fault occurs at time 20 and A has its R bit set.
The numbers above the pages are their load times.
Clock
• 2nd chance is inefficient:
– It constantly moves pages around on its list
• An alternative:
– Keep all the page frames on a circular list in the form of a clock
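The clock algorithm can be sketched as below. The hand sweeps the circular list, clearing R bits until it finds an R=0 frame to replace; the exact variant (R is set on load as well as on hits) is one common formulation, assumed here:

```python
def clock_faults(refs, n_frames):
    """Count page faults under clock replacement. Each frame has an
    R bit; on a fault the hand skips (and clears) R=1 frames and
    replaces the first R=0 frame it finds."""
    frames = [None] * n_frames
    r_bits = [0] * n_frames
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            r_bits[frames.index(page)] = 1    # hit: set R
            continue
        faults += 1
        while r_bits[hand] == 1:              # referenced pages get a pass
            r_bits[hand] = 0
            hand = (hand + 1) % n_frames
        frames[hand] = page                   # replace the R=0 frame
        r_bits[hand] = 1
        hand = (hand + 1) % n_frames
    return faults

print(clock_faults([1, 2, 3, 4, 1], 3))
```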
The Clock Page Replacement Algorithm

Figure 3-16. The clock page replacement algorithm.

LRU
• An approximation to the optimal algorithm:
– Pages that have not been used for ages will probably remain unused for a long time
– When a page fault occurs, throw out the page that has been unused the longest
• To implement LRU
– Record times, or keep a linked list of all pages in memory;
– Or use a hardware counter or a matrix
LRU algorithm
• LRU = Least Recently Used
• Replace the page that has not been used for the longest period of time
• (Example trace: 12 page faults, 9 replacements)
LRU implementation
• Hard to implement efficiently:
– Use an extra counter (time-of-use) field in each page table entry
• Replace the page with the oldest time
• Requires hardware with a counter
• On a page fault, examine all the counters to find the lowest one
– Keep a stack of page numbers (software)
• Replace the page at the bottom
• The size of the stack is the number of physical frames
• On a reference to a page already in the stack, move it to the top; otherwise push it, evicting the bottom entry if the stack is full
LRU with stack (graph)
LRU Page Replacement Algorithm (hardware)

Figure 3-17. LRU using a matrix when pages are referenced in the order
0, 1, 2, 3, 2, 1, 0, 3, 2, 3.
Ex.
• If FIFO page replacement is used with four page frames and eight pages, how many page faults will occur with the reference string 0 1 7 2 3 2 7 1 0 3 if the four frames are initially empty? Now repeat this problem for LRU.
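Both counts can be checked with a short simulation of the two policies (the reference string is the one from the exercise):

```python
from collections import deque, OrderedDict

def fifo_faults(refs, n_frames):
    """Count FIFO page faults; hits do not reorder the queue."""
    q = deque()
    faults = 0
    for page in refs:
        if page in q:
            continue
        faults += 1
        if len(q) == n_frames:
            q.popleft()               # evict the oldest page
        q.append(page)
    return faults

def lru_faults(refs, n_frames):
    """Count LRU page faults; a hit moves the page to most-recent."""
    lru = OrderedDict()
    faults = 0
    for page in refs:
        if page in lru:
            lru.move_to_end(page)     # mark as most recently used
            continue
        faults += 1
        if len(lru) == n_frames:
            lru.popitem(last=False)   # evict the least recently used
        lru[page] = True
    return faults

refs = [0, 1, 7, 2, 3, 2, 7, 1, 0, 3]
print(fifo_faults(refs, 4), lru_faults(refs, 4))
```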
To simulate LRU in software: NFU
• Not Frequently Used:
– Keep a counter of how often each page has been referenced
– When a page fault occurs, evict the page with the lowest count
– Problem: it never forgets; pages heavily used long ago keep high counts
Simulating LRU in Software

Figure 3-18. The aging algorithm simulates LRU in software. Shown are
six pages for five clock ticks. The five clock ticks are represented
by (a) to (e).
Differences with LRU
• Cannot distinguish references made early and late within one clock tick
• Counters have a finite number of bits, so the history they record is limited
?
• A small computer has four page frames. At the first clock tick, the R bits are 0111, and at subsequent clock ticks the values are 1011, 1010, 1101, 0010, 1100, and 0001. If the aging algorithm is used with an 8-bit counter, give the values of the counters after the last tick.
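The aging update itself is one line of bit arithmetic: shift the counter right and put the page's R bit into the most significant bit. A sketch with a made-up three-tick R sequence (not the exercise's data):

```python
BITS = 8

def age(counter, r_bit):
    """One clock tick of the aging algorithm for one page."""
    return (counter >> 1) | (r_bit << (BITS - 1))

c = 0
for r in [1, 0, 1]:      # the page's R bit at three successive ticks
    c = age(c, r)
print(f"{c:08b}")        # -> 10100000
```

Note how the most recent tick always lands in the high-order bit, so recently referenced pages end up with the largest counters.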
Most Recently Used
• Drawback of LRU:
– Statistical analysis shows that it can be sub-optimal
• Situations where MRU is preferred:
– With N frames in the LRU pool, an application looping over an array of N + 1 pages causes a page fault on each and every access
– Ex: the reference sequence cycles through 1, 2, …, 501 with 500 frames
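The cyclic-loop worst case is easy to demonstrate with a small LRU simulator, scaled down to 4 pages cycled through 3 frames (the 501-pages/500-frames case behaves identically):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    lru = OrderedDict()
    faults = 0
    for page in refs:
        if page in lru:
            lru.move_to_end(page)     # hit: mark most recently used
        else:
            faults += 1
            if len(lru) == n_frames:
                lru.popitem(last=False)
            lru[page] = True
    return faults

# Looping over N+1 pages with N frames: LRU faults on every access.
refs = [1, 2, 3, 4] * 3
print(lru_faults(refs, 3))            # 12 references, 12 faults
```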
Working Set Page Replacement
• Demand paging:
– Load pages on demand, not in advance
• Locality of reference:
– During any phase of execution, a process references only a relatively small fraction of its pages
• Working set:
– The set of pages a process is currently using
– If the whole working set is in memory, there won't be many page faults until the next phase
• Given the reference sequence 5 6 2 1 4 (t1) 5 6 3 4 3 3 4 3 4 4 3 4 2 3 1 (t2) 6 5 7 3 4 5, the working set w(10, t1) = ___, and w(10, t2) = ___.
Working set model
• When a process is swapped out and later swapped back in, loading its working set first greatly reduces page faults
• Prepaging: loading the pages before letting the process run
Working Set Page Replacement

Figure 3-19. The working set is the set of pages used by the k most
recent memory references. The function w(k, t) is the size of the
working set at time t.
Replacement with WS
• On a page fault, evict a page that is not in the working set
• Tracking the working set exactly (a shift register updated on every reference) would be too expensive
• Approximation:
– Use execution time τ instead of the last k memory references
Working Set Page Replacement

Figure 3-20. The working set algorithm (R bit is periodically cleared).


WSClock page replacement
• The basic working set algorithm must scan the entire page table at each page fault
• WSClock: simple and efficient
– Circular list of page frames
– If R is 1, the page has been referenced during the current tick: set R to 0 and advance the hand
– If R is 0: if M is 0 and age > τ, replace the page; if M is 1, schedule the write-back and advance the hand
The WSClock Page Replacement Algorithm

Figure 3-21. Operation of the WSClock algorithm. (a) and (b) give an
example of what happens when R = 1.
WSClock page replacement
• Two cases when the hand comes all the way around to the starting point:
– At least one write has been scheduled: the hand keeps moving until it finds a clean page
– No writes have been scheduled: all pages are in the working set; choose any clean page, or if there is none, the current page is the victim and is written back to disk
Page Replacement Algorithm Summary
?
• A computer has four page frames. The time of loading, the time of last access, and the R and M bits for each page are as shown below (the times are in clock ticks):

Page  Loaded  Last Ref.  R  M
 0     230      285      1  0
 1     120      265      0  0
 2     140      270      0  1
 3     110      280      1  1

• (a) Which page will NRU replace? (b) Which page will FIFO replace? (c) Which page will LRU replace? (d) Which page will second chance replace?
int a[1024][1024], b[1024][1024], c[1024][1024];
multiply() {
  unsigned i, j, k;
  for (i = 0; i < 1024; i++)
    for (j = 0; j < 1024; j++)
      for (k = 0; k < 1024; k++)
        c[i][j] += a[i][k] * b[k][j];
}
• Assume that the binary for executing this function fits in one page, and the stack also fits in one page. Assume further that an integer requires 4 bytes for storage. Compute the number of TLB misses if the page size is 4096 and the TLB has 8 entries with an LRU replacement policy.
Summary of Page Replacement Algorithms

Figure 3-22. Page replacement algorithms discussed in the text.

Local versus Global Allocation Policies (1)

Figure 3-23. Local versus global page replacement. (a) Original configuration. (b) Local page replacement. (c) Global page replacement.
Local versus Global Allocation Policies (2)

Figure 3-24. Page fault rate as a function of the number of page frames assigned.
Belady's Anomaly
• The number of frames matters, but not always as expected
• With the reference sequence: 1 2 3 4 1 2 5 1 2 3 4 5
• With four frames (FIFO): 10 page faults, 6 replacements
1 1 1 1 1 1 5 5 5 5 4 4
  2 2 2 2 2 2 1 1 1 1 5
    3 3 3 3 3 3 2 2 2 2
      4 4 4 4 4 4 3 3 3
• With three frames (FIFO): 9 page faults, 6 replacements
1 1 1 4 4 4 5 5 5 5 5 5
  2 2 2 1 1 1 1 1 3 3 3
    3 3 3 2 2 2 2 2 4 4
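The anomaly can be verified directly by counting FIFO faults for both frame counts:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count FIFO page faults for a reference string."""
    q = deque()
    faults = 0
    for page in refs:
        if page not in q:
            faults += 1
            if len(q) == n_frames:
                q.popleft()      # evict the oldest page
            q.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# Belady's anomaly: adding a frame *increases* the fault count here.
print(fifo_faults(refs, 3))      # -> 9
print(fifo_faults(refs, 4))      # -> 10
```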
Separate Instruction and Data Spaces

Figure 3-25. (a) One address space. (b) Separate I and D spaces.
Shared Pages

Figure 3-26. Two processes sharing the same program share its page table.
Shared Libraries

Figure 3-27. A shared library being used by two processes.

Static library
$ gcc -c func.c -o func.o
$ ar rcs libfunc.a func.o
$ gcc main.c -o main -static -L. -lfunc
$ ./main
Dynamic library
$ gcc -fPIC -c func.c -o func.o
$ gcc -shared -o libfunc.so func.o
$ gcc main.c -o main -L. -lfunc
$ export LD_LIBRARY_PATH=$(pwd)
$ ./main
Page Fault Handling (1)
• The hardware traps to the kernel, saving the program counter on the stack.
• An assembly-code routine is started to save the general registers and other volatile information.
• The operating system discovers that a page fault has occurred and tries to discover which virtual page is needed.
• Once the virtual address that caused the fault is known, the system checks that the address is valid and the protection consistent with the access.
Page Fault Handling (2)
• If the page frame selected is dirty, the page is scheduled for transfer to the disk, and a context switch takes place.
• When the page frame is clean, the operating system looks up the disk address where the needed page is and schedules a disk operation to bring it in.
• When a disk interrupt indicates the page has arrived, the page tables are updated to reflect its position, and the frame is marked as being in the normal state.
Page Fault Handling (3)
• The faulting instruction is backed up to the state it had when it began, and the program counter is reset to point to that instruction.
• The faulting process is scheduled, and the operating system returns to the (assembly-language) routine that called it.
• This routine reloads the registers and other state information and returns to user space to continue execution, as if no fault had occurred.
Instruction Backup

Figure 3-28. An instruction causing a page fault.

Backing Store (1)

Figure 3-29. (a) Paging to a static swap area.

Backing Store (2)

Figure 3-29. (b) Backing up pages dynamically.

Separation of Policy and Mechanism (1)
The memory management system is divided into three parts:
• A low-level MMU handler.
• A page fault handler that is part of the kernel.
• An external pager running in user space.
Separation of Policy and Mechanism (2)

Figure 3-30. Page fault handling with an external pager.

Segmentation (1)
A compiler has many tables that are built up as
compilation proceeds, possibly including:

• The source text being saved for the printed listing (on
batch systems).
• The symbol table – the names and attributes of variables.
• The table containing integer, floating-point constants
used.
• The parse tree, the syntactic analysis of the program.
• The stack used for procedure calls within the compiler.

Segmentation (2)

Figure 3-31. In a one-dimensional address space with growing tables, one table may bump into another.
Segmentation (3)

Figure 3-32. A segmented memory allows each table to grow or shrink independently of the other tables.
Implementation of Pure Segmentation

Figure 3-33. Comparison of paging and segmentation.


Segmentation with Paging: MULTICS (1)

Figure 3-34. (a)–(d) Development of checkerboarding. (e) Removal of the checkerboarding by compaction.
Segmentation with Paging: MULTICS (2)

Figure 3-35. The MULTICS virtual memory. (a) The descriptor segment points to the page tables.
Segmentation with Paging: MULTICS (5)

Figure 3-35. The MULTICS virtual memory. (b) A segment descriptor. The numbers are the field lengths.
Segmentation with Paging: MULTICS (6)
When a memory reference occurs, the following algorithm is carried out:
• The segment number is used to find the segment descriptor.
• A check is made to see if the segment's page table is in memory.
– If not, a segment fault occurs.
– If there is a protection violation, a fault (trap) occurs.
Segmentation with Paging: MULTICS (7)
• The page table entry for the requested virtual page is examined.
– If the page itself is not in memory, a page fault is triggered.
– If it is in memory, the main memory address of the start of the page is extracted from the page table entry.
• The offset is added to the page origin to give the main memory address where the word is located.
• The read or store finally takes place.
Segmentation with Paging: MULTICS (8)

Figure 3-36. A 34-bit MULTICS virtual address.

Segmentation with Paging: MULTICS (9)

Figure 3-37. Conversion of a two-part MULTICS address into a main memory address.
Segmentation with Paging: MULTICS (10)

Figure 3-38. A simplified version of the MULTICS TLB. The existence of two page sizes makes the actual TLB more complicated.
Segmentation with Paging: The Pentium (1)

Figure 3-39. A Pentium selector.

Segmentation with Paging: The Pentium (2)

Figure 3-40. Pentium code segment descriptor. Data segments differ slightly.
Segmentation with Paging: The Pentium (3)

Figure 3-41. Conversion of a (selector, offset) pair to a linear address.
Segmentation with Paging: The Pentium (4)

Figure 3-42. Mapping of a linear address onto a physical address.


Segmentation with Paging: The Pentium (5)

Figure 3-43. Protection on the Pentium.


