Main Memory: Address Translation (Chapter 8)

This document covers address translation in operating systems using paged translation. It introduces virtual addresses, physical addresses, pages, and frames. Processes are assigned physical frames as they become available; a page table stored in memory maps each virtual page number to a physical frame number, so a process sees a contiguous logical address space even though physical memory may be fragmented. Address translation hardware consults the page table on every memory access to convert virtual addresses to physical addresses. Paging also enables protection, dynamic loading, dynamic linking, and copy-on-write.


Main Memory:

Address Translation
(Chapter 8)
CS 4410
Operating Systems
Can’t We All Just Get Along?
Physical Reality: different processes/threads share the
same hardware → need to multiplex
• CPU (temporal)
• Memory (spatial)
• Disk and devices (later)

Why worry about memory sharing?


• Complete working state of process and/or kernel is
defined by its data in memory (+ registers)
• Don’t want different threads to have access to each
other’s memory (protection)
2
Aspects of Memory Multiplexing
Isolation
Don't want distinct process states to collide in physical memory
(unintended overlap → chaos)
Sharing
Want option to overlap when desired (for communication)
Virtualization
Want to create the illusion of more resources than exist in
underlying physical system
Utilization
Want to make the best use of this limited resource

3
Address Translation
• Paged Translation
• Efficient Address Translation

All in the context of the OS

4
A Day in the Life of a Program

The compiler (+ assembler + linker) turns source files (sum.c: #include <stdio.h>,
int max = 10, main() declaring i and sum, calling add(m, &sum) and printf) into an
executable (sum) containing .text machine code (jal, addi, ...) and .data. The
loader then brings it to life as a process (pid xxx) with registers PC, GP, SP.

[Figure: the executable's sections are placed in the process address space, which
runs from 0x00000000 to 0xffffffff: text at 0x00400000, data (max) at 0x10000000,
the heap above that, and the stack growing down from the top.]
5
Logical view of process memory
What's wrong with this, in the context of:
• multiple processes?
• multiple threads?

[Figure: a single address space from 0x00000000 to 0xffffffff with text, data,
heap, and the stack at the top.]

6
Paged Translation

TERMINOLOGY ALERT:
• Page: the data itself
• Frame: physical location

[Figure: the processor's virtual view (Virtual Page 0 ... Page N: text, data,
heap, stack) maps onto physical memory (Frame 0 ... Frame M), with the pieces
(TEXT 0, TEXT 1, DATA 0, HEAP 0, HEAP 1, STACK 0, STACK 1) scattered across
frames in arbitrary order.]

No more external fragmentation!
7
Paging Overview
Divide:
• Physical memory into fixed-sized blocks called frames
• Logical memory into blocks of same size called pages
Management:
• Keep track of all free frames.
• To run a program with n pages, need to find n free
frames and load the program (a free-frame tracker is sketched below)
Notice:
• Logical address space can be noncontiguous!
• Process given frames when/where available
8
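A minimal free-frame tracker, as referenced above. This is only a sketch: the
bitmap approach, the frame count, and the claim_frames helper are illustrative
assumptions, not anything specified by the slides.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NFRAMES 1024                          /* illustrative frame count */

static bool frame_used[NFRAMES];              /* all frames free initially */

/* Find n free frames for a new process; returns how many were claimed.
   (The caller would load one page of the program into each claimed frame.) */
static int claim_frames(int n, uint32_t *out) {
    int got = 0;
    for (uint32_t f = 0; f < NFRAMES && got < n; f++) {
        if (!frame_used[f]) {
            frame_used[f] = true;
            out[got++] = f;                   /* frames need not be contiguous */
        }
    }
    return got;
}

int main(void) {
    uint32_t frames[4];
    int got = claim_frames(4, frames);        /* a program with 4 pages */
    printf("claimed %d frames, first at frame %u\n", got, frames[0]);
    return 0;
}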
Address Translation, Conceptually

[Figure: the processor issues a virtual address to the translation unit. If the
translation is invalid, an exception is raised; if valid, the physical address
goes to physical memory and the data comes back to the processor.]
9
Memory Management Unit (MMU)
• Hardware device
• Maps virtual to physical address
(used to access data)

User Process:
• deals with virtual addresses
• Never sees the physical address
10
High-Level Address Translation

[Figure: same virtual-view-to-physical-memory mapping as before (Page 0 ... Page N
onto Frame 0 ... Frame M).]

The red cube is the 255th byte in page 2.
Where is the red cube in physical memory?
11
Logical Address Components
Page number – Upper bits
• Must be translated into a physical frame number

Page offset – Lower bits


• Does not change in translation

page number | page offset
 (m - n bits)| (n bits)

For a given logical address space of size 2^m and page size 2^n.
12
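To make the split concrete, here is a minimal C sketch of dividing a virtual
address into page number and offset. The 32-bit address and the 4 KB page size
(n = 12) are assumed values chosen for illustration.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12                          /* assumed: 4 KB pages, 2^12 */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

int main(void) {
    uint32_t vaddr  = 0x10203456;               /* arbitrary example address */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* page number: upper m-n bits */
    uint32_t offset = vaddr & OFFSET_MASK;      /* page offset: lower n bits,
                                                   unchanged by translation   */
    printf("page number = 0x%x, offset = 0x%x\n", vpn, offset);
    return 0;
}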
High-Level Address Translation

Who keeps track of the mapping? → the Page Table

[Figure: virtual memory (0x0000-0x6000: text, data, heap, stack) mapped onto
physical memory; virtual address 0x20FF is translated to physical address 0x????.
Example page table:
  page 0 → (not mapped)
  page 1 → frame 3
  page 2 → frame 6
  page 3 → frame 4
  page 4 → frame 8
  page 5 → ...]
13
Simplified Page Table

The page table lives in memory.

Page-table base register (PTBR):
• Points to the page table
• Saved/restored on context switch
14
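A minimal sketch of how translation hardware would use such a page table via the
PTBR. The pte_t layout, the 4 KB page size, and the translate helper are
illustrative assumptions, not a real architecture's format.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12                            /* assumed 4 KB pages */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

/* Hypothetical single-level page table entry. */
typedef struct {
    uint32_t frame;                               /* physical frame number */
    bool     valid;                               /* is this page mapped?  */
} pte_t;

/* ptbr points at the current process's page table
   (saved/restored on context switch). */
uint32_t translate(const pte_t *ptbr, uint32_t vaddr, bool *fault) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    if (!ptbr[vpn].valid) {                       /* invalid -> raise exception */
        *fault = true;
        return 0;
    }
    *fault = false;
    return (ptbr[vpn].frame << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
}

int main(void) {
    static pte_t table[256];                      /* toy table: 256 pages */
    table[2] = (pte_t){ .frame = 5, .valid = true };
    bool fault;
    uint32_t pa = translate(table, (2u << PAGE_SHIFT) | 0xFF, &fault);
    if (fault) printf("page fault\n");
    else       printf("physical address = 0x%x\n", pa);
    return 0;
}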
Leveraging Paging
• Protection
• Dynamic Loading
• Dynamic Linking
• Copy-On-Write

15
Full Page Table

The page table (in memory, pointed to by the PTBR) also holds metadata
about each frame:
• Protection R/W/X, Modified, Valid, etc.
16
Leveraging Paging
• Protection
• Dynamic Loading
• Dynamic Linking
• Copy-On-Write

17
Dynamic Loading & Linking
Dynamic Loading
• Routine is not loaded until it is called
• Better memory-space utilization; unused
routine is never loaded
• No special support from the OS needed
Dynamic Linking
• Routine is not linked until execution time
• Locate (or load) library routine when called
• AKA shared libraries (e.g., DLLs); a small dlopen example follows below

18
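To illustrate dynamic linking in practice (not part of the slides), the POSIX
dlopen/dlsym interface locates a shared-library routine at run time. The library
name "libm.so.6" and the symbol "cos" are just example choices; on Linux this
typically builds with -ldl.

#include <dlfcn.h>                     /* POSIX dynamic linking loader */
#include <stdio.h>

int main(void) {
    /* Routine is not linked until now, at execution time. */
    void *lib = dlopen("libm.so.6", RTLD_LAZY);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine) printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(lib);
    return 0;
}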
Leveraging Paging
• Protection
• Dynamic Loading
• Dynamic Linking
• Copy-On-Write

19
Copy on Write (COW)

• P1 forks()
• P2 created with
  - its own page table
  - the same translations
• All pages marked COW (in the Page Table)

[Figure: P1's and P2's virtual address spaces (text, data, heap, stack, each
marked COW) map to the same text, data, heap, and stack frames in the physical
address space.]
20
COW, then keep executing

Option #1: Child keeps executing.

Upon page fault (a handler sketch follows below):
• Allocate new frame
• Copy frame
• Both pages no longer COW

[Figure: the faulting page now has its own copy in physical memory for each
process; the remaining pages still share frames.]
21
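A minimal sketch of the fault-handling steps above. The toy physical memory, the
find_free_frame allocator, and the pte_t fields are all illustrative assumptions;
reference counting of the original frame is omitted.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define NFRAMES   16

/* Toy physical memory and a trivial free-frame allocator. */
static uint8_t physmem[NFRAMES][PAGE_SIZE];
static bool    frame_used[NFRAMES];

static uint32_t find_free_frame(void) {
    for (uint32_t f = 0; f < NFRAMES; f++)
        if (!frame_used[f]) { frame_used[f] = true; return f; }
    return (uint32_t)-1;                       /* out of memory: not handled here */
}

/* Hypothetical PTE with just the fields this example needs. */
typedef struct {
    uint32_t frame;
    bool     writable;
    bool     cow;                              /* marked copy-on-write at fork() */
} pte_t;

/* Called when a process writes to a COW page and faults. */
static void cow_fault(pte_t *pte) {
    uint32_t new_frame = find_free_frame();                       /* 1. allocate new frame */
    memcpy(physmem[new_frame], physmem[pte->frame], PAGE_SIZE);   /* 2. copy the frame     */
    pte->frame    = new_frame;                                    /* 3. map the copy       */
    pte->cow      = false;                                        /*    no longer COW      */
    pte->writable = true;
}

int main(void) {
    frame_used[0] = true;                      /* parent's original frame */
    pte_t child_pte = { .frame = 0, .writable = false, .cow = true };
    cow_fault(&child_pte);
    printf("child now maps frame %u, cow=%d\n", child_pte.frame, child_pte.cow);
    return 0;
}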
COW, then call exec (before)

Option #2: Child calls exec()
• Load new frames
• Copy frame
• Both pages now COW

[Figure (BEFORE): P1's and P2's virtual address spaces still map the same text,
data, heap, and stack frames in physical memory.]
22
COW, then call exec (after)

Option #2: Child calls exec()
• Load new frames
• Copy frame
• Both pages no longer COW

[Figure (AFTER): the child's text, data, heap, and stack map to its own newly
loaded frames; the parent keeps the original frames.]
23
Downsides to Paging
Memory Consumption:
• Internal Fragmentation
• Make pages smaller? But then…
• Page Table Space: consider 32-bit address space,
4KB page size, each PTE 8 bytes
• How big is this page table?
• How many pages in memory does it need? (worked out below)
Performance: every data/instruction access
requires two memory accesses:
• One for the page table
• One for the data/instruction
24
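A quick worked answer to the questions above: with a 32-bit address space and
4 KB (2^12-byte) pages there are 2^32 / 2^12 = 2^20 (about one million) pages, so
a flat page table with 8-byte PTEs occupies 2^20 × 8 B = 8 MB, and holding that
table in memory takes 8 MB / 4 KB = 2048 frames, per process.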
Address Translation
• Paged Translation
• Efficient Address Translation
• Multi-Level Page Tables
• Inverted Page Tables
• TLBs

25
Multi-Level Page Tables to the Rescue!

Implementation: the virtual address is split into Index 1 | Index 2 | Offset.
Index 1 selects an entry in the Level 1 table, which points to the next-level
table; Index 2 selects the final entry there (Frame | Access); that frame number
plus the offset forms the physical address (Frame | Offset). (A lookup sketch in
code follows below.)

+ Allocate only PTEs in use
+ Simple memory allocation
− more lookups per memory reference
26
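A minimal two-level lookup sketch (not from the slides). The table structures are
simplified, and the 12/10/10 bit split matches the example on the next slide
(32-bit addresses, 1 KB pages).

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Level-1 entries point at a level-2 table; level-2 entries hold a frame.
   A NULL table or an invalid entry means "not mapped". */
typedef struct { uint32_t frame; int valid; } l2_entry_t;
typedef struct { l2_entry_t *table;         } l1_entry_t;

#define L1_BITS  12
#define L2_BITS  10
#define OFF_BITS 10

int translate2(const l1_entry_t *l1, uint32_t vaddr, uint32_t *paddr) {
    uint32_t idx1   = vaddr >> (L2_BITS + OFF_BITS);                /* top 12 bits    */
    uint32_t idx2   = (vaddr >> OFF_BITS) & ((1u << L2_BITS) - 1);  /* middle 10 bits */
    uint32_t offset = vaddr & ((1u << OFF_BITS) - 1);               /* low 10 bits    */

    const l2_entry_t *l2 = l1[idx1].table;
    if (l2 == NULL || !l2[idx2].valid)
        return -1;                        /* would raise a page-fault exception */

    *paddr = (l2[idx2].frame << OFF_BITS) | offset;
    return 0;
}

int main(void) {
    static l2_entry_t l2[1 << L2_BITS];
    static l1_entry_t l1[1 << L1_BITS];
    l2[3] = (l2_entry_t){ .frame = 7, .valid = 1 };   /* map page (0,3) -> frame 7 */
    l1[0].table = l2;

    uint32_t paddr;
    if (translate2(l1, (3u << OFF_BITS) | 0x2A, &paddr) == 0)
        printf("physical address = 0x%x\n", paddr);
    return 0;
}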
Two-Level Paging Example
32-bit machine, 1KB page size
• Logical address is divided into:
– a page offset of 10 bits (1024 = 2^10)
– a page number of 22 bits (32-10)
• Since the page table is paged, the page number is
further divided into:
– a 12-bit first index
– a 10-bit second index
• Thus, a logical address is as follows:
 page number          page offset
 index 1  |  index 2  |  offset
   12     |    10     |    10
27
This one goes to three!

Implementation: same idea, but the virtual address is now split into
Index 1 | Index 2 | Index 3 | Offset and walked through Level 1, Level 2, and
Level 3 tables before the frame and offset form the physical address.

+ First Level requires less contiguous memory
− even more lookups per memory reference
28
Complete Page Table Entry (PTE)
Valid | Protection R/W/X | Ref | Dirty | Index

Index is an index into:


• table of memory frames (if bottom level)
• table of page table frames (if multilevel page table)
• backing store (if page was swapped out)

Synonyms:
• Valid bit == Present bit
• Dirty bit == Modified bit
• Referenced bit == Accessed bit
29
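A compact C sketch of such an entry; the field widths are illustrative
assumptions, not any real architecture's layout.

#include <stdio.h>
#include <stdint.h>

/* One page-table entry packed into 32 bits (widths chosen for illustration). */
typedef struct {
    uint32_t valid      : 1;   /* aka present   */
    uint32_t readable   : 1;
    uint32_t writable   : 1;
    uint32_t executable : 1;
    uint32_t referenced : 1;   /* aka accessed  */
    uint32_t dirty      : 1;   /* aka modified  */
    uint32_t index      : 26;  /* frame #, page-table frame #, or
                                  backing-store slot (per the slide) */
} pte_t;

int main(void) {
    printf("sizeof(pte_t) = %zu bytes\n", sizeof(pte_t));
    return 0;
}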
Address Translation
• Paged Translation
• Efficient Address Translation
• Multi-Level Page Tables
• Inverted Page Tables
• TLBs

30
Inverted Page Table: Motivation
So many virtual pages…

… comparatively few physical frames


Traditional Page Tables:
• map pages to frames
• are numerous and sparse
Why not map frames to pages? (How?)
31
Inverted Page Table: Implementation

Virtual address: pid | page # | offset

[Figure: the (pid, page #) pair is searched for in the page table; the index of
the matching entry gives the frame, which is combined with the offset to address
physical memory. Not to scale: the page table is much smaller than memory.]

Implementation:
• 1 Page Table for entire system
• 1 entry per frame in memory
• Why don't we store the frame #?
32


Inverted Page Table: Discussion
Tradeoffs:
↓ memory to store page tables
↑ time to search page tables

Solution: hashing
• hash(page, pid) → PT entry (or chain of entries); a lookup sketch follows below
• What about:
• collisions…
• sharing…

33
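A minimal sketch of a hashed inverted page table, as referenced above. The
structures, the toy hash function, and the chaining scheme for collisions are all
illustrative assumptions; note that the frame number never needs to be stored,
because it is just the entry's position in the table.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NFRAMES 1024                        /* one entry per physical frame */

/* One entry per frame: which (pid, page) currently occupies it. */
typedef struct ipt_entry {
    uint32_t pid, page;
    int used;
    struct ipt_entry *next;                 /* collision chain */
} ipt_entry_t;

static ipt_entry_t  ipt[NFRAMES];
static ipt_entry_t *buckets[NFRAMES];       /* hash(pid, page) -> chain */

static size_t hash(uint32_t pid, uint32_t page) {
    return (pid * 2654435761u ^ page) % NFRAMES;   /* toy hash */
}

/* Returns the frame number, or -1 if the page is not resident. */
static int ipt_lookup(uint32_t pid, uint32_t page) {
    for (ipt_entry_t *e = buckets[hash(pid, page)]; e != NULL; e = e->next)
        if (e->used && e->pid == pid && e->page == page)
            return (int)(e - ipt);          /* frame # = entry's index */
    return -1;                              /* page fault */
}

static void ipt_insert(uint32_t pid, uint32_t page, uint32_t frame) {
    ipt[frame] = (ipt_entry_t){ .pid = pid, .page = page, .used = 1,
                                .next = buckets[hash(pid, page)] };
    buckets[hash(pid, page)] = &ipt[frame];
}

int main(void) {
    ipt_insert(42, 7, 3);                   /* pid 42, virtual page 7 -> frame 3 */
    printf("frame = %d\n", ipt_lookup(42, 7));
    return 0;
}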
Address Translation
• Paged Translation
• Efficient Address Translation
• Multi-Level Page Tables
• Inverted Page Tables
• TLBs

34
Translation Lookaside Buffer (TLB)

A cache of virtual-to-physical page translations; a major efficiency improvement.

[Figure: the virtual address (Page# | Offset) is matched against TLB entries
(Virtual Page, Page Frame, Access). A matching entry supplies the frame, which
with the offset forms the physical address; otherwise a page table lookup is
required.]
35
Address Translation with TLB

Access the TLB before you access memory (sketched in code below):
• TLB hit → frame + offset → physical address → data.
• TLB miss → walk the page table: if valid, translate (and refill the TLB);
  if invalid, raise an exception.

Trick: access the TLB while you access the cache.
36
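A minimal sketch of this flow, with a direct-lookup toy TLB and a flat page table;
both structures and the sizes are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT  12
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
#define TLB_SIZE    16

typedef struct { uint32_t vpn, frame; bool valid; } tlb_entry_t;
typedef struct { uint32_t frame; bool valid;      } pte_t;

static tlb_entry_t tlb[TLB_SIZE];

/* Translate vaddr: try the TLB first, fall back to the page table.
   Returns false to model "raise exception" on an invalid translation. */
bool translate(const pte_t *page_table, uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;

    tlb_entry_t *e = &tlb[vpn % TLB_SIZE];          /* toy: direct-mapped TLB */
    if (e->valid && e->vpn == vpn) {                /* TLB hit */
        *paddr = (e->frame << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
        return true;
    }

    if (!page_table[vpn].valid)                     /* TLB miss, invalid PTE */
        return false;                               /* raise exception */

    *e = (tlb_entry_t){ .vpn = vpn, .frame = page_table[vpn].frame,
                        .valid = true };            /* refill the TLB */
    *paddr = (e->frame << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
    return true;
}

int main(void) {
    static pte_t pt[256];
    pt[4] = (pte_t){ .frame = 9, .valid = true };
    uint32_t pa;
    for (int i = 0; i < 2; i++)                     /* first access misses, second hits */
        if (translate(pt, (4u << PAGE_SHIFT) | 0x10, &pa))
            printf("physical address = 0x%x\n", pa);
    return 0;
}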
Address Translation Uses!
Process isolation
• Keep a process from touching anyone else’s memory, or
the kernel’s
Efficient interprocess communication
• Shared regions of memory between processes
Shared code segments
• common libraries used by many different programs
Program initialization
• Start running a program before it is entirely in memory
Dynamic memory allocation
• Allocate and initialize stack/heap pages on demand

37
MORE Address Translation Uses!
Program debugging
• Data breakpoints when address is accessed
Memory mapped files
• Access file data using load/store instructions
Demand-paged virtual memory
• Illusion of near-infinite memory, backed by disk or
memory on other machines
Checkpointing/restart
• Transparently save a copy of a process, without stopping
the program while the save happens
Distributed shared memory
• Illusion of memory that is shared between machines
38
