
Operating Systems Lecture Notes

Memory Management

Matthew Dailey
Some material © Silberschatz, Galvin, and Gagne, 2002

Today's Lecture
Why memory management? We want to give the user the illusion of a large, contiguous address space, but we also want to maximize the degree of multiprogramming and the utilization of physical memory.

Topics: memory address binding, logical vs. physical address spaces, allocation, memory paging, segmentation

Readings: Silberschatz et al., chapter 9

Bird's-eye view


Symbolic names are bound to logical addresses, which are in turn bound to physical addresses.
But physical memory is shared among many processes, hence: relocation, allocation, paging, segmentation.

Typical programming steps


C source (test1.c, test2.c) -> compiler -> object code:
    gcc -c test1.c; gcc -c test2.c
Object code (test1.o, test2.o) + static library (mylib) -> linker -> executable:
    gcc -o test test1.o test2.o -lmylib -lm
Executable (test) + dynamic libraries (math library, standard C libraries) -> loader -> memory image:
    ./test

Address Binding for User Programs


Programs use symbolic addresses, e.g. int i; The compiler usually transforms symbolic addresses into relocatable addresses (e.g. 14 bytes from the beginning of the module). The linkage editor or the loader transforms relocatable addresses into absolute (logical) addresses. Does the OS allow programs to run anywhere in memory?
YES: The OS must transform the absolute logical addresses into physical addresses. NO: Logical and physical addresses are the same.

OS-supported dynamic binding


Dynamic linking of system libraries (DLLs or shared libraries):
At link time, only a stub for each library routine is linked. The rest of the library sits on disk until needed. When called, the stub checks whether the library has been loaded, and loads the DLL if necessary (see the sketch below).

Reduces memory usage by sharing libraries between processes. Requires OS support for protection.
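As a rough illustration, here is a minimal C sketch of the stub idea using the POSIX dlopen API. The library name libmylib.so and the routine compute() are hypothetical, and a real linker-generated stub works at a lower level than this.

/* Minimal sketch of stub-style, on-demand loading of a shared library
 * via the POSIX dlopen API. libmylib.so and compute() are hypothetical
 * names used only for illustration. Link with -ldl on Linux. */
#include <dlfcn.h>
#include <stdio.h>

static int (*compute)(int);               /* resolved on first call */

int compute_stub(int x)
{
    if (!compute) {                       /* library not loaded yet? */
        void *handle = dlopen("libmylib.so", RTLD_LAZY);
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return -1; }
        compute = (int (*)(int)) dlsym(handle, "compute");
        if (!compute) { fprintf(stderr, "%s\n", dlerror()); return -1; }
    }
    return compute(x);                    /* later calls skip the load */
}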

Logical vs. Physical Addresses


When running a user program, the CPU deals with logical addresses.

The memory controller sees a stream of physical addresses. The memory management unit (MMU) does the conversion.
(Figure: the logical view shows each process's own address space; the physical view shows the OS and processes 1-3 placed in shared physical memory.)

Swapping and address binding



Sometimes the OS needs to swap out a process to make room for others. When it is time to swap the process back in, does it need to go back to the same location in physical memory?
Yes, if compile-time or load-time binding is used [e.g. embedded systems]. No, if dynamic binding is allowed [e.g. modern general-purpose systems].

Relocation
CPU generates logical addresses in the range 0 to max.

The MMU adds a base register (value = r) to each logical address to get the physical address.
The physical address space is r+0 to r+max; e.g., with r = 14000, a logical address of 346 is mapped to the physical address 14346. A small sketch follows below.
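A minimal C sketch of this mapping, using the r = 14000 example; the LIMIT value is an assumption added to show where the limit-register check (see Allocation below) would fit.

/* Sketch of relocation-register address translation. BASE models the
 * relocation register; LIMIT is an assumed process size used only to
 * show the protection check. */
#include <stdio.h>
#include <stdlib.h>

#define BASE  14000u    /* relocation register r */
#define LIMIT 12000u    /* assumed size of the logical address space */

unsigned translate(unsigned logical)
{
    if (logical >= LIMIT) {               /* limit register: trap on bad address */
        fprintf(stderr, "addressing error: %u\n", logical);
        exit(EXIT_FAILURE);
    }
    return BASE + logical;                /* relocation register: add the base */
}

int main(void)
{
    printf("%u\n", translate(346));       /* prints 14346 */
    return 0;
}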

Relocation Register Example

Allocation
When processes are created or swapped in, we must allocate space for them. The relocation register scheme only supports contiguous allocation:
The OS stays resident in low memory. To protect other processes, use a limit register together with the relocation register.

Hardware support for contiguous allocation

Contiguous allocation
Basic idea: keep a list of available memory holes. When a new process arrives, find a hole for it.
(Figure: successive snapshots of memory with the OS and processes 2 and 5 resident; process 8 departs and processes 9 and 10 are allocated into the resulting holes.)
Contiguous Allocation Strategies


First fit: allocate from the first hole that is large enough. A circular variant starts the search from the last-allocated position.
Best fit: allocate from the smallest hole that is large enough.
Worst fit: allocate from the largest hole.
Simulations show that first fit and best fit are usually better than worst fit, but there is no clear winner. A first-fit sketch follows below.
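A minimal first-fit sketch in C, assuming the free holes are kept in a singly linked list; the struct layout and interface are illustrative only. Best fit would scan the whole list and remember the smallest hole that still fits; worst fit, the largest.

/* First-fit allocation over a list of free holes (illustrative sketch). */
#include <stddef.h>

struct hole {
    size_t start;          /* starting physical address of the hole */
    size_t size;           /* size of the hole in bytes */
    struct hole *next;
};

/* Returns the start address of the allocated block, or (size_t)-1 if no
 * hole is large enough (external fragmentation). */
size_t first_fit(struct hole *free_list, size_t request)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= request) {         /* first hole that is big enough */
            size_t addr = h->start;
            h->start += request;          /* carve the block off the front */
            h->size  -= request;          /* (a zero-sized hole could be unlinked) */
            return addr;
        }
    }
    return (size_t)-1;
}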

Dealing with fragmentation in allocation


Internal fragmentation:
Usually we round a process's memory usage up to whole blocks. The space wasted by this rounding is internal fragmentation.

External fragmentation:
Loading and releasing memory blocks can leave small holes spread over physical memory. It is possible to have enough total free memory to satisfy a request even though it is broken up into pieces that are individually too small. This wasted space is called external fragmentation.

Dealing with external fragmentation


Compaction: shuffle processes around in physical memory to create larger holes. This only works if processes are relocatable (dynamic address binding), and it is time consuming and complex (e.g. pending I/O complicates moving a process).
Better solutions:
Paging: break the logical address space into fixed-size pages and allow pages to be placed arbitrarily in physical memory.
Segmentation: split the logical address space into a small set of variable-sized segments.

Dealing with external fragmentation with paging


The most common non-contiguous allocation method.

Physical memory is broken into fixed-size frames. Logical memory is broken into same-size pages. A logical address now consists of a page number and an offset. The page number p indexes a page table; page table entry p contains the frame number f where page p resides in physical memory. Requires hardware support.
Can we still have fragmentation? Internal, yes; external, no.

Paging address translation architecture

Logical and physical memory

Example
4-bit addresses: 2 bits for the page number, 2 bits for the offset; 4-byte pages. A translation sketch follows below.
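A sketch of the translation for this 4-bit example; the frame numbers in page_table[] are made up for illustration.

/* Translate a 4-bit logical address: 2-bit page number, 2-bit offset,
 * 4-byte pages. The page-to-frame mapping below is illustrative. */
#include <stdio.h>

static const unsigned page_table[4] = { 5, 6, 1, 2 };   /* page -> frame */

unsigned translate(unsigned logical)
{
    unsigned p = (logical >> 2) & 0x3;    /* top 2 bits: page number */
    unsigned d = logical & 0x3;           /* low 2 bits: offset */
    unsigned f = page_table[p];           /* page table lookup */
    return (f << 2) | d;                  /* physical = frame * 4 + offset */
}

int main(void)
{
    printf("%u\n", translate(0xD));       /* page 3, offset 1 -> frame 2 -> 9 */
    return 0;
}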

How to select page size?


Advantages of big pages:
The page table is smaller. Frame allocation accounting is simpler.

Advantages of small pages:


Internal fragmentation is reduced

Process loading
Split the process's logical address space into pages. Find enough free frames for the process's pages, swapping out an old process if necessary. Copy the process's pages into their designated frames. Fill in the page table (see the sketch below).
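A rough sketch of these steps in C, assuming the free frames have already been reserved and that physical memory is modeled as a flat byte array; all names here are illustrative.

/* Copy each page of the process image into its assigned frame and record
 * the page -> frame mapping in the page table (illustrative sketch). */
#include <string.h>

#define PAGE_SIZE 4096u

extern unsigned char physical_memory[];   /* frame f starts at f * PAGE_SIZE */

void load_process(const unsigned char *image, unsigned npages,
                  const unsigned *free_frames, unsigned *page_table)
{
    for (unsigned p = 0; p < npages; p++) {
        unsigned f = free_frames[p];                       /* frame chosen for page p */
        memcpy(&physical_memory[f * PAGE_SIZE],
               &image[p * PAGE_SIZE], PAGE_SIZE);          /* copy page into frame */
        page_table[p] = f;                                 /* fill in the page table */
    }
}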


Page Table Implementation


Page table maps logical memory page numbers to physical memory frame numbers. Additional information: protection bits
Readable, writable, valid/invalid, etc.

If it is very small, the page table can be stored in fast registers. Usually, however, the page table is too big:
With 32-bit addresses and 4 KB pages there are 2^32 / 2^12 = 2^20 entries, i.e. more than a million. The table must be stored in main memory, so each memory access now requires two accesses. A million-entry page table for a 1 MB process is wasteful.

Two-level page tables



To reduce the size of the page table, use a hierarchical structure.
The page number field is itself split into an outer page number and an offset within that (outer) page; the remaining bits are the page offset d. For 32-bit addresses with 4 KB pages, the split is 10 bits, 10 bits, and 12 bits. Now one memory access requires 3 accesses!
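A sketch of the resulting two-level lookup for the 10/10/12 split; the outer_table representation is an assumption for illustration (real hardware walks tables that live in physical memory).

/* Two-level translation for a 32-bit address split 10/10/12 (sketch). */
#include <stdint.h>

extern uint32_t *outer_table[1024];           /* p1 -> inner page table */

uint32_t translate(uint32_t logical)
{
    uint32_t p1 = (logical >> 22) & 0x3FF;    /* top 10 bits: outer table index */
    uint32_t p2 = (logical >> 12) & 0x3FF;    /* next 10 bits: inner table index */
    uint32_t d  = logical & 0xFFF;            /* low 12 bits: page offset */
    uint32_t frame = outer_table[p1][p2];     /* two accesses for the tables */
    return (frame << 12) | d;                 /* the third access fetches the data */
}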

Page table caching


Storing page tables in main memory doubles or triples the effective memory access time. Solution: the Translation Lookaside Buffer (TLB), an associative, high-speed, on-chip memory (cache).

Each TLB entry pairs a tag (page #) with a value (frame #); the search for the value corresponding to a tag is done in parallel across all entries.

Paging with a TLB

TLB Usage
The TLB stores a few page number / frame number pairs. When the CPU generates a logical address, the TLB is searched in parallel for the page number tag.
If present, TLB hit: the frame number is used immediately to access memory.
If not present, TLB miss: the frame number has to be retrieved from the page table in memory, the retrieved frame number is used to access memory, and the TLB is updated with the retrieved page / frame pair. If the TLB is full, an entry must be selected for replacement.
The TLB must be flushed (and gradually repopulated) on each context switch. A lookup sketch follows below.
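A software model of this lookup path; a real TLB searches its entries in parallel in hardware, and the round-robin replacement and the page_table_lookup() helper here are assumptions for illustration.

/* Model of a TLB lookup in front of the in-memory page table (sketch). */
#define TLB_SIZE 16

struct tlb_entry { unsigned page, frame, valid; };

static struct tlb_entry tlb[TLB_SIZE];
static unsigned next_victim;                  /* trivial round-robin replacement */

extern unsigned page_table_lookup(unsigned page);   /* slow path: table in memory */

unsigned lookup_frame(unsigned page)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;              /* TLB hit */

    unsigned frame = page_table_lookup(page); /* TLB miss: extra memory access */
    tlb[next_victim] = (struct tlb_entry){ page, frame, 1 };   /* update the TLB */
    next_victim = (next_victim + 1) % TLB_SIZE;
    return frame;
}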

Effective access times


Average memory access time depends on TLB hit ratio. Suppose:
TLB search takes 20 ns; memory accesses take 100 ns; single-level page table; 80% hit rate.

Effective access time (EAT) = 0.8 x (20 ns + 100 ns) + 0.2 x (20 ns + 100 ns + 100 ns) = 140 ns. Paging overhead is 40% (100 ns vs. 140 ns).
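The same calculation as a tiny C helper with the slide's numbers plugged in; the eat() function and its parameters are just for illustration.

/* Effective access time given a TLB hit ratio and an n-level page table. */
#include <stdio.h>

double eat(double hit_ratio, double tlb_ns, double mem_ns, int table_levels)
{
    double hit  = tlb_ns + mem_ns;                          /* TLB hit: one access */
    double miss = tlb_ns + (table_levels + 1) * mem_ns;     /* miss: walk the table */
    return hit_ratio * hit + (1.0 - hit_ratio) * miss;
}

int main(void)
{
    printf("%.0f ns\n", eat(0.8, 20.0, 100.0, 1));          /* prints 140 ns */
    return 0;
}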

Segmentation
An alternative way of breaking up the logical address space: user programs are broken into logical segments
(main program, function1, global heap, etc.).

Each process has several variable-sized memory segments. Hardware provides base and limit registers for each segment (see the sketch below). The packing problem is simpler (smaller chunks), but external fragmentation still occurs.
Solution: combine paging and segmentation.
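A sketch of segmented translation with per-segment base and limit registers; the segment table values are illustrative.

/* Segment table: one base/limit pair per segment (values are made up). */
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

static const struct segment seg_table[] = {
    { 1400, 1000 },   /* segment 0: main program */
    { 6300,  400 },   /* segment 1: function1    */
    { 4300, 1100 },   /* segment 2: global heap  */
};

unsigned translate(unsigned s, unsigned d)    /* segment number, offset */
{
    if (d >= seg_table[s].limit) {            /* offset beyond the segment's end */
        fprintf(stderr, "segmentation violation\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;             /* base register + offset */
}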

Segmented Address Space

Logical and physical address spaces

(Figure: segments 1-4 of the user space map to scattered regions of the physical memory space.)

Hardware support for segments

Segmentation example

What have we learned?


Address binding schemes: compile time, link time, load time, dynamic
Logical vs. physical address spaces
Allocation schemes: contiguous; paging, with the Translation Lookaside Buffer (TLB) as a page table cache
Segmented memory architecture
