OS Lecture 10-13
Outline
• Background
• Dynamic Loading
• Dynamic Linking
• Overlays
• Logical versus Physical Address Space
• Memory Management Unit
• Swapping
• Memory Protection
• Memory Allocation
• Memory Fragmentation
• Paging
• Virtual Memory
• Demand Paging
  - Performance of Demand Paging
• Page Replacement
  - Page Replacement Algorithms
• Thrashing
• Working-Set Model
Background
• Program must be brought into memory and
placed within a process for it to be executed.
• Input Queue - collection of processes on the
disk that are waiting to be brought into
memory for execution.
• User programs go through several steps
before being executed.
Functions of Memory Management
• Keep track of every memory location.
• Track whether each location is allocated or free.
• Track how much memory is allocated.
• Decide which process will get memory, and when.
• Update the status of a memory location when it is allocated or freed.
Names and Binding
- Late binding: VM, dynamic linking/loading, overlaying, interpreted code.
- Less efficient, since checks are done at run time, but flexible; allows dynamic reconfiguration.
Dynamic Loading
• Routine is not loaded until it is called.
• Better memory-space utilization; unused
routine is never loaded.
• Useful when large amounts of code are
needed to handle infrequently occurring
cases.
• No special support from the operating system
is required; implemented through program
design.
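A minimal sketch of dynamic loading on a POSIX system using dlopen()/dlsym(); the library name "libstats.so" and the routine "rare_report" are hypothetical names used only for illustration (build with a typical toolchain: cc demo.c -ldl).

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* The library is not mapped at program start; it is loaded into the
     * process only when this code path is actually reached. */
    void *handle = dlopen("./libstats.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name and call it through a function pointer. */
    void (*rare_report)(void) = (void (*)(void))dlsym(handle, "rare_report");
    if (rare_report)
        rare_report();

    dlclose(handle);    /* the library can be unmapped when no longer needed */
    return 0;
}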
Dynamic Linking
• Linking postponed until execution time.
• Small piece of code, stub, used to locate the
appropriate memory-resident library routine.
• Stub replaces itself with the address of the routine,
and executes the routine.
• Operating-system support is needed to check whether the routine is in the process's memory address space.
• Dynamic linking is particularly useful for libraries.
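As a small illustration of dynamic linking, the program below calls sqrt() from the shared math library. With the usual lazy-binding setup on a Linux/ELF toolchain (an assumption here, not stated in the slide), the first call goes through a small stub (a PLT entry) that asks the dynamic linker for the address of the memory-resident routine; later calls jump there directly. Build with: cc demo.c -lm.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* sqrt() lives in the shared library libm; its address is resolved at
     * run time by the dynamic linker, not copied into the executable. */
    double x = sqrt(2.0);
    printf("sqrt(2) = %f\n", x);
    return 0;
}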
Overlays
• To enable a process to be larger than the
amount of memory allocated to it.
• Keep in memory only those instructions and
data that are needed at any given time.
• Implemented by user, no special support from
operating system; programming design of
overlay structure is complex.
Contd...
For example, consider a two-pass assembler.
During pass 1:
- construct a symbol table
During pass 2:
- generate machine-language code
Contd..
Partition the assembler into pass 1 code, pass 2 code, the symbol table, and common support routines used by both pass 1 and pass 2.
Assume the sizes of these components are as follows:
  pass 1            70 KB
  pass 2            80 KB
  symbol table      20 KB
  common routines   30 KB
  overlay driver    10 KB
[Figure: overlay structure - symbol table (20 K), common routines (30 K), and overlay driver (10 K) stay resident; overlay 1 (pass 1, 70 K) and overlay 2 (pass 2, 80 K) share the overlay area.]
Contd...
- When pass 1 finishes, the overlay driver reads overlay 2 into memory, overwriting overlay 1.
- Overlay 1 needs only 120 KB (70 + 20 + 30).
- Overlay 2 needs only 130 KB (80 + 20 + 30).
- As in dynamic loading, overlays do not require any special support from the OS.
- Implemented by the user with simple file structures.
Logical vs. Physical Address Space
- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
- Logical (virtual) address: generated by the CPU.
- Physical address: the address seen by the memory unit.
Logical vs. Physical Address Space
[Figure: dynamic relocation using a relocation register - the CPU generates logical address 346; the MMU adds the relocation register value 14000, producing physical address 14346 sent to memory.]
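A toy model of the translation in the figure above: the relocation (base) register value 14000 is taken from the figure, while the limit register value is illustrative only; a real MMU performs this check and addition in hardware on every memory reference.

#include <stdio.h>
#include <stdlib.h>

#define RELOCATION_REG 14000u   /* base of the process in physical memory (from the figure) */
#define LIMIT_REG       3000u   /* size of the logical address space (illustrative)         */

static unsigned translate(unsigned logical)
{
    if (logical >= LIMIT_REG) {             /* protection check */
        fprintf(stderr, "trap: addressing error (%u)\n", logical);
        exit(1);
    }
    return logical + RELOCATION_REG;        /* dynamic relocation */
}

int main(void)
{
    printf("logical 346 -> physical %u\n", translate(346));   /* prints 14346 */
    return 0;
}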
Swapping
– A process can be swapped temporarily
out of memory to a backing store and
then brought back into memory for
continued execution.
- Worst-fit
  - Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
[Table: job queue - job 3 requires 30 K, job 4 requires 70 K, job 5 requires 50 K; total memory 256 K.]
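A minimal sketch of worst-fit hole selection; the free-hole list below reuses the hole sizes from the compaction figure that follows (10 K, 30 K, 26 K) purely for illustration.

#include <stdio.h>

struct hole { unsigned start; unsigned size; };   /* sizes and starts in KB */

static int worst_fit(struct hole *holes, int n, unsigned request)
{
    int best = -1;
    for (int i = 0; i < n; i++)                   /* must search the entire list */
        if (holes[i].size >= request &&
            (best < 0 || holes[i].size > holes[best].size))
            best = i;
    return best;                                  /* index of chosen hole, or -1 */
}

int main(void)
{
    struct hole holes[] = { {90, 10}, {170, 30}, {230, 26} };
    int i = worst_fit(holes, 3, 20);
    if (i >= 0)
        printf("allocate 20 K from the %u K hole at %u K\n",
               holes[i].size, holes[i].start);
    return 0;
}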
Compaction
[Figure: compaction of a 256 K memory. Before: monitor 0-40 K, job 5 40-90 K, 10 K hole, job 4 100-170 K, 30 K hole, job 3 200-230 K, 26 K hole up to 256 K. After compaction: monitor 0-40 K, job 5 40-90 K, job 4 90-160 K, job 3 160-190 K, one 66 K hole from 190 K to 256 K.]
Paging
- Memory management scheme that permits
the Physical address space of a process to
be non-contiguous;
- process is allocated physical memory wherever it
is available.
- Divide physical memory into fixed-size blocks called frames.
- Frame size is a power of 2, typically between 512 bytes and 8192 bytes.
Contd....
- Divide logical memory into same size blocks
called pages.
- When a process is to be executed, its pages are
loaded into any available memory frames from
the backing store.
- The backing store is divided into fixed sized
blocks that are of the same size as the memory
frames.
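A small sketch of how a logical address is split into a page number and an offset when the page size is a power of 2; the 4096-byte page size and the page-table contents below are illustrative values, not taken from the slide.

#include <stdio.h>

#define PAGE_SIZE   4096u
#define OFFSET_BITS 12u                       /* log2(PAGE_SIZE) */

int main(void)
{
    unsigned page_table[] = { 5, 9, 6, 7 };   /* page -> frame (illustrative) */

    unsigned logical  = 2 * PAGE_SIZE + 123;  /* an address on page 2 */
    unsigned page     = logical >> OFFSET_BITS;      /* logical / PAGE_SIZE */
    unsigned offset   = logical & (PAGE_SIZE - 1);   /* logical % PAGE_SIZE */
    unsigned frame    = page_table[page];
    unsigned physical = (frame << OFFSET_BITS) | offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}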
Contd...
- Page-table entries can also hold read/write/execute privileges.
Page Table When Some Pages Are Not in Main Memory
[Figure: page table with a valid-invalid bit per entry - pages resident in main memory are marked 1 (valid); pages not in main memory are marked 0 (invalid).]
Page Fault
• It is a type of interrupt (trap) raised when a running process accesses a memory page that is mapped into its virtual address space but not loaded in main memory.
Procedure for Handling a Page
Fault
– Page is needed - reference to page
– Step 1: Check an internal table (usually kept
with the PCB) for this process, to determine
whether the reference is valid or invalid
memory access.
– Step 2: If the reference is invalid, terminate the process. If it is valid but the page has not yet been brought in, page it in now.
–Step 3: Find a free frame(by taking one from
the free frame list).
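A self-contained toy simulation of steps 1-3 above, plus the usual follow-up of updating the page table and restarting the faulting instruction; all names and sizes here are made up for illustration, not a real kernel interface.

#include <stdio.h>
#include <string.h>

#define NPAGES  8          /* pages in the toy logical address space  */
#define NFRAMES 4          /* frames in toy physical memory           */

struct pcb {
    int valid[NPAGES];     /* Step 1: internal table kept with the PCB */
    int frame[NPAGES];     /* page -> frame when resident              */
    int resident[NPAGES];
};

static int free_frames[NFRAMES] = { 1, 1, 1, 1 };

static int take_free_frame(void)                  /* Step 3 */
{
    for (int f = 0; f < NFRAMES; f++)
        if (free_frames[f]) { free_frames[f] = 0; return f; }
    return -1;                                    /* none free: would trigger replacement */
}

static int handle_page_fault(struct pcb *p, int page)
{
    if (!p->valid[page]) {                        /* Step 2: invalid reference */
        puts("invalid reference: terminate process");
        return -1;
    }
    int f = take_free_frame();
    printf("paging in page %d from backing store into frame %d\n", page, f);
    p->frame[page] = f;                           /* update the page table     */
    p->resident[page] = 1;
    return 0;                                     /* restart the instruction   */
}

int main(void)
{
    struct pcb p;
    memset(&p, 0, sizeof p);
    p.valid[0] = p.valid[1] = p.valid[2] = 1;     /* pages 0-2 are legal       */

    handle_page_fault(&p, 2);                     /* valid but not in memory   */
    handle_page_fault(&p, 7);                     /* invalid reference         */
    return 0;
}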
Contd..
- Copy-on-Write
- Memory-Mapped Files
Copy-on-Write
• Copy-on-Write (COW) allows both parent
and child processes to initially share the
same pages in memory.
• If either process modifies a shared page, only then is the page copied.
• COW allows more efficient process creation
as only modified pages are copied.
• Free pages are allocated from a pool
of zeroed-out pages.
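A short sketch of copy-on-write in action on a typical Unix system: right after fork(), parent and child initially share physical pages, and the child's write below is what triggers a private copy, so the parent's value is unchanged.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *data = malloc(sizeof *data);
    *data = 42;

    pid_t pid = fork();
    if (pid == 0) {                  /* child */
        *data = 99;                  /* write -> this shared page gets copied */
        printf("child sees %d\n", *data);
        exit(0);
    }
    waitpid(pid, NULL, 0);           /* parent */
    printf("parent still sees %d\n", *data);    /* prints 42 */
    free(data);
    return 0;
}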
Memory-Mapped Files
• Memory-mapped file I/O allows file I/O to
be treated as routine memory access by
mapping a disk block to a page in memory.
• A file is initially read using demand paging.
A page-sized portion of the file is read from
the file system into a physical page.
Subsequent reads/writes to/from the file
are treated as ordinary memory accesses.
Cont..
• Simplifies file access by treating file I/O
through memory rather than read() write()
system calls.
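A minimal sketch of memory-mapped file I/O with POSIX mmap(); the file name data.txt is just an example. Once the mapping is established, reads and writes to the file become ordinary memory accesses rather than read()/write() calls.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    putchar(p[0]);        /* a read from the file, done as a memory access */
    p[0] = '#';           /* a write to the file, also a memory access     */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}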
FIFO Replacement - Belady's Anomaly
[Figure: FIFO replacement with 4 frames - 10 page faults.]
Belady's Anomaly: more frames does not necessarily mean fewer page faults.
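The simulation below reproduces the anomaly with FIFO. The reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 is the one commonly used for this demonstration (an assumption, since the slide's own table is not fully recoverable); it gives 9 faults with 3 frames but 10 faults with 4 frames.

#include <stdio.h>

static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];          /* free frame available */
        } else {
            frames[next] = refs[i];            /* evict the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof refs / sizeof refs[0];
    for (int frames = 3; frames <= 4; frames++)
        printf("%d frames -> %d page faults\n",
               frames, fifo_faults(refs, n, frames));
    return 0;
}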
Optimal Page Replacement
• In this algorithm, the page that will not be used for the longest period of time in the future is replaced.
• Example: Consider the page reference string
  7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
  with 4 page frames. Find the number of page faults.
Cont..
• Initially all frames are empty, so allocating 7, 0, 1, 2 to the empty frames —> 4 page faults.
• 0 is already there —> 0 page faults.
• When 3 comes, it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault.
• 0 is already there —> 0 page faults.
• 4 takes the place of 1 —> 1 page fault.
Cont..
• For the rest of the reference string —> 0 page faults, because those pages are already in memory. Total: 6 page faults.
• Optimal page replacement is perfect, but not
possible in practice as the operating system
cannot know future requests.
• Optimal page replacement is used as a benchmark against which other replacement algorithms can be analyzed.
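A short simulation of the optimal policy on the reference string from the example; with 4 frames it reports the 6 page faults counted above.

#include <stdio.h>

static int next_use(const int *refs, int n, int from, int page)
{
    for (int i = from; i < n; i++)
        if (refs[i] == page) return i;
    return n;                          /* page is never used again */
}

int main(void)
{
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 };
    int n = sizeof refs / sizeof refs[0];
    int frames[4], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < 4) { frames[used++] = refs[i]; continue; }

        /* Replace the page whose next use lies farthest in the future. */
        int victim = 0;
        for (int f = 1; f < used; f++)
            if (next_use(refs, n, i + 1, frames[f]) >
                next_use(refs, n, i + 1, frames[victim]))
                victim = f;
        frames[victim] = refs[i];
    }
    printf("OPT with 4 frames: %d page faults\n", faults);   /* prints 6 */
    return 0;
}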
Least Recently Used (LRU)
Algorithm
– Use recent past as an approximation of near
future.
– Choose the page that has not been used for the
longest period of time.
– May require hardware assistance to implement.
- Considered good, but difficult to implement.
- Like all stack algorithms, LRU does not suffer from Belady's anomaly.
- Example reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0
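A small LRU simulation on the reference string from this slide, assuming 4 frames (the slide does not state a frame count); under that assumption it reports 7 page faults.

#include <stdio.h>

int main(void)
{
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0 };
    int n = sizeof refs / sizeof refs[0];
    int frames[4], last_use[4], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = f; break; }
        if (hit >= 0) { last_use[hit] = i; continue; }   /* hit: refresh recency */

        faults++;
        if (used < 4) {                                  /* free frame available */
            frames[used] = refs[i];
            last_use[used++] = i;
        } else {                                         /* evict least recently used page */
            int victim = 0;
            for (int f = 1; f < used; f++)
                if (last_use[f] < last_use[victim]) victim = f;
            frames[victim] = refs[i];
            last_use[victim] = i;
        }
    }
    printf("LRU with 4 frames: %d page faults\n", faults);
    return 0;
}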
Thrashing
– Why does thrashing occur?
• Σ (size of locality) > total memory size
Working Set Model
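As a concrete illustration of the working-set model, the sketch below computes WS(t, Δ), the set of pages referenced in the most recent Δ references, for an illustrative reference string and Δ = 4 (both chosen here, not given in the slide).

#include <stdio.h>

#define DELTA 4                      /* working-set window (illustrative) */

int main(void)
{
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 };
    int n = sizeof refs / sizeof refs[0];

    for (int t = DELTA - 1; t < n; t++) {
        int ws[DELTA], ws_size = 0;
        /* Collect the distinct pages referenced in the window ending at t. */
        for (int i = t - DELTA + 1; i <= t; i++) {
            int seen = 0;
            for (int j = 0; j < ws_size; j++)
                if (ws[j] == refs[i]) { seen = 1; break; }
            if (!seen) ws[ws_size++] = refs[i];
        }
        printf("t=%2d  working-set size = %d\n", t, ws_size);
    }
    return 0;
}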
Demand Paging Issues
– Prepaging
• Tries to prevent high level of initial paging.
– E.g. If a process is suspended, keep list of pages in working set
and bring entire working set back before restarting process.
– Tradeoff - page fault vs. prepaging - depends on how many
pages brought back are reused.
– Page Size Selection
• fragmentation
• table size
• I/O overhead
• locality
Demand Paging Issues
– Program Structure
• Array A[1024,1024] of integer
• Assume each row is stored on one page
• Assume only one frame in memory
• Program 1
for j := 1 to 1024 do
for i := 1 to 1024 do
A[i,j] := 0;
1024 * 1024 page faults
• Program 2
for i := 1 to 1024 do
for j := 1 to 1024 do
A[i,j] := 0;
1024 page faults
Demand Paging Issues
• I/O Interlock and addressing
• Say I/O is done to/from virtual memory. I/O is
implemented by I/O controller.
– Process A issues I/O request
– CPU is given to other processes
– Page faults occur - process A’s pages are paged out.
– I/O now tries to occur - but frame is being used for another
process.
• Solution 1: never execute I/O directly to user memory - I/O takes place into system memory and is then copied to the user's pages. Copying overhead!!
• Solution 2: Lock pages in memory - cannot be selected
for replacement.
Demand Segmentation
• Used when there is insufficient hardware to
implement demand paging.
• OS/2 allocates memory in segments, which it
keeps track of through segment descriptors.
• Segment descriptor contains valid bit to indicate
whether the segment is currently in memory.
– If segment is in main memory, access continues.
– If not in memory, segment fault.