CS8493 Unit 3
UNIT – III
STORAGE MANAGEMENT
Binding of Instructions and Data to Memory
Address binding of instructions and data to
memory addresses can happen at three different
stages
Compile time: If memory location known a
priori, absolute code can be generated; must
recompile code if starting location changes
Load time: Must generate relocatable code if
memory location is not known at compile
time
Execution time: Binding delayed until run
time if the process can be moved during its
execution from one memory segment to
another
Need hardware support for address maps (e.g., base and limit registers)
Multistep Processing of a User Program (figure)
Logical vs. Physical Address Space
The concept of a logical address space that is bound to a separate
physical address space is central to proper memory management
Logical address – generated by the CPU; also referred to as
virtual address
Physical address – address seen by the memory unit
Logical and physical addresses are the same in compile-time and
load-time address-binding schemes; logical (virtual) and physical
addresses differ in execution-time address-binding scheme
Logical address space is the set of all logical addresses generated
by a program
Physical address space is the set of all physical addresses
corresponding to these logical addresses
Memory-Management Unit (MMU)
Hardware device that at run time maps virtual addresses to physical addresses
Many methods possible, covered in the rest of this chapter
To start, consider simple scheme where the value in the relocation
register is added to every address generated by a user process at
the time it is sent to memory
Base register now called relocation register
MS-DOS on Intel 80x86 used 4 relocation registers
The user program deals with logical addresses; it never sees the
real physical addresses
Execution-time binding occurs when reference is made to location in memory
Logical address bound to physical addresses
Dynamic relocation using a relocation register (figure)
Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused
routine is never loaded
All routines kept on disk in relocatable load
format
Useful when large amounts of code are
needed to handle infrequently occurring
cases
No special support from the
operating system is required
Implemented through program
design
OS can help by providing libraries to implement dynamic loading
Dynamic Linking
Static linking – system libraries and program code combined by the
loader into the binary program image
Dynamic linking –linking postponed until execution time
Small piece of code, stub, used to locate the appropriate memory-
resident library routine
Stub replaces itself with the address of the routine, and executes
the routine
Operating system checks if routine is in processes' memory address space
If not in address space, add to address space
Dynamic linking is particularly useful for libraries
System also known as shared libraries
Consider applicability to patching system libraries (versioning may be needed)
Swapping
A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution
Total physical memory space of processes can exceed
physical memory
Backing store – fast disk large enough to accommodate copies
of all memory images for all users; must provide direct access
to these memory images
Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out
so higher-priority process can be loaded and executed
Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
System maintains a ready queue of ready-to-run processes
which have memory images on disk
Swapping
Does the swapped out process
need to swap back in to same
physical addresses?
Depends on address binding
method
Plus consider pending I/O to /
from process memory space
Modified versions of swapping are
found on many systems (i.e., UNIX,
Linux, and Windows)
Swapping normally disabled
Started if more than threshold amount of memory allocated
Disabled again once memory demand reduced below threshold
Schematic View of Swapping (figure)
Context Switch Time including Swapping
If the next process to be put on the CPU is not in memory, need to swap
out a process and swap in target process
Context switch time can then be very high
100MB process swapping to hard disk with transfer rate
of 50MB/sec
Swap out time of 2000 ms
Plus swap in of same sized process
Total context switch swapping component time of
4000ms (4 seconds)
Can reduce if reduce size of memory swapped – by knowing how
much memory really being used
System calls to inform OS of memory use via request_memory()
and release_memory()
Context Switch Time and Swapping
Other constraints as well on swapping
Pending I/O – can’t swap out as I/O would
occur to wrong process
Or always transfer I/O to kernel space, then
to I/O device
Known as double buffering, adds overhead
Standard swapping not used in modern
operating systems
But modified version common
Swap only when free memory extremely low
Swapping on Mobile Systems
Not typically supported
Flash memory based
Small amount of space
Limited number of write cycles
Poor throughput between flash memory and CPU on mobile platform
Instead use other methods to free memory if low
Read-only data thrown out and reloaded from flash if needed
iOS asks apps to voluntarily relinquish allocated memory
Failure to free can result in termination
Android terminates apps if low free memory, but first writes
application state to flash for fast restart
Both OSes support paging as discussed below
Contiguous Allocation
Main memory must support both OS and user processes
Limited resource, must allocate efficiently
Contiguous allocation is one early method
Main memory usually divided into two partitions:
Resident operating system, usually held in low
memory with interrupt vector
User processes then held in high memory
Each process contained in single contiguous section of memory
Relocation registers used to protect user processes from each
other, and from changing operating-system code and data
Base register contains value of smallest physical address
Contiguous Allocation
Limit register contains range of logical addresses – each logical
address must be less than the limit register
MMU maps logical address dynamically
Can then allow actions such as kernel code being transient and
kernel changing size
Hardware Support for Relocation and Limit Registers (figure)
Multiple-partition allocation
Degree of multiprogramming limited by number of partitions
Variable-partition sizes for efficiency (sized to a given process’ needs)
Hole – block of available memory; holes of various size are
scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to
accommodate it
Process exiting frees its partition, adjacent free partitions combined
Operating system maintains information
about:
a) allocated partitions b) free partitions (hole)
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size – produces the smallest leftover hole
Worst-fit: Allocate the largest hole; must also search entire list –
produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
Fragmentation
External fragmentation – total memory space exists to satisfy a
request, but it is not contiguous
Internal fragmentation – allocated memory may be slightly larger than
requested memory; the size difference is memory internal to a
partition, but not being used
Reduce external fragmentation by compaction
Shuffle memory contents to place all free
memory together in one large block
Compaction is possible only if relocation is dynamic,
and is done at execution time
I/O problem
Latch job in memory while it is involved in I/O
Do I/O only into OS buffers
Now consider that backing store has
same fragmentation problems
Segmentation
Memory-management scheme that supports user view of memory
A program is a collection of segments
A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
User's View of a Program (figure)
Segmentation Architecture
Logical address consists of a two tuple:
<segment-number, offset>
Segment table – maps two-dimensional user-defined addresses into
one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the
segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table's
location in memory
Segment-table length register (STLR) indicates number of segments
used by a program; segment number s is legal if s < STLR
Logical View of Segmentation (figure): segments in user space map to
separate regions of physical memory space
Segmentation Architecture
Protection
With each entry in segment table associate:
validation bit = 0 -> illegal segment
read/write/execute privileges
Protection bits associated with segments; code sharing occurs at
segment level
Since segments vary in length, memory allocation is a dynamic
storage-allocation problem
A segmentation example is shown in the following diagram
Segmentation Hardware (figure)
Paging
Physical address space of a process can be noncontiguous;
process is allocated physical memory whenever the latter is
available
Avoids external fragmentation
Avoids problem of varying sized memory chunks
Divide physical memory into fixed-sized blocks called frames
Size is power of 2, between 512 bytes and 16 Mbytes
Divide logical memory into blocks of same size called pages
Keep track of all free frames
To run a program of size N pages, need to find N free frames and
load program
Set up a page table to translate logical to physical addresses
Backing store likewise split into pages
Still have internal fragmentation
Address Translation Scheme
Address generated by CPU is divided into:
Page number (p) – used as an index into a page table
which contains base address of each page in physical
memory
Page offset (d) – combined with base address to
define the physical memory address that is sent to the
memory unit
For given logical address space 2^m and page size 2^n:
page number p uses the high-order m − n bits, page offset d uses the low-order n bits
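To make the split concrete, here is a minimal C sketch of how a logical address decomposes into p and d; the 4 KB page size (n = 12) and all variable names are assumptions for illustration, not part of the slide:

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 12                 /* page size 4 KB = 2^12 */

    int main(void) {
        uint32_t logical = 0x00403A2C;                    /* arbitrary 32-bit address     */
        uint32_t p = logical >> OFFSET_BITS;              /* page number: high m - n bits */
        uint32_t d = logical & ((1u << OFFSET_BITS) - 1); /* page offset: low n bits      */
        printf("p = %u, d = %u\n", p, d);
        /* the MMU would then form (frame(p) << OFFSET_BITS) | d */
        return 0;
    }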
Paging Hardware
Paging Model of Logical and Physical Memory
Paging Example (figure)
Implementation of Page Table
Page table is kept in main memory
Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page table
In this scheme every data/instruction access requires two memory
accesses: one for the page table and one for the data/instruction;
the problem can be solved by a special fast-lookup hardware cache
called associative memory or translation look-aside buffer (TLB)
Free Frames (figure)
Implementation of Page Table (Cont.)
Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
an ASID uniquely identifies each process to provide address-space
protection for that process
Otherwise need to flush the TLB at every context switch
TLBs typically small (64 to 1,024 entries)
On a TLB miss, value is loaded into the TLB for faster access next
time
Replacement policies must be considered
Some entries can be wired down for permanent fast access
Associative Memory
Associative memory – parallel search over (page number, frame number) pairs
Address translation (p, d): if p is in an associative register, get
frame number out; otherwise get frame number from page table in memory
Paging Hardware With TLB
Effective Access Time
Associative lookup = ε time units
Can be < 10% of memory access time
Hit ratio = α
Hit ratio – percentage of times that a page number is found in
the associative registers; ratio related to number of associative
registers
Effective Access Time (EAT) = (1 + ε)α + (2 + ε)(1 – α) = 2 + ε – α
Consider α = 80%, ε = 20ns for TLB search, 100ns for memory access
EAT = 0.80 x 100 + 0.20 x 200 = 120ns
Consider more realistic hit ratio -> α = 99%, ε = 20ns for TLB search,
100ns for memory access
EAT = 0.99 x 100 + 0.01 x 200 = 101ns
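As a sanity check of the two worked examples, a small C helper under the slide's simplification that a TLB hit costs one memory access and a miss costs two (page-table access plus data access); the function name is illustrative:

    #include <stdio.h>

    static double eat_ns(double alpha, double mem_ns) {
        /* hit: one memory access; miss: page table + data = two accesses */
        return alpha * mem_ns + (1.0 - alpha) * 2.0 * mem_ns;
    }

    int main(void) {
        printf("alpha = 0.80: EAT = %.0f ns\n", eat_ns(0.80, 100.0)); /* 120 */
        printf("alpha = 0.99: EAT = %.0f ns\n", eat_ns(0.99, 100.0)); /* 101 */
        return 0;
    }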
Memory Protection
Memory protection implemented by associating protection bit
with each frame to indicate if read-only or read-write access is
allowed
Can also add more bits to indicate page execute-only, and
so on
Valid-invalid bit attached to each entry in the page table:
"valid" indicates that the associated page is in the process'
logical address space, and is thus a legal page
"invalid" indicates that the page is not in the process' logical
address space
Or use page-table length register (PTLR)
Any violations result in a trap to the kernel
Valid (v) or Invalid (i) Bit In A Page Table
Shared Pages
Shared code
One copy of read-only (reentrant) code shared
among processes (i.e., text editors, compilers, window
systems)
Similar to multiple threads sharing the same
process space
Also useful for interprocess communication if sharing of
read-write pages is allowed
Private code and data
Each process keeps a separate copy of the
code and data
The pages for the private code and data can
appear anywhere in the logical address space
Shared Pages Example
Structure of the Page Table
Memory structures for paging can get huge using straight-forward
methods
Consider a 32-bit logical address space as on modern computers
Page size of 4 KB (2^12)
Page table would have 1 million entries (2^32 / 2^12)
If each entry is 4 bytes -> 4 MB of physical address
space / memory for page table alone
That amount of memory used to cost a lot
Don’t want to allocate that contiguously in main memory
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Hierarchical Page Tables
Break up the logical address space into multiple
page tables
A simple technique is a two-level page table
We then page the page table
Two-Level Paging Example
A logical address (on 32-bit machine with 1K page
size) is divided into:
a page number consisting of 22 bits
a page offset consisting of 10 bits
Since the page table is paged, the page number is further divided into:
a 12-bit outer page number (p1)
a 10-bit inner page number (p2)
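A minimal C sketch of extracting the three fields, assuming the 12/10/10 split above (the address value and names are illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t addr = 0x12345678;          /* arbitrary logical address */
        uint32_t d  = addr & 0x3FF;          /* low 10 bits: offset       */
        uint32_t p2 = (addr >> 10) & 0x3FF;  /* next 10 bits: inner page  */
        uint32_t p1 = addr >> 20;            /* high 12 bits: outer page  */
        printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
        return 0;
    }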
Address-Translation Scheme
Three-level Paging Scheme
64-bit Logical Address Space
Even two-level paging scheme not sufficient
If page size is 4 KB (2^12)
Then page table has 2^52 entries
If two-level scheme, inner page tables could be 2^10 4-byte entries
Address would look like: outer page p1 (42 bits) | inner page p2 (10 bits) | offset d (12 bits)
But the outer page table would still have 2^42 entries
Inverted Page Table
Rather than each process having a page table and keeping track of
all possible logical pages, track all physical pages
One entry for each real page of memory
Entry consists of the virtual address of the page stored in that real
memory location, with information about the process that owns that
page
Decreases memory needed to store each page table, but increases
time needed to search the table when a page reference occurs
Use hash table to limit the search to one — or at most a few —
page-table entries
TLB can accelerate access
But how to implement shared memory?
One mapping of a virtual address to the shared physical address
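A sketch of the hashed lookup in C; the table layout, hash function, and all names are assumptions for illustration, not a real MMU interface:

    #include <stdint.h>

    #define NFRAMES 1024

    struct ipt_entry {
        int      pid;     /* owning process                  */
        uint32_t vpage;   /* virtual page held in this frame */
        int      next;    /* next frame in hash chain, or -1 */
    };

    static struct ipt_entry ipt[NFRAMES];
    static int bucket[NFRAMES];           /* chain heads per hash value */

    static unsigned hash(int pid, uint32_t vpage) {
        return ((unsigned)pid * 31u + vpage) % NFRAMES;
    }

    /* Returns the frame number (== table index), or -1 => page fault. */
    static int ipt_lookup(int pid, uint32_t vpage) {
        for (int f = bucket[hash(pid, vpage)]; f != -1; f = ipt[f].next)
            if (ipt[f].pid == pid && ipt[f].vpage == vpage)
                return f;
        return -1;
    }

    int main(void) {
        for (int i = 0; i < NFRAMES; i++) bucket[i] = -1;  /* empty table */
        return ipt_lookup(1, 42) == -1 ? 0 : 1;            /* fault: not resident */
    }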
Inverted Page Table Architecture
Oracle SPARC Solaris
Consider modern, 64-bit operating system example with
tightly integrated HW
Goals are efficiency, low overhead
Based on hashing, but more complex
Two hash tables
One for the kernel and one for all user processes
Each maps memory addresses
from virtual to physical memory
Each entry represents a contiguous area of mapped virtual memory
More efficient than having a separate hash-table entry for each page
Each entry has base address and span (indicating the number of pages
the entry represents)
Oracle SPARC Solaris
TLB holds translation table entries (TTEs) for fast hardware lookups
A cache of TTEs resides in a translation storage buffer (TSB)
Includes an entry per recently accessed page
Virtual address reference causes TLB search
If miss, hardware walks the in-memory TSB looking for the TTE
corresponding to the address
If match found, the CPU copies the TSB entry into the TLB and
translation completes
If no match found, kernel interrupted to search the hash table
The kernel then creates a TTE from the appropriate hash
table and stores it in the TSB; the interrupt handler then returns
control to the MMU, which completes the address translation
Example: The Intel 32 and 64-bit
Architectures
Dominant industry chips
Example: The Intel IA-32 Architecture
Supports both segmentation and segmentation with paging
Each segment can be 4 GB
Up to 16 K segments per process
Divided into two partitions
First partition of up to 8 K segments are private to process
(kept in local descriptor table (LDT))
Second partition of up to 8K segments shared
among all processes (kept in global descriptor table (GDT))
CPU generates logical address
Selector given to segmentation unit
Which produces linear addresses
Example: The Intel IA-32 Architecture
Linear address given to paging unit
Which generates physical address in main memory
Paging units form equivalent of MMU
Pages sizes can be 4 KB or 4 MB
Intel IA-32 Segmentation
Intel IA-32 Paging Architecture
Intel IA-32 Page Address Extensions
32-bit address limits led Intel to create page address extension (PAE), allowing
32-bit apps access to more than 4GB of memory space
Paging went to a 3-level scheme
Top two bits refer to a page directory pointer table
Page-directory and page-table entries moved to 64-bits in size
Net effect is increasing address space to 36 bits – 64GB of physical memory
Intel x86-64
Current generation Intel x86 architecture
64 bits is ginormous (> 16 exabytes)
In practice only implement 48 bit addressing
Page sizes of 4 KB, 2 MB, 1 GB
Four levels of paging hierarchy
Can also use PAE so virtual addresses are 48 bits and
physical addresses are 52 bits
Example: ARM Architecture
Dominant mobile platform chip (Apple iOS and Google Android devices
for example)
32-bit address divided into outer page, inner page, and offset
One-level paging for sections (1 MB or 16 MB), two-level for smaller
pages
Two levels of TLBs
Outer level has two micro TLBs (one data, one instruction)
Inner is single main TLB
The micro TLBs are checked first; on a miss the main TLB is checked,
and on a further miss a page table walk is performed by the CPU
Virtual Memory
Code needs to be in memory to execute, but entire program rarely
used
Error code, unusual routines, large data structures
Entire program code not needed at same time
Consider ability to execute partially-loaded program
Program no longer constrained by limits of physical memory
Each program takes less memory while running ->
more programs run at the same time
Increased CPU utilization and throughput with no increase in
response time or turnaround time
Less I/O needed to load or swap programs into memory -> each
user program runs faster
Background (Cont.)
Virtual memory – separation of user logical
memory from physical memory
Only part of the program needs to be in memory for
execution
Logical address space can therefore be much
larger than physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
More programs running concurrently
Less I/O needed to load or swap processes
Background (Cont.)
Virtual address space – logical view of how process
is stored in memory
Usually start at address 0, contiguous addresses until
end of space
Meanwhile, physical memory organized in page frames
MMU must map logical to physical
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Virtual Memory That is Larger Than Physical Memory
Virtual-address Space
Usually design logical address space for stack to start at
Max logical address and grow “down” while heap grows
“up”
Maximizes address space use
Unused address space between the two is hole
No physical memory needed until heap or stack
grows to a given new page
Enables sparse address spaces with holes left for growth,
dynamically linked libraries, etc
System libraries shared via mapping into virtual address
space
Shared memory by mapping pages read-write into virtual
address space
Pages can be shared during fork(), speeding process creation
Shared Library Using Virtual Memory
Demand Paging
Could bring entire process into memory at
load time
Or bring a page into memory only when it is
needed
Less I/O needed, no unnecessary I/O
Less memory needed
Faster response
More users
Similar to paging system with swapping
Page is needed -> reference to it
invalid reference -> abort
not-in-memory -> bring to memory
Lazy swapper – never swaps a page into
memory unless page will be needed
Swapper that deals with pages is a
pager
Basic Concepts
With swapping, pager guesses which pages will be used before
swapping out again
Instead, pager brings in only those pages into memory
How to determine that set of pages?
Need new MMU functionality to implement demand paging
If pages needed are already memory resident
No difference from non demand-paging
If page needed and not memory resident
Need to detect and load the page into memory from storage
Without changing program behavior
Without programmer needing to change code
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated (v in-
memory – memory resident, i not-in-memory)
Initially valid–invalid bit is set to i on all entries
Example of a page table snapshot:
Page Fault
If there is a reference to a page, first reference to that
page will trap to operating system: page fault
Operating system looks at another table to decide:
Invalid reference -> abort
Just not in memory -> bring it in:
Find free frame
Swap page into frame via scheduled disk operation
Reset tables to indicate page now in memory
Set validation bit = v
Restart the instruction that caused the page
fault
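The steps above can be mimicked by a toy user-level simulation in C; the page-table arrays and the trivial free-frame counter are illustrative assumptions (no replacement is modeled once frames run out):

    #include <stdio.h>
    #include <stdlib.h>

    #define NPAGES  8
    #define NFRAMES 4

    static int valid[NPAGES];      /* validation bit: v = 1, i = 0 */
    static int frame_of[NPAGES];
    static int next_free = 0;      /* trivial free-frame list      */

    static int access_page(int page) {
        if (page < 0 || page >= NPAGES) {     /* invalid reference -> abort */
            fprintf(stderr, "abort: illegal page %d\n", page);
            exit(1);
        }
        if (!valid[page]) {                   /* page fault                 */
            int f = next_free++;              /* find free frame            */
            printf("fault: load page %d into frame %d\n", page, f);
            frame_of[page] = f;               /* reset tables               */
            valid[page] = 1;                  /* set validation bit = v     */
        }
        return frame_of[page];                /* restart / continue         */
    }

    int main(void) {
        access_page(2); access_page(2); access_page(5);  /* 2 faults, 1 hit */
        return 0;
    }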
Steps in Handling a Page Fault
Aspects of Demand Paging
Extreme case – start process with no pages in memory
OS sets instruction pointer to first instruction of process, non-
memory-resident -> page fault
And the same for every other page of the process on first access
Pure demand paging
Actually, a given instruction could access multiple pages -> multiple
page faults
Consider fetch and decode of instruction which adds 2 numbers
from memory and stores result back to memory
Pain decreased because of locality of reference
Hardware support needed for demand paging
Page table with valid / invalid bit
Secondary memory (swap device with swap space)
Instruction restart
Instruction Restart
Consider an instruction that could access several different
locations
Block move: the instruction may fault after part of the block has
already been copied, so the whole instruction must be restartable
Auto increment/decrement addressing modes raise the same issue
Performance of Demand Paging
Stages in Demand Paging (worst case)
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on the disk
5. Issue a read from the disk to a free frame:
1. Wait in a queue for this device until the read request is serviced
2. Wait for the device seek and/or latency time
3. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume the interrupted
instruction
Performance of Demand Paging (Cont.)
Three major activities
Service the interrupt – careful coding means just several
hundred instructions needed
Read the page – lots of time
Restart the process – again just a small amount of time
Page Fault Rate 0 ≤ p ≤ 1
if p = 0, no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT)
= (1 – p) x memory access
+ p x (page fault overhead + swap page out + swap page in)
Demand Paging Example
Memory access time = 200 nanoseconds
Average page-fault service time = 8 milliseconds
EAT = (1 – p) x 200 + p (8 milliseconds)
= (1 – p) x 200 + p x 8,000,000
= 200 + p x 7,999,800
If one access out of 1,000 causes a page fault, then EAT = 8.2
microseconds.
This is a slowdown by a factor of 40!!
If want performance degradation < 10 percent
220 > 200 + 7,999,800 x p
20 > 7,999,800 x p
p < .0000025
< one page fault in every 400,000 memory accesses
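The same numbers in a short C program, for checking the arithmetic (values taken straight from the example above):

    #include <stdio.h>

    int main(void) {
        double mem = 200.0, service = 8000000.0;    /* nanoseconds          */
        double p = 1.0 / 1000.0;                    /* one fault per 1,000  */
        double eat = (1.0 - p) * mem + p * service; /* 200 + p * 7,999,800  */
        printf("EAT = %.1f ns, slowdown = %.1fx\n", eat, eat / mem);
        return 0;
    }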
Demand Paging Optimizations
Swap space I/O faster than file system I/O even if on the same device
Swap allocated in larger chunks, less management needed than file system
Copy entire process image to swap space at process load time
Then page in and out of swap space
Used in older BSD Unix
Demand page in from program binary on disk, but discard rather than paging
out when freeing frame
Used in Solaris and current BSD
Still need to write to swap space
Pages not associated with a file (like stack and heap) – anonymous
memory
Pages modified in memory but not yet written back to the file system
Mobile systems
Typically don’t support swapping
Instead, demand page from file system and reclaim read-only pages (such as code)
Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes to initially
share the same pages in memory
If either process modifies a shared page, only then is the page copied
COW allows more efficient process creation as only modified pages are
copied
In general, free pages are allocated from a pool of zero-fill-on-demand
pages
Pool should always have free frames for fast demand page execution
Don’t want to have to free a frame as well as other processing on
page fault
Why zero-out a page before allocating it?
vfork() variation on fork() system call has parent suspend and child using
copy-on-write address space of parent
Designed to have child call exec()
Very efficient
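A minimal demonstration of fork() sharing pages copy-on-write, assuming a UNIX-like system; the buffer name and strings are illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        static char buf[4096] = "original";  /* roughly one page of data     */
        pid_t pid = fork();                  /* parent and child share pages */
        if (pid == 0) {
            strcpy(buf, "changed");          /* write -> kernel copies page  */
            printf("child : %s\n", buf);
            exit(0);
        }
        wait(NULL);
        printf("parent: %s\n", buf);         /* still "original"             */
        return 0;
    }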
Before Process 1 Modifies Page C
After Process 1 Modifies Page C
What Happens if There is no Free Frame?
Used up by process pages
Also in demand from the kernel, I/O buffers, etc
How much to allocate to each?
Page replacement – find some page in memory, but not
really in use, page it out
Algorithm – terminate? swap out? replace the page?
Performance – want an algorithm which will result in
minimum number of page faults
Same page may be brought into memory several times
Page Replacement
Prevent over-allocation of memory by modifying
page-fault service routine to include page
replacement
Use modify (dirty) bit to reduce overhead of page
transfers – only modified pages are written to
disk
Page replacement completes separation between
logical memory and physical memory – large
virtual memory can be provided on a smaller
physical memory
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to
select a victim frame
- Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update
the page and frame tables
4. Continue the process by restarting the instruction that caused
the trap
Note now potentially 2 page transfers for page fault – increasing
EAT
Page Replacement
Page and Frame Replacement Algorithms
Frame-allocation algorithm determines
How many frames to give each process
Which frames to replace
Page-replacement algorithm
Want lowest page-fault rate on both first access and re-access
Evaluate algorithm by running it on a particular string of memory
references (reference string) and computing the number of page
faults on that string
String is just page numbers, not full addresses
Repeated access to the same page does not cause a page fault
Results depend on number of frames available
In all our examples, the reference string of referenced
page numbers is 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus The Number of
Frames
First-In-First-Out (FIFO) Algorithm
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
3 frames (3 pages can be in memory at a time per process): 15 page faults
Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
Adding more frames can cause more page faults! Belady's Anomaly
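A small C simulation of FIFO on the 7,0,1,... reference string above with 3 frames confirms the count of 15:

    #include <stdio.h>

    int main(void) {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof ref / sizeof ref[0];
        int frames[3] = {-1, -1, -1};
        int oldest = 0, faults = 0;          /* oldest = FIFO victim index */
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < 3; j++)
                if (frames[j] == ref[i]) hit = 1;
            if (!hit) {
                frames[oldest] = ref[i];     /* replace oldest page */
                oldest = (oldest + 1) % 3;
                faults++;
            }
        }
        printf("FIFO page faults: %d\n", faults);   /* prints 15 */
        return 0;
    }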
FIFO Illustrating Belady’s Anomaly
Optimal Algorithm
Replace page that will not be used for
longest period of time
9 page faults is optimal for the example
How do you know this?
Can’t read the future
Used for measuring how well your
algorithm performs
Least Recently Used (LRU) Algorithm
Use past knowledge rather than future
Replace page that has not been used for the longest period of time
Associate time of last use with each page
LRU Algorithm (Cont.)
Counter implementation
Every page entry has a counter; every time page is
referenced
through this entry, copy the clock into the counter
When a page needs to be changed, look at the counters
to find smallest value
Search through table needed
Stack implementation
Keep a stack of page numbers in a double link form:
Page referenced:
move it to the top
requires 6 pointers to be changed
But each update more expensive
No search for replacement
LRU and OPT are cases of stack algorithms that don't have Belady's
Anomaly
Use Of A Stack to Record Most Recent Page References
LRU Approximation Algorithms
LRU needs special hardware and is still slow
Reference bit
With each page associate a bit, initially = 0
When page is referenced bit set to 1
Replace any with reference bit = 0 (if one exists)
We do not know the order, however
Second-chance algorithm
Generally FIFO, plus hardware-provided reference bit
Clock replacement
If page to be replaced has
Reference bit = 0 -> replace it
reference bit = 1 then:
set reference bit 0, leave page in memory
replace next page, subject to same rules
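A sketch of the clock scan in C; the resident pages, initial reference bits, and frame count are illustrative assumptions:

    #include <stdio.h>

    #define NFRAMES 4

    static int page[NFRAMES]   = {3, 8, 2, 5};  /* resident pages          */
    static int refbit[NFRAMES] = {1, 0, 1, 1};  /* set by hardware on use  */
    static int hand = 0;                        /* clock pointer           */

    static int choose_victim(void) {
        for (;;) {
            if (refbit[hand] == 0) {            /* second chance used up   */
                int v = hand;
                hand = (hand + 1) % NFRAMES;
                return v;
            }
            refbit[hand] = 0;                   /* give a second chance    */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        int v = choose_victim();
        printf("evict page %d from frame %d\n", page[v], v);  /* page 8 */
        return 0;
    }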
Second-Chance (clock) Page-Replacement Algorithm
Enhanced Second-Chance Algorithm
Improve algorithm by using reference bit and modify bit
(if available) in concert
Take ordered pair (reference, modify)
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good, must
write out before replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again
soon and need to write out before replacement
When page replacement called for, use the clock scheme but
use the four classes: replace page in lowest non-empty class
Might need to search circular queue several times
Counting Algorithms
Keep a counter of the number of references that
have been made to each page
Not common
Least Frequently Used (LFU) Algorithm:
replaces page with smallest count
Most Frequently Used (MFU) Algorithm: based
on the argument that the page with the smallest
count was probably just brought in and has yet
to be used
Page-Buffering Algorithms
Keep a pool of free frames, always
Then frame available when needed, not found at fault time
Read page into free frame and select victim to evict and add to
free pool
When convenient, evict victim
Possibly, keep list of modified pages
When backing store otherwise idle, write pages there and set to
non-dirty
Possibly, keep free frame contents intact and note what is in them
If referenced again before reused, no need to load
contents again from disk
Generally useful to reduce penalty if wrong victim
frame selected
Applications and Page Replacement
All of these algorithms have OS guessing about
future page access
Some applications have better knowledge – i.e. databases
Memory intensive applications can cause double buffering
OS keeps copy of page in memory as I/O buffer
Application keeps page in memory for its own work
Operating system can give direct access to the disk, getting out of
the way of the applications
Raw disk mode
Bypasses buffering, locking, etc
Allocation of Frames
Each process needs minimum number of frames
Example: IBM 370 – 6 pages to handle SS
MOVE instruction:
instruction is 6 bytes, might span 2 pages
2 pages to handle from
2 pages to handle to
Maximum of course is total frames in the
system
Two major allocation schemes
fixed allocation
priority allocation
Many variations
Fixed Allocation
Equal allocation – For example, if there are 100 frames (after
allocating frames for the OS) and 5 processes, give each process
20 frames
Keep some as free frame buffer pool
Proportional allocation – Allocate according to the size of process
Dynamic as degree of multiprogramming, process sizes change
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) x m
Example: m = 62 frames, s_1 = 10, s_2 = 127
a_1 = (10/137) x 62 ≈ 4 frames
a_2 = (127/137) x 62 ≈ 57 frames
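The example above, checked in C with integer truncation as on the slide:

    #include <stdio.h>

    int main(void) {
        int s[] = {10, 127};                  /* process sizes   */
        int m = 62, S = 0;                    /* frames, total   */
        for (int i = 0; i < 2; i++) S += s[i];
        for (int i = 0; i < 2; i++)           /* a_i = s_i/S * m */
            printf("a%d = %d\n", i + 1, s[i] * m / S);  /* 4, 57 */
        return 0;
    }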
Priority Allocation
Use a proportional allocation scheme using priorities rather than size
If process Pi generates a page fault,
select for replacement one of its frames
select for replacement a frame from a process
with lower priority number
Global vs. Local Allocation
Global replacement – process selects a
replacement frame from the set of all frames; one
process can take a frame from another
But then process execution time can vary greatly
But greater throughput so more common
Local replacement – each process selects from
only its own set of allocated frames
More consistent per-process performance
But possibly underutilized memory
Non-Uniform Memory Access
So far all memory accessed equally
Many systems are NUMA – speed of access to memory varies
Consider system boards containing CPUs and
memory, interconnected over a system bus
Optimal performance comes from allocating memory “close to” the
CPU on which the thread is scheduled
And modifying the scheduler to schedule the thread on the same
system board when possible
Solved by Solaris by creating lgroups
Structure to track CPU / Memory low latency groups
Used by scheduler and pager
When possible schedule all threads of a process and allocate
all memory for that process within the lgroup
Thrashing
If a process does not have “enough” pages, the page-fault rate is
very high
Page fault to get page
Replace existing frame
But quickly need replaced frame back
This leads to:
Low CPU utilization
Operating system thinking that it needs to
increase the degree of multiprogramming
Another process added to the system
Thrashing ≡ a process is busy swapping pages in and out
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
WSS_i (working set of Process P_i) =
total number of pages referenced in the most recent Δ (varies in time)
if Δ too small will not encompass entire locality
if Δ too large will encompass several localities
if Δ = ∞ will encompass entire program
D = Σ WSS_i ≡ total demand frames
Approximation of locality
if D > m -> Thrashing
Policy: if D > m, then suspend or swap out one of the processes
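The policy reduces to a single comparison; a toy C check with made-up working-set sizes (all values illustrative):

    #include <stdio.h>

    int main(void) {
        int wss[] = {20, 35, 18};            /* WSS_i per process */
        int m = 64, D = 0;                   /* total frames      */
        for (int i = 0; i < 3; i++) D += wss[i];
        printf("D = %d, m = %d: %s\n", D, m,
               D > m ? "suspend or swap out a process" : "no thrashing");
        return 0;
    }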
Keeping Track of the Working Set
Approximate with interval timer + a reference bit
Example: Δ = 10,000
Timer interrupts after every 5000 time units
Keep in memory 2 bits for each page
Whenever a timer interrupts, copy and set the values of all
reference bits to 0
If one of the bits in memory = 1 -> page in working set
Why is this not completely accurate?
Improvement: 10 bits and interrupt every 1000 time units
Page-Fault Frequency
More direct approach than WSS
Establish “acceptable” page-fault frequency (PFF) rate
and use local replacement policy
If actual rate too low, process loses frame
If actual rate too high, process gains frame
Working Sets and Page Fault Rates
Direct relationship between working set of a process and
its page-fault rate
Working set changes over time
Peaks and valleys over time
Memory-Mapped Files
Memory-mapped file I/O allows file I/O to be treated as routine
memory access by mapping a disk block to a page in memory
A file is initially read using demand paging
A page-sized portion of the file is read from the file system into
a physical page
Subsequent reads/writes to/from the file are
treated as ordinary memory accesses
Simplifies and speeds file access by driving file I/O through memory
rather than read() and write() system calls
Also allows several processes to map the same file allowing
the pages in memory to be shared
But when does written data make it to disk?
Periodically and / or at file close() time
For example, when the pager scans for dirty pages
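A minimal POSIX example of mapping a file and reading it through memory; the filename is illustrative and error handling is abbreviated:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.txt", O_RDONLY);       /* file to map */
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        fstat(fd, &st);
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        fwrite(p, 1, st.st_size, stdout);   /* file bytes read as ordinary memory */
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }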
Memory-Mapped File Technique for all I/O
Some OSes use memory-mapped files for standard I/O
Process can explicitly request memory mapping a file via
mmap() system call
Now file mapped into process address space
For standard I/O (open(), read(), write(), close()), mmap anyway
But map file into kernel address space
Process still does read() and write()
Copies data to and from kernel space and user space
Uses efficient memory management subsystem
Avoids needing separate subsystem
COW can be used for read/write non-shared pages
Memory mapped files can be used for shared memory (although
again via separate system calls)
Memory Mapped Files
Shared Memory via Memory-Mapped I/O
Shared Memory in Windows API
First create a file mapping for file to be mapped
Then establish a view of the mapped file in process’s
virtual address space
Consider producer / consumer
Producer creates shared-memory object using memory-mapping features
Open file via CreateFile(), returning a
HANDLE
Create mapping via CreateFileMapping()
creating a named shared-memory object
Create view via MapViewOfFile()
Sample code in Textbook
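A producer-side sketch using the calls named above; for brevity this version backs the object with the system paging file (INVALID_HANDLE_VALUE) instead of opening a disk file via CreateFile(), and the object name and message are illustrative:

    #include <windows.h>

    int main(void) {
        HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, 4096,
                                        TEXT("SharedObject"));   /* named object */
        if (hMap == NULL) return 1;
        LPVOID view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
        if (view == NULL) { CloseHandle(hMap); return 1; }
        CopyMemory(view, "hello", 6);   /* consumer maps the same named object */
        UnmapViewOfFile(view);
        CloseHandle(hMap);
        return 0;
    }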
Allocating Kernel Memory
Treated differently from user memory
Often allocated from a free-memory pool
Kernel requests memory for
structures of varying sizes
Some kernel memory needs to be contiguous
e.g., for device I/O
Buddy System
Allocates memory from fixed-size segment consisting of physically-contiguous
pages
Memory allocated using power-of-2 allocator
Satisfies requests in units sized as power of 2
Request rounded up to next highest power of 2
When smaller allocation needed than is available, current chunk split into
two buddies of next-lower power of 2
Continue until appropriate sized chunk available
For example, assume 256KB chunk available, kernel requests 21KB
Split into AL and AR of 128KB each
One further divided into BL and BR of 64KB
One further into CL and CR of 32KB each – one used to satisfy
request
Advantage – quickly coalesce unused chunks into larger chunk
Disadvantage - fragmentation
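The 256 KB / 21 KB example traced in C (sizes in KB; the power-of-two rounding is the allocator's rule, the helper name is illustrative):

    #include <stdio.h>

    static unsigned next_pow2(unsigned n) {
        unsigned p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    int main(void) {
        unsigned chunk = 256, request = 21;
        unsigned need = next_pow2(request);            /* 21 -> 32               */
        for (unsigned s = chunk; s > need; s /= 2)     /* 256 -> 128 -> 64 -> 32 */
            printf("split %u KB into two %u KB buddies\n", s, s / 2);
        printf("satisfy %u KB request with a %u KB block\n", request, need);
        return 0;
    }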
Buddy System Allocator
Slab Allocator
Alternate strategy
Slab is one or more physically contiguous pages
Cache consists of one or more slabs
Single cache for each unique kernel data structure
Each cache filled with objects – instantiations of
the data structure
When cache created, filled with objects marked as free
When structures stored, objects marked as used
If slab is full of used objects, next object allocated from empty slab
If no empty slabs, new slab allocated
Benefits include no fragmentation, fast memory request satisfaction
Slab Allocation
Slab Allocator in Linux
For example process descriptor is of type struct task_struct
Approx 1.7KB of memory
New task -> allocate new struct from cache
Will use existing free struct task_struct
Slab can be in three possible states
Full – all used
Empty – all free
Partial – mix of free and used
Upon request, slab allocator
Uses free struct in partial slab
If none, takes one from empty slab
If no empty slab, create new empty
Slab Allocator in Linux (Cont.)
Slab started in Solaris, now widespread for both
kernel mode and user memory in various OSes
Linux 2.2 had SLAB, now has both SLOB
and SLUB allocators
SLOB for systems with limited memory
Simple List of Blocks – maintains 3 list objects for small,
medium, large objects
SLUB is performance-optimized SLAB that removes per-CPU queues;
metadata stored in page structure
Other Considerations - Prepaging
Prepaging
To reduce the large number of page faults that
occurs at process startup
Prepage all or some of the pages a process will need, before
they are referenced
But if prepaged pages are unused, I/O and memory was wasted
Assume s pages are prepaged and α of the pages is used
Is the cost of s x α saved page faults greater or less than the cost
of prepaging s x (1 – α) unnecessary pages?
α near zero -> prepaging loses
Other Issues – Page Size
Sometimes OS designers have a choice
Especially if running on custom-built CPU
Page size selection must take into consideration:
Fragmentation
Page table size
Resolution
I/O overhead
Number of page faults
Locality
TLB size and effectiveness
Always power of 2, usually in the range 2^12 (4,096 bytes) to 2^22
(4,194,304 bytes)
On average, growing over time
Other Issues – TLB Reach
TLB Reach - The amount of memory accessible from the TLB
TLB Reach = (TLB Size) X (Page Size)
Ideally, the working set of each process is stored in the TLB
Otherwise there is a high degree of page faults
Increase the Page Size
This may lead to an increase in fragmentation as not all
applications require a large page size
Provide Multiple Page Sizes
This allows applications that require larger page sizes the
opportunity to use them without an increase in
fragmentation
Other Issues – Program Structure
Program structure
int data[128][128];
Each row is stored in one page
Program 1:
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
128 x 128 = 16,384 page faults
Program 2:
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
128 page faults
Other Issues – I/O interlock
I/O Interlock – sometimes pages must be locked into memory
Consider I/O - Pages that are
used for copying a file from a
device must be locked from
being selected for eviction by a
page replacement algorithm
Pinning of pages to lock into
memory
Operating System Examples
Windows
Solaris
Windows
Uses demand paging with clustering. Clustering brings in pages
surrounding the faulting page
Processes are assigned working set minimum and working set
maximum
Working set minimum is the minimum number of pages the
process is guaranteed to have in memory
A process may be assigned as many pages up to its working set
maximum
When the amount of free memory in the system falls below a
threshold, automatic working set trimming is performed to
restore the amount of free memory
Working set trimming removes pages from processes that have
pages in excess of their working set minimum
Solaris
Maintains a list of free pages to assign faulting processes
Lotsfree – threshold parameter (amount of free memory) to
begin paging
Desfree – threshold parameter to increase paging
Minfree – threshold parameter to begin swapping
Paging is performed by pageout process
Pageout scans pages using modified clock algorithm
Scanrate is the rate at which pages are scanned. This ranges
from slowscan to fastscan
Pageout is called more frequently depending upon the amount
of free memory available
Priority paging gives priority to process code pages
Solaris 2 Page Scanner
Questions?