Final Unit 6 Spos 2023
Prepared By
Prof. Anand N. Gharu
(Assistant Professor)
Computer Dept.
Source: cse.iitkgp.ac.in/~bivasm/os_notes/memory_v3.pptx
SYLLABUS :
Introduction: Memory Management concepts, Memory Management
requirements.
Memory Partitioning: Fixed Partitioning, Dynamic Partitioning, Buddy
Systems, Fragmentation, Paging, Segmentation, Address translation.
Placement Strategies: First Fit, Best Fit, Next Fit and Worst Fit.
Virtual Memory (VM): Concepts, Swapping, VM with Paging, Page
Table Structure, Inverted Page Table, Translation Look-aside Buffer,
Page Size, VM with Segmentation, VM with Combined Paging and
Segmentation.
Page Replacement Policies: First In First Out (FIFO), Least Recently
Used (LRU), Optimal, Thrashing.
Content
• Memory management:
• Review of Programming Model of Intel 80386,
• Contiguous and non-contiguous,
• Swapping,
• Paging,
• Segmentation,
• Segmentation with Paging.
• Virtual Memory:
– Background,
– Demand paging,
– Page replacement scheme-
• FIFO,
• LRU,
• Optimal,
• Thrashing.
• Case Study: Memory Management in multi-cores OS.
PAGE
REPLACEMENT
ALGORITHM
PAGE REPLACEMENT ALGORITHMS
1. FIFO (First In First Out): the operating system keeps all pages in
memory in a queue, with the oldest page at the front of the queue. When a
page needs to be replaced, the page at the front of the queue is selected
for removal.
Disadvantages
1. Poor performance.
2. Doesn't consider the frequency of use or the last used time; it simply
replaces the oldest page.
3. Suffers from Belady's Anomaly (i.e. more page faults when we increase
the number of page frames).
PAGE REPLACEMENT ALGORITHMS
1. FIFO :
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3
page frames. Find the number of page faults.
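The answer can be checked with a minimal C sketch of FIFO replacement (the array sizes and variable names are illustrative, not from the slides):

/* FIFO page-replacement sketch for the example above:
 * reference string 1, 3, 0, 3, 5, 6, 3 with 3 frames. */
#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {1, 3, 0, 3, 5, 6, 3};
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES];
    int next = 0;      /* index of the oldest page (front of the FIFO queue) */
    int loaded = 0;    /* how many frames are filled so far */
    int faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (loaded < NFRAMES) {
                frames[loaded++] = refs[i];   /* free frame still available */
            } else {
                frames[next] = refs[i];       /* evict the oldest page */
                next = (next + 1) % NFRAMES;
            }
        }
    }
    printf("page faults = %d\n", faults);     /* prints 6 for this string */
    return 0;
}

For this reference string the simulation reports 6 page faults.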
2. LRU (Least Recently Used) :
Advantages
1. Efficient.
2. Doesn't suffer from Belady's Anomaly.
Disadvantages
1. Complex implementation.
2. Expensive.
3. Requires hardware support.
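A corresponding minimal C sketch of LRU, using per-frame last-use timestamps (the reference string and frame count reuse the FIFO example above; real implementations need the hardware support noted in the disadvantages):

/* LRU sketch: each frame remembers when it was last used and the
 * least-recently-used frame is the victim. */
#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {1, 3, 0, 3, 5, 6, 3};   /* same string as the FIFO example */
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES], last_use[NFRAMES];
    int loaded = 0, faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int hit = -1;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == refs[t]) { hit = j; break; }
        if (hit >= 0) {
            last_use[hit] = t;            /* refresh recency on a hit */
        } else {
            faults++;
            int victim = 0;
            if (loaded < NFRAMES) {
                victim = loaded++;        /* still a free frame */
            } else {
                for (int j = 1; j < NFRAMES; j++)   /* least recently used */
                    if (last_use[j] < last_use[victim]) victim = j;
            }
            frames[victim] = refs[t];
            last_use[victim] = t;
        }
    }
    printf("page faults = %d\n", faults); /* prints 5 for this string */
    return 0;
}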
3. Optimal :
Disadvantages
1. Requires future knowledge of the program.
2. Time-consuming.
1. First Fit
2. Best Fit
3. Worst Fit
A block is allocated when it is available and can fit the process.
In simple words, the First Fit algorithm finds the first block that can fit the
process.
In the given example, let us assume the jobs and the memory
requirements are as follows:
First, Best and Worst Fit algorithms
1. First fit :
https://prepinsta.com/operating-systems/first-fit-best-fit-worst-fit-in-os-example/
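As a rough illustration of first fit, here is a minimal C sketch; the block and process sizes are assumed for the example and are not taken from the slides:

/* First-fit sketch: each process is placed in the first free block
 * that is large enough to hold it. */
#include <stdio.h>

int main(void) {
    int block[]   = {100, 500, 200, 300, 600};  /* free partition sizes (assumed) */
    int process[] = {212, 417, 112, 426};       /* process sizes (assumed) */
    int nb = sizeof block / sizeof block[0];
    int np = sizeof process / sizeof process[0];

    for (int p = 0; p < np; p++) {
        int placed = -1;
        for (int b = 0; b < nb; b++) {
            if (block[b] >= process[p]) {       /* first block that fits */
                placed = b;
                block[b] -= process[p];         /* shrink the hole */
                break;
            }
        }
        if (placed >= 0)
            printf("process %d (%d) -> block %d\n", p, process[p], placed);
        else
            printf("process %d (%d) not allocated\n", p, process[p]);
    }
    return 0;
}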
1. Protect OS
2. Protect user processes
Base and Limit Registers
• A pair of base and limit registers define
the logical address space
Hardware Address Protection with Base and Limit Registers
(Figure: relocatable code and a relocation/base register value of 14000)
Contiguous Allocation
• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole
Hardware Support for Relocation
and Limit Registers
• Relocation registers used to protect user processes from each other, and from changing
operating-system code and data
• Relocation register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical address must be less
than the limit register
• Relocation and limit registers are reloaded on every context switch
• MMU maps logical address dynamically
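A small C sketch of this check (the register values below are assumed for illustration): a logical address is compared against the limit register and, if legal, the relocation register is added to form the physical address.

/* Limit/relocation check performed by the MMU on every reference. */
#include <stdio.h>

int main(void) {
    unsigned limit = 120900;   /* size of the process's logical space (assumed) */
    unsigned reloc = 100000;   /* smallest physical address of the process (assumed) */
    unsigned logical[] = {0, 56234, 120900};

    for (int i = 0; i < 3; i++) {
        if (logical[i] < limit)
            printf("logical %u -> physical %u\n", logical[i], logical[i] + reloc);
        else
            printf("logical %u -> trap: addressing error\n", logical[i]);
    }
    return 0;
}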
Fragmentation
• Processes loaded and removed from memory
– Memory is broken into little pieces
Logical address format:
page number p (m − n bits) | page offset d (n bits)
– For a given logical address space of size 2^m and page size 2^n
Paging Hardware
Paging Example
• n = 2 and m = 4: logical address space = 16 bytes, page size = 4 bytes,
physical memory = 32 bytes
• Logical address 0 = page 0, offset 0 -> physical address (5*4+0) = 20
• Logical address 3 = page 0, offset 3 -> physical address (5*4+3) = 23
• Logical address 4 = page 1, offset 0 -> physical address (6*4+0) = 24
• Logical address 13 = page 3, offset 1 -> physical address (2*4+1) = 9
• The user's view of a contiguous address space is bound to scattered
frames at run time (run-time address binding)
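The translations above can be reproduced with a short C sketch; the page-table entry for page 2 (frame 1 here) is an assumption added only to complete the table, since the example never references page 2.

/* Reproduces the 4-byte-page example above (n = 2, m = 4). */
#include <stdio.h>

#define PAGE_SIZE 4                      /* 2^n with n = 2 */

int main(void) {
    int page_table[4] = {5, 6, 1, 2};    /* frame number for each page */
    int logical[] = {0, 3, 4, 13};

    for (int i = 0; i < 4; i++) {
        int p = logical[i] / PAGE_SIZE;  /* page number */
        int d = logical[i] % PAGE_SIZE;  /* page offset */
        int physical = page_table[p] * PAGE_SIZE + d;
        printf("logical %2d = page %d, offset %d -> physical %2d\n",
               logical[i], p, d, physical);
    }
    return 0;
}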
Paging
• External fragmentation? None – any free frame can be allocated
• Calculating internal fragmentation
– Page size = 2,048 bytes
– Process size = 72,766 bytes
– 35 pages + 1,086 bytes
– Internal fragmentation of 2,048 - 1,086 = 962 bytes
• So small frame sizes desirable?
– But increases the page table size
– Poor disk I/O
– Page sizes growing over time
• Solaris supports two page sizes – 8 KB and 4 MB
• User’s view and physical memory now very different
– user view => the process occupies a single contiguous memory space
• By implementation, a process can only access its own memory
– protection
• Each page table entry is 4 bytes (32 bits) long
• Each entry can point to one of 2^32 page frames
• If each frame is 4 KB
• The system can address 2^44 bytes (16 TB) of
physical memory
(Figure: user's view vs. system's view of physical memory – free frames in
RAM before allocation and after allocation)
Implementation of Page Table
• For each process, Page table is kept in main memory
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the
page table
• In this scheme every data/instruction access requires two
memory accesses
– One for the page table and one for the data / instruction
• The two memory access problem can be solved by the
use of a special fast-lookup hardware cache called
associative memory or translation look-aside buffers
(TLBs)
Associative Memory
• Associative memory – parallel search over (page #, frame #) pairs
• On a TLB miss, value is loaded into the TLB for faster access next time
– Replacement policies must be considered (LRU)
– Some entries can be wired down for permanent fast access
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely
identifies each process (PID) to provide address-space protection for that process
– Otherwise need to flush at every context switch
Paging Hardware With TLB
Effective Access Time
• Associative lookup = ε time units
– Can be < 10% of memory access time
• Hit ratio = α
– Hit ratio – percentage of times that a page number is found in the
associative registers; ratio related to size of TLB
• EAT = α × (ε + memory access time) + (1 − α) × (ε + 2 × memory access time)
• Consider α = 80%, ε = 20ns for TLB search, 100ns for memory access
– EAT = 0.80 x 120 + 0.20 x 220 = 140ns
• Consider a better hit ratio: α = 98%, ε = 20ns for TLB search, 100ns for
memory access
– EAT = 0.98 x 120 + 0.02 x 220 = 122ns
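A tiny C sketch of the EAT formula, reproducing the 140 ns and 122 ns figures:

/* EAT = alpha*(epsilon + ma) + (1 - alpha)*(epsilon + 2*ma),
 * where epsilon = TLB search time, alpha = hit ratio, ma = memory access time. */
#include <stdio.h>

static double eat(double alpha, double epsilon, double ma) {
    return alpha * (epsilon + ma) + (1.0 - alpha) * (epsilon + 2.0 * ma);
}

int main(void) {
    printf("alpha = 0.80: EAT = %.0f ns\n", eat(0.80, 20.0, 100.0));  /* 140 ns */
    printf("alpha = 0.98: EAT = %.0f ns\n", eat(0.98, 20.0, 100.0));  /* 122 ns */
    return 0;
}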
Memory Protection
• Memory protection implemented by associating protection bit
with each frame to indicate if read-only or read-write access is
allowed
– Can also add more bits to indicate page execute-only, and so on
(Figure: shared memory accessed through a pointer)
Structure of the Page Table
• Memory requirement for page table can get huge using straight-
forward methods
– Consider a 32-bit logical address space as on modern computers
– Page size of 4 KB (2^12)
– Page table would have 2^20 (= 2^32 / 2^12) entries, about 1 million
– If each entry is 4 bytes -> 4 MB of physical address space / memory for
page table alone
• That amount of memory used to cost a lot
• Don’t want to allocate that contiguously in main memory
• Hierarchical Paging
– A logical address is divided into an outer page number p1, an inner page
number p2, and a page offset d
Address-Translation Scheme (Pentium II)
64-bit Logical Address Space
• Even two-level paging scheme not sufficient
• If page size is 4 KB (2^12)
– Then the page table has 2^52 entries
– With a two-level scheme, the inner page tables could hold 2^10 4-byte entries
– The address would look like:
outer page p1 (42 bits) | inner page p2 (10 bits) | page offset d (12 bits)
SPARC (32-bit) and Motorola 68030 support three-level and four-level paging, respectively
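A small C sketch of splitting such a 64-bit logical address into p1, p2 and d with shifts and masks (the sample address is arbitrary):

/* Decompose a 64-bit address into p1 (42 bits), p2 (10 bits), d (12 bits). */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t addr = 0x0000123456789ABCULL;   /* example address (assumed) */
    uint64_t d  = addr & 0xFFF;              /* low 12 bits: page offset */
    uint64_t p2 = (addr >> 12) & 0x3FF;      /* next 10 bits: inner page */
    uint64_t p1 = addr >> 22;                /* remaining 42 bits: outer page */
    printf("p1 = %llu, p2 = %llu, d = %llu\n",
           (unsigned long long)p1, (unsigned long long)p2, (unsigned long long)d);
    return 0;
}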
Hashed Page Tables
• Common in virtual address spaces > 32 bits
• Each element contains (1) the page number, (2) the value of the
mapped page frame, and (3) a pointer to the next element
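A minimal C sketch of a hashed page table whose chain elements carry the three fields listed above; the table size, hash function and sample mappings are assumptions for illustration:

/* Hashed page table: page numbers hash into buckets of chained entries. */
#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 16

struct entry {
    unsigned long page;      /* (1) page number */
    unsigned long frame;     /* (2) mapped page frame */
    struct entry *next;      /* (3) next element in the chain */
};

static struct entry *table[TABLE_SIZE];

static void insert(unsigned long page, unsigned long frame) {
    struct entry *e = malloc(sizeof *e);
    e->page = page;
    e->frame = frame;
    e->next = table[page % TABLE_SIZE];   /* simple modulo hash (assumed) */
    table[page % TABLE_SIZE] = e;
}

static long lookup(unsigned long page) {
    for (struct entry *e = table[page % TABLE_SIZE]; e; e = e->next)
        if (e->page == page)
            return (long)e->frame;
    return -1;                            /* not mapped: page fault */
}

int main(void) {
    insert(0x1234, 7);
    insert(0x2234, 9);                    /* collides with 0x1234 in this tiny table */
    printf("frame for 0x1234 = %ld\n", lookup(0x1234));
    printf("frame for 0x9999 = %ld\n", lookup(0x9999));
    return 0;
}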
Segmentation
• Memory-management scheme that supports user view of
memory
• A program is a collection of segments
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
(The compiler generates the segments; the loader assigns the segment numbers.)
User’s View of a Program
User specifies each address
by two quantities
(a) Segment name
(b) Segment offset
(Figure: segments 1–4 of a program in the user's logical address space
mapped onto the physical memory space)
• Long term scheduler finds and allocates memory for all segments of a program
• Variable size partition scheme
Memory Image
• Executable file (a.out) and virtual address space
• Symbol table:
Name Address
SQR 0
SUM 4
• Paging view:
0: Load 0
4: ADD 4
• Segmentation view:
<CODE, 0>: Load <ST, 0>
<CODE, 2>: ADD <ST, 4>
Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical address
to physical address;
• Each table entry has:
– base – contains the starting physical address where the
segment resides in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the
segment table’s location in memory
• Segment-table length register (STLR) indicates number
of segments used by a program;
segment number s is legal if s < STLR
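A small C sketch of these checks (segment number against STLR, offset against the segment limit, then base + offset); the segment-table contents are assumed for illustration:

/* Segment-table lookup: trap on an out-of-range segment or offset. */
#include <stdio.h>

struct seg { unsigned base, limit; };

int main(void) {
    struct seg table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };
    unsigned stlr = 3;            /* number of segments in use */
    unsigned s = 2, d = 53;       /* logical address <segment, offset> (assumed) */

    if (s >= stlr) {
        printf("trap: segment %u out of range\n", s);
    } else if (d >= table[s].limit) {
        printf("trap: offset %u beyond segment limit\n", d);
    } else {
        printf("<%u, %u> -> physical %u\n", s, d, table[s].base + d);
    }
    return 0;
}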
Example of Segmentation
Segmentation Hardware
Segmentation Architecture
• Protection
• Protection bits associated with segments
– With each entry in segment table associate:
• validation bit = 0 -> illegal segment
• read/write/execute privileges
• Code sharing occurs at segment level
• Since segments vary in length, memory allocation is
a dynamic storage-allocation problem
– Long term scheduler
– First fit, best fit, etc.
• Fragmentation
Segmentation with Paging
Key idea:
Segments are split into multiple pages
Page table = 2^20 entries
Example: The Intel Pentium
(Figure: a large virtual address space mapped onto a small physical memory)
Classical paging
• Process P1 arrives
• Requires n pages => n frames must be
available
• Allocate n frames to the process P1
• Create page table for P1
• When we want to execute a process, swap it in
• Pager: brings in only the pages that are needed (lazy swapper)
Page Table When Some Pages
Are Not in Main Memory
(Figure: page table with valid (v) / invalid (i) bits; pages marked i are not in
memory and are found at their disk addresses)
• During address translation, if the valid–invalid bit in the page table entry
is i, a page fault occurs
Page Fault
• If the page is not in memory, the first reference to that page will trap to the
operating system:
page fault
• Page fault
– No free frame
– Terminate? swap out? replace the page?
• Page replacement – find some page in memory, not really in use, page it out
– Performance – want an algorithm which will result in minimum number of page faults
Need For Page Replacement
(Figure: memory of processes P1 and P2; the PC references a page that is
not in memory)
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame: if there is a free frame, use it; if not, use a page-
replacement algorithm to select a victim frame and write the victim to disk
3. Bring the desired page into the (newly) free frame; update the page
and frame tables
4. Continue the process by restarting the instruction that caused the trap
Note that there are now potentially two page transfers per page fault,
increasing the effective memory access time
Page Replacement
(Figure: page-replacement example – the victim page is swapped out and
the desired page is swapped in)
Belady's Anomaly
(Figure: number of page faults vs. number of frames)
Belady’s Anomaly
• This most unexpected result is known as
Belady’s anomaly – for some page-
replacement algorithms, the page fault rate
may increase as the number of allocated
frames increases
• Local replacement – each process selects from only its own set of
allocated frames
– More consistent per-process performance
– But possibly underutilized memory
Email : [email protected]