Memory
Figure: the CPU issues virtual addresses, the MMU translates them into physical addresses, and the memory controller accesses memory; disk backs main memory.
▶ Two types:
▶ Contiguous Storage Allocation
▶ Fixed Partition Allocation
▶ Variable Partition Allocation
▶ Non-Contiguous
▶ Paging
▶ Segmentation
Contiguous Storage Allocation
Figure: main memory divided into Partition 1, Partition 2, and Partition 3.
▶ Example
▶ Let total memory = 1M = 1000K
▶ Memory space occupied by the OS = 200K
▶ Memory space taken by a user program = 200K
▶ Number of processes n = (1000 − 200)/200 = 4
▶ CPU utilization = 1 − (0.8)^4 ≈ 0.59, i.e. about 60%
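A short sketch of the probabilistic model behind this figure (assuming, as above, that each process waits for I/O 80% of the time and the waits are independent):

```python
# Multiprogramming model: if each of n processes waits on I/O a
# fraction p of the time, the CPU idles only when all n wait at once,
# so utilization = 1 - p**n.
def cpu_utilization(n, p=0.8):
    return 1 - p ** n

for n in (1, 2, 4):
    print(n, round(cpu_utilization(n), 2))  # utilization grows with n
```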
Modeling Multiprogramming
Figure: successive memory layouts as processes PA (25K) and PB (15K) are loaded above the OS.
▶ (a) Coalescing
▶ The process of merging two adjacent holes to form a
single larger hole is called coalescing.
Multiprogramming with Variable Partitions
▶ (b) Compaction
▶ Even when holes are coalesced, no individual hole may be large enough to hold a waiting process, although the sum of the holes exceeds the storage the process requires.
▶ It is possible to combine all the holes into one large hole by moving all the processes downward as far as possible; this technique is called memory compaction.
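The coalescing step described above can be sketched as a single pass over a free list, assuming holes are kept as (start, size) pairs:

```python
# Sketch: a free list of (start, size) holes; coalescing merges holes
# that are adjacent in memory into one larger hole.
def coalesce(holes):
    holes = sorted(holes)          # order holes by start address
    merged = [holes[0]]
    for start, size in holes[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:   # adjacent: merge into one hole
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    return merged

print(coalesce([(0, 10), (10, 5), (30, 8)]))  # [(0, 15), (30, 8)]
```

Compaction goes further: it would also relocate the processes so that even the non-adjacent holes (here the one at 30) join into a single free block.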
Compaction
Figure: (a) memory with a 20K hole and a 10K hole separating Process-A and Process-B above the OS; (b) after compaction, the processes sit directly above the OS and the free space is combined at the top.
Fixed vs. Variable Partitioning
Figure: memory states (a) through (g) showing the OS, a process, and the holes left behind as processes come and go.
Figure: a region of memory tracked with a bitmap and, alternatively, with a linked list of segments (P: process, H: hole).
▶ 1. First fit:
▶ The memory manager allocates the first hole that is big enough, stopping the search as soon as it finds a free hole of sufficient size.
▶ Advantages: a fast algorithm, because it searches as little as possible.
▶ Disadvantages: not good in terms of storage utilization.
▶ 2. Next fit:
▶ Works the same way as first fit, except that it remembers where it found the last suitable hole and starts the next search from there instead of from the beginning.
Storage Placement Strategies
▶ 3. Best fit:
▶ Allocate the smallest hole that is big enough.
▶ Best fit searches the entire list and takes the smallest hole that can hold the new process.
▶ Best fit tries to find a hole that is close to the actual size needed.
▶ Advantages: better storage utilization than first fit.
▶ Disadvantages: slower than first fit, because it must search the whole list on every allocation.
Storage Placement Strategies
▶ 4. Worst fit:
▶ Allocate the largest hole.
▶ It searches the entire list and takes the largest hole; rather than creating a tiny leftover hole, it produces the largest leftover hole, which may be more useful.
▶ Advantages: sometimes gives better storage utilization than first fit and best fit.
▶ Disadvantages: generally poor in both performance and utilization.
Memory Allocation Techniques
▶ Problem:
▶ Q: Given the memory partitions of 100K, 500K, 200K,
300K and 600K (in order), how would each of the
First-fit, Best-fit, and Worst-fit algorithms place
processes of 212K, 417K, 112K, and 426K (in order)?
▶ Which algorithm makes the most efficient use of
memory?
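One way to work the problem is a small simulation. This sketch treats the partitions as holes that shrink as processes are placed in them (a common reading of this exercise; with strictly fixed partitions each process would instead occupy a whole partition):

```python
def allocate(partitions, procs, strategy):
    """Place each process in a hole chosen by the given strategy;
    returns {process_size: partition_index or None if it must wait}."""
    holes = list(partitions)
    placement = {}
    for p in procs:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placement[p] = None              # no hole is big enough
            continue
        if strategy == "first":
            i = candidates[0]                # first hole that fits
        elif strategy == "best":
            i = min(candidates, key=lambda i: holes[i])   # tightest fit
        else:                                 # "worst"
            i = max(candidates, key=lambda i: holes[i])   # largest hole
        placement[p] = i
        holes[i] -= p                        # hole shrinks by the process size
    return placement

parts = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, allocate(parts, procs, s))
```

Running it shows that best fit is the only strategy that places all four processes (426K is left waiting under both first fit and worst fit), so best fit makes the most efficient use of memory here.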
First-fit
Figure: first-fit placement of the processes in the given partitions.
Paging
Figure: per-process page tables for Process A, Process B, and Process C, each mapping that process's virtual pages to physical page frames.
Mapping of Pages to Page Frames
Figure: a 32K address space divided into 4K pages (0K-4K through 28K-32K), with the page table mapping pages of the virtual address space onto physical page frames.
▶ MOV REG, 0
▶ → Virtual address 0 is sent to MMU. The MMU sees that
this virtual address falls in page 0 (0 to 4095), which is
mapped to page frame 2 (8192 to 12287).
▶ → Thus it transforms the address to 8192 & outputs
8192 onto the bus.
▶ Similarly,
▶ MOV REG, 8192 is effectively transformed into MOV REG, 24576.
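The translation above can be mimicked directly. The page table below is an assumption reconstructed from the example (virtual page 0 → frame 2, virtual page 2 → frame 6); any page missing from it would trigger a page fault:

```python
PAGE_SIZE = 4096
# Assumed mapping, reconstructed from the slide's two MOV examples.
page_table = {0: 2, 2: 6}

def translate(vaddr):
    """MMU sketch: split the virtual address into page number and offset,
    look the page up, and rebuild the physical address."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise Exception("page fault")   # real MMU traps to the OS here
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))     # 8192  (page 0 -> frame 2)
print(translate(8192))  # 24576 (page 2 -> frame 6)
```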
Page Fault Processing
Figure: (a) a single-level page table; (b) a two-level structure, a page table of page tables.
Virtual Memory
▶ Demand paging
▶ Do not require all pages of a process in memory
▶ Bring in pages as required
▶ Page fault
▶ Required page is not in memory
▶ Operating System must swap in required page
▶ May need to swap out a page to make space
▶ Select page to throw out based on recent history
Thrashing
▶ Optimal
▶ FIFO (First In, First Out)
▶ LFU (Least Frequently Used, page-based)
▶ LFU (Least Frequently Used, frame-based)
▶ LRU (Least Recently Used)
▶ MFU (Most Frequently Used)
▶ Clock
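As a sketch of how two of these policies differ, the following simulates FIFO and LRU on a short, made-up reference string (an OrderedDict keeps the resident pages in eviction order):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy="fifo"):
    """Count page faults for a reference string under FIFO or LRU."""
    mem = OrderedDict()
    faults = 0
    for page in refs:
        if page in mem:
            if policy == "lru":
                mem.move_to_end(page)   # a hit refreshes recency under LRU
            continue
        faults += 1
        if len(mem) >= frames:
            mem.popitem(last=False)     # evict oldest (FIFO) / least recent (LRU)
        mem[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, "fifo"), count_faults(refs, 3, "lru"))  # 10 9
```

With 3 frames, LRU saves one fault on this string because it keeps recently touched pages resident; FIFO evicts purely by load order.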
Huge Pages
▶ What are Huge Pages?
▶ Memory is managed in blocks called pages; default size is 4KB.
▶ Huge Pages use much larger page sizes (commonly 2MB or 1GB),
reducing the number of pages needed for large memory allocations.
▶ Why Use Huge Pages?
▶ Reduces Page Table Size: Fewer pages mean smaller page tables,
lowering memory overhead.
▶ Improves TLB Efficiency: Translation Lookaside Buffer (TLB) can
cache fewer, larger entries, reducing TLB misses and speeding up
address translation.
▶ Performance: Critical for large applications (e.g., databases like
PostgreSQL, Oracle) to avoid performance bottlenecks.
▶ Example:
▶ A 4GB process with 4KB pages needs about one million pages; at 8 bytes per entry that is an 8MB page table.
▶ With 2MB pages, only 2,048 pages and a 16KB page table are needed.
▶ Other Benefits:
▶ Huge Pages are locked in memory and never swapped out, providing
consistent performance.
▶ Reduces kernel bookkeeping and overhead.
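The arithmetic behind the example can be checked directly, assuming 8-byte page-table entries (typical on x86-64; 4-byte entries, as on 32-bit systems, would halve the table sizes):

```python
GB, MB, KB = 2 ** 30, 2 ** 20, 2 ** 10
PTE_SIZE = 8  # assumed bytes per page-table entry (x86-64 long mode)

def pages_and_table_size(region_bytes, page_size):
    """Return (number of pages, bytes of page-table entries) for a region."""
    n_pages = region_bytes // page_size
    return n_pages, n_pages * PTE_SIZE

print(pages_and_table_size(4 * GB, 4 * KB))  # (1048576, 8388608): ~1M pages, 8MB of entries
print(pages_and_table_size(4 * GB, 2 * MB))  # (2048, 16384): 2K pages, 16KB of entries
```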
Transparent Huge Pages (THP)
▶ What is THP?
▶ Linux feature that automatically uses huge pages (2MB) for large
memory allocations, without requiring application changes.
▶ THP works with anonymous memory and tmpfs/shmem.
▶ Benefits
▶ No App Changes Needed: Applications benefit from huge pages
transparently.
▶ Performance: Reduces TLB misses, page faults, and memory
management overhead, improving throughput for memory-intensive
workloads (e.g., Java VMs, ML frameworks, PostgreSQL).
▶ Automatic Management: THP can promote or demote pages as
needed.
▶ Limitations and Caveats
▶ Fragmentation: Large contiguous memory is needed; fragmentation
can reduce effectiveness or cause latency spikes.
▶ Less Control: Fine-tuned applications (e.g., databases) may prefer
manual HugePages for predictability.
▶ Only 2MB Pages: THP supports only 2MB pages on x86-64.
▶ Example:
▶ PostgreSQL with a 1GB table can use 2MB pages, greatly reducing TLB
misses and improving performance.
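On Linux, the current THP policy can be inspected through sysfs. A minimal sketch that reads the standard control file and degrades gracefully on systems without it:

```python
def thp_mode(path="/sys/kernel/mm/transparent_hugepage/enabled"):
    """Report the kernel's THP policy string, e.g. 'always [madvise] never',
    where the bracketed word is the active mode."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "unavailable (non-Linux kernel or THP not built in)"

print(thp_mode())
```

The same directory also exposes `defrag` and per-size controls on newer kernels; fine-tuned deployments often set the policy to `madvise` so only applications that opt in get huge pages.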