Virtual Memory: The Illusion of an Address Space Much Larger Than the Physical Memory

Virtual memory allows for an address space larger than physical memory by mapping virtual addresses to physical addresses. When a virtual address is not mapped to physical memory, the data is fetched from disk, which has much higher access times than memory. A page is a fixed-size block that can be mapped between virtual and physical addresses, while a segment is a variable-sized logical component of a program or data.

Virtual memory

[Figure: processor P with instruction/data L1 caches and an L2 cache, main memory M holding the physical address space, and the disk holding the virtual address space]

Goals

1. Create the illusion of an address space much larger than the physical memory.
2. Make provisions for protection.

The main idea is that if a virtual address is not mapped into the physical memory, then the corresponding data has to be fetched from the disk. The unit of transfer is a page (or a segment). Observe the similarities (as well as the differences) between virtual memory and cache memory. Also, recall how slow the disk (~ms) is compared to the main memory (~50 ns), so each miss (called a page fault or a segment fault) carries a large penalty.
[Figure: disk surface showing a track, a sector, a block, and the read/write head]
What is a page? What is a segment?

A page is a fixed-size fragment (say 4 KB or 8 KB) of code or data. A segment is a logical component of the program (like a subroutine) or of the data (like a table); its size is variable.

VM Types

Segmented, paged, or segmented and paged.

Typical parameter ranges:

Page size                    4 KB - 64 KB
Hit time                     50 - 100 CPU clock cycles
Miss penalty                 10^6 - 10^7 clock cycles
  Access time                0.8 x 10^6 - 0.8 x 10^7 clock cycles
  Transfer time              0.2 x 10^6 - 0.2 x 10^7 clock cycles
Miss rate                    0.00001% - 0.001%
Virtual address space size   4 GB - 16 x 10^18 bytes

A quick look at different types of VM

[Figure: in a segmented VM the segment sizes are not fixed; in a paged VM the page sizes are fixed and each page maps to a page frame (block); in a paged-segmented VM the segments themselves can be paged]
Address Translation

[Figure: the virtual address (page number, offset) is looked up in the page table; the page number is replaced by a block number, giving the physical address (block number, offset), while the offset passes through unchanged]

Page Table (Direct Map Format)


Page No.   Presence bit   Block no. / Disk address   Other attributes (e.g. protection)
0          1              7                          Read only
1          0              Sector 6, Track 18
2          1              45                         Not cacheable
3          1              4
4          0              Sector 24, Track 32
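As a rough illustration (not part of the original slides), here is a minimal Python sketch of translation through the direct-map table above, assuming a 4 KB page size; the names PAGE_SIZE, PAGE_TABLE and translate are ours.

# Direct-map page table: indexed by page number.
PAGE_SIZE = 4096  # assumed 4 KB pages

# page number -> (presence bit, block number if present / disk address if not)
PAGE_TABLE = {
    0: (1, 7),
    1: (0, "Sector 6, Track 18"),
    2: (1, 45),
    3: (1, 4),
    4: (0, "Sector 24, Track 32"),
}

def translate(virtual_address):
    """Return the physical address, or signal a page fault."""
    page_no, offset = divmod(virtual_address, PAGE_SIZE)
    present, block_or_disk = PAGE_TABLE[page_no]
    if not present:
        # The OS would bring the page in from this disk address,
        # update the table, and retry the access.
        raise RuntimeError(f"page fault: page {page_no} is at {block_or_disk}")
    return block_or_disk * PAGE_SIZE + offset

print(hex(translate(0x2010)))  # page 2 -> block 45, offset 0x10
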
Page Table (Associative Map Format)
Pg, Blk, P   Block no. / Disk address   Other attributes
0, 7, 1      7                          Read only
1, ?, 0      Sector 6, Track 18
2, 45, 1     45                         Not cacheable
3, 4, 1      4
4, ?, 0      Sector 24, Track 32

Address translation overhead


Average Memory Access Time = Hit time (no page fault) + Miss rate (page fault rate) x Miss penalty

Examples of VM performance

Hit time = 50 ns
Page fault rate = 0.001%
Miss penalty = 2 ms

Tav = 50 ns + 10^-5 x 2 x 10^6 ns = 70 ns
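The same arithmetic in a few lines of Python (the figures are the slide's; the variable names are ours):

# AMAT = hit time + page fault rate x miss penalty
hit_time_ns = 50
fault_rate = 0.001 / 100      # 0.001% as a fraction, i.e. 1e-5
miss_penalty_ns = 2e6         # 2 ms expressed in ns

t_av = hit_time_ns + fault_rate * miss_penalty_ns
print(t_av)                   # 70.0 ns
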
Improving the Performance of Virtual Memory

1. Hit time involves one extra table lookup. Hit time can be reduced using a TLB (Translation Lookaside Buffer).
2. Miss rate can be reduced by allocating enough memory to hold the working set. Otherwise, thrashing is a possibility.
3. Miss penalty can be reduced by using a disk cache.


Page Replacement policy

Determines which page needs to be discarded to accommodate an incoming page. Common policies are listed below; a small LRU sketch follows the list.

- Least Recently Used (LRU)
- Least Frequently Used (LFU)
- Random
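As a rough sketch (not from the slides), LRU replacement over a fixed set of page frames can be modelled with an ordered dictionary; the class name and frame count are arbitrary.

from collections import OrderedDict

class LRUFrames:
    """Toy model of LRU page replacement over a fixed number of frames."""
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()          # page number -> placeholder

    def reference(self, page_no):
        """Touch a page; return the page evicted to make room, if any."""
        if page_no in self.frames:
            self.frames.move_to_end(page_no) # becomes most recently used
            return None
        evicted = None
        if len(self.frames) == self.num_frames:
            evicted, _ = self.frames.popitem(last=False)  # least recently used
        self.frames[page_no] = None
        return evicted

mem = LRUFrames(num_frames=3)
for p in [0, 1, 2, 0, 3]:
    victim = mem.reference(p)
    if victim is not None:
        print(f"page {p} replaces page {victim}")   # page 3 replaces page 1
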

Writing into VM

Write-through is possible if a write buffer is used, but write-back makes more sense. The page table must keep track of dirty pages. There is no overhead in discarding a clean page, but dirty pages must be written back to the disk before they are discarded.
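A short sketch of the dirty-page bookkeeping this implies (the class, field and method names are assumptions): a store only marks the page dirty, and the disk write happens when a dirty page is discarded.

class PageFrame:
    """Toy model of one resident page with write-back bookkeeping."""
    def __init__(self, block_no):
        self.block_no = block_no
        self.dirty = False

    def write(self, offset, value):
        # The store updates main memory only; the disk copy is now stale.
        self.dirty = True

    def discard(self):
        if self.dirty:
            print(f"writing block {self.block_no} back to disk")  # write-back
        else:
            print(f"dropping clean block {self.block_no}")        # no disk traffic

frame = PageFrame(block_no=45)
frame.discard()            # dropping clean block 45
frame.write(0x10, 0xFF)
frame.discard()            # writing block 45 back to disk
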


Working Set

Consider a page reference string

0, 1, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2, ... 100,000 references

After the first few references only pages 1 and 2 recur, so the size of the working set is 2 pages.
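The working set can be measured directly from the reference string as the set of distinct pages touched in a recent window (the window length below is an arbitrary choice for illustration):

def working_set(references, window):
    """Distinct pages referenced in the last `window` references."""
    return set(references[-window:])

refs = [0, 1, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2]   # prefix of the string above
print(working_set(refs, window=8))             # {1, 2} -> working set of 2 pages
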

[Figure: page fault rate vs. available memory M; the fault rate stays high (thrashing) until the memory is large enough to hold the working set, then drops sharply]

Always allocate enough memory to hold the working set of a program (the Working Set Principle).

Disk cache

Modern computers allocate a large fraction of the main memory as a file cache. Similar principles apply to the disk cache, which drastically reduces the miss penalty.
Address Translation Using TLB

[Figure: the page number of the virtual address is looked up in the TLB (16-512 entries, set-associative or fully associative); on a match the TLB supplies the block number directly, and on no match the page table base register plus the page number index the direct-map page table in main memory to obtain the block number; the offset is unchanged]

The TLB is a set-associative cache that holds a partial page table. In case of a TLB hit, the block number is obtained from the TLB (fast mode). Otherwise (i.e., on a TLB miss), the block number is obtained from the direct-map page table in the main memory, and the TLB is updated.
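A compact sketch of that fast-path / slow-path behaviour (the TLB is modelled as a small fully associative dictionary; the sizes and names are assumptions, not the slides'):

# page number -> block number for the pages currently in memory
PAGE_TABLE = {0: 7, 2: 45, 3: 4}
TLB_CAPACITY = 2               # real TLBs hold 16-512 entries

tlb = {}                       # fully associative: page number -> block number

def lookup(page_no):
    if page_no in tlb:                 # TLB hit: fast mode, no page-table access
        return tlb[page_no]
    block = PAGE_TABLE[page_no]        # TLB miss: walk the direct map in memory
    if len(tlb) >= TLB_CAPACITY:       # make room with a trivial replacement rule
        tlb.pop(next(iter(tlb)))
    tlb[page_no] = block               # update the TLB for later references
    return block

print(lookup(2))   # miss, table walk, TLB filled: 45
print(lookup(2))   # hit: 45
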
