
Operating Systems Lecture Notes

Lecture 9
Introduction to Paging
Martin C. Rinard
Basic idea: allocate physical memory to processes in fixed size chunks called page
frames. Present abstraction to application of a single linear address space. Inside
machine, break address space of application up into fixed size chunks called pages. Pages
and page frames are the same size. Store pages in page frames. When process generates an
address, dynamically translate to the physical page frame which holds data for that page.
So, a virtual address now consists of two pieces: a page number and an offset within that
page. Page sizes are typically powers of 2; this simplifies extraction of page numbers and
offsets. To access a piece of data at a given address, system automatically does the
following:
o Extracts page number.
o Extracts offset.
o Translates page number to physical page frame id.
o Accesses data at offset in physical page frame.
How does system perform translation? Simplest solution: use a page table. Page table is a
linear array indexed by virtual page number that gives the physical page frame that
contains that page. What is lookup process?
o Extract page number.
o Extract offset.
o Check that page number is within address space of process.
o Look up page number in page table.
o Add offset to resulting physical page number.
o Access memory location.
Problem: for each memory access that processor generates, must now generate two
physical memory accesses.
Speed up the lookup problem with a cache. Store most recent page lookup values in TLB.
TLB design options: fully associative, direct mapped, set associative, etc. Can make
direct mapped larger for a given amount of circuit space.
How does lookup work now?
o Extract page number.
o Extract offset.
o Look up page number in TLB.
o If there, add offset to physical page number and access memory location.
o Otherwise, trap to OS. OS performs check, looks up physical page number, and
loads translation into TLB. Restarts the instruction.
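A minimal sketch of the direct-mapped lookup described above, with a hypothetical 4-entry TLB. On a miss, this sketch consults the page table directly, standing in for the trap-to-OS path:

```python
PAGE_SIZE = 4096
OFFSET_BITS = 12
TLB_SLOTS = 4   # direct mapped: page p can only live in slot p % TLB_SLOTS

page_table = {0: 5, 1: 9, 5: 7}     # virtual page -> physical frame
tlb = [None] * TLB_SLOTS            # each slot holds (page, frame) or None

def tlb_translate(vaddr):
    page = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    slot = page % TLB_SLOTS
    entry = tlb[slot]
    if entry is not None and entry[0] == page:   # TLB hit
        frame = entry[1]
    else:                                        # miss: "trap to OS"
        if page not in page_table:
            raise MemoryError("bad page %d" % page)
        frame = page_table[page]
        tlb[slot] = (page, frame)                # load translation, evicting
    return (frame << OFFSET_BITS) | offset       # whatever was in the slot
```

Note that pages 1 and 5 both map to slot 1 here: a loop alternating between them evicts each other's entries on every access, which is exactly the bad case for a direct-mapped TLB asked about below.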
Like any cache, the TLB can work well, or it can work poorly. What is a good and bad case
for a direct mapped TLB? What about fully associative TLBs, or set associative TLBs?
Fixed size allocation of physical memory in page frames dramatically simplifies the
allocation algorithm. OS can just keep track of free and used pages and allocate free
pages when a process needs memory. There is no fragmentation of physical memory into
smaller and smaller allocatable chunks.
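The allocation scheme above can be sketched as nothing more than a free list of frame numbers (sizes illustrative):

```python
NUM_FRAMES = 8
free_frames = list(range(NUM_FRAMES))   # OS just tracks free vs. used frames

def alloc_frame():
    """Any free frame will do: frames are all the same size, so there is
    no fit algorithm to run and no external fragmentation to manage."""
    if not free_frames:
        raise MemoryError("no free frames")
    return free_frames.pop()

def free_frame(f):
    free_frames.append(f)
```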
But, there are still pieces of memory that are unused. What happens if a program's address
space does not end on a page boundary? Rest of page goes unused. Book calls this
internal fragmentation.
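A quick illustration of internal fragmentation, again assuming 4 KB pages: a process that needs 10,000 bytes must be given 3 whole pages (12,288 bytes), wasting 2,288 bytes in the last page.

```python
PAGE_SIZE = 4096

def internal_fragmentation(size):
    """Bytes wasted in the last, partially used page of an allocation."""
    pages = -(-size // PAGE_SIZE)     # ceiling division: whole pages needed
    return pages * PAGE_SIZE - size

waste = internal_fragmentation(10000)   # 3 pages - 10000 bytes = 2288 wasted
```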
How do processes share memory? The OS makes their page tables point to the same
physical page frames. Useful for fast interprocess communication mechanisms. This is
very nice because it allows transparent sharing at speed.
What about protection? There are a variety of protections:
o Preventing one process from reading or writing another process' memory.
o Preventing one process from reading another process' memory.
o Preventing a process from reading or writing some of its own memory.
o Preventing a process from reading some of its own memory.
How is this protection integrated into the above scheme?
Preventing a process from reading or writing memory: OS refuses to establish a mapping
from virtual address space to physical page frame containing the protected memory.
When program attempts to access this memory, OS will typically generate a fault. If user
process catches the fault, it can take action to fix things up.
Preventing a process from writing memory, but allowing a process to read memory: OS
sets a write protect bit in the TLB entry. If process attempts to write the memory, OS
generates a fault. But, reads go through just fine.
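The write-protect check above can be sketched as a flag carried alongside each translation entry (the entry layout here is illustrative, not any particular hardware's format):

```python
# Each translation carries protection bits along with the frame number.
page_table = {0: {"frame": 5, "writable": False},
              1: {"frame": 9, "writable": True}}

def access(page, is_write):
    entry = page_table.get(page)
    if entry is None:                         # no mapping at all: fault
        raise MemoryError("no mapping for page %d" % page)
    if is_write and not entry["writable"]:    # write to read-only page: fault
        raise PermissionError("write to read-only page %d" % page)
    return entry["frame"]                     # reads go through just fine
```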
Virtual Memory Introduction.
When a segmented system needed more memory, it swapped segments out to disk and
then swapped them back in again when necessary. Page based systems can do something
similar on a page basis.
Basic idea: when OS needs a physical page frame to store a page, and there are none
free, it can select one page and store it out to disk. It can then use the newly free page
frame for the new page. Some pragmatic considerations:
o In practice, it makes sense to keep a few free page frames. When the number of free
pages drops below this threshold, choose a page and store it out. This way, can
overlap the I/O required to store out a page with computation that uses the newly
allocated page frame.
o In practice the page frame size usually equals the disk block size. Why?
o Do you need to allocate disk space for a virtual page before you swap it out? (Not
if you always keep one page frame free.) Why did BSD do this? At some point the OS
must refuse to allocate a process more memory because it has no swap space. When
can this happen? (malloc, stack extension, new process creation).
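The threshold policy from the first consideration above can be sketched as follows. The victim choice and disk I/O are stubbed out, and all names and sizes are illustrative:

```python
FREE_THRESHOLD = 2

free_frames = [0, 1]            # frames holding no page
resident = {3: 10, 4: 11}       # frame -> virtual page it currently holds
disk = {}                       # page -> saved contents (stand-in for swap)

def evict_one():
    """Pick some resident page, write it out, and free its frame."""
    frame, page = next(iter(resident.items()))    # trivial victim choice
    disk[page] = "contents of page %d" % page     # stand-in for real disk I/O
    del resident[frame]
    free_frames.append(frame)

def get_frame_for(page):
    if len(free_frames) <= FREE_THRESHOLD:   # keep a few frames free so the
        evict_one()                          # page-out I/O can overlap with
    frame = free_frames.pop()                # computation using new frames
    resident[frame] = page
    return frame
```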
When a process tries to access paged out memory, the OS must run off to the disk, find a free
page frame, then read the page back off of disk into the page frame and restart the process.
What is the advantage of virtual memory/paging?
o Can run programs whose virtual address space is larger than physical memory. In
effect, one process shares physical memory with itself.
o Can also flexibly share machine between processes whose total address space
sizes exceed the physical memory size.
o Supports a wide range of user-level stuff - See Li and Appel paper.
Disadvantages of VM/paging: extra resource consumption.
o Memory overhead for storing page tables. In extreme cases, page table may take
up a significant portion of virtual memory. One solution: page the page table.
Others: go to a more complicated data structure for storing virtual to physical
translations.
o Translation overhead.
