
Demand Paging

Amna Ahmad
Muhammad Mustafa
What we’ll cover today

Here’s what you’ll find in today’s presentation:

1. What is demand paging?
2. Why use demand paging?
3. Concept of valid and invalid bits
4. Locality of reference
5. Pure demand paging
6. Concept of page fault
7. Free frame list
8. Disadvantages of demand paging
9. Copy on write
10. References
01 What is demand paging?
Demand paging

• Pages are loaded only when they are demanded during program
execution.

• It is similar to a paging system with swapping.

• It avoids reading into memory pages that will not be used anyway.

• The pager used in this scheme is also called a lazy swapper.

02 Why use demand paging?
Benefits

1. Increases the degree of multiprogramming, since each process needs
fewer frames to run.
2. Uses memory more efficiently, because only the pages that are
actually referenced are loaded.
3. Resolves the compaction problem of contiguous allocation.
4. Suits time-sharing systems, where many processes must be resident
at once.
5. Reduces swapping: an entire process no longer has to be swapped in
before it can run.
6. Eliminates external fragmentation.
7. Simplifies memory partition management.
8. Allows any instruction to be restarted after a page fault.
Disadvantages

• The amount of processor overhead and the number of tables used for
handling page faults are greater than in simple page management
techniques.
• There is a higher probability of internal fragmentation.
• Average memory access time is longer because of page faults.
• The Page Table Length Register (PTLR) places a limit on the size of
virtual memory.
• The page map table requires additional memory and registers.
03 Valid and invalid bits
Valid and invalid bits

When the bit is set to “valid,” the associated page is both legal and
in memory.

If the bit is set to “invalid,” the page either is not valid (that is,
not in the logical address space of the process) or is valid but is
currently in secondary storage.
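
As a rough illustration (not tied to any particular operating system), a page-table entry can be modeled as a small struct whose valid bit is checked on every translation. The names pte_t and lookup are invented for this sketch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical page-table entry: a frame number plus the valid-invalid bit. */
typedef struct {
    uint32_t frame;   /* physical frame number, meaningful only when valid */
    bool     valid;   /* true: page is legal and currently in memory       */
} pte_t;

/* Translate a page number; returns false when the hardware would trap
 * with a page fault because the valid bit is not set. */
static bool lookup(const pte_t *page_table, uint32_t page, uint32_t *frame_out)
{
    if (!page_table[page].valid)
        return false;                    /* invalid -> page-fault trap */
    *frame_out = page_table[page].frame;
    return true;
}

int main(void)
{
    pte_t page_table[4] = { { .frame = 7, .valid = true } };  /* only page 0 resident */
    uint32_t frame;
    printf("page 0 %s\n", lookup(page_table, 0, &frame) ? "hit" : "fault");
    printf("page 3 %s\n", lookup(page_table, 3, &frame) ? "hit" : "fault");
    return 0;
}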
Steps for handling page fault

1. We check an internal table (usually kept with the process control block)
for this process to determine whether the reference was a valid or an
invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but
we have not yet brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for
example).
4. We schedule a secondary storage operation to read the desired page into
the newly allocated frame.
5. When the storage read is complete, we modify the internal table kept with
the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process
can now access the page as though it had always been in memory.
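
A minimal sketch of this sequence in C, assuming a tiny simulated machine (fixed-size page table, an array standing in for secondary storage, no page replacement). None of the names below come from a real kernel.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_PAGES  8
#define NUM_FRAMES 4
#define PAGE_SIZE  4096

typedef struct { uint32_t frame; bool valid; bool legal; } pte_t;

static pte_t page_table[NUM_PAGES];
static char  physical_memory[NUM_FRAMES][PAGE_SIZE];
static char  backing_store[NUM_PAGES][PAGE_SIZE];   /* "secondary storage" */
static int   free_frames[NUM_FRAMES];
static int   free_count = NUM_FRAMES;

/* Step 3: take a frame from the free-frame list (replacement is out of scope). */
static int allocate_frame(void) { return free_frames[--free_count]; }

/* Steps 1-6 of the outline above, greatly simplified. */
static bool handle_page_fault(uint32_t page)
{
    if (!page_table[page].legal)            /* 1-2: invalid reference      */
        return false;                       /*      -> terminate process   */
    int frame = allocate_frame();           /* 3: find a free frame        */
    memcpy(physical_memory[frame],          /* 4: read page from storage   */
           backing_store[page], PAGE_SIZE);
    page_table[page].frame = frame;         /* 5: update the page table    */
    page_table[page].valid = true;
    return true;                            /* 6: restart the instruction  */
}

int main(void)
{
    for (int i = 0; i < NUM_FRAMES; i++) free_frames[i] = i;
    for (int i = 0; i < NUM_PAGES; i++)  page_table[i].legal = true;

    uint32_t page = 5;
    if (!page_table[page].valid)            /* the access traps: page fault */
        handle_page_fault(page);
    printf("page %u is now in frame %u\n", page, page_table[page].frame);
    return 0;
}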
Pure demand paging

● Extreme case – start a process with no pages in memory.

● The OS sets the instruction pointer to the first instruction of the
process which, being non-memory-resident, immediately causes a page
fault.

● The same happens for every other page on its first access.
Locality of reference

Theoretically, some programs could access several new


pages of memory with each instruction execution (one
page for the instruction and many for data), possibly
causing multiple page faults per instruction. This situation
would result in unacceptable system performance.
Fortunately, analysis of running processes shows that this
behavior is exceedingly unlikely. Programs tend to have locality of
reference.
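
As a small illustration (not from the slides), locality shows up in something as simple as array traversal order. C arrays are stored row-major, so the first loop below walks memory sequentially and touches few distinct pages, while the second jumps a full row ahead on every access.

#include <stdio.h>

#define N 1024
static double a[N][N];   /* stored row-major in C */

int main(void)
{
    double sum = 0.0;

    /* Good locality: consecutive accesses stay on the same page
     * until it is exhausted before moving on. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Poor locality: each access skips N * sizeof(double) bytes,
     * landing on a different page almost every iteration. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%f\n", sum);
    return 0;
}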
Hardware required

01 Page table
This table has the ability to mark an entry invalid through a
valid–invalid bit.

02 Secondary memory
This memory holds those pages that are not present in main memory.
Free frame list

When a page fault occurs, the operating system must bring the desired
page from secondary storage into main memory. To resolve page faults,
most operating systems maintain a free-frame list, a pool of free
frames for satisfying such requests.
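
A minimal free-frame list can be sketched as a stack of frame numbers. The names free_list, get_free_frame, and free_frame are invented for this example and do not correspond to any real kernel.

#include <stdbool.h>
#include <stdio.h>

#define NUM_FRAMES 16

/* Hypothetical free-frame list kept as a simple stack of frame numbers. */
static int free_list[NUM_FRAMES];
static int free_top = 0;

static void free_frame(int frame)           /* return a frame to the pool   */
{
    free_list[free_top++] = frame;
}

static bool get_free_frame(int *frame_out)  /* satisfy a page-fault request */
{
    if (free_top == 0)
        return false;                       /* empty: page replacement needed */
    *frame_out = free_list[--free_top];
    return true;
}

int main(void)
{
    for (int f = 0; f < NUM_FRAMES; f++)    /* at boot, every frame is free */
        free_frame(f);

    int frame;
    if (get_free_frame(&frame))
        printf("page fault serviced with frame %d\n", frame);
    return 0;
}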
04 Performance of demand paging
Effective access time

Effective memory access time = p × s + (1 − p) × m

where
  p is the page-fault rate,
  s is the page-fault service time,
  m is the main-memory access time.

Page-fault rate 0 ≤ p ≤ 1:
  if p = 0, there are no page faults;
  if p = 1, every reference is a fault.
Handling page faults

1. Trap to the operating system.
2. Save the registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal, and determine the
location of the page in secondary storage.
5. Issue a read from the storage to a free frame:
   a. Wait in a queue until the read request is serviced.
   b. Wait for the device seek and/or latency time.
   c. Begin the transfer of the page to a free frame.
Handling page faults
6. While waiting, allocate the CPU core to some other process.
7. Receive an interrupt from the storage I/O subsystem (I/O
completed).
8. Save the registers and process state for the other process (if step 6
is executed).
9. Determine that the interrupt was from the secondary storage device.
10. Correct the page table and other tables to show that the desired
page is now in memory.
11. Wait for the CPU core to be allocated to this process again.
12. Restore the registers, process state, and new page table, and then
resume the interrupted instruction.
With an average page-fault service time of 8 milliseconds and a
memory-access time of 200 nanoseconds, the effective access time in
nanoseconds is

effective access time = (1 − p) × 200 + p × (8 milliseconds)
                      = (1 − p) × 200 + p × 8,000,000
                      = 200 + 7,999,800 × p.
If one access out of 1,000 causes a page fault, the effective access time is
8.2 microseconds. The computer will be slowed down by a factor of 40
because of demand paging! If we want performance degradation to be less
than 10 percent, we need to keep the probability of page faults at the
following level:
220 > 200 + 7,999,800 × p,
20 > 7,999,800 × p,
p < 0.0000025.
Copy on write

Copy-on-write (COW) allows both parent and child processes to
initially share the same pages in memory.

If either process modifies a shared page, only then is the page
copied.

COW allows more efficient process creation, as only modified pages
are copied.
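
A small demonstration of the effect: the sharing itself is invisible to the program, but the child's write copies only its own page, so the parent's value is unchanged. fork(), wait(), and the headers used are standard POSIX.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;   /* lives in a page initially shared after fork() */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: this write triggers a copy of just this page (COW). */
        value = 99;
        printf("child  sees value = %d\n", value);
        return 0;
    }
    wait(NULL);
    /* Parent: still sees its own, unmodified copy of the page. */
    printf("parent sees value = %d\n", value);   /* prints 42 */
    return 0;
}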
fork() and vfork()

Several versions of UNIX (including Linux, macOS, and BSD UNIX)
provide a variation of the fork() system call, vfork() (for virtual
memory fork), that operates differently from fork() with copy-on-write.
With vfork(), the parent process is suspended, and the child process
uses the address space of the parent.
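
A minimal usage sketch, assuming a POSIX system where vfork() is available. Because the child borrows the parent's address space, the only safe actions in the child are calling an exec function or _exit().

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = vfork();   /* parent is suspended until the child execs or exits */
    if (pid == 0) {
        /* Child: must immediately exec or _exit; anything else is unsafe. */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        _exit(1);          /* reached only if exec failed */
    }
    wait(NULL);
    printf("parent resumes after the child has exec'd\n");
    return 0;
}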
Thank you!!
