OS PPT Unit 3.2

Uploaded by Kiran Mani

Virtual Memory Management
• Virtual memory is a technique that allows the execution of
processes that are not completely in memory.
• A program may be larger than the available physical memory;
with virtual memory, the programmer no longer needs to worry
about the amount of physical memory available.
• Virtual memory involves the separation of logical memory from
physical memory.
Demand Paging
• Demand paging is one technique for implementing virtual
memory.
• In this technique, pages are loaded only when they are
demanded during program execution. Pages that are never
accessed are never loaded into physical memory.
• Loading the entire program into physical memory would waste
memory.
• A demand-paging system is similar to a paging system with
swapping, where processes reside in secondary memory. When we
want to execute a process, we swap it into memory.
• Rather than swapping the entire process into memory, we use
a lazy swapper.
• A lazy swapper never swaps a page into memory unless that
page will be needed.
Demand paging implementation
• To implement demand paging, some form of hardware support is
required to distinguish between the pages that are in memory
and the pages that are on disk.
• The valid–invalid bit scheme can be used for this.
• Each page-table entry carries a valid–invalid bit.
• Valid (v) means the page is in memory.
• Invalid (i) means the page is not in memory.
• When a process tries to access a page marked invalid, a page
fault occurs.
• A page is never brought into memory until it is required.
This is called pure demand paging.
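The valid–invalid bit check can be sketched as a small simulation. This is an illustrative model, not any real MMU's behaviour: the page table maps page numbers to entries with a frame and a valid bit, and an access to an invalid page triggers a "fault" that loads the page from a simulated backing store.

```python
class PageTableEntry:
    def __init__(self):
        self.frame = None
        self.valid = False   # invalid (i): page is not in memory

def access(page_table, backing_store, memory, page):
    """Return the frame holding `page`, loading it on a page fault."""
    entry = page_table[page]
    if not entry.valid:                      # page fault: page is on disk
        frame = len(memory)                  # assume a free frame exists
        memory.append(backing_store[page])   # swap the page in
        entry.frame, entry.valid = frame, True
    return entry.frame

# Pure demand paging: nothing is loaded until its first reference.
backing = {0: "code", 1: "data"}
table = {p: PageTableEntry() for p in backing}
mem = []
f0 = access(table, backing, mem, 0)   # first access faults and loads page 0
f1 = access(table, backing, mem, 0)   # second access hits the same frame
```

Note that page 1 is never referenced, so it stays invalid and is never brought into memory.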
• The effective access time (EAT) is given by
• effective access time = (1 − p) × ma + p × page fault time
• where p is the probability of a page fault (0 ≤ p ≤ 1)
• p = 0 means no reference causes a page fault
• p = 1 means every reference causes a page fault
• ma is the memory access time
• The page fault time is given by
swap page out + swap page in + restart overhead
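The EAT formula can be made concrete with a worked example. The numbers below are assumptions chosen for illustration: a 200 ns memory access and an 8 ms (8,000,000 ns) page-fault service time.

```python
def effective_access_time(p, ma_ns, fault_ns):
    """EAT = (1 - p) * ma + p * page_fault_time, all in nanoseconds."""
    return (1 - p) * ma_ns + p * fault_ns

ma, fault = 200, 8_000_000
print(effective_access_time(0.0, ma, fault))     # no faults: 200.0 ns
print(effective_access_time(0.001, ma, fault))   # one fault per 1000 refs
```

Even one fault per thousand references inflates the effective access time to roughly 8200 ns, about 40× the raw memory access time, which is why keeping p small matters so much.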
Copy-on-Write
• Copy-on-write provides rapid process creation and minimizes
the number of new pages that must be allocated to a newly
created process.
• We can use this technique to allow the parent and child
processes initially to share the same pages.
• These shared pages are marked as copy-on-write pages,
meaning that if either process writes to a shared page, a copy
of the shared page is created.
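The sharing-then-copying behaviour can be sketched in a few lines. This is a toy model, not any real kernel's fork implementation: pages are dictionaries shared by reference between parent and child, and a write to a page marked COW copies it first.

```python
def fork_cow(parent_table):
    """Child shares the parent's page objects; all are marked COW."""
    for page in parent_table.values():
        page["cow"] = True
    return dict(parent_table)          # same page objects, new table

def write(table, n, value):
    """On a write to a COW page, copy it first, then modify the copy."""
    page = table[n]
    if page.get("cow"):
        page = {"data": page["data"], "cow": False}   # private copy
        table[n] = page
    page["data"] = value

parent = {0: {"data": "hello", "cow": False}}
child = fork_cow(parent)
write(child, 0, "world")   # child's write triggers the copy
# parent still sees "hello"; child sees "world" in its own copy
```

Only the written page is duplicated; pages that are never written remain shared for the lifetime of both processes.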
Page Replacement
• It is the technique used to free a frame when no frame is
free.
• If no frame is free, we find one that is not currently being
used and free it.
• We can free a frame by writing its contents to swap space
and changing the page table.
Steps to implement page replacement
– Find the location of the desired page on the disk.
– Find a free frame:
» If there is a free frame, use it.
» If there is no free frame, select a victim frame.
» Write the victim frame to the disk; change the
page and frame tables accordingly.
– Read the desired page into the newly freed frame;
change the page and frame tables.
– Continue the user process from where the page
fault occurred.
Page replacement algorithm
• Every OS has its own replacement scheme, and there are many
page-replacement algorithms.
• Any algorithm should aim for the lowest page-fault rate.
• In general, as the number of frames increases, the number of
page faults decreases.
FIFO Page Replacement
• The simplest page-replacement algorithm is a first-in, first-out
(FIFO) algorithm.
• When a page must be replaced, the oldest page is chosen.
• Consider the example reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
with three frames.
• Initially, the frames are empty.
• The first three references (7, 0, 1) are brought into the
empty frames.
• The next reference, 2, replaces page 7, which is the oldest
page.
• This process continues until the entire reference string is
processed, causing 15 page faults.
• Page fault rate = number of faults / number of references
• (15/20) × 100 %
• 75%
Optimal Page Replacement
• This algorithm has the lowest page-fault rate of all
algorithms.
• Replace the page that will not be used for the longest
period of time.
• The first three references (7, 0, 1) are brought into the
empty frames.
• The reference to page 2 replaces page 7, because page 7 will
not be used until reference 18,
• whereas page 0 will be used at 5, and page 1 at 14.
• Unfortunately, the optimal page-replacement algorithm is
difficult to implement, because it requires future knowledge
of the reference string.
• Page fault rate = number of faults / number of references
• (9/20) × 100 %
• 45%
LRU Page Replacement
• This approach is the least recently used (LRU) algorithm
• When a page must be replaced, LRU chooses the page that
has not been used for the longest period of time
• This strategy is similar to optimal replacement, except that
LRU looks backward in time rather than forward.
• The first three faults (7, 0, 1) are the same as in optimal
replacement.
• The reference to page 2 replaces page 7, as it is the least
recently used page.
• The other pages are replaced similarly, causing 12 page
faults in all.
• Page fault rate = number of faults / number of references
• (12/20) × 100 = 60%
• LRU can be implemented using
• Counters - A logical clock is incremented on every memory
reference and recorded in the referenced page's table entry.
Replace the page with the smallest time value.
• Stacks - Keep a stack of page numbers. Whenever a page is
referenced, it is removed from the stack and put on the top.
The least recently used page is always at the bottom.
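The stack implementation described above can be sketched with an ordered dictionary, which keeps pages from least recently used (front) to most recently used (back): a re-reference moves a page to the back, and eviction pops the front.

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count faults under LRU using a stack-like ordered structure."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)          # re-reference: move to top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)    # evict the page at the bottom
            stack[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults -> 12/20 = 60% fault rate
```

As the slides note, LRU's 12 faults sit between FIFO's 15 and OPT's 9 on this reference string.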
Allocation of Frames
• This topic concerns how to allocate frames among the various
processes.
• There are several constraints on the allocation of frames.
• The number of frames allocated must not exceed the number of
available frames.
• A minimum number of frames must be allocated to each
process; as the number of frames per process decreases, the
page-fault rate increases.
• The minimum number of frames per process is defined by the
architecture. The maximum is defined by the amount of
available physical memory.
Allocation Algorithms
• Equal allocation - The easiest way to split m frames among n
processes is to give everyone an equal share, m/n frames.
• For instance, if there are 93 frames and five processes,
each process gets 18 frames. The three leftover frames can be
used as a free-frame pool.
• Proportional allocation - Different processes need differing
amounts of memory, so allocate the available memory to each
process according to its size.
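Proportional allocation can be written out as a small calculation: process i of size s_i receives roughly a_i = (s_i / S) × m of the m frames, where S is the total size of all processes. The sizes and frame count below are assumed for illustration: 62 frames split between a 10-page and a 127-page process.

```python
def proportional_allocation(sizes, m):
    """Give each process floor((s_i / S) * m) of the m frames."""
    total = sum(sizes)
    return [s * m // total for s in sizes]   # truncate to whole frames

# Assumed example: 62 frames, one 10-page and one 127-page process.
print(proportional_allocation([10, 127], 62))   # -> [4, 57]
```

Truncation can leave a frame or two unassigned (here 62 − 4 − 57 = 1); a real allocator would hand the remainder out or keep it in a free pool.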
Global versus Local Allocation
• Global replacement allows a process to select a replacement
frame from the set of all frames, even if that frame is
currently allocated to some other process. For example, a
high-priority process may be allowed to take frames away from
low-priority processes.
• Local replacement requires each process to select a
replacement frame from only its own set of allocated frames.
Thrashing
• Consider a process that does not have enough frames to hold
all the pages it is actively using.
• Since all its pages are in active use, any page it replaces
will be needed again immediately.
• Pages are therefore swapped in and out continuously.
• This high paging activity is called thrashing.
• A process is thrashing if it is spending more time paging
than executing.
Cause of Thrashing
• Suppose that a process enters a new phase in its execution
and needs more frames. It starts faulting and taking frames
away from other processes. These processes need those
pages, however, and so they also fault, taking frames from
other processes.
• As processes wait for the paging device, CPU utilization
decreases because the processes are spending all their time
paging.
• The page-fault rate increases tremendously. As a result, the
effective memory-access time increases.
• If CPU utilization is plotted against the degree of
multiprogramming, utilization rises as the degree of
multiprogramming increases.
• If the degree of multiprogramming is increased even further,
thrashing sets in and CPU utilization drops sharply.
• There are three approaches to prevent thrashing
• Locality model
– as a process executes, it moves from locality to locality.
– A locality is a set of pages
• Working-Set Model
– This model uses a parameter, ∆, to define the working-set
window
– The set of pages in the most recent ∆ page references is
the working set
• Page-Fault Frequency
– It is a direct approach to prevent thrashing
– If a process's page-fault rate rises above an upper bound,
allocate the process another frame
– If its page-fault rate falls below a lower bound, remove a
frame from the process
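The working-set model above can be sketched in a few lines: the working set at time t is simply the set of distinct pages touched in the window of the most recent Δ references. The reference string and window size below are assumed for illustration.

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the most recent delta refs."""
    return set(refs[max(0, t - delta + 1): t + 1])

# Assumed reference string and a window of delta = 10 references.
refs = [1, 2, 5, 6, 2, 1, 2, 3, 7, 6, 7, 7, 7, 7, 5, 1]
print(sorted(working_set(refs, 9, 10)))   # working set at t = 9
```

Summing the working-set sizes of all processes estimates the total demand for frames; when that demand exceeds the number of available frames, thrashing is imminent and a process should be suspended.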
Memory-Mapped Files
• Consider a sequential read of a file on disk using the
standard system calls open(), read(), and write(). Each file
access requires a system call and a disk access.
• Instead, the file can be accessed through the virtual-memory
system. This approach, known as memory mapping a file, allows
a part of the virtual address space to be logically associated
with the file.
• This can lead to significant performance increases.
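Python's standard `mmap` module demonstrates the idea: after the mapping is created, the file's bytes are read and written like an in-memory byte array, with no explicit read() or write() calls. The file path below is a throwaway temporary file created for the example.

```python
import mmap
import os
import tempfile

# Create a small file to map (illustrative setup, not part of the API).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mapped:   # length 0 maps the whole file
        print(bytes(mapped[:5]))               # read like a byte array
        mapped[:5] = b"HELLO"                  # write through the mapping

with open(path, "rb") as f:
    print(f.read())   # b'HELLO world' - the write reached the file
```

The slice assignment never issues an explicit write() call; the OS pages the modified data back to the file.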
Allocating Kernel Memory
• When a process running in user mode requests additional
memory, pages are allocated from the list of free page frames
maintained by the kernel
• Kernel memory is often allocated from a free-memory pool
different from the one used to satisfy ordinary user-mode
requests.
• Two strategies for managing free memory that is assigned to
kernel processes:
• buddy system - allocates memory from a fixed-size segment
consisting of physically contiguous pages; requests are
satisfied in units whose sizes are powers of 2.
• slab allocation - A slab is made up of one or more
physically contiguous pages. A cache consists of one or more
slabs. There is a single cache for each unique kernel data
structure.
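The buddy system's power-of-2 behaviour can be sketched as follows: a request is satisfied from the smallest power-of-2 block that can hold it, obtained by repeatedly splitting a larger block into two equal "buddies". The 256 KB segment size and the request sizes are assumed for illustration.

```python
def buddy_block_size(request, max_block=256):
    """Smallest power-of-2 block (<= max_block) that holds the request."""
    size = max_block
    while size // 2 >= request:   # keep splitting into buddies
        size //= 2
    return size

print(buddy_block_size(21))    # a 21 KB request gets a 32 KB block
print(buddy_block_size(120))   # a 120 KB request gets a 128 KB block
```

The 21 KB request illustrates the buddy system's main drawback, internal fragmentation: 11 KB of the 32 KB block is wasted. Its main advantage is coalescing: when a block is freed, it can be merged quickly with its buddy to rebuild larger blocks.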
