Virtual Memory
Virtual memory is a technique that allows the execution of processes that may not be
completely in main memory. The main visible advantage of this scheme is that programs
can be larger than physical memory.
Advantages: -
1) A program's size is not constrained by the amount of physical memory that is
available.
2) Each user can write programs for an extremely large virtual address space.
3) Each user program takes less physical memory, so more programs can be run at
the same time, increasing CPU utilization and throughput.
4) Less I/O is needed to load or swap each user program into memory, so each user
program runs faster.
5) Virtual memory makes the task of programming much easier, because the
programmer no longer needs to worry about the amount of physical memory available.
6) Virtual memory is commonly implemented by demand paging. It can also be
implemented in a segmentation system.
Demand paging: -
A demand paging system is similar to a paging system with swapping. Processes reside
on secondary memory (the disk). When we want to execute a process, we swap it into
memory; but rather than swapping the entire process in, we use a lazy swapper. The lazy
swapper never swaps a page into memory unless that page will be needed.
FIG 9.2
Here we need some hardware support to distinguish between the pages that are in
memory and those that are on the disk. The valid-invalid bit scheme can be used
for this purpose. When the bit is set to "valid", the associated page is both legal and
in memory. When the bit is set to "invalid", the page either is not valid (i.e. not in the
logical address space of the process) or is valid but currently on the disk. The page-table
entry for a page that is not currently in main memory is simply marked invalid, or
contains the address of the page on the disk.
FIG 9.3
As long as the process accesses only pages that are memory resident, execution
proceeds normally.
If the process tries to use a page that was not brought into memory, a page fault
occurs.
Page fault: - The situation in which a process tries to use a page that is not currently
in memory.
What happens when a page fault occurs in our system: -
Six steps occur when a page fault occurs:
1) We check an internal table (the page table kept with the PCB) for this process, to
determine whether the reference was a valid or an invalid memory access.
2) If the reference was invalid, we terminate the process. If it was valid but the page
has not yet been brought into memory, we page it in from the disk.
3) We find a free frame.
4) We schedule a disk operation to read the desired page into the newly allocated
frame.
5) When the disk read is completed, we modify the internal table kept with the process
and the page table to indicate that the page is now in memory.
6) We restart the instruction that was interrupted by the illegal-address trap.
FIG (9.4)
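The six steps above can be sketched in code. Everything here is hypothetical (the table layout and the names `handle_page_fault`, `free_frames`, `disk` are made up): a real kernel performs these steps inside the trap handler with hardware support, and step 4 is an asynchronous disk operation, not a dictionary lookup.

```python
class InvalidReference(Exception):
    """Step 2: the reference was not part of the process's address space."""

def handle_page_fault(page, page_table, valid_pages, free_frames, disk):
    # Step 1: consult the process's internal table (kept with its PCB).
    if page not in valid_pages:
        raise InvalidReference(page)      # Step 2: terminate the process
    frame = free_frames.pop()             # Step 3: find a free frame
    contents = disk[page]                 # Step 4: "schedule" the disk read
    page_table[page] = frame              # Step 5: mark the page as resident
    return frame                          # Step 6: restart the instruction

# Tiny walk-through: page 3 is valid but on disk; frame 7 is free.
page_table = {}
frame = handle_page_fault(3, page_table, valid_pages={3},
                          free_frames=[7], disk={3: b"..."})
print(frame, page_table)   # 7 {3: 7}
```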
Pure demand paging: - Never bring a page into memory until it is required.
Suppose we start executing a process with no pages in memory. When the O.S sets the
instruction pointer to the first instruction of the process, which is not in main memory,
the process immediately faults for the page. After the page is brought into main
memory, the process continues to execute, faulting as necessary until every page it
needs is actually in memory. This is called pure demand paging.
Page replacement: - Suppose a user process is executing and a page fault occurs. The
hardware traps to the O.S, which checks its internal tables, sees that this is a page fault,
and finds that the page is on the disk. The O.S determines where the desired page
resides on the disk, but then finds that there are no free frames on the free-frame list;
all memory is in use.
The O.S has several options at this point. One of them is page replacement.
Page replacement takes the following approach: if no frame is free, we find one that
is not currently being used and free it. We can free a frame by writing its contents to
swap space and changing the page table. The freed frame can now be used to hold the
page for which the process faulted. The page-fault service routine is now modified to
include page replacement:
1) Find the location of the desired page on the disk.
2) Find a free frame.
a) If there is a free frame, use it.
b) Otherwise, use a page replacement algorithm to select a victim frame.
c) Write the victim page to the disk; change the page and frame tables
accordingly.
3) Read the desired page into the newly freed frame; change the page and frame tables.
4) Restart the user process.
If no frames are free, two page transfers (one page out and one page in) are required,
and the page-fault service time increases accordingly. Fig 9.6
To reduce this overhead we use a modify (dirty) bit.
Modify bit: - Each page-table entry can maintain a modify bit, which is set by the
hardware whenever any word or byte in the page is written. When we select a page for
replacement, we examine its modify bit. If the bit is set, the page has been changed
since it was read in from the disk, so it must be written back to the disk. If the bit is not
set, the page has not been modified, the copy on the disk is still valid, and we can avoid
the write-back.
Page replacement algorithms: -
There are many different page replacement algorithms, each with its own unique
features. But how do we select a particular replacement algorithm? We select the one
with the lowest page-fault rate.
We evaluate an algorithm by running it on a reference string, i.e. a string of memory
references, with a given number of page frames, and computing the number of page
faults.
To determine the number of page faults for a particular reference string, we also need
to know the number of available page frames. In general, as the number of available
frames increases, the number of page faults decreases.
FIFO ALGORITHM: -
The simplest page-replacement algorithm is the FIFO algorithm. A FIFO replacement
algorithm records the time when each page was brought into memory. To replace, we
must select the oldest page.
EX:- (problem)
Reference string is 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
And use 3 frames
We get 15 page faults.
FIFO is very easy to understand, but its performance is not always good.
This algorithm is affected by Belady's anomaly.
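The FIFO example above can be checked with a short simulation. This is a sketch, not a real replacement implementation; the queue simply remembers arrival order so the oldest page is always the victim.

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement (oldest page is the victim)."""
    frames = set()
    queue = deque()          # arrival order: leftmost page is the oldest
    faults = 0
    for page in reference_string:
        if page in frames:
            continue         # hit: FIFO ignores recency, nothing to update
        faults += 1
        if len(frames) == num_frames:
            victim = queue.popleft()
            frames.remove(victim)
        frames.add(page)
        queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15, matching the example above
```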
Belady's anomaly: -
For some reference strings, the number of faults for n+1 frames is greater than the
number of faults for n frames;
i.e. the page-fault rate may increase as the number of allocated frames increases.
EX:- SEE text book
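A concrete instance can be checked in a few lines, using the classic reference string from the literature that exhibits the anomaly under FIFO:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, more faults
```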
Optimal algorithm: -
It is also called OPT or MIN.
Since FIFO's performance is not always good, we search for an optimal page
replacement algorithm, one that uses the time when a page will next be used.
An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms.
It never suffers from Belady's anomaly.
REPLACE THE PAGE THAT WILL NOT BE USED FOR THE LONGEST PERIOD OF THE TIME.
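Because OPT requires future knowledge of the reference string, it can only be simulated, not implemented in a real system. A sketch: the victim is the resident page whose next use lies farthest in the future (or that is never used again).

```python
def opt_faults(refs, num_frames):
    """Count faults under OPT: evict the page whose next use is farthest away."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # Distance to a page's next use; pages never used again rank last.
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            victim = max(frames, key=next_use)
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9 faults on the example string
```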
L.R.U: -
Replace the page that has not been used for the longest period of time.
It is quite good.
EX: - problems
7,0,1,2,0,3,0……….. (the same reference string as above)
Here we get 12 page faults. Using FIFO we get 15 faults, and using optimal we get 9.
The major problem is how to implement LRU replacement: we must determine an
order for the frames defined by the time of last use.
LRU cannot suffer from Belady's anomaly, because it belongs to the class of stack
algorithms, and no stack algorithm can ever exhibit Belady's anomaly.
A stack algorithm is an algorithm for which it can be shown that the set of pages in
memory for 'n' frames is always a subset of the set of pages that would be in memory
with n+1 frames.
LRU replacement can be implemented by using 2 methods:
1) Counters 2) Stack
Counters: - A counter, or logical clock, is added to the CPU; its value is incremented
on every memory reference.
An extra field, the time-of-last-use field, is added to each entry of the page table.
Whenever a reference is made to a page, the contents of the counter register are copied
into the time-of-use field in the page-table entry for that page. In this way we always
have the "time" of the last reference to each page, and we replace the page with the
smallest time value.
But this scheme has some drawbacks:
1) It requires a search of the page table to find the LRU page.
2) It requires a write of the counter into the page table on every memory reference.
Stack: - Another approach to implementing LRU replacement is to keep a stack of page
numbers. Whenever a page is referenced, it is removed from the stack and placed on the
top. In this way the top of the stack is always the most recently used page and the
bottom is the LRU page.
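The stack approach maps naturally onto Python's `OrderedDict`, which keeps pages in recency order; on the example reference string this sketch reproduces the 12 faults quoted for LRU.

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Simulate the stack approach: the most recently used page sits at one end."""
    stack = OrderedDict()    # insertion order models the stack; last = MRU
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)          # referenced page moves to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)    # bottom of the stack is the LRU victim
            stack[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12 faults
```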
LRU APPROXIMATION ALGORITHMS: -
Here we use a reference bit. The reference bit for a page is set by the hardware:
whenever a page is referenced, its reference bit is set to 1. The reference bits are kept
with each entry in the page table.
Initially, all bits are cleared to '0' by the O.S. As a user process executes, the reference
bit of each page it touches is set to 1 by the hardware. After some time we can determine
which pages have been used and which have not by examining the reference bits, but we
do not know the order of use or how many times each page was referenced.
For this we have some methods.
1) Additional-reference-bits algorithm: -
Instead of a single reference bit, we keep an 8-bit byte for each page in a table in
memory. We set up the hardware so that, for example, every 100 milliseconds a timer
generates an interrupt. When that interrupt occurs, the O.S shifts each page's byte right
by one bit, discarding the low-order bit, and copies the page's reference bit into the
high-order bit (then clears the reference bit). These 8-bit shift registers thus hold the
history of page use for the last eight time periods.
EX:- If the shift register contains 00000000, the page has not been used for eight time
periods.
If the shift register's value is 11111111, the page has been used at least once in each
period.
A page with register value 11000100 has been used more recently than one with
01110111.
If we interpret these bytes as unsigned integers, the page with the lowest number is the
LRU page and it can be replaced.
Notice that the numbers are not guaranteed to be unique. If two pages have the same
value, we can use FIFO selection among them.
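The per-interrupt shift-register update amounts to a single bit operation. A sketch (the 100-millisecond timing itself is not modeled, and the function name is made up):

```python
def tick(history, ref_bit):
    """One timer interrupt: shift the reference bit into the high-order bit
    of the 8-bit history byte, discarding the low-order bit."""
    return ((history >> 1) | (ref_bit << 7)) & 0xFF

h = 0b00000000
h = tick(h, 1)        # page referenced in this period
print(format(h, '08b'))   # 10000000
h = tick(h, 0)        # not referenced in this period
print(format(h, '08b'))   # 01000000
```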
Second-chance algorithm: -
The basis of this algorithm is the FIFO replacement algorithm. When a page has
been selected, however, we inspect its reference bit. If the value is '0', we proceed to
replace this page. If the reference bit is '1', we give that page a second chance and
move on to select the next FIFO page. When a page gets a second chance, its
reference bit is cleared to '0'. The algorithm can be implemented by using a circular
queue.
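A sketch of the victim search over the circular queue; the page names, bit values, and function name below are made up for illustration:

```python
def second_chance_victim(pages, ref_bits, hand):
    """Scan the circular queue: a page with reference bit 1 gets a second
    chance (its bit is cleared); the first page found with bit 0 is the victim."""
    n = len(pages)
    while True:
        if ref_bits[hand] == 0:
            return hand                     # victim's index in the queue
        ref_bits[hand] = 0                  # second chance: clear and move on
        hand = (hand + 1) % n

# Hypothetical snapshot: four resident pages with these reference bits.
pages    = ['A', 'B', 'C', 'D']
ref_bits = [1, 1, 0, 1]
victim = second_chance_victim(pages, ref_bits, hand=0)
print(pages[victim])   # C: pages A and B used up their second chance
```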
Enhanced second-chance algorithm: -
The second-chance algorithm can be enhanced by considering both the reference bit
and the modify bit. With these 2 bits we have the following four possible classes:
1) (0,0) neither recently used nor modified -> best page to replace.
2) (0,1) not recently used but modified -> not quite as good, because the page will
need to be written out before replacement.
3) (1,0) recently used but clean -> probably will be used again soon.
4) (1,1) recently used and modified -> probably will be used again soon, and a
write-out will be needed before replacement.
Counting algorithms: -
There are other algorithms used for page replacement. Here we keep a counter of
the number of references that have been made to each page.
LFU: - The least-frequently-used page-replacement algorithm requires that the page
with the smallest count be replaced.
MFU: - The most-frequently-used page-replacement algorithm is based on the argument
that the page with the smallest count was probably just brought in and has yet to be
used.
Allocation of frames: -
How do we allocate the fixed amount of free memory among the various processes?
Ex: - Suppose we have 128K of memory, the page size is 1K, and there is only one
process in the system; then there are 128 frames, and no allocation problem can occur.
Suppose there are 2 processes and the O.S uses 35K of memory; the remaining 93K
of memory is available to user processes. Then how many frames does each process
get?
Under pure demand paging, all 93 free frames would initially be put on the free-frame
list. When a user process started execution, it would generate a sequence of page faults;
the first 93 page faults would all get free frames from the free-frame list. When the list
was exhausted, a page-replacement algorithm would be used to select one of the 93
in-memory pages to be replaced with the 94th, and so on.
When the process terminated, the 93 frames would once again be placed on the free
frame list. In this method, too, no problem arises in the allocation of frames.
Minimum number of frames: -
The minimum number of frames per process is defined by the architecture, whereas the
maximum number is defined by the amount of available physical memory.
Allocation algorithms: -
If we have 'm' frames and 'n' processes, we can allocate m/n frames to each process.
Ex: - If there are 93 frames and 5 processes, each process gets 18 frames, and the
remaining 3 frames are added to the free-frame list. This scheme is called equal
allocation.
Suppose a small student process of 10K and an interactive database of 127K are the
only two processes running in a system with 62 free frames. Giving 31 frames to each
process does not make much sense: the student process needs no more than 10 frames,
so the other 21 are wasted.
To solve this problem we use proportional allocation: we allocate available memory
to each process according to its size.
Let the size of process pi be si, and define
S = Σ si
Then, if the total number of available frames is m, we allocate ai frames to process pi,
where
ai = (si / S) × m
Of course, each ai must be adjusted to an integer, must be at least the minimum number
of frames required by that process, and the sum of the ai must not exceed m.
With proportional allocation, we would split 62 frames between two processes, one of
10 pages and one of 127 pages, as
10/137 × 62 ≈ 4
127/137 × 62 ≈ 57
In this way both processes share the available frames according to their needs rather
than equally.
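The proportional-allocation arithmetic can be checked in a few lines. Integer truncation leaves a frame over (4 + 57 = 61 of 62), which would go back on the free-frame list:

```python
def proportional_allocation(sizes, m):
    """Allocate roughly (s_i / S) * m frames to each process (floor of the ratio)."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# The two processes from the text: 10 pages and 127 pages, 62 frames total.
print(proportional_allocation([10, 127], 62))   # [4, 57]
```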
Global versus local allocation: -
With multiple processes competing for frames, we can classify page-replacement
algorithms into two broad categories:
Global replacement: -
It allows a process to select a replacement frame from the set of all frames, even if
that frame is currently allocated to some other process; one process can take frames
from another.
Local replacement: - It requires that each process select only from its own set of
allocated frames.
Thrashing: -
A process has some number of pages in active use. If the process does not have that
many frames, it will very quickly page-fault. At that point it must replace some page;
but since all its pages are in active use, it must replace a page that will be needed again
right away, and very quickly it faults again and again.
The process continues to fault, replacing pages that it must then fault on and bring
back in. This high paging activity is called thrashing. A process is thrashing if it is
spending more time paging than executing.
Causes of thrashing: -
The O.S monitors CPU utilization. If CPU utilization is too low, we increase the
degree of multiprogramming by introducing a new process to the system. Suppose a
global page-replacement algorithm is used, replacing pages with no regard to the
process to which they belong.
Now suppose a process enters a new phase of execution and needs more frames. Since
there are no free frames, it starts page-faulting and takes frames away from other
processes. Those processes need the pages they lose, so they also fault, taking frames
from still other processes. The faulting processes must use the paging device, and the
ready queue empties. As processes wait for the paging device, CPU utilization
decreases.
The CPU scheduler sees the decreasing utilization and increases the degree of
multiprogramming. The new process tries to get started by taking frames from running
processes, causing more page faults and a longer queue for the paging device. As a
result, CPU utilization drops further, and the scheduler tries to increase the degree of
multiprogramming even more. Thrashing has occurred, and throughput drops.
Graph FIG
The graph plots CPU utilization against the degree of multiprogramming. As the
degree of multiprogramming increases, CPU utilization also increases, although more
slowly, until a maximum is reached. If the degree of multiprogramming is increased
even further, thrashing sets in and CPU utilization drops sharply.
At this point, to increase CPU utilization and stop the thrashing, we must decrease
the degree of multiprogramming.
The effects of thrashing can be limited by using a local replacement algorithm. With
local replacement, each process selects only from its own set of allocated frames. If one
process starts thrashing, it cannot steal frames from another process, so the other
processes are not affected by its thrashing.
To prevent thrashing, we must provide a process with as many frames as it needs.
But how do we know how many frames it needs? There are several techniques for
estimating how many frames a process is actually using.
This approach defines the locality model of process execution.
Locality: - A set of pages that are actively used together. The locality model states
that, as a process executes, it moves from one locality to another.
We see that localities are defined by the program structure and its data structures.
Working-set model: -
The working-set model is based on the assumption of locality. This model uses a
parameter, Δ, which defines the working-set window: the most recent Δ page
references. The set of pages in the working-set window is called the working set.
If a page is in active use, it will be in the working set. If it is no longer being used, it
will drop from the working set Δ time units after its last reference.
Ex:- see text book FIG
The working set depends on the selection of the working-set window Δ. If we compute
the working-set size WSSi for each process in the system, then
D = Σ WSSi
where D is the total demand for frames. If the total demand is greater than the total
number of available frames (D > m), thrashing will occur. Use of the working-set
model is simple, but the difficulty is keeping track of the working set, because the
working-set window is a moving window: at each memory reference, a new reference
appears at one end and the oldest reference drops off the other end.
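The moving working-set window can be sketched directly from the definition; the reference string and the choice Δ = 4 below are made up for illustration:

```python
def working_sets(refs, delta):
    """Working set at time t = distinct pages in the last `delta` references."""
    return [set(refs[max(0, t + 1 - delta):t + 1]) for t in range(len(refs))]

refs = [1, 2, 1, 3, 4, 4, 4, 1]
ws = working_sets(refs, 4)
print(ws[3])   # {1, 2, 3}: pages referenced at times 0..3
print(ws[6])   # {3, 4}: the window has slid past pages 1 and 2
```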
Page-fault frequency: -
To prevent thrashing we can also use the page-fault frequency. Thrashing has a high
page-fault rate, so we want to control the page-fault rate. If the page-fault rate is too
high, we know that the process needs more frames. If the page-fault rate is too low,
the process may have too many frames.
We can establish upper and lower bounds on the desired page-fault rate. If the actual
page-fault rate exceeds the upper limit, we allocate another frame to that process; if
the page-fault rate falls below the lower limit, we remove a frame from that process.
Fig see textbook.
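The upper- and lower-bound policy can be sketched as a simple control rule. The bound values here are hypothetical; real systems tune them empirically:

```python
def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10):
    """PFF control: too many faults -> give a frame; too few -> take one away.
    The bounds (2% and 10% of references faulting) are made-up examples."""
    if fault_rate > upper:
        return frames + 1          # process is starved for frames
    if fault_rate < lower and frames > 1:
        return frames - 1          # process has frames to spare
    return frames                  # within the desired band: no change

print(adjust_frames(10, 0.15))  # 11: above the upper bound, add a frame
print(adjust_frames(10, 0.01))  # 9: below the lower bound, reclaim a frame
```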