OPERATING SYSTEMS
UNIT-IV:
Memory Management: Swapping, Contiguous memory allocation, Paging, Segmentation.
Virtual memory management - Demand paging, copy-on-write, page-replacement, Thrashing.
-----------------------------------------------------------------------------------------------------------------------
Memory Management
Memory management is the function of an operating system that manages primary memory. It moves processes back and forth between main memory and the backing store (disk) during execution, and it keeps track of every memory location, whether it is allocated to some process or free.
Why Use Memory Management?
Here are the main reasons for using memory management:
It allows you to decide how much memory is allocated to each process, determining which process gets memory and when.
It tracks whenever memory gets freed or unallocated and updates the status accordingly.
It allocates space to application routines.
It makes sure that these applications do not interfere with each other.
It helps protect different processes from each other.
It places programs in memory so that memory is utilized to its full extent.
Partitioned Allocation
This method divides primary memory into multiple partitions, which are usually contiguous areas of memory. Each partition stores all the information for a specific task or job. A partition is allocated to a job when it starts and deallocated when the job ends.
Paged Memory Management
This method divides the computer's main memory into fixed-size units known as page frames. A hardware memory management unit maps pages into frames, and memory is allocated on a page basis.
Segmented Memory Management
Segments require hardware support in the form of a segment table. Each entry contains the physical address of the segment in memory, its size, and other data such as access-protection bits and status.
What is Swapping?
Swapping is a method in which a process is temporarily moved from main memory to a backing store and later brought back into memory to continue execution.
The backing store is a hard disk or some other secondary storage device that must be big enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.
Benefits of Swapping
Here are the major benefits of swapping:
Partition Allocation
Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirement. Variable partition allocation helps avoid internal fragmentation. A free partition can be selected with one of the following strategies (a sketch of these strategies appears after this list):
First Fit: The process is allocated the first free partition, searching from the beginning of main memory, that is large enough.
Best Fit: The process is allocated the smallest free partition that is still large enough.
Worst Fit: The process is allocated the largest free partition that is large enough.
Next Fit: Similar to First Fit, but the search for a sufficient partition starts from the point of the last allocation rather than from the beginning of memory.
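As an illustration of these strategies, below is a minimal sketch in Python (not part of the original notes); the list of free partition sizes and the request size are made-up values used purely to show how each fit selects a partition.

```python
# Sketch of first-fit, best-fit, and worst-fit partition selection.
# `free_partitions` is a hypothetical list of free block sizes, in the order
# they appear in main memory.

def first_fit(free_partitions, request):
    """Return the index of the first partition large enough for the request."""
    for i, size in enumerate(free_partitions):
        if size >= request:
            return i
    return None  # no partition can satisfy the request

def best_fit(free_partitions, request):
    """Return the index of the smallest partition that is still large enough."""
    candidates = [(size, i) for i, size in enumerate(free_partitions) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(free_partitions, request):
    """Return the index of the largest partition that is large enough."""
    candidates = [(size, i) for i, size in enumerate(free_partitions) if size >= request]
    return max(candidates)[1] if candidates else None

if __name__ == "__main__":
    free = [100, 500, 200, 300, 600]   # hypothetical free partition sizes (KB)
    req = 212                          # hypothetical request (KB)
    print("first fit :", first_fit(free, req))   # index 1 (500 KB)
    print("best fit  :", best_fit(free, req))    # index 3 (300 KB)
    print("worst fit :", worst_fit(free, req))   # index 4 (600 KB)
```

Next Fit would work like `first_fit` but would remember the index of the last allocation and resume the scan from there.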
Paging
Paging solves the problem of fitting memory chunks of varying sizes onto the backing store, a problem from which many earlier memory-management schemes suffered.
Paging also helps avoid external fragmentation and the need for compaction.
The paging technique divides physical memory (main memory) into fixed-size blocks known as frames and divides logical memory (the process's address space) into blocks of the same size known as pages.
The Frame has the same size as that of a Page. A frame is basically a place where a (logical) page can
be (physically) placed.
Each process is divided into parts whose size is the same as the page size; the last part may be smaller than the page size.
Pages of a process are brought into main memory only when they are required; otherwise, they reside in secondary storage.
One page of a process is stored in one of the frames of memory. The pages of a process may be placed in frames at different (non-contiguous) locations in memory; any free frame can hold any page.
Let us now cover how a logical address is translated into a physical address. Every logical address generated by the CPU is divided into two parts:
1. Page Number (p)
2. Page Offset (d)
where,
the Page Number specifies the page of the process from which the CPU wants to read data, and it is also used as an index into the page table;
the Page Offset specifies the particular word on that page that the CPU wants to read.
Page Table in OS
The page table contains the base address of each page in physical memory. This base address is combined with the page offset to form the physical memory address, which is then sent to the memory unit.
Thus the page table provides the corresponding frame number (the base address of the frame) where each page is stored in main memory.
As noted above, the frame number is combined with the page offset to form the required physical address, so a physical address consists of two parts:
1. Frame Number (f)
2. Page Offset (d)
where the frame number indicates the specific frame in which the required page is stored, and the page offset indicates the specific word to be read from that page.
The page size (like the frame size) is defined by the hardware. The page size is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the architecture of the computer.
If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m-n bits of the logical address designate the page number and the n low-order bits designate the page offset,
where p indicates the index into the page table, and d indicates the displacement within the page.
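As a small illustration of this split, here is a hedged sketch in Python; the 4 KB page size (n = 12) and the sample logical address are assumed values, not taken from the notes.

```python
# Splitting a logical address into page number p and page offset d,
# assuming the page size is a power of two (here 4 KB, i.e. n = 12).

PAGE_SIZE = 4096                            # 2^n with n = 12 (assumption)
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # n = 12

def split_logical_address(logical_address):
    """Return (page number p, page offset d) for the given logical address."""
    p = logical_address >> OFFSET_BITS       # high-order m-n bits
    d = logical_address & (PAGE_SIZE - 1)    # low-order n bits
    return p, d

if __name__ == "__main__":
    addr = 0x3A7F                            # hypothetical logical address
    p, d = split_logical_address(addr)
    print(f"logical address {addr:#x} -> page {p}, offset {d}")
    # With a page table mapping page p to frame f, the physical address
    # would be f * PAGE_SIZE + d.
```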
The above diagram indicates the translation of the Logical address into the Physical address.
The PTBR in the above diagram means page table base register and it basically holds the base address
for the page table of the current process.
The PTBR is mainly a processor register and is managed by the operating system. Commonly, each
process running on a processor needs its own logical address space.
But there is a problem with this approach: the time required to access a user memory location. If we want to access location i, we must first index into the page table, using the value in the PTBR offset by the page number for i. This task requires one memory access. It gives us the frame number, which is combined with the page offset to produce the actual address, and only then can we access the desired location in memory.
With this scheme, two memory accesses are needed to access a byte (one for the page-table entry and one for the byte itself). Thus memory access is slowed by a factor of 2, a delay that would be intolerable in most circumstances.
The standard solution to this problem is to use a special, small, fast-lookup hardware cache commonly known as the Translation Look-aside Buffer (TLB).
Each entry in the TLB consists of two parts: a key (the tag) and a value.
When the associative memory is presented with an item, the item is compared with all keys simultaneously; if the item is found, the corresponding value is returned.
The number of entries in the TLB is small, generally between 64 and 1,024.
The TLB contains only a few of the page-table entries. Whenever a logical address is generated by the CPU, its page number is presented to the TLB.
If the page number is found, its frame number is immediately available and is used to access memory. The whole task may take less than 10 percent longer than it would if an unmapped memory reference were used.
If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made.
When the frame number is obtained, it can be used to access memory. In addition, the page number and frame number are added to the TLB so that they will be found quickly on the next reference.
If the TLB is already full of entries, the operating system must select one for replacement.
Some TLBs allow certain entries to be wired down, meaning they cannot be removed from the TLB. Typically, TLB entries for kernel code are wired down.
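To make the cost concrete, the following back-of-the-envelope sketch in Python computes the effective memory-access time (EAT) with and without a TLB; the 100 ns memory-access time, 10 ns TLB lookup time, and 90 percent hit ratio are assumed values used purely for illustration.

```python
# Effective memory-access time (EAT) with and without a TLB.
# Illustrates the "slower by a factor of 2" penalty of keeping the page table
# in memory, and how a high TLB hit ratio reduces that penalty.

MEMORY_ACCESS_NS = 100   # assumed main-memory access time
TLB_LOOKUP_NS = 10       # assumed TLB lookup time
HIT_RATIO = 0.90         # assumed fraction of references found in the TLB

def eat_without_tlb():
    # one access for the page-table entry + one access for the byte itself
    return 2 * MEMORY_ACCESS_NS

def eat_with_tlb(hit_ratio):
    hit = TLB_LOOKUP_NS + MEMORY_ACCESS_NS        # frame number found in TLB
    miss = TLB_LOOKUP_NS + 2 * MEMORY_ACCESS_NS   # must consult the page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

if __name__ == "__main__":
    print("EAT without TLB:", eat_without_tlb(), "ns")        # 200 ns
    print("EAT with TLB   :", eat_with_tlb(HIT_RATIO), "ns")  # 120 ns at a 90% hit ratio
```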
Advantages of Paging
Given below are some advantages of the Paging technique in the operating system:
Disadvantages of Paging
There is an increase in the time taken to fetch an instruction, since two memory accesses are now required.
Paging Hardware
A logical address generated by the CPU is again divided into two parts:
1. Page Number (p)
2. Page Offset (d)
where,
the Page Number is used as an index into the page table, which contains the base address of each page in physical memory;
the Page Offset is combined with this base address to define the physical memory address that is sent to the memory unit.
If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m-n bits of the logical address designate the page number and the n low-order bits designate the page offset,
where p indicates the index into the page table, and d indicates the displacement within the page.
The page size is usually defined by the hardware and is typically a power of 2, varying between 512 bytes and 16 MB per page.
Paging Example
One of the simplest ways to implement paging is to implement the page table as a set of dedicated registers. However, since the number of registers is limited while the page table is usually large, the page table is instead kept in main memory.
This scheme causes no external fragmentation: any free frame can be allocated to any process that needs it. Internal fragmentation, however, is still possible in the last page of a process.
The first page of the process is loaded into the first frame listed on the free-frame list, and the frame number is put into the page table.
The frame table is a data structure that keeps information about which frames are allocated and which frames are available, among other things. It has one entry for each physical page frame.
The operating system maintains a copy of the page table for each process, just as it maintains a copy of the instruction counter and register contents. This copy is used to translate logical addresses to physical addresses whenever the operating system must perform the mapping manually.
The copy is also used by the CPU dispatcher to define the hardware page table when a process is to be allocated the CPU.
Segmentation
In segmentation, a process is divided into segments. Like paging, segmentation divides the memory, but while paging divides memory into fixed-size blocks, segmentation divides a process into variable-sized segments, which are then loaded into the logical address space.
A program is basically a collection of segments, where a segment is a logical unit such as:
main program
procedure
function
method
object
symbol table
common block
stack
arrays
Simple Segmentation
In this type, each process is divided into n segments, which are all loaded into memory together at run time; however, they need not be contiguous (they may be scattered in memory).
Characteristics of Segmentation
With this technique, both main memory and secondary memory are divided into unequal-sized partitions (segments).
Need of Segmentation
An important drawback of paged memory management is the separation between the user's view of memory and the actual physical memory: paging enforces this separation.
The user's view is mapped onto physical storage, and this mapping allows the differentiation between physical and logical memory.
The operating system may divide a single function into different pages, and those pages may or may not be loaded into memory at the same time; paging does not care about the user's view of the process. This reduces the system's efficiency. Segmentation, in contrast, supports the user's view of memory.
Basic Method
A computer system that uses segmentation has a logical address space that can be viewed as a collection of segments. The size of a segment is variable, that is, it may grow or shrink. As mentioned earlier, during execution each segment has a name and a length, and an address specifies both the segment name and the displacement within the segment.
Therefore, the user specifies each address by two quantities: a segment name and an offset. For simplicity of implementation, segments are numbered and are referred to by a segment number rather than a segment name. A logical address thus consists of two parts:
<segment-number,offset>
where,
Segment Number (s):
The segment number selects the segment of the process; the number of bits used for s determines the maximum number of segments.
Offset (d):
The segment offset gives the displacement within the segment; the number of bits used for d determines the maximum size of a segment.
Segmentation Architecture
Segment Table
A table that stores information about all segments of a process is known as the segment table. Generally, there is no simple relationship between logical addresses and physical addresses in this scheme. Each entry in the segment table contains:
1. Segment Base:
The segment base contains the starting physical address where the segment resides in memory.
2. Segment Limit:
The segment limit specifies the length of the segment.
The register that stores the base address of the segment table is known as the Segment Table Base Register (STBR).
Segmentation Hardware
Offset (d): It must lie between 0 and the segment limit. If the offset exceeds the segment limit, a trap to the operating system is generated; otherwise, the offset is added to the segment base to produce the physical address.
Advantages of Segmentation
In the segmentation technique, the segment table is used to keep a record of the segments, and the segment table occupies less space than a page table.
Segmentation allows us to divide the program into modules, which provides better visualization.
Disadvantages of Segmentation
The time taken to fetch an instruction increases, since two memory accesses are now required.
Segments are of unequal size, which makes them less suitable for swapping.
The technique leads to external fragmentation: as processes are loaded into and removed from main memory, the free space is broken into smaller pieces, resulting in a lot of wasted memory.
Example of Segmentation
Given below is an example of segmentation. There are five segments, numbered 0 to 4, stored in physical memory as shown. There is a separate entry for each segment in the segment table, containing the starting address of the segment in physical memory (the base) and the length of the segment (the limit).
Segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in a trap to the OS, as this segment is only 1000 bytes long.
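The translation and the limit check described above can be sketched in a few lines of Python. Only segment 2 (base 4300, limit 400) and the 1000-byte length of segment 0 come from the example; the base used for segment 0 is a placeholder chosen purely for illustration.

```python
# Sketch of segment-table lookup with a limit check.

segment_table = {
    # segment: (base, limit)
    0: (1400, 1000),   # base 1400 is an assumed placeholder; limit 1000 is from the text
    2: (4300, 400),    # from the example: 400 bytes long, begins at 4300
}

def translate(segment, offset):
    """Map <segment, offset> to a physical address, trapping on a bad offset."""
    base, limit = segment_table[segment]
    if not (0 <= offset < limit):
        raise MemoryError(f"trap: offset {offset} outside segment {segment} (limit {limit})")
    return base + offset

if __name__ == "__main__":
    print(translate(2, 53))      # 4300 + 53 = 4353, as in the example
    try:
        translate(0, 1222)       # exceeds the 1000-byte limit of segment 0 -> trap
    except MemoryError as e:
        print(e)
```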
What is Segmentation?
The segmentation method works almost like paging. The only difference between the two is that segments are of variable length, whereas in the paging method pages are always of fixed size.
A program segment includes the program's main function, data structures, utility functions, and so on. The OS maintains a segment map table for every process, together with a list of free memory blocks along with segment sizes, segment numbers, and their memory locations in main memory or virtual memory.
S. No. | Logical Address | Physical Address
1. | This address is generated by the CPU. | This address is a location in the memory unit.
2. | The logical address space is the set of all logical addresses. | The physical address space is the set of all physical addresses that are mapped to the corresponding logical addresses.
4. | The user has the ability to view the logical address of a program. | The user cannot view the physical address of a program directly.
5. | The user uses the logical address in order to access the physical address. | The user can only access the physical address indirectly.
Demand Paging
A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a hard disk). Demand paging solves the problem of loading an entire process by swapping in pages only on demand; this is done by a lazy swapper, which never swaps a page into memory unless that page is needed.
A swapper that deals with the individual pages of a process is referred to as a pager.
Demand paging is thus a technique in which a page is brought into main memory only when it is needed or demanded by the CPU. Initially, only those pages that the process requires immediately are loaded. Pages that are never accessed are never loaded into physical memory.
Valid-Invalid Bit
Some form of hardware support is used to distinguish between the pages that are
in the memory and the pages that are on the disk. Thus for this purpose Valid-
Invalid scheme is used:
1. If the bit is set to "valid", the associated page is both legal and in memory.
2. If the bit is set to "invalid", the page is either not valid (not in the logical address space of the process) or valid but currently on the disk rather than in memory.
For the pages that are brought into memory, the page table is set as usual.
For the pages that are not currently in memory, the page-table entry is either simply marked invalid or contains the address of the page on the disk.
During address translation, if the valid-invalid bit in the page-table entry is invalid, a page fault occurs.
The figure above shows a page table when some pages are not in main memory.
Figure: Handling a page fault, involving the CPU, the page table, main memory, secondary memory, the operating system, and an interrupt.
1. If a page required by a process is not available in main memory, a request is made for that page; for this purpose, an interrupt (page-fault trap) is generated.
2. The operating system moves the process to the blocked state because of the interrupt.
3. The operating system then locates the required page in the logical address space (on the backing store).
4. Finally, with the help of a page-replacement algorithm, the page is brought into the physical address space, replacing another page if necessary. The page tables are updated simultaneously.
5. The CPU is then informed about the update and asked to continue execution, and the process goes back to its ready state.
When the process requires any page that is not loaded into memory, a page-fault trap is triggered and the following steps are followed:
1. The memory reference is first checked against an internal table (usually kept with the process control block) to determine whether it is a valid or an invalid reference.
2. If the reference is invalid, the process is terminated; if it is valid but the page has not yet been brought in, it must be paged in.
3. If the request by the process is valid, a free frame is located, possibly from a free-frame list, into which the required page will be moved.
4. A disk operation is scheduled to move the necessary page from the disk to the allocated frame. (This usually blocks the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the
beginning.
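The steps above can be mimicked by a highly simplified sketch in Python (a toy simulation, not an actual OS routine): a page is "valid" only if it appears in the page table, a page fault grabs a frame from a hypothetical free-frame list, and the disk read is simulated by a print statement.

```python
# Toy simulation of demand-paging page-fault handling.

page_table = {}              # page -> frame number, only for pages marked valid
free_frames = [0, 1, 2, 3]   # hypothetical free-frame list

def access(page):
    """Return the frame holding `page`, servicing a page fault if necessary."""
    if page in page_table:                 # valid bit set: page is in memory
        return page_table[page]
    # Page fault: the valid-invalid bit is invalid, so trap to the "OS".
    print(f"page fault on page {page}")
    if not free_frames:
        raise RuntimeError("no free frame: a page-replacement algorithm is needed")
    frame = free_frames.pop(0)             # locate a free frame
    print(f"  scheduling disk read of page {page} into frame {frame}")
    page_table[page] = frame               # update page table, mark valid
    return frame                           # then restart the faulting instruction

if __name__ == "__main__":
    for p in [5, 5, 9, 5]:
        print(f"access page {p} -> frame {access(p)}")
```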
With this technique, portions of the process that are never referenced are never loaded.
In pure demand paging, not even a single page is loaded into memory initially; thus, pure demand paging causes a page fault on the very first reference.
When a process starts executing with no pages in memory, the operating system sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, so the process immediately faults for that page.
After that page is brought into memory, the process continues to execute, faulting as necessary until every page it needs is in memory.
Advantages:
Reduces the memory requirement.
Swap time is also reduced.
Increases the degree of multiprogramming (CPU utilization increases).
Disadvantages:
The page-fault rate increases for bigger programs.
If the swap file is large, it is difficult to manage alongside the limited main memory.
Copy-On-Write:
Copy-on-write (CoW) is a technique for efficiently copying data resources in a computer system. If a unit of data is copied but not modified, the "copy" can exist merely as a reference to the original data.
Only when the copied data is modified is a separate copy actually created (new bytes are actually written), as the name of the technique suggests.
The main use of this technique is in the implementation of the fork() system call, in which the parent and child process initially share the same virtual memory pages.
Recall that in UNIX, the fork() system call creates a duplicate of the parent process, known as the child process.
Free pages in this technique are allocated from a pool of zeroed-out pages.
The pages shared between the parent and child process are marked copy-on-write, which means that if either the parent or the child attempts to modify a shared page, a copy of that page is created and the modification is applied only to that copy, so the other process is not affected.
Now let us look at a basic example of this technique:
Suppose process A creates a new process, process B. Initially, both processes share the same pages of memory.
Figure: Parent and child process sharing the same pages of memory.
Now, assume that process A wants to modify a page in memory. With the copy-on-write (CoW) technique, only the pages that are modified by either process are copied; all unmodified pages can continue to be shared by the parent and child process.
Free pages are typically allocated when the stack or heap of a process must expand or when there are copy-on-write pages to be copied.
These pages are allocated using a technique known as zero-fill-on-demand: zero-fill-on-demand pages are zeroed out before being allocated, erasing their previous contents.
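The sharing-until-modification behaviour can be observed from user space with a small Python script (Unix-only, since it relies on os.fork()). The copy-on-write mechanism itself is performed transparently by the kernel; what the script shows is its visible effect, namely that a modification made by the child after fork() does not affect the parent.

```python
# Demonstration of fork() semantics that copy-on-write makes cheap:
# after fork(), parent and child logically have their own copies of memory,
# but the kernel only copies a page when one of them writes to it.

import os

data = ["original"]          # data both processes see right after fork()

pid = os.fork()
if pid == 0:
    # Child: writing to the list touches a shared page, so the kernel
    # copies that page for the child before applying the change.
    data[0] = "modified by child"
    print("child sees :", data[0])
    os._exit(0)
else:
    os.waitpid(pid, 0)       # wait for the child to finish
    # Parent still sees the original value: the child's write did not affect it.
    print("parent sees:", data[0])
```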
The Copy on Write mechanism is particularly useful in virtualized environments, where multiple
virtual machines may be running on a single physical server. By allowing virtual machines to share
memory pages, the operating system can reduce the amount of memory needed to support each
virtual machine, which can improve overall system performance.
Overall, the Copy on Write mechanism offers significant advantages in terms of memory usage,
performance, and system scalability. It is a widely used technique in modern operating systems and
has become an important part of the memory management strategies used by operating system
developers.
However, copy-on-write also has some drawbacks:
2. Increased memory usage when multiple processes modify the same page frequently
If multiple processes modify the same page frequently, the Copy on Write mechanism may create
multiple copies of the page, which can lead to increased memory usage. This can become a concern
in scenarios where memory usage is limited or where many processes are frequently modifying the
same pages.
3. Complexity of implementation
The Copy on Write mechanism is a complex technique that requires careful implementation to
ensure that it functions correctly. This can make it more difficult to develop and maintain operating
systems that use this technique.
Despite these potential drawbacks, the Copy on Write mechanism remains a widely used technique
in modern operating systems due to its many advantages. Operating system developers must
carefully consider the benefits and drawbacks of the Copy on Write mechanism when designing and
implementing their memory management strategies.
Thrashing:
Look at any process that does not have “enough” frames. If the process does not have the number of
frames it needs to support pages in active use, it will quickly page-fault. At this point, it must replace
some page. However, since all its pages are in active use, it must replace a page that will be needed
again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it
must bring back in immediately. This high paging activity is called thrashing. A process is thrashing if it
is spending more time paging than executing.
Cause of Thrashing :
Thrashing results in severe performance problems.
The operating system monitors CPU utilization. If CPU utilization is too low, we increase the degree of
multiprogramming by introducing a new process to the system. A global page-replacement algorithm
is used; it replaces pages without regard to the process to which they belong. Now suppose that a
process enters a new phase in its execution and needs more frames. It starts faulting and taking
frames away from other processes. These processes need those pages, however, and so they also
fault, taking frames from other processes. These faulting processes must use the paging device to
swap pages in and out. As they queue up for the paging device, the ready queue empties. As
processes wait for the paging device, CPU utilization decreases. The CPU scheduler sees the
decreasing CPU utilization and increases the degree of multiprogramming as a result. The new
process tries to get started by taking frames from running processes, causing more page faults and a
longer queue for the paging device. As a result, CPU utilization drops even further, and the CPU
scheduler tries to increase the degree of multiprogramming even more. Thrashing has occurred, and
system throughput plunges. The page-fault rate increases tremendously. As a result, the effective memory-access time increases. No work is getting done, because the processes are spending all their time paging.
Let's understand by an example, if any process does not have the number of
frames that it needs to support pages in active use then it will quickly page fault.
And at this point, the process must replace some pages. As all the pages of the
process are actively in use, it must replace a page that will be needed again right
away. Consequently, the process will quickly fault again, and again, and again,
replacing pages that it must bring back in immediately. This high paging activity
by a process is called thrashing.
During thrashing, the CPU spends less time on actual productive work and more time swapping pages.
Figure: Thrashing
Causes of Thrashing
Thrashing affects the performance of execution in the Operating system. Also,
thrashing results in severe performance problems in the Operating system.
When CPU utilization is low, the process-scheduling mechanism tries to load many processes into memory at the same time, so the degree of multiprogramming increases. In this situation there are more processes in memory than available frames, and each process is therefore allocated only a limited number of frames.
Whenever a process with high priority arrives in memory and no frame is free, another process occupying a frame is moved to secondary storage, and the freed frame is allocated to the higher-priority process.
We can also say that as soon as memory fills up, processes start spending a lot of time waiting for the required pages to be swapped in. CPU utilization again becomes low, because most of the processes are waiting for pages.
Thus a high degree of multiprogramming and a lack of frames are the two main causes of thrashing in the operating system.
This phenomenon is illustrated in Figure, in which CPU utilization is plotted against the degree of
multiprogramming. As the degree of multiprogramming increases, CPU utilization also increases,
although more slowly, until a maximum is reached. If the degree of multiprogramming is increased
even further, thrashing sets in, and CPU utilization drops sharply. At this point, to increase CPU
utilization and stop thrashing, we must decrease the degree of multiprogramming.
Figure: Thrashing
We can limit the effects of thrashing by using a local replacement algorithm (or priority replacement
algorithm). With local replacement, if one process starts thrashing, it cannot steal frames from
another process and cause the latter to thrash as well. However, the problem is not entirely solved. If
processes are thrashing, they will be in the queue for the paging device most of the time. The average
service time for a page fault will increase because of the longer average queue for the paging device.
Thus, the effective access time will increase even for a process that is not thrashing. To prevent
thrashing, we must provide a process with as many frames as it needs.
We can determine how many frames a process "needs" by several techniques. The working-set strategy starts by
looking at how many frames a process is actually using. This approach defines the locality model of
process execution. The locality model states that, as a process executes, it moves from locality to
locality. A locality is a set of pages that are actively used together. A program is generally composed
of several different localities, which may overlap. For example, when a function is called, it defines a
new locality. In this locality, memory references are made to the instructions of the function call, its
local variables, and a subset of the global variables. When we exit the function, the process leaves
this locality, since the local variables and instructions of the function are no longer in active use. We
may return to this locality later. We see that localities are defined by the program structure and its
data structures. The locality model states that all programs will exhibit this basic memory reference
structure.
We allocate enough frames to a process to accommodate its current locality. It will fault for the pages
in its locality until all these pages are in memory; then, it will not fault again until it changes localities.
If we do not allocate enough frames to accommodate the size of the current locality, the process will
thrash, since it cannot keep in memory all the pages that it is actively using.
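The working-set idea can be sketched in a few lines of Python: for a window size delta, the working set at time t is the set of distinct pages referenced in the last delta references. The reference string and the value of delta below are made-up values used only for illustration.

```python
# Working-set computation over a sliding window of the last `delta` references.

def working_set(reference_string, t, delta):
    """Distinct pages referenced in the window ending at position t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])

if __name__ == "__main__":
    refs = [1, 2, 1, 5, 7, 7, 7, 5, 1, 6]   # hypothetical reference string
    delta = 4                               # hypothetical window size
    for t in range(len(refs)):
        ws = working_set(refs, t, delta)
        print(f"t={t} page={refs[t]} working set={sorted(ws)} (needs {len(ws)} frames)")
```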
Page-Fault Frequency:
The working-set model is successful, and knowledge of the working set can be useful for prepaging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault frequency (PFF)
takes a more direct approach. The specific problem is how to prevent thrashing. Thrashing has a high
page-fault rate. Thus, we want to control the page-fault rate. When it is too high, we know that the
process needs more frames. Conversely, if the page-fault rate is too low, then the process may have
too many frames. We can establish upper and lower bounds on the desired page-fault rate. If the
actual page-fault rate exceeds the upper limit, we allocate the process another frame. If the page-
fault rate falls below the lower limit, we remove a frame from the process. Thus, we can directly
measure and control the page-fault rate to prevent thrashing.
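A sketch of this PFF control loop in Python is shown below; the upper and lower bounds and the simulated fault rates are illustrative values, not taken from the text.

```python
# Page-fault-frequency (PFF) control: add a frame when the measured fault rate
# exceeds the upper bound, remove one when it drops below the lower bound.

UPPER_BOUND = 0.20   # faults per reference above which a frame is added (assumed)
LOWER_BOUND = 0.05   # faults per reference below which a frame is removed (assumed)

def adjust_frames(allocated_frames, fault_rate):
    """Return the new frame allocation after one measurement interval."""
    if fault_rate > UPPER_BOUND:
        return allocated_frames + 1          # process needs more frames
    if fault_rate < LOWER_BOUND and allocated_frames > 1:
        return allocated_frames - 1          # process may have too many frames
    return allocated_frames                  # within the desired band

if __name__ == "__main__":
    frames = 4
    for rate in [0.30, 0.25, 0.10, 0.02, 0.02]:   # simulated fault rates
        frames = adjust_frames(frames, rate)
        print(f"fault rate {rate:.2f} -> {frames} frames")
```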
Page Replacement
This section introduces the concept of page replacement, which is used in memory management. You will learn the definition, the need for page replacement, and the various algorithms related to it.
A computer system has a limited amount of memory. Adding more memory physically is very
costly. Therefore most modern computers use a combination of both hardware and software to
allow the computer to address more memory than the amount physically present on the system.
This extra memory is actually called Virtual Memory.
Virtual memory is a storage-allocation scheme used by the memory management unit (MMU) to compensate for the shortage of physical memory by transferring data between RAM and disk storage. It addresses secondary memory as though it were part of main memory. Virtual memory makes the memory appear larger than it actually is, which allows the execution of programs that are larger than physical memory.
Virtual memory is commonly implemented using two techniques:
Paging
Segmentation
Paging
Paging is a process of reading data from, and writing data to, the secondary storage. It is a memory
management scheme that is used to retrieve processes from the secondary memory in the form of
pages and store them in primary memory. The main objective of paging is to divide each process into fixed-size pages. These pages are stored in frames in the main memory.
Pages of a process are only brought from the secondary memory to the main memory when they
are needed.
When an executing process refers to a page, it is first searched in the main memory. If it is not
present in the main memory, a page fault occurs.
** Page Fault is the condition in which a running process refers to a page that is not loaded in the
main memory.
In such a case, the OS has to bring the page from the secondary storage into the main memory. This
may cause some pages in the main memory to be replaced due to limited storage. A Page
Replacement Algorithm is required to decide which page needs to be replaced.
A page replacement is required when the requested page is not present in the main memory and the available free space is not sufficient to allocate the requested page.
When a page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This waiting time determines the quality of the page-replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm tries to select which pages should be replaced so as to minimize the
total number of page misses. There are many different page replacement algorithms. These
algorithms are evaluated by running them on a particular string of memory references and computing the number of page faults. The fewer the page faults, the better the algorithm is for that situation.
** If a process requests a page and that page is found in the main memory, it is called a page hit; otherwise, it is a page miss or page fault.
First In First Out (FIFO)
When there is a need for page replacement, the FIFO algorithm swaps out the page at the front of the queue, that is, the page that has been in memory for the longest time.
For Example:
Consider the page-reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3, with 4 frames (i.e., at most 4 pages can be in memory at a time).
Initially, all 4 slots are empty, so when 1, 2, 3, 4 came they are allocated to the empty slots in order
of their arrival. This is page fault as 1, 2, 3, 4 are not available in memory.
When 5 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 1.
When 1 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 2.
When 3 and 1 come, they are already in memory (page hits), so no replacement occurs.
When 6 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 3.
When 3 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 4.
When 2 comes, it is not available in memory so page fault occurs and it replaces the oldest page in
memory, i.e., 5.
When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
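The walkthrough above can be reproduced with a short Python simulation of FIFO replacement (a sketch, using the reference string and 4 frames from the example); it reports 9 page faults for this string, matching the trace.

```python
# FIFO page-replacement simulation.

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()                  # front of the queue = oldest page
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                  # page hit, no replacement
        faults += 1                   # page fault
        if len(frames) == num_frames:
            frames.popleft()          # evict the page that arrived first
        frames.append(page)
    return faults

if __name__ == "__main__":
    refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
    print("FIFO page faults:", fifo_page_faults(refs, 4))   # 9 faults for this string
```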
Advantages
Low overhead.
Disadvantages
Poor performance.
Doesn't consider the frequency of use or the time of last use; it simply replaces the oldest page.
Suffers from Belady's Anomaly (i.e., the number of page faults may increase when the number of page frames is increased).
Least Recently Used (LRU)
In LRU, whenever a page replacement happens, the page that has not been used for the longest amount of time is replaced.
For Example
Initially, all 4 slots are empty, so when 1, 2, 3, 4 came they are allocated to the empty slots in order
of their arrival. This is page fault as 1, 2, 3, 4 are not available in memory.
When 5 comes, it is not available in memory so page fault occurs and it replaces 1 which is the least
recently used page.
When 1 comes, it is not available in memory so page fault occurs and it replaces 2.
When 3 and 1 come, they are already in memory (page hits), so no replacement occurs.
When 6 comes, it is not available in memory so page fault occurs and it replaces 4.
When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
When 2 comes, it is not available in memory so page fault occurs and it replaces 5.
When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
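Likewise, a short Python sketch of LRU replacement on the same reference string and 4 frames reports 8 page faults, matching the walkthrough above.

```python
# LRU page-replacement simulation.  An OrderedDict keeps the pages ordered from
# least recently used (front) to most recently used (back).

from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    frames = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)       # page hit: mark as most recently used
            continue
        faults += 1                        # page fault
        if len(frames) == num_frames:
            frames.popitem(last=False)     # evict the least recently used page
        frames[page] = True
    return faults

if __name__ == "__main__":
    refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
    print("LRU page faults:", lru_page_faults(refs, 4))   # 8 faults for this string
```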
Advantages
Efficient.
Disadvantages
Complex Implementation.
Expensive.
Optimal Page Replacement
In this algorithm, the page that will not be used for the longest duration of time in the future is replaced, i.e., the page in memory that is going to be referenced farthest in the future is replaced.
This algorithm was introduced long ago and is difficult to implement because it requires future knowledge of the program's behaviour. However, it is possible to implement optimal page replacement on a second run by using the page-reference information collected during the first run.
For Example
Initially, all 4 slots are empty, so when 1, 2, 3, 4 came they are allocated to the empty slots in order
of their arrival. This is page fault as 1, 2, 3, 4 are not available in memory.
When 5 comes, it is not in memory, so a page fault occurs and it replaces 4, the page that will be used farthest in the future among 1, 2, 3, 4 (in fact, 4 is never referenced again).
When 1, 3, 1 come, they are already in memory (page hits), so no replacement occurs.
When 6 comes, it is not in memory, so a page fault occurs and it replaces 1.
When 3, 2, 3 come, they are already in memory (page hits), so no replacement occurs.
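Finally, a short Python sketch of optimal replacement on the same reference string and 4 frames reports 6 page faults, matching the walkthrough; note that it "looks ahead" in the reference string, which is possible in a simulation but not in a real operating system.

```python
# Optimal (farthest-future-use) page-replacement simulation.

def optimal_page_faults(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                              # page hit
        faults += 1                               # page fault
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest in the future
        # (or that is never used again).
        future = reference_string[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults

if __name__ == "__main__":
    refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
    print("Optimal page faults:", optimal_page_faults(refs, 4))   # 6 faults for this string
```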
Advantages
Easy to understand, and it serves as a benchmark for evaluating other algorithms.
Highly efficient.
Disadvantages
Time-consuming, and it cannot be used in a real system because it requires future knowledge of the reference string.