ANJUMAN COLLEGE OF ENGINEERING & TECHNOLOGY
MANGALWARI BAZAAR ROAD, SADAR, NAGPUR - 440001.
Managed By Anjuman Hami-E-Islam, Nagpur
Department of Computer Science & Engineering
Subject: Operating System
Year/ Semester: 2nd year/ 3rd sem    Name of Faculty: Prof. Qudsiya Naaz

Unit 3: Memory Management
Role/Function of Memory management
Memory management keeps track of each and every memory location, regardless of
whether it is allocated to some process or free.
It checks how much memory is to be allocated to processes. It decides which process
will get memory at what time.
It tracks whenever some memory gets freed or unallocated and correspondingly it
updates the status.
Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.
The memory management techniques can be classified into the following main categories:
1. Contiguous Allocation
Contiguous memory allocation is a memory management technique in which, whenever a
user process requests memory, a single contiguous block of memory is given to that
process according to its requirement.
Contiguous memory allocation is achieved by dividing the memory into partitions. The
memory can be divided either into fixed-size partitions or into variable-size partitions
in order to allocate contiguous space to user processes.
Fixed-size Partition Scheme
This technique is also known as static partitioning. In this scheme, the system
divides the memory into fixed-size partitions. The partitions may or may not all be
the same size, but the size of each partition is fixed, as the name of the technique
indicates, and it cannot be changed.
In this partition scheme, each partition may contain exactly one process. A problem
with this technique is that it limits the degree of multiprogramming, because the
number of partitions basically decides the number of processes.
Whenever any process terminates then the partition becomes available for another
process.
Example
Let's take an example of the fixed-size partitioning scheme: we divide a memory of 15
KB into fixed-size partitions:
It is important to note that these partitions are allocated to the processes as they arrive
and the partition that is allocated to the arrived process basically depends on the
algorithm followed.
If there is some wastage inside the partition then it is termed Internal Fragmentation.
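To make the idea concrete, here is a minimal Python sketch (the partition sizes and process sizes are assumed for illustration, reusing the 15 KB memory from the example above) of how fixed-size partitions are handed out and where internal fragmentation comes from:

# Fixed-size partitioning: each arriving process gets one whole partition
# (the first free partition that is large enough); the unused part of that
# partition is internal fragmentation.
partitions = [2, 5, 8]          # assumed partition sizes in KB (fixed in advance, total 15 KB)
free = [True, True, True]       # which partitions are currently free

def allocate(process_kb):
    for i, size in enumerate(partitions):
        if free[i] and size >= process_kb:
            free[i] = False
            wasted = size - process_kb          # internal fragmentation
            print(f"{process_kb} KB -> partition {i} ({size} KB), wasted {wasted} KB")
            return i
    print(f"{process_kb} KB -> no partition available")
    return None

allocate(4)   # fits in the 5 KB partition, 1 KB wasted
allocate(3)   # fits in the 8 KB partition, 5 KB wasted
allocate(6)   # only the 2 KB partition is left, so this process cannot be loaded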
Disadvantages of Fixed-size Partition Scheme
1. Internal Fragmentation
Suppose the size of the process is less than the size of the partition; in that case, part
of the partition is wasted and remains unused. This wastage inside the memory is
generally termed internal fragmentation.
As shown in the above diagram, the 70 KB partition is used to load a process of
50 KB, so the remaining 20 KB is wasted.
2. External Fragmentation
This is another drawback of the fixed-size partition scheme: the total unused space
across the various partitions cannot be used to load a process, even though space is
available, because it is not contiguous.
Variable-size Partition Scheme
This technique is also known as dynamic partitioning. As the partition size is decided
according to the need of the process, in this partition scheme there is no internal
fragmentation.
Disadvantages of Variable-size Partition Scheme
1. External Fragmentation:
Although the absence of internal fragmentation is an advantage of this partition
scheme, it does not mean there will be no external fragmentation. (External
fragmentation happens when there is enough total free space in the memory to satisfy
the memory request of a process, but the request cannot be fulfilled because the
available memory is not contiguous.)
Let us understand this with the help of an example:
In the above diagram, we can see that there is enough space (55 KB) to run process
P7 (which requires 50 KB), but the memory (fragments) is not contiguous. Here, we use
compaction, paging, or segmentation to make use of the free space to run a process.
2. Difficult Implementation:
The implementation of this partition scheme is more difficult than the fixed
partitioning scheme, as it involves allocating memory at run-time rather than during
system configuration. The OS keeps track of all the partitions, but here allocation and
deallocation happen very frequently and the partition sizes change every time, so it is
difficult for the operating system to manage everything.
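As an illustration of this run-time bookkeeping, here is a small Python sketch (the memory size, the request sizes and the first-fit policy are all assumed) of a variable-size allocator that keeps a list of holes; it also shows external fragmentation appearing:

# Dynamic (variable-size) partitioning with a first-fit hole list.
# Each hole is (start, size); partitions are created at run time, so the OS
# must keep this list up to date on every allocation and deallocation.
holes = [(0, 100)]   # one 100 KB hole initially (assumed memory size)

def allocate(size):
    for i, (start, hole) in enumerate(holes):
        if hole >= size:
            # carve the partition out of the front of the hole
            holes[i] = (start + size, hole - size)
            if holes[i][1] == 0:
                holes.pop(i)
            return start
    return None   # enough total space may exist, but not contiguously

def release(start, size):
    holes.append((start, size))   # a real allocator would also merge adjacent holes

a = allocate(30); b = allocate(30); c = allocate(30)   # 90 KB used, one 10 KB hole left
release(a, 30)        # a 30 KB hole appears at the front
release(c, 30)        # 70 KB free in total (30 + 30 + 10), but in three separate holes
print(allocate(40))   # None: external fragmentation, no single 40 KB hole exists
print(holes)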
2. Non-Contiguous Allocation
Non-contiguous memory allocation allows a process to acquire several memory blocks
at different locations in memory, according to its requirement.
It also reduces the memory wastage caused by internal and external fragmentation,
because it makes use of the memory holes created by that fragmentation.
o The available free memory space is scattered here and there rather than being in one
place, so allocation is more time-consuming.
o A process acquires memory space, but not in one place; the blocks are at different
locations, according to the process requirement.
This reduces the wastage of memory that would otherwise lead to internal and external
fragmentation, and it utilizes the free space created by different processes.
(I) PAGING:
A non-contiguous policy with a fixed size partition is called paging.
In paging, secondary memory and main memory are divided into equal fixed size
partitions.
The partitions of the secondary memory are called pages while the partitions of
the main memory are called frames. They are divided into equal size partitions to
have maximum utilization of the main memory and avoid external fragmentation.
Example: We have a process P with a process size of 4 B and a page size of 1 B.
Therefore there will be four pages (say P0, P1, P2, P3), each of size 1 B. Also, when
this process goes into the main memory for execution then, depending upon
availability, it may be stored in a non-contiguous fashion in the main-memory frames
as shown below:
Logical Address: The logical address consists of two parts page number and page
offset.
1. Page Number: It tells the exact page of the process which the CPU wants to
access.
2. Page Offset: It tells the exact word on that page which the CPU wants to
read.
Page table: A page table contains the frame number corresponding to each page
number of a specific process, so each process will have its own page table. A
register called the Page Table Base Register (PTBR) holds the base address of the
page table.
Now, let's see how the translation is done.
The CPU generates the logical address, which contains the page number and
the page offset. The PTBR register contains the address of the page table. The
page table helps in determining the frame number corresponding to the page
number. With the help of the frame number and the page offset, the physical
address is determined and the page is accessed in the main memory.
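The translation described above can be sketched in a few lines of Python (the page size and the page-table contents are assumed values, purely for illustration):

# Paging: logical address -> (page number, offset) -> frame number -> physical address
PAGE_SIZE = 4                            # assumed page size (in words)
page_table = {0: 5, 1: 2, 2: 7, 3: 0}    # page number -> frame number (per-process table)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # which page of the process
    offset      = logical_address %  PAGE_SIZE   # which word inside that page
    frame       = page_table[page_number]        # looked up via the page table (PTBR points to it)
    return frame * PAGE_SIZE + offset            # physical address in main memory

print(translate(6))    # page 1, offset 2 -> frame 2 -> physical address 10
print(translate(13))   # page 3, offset 1 -> frame 0 -> physical address 1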
Advantages of Paging
1. Pages can be placed in any free frames, so there is no external fragmentation.
2. Swapping is easy between equal-sized pages and frames.
Disadvantages of Paging
1. As the size of the frame is fixed, paging may suffer from internal fragmentation: the
process may be too small to occupy an entire frame.
2. The access time increases because of paging, as the main memory now has to be
accessed twice: first we access the page table, which is itself stored in the main
memory, and then we combine the frame number with the page offset to get the
physical address of the page, which is again in the main memory.
3. For every process we have an independent page table, and maintaining the page
tables is extra overhead.
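For instance (with assumed, purely illustrative numbers): if one main-memory access takes 100 ns, then a paged memory reference without a TLB takes about 2 × 100 ns = 200 ns, one access to read the page table entry and a second to read the actual word, which is the doubling of the access time mentioned in point 2 above.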
II) Segmentation:
In paging, we blindly divide the process into pages of fixed size, but in
segmentation we divide the process into modules, which gives a better view of the
process. Each segment or module consists of the same type of functions.
For example, the main function is kept in one segment, the library functions are kept
in another segment, and so on. As the sizes of segments may vary, memory is
divided into variable-size parts.
Translation of logical Address into physical Address
As a CPU always generates a logical address and we need a physical address for
accessing the main memory. This mapping is done by the MMU (memory
management Unit) with the help of the segment table.
Let’s first understand some of the basic terms then we will see how this
translation is done.
Logical Address: The logical address consists of two parts: segment
number and segment offset.
1. Segment Number: It tells the specific segment of the process from
which the CPU wants to read the data.
2. Segment Offset: It tells the exact word in that segment which the CPU
wants to read.
Logical Address = (Segment Number, Segment Offset)
Physical Address: The physical address is obtained by adding the base
address of the segment to the segment offset.
Segment table: A segment table stores the base address of each segment in the
main memory. It has two parts i.e. Base and Limit. Here, base indicates the base
address or starting address of the segment in the main memory. Limit tells the
size of that segment. A register called the Segment Table Base Register (STBR)
holds the base address of the segment table. The segment table is also stored in the
main memory itself.
How is the translation done?
The CPU generates the logical address, which contains the segment number and
the segment offset. The STBR register contains the address of the segment table. Now,
the segment table helps in determining the base address of the
segment corresponding to the segment number. Then the segment offset is compared
with the limit corresponding to that base.
If the segment offset is greater than the limit then it is an invalid address. This is
because the CPU is trying to access a word in the segment and this value is greater
than the size of the segment itself which is not possible. If the segment offset is
less than or equal to the limit then only the request is accepted. The physical
address is generated by adding the base address of the segment to the segment
offset.
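A small Python sketch of this translation and the limit check (the segment-table entries are assumed values for illustration):

# Segmentation: (segment number, offset) -> limit check -> base + offset
# Each entry is (base, limit); in hardware the STBR points to this table.
segment_table = {0: (1400, 1000),   # segment 0 starts at 1400 and is 1000 words long
                 1: (6300,  400),
                 2: (4300,  400)}

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset > limit:                     # offset beyond the segment -> invalid address (trap)
        raise MemoryError("invalid address: offset exceeds segment limit")
    return base + offset                   # physical address

print(translate(2, 53))        # 4300 + 53 = 4353
try:
    print(translate(0, 1222))
except MemoryError as e:
    print(e)                   # 1222 > limit of 1000, so the reference is rejected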
Advantages of Segmentation
1. The size of the segment table is less compared to the size of the page table.
2. There is no internal fragmentation.
Disadvantages of Segmentation
1. When processes are loaded and removed (during swapping) from the main
memory, the free memory space is broken into smaller pieces, and this causes
external fragmentation.
2. Here also the time to access the data increases, since due to segmentation the main
memory has to be accessed twice: first we access the segment table, which is also
stored in the main memory, and then we combine the base address of the segment
with the segment offset to get the physical address, which again refers to the main
memory.
Demand Paging:
Demand paging is a process in which data is moved from secondary memory to RAM on a
demand basis, which means that not all data is stored in the main memory, because space in
RAM is limited. So when the CPU references a page of a process that is not in RAM,
swapping is needed: an existing page is moved from RAM back to secondary memory, and
the required page is brought into RAM.
Once the required page has been brought into RAM, the page table is updated accordingly,
the CPU is informed about the update and asked to proceed with the execution, and the
process returns to its ready state.
Page Fault:
If the referred page is not present in the main memory then there will be a miss and the
concept is called Page miss or page fault.
Whenever any page fault occurs, the required page has to be fetched from the
secondary memory into the main memory.
If the required page is not loaded into memory, a page fault trap arises. The page fault
generates an exception, which notifies the operating system that it must retrieve the page
from the backing store (virtual memory) in order to continue the execution.
Once all the data has been moved into physical memory, the program continues its execution
normally. The page-fault handling takes place in the background and thus goes unnoticed by
the user.
The computer hardware traps to the kernel and the program counter (PC) is
generally saved on the stack. The CPU registers store the information about the current
state of the instruction.
An assembly-language routine is started that saves the general registers and the
other volatile information, to prevent the OS from destroying it.
Accessing a page that is marked as invalid also causes a page fault: while translating the
address through the page table, the paging hardware notices that the invalid bit is set, which
causes a trap to the operating system.
This trap is the result of the operating system not yet having brought the desired page into
memory.
Let us understand the procedure to handle the page fault as shown with the help of the
above diagram:
1. First of all, we check an internal table (usually kept in the process control block) for this
process, in order to determine whether the reference was a valid or an invalid memory access.
2. If the reference is invalid, then we terminate the process. If the reference is valid but
we have not yet brought that page in, we now page it in.
3. Then we consult the free-frame list in order to find a free frame.
4. Now a disk operation is scheduled in order to read the desired page into the newly
allocated frame.
5. When the disk read is complete, the internal table kept with the process and the page
table are modified to indicate that the page is now in memory.
6. Now we will restart the instruction that was interrupted due to the trap. Now the process
can access the page as though it had always been in memory.
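The six steps can be summarised with a simplified Python sketch (the dictionaries standing in for the kernel's tables, the free-frame list and the fake backing store are all invented for illustration; a real OS does this in the kernel with hardware support):

# Simplified simulation of page-fault handling steps 1-6 above.
valid_pages   = {0, 1, 2, 3}            # pages that belong to the process (step 1 check)
page_table    = {}                      # page number -> frame number (nothing loaded yet)
free_frames   = [7, 8, 9]               # free-frame list (step 3)
backing_store = {p: f"contents of page {p}" for p in valid_pages}   # fake disk
memory        = {}                      # frame number -> contents

def handle_page_fault(page_number):
    # 1-2. Consult the internal table: valid reference or not?
    if page_number not in valid_pages:
        return "invalid reference -> terminate the process"
    # 3. Take a frame from the free-frame list.
    frame = free_frames.pop()
    # 4. "Disk read": bring the desired page into the newly allocated frame.
    memory[frame] = backing_store[page_number]
    # 5. Update the page table: the page is now in memory.
    page_table[page_number] = frame
    # 6. The faulting instruction can now be restarted.
    return f"page {page_number} loaded into frame {frame}"

print(handle_page_fault(2))   # valid page: loaded into a free frame
print(handle_page_fault(9))   # invalid page: process would be terminated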
What is Fragmentation?
Fragmentation is an unwanted problem in the operating system: as processes are loaded into
and unloaded from memory, the free memory space becomes broken up. Processes cannot be
assigned to these memory blocks because the blocks are too small, so the blocks stay unused.
As programs are loaded and removed from memory, they create free spaces, or holes, in the
memory. These small blocks cannot be allotted to newly arriving processes, resulting in
inefficient use of memory.
The extent of fragmentation depends on the memory allocation scheme. As processes are
loaded and unloaded from memory, the free areas are fragmented into small pieces that
cannot be allocated to incoming processes. This is called fragmentation.
Types of Fragmentation
There are mainly two types of fragmentation in the operating system. These are as follows:
1. Internal Fragmentation
2. External Fragmentation
Internal Fragmentation
When a process is allocated a memory block and the process is smaller than that block, some
free space is left inside the block. This free space in the memory block remains unused,
which causes internal fragmentation.
For Example:
Assume that memory allocation in RAM is done using fixed partitioning (i.e., memory blocks of
fixed sizes). 2MB, 4MB, 4MB, and 8MB are the available sizes. The Operating System uses a
part of this RAM.
Let's suppose a process P1 with a size of 3MB arrives and is given a memory block of 4MB. As
a result, the 1MB of free space in this block is unused and cannot be used to allocate memory to
another process. It is known as internal fragmentation.
External Fragmentation
External fragmentation happens when a dynamic memory allocation method allocates some
memory but leaves a small amount of memory unusable. The quantity of available memory is
substantially reduced if there is too much external fragmentation. There is enough memory
space to complete a request, but it is not contiguous. It's known as external fragmentation.
For Example:
Let's take an example of external fragmentation. In the above diagram, you can see that there
is sufficient space (50 KB) to run process P5 (which needs 45 KB), but the memory is not
contiguous. Compaction, paging, or segmentation can be used to make this free space usable
for a process.
Difference between Contiguous and Non-Contiguous Memory Allocation
1. Contiguous memory allocation includes single-partition allocation and multi-partition
allocation, whereas non-contiguous memory allocation includes paging and segmentation.
2. In contiguous allocation, the operating system generally maintains one table that lists all
available and occupied partitions in the memory space; in non-contiguous allocation, a
table has to be maintained for each process, carrying the base address of each block that
the process has acquired in memory.
3. There is wastage of memory in contiguous memory allocation, while there is no such
wastage in non-contiguous memory allocation.
4. In contiguous allocation, swapped-in processes are placed back in the originally allocated
space; in non-contiguous allocation, swapped-in processes can be placed anywhere in
memory.
Thrashing:
Thrashing is when page faults and swapping happen very frequently, at a high rate, so the
operating system has to spend more time swapping these pages than doing useful work. This
state is called thrashing.
The basic concept involved is that if a process is allocated too few frames, then there will be too
many and too frequent page faults. As a result, no valuable work would be done by the CPU,
and the CPU utilization would fall drastically.
The long-term scheduler would then try to improve CPU utilization by loading some more
processes into memory, thereby increasing the degree of multiprogramming. Unfortunately,
this results in a further decrease in CPU utilization, triggering a chain reaction of more page
faults followed by further increases in the degree of multiprogramming; this is called thrashing.
Working Set:
In operating systems, the working set of a process is the set of pages in memory that are
actively used by the process. This includes code, data, and stack pages. The working set
is used by the operating system to determine the memory usage of a process and to
determine which pages should be paged out to disk if there is a shortage of free memory.
The working set can change over time as the process executes and different parts of its
memory are accessed.
This model is based on the concept of locality, described under "Locality" below.
The basic principle states that if we allocate enough frames to a process to accommodate
its current locality, it will fault only when it moves to some new locality. But if the
allocated frames are fewer than the size of the current locality, the process is bound to
thrash.
According to this model, based on parameter A, the working set is defined as the set of
pages in the most recent ‘A’ page references. Hence, all the actively used pages would
always end up being a part of the working set.
The accuracy of the working set is dependent on the value of parameter A. If A is too
large, then working sets may overlap. On the other hand, for smaller values of A, the
locality might not be covered entirely.
If D is the total demand for frames and WSSi is the working set size for process i, then
D = Σ WSSi (the sum of the working-set sizes of all processes in the system).
Now, if 'm' is the number of frames available in the memory, there are 2 possibilities:
(i) D > m, i.e. total demand exceeds the number of frames; thrashing will occur, as some
processes would not get enough frames.
(ii) D <= m; then there would be no thrashing.
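For example (assumed numbers): with three processes whose working-set sizes are WSS1 = 4, WSS2 = 6 and WSS3 = 5, the total demand is D = 4 + 6 + 5 = 15 frames. If the memory has m = 12 frames, then D > m and thrashing will occur; if it has m = 16 frames, then D <= m and there is no thrashing.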
Locality:
A locality is a set of pages that are actively used together. The locality model states that as a
process executes, it moves from one locality to another. A program is generally composed of
several different localities which may overlap.
For example, when a function is called, it defines a new locality in which memory references
are made to the instructions of the function, its local and some global variables, etc. Similarly,
when the function is exited, the process leaves this locality.
Garbage Collection:
Garbage collection is a function of an operating system or programming language that
reclaims memory no longer in use. For example, Java and .NET have built-in garbage
collection, but C and C++ do not, and programmers have to write the code to allocate and
deallocate, which is tedious and error prone.
Garbage Collection (GC) is a memory management technique frequently used in high-level
languages that allows the programmer not to worry about when memory areas should be
returned to the system. Virtually all the object-oriented languages introduced
after C++ provide some way of garbage collection (including Python, Java, Objective C). It
can also be found in LISP or PERL, for instance.
The most basic implementation of garbage collection uses reference counting: each object is
associated with a counter that tells how many other objects refer to it. Say for instance you
have a Disk object, every time your system needs another reference to that object (for
instance because the Disk Partition object has a reference to the parent Disk object), the
reference counter of Disk is incremented. Of course, if programmed manually, this is
tedious and bug-prone (although in C++ at least, the use of smart pointers can automate this
bookkeeping). Moreover, GCs based solely on reference counting are unable to free self-
referencing (or circular) structures, meaning that memory leaks are possible.
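A toy Python sketch of reference counting (the RefCounted class and the explicit incref/decref calls are invented for illustration; real run-times do this bookkeeping automatically):

# Toy reference counting: each object keeps a count of how many references
# point to it; when the count drops to zero it is reclaimed.
class RefCounted:
    def __init__(self, name):
        self.name, self.refcount, self.refs = name, 0, []

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            print(f"{self.name} reclaimed")
            for child in self.refs:        # releasing this object releases what it referenced
                child.decref()

disk = RefCounted("Disk")
partition = RefCounted("DiskPartition")
partition.refs.append(disk); disk.incref()     # the partition refers to its parent disk
partition.incref()                             # something else references the partition
partition.decref()                             # last reference dropped -> both objects reclaimed

# Weakness: two objects that refer to each other never reach refcount 0, so a
# collector based purely on reference counting leaks such circular structures.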
Mark & Sweep
A mark-sweep garbage collector traverses all reachable objects in the heap by following pointers
beginning with the "roots", i.e. pointers stored in statically allocated or stack allocated program
variables (and possibly registers as well, depending on the GC implementation). All such
reachable objects are marked. A sweep over the entire heap is then performed to restore
unmarked objects to a free list, so they can be reallocated.
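A compact Python sketch of mark-and-sweep (the heap graph and roots below are invented for illustration):

# Mark & sweep: mark everything reachable from the roots, then sweep
# (free) every heap object that was not marked.
heap = {                                # object name -> names it points to
    "A": ["B"],
    "B": [],
    "C": ["D"],                         # C and D refer only to each other:
    "D": ["C"],                         # unreachable from the roots -> garbage
}
roots = ["A"]                           # pointers held in globals / on the stack

def mark(obj, marked):
    if obj not in marked:
        marked.add(obj)
        for child in heap[obj]:
            mark(child, marked)

marked = set()
for r in roots:
    mark(r, marked)

garbage = [obj for obj in heap if obj not in marked]
print("reachable:", sorted(marked))     # ['A', 'B']
print("swept:", sorted(garbage))        # ['C', 'D'] - circular garbage is collected too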
Page replacement is needed in the operating systems that use virtual memory using
Demand Paging. As we know in Demand paging, only a set of pages of a process is
loaded into the memory. This is done so that we can have more processes in the memory
at the same time.
When a page that is residing in virtual memory is requested by a process for its
execution, the Operating System needs to decide which page will be replaced by this
requested page. This process is known as page replacement and is a vital component in
virtual memory management.
There are three main page replacement algorithms:
1. FIFO (First In First Out)
2. Optimal
3. LRU (Least Recently Used)
1) FIFO:
The FIFO algorithm is the simplest of all the page replacement algorithms. In this, we
maintain a queue of all the pages that are in the memory currently. The oldest page in the
memory is at the front end of the queue and the most recent page is at the back or rear
end of the queue.
Whenever a page fault occurs, the operating system looks at the front end of the queue to
know the page to be replaced by the newly requested page. It also adds this newly
requested page at the rear end and removes the oldest page from the front end of the
queue.
Example: Consider the page reference string as 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames.
Let’s try to find the number of page faults:
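A short Python sketch that simulates FIFO replacement for this reference string with 3 frames; assuming no page is loaded initially, it reports 7 page faults:

from collections import deque

# FIFO page replacement: the frame queue holds pages in arrival order;
# on a fault with all frames full, the page at the front (oldest) is evicted.
def fifo_page_faults(reference_string, num_frames):
    frames, faults = deque(), 0
    for page in reference_string:
        if page in frames:                 # page hit: nothing to do
            continue
        faults += 1                        # page fault
        if len(frames) == num_frames:      # memory full: evict the oldest page
            frames.popleft()
        frames.append(page)                # the newly loaded page goes to the rear
    return faults

print(fifo_page_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))   # 7 page faults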