COM 321|| LECTURE TWO (2)

Fixed Partitioning
The earliest and one of the simplest techniques that can be used to load more than one process into
the main memory is fixed partitioning or contiguous memory allocation.
This technique divides the main memory into partitions of equal or different sizes. The operating
system always resides in the first partition while the other partitions can be used to store user
processes. The memory is assigned to the processes in a contiguous way.
In fixed partitioning,
1. The partitions cannot overlap.
2. A process must be contiguously present in a partition for the execution.
This technique has several drawbacks.
1. Internal Fragmentation
If a process is smaller than the partition it is loaded into, the leftover space in that partition is wasted and remains unused. This wasted memory is called internal fragmentation.
For example, if a 4 MB partition holds only a 3 MB process, the remaining 1 MB is wasted.
2. External Fragmentation
The unused space spread across different partitions cannot be combined to load a process: space may be available in total, but not in contiguous form.
For example, the 1 MB left over in each partition cannot be pooled to store a 4 MB process. Even though sufficient total space is available, the process will not be loaded.
3. Limitation on the size of the process
If a process is larger than the largest partition, it cannot be loaded into memory at all. Fixed partitioning therefore imposes a hard limit: no process can be larger than the largest partition.
4. The degree of multiprogramming is low
The degree of multiprogramming is the maximum number of processes that can be loaded into memory at the same time. In fixed partitioning it is fixed and low, because the number of partitions is fixed and partition sizes cannot be varied to match the sizes of the processes.
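The drawbacks above can be illustrated with a small allocation sketch (the partition and process sizes below are hypothetical, chosen to mirror the 4 MB / 3 MB example):

```python
def fixed_partition_allocate(partitions, processes):
    """Assign each process to the first free partition large enough for it.

    partitions: list of fixed partition sizes in MB.
    processes: list of (name, size) pairs.
    Returns (assignments, wasted), where wasted is the total internal
    fragmentation in MB and an assignment of None means the process
    could not be loaded.
    """
    free = list(partitions)           # None marks an occupied partition
    assignments = {}
    wasted = 0
    for name, size in processes:
        for i, cap in enumerate(free):
            if cap is not None and cap >= size:
                assignments[name] = i
                wasted += cap - size  # unused space inside the partition
                free[i] = None
                break
        else:
            assignments[name] = None  # no partition can hold this process
    return assignments, wasted

# Three 4 MB partitions: a 3 MB process wastes 1 MB (internal
# fragmentation), and a 5 MB process cannot be loaded at all.
assign, wasted = fixed_partition_allocate([4, 4, 4], [("P1", 3), ("P2", 5)])
```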

Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this technique,
the partition size is not declared initially. It is declared at the time of process loading.
The first partition is reserved for the operating system. The remaining space is divided into parts. The
size of each partition will be equal to the size of the process. The partition size varies according to
the need of the process so that internal fragmentation can be avoided.

Advantages of Dynamic Partitioning over fixed partitioning
1. No Internal Fragmentation
Because each partition is created to match the size of the process it holds, there is no unused leftover space inside a partition, and hence no internal fragmentation.
2. No Limitation on the size of the process
In fixed partitioning, a process larger than the largest partition could not be executed for lack of sufficient contiguous memory. In dynamic partitioning there is no such restriction, since the partition size is set according to the process size.
3. Degree of multiprogramming is dynamic
Because there is no internal fragmentation, no partition space sits unused, so more processes can be loaded into memory at the same time.
Disadvantages of dynamic partitioning
External Fragmentation
The absence of internal fragmentation does not mean there will be no external fragmentation.
Consider three processes P1 (1 MB), P2 (3 MB), and P3 (1 MB) loaded into their respective partitions of main memory.
After some time, P1 and P3 complete and their space is freed. There are now two unused partitions (1 MB and 1 MB) in main memory, but they cannot be used to load a 2 MB process because they are not contiguous.
The rule says that the process must be contiguously present in the main memory to get executed. We
need to change this rule to avoid external fragmentation.

Compaction
Compaction in computer memory is a process used in memory management to combat fragmentation
and optimize the use of available memory. Fragmentation occurs when free memory is divided into
small, non-contiguous blocks, making it difficult to allocate large contiguous blocks of memory to
programs or processes. Compaction addresses this issue by rearranging the contents of memory to
create larger contiguous blocks of free space.
There are two types of fragmentation that compaction aims to resolve:
1. External Fragmentation: Occurs when free memory is scattered in small blocks throughout
the memory space. Even if there is enough total free memory to satisfy a request, the allocation
may fail because the free memory is not contiguous.
2. Internal Fragmentation: Happens when allocated memory blocks are larger than needed,
leaving unused space within these blocks. Compaction does not typically address internal
fragmentation directly but focuses on external fragmentation.

After compaction, a process P5 that previously could not be loaded for lack of contiguous space can be loaded, since the free partitions have been merged into one contiguous block.
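A minimal sketch of compaction (the memory layout below is hypothetical): allocated blocks are slid together so the scattered holes merge into one contiguous hole.

```python
def compact(memory):
    """Slide all allocated blocks to the low end of memory so the free
    space becomes one contiguous hole.

    memory: list of (owner, size) pairs in address order, where owner is
    a process name or None for a hole. Returns the compacted layout.
    """
    used = [(owner, size) for owner, size in memory if owner is not None]
    free_total = sum(size for owner, size in memory if owner is None)
    return used + ([(None, free_total)] if free_total else [])

# P1 and P3 finished, leaving two separate 1 MB holes around P2.
layout = [(None, 1), ("P2", 3), (None, 1)]
compacted = compact(layout)
# After compaction the two 1 MB holes merge into a single 2 MB hole,
# so a 2 MB process can now be loaded.
```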

Partitioning Algorithms
The operating system keeps track of the free holes in memory (typically in a linked list) and uses one of several algorithms to choose a hole for each process.
Each algorithm is explained below.
1. First Fit Algorithm
The first fit algorithm scans the linked list and, as soon as it finds the first hole big enough to store the process, stops scanning and loads the process into that hole. This splits the hole into two parts: one part stores the process and the remainder becomes a smaller hole.
First fit maintains the linked list in increasing order of starting address. It is the simplest of these algorithms to implement and tends to leave bigger holes than the other algorithms.
2. Next Fit Algorithm
The next fit algorithm is similar to first fit, except that it resumes scanning the linked list from the node where it last allocated a hole rather than from the beginning.
The idea is that, since the front of the list has already been scanned, a suitable hole is more likely to be found in the remaining part of the list.
Experiments have shown that next fit performs no better than first fit, so it is rarely used in practice.
3. Best Fit Algorithm
The Best Fit algorithm tries to find out the smallest hole possible in the list that can accommodate the
size requirement of the process.
Best fit has some disadvantages:
1. It is slower, because it scans the entire list on every allocation to find the smallest hole that satisfies the request.
2. Since the difference between the hole size and the process size is very small, the leftover holes produced are often too small to hold any process and remain useless. Despite its name, best fit is not the best algorithm of the group.
4. Worst Fit Algorithm
The worst fit algorithm scans the entire list on every allocation and picks the biggest hole that can satisfy the request.
Although this leaves larger leftover holes for other processes, it is not a better approach: it is slow, because it searches the whole list again and again.
5. Quick Fit Algorithm
The quick fit algorithm maintains separate lists of holes of frequently requested sizes. It is rarely practical, however, because maintaining the separate lists and splitting or merging holes adds significant overhead.
Among these, first fit is generally considered the best because:
1. It takes less time than the other algorithms.
2. It produces bigger leftover holes that can be used to load other processes later on.
3. It is the easiest to implement.
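The first, best, and worst fit policies can be sketched as simple scans over a list of hole sizes (the hole sizes below are hypothetical; next fit would additionally remember where the previous scan stopped, and quick fit would keep separate per-size lists):

```python
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole large enough, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole large enough, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [200, 80, 300]  # hypothetical free hole sizes in KB, address order
choice_ff = first_fit(holes, 50)   # stops at the first hole that fits
choice_bf = best_fit(holes, 50)    # scans everything, picks the tightest fit
choice_wf = worst_fit(holes, 50)   # scans everything, picks the biggest hole
```

Note that first fit can stop early, while best and worst fit must always examine every hole, which is the speed difference the text describes.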

Some questions on best fit and first fit


From the GATE point of view, numerical questions on best fit and first fit are frequently asked for one mark. Let's look at one such question.
Q. Process requests arrive in the order:
25 K, 50 K, 100 K, 75 K

Determine the algorithm which can optimally satisfy this requirement.


1. First Fit algorithm
2. Best Fit Algorithm
3. Neither of the two
4. Both of them
In this question, the memory has five partitions: three hold processes and two are free holes (of 75 K and 175 K, as used in the walkthrough below).
Our task is to determine which algorithm satisfies the requests optimally.

Using First Fit algorithm


Let's see how the first fit algorithm works on this problem.
1. 25 K requirement
The algorithm scans the list until it finds the first hole big enough to satisfy the 25 K request. It finds the free 75 K second partition, allocates 25 K of it to the process, and the remaining 50 K becomes a hole.

2. 50 K requirement

The 50 K requirement is fulfilled by allocating the 50 K hole to the process. Since this is an exact fit, no leftover hole is produced.

3. 100 K requirement
The 100 K requirement is fulfilled using the 175 K fifth partition: 100 K is allocated and the remaining 75 K is left as a hole.

4. 75 K requirement
Since a 75 K hole is now free, it can be allocated to the process demanding exactly 75 K.

Using the first fit algorithm, every request is fulfilled optimally and no unusable space remains.
Let's see how the best fit algorithm performs on the same problem.

Using Best Fit Algorithm


1. 25 K requirement

To allocate 25 K using the best fit approach, the whole list must be scanned; the free 75 K partition is found to be the smallest hole that can accommodate the request. Therefore 25 K of the 75 K partition is allocated to the process, and the remaining 50 K becomes a hole.

2. 50 K requirement
To satisfy this request, the whole list is scanned again, and the free 50 K hole is found to be an exact match. It is allocated to the process.

3. 100 K requirement
For the 100 K request, the algorithm scans the whole list and allocates 100 K out of the 175 K fifth free partition, leaving a 75 K hole.

4. 75 K requirement

The 75 K request gets the 75 K free hole, but again the algorithm scans the whole list before making this decision.

Following both algorithms, we see that they perform similarly to a large extent in this case.
Both can satisfy all the requests, but best fit scans the entire list again and again, which takes a lot of time.
Therefore, first fit performs in the more optimal way, and the answer in this case is A.
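The walkthroughs above can be replayed in code. The original figure is not reproduced here, so the two initial holes are assumed to be 75 K and 175 K in address order (the sizes the walkthrough uses); leftover space after an allocation stays behind as a smaller hole.

```python
def satisfy_all(holes, requests, policy):
    """Serve each request from the hole chosen by `policy`.

    Leftover space stays in place as a smaller hole.
    Returns True if every request can be satisfied.
    """
    holes = list(holes)
    for r in requests:
        fits = [(h, i) for i, h in enumerate(holes) if h >= r]
        if not fits:
            return False
        if policy == "first":
            i = min(i for h, i in fits)   # lowest address that fits
        else:                             # "best": smallest hole that fits
            i = min(fits)[1]
        holes[i] -= r
    return True

requests = [25, 50, 100, 75]
holes = [75, 175]   # assumed free holes, in address order
first_ok = satisfy_all(holes, requests, "first")
best_ok = satisfy_all(holes, requests, "best")
```

With these holes both policies satisfy all four requests; the practical difference is that best fit rescans the whole list on every request, which is why first fit is judged the more efficient choice here.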

Paging
Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory and thus eliminates the problems of fitting varying-sized memory chunks onto the
backing store.

Virtual memory
Virtual memory is a memory management technique that creates an illusion of a large, contiguous
memory space for programs, even if the physical memory (RAM) is limited. It allows the operating
system to use disk storage to extend the apparent size of RAM, thereby enabling programs to use
more memory than physically available.

Locality of reference

Locality of reference is a key concept in computer systems and memory management that describes
how programs tend to access a limited range of memory locations repetitively. This principle
underpins many memory optimization techniques and helps improve system performance. It is
commonly divided into two main types:


1. Temporal Locality: This refers to the tendency of a program to access the same memory
location repeatedly within a short time frame. For example, if a function repeatedly accesses
the same variables or instructions, it exhibits temporal locality. Caches are designed to take
advantage of temporal locality by keeping recently accessed data close to the processor,
reducing the need to fetch it from slower main memory.

2. Spatial Locality: This refers to the tendency of a program to access memory locations that are
close to each other. For instance, if a program is accessing a sequential array or a contiguous
block of memory, it exhibits spatial locality. Memory systems leverage spatial locality by
loading not just the single requested item but also adjacent memory blocks into the cache,
anticipating that nearby memory locations will be accessed soon.
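Python does not expose the cache directly, but the two access patterns can at least be illustrated: in CPython a list's pointer array is contiguous, so row-by-row traversal of a nested list touches adjacent addresses (spatial locality), while the loop machinery reused on every iteration stays cache-hot (temporal locality). This is an illustrative sketch, not a benchmark.

```python
# A 4x4 grid stored as a nested list; grid[r] holds row r contiguously.
grid = [[r * 4 + c for c in range(4)] for r in range(4)]

# Row-major traversal: consecutive accesses hit adjacent elements of the
# same row, exhibiting spatial locality.
row_major = [grid[r][c] for r in range(4) for c in range(4)]

# Column-major traversal visits the same values but jumps between rows on
# every step, so consecutive accesses are far apart in memory.
col_major = [grid[r][c] for c in range(4) for r in range(4)]
```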

Virtual Memory in Operating Systems (OS)


Virtual memory is a storage scheme that gives the user the impression of a very large main memory. This is accomplished by treating a portion of secondary memory as if it were main memory.
By giving the user the impression that enough memory is available to load the process, this approach allows programs larger than the available primary memory to be loaded.
Instead of loading one large process entirely, the operating system loads parts of several processes into main memory.
This raises the degree of multiprogramming, which in turn increases CPU utilization.
Demand Paging
Demand paging is the technique by which virtual memory is implemented. The pages of a process are stored in secondary memory, and a page is brought into main memory only when it is required. Since we cannot know in advance when a page will be needed, pages are fetched on demand, and page replacement algorithms decide which resident page to evict when memory is full.
This process of bringing pages from secondary memory into main memory on demand is known as demand paging.

Virtual memory management in operating systems involves two important jobs:
o Frame Allocation
o Page Replacement.
Frame Allocation in Virtual Memory
Demand paging is used to implement virtual memory, an essential component of operating systems.
A page-replacement mechanism and a frame allocation algorithm must be created for demand paging.
If you have numerous processes, frame allocation techniques are utilized to determine how many
frames to provide to each process.
Each frame corresponds to a fixed-size block of physical memory, identified by a physical address; a page must be mapped into a frame before the CPU can access it.
Frame Allocation Constraints
o The number of frames allocated cannot exceed the total number of frames available.
o Each process must be given some minimum number of frames.
o When fewer frames are allocated, the page fault rate rises and process execution becomes less efficient.
o There must be enough frames to hold all the pages that a single instruction may reference.
Frame Allocation Algorithms
There are three types of Frame Allocation Algorithms in Operating Systems. They are:
1) Equal Frame Allocation Algorithms
In this algorithm, the available frames are divided equally among the processes: the total number of frames is divided by the number of processes, and each process receives that share.

For example, with 36 frames and 6 processes, each process is allocated 6 frames.
Assigning equal frames is not very logical in systems with processes of different sizes: many frames given to a small process will simply sit allocated but unused.
2) Proportionate Frame Allocation Algorithms
In this algorithm, the number of frames allocated depends on process size: larger processes receive more frames and smaller processes fewer.
A drawback of proportional frame allocation is that some frames can still be wasted in rare cases (for example, through rounding).
Its advantage is that, instead of an equal split, each process receives frames according to its demands.
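Both schemes can be sketched in a few lines (the 36-frame example is from the text; the process sizes in the proportional example are hypothetical):

```python
def equal_allocation(total_frames, n_processes):
    """Every process gets the same share of frames."""
    return total_frames // n_processes

def proportional_allocation(total_frames, sizes):
    """Each process gets frames in proportion to its size (floored)."""
    total_size = sum(sizes)
    return [size * total_frames // total_size for size in sizes]

# Equal allocation: 36 frames and 6 processes -> 6 frames per process.
equal_share = equal_allocation(36, 6)

# Proportional allocation: 62 frames split between processes of 10 and
# 127 pages. Flooring loses one frame (4 + 57 = 61), which illustrates
# how a few frames can be wasted in rare cases.
shares = proportional_allocation(62, [10, 127])
```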
3) Priority Frame Allocation Algorithms
Priority frame allocation distributes frames according to process priority and frame demand. A high-priority process that needs more frames is given additional frames, and high-priority processes are served first, while lower-priority processes are executed later.
Page Replacement Algorithms
There are three types of Page Replacement Algorithms. They are:
o Optimal Page Replacement Algorithm
o First In First Out Page Replacement Algorithm
o Least Recently Used (LRU) Page Replacement Algorithm
First in First out Page Replacement Algorithm
This is the first basic page replacement algorithm. It operates on a fixed number of frames, which are filled with pages on demand (this is demand paging). Once all the frames are full, the real problem starts.
When the next page is referenced, there is no problem if that page is already present in one of the allocated frames.
If the page to be searched is found among the frames then, this process is known as Page Hit.
If the page to be searched is not found among the frames then, this process is known as Page Fault.

When a page fault occurs and all frames are full, the First In First Out (FIFO) page replacement algorithm comes into the picture.
FIFO removes the page that was loaded into a frame the longest time ago. That oldest page is evicted, and the new page from the ready queue takes the freed frame.
Let us understand this First In First Out Page Replacement Algorithm working with the help of an
example.
Example:
Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames, and calculate the number of page faults using the FIFO (First In First Out) page replacement algorithm.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
Reference String:

Number of Page Hits = 8


Number of Page Faults = 12
Explanation
First, the empty frames are filled with the initial pages. Once the frames are full, space must be created for each new page: FIFO removes the page that is oldest among those in the frames, and the new page occupies the freed slot.
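The trace above can be reproduced with a short simulation; the deque holds pages in arrival order, so the left end is always the page loaded longest ago.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Simulate FIFO page replacement; return (hits, faults)."""
    frames = deque()            # oldest page at the left
    hits = faults = 0
    for page in refs:
        if page in frames:
            hits += 1
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()        # evict the page loaded longest ago
            frames.append(page)
    return hits, faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
hits, faults = fifo_faults(refs, 3)
# Matches the result above: 8 hits and 12 faults.
```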

OPTIMAL Page Replacement Algorithm
This is the second basic page replacement algorithm. The setup is the same as before: a fixed number of frames is filled with pages on demand, a reference to a page already in a frame is a page hit, and a reference to a page not in any frame is a page fault.
When a page fault occurs and all frames are full, the OPTIMAL page replacement algorithm comes into the picture.
The OPTIMAL page replacement algorithm works on a simple principle:
Replace the page that will not be used for the longest time in the future.
That is, once all the frames are full, look ahead in the reference string at the pages currently held in the frames, and evict the one whose next use is farthest away.
Example:
Suppose the Reference String is:
0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
Pages 6, 1, and 2 currently occupy the frames.
Now 0 must be brought into a frame by removing one of the resident pages.
We check which resident page is referenced farthest in the future: in the subsequence 0, 3, 4, 6, 0, 2, 1, page 1 is the last of the resident pages to occur. So 0 is placed in the frame by removing 1.
Let us understand this OPTIMAL Page Replacement Algorithm working with the help of an example.
Example:
Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0 for a memory with three frames, and calculate the number of page faults using the OPTIMAL page replacement algorithm.
Points to Remember
Page Not Found - - - > Page Fault

Page Found - - - > Page Hit
Reference String:

Number of Page Hits = 9

Number of Page Faults = 11
The Ratio of Page Hit to the Page Fault = 9 : 11 - - - > 0.82
The Page Hit Percentage = 9 * 100 / 20 = 45%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 45 = 55%
Explanation
First, the empty frames are filled with the initial pages. The real work begins when there is no free frame left for a new page: as stated above, we replace the page that will not be used for the longest time in the future.
A natural question arises: what if a page currently in a frame never appears again in the reference string?
Suppose the reference string is:
0, 2, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
and pages 6, 1, 5 occupy the frames.
Page number 5 is not present in the reference string at all, so its next use is infinitely far in the future. It is therefore the ideal victim: we remove it when a replacement is required, and another page can occupy its position.
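As a check, the walkthrough can be simulated. Simulating this reference string with three frames yields 9 hits and 11 faults (the eviction at the end is a tie between pages that never recur, but the fault count is unaffected by how the tie is broken).

```python
def opt_faults(refs, n_frames):
    """Simulate optimal (Belady) page replacement; return (hits, faults)."""
    frames = []
    hits = faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            # Evict the resident page whose next use is farthest away;
            # a page never referenced again has next use "infinity".
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return hits, faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
hits, faults = opt_faults(refs, 3)
# 9 hits and 11 faults for this string with three frames.
```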
Least Recently Used (LRU) Replacement Algorithm
This is the last basic page replacement algorithm. Again the setup is the same: a fixed number of frames is filled with pages on demand, a reference to a resident page is a page hit, and a reference to a non-resident page is a page fault.
If the page to be searched is found among the frames, then, this process is known as Page Hit.
If the page to be searched is not found among the frames, then, this process is known as Page Fault.
When a page fault occurs and all frames are full, the Least Recently Used (LRU) page replacement algorithm comes into the picture.
The LRU page replacement algorithm works on a simple principle:
Replace the page that has not been used for the longest time in the past, i.e., the least recently used page.

Example:
Suppose the Reference String is:
6, 1, 1, 2, 0, 3, 4, 6, 0
The pages numbered 6, 1, and 2 currently occupy the frames.
Now, we need to free a frame for the page numbered 0.
Travelling back through the past references, 6 is the page in the frames that was used least recently.
So, replace 6 with the page numbered 0.
Let us understand this Least Recently Used (LRU) Page Replacement Algorithm working with the
help of an example.
Example:
Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames, and calculate the number of page faults using the Least Recently Used (LRU) page replacement algorithm.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
Reference String:

Number of Page Hits = 7
Number of Page Faults = 13
The Ratio of Page Hit to the Page Fault = 7 : 13 - - - > 0.54
The Page Hit Percentage = 7 * 100 / 20 = 35%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%
Explanation
First, the empty frames are filled with the initial pages. The real work begins when there is no free frame left for a new page: as stated above, we replace the page that has not been used for the longest time in the past, i.e., the page whose most recent use lies farthest back.
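The LRU trace can be reproduced with a short simulation; the list keeps pages ordered by recency, with the least recently used page at the front.

```python
def lru_faults(refs, n_frames):
    """Simulate LRU page replacement; return (hits, faults)."""
    frames = []                 # least recently used page at index 0
    hits = faults = 0
    for page in refs:
        if page in frames:
            hits += 1
            frames.remove(page)         # refresh this page's recency
            frames.append(page)
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)           # evict the least recently used page
            frames.append(page)
    return hits, faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
hits, faults = lru_faults(refs, 3)
# Matches the result above: 7 hits and 13 faults.
```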
