Memory Management in Operating System
In a multiprogramming computer, the operating system resides in one part of
memory and the rest is shared by multiple processes. The task of subdividing
memory among different processes is called memory management. Memory
management is the mechanism by which the operating system coordinates the
movement of processes between main memory and disk during execution. Its main
aim is to achieve efficient utilization of memory.
Fixed Partitioning
Fixed partitioning, also called contiguous memory allocation, is the earliest
and one of the simplest techniques for loading more than one process into main
memory.
In this technique, main memory is divided into partitions of equal or different
sizes. The operating system always resides in the first partition, while the
other partitions store user processes. Memory is assigned to processes in a
contiguous way.
In fixed partitioning, the following drawbacks arise (a sketch illustrating
them follows the list):

1. Internal Fragmentation

If a process is smaller than the partition it is loaded into, the leftover
space inside that partition is wasted, because no other process can use it.

2. External Fragmentation

The total unused space of the various partitions cannot be used to load a
process, even though enough space is available overall, because it is not
contiguous.

For example, the remaining 1 MB in each partition cannot be combined to store
a 4 MB process. Even though sufficient space is available in total, the
process will not be loaded.

3. Limitation on the size of the process

If a process is larger than the largest partition, it cannot be loaded into
memory at all. Fixed partitioning therefore limits the process size to the
size of the largest partition.
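
To make the drawbacks concrete, here is a minimal Python sketch of fixed
partitioning. It assumes four 4 MB partitions and 3 MB processes so that the
leftover 1 MB per partition matches the figures above; all names and sizes are
illustrative.

PARTITION_SIZE_MB = 4
partitions = [None] * 4  # four equal fixed partitions, all initially free

def load(name, size_mb):
    # Drawback 3: a process larger than the largest partition is rejected.
    if size_mb > PARTITION_SIZE_MB:
        return f"{name}: rejected, larger than the largest partition"
    for i, occupant in enumerate(partitions):
        if occupant is None:
            partitions[i] = (name, size_mb)
            # Drawback 1: the leftover space is internal fragmentation.
            return f"{name}: partition {i}, {PARTITION_SIZE_MB - size_mb} MB wasted"
    # Drawback 2: the leftover pieces cannot be pooled for this process.
    return f"{name}: no free partition"

for p in ["P1", "P2", "P3", "P4"]:
    print(load(p, 3))    # each load wastes 1 MB inside its partition
print(load("P5", 4))     # fails: 4 MB is free in total, but not contiguous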
Dynamic Partitioning
In dynamic partitioning, the first partition is reserved for the operating
system. The remaining space is not divided in advance; instead, each partition
is created at load time with a size equal to the size of the process it holds.
Because the partition size matches the need of the process, internal
fragmentation is avoided.
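
Below is a minimal Python sketch of this idea, assuming 16 MB of memory with
1 MB reserved for the OS; the layout and process sizes are made up for
illustration. Each arriving process gets a partition of exactly its own size.

memory = [("OS", 0, 1)]   # (owner, start, size) in MB; OS in the first partition
free_list = [(1, 15)]     # one free block: starts at 1 MB, 15 MB long

def load(name, size_mb):
    for i, (start, free_size) in enumerate(free_list):
        if free_size >= size_mb:
            memory.append((name, start, size_mb))   # exact-size partition
            free_list[i] = (start + size_mb, free_size - size_mb)
            return start
    return None   # no single free block is large enough

load("P1", 4)
load("P2", 7)
print(memory)      # partition sizes match process sizes exactly
print(free_list)   # [(12, 4)]: the rest stays in one free block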
Advantages of Dynamic Partitioning over Fixed Partitioning
1. No Internal Fragmentation
Since partitions in dynamic partitioning are created to match the exact size
of each process, no unused space remains inside a partition, so there is no
internal fragmentation.
2. No Limitation on the size of the process
In fixed partitioning, a process larger than the largest partition could not
be executed for lack of sufficient contiguous memory. In dynamic partitioning,
the process size need not be restricted, because the partition size is decided
by the process size.
3. Degree of multiprogramming is dynamic
Because there is no internal fragmentation, no space is wasted inside the
partitions, and hence more processes can be loaded into memory at the same
time.
Disadvantages of Dynamic Partitioning

External Fragmentation

The absence of internal fragmentation does not mean there will be no external
fragmentation. As processes terminate, they leave behind holes of varying
sizes scattered through memory; these holes may together be large enough for a
new process, yet the process cannot be loaded because the free space is not
contiguous.
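
A tiny Python illustration of this point, with hole sizes made up for the
example: three scattered holes hold 9 MB in total, yet a 7 MB process cannot
be placed, because no single hole is big enough.

holes_mb = [4, 3, 2]    # free holes left behind by terminated processes
process_mb = 7
print(sum(holes_mb) >= process_mb)              # True: enough memory in total
print(any(h >= process_mb for h in holes_mb))   # False: no contiguous hole fits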
Memory management can be handled intelligently by the operating system with
the help of the following allocation methods:
Partitioning Algorithms
The operating system keeps track of the free holes in a linked list and
implements various algorithms to find a suitable hole and allocate it to a
process.
The First Fit algorithm scans the linked list and, as soon as it finds the
first hole big enough to store the process, stops scanning and loads the
process into that hole. The allocation splits the hole into two parts: one
part stores the process, and the other remains a smaller hole.

First Fit maintains the linked list in increasing order of starting address.
It is the simplest of these algorithms to implement, and it tends to produce
bigger leftover holes than the other algorithms.
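
Here is a minimal Python sketch of First Fit, assuming the free holes are
kept as (start, size) pairs sorted by starting address; the representation is
illustrative, not the only possible one.

def first_fit(holes, size):
    # Scan from the front; stop at the first hole big enough.
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size == size:
                holes.pop(i)                                 # exact fit: hole disappears
            else:
                holes[i] = (start + size, hole_size - size)  # shrink the hole
            return start
    return None   # no hole can satisfy the request

holes = [(0, 5), (10, 12), (30, 4)]
print(first_fit(holes, 6))   # 10: the first hole of size >= 6
print(holes)                 # [(0, 5), (16, 6), (30, 4)]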
The Next Fit algorithm is similar to First Fit, except that it begins scanning
the linked list from the node where the previous allocation was made rather
than from the beginning. The idea behind Next Fit is that, since the front of
the list has already been scanned, a suitable hole is more likely to be found
in the remaining part.

Experiments have shown that Next Fit performs no better than First Fit, so it
is rarely used in practice.
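
A minimal Python sketch of Next Fit under the same hole representation; the
only addition to First Fit is a remembered cursor from which the scan resumes,
wrapping around the list once.

class NextFitAllocator:
    def __init__(self, holes):
        self.holes = holes    # (start, size) pairs sorted by starting address
        self.cursor = 0       # index where the previous scan stopped

    def allocate(self, size):
        n = len(self.holes)
        for step in range(n):
            i = (self.cursor + step) % n        # resume at the cursor, wrap once
            start, hole_size = self.holes[i]
            if hole_size >= size:
                if hole_size == size:
                    self.holes.pop(i)           # exact fit: hole disappears
                    self.cursor = i % max(len(self.holes), 1)
                else:
                    self.holes[i] = (start + size, hole_size - size)
                    self.cursor = i
                return start
        return None

alloc = NextFitAllocator([(0, 8), (20, 8), (40, 8)])
print(alloc.allocate(6))   # 0: the scan starts at the beginning
print(alloc.allocate(6))   # 20: the scan resumes at the hole used last time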
The Best Fit algorithm tries to find the smallest hole in the list that can
satisfy the size requirement of the process.

It is slower than First Fit because it scans the entire list on every
allocation in search of the smallest adequate hole.

Because the difference between the hole size and the process size is very
small, the leftover holes it produces are often too small to load any other
process and therefore remain useless.

Despite its name, Best Fit is not the best algorithm of the three.
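
Finally, a minimal Python sketch of Best Fit under the same hole
representation, with hole sizes made up for the example. Note how the full
scan picks the tightest hole and leaves behind a 1-unit sliver, illustrating
why the leftover holes are often useless.

def best_fit(holes, size):
    best = None
    for i, (start, hole_size) in enumerate(holes):   # always scans the whole list
        if hole_size >= size and (best is None or hole_size < holes[best][1]):
            best = i                                 # tightest hole so far
    if best is None:
        return None
    start, hole_size = holes[best]
    if hole_size == size:
        holes.pop(best)
    else:
        holes[best] = (start + size, hole_size - size)   # tiny leftover sliver
    return start

holes = [(0, 10), (20, 7), (40, 15)]
print(best_fit(holes, 6))   # 20: the size-7 hole is the tightest fit
print(holes)                # [(0, 10), (26, 1), (40, 15)]: a useless 1-unit sliver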