
Memory Management in Operating System

The term memory can be defined as a collection of data in a specific format. It is used to store instructions and processed data. Memory comprises a large array or group of words or bytes, each with its own address. The primary purpose of a computer system is to execute programs. These programs, along with the information they access, must be in main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.

To achieve a degree of multiprogramming and proper utilization of memory, memory management is important.

What is Main Memory?

Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging in size from hundreds of thousands to billions, and serves as a repository of rapidly available information shared by the CPU and I/O devices. Main memory is where programs and data are kept while the processor is actively using them. Because main memory is closely associated with the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when power is interrupted.

In a multiprogramming computer, the operating system resides in a part of
memory and the rest is used by multiple processes. The task of subdividing the
memory among different processes is called memory management. Memory
management is a method in the operating system to manage operations between main
memory and disk during process execution. The main aim of memory management is
to achieve efficient utilization of memory.

Why Memory Management is Required

 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation.
 To make proper utilization of main memory.
 To maintain data integrity while a process executes.

Memory Allocation Policies

Fixed Partitioning

The earliest and one of the simplest techniques for loading more than one process into main memory is fixed partitioning, also called contiguous memory allocation.

In this technique, main memory is divided into partitions of equal or different sizes. The operating system always resides in the first partition, while the other partitions can be used to store user processes. Memory is assigned to processes in a contiguous way.

In fixed partitioning,

 The partitions cannot overlap.
 A process must be contiguously present in a partition for execution.

This technique has several drawbacks.

1. Internal Fragmentation
If the size of the process is less than the total size of the partition, part of the partition is wasted and remains unused. This wasted memory within a partition is called internal fragmentation.

As shown in the image below, a 4 MB partition is used to load only a 3 MB process, and the remaining 1 MB is wasted.

2. External Fragmentation

The total unused space across the various partitions cannot be used to load a process, even though enough space is available, because it is not contiguous.

As shown in the image below, the remaining 1 MB of each partition cannot be combined to store a 4 MB process. Even though sufficient space is available in total, the process cannot be loaded.

3. Limitation on the size of the process

If a process is larger than the largest partition, it cannot be loaded into memory at all. Fixed partitioning therefore imposes a hard limit on process size: no process can be larger than the largest partition.
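The drawbacks above can be sketched in a few lines of Python. The partition and process sizes here are illustrative, not taken from the text, and the allocator simply places a process in the first free partition large enough for it.

```python
def allocate_fixed(partitions, process_size):
    """Load a process into the first free partition large enough for it.

    partitions: list of dicts {"size": MB, "used": MB or None}.
    Returns the partition index, or None if the process cannot be loaded.
    """
    for i, part in enumerate(partitions):
        if part["used"] is None and part["size"] >= process_size:
            part["used"] = process_size
            return i
    return None

partitions = [{"size": 4, "used": None} for _ in range(3)]  # three 4 MB partitions

idx = allocate_fixed(partitions, 3)                  # fits in partition 0 ...
wasted = partitions[idx]["size"] - partitions[idx]["used"]
print(wasted)                                        # 1 -> 1 MB internal fragmentation
print(allocate_fixed(partitions, 5))                 # None -> larger than any partition
```

The second call fails even though 3 MB of total free space exists across the used partitions: the process size limitation and the fragmentation both fall out of the fixed partition boundaries.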

Dynamic Partitioning

Dynamic partitioning tries to overcome the problems caused by fixed partitioning.

In this technique, partition sizes are not declared initially; each partition is created at the time of process loading.

The first partition is reserved for the operating system. The remaining space is divided into partitions on demand, each exactly the size of the process it holds. Because the partition size matches the need of the process, internal fragmentation is avoided.
Advantages of Dynamic Partitioning over fixed partitioning

1. No Internal Fragmentation
Because partitions in dynamic partitioning are created exactly according to the need of each process, there is no internal fragmentation: no unused space remains inside a partition.
2. No Limitation on the size of the process
In fixed partitioning, a process larger than the largest partition could not be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, process size is not restricted by a fixed partition size, since the partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic
With no internal fragmentation, no space inside partitions goes unused, so more processes can be loaded into memory at the same time.
Disadvantages of dynamic partitioning
External Fragmentation
The absence of internal fragmentation does not mean there will be no external fragmentation: as processes terminate, the freed partitions leave holes scattered across memory that may not be contiguous.
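A tiny sketch of external fragmentation under dynamic partitioning. The hole positions and sizes below are made up for illustration: after several processes have come and gone, free memory is split into non-contiguous holes, so a request can fail even when enough total memory is free.

```python
# Free regions left after some processes terminated: (start, length) in MB.
holes = [(0, 2), (5, 3), (12, 4)]

total_free = sum(length for _, length in holes)
largest_hole = max(length for _, length in holes)

print(total_free)    # 9 -> 9 MB free in total
print(largest_hole)  # 4 -> a 6 MB process cannot be loaded despite 9 MB free
```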

Methods Involved in Memory Management

There are various methods with whose help memory management can be done intelligently by the operating system:

Partitioning Algorithms
The operating system implements various algorithms to find holes (free blocks of memory) in the linked list and allocate them to processes.

1. First Fit Algorithm

The First Fit algorithm scans the linked list and, as soon as it finds the first hole big enough to store the process, stops scanning and loads the process into that hole. This procedure splits the chosen hole into two partitions: one stores the process, while the other remains a (smaller) hole.

First Fit maintains the linked list in increasing order of starting address. It is the simplest of these algorithms to implement, and it tends to leave bigger leftover holes than the other algorithms.
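A minimal first-fit sketch. It assumes the free list is represented as (start, length) pairs sorted by start address; splitting the chosen hole models the two partitions described above.

```python
def first_fit(free_holes, size):
    """Allocate `size` units from the first hole large enough.

    free_holes: list of (start, length) pairs, sorted by starting address.
    Returns the allocated start address, or None if no hole fits.
    """
    for i, (start, length) in enumerate(free_holes):
        if length >= size:
            if length > size:
                free_holes[i] = (start + size, length - size)  # leftover hole
            else:
                free_holes.pop(i)                              # exact fit
            return start
    return None

holes = [(0, 2), (10, 6), (30, 4)]
addr = first_fit(holes, 3)
print(addr)    # 10 -- the first hole big enough, even though (30, 4) fits more snugly
print(holes)   # [(0, 2), (13, 3), (30, 4)]
```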

2. Next Fit Algorithm

The Next Fit algorithm is similar to First Fit, except that Next Fit resumes scanning the linked list from the node where it previously allocated a hole.

Next Fit does not rescan the whole list from the beginning; it starts scanning from the next node. The idea is that the beginning of the list has already been scanned, so a suitable hole is more likely to be found in the remaining part of the list.

Experiments have shown that Next Fit performs no better than First Fit, so it is rarely used in practice.
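Next fit differs from first fit only in where scanning starts. The sketch below keeps a cursor into the same (start, length) free-list representation used above and wraps around the list once; the example shows an allocation landing past a hole that first fit would have chosen.

```python
class NextFit:
    """First fit that resumes scanning where the last allocation stopped."""

    def __init__(self, holes):
        self.holes = list(holes)   # (start, length) pairs, sorted by start
        self.pos = 0               # index to resume scanning from

    def allocate(self, size):
        n = len(self.holes)
        for k in range(n):
            i = (self.pos + k) % n            # wrap around the list once
            start, length = self.holes[i]
            if length >= size:
                if length > size:
                    self.holes[i] = (start + size, length - size)
                    self.pos = i
                else:
                    self.holes.pop(i)
                    self.pos = i % max(len(self.holes), 1)
                return start
        return None

nf = NextFit([(0, 4), (10, 10)])
a1 = nf.allocate(6)   # 10 -- only the second hole fits
a2 = nf.allocate(3)   # 16 -- resumes after the last allocation
                      #       (first fit would have picked address 0)
```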

3. Best Fit Algorithm

The Best Fit algorithm tries to find the smallest hole in the list that can accommodate the size requirement of the process.

It is slower because it scans the entire list every time, looking for the smallest hole that satisfies the process's requirement.

Because the difference between the chosen hole's size and the process size is very small, the leftover holes produced are often so small that they cannot be used to load any process, and therefore remain useless.

Although the algorithm is named best fit, it is not the best algorithm of them all.
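A best-fit sketch over the same (start, length) free-list representation used above. The leftover sliver in the example illustrates the drawback just described.

```python
def best_fit(free_holes, size):
    """Allocate `size` units from the smallest hole that can hold them."""
    best = None
    for i, (_, length) in enumerate(free_holes):
        if length >= size and (best is None or length < free_holes[best][1]):
            best = i                # remember the tightest fit so far
    if best is None:
        return None                 # no hole is large enough
    start, length = free_holes[best]
    if length > size:
        free_holes[best] = (start + size, length - size)
    else:
        free_holes.pop(best)
    return start

holes = [(0, 8), (10, 4), (20, 6)]
addr = best_fit(holes, 3)
print(addr)    # 10 -- the smallest hole that fits, not the first
print(holes)   # [(0, 8), (13, 1), (20, 6)] -- a useless 1-unit sliver remains
```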
