Memory Management in Operating System - GeeksforGeeks
Memory can be defined as a collection of data in a specific format. It is used to store instructions and process data. Memory comprises a large array of words or bytes, each with its own address. The primary purpose of a computer system is to execute programs, and these programs, along with the data they access, must be in main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.
Before discussing loading and linking, it helps to distinguish the logical address space (the addresses generated by the CPU) from the physical address space (the addresses seen by the memory unit).
Static Loading: In static loading, the entire program is loaded into memory at a fixed address before execution begins. It requires more memory space.
Dynamic Loading: If the entire program and all of its data had to be in physical memory for a process to execute, the size of a process would be limited by the size of physical memory. To improve memory utilization, dynamic loading is used: a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. The advantage of dynamic loading is that an unused routine is never loaded, which is especially useful when a large amount of code is needed only infrequently.
Static Linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some
operating systems support only static linking, in which system language libraries
are treated like any other object module.
Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a “stub” is included for each library routine reference. A stub is a small piece of code that, when executed, checks whether the needed routine is already in memory. If it is not, the program loads the routine into memory.
Swapping
When a process is to be executed, it must reside in main memory. Swapping temporarily moves a process from main memory to secondary storage, which is slow compared to main memory, and later brings it back. Swapping allows more processes to be run than can fit into memory at one time. The major cost of swapping is transfer time, which is directly proportional to the amount of memory swapped. Swapping is also known as roll out, roll in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process, then load and execute the higher-priority process. When the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.
This is the simplest memory management approach: memory is divided into two sections, one for the operating system and one for the user program. A fence register holds the boundary address between the two.
Fence Register
Example: As shown in the figure, memory is partitioned into five regions; one region is reserved for the operating system and the remaining four partitions hold user programs.
(Figure: memory divided into an Operating System region and four user partitions p1, p2, p3, p4)
Partition Table
Once partitions are defined, the operating system keeps track of the status (allocated or free) of each memory partition through a data structure called a partition table.
(Table: sample partition table entry — starting address 0 K, size 200 K, status: allocated)
Memory Allocation
To achieve good memory utilization, memory must be allocated in an efficient manner. One of the simplest methods is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. The degree of multiprogramming is thus bounded by the number of partitions.
Multiple partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for another process.
Variable partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a “hole”. When a process arrives and needs memory, we search for a hole large enough to hold it. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests. This raises the dynamic storage allocation problem: how to satisfy a request of size n from a list of free holes. There are some common strategies:
First Fit
In first fit, the first available hole that is large enough to satisfy the process's request is allocated.
(Figure: First Fit allocation)
Here, in this diagram, the 40 KB memory block is the first available hole that can store process A (size 25 KB), because the first two blocks do not have sufficient space.
Best Fit
In best fit, we allocate the smallest hole that is big enough for the process's requirements. Unless the free list is ordered by size, this requires searching the entire list.
(Figure: Best Fit allocation)
Here in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best fit for process A (size 25 KB). Best fit tends to leave the smallest leftover holes, so memory utilization is typically better than with the other allocation techniques.
Worst Fit
In worst fit, we allocate the largest available hole to the process. This strategy produces the largest leftover hole.
(Figure: Worst Fit allocation)
Here in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Poor memory utilization is the major drawback of worst fit.
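The three strategies above can be sketched as follows. This is a minimal simulation, not an OS implementation; the list of hole sizes is an assumed layout chosen so the 25 KB request behaves like the article's examples.

```python
# Minimal sketch of the three placement strategies over a list of
# free hole sizes (in KB). Each function returns the index of the
# chosen hole, or None when no hole is large enough.

def first_fit(holes, request):
    """First hole large enough to satisfy the request."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Smallest hole that is still large enough (scan the whole list)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Largest hole overall, provided it fits."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [10, 20, 40, 60, 25]   # assumed free-hole layout, sizes in KB
print(first_fit(holes, 25))    # index 2: the 40 KB hole
print(best_fit(holes, 25))     # index 4: the 25 KB hole, an exact fit
print(worst_fit(holes, 25))    # index 3: the 60 KB hole
```

Note how best fit leaves a 0 KB remainder here while worst fit leaves 35 KB, the largest leftover hole, matching the descriptions above.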
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory, leaving behind small free holes. These holes often cannot be assigned to new processes, either because they are not contiguous or because individually they do not fulfill the memory requirement of the process. To maintain a good degree of multiprogramming, we must reduce this wasted memory. Operating systems suffer from two types of fragmentation:
Internal fragmentation: the block allocated to a process is larger than requested, and the unused difference inside the block is wasted.
External fragmentation: enough total free memory exists to satisfy a request, but it is split across non-contiguous holes.
Another possible solution to external fragmentation is to allow the logical address space of a process to be noncontiguous, thus permitting the process to be allocated physical memory wherever it is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.
Example:
If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
If Logical Address Space = 128 M words = 2^7 × 2^20 words, then Logical Address = log2(2^27) = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
If Physical Address Space = 16 M words = 2^4 × 2^20 words, then Physical Address = log2(2^24) = 24 bits
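The arithmetic in the four cases above can be checked mechanically; this short sketch recomputes each one using integer powers of two.

```python
# Verifying the address-space arithmetic above: n address bits
# address 2^n words, and a space of S words needs log2(S) bits.
import math

def address_bits(space_words):
    """Number of address bits needed to address `space_words` words."""
    return int(math.log2(space_words))

G = 2 ** 30   # 1 G words
M = 2 ** 20   # 1 M words

assert 2 ** 31 == 2 * G             # 31-bit logical address -> 2 G words
assert address_bits(128 * M) == 27  # 128 M words -> 27-bit logical address
assert 2 ** 22 == 4 * M             # 22-bit physical address -> 4 M words
assert address_bits(16 * M) == 24   # 16 M words -> 24-bit physical address
print("all address-size checks pass")
```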
The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging
technique.
The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Page Size = Frame Size
Let us consider an example:
(Figure: paging example — pages of the logical address space mapped to frames via a page table)
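The translation a page table performs can be sketched in a few lines. The page table contents and the 1 KB page size here are assumptions for illustration; a real MMU does this in hardware, consulting the TLB first.

```python
# Sketch of logical-to-physical address translation under paging.
PAGE_SIZE = 1024                  # page size = frame size (assumed 1 KB)
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (assumed)

def translate(logical_address):
    """Split the logical address into (page, offset), then map the
    page to its frame: physical = frame * PAGE_SIZE + offset."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]      # hardware would check the TLB first
    return frame * PAGE_SIZE + offset

print(translate(1100))  # page 1, offset 76 -> frame 2 -> 2*1024 + 76 = 2124
```

The offset is copied through unchanged; only the page number is replaced by a frame number, which is why page size must equal frame size.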
The page table can be implemented in hardware using a set of dedicated registers, but this is satisfactory only if the page table is small. If the page table contains a large number of entries, it is kept in memory, and a TLB (translation look-aside buffer), a special small fast-lookup hardware cache, is used to speed up translation.
Q.1: What is a memory leak, and how does it affect system performance?
Answer:
A memory leak occurs when a program fails to release memory that it no longer
needs, resulting in wasted memory resources. Over time, if memory leaks
accumulate, the system’s available memory diminishes, leading to reduced
performance and possibly system crashes.
Q.3: What are the advantages and disadvantages of using virtual memory?
Answer:
Virtual memory provides several benefits, including the ability to run larger
programs than the available physical memory, increased security through memory
isolation, and simplified memory management for applications. However, using
virtual memory also introduces additional overhead due to page table lookups and
potential performance degradation if excessive swapping occurs.