Unit IV
Memory Management
Operating System
1. Relocation
2. Protection
3. Sharing
4. Logical Organization
5. Physical Organization
● This technique helps in placing the programs in memory in such a way so that memory is utilized at its fullest extent.
● This technique helps to protect different processes from each other so that they do not interfere with each other's operations.
● It helps to allocate space to different application routines.
● This technique determines how much memory is to be allocated to each process and decides which process should get memory at what time.
● It keeps track of each memory location, whether it is free or allocated.
● Whenever memory is freed or deallocated, it updates the status of that memory accordingly.
1. Fixed Partitioning
2. Variable Partitioning
1. No Internal Fragmentation
1. External Fragmentation
1. Internal Fragmentation
2. External Fragmentation
● First Fit
● Best Fit
● Worst Fit
Whenever a new process P2 arrives, First Fit does the same thing: it searches again from the first block.
Output:
Process No.   Process Size   Block No.
1             212            2
2             417            5
3             112            2
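As a rough sketch, First Fit can be simulated as below. The block sizes (100, 500, 200, 300, 600) and process sizes (212, 417, 112, 426) are assumed from the standard example that produces the output table above.

```python
def first_fit(block_sizes, process_sizes):
    """Allocate each process to the first block large enough to hold it."""
    blocks = list(block_sizes)          # remaining free space per block
    allocation = []
    for size in process_sizes:
        placed = None
        for i, free in enumerate(blocks):
            if free >= size:            # first block that fits wins
                blocks[i] -= size
                placed = i + 1          # report 1-based block numbers
                break
        allocation.append(placed)
    return allocation

# Assumed inputs matching the output table above:
print(first_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# → [2, 5, 2, None]   (process 4 of size 426 cannot be allocated)
```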
Advantages
● The OS can allocate processes quickly, since the first-fit search is faster than the other allocation algorithms.
Disadvantages
● Causes huge internal fragmentation
In the Best Fit memory allocation scheme, the operating system searches the empty memory blocks and allocates the block that wastes the least memory, i.e. the smallest block that can still hold the process. This is known as the Best Fit algorithm in Operating Systems.
Example –
In the image below/right, the process size is 40. Blocks 1, 2 and 4 can all accommodate the process, but block 2 is chosen because it leaves the least memory wasted.
This scheme is considered the best approach as it results in the most optimized memory allocation.
Disadvantage:
However, finding the best fit memory allocation may be time-consuming.
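A minimal sketch of Best Fit: scan all blocks and pick the candidate with the least leftover space. The block and process sizes are the same illustrative values used above.

```python
def best_fit(block_sizes, process_sizes):
    """Allocate each process to the smallest block that can hold it."""
    blocks = list(block_sizes)
    allocation = []
    for size in process_sizes:
        # indices of all blocks that can accommodate the process
        candidates = [i for i, free in enumerate(blocks) if free >= size]
        if candidates:
            i = min(candidates, key=lambda i: blocks[i])  # least leftover space
            blocks[i] -= size
            allocation.append(i + 1)    # 1-based block numbers
        else:
            allocation.append(None)
    return allocation

print(best_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# → [4, 2, 3, 5]
```

Note the full scan of every block per process: this is exactly why Best Fit can be more time-consuming than First Fit.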
The Worst Fit algorithm searches sequentially, starting from the first memory block, for the free block that leaves the largest leftover space after allocation.
Next Fit is a modified version of First Fit: memory is searched for empty spaces just as in the First Fit scheme, but it differs in that each new search starts from where the previous one left off rather than from the beginning.
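The "resume where it left off" behaviour can be sketched with a rolling pointer over a circular scan (block and process sizes are the same illustrative values as before):

```python
def next_fit(block_sizes, process_sizes):
    """Like First Fit, but each search resumes where the last one stopped."""
    blocks = list(block_sizes)
    n = len(blocks)
    j = 0                               # block where the last search stopped
    allocation = []
    for size in process_sizes:
        placed = None
        for step in range(n):           # circular scan over all blocks
            i = (j + step) % n
            if blocks[i] >= size:
                blocks[i] -= size
                placed = i + 1          # 1-based block numbers
                j = i                   # remember where we left off
                break
        allocation.append(placed)
    return allocation

print(next_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# → [2, 5, 5, None]
```

Compare with First Fit on the same inputs: the third process (112) lands in block 5 here, because the search resumed there instead of restarting from block 1.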
Because all of the frames are initially empty, the pages of the processes will be stored contiguously. The graphic below depicts frames, pages, and the mapping between them.
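The page-to-frame mapping can be sketched as a simple address translation. The page size and page table below are made-up values, not taken from the graphic.

```python
def physical_address(page_table, virtual_address, page_size=4096):
    """Split a virtual address into (page, offset) and map page -> frame."""
    page, offset = divmod(virtual_address, page_size)
    frame = page_table[page]            # page-to-frame mapping
    return frame * page_size + offset

# Hypothetical page table: page 0 -> frame 2, page 1 -> frame 0
table = {0: 2, 1: 0}
print(physical_address(table, 5))      # page 0, offset 5 → 2*4096 + 5 = 8197
print(physical_address(table, 4100))   # page 1, offset 4 → 0*4096 + 4 = 4
```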
The details about each segment are stored in a table called a segment table. Segment table is stored in one (or many) of
the segments.
1. Segment Base: The segment base is also known as the base address of a segment. The segment base contains the
starting physical address of the segments residing in the memory.
2. Segment Limit: The segment limit is also known as the segment offset. The segment limit contains the length of the segment.
The basic overview of the Segment Table is shown below.
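A sketch of how a logical address (segment, offset) is translated using the base and limit fields. The table values below are hypothetical, not taken from this document's figure.

```python
def translate(segment_table, segment, offset):
    """Map (segment, offset) to a physical address, trapping on overflow."""
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset must lie within the segment
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

# Hypothetical segment table: segment number -> (base, limit)
table = {0: (1400, 1000), 1: (6300, 400)}
print(translate(table, 1, 53))         # → 6353
```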
1. Virtual Memory Segmentation: Virtual Memory Segmentation divides a process into n segments. The segments are not all created at once; segmentation may or may not take place at the run time of the program.
2. Simple Segmentation: Simple Segmentation also divides a process into n segments, but the segmentation is done all at once, at the run time of the program. Simple segmentation may scatter the segments across memory, so that one segment of the process can be at a different location than another (in a non-contiguous manner).
In Virtual Memory Management, Page Replacement Algorithms play an important role. The main objective of all page replacement policies is to minimize the number of page faults.
Page Fault – A page fault is essentially a memory access error: it occurs when a running program accesses a memory page that is mapped into its virtual address space but is not currently loaded into physical memory.
The page replacement technique uses the following approach: if there is no free frame, we find one that is not currently being used and free it. A frame can be freed by writing its contents to swap space and changing the page table to indicate that the page is no longer in memory.
1. First of all, find the location of the desired page on the disk.
2. Find a free Frame: a) If there is a free frame, then use it. b) If there is no free frame then make use of the
page-replacement algorithm in order to select the victim frame. c) Then after that write the victim frame to the disk
and then make the changes in the page table and frame table accordingly.
3. After that read the desired page into the newly freed frame and then change the page and frame tables.
4. Restart the process.
If the referred page is not present in the main memory then there will be a miss and the concept is called Page miss or page
fault.
The CPU has to access the missed page from secondary memory. If the number of page faults is very high, the effective access time of the system becomes very high.
This algorithm decides which pages must be swapped out of main memory to make room for the incoming page. The algorithm aims for the lowest page-fault rate.
Various Page Replacement algorithms used in the Operating system are as follows;
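For example, FIFO (replace the page that has been in memory the longest) is one standard policy. A minimal sketch that counts page faults, using an illustrative reference string:

```python
from collections import deque

def fifo_page_faults(reference_string, capacity):
    """Count page faults under FIFO replacement with `capacity` frames."""
    frames = deque()                    # oldest page sits at the left
    resident = set()                    # pages currently in memory
    faults = 0
    for page in reference_string:
        if page in resident:
            continue                    # page hit: nothing to do
        faults += 1                     # page fault (miss)
        if len(frames) == capacity:     # no free frame: evict the oldest page
            resident.remove(frames.popleft())
        frames.append(page)
        resident.add(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], capacity=3))  # → 6
```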
A Translation Lookaside Buffer (TLB) is a memory cache used to reduce the time taken to access the page table again and again.
It is a memory cache that sits closer to the CPU, and the time taken by the CPU to access the TLB is less than that taken to access main memory.
In other words, the TLB is faster and smaller than main memory, but cheaper and bigger than a register.
The TLB exploits locality of reference: it contains entries only for the pages that are frequently accessed by the CPU.
A TLB hit is the condition where the desired entry is found in the translation lookaside buffer. When this happens, the CPU simply accesses the actual location in main memory.
However, if the entry is not found in the TLB (a TLB miss), the CPU has to access the page table in main memory first and then access the actual frame in main memory.
Therefore, in the case of a TLB hit, the effective access time is less than in the case of a TLB miss.
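The hit/miss behaviour can be sketched with the TLB as a small cache in front of the page table. The capacity and the eviction rule below are simplifying assumptions, not details from this document.

```python
def tlb_lookup(tlb, page_table, page, capacity=4):
    """Return (frame, hit?) for a virtual page number."""
    if page in tlb:
        return tlb[page], True          # TLB hit: no page-table access needed
    frame = page_table[page]            # TLB miss: extra main-memory access
    if len(tlb) >= capacity:
        # evict the oldest entry (Python dicts preserve insertion order)
        tlb.pop(next(iter(tlb)))
    tlb[page] = frame                   # cache the translation
    return frame, False

tlb, page_table = {}, {0: 5, 1: 9}
print(tlb_lookup(tlb, page_table, 0))  # → (5, False)  first access misses
print(tlb_lookup(tlb, page_table, 0))  # → (5, True)   second access hits
```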
If the probability of TLB hit is P% (TLB hit rate) then the probability of TLB miss (TLB miss rate)
will be (1-P) %.
1. EAT = p(t + m) + (1 - p)(t + k·m + m)
Where p → TLB hit rate, t → time taken to access the TLB, m → time taken to access main memory, and k = 1 if single-level paging is implemented.
1. Effective access time will be decreased if the TLB hit rate is increased.
2. Effective access time will be increased in the case of multilevel paging.
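Plugging assumed numbers into the formula above (p = 0.8, t = 10 ns, m = 100 ns, k = 1, all illustrative values):

```python
def effective_access_time(p, t, m, k=1):
    """EAT = p(t + m) + (1 - p)(t + k*m + m); times in nanoseconds."""
    return p * (t + m) + (1 - p) * (t + k * m + m)

# Assumed values: 80% hit rate, 10 ns TLB access, 100 ns memory access
print(round(effective_access_time(0.8, 10, 100), 2))   # → 130.0 ns
```

Raising p toward 1 pulls EAT toward t + m (the hit cost), which is the first observation below; a larger k adds k·m to every miss, which is the second.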