Memory Management and Virtual Memory Notes
MEMORY MANAGEMENT
WHAT IS MEMORY?
Computer memory can be defined as a collection of data represented in binary format.
On the basis of its various functions, memory can be classified into various categories. We will
discuss each one of them later in detail.
A computer device that is capable of storing information or data temporarily or permanently is
called a storage device.
In order to understand memory management, we first have to be clear about how data
is stored in a computer system.
A machine understands only binary language, that is, 0s and 1s. The computer converts every
piece of data into binary first and then stores it in memory.
That means if we have a program line written as int a = 10, the computer converts it into
binary and then stores it in memory blocks.
The CPU can directly access the main memory, the registers and the cache of the system.
A program always executes in main memory. The size of the main memory affects the degree of
multiprogramming to a great extent. If the main memory is larger, the CPU can
load more processes into it at the same time, which increases the degree of
multiprogramming as well as CPU utilization.
FIXED PARTITIONING
The earliest and one of the simplest techniques used to load more than one
process into the main memory is fixed partitioning, or contiguous memory allocation.
In this technique, the main memory is divided into partitions of equal or different sizes. The
operating system always resides in the first partition while the other partitions are used to
store user processes. Memory is assigned to the processes in a contiguous way.
Fixed partitioning suffers from the following problems.
1. Internal Fragmentation
If the size of the process is less than the total size of the partition, then part of the partition
is wasted and remains unused. This wastage of memory is called internal fragmentation.
For example, if a 4 MB partition is used to load only a 3 MB process, the
remaining 1 MB is wasted.
2. External Fragmentation
The total unused space of the various partitions cannot be used to load a process even though
enough space is available, because it is not contiguous.
For example, the remaining 1 MB of each partition cannot be combined as a
unit to store a 4 MB process. Despite the fact that sufficient total space is available, the
process cannot be loaded.
3. Limitation on the Size of the Process
If the process size is larger than the largest partition, then that process cannot be
loaded into memory at all. Fixed partitioning therefore imposes a limit on process size: a
process cannot be larger than the largest partition.
4. The Degree of Multiprogramming is Less
By degree of multiprogramming, we mean the maximum number of processes that can
be loaded into memory at the same time. In fixed partitioning, the degree of
multiprogramming is fixed and quite low, because the size of a partition cannot be
varied according to the size of the processes.
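The internal-fragmentation problem above can be sketched in a few lines of Python (the function name and partition sizes are illustrative, not part of any real allocator):

```python
# A minimal sketch of fixed partitioning: each partition has a fixed size,
# and a process is loaded into the first free partition large enough to
# hold it. The leftover space in that partition is internal fragmentation.

def load_fixed(partitions, process_size):
    """Return (partition_index, internal_fragmentation), or None if no fit."""
    for i, (size, free) in enumerate(partitions):
        if free and size >= process_size:
            partitions[i] = (size, False)          # mark partition as used
            return i, size - process_size          # leftover space is wasted
    return None                                    # process cannot be loaded

partitions = [(4, True), (8, True), (8, True)]     # partition sizes in MB
print(load_fixed(partitions, 3))   # (0, 1): 3 MB process, 1 MB wasted
print(load_fixed(partitions, 9))   # None: larger than every free partition
```

The second call also demonstrates the process-size limitation: a 9 MB process cannot be loaded because no single partition is that large, even though 16 MB of free space remains in total.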
DYNAMIC/VARIABLE PARTITIONING
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process loading.
The first partition is reserved for the operating system. The remaining space is divided into
parts. The size of each partition will be equal to the size of the process. The partition size varies
according to the need of the process so that the internal fragmentation can be avoided.
Advantages of Dynamic Partitioning over fixed partitioning
1. No Internal Fragmentation
Given that the partitions in dynamic partitioning are created according to the need of
the process, it is clear that there will not be any internal fragmentation, because there will
not be any unused remaining space in a partition.
2. No Limitation on the Size of the Process
In fixed partitioning, a process larger than the largest partition could
not be executed due to the lack of sufficient contiguous memory. Here, in dynamic partitioning,
the process size is not restricted, since the partition size is decided according to the process
size.
3. Degree of Multiprogramming is Dynamic
Due to the absence of internal fragmentation, there will not be any unused space in a partition,
hence more processes can be loaded into memory at the same time.
Disadvantages of Dynamic Partitioning
External Fragmentation
Absence of internal fragmentation doesn't mean that there will not be external fragmentation.
Let's consider three processes P1 (1 MB) and P2 (3 MB) and P3 (1 MB) are being loaded in the
respective partitions of the main memory.
After some time P1 and P3 got completed and their assigned space is freed. Now there are two
unused partitions (1 MB and 1 MB) available in the main memory but they cannot be used to
load a 2 MB process in the memory since they are not contiguously located.
The rule says that the process must be contiguously present in the main memory to get executed.
We need to change this rule to avoid external fragmentation.
Complex Memory Allocation
In fixed partitioning, the list of partitions is made once and never changes, but in dynamic
partitioning, allocation and deallocation are very complex, since the partition size
varies every time a partition is assigned to a new process. The OS has to keep track of all the
partitions.
Due to the fact that the allocation and deallocation are done very frequently in dynamic memory
allocation and the partition size will be changed at each time, it is going to be very difficult for
OS to manage everything.
Compaction
We have seen that dynamic partitioning suffers from external fragmentation, which
can cause serious problems.
Compaction addresses this: the operating system moves the allocated partitions towards one
end of the memory so that the free holes merge into a single contiguous block.
By applying this technique, we can store bigger processes in memory. The free
partitions are merged and can then be allocated according to the needs of new processes. This
technique is also called defragmentation.
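The idea can be sketched with a toy memory map (a hypothetical list where each entry is a 1 MB block holding a process name, or None for a free block):

```python
# A sketch of compaction (defragmentation): allocated blocks are slid to
# the front of memory so that all the free blocks merge into one
# contiguous hole at the end.

def compact(memory):
    used = [block for block in memory if block is not None]   # keep order
    return used + [None] * memory.count(None)                 # holes merged

memory = ["OS", "P1", None, "P2", None, "P3"]
print(compact(memory))   # ['OS', 'P1', 'P2', 'P3', None, None]
```

Before compaction the two 1 MB holes cannot hold a 2 MB process; after compaction they form one contiguous 2 MB hole.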
PARTITIONING ALGORITHMS
There are various algorithms which are implemented by the Operating System in order to find
out the holes in the linked list and allocate them to the processes.
The First Fit algorithm scans the linked list and, as soon as it finds the first hole big enough
to store the process, it stops scanning and loads the process into that hole. This splits the
chosen hole into two partitions: one stores the process while the other remains a hole.
First Fit maintains the linked list in increasing order of starting address.
It is the simplest of these algorithms to implement, and it tends to leave bigger holes
than the other algorithms.
The Next Fit algorithm is similar to First Fit, except that Next Fit resumes scanning the linked
list from the node where it previously allocated a hole.
Next Fit doesn't rescan the whole list from the beginning; it starts from the next node. The idea
behind Next Fit is that the beginning of the list has already been scanned, so the probability of
finding a hole is larger in the remaining part of the list.
Experiments have shown that Next Fit performs no better than First Fit, so it is
rarely used these days.
The Best Fit algorithm tries to find the smallest hole in the list that can
accommodate the size requirement of the process.
1. It is slower, because it scans the entire list every time, looking for the
smallest hole that can satisfy the requirement of the process.
2. Because the difference between the hole size and the process size is very
small, the leftover holes produced are often so small that they cannot be used to load any
process and therefore remain useless.
Despite its name, Best Fit is not the best algorithm of all.
The Worst Fit algorithm scans the entire list every time and tries to find the biggest hole in
the list that can fulfill the requirement of the process.
Although this algorithm leaves larger leftover holes for loading other processes, it is
not a better approach, because it is slower: it searches the entire list every
time.
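The fit strategies described above can be sketched as follows; a plain list of hole sizes stands in for the linked list, and the function names are illustrative:

```python
# Sketches of the fit strategies over a list of hole sizes.
# Each function returns the index of the chosen hole, or None if no hole fits.

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i                    # stop at the first adequate hole

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None   # smallest adequate hole

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None   # biggest hole

holes = [5, 2, 8, 3]
print(first_fit(holes, 3))   # 0: the hole of size 5 is found first
print(best_fit(holes, 3))    # 3: the hole of size 3 is the tightest fit
print(worst_fit(holes, 3))   # 2: the hole of size 8 is the biggest
```

Next Fit would look like First Fit with one extra piece of state: a remembered start index that the scan resumes from on the next request.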
The First Fit algorithm is generally considered the best among these, because it is the
fastest: it stops at the first adequate hole instead of scanning the whole list.
The main disadvantage of dynamic partitioning is external fragmentation. Although this can be
removed by compaction, as we have discussed earlier, compaction makes the system
inefficient.
We need to find out a mechanism which can load the processes in the partitions in a more
optimal way. Let us discuss a dynamic and flexible mechanism called paging.
PAGING
Let's consider a process P1 of size 2 MB and a main memory divided into three
partitions, two of which are holes of 1 MB each.
P1 needs 2 MB of space in the main memory. We have two holes of 1 MB each, but
they are not contiguous.
Although 2 MB of space is available in the main memory in the form of those holes, it
remains useless until it becomes contiguous. This is a serious problem to address.
We need to have some kind of mechanism which can store one process at different locations of
the memory.
The idea behind paging is to divide the process into pages so that we can store them in
memory in different holes. We will discuss paging with examples in the next sections.
Paging with Example
In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.
The main idea behind the paging is to divide each process in the form of pages. The main
memory will also be divided in the form of frames.
One page of the process is to be stored in one of the frames of the memory. The pages can be
stored at the different locations of the memory but the priority is always to find the contiguous
frames or holes.
Pages of the process are brought into the main memory only when they are required otherwise
they reside in the secondary storage.
Different operating systems define different frame sizes, but the size of every frame must be
equal. Considering that pages are mapped to frames in paging, the page size needs
to be the same as the frame size.
Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory
will then be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is
divided into pages of 1 KB each so that one page can be stored in one frame.
Initially, all the frames are empty therefore pages of the processes will get stored in the
contiguous way.
Frames, pages and the mapping between the two is shown in the image below.
Let us consider that, P2 and P4 are moved to waiting state after some time. Now, 8 frames
become empty and therefore other pages can be loaded in that empty place. The process P5 of
size 8 KB (8 pages) is waiting inside the ready queue.
We now have 8 non-contiguous frames available in memory, and paging
provides the flexibility of storing a process in different places. Therefore, we can load the
pages of process P5 in place of P2 and P4.
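The scenario above can be simulated directly (frame count, process names and sizes follow the example; the helper function is illustrative):

```python
# 16 frames of 1 KB each; a page goes into any free frame, so a process
# can be scattered across non-contiguous frames.

frames = [None] * 16

def load(process, n_pages):
    """Place n_pages pages of `process` into free frames; return their indices."""
    placed = []
    for i in range(len(frames)):
        if frames[i] is None and len(placed) < n_pages:
            frames[i] = process
            placed.append(i)
    return placed

for p in ("P1", "P2", "P3", "P4"):
    load(p, 4)                      # initially contiguous: P1..P4 fill 0-15

for i, f in enumerate(frames):      # P2 and P4 finish; free their frames
    if f in ("P2", "P4"):
        frames[i] = None

print(load("P5", 8))                # [4, 5, 6, 7, 12, 13, 14, 15]
```

P5's 8 pages land in frames 4-7 and 12-15: non-contiguous, but every page fits, which is exactly what fixed or dynamic partitioning could not do.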
Memory Management Unit
The purpose of Memory Management Unit (MMU) is to convert the logical address into the
physical address. The logical address is the address generated by the CPU for every page while
the physical address is the actual address of the frame where each page will be stored.
When a page is to be accessed by the CPU using its logical address, the operating system
needs to obtain the physical address to access that page physically. The logical address
contains two parts:
1. Page Number
2. Offset
The memory management unit of the OS needs to convert the page number to the frame number.
Example
Considering the image above, let's say that the CPU demands the 10th word of the 4th page of
process P3. Since page number 4 of process P3 is stored at frame number 9, the 10th word
of the 9th frame will be returned as the physical address.
A computer system assigns binary addresses to the memory locations, using a certain
number of bits to address each location.
Using 1 bit, we can address two memory locations; using 2 bits we can address 4, and using 3
bits we can address 8 memory locations.
A pattern can be identified in the mapping between the number of bits in the address and the
range of memory locations: n address bits can address 2^n memory locations.
We know,
Physical Address = log2 (Physical Address Space in words) bits
Physical address space in a system can be defined as the size of the main memory. It is really
important to compare the process size with the physical address space: the process size must be
less than the physical address space.
Physical Address Space = Size of the Main Memory
For example, if the main memory is 64 KB (2^16 bytes) and the word size is 8 bytes (2^3
bytes), then:
Physical address space (in words) = (2^16) / (2^3) = 2^13 words
Therefore,
Physical Address = 13 bits
In general,
if Physical Address Space = N words,
then Physical Address = log2 N bits.
Logical address space can be defined as the size of the process. The size of the process should
be small enough that it can reside in the main memory.
In general,
if Logical Address Space = L words,
then Logical Address = log2 L bits.
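The address-size arithmetic above can be checked with a few lines of Python (the 64 KB main memory and 8-byte word match the worked numbers in the derivation):

```python
# Check: a 64 KB main memory with 8-byte words has 2^13 words,
# so a physical address needs 13 bits.

from math import log2

main_memory_bytes = 2 ** 16          # 64 KB
word_size_bytes = 2 ** 3             # 8 bytes per word
words = main_memory_bytes // word_size_bytes

print(words)                         # 8192, i.e. 2^13 words
print(int(log2(words)))              # 13 -> physical address is 13 bits
```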
What is a Word?
A word is the smallest addressable unit of the memory in this model. It is a collection of
bytes. Every operating system defines its own word size; with an n-bit address fed to the
decoder, 2^n memory locations (words) can be addressed.
PAGE TABLE:-
Page Table is a data structure used by the virtual memory system to store the mapping between
logical addresses and physical addresses.
Logical addresses are generated by the CPU for the pages of the processes therefore they are
generally used by the processes.
Physical addresses are the actual frame address of the memory. They are generally used by the
hardware or more specifically by RAM subsystems.
In this situation, a unit called the Memory Management Unit (MMU) comes into the picture. It
converts the page number of the logical address to the frame number of the physical address.
The offset remains the same in both addresses.
To perform this task, the MMU needs a special kind of mapping, which is provided
by the page table. The page table stores the frame number corresponding to each page number of
the process.
In other words, the page table maps the page number to its actual location (frame number) in
the memory.
The image below shows how the required word of the frame is accessed with the help
of the offset.
In operating systems, there is always a requirement of mapping from a logical address to a
physical address. This process involves the following steps.
1. Generation of Logical Address
The CPU generates a logical address for each page of the process. It contains two parts: page
number and offset.
2. Scaling
To determine the actual location of a page's entry, the CPU stores the page table base in a
special register. Each time an address is generated, the value of the page table base is added
to the page number to get the actual location of the page's entry in the table. This process is
called scaling.
3. Generation of Physical Address
The frame number of the desired page is determined from its entry in the page table. A
physical address is generated, which also contains two parts: frame number and offset. The
offset is the same as the offset of the logical address, so it is simply copied from the
logical address.
4. Getting the Actual Frame Address
The frame number and the offset of the physical address are mapped to the main memory in
order to get the actual word address.
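The steps above can be sketched in Python. The 1 KB page size (10 offset bits) and the page-table contents are assumptions for illustration; a dictionary stands in for the page table, so the scaling step becomes a simple lookup:

```python
# Translate a logical address to a physical address: split off page number
# and offset, look up the frame number, then recombine.

OFFSET_BITS = 10                        # 1 KB pages and frames (assumption)

page_table = {0: 9, 1: 3, 2: 14}        # page number -> frame number

def translate(logical_address):
    page = logical_address >> OFFSET_BITS               # high bits: page no.
    offset = logical_address & ((1 << OFFSET_BITS) - 1) # low bits: offset
    frame = page_table[page]                            # page-table lookup
    return (frame << OFFSET_BITS) | offset              # offset copied as-is

# page 1, offset 10 -> frame 3, offset 10
print(translate((1 << OFFSET_BITS) + 10))   # 3082 = 3*1024 + 10
```

Note how the offset passes through unchanged, exactly as described in step 3.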
Virtual Memory
Virtual memory is a storage scheme that gives the user the illusion of having a very big main
memory. This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes bigger than the available main memory, under
the illusion that enough memory is available to load the process.
Instead of loading one big process in the main memory, the Operating System loads the different
parts of more than one process in the main memory.
By doing this, the degree of multiprogramming will be increased and therefore, the CPU
utilization will also be increased.
Virtual memory has become quite common in the modern world. In this scheme,
whenever some pages need to be loaded into the main memory for execution and the memory
is not available for that many pages, then instead of refusing to bring the pages
into the main memory, the OS searches for the areas of RAM that have been least used recently,
or that are not referenced, and copies them into the secondary memory to make space for
the new pages in the main memory.
Since all of this happens automatically, the computer feels as if it has
unlimited RAM.
Demand Paging
Demand Paging is a popular method of virtual memory management. In demand paging, the
pages of a process which are least used, get stored in the secondary memory.
A page is copied to the main memory when its demand is made or page fault occurs. There are
various page replacement algorithms which are used to determine the pages which will be
replaced. We will discuss each one of them later in detail.
Let us assume 2 processes, P1 and P2, containing 4 pages each. Each page is 1 KB. The
main memory contains 8 frames of 1 KB each. The OS resides in the first two partitions. In the
third partition, the 1st page of P1 is stored, and the other frames are shown filled with
different pages of the processes.
The page tables of both processes are 1 KB each, so each fits in one frame.
The page tables of both processes contain various information, also shown in the
image.
The CPU contains a register which holds the base address of the page table: 5 in the case of
P1 and 7 in the case of P2. This page table base address is added to the page number of the
logical address when accessing the actual corresponding entry.
Advantages of Virtual Memory
1. The degree of multiprogramming increases, and therefore so does CPU utilization.
2. Processes larger than the available main memory can be executed.
Drawbacks of Paging
1. Size of Page table can be very big and therefore it wastes main memory.
2. CPU will take more time to read a single word from the main memory.
Demand Paging
According to the concept of virtual memory, in order to execute a process, only a part of the
process needs to be present in the main memory, which means that only a few pages will be
present in the main memory at any time.
However, deciding which pages need to be kept in the main memory and which in the
secondary memory is difficult, because we cannot say in advance that a
process will require a particular page at a particular time.
To overcome this problem, a concept called demand paging is introduced. It
suggests keeping all the pages in the secondary memory until they are required. In
other words, it says: do not load any page into the main memory until it is required.
Whenever a page is referred to for the first time, it has to be fetched
from the secondary memory.
After that, it may or may not be present in the main memory, depending upon the page
replacement algorithm, which will be covered later in this tutorial.
If the referred page is not present in the main memory, there is a miss, and this is
called a page miss or page fault.
The CPU has to access the missed page from the secondary memory. If the number of page
faults is very high, then the effective access time of the system becomes very high.
What is Thrashing?
If the number of page faults equals the number of referenced pages, or the page faults
are so frequent that the CPU remains busy just reading pages from the secondary memory,
then the effective access time approaches the time taken by the CPU to read one word from the
secondary memory, which is very high. This situation is called thrashing.
Inverted Page Table
Inverted Page Table is the global page table which is maintained by the Operating System for all
the processes. In inverted page table, the number of entries is equal to the number of frames in
the main memory. It can be used to overcome the drawbacks of page table.
In a conventional page table, space is reserved for every page regardless of whether it is
present in the main memory or not. This is simply a waste of memory if the page is
not present.
We can avoid this waste by inverting the page table: we save details only for the
pages which are present in the main memory. Frames are the indices, and the information saved
in each entry is the process ID and page number.
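A minimal sketch of this structure, with a Python list indexed by frame number standing in for the table (the process IDs and contents are made up for illustration):

```python
# Inverted page table: one entry per frame, holding (process id, page no.)
# or None for a free frame. Lookup scans for the matching pair and returns
# the frame index.

inverted = [("P1", 0), ("P2", 0), ("P1", 1), None]   # index = frame number

def find_frame(pid, page):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    return None                                      # not resident: page fault

print(find_frame("P1", 1))   # 2
print(find_frame("P2", 3))   # None -> page fault
```

The trade-off is visible here: the table's size tracks the number of frames, not the number of pages, but lookup now requires a search rather than direct indexing by page number.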
Page Replacement Algorithms
The page replacement algorithm decides which memory page is to be replaced. The process
of replacement is sometimes called swap out or write to disk. Page replacement is done when
the requested page is not found in the main memory (page fault).
There are two main aspects of virtual memory, Frame allocation and Page Replacement. It
is very important to have the optimal frame allocation and page replacement algorithm.
Frame allocation is all about how many frames are to be allocated to the process while the
page replacement is all about determining the page number which needs to be replaced in
order to make space for the requested page.
1. If the number of frames allocated to a process is not sufficient, then
there can be a problem of thrashing. Due to the lack of frames, most of the pages cannot be
kept in the main memory, and therefore more page faults will occur.
However, if the OS allocates more frames to the process than it needs, then there can be
internal fragmentation.
2. If the page replacement algorithm is not optimal, there will also be the problem of
thrashing. If the pages that are replaced are referenced again in the
near future, there will be more swap-ins and swap-outs, so the OS has
to perform more replacements than usual, which degrades performance.
Therefore, the task of an optimal page replacement algorithm is to choose the page which can
limit the thrashing.
There are various page replacement algorithms. Each algorithm has a different method by which
the pages can be replaced.
1. Optimal page replacement algorithm → this algorithm replaces the page which will
not be referred to for the longest time in the future. Although it is not practically
implementable, it can be used as a benchmark: other algorithms are compared to it
in terms of optimality.
2. Least Recently Used (LRU) page replacement algorithm → this algorithm replaces the
page which has not been referred to for the longest time. It is the mirror of the
optimal page replacement algorithm: it looks at the past instead of the
future.
3. FIFO → in this algorithm, a queue is maintained. The page which was assigned a
frame first will be replaced first. In other words, the page which resides at the rear end
of the queue will be replaced on every page fault.
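As an illustrative sketch, LRU can be simulated by recording each page's last reference time and evicting the page with the oldest one:

```python
# LRU page replacement: on a fault with all frames full, evict the page
# whose most recent reference is furthest in the past.

def lru_faults(refs, n_frames):
    frames, last_used, faults = [], {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)                 # evict least recently used
            frames.append(page)
        last_used[page] = t                           # record this reference
    return faults

print(lru_faults([0, 1, 5, 3, 0, 1, 4, 0, 1, 5, 3, 4], 3))   # 10 faults
```

Optimal replacement would look the same except that the victim is chosen by looking forward in `refs` for the page used furthest in the future, which is why it cannot be implemented in a real OS.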
BELADY'S ANOMALY:-
In the case of the LRU and optimal page replacement algorithms, the number of page
faults is usually reduced as we increase the number of frames. However, Belady found that in
the FIFO page replacement algorithm, the number of page faults can increase with an increase
in the number of frames.
This strange behavior, shown by the FIFO algorithm in some cases, is an anomaly
called Belady's Anomaly.
With 3 frames:

Request    0    1    5    3    0    1    4    0    1    5    3    4
Frame 3              5    5    5    1    1    1    1    1    3    3
Frame 2         1    1    1    0    0    0    0    0    5    5    5
Frame 1    0    0    0    3    3    3    4    4    4    4    4    4
Miss/Hit   Miss Miss Miss Miss Miss Miss Miss Hit  Hit  Miss Miss Hit

Number of Page Faults = 9
With 4 frames:

Request    0    1    5    3    0    1    4    0    1    5    3    4
Frame 4                   3    3    3    3    3    3    5    5    5
Frame 3              5    5    5    5    5    5    1    1    1    1
Frame 2         1    1    1    1    1    1    0    0    0    0    4
Frame 1    0    0    0    0    0    0    4    4    4    4    3    3
Miss/Hit   Miss Miss Miss Miss Hit  Hit  Miss Miss Miss Miss Miss Miss

Number of Page Faults = 10

Therefore, in this example, the number of page faults increases (from 9 to 10) as the number
of frames increases; FIFO suffers from Belady's Anomaly here.
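The anomaly can be reproduced with a small FIFO simulation over the same reference string:

```python
# FIFO page replacement: evict the page that entered its frame first.
# The same reference string incurs MORE faults with 4 frames than with 3.

from collections import deque

def fifo_faults(refs, n_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()          # evict the oldest resident page
            frames.append(page)
    return faults

refs = [0, 1, 5, 3, 0, 1, 4, 0, 1, 5, 3, 4]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -> Belady's Anomaly
```

The counts match the two tables above: 9 faults with 3 frames, 10 with 4.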
SEGMENTATION:-
Till now, we have been using paging as our main memory management technique. Paging is
closer to the operating system than to the user. It divides all processes into pages
regardless of the fact that a process may have related parts, such as functions, which need to
be loaded in the same page.
The operating system doesn't care about the user's view of the process. It may divide the same
function across different pages, and those pages may or may not be loaded into memory at the
same time. This decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment
contains the same type of content: for example, the main function can be included in one
segment and the library functions in another segment.
The CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
The segment number is used to index the segment table. The limit of the respective segment is
compared with the offset. If the offset is less than the limit, the address is valid;
otherwise an error is thrown, as the address is invalid.
For a valid address, the base address of the segment is added to the offset to get the
physical address of the actual word in the main memory.
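This limit check and base addition can be sketched directly (the segment numbers, bases and limits below are illustrative):

```python
# Segmentation address translation: check the offset against the segment
# limit, then add it to the segment base.

segment_table = {0: {"base": 1400, "limit": 1000},
                 1: {"base": 6300, "limit": 400}}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise ValueError("invalid address: offset exceeds segment limit")
    return entry["base"] + offset          # physical address

print(translate(1, 53))    # 6353 = 6300 + 53
```

Calling `translate(1, 400)` would raise an error, since the offset is not less than segment 1's limit of 400.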
Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than a page table in paging.
Disadvantages
1. It can suffer from external fragmentation, since segments are of variable size.
2. Allocating contiguous memory to variable-sized segments is costly.
Paging VS Segmentation
Sr No  Paging                                  Segmentation
2      Paging divides a program into           Segmentation divides a program into
       fixed size pages.                       variable size segments.
Segmented Paging
Pure segmentation is not very popular and is not used in many operating systems.
However, segmentation can be combined with paging to get the best features of both
techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further
divided into fixed size pages.
Each page table contains information about every page of its segment, while the segment
table contains information about every segment. Each segment table entry points to a page
table, and every page table entry is mapped to one of the pages within the segment.
Translation of logical address to physical address
The CPU generates a logical address which is divided into two parts: segment number and
segment offset. The segment offset must be less than the segment limit. The offset is further
divided into page number and page offset. To locate the exact entry in the page table, the
page number is added to the page table base.
The actual frame number together with the page offset is mapped to the main memory to get the
desired word in the required page of the segment.
Advantages of Segmented Paging