OS Unit 4
Memory Management: Swapping, contiguous memory allocation, paging, segmentation, structure of the page table.
File Concepts: File concept, access Methods, directory and disk structure, protection.
Memory Management
It is the most important function of an operating system: it manages primary memory by assigning portions known as blocks to the various running programs to optimize overall system performance.
It helps processes move back and forth between the main memory and the disk during execution.
Relocation
Protection
Sharing
Logical Organization
Physical Organization
Uses
It allows you to decide how much memory needs to be allocated to each process and which process gets memory at what time.
It tracks whenever memory gets freed or unallocated and updates the status accordingly.
It also makes sure that applications do not interfere with each other.
It places programs in memory so that memory is utilized to its full extent.
Swapping
Swapping is a method in which a process is swapped temporarily from the main memory
to a backing store.
It is later brought back into the memory to continue execution.
The backing store is a hard disk or some other secondary storage device that should be big enough
to accommodate copies of all memory images for all users.
Contiguous Memory Allocation
Memory allocation is a process by which computer programs are assigned memory or space.
Because of this, all the available memory space resides together in one place,
which means that the free/unused memory partitions are not scattered here and there
across the whole memory space.
The main memory is a combination of two main portions: one for the operating system and the other
for the user programs.
Contiguous Technique can be divided into:
Fixed (or static) partitioning
Variable (or dynamic) partitioning
Fixed size partitions
In this partitioning, the number of partitions (non-overlapping) in RAM is fixed, but the size of each partition may or may not be the same.
Advantages of fixed partitioning:
Easy to implement:
Algorithms needed to implement Fixed Partitioning are easy to implement. It simply requires putting a process into
certain partition without focusing on the emergence of Internal and External Fragmentation.
Little OS overhead:
Fixed Partitioning requires very little extra or indirect computational power.
Disadvantages of fixed partitioning:
Internal Fragmentation
If the size of the process is less than the total size of the partition, then part of the
partition is wasted and remains unused. This wastage of memory is called internal
fragmentation.
External Fragmentation
The total unused space of the various partitions cannot be used to load a process; even though
space is available, it is not in contiguous form.
Limitation on the size of the process
If the process size is larger than the size of the maximum sized partition, then that process cannot be loaded
into the memory. Therefore, a limitation is imposed on the process size: it cannot be larger than the
size of the largest partition.
The degree of multiprogramming is less
By degree of multiprogramming, we simply mean the maximum number of processes that can be
loaded into the memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and
very low, since the size of a partition cannot be varied according to the size of the processes.
Dynamic Partitioning
The size of each partition will be equal to the size of the process.
The partition size varies according to the need of the process so that the internal fragmentation can be
avoided.
Advantages
No Internal Fragmentation
Given that the partitions in dynamic partitioning are created according to the need of the process, it
is clear that there will not be any internal fragmentation, because there will not be any unused remaining space in
the partition.
No limitation on the size of the process
In fixed partitioning, a process with a size greater than the size of the largest partition could not be
executed due to the lack of sufficient contiguous memory. In dynamic partitioning, the process size is not
restricted, since the partition size is decided according to the process size.
The degree of multiprogramming is dynamic
Due to the absence of internal fragmentation, there will not be any unused space in the partitions, hence
more processes can be loaded in the memory at the same time.
Disadvantages
External Fragmentation
The total unused space of the various partitions cannot be used to load a process; even though
space is available, it is not contiguous.
Complex memory allocation
In fixed partitioning, the list of partitions is made once and never changes, but in dynamic
partitioning, allocation and deallocation are very complex, since the partition size varies
every time a partition is assigned to a new process. The OS has to keep track of all the partitions.
Compaction
Compaction is used to minimize the probability of external fragmentation.
In compaction, all the free partitions are made contiguous and all the loaded partitions are brought
together.
By applying this technique, The free partitions are merged which can now be allocated according to the
needs of new processes.
The efficiency of the system is decreased in the case of compaction, because all the free spaces
are transferred from several places to a single place.
A huge amount of time is invested in this procedure, and the CPU remains idle for all this time.
Although compaction avoids external fragmentation, it makes the system inefficient.
Partition Allocation
Memory is divided into different blocks or partitions, and each process is allocated a partition according to its
requirement. Choosing the partition carefully helps to reduce internal fragmentation.
First Fit: In this type fit, the partition is allocated, which is the first sufficient block from the
beginning of the main memory.
Best Fit: It allocates the process to the smallest sufficient partition among the free
partitions.
Worst Fit: It allocates the process to the partition, which is the largest sufficient freely available
partition in the main memory.
Next Fit: It is mostly similar to First Fit, but it searches for the first sufficient partition
starting from the last allocation point.
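The four placement strategies can be sketched as small search routines over a list of free block sizes. This is a minimal illustration; the function and variable names are my own, not from any standard library:

```python
def first_fit(blocks, size):
    # Scan from the beginning; take the first block large enough.
    for i, b in enumerate(blocks):
        if b >= size:
            return i
    return -1

def best_fit(blocks, size):
    # Choose the smallest block that is still large enough.
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(candidates)[1] if candidates else -1

def worst_fit(blocks, size):
    # Choose the largest sufficient block.
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(candidates)[1] if candidates else -1

def next_fit(blocks, size, last):
    # Like first fit, but start scanning from the last allocation point.
    n = len(blocks)
    for k in range(n):
        i = (last + k) % n
        if blocks[i] >= size:
            return i
    return -1

free_blocks = [100, 500, 200, 300, 600]   # free partition sizes in KB
print(first_fit(free_blocks, 212))        # 1 (the 500 KB block)
print(best_fit(free_blocks, 212))         # 3 (the 300 KB block)
print(worst_fit(free_blocks, 212))        # 4 (the 600 KB block)
```

First Fit favors speed, Best Fit minimizes leftover space in the chosen block, and Worst Fit leaves the largest possible remainder for future requests.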
Non Contiguous Memory Allocation
There are two popular techniques used for non-contiguous memory allocation-
Paging
Paging is a storage mechanism that allows OS to retrieve processes from the secondary storage
into the main memory in the form of pages.
It eliminates the need for contiguous allocation of physical memory. This scheme permits the
physical address space of a process to be non – contiguous.
Logical Address or Virtual Address (represented in bits): An address generated by the CPU
Logical Address Space or Virtual Address Space( represented in words or bytes): The set of all
logical addresses generated by a program
Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
The mapping from virtual to physical address is done by the memory management unit (MMU)
The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
In the Paging method, the main memory is divided into small fixed-size blocks of physical memory, called frames.
For example, if the main memory size is 16 KB and the frame size is 1 KB, the main memory will be divided into 16 frames of 1 KB each.
There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each.
All the processes are divided into pages of 1 KB each, so that the operating system can store one page in one frame.
At the beginning, all the frames are empty, so all the pages of the processes can be stored in a contiguous way.
Later, suppose two of the processes (say A2 and A4) are swapped out; therefore, eight frames become empty, and other pages can be loaded into those empty blocks.
The process A5 of size 8 pages (8 KB), waiting in the ready queue, can then be loaded into those eight frames.
Address Translation Scheme
Address generated by CPU is divided into
Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number
Page offset(d): Number of bits required to represent particular word in a page or page size of
Logical Address Space or word number of a page or page offset.
Frame number(f): Number of bits required to represent the frame of Physical Address Space or
Frame number.
Frame offset(d): Number of bits required to represent particular word in a frame or frame size of
Physical Address Space or word number of a frame or frame offset.
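The split of a logical address into page number and offset, and its mapping to a physical address via the page table, can be sketched as follows (the page size and the page-table contents are assumed values for illustration):

```python
PAGE_SIZE = 1024  # assume 1 KB pages for illustration

def translate(logical_address, page_table):
    p = logical_address // PAGE_SIZE   # page number (high-order bits)
    d = logical_address % PAGE_SIZE    # page offset (low-order bits)
    f = page_table[p]                  # frame number from the page table
    return f * PAGE_SIZE + d           # physical address = frame base + offset

page_table = {0: 5, 1: 2, 2: 7}        # hypothetical page -> frame mapping
print(translate(2100, page_table))     # page 2, offset 52 -> 7*1024 + 52 = 7220
```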
Paging Hardware With TLB
(Translation Lookaside Buffer )
Translation Lookaside Buffer (TLB) is nothing but a special cache used to keep track of recently used transactions.
TLB contains page table entries that have been most recently used.
Steps in TLB hit:
• CPU generates virtual (logical) address.
• It is checked in TLB (present).
• Corresponding frame number is retrieved, which now tells where the main memory page lies.
Steps in TLB miss:
• CPU generates virtual (logical) address.
• It is checked in TLB (not present).
• Now the page number is matched to page table residing in main memory (assuming page table contains all
PTE).
• Corresponding frame number is retrieved, which now tells where the main memory page lies.
• The TLB is updated with the new PTE (if space is not available, one of the replacement techniques comes into the
picture, i.e. FIFO, LRU, MFU, etc.).
Translation look aside buffer (TLB)…
Translation look aside buffer (TLB) is a special, small, fast-lookup hardware cache
and the TLB is associative, high-speed, memory.
Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024.
When the associative memory is presented with an item, the item is compared with
all keys simultaneously.
A Translation look aside buffer can be defined as a memory cache which can be used
to reduce the time taken to access the page table again and again.
However, if the entry is not found in the TLB (TLB miss), then the CPU has to access the page
table in the main memory and then access the actual frame in the main memory.
Therefore, in the case of a TLB hit, the effective access time will be less than in the case of a
TLB miss.
In addition, we add the page number and frame number to the TLB, so that they will
be found quickly on the next reference.
If the TLB is already full of entries, the operating system must select one for
replacement. Replacement policies range from least recently used (LRU) to
random.
If the probability of TLB hit is P% (TLB hit rate) then the probability of TLB miss
(TLB miss rate) will be (1-P) %.
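Using this hit rate, the effective access time (EAT) can be computed as EAT = P(t + m) + (1 - P)(t + 2m), under the common simplifying assumption of a single-level page table, so that a TLB miss costs exactly one extra memory access. A small sketch with illustrative numbers:

```python
def effective_access_time(hit_rate, tlb_time, mem_time):
    # TLB hit:  one TLB lookup + one memory access.
    # TLB miss: TLB lookup + page-table access + actual memory access.
    hit = tlb_time + mem_time
    miss = tlb_time + 2 * mem_time
    return hit_rate * hit + (1 - hit_rate) * miss

# Example: 80% hit rate, 20 ns TLB lookup, 100 ns memory access
print(effective_access_time(0.80, 20, 100))  # 140.0 ns
```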
Segmentation
Characteristics-
In segmentation, secondary memory and main memory are divided into partitions of unequal size.
Segment table is a table that stores the information about each segment of the process.
The first column stores the size or length (limit) of the segment.
The second column stores the base address or starting address of the segment in the main memory.
Segment table base register (STBR) stores the base address of the segment table.
Translation of Logical address into physical address
CPU generates a logical address which contains two parts:
Segment Number
Offset
If the offset is less than the limit then the address is valid otherwise it throws an error as the address is invalid.
In the case of valid address, the base address of the segment is added to the offset to get the physical address
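The limit check and base addition described above can be sketched as follows (the segment-table contents are assumed values for illustration):

```python
def translate(segment_number, offset, segment_table):
    """segment_table maps segment number -> (base, limit)."""
    base, limit = segment_table[segment_number]
    if offset >= limit:
        # Offset beyond the segment limit: invalid address.
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset  # physical address = base + offset

seg_table = {0: (1400, 1000), 1: (6300, 400)}  # assumed (base, limit) pairs
print(translate(1, 53, seg_table))  # segment 1, offset 53 -> 6300 + 53 = 6353
```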
7. Paging: The page table is used to maintain the page information. Segmentation: The segment table maintains the segment information.
8. Paging: A page table entry has the frame number and some flag bits to represent details about pages. Segmentation: A segment table entry has the base address of the segment and some protection bits for the segments.
Swapping
Swapping is a memory management technique and is used to temporarily remove the
inactive programs from the main memory of the computer system.
Any process must be in the memory for its execution, but can be swapped temporarily
out of memory to a backing store and then again brought back into the memory to
complete its execution.
Swapping is done so that other processes get memory for their execution.
Due to the swapping technique, the performance usually gets affected, but it also helps
in running multiple and big processes in parallel.
It is used to improve main memory utilization. In secondary memory, the place where the
swapped-out process is stored is called swap space.
The concept of swapping is divided into two more concepts: Swap-in and Swap-out.
Swap-out is a method of removing a process from RAM and adding it to the hard disk.
Swap-in is a method of removing a program from a hard disk and putting it back into the main
memory or RAM.
• Note:
In a single tasking operating system, only one process occupies the user program area of
memory and stays in memory until the process is complete.
In a multitasking operating system, a situation arises when all the active processes
cannot fit in the main memory; then a process is swapped out of the main
memory so that other processes can enter it.
Example: Suppose the user process's size is 2048 KB, and the standard hard disk where
swapping takes place has a data transfer rate of 1 Mbps. Now we will calculate how long it will take
to transfer the process from main memory to secondary memory.
Solution: User process size is 2048 KB and the data transfer rate is 1 Mbps = 1024 Kbps
Time = process size / transfer rate
= 2048 / 1024
= 2 seconds
= 2000 milliseconds
Now taking swap-in and swap-out time, the process will take 4000 milliseconds.
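The arithmetic above can be checked with a tiny sketch (the helper name is mine; it follows the example's convention of treating the rate as 1024 KB per second):

```python
def swap_time_ms(process_kb, rate_kbps):
    # time (seconds) = process size / transfer rate, converted to ms
    return process_kb / rate_kbps * 1000

one_way = swap_time_ms(2048, 1024)  # time to swap out: 2000.0 ms
total = 2 * one_way                 # swap-out + swap-in: 4000.0 ms
print(one_way, total)
```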
Advantages/benefits of Swapping
1) In this technique, the CPU can perform several tasks simultaneously, so processes need not
wait long before they are executed.
Disadvantages of Swapping
1) If the computer system loses power, the user may lose all information related
to the program in case of substantial swapping activity.
2) If the swapping algorithm is not good, the method can increase the
number of page faults and decrease the overall processing performance.
Virtual Memory
First, virtual memory allows a computer to address more memory than the amount physically installed on the system.
Second, it allows us to have memory protection, because each virtual address is translated to a
physical address.
According to the concept of Virtual Memory, in order to execute some process, only a part of the
process needs to be present in the main memory
which means that only a few pages will be present in the main memory at any time.
However, deciding, which pages need to be kept in the main memory and which need to be kept in
the secondary memory, is going to be difficult.
This is because we cannot say in advance that a process will require a particular page at a particular time.
Therefore, to overcome this problem, a concept called Demand Paging is introduced.
It suggests keeping all the pages in the secondary memory until they are required.
In other words, it says: do not load any page into the main memory until it is required.
The main steps involved in demand paging, between the page being requested and it being
loaded into main memory, are as follows:-
1. The CPU references a page; the page table entry shows it is not in memory, causing a page-fault trap to the operating system.
2. The operating system checks that the reference is valid and locates the page on secondary storage.
3. A free frame is found (running a page replacement algorithm if necessary).
4. The required page is read from secondary storage into the frame.
5. The page table is updated to reflect the new frame.
6. The instruction that caused the page fault is restarted.
Copy-on-Write
Copy-on-Write is a technique that allows the parent and child process to initially share the same pages of
memory; these shared pages are marked as copy-on-write.
This means that if either of these processes tries to modify a shared page, only then is a copy of that
page created.
The modification is done on the copy of the page by that process, thus not affecting the
other process.
If a unit of data is copied but is not modified, then the "copy" can exist merely as a reference to the
original data.
Suppose there is a process P that creates a new process Q, and then process P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
There are two main aspects of virtual memory, Frame allocation and Page Replacement.
It is very important to have the optimal frame allocation and page replacement algorithm.
Frame allocation is all about how many frames are to be allocated to a process,
while page replacement is all about determining the page number that needs to be replaced in
the main memory to make space for a requested page.
In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when new page comes in.
Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen.
In case of page fault, Operating System might have to replace one of the existing pages with the
newly needed page.
Different page replacement algorithms suggest different ways to decide which page to replace.
The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms:
First In First Out (FIFO) –
In this algorithm, the operating system keeps track of all pages in the memory in a queue; the
oldest page is at the front of the queue. When a page needs to be replaced, the page at the front of
the queue is selected for removal.
Example: Consider the page reference string 1, 3, 0, 3, 5, 6 with 3 page frames.
Initially all slots are empty, so 1, 3, 0 are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e. 1 —> 1 Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e. 3 —> 1 Page Fault.
Belady’s anomaly –
Belady’s anomaly proves that it is possible to have more page faults when increasing the number of
page frames while using the First in First Out (FIFO) page replacement algorithm.
For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total
page faults, but if we increase slots to 4, we get 10 page faults.
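The FIFO behaviour and Belady's anomaly can be verified with a short simulation of the reference string above:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` slots."""
    memory = deque()          # front of the deque = oldest page
    faults = 0
    for page in refs:
        if page in memory:
            continue          # hit: no fault, FIFO order unchanged
        faults += 1
        if len(memory) == frames:
            memory.popleft()  # evict the oldest page
        memory.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, more faults (Belady's anomaly)
```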
Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of time in the future.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the
number of page faults.
Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future.
—> 1 Page fault.
When 0 comes, it is already in memory —> 0 Page fault. When 4 comes, it takes the place of 1 because 1 is
never used again —> 1 Page fault.
For the rest of the page reference string —> 0 Page faults, because those pages are already available in the
memory. Total = 6 page faults.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot know
future requests.
The use of Optimal Page replacement is to set up a benchmark so that other replacement algorithms can be
analyzed against it.
Least Recently Used –
In this algorithm page will be replaced which is least recently used.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find
number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page faults
When 3 comes, it takes the place of 7 because 7 is the least recently used page —> 1 Page fault
When 0 comes, it is already in memory —> 0 Page fault. When 4 comes, it takes the place of 1 because 1 is
the least recently used page —> 1 Page fault.
For the rest of the page reference string —> 0 Page faults, because those pages are already available in
the memory. Total = 6 page faults.
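Both policies can be simulated to confirm the fault counts for the reference string used in the examples above. A minimal sketch; ties in the optimal policy are broken by evicting the first such page, matching the worked example:

```python
def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the page whose next use is farthest in the future
            # (or which is never used again).
            future = refs[i + 1:]
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future
                         else len(future) + 1)
            memory.remove(victim)
        memory.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = [], 0    # ordered by recency, last = most recent
    for page in refs:
        if page in memory:
            memory.remove(page)   # refresh recency on a hit
            memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.pop(0)         # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6
print(lru_faults(refs, 4))      # 6
```

Here LRU happens to match the optimal count on this string; in general LRU can only do worse than (or equal to) optimal.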
Allocation of frames
Frame allocation algorithms are used when there are multiple processes; they help decide how many frames
to allocate to each process.
There are various constraints to the strategies for the allocation of frames:
You cannot allocate more than the total number of available frames.
First, if fewer frames are allocated, the page fault ratio increases,
decreasing the performance of the execution of the process.
Secondly, there should be enough frames to hold all the different pages that any single instruction can
reference.
Frame Allocation Strategies
EQUAL ALLOCATION:
In equal allocation, every process is allocated an equal number of frames.
This is not very useful, as not every process requires an equal number of frames;
some processes may require extra frames whereas some processes may require fewer frames.
PROPORTIONAL ALLOCATION:
Depending on the size of the process, the number of frames is allocated accordingly; a
process of larger size is given a greater number of frames.
For example: available processes of size P1: 20 Pages, P2: 30 Pages, P3: 50 Pages
Available frames: 10
P1 = 20/100*10 = 2
P2 = 30/100*10 = 3
P3 = 50/100*10 = 5
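The proportional split above can be sketched as follows (integer division, dropping remainders, as in the example; the function name is my own):

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames in proportion to each process's size (integer
    floor; remainder frames are ignored in this sketch)."""
    total = sum(sizes)
    return [size * total_frames // total for size in sizes]

print(proportional_allocation([20, 30, 50], 10))  # [2, 3, 5]
```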
PRIORITY ALLOCATION:
Frames are allocated according to the priority of the process. Suppose process P1 has a higher priority
than process P2 and requires more frames; then P1 pulls out frames from the lower-priority process P2.
LOCAL PAGE-REPLACEMENT:
Local page replacement strategy works as static allocation. Whenever we need to replace a page
from the main memory then we will replace the page only from the frames which are allocated to
that particular process without disturbing any other pages of other processes.
GLOBAL PAGE-REPLACEMENT:
This strategy works differently from the local page replacement strategy: while replacing any page,
we consider all the available frames in the main memory for replacement, not only the frames of the
faulting process.
Thrashing
Thrashing is a condition or a situation when the system is spending a major portion of its time in
servicing the page faults, but the actual processing done is very negligible.
Working Set:
The set of pages referenced in the most recent Δ page references is known as the working set.
If a page is no longer being used, then it will drop out of the working set Δ time units after its last
reference.
Page Fault Frequency:
When the Page fault is too high, then we know that the process needs more frames. Conversely, if
the page fault-rate is too low then the process may have too many frames.
We can establish upper and lower bounds on the desired page faults.
If the actual page-fault rate exceeds the upper limit, then we will allocate the process another
frame.
And if the page-fault rate falls below the lower limit, then we can remove a frame from the
process.
Thus with this, we can directly measure and control the page fault rate in order to prevent thrashing.
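This upper/lower-bound policy can be sketched as a simple control rule (the threshold values and function name are illustrative assumptions, not from the text):

```python
def adjust_frames(fault_rate, frames, upper=0.10, lower=0.02):
    """Page-fault-frequency control: grow the allocation when the fault
    rate is above the upper bound, shrink it when below the lower bound.
    Thresholds are illustrative assumptions."""
    if fault_rate > upper:
        return frames + 1     # process needs more frames
    if fault_rate < lower and frames > 1:
        return frames - 1     # process can give up a frame
    return frames             # fault rate within the desired band

print(adjust_frames(0.15, 8))  # 9 (too many faults: add a frame)
print(adjust_frames(0.01, 8))  # 7 (too few faults: remove a frame)
print(adjust_frames(0.05, 8))  # 8 (within bounds: unchanged)
```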