COOS UNIT-IV (Part-2)
Memory Management:
Swapping, Contiguous Memory Allocation, Paging, Structure of the Page Table, Segmentation, Virtual Memory, Demand Paging, Page-Replacement Algorithms, Allocation of Frames, Thrashing, Case Studies - UNIX, Linux, Windows
Swapping:
Swapping is the process of temporarily moving a process from main memory, which is fast, to secondary memory, which is slower but larger. Because RAM is limited in size, processes that are inactive are transferred to secondary memory. The main cost of swapping is the transfer time, and the total transfer time is directly proportional to the amount of memory swapped.
Swap-in: the operation that moves a process from secondary storage (hard disk) into main memory (RAM).
Swap-out: the operation that takes a process out of main memory and places it in secondary memory.
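As a rough illustration of how swap time scales with the amount of memory moved, the short sketch below assumes a hypothetical 100 MB process image and a 50 MB/s transfer rate to the backing store; both numbers are made up for the example.

```python
# Rough sketch: swap time grows linearly with the amount of memory moved.
# The process size and transfer rate below are assumed values, not taken from the notes.

def swap_time_seconds(process_size_mb, transfer_rate_mb_per_s):
    """Approximate one-way transfer time for a process image."""
    return process_size_mb / transfer_rate_mb_per_s

size_mb = 100     # assumed size of the swapped process image
rate_mb_s = 50    # assumed transfer rate to/from secondary storage

swap_out = swap_time_seconds(size_mb, rate_mb_s)
swap_in = swap_time_seconds(size_mb, rate_mb_s)
print(f"swap-out: {swap_out:.1f} s, swap-in: {swap_in:.1f} s, total: {swap_out + swap_in:.1f} s")
```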
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes do not
have to wait very long before they are executed.
4. It improves the main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power, the user may lose all information related to the program in
case of substantial swapping activity.
2. If the swapping algorithm is not good, the number of page faults can increase and the overall processing performance can decrease.
Contiguous and Non-Contiguous Memory Allocation:
Memory is a large collection of bytes, and memory allocation refers to assigning memory space to computer applications. Memory allocation is of two types:
1. Contiguous and
2. Non-contiguous memory allocation.
Contiguous memory allocation assigns a process a single, continuous block of memory. On the other hand, non-contiguous memory allocation assigns the process to distinct memory sections at numerous memory locations.
This section covers contiguous and non-contiguous memory allocation with their advantages, disadvantages, and differences.
Contiguous Memory Allocation:
Contiguous memory allocation is a memory allocation method in which, when a process requests memory, a single contiguous section of memory blocks is allotted depending on its requirements.
It is implemented by partitioning the memory into fixed-sized partitions and assigning each partition to a single process. However, this limits the degree of multiprogramming to the number of fixed partitions created in memory.
This allocation also leads to internal fragmentation. For example, suppose a fixed-sized memory block
assigned to a process is slightly bigger than its demand. In that case, the remaining memory space in the
block is referred to as internal fragmentation. When a process in a partition finishes, the partition becomes
accessible for another process to run.
The OS preserves a table showing which memory partitions are free and occupied by processes in the
variable partitioning scheme. Contiguous memory allocation speeds up process execution by decreasing
address translation overheads.
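A minimal sketch of such a free/occupied partition table is shown below, using a simple first-fit search; the partition sizes and the first-fit policy are illustrative assumptions rather than anything specified in these notes.

```python
# Minimal sketch of a variable-partition table with a first-fit search.
# Partition layout (in KB) and the first-fit policy are illustrative assumptions.

partitions = [
    {"start": 0,   "size": 200, "free": True},
    {"start": 200, "size": 100, "free": False},  # already occupied by some process
    {"start": 300, "size": 400, "free": True},
]

def first_fit(request_kb):
    """Return the start address of the first free partition big enough, or None."""
    for part in partitions:
        if part["free"] and part["size"] >= request_kb:
            part["free"] = False        # mark the whole partition as occupied
            return part["start"]
    return None                         # no hole large enough -> request fails

print(first_fit(150))   # 0   -> placed in the first 200 KB hole
print(first_fit(250))   # 300 -> only the 400 KB hole can satisfy this request
```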
There are various advantages and disadvantages of contiguous memory allocation. Some of the
advantages and disadvantages are as follows:
Advantages
1. It is simple to keep track of how many memory blocks are left, which determines how many more
processes can be granted memory space.
2. The read performance of contiguous memory allocation is good because the complete file may be
read from the disk in a single task.
3. The contiguous allocation is simple to set up and performs well.
4. Fragmentation isn't a problem because every new file may be written to the end of the disk after the previous one.
Disadvantages
1. When generating a new file, its eventual size must be known in advance in order to select a hole of the appropriate size.
2. When the disk fills up, it may become necessary to compact or reuse the spare space in the holes.
Non-Contiguous Memory Allocation:
The two methods for making a process's physical address space non-contiguous are paging and segmentation. Non-contiguous memory allocation divides the process into blocks (pages or segments) that are allocated to different areas of memory space based on memory availability.
Non-contiguous memory allocation can decrease memory wastage, but it raises address translation overheads. Because the portions of a process are stored in separate locations in memory, execution is slowed, since time is consumed in address translation.
There are various advantages and disadvantages of non-contiguous memory allocation. Some of the
advantages and disadvantages are as follows:
Advantages
1. It reduces memory wastage, because the blocks of a process can be placed wherever free space is available.
Disadvantages
1. It increases overhead because of address translation, and it slows down memory access because time is consumed in translating addresses.
2. Access is also slow because the scattered blocks must be reached through pointers and traversed.
Contiguous Memory Allocation | Non-Contiguous Memory Allocation
In most cases, the operating system keeps a table that lists all available and occupied partitions. | Each process must keep a table that mainly contains the base address of each block it has acquired in memory.
The operating system can control contiguous memory allocation more easily. | Non-contiguous memory allocation is more difficult for the operating system to manage.
The memory space is partitioned into fixed-sized partitions, and each partition is assigned to only one process. | The process is broken into several blocks, which are then placed in different areas of memory based on available memory space.
Both internal and external fragmentation occur. | The non-contiguous memory allocation method causes external fragmentation.
Paging:
In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
The main idea behind paging is to divide each process into pages. The main memory is likewise divided into frames.
One page of the process is stored in one of the frames of memory. The pages can be stored at different locations of the memory, but the preference is always to find contiguous frames (holes).
Pages of the process are brought into the main memory only when they are required; otherwise, they reside in secondary storage.
Different operating systems define different frame sizes, but within a system all frames must be of equal size. Because pages are mapped onto frames in paging, the page size must be the same as the frame size.
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each, so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes are stored in a contiguous way.
Frames, pages and the mapping between the two are shown in the accompanying figure.
Let us consider that P2 and P4 are moved to the waiting state after some time. Now, 8 frames become empty, and therefore other pages can be loaded in their place. The process P5, of size 8 KB (8 pages), is waiting inside the ready queue.
Since we have 8 non-contiguous frames available in memory, and paging provides the flexibility of storing a process's pages at different places, we can load the pages of process P5 in the place of P2 and P4 (see the sketch below).
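The scenario above can be sketched directly: 16 frames of 1 KB, four 4-page processes, P2 and P4 leaving, and P5's eight pages dropping into the freed, non-contiguous frames. The first-free-frame search is an assumption made just for this illustration.

```python
# Sketch of the 16 KB / 1 KB-frame example: 16 frames, P1-P4 of 4 pages each,
# then P2 and P4 leave and P5 (8 pages) takes whatever frames are free.

NUM_FRAMES = 16
frames = [None] * NUM_FRAMES              # frame number -> (process, page) or None

def load(process, num_pages):
    """Place each page of the process in any free frame (need not be contiguous)."""
    placed = []
    for page in range(num_pages):
        frame = frames.index(None)        # first free frame; ValueError if none left
        frames[frame] = (process, page)
        placed.append(frame)
    return placed

def unload(process):
    """Free every frame held by the process (e.g. when it is swapped out)."""
    for i, entry in enumerate(frames):
        if entry is not None and entry[0] == process:
            frames[i] = None

for p in ("P1", "P2", "P3", "P4"):
    load(p, 4)                            # initially the frames fill contiguously

unload("P2")
unload("P4")                              # 8 scattered frames become free
print(load("P5", 8))                      # -> [4, 5, 6, 7, 12, 13, 14, 15]
```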
Page Table:
A page table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses.
Logical addresses are generated by the CPU for the pages of the processes; therefore, they are generally used by the processes.
Physical addresses are the actual frame addresses in memory. They are generally used by the hardware or, more specifically, by the RAM subsystem.
In this situation, a unit named the Memory Management Unit (MMU) comes into the picture. It converts the page number of the logical address to the frame number of the physical address. The offset remains the same in both addresses.
To perform this task, the Memory Management Unit needs a special kind of mapping, which is provided by the page table. The page table stores the frame number corresponding to each page number.
In other words, the page table maps a page number to its actual location (frame number) in memory. The accompanying figure shows how the required word of the frame is accessed with the help of the offset.
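A small sketch of that translation is given below, using the 1 KB page size from the example above (so a 10-bit offset); the page-table contents here are assumed values chosen only to make the arithmetic concrete.

```python
# Sketch of the MMU's job: split the logical address into page number and offset,
# look the page up in the page table, and keep the offset unchanged.
# The page-table entries below are assumed values for illustration.

PAGE_SIZE = 1024                          # 1 KB pages and frames, as in the example
OFFSET_BITS = 10                          # log2(1024)

page_table = {0: 5, 1: 9, 2: 2, 3: 14}    # page number -> frame number (assumed)

def translate(logical_address):
    page = logical_address >> OFFSET_BITS
    offset = logical_address & (PAGE_SIZE - 1)
    frame = page_table[page]              # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + offset     # the offset is the same in both addresses

print(hex(translate(0x0C25)))             # page 3, offset 0x025 -> frame 14 -> 0x3825
```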
Segmentation:
In operating systems, segmentation is a memory management technique in which the memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.
The details about each segment are stored in a table called a segment table. The segment table itself is stored in one (or more) of the segments.
Till now, we have been using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages regardless of the fact that a process may have related parts (for example, functions) that need to be loaded in the same page.
The operating system doesn't care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment contains the same type of content; for example, the main function can be included in one segment and the library functions in another segment.
With segmentation, the CPU-generated logical address contains two parts:
1. Segment Number
2. Offset
Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 bytes, and the maximum number of segments that can be referenced is 16.
When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; space information is obtained from the free list maintained by the memory manager. It then tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system can easily translate a logical address into a physical address during execution of a program.
The segment number is used to index the segment table. The limit of the respective segment is compared with the offset. If the offset is less than the limit, the address is valid; otherwise, an error is raised because the address is invalid.
The accompanying figure shows how address translation is done in the case of segmentation.
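Translated into code under the 16-bit split described above (4 bits of segment number, 12 bits of offset), the check looks like the sketch below; the (base, limit) values in the segment table are assumed numbers used only for illustration.

```python
# Sketch of segmented address translation: extract segment number and offset,
# compare the offset with the segment's limit, then add the base if valid.
# The (base, limit) entries are assumed values for illustration.

OFFSET_BITS = 12                                   # 16-bit address: 4 + 12 split

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}  # seg -> (base, limit)

def translate(logical_address):
    seg = logical_address >> OFFSET_BITS
    offset = logical_address & ((1 << OFFSET_BITS) - 1)
    base, limit = segment_table[seg]
    if offset >= limit:                            # offset must be less than the limit
        raise MemoryError("invalid address: offset exceeds segment limit")
    return base + offset                           # valid -> physical address

print(translate((2 << OFFSET_BITS) | 53))          # segment 2, offset 53 -> 4300 + 53 = 4353
```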
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than the page table in paging.
Disadvantages of Segmentation
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.
Paging | Segmentation
Paging divides a program into fixed-size pages. | Segmentation divides a program into variable-size segments.
The logical address is divided into a page number and a page offset. | The logical address is divided into a segment number and a segment offset.
A page table is used to maintain the page information. | A segment table maintains the segment information.
A page table entry has the frame number and some flag bits to represent details about the page. | A segment table entry has the base address of the segment and some protection bits for the segment.
Virtual Memory:
Virtual memory is a storage scheme that gives the user the illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes bigger than the available main memory, because of the illusion that enough memory is available to load the process.
Instead of loading one big process in the main memory, the Operating System loads the different parts of
more than one process in the main memory.
By doing this, the degree of multiprogramming will be increased and therefore, the CPU utilization will
also be increased.
In the modern world, virtual memory has become quite common. In this scheme, whenever some pages need to be loaded into main memory for execution and memory is not available for that many pages, then instead of preventing the pages from entering main memory, the OS searches for areas of RAM that have been least used recently, or are not currently referenced, and copies them to secondary memory to make space for the new pages in main memory.
The CPU contains a register (the page-table base register) that holds the base address of the page table, which is 5 in the case of P1 and 7 in the case of P2. This page-table base address is added to the page number of the logical address when accessing the actual corresponding page-table entry.
Demand Paging
Demand paging is a popular method of virtual memory management. In demand paging, the pages of a process that are least used are kept in secondary memory.
A page is copied into main memory only when it is demanded, that is, when a page fault occurs. There are various page replacement algorithms which are used to determine which pages will be replaced; they are discussed in detail below.
Page Replacement Algorithms:
In virtual memory management, page replacement algorithms play an important role. The main objective of all page replacement policies is to minimize the number of page faults.
Page Fault – A page fault occurs when a running program attempts to access a memory page that is mapped into its virtual address space but is not currently loaded into physical memory.
The page replacement technique uses the following approach: if there is no free frame, we find one that is not currently being used and free it. A frame can be freed by writing its contents to swap space and then changing the page table to indicate that the page is no longer in memory.
1. First of all, find the location of the desired page on the disk.
2. Find a free frame: a) If there is a free frame, use it. b) If there is no free frame, use a page-replacement algorithm to select a victim frame. c) Write the victim frame to the disk and update the page table and frame table accordingly.
3. Read the desired page into the newly freed frame and update the page and frame tables.
4. Restart the process.
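The four steps can be sketched as below. The disk helpers are trivial stand-ins for real I/O, and pick_victim simply evicts the lowest-numbered occupied frame; a real system would plug in one of the replacement algorithms discussed below.

```python
# Sketch of the page-fault service steps listed above. The disk helpers are
# stubs standing in for real I/O; pick_victim is a placeholder policy.

def find_on_disk(page):
    return f"swap-slot-{page}"                     # stub: where the page lives on disk

def write_to_swap(page):
    print(f"write page {page} to swap")            # stub: write the victim page out

def read_from_disk(location, frame):
    print(f"read {location} into frame {frame}")   # stub: read the wanted page in

def pick_victim(frame_table):
    return min(frame_table)                        # placeholder replacement policy

def service_page_fault(page, free_frames, page_table, frame_table):
    location = find_on_disk(page)                  # 1. locate the desired page on disk
    if free_frames:                                # 2a. a free frame exists: use it
        frame = free_frames.pop()
    else:                                          # 2b. no free frame: select a victim
        frame = pick_victim(frame_table)
        victim = frame_table.pop(frame)
        write_to_swap(victim)                      # 2c. write the victim out and
        page_table.pop(victim, None)               #     mark it as no longer in memory
    read_from_disk(location, frame)                # 3. bring the desired page in and
    page_table[page] = frame                       #    update the page and frame tables
    frame_table[frame] = page
    return frame                                   # 4. the faulting process is restarted

page_table, frame_table = {7: 0, 8: 1}, {0: 7, 1: 8}
service_page_fault(9, free_frames=[], page_table=page_table, frame_table=frame_table)
```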
The page replacement algorithm decides which memory page is to be replaced. The process of
replacement is sometimes called swap out or write to disk. Page replacement is done when the requested
page is not found in the main memory (page fault).
1. If the number of frames allocated to a process is not sufficient, there can be a problem of thrashing. Due to the lack of frames, most of the pages will not be residing in main memory, and therefore more page faults will occur.
2. If the page replacement algorithm is not optimal, there will also be a problem of thrashing. If the pages that are replaced by the requested pages are referenced again in the near future, there will be a larger number of swap-ins and swap-outs, and the OS has to perform more replacements than usual, which causes a performance deficiency. Therefore, the task of a good page replacement algorithm is to choose pages in a way that limits thrashing.
1. Optimal Page Replacement algorithm → this algorithm replaces the page that will not be referenced for the longest time in the future. Although it is not practically implementable, it can be used as a benchmark; other algorithms are compared to it in terms of optimality.
2. Least Recently Used (LRU) page replacement algorithm → this algorithm replaces the page that has not been referenced for the longest time. It is the opposite of the optimal page replacement algorithm: we look at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page that was assigned a frame first is replaced first. In other words, the page at the front of the queue (the oldest page) is replaced on every page fault.
Q. Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which one of the following is true with respect to the page replacement policies First-In-First-Out (FIFO) and Least Recently Used (LRU)?
A. Both incur the same number of page faults
B. FIFO incurs 2 more page faults than LRU
C. LRU incurs 2 more page faults than FIFO
D. FIFO incurs 1 more page fault than LRU
Number of frames = 5
FIFO
According to FIFO, the page that comes into memory first goes out first.
LRU
According to LRU, the page which has not been requested for a long time will get replaced with the new
one.
The number of page faults is equal in both cases (9 each); therefore, the answer is option (A).
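The answer can be checked with a short simulation of both policies on the given reference string; with 5 frames, both FIFO and LRU come out to 9 page faults here, matching option (A).

```python
# Simulation of the question above: 5 frames, reference string
# 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3 -> 9 faults under both policies.
from collections import OrderedDict

refs = [3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3]
FRAMES = 5

def fifo_faults(refs, frames):
    memory, faults = [], 0                  # list order = arrival order
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)               # evict the page that was loaded first
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0       # order = recency, last item = most recent
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used page
            memory[page] = True
    return faults

print("FIFO page faults:", fifo_faults(refs, FRAMES))   # 9
print("LRU page faults :", lru_faults(refs, FRAMES))    # 9
```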
Allocation of Frames:
There are various constraints on strategies for the allocation of frames:
You cannot allocate more than the total number of available frames.
At least a minimum number of frames should be allocated to each process. This constraint is supported by two reasons. The first reason is that, as fewer frames are allocated, the page-fault rate increases, degrading the performance of the process's execution. Secondly, there should be enough frames to hold all the different pages that any single instruction can reference.
Frame allocation algorithms –
The two algorithms commonly used to allocate frames to a process are:
1. Equal allocation: In a system with x frames and y processes, each process gets an equal number of frames, i.e., x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames. The three frames which are not allocated to any process can be used as a free-frame buffer pool.
Disadvantage: In systems with processes of varying sizes, it does not make much sense to give
each process equal frames. Allocation of a large number of frames to a small process will
eventually lead to the wastage of a large number of allocated unused frames.
2. Proportional allocation: Frames are allocated to each process according to the process size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S is the sum of the
sizes of all the processes and m is the number of frames in the system. For instance, in a system with
62 frames, if there is a process of 10KB and another process of 127KB, then the first process will be
allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62 = 57 frames.
Advantage: All the processes share the available frames according to their needs, rather than
equally.
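The 62-frame example above can be computed directly, as in the short sketch below; the floor of each share leaves one frame over, which in practice could go to a free-frame pool.

```python
# Proportional allocation: each process gets (size / total size) * total frames,
# rounded down. Numbers are taken from the example above (62 frames, 10 KB and 127 KB).

def proportional_allocation(sizes_kb, total_frames):
    total_size = sum(sizes_kb)
    return [int(size / total_size * total_frames) for size in sizes_kb]

print(proportional_allocation([10, 127], 62))   # [4, 57]; the leftover frame stays free
```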
Thrashing:
If page faults and swapping happen very frequently, the operating system has to spend more and more time swapping these pages. This state of the operating system is termed thrashing. Because of thrashing, CPU utilization is reduced.
Let's understand this with an example: if a process does not have the number of frames it needs to support its pages in active use, then it will quickly page fault. At this point, the process must replace some page. However, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, the process quickly faults again, and again, and again, replacing pages that it must bring back in immediately. This high paging activity by a process is called thrashing.
During thrashing, the CPU spends less time on actual productive work and more time swapping.
Figure: Thrashing
By:
Dr. M. Ramu
Associate Professor
Dept. of IT
TKREC-R9
Hyderabad