Unit 3
Memory management is the process of controlling and coordinating computer memory, assigning
portions known as blocks to various running programs to optimize the overall performance of the system.
It is one of the most important functions of an operating system, managing primary memory. It moves
processes back and forth between main memory and the disk during execution, and it lets the OS keep
track of every memory location, irrespective of whether it is allocated to some process or remains free.
• It determines how much memory needs to be allocated to processes and decides which
process should get memory at what time.
• It tracks whenever memory gets freed or unallocated and updates the status accordingly.
• It allocates space to application routines.
• It makes sure that these applications do not interfere with each other.
• It helps protect different processes from each other.
• It places programs in memory so that memory is utilized to its full extent.
• Obviously memory accesses and memory management are a very
important part of modern computer operation. Every instruction has
to be fetched from memory before it can be executed, and most
instructions involve retrieving data from memory or storing data in
memory or both.
• The advent of multi-tasking OSes compounds the complexity of
memory management: as processes are swapped in and out of the
CPU, so must their code and data be swapped in and out of memory,
all at high speed and without interfering with any other processes.
• Shared memory, virtual memory, the classification of memory as
read-only versus read-write, and concepts like copy-on-write forking
all further complicate the issue.
1 Basic Hardware
• It should be noted that from the memory chip's point of view, all
memory accesses are equivalent. The memory hardware doesn't know
what a particular part of memory is being used for, nor does it care.
This is almost true of the OS as well, although not entirely.
• The CPU can only access its registers and main memory. It cannot,
for example, make direct access to the hard drive, so any data stored
there must first be transferred into the main memory chips before the
CPU can work with it. ( Device drivers communicate with their
hardware via interrupts and "memory" accesses, sending short
instructions for example to transfer data from the hard drive to a
specified location in main memory. The disk controller monitors the
bus for such instructions, transfers the data, and then notifies the CPU
that the data is there with another interrupt, but the CPU never gets
direct access to the disk. )
• Memory accesses to registers are very fast, generally one clock tick,
and a CPU may be able to execute more than one machine instruction
per clock tick.
• Memory accesses to main memory are comparatively slow, and may
take a number of clock ticks to complete. This would require
intolerable waiting by the CPU if it were not for an intermediary fast
memory cache built into most modern CPUs. The basic idea of the
cache is to transfer chunks of memory at a time from the main
memory to the cache, and then to access individual memory locations
one at a time from the cache.
• User processes must be restricted so that they only access memory
locations that "belong" to that particular process. This is usually
implemented using a base register and a limit register for each
process, as shown in Figures 8.1 and 8.2 below. Every memory
access made by a user process is checked against these two registers,
and if a memory access is attempted outside the valid range, then a
fatal error is generated. The OS obviously has access to all existing
memory locations, as this is necessary to swap users' code and data in
and out of memory. It should also be obvious that changing the
contents of the base and limit registers is a privileged activity,
allowed only to the OS kernel.
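The base/limit check described above can be sketched in a few lines. This is an illustrative Python model, not real hardware; the register values (300040 and 120900) are only example numbers in the spirit of Figure 8.1.

```python
BASE = 300040    # base register: first physical address owned by the process
LIMIT = 120900   # limit register: size of the process's address range

def check_access(address):
    """Mimic the hardware check made on every user-mode memory access."""
    if BASE <= address < BASE + LIMIT:
        return True                      # access proceeds normally
    # outside the valid range: trap to the OS, fatal error for the process
    raise MemoryError("addressing error: trap to operating system")

print(check_access(300040))       # first legal address -> True
print(check_access(420939))       # last legal address (BASE + LIMIT - 1) -> True
```

Any address below BASE or at BASE + LIMIT or beyond raises the trap, which is exactly why loading these two registers must be a privileged operation.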
Address Binding
TOPIC-2 Swapping
The total time taken by the swapping process includes the time it takes to move the
entire process to secondary disk and then to copy the process back to memory, as
well as the time the process takes to regain main memory.
Let us assume that the user process is of size 2048 KB and the standard hard disk
where swapping takes place has a data transfer rate of around 1 MB per second. The
actual transfer of the 2048 KB process to or from memory will take
2048 KB / 1024 KB per second
= 2 seconds
= 2000 milliseconds
Now, counting both swap-out and swap-in, it will take a full 4000 milliseconds, plus other
overhead while the process competes to regain main memory.
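The arithmetic above can be checked with a short sketch (sizes in KB, transfer rate of 1024 KB per second, ignoring the extra overhead):

```python
process_size_kb = 2048
transfer_rate_kb_s = 1024          # ~1 MB per second

one_way_ms = process_size_kb / transfer_rate_kb_s * 1000   # swap out (or in)
total_ms = 2 * one_way_ms                                  # swap out + swap in

print(one_way_ms)   # 2000.0 milliseconds one way
print(total_ms)     # 4000.0 milliseconds for out plus in
```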
Benefits of Swapping
In contiguous memory allocation, both the operating system and the user process must reside
in primary memory. Primary memory is divided into two partitions, where one partition
is for the operating system and the other is for the user program.
A single process is allocated in that fixed-size single partition. But raising the level of
multiprogramming, which means keeping more than one process in main memory, is limited
by the number of fixed partitions that can be made in memory. Internal fragmentation
increases as a consequence of contiguous memory allocation.
Types of Contiguous Memory Management Technique
Single Contiguous Allocation
For instance, the MS-DOS operating system allocates memory in this way. An
embedded system likewise runs a single application.
Partitioned Allocation
It is also called variable-size segments/partitions. The system divides primary
memory into several memory segments, which are generally contiguous areas of
memory. Each segment stores all the data for a particular task/job. This technique
consists of allocating a partition to a job when it starts and deallocating it when it
ends. In the variable-size partition scheme, memory is treated as one unit, the space
allotted to a process is exactly equal to what it requires, and the extra space can be
reused again.
Segments need hardware support in the form of a segment table. It contains the physical address
of the area in memory, the size, and other information such as access protection bits and
status.
• Allows the memory capacity to be 1 MB even though the addresses
associated with individual instructions are 16 bits wide.
• Allows the use of separate memory areas for the program code and
for the data and stack portions of the program.
• Permits a program and/or its data to be put into different
areas of memory whenever the program is run.
• Multitasking becomes simple.
2 Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-
sized partitions where each partition should contain only one process. When
a partition is free, a process is selected from the input queue and is loaded
into the free partition. When the process terminates, the partition becomes
available for another process.
Partition Allocation Methods in Memory Management
In the operating system, the following are four common memory management
techniques.
Paged memory management: Memory is divided into fixed-sized units called page
frames, used in a virtual memory environment.
Most of the operating systems (for example Windows and Linux) use Segmentation
with Paging. A process is divided into segments and individual segments have pages.
In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected. To choose a particular
partition, a partition allocation method is needed. A partition allocation method is
considered better if it avoids internal fragmentation.
When it is time to load a process into the main memory and if there is more than one
free block of memory of sufficient size then the OS decides which free block to
allocate.
A. First Fit
B. Best Fit
C. Worst Fit
D. Next Fit
1. First Fit: In the first fit, the partition is allocated which is the first sufficient block
from the top of Main Memory. It scans memory from the beginning and chooses the
first available block that is large enough. Thus it allocates the first hole that is large
enough.
2. Best Fit: Allocate the process to the partition that is the smallest sufficient
partition among the free available partitions. It searches the entire list of holes to find
the smallest hole whose size is greater than or equal to the size of the process.
3. Worst Fit: Allocate the process to the partition that is the largest sufficient one
among the freely available partitions in main memory. It is the opposite of
the best-fit algorithm. It searches the entire list of holes to find the largest hole and
allocates it to the process.
4. Next Fit: Next fit is similar to the first fit but it will search for the first sufficient
partition from the last allocation point.
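The four strategies can be sketched as hole-selection functions over a list of free-block sizes. This is an illustrative simplification: a real allocator would also split the chosen hole and maintain the free list.

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, scanning from the top."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that is still large enough."""
    fits = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the index of the largest sufficient hole."""
    fits = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return max(fits)[1] if fits else None

def next_fit(holes, size, last):
    """Like first fit, but resume scanning from the last allocation point."""
    n = len(holes)
    for k in range(n):
        i = (last + k) % n
        if holes[i] >= size:
            return i
    return None

holes = [150, 350]                 # free blocks in KB
print(first_fit(holes, 300))       # 1 (the 350K block)
print(best_fit(holes, 125))        # 0 (150K is the smallest sufficient hole)
print(worst_fit(holes, 125))       # 1 (350K is the largest hole)
```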
Exercise: Consider the requests from processes in given order 300K, 25K, 125K, and
50K. Let there be two blocks of memory available of size 150K followed by a block
size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both First fit & Best fit.
D) neither first fit nor best fit.
First Fit:
The 300K request is allocated from the 350K block; 50K is left over.
25K is allocated from the 150K block; 125K is left over.
Then 125K and 50K are allocated to the remaining leftover partitions.
So, first fit can handle all the requests.
Best Fit:
The 300K request is allocated from the 350K block (the smallest sufficient one); 50K is left over.
25K is allocated from that leftover 50K; 25K is left over.
125K is allocated from the 150K block; 25K is left over.
The 50K request cannot be satisfied, as only two 25K holes remain.
So, best fit cannot handle all the requests, and the answer is (B) First fit but not best fit.
Fragmentation
As processes are loaded into and removed from memory, the free memory space is
broken into little pieces. After some time, processes cannot be allocated to memory
blocks because the blocks are too small, and the memory blocks remain
unused. This problem is known as fragmentation.
Fragmentation is of two types −
1. Internal fragmentation: The memory block assigned to a process is bigger than
requested. Some portion of memory is left unused, as it cannot be used by another process.
2. External fragmentation: The total free memory is enough to satisfy a request, but
it is not contiguous, so it cannot be used.
The following diagram shows how fragmentation can cause waste of memory and a
compaction technique can be used to create more free memory out of fragmented
memory −
In contiguous memory allocation, whenever processes come into RAM, space
is allocated to them. These spaces in RAM are divided either on the basis of fixed
partitioning (the sizes of partitions are fixed before the process gets loaded into
RAM) or dynamic partitioning (the size of a partition is decided at run time
according to the size of the process). As processes get loaded into and removed from
memory, these spaces get broken into small pieces of memory that cannot be
allocated to incoming processes. This problem is called fragmentation. The
following sections examine how this free space and fragmentation arise in memory.
Fragmentation
Fragmentation is an unwanted problem in which memory blocks cannot be
allocated to processes due to their small size, so the blocks remain unused. Put
another way, when processes are loaded into and removed from memory they create
free spaces, or holes, in memory; these small blocks cannot be allocated to new
upcoming processes, which results in inefficient use of memory.
Basically, there are two types of fragmentation:
• Internal Fragmentation
• External Fragmentation
Internal Fragmentation
In this type of fragmentation, the process is allocated a memory block whose size is larger
than the size of the process. Due to this, some part of the memory is left unused, and this
causes internal fragmentation.
Example: Suppose fixed partitioning (i.e., memory blocks of fixed sizes) is used for
memory allocation in RAM. The block sizes are 2MB, 4MB, 4MB, and
8MB. Some part of this RAM is occupied by the operating system (OS).
Now, suppose a process P1 of size 3MB comes and it gets a memory block of size
4MB. The 1MB that is free in this block is wasted, and this space can't be utilized
for allocating memory to some other process. This is called internal fragmentation.
Example: Suppose, in the above scenario, three new processes P2, P3, and P4 arrive with
sizes 2MB, 3MB, and 6MB respectively. They are allocated memory blocks of size
2MB, 4MB, and 8MB respectively.
Analyzing this situation closely, processes P3 (unused 1MB) and P4 (unused
2MB) again cause internal fragmentation. So, a total of 4MB (1MB due to process P1
+ 1MB due to process P3 + 2MB due to process P4) is unused due to internal
fragmentation.
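The totals in this example can be recomputed directly; the pairs below restate the allocations from the text as (block size, process size) in MB:

```python
# (allocated block, process size) in MB, as in the worked example above
allocations = [(4, 3),   # P1 -> 4MB block, 1MB unused
               (2, 2),   # P2 -> 2MB block, fully used
               (4, 3),   # P3 -> 4MB block, 1MB unused
               (8, 6)]   # P4 -> 8MB block, 2MB unused

waste = [block - proc for block, proc in allocations]
print(waste)        # [1, 0, 1, 2]
print(sum(waste))   # 4 MB lost to internal fragmentation
```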
There are two types of fragmentation in an OS: internal fragmentation and external
fragmentation.
Internal Fragmentation:
Internal fragmentation happens when memory is split into fixed-sized blocks.
Whenever a process requests memory, a fixed-sized block is allotted to it. If the
memory allotted to the process is somewhat larger than the memory requested, then
the difference between the allotted and requested memory is the internal fragmentation.
The above diagram clearly shows internal fragmentation: the difference
between the memory allocated and the required space or memory is called internal
fragmentation.
External Fragmentation:
External fragmentation happens when there is a sufficient total amount of free
memory to satisfy the memory request of a process, but the request cannot be
fulfilled because the available memory is non-contiguous. Whether you apply the
first-fit or the best-fit memory allocation strategy, external fragmentation will arise.
In the above diagram, we can see that there is enough space (55 KB) to run process-07
(which requires 50 KB), but the memory fragments are not contiguous. Here, we use
compaction, paging, or segmentation to use the free space to run the process.
Internal fragmentation: the difference between the memory allocated and the required
space or memory.
External fragmentation: the unused spaces formed between non-contiguous memory
fragments, too small to serve a new process.
The system also keeps a record of all the unallocated blocks and can merge these
different-sized blocks to make one big chunk.
Advantages –
• The buddy system is easy to implement.
• It allocates a block of the correct size.
• It is easy to merge adjacent holes.
• It is fast to allocate and de-allocate memory.
Disadvantages –
• It requires all allocation units to be powers of two.
• It leads to internal fragmentation.
Example –
Consider a system using the buddy system with a physical address space of 128 KB.
Calculate the size of the partition for an 18 KB process.
Solution –
The size of the partition for the 18 KB process = 32 KB. The space is divided by 2 as long as
the resulting block can still fit 18 KB, giving the minimum block that fits.
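The halving in the solution can be sketched as a small function; it assumes the space and request sizes are given in the same unit (KB here) and that the space is a power of two:

```python
def buddy_partition_size(request_kb, space_kb):
    """Halve the space while the half can still fit the request."""
    size = space_kb
    while size // 2 >= request_kb:
        size //= 2
    return size

# 128 -> 64 -> 32; halving again (to 16 KB) would no longer fit 18 KB
print(buddy_partition_size(18, 128))   # 32
```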
Paging-
⚫ Paging is a fixed-size partitioning scheme.
⚫ In paging, secondary memory and main memory are divided into equal fixed-size
partitions.
⚫ The partitions of secondary memory are called pages.
⚫ The partitions of main memory are called frames.
⚫ Paging is a memory management scheme that eliminates the need for
contiguous allocation of physical memory. This scheme permits the physical
address space of a process to be non-contiguous.
• Logical Address or Virtual Address (represented in bits): An address
generated by the CPU
• Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program
• Physical Address (represented in bits): An address actually available on
memory unit
• Physical Address Space (represented in words or bytes): The set of all
physical addresses corresponding to the logical addresses
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G
words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical
Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4
M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical
Address = log2(2^24) = 24 bits
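These conversions are just powers of two, and the four bullet points above can be checked numerically:

```python
import math

def space_from_bits(bits):
    """An n-bit address gives an address space of 2**n words."""
    return 2 ** bits

def bits_from_space(words):
    """An address space of 2**n words needs an n-bit address."""
    return int(math.log2(words))

print(space_from_bits(31) // 2 ** 30)   # 2  (G words, since 1 G = 2^30)
print(bits_from_space(128 * 2 ** 20))   # 27 (bits, since 128 M = 2^7 * 2^20)
print(space_from_bits(22) // 2 ** 20)   # 4  (M words, since 1 M = 2^20)
print(bits_from_space(16 * 2 ** 20))    # 24 (bits)
```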
The mapping from virtual to physical address is done by the memory management
unit (MMU) which is a hardware device and this mapping is known as paging
technique.
• The Physical Address Space is conceptually divided into a number of fixed-
size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks,
called pages.
• Page Size = Frame Size
Let us consider an example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)
Address generated by CPU is divided into
• Page number (p): Number of bits required to represent the pages in the
Logical Address Space, i.e., the page number.
• Page offset (d): Number of bits required to represent a particular word in a
page, i.e., the page size of the Logical Address Space, the word number within a page,
or the page offset.
Physical Address is divided into
• Frame number (f): Number of bits required to represent the frames of the
Physical Address Space, i.e., the frame number.
• Frame offset (d): Number of bits required to represent a particular word in a
frame, i.e., the frame size of the Physical Address Space, the word number within a
frame, or the frame offset.
The hardware implementation of the page table can be done using dedicated registers.
But the use of registers for the page table is satisfactory only if the page table is small.
If the page table contains a large number of entries, then we can use a TLB (translation
look-aside buffer), a special, small, fast look-up hardware cache.
• The TLB is associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags
simultaneously. If the item is found, the corresponding value is returned.
Main memory access time = m
If the page table is kept in main memory,
Effective access time = m (for the page-table access) + m (for the actual memory access) = 2m
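With illustrative timings (m = 100 ns memory access, 20 ns TLB lookup, 80% TLB hit ratio; these numbers are assumptions, not from the text), the effective access time can be sketched as follows. The TLB formula is the common textbook model in which a hit avoids the extra page-table access:

```python
def eat_page_table_in_memory(m):
    """One access for the page-table entry plus one for the word itself."""
    return 2 * m

def eat_with_tlb(m, tlb, hit_ratio):
    """Hits cost tlb + m; misses cost tlb + 2m (page table, then the word)."""
    return hit_ratio * (tlb + m) + (1 - hit_ratio) * (tlb + 2 * m)

print(eat_page_table_in_memory(100))    # 200 ns without a TLB
print(eat_with_tlb(100, 20, 0.8))       # 0.8*120 + 0.2*220 = 140.0 ns
```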
• Each process is divided into parts where size of each part is same as page size.
• The size of the last part may be less than the page size.
• The pages of process are stored in the frames of main memory depending
upon their availability.
Example-
⚫ Consider a process that is divided into 4 pages P0, P1, P2 and P3.
⚫ Depending upon the availability, these pages may be stored in the main memory
frames in a non-contiguous fashion as shown-
Translating Logical Address into Physical Address-
⚫ CPU always generates a logical address.
⚫ A physical address is needed to access the main memory.
Following steps are followed to translate logical address into physical address-
Step-01:
CPU generates a logical address consisting of two parts-
1. Page Number
2. Page Offset
⚫ Page Number specifies the specific page of the process from which CPU wants to
read the data.
⚫ Page Offset specifies the specific word on the page that CPU wants to read.
Step-02:
• For the generated page number, the corresponding entry is located in the page
table and the frame number of the frame storing the required page is obtained.
Step-03:
• The frame number combined with the page offset forms the required physical
address.
• Frame number specifies the specific frame where the required page is stored.
• Page Offset specifies the specific word that has to be read from that page
Diagram-
The following diagram illustrates the above steps of translating logical address into
physical address-
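The three steps can be sketched with an assumed page size of 1 K words and a hypothetical page table for the 4-page process (the page-to-frame mapping below is invented for illustration):

```python
PAGE_SIZE = 1024                          # 1 K words (assumption)
page_table = {0: 5, 1: 2, 2: 7, 3: 0}     # P0..P3 -> frame numbers (hypothetical)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE     # Step-01: split the address
    page_offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]         # Step-02: page-table lookup
    return frame_number * PAGE_SIZE + page_offset  # Step-03: frame + offset

print(translate(2 * PAGE_SIZE + 100))   # page 2, offset 100 -> 7*1024 + 100 = 7268
```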
Advantages-
The advantages of paging are-
• It allows parts of a single process to be stored in a non-contiguous fashion.
• It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of paging are-
• It suffers from internal fragmentation.
• There is an overhead of maintaining a page table for each process.
• The time taken to fetch the instruction increases since now two memory
accesses are required.
Advantages of Segmentation –
• No Internal fragmentation.
• Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
• As processes are loaded and removed from the memory, the free memory
space is broken into little pieces, causing External fragmentation.
OR
Characteristics-
Example-
Segment Table-
• Segment table is a table that stores the information about each segment of the
process.
• It has two columns.
• First column stores the size or length of the segment.
• Second column stores the base address or starting address of the segment in
the main memory.
• Segment table is stored as a separate segment in the main memory.
• Segment table base register (STBR) stores the base address of the segment
table.
For the above illustration, consider the segment table is-
Here,
• Limit indicates the length or size of the segment.
• Base indicates the base address or starting address of the segment in the main
memory.
In accordance with the above segment table, the segments are stored in main
memory as-
Following steps are followed to translate logical address into physical address-
Step-01:
CPU generates a logical address consisting of two parts-
1. Segment Number
2. Segment Offset
• Segment Number specifies the specific segment of the process from which
CPU wants to read the data.
• Segment Offset specifies the specific word in the segment that CPU wants to
read.
Step-02:
• The segment offset is compared with the limit; it must always lie in the range
[0, limit-1].
• If the segment offset is smaller than the limit, the request is treated as a
valid request; otherwise, a trap is generated.
• The segment offset is then added to the base address of the segment.
• The result obtained after the addition is the address of the memory location storing
the required word.
Diagram-
The following diagram illustrates the above steps of translating logical address into
physical address-
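The limit check and base addition can be sketched with a hypothetical segment table (the limit/base values below are invented for illustration):

```python
# segment number -> (limit, base); values are assumed, not from the text
segment_table = {0: (1500, 1500), 1: (500, 6300), 2: (400, 3200)}

def translate(segment_number, segment_offset):
    limit, base = segment_table[segment_number]
    if not (0 <= segment_offset < limit):       # offset must lie in [0, limit-1]
        raise MemoryError("trap: segment offset beyond segment limit")
    return base + segment_offset                # valid: add offset to base

print(translate(2, 53))    # 3200 + 53 = 3253
```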
Advantages-
Disadvantages-
ANOTHER CONCEPT
SEGMENTATION WITH PAGING
⚫ Paging and segmentation are both non-contiguous memory allocation techniques.
• Paging divides the process into equal-size partitions called pages.
• Segmentation divides the process into unequal-size partitions called
segments.
Segmented Paging-
Working-
In segmented paging,
• Process is first divided into segments and then each segment is divided into
pages.
• These pages are then stored in the frames of main memory.
• A page table exists for each segment that keeps track of the frames storing the
pages of that segment.
• Each page table occupies one frame in the main memory.
• Number of entries in the page table of a segment = Number of pages into which
that segment is divided.
• A segment table exists that keeps track of the frames storing the page tables of
segments.
• Number of entries in the segment table of a process = Number of segments into
which that process is divided.
• The base address of the segment table is stored in the segment table base
register.
Following steps are followed to translate logical address into physical address-
Step-01:
• CPU generates a logical address consisting of three parts: segment number,
page number, and page offset.
Step-02:
• For the generated segment number, the corresponding entry is located in the
segment table, which provides the frame number of the frame storing the page table
of the referred segment.
Step-03:
• For the generated page number, corresponding entry is located in the page
table.
• Page table provides the frame number of the frame storing the required page
of the referred segment.
• The frame containing the required page is located.
Step-04:
• The frame number combined with the page offset forms the required physical
address.
• For the generated page offset, corresponding word is located in the page and
read.
Diagram-
The following diagram illustrates the above steps of translating logical address into
physical address-
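A minimal sketch of the four steps, assuming a hypothetical segment table whose entries point at per-segment page tables (the page size and all table values are invented for illustration):

```python
PAGE_SIZE = 1024                                  # assumed page/frame size
# segment number -> page table (page number -> frame number); hypothetical
segment_table = {0: {0: 3, 1: 7},
                 1: {0: 1}}

def translate(segment, page, offset):
    page_table = segment_table[segment]   # Step-02: find the segment's page table
    frame = page_table[page]              # Step-03: frame storing the page
    return frame * PAGE_SIZE + offset     # Step-04: frame number + page offset

print(translate(0, 1, 20))   # frame 7 -> 7*1024 + 20 = 7188
```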
Advantages-
Disadvantages-
Segmentation:
Segmentation is another non-contiguous memory allocation scheme, like paging. Unlike
paging, however, in segmentation the process is not divided arbitrarily into fixed-size
pages; it is a variable-size partitioning scheme. Unlike in paging, secondary memory
and main memory are not divided into partitions of equal size. The
partitions of secondary memory are called segments. The details
about every segment are held in a table called the segment table.
The segment table contains two main pieces of data about a segment: the Base, which is the
base address of the segment, and the Limit, which is the length of the
segment.
In segmentation, the CPU generates a logical address that contains a segment number and a
segment offset. If the segment offset is less than the limit, then the
address is called valid; otherwise, an error is thrown because the address is
invalid.
The above figure shows the translation of logical address to physical address.
OR
If these characteristics are present then, it is not necessary that all the pages or
segments are present in the main memory during execution. This means that the
required pages need to be loaded into memory whenever required. Virtual memory is
implemented using Demand Paging or Demand Segmentation.
OR
Therefore, instead of loading one long process in the main memory, the OS loads the various parts of
more than one process in the main memory. Virtual memory is mostly implemented with demand paging
and demand segmentation.
• Whenever your computer doesn't have space in physical memory, it writes
what it needs to remember to the hard disk, in a swap file, as virtual memory.
• If a computer running Windows needs more memory/RAM than is installed in
the system, it uses a small portion of the hard drive for this purpose.
So, in that case, instead of preventing pages from entering main memory, the
OS moves the RAM contents that have been least used in recent times, or that are
not referenced, into secondary memory, to make space for the new pages in
main memory.
Let's understand virtual memory management with the help of one example.
For example:
Let's assume that an OS requires 300 MB of memory to store all the running programs.
However, there's currently only 50 MB of available physical memory stored on the
RAM.
The OS will then set up 250 MB of virtual memory and use a program called the
Virtual Memory Manager (VMM) to manage that 250 MB.
• So, in this case, the VMM will create a file on the hard disk that is 250 MB in
size to store extra memory that is required.
• The OS will now proceed to address memory as if there were 300 MB of real
memory in the RAM, even though only 50 MB is available.
• It is the job of the VMM to manage 300 MB memory even if just 50 MB of
real memory space is available.
Advantages
Disadvantages
Number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of the simple paged management
techniques. OR
Difference between Demand Paging and Segmentation
Demand Paging:
Demand paging is similar to a paging system with swapping. In demand paging, a
page is brought into memory on demand, i.e., only when a reference is made to a
location on that page. Demand paging combines the features of simple paging and
implements virtual memory, as it provides a large virtual memory. The lazy-swapper
concept is used in demand paging: a page is not swapped into memory
unless it is required.
Segmentation:
Segmentation is an arrangement of memory management. According to
segmentation, the logical address space is a collection of segments. Each segment has
a name and a length. Each logical address has two quantities: the segment name and the
segment offset; for simplicity, we use the segment number in place of the segment name.
Since actual physical memory is much smaller than virtual memory, page faults
happen. In case of page fault, Operating System might have to replace one of the
existing pages with the newly needed page. Different page replacement algorithms
suggest different ways to decide which page to replace. The target for all algorithms is
to reduce the number of page faults.
First In First Out (FIFO) – In this algorithm, the page that arrived in memory earliest
is replaced. Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames.
Initially all slots are empty, so when 1, 3, and 0 come they are allocated to the empty slots
—> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, 1
—> 1 page fault.
6 comes; it is also not in memory, so it replaces the oldest page, 3
—> 1 page fault.
Finally, when 3 comes, it is not in memory, so it replaces 0 —> 1 page fault.
Total: 6 page faults.
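The fault count in this walkthrough can be reproduced with a short simulation:

```python
from collections import deque

def fifo_faults(reference_string, capacity):
    """Count page faults under FIFO replacement with the given frame count."""
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.discard(order.popleft())   # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6 page faults, as above
```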
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page
faults when increasing the number of page frames while using the First in First Out
(FIFO) page replacement algorithm. For example, if we consider reference string 3, 2,
1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase slots
to 4, we get 10 page faults.
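Belady's anomaly can be confirmed by running the same FIFO simulation with 3 and then 4 frames on the reference string above:

```python
from collections import deque

def fifo_faults(reference_string, capacity):
    """Count page faults under FIFO replacement with the given frame count."""
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.discard(order.popleft())   # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults: more frames, yet more faults
```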
• Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest
duration of time in the future.
Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames.
Initially all slots are empty, so when 7, 0, 1, and 2 come they are allocated to the empty
slots —> 4 page faults.
0 is already there —> 0 page fault.
When 3 comes, it takes the place of 7, because 7 is not used for the longest duration
of time in the future —> 1 page fault.
0 is already there —> 0 page fault.
4 takes the place of 1 —> 1 page fault.
For the rest of the page reference string —> 0 page faults, because the pages are already
available in memory. Total: 6 page faults.
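The optimal walkthrough can be checked by simulation (ties between pages never used again are broken arbitrarily; the fault count is the same either way):

```python
def optimal_faults(reference_string, capacity):
    """Count page faults when evicting the page used farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < capacity:
            frames.append(page)
        else:
            rest = reference_string[i + 1:]
            # evict the resident page whose next use is farthest away
            # (pages never used again count as infinitely far)
            victim = max(frames,
                         key=lambda p: rest.index(p) if p in rest else len(rest))
            frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # 6
```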
Optimal page replacement is perfect but not possible in practice, as the operating
system cannot know future requests. The use of optimal page replacement is to set up
a benchmark so that other replacement algorithms can be analyzed against it.
Least Recently Used (LRU) – In this algorithm, the page that has not been used for the
longest period of time is replaced. Consider the same reference string with 4 page frames.
Initially all slots are empty, so when 7, 0, 1, and 2 come they are allocated to the empty
slots —> 4 page faults.
0 is already there —> 0 page fault.
When 3 comes, it takes the place of 7, because 7 is the least recently used —> 1 page
fault.
0 is already in memory —> 0 page fault.
4 takes the place of 1 —> 1 page fault.
For the rest of the page reference string —> 0 page faults, because the pages are already
available in memory. Total: 6 page faults.
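The LRU walkthrough can likewise be reproduced; keeping the frame list ordered from least to most recently used makes the eviction a simple pop:

```python
def lru_faults(reference_string, capacity):
    """Count page faults under least-recently-used replacement."""
    frames, faults = [], 0          # ordered from least to most recently used
    for page in reference_string:
        if page in frames:
            frames.remove(page)     # refresh: will be re-appended as MRU
        else:
            faults += 1
            if len(frames) == capacity:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # 6
```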
Thrashing:
At any given time, only a few pages of any process are in main memory, and therefore
more processes can be maintained in memory. Furthermore, time is saved because
unused pages are not swapped in and out of memory. However, the OS must be clever
about how it manages this scheme. In the steady state, practically all of main memory
will be occupied with process pages, so that the processor and OS have direct access
to as many processes as possible. Thus, when the OS brings one page in, it must throw
another out. If it throws out a page just before it is used, then it will just have to fetch
that page again almost immediately. Too much of this leads to a condition called
thrashing: the system spends most of its time swapping pages rather than executing
instructions. So a good page replacement algorithm is required.
In the given diagram, as the initial degree of multiprogramming increases up to some
point (λ), CPU utilization is very high and the system resources are utilized
100%. But if we increase the degree of multiprogramming further, CPU utilization
drastically falls, the system spends more of its time only on page
replacement, and the time taken to complete the execution of processes increases.
This situation in the system is called thrashing.
Causes of Thrashing :
1. High degree of multiprogramming: If the number of processes in memory keeps
increasing, then the number of frames allocated to each process decreases. With fewer
frames available to each process, page faults occur more frequently, more CPU time
is wasted in just swapping pages in and out, and utilization keeps decreasing.
For example:
Let free frames = 400
Case 1: Number of processes = 100
Then each process will get 4 frames.
Case 2: Number of processes = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing: as the number of processes increases, frames
per process decrease, and CPU time is consumed in just swapping pages.
2. Lack of frames: If a process has fewer frames, then fewer pages of
that process can reside in memory, and hence more frequent swapping in
and out is required. This may lead to thrashing. Hence, a sufficient number of
frames must be allocated to each process in order to prevent thrashing.
Recovery from Thrashing:
• Do not allow the system to go into thrashing, by instructing the long-term
scheduler not to bring more processes into memory after the threshold.
• If the system is already thrashing, instruct the medium-term scheduler to
suspend some of the processes so that the system can recover from thrashing.
The structure of the page table simply defines the ways in which a page table can be
structured. Paging is a memory management technique in which a large
process is divided into pages and placed in physical memory, which is itself divided
into frames. Frame size and page size are equal. The operating system uses a page
table to map the logical address of a page generated by the CPU to its physical address
in main memory.
In this section, we will discuss three common methods that we use to structure a page
table.
Structure of Page Table
So, can you store a page table of size 2 MB in a frame of main memory
where the frame size is 4 KB? It is impossible.
So we need to divide the page table, and this division can be accomplished in
several ways: you can perform a two-level division, a three-level division on the page table,
and so on.
Let us discuss two-level paging, in which the page table itself is paged. We then have
two page tables: an inner page table and an outer page table. Suppose we have a logical
address with a 20-bit page number and a 12-bit page offset. As we are paging the page
table, the page number is further split into a 10-bit page number and a 10-bit page
offset, as you can see in the image below.
Here P1 acts as an index and P2 as an offset for the outer page table.
Then, P2 acts as an index and d as an offset into the inner page table
to map the logical address of the page to physical memory.
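The 10/10/12 split described above can be sketched with bit masking (the field widths are those from the example):

```python
def split_two_level(logical_address):
    """Split a 32-bit logical address into (p1, p2, d) fields of 10/10/12 bits."""
    d  = logical_address & 0xFFF             # low 12 bits: page offset
    p2 = (logical_address >> 12) & 0x3FF     # next 10 bits: inner page table index
    p1 = (logical_address >> 22) & 0x3FF     # top 10 bits: outer page table index
    return p1, p2, d

address = (1 << 22) | (2 << 12) | 3          # p1 = 1, p2 = 2, d = 3
print(split_two_level(address))              # (1, 2, 3)
```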