OS Unit 4

The document discusses different memory management techniques including swapping, contiguous and non-contiguous allocation, paging, and segmentation. It defines virtual memory and describes demand paging and copy-on-write. It also briefly mentions file concepts including file access methods and protection.

UNIT IV

Memory Management: Swapping, contiguous memory allocation, paging, segmentation, structure of the page table.

Virtual memory: Demand paging, Copy-on-Write, page-replacement, allocation of frames, thrashing.

File Concepts: File concept, access Methods, directory and disk structure, protection.
Memory Management

 Memory Management is the process of controlling and coordinating computer memory,

assigning portions known as blocks to various running programs to optimize the overall

performance of the system.

 It is the most important function of an operating system that manages primary memory.

 It helps processes move back and forth between the main memory and the disk during execution.

 It helps OS to keep track of every memory location, irrespective of whether it is allocated to

some process or it remains free.


Memory Management Requirements

 Relocation

 Protection

 Sharing

 Logical Organization

 Physical Organization
Uses

 It allows you to check how much memory needs to be allocated to processes and decides which

process should get memory at what time.

 It tracks whenever memory gets freed or unallocated, and updates the status accordingly.

 It allocates the space to application routines.

 It also makes sure that these applications do not interfere with each other.

 Helps protect different processes from each other

 It places the programs in memory so that memory is utilized to its full extent.
Swapping

 Swapping is a method in which the process should be swapped temporarily from the main memory
to the backing store.

 It will be later brought back into the memory for continue execution.

 Backing store is a hard disk or some other secondary storage device that should be big enough
to accommodate copies of all memory images for all users.
Benefits

 It offers a higher degree of multiprogramming.

 It helps to get better utilization of memory.

 It minimizes wastage of CPU time, so it can easily be applied to a priority-based

scheduling method to improve its performance.


Memory allocation

Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions

 Low Memory - Operating system resides in this type of memory.

 High Memory- User processes are held in high memory.


Contiguous Memory Allocation

 Contiguous memory allocation is basically a method in which a single contiguous section/part of


memory is allocated to a process or file needing it.

 Because of this all the available memory space resides at the same place together,

 which means that the freely/unused available memory partitions are not distributed in a random
fashion here and there across the whole memory space.

 The main memory is a combination of two main portions- one for the operating system and other
for the user program.
 Contiguous Technique can be divided into:
 Fixed (or static) partitioning
 Variable (or dynamic) partitioning
Fixed Size Partitioning

 In this partitioning, number of partitions (non-overlapping) in RAM are fixed but size of each

partition may or may not be same.

 The memory is assigned to the processes in a contiguous way.

 Here, the partitions are made before execution, or during system configuration.

 In fixed partitioning,

 The partitions cannot overlap.

 A process must be contiguously present in a partition for the execution.


Types of Fixed size Partitioning
Memory Assignment
Advantages of Fixed Partitioning:

 Easy to implement:
 The algorithms needed to implement fixed partitioning are easy to implement. It simply requires placing a process into
a certain partition, without tracking the emergence of internal and external fragmentation.

 Little OS overhead:

Fixed partitioning requires little extra or indirect computational power.

Disadvantages of Fixed Partitioning:

 Internal Fragmentation

If the size of the process is less than the total size of the partition, then some part of the
partition goes to waste and remains unused. This wastage of memory is called internal
fragmentation.
 External Fragmentation

The total unused space of various partitions cannot be used to load the processes even though there is
space available but not in the contiguous form.

 Limitation on the size of the process

If the process size is larger than the size of maximum sized partition then that process cannot be loaded
into the memory. Therefore, a limitation can be imposed on the process size that is it cannot be larger than the
size of the largest partition.

 Degree of multiprogramming is less

By degree of multiprogramming, we simply mean the maximum number of processes that can be
loaded into the memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and
low, because the size of a partition cannot be varied according to the size of the processes.
Dynamic Partitioning

 Dynamic partitioning tries to overcome the problems caused by fixed partitioning.

 In this technique, the partition size is not declared initially.

 It is declared at the time of process loading.

 The first partition is reserved for the operating system.

 The remaining space is divided into parts.

 The size of each partition will be equal to the size of the process.

 The partition size varies according to the need of the process so that the internal fragmentation can be

avoided.
Advantages

 No Internal Fragmentation

Given that the partitions in dynamic partitioning are created according to the needs of the process, it
is clear that there will not be any internal fragmentation, because there is no unused remaining space in
the partition.

 No Limitation on the size of the process

In fixed partitioning, a process with a size greater than the size of the largest partition could not be
executed due to the lack of sufficient contiguous memory. Here, in dynamic partitioning, the process size is not
restricted, since the partition size is decided according to the process size.

 Degree of multiprogramming is dynamic

Due to the absence of internal fragmentation, there will not be any unused space in the partition hence
more processes can be loaded in the memory at the same time.
Disadvantages

 External Fragmentation

The total unused space of various partitions cannot be used to load the processes even though

there is space available but not in the contiguous form.

 Complex Memory Allocation

In Fixed partitioning, the list of partitions is made once and will never change but in dynamic

partitioning, the allocation and deallocation is very complex since the partition size will be varied

every time when it is assigned to a new process. OS has to keep track of all the partitions.
Compaction
 Compaction is used to minimize the probability of external fragmentation.

 In compaction, all the free partitions are made contiguous and all the loaded partitions are brought
together.

 By applying this technique, The free partitions are merged which can now be allocated according to the
needs of new processes.

 This technique is also called defragmentation.

Problem with Compaction:

 The efficiency of the system is decreased in the case of compaction due to the fact that all the free spaces
will be transferred from several places to a single place.

 A huge amount of time is invested in this procedure, and the CPU remains idle for all this time.

 Although compaction avoids external fragmentation, it makes the system inefficient.
Partition Allocation

Memory is divided into different blocks or partitions. Each process is allocated according to the
requirement. Partition allocation is an ideal method to avoid internal fragmentation.

Below are the various partition allocation schemes :

 First Fit: In this fit, the allocated partition is the first sufficient block from the
beginning of the main memory.

 Best Fit: It allocates the process to the smallest sufficient partition among the free
partitions.

 Worst Fit: It allocates the process to the largest sufficient freely available
partition in the main memory.

 Next Fit: It is mostly similar to First Fit, but it searches for the first sufficient partition
starting from the last allocation point.
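The four schemes above can be sketched as one small Python function (the block list and sizes are illustrative; real allocators also track block addresses and splitting):

```python
def allocate(free_blocks, size, strategy, last_index=0):
    """Return the index of the chosen free block, or None if none fits.

    free_blocks: list of free-block sizes; last_index: last allocation point (Next Fit).
    """
    candidates = [(i, b) for i, b in enumerate(free_blocks) if b >= size]
    if not candidates:
        return None
    if strategy == "first":                              # first sufficient block
        return candidates[0][0]
    if strategy == "best":                               # smallest sufficient block
        return min(candidates, key=lambda c: c[1])[0]
    if strategy == "worst":                              # largest sufficient block
        return max(candidates, key=lambda c: c[1])[0]
    if strategy == "next":                               # first sufficient block at/after last point
        for i, b in candidates:
            if i >= last_index:
                return i
        return candidates[0][0]                          # wrap around to the beginning

blocks = [100, 500, 200, 300, 600]
print(allocate(blocks, 212, "first"))   # 1  (500 is the first block >= 212)
print(allocate(blocks, 212, "best"))    # 3  (300 is the smallest sufficient block)
print(allocate(blocks, 212, "worst"))   # 4  (600 is the largest block)
```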
Non Contiguous Memory Allocation

 Non-contiguous memory allocation is a memory allocation technique.

 It allows to store parts of a single process in a non-contiguous fashion.

 There are two popular techniques used for non-contiguous memory allocation: paging and segmentation.
Paging

 Paging is a storage mechanism that allows OS to retrieve processes from the secondary storage
into the main memory in the form of pages. 

 It eliminates the need for contiguous allocation of physical memory. This scheme permits the
physical address space of a process to be non – contiguous.

 Logical Address or Virtual Address (represented in bits): An address generated by the CPU

 Logical Address Space or Virtual Address Space( represented in words or bytes): The set of all
logical addresses generated by a program

 Physical Address (represented in bits): An address actually available on memory unit

 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
 The mapping from virtual to physical address is done by the memory management unit (MMU)

which is a hardware device and this mapping is known as paging technique.

 The Physical Address Space is conceptually divided into a number of fixed-size blocks,

called frames.

 The Logical Address Space is also split into fixed-size blocks, called pages.

 Page Size = Frame Size

 In the Paging method, the main memory is divided into small fixed-size blocks of physical

memory, which is called frames


Example

 For example, if the main memory size is 16 KB and Frame size is 1 KB. Here, the main memory

will be divided into the collection of 16 frames of 1 KB each.

 There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each.

 Here, all the processes are divided into pages of 1 KB each so that operating system can store one

page in one frame.

 At the beginning, all the frames are empty, so all the pages of the processes

get stored in a contiguous way.


 You can see that A2 and A4 are moved to the waiting state after some time.

 Therefore, eight frames become empty, and other pages can be loaded into those empty blocks.

 The process A5, of size 8 pages (8 KB), is waiting in the ready queue.
Address Translation Scheme
Address generated by CPU is divided into

 Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number

 Page offset(d): Number of bits required to represent particular word in a page or page size of
Logical Address Space or word number of a page or page offset.

 Physical Address is divided into

 Frame number(f): Number of bits required to represent the frame of Physical Address Space or
Frame number.

 Frame offset(d): Number of bits required to represent particular word in a frame or frame size of
Physical Address Space or word number of a frame or frame offset.
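The translation can be sketched in a few lines of Python (the 1 KB page size and the page-table contents are illustrative values, not from the text):

```python
PAGE_SIZE = 1024  # 1 KB pages, matching the earlier example (illustrative)

def split_logical(addr):
    """Split a logical address into (page number p, page offset d)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

def physical_address(addr, page_table):
    """Translate a logical address via the page table: frame f * page size + offset d."""
    p, d = split_logical(addr)
    f = page_table[p]          # page table maps page number -> frame number
    return f * PAGE_SIZE + d

# Illustrative page table: page 0 -> frame 5, page 1 -> frame 2, page 2 -> frame 7
page_table = {0: 5, 1: 2, 2: 7}
print(physical_address(2 * 1024 + 100, page_table))  # 7 * 1024 + 100 = 7268
```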
Paging Hardware With TLB
(Translation Lookaside Buffer )
 A Translation Lookaside Buffer (TLB) is a special cache used to keep track of recently used address translations.
The TLB contains the page table entries that have been most recently used.
Steps in a TLB hit:

• CPU generates a virtual (logical) address.

• It is checked in the TLB (present).

• The corresponding frame number is retrieved, which tells where in main memory the page lies.

Steps in a TLB miss:

• CPU generates a virtual (logical) address.

• It is checked in the TLB (not present).

• Now the page number is matched against the page table residing in main memory (assuming the page table
contains all PTEs).

• The corresponding frame number is retrieved, which tells where in main memory the page lies.

• The TLB is updated with the new PTE (if there is no space, one of the replacement techniques comes into the
picture, i.e., FIFO, LRU, or MFU).
Translation look aside buffer (TLB)…

Translation look aside buffer (TLB) is a special, small, fast-lookup hardware cache
and the TLB is associative, high-speed, memory.

Typically, the number of entries in a TLB is small, often numbering between 64 and
1,024.

when the associative memory is presented with an item, the item is compared with
all keys simultaneously.

A Translation look aside buffer can be defined as a memory cache which can be used
to reduce the time taken to access the page table again and again.
Translation look aside buffer (TLB)…
However, if the entry is not found in TLB (TLB miss) then CPU has to access page
table in the main memory and then access the actual frame in the main memory.

Therefore, in the case of a TLB hit, the effective access time will be less than in
the case of a TLB miss.

 In addition, we add the page number and frame number to the TLB, so that they will
be found quickly on the next reference.

 If  the TLB is already full of entries, the operating system must select one for
replacement. Replacement policies range from least recently used (LRU) to
random.
Translation look aside buffer (TLB)…
 If the probability of TLB hit is P% (TLB hit rate) then the probability of TLB miss
(TLB miss rate) will be (1-P) %.

 Therefore, the effective access time (EAT) can be defined as;


EAT = P × (t + m) + (1 − P) × (t + k·m + m)
 Where, P → TLB hit rate, t → time taken to access the TLB, m → time taken to access
main memory, and k = 1 if single-level paging has been implemented.

 By the formula, we come to know that


1. Effective access time will be decreased if the TLB hit rate is increased.
2. Effective access time will be increased in the case of multilevel paging.
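The formula can be checked with a small helper function (a sketch; the variable names follow the notation above):

```python
def eat(P, t, m, k=1):
    """Effective access time: EAT = P*(t + m) + (1 - P)*(t + k*m + m).

    P: TLB hit rate, t: TLB access time, m: main-memory access time,
    k: number of paging levels (k = 1 for single-level paging).
    """
    return P * (t + m) + (1 - P) * (t + k * m + m)

print(eat(0.9, 10, 50))    # ~65 ns  (Problem-01 below)
print(eat(0.9, 10, 50, k=2))  # multilevel paging increases the EAT
```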
Translation look aside buffer (TLB)…
 Problem-01: A paging scheme uses a Translation Lookaside buffer (TLB). A TLB
access takes 10 ns and a main memory access takes 50 ns. What is the effective access
time (in ns) if the TLB hit ratio is 90% and there is no page fault?
 Solution: Given that TLB access time (t)= 10 ns, Main memory access time( m)= 50
ns, TLB Hit ratio (P)= 90% = 0.9 and TLB Miss ratio=1-P=1-0.9=0.1.
 Effective Access Time: EAT = P × (t + m) + (1 − P) × (t + k·m + m), where k = 1
= 0.9 x { 10 ns + 50 ns } + 0.1 x { 10 ns + 2 x 50 ns }
= 0.9 x 60 ns + 0.1 x 110 ns = 54 ns + 11 ns
= 65 ns
Translation look aside buffer (TLB)…
 Problem-02: A paging scheme uses a Translation Lookaside buffer (TLB). The effective
memory access takes 160 ns and a main memory access takes 100 ns. What is the TLB access
time (in ns) if the TLB hit ratio is 60% and there is no page fault?
 Solution- Given- Effective access time = 160 ns, Main memory access time = 100 ns, TLB
Hit ratio = 60% = 0.6
 Let TLB access time = T ns. Substituting values in EAT = P × (T + m) + (1 − P) × (T + k·m + m), with k = 1, we get-
 160 ns = 0.6 x { T + 100 ns } + 0.4 x { T + 2 x 100 ns }
 160 = 0.6T + 60 + 0.4T + 80
 160 = T + 140
 T = 160 – 140
 T = 20 ns
Segmentation

 Like Paging, Segmentation is another non-contiguous memory allocation technique.

 In segmentation, process is not divided blindly into fixed size pages.

 Rather, the process is divided into modules for better visualization.

Characteristics-

  Segmentation is a variable size partitioning scheme.

 In segmentation, secondary memory and main memory are divided into partitions of unequal size.

 The size of the partitions depends on the length of the modules.

 The partitions of secondary memory are called as segments.


Example:

Consider a program is divided into 5 segments as-


Segment Table:

 Segment table is a table that stores the information about each segment of the process.

 It has two columns.

 First column stores the size or length of the segment.

 Second column stores the base address or starting address of the segment in the main memory.

 Segment table is stored as a separate segment in the main memory.

 Segment table base register (STBR) stores the base address of the segment table.
Translation of Logical address into physical address
 CPU generates a logical address which contains two parts:

 Segment Number

 Offset

 The Segment number is mapped to the segment table.

 The limit of the respective segment is compared with the offset.

 If the offset is less than the limit then the address is valid otherwise it throws an error as the address is invalid.

 In the case of valid address, the base address of the segment is added to the offset to get the physical address

of actual word in the main memory.
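The limit check and base addition can be sketched as follows (the segment-table values are illustrative, not taken from the text):

```python
# Segment table: index is the segment number, each entry is (limit, base).
# The values below are illustrative.
segment_table = [(1500, 1500), (500, 6300), (1100, 4700)]

def translate(seg_no, offset):
    """Translate (segment number, offset) into a physical address."""
    limit, base = segment_table[seg_no]
    if offset >= limit:                       # offset must be less than the limit
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset                      # valid: base + offset = physical address

print(translate(2, 53))  # 4700 + 53 = 4753
```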


Segmented Paging
In Segmented Paging, the main memory is divided into variable size
segments which are further divided into fixed size pages.
• Pages are smaller than segments.
• Each Segment has a page table which means every program has
multiple page tables.
• The logical address is represented as Segment Number,
Page Number, and Page Offset.
• Segment Number → It selects the appropriate segment (and its page table).
• Page Number → It points to the exact page within the segment.
• Page Offset → Used as an offset within the page frame.

Translation of logical address to physical address:


The CPU generates a logical address which is divided into two parts:
Segment Number and Segment Offset. The Segment Offset must be less
than the segment limit. The offset is further divided into Page Number
and Page Offset. The page number is added to the page table base of
that segment to locate the exact page table entry.
The frame number from that entry, combined with the page offset, is
mapped to the main memory to get the desired word in the page of that
segment of the process.
Advantages of Segmented Paging:
• It reduces memory usage.
• Page table size is limited by the segment size.
• Segment table has only one entry corresponding to one actual
segment.
• External Fragmentation is not there.
• It simplifies memory allocation.
Disadvantages of Segmented Paging:
• Internal Fragmentation will be there.
• The complexity level will be much higher as compared to paging.
• Page Tables need to be contiguously stored in the memory.
Difference between Paging and Segmentation
SNo. | Paging | Segmentation
1 | Paging divides the program into fixed-size pages. | Segmentation divides the program into variable-size segments.
2 | The OS is responsible. | The compiler is responsible.
3 | Paging is faster than segmentation. | Segmentation is slower than paging.
4 | Paging is closer to the operating system. | Segmentation is closer to the user.
5 | It suffers from internal fragmentation. | It suffers from external fragmentation.
6 | The logical address is divided into page number and page offset. | The logical address is divided into segment number and segment offset.
7 | The page table is used to maintain the page information. | The segment table maintains the segment information.
8 | A page table entry has the frame number and some flag bits representing details about the page. | A segment table entry has the base address of the segment and some protection bits for the segment.
Swapping
Swapping is a memory management technique and is used to temporarily remove the
inactive programs from the main memory of the computer system.

Any process must be in the memory for its execution, but can be swapped temporarily
out of memory to a backing store and then again brought back into the memory to
complete its execution.

Swapping is done so that other processes get memory for their execution.

Due to the swapping technique, the performance usually gets affected, but it also helps
in running multiple and big processes in parallel. 

The swapping process is also known as a technique for memory compaction. Basically,
low-priority processes may be swapped out so that processes with a higher priority may
be loaded and executed.
Swapping…
 Backing store is a hard disk or some other secondary storage device that should be big enough
to accommodate copies of all memory images for all users.

 It is used to improve main memory utilization. In secondary memory, the place where the
swapped-out process is stored is called swap space.
Swapping…
The concept of swapping has divided into two more concepts: Swap-in and Swap-out.
 Swap-out is a method of removing a process from RAM and adding it to the hard disk.
 Swap-in is a method of bringing a swapped-out process back from the hard disk into the main
memory (RAM).

• Note:
In a single tasking operating system, only one process occupies the user program area of
memory and stays in memory until the process is complete.
In a multitasking operating system, a situation arises when all the active processes
cannot fit in the main memory; then a process is swapped out of the main
memory so that other processes can enter it.
Swapping…
Example: Suppose the user process's size is 2048 KB, and swapping uses a standard hard
disk with a data transfer rate of 1 MB per second. Now we will calculate how long it will
take to transfer the process from main memory to secondary memory.
Solution: User process size is 2048 KB, and the data transfer rate is 1 MBps = 1024 KB/s.
Time = process size / transfer rate
= 2048 / 1024
      = 2 seconds
     = 2000 milliseconds
Counting both swap-out and swap-in, the process takes 4000 milliseconds.
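The arithmetic above can be written out directly (units: KB and KB/s, as in the example):

```python
process_size_kb = 2048          # user process size: 2048 KB
transfer_rate_kbps = 1024       # 1 MB/s = 1024 KB/s

# One-way transfer time (main memory -> backing store), in milliseconds.
one_way_ms = process_size_kb / transfer_rate_kbps * 1000
print(one_way_ms)               # 2000.0 ms per transfer

# A full swap is swap-out plus swap-in, i.e. two transfers.
print(2 * one_way_ms)           # 4000.0 ms
```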
Advantages/benefits of Swapping

1) It offers a higher degree of multiprogramming. It helps the CPU to manage

multiple processes within a single main memory.

2) It helps to get better utilization of memory.

3) It helps to create and use virtual memory.

4) It minimizes wastage of CPU time, so it can easily be applied to a

priority-based scheduling method to improve its performance.

5) In this technique, the CPU can perform several tasks simultaneously. Thus,

processes need not wait too long before their execution.


Disadvantages of Swapping

1) If the computer system loses power, the user may lose all information related
to the program in case of substantial swapping activity.

2) If the swapping algorithm is not good, swapping can increase the
number of page faults and decrease the overall processing performance.

3) Inefficiency may arise if a resource is commonly used by

those processes that are participating in the swapping process.
Virtual Memory

 A computer can address more memory than the amount physically installed on the system.

 This extra memory is actually called virtual memory.

 Virtual memory serves two purposes.

 First, it allows us to extend the use of physical memory by using disk.

 Second, it allows us to have memory protection, because each virtual address is translated to a
physical address.

 The MMU's job is to translate virtual addresses into physical addresses.

 Virtual memory is commonly implemented by demand paging.


Demand Paging

 According to the concept of Virtual Memory, in order to execute some process, only a part of the
process needs to be present in the main memory

 which means that only a few pages will only be present in the main memory at any time.

 However, deciding, which pages need to be kept in the main memory and which need to be kept in
the secondary memory, is going to be difficult.

 Because we cannot say in advance that a process will require a particular page at particular time.

 Therefore, to overcome this problem, a concept called Demand Paging is introduced.

 It suggests keeping all pages in the secondary memory until they are required.

 In other words, it says that do not load any page in the main memory until it is required.
The main steps involved in demand paging, between the time a page is
requested and when it is loaded into main memory, are as follows:-

1. CPU refers to the page it needs.

2. The page table is checked to see whether the page is present in main
memory or not. If not, a page-fault interrupt is generated. The OS puts the
interrupted process in the blocked state and starts the process of fetching the
page so that the process can be executed.
3. The OS searches for the required page in secondary storage (the backing store).
4. The required page is brought from secondary storage into the physical address
space. Page replacement algorithms are used for deciding which page in
physical memory to replace when no frame is free.
5. The page table is updated.
6. The interrupted process is restarted.
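The fault-then-fetch path above can be modelled with a toy Python sketch (the dictionaries standing in for the page table, physical frames, and backing store are purely illustrative, not an OS API):

```python
# Toy model of the demand-paging steps; not a real OS interface.
page_table = {}                            # page number -> frame (resident pages only)
backing_store = {0: "A", 1: "B", 2: "C"}   # every page starts out on disk
frames = {}                                # frame -> page contents (physical memory)
free_frames = [0, 1, 2]
page_faults = 0

def access(page):
    """Return the frame holding `page`, faulting it in if necessary (steps 1-6)."""
    global page_faults
    if page in page_table:                 # step 2: page is present, normal translation
        return page_table[page]
    page_faults += 1                       # page fault: trap to the OS
    frame = free_frames.pop()              # (a replacement algorithm would run if none free)
    frames[frame] = backing_store[page]    # steps 3-4: fetch the page from the backing store
    page_table[page] = frame               # step 5: update the page table
    return frame                           # step 6: the faulting access is restarted

access(0); access(1); access(0)
print(page_faults)  # 2 — the second access to page 0 is a hit
```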
Copy-on-Write

 Copy-on-Write (CoW) is mainly a resource management technique

 that allows the parent and child process to initially share the same pages of memory; these shared pages are
marked as copy-on-write,

 which means that only if either of these processes tries to modify a shared page will a copy of that
page be created,

 and the modifications will be done on the copy of the page by that process, thus not affecting the
other process.

 If a unit of data is copied but not modified, the "copy" can exist merely as a reference to the
original data.
Suppose a process P creates a new process Q, and then process P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
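The share-then-copy behaviour can be modelled with a toy sketch (the `CowPage`/`Process` classes are purely illustrative, not an OS API):

```python
class CowPage:
    """Toy page shared by reference until some process writes to it."""
    def __init__(self, data):
        self.data = data
        self.refs = 1          # how many processes share this page

class Process:
    def __init__(self, pages):
        self.pages = pages

    def fork(self):
        for pg in self.pages:  # child shares every page; mark them as shared
            pg.refs += 1
        return Process(list(self.pages))

    def write(self, i, data):
        pg = self.pages[i]
        if pg.refs > 1:        # shared page: copy it before modifying (copy-on-write)
            pg.refs -= 1
            self.pages[i] = CowPage(pg.data)
        self.pages[i].data = data

parent = Process([CowPage("p0"), CowPage("p1")])
child = parent.fork()
parent.write(1, "modified")
print(child.pages[1].data)                 # "p1" — the child still sees the original page
print(parent.pages[0] is child.pages[0])   # True — the unmodified page stays shared
```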
 There are two main aspects of virtual memory, Frame allocation and Page Replacement.

 It is very important to have the optimal frame allocation and page replacement algorithm.

 Frame allocation is all about how many frames are to be allocated to the process

 while the page replacement is all about determining the page number which needs to be replaced in

order to make space for the requested page.


Page Replacement Algorithms

 In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when new page comes in.

 Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space, but not loaded in physical memory.

 Since actual physical memory is much smaller than virtual memory, page faults happen.

 In case of page fault, Operating System might have to replace one of the existing pages with the
newly needed page.

 Different page replacement algorithms suggest different ways to decide which page to replace.

 The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms :

 First In First Out (FIFO) 

 Optimal Page replacement

 Least Recently Used


First In First Out (FIFO):

 This is the simplest page replacement algorithm.

 In this algorithm, the operating system keeps track of all pages in the memory in a queue, with the

oldest page at the front of the queue.


 When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find number of page faults
 initially all slots are empty, so when 1, 3, 0 came they are allocated to the empty slots —> 3 Page
Faults.

 when 3 comes, it is already in  memory so —> 0 Page Faults.

 Then 5 comes, it is not available in  memory so it replaces the oldest page slot i.e 1. —>1 Page Fault.

 6 comes, it is also not available in memory so it replaces the oldest page slot i.e 3 —>1 Page Fault.

 Finally, when 3 comes, it is not available in memory, so it replaces 0 —> 1 Page Fault. Total = 6 Page Faults.

Belady’s anomaly – 

 Belady’s anomaly proves that it is possible to have more page faults when increasing the number of
page frames while using the First in First Out (FIFO) page replacement algorithm. 

 For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total
page faults, but if we increase slots to 4, we get 10 page faults.
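Both the FIFO example and Belady's anomaly can be checked with a short simulation (a sketch, not from the original slides):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults for FIFO replacement on a reference string."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page in memory:                 # hit: nothing to do
            continue
        faults += 1
        if len(memory) == num_frames:      # memory full: evict the oldest page
            memory.remove(queue.popleft())
        memory.add(page)
        queue.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 — more frames, yet more faults (Belady's anomaly)
```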
Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of time in the future.
Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frame. Find number
of page fault.
 Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults

 0 is already there so —> 0 Page fault.

 when 3 came it will take the place of 7 because it is not used for the longest duration of time in the future.
—>1 Page fault.

 0 is already there so —> 0 Page fault.

 4 takes the place of 1 —> 1 Page Fault.

 Now for the further page reference string —> 0 Page fault because they are already available in the
memory.

 Optimal page replacement is perfect, but not possible in practice as the operating system cannot know
future requests.

 The use of Optimal Page replacement is to set up a benchmark so that other replacement algorithms can be
analyzed against it.
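Since optimal replacement needs the whole reference string in advance, it is easy to simulate offline (a sketch for checking the example above):

```python
def optimal_faults(refs, num_frames):
    """Count page faults for optimal (farthest-future-use) replacement."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:                         # hit
            continue
        faults += 1
        if len(memory) < num_frames:               # free frame available
            memory.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        # (pages never used again get a key past the end of the string).
        future = refs[i + 1:]
        victim = max(memory,
                     key=lambda p: future.index(p) if p in future else len(future) + 1)
        memory[memory.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6 — matches the walkthrough above
```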
Least Recently Used –
 In this algorithm page will be replaced which is least recently used.
 Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find
number of page faults.
 Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults

 0 is already there so —> 0 Page fault.

 when 3 came it will take the place of 7 because it is least recently used —>1 Page fault

 0 is already in memory so —> 0 Page fault.

 4 takes the place of 1 —> 1 Page Fault

 Now for the further page reference string —> 0 Page fault because they are already available in

the memory.
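The LRU example can be checked with a small simulation that keeps pages ordered by recency of use (a sketch; real implementations use hardware counters or stacks):

```python
def lru_faults(refs, num_frames):
    """Count page faults for LRU replacement on a reference string."""
    memory, faults = [], 0   # list ordered from least to most recently used
    for page in refs:
        if page in memory:
            memory.remove(page)        # hit: move page to the most-recent end
            memory.append(page)
            continue
        faults += 1
        if len(memory) == num_frames:
            memory.pop(0)              # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))  # 6 — matches the walkthrough above
```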
Allocation of frames
 Frame allocation algorithms are used when there are multiple processes; they help decide how many frames
to allocate to each process.

 There are various constraints to the strategies for the allocation of frames:

 You cannot allocate more than the total number of available frames.

 At least a minimum number of frames should be allocated to each process.

 This constraint is supported by two reasons.

 The first reason is that, as fewer frames are allocated, the page fault rate increases,
decreasing the performance of the execution of the process.

 Secondly, there should be enough frames to hold all the different pages that any single instruction can
reference.
Frame Allocation Strategies

EQUAL ALLOCATION:

 The available frames are equally distributed among the processes.

 Not very useful, as not every process requires an equal number of frames;

 some processes may require extra frames whereas some processes may require fewer.

For example, given no. of frames: 6

                       No. of processes available: 3

                       Therefore, each process will get 2 frames


 WEIGHTED ALLOCATION: 

Depending on the sizes of the processes, frames are allocated proportionally: more
frames are given to a process of larger size.

For example: available processes of size P1: 20 Pages, P2: 30 Pages, P3: 50 Pages

      Available frames: 10

     Requirement: P1= 20/100*10=2    (P1+P2+P3:20+30+50=100)

     P2= 30/100*10=3

     P3=50/100*10=5
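The proportional calculation above can be written as a one-line helper (a sketch; the floor division is a simplification, since real allocators must also hand out any leftover frames):

```python
def weighted_frames(process_sizes, total_frames):
    """Allocate frames in proportion to process size: s_i / S * total_frames."""
    total = sum(process_sizes.values())
    return {p: s * total_frames // total for p, s in process_sizes.items()}

sizes = {"P1": 20, "P2": 30, "P3": 50}  # sizes in pages, as in the example
print(weighted_frames(sizes, 10))       # {'P1': 2, 'P2': 3, 'P3': 5}
```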
PRIORITY ALLOCATION: 

 The process with the higher priority gets more frames.

 If a process with higher priority wants more frames,

 then it can forcefully replace processes with lower priorities.

 Suppose process P1 has higher priority than process P2 and requires more frames; then P1 pulls out

P2 and uses up the frames.


The number of frames allocated to a process can also dynamically change depending on whether
you have used global replacement or local replacement for replacing pages in case of a page fault

 LOCAL PAGE-REPLACEMENT: 

Local page replacement strategy works as static allocation. Whenever we need to replace a page
from the main memory then we will replace the page only from the frames which are allocated to
that particular process without disturbing any other pages of other processes.

 GLOBAL PAGE-REPLACEMENT: 

This strategy works differently from the local page replacement strategy: while replacing any page,
we consider all the available frames, belonging to any process, for replacement.
Thrashing

Thrashing is a condition or a situation when the system is spending a major portion of its time in

servicing the page faults, but the actual processing done is very negligible.

Techniques used to handle the thrashing:

Working Set:

 The set of pages referenced in the most recent Δ (delta) page references is known as the working set.

 If a page is in active use, then it will be in the working set.

 If the page is no longer being used, then it will be dropped from the working set Δ time units after its last

reference.
Page Fault Frequency:

 When the page-fault rate is too high, we know that the process needs more frames. Conversely, if
the page-fault rate is too low, then the process may have too many frames.

 We can establish upper and lower bounds on the desired page faults.

 If the actual page-fault rate exceeds the upper limit, then we allocate another frame to the
process.

 And if the page fault rate falls below the lower limit then we can remove the frame from the
process.

 Thus with this, we can directly measure and control the page fault rate in order to prevent thrashing.
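The page-fault-frequency control loop above can be sketched in a few lines (the threshold values are illustrative, not from the text):

```python
def adjust_frames(fault_rate, frames, upper=0.10, lower=0.02):
    """Page-fault-frequency control: keep the fault rate between two bounds.

    upper/lower are illustrative thresholds on the measured fault rate.
    """
    if fault_rate > upper:
        return frames + 1    # too many faults: the process needs another frame
    if fault_rate < lower:
        return frames - 1    # too few faults: the process can give a frame back
    return frames            # within bounds: leave the allocation alone

print(adjust_frames(0.15, 4))  # 5 — above the upper bound, grant a frame
print(adjust_frames(0.01, 4))  # 3 — below the lower bound, reclaim a frame
```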
