Unit 4 OS

Memory management is a crucial operating system function that allocates and deallocates memory for processes, aiming for efficient utilization and minimizing fragmentation. It involves concepts such as logical and physical address spaces, static and dynamic loading, and various memory allocation techniques like fixed and variable partitions. Additionally, memory management addresses issues like fragmentation and protection, ensuring processes operate securely and efficiently within the system.

Unit 4

Memory Management

What is Memory Management?

 In a multiprogramming computer, the Operating System resides in a part of memory, and the rest is used by multiple
processes.
 The task of subdividing the memory among different processes is called Memory Management.
 Memory management is a method in the operating system to manage operations between main memory and disk during
process execution.
 The main aim of memory management is to achieve efficient utilization of memory.
Why Memory Management is Required?
 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation.
 To ensure proper utilization of main memory.
 To maintain data integrity during process execution.
Logical and Physical Address Space
Logical Address Space:
 An address generated by the CPU is known as a “Logical Address”.
 It is also known as a Virtual address. Logical address space can be defined as the size of the process.
 A logical address can change (it can be relocated at load or run time).
Physical Address Space:
 An address seen by the memory unit (i.e., the one loaded into the memory-address register of the memory) is commonly known as a “Physical Address”.
 A physical address is also known as a real address.
 The set of all physical addresses corresponding to the logical addresses is known as the physical address space. A physical address is computed by the MMU.
 The run-time mapping from virtual to physical addresses is done by a hardware device Memory Management Unit (MMU).
The physical address always remains constant.
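The run-time mapping done by the MMU can be sketched with a simple relocation (base) and limit register model; the register values below are illustrative, not from the text.

```python
# Minimal sketch of MMU-style translation with a relocation (base)
# register and a limit register; BASE/LIMIT values are illustrative.
BASE = 14000   # physical address where the process is loaded
LIMIT = 3000   # size of the process's logical address space

def translate(logical_addr):
    """Map a CPU-generated logical address to a physical address."""
    if logical_addr >= LIMIT:
        raise MemoryError("trap: address beyond process limit")
    return BASE + logical_addr

print(translate(346))   # 14346 — the same logical address works wherever BASE points
```

Relocating the process only requires changing BASE; the program's logical addresses stay the same.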
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different types of loading :
Static Loading: Static Loading is basically loading the entire program into a fixed address. It requires more memory space.
Dynamic Loading:
 Without dynamic loading, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory.
 To gain proper memory utilization, dynamic loading is used.
 In dynamic loading, a routine is not loaded until it is called.
 All routines reside on disk in a relocatable load format. One of the advantages of dynamic loading is that an unused routine is never loaded.
 Dynamic loading is useful when a large amount of code (for example, error-handling routines) is needed only infrequently.
Static and Dynamic Linking
To perform the linking task a linker is used. A linker is a program that takes one or more object files generated by a compiler and combines them into a single executable file.
Static Linking:
In static linking, the linker combines all necessary program modules into a single executable program. So there is no runtime
dependency. Some operating systems support only static linking, in which system language libraries are treated like any other object
module.
Dynamic Linking:
 The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, “Stub” is included for each appropriate
library routine reference.
 A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory or not. If not
available then the program loads the routine into memory.

Swapping
 When a process is executed, it must reside in main memory.
 Swapping is the act of temporarily moving a process from main memory to secondary memory, and later bringing it back into the (much faster) main memory for continued execution.
 Swapping allows more processes to be run than can fit into memory at one time. The major part of swap time is transfer time, and total transfer time is directly proportional to the amount of memory swapped.
 Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process.
 After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.

Advantages of Single Contiguous (Mono-programming) Memory Management

 It is a simple management approach.
Disadvantages of Single Contiguous (Mono-programming) Memory Management
 It does not support multiprogramming.
 Memory is wasted.
Mono Programming:
Mono-programming is an older and simpler approach where only one program runs at a time, while multiprogramming is a modern and more efficient approach that allows multiple programs to run concurrently.
 Single-Tasking: Mono-programming (single-tasking) is the traditional computing model where only one task or program runs at a time. The computer's resources, including CPU and memory, are dedicated to executing a single program until it completes or is manually terminated by the user.
 Simple: Mono programming is straightforward and easy to understand because there's no concurrent execution of multiple
tasks.
 Limited Utilization: It does not make efficient use of the computer's resources, as the CPU may remain idle when the
program is waiting for input or performing I/O operations.

Example: Early computer systems and some embedded systems still use mono programming, but it's not common in modern desktop
or server environments.

Multi-Programming:

 Multi-Tasking: Multi-programming, or multi-tasking, is a modern computing paradigm where multiple programs or processes run concurrently on a single computer. The operating system manages the execution of these processes by sharing the CPU and other resources among them.
 Efficient Resource Utilization: Multi-programming maximizes resource utilization by allowing the CPU to switch between
processes when one is waiting for I/O or is blocked, thus minimizing idle time.
 Responsive: It enables better responsiveness and user experience, as multiple applications can run simultaneously, allowing
users to switch between them seamlessly.

Examples: Virtually all modern desktop and server operating systems, including Windows, macOS, Linux, and others, support multi-programming.

Multi-programming is the standard in today's computing environments because it enables better resource utilization and a more
responsive user experience.

Modelling multiprogramming

 Modelling multiprogramming involves creating abstractions and models to understand and simulate how multiple programs
or processes run concurrently on a computer system.
 This can be useful for various purposes, including performance analysis, system design, and optimization
 It involves developing mathematical models to analyze and predict the behaviour of multiple programs running concurrently.

Queuing theory and scheduling algorithms are often used to model the performance of multiprogramming systems.

1. Identify the Goal
2. Select the Level of Abstraction
3. Define the Components:
- Processes/Programs
- CPU Scheduler
- Memory Management
- I/O Devices
4. Model the Behaviour:
- Process Behaviour
- CPU Scheduler
- Memory Management
- I/O Devices
5. Time Modelling
6. Initialization
7. Simulation Execution
8. Data Collection
9. Analysis and Validation
10. Iterate and Refine
3. Relocation:

Relocation ensures that programs can be loaded and executed at different memory locations.

- Relocation is the process of adjusting the memory addresses of a program when it is loaded into memory.

- It ensures that a program can be placed anywhere in memory and still operate correctly.

- This is especially important in multiprogramming, where programs are loaded and executed independently.

- Relocation involves updating memory references, such as pointers and addresses, within the program to reflect its new location in
memory.

4. Protection:

- Protection mechanisms ensure that one program cannot interfere with or access the memory or resources of another program.

- Memory protection prevents processes from accessing each other's memory areas, which enhances security and stability.

- Protection mechanisms can involve hardware support, such as memory segmentation and paging, as well as operating system
controls.

Protection mechanisms prevent unauthorized access to memory and resources, enhancing system security and stability.

Memory Management Data Structure

Memory management is a critical aspect of computer systems, ensuring that programs and data are efficiently allocated and
deallocated in memory. Two common data structures used in memory management are bitmaps and linked lists

1. Bitmaps:

A bitmap is a simple data structure that represents the allocation status of individual memory blocks (usually fixed-sized blocks) in a
memory region.

Each block in memory corresponds to a bit in the bitmap. Typically, a '0' in the bitmap indicates that the corresponding block is free,
while a '1' indicates that it is allocated or in use.

Allocation: When allocating memory, the bitmap is consulted to find a free block (a '0' bit), and this block is marked as allocated ('1')
to indicate it's in use.

Deallocation: When deallocating memory, the corresponding bit in the bitmap is set back to '0' to mark it as free.

Advantages:
o Quick and efficient for finding free memory blocks.
o Compact representation, especially when dealing with large memory regions.
o Simple to implement.

Disadvantages:

o Not suitable for managing variable-sized memory blocks.


o Can suffer from fragmentation (external fragmentation) as free blocks may not be contiguous.
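The allocate/deallocate mechanics described above can be sketched with a tiny bitmap; the 8-block memory and helper names are illustrative, not from the text.

```python
# Sketch of bitmap-based allocation over fixed-size blocks.
bitmap = [0] * 8          # one bit per block: 0 = free, 1 = allocated

def allocate():
    """Find the first free block, mark it allocated, return its index."""
    for i, bit in enumerate(bitmap):
        if bit == 0:
            bitmap[i] = 1
            return i
    return None            # no free block available

def deallocate(i):
    bitmap[i] = 0          # mark the block free again

a = allocate()             # block 0
b = allocate()             # block 1
deallocate(a)              # block 0 is free again
print(bitmap)              # [0, 1, 0, 0, 0, 0, 0, 0]
```

Note that the scan is linear; real allocators often search a word at a time to speed this up.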

2. Linked Lists:

Linked lists are more versatile and can be used for dynamic memory allocation. Instead of representing individual blocks, linked
lists maintain a list of memory blocks (nodes) where each node contains information about the block's size and status (allocated or
free). These memory blocks can be of variable sizes.

Allocation: To allocate memory, the allocator traverses the linked list, looking for a free block that is large enough to satisfy the
allocation request. Once found, the block is marked as allocated, and a reference to it is returned.

Deallocation: When deallocating memory, the block is marked as free, and the allocator may perform coalescing, merging adjacent
free blocks to prevent fragmentation.

Advantages:

- Suitable for managing variable-sized memory blocks.

- Reduces fragmentation through coalescing.

- Can adapt to dynamic memory requirements.

Disadvantages:

- Slower than bitmaps for finding free memory blocks.

- Requires additional metadata (size information) for each block, increasing memory overhead.
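A minimal free-list sketch of the allocation, splitting, and coalescing just described; the block layout `[start, size, free_flag]` and the sizes are illustrative, and only forward coalescing is shown for brevity.

```python
# Sketch of a first-fit free-list allocator over variable-sized blocks.
blocks = [[0, 100, True]]          # one free block covering 100 units

def malloc(size):
    for b in blocks:
        if b[2] and b[1] >= size:  # free and large enough
            if b[1] > size:        # split off the remainder as a new free block
                blocks.insert(blocks.index(b) + 1,
                              [b[0] + size, b[1] - size, True])
            b[1], b[2] = size, False
            return b[0]
    return None                    # no block large enough

def free(start):
    for i, b in enumerate(blocks):
        if b[0] == start:
            b[2] = True
            # coalesce with the next block if it is also free
            if i + 1 < len(blocks) and blocks[i + 1][2]:
                b[1] += blocks[i + 1][1]
                del blocks[i + 1]
            return

p = malloc(30)     # returns 0
q = malloc(20)     # returns 30
free(p)            # block 0 freed; its neighbour is still allocated
free(q)            # block 30 freed and coalesced with the free 50-unit block
print(blocks)      # [[0, 30, True], [30, 70, True]]
```

A production allocator would also coalesce backward and keep the metadata in headers next to each block.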

Memory Management Techniques

Multiprogramming fixed and variable partitions:

Multiprogramming with fixed and variable partitions, relocation, and protection are essential concepts in operating system design and
memory management. These concepts help in efficiently utilizing a computer's memory resources while ensuring the security and
isolation of processes. Let's explore each of these concepts in more detail:

1. Multiprogramming:

 Multiprogramming allows multiple programs to run concurrently, improving system throughput and responsiveness.
 Multiprogramming is a technique in which multiple programs are loaded into memory simultaneously, and the CPU is
switched between them to keep it busy.
 It improves the overall system throughput and response time by allowing multiple processes to run concurrently.

2. Fixed and Variable Partitions:


Fixed Partitions: Fixed partitions divide memory into fixed-sized sections

 In fixed-partitioning, the memory is divided into fixed-sized partitions.


 Each partition can hold one process, and processes are allocated to partitions at the time of submission.
 Fixed-partitioning is relatively simple but can lead to inefficient memory usage when the sizes of processes vary widely.

Variable Partitions: variable partitions allocate memory dynamically based on process requirements

 Variable partitioning, also known as dynamic partitioning, allows for more flexible memory allocation.
 Memory is divided into variable-sized partitions, and processes are allocated memory dynamically based on their size and
memory availability.
 Variable partitioning is more memory-efficient but requires more complex memory management

Buddy System:

 The buddy system allocates memory in powers of 2 (e.g., 2^0, 2^1, 2^2, etc.).
 When a memory request is made, the system allocates a block of the nearest larger size and splits it into smaller blocks until
the requested size is reached.
 Free memory blocks are merged (buddies) to form larger blocks when possible to prevent fragmentation.
 This approach is efficient and can help minimize both external and internal fragmentation.
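The power-of-two rounding rule at the heart of the buddy system can be sketched as follows; the request sizes are illustrative.

```python
# Buddy-system sizing rule: every request is served with the smallest
# power-of-two block that covers it, split down from a larger free block.
def buddy_block_size(request):
    """Smallest power of 2 >= request."""
    size = 1
    while size < request:
        size *= 2
    return size

print(buddy_block_size(35))   # 64 — a 35-unit request gets a 64-unit block
print(buddy_block_size(64))   # 64 — exact powers of two fit perfectly
```

The gap between the request (35) and the block (64) is the internal fragmentation the text mentions; merging freed buddies back together limits external fragmentation.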
Slab Allocation:

 Slab allocation is often used in the Linux kernel for managing memory for kernel data structures.
 It divides memory into slabs, where each slab contains a specific type of data structure (e.g., file control blocks).
 Slabs are allocated, deallocated, and managed as units, reducing memory fragmentation.
First Fit, Best Fit, and Worst Fit:

- These are allocation algorithms used within memory allocation strategies.

- First Fit allocates the first available block that is large enough.

- Best Fit allocates the smallest available block that fits.


- Worst Fit allocates the largest available block, hoping to minimize fragmentation.

- These algorithms have their advantages and disadvantages, and their performance depends on the specific use case.
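The three placement policies can be compared on one illustrative list of free holes (the hole sizes are invented for the example):

```python
# First Fit, Best Fit, and Worst Fit over the same free-hole list.
holes = [100, 500, 200, 300, 600]

def first_fit(req):   # first hole that is large enough
    return next((h for h in holes if h >= req), None)

def best_fit(req):    # smallest hole that still fits
    fits = [h for h in holes if h >= req]
    return min(fits) if fits else None

def worst_fit(req):   # largest hole available
    fits = [h for h in holes if h >= req]
    return max(fits) if fits else None

print(first_fit(212), best_fit(212), worst_fit(212))  # 500 300 600
```

For a 212-unit request, first fit stops at the 500 hole, best fit picks the tighter 300 hole, and worst fit picks the 600 hole in the hope of leaving a usefully large leftover.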

The choice of memory allocation strategy depends on the specific requirements and constraints of the system, including the type of
applications running, available hardware, and the desired balance between memory utilization and management complexity. Modern
operating systems often use a combination of these strategies to optimize memory allocation.

Fragmentation
There are two types of fragmentation in OS which are given as Internal fragmentation and External fragmentation.
Internal Fragmentation:
 Internal fragmentation happens when memory is split into fixed-sized blocks.
 Whenever a process requests memory, one of the fixed-sized blocks is allotted to it.
 When the memory allotted to a process is somewhat larger than the memory it requested, the difference between the allotted and requested memory is called internal fragmentation.
 Fixing the sizes of the memory blocks is what causes this issue; it can be solved by using dynamic partitioning to allot space to processes.
In the accompanying diagram, the difference between the memory allocated and the required space is the internal fragmentation.
External Fragmentation:
 External fragmentation happens when the total quantity of free memory is sufficient to satisfy a process's memory request, but the request cannot be fulfilled because the free memory is non-contiguous.
 Both first-fit and best-fit memory allocation strategies suffer from external fragmentation.

In the accompanying diagram, there is enough free space (55 KB) to run process-07 (which requires 50 KB), but the free memory (fragments) is not contiguous. Compaction, paging, or segmentation can be used to make the free space usable.

Difference between Internal fragmentation and External fragmentation

1. In internal fragmentation, fixed-sized memory blocks are assigned to processes. In external fragmentation, variable-sized memory blocks are assigned to processes.

2. Internal fragmentation happens when a process is smaller than the partition it is assigned. External fragmentation happens when processes are removed from memory, leaving scattered holes.

3. The solution to internal fragmentation is the best-fit block. The solution to external fragmentation is compaction and paging.

4. Internal fragmentation occurs when memory is divided into fixed-sized partitions. External fragmentation occurs when memory is divided into variable-size partitions based on the size of processes.

5. The difference between the memory allocated and the required space is called internal fragmentation. Unused spaces formed between non-contiguous memory fragments that are too small to serve a new process are called external fragmentation.

6. Internal fragmentation occurs with paging and fixed partitioning. External fragmentation occurs with segmentation and dynamic partitioning.

7. Internal fragmentation occurs when a process is allocated a partition greater than its requirement; the leftover space degrades system performance. External fragmentation can occur even when each process is allocated exactly the memory space it requires.

8. Internal fragmentation occurs in the worst-fit memory allocation method. External fragmentation occurs in the best-fit and first-fit memory allocation methods.

What is Paging in OS?

 Paging in OS is a dynamic and flexible technique that can load the processes in the memory partitions more optimally.
 Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical memory. This
scheme permits the physical address space of a process to be non-contiguous.
 The basic idea behind paging is to divide the process into pages so we can store these pages in the memory at different holes
(partitions) and use these holes efficiently.
 The paging in OS is used to support non-contiguous memory allocation.
 The logical address space of a process (and its image in secondary memory) is divided into fixed-sized partitions of equal size, each called a page.
 Similarly, the main memory is divided into equal fixed-size partitions; each called a frame.
 The mapping between logical pages and physical pages is maintained in a data structure called the page table.

Types of Paging in OS

There are primarily two types of Paging in OS commonly used in operating systems:

 Fixed-size paging
 Variable-size paging

Fixed-size paging in Operating System

 In fixed-size paging, the memory is divided into fixed-sized blocks or pages.


 Each page has a fixed size, typically a power of 2 (e.g., 4KB or 8KB).
 The entire logical address space of a process is divided into pages of the same size.
 This approach simplifies memory management, as each page can be easily allocated or deallocated.
 However, it can lead to internal fragmentation if the memory allocated to a process is not fully utilized.

Variable-size paging in Operating System

 In variable-size paging, the memory is divided into variable-sized blocks or pages.


 The size of each page is not fixed and can vary depending on the memory requirements of the process.
 Variable-size paging in OS aims to reduce internal fragmentation by allocating memory in smaller chunks based on the actual
memory needs of a process.
 This approach helps in more efficient memory utilization.

Demand Paging in OS

 Demand paging is a memory management scheme where pages are brought into memory only when they are demanded by
the running process, reducing memory wastage and improving overall system performance.
 When a page that is not present in memory is accessed, a page fault occurs, prompting the operating system to fetch the
required page from secondary storage into memory.
 This approach allows for more efficient memory utilization as only the necessary pages are loaded, reducing initial memory
overhead and allowing for larger program sizes to be accommodated.

Important Points for Paging in Operating System (OS)


The hardware defines the page size in paging in OS. All the frames and pages in the memory are of the same size. Some of the
important points for the paging in OS (operating system) are as follows:

 Whenever the process is created, paging in OS will be applied to the process, and a page table will be created. The base
address of the page table will be stored in the PCB.
 Paging in OS is for every process, and every process will have its own page table.
 Page Table of the processes will be stored in the main memory.
 There is no external fragmentation in the paging in OS because the page size is the same as the frame size.
 Internal fragmentation can exist only in the last page of a process; on average it is P/2, where P is
the page size.
 Maintaining the page table is considered an overhead (burden) for the system.

Paging in OS Diagram

The below diagram explains the technique of mapping a logical address to a physical address in paging in OS (operating system):

In the above diagram, the translation of a logical address to a physical address works as follows: the CPU generates a logical address containing the page number and offset.

Then, using the page number, we look up in the page table the frame number in which the page is stored.

The frame's base address is combined with the offset value to get the physical address, and the memory is then accessed with it.
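The page-number/offset translation described above can be sketched as follows, assuming an illustrative 1 K-word page size and invented page-table contents:

```python
# Sketch of logical-to-physical translation in paging.
PAGE_SIZE = 1024                 # 1 K words per page/frame (illustrative)
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (illustrative)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE     # page number p
    offset = logical_addr % PAGE_SIZE    # page offset d
    frame = page_table[page]             # page-table lookup
    return frame * PAGE_SIZE + offset    # frame base + offset

print(translate(1050))   # page 1, offset 26 -> frame 2 -> 2074
```

With power-of-two page sizes, the division and modulo reduce to taking the high and low bits of the address, which is why hardware can do this split for free.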

Address Translation in Paging in OS

 An address space is the range of valid addresses available to a program or process.
 It is the memory space accessible to a program or process.
 The memory can be physical or virtual and is used for storing data and executing instructions.
 The two main types of address space are described below:

 Logical address space


 Physical address space
 Logical Address or Virtual Address (represented in bits): An address generated by the CPU.
 Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical addresses generated by
a program.
 Physical Address (represented in bits): An address actually available on a memory unit.
 Physical Address Space (represented in words or bytes): The set of all physical addresses corresponding to the logical
addresses.
Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 = 2^27 words, then Logical Address = log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 = 2^24 words, then Physical Address = log2(2^24) = 24 bits
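The bit-count relationships in the examples above can be checked mechanically:

```python
# Verifying the address-space arithmetic from the worked examples.
import math

assert 2 ** 31 == 2 * 2 ** 30            # 31-bit logical address -> 2 G words
assert 128 * 2 ** 20 == 2 ** 27          # 128 M words -> 2^27 words
assert int(math.log2(2 ** 27)) == 27     # -> 27-bit logical address
assert 2 ** 22 == 4 * 2 ** 20            # 22-bit physical address -> 4 M words
print("address-space arithmetic checks out")
```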
The mapping from virtual to physical address is done by the memory management unit (MMU) which is a hardware device and this
mapping is known as the paging technique.
 The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Let us consider an example:
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into:


 Page Number(p): Number of bits required to represent the pages in Logical Address Space or Page number
 Page Offset(d): Number of bits required to represent a particular word in a page or page size of Logical Address Space or word
number of a page or page offset.
Physical Address is divided into:
 Frame Number(f): Number of bits required to represent the frame of Physical Address Space or Frame number frame
 Frame Offset(d): Number of bits required to represent a particular word in a frame or frame size of Physical Address Space or
word number of a frame or frame offset.
The hardware implementation of the page table can be done by using dedicated registers. But using registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.

Benefits of Paging in OS

 Increased Memory Utilization


 Simplified Memory Management
 Memory Protection
 Virtual Memory

Advantage of Paging in Operating System

 Paging in OS is a fundamental technique used in modern operating systems to efficiently manage memory resources. By
dividing memory into fixed-size pages and mapping virtual addresses to physical addresses, paging provides increased
memory utilization, simplified memory management, memory protection, and supports virtual memory systems.
 Page replacement algorithms further enhance the efficiency of paging by determining which pages to swap in and out of
physical memory. As operating systems continue to evolve, paging remains a vital component in ensuring optimal system
performance and resource management.

TLB(Translation Look-aside buffer)

 The TLB is an associative, high-speed memory.


 Each entry in TLB consists of two parts: a tag and a value.
 When this memory is used, then an item is compared with all tags simultaneously. If the item is found, then the
corresponding value is returned.

Let main memory access time = m.

If the page table is kept in main memory:
Effective access time = m (to access the page table)
+ m (to access the word in the mapped frame) = 2m
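With a TLB in front of the page table, the effective access time becomes a weighted average of the hit and miss paths; the timings and hit ratio below are illustrative.

```python
# Effective access time (EAT) with a TLB: a hit needs one memory access,
# a miss needs two (page table, then the word itself).
def effective_access_time(m, tlb_time, hit_ratio):
    hit = tlb_time + m               # TLB hit: one memory access
    miss = tlb_time + 2 * m          # TLB miss: page table, then the word
    return hit_ratio * hit + (1 - hit_ratio) * miss

# 100 ns memory, 20 ns TLB, 80% hit ratio -> 0.8*120 + 0.2*220 = 140 ns
print(effective_access_time(100, 20, 0.8))
```

The closer the hit ratio gets to 1, the closer the EAT gets to a single memory access plus the TLB lookup.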
TLB Hit and Miss

Segmentation

 A process is divided into Segments. The chunks that a program is divided into which are not necessarily all of the exact sizes
are called segments.
 Segmentation gives the user’s view of the process which paging does not provide. Here the user’s view is mapped to physical
memory.

Types of Segmentation in Operating System

 Virtual Memory Segmentation: Each process is divided into a number of segments, but the segmentation is not done all at
once. This segmentation may or may not take place at the run time of the program.
 Simple Segmentation: Each process is divided into a number of segments, all of which are loaded into memory at run time,
though not necessarily contiguously.
 There is no simple relationship between logical addresses and physical addresses in segmentation. A table stores the
information about all such segments and is called Segment Table.

What is Segment Table?

 It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
o Base Address: the starting physical address where the segment resides in memory.
o Segment Limit: specifies the length of the segment.
Translation of a two-dimensional logical address to a one-dimensional physical address:

The address generated by the CPU is divided into:

o Segment number (s): Number of bits required to represent the segment.


o Segment offset (d): Number of bits required to represent the size of the segment.
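The base/limit check described above can be sketched as follows; the segment-table contents are illustrative.

```python
# Sketch of segment-table translation with a limit check.
segment_table = {0: (1400, 1000),   # segment -> (base, limit), illustrative
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                   # offset must be within the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + d                  # physical address = base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

An offset at or beyond the limit (e.g. offset 400 into segment 1) traps instead of silently reading a neighbouring segment, which is the protection property the text describes.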

Advantages of Segmentation in Operating System

 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
 As a complete module is loaded all at once, segmentation improves CPU utilization.
 Segmentation is close to the user’s view of memory: users can divide their programs into modules via
segmentation, and these modules are simply separate parts of a process’s code.
 The user specifies the segment size, whereas, in paging, the hardware determines the page size.
 Segmentation is a method that can be used to segregate data from security operations.
 Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments can be of variable size, and processes can
be designed to have multiple segments, allowing for more fine-grained memory allocation.
 Sharing: Segmentation allows for sharing of memory segments between processes. This can be useful for inter-process
communication or for sharing code libraries.
 Protection: Segmentation provides a level of protection between segments, preventing one process from accessing or modifying
another process’s memory segment. This can help increase the security and stability of the system.

Disadvantages of Segmentation in Operating System

 As processes are loaded and removed from the memory, the free memory space is broken into little pieces, causing External
fragmentation.
 Overhead is associated with keeping a segment table for each activity.
 Due to the need for two memory accesses, one for the segment table and the other for main memory, access time to retrieve
the instruction increases.
 Fragmentation: As mentioned, segmentation can lead to external fragmentation as memory becomes divided into smaller
segments. This can lead to wasted memory and decreased performance.
 Overhead: Using a segment table can increase overhead and reduce performance. Each segment table entry requires
additional memory, and accessing the table to retrieve memory locations can increase the time needed for memory operations.
 Complexity: Segmentation can be more complex to implement and manage than paging. In particular, managing multiple
segments per process can be challenging, and the potential for segmentation faults can increase as a result.

Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined
with paging to get the best features of both techniques.

In Segmented Paging, the main memory is divided into variable size segments which are further divided into fixed size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has multiple page tables.
3. The logical address is represented as Segment Number (base address), Page number and page offset.

 Segment Number → It points to the appropriate Segment Number.


 Page Number → It Points to the exact page within the segment
 Page Offset → Used as an offset within the page frame.

4. Each page table contains information about every page of its segment.
5. The segment table contains information about every segment.
6. Each segment table entry points to a page table, and every page table entry maps one page of the
segment to a frame.
Translation of logical address to physical address

 The CPU generates a logical address which is divided into two parts: Segment Number and Segment Offset.
 The Segment Offset must be less than the segment limit.
 Offset is further divided into Page number and Page Offset. To map the exact page number in the page table, the page number
is added into the page table base.
 The actual frame number with the page offset is mapped to the main memory to get the desired word in the page of the
certain segment of the process.
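The two-level lookup just described can be sketched as follows; the table contents and 256-word page size are illustrative.

```python
# Sketch of segmented paging: segment table -> per-segment page table -> frame.
PAGE_SIZE = 256
# segment -> that segment's page table (page number -> frame number), illustrative
segment_table = {0: {0: 3, 1: 9},
                 1: {0: 4}}

def translate(seg, page, offset):
    frame = segment_table[seg][page]      # two-level lookup
    return frame * PAGE_SIZE + offset     # frame base + page offset

print(translate(0, 1, 17))   # frame 9 -> 9*256 + 17 = 2321
```

Note that each segment carries its own page table, so page tables only need to cover the pages a segment actually uses.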

Advantages of Segmented Paging

1. It reduces memory usage.


2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.
Disadvantages of Segmented Paging

1. Internal Fragmentation will be there.


2. The complexity level is much higher than in pure paging.
3. Page tables need to be stored contiguously in memory.

Page replacement algorithms

 When a process requests a page that is not present in physical memory, a page fault occurs. The operating system needs to
determine which page to evict from physical memory to make space for the requested page.
 Various page replacement algorithms, such as FIFO (First-In-First-Out), LRU (Least Recently Used), and Optimal, are used
to select the victim page for eviction.
 These algorithms aim to minimize the number of page faults and optimize the overall system performance.

1. FIFO (First-In, First-Out):

 This is one of the simplest page replacement algorithms.


 It replaces the oldest page in memory, based on the order in which pages were brought in.
 However, FIFO may suffer from the "Belady's Anomaly," where increasing the number of frames can lead to more page
faults.
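A minimal FIFO simulation (an illustrative sketch, not part of the original text) counts page faults for a reference string:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                  # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:        # page fault: page is not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the page that arrived first
            frames.append(page)
    return faults

print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 7 page faults
```

Note that eviction depends only on arrival order, never on how recently a page was used; this is what makes FIFO susceptible to Belady's Anomaly.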

2. Second Chance:

 Second Chance is an enhancement of the FIFO algorithm.


 It uses a circular queue to maintain pages in memory.
 When a page is to be replaced, it checks whether the oldest page (the one at the front of the queue) has been referenced. If it
has, the page is given a second chance: its reference bit is cleared and it is moved to the back of the queue.
 This algorithm aims to avoid replacing pages that are still actively used.
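A sketch of Second Chance (illustrative only; real implementations track reference bits in hardware, and details such as the initial bit value vary):

```python
def second_chance_faults(reference_string, num_frames):
    """Count page faults under the Second Chance algorithm."""
    queue = []                        # entries are [page, referenced_bit]; front = oldest
    faults = 0
    for page in reference_string:
        hit = False
        for entry in queue:
            if entry[0] == page:
                entry[1] = 1          # page referenced again: set its bit
                hit = True
                break
        if hit:
            continue
        faults += 1                   # page fault
        if len(queue) < num_frames:
            queue.append([page, 0])   # free frame available, no eviction
            continue
        while queue[0][1] == 1:       # referenced pages get a second chance:
            entry = queue.pop(0)      # clear the bit and move to the back
            entry[1] = 0
            queue.append(entry)
        queue.pop(0)                  # evict the first page with bit 0
        queue.append([page, 0])
    return faults

print(second_chance_faults([1, 2, 3, 2, 4, 1], 3))  # 5 page faults
```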

3. LRU (Least Recently Used):

 LRU replaces the page that has not been used for the longest time.
 It requires maintaining a list of pages in order of their usage, with the most recently used page at the front.
 LRU is effective but can be computationally expensive to implement because it needs to constantly update the usage order.
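The usage-order list described above can be sketched as follows (illustrative; a real kernel approximates LRU with hardware reference bits rather than an exact list):

```python
def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = []                       # least recently used page at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)       # refresh: page becomes most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)         # evict the least recently used page
        frames.append(page)           # back of the list = most recently used
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_page_faults(ref, 3))  # 10
print(lru_page_faults(ref, 4))  # 8: more frames never hurt LRU (no anomaly)
```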

4. Optimal:

 The Optimal algorithm is an idealized algorithm that makes replacement decisions based on future knowledge of page
accesses.
 It replaces the page that will not be used for the longest time in the future.
 While Optimal provides the lowest possible page fault rate, it is not practical because it requires predicting future page
accesses, which is typically not feasible.
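Although Optimal is unrealizable online, it can be simulated offline when the full reference string is known, which is how it is used as a benchmark (sketch, not from the original text):

```python
def optimal_page_faults(reference_string, num_frames):
    """Count page faults under the (theoretical) Optimal algorithm."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the page whose next use lies farthest in the future
        # (pages never used again are preferred victims).
        future = reference_string[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7 page faults
```

On the same reference string, Optimal needs only 7 faults where FIFO needs 9 and LRU 10 with 3 frames, which is why it serves as the lower bound for comparison.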

5. LFU (Least Frequently Used):

 LFU replaces the page that has been used the least frequently.
 It maintains a counter for each page that is incremented each time the page is accessed.
 LFU aims to keep frequently used pages in memory.
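A sketch of the counter-based scheme above. Tie-breaking and counter-reset policies vary between implementations; this version breaks ties in insertion order and discards a page's count when it is evicted (both are assumptions):

```python
from collections import Counter

def lfu_page_faults(reference_string, num_frames):
    """Count page faults under LFU replacement."""
    frames = []            # kept in insertion order for tie-breaking
    counts = Counter()     # access counter per resident page
    faults = 0
    for page in reference_string:
        counts[page] += 1              # every access increments the counter
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            victim = min(frames, key=lambda p: counts[p])  # least frequently used
            frames.remove(victim)
            del counts[victim]         # assumed policy: count resets on eviction
        frames.append(page)
    return faults

print(lfu_page_faults([1, 1, 2, 3, 2, 4], 2))  # 5 page faults
```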

6. Clock:

 The Clock algorithm is a simpler approximation of the LRU algorithm.


 It uses a circular list of pages in memory and a "use bit" for each page.
 When a page is to be replaced, Clock scans through the circular list, clearing the use bit of each referenced page it passes.
 It then replaces the first page it finds whose use bit is 0.
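The circular scan can be sketched with an array and a hand index (illustrative; the choice to set the use bit to 1 when a page is loaded is an assumption, as variants differ):

```python
def clock_page_faults(reference_string, num_frames):
    """Count page faults under the Clock algorithm."""
    frames = [None] * num_frames      # circular list of resident pages
    use_bit = [0] * num_frames
    hand = 0                          # the clock hand
    faults = 0
    for page in reference_string:
        if page in frames:
            use_bit[frames.index(page)] = 1   # mark as recently used
            continue
        faults += 1
        while use_bit[hand] == 1:     # pass over referenced pages...
            use_bit[hand] = 0         # ...clearing their use bits
            hand = (hand + 1) % num_frames
        frames[hand] = page           # replace the first page with use bit 0
        use_bit[hand] = 1             # assumed: loaded pages start referenced
        hand = (hand + 1) % num_frames
    return faults

print(clock_page_faults([1, 2, 3, 2, 4, 1], 3))  # 5 page faults
```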

7. WS-Clock (Working Set Clock):

 WS-Clock is an extension of the Clock algorithm that considers the working set of pages used by a process.
 It scans through pages in memory and checks their reference status, but it also takes into account the working set window.
 If a page falls outside the working set window, it is a candidate for replacement.

The choice of a page replacement algorithm depends on factors such as the system's hardware, the memory management requirements
of the operating system, and the specific workload characteristics. No single algorithm is universally superior; each has its strengths
and weaknesses in different scenarios. Modern operating systems may even use a combination of these algorithms or adaptive
approaches to optimize page replacement.

Belady’s Anomaly in Page Replacement Algorithms


 In an operating system, process data is loaded in fixed-sized chunks, and each chunk is referred to as a page.
 The processor loads these pages into fixed-sized chunks of memory called frames. The page size is equal to the frame size.
 A page fault occurs when a page is not found in the memory and needs to be loaded from the disk.
 If a page fault occurs and all memory frames are already allocated, a page in memory must be replaced to make room for
the requested page. Loading pages only when they are requested in this way is referred to as demand paging.
 The choice of which page to replace is specified by page replacement algorithms.
 Generally, increasing the number of frames allocated to a process makes its execution faster, as fewer page faults
occur. Sometimes the reverse happens, i.e. more page faults occur when more frames are allocated to a process. This
unexpected result is termed Belady’s Anomaly.
Bélády’s anomaly is the name given to the phenomenon where increasing the number of page frames results in an increase in the
number of page faults for a given memory access pattern.
This phenomenon is commonly experienced in the following page replacement algorithms:

1. First in first out (FIFO)


2. Second chance algorithm
3. Random page replacement algorithm
Reason for Belady’s Anomaly
Example: Consider the following diagram to understand the behavior of a stack-based page replacement algorithm
The diagram illustrates that the set of pages in memory with 3 frames, {0, 1, 2}, is not a subset of the set of pages in memory
with 4 frames, {0, 1, 4, 5}. This violates the inclusion property that stack-based algorithms satisfy, and the situation can
frequently be seen with the FIFO algorithm.

Belady’s Anomaly in FIFO

Assume a system that has no pages loaded in memory and uses the FIFO page replacement algorithm. Consider the following
reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Case-1: If the system has 3 frames, the given reference string using the FIFO page replacement algorithm yields a total of 9 page
faults. The diagram below illustrates the pattern of the page faults occurring in the example.

Case-2: If the system has 4 frames, the given reference string using the FIFO page replacement algorithm yields a total of 10 page
faults. The diagram below illustrates the pattern of the page faults occurring in the example.

It can be seen from the above example that on increasing the number of frames while using the FIFO page replacement algorithm, the
number of page faults increased from 9 to 10.
Note – Not every reference string causes Belady’s anomaly in FIFO, but certain reference patterns worsen FIFO’s
performance as the number of frames increases.
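The two cases above can be reproduced with a short FIFO simulation (an illustrative sketch in place of the missing diagrams):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))  # 9 page faults with 3 frames
print(fifo_faults(ref, 4))  # 10 page faults with 4 frames: Belady's anomaly
```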

How Can Belady’s Anomaly Be Removed?


A stack-based approach can be used to get rid of Belady’s Anomaly. These are some examples of such algorithms:

 Optimal Page Replacement Algorithm


 Least Recently Used Algorithm (LRU)
These algorithms are based on the idea that if a page has been inactive for a long time, it is not being utilised frequently, so it is
the best candidate for eviction. This allows for improved memory management and eliminates Belady’s anomaly.
Features of Belady’s Anomaly:

 Page fault rate: Page fault rate is the number of page faults that occur during the execution of a process. Belady’s Anomaly
occurs when the page fault rate increases as the number of page frames allocated to a process increases.
 Page replacement algorithm: Belady’s Anomaly is specific to some page replacement algorithms, including the First-In-
First-Out (FIFO) algorithm and the Second-Chance algorithm.
 System workload: Belady’s Anomaly can occur when the system workload changes. Specifically, it can happen when the
number of page references in the workload increases.
 Page frame allocation: Belady’s Anomaly can occur when the page frames allocated to a process are increased while the total
number of page frames in the system remains constant. With a non-stack algorithm such as FIFO, the set of pages held
in memory with more frames is not guaranteed to contain the set held with fewer frames, so the extra frames can
paradoxically lead to more page faults.
 Impact on performance: Belady’s Anomaly can significantly impact system performance, as it can result in a higher
number of page faults and slower overall system performance. It can also make it challenging to choose an optimal number
of page frames for a process.

Advantages :

 Better insight into algorithm behavior: studying the anomaly shows how a replacement algorithm responds to changes
in frame allocation.
 Improved algorithm performance: awareness of the anomaly motivates the use of stack-based algorithms such as LRU,
which avoid it.
Disadvantages :
 Poor predictability: allocating more frames to a process may unexpectedly increase its page faults.
 Increased overhead: the extra page faults waste CPU time and disk I/O.
 Unintuitive behavior: more memory is normally expected to improve performance, not degrade it.
 Difficulty in optimization: choosing an optimal number of frames for a process becomes harder.
Segmentation with Paging (MULTICS):

The MULTICS (Multiplexed Information and Computing Service) operating system was one of the early systems that combined
segmentation and paging to address the drawbacks of each technique:

1. Segmentation: MULTICS divided a program's address space into segments, as discussed above. Segments were used for code, data,
and other logical units.

2. Paging: Inside each segment, MULTICS used a paging mechanism. Paging divides each segment into fixed-sized pages. This
combination allowed for more granular control of memory allocation and improved protection.

By using both segmentation and paging, MULTICS gained the benefits of both techniques:

 Modularity: Segmentation provided logical organization and protection.


 Granularity: Paging allowed for efficient use of memory with fixed-sized pages.
 Flexibility: The combination allowed for variable-sized segments, which could have variable-sized pages within them.

Handling Page Faults:

Page faults occur when a program tries to access a virtual page that is not currently in physical memory. When a page fault happens,
the operating system must handle it:
 Locate the page on disk.
 Find an available page frame in physical memory.
 Load the page from disk into the free page frame.
 Update the page table to reflect the new mapping.
 Resume the interrupted process.
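The steps above can be sketched as a toy demand-paging routine. All of the structures below are hypothetical stand-ins; a real handler runs in the kernel and works with hardware page tables, and replacement (when no frame is free) is not shown:

```python
memory = {}                           # physical memory: frame number -> contents

def handle_page_fault(page, page_table, free_frames, disk):
    """Sketch of the page-fault handling steps listed above."""
    data = disk[page]                 # 1. locate the page on the backing store
    frame = free_frames.pop()         # 2. find an available page frame
    memory[frame] = data              # 3. load the page into the free frame
    page_table[page] = frame          # 4. update the page table mapping
    return frame                      # 5. the faulting process can now resume

disk = {"pageA": b"page contents"}    # hypothetical backing store
page_table, free_frames = {}, [0, 1, 2]
handle_page_fault("pageA", page_table, free_frames, disk)
print(page_table)
```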

Concept of "Locality of Reference"

The concept of "Locality of Reference" is a fundamental principle in computer science and memory management. It refers to the
tendency of a program's memory access pattern to repeatedly access a relatively small portion of the available memory for a certain
period of time. Locality of Reference is crucial for optimizing memory management, cache usage, and overall system performance.
There are two primary types of locality of reference:

1. Temporal Locality:

 Temporal locality, also known as "temporal reference," refers to the idea that if a program or process accesses a particular
memory location, it is likely to access the same location again in the near future.
 Programs often use variables, data structures, or instructions repeatedly in a short time span. Caching mechanisms take
advantage of this by retaining recently accessed data in a faster, smaller memory (cache) to reduce the time it takes to access
that data again.
 Examples of temporal locality include loop iterations, function calls, and the reuse of variables.

2. Spatial Locality:

 Spatial locality, also known as "spatial reference," suggests that when a program accesses a specific memory location, it is
likely to access neighbouring memory locations in the same vicinity.
 This locality principle aligns with the idea that data is often stored in contiguous memory locations, and programs tend to
access data that is stored nearby.
 Caching mechanisms, such as cache lines, exploit spatial locality by loading not only the requested memory location but also
nearby data into cache because it is likely to be used soon.
 Examples of spatial locality include array traversal, sequential file reading, and data structures with closely related elements.
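Both kinds of locality can be made concrete with a toy cache simulation (hypothetical line size and capacity; fully associative with FIFO eviction, purely for illustration). Sequential access reuses each cache line several times, while a stride equal to the line size touches a new line on every access:

```python
def cache_hit_rate(addresses, line_size=8, num_lines=4):
    """Toy fully-associative cache with FIFO eviction (illustrative only)."""
    lines, hits = [], 0
    for addr in addresses:
        tag = addr // line_size       # addresses in the same line share a tag
        if tag in lines:
            hits += 1                 # locality pays off: the line is cached
        else:
            if len(lines) == num_lines:
                lines.pop(0)          # evict the oldest line
            lines.append(tag)
    return hits / len(addresses)

sequential = list(range(64))                    # strong spatial locality
strided = [(i * 8) % 64 for i in range(64)]     # jumps a full line every access
print(cache_hit_rate(sequential))  # 0.875: 7 of every 8 accesses hit
print(cache_hit_rate(strided))     # 0.0: every access misses
```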
