OS Unit 4 Class Notes
Memory Management: Contiguous Memory Allocation - Paging - Structure of the Page Table –
Swapping - Virtual Memory: Demand Paging – Copy-on write – Page Replacement – Allocation
of Frames – Thrashing – Memory Compression
The main memory is usually divided into two partitions: one for the resident operating system and one
for the user processes.
Memory Allocation:
One of the simplest methods for allocating memory is to divide memory into several fixed-sized
partitions.
Each partition may contain exactly one process.
Fixed-sized partitions are simple to implement: any process whose size is less than or equal to the
partition size can be loaded into any available partition.
Advantages
Simple to implement.
Less overhead.
Disadvantages
Internal Fragmentation:
Internal fragmentation is the space wasted inside of allocated memory blocks because of restriction on
the allowed sizes of allocated blocks.
Variable-Sized Partitions:
In the variable-size partition memory allocation method, memory is divided into partitions
of varying sizes based on the specific needs of the processes.
The operating system keeps track of which parts of memory are available (holes) and which are occupied by processes.
When a process needs memory, the system searches for a suitable hole.
If a hole is too large, it can be split: one part is allocated to the process, and the rest is
returned as an available hole.
When a process finishes, it releases its memory back to the available holes.
First Fit: Allocates the first available hole that is large enough.
Best Fit: Allocates the smallest hole that can fit the process.
Worst Fit: Allocates the largest hole to leave big fragments available for future processes
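The three hole-selection strategies above can be sketched in Python. This is a minimal illustration, assuming free memory is represented as a hypothetical list of (start, size) holes:

```python
# Sketch of first-fit, best-fit, and worst-fit hole selection.
# "holes" is an assumed list of (start, size) free blocks.

def first_fit(holes, request):
    """Return the first hole large enough for the request."""
    for start, size in holes:
        if size >= request:
            return (start, size)
    return None

def best_fit(holes, request):
    """Return the smallest hole that still fits the request."""
    candidates = [h for h in holes if h[1] >= request]
    return min(candidates, key=lambda h: h[1]) if candidates else None

def worst_fit(holes, request):
    """Return the largest hole, leaving the biggest leftover fragment."""
    candidates = [h for h in holes if h[1] >= request]
    return max(candidates, key=lambda h: h[1]) if candidates else None

holes = [(0, 100), (200, 50), (300, 220), (600, 120)]
print(first_fit(holes, 110))  # (300, 220) — first hole with size >= 110
print(best_fit(holes, 110))   # (600, 120) — smallest adequate hole
print(worst_fit(holes, 110))  # (300, 220) — largest hole
```

Note how best fit and worst fit must scan every hole, while first fit can stop at the first match.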
External fragmentation
External fragmentation occurs when there are many small, scattered blocks of free memory in the
system, but none of these blocks is large enough to fulfil a new memory request, even though the total
amount of free memory is sufficient.
Paging
Paging is a memory management technique in which process (logical) address space is broken
into blocks of the same size called pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called
frames.
Paging avoids external fragmentation and the need for compaction.
Address Translation
A page address is called a logical address and is represented by a page number and an offset.
A frame address is called a physical address and is represented by a frame number and an offset.
A data structure called the page table (page map table) is used to keep track of the mapping between a
page of a process and a frame in physical memory.
Paging Hardware:
In a paged memory management system, a logical address generated by the CPU is divided into a
page number (p) and a page offset (d).
The page number is used as an index into a page table, which contains the frame number (base
address) of each page in physical memory.
To compute the physical address, the frame number is combined with the page offset, resulting in
the formula:

    physical address = (frame number × page size) + page offset
This mapping allows efficient translation of logical addresses to physical memory, enhancing
memory management and protection for processes.
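The translation above can be sketched in a few lines of Python. This is a minimal sketch, assuming a 4 KB page size and a small hypothetical page table (the `page_table` contents are made-up example values):

```python
# Minimal sketch of paging address translation, assuming a 4 KB page
# size and a hypothetical page_table mapping page number -> frame number.

PAGE_SIZE = 4096

page_table = {0: 5, 1: 2, 2: 7}  # assumed example mappings

def translate(logical_address):
    p = logical_address // PAGE_SIZE   # page number
    d = logical_address % PAGE_SIZE    # page offset
    frame = page_table[p]
    return frame * PAGE_SIZE + d       # physical = frame * page size + offset

# Logical address 4100 lies in page 1 (offset 4): frame 2 -> 2*4096 + 4
print(translate(4100))  # 8196
```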
Paging Model
Paging with TLB
In a paged memory system, logical addresses are divided into a page number (p) and a page offset (d).
The page number is used to look up the corresponding frame number in the page table, which then
helps compute the physical address.
A TLB is a special cache that stores recent translations of logical addresses (page numbers) to
physical addresses (frame numbers).
When the CPU generates a logical address, the system first checks the TLB to see if the page number
is present.
TLB Hit: If the page number is found in the TLB, the corresponding frame number is
retrieved quickly, and the physical address is calculated using the formula:

    physical address = (frame number × page size) + page offset
TLB Miss: If the page number is not found, the system must access the page table to retrieve
the frame number. Once obtained, it is added to the TLB for future reference, and then the
physical address is calculated.
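The hit/miss behaviour above can be sketched with a bounded dictionary acting as the TLB. This is only an illustration, assuming a tiny two-entry TLB with LRU eviction and a made-up page table:

```python
# Sketch of a TLB lookup in front of the page table, using a bounded
# OrderedDict as the TLB (all names and sizes here are assumptions).

from collections import OrderedDict

PAGE_SIZE = 4096
TLB_SIZE = 2

page_table = {0: 5, 1: 2, 2: 7, 3: 1}
tlb = OrderedDict()  # page number -> frame number, in LRU order

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)
    if p in tlb:                       # TLB hit: fast path
        tlb.move_to_end(p)
        frame = tlb[p]
    else:                              # TLB miss: walk the page table
        frame = page_table[p]
        tlb[p] = frame                 # cache the translation
        if len(tlb) > TLB_SIZE:
            tlb.popitem(last=False)    # evict least recently used entry
    return frame * PAGE_SIZE + d

print(translate(4100))   # miss, then cached: 2*4096 + 4   = 8196
print(translate(4200))   # hit on page 1:     2*4096 + 104 = 8296
```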
Explain Page Table Structure in detail.
Hierarchical Paging
In a computer system with a 32-bit logical address space, the page table becomes too large.
Each process may need up to 4 MB of physical address space for the page table alone (2^20 entries of 4 bytes each).
One simple solution to this problem is to divide the page table into smaller pieces.
Consider a system with a 32-bit logical address space and a page size of 4 KB.
A logical address is divided into a page number consisting of 20 bits and a page offset consisting
of 12 bits.
The page number is further divided into a 10-bit outer page number (p1) and a 10-bit inner page number (p2).
First, use p1 to index into the outer page table, which provides the base address of the corresponding
inner page table. Next, use p2 to index into the inner page table to retrieve the frame number. Finally,
combine the frame number with the offset d to get the physical address.
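The two-level lookup described above can be sketched directly from the bit layout. This is a minimal sketch, assuming the outer and inner tables are plain dictionaries with made-up contents:

```python
# Sketch of two-level address translation for a 32-bit address:
# 10-bit p1, 10-bit p2, 12-bit offset. Table contents are assumed.

PAGE_SIZE = 1 << 12   # 4 KB

inner_table = {3: 42}                   # p2 -> frame number (assumed)
outer_table = {1: inner_table}          # p1 -> inner page table

def translate(logical_address):
    p1 = logical_address >> 22              # top 10 bits
    p2 = (logical_address >> 12) & 0x3FF    # next 10 bits
    d = logical_address & 0xFFF             # low 12 bits
    inner = outer_table[p1]                 # outer table gives inner table
    frame = inner[p2]                       # inner table gives frame number
    return frame * PAGE_SIZE + d

addr = (1 << 22) | (3 << 12) | 7   # p1=1, p2=3, d=7
print(translate(addr))             # 42*4096 + 7 = 172039
```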
Hashed page table
A common approach for handling address spaces larger than 32 bits is to use a hashed page
table, with the hash value being the virtual page number.
Each entry in the hash table contains a linked list of elements
Each element consists of three fields:
• virtual page number,
• value of the mapped page frame
• pointer to the next element in the linked list
The virtual page number in the virtual address is hashed into the hash table. The virtual page
number is compared with the corresponding field of the first element in the linked list.
If there is a match, the corresponding page frame is used to form the desired physical
address.
If there is no match, subsequent entries in the linked list are searched for a matching virtual
page number.
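The hash-and-chain lookup above can be sketched as follows. This is a minimal illustration, assuming a small bucket array, a simple modulo hash, and made-up virtual-page-number/frame pairs:

```python
# Sketch of a hashed page table: each bucket holds a chained list of
# (virtual page number, frame number) pairs. All values are assumed.

NUM_BUCKETS = 16

buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    """Hash the virtual page number and append to that bucket's chain."""
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    """Walk the chain in the hashed bucket for a matching VPN."""
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None   # would be a page fault in a real system

insert(5, 12)
insert(21, 30)    # 21 % 16 == 5, so it chains in the same bucket as VPN 5
print(lookup(21)) # 30
print(lookup(37)) # None (37 % 16 == 5, but no matching entry in the chain)
```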
Inverted Page Tables
Traditional page tables are created for each process, mapping virtual addresses to physical
addresses, which can lead to high memory usage due to duplicated entries for shared pages.
In contrast, inverted page tables maintain a single global table for the entire system, where
each entry corresponds to a physical frame and maps it to the associated virtual page and
process ID, significantly reducing memory overhead and preventing duplication.
Each logical address in the system consists of a triple: <process-id, page-number, offset>.
Each inverted page-table entry is a pair <process-id, page-number>.
When a memory reference occurs, the system looks up <process-id, page-number> in the
inverted page table. If it finds a match at entry i, it generates the physical address <i, offset>.
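The search described above can be sketched with a list indexed by frame number. This is a minimal sketch with assumed process IDs and table contents:

```python
# Sketch of an inverted page table: one entry per physical frame,
# searched for a matching <process-id, page-number> pair (assumed data).

PAGE_SIZE = 4096

# Entry i describes physical frame i.
inverted_table = [("P1", 0), ("P2", 3), ("P1", 1)]

def translate(pid, page_number, offset):
    for i, entry in enumerate(inverted_table):
        if entry == (pid, page_number):
            return i * PAGE_SIZE + offset   # physical address <i, offset>
    raise LookupError("page fault")

# <P1, page 1> is found at entry 2, so the physical address is <2, 100>.
print(translate("P1", 1, 100))  # 2*4096 + 100 = 8292
```

The linear search shown here is why real systems pair inverted page tables with a hash table to locate the matching entry quickly.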
Explain demand paging in detail. How is a page fault handled? Explain with a neat diagram.
Virtual Memory
Virtual memory allows the execution of processes that are not completely in the main memory.
Virtual memory involves the separation of logical memory as perceived by users from physical
memory.
This separation allows an extremely large virtual memory to be provided for programmers
when only a smaller physical memory is available.
Demand Paging:
The process of loading the page into memory on demand is known as demand paging.
Pages that are never accessed by a process are not loaded into physical memory. This helps
conserve memory and resources.
When a process is ready to run, it is swapped into memory as pages rather than as a whole.
This means that only the required pages are loaded into RAM.
A lazy swapper loads a page into memory only when it is actually needed.
A swapper deals with whole processes, while a pager deals with individual pages.
In pure demand paging, a page is never brought into memory until it is required. This
strategy ensures that only the necessary pages are loaded, allowing efficient use of memory
resources.
Page Fault
A page fault occurs when a process attempts to access a page that is not currently loaded in
physical memory (RAM).
When this happens, the operating system must take specific actions to handle the page fault
and load the required page into memory.
The CPU signals the operating system that a page fault has occurred.
The OS checks if the memory address is valid:
Valid: Proceed to the next step.
Invalid: Terminate the process.
Identify which page the process needs by checking the page table.
Look for a free frame in physical memory:
1. If available, use it.
2. If not, a page-replacement algorithm is used to decide which page in physical
memory to replace.
Read the needed page from disk into the newly freed frame.
Update the page table to indicate the page is now in memory.
Resume the program from where it left off, allowing access to the newly loaded page.
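The handling steps above can be sketched end to end. This is only an illustration: `disk`, `memory`, `page_table`, and `choose_victim` are assumed stand-ins, and the victim policy is a deliberately naive placeholder:

```python
# The page-fault steps above as a minimal sketch; all names are
# assumed stand-ins, not a real OS API.

NUM_FRAMES = 2

disk = {0: "page0", 1: "page1", 2: "page2"}   # backing store
memory = {}                                    # frame -> page contents
page_table = {}                                # page -> frame

def choose_victim():
    # Placeholder policy: evict the first resident page.
    return next(iter(page_table))

def handle_page_fault(page):
    if page not in disk:               # invalid reference
        raise MemoryError("terminate process")
    if len(memory) >= NUM_FRAMES:      # no free frame: replace a page
        victim = choose_victim()
        frame = page_table.pop(victim)
    else:                              # free frame available: use it
        frame = len(memory)
    memory[frame] = disk[page]         # read the needed page from disk
    page_table[page] = frame           # update the page table
    return frame

handle_page_fault(0)
handle_page_fault(1)
handle_page_fault(2)                   # no free frame: page 0 is evicted
print(sorted(page_table))              # [1, 2]
```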
Advantages of Demand Paging
Large virtual memory.
More efficient use of memory.
Unconstrained multiprogramming.
Faster Program Start
Disadvantage
Page Fault Overhead
Disk I/O Performance
Thrashing
Complexity
Increased Memory Management Overhead
Copy-on-Write
Copy-on-Write (CoW) is mainly a resource-management technique that allows the parent
and child processes to share the same pages of memory initially. Only if either process,
parent or child, modifies a shared page is that page copied.
Recall that in the UNIX operating system, the fork() system call is used to create a duplicate
of the parent process, known as the child process.
The main idea behind the CoW technique is that when a parent process creates a
child process, both the parent and the child initially share the same pages in
memory.
These shared pages are marked as copy-on-write, which means that if either the parent or the
child process attempts to modify a shared page, a copy of that page is created and the
modification is applied only to that copy, leaving the other process unaffected.
Now, let us assume that process 1 wants to modify page C in memory. When the copy-on-write
(CoW) technique is used, only the pages that are modified by process 1 are copied; all the
unmodified pages remain shared by the parent and child process.
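The sharing-then-copying behaviour can be sketched with page objects shared by reference. This is a simplified model, not how fork() actually works at the hardware level; all class and function names here are assumptions:

```python
# Sketch of copy-on-write sharing: pages are shared until a write,
# at which point only the written page gets a private copy.

class CowPage:
    def __init__(self, data):
        self.data = data

def fork(parent_pages):
    """Child initially shares the parent's page objects (no copying)."""
    return list(parent_pages)          # copy the references, not the pages

def cow_write(pages, index, data):
    """On write, replace the shared page with a private copy."""
    pages[index] = CowPage(data)

parent = [CowPage("A"), CowPage("B"), CowPage("C")]
child = fork(parent)
print(child[2] is parent[2])           # True: page C is shared

cow_write(child, 2, "C'")              # child modifies page C
print(child[2] is parent[2])           # False: child now has a private copy
print(parent[2].data, child[2].data)   # C C'
```

Pages A and B remain shared throughout; only the modified page C was duplicated.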
Explain Page Replacement Algorithm in detail.
Page Replacement
Page replacement algorithms are techniques used in operating systems to manage memory efficiently
when the virtual memory is full. When a new page needs to be loaded into physical memory , and
there is no free space, these algorithms determine which existing page to replace.
Basic page replacement proceeds as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame; if there is no free frame, use a page-replacement algorithm
to select a victim frame.
3. Write the victim frame to the disk; change the page and frame tables accordingly.
4. Read the desired page into the newly freed frame; change the page and frame
tables.
5. Continue the user process from where the page fault occurred.
Optimal Page Replacement:
This algorithm replaces the page that will not be used for the longest time in the
future.
It is called optimal because it guarantees the minimum number of page faults.
Advantages: Minimum page faults.
Disadvantages: Not feasible to implement, as the future memory accesses of
programs are unknown.
LRU (Least Recently Used) Page Replacement:
LRU replaces the page that has not been used for the longest period of time.
The OS keeps track of the order in which pages are accessed. When a page needs to
be replaced, it picks the least recently used one.
Advantages: Frequently used pages are less likely to be replaced.
Disadvantages: Requires extra resources to track and maintain the order of page
accesses.
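Both policies can be sketched as fault counters over a reference string. This is a minimal sketch; the reference string and frame count are assumed example values:

```python
# Sketch comparing page faults under LRU and the optimal policy for a
# small reference string and 3 frames (example values are assumed).

def lru_faults(refs, num_frames):
    frames, faults = [], 0             # most recently used page at the end
    for page in refs:
        if page in frames:
            frames.remove(page)        # refresh recency on a hit
            frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.pop(0)              # evict least recently used page
        frames.append(page)
    return faults

def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # Evict the page whose next use is farthest in the future
            # (or that is never used again).
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames.remove(victim)
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 3))      # 9
print(optimal_faults(refs, 3))  # 7
```

Optimal never faults more than LRU on the same string, which is why it serves as the benchmark even though it cannot be implemented in practice.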
Allocation of Frames
The frame allocation problem arises in memory management when an operating system
needs to decide how to allocate physical memory frames to various processes.
It involves determining the number of frames each process should receive to ensure
efficient execution while minimizing page faults.
Types of Frame Allocation:
1. Equal Allocation:
Each process gets an equal number of frames.
If there are n processes and m frames, then allocate m/n frames to each process.
Example:
If there are 5 processes and 100 frames, give each process 20 frames.
2. Proportional Allocation: Frames are allocated based on the size of the process.
Example:
For instance, in a system with 62 frames, if there is a process of 10KB and another
process of 127KB, then the first process will be allocated (10/137)*62 = 4 frames and
the other process will get (127/137)*62 = 57 frames.
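The proportional-allocation arithmetic above can be computed directly (process sizes and frame count are taken from the example; the function name is an assumption):

```python
# The proportional-allocation example above, computed directly.

def proportional_allocation(sizes, total_frames):
    """Allocate frames to each process in proportion to its size."""
    total = sum(sizes)
    return [size * total_frames // total for size in sizes]

# Processes of 10 KB and 127 KB sharing 62 frames:
print(proportional_allocation([10, 127], 62))  # [4, 57]
```

Note the integer truncation: 4 + 57 = 61, so one frame is left over, which a real allocator would hand out by some tie-breaking rule.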
3. Priority Allocation: Frames are allocated based on the priority of the processes, with high-
priority processes receiving more frames.
4. Global vs. Local Allocation:
Global: Any process can take a frame from any other process.
Local: A process can only take frames from its allocated set.
Thrashing
Thrashing occurs when a process does not have enough frames allocated to hold the
pages it uses repeatedly; as a result, the page-fault rate becomes very high.
Local replacement algorithms can limit the effects of thrashing.
The selection of a replacement policy to implement virtual memory plays an important part
in the elimination of the potential for thrashing.
Thrashing is solved by using
1. Working set model
2. Page fault frequency.
Working Set Model:
The working set model helps prevent thrashing by keeping track of the pages a
process needs over a certain period of time, known as the working set.
The operating system monitors each process’s working set. If a process’s working set
exceeds the number of frames allocated to it, the system detects that the process might
begin thrashing.
To prevent thrashing, the OS will allocate more frames to the process to ensure that its
working set can fit into memory.
If more frames are not available, the system may decide to swap out entire processes.
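The working-set idea above can be sketched as a sliding window over the reference string. This is a minimal sketch; the window size DELTA and the reference string are assumed example values:

```python
# Sketch of the working-set model: the working set at time t is the
# set of pages referenced in the last DELTA references (DELTA assumed).

DELTA = 4

def working_set(refs, t):
    """Pages referenced in the window of DELTA references ending at t."""
    window = refs[max(0, t - DELTA + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 4, 4, 4, 1]
print(sorted(working_set(refs, 7)))  # [1, 4] — window is [4, 4, 4, 1]
print(sorted(working_set(refs, 3)))  # [1, 2, 3]

# If the working-set size exceeds the frames allocated to the process,
# the OS should grant more frames (or swap the process out) to avoid
# thrashing.
```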