OS Unit 4
1. Virtual Memory :
• Purpose: To run programs larger than the physical memory and to improve
multitasking.
• Key Concepts:
1. Physical Memory: This is the actual RAM installed in your computer. It's a finite
resource.
2. Physical Address: Address in main memory (RAM).
3. Virtual Memory: This is a simulated memory space created by the operating system. It
can be much larger than physical memory.
4. Logical Address: Address generated by the CPU.
5. Mapping: Done using page tables to translate logical to physical addresses.
6. Paging: A memory management scheme that divides virtual memory into fixed-size
blocks called pages and physical memory into blocks of the same size called frames.
7. Page Table: A data structure that maps virtual pages to physical frames (physical
memory blocks).
8. Page Fault: Occurs when a process tries to access a page that isn't currently in physical
memory. The OS must then bring the page from secondary storage (like a hard disk) into
physical memory.
9. Demand Paging: A strategy where pages are brought into physical memory only when
they are needed, reducing memory overhead.
10. Thrashing: A condition where the system spends more time swapping pages in and out
of memory than executing processes, leading to poor performance.
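The translation in concept 5 can be sketched in a few lines. This is a minimal illustration, not a real MMU: the page size, the toy page table, and the `translate` helper are all assumptions for the example.

```python
PAGE_SIZE = 4096  # bytes per page (a common size; an assumption for this sketch)

# Toy page table: virtual page number -> physical frame number.
# Pages 0 and 2 are resident; None models a page that is not in memory.
page_table = {0: 5, 1: None, 2: 9}

def translate(logical_addr):
    """Translate a logical address to a physical address, or signal a page fault."""
    page = logical_addr // PAGE_SIZE    # virtual page number
    offset = logical_addr % PAGE_SIZE   # offset within the page
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError(f"page fault on page {page}")
    return frame * PAGE_SIZE + offset

print(translate(8192 + 100))  # page 2, offset 100 -> frame 9 -> 36964
```

Accessing an address on page 1 would raise the simulated page fault, which is exactly the event the OS handles by loading the page from disk.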
• Benefits of Virtual Memory:
1. Increased Multiprogramming: Allows more processes to run concurrently.
2. Efficient Memory Utilization: Reduces memory fragmentation.
3. Larger Address Space: Enables programs to use more memory than physically
available.
2. Cache Memory :
• Definition: Cache memory is a small, high-speed memory located between the CPU and
RAM. It stores frequently accessed data to improve processing speed.
The cache has a significantly shorter access time than main memory because it is built
with faster, but more expensive, memory technology.
Computers use these concepts to optimize performance. They store frequently accessed data
and instructions in a special, fast memory called cache. This way, the CPU doesn't have to go
all the way to the main memory (slower) every time, speeding up the process.
• Levels:
- L1 Cache: Fastest, but small and inside the CPU.
- L2 Cache: Larger and slower than L1 but faster than RAM.
- L3 Cache: Shared among CPU cores, slower than L1 and L2.
A cache hit occurs when the data requested by the CPU is already present in the cache.
1. CPU Requests Data: The CPU sends an address to the memory system.
2. Tag Match Check:
◦ The cache checks if the requested address's tag matches an existing entry in
the cache.
◦ If there is a match, it’s a cache hit.
3. Data Delivery:
◦ The cache retrieves the data from the appropriate block and delivers it to the
CPU.
◦ No access to the main memory is required, reducing latency.
4. Timing: Since cache is faster than main memory, access time is minimal.
A cache miss occurs when the data requested by the CPU is not found in the cache.
1. CPU Requests Data: The CPU sends an address to the memory system.
2. Tag Match Check:
◦ The cache checks for the requested data.
◦ If the tag does not match any entry, it’s a cache miss.
3. Access Main Memory:
◦ The cache sends a request to main memory for the required data block.
◦ The main memory retrieves the block and sends it back to the cache.
4. Update the Cache:
3

◦ The fetched block is written to the cache using a cache replacement policy if
the cache is full (e.g., FIFO, LRU).
◦ The tag for the block is updated to match the new data.
5. Data Delivery:
◦ Once the block is stored in the cache, the data is delivered to the CPU.
6. Timing: A cache miss incurs more delay since main memory access is much slower
than cache access.
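The hit and miss flows above can be sketched with a toy direct-mapped cache. The line count, block size, and the `access` helper are assumptions for the example; real caches also store the data itself, valid bits, and (for associative caches) multiple ways per set.

```python
# Toy direct-mapped cache: each memory block maps to exactly one line,
# chosen by index bits; the remaining high bits form the tag.
NUM_LINES = 4
BLOCK_SIZE = 16  # bytes per cache block

cache = [None] * NUM_LINES  # each entry holds the tag currently stored, or None

def access(addr):
    """Return 'hit' or 'miss' for an address, filling the line on a miss."""
    block = addr // BLOCK_SIZE
    index = block % NUM_LINES   # which cache line this block maps to
    tag = block // NUM_LINES    # identifies which block occupies that line
    if cache[index] == tag:
        return "hit"            # tag match: data served from the cache
    cache[index] = tag          # miss: fetch block from main memory, update line
    return "miss"

print(access(0))   # miss (cold cache)
print(access(4))   # hit  (same block as address 0)
print(access(64))  # miss (same line as block 0, different tag -> eviction)
```

Note how address 64 evicts block 0's tag even though the cache has free lines elsewhere; that conflict behavior is the defining trade-off of direct mapping.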
3. Demand Paging:
Demand paging is a memory management technique that brings pages of a process into
physical memory only when they are needed. This helps optimize memory usage and allows
more processes to run concurrently.
Process:
1. The program is divided into pages.
2. Pages are loaded from disk to memory when accessed.
3. If the required page is not in memory, a page fault occurs.
Steps in handling a page fault:
1. Page Fault:
◦ When a process tries to access a memory location that isn't currently in
physical memory, a page fault occurs.
2. Page Table Check:
◦ The operating system checks the page table to see if the page is present in
memory.
3. Page Replacement (if necessary):
◦ If the page isn't in memory, the OS selects a victim page to be replaced.
4. Page Fetch:
◦ The required page is fetched from secondary storage (like a hard disk) and
loaded into the freed frame.
5. Page Table Update:
◦ The page table is updated to reflect the new location of the page.
6. Process Resumption:
◦ The process is resumed, and the instruction that caused the page fault is re-executed.
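The six steps above can be sketched as a tiny page-fault handler. The frame capacity, the FIFO victim choice, and the helper names are assumptions for this sketch; a real OS works with hardware page tables and disk I/O.

```python
import collections

CAPACITY = 3  # number of physical frames (an assumption for this sketch)
frames = collections.OrderedDict()  # resident pages, oldest-loaded first

def handle_page_fault(page):
    """Steps 3-5: pick a victim if memory is full, fetch the page, update the table."""
    if len(frames) >= CAPACITY:
        victim, _ = frames.popitem(last=False)      # step 3: evict oldest (FIFO victim)
        print(f"evicting page {victim}")
    frames[page] = f"data for page {page}"          # steps 4-5: fetch from disk, map it

def access_page(page):
    if page not in frames:                          # step 2: page table check
        print(f"page fault on page {page}")         # step 1: fault raised
        handle_page_fault(page)
    return frames[page]                             # step 6: the access is re-executed

for p in [1, 2, 3, 4]:   # fourth access overflows the 3 frames -> page 1 is evicted
    access_page(p)
```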
Key Advantages of Demand Paging:
• Efficient Memory Utilization: Only the necessary pages are loaded into memory,
reducing memory overhead.
• Increased Multiprogramming: More processes can run concurrently as they don't
need to be entirely loaded into memory.
• Simplified Memory Management: The OS doesn't need to allocate contiguous
blocks of memory for each process.
Key Disadvantages of Demand Paging:
• Performance Overhead: Page faults can significantly slow down program execution.
• Thrashing: If too many page faults occur, the system can spend more time swapping
pages than executing processes, leading to poor performance.
4. Page Replacement Algorithms :
1. First-In-First-Out (FIFO):
• This algorithm replaces the page that has been in memory the longest.
• It is simple to implement, but it can evict pages that are still in active use.
2. Optimal (OPT):
• This algorithm replaces the page that will not be used for the longest time in the
future.
• It is the best possible algorithm, but it is not feasible to implement in practice as it
requires knowledge of future page references.
3. Least Recently Used (LRU):
• This algorithm replaces the page that has not been used for the longest time.
• It is generally a good choice as it tends to replace pages that are less likely to be
needed in the near future.
• However, it requires additional overhead to track the usage history of each page.
4. Most Recently Used (MRU):
• This algorithm replaces the page that was most recently used.
• It is often not a good choice as it may replace pages that are likely to be needed again
soon.
5. Least Frequently Used (LFU):
• This algorithm replaces the page that has been used the least number of times.
• It can be effective in some cases, but it may not always be accurate, as a page that has
been used infrequently in the past may be needed frequently in the future.
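FIFO and LRU can be compared directly by replaying a reference string and counting faults. This is a small sketch; the reference string and frame count are assumptions chosen so the two policies diverge.

```python
from collections import OrderedDict

def count_faults(refs, capacity, policy):
    """Count page faults for a reference string under 'fifo' or 'lru'."""
    mem = OrderedDict()  # resident pages, oldest first (by arrival or by last use)
    faults = 0
    for page in refs:
        if page in mem:
            if policy == "lru":
                mem.move_to_end(page)    # LRU refreshes recency; FIFO ignores hits
        else:
            faults += 1
            if len(mem) >= capacity:
                mem.popitem(last=False)  # evict the oldest entry
            mem[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3, "fifo"))  # 9
print(count_faults(refs, 3, "lru"))   # 10
```

On this particular string FIFO happens to beat LRU; in general LRU tracks the program's locality better, at the cost of updating recency on every reference.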
5. Allocation of Frames :
Frame allocation in operating systems is the process of assigning physical memory frames to
processes. It's a crucial aspect of memory management, as it directly impacts the system's
performance and stability.
1. Equal Allocation:
• Each process is allocated an equal number of frames, regardless of its size or priority.
• This approach is simple to implement but may not be efficient, as it can lead to
underutilization or overutilization of memory.
2. Proportional Allocation:
3. Priority-Based Allocation:
4. Global vs. Local Replacement:
• Global Replacement: When a page fault occurs, the operating system can choose any
page in memory to replace, regardless of which process it belongs to.
• Local Replacement: The operating system can only choose a page to replace from
the process that caused the page fault.
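Proportional allocation (scheme 2 above) gives process *i* roughly `size_i / total_size × total_frames` frames. A small sketch, with the rounding rule and the example numbers chosen as assumptions:

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames to processes in proportion to their sizes.
    Floors each share, then hands leftover frames to the largest remainders."""
    total = sum(sizes)
    alloc = [size * total_frames // total for size in sizes]
    leftover = total_frames - sum(alloc)
    # distribute leftovers by largest fractional remainder
    by_remainder = sorted(range(len(sizes)),
                          key=lambda i: (sizes[i] * total_frames) % total,
                          reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc

# Two processes of 10 and 127 pages sharing 62 frames:
print(proportional_allocation([10, 127], 62))  # [5, 57]
```

Equal allocation would instead give each process 31 frames here, wasting most of the small process's share.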
6. Thrashing :
Definition: Thrashing is a condition in which the system spends more time
swapping pages between main memory and secondary storage (like a hard disk) than it does
executing processes.
Causes of Thrashing:
• Insufficient Physical Memory: When there's not enough physical memory to
accommodate all the processes and their data, the system relies heavily on virtual
memory.
• High Degree of Multiprogramming: Running too many processes simultaneously
can lead to excessive page faults.
• Poor Page Replacement Algorithms: Inefficient page replacement algorithms can
exacerbate the problem.
Solutions to Thrashing:
• Add More Physical Memory: Increasing RAM can reduce the need for virtual
memory.
• Reduce the Degree of Multiprogramming: Limit the number of processes running
simultaneously.
• Improve Page Replacement Algorithms: Use more efficient algorithms like LRU or
Clock.
• Process Swapping: Temporarily swap out inactive processes to free up memory.
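One standard way to keep a system out of thrashing is page-fault-frequency (PFF) control: grow a process's frame allocation when its fault rate is too high, shrink it when the rate is very low. A minimal sketch; the thresholds and helper are assumptions, not a real OS interface.

```python
# Page-fault-frequency control thresholds (faults per memory reference);
# the exact values are tuning parameters and assumed here for illustration.
UPPER, LOWER = 0.10, 0.02

def adjust_frames(frames, faults, references):
    """Return the new frame allocation for a process given its recent fault rate."""
    rate = faults / references
    if rate > UPPER:
        return frames + 1            # thrashing risk: grant an extra frame
    if rate < LOWER:
        return max(1, frames - 1)    # under-used allocation: reclaim a frame
    return frames                    # acceptable range: leave allocation alone

print(adjust_frames(frames=8, faults=50, references=400))  # rate 0.125 -> 9
print(adjust_frames(frames=8, faults=4, references=400))   # rate 0.010 -> 7
```

If no free frames remain when a process needs more, the OS falls back on the last bullet above and swaps out an entire process.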
7. Demand Segmentation :
Definition: A memory management technique similar to demand paging, but instead of
fixed-size pages, segments of variable sizes are loaded into memory only when required.
Key Concepts:
• Segment: A logical division of a process's memory space.
• Segment Table: A table that maps logical segment numbers to physical memory
addresses.
• Segment Size: Segments can be of varying sizes, allowing for more flexible memory
allocation.
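A segment-table lookup differs from paging in that the offset must be checked against each segment's limit. A toy sketch; the table contents and helper names are assumptions for the example.

```python
# Toy segment table: segment number -> (base address, limit in bytes).
segment_table = {0: (1000, 400), 1: (5000, 120)}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address,
    faulting if the segment is absent or the offset exceeds its limit."""
    if segment not in segment_table:
        raise RuntimeError(f"segment fault: segment {segment} not loaded")
    base, limit = segment_table[segment]
    if offset >= limit:
        raise RuntimeError("protection fault: offset beyond segment limit")
    return base + offset

print(translate(0, 100))  # 1000 + 100 -> 1100
```

The limit check is what gives segmentation its built-in protection: an out-of-bounds offset is trapped before it can touch another segment's memory.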
8. Operating System Security :
1. Authentication and Access Control:
• User Authentication: OS verifies the identity of users through mechanisms like
passwords, biometrics, or tokens.
• Access Control: OS enforces access controls to restrict user access to specific
resources based on their privileges.
2. Resource Protection:
4. Malware Protection:
5. Network Security:
• Secure Network Protocols: OS can enforce secure network protocols like SSL/TLS
to protect data transmission.
• Network Access Control: OS can control network access to prevent unauthorized
users from connecting to the system.
6. System Integrity:
• File System Integrity Checks: OS can periodically check the integrity of file systems
to detect and prevent corruption.
• System Log Monitoring: OS can monitor system logs to identify unusual activity or
potential security breaches.
7. User Education and Awareness:
• Security Policies: OS can enforce security policies and guidelines to educate users
about best practices.
• Security Training: OS can provide training and awareness programs to help users
identify and avoid security threats.