OS Unit-IV
In modern computing environments, where multiple processes are running concurrently, memory
management plays an essential role in preventing conflicts, improving performance, and
supporting multitasking. It is responsible for organizing and tracking memory in both physical
and virtual forms, ensuring that each process gets the required memory and that no process
interferes with another's memory space.
Key Objectives of Memory Management:
1. Efficient Memory Utilization: Ensuring that memory resources are used optimally
without wastage, so that programs can run smoothly even in environments with limited
memory.
2. Isolation and Protection: Preventing one process from accessing or modifying another
process's memory space, ensuring the security and stability of the system.
3. Process Scheduling: Allowing processes to run without interference while ensuring that
each process gets access to sufficient memory.
4. Avoiding Fragmentation: Reducing or eliminating issues like fragmentation, which can
leave parts of memory unused despite the presence of free space.
Key Functions of Memory Management:
1. Memory Allocation: Assigning memory to processes when they request it, ensuring that
each process has enough space to operate without conflicting with others.
2. Memory Deallocation: Releasing memory once it is no longer needed by a process to
make it available for other processes.
3. Memory Tracking: Keeping track of which memory regions are occupied and which are
free, often through data structures like page tables or segment tables.
4. Virtual Memory Management: Using techniques like paging and segmentation to
extend the apparent amount of physical memory available, allowing the system to run
larger programs and manage memory more flexibly.
Types of Memory:
• Physical Memory (RAM): The actual memory hardware available in the system.
• Virtual Memory: A concept that allows programs to use more memory than what is
physically available by swapping data between physical memory and storage (disk).
Memory management techniques are essential for supporting multi-tasking, enabling the OS to
efficiently handle multiple processes, each with different memory needs, while maintaining the
integrity of the system as a whole. Through various strategies like paging, segmentation, and
virtual memory, modern operating systems ensure the effective and secure management of
system memory resources.
• Fault Tolerance: no fault tolerance (a single point of failure) versus better fault tolerance, as tasks can be shifted to other processors.
• Cost and Complexity: less expensive and simpler design versus more expensive and more complex design.
1. Fixed Partitioning
• Description: In fixed partitioning, the system divides the memory into fixed-size
partitions, and each partition is assigned to a process. Each partition has a predetermined
size, and processes are allocated one partition.
• Key Characteristics:
o The size of partitions is fixed and does not change.
o Each partition is assigned to one process, and if a process is smaller than the
partition, the remaining space inside the partition is wasted.
• Advantages:
o Simple to implement and manage.
o No complex memory management schemes are required.
• Disadvantages:
o Internal Fragmentation: If a process is smaller than the partition size, the
remaining space inside the partition is wasted.
o External Fragmentation: There could still be gaps between partitions when
processes are allocated or deallocated.
o Fixed partition sizes may not fit all processes efficiently.
• Example: Older operating systems like MS-DOS used fixed partitioning.
2. Dynamic Partitioning
• Description: In dynamic partitioning, partitions are created at run time to match the exact size of each arriving process rather than being fixed in advance. This eliminates internal fragmentation but can lead to external fragmentation as processes are loaded and removed.
• Buddy System: memory blocks are allocated in powers of two and can be split or merged as needed. This reduces fragmentation and makes memory usage more efficient, but can still lead to internal fragmentation when a request is rounded up to the next power of two.
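The buddy system's rounding rule can be sketched in a few lines of Python (a toy illustration of the sizing policy, not a full allocator):

```python
def buddy_block_size(request):
    """Round a request up to the next power of two, as the buddy
    system does; the slack is internal fragmentation."""
    size = 1
    while size < request:
        size *= 2
    return size

for req in (3, 17, 64):
    print(req, "->", buddy_block_size(req))  # 3 -> 4, 17 -> 32, 64 -> 64
```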
Non-contiguous memory allocation refers to a memory management scheme where processes
are allocated memory in multiple scattered blocks rather than one contiguous block. This type of
memory allocation helps in efficient memory utilization and reduces the problems of
fragmentation, particularly external fragmentation. There are several types of non-contiguous
memory allocation techniques, including:
1. Paging
• Description: In paging, both the physical memory and the process memory are divided
into fixed-size blocks. The process is divided into pages, and the physical memory is
divided into page frames of the same size. The operating system maintains a page table
to map the pages of a process to physical memory.
• Key Characteristics:
o The process is split into fixed-size blocks (pages), which can be stored anywhere
in physical memory.
o The OS uses a page table to map logical addresses (pages) to physical addresses
(page frames).
• Advantages:
o Eliminates external fragmentation because processes are divided into equal-
sized pages.
o Flexible memory allocation as pages can be stored in any available physical
memory location.
• Disadvantages:
o Internal fragmentation can still occur if the last page of a process is not fully
utilized.
o Requires additional hardware support (Memory Management Unit - MMU) and
extra overhead in managing the page table.
• Example: Most modern operating systems like Windows, Linux, and macOS use
paging for memory management.
Structure of Page Table:
A Page Table is a crucial data structure used by the virtual memory system in an operating
system to store the mapping between logical addresses (generated by the CPU) and physical
addresses (used by the hardware). The page table helps translate the logical address into the
physical address, providing the corresponding frame number where the page is stored in the main
memory.
Characteristics of the Page Table
• Stored in Main Memory: The page table resides in the main memory.
• Entries: The number of entries in the page table equals the number of pages into which the
process is divided.
• Page Table Base Register (PTBR): Holds the base address for the page table of the current
process.
• Independent Page Tables: Each process has its own independent page table.
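The translation a single-level page table performs can be sketched in Python (the table contents and the 1 KB page size are illustrative assumptions):

```python
PAGE_SIZE = 1024  # assumed 1 KB pages

# Hypothetical page table for one process: index = page number,
# value = frame number in physical memory (None = not resident).
page_table = [5, 2, None, 7]

def translate(logical_addr):
    """Split a logical address into (page, offset) and map it to a
    physical address via the page table."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]
    if frame is None:
        raise LookupError("page fault: page %d not in memory" % page)
    return frame * PAGE_SIZE + offset

# Logical address 1034 lies in page 1 (offset 10); page 1 maps to
# frame 2, so the physical address is 2*1024 + 10 = 2058.
print(translate(1034))  # 2058
```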
Techniques for Structuring the Page Table
Hierarchical Paging
Also known as multilevel paging, this technique is used when the page table is too large to fit in
a contiguous space. It involves breaking the logical address space into multiple page tables.
Commonly used hierarchical paging structures include two-level and three-level page tables.
Two-Level Page Table
For a system with a 32-bit logical address space and a page size of 1 KB:
• Page Number: 22 bits
• Page Offset: 10 bits
• The page number is further divided into:
o P1: index into the outer page table (12 bits)
o P2: displacement within the page of the inner page table (10 bits)
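The bit-field split above can be demonstrated with a short Python sketch (the address value is an arbitrary example):

```python
# Bit widths from the two-level example: 32-bit address, 1 KB pages.
P1_BITS, P2_BITS, OFFSET_BITS = 12, 10, 10

def split(addr):
    """Decompose a 32-bit logical address into (p1, p2, offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, offset

# p1 indexes the outer table, p2 the inner table, offset the byte
# within the page.
print(split(0x00C04007))  # (12, 16, 7)
```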
Three-Level Page Table
For a system with a 64-bit logical address space and a page size of 4 KB, a three-level page table
is used to avoid large tables.
Hashed Page Tables
This approach handles address spaces larger than 32 bits. The virtual page number is hashed into
a page table containing a chain of elements hashing to the same location. Each element consists
of:
• The virtual page number
• The mapped page frame value
• A pointer to the next element in the linked list.
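A toy Python sketch of a hashed page table, using a list per bucket in place of the linked chain (bucket count and entries are illustrative):

```python
from collections import defaultdict

NUM_BUCKETS = 8  # assumed small table for illustration

# Each bucket holds a chain of (virtual_page_number, frame) pairs
# whose VPNs hash to the same slot.
buckets = defaultdict(list)

def hpt_insert(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def hpt_lookup(vpn):
    """Walk the chain in the hashed bucket for a matching VPN."""
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None  # page fault

hpt_insert(3, 42)
hpt_insert(11, 7)      # 11 % 8 == 3: collides with vpn 3, chained
print(hpt_lookup(11))  # 7
```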
Inverted Page Tables
Combines a page table and a frame table into a single data structure. Each entry consists of the
virtual address of the page stored in the real memory location and information about the process
that owns the page. This technique reduces the memory needed to store each page table but
increases the time needed to search the table.
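The space/time trade-off of an inverted page table shows up in a small Python sketch (the table contents are hypothetical):

```python
# One entry per physical frame: (owning_pid, virtual_page) or None.
# The frame number is simply the entry's index.
ipt = [(1, 0), (2, 5), None, (1, 3)]

def ipt_lookup(pid, vpn):
    """Linear search: slower than a direct index, but the table size
    is bounded by the number of physical frames, not by each
    process's address space."""
    for frame, entry in enumerate(ipt):
        if entry == (pid, vpn):
            return frame
    return None  # page fault

print(ipt_lookup(1, 3))  # 3
```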
Page Table Entries (PTE)
A Page Table Entry (PTE) stores information about a particular page of memory, including:
• Frame Number: The frame number in which the current page is present.
• Present/Absent Bit: Indicates whether the page is present in memory.
• Protection Bit: Specifies the protection level (read, write, etc.).
• Referenced Bit: Indicates whether the page has been accessed recently.
• Caching Enabled/Disabled: Controls whether the page can be cached.
• Modified Bit: Indicates whether the page has been modified.
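One possible way to pack these fields into a single integer entry, sketched in Python (the bit layout here is an assumption for illustration, not any particular architecture's format):

```python
# Assumed layout: low bits are flags, higher bits hold the frame number.
PRESENT, WRITABLE, REFERENCED, CACHE_DISABLED, MODIFIED = (
    1 << 0, 1 << 1, 1 << 2, 1 << 3, 1 << 4)
FRAME_SHIFT = 5

def make_pte(frame, flags):
    """Combine a frame number and flag bits into one entry."""
    return (frame << FRAME_SHIFT) | flags

def frame_of(pte):
    return pte >> FRAME_SHIFT

pte = make_pte(9, PRESENT | REFERENCED)
print(frame_of(pte), bool(pte & PRESENT), bool(pte & MODIFIED))
# 9 True False
```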
Advantages of Using a Page Table
• Efficient Use of Memory: Allocates only the necessary amount of physical memory needed
by a process.
• Protection: Controls access to memory and protects sensitive data.
• Flexibility: Allows multiple processes to share the same physical memory space.
• Address Translation: Translates virtual addresses into physical addresses efficiently.
• Hierarchical Design: Provides a more efficient method for managing large virtual address
spaces.
2. Segmentation
• Description: Segmentation divides a process into segments, which are logical units like
code, data, and stack, and each segment may vary in size. These segments are loaded into
non-contiguous physical memory locations. The operating system maintains a segment
table to map logical segments to physical memory addresses.
• Key Characteristics:
o The process is divided into logically meaningful segments (such as code, data,
stack).
o Each segment has a base address and a length.
o The segments are stored non-contiguously in memory.
• Advantages:
o Supports logical organization of memory, making it easier for the programmer to
work with different parts of the process.
o Provides better memory utilization compared to contiguous memory allocation.
• Disadvantages:
o External fragmentation can occur as segments are allocated non-contiguously.
o The system may still need complex algorithms to manage segment allocation and
deal with fragmentation.
• Example: Early UNIX systems used segmentation for memory management.
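The base/limit check that segmentation performs can be sketched as follows (the segment table values are illustrative):

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {
    0: (1400, 1000),  # code
    1: (6300, 400),   # data
    2: (4300, 400),   # stack
}

def seg_translate(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset

print(seg_translate(1, 53))  # 6353
```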
3. Paged Segmentation (Combination of Paging and Segmentation)
• Description: In paged segmentation, each segment of a process is further divided into fixed-size pages, combining the logical structure of segmentation with the simpler allocation of paging. A segment table entry points to a page table for that segment, so physical memory is allocated in page frames while the programmer still sees logical segments.
Swapping
Swapping is a memory management technique used to free up space in RAM and ensure efficient CPU utilization. It involves moving processes between the main memory (RAM) and secondary storage (usually a hard disk or SSD) to allow the system to run more programs simultaneously.
Key Concepts
• Swap-in: This is the process of transferring a program from secondary storage back into the
main memory (RAM) for execution.
• Swap-out: This involves moving a process from the main memory to secondary storage to free
up space in RAM for other processes.
How Swapping Works
When the RAM is full and a new program needs to run, the operating system selects a program
or data that is currently in RAM but not actively being used. This selected data is moved to
secondary storage, making space in RAM for the new program. When the swapped-out program
is needed again, it can be swapped back into RAM, replacing another inactive program or data if
necessary.
Advantages
1. Improved Memory Utilization: Swapping allows the CPU to manage multiple processes
within a single main memory, improving overall memory utilization.
2. Virtual Memory Creation: It helps in creating and using virtual memory, allowing the
system to run larger and more processes than the physical memory would allow.
3. Priority-Based Scheduling: Swapping can be used in priority-based scheduling to optimize
the execution of high-priority processes by swapping out low-priority ones.
Disadvantages
1. Performance Impact: The process of swapping can degrade system performance due to the
time taken to transfer data between RAM and secondary storage.
2. Data Loss Risk: In case of a power failure during swapping, there is a risk of losing data
related to the processes involved in swapping.
3. Increased Page Faults: If the swapping algorithm is not efficient, it can lead to an increased
number of page faults, further decreasing overall performance.
Virtual Memory
Demand Paging:
Demand Paging is a technique in which a page is usually brought into the main memory
only when it is needed or demanded by the CPU. Initially, only those pages are loaded that are
required by the process immediately. Those pages that are never accessed are thus never loaded
into the physical memory.
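A toy Python sketch of demand paging: pages sit on the backing store until first touched, and only that first touch costs a fault (all names and contents are illustrative):

```python
# Pages start on "disk"; a page is loaded only on first access.
disk = {0: "code", 1: "data", 2: "stack"}
memory = {}    # resident pages
faults = 0

def access(page):
    """Load the page on demand; count a fault on the first touch."""
    global faults
    if page not in memory:
        faults += 1
        memory[page] = disk[page]  # bring in from backing store
    return memory[page]

access(0); access(1); access(0)
print(faults)  # 2 (the second access to page 0 is a hit)
```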
Copy-on-Write (COW)
Definition: Copy-on-Write is a strategy where the system delays the copying of data until it is actually modified. This means that multiple processes can share the same data in memory until one of them needs to change it.
How It Works:
Initial State: When a process requests a copy of data, the operating system provides a reference
to the existing data rather than creating a new copy.
Shared Data: Multiple processes can read the shared data without any issues.
Modification: When a process attempts to modify the shared data, the operating system
intervenes.
Copying: At the point of modification, the operating system creates a copy of the data
specifically for the modifying process. This ensures that changes made by one process do not
affect the others.
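The steps above can be sketched with a reference-counted toy model in Python (class and function names are hypothetical):

```python
# Pages are stored once; each process's table maps a virtual page to
# a shared entry carrying the data and a reference count.
class CowPage:
    def __init__(self, data):
        self.data = data
        self.refs = 1

def fork_page(page):
    """On fork, the child shares the same physical page."""
    page.refs += 1
    return page

def write_page(table, vpn, value):
    """On write, copy only if the page is still shared."""
    page = table[vpn]
    if page.refs > 1:            # shared: make a private copy first
        page.refs -= 1
        page = CowPage(list(page.data))
        table[vpn] = page
    page.data[0] = value

parent = {0: CowPage([1, 2, 3])}
child = {0: fork_page(parent[0])}
write_page(child, 0, 99)         # triggers the copy
print(parent[0].data[0], child[0].data[0])  # 1 99
```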
Benefits:
Memory Efficiency: Reduces the amount of memory used by sharing data until modification is
necessary.
Fork System Call: In Unix-like operating systems, when a process is forked, the child process
shares the parent's memory pages. Only when either process modifies a page does the COW
mechanism create a separate copy.
Virtual Memory Management: COW is used to manage memory pages efficiently, especially
in scenarios involving large datasets or multiple processes accessing the same data.
By using Copy on Write, operating systems can manage resources more effectively, leading to
better overall system performance and reduced memory usage.
Page Replacement Algorithms
Page replacement algorithms are essential in operating systems that use paging for memory
management. When a page fault occurs and no free page frames are available, these algorithms
decide which page to replace to minimize the number of page faults and improve system
performance.
1. FIFO (First-In, First-Out)
FIFO is the simplest page replacement algorithm. It maintains a queue of pages in memory, with the oldest page at the front. When a new page needs to be loaded and no frame is free, the oldest page is removed.
This algorithm can suffer from Belady's anomaly, where increasing the number of page frames
can lead to more page faults.
Example:
Reference string: 1, 3, 0, 3, 5, 6, 3
Page frames: 3
Page faults: 6
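A short simulation reproduces this count (a sketch of the policy, not an OS implementation):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:         # evict the oldest page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6
```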
2. Optimal Page Replacement
Optimal page replacement replaces the page that will not be used for the longest time in the future. This algorithm is theoretical and serves as a benchmark, since it requires future knowledge of page requests.
Example:
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
Page frames: 4
Page faults: 6
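The optimal policy can be simulated because the whole reference string is known in advance; this sketch reproduces the count above:

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            future = refs[i + 1:]
            # Evict the page whose next use is farthest away (or never).
            victim = max(memory, key=lambda p: future.index(p)
                         if p in future else float("inf"))
            memory.discard(victim)
        memory.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6
```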
3. LRU (Least Recently Used)
LRU replaces the page that has not been used for the longest time. It relies on the principle of locality, where pages used recently are likely to be used again soon.
Example:
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
Page frames: 4
Page faults: 6
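Applying the same simulation idea to LRU, with a recency-ordered list standing in for the hardware's usage tracking, gives 6 faults for this reference string:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory, faults = [], 0   # memory[0] is the least recently used
    for page in refs:
        if page in memory:
            memory.remove(page)      # refresh recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the LRU page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))  # 6
```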
Page replacement algorithms are crucial for efficient memory management in operating systems.
Each algorithm has its advantages and disadvantages, and the choice depends on the specific
needs of the system. Understanding these algorithms helps improve system performance and
reduce page faults.
Allocation of Frames in Operating Systems
Frame allocation is a crucial aspect of operating systems, particularly in the context of virtual
memory and demand paging. It involves determining how many frames (fixed-size blocks of
physical memory) to allocate to each process. This is essential for efficient memory management
and process execution.
Key Principles
Constraints
1. Total Frames: You cannot allocate more frames than the total number of available frames.
2. Minimum Frames: Each process must be allocated a minimum number of frames to avoid
high page fault ratios and ensure that all referenced pages can be held in memory.
Frame Allocation Algorithms
1. Equal Allocation: Each process gets an equal number of frames. For example, if there are 48
frames and 9 processes, each process gets 5 frames, with the remaining frames used as a free-
frame buffer pool. Disadvantage: Inefficient for processes of varying sizes, leading to wasted
frames for smaller processes.
2. Proportional Allocation: Frames are allocated based on the size of each process. For a
process p_i of size s_i, the number of allocated frames a_i is calculated as
a_i = (s_i / S) × m, where S is the sum of the sizes of all processes and m is the total
number of frames. Advantage: Allocates frames according to the needs of each process.
3. Priority Allocation: Frames are allocated based on the priority of the processes. Higher
priority processes receive more frames. Advantage: Ensures that critical processes get the
necessary resources.
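The proportional formula can be checked with a small Python sketch (the flooring and the tie-breaking rule for leftover frames are one possible convention, not mandated by the formula):

```python
def proportional_allocation(sizes, total_frames):
    """a_i = (s_i / S) * m, floored; leftover frames from flooring
    go to the largest processes first (an assumed tie-break)."""
    S = sum(sizes)
    alloc = [size * total_frames // S for size in sizes]
    leftover = total_frames - sum(alloc)
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i])[:leftover]:
        alloc[i] += 1
    return alloc

# Two processes of 10 and 127 pages sharing 62 frames:
# S = 137, so roughly 62*10/137 and 62*127/137 frames each.
print(proportional_allocation([10, 127], 62))  # [4, 58]
```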
Global vs. Local Allocation
1. Local Replacement: A process can only use frames from its own allocated set when a page
fault occurs. Advantage: The page fault ratio is influenced only by the process's own
behavior. Disadvantage: Low-priority processes may hinder high-priority processes by not
sharing frames.
2. Global Replacement: A process can take frames from any other process when a page fault
occurs. Advantage: Improves overall system throughput. Disadvantage: The page fault ratio
of a process is affected by the behavior of other processes.
Important Considerations
• Performance: The choice of frame allocation strategy can significantly impact system
performance and process execution efficiency.
• Flexibility: Global replacement offers more flexibility but can lead to unpredictable
performance for individual processes.
• Fairness: Proportional and priority allocation aim to distribute resources more fairly based on
process needs and importance.
By understanding and implementing these frame allocation strategies, operating systems can
manage memory more effectively, ensuring optimal performance and resource utilization.
Thrashing:
Thrashing in OS is a phenomenon that occurs in computer operating systems when the
system spends an excessive amount of time swapping data between physical memory (RAM)
and virtual memory (disk storage) due to high memory demand and low available resources.
Storage Management in Operating Systems
Storage management in operating systems involves the efficient use and maintenance of storage
devices to ensure data integrity, security, and optimal performance. It encompasses various
processes and techniques to manage data storage equipment and optimize their usage.
Storage management has several key attributes that are essential for maintaining the storage
capacity of a system:
1. Performance: Ensuring that data storage resources operate efficiently.
2. Reliability: Maintaining the integrity and availability of data.
3. Recoverability: Providing mechanisms for data recovery in case of failures.
4. Capacity: Managing the available storage space effectively.
Disk Management
Disk management is a critical aspect of storage management, involving the organization and
maintenance of data on storage devices like hard disk drives and solid-state drives. Some
common disk management techniques include:
• Partitioning: Dividing a physical disk into multiple logical partitions for better organization.
• Formatting: Preparing a disk for use by creating a file system on it.
• File System Management: Managing the file systems used to store and access data.
• Disk Space Allocation: Allocating space on the disk for files and directories.
• Disk Defragmentation: Rearranging fragmented data to improve performance.
Mass storage devices are essential components of an operating system, designed to store
large volumes of data. The term is often used interchangeably with peripheral storage,
which refers to devices that hold data volumes larger than the native storage capacity of a
computer or device.
Primary Memory: This is the volatile storage component of a computer system, directly
accessed by the processor. It includes data buses, cache memory, and Random Access Memory
(RAM). Primary memory is faster but more expensive and has limited storage capacity, typically
ranging from 16 GB to 32 GB.
Secondary Memory: This is non-volatile, permanent memory that is not directly accessible by
the processor. It includes read-only memory (ROM), flash drives, hard disk drives (HDD), and
magnetic tapes. Secondary memory is slower but less expensive and has a larger storage
capacity, ranging from 200 GB to several terabytes.
Types of Mass Storage Devices
Magnetic Disks
Magnetic disks use magnetization to write, rewrite, and access data. They consist of platters
coated with magnetic material, divided into tracks and sectors for data storage. Examples include
floppy disks, hard disks, and zip disks.
Structure and Working:
• A mechanical arm with a read-write head moves across the spinning platters.
• Data is stored in tiny magnetized patches on the disk surface.
• Tracks are concentric circles on the disk, divided into sectors.
• The positioning time (seek time) and rotational latency affect data access speed.
Solid State Disks (SSDs)
SSDs use memory technology, such as flash memory or DRAM chips, to store data. They are
faster than traditional hard drives due to the absence of moving parts. However, they are more
expensive, smaller in capacity, and may have shorter lifespans.
Advantages:
• High-speed data access.
• Useful as high-speed cache for frequently accessed data.
• Employed in laptops for better performance and portability.
Magnetic Tapes
Magnetic tapes were commonly used for secondary storage before the advent of hard disk drives.
They are now primarily used for backups. Although accessing a specific location on a tape can
be slow, the read/write speeds are comparable to disk drives once the operation starts.
Capacity:
• Tape drive capacities range from 20 GB to 200 GB, with compression potentially doubling the
capacity.
Disk scheduling is a technique used by operating systems to manage the order in which disk I/O
(input/output) requests are processed. The main goals of disk scheduling are to optimize the
performance of disk operations, reduce the time it takes to access data, and improve overall
system efficiency.
Key Terms
• Seek Time: The time taken to locate the disk arm to a specified track where the data is to be
read or written.
• Rotational Latency: The time taken by the desired sector of the disk to rotate into a position
so that it can access the read/write heads.
• Transfer Time: The time to transfer the data, depending on the rotating speed of the disk and
the number of bytes to be transferred.
• Disk Access Time: The sum of seek time, rotational latency, and transfer time.
• Disk Response Time: The average time spent by a request waiting to perform its I/O
operation.
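These terms combine into the disk access time, as this small sketch shows (the drive parameters are assumed values for illustration):

```python
def disk_access_time(seek_ms, rpm, bytes_to_transfer, transfer_rate_mb_s):
    """Access time = seek time + average rotational latency + transfer
    time; average rotational latency is half a revolution."""
    rotational_latency_ms = (60_000 / rpm) / 2
    transfer_ms = bytes_to_transfer / (transfer_rate_mb_s * 1_000_000) * 1000
    return seek_ms + rotational_latency_ms + transfer_ms

# Assumed drive: 5 ms seek, 7200 RPM, reading 4 KB at 100 MB/s.
# Rotational latency dominates the transfer time for small reads.
t = disk_access_time(5, 7200, 4096, 100)
print(round(t, 3))  # 9.208 (ms)
```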
Common Disk Scheduling Algorithms