OS Unit-IV

Memory management is a critical function of operating systems that ensures efficient allocation and tracking of memory resources, supporting multitasking and preventing conflicts. It includes techniques like contiguous and non-contiguous memory allocation, with methods such as paging and segmentation to optimize memory use. The document outlines various memory management goals, tasks, and types, emphasizing the importance of effective memory management in modern computing environments.

Uploaded by

VENU DINO

UNIT-IV

Introduction to Memory Management in Operating Systems

Memory management is a crucial function of an operating system (OS), responsible for efficiently allocating, tracking, and managing the computer's memory resources. It ensures that the system makes the best use of available memory, allows processes to execute correctly, and maintains the overall stability and performance of the system.

In modern computing environments, where multiple processes are running concurrently, memory
management plays an essential role in preventing conflicts, improving performance, and
supporting multitasking. It is responsible for organizing and tracking memory in both physical
and virtual forms, ensuring that each process gets the required memory and that no process
interferes with another's memory space.

Goals of Memory Management:

1. Efficient Memory Utilization: Ensuring that memory resources are used optimally
without wastage, so that programs can run smoothly even in environments with limited
memory.
2. Isolation and Protection: Preventing one process from accessing or modifying another
process's memory space, ensuring the security and stability of the system.
3. Process Scheduling: Allowing processes to run without interference while ensuring that
each process gets access to sufficient memory.
4. Avoiding Fragmentation: Reducing or eliminating issues like fragmentation, which can
leave parts of memory unused despite the presence of free space.

Key Tasks of Memory Management:

1. Memory Allocation: Assigning memory to processes when they request it, ensuring that
each process has enough space to operate without conflicting with others.
2. Memory Deallocation: Releasing memory once it is no longer needed by a process to
make it available for other processes.
3. Memory Tracking: Keeping track of which memory regions are occupied and which are
free, often through data structures like page tables or segment tables.
4. Virtual Memory Management: Using techniques like paging and segmentation to
extend the apparent amount of physical memory available, allowing the system to run
larger programs and manage memory more flexibly.

Types of Memory:

• Physical Memory (RAM): The actual memory hardware available in the system.
• Virtual Memory: A concept that allows programs to use more memory than what is
physically available by swapping data between physical memory and storage (disk).
Memory management techniques are essential for supporting multi-tasking, enabling the OS to
efficiently handle multiple processes, each with different memory needs, while maintaining the
integrity of the system as a whole. Through various strategies like paging, segmentation, and
virtual memory, modern operating systems ensure the effective and secure management of
system memory resources.

Contiguous Memory Allocation:

• In contiguous memory allocation, each process is allocated a single, continuous block of memory.
• The memory for each process is allocated as one large, uninterrupted chunk, and the entire process occupies a contiguous section of memory.

Non-Contiguous Memory Allocation:

• In non-contiguous memory allocation, the process is allocated multiple non-adjacent blocks of memory scattered throughout the physical memory.
• The process may have parts of its memory located at different places in the system's memory (e.g., using paging or segmentation).
Comparison of uniprocessor (Uni OS) and multiprocessor (Multi OS) operating systems:

• Processors: Uni OS runs on a single processor (uniprocessor); Multi OS runs on multiple processors (multiprocessor).
• Performance: Uni OS is limited to one processor's capacity; Multi OS achieves improved performance through parallel processing.
• Concurrency: Uni OS provides sequential execution with multitasking; Multi OS provides true parallel execution of tasks.
• Resource Management: Uni OS is simpler, with single-resource management; Multi OS requires complex management of multiple processors and resources.
• Fault Tolerance: Uni OS has no fault tolerance (a single point of failure); Multi OS has better fault tolerance, as tasks can be shifted to other processors.
• Cost and Complexity: Uni OS is less expensive, with a simpler design; Multi OS is more expensive and complex in design.
• Examples: Uni OS — MS-DOS, early Windows, embedded systems; Multi OS — Windows Server, Linux, UNIX, modern macOS.
In conclusion, Uni OS is suitable for simple systems with limited resource needs, while Multi
OS is essential for modern systems requiring high performance, scalability, and fault tolerance
through the use of multiple processors.

In contiguous memory allocation, memory is allocated to processes in a continuous block. This strategy simplifies memory management because each process is assigned a single, uninterrupted block of memory. However, there are different methods of implementing contiguous memory allocation, depending on how the memory is divided and allocated to processes. Below are the main types of contiguous memory allocation:

1. Fixed Partitioning

• Description: In fixed partitioning, the system divides the memory into fixed-size
partitions, and each partition is assigned to a process. Each partition has a predetermined
size, and processes are allocated one partition.
• Key Characteristics:
o The size of partitions is fixed and does not change.
o Each partition is assigned to one process, and if a process is smaller than the
partition, the remaining space inside the partition is wasted.
• Advantages:
o Simple to implement and manage.
o No complex memory management schemes are required.
• Disadvantages:
o Internal Fragmentation: If a process is smaller than the partition size, the
remaining space inside the partition is wasted.
o External Fragmentation: Free partitions may go unused when waiting processes are larger than any single available partition, so free memory is scattered in pieces that cannot be combined.
o Fixed partition sizes may not fit all processes efficiently.
• Example: Older operating systems like MS-DOS used fixed partitioning.

2. Dynamic Partitioning

• Description: In dynamic partitioning, memory is allocated to processes in variable-size partitions: the partition size is determined dynamically based on the size of the process, and memory is allocated only when required.
• Key Characteristics:
o The partition size is determined when a process is loaded into memory, depending
on the process’s memory requirements.
o This allows more efficient use of memory compared to fixed partitioning.
• Advantages:
o More efficient use of memory compared to fixed partitioning because the
partition sizes vary to fit the process's size.
• Disadvantages:
o External Fragmentation: As processes are allocated and deallocated, small free
spaces can be scattered across memory, making it difficult to find large
contiguous blocks for new processes.
o More complex memory management and allocation algorithms.
• Example: Operating systems like UNIX use dynamic partitioning.
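Dynamic partitioning needs a placement policy to choose which free hole receives a new process. As an illustration only (the function name and hole representation are assumptions, not from the text), here is a minimal first-fit sketch in Python over a free list of `(start, length)` holes:

```python
def first_fit(free_list, size):
    """Allocate `size` units from the first hole large enough (first-fit).

    free_list: list of (start, length) holes in address order.
    Returns the start address of the allocation, or None if no hole fits.
    """
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                free_list.pop(i)                              # hole consumed exactly
            else:
                free_list[i] = (start + size, length - size)  # shrink the hole
            return start
    return None  # external fragmentation: no single hole is big enough

# Example: two free holes in a 100-unit memory
holes = [(0, 30), (50, 50)]
print(first_fit(holes, 40))  # 50  (first hole is too small, second fits)
print(holes)                 # [(0, 30), (90, 10)]
```

The leftover `(90, 10)` hole shows how repeated allocation and deallocation scatters small free holes, which is exactly the external fragmentation the text describes.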

Summary of Contiguous Memory Allocation Types:

• Fixed Partitioning: Memory is divided into fixed-size partitions, one per process. Advantages: simple, easy to implement. Disadvantages: internal fragmentation, external fragmentation.
• Dynamic Partitioning: Memory is allocated in variable-sized partitions based on process requirements. Advantages: more efficient use of memory, flexible allocation. Disadvantages: external fragmentation, complex allocation.
• Multiple Fixed Partitioning: Memory is divided into multiple fixed-size blocks, and a process can use multiple blocks. Advantages: reduces internal fragmentation. Disadvantages: still susceptible to internal and external fragmentation.
• Buddy System: Memory blocks are allocated in powers of two and can be split or merged. Advantages: reduces fragmentation, efficient memory usage. Disadvantages: can still lead to internal fragmentation.
Non-contiguous memory allocation refers to a memory management scheme where processes
are allocated memory in multiple scattered blocks rather than one contiguous block. This type of
memory allocation helps in efficient memory utilization and reduces the problems of
fragmentation, particularly external fragmentation. There are several types of non-contiguous
memory allocation techniques, including:

1. Paging

• Description: In paging, both the physical memory and the process memory are divided
into fixed-size blocks. The process is divided into pages, and the physical memory is
divided into page frames of the same size. The operating system maintains a page table
to map the pages of a process to physical memory.
• Key Characteristics:
o The process is split into fixed-size blocks (pages), which can be stored anywhere
in physical memory.
o The OS uses a page table to map logical addresses (pages) to physical addresses
(page frames).
• Advantages:
o Eliminates external fragmentation because processes are divided into equal-
sized pages.
o Flexible memory allocation as pages can be stored in any available physical
memory location.
• Disadvantages:
o Internal fragmentation can still occur if the last page of a process is not fully
utilized.
o Requires additional hardware support (Memory Management Unit - MMU) and
extra overhead in managing the page table.
• Example: Most modern operating systems like Windows, Linux, and macOS use
paging for memory management.
Structure of Page Table:
A Page Table is a crucial data structure used by the virtual memory system in an operating
system to store the mapping between logical addresses (generated by the CPU) and physical
addresses (used by the hardware). The page table helps translate the logical address into the
physical address, providing the corresponding frame number where the page is stored in the main
memory.
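The translation the page table performs can be shown concretely. This is a sketch, not any particular OS's implementation; the 1 KB page size and the `translate` helper are assumptions for illustration:

```python
PAGE_SIZE = 1024  # assume 1 KB pages for illustration

def translate(logical_addr, page_table):
    """Translate a logical address to a physical address.

    page_table[i] gives the frame number holding page i.
    """
    page_number = logical_addr // PAGE_SIZE   # which page of the process
    offset = logical_addr % PAGE_SIZE         # position within the page
    frame_number = page_table[page_number]    # page-table lookup
    return frame_number * PAGE_SIZE + offset  # same offset within the frame

# Page 0 -> frame 5, page 1 -> frame 2, page 2 -> frame 7
page_table = [5, 2, 7]
print(translate(1030, page_table))  # page 1, offset 6 -> frame 2 -> 2054
```

Note that only the page number changes during translation; the offset is carried over unchanged into the frame.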
Characteristics of the Page Table

• Stored in Main Memory: The page table resides in the main memory.
• Entries: The number of entries in the page table equals the number of pages into which the
process is divided.
• Page Table Base Register (PTBR): Holds the base address for the page table of the current
process.
• Independent Page Tables: Each process has its own independent page table.
Techniques for Structuring the Page Table

Hierarchical Paging
Also known as multilevel paging, this technique is used when the page table is too large to fit in
a contiguous space. It involves breaking the logical address space into multiple page tables.
Commonly used hierarchical paging structures include two-level and three-level page tables.
Two-Level Page Table
For a system with a 32-bit logical address space and a page size of 1 KB:
• Page Number: 22 bits
• Page Offset: 10 bits
• The page number is further divided into:
o P1: index into the outer page table (12 bits)
o P2: displacement within the page of the inner page table (10 bits)
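The 12/10/10-bit split above is just bit extraction from the 32-bit address. A small sketch (the function name is an assumption for illustration):

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (p1, p2, offset) for 1 KB pages:
    12-bit outer-table index, 10-bit inner-table index, 10-bit offset."""
    offset = addr & 0x3FF        # low 10 bits
    p2 = (addr >> 10) & 0x3FF    # next 10 bits
    p1 = (addr >> 20) & 0xFFF    # top 12 bits
    return p1, p2, offset

# Build an address with p1=3, p2=5, offset=7 and split it back apart
addr = (3 << 20) | (5 << 10) | 7
print(split_two_level(addr))  # (3, 5, 7)
```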
Three-Level Page Table
For a system with a 64-bit logical address space and a page size of 4 KB, a three-level page table
is used to avoid large tables.
Hashed Page Tables
This approach handles address spaces larger than 32 bits. The virtual page number is hashed into
a page table containing a chain of elements hashing to the same location. Each element consists
of:
• The virtual page number
• The mapped page frame value
• A pointer to the next element in the linked list.
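The chained structure described above can be sketched with hash buckets of (virtual page number, frame) pairs; here Python lists stand in for the linked chains, and the class and method names are assumptions for illustration:

```python
class HashedPageTable:
    """Hashed page table: the VPN is hashed to a bucket, and each bucket
    holds a chain of (virtual page number, frame number) entries."""

    def __init__(self, num_buckets=16):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, vpn, frame):
        chain = self.buckets[hash(vpn) % len(self.buckets)]
        chain.append((vpn, frame))

    def lookup(self, vpn):
        chain = self.buckets[hash(vpn) % len(self.buckets)]
        for v, frame in chain:   # walk the chain for a matching VPN
            if v == vpn:
                return frame
        return None              # no mapping: page fault

pt = HashedPageTable()
pt.insert(0x12345, 7)
print(pt.lookup(0x12345))  # 7
print(pt.lookup(0x99999))  # None
```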
Inverted Page Tables
Combines a page table and a frame table into a single data structure. Each entry consists of the
virtual address of the page stored in the real memory location and information about the process
that owns the page. This technique reduces the memory needed to store each page table but
increases the time needed to search the table.
Page Table Entries (PTE)

A Page Table Entry (PTE) stores information about a particular page of memory, including:
• Frame Number: The frame number in which the current page is present.
• Present/Absent Bit: Indicates whether the page is present in memory.
• Protection Bit: Specifies the protection level (read, write, etc.).
• Referenced Bit: Indicates whether the page has been accessed recently.
• Caching Enabled/Disabled: Controls whether the page can be cached.
• Modified Bit: Indicates whether the page has been modified.
Advantages of Using a Page Table

• Efficient Use of Memory: Allocates only the necessary amount of physical memory needed
by a process.
• Protection: Controls access to memory and protects sensitive data.
• Flexibility: Allows multiple processes to share the same physical memory space.
• Address Translation: Translates virtual addresses into physical addresses efficiently.
• Hierarchical Design: Provides a more efficient method for managing large virtual address
spaces.

2. Segmentation

• Description: Segmentation divides a process into segments, which are logical units like
code, data, and stack, and each segment may vary in size. These segments are loaded into
non-contiguous physical memory locations. The operating system maintains a segment
table to map logical segments to physical memory addresses.
• Key Characteristics:
o The process is divided into logically meaningful segments (such as code, data,
stack).
o Each segment has a base address and a length.
o The segments are stored non-contiguously in memory.
• Advantages:
o Supports logical organization of memory, making it easier for the programmer to
work with different parts of the process.
o Provides better memory utilization compared to contiguous memory allocation.
• Disadvantages:
o External fragmentation can occur as segments are allocated non-contiguously.
o The system may still need complex algorithms to manage segment allocation and
deal with fragmentation.
• Example: Early UNIX systems used segmentation for memory management.
3. Paged Segmentation (Combination of Paging and Segmentation)

• Description: Paged segmentation is a combination of paging and segmentation. In this approach, the process is first divided into segments, and each segment is then divided into pages. This method combines the advantages of both: segments are logical units, while pages allow for better physical memory management.
• Key Characteristics:
o The process is divided into segments based on logical divisions (e.g., code, data,
stack).
o Each segment is further divided into pages, and each page is mapped to physical
memory.
o This method provides flexibility by supporting both logical division
(segmentation) and physical memory optimization (paging).
• Advantages:
o Reduces external fragmentation (like paging) while still supporting logical
segmentation of processes.
o More efficient memory usage compared to pure segmentation.
• Disadvantages:
o More complex than either pure paging or segmentation alone.
o Requires both segment tables and page tables, leading to additional overhead.
• Example: x86 architecture uses paged segmentation in modern operating systems.

4. Virtual Memory (with Paging or Segmentation)

• Description: Virtual memory is a memory management technique that gives an application the impression it has contiguous working memory, while in reality it may be fragmented and reside in non-contiguous physical memory. Virtual memory is typically implemented using paging or segmentation.
• Key Characteristics:
o Allows processes to use more memory than physically available by swapping data
between physical memory and disk storage.
o Both paging and segmentation can be used to manage virtual memory by mapping
virtual addresses to physical addresses.
• Advantages:
o Processes can be given the illusion of a larger memory space than is physically
available.
o Better multitasking and resource utilization.
• Disadvantages:
o Disk thrashing may occur if too much time is spent swapping data between
memory and disk.
o Can be slow due to frequent page faults or segment faults when accessing non-
resident pages/segments.
• Example: Linux, Windows, and macOS use virtual memory techniques based on paging
and segmentation.

5. Memory Pooling (Memory Pool Allocation)

• Description: Memory pooling is a technique in which memory is divided into pools of fixed-size blocks. Processes request memory from a pool, and the system allocates blocks from the pool as needed. This helps reduce fragmentation and makes memory management more efficient.
• Key Characteristics:
o Memory is divided into pools, and each pool consists of fixed-size blocks.
o Processes request memory from these pools based on their needs.
• Advantages:
o Reduces fragmentation, especially internal fragmentation, since blocks are of a
fixed size.
o More efficient for managing large numbers of small objects (e.g., for embedded
systems).
• Disadvantages:
o Not as flexible as other non-contiguous memory allocation methods, especially
for larger or variable-sized memory requests.
• Example: Memory pooling is commonly used in embedded systems, video games, and
real-time operating systems.
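A fixed-block pool boils down to a free list of block indices. This is an illustrative sketch only; the `FixedBlockPool` name and its methods are assumptions, not a real allocator API:

```python
class FixedBlockPool:
    """Pool of fixed-size blocks: alloc() hands out a free block index,
    free_block() returns it to the free list for reuse."""

    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))   # all blocks start out free

    def alloc(self):
        return self.free.pop() if self.free else None  # None: pool exhausted

    def free_block(self, block):
        self.free.append(block)

pool = FixedBlockPool(3)
a, b, c = pool.alloc(), pool.alloc(), pool.alloc()
print(pool.alloc())       # None: all three blocks are in use
pool.free_block(b)
print(pool.alloc() == b)  # True: the freed block is reused
```

Because every block is the same size, allocation and deallocation are constant-time list operations, which is why pools suit embedded and real-time systems; the trade-off is that a request larger than one block cannot be served.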

Summary of Non-Contiguous Memory Allocation Types:

• Paging: Divides memory into fixed-size pages and page frames. Advantages: eliminates external fragmentation, efficient memory allocation. Disadvantages: internal fragmentation in the last page, page-table overhead. Examples: Windows, Linux, macOS.
• Segmentation: Divides memory into segments (code, data, stack). Advantages: supports logical organization of memory, flexible allocation. Disadvantages: external fragmentation, complex management. Examples: early UNIX systems, ARM architecture.
• Paged Segmentation: Combines paging and segmentation; segments are divided into pages. Advantages: combines the benefits of paging and segmentation. Disadvantages: more complex, requires both segment and page tables. Examples: x86 architecture.
• Virtual Memory: Uses paging or segmentation to map virtual addresses to physical memory. Advantages: enables processes to use more memory than is physically available. Disadvantages: disk thrashing, slow performance during page/segment faults. Examples: Windows, Linux, macOS.
• Memory Pooling: Allocates memory from fixed-size blocks in a pool. Advantages: reduces fragmentation, efficient for small objects. Disadvantages: less flexible for large or variable-size allocations. Examples: embedded systems, real-time operating systems.

Swapping

Swapping is a memory management technique used to free up space in RAM and ensure efficient CPU utilization. It involves moving processes between the main memory (RAM) and secondary storage (usually a hard disk or SSD) to allow the system to run more programs simultaneously.

Key Concepts

• Swap-in: This is the process of transferring a program from secondary storage back into the
main memory (RAM) for execution.
• Swap-out: This involves moving a process from the main memory to secondary storage to free
up space in RAM for other processes.
How Swapping Works

When the RAM is full and a new program needs to run, the operating system selects a program
or data that is currently in RAM but not actively being used. This selected data is moved to
secondary storage, making space in RAM for the new program. When the swapped-out program
is needed again, it can be swapped back into RAM, replacing another inactive program or data if
necessary.

Advantages

1. Improved Memory Utilization: Swapping allows the CPU to manage multiple processes
within a single main memory, improving overall memory utilization.
2. Virtual Memory Creation: It helps in creating and using virtual memory, allowing the
system to run more and larger processes than physical memory alone would allow.
3. Priority-Based Scheduling: Swapping can be used in priority-based scheduling to optimize
the execution of high-priority processes by swapping out low-priority ones.
Disadvantages

1. Performance Impact: The process of swapping can degrade system performance due to the
time taken to transfer data between RAM and secondary storage.
2. Data Loss Risk: In case of a power failure during swapping, there is a risk of losing data
related to the processes involved in swapping.
3. Increased Page Faults: If the swapping algorithm is not efficient, it can lead to an increased
number of page faults, further decreasing overall performance.

Virtual Memory

Virtual memory is a memory management technique that creates an illusion of a large, continuous block of memory for applications, even if the physical memory (RAM) is limited. This allows the system to compensate for physical memory shortages, enabling larger applications to run on systems with less RAM. Virtual memory is implemented using both hardware and software, mapping the memory addresses used by a program (virtual addresses) onto physical addresses in computer memory.
Types of Virtual Memory
1. Paging: Divides memory into small fixed-size blocks called pages. When the computer runs
out of RAM, pages that aren't currently in use are moved to the hard drive into an area called
a swap file. When a page is needed again, it is swapped back into RAM.
2. Segmentation: Divides virtual memory into segments of different sizes. Segments that aren't
currently needed can be moved to the hard drive. The system uses a segment table to keep
track of each segment's status.
Working of Virtual Memory
Virtual memory works by dynamically translating logical addresses into physical addresses at
runtime. This means that a process can be swapped in and out of main memory, occupying
different places in main memory at different times during execution. Virtual memory is
implemented using demand paging or demand segmentation.
Advantages and Disadvantages
Advantages:
• Increased Effective Memory: Allows a computer to have more memory than the physical
memory by using disk space.
• Memory Isolation: Allocates a unique address space to each process, increasing safety and
reliability.
• Efficient Memory Management: Better utilization of physical memory through methods
like paging and segmentation.
Disadvantages:
• Performance Overhead: Slower system performance due to constant data transfer between
physical memory and the hard disk.
• Complexity: Increases the complexity of the memory management system.
Memory Management Techniques
Memory management involves various techniques to allocate and deallocate memory efficiently:
1. Contiguous Memory Allocation: Divides memory into fixed-sized partitions, each
containing exactly one process.
2. Paging: Eliminates the need for contiguous allocation of physical memory by dividing the
physical address space into fixed-size blocks called frames.
3. Swapping: Temporarily moves a process from main memory to secondary memory to
allow more processes to run simultaneously.
Fragmentation
Fragmentation occurs when memory blocks are allocated and deallocated, creating small free
holes that cannot be assigned to new processes. There are two types of fragmentation:
• Internal Fragmentation: Occurs when a memory block allocated to a process is larger than the
requested size, wasting the difference.
• External Fragmentation: Occurs when free memory blocks are not contiguous.

Demand Paging:
Demand Paging is a technique in which a page is usually brought into the main memory
only when it is needed or demanded by the CPU. Initially, only those pages are loaded that are
required by the process immediately. Those pages that are never accessed are thus never loaded
into the physical memory.

Copy on Write (COW)

Definition: Copy on Write is a strategy where the system delays the copying of data until it is
actually modified. This means that multiple processes can share the same data in memory until
one of them needs to change it.

How It Works:
Initial State: When a process requests a copy of data, the operating system provides a reference
to the existing data rather than creating a new copy.

Shared Data: Multiple processes can read the shared data without any issues.

Modification: When a process attempts to modify the shared data, the operating system
intervenes.

Copying: At the point of modification, the operating system creates a copy of the data
specifically for the modifying process. This ensures that changes made by one process do not
affect the others.

Benefits:

Memory Efficiency: Reduces the amount of memory used by sharing data until modification is
necessary.

Performance Improvement: Decreases the overhead of copying data unnecessarily, leading to faster process creation and execution.

Example Use Cases:

Fork System Call: In Unix-like operating systems, when a process is forked, the child process
shares the parent's memory pages. Only when either process modifies a page does the COW
mechanism create a separate copy.

Virtual Memory Management: COW is used to manage memory pages efficiently, especially
in scenarios involving large datasets or multiple processes accessing the same data.

By using Copy on Write, operating systems can manage resources more effectively, leading to
better overall system performance and reduced memory usage.
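The fork behavior described above can be mimicked at a small scale. This is a conceptual sketch only, not how an OS implements COW (real systems mark shared pages read-only and copy on a write fault); the `CowBuffer` class and its methods are assumptions for illustration:

```python
class CowBuffer:
    """Copy-on-write sketch: forked 'copies' share one underlying list
    until a write occurs, at which point the writer gets a private copy."""

    def __init__(self, data):
        self._shared = data                 # shared underlying storage

    def fork(self):
        return CowBuffer(self._shared)      # no data is copied yet

    def read(self, i):
        return self._shared[i]

    def write(self, i, value):
        self._shared = list(self._shared)   # the copy happens only now
        self._shared[i] = value

parent = CowBuffer([1, 2, 3])
child = parent.fork()                    # child shares the parent's data
print(child._shared is parent._shared)   # True: still shared, nothing copied
child.write(0, 99)                       # first write triggers the copy
print(parent.read(0), child.read(0))     # 1 99: the parent is unaffected
```

For simplicity this sketch copies on every write rather than tracking whether the buffer is still shared; a real implementation copies only while other owners remain.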
Page Replacement Algorithms
Page replacement algorithms are essential in operating systems that use paging for memory
management. When a page fault occurs and no free page frames are available, these algorithms
decide which page to replace to minimize the number of page faults and improve system
performance.

First In First Out (FIFO)

FIFO is the simplest page replacement algorithm. It maintains a queue of pages in memory, with
the oldest page at the front. When a new page needs to be loaded, the oldest page is removed.
This algorithm can suffer from Belady's anomaly, where increasing the number of page frames
can lead to more page faults.
Example:
Reference string: 1, 3, 0, 3, 5, 6, 3

Page frames: 3

Page faults: 6
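The FIFO example above can be checked with a short simulation; the `fifo_faults` helper is an illustrative sketch, not a standard API:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:        # memory full: must evict
                frames.discard(queue.popleft())  # oldest page leaves first
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6, matching the example
```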

Optimal Page Replacement

Optimal page replacement replaces the page that will not be used for the longest time in the
future. This algorithm is theoretical and serves as a benchmark since it requires future knowledge
of page requests.
Example:
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3

Page frames: 4
Page faults: 6
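Because the optimal algorithm looks ahead in the reference string, it is easy to simulate even though it cannot be implemented in a real OS. A sketch (the `optimal_faults` helper is an assumption for illustration):

```python
def optimal_faults(refs, num_frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            def next_use(p):
                rest = refs[i + 1:]          # look ahead in the reference string
                return rest.index(p) if p in rest else float('inf')
            frames.discard(max(frames, key=next_use))  # evict farthest next use
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6, matching the example
```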

Least Recently Used (LRU)

LRU replaces the page that has not been used for the longest time. It relies on the principle of
locality, where pages used recently are likely to be used again soon.
Example:
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3

Page frames: 4

Page faults: 6
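LRU can be simulated by tracking the last access time of each resident page; the `lru_faults` helper below is an illustrative sketch (real hardware approximates this with reference bits rather than exact timestamps):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU replacement.
    last_used maps each resident page to the time of its latest access."""
    last_used, faults = {}, 0
    for time, page in enumerate(refs):
        if page not in last_used:
            faults += 1
            if len(last_used) == num_frames:
                victim = min(last_used, key=last_used.get)  # least recently used
                del last_used[victim]
        last_used[page] = time   # record this access for every reference
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))  # 6
```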

Page replacement algorithms are crucial for efficient memory management in operating systems.
Each algorithm has its advantages and disadvantages, and the choice depends on the specific
needs of the system. Understanding these algorithms helps improve system performance and
reduce page faults.
Allocation of Frames in Operating Systems

Frame allocation is a crucial aspect of operating systems, particularly in the context of virtual
memory and demand paging. It involves determining how many frames (fixed-size blocks of
physical memory) to allocate to each process. This is essential for efficient memory management
and process execution.

Key Principles

Constraints
1. Total Frames: You cannot allocate more frames than the total number of available frames.
2. Minimum Frames: Each process must be allocated a minimum number of frames to avoid
high page fault ratios and ensure that all referenced pages can be held in memory.
Frame Allocation Algorithms
1. Equal Allocation: Each process gets an equal number of frames. For example, if there are 48
frames and 9 processes, each process gets 5 frames, with the remaining frames used as a free-
frame buffer pool. Disadvantage: Inefficient for processes of varying sizes, leading to wasted
frames for smaller processes.
2. Proportional Allocation: Frames are allocated based on the size of each process. For a
process p_i of size s_i, the number of allocated frames a_i is calculated as
a_i = (s_i / S) × m, where S is the sum of the sizes of all processes and m is the total
number of frames. Advantage: Allocates frames according to the needs of each
process.
3. Priority Allocation: Frames are allocated based on the priority of the processes. Higher
priority processes receive more frames. Advantage: Ensures that critical processes get the
necessary resources.
Global vs. Local Allocation
1. Local Replacement: A process can only use frames from its own allocated set when a page
fault occurs. Advantage: The page fault ratio is influenced only by the process's own
behavior. Disadvantage: Low-priority processes may hinder high-priority processes by not
sharing frames.
2. Global Replacement: A process can take frames from any other process when a page fault
occurs. Advantage: Improves overall system throughput. Disadvantage: The page fault ratio
of a process is affected by the behavior of other processes.
Important Considerations

• Performance: The choice of frame allocation strategy can significantly impact system
performance and process execution efficiency.
• Flexibility: Global replacement offers more flexibility but can lead to unpredictable
performance for individual processes.
• Fairness: Proportional and priority allocation aim to distribute resources more fairly based on
process needs and importance.
By understanding and implementing these frame allocation strategies, operating systems can
manage memory more effectively, ensuring optimal performance and resource utilization.

Thrashing:
Thrashing in OS is a phenomenon that occurs in computer operating systems when the
system spends an excessive amount of time swapping data between physical memory (RAM)
and virtual memory (disk storage) due to high memory demand and low available resources.
Storage Management in Operating Systems
Storage management in operating systems involves the efficient use and maintenance of storage
devices to ensure data integrity, security, and optimal performance. It encompasses various
processes and techniques to manage data storage equipment and optimize their usage.

Key Attributes of Storage Management

Storage management has several key attributes that are essential for maintaining the storage
capacity of a system:
1. Performance: Ensuring that data storage resources operate efficiently.
2. Reliability: Maintaining the integrity and availability of data.
3. Recoverability: Providing mechanisms for data recovery in case of failures.
4. Capacity: Managing the available storage space effectively.
Features and Advantages

Storage management offers several features and advantages:


• Optimization: It optimizes the use of storage devices, improving system performance.
• Resource Management: Storage must be allocated and managed as a resource to benefit an
organization.
• Basic System Component: It is a fundamental component of an information system.
• Improved Performance: Enhances the performance of data storage resources.
Advantages include simplified management of storage capacity, reduced time consumption,
improved system performance, and enhanced agility through virtualization and automation
technologies.
Limitations

Despite its benefits, storage management has some limitations:


• Limited Physical Storage: Operating systems can only manage the available physical storage
space.
• Performance Degradation: Increased storage utilization can lead to performance degradation
due to factors like fragmentation.
• Complexity: Managing storage can be complex, especially in large environments.
• Cost: Storing large amounts of data can be expensive.
• Security Issues: Storing sensitive data presents security risks.
• Backup and Recovery: Backup and recovery can be challenging, especially with data stored
on multiple systems.
Disk Management Techniques

Disk management is a critical aspect of storage management, involving the organization and
maintenance of data on storage devices like hard disk drives and solid-state drives. Some
common disk management techniques include:
• Partitioning: Dividing a physical disk into multiple logical partitions for better organization.
• Formatting: Preparing a disk for use by creating a file system on it.
• File System Management: Managing the file systems used to store and access data.
• Disk Space Allocation: Allocating space on the disk for files and directories.
• Disk Defragmentation: Rearranging fragmented data to improve performance.

Overview of Mass Storage Structure in Operating Systems

Mass storage devices are essential components of a computer system, designed to store
large volumes of data. The term is often used interchangeably with peripheral storage,
which refers to devices that hold data volumes larger than the native storage capacity of a
computer or device.

Primary and Secondary Memory

Primary Memory: This is the volatile storage component of a computer system, directly
accessed by the processor. It includes data buses, cache memory, and Random Access Memory
(RAM). Primary memory is faster but more expensive and has limited storage capacity, typically
ranging from 16 GB to 32 GB.
Secondary Memory: This is non-volatile, permanent memory that is not directly accessible by
the processor. It includes read-only memory (ROM), flash drives, hard disk drives (HDD), and
magnetic tapes. Secondary memory is slower but less expensive and has a larger storage
capacity, ranging from 200 GB to several terabytes.
Types of Mass Storage Devices

Magnetic Disks
Magnetic disks use magnetization to write, rewrite, and access data. They consist of platters
coated with magnetic material, divided into tracks and sectors for data storage. Examples include
floppy disks, hard disks, and zip disks.
Structure and Working:
• A mechanical arm with a read-write head moves across the spinning platters.
• Data is stored in tiny magnetized patches on the disk surface.
• Tracks are concentric circles on the disk, divided into sectors.
• The positioning time (seek time) and rotational latency affect data access speed.
Solid State Disks (SSDs)
SSDs use memory technology, such as flash memory or DRAM chips, to store data. They are
faster than traditional hard drives due to the absence of moving parts. However, they are more
expensive, smaller in capacity, and may have shorter lifespans.
Advantages:
• High-speed data access.
• Useful as high-speed cache for frequently accessed data.
• Employed in laptops for better performance and portability.
Magnetic Tapes
Magnetic tapes were commonly used for secondary storage before the advent of hard disk drives.
They are now primarily used for backups. Although accessing a specific location on a tape can
be slow, the read/write speeds are comparable to disk drives once the operation starts.
Capacity:
• Tape drive capacities range from 20 GB to 200 GB, with compression potentially doubling the
capacity.

Disk Scheduling Algorithms

Disk scheduling is a technique used by operating systems to manage the order in which disk I/O
(input/output) requests are processed. The main goals of disk scheduling are to optimize the
performance of disk operations, reduce the time it takes to access data, and improve overall
system efficiency.

Key Terms

• Seek Time: The time taken to move the disk arm to the track where the data is to be
read or written.
• Rotational Latency: The time taken for the desired sector of the disk to rotate under
the read/write head.
• Transfer Time: The time to transfer the data, depending on the rotating speed of the disk and
the number of bytes to be transferred.
• Disk Access Time: The sum of seek time, rotational latency, and transfer time.
• Disk Response Time: The average time spent by a request waiting to perform its I/O
operation.
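The access-time decomposition above can be made concrete with a small numeric example. The Python sketch below uses assumed drive parameters (7200 RPM, 5 ms average seek, 100 MB/s transfer rate, a 4 KB request) — these values are illustrative, not from the text:

```python
# Hypothetical drive parameters (assumed for illustration)
rpm = 7200
avg_seek_ms = 5.0
transfer_rate_mb_s = 100.0
request_kb = 4

rotation_ms = 60_000 / rpm                    # one full rotation: ~8.33 ms
avg_rotational_latency_ms = rotation_ms / 2   # on average, half a rotation
transfer_ms = (request_kb / 1024) / transfer_rate_mb_s * 1000

# Disk access time = seek time + rotational latency + transfer time
access_time_ms = avg_seek_ms + avg_rotational_latency_ms + transfer_ms
print(round(access_time_ms, 2))               # → 9.21
```

At these numbers the rotational latency (about 4.17 ms) is comparable to the seek time, which is why disk scheduling concentrates on minimizing mechanical head movement.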
Common Disk Scheduling Algorithms

First-Come, First-Served (FCFS)


FCFS is the simplest disk scheduling algorithm where requests are addressed in the order they
arrive in the disk queue. It ensures fairness but does not optimize seek time.
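As a minimal sketch, the total FCFS head movement can be computed by walking the queue in arrival order. The head position (50) and request list below are illustrative values, not taken from the text:

```python
def fcfs_seek_distance(head, requests):
    """Total head movement when requests are served strictly in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)  # move directly to the next queued track
        head = track
    return total

# Illustrative workload: head at 50, tracks queued in arrival order
print(fcfs_seek_distance(50, [82, 170, 43, 140, 24, 16, 190]))  # → 642
```

Note how the head zig-zags across the disk (50→82→170→43→…), which is exactly the wasted movement the later algorithms try to avoid.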
Shortest Seek Time First (SSTF)
SSTF selects the request with the shortest seek time from the current head position. It reduces the
average response time and increases throughput but can cause starvation for requests with higher
seek times.
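A sketch of SSTF on the same illustrative workload (head at 50; values assumed, not from the text) — at each step the pending request nearest the current head is chosen:

```python
def sstf_seek_distance(head, requests):
    """Serve whichever pending request is closest to the current head position."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek_distance(50, [82, 170, 43, 140, 24, 16, 190]))  # → 208
```

The total drops from 642 (FCFS) to 208, but a request far from the head can be bypassed indefinitely — the starvation risk mentioned above.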
SCAN (Elevator Algorithm)
In SCAN, the disk arm moves in one direction, servicing requests until it reaches the end of the
disk, then reverses direction. This algorithm provides high throughput and low variance of
response time.
Circular SCAN (C-SCAN)
C-SCAN is similar to SCAN but, instead of reversing direction, the disk arm goes to the other
end of the disk and starts servicing requests from there. This provides more uniform wait time
compared to SCAN.
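A sketch of C-SCAN under the same assumptions (200-track disk, head moving up). Note that textbooks differ on whether the wrap-around jump counts toward head movement; this version counts it:

```python
def cscan_seek_distance(head, requests, max_track=199):
    """Sweep up to the top edge, jump to track 0, then sweep up again.
    The wrap-around jump is counted here; some texts omit it."""
    right = sorted(t for t in requests if t >= head)
    left = sorted(t for t in requests if t < head)
    total, pos = 0, head
    for t in right:
        total += t - pos
        pos = t
    if left:
        total += (max_track - pos) + max_track  # to the edge, then jump to 0
        pos = 0
        for t in left:                          # service the low tracks upward
            total += t - pos
            pos = t
    return total

print(cscan_seek_distance(50, [82, 170, 43, 140, 24, 16, 190]))  # → 391
```

Total movement is higher than SCAN's, but every request waits at most one full circular sweep, giving the more uniform wait time described above.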
LOOK
LOOK is similar to SCAN but the disk arm only goes as far as the last request in each direction
before reversing. This prevents unnecessary traversal to the end of the disk.
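A sketch of LOOK on the same illustrative workload — identical to SCAN except that the head reverses at the last pending request instead of the disk edge:

```python
def look_seek_distance(head, requests):
    """Like SCAN, but reverse at the furthest pending request, not the edge."""
    right = sorted(t for t in requests if t >= head)
    left = sorted((t for t in requests if t < head), reverse=True)
    total, pos = 0, head
    for t in right + left:   # out to the highest request, then back down
        total += abs(t - pos)
        pos = t
    return total

print(look_seek_distance(50, [82, 170, 43, 140, 24, 16, 190]))  # → 314
```

Skipping the empty stretch between track 190 and the edge saves 18 tracks of travel relative to SCAN's 332.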
Circular LOOK (C-LOOK)
C-LOOK is similar to C-SCAN but the disk arm only goes as far as the last request before
moving to the other end's last request. This also prevents unnecessary traversal.
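Finally, a sketch of C-LOOK under the same assumptions. As with C-SCAN, conventions differ on counting the return jump; this version counts it:

```python
def clook_seek_distance(head, requests):
    """Like C-SCAN, but jump back only to the lowest pending request.
    The jump distance is counted here; conventions vary."""
    right = sorted(t for t in requests if t >= head)
    left = sorted(t for t in requests if t < head)
    total, pos = 0, head
    for t in right:                  # upward sweep
        total += t - pos
        pos = t
    if left:
        total += pos - left[0]       # jump back to the lowest request
        pos = left[0]
        for t in left[1:]:           # resume the upward sweep
            total += t - pos
            pos = t
    return total

print(clook_seek_distance(50, [82, 170, 43, 140, 24, 16, 190]))  # → 341
```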
Random Scheduling (RSS)
RSS schedules requests randomly and is used in situations involving random attributes such as
random processing time and stochastic machine breakdowns.
Last-In, First-Out (LIFO)
LIFO services the newest requests before existing ones, which maximizes locality and resource
utilization but can cause starvation for older requests.
N-STEP SCAN
N-STEP SCAN creates a buffer for N requests and services them in one go. It eliminates the
starvation of requests.
F-SCAN
F-SCAN uses two sub-queues. During the scan, all requests in the first queue are serviced, and
new incoming requests are added to the second queue. This prevents "arm stickiness".
Each algorithm has its own advantages and disadvantages, and the choice of algorithm
depends on the specific requirements and characteristics of the system.
