Unit - 4

The document outlines the syllabus and key concepts related to storage management in operating systems for a third-year engineering course at MIT School of Computing. It covers topics such as file system interfaces, file concepts, access methods, directory structures, and file sharing, along with details on file attributes and operations. Additionally, it discusses various file types, directory structures, and the implementation of file systems, including protection and recovery mechanisms.


MIT School of Computing

Department of Computer Science & Engineering

Third Year Engineering

23CSE2010-Operating System

Class - S.Y. (SEM-II)


Unit - IV
Storage management
AY 2024-2025
Prof. (Dr) N.P.Karlekar
Prof. Pratik Kamble

Unit-IV Syllabus

● File-System Interface, File Concept, Access Methods, Directory Structure, File-System Mounting, File Sharing, Protection, File-System Implementation, File-System Structure, File-System Implementation, Directory Implementation, Allocation Methods, Free-Space Management, Efficiency and Performance, Recovery, Mass-Storage Structure, Disk Structure, Disk Scheduling, Swap-Space Management
● Case study: Disk scheduling algorithms

What is Storage Management?


• Storage Management refers to the process of efficiently handling data storage resources in a computing environment.
• It involves organizing, maintaining, and optimizing storage to ensure data availability, reliability, and performance.

Key aspects of storage management



File-System Interface
• A File System Interface in an Operating System (OS) defines how
users, applications, and the OS interact with files stored on a storage
device.
• It provides a structured way to organize, access, read, write, and manage files and directories.
• Key Components of the File System Interface:
 File Concept
 Access Methods
 Directory Structure
 File-System Mounting
 File Sharing
 Protection

File Concept
• A file in an Operating System (OS) is a logical storage unit that stores
data, information, or program instructions on a storage device such as a
hard disk, SSD, or flash drive.
• The OS manages files using a file system, which organizes, retrieves, and controls access to files efficiently.
• Characteristics of a File:
 Persistent – Retains data even after a program terminates.
 Named – Identified by a unique name within a directory.
 Structured – Contains data in a specific format (text, binary, etc.).
 Managed by the OS – The OS controls access, security, and
organization.

File-System Structure
 File structure:
• Logical storage unit
• Collection of related information
 File system resides on secondary storage (disks):
• Provides user interface to storage, mapping logical to physical.
• Provides efficient and convenient access to disk by allowing data to be stored, located, and retrieved easily.
 Disk provides in-place rewrite and random access:
• I/O transfers performed in blocks of sectors (usually 512 bytes).
• File control block – storage structure consisting of information
about a file.
• Device driver controls the physical device.
• File system organized into layers.

File-System Architecture


File Attributes
 Name – only information kept in human-readable form
 Identifier – unique tag (number) identifies file within file system
 Type – needed for systems that support different types
 Location – pointer to file location on device
 Size – current file size
 Protection – controls who can do reading, writing, executing
 Time, date, and user identification – data for protection, security, and
usage monitoring
 Information about files is kept in the directory structure, which is maintained on the disk
 Many variations, including extended file attributes such as file
checksum
 Information kept in the directory structure

File info Window on Mac OS X


File Control Block


Types of Files in OS:


 Text Files – Contain readable characters (e.g., .txt, .log).
 Binary Files – Store data in machine-readable format (e.g., .exe, .bin).
 Executable Files – Contain compiled programs (e.g., .exe, .sh).
 System Files – OS-related files (e.g., .dll, .sys).
 Multimedia Files – Audio, video, and image files (e.g., .mp3, .jpg).
 Database Files – Structured data files used by databases (e.g., .db, .sql).

File Types – Name, Extension


File Operations:
File is an abstract data type
 Create
 Write – at write pointer location
 Read – at read pointer location
 Reposition within file – seek
 Delete
 Truncate
 Open(Fi) – search the directory structure on disk for entry Fi, and move
the content of entry to memory
 Close (Fi) – move the content of entry Fi in memory to directory
structure on disk
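
To make these operations concrete, here is a minimal POSIX C sketch (not part of the original slides); the file name demo.dat is only an illustration.

/* Minimal POSIX sketch of the file operations listed above. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    char buf[16];

    /* Create + Open: the OS searches the directory on disk and builds an
     * in-memory open-file entry; we get back a descriptor. */
    int fd = open("demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Write: bytes go at the current write pointer (offset 0 here). */
    write(fd, "hello, file", 11);

    /* Reposition (seek): move the pointer back to the beginning. */
    lseek(fd, 0, SEEK_SET);

    /* Read: bytes come from the current read pointer. */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n >= 0) { buf[n] = '\0'; printf("read back: %s\n", buf); }

    /* Close: the in-memory entry is written back to the on-disk directory. */
    close(fd);

    /* Delete: remove the directory entry (file freed once unreferenced). */
    unlink("demo.dat");
    return 0;
}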

Access Methods
In operating systems, file management involves various access methods
that determine how data is read from and written to a file. The most
common access methods include:
1. Sequential Access
Data is accessed in a linear order, one record after another.
Suitable for text files and log files.

Access Methods
2.Direct (Random) Access
Allows access to data at any position without
reading sequentially.
Useful for databases and binary files.
Uses seek() to move to a specific location.
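
A small hedged C sketch of direct access, assuming fixed-size records so that record k lives at byte offset k × sizeof(record); the record layout and file name are invented for illustration.

/* Sketch: direct (random) access to fixed-size records with lseek(). */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

struct record { int id; char name[28]; };        /* fixed-size record */

int main(void) {
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    long k = 42;                                  /* read the 43rd record directly */
    off_t pos = (off_t)k * sizeof(struct record);

    struct record r;
    if (lseek(fd, pos, SEEK_SET) == (off_t)-1 ||  /* jump straight to record k */
        read(fd, &r, sizeof r) != (ssize_t)sizeof r) {
        perror("seek/read");
        close(fd);
        return 1;
    }
    printf("record %ld: id=%d name=%s\n", k, r.id, r.name);
    close(fd);
    return 0;
}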

Access Methods
3. Indexed Access and Relative File
Uses an index table to locate data
efficiently.
Common in database management systems
(DBMS).
Example: An index file storing byte offsets
for quick lookup.
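
A possible C sketch of the idea: an index (hard-coded here, normally read from an index file) maps a key to a byte offset, and one seek reaches the record. Names and offsets are illustrative.

/* Sketch of indexed access: look up the key in the index, then seek once. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

struct index_entry { int key; long offset; };

int main(void) {
    struct index_entry idx[] = { {10, 0}, {17, 96}, {42, 160} };  /* toy index */
    int nkeys = sizeof idx / sizeof idx[0];
    int wanted = 42;

    int fd = open("data.rec", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < nkeys; i++) {
        if (idx[i].key == wanted) {               /* index lookup ...            */
            char rec[96];
            lseek(fd, idx[i].offset, SEEK_SET);   /* ... then one direct seek    */
            read(fd, rec, sizeof rec);
            printf("record %d found at offset %ld\n", wanted, idx[i].offset);
            break;
        }
    }
    close(fd);
    return 0;
}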

Directory Structure
 A collection of nodes containing information about all files


Both the directory structure and the files reside on disk



Disk Structure

• Disk can be subdivided into partitions


• Disks or partitions can be RAID protected against failure
• Disk or partition can be used raw – without a file system, or formatted with a file system
Partitions also known as minidisks, slices
• Entity containing file system known as a volume
• Each volume containing file system also tracks that file system’s info in device directory or
volume table of contents
• As well as general-purpose file systems there are many special-purpose file systems,
frequently all within the same operating system or computer

A Typical File-system Organization


Types of File Systems


 We mostly talk of general-purpose file systems
 But systems frequently have many file systems, some general- and some special-purpose
 Consider Solaris has
 tmpfs – memory-based volatile FS for fast, temporary I/O
 objfs – interface into kernel memory to get kernel symbols for debugging
 ctfs – contract file system for managing daemons
 lofs – loopback file system allows one FS to be accessed in place of another
 procfs – kernel interface to process structures
 ufs, zfs – general purpose file systems

Operations Performed on Directory


 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system

Directory Structure
The directory is organized logically to obtain
 Efficiency – locating a file quickly
 Naming – convenient to users
 Two users can have same name for different files
 The same file can have several different names
 Grouping – logical grouping of files by properties, (e.g., all Java programs,
all games, …)

Single-Level Directory


Two-Level Directory


Tree-Structured Directories


Tree-Structured Directories (Cont.)


 Efficient searching

 Grouping Capability

 Current directory (working directory)
 cd /spell/mail/prog
 type list

Tree-Structured Directories (Cont)


 Absolute or relative path name
 Creating a new file is done in current directory
 Delete a file
rm <file-name>
 Creating a new subdirectory is done in current directory
mkdir <dir-name>
Example: if in current directory /mail
mkdir count

Deleting “mail” ⇒ deleting the entire subtree rooted by “mail”
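
The same directory operations, sketched with POSIX calls; the /spell/mail names follow the example above, and the calls simply report an error if those directories do not exist.

/* Sketch: creating and removing entries with absolute and relative paths. */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* Absolute path: independent of the current working directory. */
    if (mkdir("/spell/mail/count", 0755) != 0)
        perror("mkdir /spell/mail/count");

    /* Relative path: resolved against the current directory. */
    if (chdir("/spell/mail") == 0 && mkdir("prog", 0755) != 0)
        perror("mkdir prog");

    /* rm <file-name> -> unlink(); rmdir() removes an empty directory. */
    unlink("list");
    rmdir("count");
    return 0;
}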



Acyclic-Graph Directories
 Have shared subdirectories and files


Acyclic-Graph Directories (Cont.)


 Two different names (aliasing)
 If dict deletes list ⇒ dangling pointer
Solutions:
 Backpointers, so we can delete all pointers
Variable size records a problem
 Backpointers using a daisy chain organization
 Entry-hold-count solution
 New directory entry type
 Link – another name (pointer) to an existing file
 Resolve the link – follow pointer to locate the file
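
A brief POSIX sketch of the two link types discussed here; the /dict and /spell path names echo the aliasing example and are illustrative only.

/* Sketch: hard links and symbolic links with POSIX calls. */
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* Hard link: a second directory entry for the same file (same inode). */
    if (link("/dict/words", "/spell/list") != 0)
        perror("link");

    /* Symbolic link: a new file whose contents are a path name; the OS
     * resolves (follows) it to locate the target when it is opened. */
    if (symlink("/dict/words", "/spell/words-alias") != 0)
        perror("symlink");

    /* Deleting the target leaves a dangling symbolic link, while the file
     * itself persists until its last hard link is removed. */
    return 0;
}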

General Graph Directory


General Graph Directory (Cont.)


 How do we guarantee no cycles?
 Allow only links to file not subdirectories
 Garbage collection
 Every time a new link is added use a cycle detection algorithm to determine
whether it is OK

File-System Mounting
• A file system must be mounted before it can be accessed
• An unmounted file system is mounted at a mount point
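
A hedged, Linux-specific sketch of mounting with the mount(2) system call (requires root); the device, file-system type, and mount point are illustrative. It is roughly what the shell command mount -t ext4 -o ro /dev/sdb1 /mnt/data does.

/* Sketch: attach and detach a file system at a mount point (Linux). */
#include <sys/mount.h>
#include <stdio.h>

int main(void) {
    /* Attach the file system on /dev/sdb1 at the mount point /mnt/data. */
    if (mount("/dev/sdb1", "/mnt/data", "ext4", MS_RDONLY, NULL) != 0) {
        perror("mount");
        return 1;
    }
    /* ... files are now reachable under /mnt/data ... */

    if (umount("/mnt/data") != 0)   /* detach when done */
        perror("umount");
    return 0;
}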


Mount Point


File Sharing
 Sharing of files on multi-user systems is desirable
 Sharing may be done through a protection scheme
 On distributed systems, files may be shared across a network
 Network File System (NFS) is a common distributed file-sharing method
 If multi-user system
 User IDs identify users, allowing permissions and protections to be per-user
Group IDs allow users to be in groups, permitting group access rights
 Owner of a file / directory
 Group of a file / directory

File Sharing – Remote File Systems


 Uses networking to allow file system access between systems
 Manually via programs like FTP
 Automatically, seamlessly using distributed file systems
 Semi automatically via the world wide web
 Client-server model allows clients to mount remote file systems from servers
 Server can serve multiple clients
 Client and user-on-client identification is insecure or complicated
 NFS is standard UNIX client-server file sharing protocol
 CIFS is standard Windows protocol
 Standard operating system file calls are translated into remote calls
 Distributed Information Systems (distributed naming services) such as LDAP, DNS, NIS, Active Directory
implement unified access to information needed for remote computing

File Sharing – Failure Modes


 All file systems have failure modes
 For example corruption of directory structures or other non-user data, called
metadata
 Remote file systems add new failure modes, due to network failure, server failure
 Recovery from failure can involve state information about status of each
remote request
 Stateless protocols such as NFS v3 include all information in each request,
allowing easy recovery but less security

File Sharing – Consistency Semantics


 Specify how multiple users are to access a shared file simultaneously
 Similar to process synchronization algorithms
 Tend to be less complex due to disk I/O and network latency (for remote file systems)
 Andrew File System (AFS) implemented complex remote file sharing semantics
 Unix file system (UFS) implements:
 Writes to an open file visible immediately to other users of the same open file
 Sharing file pointer to allow multiple users to read and write concurrently
 AFS has session semantics
 Writes only visible to sessions starting after the file is closed

Protection
 File owner/creator should be able to control:
 what can be done
 by whom
 Types of access
 Read
 Write
 Execute
 Append
 Delete
 List

Access Lists and Groups


 Mode of access: read, write, execute
 Three classes of users on Unix / Linux
a) owner access 7 ⇒ RWX = 111
b) group access 6 ⇒ RWX = 110
c) public access 1 ⇒ RWX = 001
 Ask manager to create a group (unique name), say G, and add some
users to the group.
 For a particular file (say game) or subdirectory, define an appropriate
access.

Attach a group to a file


chgrp G game
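
A small C sketch of the same administration steps using chown() and chmod(); it assumes the group G already exists and that the caller owns the file game.

/* Sketch: attach group G to "game" and set owner/group/public modes. */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <grp.h>
#include <stdio.h>

int main(void) {
    struct group *g = getgrnam("G");          /* look up the group created by the admin */
    if (g == NULL) { fprintf(stderr, "group G not found\n"); return 1; }

    /* chgrp G game : keep the owner ((uid_t)-1), change only the group. */
    if (chown("game", (uid_t)-1, g->gr_gid) != 0)
        perror("chown");

    /* chmod 761 game : owner rwx (7), group rw- (6), others --x (1). */
    if (chmod("game", 0761) != 0)
        perror("chmod");
    return 0;
}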

Windows 7 Access-Control List Management


A Sample UNIX Directory Listing


File-System Implementation
 Mount table storing file system mounts, mount points, file system types
 The following figure illustrates the necessary file system structures provided by the operating
systems
 Figure (a) refers to opening a file
 Figure (b) refers to reading a file
 Plus buffers hold data blocks from secondary storage
 Open returns a file handle for subsequent use
 Data from read eventually copied to specified user process memory

In-Memory File System Structures


Partitions and Mounting


• Partition can be a volume containing a file system (“cooked”) or raw – just a sequence of
blocks with no file system
• Boot block can point to boot volume or boot loader, a set of blocks that contain enough code to know how to load the kernel from the file system, or to a boot management program for multi-OS booting
• Root partition contains the OS; other partitions can hold other OSes, other file systems, or be raw. It is mounted at boot time
• Other partitions can mount automatically or manually
• At mount time, file system consistency checked
• Is all metadata correct?
• If not, fix it, try again
• If yes, add to mount table, allow access

Virtual File Systems


 Virtual File Systems (VFS) on Unix provide an object-oriented way of
implementing file systems
 VFS allows the same system call interface (the API) to be used for
different types of file systems
• Separates file-system generic operations from implementation details
• Implementation can be one of many file system types, or network file system
• Implements vnodes which hold inodes or network file details
 Then dispatches operation to appropriate file system implementation
routines
 The API is to the VFS interface, rather than any specific type of file
system

Schematic View of Virtual File System


Virtual File System Implementation


 For example, Linux has four object types:
• inode, file, superblock, dentry
 VFS defines set of operations on the objects that must be implemented
• Every object has a pointer to a function table
• Function table has addresses of routines to implement that function on that
object
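
The function-table idea can be sketched in a few lines of C. The struct and function names below are invented for illustration; they are not Linux's actual inode/file/superblock/dentry definitions.

/* Sketch: a VFS-style object carrying a pointer to a table of routines. */
#include <stdio.h>

struct vfile;                                     /* forward declaration   */

struct file_ops {                                 /* the "function table"  */
    long (*read)(struct vfile *f, void *buf, long n);
    long (*write)(struct vfile *f, const void *buf, long n);
};

struct vfile {
    const struct file_ops *ops;                   /* pointer to the table  */
    void *fs_private;                             /* concrete FS details   */
};

/* One possible implementation (a fake in-memory file system). */
static long memfs_read(struct vfile *f, void *buf, long n) {
    (void)f; (void)buf;
    printf("memfs read of %ld bytes\n", n);
    return n;
}
static const struct file_ops memfs_ops = { .read = memfs_read, .write = NULL };

/* Generic layer: the same call works for any file system type. */
static long vfs_read(struct vfile *f, void *buf, long n) {
    return f->ops->read(f, buf, n);               /* dispatch through the table */
}

int main(void) {
    struct vfile f = { .ops = &memfs_ops, .fs_private = NULL };
    char buf[64];
    vfs_read(&f, buf, sizeof buf);
    return 0;
}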

Directory Implementation
 Linear list of file names with pointer to the data blocks
• Simple to program
• Time-consuming to execute
Linear search time
Could keep ordered alphabetically via linked list or use B+ tree
 Hash Table – linear list with hash data structure
• Decreases directory search time
• Collisions – situations where two file names hash to the same location
• Only good if entries are fixed size, or use chained-overflow method
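
A minimal C sketch of the linear-list approach, showing why lookup time grows with the number of entries; sizes and names are illustrative.

/* Sketch: a directory as a linear list of (name, first-block) entries. */
#include <string.h>
#include <stdio.h>

#define MAX_ENTRIES 128

struct dir_entry { char name[32]; int first_block; };

struct directory {
    struct dir_entry entries[MAX_ENTRIES];
    int count;
};

/* Returns the starting block of the named file, or -1 if absent. */
int dir_lookup(const struct directory *d, const char *name) {
    for (int i = 0; i < d->count; i++)            /* linear search time */
        if (strcmp(d->entries[i].name, name) == 0)
            return d->entries[i].first_block;
    return -1;
}

int main(void) {
    struct directory d = { .count = 2,
        .entries = { { "notes.txt", 17 }, { "a.out", 42 } } };
    printf("a.out starts at block %d\n", dir_lookup(&d, "a.out"));
    return 0;
}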

Allocation Methods
An allocation method refers to how disk blocks are allocated for
files:

 Contiguous allocation

 Linked allocation

 Indexed allocation

Contiguous Allocation
 Each file occupies a set of contiguous blocks on the disk

 Simple – only starting location (block #) and length (number of blocks) are required

 Random access

 Wasteful of space (dynamic storage-allocation problem)

 Files cannot grow
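
A tiny C sketch of the mapping: with only a start block and length in the directory entry, logical block b of the file is physical block start + b (values below are illustrative).

/* Sketch: logical-to-physical mapping under contiguous allocation. */
#include <stdio.h>

struct file_entry { int start; int length; };     /* from the directory */

/* Physical block for logical block lb, or -1 if past end of file. */
int contiguous_map(struct file_entry f, int lb) {
    if (lb < 0 || lb >= f.length)
        return -1;
    return f.start + lb;                          /* contiguous on disk */
}

int main(void) {
    struct file_entry mail = { .start = 19, .length = 6 };   /* blocks 19..24 */
    printf("logical block 3 -> physical block %d\n", contiguous_map(mail, 3));
    return 0;
}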



Contiguous Allocation


Contiguous Allocation of Disk Space


Extent-Based Systems
 Many newer file systems (i.e., Veritas File System) use a modified contiguous
allocation scheme
 Extent-based file systems allocate disk blocks in extents

 An extent is a contiguous set of disk blocks


• Extents are allocated for file allocation
• A file consists of one or more extents

Allocation Methods -Linked


 Linked allocation –each file a linked list of blocks
• File ends at nil pointer
• No external fragmentation
• Each block contains pointer to next block
• No compaction, external fragmentation
• Free space management system called when new block needed
• Improve efficiency by clustering blocks into groups but increases internal fragmentation
• Reliability can be a problem
• Locating a block can take many I/Os and disk seeks
 FAT (File Allocation Table) variation
• Beginning of volume has table, indexed by block number
• Much like a linked list, but faster on disk and cacheable
• New block allocation simple
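
A short C sketch of following a FAT chain; the table contents are made up, but the traversal is the point: each block number indexes the table to find the next block of the file.

/* Sketch: walking a File Allocation Table chain (-1 marks end of file). */
#include <stdio.h>

#define FAT_EOF (-1)

int main(void) {
    /* fat[b] = next block after b (a tiny, made-up table). */
    int fat[16];
    for (int i = 0; i < 16; i++) fat[i] = FAT_EOF;
    fat[9] = 2;  fat[2] = 12;  fat[12] = FAT_EOF;   /* file: 9 -> 2 -> 12 */

    int start = 9;                                  /* from the directory entry */
    for (int b = start; b != FAT_EOF; b = fat[b])
        printf("read block %d\n", b);               /* one disk read per block  */
    return 0;
}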

Linked Allocation
 Each file is a linked list of disk blocks: blocks may be scattered anywhere on the disk


Linked Allocation (Cont.)


Linked Allocation


File-Allocation Table


Allocation Methods -Indexed


 Indexed allocation
• Each file has its own index block(s) of pointers to its data blocks
 Logical view

Example of Indexed Allocation


Indexed Allocation (Cont.)


 Need index table
 Random access
 Dynamic access without external fragmentation, but have overhead of index block
 Mapping from logical to physical in a file of maximum size of 256K bytes and block size of 512 bytes. We need only 1 block for index table
For logical address LA:
Q = LA / 512 = displacement into index table
R = LA mod 512 = displacement into block
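
The same mapping in a few lines of C, with an illustrative in-memory index block; q selects the slot in the index table and r is the offset inside the data block.

/* Sketch: splitting a logical address into index slot and block offset. */
#include <stdio.h>

#define BLOCK_SIZE 512

int main(void) {
    int index_block[BLOCK_SIZE / 4];         /* up to 128 4-byte block pointers (illustrative) */
    for (int i = 0; i < BLOCK_SIZE / 4; i++)
        index_block[i] = 100 + i;            /* pretend data blocks 100, 101, ... */

    long la = 1300;                          /* logical address within the file   */
    long q = la / BLOCK_SIZE;                /* displacement into the index table */
    long r = la % BLOCK_SIZE;                /* displacement into the data block  */

    printf("LA %ld -> index slot %ld -> physical block %d, offset %ld\n",
           la, q, index_block[q], r);
    return 0;
}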

Indexed Allocation –Mapping (Cont.)


• Mapping from logical to physical in a file of unbounded length (block size of 512 words)
• Linked scheme – link blocks of index table (no limit on size)
For logical address LA:
Q1 = LA / (512 × 511) = block of index table
R1 = LA mod (512 × 511), used as follows:
Q2 = R1 / 512 = displacement into block of index table
R2 = R1 mod 512 = displacement into block of file

Indexed Allocation –Mapping (Cont.)


 Two-level index (4K blocks could store 1,024 four-byte pointers in outer index -> 1,048,576 data blocks and file size of up to 4 GB)
For logical address LA (4K blocks, 1,024 pointers per index block):
Q1 = LA / (1,024 × 4K) = displacement into outer index
R1 = LA mod (1,024 × 4K), used as follows:
Q2 = R1 / 4K = displacement into block of index table
R2 = R1 mod 4K = displacement into block of file

Indexed Allocation –Mapping (Cont.)


Combined Scheme: UNIX UFS (4K bytes per block, 32-bit addresses)


Free-Space Management (Cont.)


 Bit map requires extra space
• Example:
block size = 4 KB = 2^12 bytes
disk size = 2^40 bytes (1 terabyte)
n = 2^40 / 2^12 = 2^28 bits (or 32 MB)
if clusters of 4 blocks -> 8 MB of memory
 Easy to get contiguous files
 Linked list (free list)
• Cannot get contiguous space easily
• No waste of space
• No need to traverse the entire list (if # free blocks recorded)
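
A small C sketch of scanning a bit map for a free block, assuming the convention that a 1 bit marks a free block; the bitmap contents are illustrative.

/* Sketch: find the first free block in a free-space bit map. */
#include <stdio.h>
#include <stdint.h>

#define NBLOCKS 64

/* Returns the number of the first free block, or -1 if none. */
int first_free(const uint8_t *bitmap) {
    for (int b = 0; b < NBLOCKS; b++)
        if (bitmap[b / 8] & (1u << (b % 8)))     /* bit set -> block b is free */
            return b;
    return -1;
}

int main(void) {
    uint8_t bitmap[NBLOCKS / 8] = {0};
    bitmap[1] = 0x0C;                            /* blocks 10 and 11 marked free */
    printf("first free block: %d\n", first_free(bitmap));
    return 0;
}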

Free-Space Management (Cont.)


 Need to protect:
• Pointer to free list
• Bit map
 Must be kept on disk
 Copy in memory and disk may differ
 Cannot allow for block[i] to have a situation where bit[i] = 1 in memory and bit[i]
= 0 on disk
• Solution:
 Set bit[i] = 1 in disk
 Allocate block[i]
 Set bit[i] = 1 in memory

Linked Free Space List on Disk


Free-Space Management (Cont.)


 Grouping (group size = n blocks)
• Modify linked list to store address of next n-1 free blocks in first free block, plus a pointer to next block that contains free-block-pointers (like this one)
 Counting
• Because space is frequently contiguously used and freed, with contiguous allocation, extents, or clustering
 Keep address of first free block and count of following free blocks
 Free space list then has entries containing addresses and counts

Free-Space Management (Cont.)


 Space Maps
• Used in ZFS
• Consider meta-data I/O on very large file systems
 Full data structures like bit maps couldn’t fit in memory -> thousands of I/Os
• Divides device space into metaslab units and manages metaslabs
 Given volume can contain hundreds of metaslabs
• Each metaslab has associated space map
 Uses counting algorithm
• But records to log file rather than file system
 Log of all block activity, in time order, in counting format
• Metaslab activity -> load space map into memory in balanced-tree structure, indexed by
offset
 Replay log into that structure
 Combine contiguous free blocks into single entry

Directory Implementation
 Linear list of file names with pointer to the data blocks
• simple to program
• time-consuming to execute
 Hash Table –linear list with hash data structure
• decreases directory search time
• collisions–situations where two file names hash to the same location
• fixed size

Efficiency and Performance


 Efficiency dependent on:
• Disk allocation and directory algorithms
• Types of data kept in file’s directory entry
• Pre-allocation or as-needed allocation of metadata structures
• Fixed-size or varying-size data structures

Efficiency and Performance (Cont.)


 Performance
• Keeping data and metadata close together
• Buffer cache –separate section of main memory for frequently used blocks
• Synchronous writes sometimes requested by apps or needed by OS
• No buffering / caching –writes must hit disk before acknowledgement
• Asynchronous writes more common, buffer-able, faster
• Free-behind and read-ahead –techniques to optimize sequential access
• Reads frequently slower than writes

Recovery
 Consistency checking – compares data in directory structure with data blocks on disk, and tries to fix inconsistencies
• Can be slow and sometimes fails
 Use system programs to back up data from disk to another storage device (magnetic tape, other magnetic disk, optical)


• Recover lost file or disk by restoring data from backup

Mass-Storage Structure
 Magnetic disks provide bulk of secondary storage of modern computers
 Drives rotate at 60 to 200 times per second
 Transfer rate is rate at which data flow between drive and computer
 Positioning time (random-access time) is time to move disk arm to desired cylinder (seek time) and time for desired sector to rotate under the disk head (rotational latency)
 Head crash results from disk head making contact with the disk surface.
 Disks can be removable
 Drive attached to computer via I/O bus
 Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI

Moving-head Disk Mechanism


Disk Structure
 Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the
logical block is the smallest unit of transfer.

 The 1-dimensional array of logical blocks is mapped into the sectors of the disk sequentially.

 Sector 0 is the first sector of the first track on the outermost cylinder.

 Mapping proceeds in order through that track, then the rest of the tracks in that
cylinder, and then through the rest of the cylinders from outermost to innermost.

Disk Scheduling …
 The operating system is responsible for using hardware efficiently — for the
disk drives, this means having a fast access time and disk bandwidth.
 Access time has two major components
 Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.


 Rotational latency is the additional time waiting for the disk to rotate the

desired sector to the disk head.


 Minimize seek time
 Seek time ≈ seek distance
 Disk bandwidth is the total number of bytes transferred, divided by the total
time between the first request for service and the completion of the last
transfer.

Disk Scheduling Algorithms …


 Several algorithms exist to schedule the servicing of disk I/O requests.

FCFS

SSTF

SCAN

C-SCAN

LOOK

C-LOOK

Selecting a Disk-Scheduling Algorithm


 SSTF is common and has a natural appeal
 SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
 Performance depends on the number and types of requests.
 Requests for disk service can be influenced by the file-allocation method.
 The disk-scheduling algorithm should be written as a separate module of the
operating system, allowing it to be replaced with a different algorithm if necessary.
 Either SSTF or LOOK is a reasonable choice for the default algorithm.

Swap-Space Management

Swap-space — Virtual memory uses disk space as an extension of main memory.
 Swap-space can be carved out of the normal file system, or, more commonly, it can be in a separate disk partition.
 Swap-space management

4.3BSD allocates swap space when process starts; holds text segment (the
program) and data segment.

Kernel uses swap maps to track swap-space use.

Solaris 2 allocates swap space only when a page is forced out of physical memory,
not when the virtual memory page is first created.

Case study: Disk scheduling algorithms


Introduction
Disk scheduling algorithms determine the order in which disk I/O requests are serviced to optimize performance. Since hard drives have mechanical movement (in HDDs) or require efficient memory block access (in SSDs), proper scheduling can reduce seek time, enhance throughput, and minimize latency.
This study explores various disk scheduling algorithms, their working principles,
advantages, disadvantages, and real-world applications.

Disk Scheduling Algorithms … A Case Study


Problem Statement
• In a multi-user operating system, numerous disk requests are generated at different times. The challenge is to schedule these requests optimally to reduce seek time (the time taken for the disk arm to move to the required track).

For example, consider a disk queue with requests arriving for the following tracks (in order):
98, 183, 37, 122, 14, 124, 65, 67
• The read/write head is currently at 53.

• The goal is to determine which algorithm performs best in minimizing seek time.
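
As a quick check of the figures quoted on the next slides, here is a minimal C sketch (not part of the case study itself) that computes total head movement for FCFS and SSTF on this queue with the head starting at cylinder 53.

/* Sketch: total head movement for FCFS and SSTF on the example queue. */
#include <stdio.h>

#define N 8

static int dist(int a, int b) { return a > b ? a - b : b - a; }

int fcfs(int head, const int *req) {
    int total = 0;
    for (int i = 0; i < N; i++) { total += dist(head, req[i]); head = req[i]; }
    return total;
}

int sstf(int head, const int *req) {
    int pending[N], total = 0;
    for (int i = 0; i < N; i++) pending[i] = req[i];
    for (int served = 0; served < N; served++) {
        int best = -1;                        /* pick the closest pending request */
        for (int i = 0; i < N; i++)
            if (pending[i] >= 0 &&
                (best < 0 || dist(head, pending[i]) < dist(head, pending[best])))
                best = i;
        total += dist(head, pending[best]);
        head = pending[best];
        pending[best] = -1;                   /* mark as served */
    }
    return total;
}

int main(void) {
    int req[N] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    printf("FCFS total head movement: %d cylinders\n", fcfs(53, req));  /* 640 */
    printf("SSTF total head movement: %d cylinders\n", sstf(53, req));  /* 236 */
    return 0;
}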

Disk Scheduling Algorithms …FCFS


1. First-Come, First-Served (FCFS)
Concept: Requests are served in the order they arrive.
Implementation: Simple queue-based processing.
Advantages: Fair, easy to implement.
Disadvantages: High seek time when requests are scattered.
📌 Example:
• Order of servicing → 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67
• Seek time = 640 cylinders

Disk Scheduling Algorithms …FCFS


Illustration shows total head movement of 640 cylinders.


Disk Scheduling Algorithms …SSTF


2.Shortest Seek Time First (SSTF)
Concept: Selects the request closest to the current head position.
Implementation: Sorts requests based on distance from the head.
Advantages: Reduces average seek time.
Disadvantages: Can cause starvation for far-off requests.
📌 Example:
• Order of servicing → 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183
• Seek time = 236 cylinders

Disk Scheduling Algorithms …SSTF


Disk Scheduling Algorithms …SCAN


3. SCAN (Elevator Algorithm)
Concept: The head moves in one direction, serving requests, then reverses.
Implementation: Sorts requests, moves in a defined direction.
Advantages: Fairer than SSTF, avoids starvation.
Disadvantages: Can still have a high seek time if requests are at extremes.
📌 Example:
Order of servicing → 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 → (reverse) → 37 → 14 (the head continues to the last cylinder, assumed here to be 199)
Seek time = (199 − 53) + (199 − 14) = 331 cylinders

Disk Scheduling Algorithms …SCAN


Disk Scheduling Algorithms … C-SCAN


C-SCAN (Circular SCAN)
Concept: Similar to SCAN but resets to the beginning after reaching the end.
Implementation: Always moves in one direction and wraps around.
Advantages: Provides uniform wait time.
Disadvantages: Requires more movement than SCAN.
📌 Example:
Order of servicing → 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 → (jump to 0) → 14 → 37
Seek time = (199 − 53) + (37 − 0) = 183 cylinders of servicing movement (382 if the return sweep from 199 to 0 is counted)

Disk Scheduling Algorithms … C-SCAN


Disk Scheduling Algorithms …LOOK and C-LOOK


LOOK and C-LOOK
LOOK: Similar to SCAN but stops at the last request instead of going to the disk's edge.
C-LOOK: Like C-SCAN but only moves to the highest request before resetting.
Advantages: Reduces unnecessary movement.
📌 Example:
LOOK Order → 53 → 65 → 67 → 98 → 122 → 124 → 183 → 37 → 14
C-LOOK Order → 53 → 65 → 67 → 98 → 122 → 124 → 183 → (jump) → 14 → 37
Seek time = LOOK: (183 − 53) + (183 − 14) = 299 cylinders; C-LOOK: 153 cylinders (322 if the jump from 183 back to 14 is counted)

Disk Scheduling Algorithms …LOOK and C-LOOK


Comparison of Disk Scheduling Algorithms


Disk Scheduling Algorithms


 Real-World Applications

• Operating Systems: Used in disk scheduling for multi-tasking systems.

• Databases: Efficient retrieval of stored data using optimized seek time.



• Cloud Storage: Improves SSD/HDD performance in cloud environments.

• Multimedia Servers: Ensures smooth playback by reducing disk access latency.



Disk Scheduling Algorithms


 Conclusion
• C-LOOK and C-SCAN provide optimal performance by reducing seek time and ensuring
fairness.
• SSTF offers good performance but risks starvation.
• FCFS is fair but inefficient.
• The choice of algorithm depends on the workload and hardware.
• For modern SSDs, seek time is negligible, so wear leveling and other optimization techniques take precedence over traditional disk scheduling.
