Disk scheduling algorithms are crucial in managing how data is read from and written to a computer’s hard disk. These algorithms help determine the order in which disk read and write requests are processed, significantly impacting the speed and efficiency of data access. Common disk scheduling methods include First-Come, First-Served (FCFS), Shortest Seek Time First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK. By understanding and implementing these algorithms, we can optimize system performance and ensure faster data retrieval.
- Disk scheduling is a technique operating systems use to manage the order in which disk I/O (input/output) requests are processed.
- Disk scheduling is also known as I/O Scheduling.
- The main goals of disk scheduling are to optimize the performance of disk operations, reduce the time it takes to access data, and improve overall system efficiency.
In this article, we will explore the different types of disk scheduling algorithms and their functions.
Importance of Disk Scheduling in Operating System
- Multiple I/O requests may arrive from different processes, but the disk controller can serve only one request at a time. The remaining requests must wait in a queue and need to be scheduled.
- Two or more requests may be far apart on the disk, which results in greater disk arm movement.
- Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.
Key Terms Associated with Disk Scheduling
- Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. A disk scheduling algorithm that gives a lower average seek time is better.
- Rotational Latency: Rotational latency is the time taken for the desired sector of the disk to rotate into position so that the read/write head can access it. A disk scheduling algorithm that gives lower rotational latency is better.
- Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and the number of bytes to be transferred.
- Disk Access Time:
Disk Access Time = Seek Time + Rotational Latency + Transfer Time
Total Seek Time = Total Head Movement * Seek Time
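The two formulas can be sketched in Python with illustrative values (all numbers below are assumptions for demonstration, not measurements from a real disk):

```python
# Illustrative timings in milliseconds (assumed values).
seek_time = 5.0           # time to move the arm to the target track
rotational_latency = 4.0  # time for the sector to rotate under the head
transfer_time = 0.1       # time to transfer the data

disk_access_time = seek_time + rotational_latency + transfer_time  # 9.1 ms

total_head_movement = 642  # total tracks moved by the arm
total_seek_time = total_head_movement * seek_time  # 3210.0 ms
```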

Disk Access Time and Disk Response Time
- Disk Response Time: Response time is the average time a request spends waiting to perform its I/O operation. The average response time is the mean response time over all requests, and the variance of response time measures how individual requests are serviced relative to that average. A disk scheduling algorithm that gives a lower variance of response time is better.
Goal of Disk Scheduling Algorithms
- Minimize Seek Time
- Maximize Throughput
- Minimize Latency
- Fairness
- Efficiency in Resource Utilization
Disk Scheduling Algorithms
There are several disk scheduling algorithms. We will discuss each of them in detail.
- FCFS (First Come First Serve)
- SSTF (Shortest Seek Time First)
- SCAN
- C-SCAN
- LOOK
- C-LOOK
- RSS (Random Scheduling)
- LIFO (Last-In First-Out)
- N-STEP SCAN
- F-SCAN
1. FCFS (First Come First Serve)
FCFS is the simplest of all Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue. Let us understand this with the help of an example.

First Come First Serve
Example:
Suppose the order of requests is: (82, 170, 43, 140, 24, 16, 190)
And current position of Read/Write head is: 50
So, total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642
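This calculation can be sketched in Python (the function name `fcfs_seek` and its interface are my own, for illustration only):

```python
def fcfs_seek(requests, head):
    """Total head movement when requests are served in arrival order (FCFS)."""
    movement = 0
    for track in requests:
        movement += abs(track - head)  # distance to the next request
        head = track                   # arm now rests at that track
    return movement

print(fcfs_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 642
```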
Advantages of FCFS
Here are some of the advantages of First Come First Serve.
- Every request gets a fair chance
- No indefinite postponement
Disadvantages of FCFS
Here are some of the disadvantages of First Come First Serve.
- Does not try to optimize seek time
- May not provide the best possible service
2. SSTF (Shortest Seek Time First)
In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. So, the seek time of every request is calculated in advance in the queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first. SSTF is certainly an improvement over FCFS as it decreases the average response time and increases the throughput of the system. Let us understand this with the help of an example.
Example:

Shortest Seek Time First
Suppose the order of requests is: (82, 170, 43, 140, 24, 16, 190)
And current position of Read/Write head is: 50
So,
total overhead movement (total distance covered by the disk arm) =
(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) =208
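The greedy nearest-request selection can be sketched in Python (the function name `sstf_seek` is my own, for illustration only):

```python
def sstf_seek(requests, head):
    """Total head movement when the closest pending request is served next (SSTF)."""
    pending = list(requests)
    movement = 0
    while pending:
        # Pick the pending request with the shortest seek from the current head.
        nearest = min(pending, key=lambda t: abs(t - head))
        movement += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return movement

print(sstf_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 208
```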
Advantages of Shortest Seek Time First
Here are some of the advantages of Shortest Seek Time First.
- The average Response Time decreases
- Throughput increases
Disadvantages of Shortest Seek Time First
Here are some of the disadvantages of Shortest Seek Time First.
- Overhead to calculate seek time in advance
- Can cause Starvation for a request if it has a higher seek time as compared to incoming requests
- The high variance of response time as SSTF favors only some requests
3. SCAN
In the SCAN algorithm, the disk arm moves in a particular direction and services the requests in its path; after reaching the end of the disk, it reverses direction and services the requests on its return path. Because it works like an elevator, it is also known as the elevator algorithm. As a result, requests in the mid-range are serviced more often, while those arriving just behind the disk arm have to wait.
Example:

SCAN Algorithm
Suppose the requests to be addressed are: 82, 170, 43, 140, 24, 16, 190. The Read/Write arm is at 50, and it is given that the disk arm should move towards the larger value.
Therefore, the total overhead movement (total distance covered by the disk arm) is calculated as
= (199-50) + (199-16) = 332
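This sweep can be sketched in Python for a 200-cylinder disk (the function name `scan_seek` and the `max_cylinder` parameter are my own, for illustration only):

```python
def scan_seek(requests, head, max_cylinder=199):
    """SCAN moving towards larger values: sweep to the last cylinder, then back."""
    left = [t for t in requests if t < head]
    movement = max_cylinder - head            # sweep right to the end of the disk
    if left:
        movement += max_cylinder - min(left)  # reverse and sweep back to the
                                              # smallest pending request
    return movement

print(scan_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 332
```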
Advantages of SCAN Algorithm
Here are some of the advantages of the SCAN Algorithm.
- High throughput
- Low variance of response time
- Moderate average response time
Disadvantages of SCAN Algorithm
Here are some of the disadvantages of the SCAN Algorithm.
- Long waiting time for requests for locations just visited by disk arm
4. C-SCAN
In the SCAN algorithm, after reversing direction, the disk arm rescans the path it has just covered. It may therefore happen that many requests are waiting at the other end of the disk while few or none are pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm: instead of reversing direction, the disk arm jumps to the other end of the disk and starts servicing requests from there. The disk arm thus moves in a circular fashion; since the algorithm is otherwise similar to SCAN, it is known as C-SCAN (Circular SCAN).
Example:

Circular SCAN
Suppose the requests to be addressed are: 82, 170, 43, 140, 24, 16, 190. The Read/Write arm is at 50, and it is given that the disk arm should move towards the larger value.
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
=(199-50) + (199-0) + (43-0) = 391
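The circular sweep can be sketched in Python (the function name `cscan_seek` and the `max_cylinder` parameter are my own, for illustration only; the jump from the end back to cylinder 0 is counted as head movement, as in the calculation above):

```python
def cscan_seek(requests, head, max_cylinder=199):
    """C-SCAN: sweep to the end, jump back to cylinder 0, sweep up again."""
    left = [t for t in requests if t < head]
    movement = max_cylinder - head  # sweep right to the end of the disk
    if left:
        movement += max_cylinder    # jump from the end back to cylinder 0
        movement += max(left)       # sweep up to the largest remaining request
    return movement

print(cscan_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 391
```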
Advantages of C-SCAN Algorithm
Here are some of the advantages of C-SCAN.
- Provides more uniform wait time compared to SCAN.
5. LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm, except that the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head and then reverses direction from there. This avoids the extra delay caused by unnecessary traversal to the end of the disk.
Example:

LOOK Algorithm
Suppose the requests to be addressed are: 82, 170, 43, 140, 24, 16, 190. The Read/Write arm is at 50, and it is given that the disk arm should move towards the larger value.
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
= (190-50) + (190-16) = 314
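A Python sketch of LOOK, stopping at the farthest pending request instead of the disk's end (the function name `look_seek` is my own, for illustration only):

```python
def look_seek(requests, head):
    """LOOK moving towards larger values: go only as far as the last request."""
    right = [t for t in requests if t >= head]
    left = [t for t in requests if t < head]
    movement = 0
    if right:
        movement += max(right) - head  # sweep to the farthest right request
        head = max(right)
    if left:
        movement += head - min(left)   # reverse back to the smallest request
    return movement

print(look_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 314
```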
6. C-LOOK
Just as LOOK relates to SCAN, C-LOOK relates to the C-SCAN disk scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head, and from there jumps to the last request at the other end. This also avoids the extra delay caused by unnecessary traversal to the end of the disk.
Example:
- Suppose the requests to be addressed are: 82, 170, 43, 140, 24, 16, 190. The Read/Write arm is at 50, and it is given that the disk arm should move towards the larger value.

C-LOOK
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341
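A Python sketch of C-LOOK (the function name `clook_seek` is my own, for illustration only; as in the calculation above, the jump to the other end's last request counts as head movement):

```python
def clook_seek(requests, head):
    """C-LOOK: sweep to the farthest right request, jump to the smallest
    request, then sweep up to the last pending request below the start."""
    right = [t for t in requests if t >= head]
    left = [t for t in requests if t < head]
    movement = 0
    if right:
        movement += max(right) - head      # sweep to the farthest right request
        head = max(right)
    if left:
        movement += head - min(left)       # jump across to the smallest request
        movement += max(left) - min(left)  # sweep up through the left requests
    return movement

print(clook_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 341
```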
7. RSS (Random Scheduling)
RSS stands for Random Scheduling: the next request to service is picked at random. It fits situations where scheduling involves random attributes such as random processing times, random due dates, random weights, and stochastic machine breakdowns, which is why it is usually used for analysis and simulation.
8. LIFO (Last-In First-Out)
In the LIFO (Last In, First Out) algorithm, the newest requests are serviced before the existing ones: the request that entered the queue last is serviced first, and then the rest are serviced in the same reverse-arrival order.
Advantages of LIFO (Last-In First-Out)
Here are some of the advantages of the Last In First Out algorithm.
- Maximizes locality and resource utilization
Disadvantages of LIFO (Last-In First-Out)
- Can seem unfair to other requests; if new requests keep arriving, it can cause starvation of the old and existing ones
9. N-STEP SCAN
It is also known as the N-STEP LOOK algorithm. In this, a buffer is created for N requests, and all requests in a buffer are serviced in one go. Once the buffer is full, new requests are not added to it but are placed in the next buffer. When these N requests have been serviced, the next batch of N requests is taken up, so every request gets guaranteed service.
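The batching idea can be sketched in Python (the function name `n_step_scan` is my own; for simplicity each batch is served in sorted order as a single sweep, which is an assumption rather than a full SCAN implementation):

```python
def n_step_scan(requests, head, n):
    """Serve requests in fixed batches of n; requests beyond the current
    batch wait their turn, so no request is postponed indefinitely."""
    movement = 0
    queue = list(requests)
    while queue:
        batch, queue = queue[:n], queue[n:]  # freeze a batch of up to n requests
        for track in sorted(batch):          # serve the batch as one sweep
            movement += abs(track - head)
            head = track
    return movement

print(n_step_scan([82, 170, 43, 140, 24, 16, 190], 50, 4))  # 462
```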
Advantages of N-STEP SCAN
Here are some of the advantages of the N-Step Algorithm.
- It eliminates the starvation of requests completely
10. F-SCAN
This algorithm uses two sub-queues. During the scan, all requests in the first queue are serviced and the new incoming requests are added to the second queue. All new requests are kept on halt until the existing requests in the first queue are serviced.
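One pass of this two-queue scheme can be sketched in Python (the function name `f_scan_pass` and its interface are my own; the active queue is served as a single sorted sweep for simplicity):

```python
def f_scan_pass(active_queue, head, new_arrivals):
    """One F-SCAN pass: serve every request in the active queue while new
    arrivals accumulate in a second queue for the next pass."""
    movement = 0
    for track in sorted(active_queue):  # serve the frozen queue in one sweep
        movement += abs(track - head)
        head = track
    # Requests that arrived during the scan become the next active queue.
    return movement, head, list(new_arrivals)

movement, head, next_queue = f_scan_pass([82, 43, 140], 50, [24, 16])
print(movement, head, next_queue)  # 104 140 [24, 16]
```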
Advantages of F-SCAN
Here are some of the advantages of the F-SCAN Algorithm.
- F-SCAN, along with N-STEP SCAN, prevents "arm stickiness" (a phenomenon in I/O scheduling where the algorithm keeps servicing requests at or near the current track and thus prevents any seeking)
Each algorithm is unique in its own way. Overall Performance depends on the number and type of requests.
Note: Average Rotational latency is generally taken as 1/2(Rotational latency).
Questions For Practice
1. Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100, and there is a queue of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135, and 145. If Shortest-Seek Time First (SSTF) is being used for scheduling the disk access, the request for cylinder 90 is serviced after servicing ____________ number of requests. (GATE CS 2014)
(A) 1
(B) 2
(C) 3
(D) 4
Solution: Correct Answer is (C).
For a more detailed solution, refer to GATE | GATE-CS-2014-(Set-1) | Question 65.
2) Consider an operating system capable of loading and executing a single sequential user process at a time. The disk head scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek Time First (SSTF), claimed by the vendor to give 50% better benchmark results, what is the expected improvement in the I/O performance of user programs? (GATE CS 2004)
(A) 50%
(B) 40%
(C) 25%
(D) 0%
Solution: Correct Answer is (D).
For a more detailed solution, refer to GATE | GATE-CS-2004 | Question 12.
3) Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it starts execution) is _________ tracks (GATE CS 2015).
(A) 8
(B) 9
(C) 10
(D) 11
Solution: Correct Answer is (C).
For a more detailed solution, refer to GATE | GATE-CS-2015 (Set 1) | Question 65.
4) Consider a typical disk that rotates at 15000 rotations per minute (RPM) and has a transfer rate of 50 × 10^6 bytes/sec. If the average seek time of the disk is twice the average rotational delay and the controller’s transfer time is 10 times the disk transfer time, the average time (in milliseconds) to read or write a 512-byte sector of the disk is _____________. (GATE CS 2015)
Solution: Correct Answer is 6.1
For a more detailed solution, refer to GATE | GATE-CS-2015 (Set 2) | Question 65.
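The arithmetic behind the answer 6.1 can be checked directly (units converted to milliseconds; variable names are my own):

```python
# GATE CS 2015 Q4: average time to read/write a 512-byte sector.
rpm = 15000
rotation_time = 60_000 / rpm                  # 4 ms per full rotation
avg_rotational_delay = rotation_time / 2      # 2 ms (half a rotation on average)
avg_seek = 2 * avg_rotational_delay           # 4 ms (given: twice the delay)
disk_transfer = 512 / 50e6 * 1000             # ~0.01024 ms for 512 bytes
controller_transfer = 10 * disk_transfer      # ~0.1024 ms (given: 10x disk time)

total = avg_seek + avg_rotational_delay + disk_transfer + controller_transfer
print(round(total, 1))  # 6.1
```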