CPU Scheduling in Operating Systems
https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/
CPU scheduling is a key part of how an operating system works. It decides which task (or
process) the CPU should work on at any given time. This is important because a CPU can
only handle one task at a time, but there are usually many tasks that need to be processed. In
this article, we are going to discuss CPU scheduling in detail.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is carried out by the short-term (CPU) scheduler, which picks one of the processes in memory that are ready to execute and allocates the CPU to it.
Table of Contents
What is a Process?
How is Process Memory Used For Efficient Operation?
What is Process Scheduling?
Why do We Need to Schedule Processes?
What is The Need For CPU Scheduling Algorithm?
Terminologies Used in CPU Scheduling
Things to Take Care While Designing a CPU Scheduling Algorithm
What are the different types of CPU Scheduling Algorithms?
1. First Come First Serve
2. Shortest Job First(SJF)
3. Longest Job First(LJF)
4. Priority Scheduling
5. Round robin
6. Shortest Remaining Time First
7. Longest Remaining Time First
8. Highest Response Ratio Next
9. Multiple Queue Scheduling
10. Multilevel Feedback Queue Scheduling
Comparison between various CPU Scheduling algorithms
What is a Process?
In computing, a process is the instance of a computer program that is being executed by
one or many threads. It contains the program code and its activity. Depending on the
operating system (OS), a process may be made up of multiple threads of execution that
execute instructions concurrently.
The text section is composed of the compiled program code, read in from non-volatile storage when the program is launched.
The data section is made up of global and static variables, allocated and initialized prior to executing main.
The heap is used for dynamic memory allocation and is managed via calls to new, delete, malloc, free, etc.
The stack is used for local variables; space on the stack is reserved for local variables when they are declared.
To learn more, you can refer to our detailed article on States of a Process in Operating System.
Why do We Need to Schedule Processes?
If most processes switch from the running state to the waiting state, the CPU sits idle and there is always a chance of failure in the system. So, to minimize this overhead, the OS needs to schedule processes so as to make full use of the CPU and avoid the possibility of deadlock.
Terminologies Used in CPU Scheduling
Waiting Time (W.T.): The difference between turnaround time and burst time.
    Waiting Time = Turnaround Time – Burst Time
CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it usually varies from 40 to 90 percent depending on the system load.
Throughput: The number of processes completed per unit of time. Throughput varies with the length, or duration, of the processes being executed.
Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from submission of the process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Waiting Time: The scheduling algorithm does not affect the amount of CPU time a process needs once it has started executing; it affects only the waiting time of the process, i.e. the time the process spends waiting in the ready queue.
Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are being presented to the user. So another measure is the time from submission of a request until the first response is produced. This measure is called response time.
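To make the relationship between these terms concrete, here is a small worked example in Python; the process and its timing values are made up purely for illustration:

```python
# Hypothetical process: arrives at t=2, first gets the CPU at t=5,
# needs 6 units of CPU time (burst time) and completes at t=13.
arrival_time = 2
first_run_time = 5
burst_time = 6
completion_time = 13

turnaround_time = completion_time - arrival_time   # 13 - 2 = 11
waiting_time = turnaround_time - burst_time        # 11 - 6 = 5
response_time = first_run_time - arrival_time      # 5 - 2 = 3

print(turnaround_time, waiting_time, response_time)  # 11 5 3
```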
What Are The Different Types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
Preemptive Scheduling: the CPU can be taken away from a running process before it finishes its burst, for example when a higher-priority process arrives or its time slice expires.
Non-Preemptive Scheduling: once the CPU has been allocated to a process, the process keeps it until it terminates or switches to the waiting state.
Let us now learn about these CPU scheduling algorithms in operating systems one by one:
1. First Come First Serve (FCFS)
FCFS is the simplest CPU scheduling algorithm: the process that arrives first in the ready queue is allocated the CPU first.
Characteristics of FCFS
FCFS is non-preemptive: once a process starts executing, it runs until it completes or blocks for I/O.
The ready queue is managed as a simple FIFO queue ordered by arrival time.
Advantages of FCFS
Easy to implement.
Processes are served strictly on a first come, first served basis.
Disadvantages of FCFS
The average waiting time tends to be high.
It suffers from the convoy effect: one long process can hold up every process behind it.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on First come, First serve Scheduling.
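For illustration, here is a minimal Python sketch of non-preemptive FCFS. The (name, arrival, burst) tuple format and the example processes are assumptions made for this sketch, not the implementation from the linked article:

```python
# Minimal FCFS sketch: processes are served strictly in arrival order.
# Each process is a (name, arrival_time, burst_time) tuple (illustrative data).
def fcfs(processes):
    processes = sorted(processes, key=lambda p: p[1])  # order by arrival time
    time = 0
    results = []
    for name, arrival, burst in processes:
        start = max(time, arrival)          # CPU may sit idle until the process arrives
        completion = start + burst
        turnaround = completion - arrival
        waiting = turnaround - burst
        results.append((name, waiting, turnaround))
        time = completion
    return results

print(fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]))
# [('P1', 0, 4), ('P2', 3, 6), ('P3', 5, 6)]
```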
2. Shortest Job First (SJF)
Shortest Job First selects the waiting process with the smallest burst time to execute next.
Characteristics of SJF
Shortest Job First has the advantage of having the minimum average waiting time among all scheduling algorithms.
Each job is associated with the unit of time it needs to complete.
It may cause starvation if shorter processes keep arriving. This problem can be solved using the concept of ageing.
Advantages of SJF
As SJF reduces the average waiting time, it performs better than the First Come First Serve scheduling algorithm.
SJF is generally used for long-term scheduling.
Disadvantages of SJF
Long processes may starve if short processes keep arriving.
The burst time of each process must be known in advance, which is usually not possible in practice.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Shortest Job First.
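Below is a rough Python sketch of non-preemptive SJF using the same assumed (name, arrival, burst) tuple format; it simply picks the smallest burst time among the processes that have already arrived:

```python
# Minimal non-preemptive SJF sketch: among the processes that have
# already arrived, always pick the one with the smallest burst time.
def sjf(processes):  # processes: list of (name, arrival_time, burst_time)
    remaining = sorted(processes, key=lambda p: p[1])
    time = 0
    order, results = [], []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle: jump to the next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((name, arrival, burst))
        time += burst
        results.append((name, time - arrival - burst, time - arrival))  # (name, waiting, turnaround)
        order.append(name)
    return order, results

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
# (['P1', 'P3', 'P2'], [('P1', 0, 7), ('P3', 3, 4), ('P2', 6, 10)])
```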
3. Longest Job First (LJF)
Longest Job First is the opposite of SJF: the process with the largest burst time is scheduled first.
Characteristics of LJF
Among all the processes waiting in the ready queue, the CPU is always assigned to the process having the largest burst time.
If two processes have the same burst time, the tie is broken using FCFS, i.e. the process that arrived first is processed first.
LJF CPU scheduling can be of both preemptive and non-preemptive types.
Advantages of LJF
No other process can be scheduled until the longest job or process executes completely.
All the jobs or processes finish at approximately the same time.
Disadvantages of LJF
Generally, the LJF algorithm gives a very high average waiting time and average turnaround time for a given set of processes.
This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Longest job first scheduling.
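Since non-preemptive LJF differs from SJF only in the selection rule, a sketch only needs to swap the min for a max; the helper below is hypothetical and reuses the assumed (name, arrival, burst) tuples:

```python
# Non-preemptive LJF: identical selection loop to the SJF sketch above,
# but choose the ready process with the LARGEST burst time instead.
def ljf_pick(ready):                        # ready: list of (name, arrival, burst)
    return max(ready, key=lambda p: p[2])

print(ljf_pick([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))  # ('P1', 0, 7)
```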
4. Priority Scheduling
Preemptive Priority CPU Scheduling is a preemptive method of CPU scheduling that works based on the priority of a process. The scheduler assigns each process a priority, and the process with the highest priority is executed first. In the case of a tie, that is, when more than one process has the same priority, the processes are scheduled on an FCFS (First Come First Serve) basis.
One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem: a low-priority process may have to wait a very long time before it is scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Priority Preemptive Scheduling algorithm.
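A minimal Python sketch of preemptive priority scheduling might look as follows; it simulates one time unit at a time, assumes lower numbers mean higher priority, and uses made-up (name, arrival, burst, priority) tuples:

```python
# Sketch of preemptive priority scheduling, simulated one time unit at a time.
# Lower number = higher priority; ties fall back to FCFS via the arrival time.
def preemptive_priority(processes):  # processes: list of (name, arrival, burst, priority)
    remaining = {name: burst for name, arrival, burst, prio in processes}
    info = {name: (arrival, prio) for name, arrival, burst, prio in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if info[n][0] <= time]
        if not ready:                       # nothing has arrived yet
            time += 1
            continue
        # Highest priority first; on a tie, the earlier arrival wins (FCFS).
        current = min(ready, key=lambda n: (info[n][1], info[n][0]))
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

print(preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 2, 1), ("P3", 2, 3, 3)]))
# {'P2': 3, 'P1': 6, 'P3': 9}
```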
5. Round Robin
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a
fixed time slot. It is the preemptive version of First come First Serve CPU Scheduling
algorithm. Round Robin CPU Algorithm generally focuses on Time Sharing technique.
It is simple, easy to implement, and starvation-free, as all processes get a balanced CPU allocation.
It is one of the most widely used techniques in CPU scheduling.
It is considered preemptive, as each process is given the CPU for only a limited time slice.
Round Robin is fair in the sense that every process gets an equal share of the CPU.
The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Round robin Scheduling algorithm.
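Here is one possible Python sketch of Round Robin using a FIFO queue; for simplicity it assumes all processes arrive at time 0 and are given as (name, burst) pairs:

```python
from collections import deque

# Round Robin sketch: each process gets at most `quantum` units per turn,
# then goes to the back of the ready queue if it still has work left.
def round_robin(processes, quantum):  # processes: list of (name, burst), all arrive at t=0
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining - run > 0:
            queue.append((name, remaining - run))   # not finished: requeue
        else:
            completion[name] = time                 # finished within this slice
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```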
6. Shortest Remaining Time First (SRTF)
SRTF is the preemptive version of SJF: at every point in time, the process with the smallest remaining burst time is executed.
Characteristics of SRTF
The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its context-switching overhead is not counted.
Context switches happen far more often in SRTF than in SJF and consume valuable CPU time. This adds to the total processing time and diminishes its advantage of fast processing.
Advantages of SRTF
Short processes are handled very quickly, and the average waiting time is low.
Disadvantages of SRTF
Like the shortest job first, it also has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the shortest remaining time first.
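A rough Python sketch of SRTF, simulated one time unit at a time with assumed (name, arrival, burst) tuples, could look like this:

```python
# SRTF sketch: simulate one time unit at a time and always run the
# arrived process with the least remaining burst time (preemptive SJF).
def srtf(processes):  # processes: list of (name, arrival, burst)
    remaining = {name: burst for name, arrival, burst in processes}
    arrival = {name: a for name, a, burst in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# {'P3': 4, 'P2': 7, 'P1': 14}  (P1 is preempted by the shorter jobs)
```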
7. Longest Remaining Time First (LRTF)
LRTF is the preemptive counterpart of Longest Job First: at every point, the CPU is given to the process with the largest remaining burst time.
Characteristics of LRTF
Among all the processes waiting in the ready queue, the CPU is always assigned to the process having the largest remaining burst time.
If two processes have the same burst time, the tie is broken using FCFS, i.e. the process that arrived first is processed first.
LRTF is preemptive: a newly arrived process with a larger burst time preempts the currently running one.
Advantages of LRTF
No other process can execute until the longest task executes completely.
All the jobs or processes finish at approximately the same time.
Disadvantages of LRTF
This algorithm gives a very high average waiting time and average turn-around time
for a given set of processes.
This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the longest remaining time first.
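Because LRTF shares its structure with SRTF, a sketch only needs to flip the selection rule to pick the largest remaining time; the helper below is purely illustrative:

```python
# LRTF selection rule: among the ready processes, run the one with the
# MOST remaining burst time (the opposite of SRTF's choice).
def lrtf_pick(ready, remaining):            # remaining: {name: time_left}
    return max(ready, key=lambda n: remaining[n])

print(lrtf_pick(["P1", "P2"], {"P1": 3, "P2": 7}))  # P2
```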
8. Highest Response Ratio Next (HRRN)
In HRRN scheduling, the ready process with the highest response ratio is chosen for execution next.
Characteristics of HRRN
The criterion for HRRN is the response ratio, and the mode is non-preemptive.
HRRN is considered a modification of Shortest Job First intended to reduce the problem of starvation.
In comparison with SJF, in HRRN the CPU is allotted to the process with the highest response ratio rather than simply the process with the smallest burst time.
The response ratio is computed as:
Response Ratio = (W + S) / S
Here, W is the waiting time of the process so far and S is the burst (service) time of the process.
Advantages of HRRN
The HRRN scheduling algorithm generally gives better performance than Shortest Job First scheduling.
It reduces the waiting time of longer jobs while still encouraging shorter jobs.
Disadvantages of HRRN
HRRN is hard to implement in practice, because the burst time of every job cannot be known in advance.
This scheduling may impose extra computational overhead on the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Highest Response Ratio Next.
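The response ratio rule can be sketched in a few lines of Python; the function name and the example processes are assumptions made for illustration:

```python
# HRRN sketch: when the CPU becomes free, compute the response ratio
# (W + S) / S for every ready process and dispatch the largest one.
def hrrn_pick(ready, current_time):         # ready: list of (name, arrival, burst)
    def response_ratio(p):
        name, arrival, burst = p
        waiting = current_time - arrival    # W: time spent waiting so far
        return (waiting + burst) / burst    # (W + S) / S
    return max(ready, key=response_ratio)

# At t = 10, a long job that has waited a while can beat a newer short job.
print(hrrn_pick([("P1", 0, 8), ("P2", 9, 2)], current_time=10))  # ('P1', 0, 8)
```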
9. Multilevel Queue Scheduling
In multilevel queue scheduling, processes are divided into classes, and each class gets its own ready queue with its own priority level. Typical classes include:
System Processes: processes run by the operating system itself, generally termed system processes.
Interactive Processes: processes that involve user interaction and therefore need quick response times.
Batch Processes: batch processing is a technique in which the operating system collects programs and data together in the form of a batch before processing starts.
Advantages of Multilevel Queue Scheduling
The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of Multilevel Queue Scheduling
It suffers from the starvation problem.
It is inflexible in nature.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Multilevel Queue Scheduling.
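A minimal sketch of the multilevel queue idea, with hypothetical class names and fixed queue priorities, might look like this:

```python
from collections import deque

# Multilevel queue sketch: each process class gets its own queue with a
# fixed priority; a lower-priority queue runs only when all higher ones are empty.
queues = {                                   # illustrative class names and members
    "system": deque(["sys_logger"]),
    "interactive": deque(["editor", "browser"]),
    "batch": deque(["report_job"]),
}
priority_order = ["system", "interactive", "batch"]

def mlq_pick():
    for level in priority_order:             # always scan from the highest level down
        if queues[level]:
            return level, queues[level].popleft()
    return None

print(mlq_pick())  # ('system', 'sys_logger')
```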
10. Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue (MLFQ) CPU scheduling is like multilevel queue scheduling, but here processes can move between the queues. This makes it much more efficient than multilevel queue scheduling.
Advantages of Multilevel Feedback Queue Scheduling
It is more flexible.
It allows processes to move between different queues.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Multilevel Feedback Queue Scheduling.
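Below is a simplified Python sketch of the MLFQ idea with three queues and growing time quanta; the quantum values and the demotion-only policy are simplifying assumptions, not the full algorithm:

```python
from collections import deque

# Simplified MLFQ sketch: three queues with growing time quanta. A process
# that uses up its slice is demoted one level; queues are served top-down.
def mlfq(processes, quanta=(2, 4, 8)):       # processes: list of (name, burst), all at t=0
    levels = [deque(processes)] + [deque() for _ in quanta[1:]]
    time, completion = 0, {}
    while any(levels):
        level = next(i for i, q in enumerate(levels) if q)   # highest non-empty queue
        name, remaining = levels[level].popleft()
        run = min(quanta[level], remaining)
        time += run
        if remaining - run > 0:              # slice exhausted: demote (or stay at bottom)
            levels[min(level + 1, len(levels) - 1)].append((name, remaining - run))
        else:
            completion[name] = time
    return completion

print(mlfq([("P1", 10), ("P2", 3)]))
# {'P2': 9, 'P1': 13}
```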
Comparison between various CPU Scheduling algorithms

| Algorithm | Allocation is | Complexity | Average waiting time (AWT) | Preemption | Starvation | Performance |
|---|---|---|---|---|---|---|
| FCFS | According to the arrival time of the processes, the CPU is allocated. | Simple and easy to implement | Large | No | No | Slow performance |
| SJF | Based on the lowest CPU burst time (BT). | More complex than FCFS | Smaller than FCFS | No | Yes | Minimum average waiting time |
| LJFS | Based on the highest CPU burst time (BT). | More complex than FCFS | Depends on some measures, e.g. arrival time, process size, etc. | No | Yes | Big turnaround time |
| LRTF | Same as LJFS: the CPU is allocated based on the highest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depends on some measures, e.g. arrival time, process size, etc. | Yes | Yes | Preference is given to the longer jobs |
| SRTF | Same as SJF: the CPU is allocated based on the lowest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depends on some measures, e.g. arrival time, process size, etc. | Yes | Yes | Preference is given to the short jobs |
| RR | According to the order in which the processes arrive, with a fixed time quantum (TQ). | Complexity depends on the time quantum size | Large as compared to SJF and priority scheduling | Yes | No | Each process gets a fairly fixed share of time |
| Priority preemptive | According to the priority; the higher-priority task executes first. | Less complex | Smaller than FCFS | Yes | Yes | Good performance but contains a starvation problem |
| Priority non-preemptive | According to the priority, while monitoring newly incoming higher-priority jobs. | Less complex than priority preemptive | Smaller than FCFS | No | Yes | Most beneficial with batch systems |
| MLQ | According to the process residing in the higher-priority queue. | More complex than the priority scheduling algorithms | Smaller than FCFS | No | Yes | Good performance but contains a starvation problem |
| MFLQ | According to the process of the higher-priority queue. | The most complex, but its complexity depends on the TQ size | Smaller than all scheduling types in many cases | No | No | Good performance |
Exercise:
Consider a system where a process requires 40 time units of burst time. Multilevel feedback queue scheduling is used, the time quantum is 2 units for the top-level queue, and it is incremented by 5 units at each lower level. In which queue will the process terminate its execution?
Which of the following is false about SJF? S1: It causes minimum average waiting time. S2: It can cause starvation. (A) Only S1 (B) Only S2 (C) Both S1 and S2 (D) Neither S1 nor S2. Answer: (D). S1 is true: SJF always gives the minimum average waiting time. S2 is true: SJF can cause starvation.
Consider the following table of arrival time and burst time for three processes P0, P1 and P2. (GATE-CS-2011)
Turnaround Time = Completion Time – Arrival Time
An operating system uses the Shortest Remaining Time First (SRTF) process
scheduling algorithm. Consider the arrival times and execution times for the following
processes:
Conclusion
In conclusion, CPU scheduling is a fundamental component of operating systems, playing a
crucial role in managing how processes are allocated CPU time. Effective CPU scheduling
ensures that the system runs efficiently, maintains fairness among processes, and meets
various performance criteria. Different scheduling algorithms, such as First-Come, First-
Served (FCFS), Shortest Job Next (SJN), Priority Scheduling, and Round Robin (RR), each
have their own strengths and are suited to different types of workloads and system
requirements.
CPU scheduling is important because it helps the CPU work efficiently, ensures all tasks get a fair chance to run, and improves overall system performance.
SJN (Shortest Job Next) schedules the task with the shortest execution time first, aiming to reduce the average waiting time for all tasks.
In RR scheduling, each task gets a small, fixed amount of CPU time called a time slice, and the CPU cycles through all tasks, giving each one a turn.
Context switching is the process of saving the state of the currently running task and loading the state of the next task; this is what allows the CPU to switch between tasks.