
MALAWI UNIVERSITY OF BUSINESS AND APPLIED SCIENCES

Department of Computer Science and Information Systems


BIS 3
SUBMITTED TO:
D. MKAVEA

NAME:
Prince Jimu BIS/22/SS/009

COURSE
OPERATING SYSTEMS (OPS-311)

DUE DATE
7th March 2025
CPU SCHEDULING ALGORITHMS
1.0 INTRODUCTION
The CPU, being the most important component of any computing device's internal hardware, is responsible for handling every process requested by the user or triggered by another process. This means that the CPU handles thousands of processes, many of which rely on the completion or output of other processes. A single-core CPU can execute only one process at a time (multi-core CPUs can run one process per core (BBC, 2025)), and therefore the OS needs a way of scheduling the order in which processes are allowed CPU time and executed. This discussion outlines some of the most common scheduling algorithms used by the OS to ensure that the CPU is used efficiently.
2.0 DEFINITIONS
Some of the terms to be used frequently in this discussion:
2.1 Turnaround time: The total time taken from the initiation of a process to the time it completes executing (GeeksforGeeks, 2024).
2.2 Waiting time: The total amount of time a process stays in the "ready queue" while other processes are being executed (Unacademy, 2025).
2.3 Burst-time: The total time a process takes to execute.
2.4 Non-pre-emptive scheduling: A type of scheduling that allows a process to run until it finishes executing or sets itself to a waiting state for a certain event.
2.5 Pre-emptive scheduling: This is when a process in execution is "involuntarily" released from the CPU because another process with higher priority needs to be executed (Omar et al., 2021).
3.0 ALGORITHMS
3.1 ROUND ROBIN
This algorithm allocates a fixed slice of time to each process, called a "quantum".
Recall that all the processes are stored in a queue from which the CPU fetches a process to execute. As a data structure, queues are accessed by "en-queuing" an object on one end and de-queuing one object at a time on the other. The CPU de-queues a process and executes it within the pre-specified quantum. If the process takes longer than the specified quantum, the CPU preempts it and en-queues it back onto the queue (it goes back to the end of the line). This is an example of a preemptive scheduling algorithm because the CPU will stop the current process and begin another when its quantum runs out.
According to Stallings (2011), choosing the next process to be executed after one has completed or has been preempted is done on a FCFS basis. This means that, if the quantum is longer than the longest burst-time among the processes, the Round Robin algorithm behaves exactly like the FCFS algorithm.
This algorithm raises a few problems. The first comes in when the specified quantum is too small for some processes, so that they are preempted and re-queued many times before completing, adding context-switching overhead. The second arises when the quantum is too big for other processes, which creates an overhead of waiting time for everything behind them in the queue.
The solution to these problems is the introduction of Variable Quantum Round Robin, an enhancement of RR in which the quantum size is variable: a process is given a quantum based on its requirements (Mohammed & Kotp, 2023). This solves the waiting-time overhead by allowing processes with shorter burst-times to execute within a reasonable quantum that is just enough for the specific process, which significantly reduces the average waiting time for each process.
Variable Quantum Round Robin also solves the problem of large processes being repeatedly preempted by an insufficient fixed quantum, since in this case larger or more demanding processes are given a reasonable quantum that is enough for them to execute once and complete.
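The fixed-quantum behaviour described above can be sketched with a small simulation (the process set and quantum are hypothetical, and all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate fixed-quantum Round Robin for processes that all arrive
    at time 0. Returns {process name: waiting time}."""
    queue = deque(bursts.items())  # (name, remaining burst)
    time = 0
    waiting = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for at most one quantum
        time += run
        if remaining > run:
            # quantum exhausted: preempt and go to the back of the line
            queue.append((name, remaining - run))
        else:
            # waiting time = completion time minus the process's own burst
            waiting[name] = time - bursts[name]
    return waiting

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
```

Making the quantum larger than the longest burst (here, 5) makes this behave exactly like FCFS, as noted above.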
3.2 FIRST COME, FIRST SERVED:
FCFS is an easy-to-understand algorithm since it works on the same principle as a normal queue. The first process to request the CPU (the first to arrive in the ready queue) is processed first and the last to arrive is processed last, regardless of the burst-time or requirements of the process. FCFS is an example of a non-pre-emptive scheduling algorithm.
The main advantage of this algorithm is that there is less scheduling overhead compared to algorithms such as Round Robin (RR), since there is no pre-specified quantum and no preemption. With FCFS, a process executes until it is done and then exits the CPU immediately, so no time is lost to context switches between partially finished processes.
The main disadvantage is that, if a process with a very high burst-time enters the queue first, every subsequent process, regardless of its burst-time, must wait for the long process to complete, even if it would have finished much faster on its own (this is sometimes called the convoy effect).
Another notable disadvantage is that FCFS favors processor-bound processes (Stallings, 2011). Stallings (2011) describes the difference in treatment between processes that mostly need the processor (processor-bound) and those that need I/O more than the processor (I/O-bound). For example, suppose a processor-bound process is in the ready queue and a few I/O-bound processes join behind it. The processor may take a long time executing the processor-bound process, leaving the I/O-bound processes stuck in the ready queue and the I/O devices idle, since the processes that require those devices have not yet run. This brings inefficiency in the use of the CPU, but mostly of the I/O devices.
3.3 SHORTEST JOB FIRST
This scheduling algorithm estimates the burst-times of the processes in the ready queue and uses these estimates to decide which process executes first: the process with the shortest estimated burst-time. Some sources refer to it as "Shortest Process Next (SPN)" (Mohammed & Kotp, 2023).
SJF reduces the average waiting time for processes, since processes with shorter burst-times are executed first and free up the CPU sooner than processes with higher burst-times would. However, this means that processes with longer burst-times may stay in the ready queue for a long time if shorter processes keep being created.
This algorithm can be preemptive or non-preemptive (Abraham, S. et. al., 2018).
In the preemptive version, the CPU picks the shortest estimated burst-time in the ready queue and executes that process until it finishes or until another process enters the queue whose burst-time is lower than the time remaining for the current process. In that case, the CPU preempts and executes the new process first. This is called the shortest-remaining-time-first algorithm.
The non-preemptive version is the one that runs a process till it finishes then looks for the next
shortest burst-time in the queue.
The following example shows a scenario where preemptive SJF is necessary and better than non-
preemptive:
Process    Arrival Time (ms)    Estimated Burst-time (ms)
P1         0                    8
P2         1                    4
P3         2                    9
P4         3                    5

Execution order: P1 (1 ms) | P2 (4 ms) | P4 (5 ms) | P1 (7 ms remaining) | P3 (9 ms)

Example extracted from Abraham, S., Greg, G., & Peter Baer, G. (2018). Operating System Concepts (10th ed.), p. 209.
Explanation: P1 is executed first, as it reaches the ready queue first. After one millisecond, P2 arrives and is executed, since its burst-time (4 ms) is lower than the time remaining for P1 (7 ms). After P2 finishes, all four processes are in the ready queue and the next-shortest burst-time is 5 ms (P4), so P4 is executed. Then P1 (7 ms remaining) is completed before processing P3 (9 ms). According to Abraham et al. (2018), this schedule has an average waiting time of 6.5 ms, whereas non-preemptive SJF would give 7.75 ms. This shows that, in this case, preemptive SJF is the better scheduling algorithm.
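The worked example above can be checked with a millisecond-by-millisecond sketch of shortest-remaining-time-first (a simplified simulation, not an OS implementation); it reproduces the 6.5 ms average:

```python
def srtf_average_waiting(processes):
    """Shortest-remaining-time-first, simulated 1 ms at a time.
    processes: {name: (arrival, burst)}. Returns the average waiting time."""
    remaining = {n: b for n, (a, b) in processes.items()}
    finish = {}
    time = 0
    while remaining:
        # processes that have arrived and still need CPU time
        ready = [n for n in remaining if processes[n][0] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[current] -= 1  # run the chosen process for 1 ms
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    # waiting time = finish - arrival - burst, averaged over all processes
    waits = [finish[n] - a - b for n, (a, b) in processes.items()]
    return sum(waits) / len(waits)

avg = srtf_average_waiting({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(avg)  # 6.5
```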
Burst-time Prediction: Throughout the discussion of SJF we have been referring to an
“estimated burst-time”. The question could be; how does the CPU estimate or predict the length
of the next burst and in-turn predict the burst-time of the processes in the ready queue?
This is done using an exponential average of the measured lengths of previous CPU bursts (Abraham, S. et al., 2018):

τn+1 = α · tn + (1 − α) · τn

This formula has the variables:
τn+1 : the predicted length of the next CPU burst
tn : the measured length of the nth (most recent) CPU burst
α : a weighting factor between 0 and 1
τn : the previous prediction (the stored past history)
The value of α dictates the weight given to each variable (the most recent burst versus the stored history). When α is 1, the predicted burst-time equals the length of the most recent burst; when α is 0, the predicted burst-time equals the old prediction. This predicted burst-time is what is used in estimating the length of processes in the ready queue.
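As a sketch, the exponential-average prediction can be computed as follows (α = 0.5 and the burst values are illustrative, not from the source):

```python
def predict_next_burst(t_n, tau_n, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n,
    where t_n is the most recent measured burst and tau_n the old prediction."""
    return alpha * t_n + (1 - alpha) * tau_n

# with alpha = 0.5, each new prediction is the mean of the old prediction
# and the burst just measured
tau = 10.0                      # initial guess for the first prediction
for measured in [6, 4, 6, 4]:   # observed CPU bursts, oldest first
    tau = predict_next_burst(measured, tau)
print(tau)  # 5.0
```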
3.4 PRIORITY SCHEDULING
Priority Scheduling assigns each process a priority value as it enters the ready queue. The process with the highest priority among those in the ready queue is executed first, and the one with the next-highest priority is executed after that. Abraham et al. (2018) define two types of priority: internal and external. Internal priority is set by the operating system from a measurable quantity such as memory requirements or average CPU bursts. External priority values are set by factors not controlled by the OS, such as a process being externally sponsored or funded.
One point to note is that priority values are defined differently in different machines and books.
For some, a higher value means high priority and for some a lower value is a higher priority. In
the example given, Abraham et al (2018) used lower values to represent higher priority.
An example of priority scheduling in action:
Process    Burst-time (ms)    Priority
P1         10                 3
P2         1                  1
P3         2                  4
P4         1                  5
P5         5                  2

Execution order: P2 | P5 | P1 | P3 | P4

Example extracted from Abraham, S., Greg, G., & Peter Baer, G. (2018). Operating System Concepts (10th ed.), pp. 212-213.
Explanation: Unlike the SJF example above, here a process is run based on its priority value regardless of its burst-time. Therefore, P2 runs first since it has the highest priority (the lowest value), then P5, and so on down to P4 with the lowest priority.
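The execution order in the example can be reproduced with a short sketch of non-preemptive priority scheduling (all processes assumed to arrive at time 0; lower value = higher priority, as in the example):

```python
def priority_order(processes):
    """Non-preemptive priority scheduling with all processes arriving at
    time 0. processes: {name: (burst, priority)}; lower value means
    higher priority, following the book's convention. Returns the
    execution order as a list of names."""
    return [name for name, (burst, prio)
            in sorted(processes.items(), key=lambda item: item[1][1])]

order = priority_order({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                        "P4": (1, 5), "P5": (5, 2)})
print(order)  # ['P2', 'P5', 'P1', 'P3', 'P4']
```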
There are two versions of priority scheduling: one pre-emptive and one non-pre-emptive. The non-pre-emptive version allows the running process to complete execution even if a higher-priority process enters the queue. The pre-emptive version suspends the lower-priority process, returns it to the ready queue, and executes the newly arrived process, given that it has higher priority.
Priority scheduling introduces a "starvation" problem. This occurs when a low-priority process in the ready queue never gets CPU time because higher-priority processes keep joining the queue. This can cause certain programs to crash, or even the system to crash if a system-initiated process is never executed.
This problem can be solved in two ways. The first is aging: if a process stays in the ready queue too long, its priority is gradually increased at regular intervals. For example, if a process enters the queue with a priority value of 7 (where lower values mean higher priority), every second it waits could gradually reduce that value; after a while it reaches 0, the process has the highest priority, and it is executed.
A second way is to combine priority scheduling with Round Robin, handling same-priority processes using the Round Robin algorithm.
3.5 MULTI-LEVEL QUEUING
So far, all the algorithms discussed keep all processes in one queue. In this algorithm, however, processes are separated into multiple queues on a priority basis: high-priority processes are placed in one queue, the next priority level in its own queue, and so on. The CPU then executes the processes from the highest-priority queue down to the lowest.
Inside a particular queue, the processes can be executed using any of the other algorithms such as
Round Robin or FCFS since within that queue, all processes have the same priority level.
An example of MLQ from Abraham et al. (2018) separates processes by priority into the following queues, from highest to lowest: real-time processes, system processes, interactive processes, and batch processes.
Real-time processes will be executed first, as opposed to interactive processes, for example. If a process from a lower-priority queue was being executed (meaning that all queues above it were empty) and a process belonging to a higher-priority queue is initiated, the scheduler preempts and processes the higher-priority one first. For example, if a batch process was being executed and a process enters the system-process queue, the scheduler would preempt and run the system process first.
3.6 MULTI-LEVEL FEEDBACK QUEUING
In normal MLQ, processes cannot change queues, since the queues can be defined by the nature of the process, which cannot change. In MLFQ, processes can change queues. This scheduling algorithm tries to solve not only the problems of MLQ, such as its rigidity, but also problems from other scheduling algorithms, such as starvation.
MLFQ groups processes into different queues based on their burst-time. Processes with a lower burst-time are placed in a higher-priority queue; the higher the burst-time, the lower the queue.
If a process is placed in the highest-priority queue (say queue A) with a quantum of 8 milliseconds, it is expected to finish within that time. If not, the process is moved to a lower-priority queue (queue B), which has a longer quantum. If the process does not complete there either, it is moved to the next queue (queue C), where, instead of Round Robin, processes are executed using FCFS. The CPU works through the queues in order.
To prevent starvation, if a process stays in a lower queue for some specified time, aging is
implemented and it can move to a higher priority queue.
The main disadvantage of MLFQ is that it is complex and may be hard to implement.
4.0 DISCUSSION
There isn't one perfect scheduling algorithm; as seen, each has its own disadvantages. Which algorithm to implement depends on the type of system. For example, if the system needs to process tasks in the order they are initiated, without preemption, then FCFS would be the best option. If the system involves user-generated events that change the order of execution, such as video games, MLFQ should be implemented.
Algorithm                     Time-Allocation          Starvation Problem    Complexity
Round Robin                   Specified quantum        No                    Low
FCFS                          Until completion         No                    Low
SJF                           Until completion         Yes                   Low
Priority Scheduling           Until pre-empted         Yes                   Medium
Multi-Level Queue             Per-queue algorithm      Yes                   High
Multi-Level Feedback Queue    Per-queue algorithm      No                    High

5.0 CONCLUSION
From the study conducted in this discussion, the Multi-Level Feedback Queue scheduling algorithm is the most efficient. Despite the complexity of implementing it, it provides a fairer way of executing processes than the other algorithms.
6.0 References
Abraham, S., Greg, G., & Peter Baer, G. (2018). Operating System Concepts (10th ed.).
Ali, S. M., Alshahrani, R. F., Hadadi, A. H., Alghamdi, T. A., Almuhsin, F. H., & El-Sharawy, E. E. (2021). A review on the CPU scheduling algorithms: Comparative study. International Journal of Computer Science & Network Security, 21(1), 19-26.
BBC. (2025). The CPU and the fetch-execute cycle. https://www.bbc.co.uk/bitesize/guides/zws8d2p/revision/2
GeeksforGeeks. (2024). Difference between Turn Around Time (TAT) and Waiting Time (WT) in CPU scheduling. https://www.geeksforgeeks.org/difference-between-turn-around-time-tat-and-waiting-time-wt-in-cpu-scheduling/
Mohammed, A., & Kotp, Y. (n.d.). CPU-Scheduling-Algorithms. https://github.com/yousefkotp/CPU-Scheduling-Algorithms
Omar, H. K., Jihad, K. H., & Hussein, S. F. (2021). Comparative analysis of the essential CPU scheduling algorithms. Bulletin of Electrical Engineering and Informatics, 10(5), 2742-2750.
Stallings, W. (2011). Operating Systems: Internals and Design Principles. Prentice Hall Press.
Unacademy. (2025). Turnaround time and waiting time in CPU scheduling. https://unacademy.com/content/cbse-class-11/difference-between/turnaround-time-and-waiting-time-in-cpu-scheduling/
