CPU Scheduling in Operating Systems
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready queue and the run queue, which can
hold only one entry per processor core on the system; in the diagram accompanying this text, the run
queue has been merged with the CPU.
1 Running
When a new process is created, it enters the system in the running
state.
2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the
queue is typically implemented as a linked list. The dispatcher works as
follows: when a running process is interrupted, it is transferred to the
waiting queue; if the process has completed or aborted, it is discarded. In
either case, the dispatcher then selects a process from the queue to execute.
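The two-state queue model and the dispatcher described above can be sketched as follows. This is an illustrative simulation, not real OS code; the process names and helper functions are invented for the example.

```python
from collections import deque

# Hypothetical sketch of the two-state model: a single FIFO queue of
# process identifiers, and a dispatcher that selects the next one to run.
ready_queue = deque(["P1", "P2", "P3"])  # each entry points to a process

def dispatch():
    """Select the next process from the head of the queue, or None if empty."""
    return ready_queue.popleft() if ready_queue else None

def interrupt(pid, finished=False):
    """On an interrupt, return the process to the queue unless it finished."""
    if not finished:
        ready_queue.append(pid)

running = dispatch()    # "P1" starts executing
interrupt(running)      # P1 is interrupted and goes back to the queue
running = dispatch()    # the dispatcher now selects "P2"
```

A completed process is simply not re-queued (`finished=True`), which matches the "discarded" case in the text.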
Schedulers
Schedulers are special system software that handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler: Its speed is less than that of the short-term scheduler. It selects processes
from the job pool and loads them into memory for execution.
Short-Term Scheduler: Its speed is the fastest among the three. It selects, from among the
processes that are ready to execute, the next one to run.
Medium-Term Scheduler: Its speed is in between the short- and long-term schedulers. It can
re-introduce a process into memory so that its execution can be continued.
Context Switch
A context switch is the mechanism for storing and restoring the state (context) of a CPU in the
Process Control Block, so that a process's execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state
of the currently running process is stored in its process control block. After this, the state of
the process to run next is loaded from its own PCB and used to set the program counter, registers,
and so on. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be saved
and restored. To reduce context-switching time, some hardware systems employ two or more
sets of processor registers. When a process is switched out, the following information is stored
for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used registers
Process state
I/O State information
Accounting information
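The PCB fields listed above, and the save/restore step of a context switch, can be sketched like this. The field names and the `cpu` dictionary are illustrative assumptions, not tied to any real kernel's layout.

```python
from dataclasses import dataclass, field

# Hypothetical Process Control Block holding the fields listed above.
@dataclass
class PCB:
    pid: int
    program_counter: int = 0                        # program counter
    registers: dict = field(default_factory=dict)   # currently used registers
    state: str = "ready"                            # process state
    base_register: int = 0                          # base and limit registers
    limit_register: int = 0
    priority: int = 0                               # scheduling information
    open_files: list = field(default_factory=list)  # I/O state information
    cpu_time_used: float = 0.0                      # accounting information

def context_switch(current: PCB, nxt: PCB, cpu: dict):
    """Save the CPU state into the current PCB, then load the next PCB's state."""
    current.program_counter = cpu["pc"]     # save outgoing process's context
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    cpu["pc"] = nxt.program_counter         # restore incoming process's context
    cpu["regs"] = dict(nxt.registers)
    nxt.state = "running"
```

The two copy steps mirror the "store into the PCB, then load from the next PCB" sequence in the text.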
Preemptive and Non-Preemptive Scheduling
Preemptive Scheduling is
a CPU scheduling technique that works by allocating time slices of the CPU to a
given process. The time slice may or may not be enough for the process to complete. When the
burst time of a process is greater than the CPU cycle allotted to it, the process is placed back
into the ready queue and executes on its next turn. This scheduling is used when a process
switches from the running state to the ready state.
Algorithms that are backed by preemptive Scheduling are round-robin (RR), priority, SRTF
(shortest remaining time first).
In this type of scheduling, the resources (CPU cycles) are allocated to a process for
a limited amount of time.
A process can be interrupted when it is being executed.
If a process that has a high priority arrives frequently in the ‘ready’ queue, the low
priority processes may starve.
This kind of scheduling has overheads since it has to schedule multiple processes.
It is flexible in nature.
It is expensive in nature.
The CPU utilization is high in this type of scheduling.
Examples of pre-emptive scheduling include: Round Robin scheduling, Shortest
Remaining Time First scheduling.
Non-preemptive Scheduling is a CPU scheduling technique in which a process takes the resource
(CPU time) and holds it until the process terminates or is pushed to the waiting state. No process
is interrupted until it completes; only then does the processor switch to another process.
Algorithms that are based on non-preemptive Scheduling are non-preemptive priority, and
shortest Job first.
In this type of scheduling, once the resources (CPU Cycle) have been allocated to a
process, the process holds it until it completes its burst time or switches to the ‘wait’
state.
A process can’t be interrupted until it terminates itself or its time is over.
If a process with a long burst time is running on the CPU, processes with shorter CPU
burst times may starve.
It doesn’t have overhead.
It is not flexible in nature.
It is not expensive in nature.
Examples of non-pre-emptive scheduling include: First Come First Serve and Shortest
Job First.
Preemptive Scheduling | Non-Preemptive Scheduling
Resources are allocated to a process for a limited time. | Resources are used and then held by the
process until it terminates or switches to the waiting state.
The process can be interrupted, even before completion. | The process is not interrupted until its
life cycle is complete.
Starvation may be caused by the insertion of high-priority processes into the queue. | Starvation
can occur when a process with a large burst time occupies the system.
Maintaining the queue and remaining times requires storage overhead. | No such overheads are
required.
Shortest Job First (SJF): The process that has the shortest burst time is scheduled first. If two
processes have the same burst time, then FCFS is used to break the tie. It is a non-preemptive
scheduling algorithm.
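A minimal SJF sketch, assuming all processes arrive at time 0 and using submission order to break ties (the FCFS rule above); the process names and burst values are made up for illustration.

```python
# Non-preemptive SJF: sort by burst time, break ties by submission order.
def sjf(bursts):
    """bursts: list of (name, burst_time) in submission order.
    Returns {name: waiting_time}."""
    order = sorted(enumerate(bursts), key=lambda x: (x[1][1], x[0]))
    clock, waiting = 0, {}
    for _, (name, burst) in order:
        waiting[name] = clock   # time spent waiting before its first run
        clock += burst
    return waiting

print(sjf([("A", 6), ("B", 8), ("C", 7), ("D", 3)]))
# → {'D': 0, 'A': 3, 'C': 9, 'B': 16}  (D runs first, then A, C, B)
```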
Longest Job First (LJF): It is similar to the SJF scheduling algorithm, but in this scheduling
algorithm, priority is given to the process having the longest burst time. It is non-preemptive
in nature, i.e., once a process starts executing, it cannot be interrupted before completing its
execution.
Shortest Remaining Time First (SRTF): It is the preemptive mode of the SJF algorithm, in which
jobs are scheduled according to the shortest remaining time.
Longest Remaining Time First (LRTF): It is the preemptive mode of the LJF algorithm, in which
priority is given to the process with the largest remaining burst time.
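SRTF can be sketched as a unit-time simulation: at every tick, the arrived process with the least remaining burst runs, so a newly arrived short job preempts a longer one. The process tuples are illustrative.

```python
# Preemptive SJF (SRTF): re-evaluate the shortest remaining time every tick.
def srtf(procs):
    """procs: list of (name, arrival_time, burst_time).
    Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    done, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:               # CPU idle until the next arrival
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[cur] -= 1         # run the chosen process for one tick
        clock += 1
        if remaining[cur] == 0:
            done[cur] = clock
            del remaining[cur]
    return done
```

With processes A (arrives 0, burst 5) and B (arrives 1, burst 2), B preempts A at time 1 and finishes at time 3; A then resumes and finishes at time 7.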
Round Robin Scheduling: Each process is assigned a fixed time (time quantum/time slice) in a
cyclic way. It is designed especially for time-sharing systems. The ready queue is treated as a
circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each
process for a time interval of up to one time quantum. To implement Round Robin scheduling,
we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of
the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer
to interrupt after one time quantum, and dispatches the process. One of two things will then
happen. The process may have a CPU burst of less than one time quantum; in this case, the
process itself releases the CPU voluntarily, and the scheduler proceeds to the next process in
the ready queue. Otherwise, if the CPU burst of the currently running process is longer than
one time quantum, the timer goes off and causes an interrupt to the operating system. A context
switch is executed, and the process is put at the tail of the ready queue. The CPU scheduler
then selects the next process in the ready queue.
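The Round Robin mechanism described above (FIFO queue, fixed quantum, requeue on timer interrupt) can be sketched as a small simulation; the process names and quantum value are illustrative.

```python
from collections import deque

# Round Robin: a FIFO ready queue and a fixed time quantum.
def round_robin(bursts, quantum):
    """bursts: list of (name, burst_time). Returns {name: completion_time}."""
    queue = deque(bursts)
    clock, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:        # burst fits in one quantum:
            clock += remaining          # the process releases the CPU itself
            done[name] = clock
        else:                           # timer interrupt: context switch,
            clock += quantum            # back to the tail of the queue
            queue.append((name, remaining - quantum))
    return done

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
# → {'P2': 7, 'P1': 8}
```

Each iteration of the loop corresponds to one dispatch; the `else` branch is the timer-interrupt case from the text.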
Priority Based Scheduling (Non-Preemptive): In this scheduling, processes are scheduled
according to their priorities, i.e., the highest-priority process is scheduled first. If the
priorities of two processes match, they are scheduled according to arrival time. Starvation of
low-priority processes is possible here.
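A minimal sketch of non-preemptive priority scheduling, assuming all processes arrive at time 0, that a smaller number means higher priority (a convention chosen for the example), and that ties fall back to arrival order as described above.

```python
# Non-preemptive priority scheduling: sort by (priority, arrival order).
def priority_schedule(procs):
    """procs: list of (name, burst_time, priority) in arrival order,
    lower priority number = higher priority. Returns the execution order."""
    ranked = sorted(enumerate(procs), key=lambda x: (x[1][2], x[0]))
    return [name for _, (name, _, _) in ranked]

print(priority_schedule([("A", 4, 2), ("B", 3, 1), ("C", 2, 2)]))
# → ['B', 'A', 'C']  (B has the highest priority; A beats C on arrival order)
```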
Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest response
ratio is scheduled next. This algorithm avoids starvation.
Response Ratio = (Waiting Time + Burst time) / Burst time
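The formula above can be applied directly at each decision point; because waiting time appears in the numerator, a job's ratio grows the longer it waits, which is why HRRN avoids starvation. The process tuples below are illustrative.

```python
# HRRN decision rule: pick the ready process with the highest response ratio.
def response_ratio(waiting_time, burst_time):
    return (waiting_time + burst_time) / burst_time

def hrrn_pick(ready):
    """ready: list of (name, waiting_time, burst_time). Returns chosen name."""
    return max(ready, key=lambda p: response_ratio(p[1], p[2]))[0]

print(hrrn_pick([("A", 10, 5), ("B", 6, 2)]))
# → 'B'  (ratio 4.0 beats A's ratio 3.0)
```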
Multilevel Queue Scheduling: According to the priority of a process, processes are placed in
different queues. Generally, high-priority processes are placed in the top-level queue. Processes
in lower-level queues are scheduled only after the top-level queue's processes have completed. It
can suffer from starvation.
Multilevel Feedback Queue Scheduling: It allows processes to move between queues. The
idea is to separate processes according to the characteristics of their CPU bursts. If a process
uses too much CPU time, it is moved to a lower-priority queue.
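The demotion rule can be sketched with a few fixed levels; the number of levels and the quantum per level are illustrative assumptions, not part of the text.

```python
from collections import deque

# Multilevel feedback queue sketch: three levels with growing time quanta;
# a process that uses its whole quantum is demoted one level.
QUANTA = [2, 4, 8]  # quantum per level, highest priority first (illustrative)

def mlfq(bursts):
    """bursts: list of (name, burst_time). Returns completion order."""
    levels = [deque(), deque(), deque()]
    for p in bursts:
        levels[0].append(p)             # new processes enter the top queue
    finished = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty
        name, remaining = levels[lvl].popleft()
        q = QUANTA[lvl]
        if remaining <= q:
            finished.append(name)       # completes within its quantum
        else:                           # used the whole quantum: demote
            levels[min(lvl + 1, 2)].append((name, remaining - q))
    return finished
```

A CPU-bound process sinks toward the bottom queue while short, interactive bursts finish in the top levels, which is the separation by burst characteristics described above.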