Figure: A process alternates between CPU bursts and I/O bursts during its execution.
When to Schedule
An important facet of scheduling is to determine when the scheduler should make scheduling decisions. The following circumstances may require the scheduler to make scheduling decisions:
• When a process switches from running to waiting state. This situation may occur in case the process has to wait for I/O or for the termination of its child process, or for some other reason. In such situations, the scheduler has to select some ready process for execution.
• When a process switches from running to ready state due to the occurrence of an interrupt. In such situations, the scheduler may decide to run a process from the ready queue. If the interrupt was caused by some I/O device that has now completed its task, the scheduler may choose the process that was blocked and waiting for that I/O.
• When a process switches from waiting state to ready state, for example, in cases where the process has completed its I/O operation. In such situations, the scheduler may select either the process that has now come to the ready state, or the current process, which may be continued.
• When a process terminates and exits the system. In this case, the scheduler has to select a process for execution from the set of ready processes.
(c) Dispatcher
The CPU scheduler only selects a process to be executed next on the CPU; it cannot assign the CPU to the selected process. The function of setting up the execution of the selected process on the CPU is performed by another module of the operating system, known as the dispatcher. The dispatcher involves the following three steps to perform this function:
1. Context switching is performed. The kernel saves the context of the currently running process and restores the saved state of the process selected by the CPU scheduler. In case the process selected by the short-term scheduler is new, the kernel loads its context.
2. The system switches from the kernel mode to the user mode, as a user process is to be executed.
3. The execution of the user process selected by the CPU scheduler is started by transferring control either to the instruction that was supposed to be executed at the time the process was interrupted, or to the first instruction, if the process is going to be executed for the first time after its creation.
Note: The amount of time required by the dispatcher to suspend execution of one process and resume execution of another process is known as dispatch latency. Low dispatch latency implies a faster start of process execution.
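To make the three steps concrete, the following C sketch mimics a dispatcher at a very high level. The PCB structure, the global current pointer and the function names are purely illustrative assumptions; a real dispatcher works on hardware registers and mode bits inside the kernel, not on ordinary C variables.

#include <stdio.h>

/* Hypothetical, highly simplified process control block (PCB). */
typedef struct {
    int pid;
    int program_counter;    /* where execution should resume           */
    int is_new;             /* 1 if the process has never run before   */
} pcb_t;

static pcb_t *current;      /* process that was running on the CPU     */

/* Step 1: context switching: save the old context, restore the new one. */
static void context_switch(pcb_t *next)
{
    if (current != NULL)
        printf("Saving context of P%d at PC=%d\n",
               current->pid, current->program_counter);
    if (next->is_new) {
        printf("Loading fresh context of P%d\n", next->pid);
        next->program_counter = 0;   /* start from the first instruction */
        next->is_new = 0;
    } else {
        printf("Restoring saved context of P%d\n", next->pid);
    }
    current = next;
}

/* Step 2: switch from kernel mode to user mode (modelled as a message). */
static void switch_to_user_mode(void)
{
    printf("Switching from kernel mode to user mode\n");
}

/* Step 3: transfer control to the point where the process should resume. */
static void transfer_control(void)
{
    printf("Resuming P%d at instruction %d\n",
           current->pid, current->program_counter);
}

/* The dispatcher simply performs the three steps in order. */
static void dispatch(pcb_t *selected)
{
    context_switch(selected);
    switch_to_user_mode();
    transfer_control();
}

int main(void)
{
    pcb_t p1 = {1, 0, 1}, p2 = {2, 40, 0};
    dispatch(&p1);   /* P1 runs for the first time        */
    dispatch(&p2);   /* P2 resumes from its saved context */
    return 0;
}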
Minimizing the average response time is not a good criterion; rather, it is the variance in the response time of the processes that should be minimized. This is because it is not desirable to have a process with a long response time as compared to other processes.
4.4 SCHEDULING ALGORITHMS
A wide variety of algorithms are used for CPU scheduling. These scheduling algorithms fall into two categories, namely, non-preemptive and preemptive.
Non-preemptive scheduling algorithms: Once the CPU is allocated to a process, it cannot be withdrawn until the process voluntarily releases it (in case the process has to wait for I/O or some other event) or the process terminates. In other words, we can say the decision to schedule a process is made only when the currently running process either switches to the waiting state or terminates. In both cases, the CPU executes some other process from the set of ready processes. Some examples of non-preemptive scheduling algorithms are first come first served (FCFS), shortest job first (SJF), and highest response ratio next (HRN) scheduling.
Example 4.1 (FCFS scheduling): Consider four processes P1, P2, P3 and P4 that enter the ready queue at t = 0, 2, 3 and 5 ms with CPU bursts of 15, 6, 7 and 5 ms, respectively. Under FCFS scheduling, the processes are scheduled as depicted in the following Gantt chart.
Gantt chart: P1 from 0 to 15, P2 from 15 to 21, P3 from 21 to 28, P4 from 28 to 33
ready
Initially, pt enters the queue at t = O and CPU is allocated to it. While
Pa, P,
PI is executing, and p 4 enter the ready queue at t = 2, t = 3, and t 5,
entered
respectively. When PI completes, CPU is allocated to ? 2 as it has
before completes,
P, and P4. When P 2 P, gets the CPU after which gets the
CPU.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (15 - 0) = 15 ms
Turnaround time for P2 = (21 - 2) = 19 ms
Turnaround time for P3 = (28 - 3) = 25 ms
Turnaround time for P4 = (33 - 5) = 28 ms
Average turnaround time = (15 + 19 + 25 + 28)/4 = 21.75 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (15 - 15) = 0 ms
Waiting time for P2 = (19 - 6) = 13 ms
Waiting time for P3 = (25 - 7) = 18 ms
Waiting time for P4 = (28 - 5) = 23 ms
Average waiting time = (0 + 13 + 18 + 23)/4 = 13.5 ms
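The figures computed above can be cross-checked with a short C program. It uses the arrival times (0, 2, 3, 5 ms) and CPU bursts (15, 6, 7, 5 ms) of Example 4.1; the array-based layout is only an illustrative sketch of FCFS, not a prescribed implementation.

#include <stdio.h>

#define N 4

int main(void)
{
    /* Arrival times and CPU bursts taken from Example 4.1. */
    int arrival[N] = {0, 2, 3, 5};
    int burst[N]   = {15, 6, 7, 5};
    int completion[N], turnaround[N], waiting[N];
    double total_tat = 0.0, total_wt = 0.0;
    int time = 0;

    /* Processes are indexed in their order of arrival, so FCFS
       simply runs them one after another. */
    for (int i = 0; i < N; i++) {
        if (time < arrival[i])      /* CPU idles until the process arrives */
            time = arrival[i];
        time += burst[i];           /* run the process to completion       */
        completion[i] = time;
        turnaround[i] = completion[i] - arrival[i];
        waiting[i]    = turnaround[i] - burst[i];
        total_tat += turnaround[i];
        total_wt  += waiting[i];
        printf("P%d: turnaround = %2d ms, waiting = %2d ms\n",
               i + 1, turnaround[i], waiting[i]);
    }
    printf("Average turnaround time = %.2f ms\n", total_tat / N);
    printf("Average waiting time    = %.2f ms\n", total_wt / N);
    return 0;
}

Running this sketch reproduces the averages of 21.75 ms and 13.5 ms computed above.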
The performance of the FCFS scheduling algorithm largely depends on the order of arrival of processes in the ready queue, that is, on whether a process having a long CPU burst enters before processes having short CPU bursts or vice versa. To illustrate this, assume that the processes (shown in Example 4.1) enter the ready queue in the order P4, P2, P3 and P1. Now, the processes will be scheduled as shown in the following Gantt chart.
Gantt chart: P4 from 0 to 5, P2 from 5 to 11, P3 from 11 to 18, P1 from 18 to 33
Using the above formulae, the average turnaround time and average waiting time can be computed as:
Average turnaround time = [(5 - 0) + (11 - 2) + (18 - 3) + (33 - 5)]/4 = 14.25 ms
Average waiting time = [(5 - 5) + (9 - 6) + (15 - 7) + (28 - 15)]/4 = 6 ms
It is clear that if processes having shorter CPU bursts execute before those having longer CPU bursts, the average waiting time may reduce significantly.
Advantages
• It is easy to understand and implement, as processes are simply added at the end of the queue and removed from the front of it; no process in between these two points in the queue needs to be accessed.
• It is well suited for batch systems, where longer time periods for each process are often acceptable.
Disadvantages
• The average waiting time is not minimal. Therefore, this scheduling algorithm is never recommended where performance is a major issue.
• It reduces the CPU and I/O device utilization under some circumstances. For example, assume that there is one long CPU-bound process and many short I/O-bound processes in the ready queue. It may happen that while the CPU-bound process is executing, the I/O-bound processes complete their I/O and come to the ready queue for execution. There they have to wait till the CPU-bound process releases the CPU; besides, the I/O devices also remain idle during this time. When the CPU-bound process needs to perform I/O, it comes to the device queue and the CPU is allocated to the I/O-bound processes. As they have short CPU bursts, they execute quickly and come back to the device queue, thereby leaving the CPU idle. Then the CPU-bound process enters the ready queue and is allocated the CPU, which again results in the I/O-bound processes waiting in the ready queue at some point of time. This happens again and again until the CPU-bound process is done, thus resulting in low CPU and I/O device utilization.
• It is not suitable for time sharing systems, where each process needs to get the CPU for an equal amount of time.
Example: Consider four processes P1, P2, P3 and P4 that enter the ready queue at t = 0, 1, 3 and 4 ms with CPU bursts of 7, 5, 2 and 3 ms, respectively. How will these processes be scheduled according to the SJF scheduling algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart:
Gantt chart: P1 from 0 to 7, P3 from 7 to 9, P4 from 9 to 12, P2 from 12 to 17
Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. While it is executing, P2, P3 and P4 enter the queue at t = 1, t = 3 and t = 4, respectively. When P1 completes, the CPU is allocated to P3 as it has the shortest CPU burst among the three processes. When P3 completes, the CPU is allocated first to P4 and then to P2.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (7 - 0) = 7 ms
Turnaround time for P2 = (17 - 1) = 16 ms
Turnaround time for P3 = (9 - 3) = 6 ms
Turnaround time for P4 = (12 - 4) = 8 ms
Average turnaround time = (7 + 16 + 6 + 8)/4 = 9.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (7 - 7) = 0 ms
Waiting time for P2 = (16 - 5) = 11 ms
Waiting time for P3 = (6 - 2) = 4 ms
Waiting time for P4 = (8 - 3) = 5 ms
Average waiting time = (0 + 11 + 4 + 5)/4 = 5 ms
Advantages
• It reduces the variance in waiting and turnaround times. In fact, it is optimal with respect to average waiting time if all processes are available at the same time. This is due to the fact that short processes are made to run before longer ones, which decreases the waiting time for short processes and increases the waiting time for long processes. However, the reduction in waiting time is more than the increment and thus, the average waiting time decreases.
Disadvantages
• It is difficult to implement as it needs to know the length of the CPU burst of processes in advance. In practice, it is difficult to obtain prior knowledge of the required processing times of processes. Many systems expect users to provide estimates of the CPU bursts of processes, which may not always be correct.
• It does not favour processes having longer CPU bursts. This is because long processes will not be allowed to get the CPU as long as short processes continue to enter the ready queue. This results in starvation of long processes.
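The heart of non-preemptive SJF is the selection step: among the processes that have already arrived, pick the one with the shortest CPU burst. The C sketch below reproduces the schedule and the averages of the example above; it assumes, as the algorithm itself must, that the CPU bursts are known in advance, and all names in it are illustrative.

#include <stdio.h>

#define N 4

/* Pick, among processes that have arrived and not yet finished,
   the one with the shortest CPU burst (ties broken by index order). */
static int pick_shortest_job(const int arrival[], const int burst[],
                             const int done[], int time)
{
    int best = -1;
    for (int i = 0; i < N; i++) {
        if (done[i] || arrival[i] > time)
            continue;
        if (best == -1 || burst[i] < burst[best])
            best = i;
    }
    return best;                 /* -1 means no process is ready yet */
}

int main(void)
{
    /* Data from the SJF example above. */
    int arrival[N] = {0, 1, 3, 4};
    int burst[N]   = {7, 5, 2, 3};
    int done[N]    = {0};
    int time = 0, finished = 0;
    double total_tat = 0.0, total_wt = 0.0;

    while (finished < N) {
        int p = pick_shortest_job(arrival, burst, done, time);
        if (p == -1) {           /* CPU idle: wait for the next arrival */
            time++;
            continue;
        }
        time += burst[p];        /* non-preemptive: run to completion   */
        done[p] = 1;
        finished++;
        int tat = time - arrival[p];
        int wt  = tat - burst[p];
        total_tat += tat;
        total_wt  += wt;
        printf("P%d runs until t = %2d (turnaround %2d ms, waiting %2d ms)\n",
               p + 1, time, tat, wt);
    }
    printf("Average turnaround time = %.2f ms\n", total_tat / N);
    printf("Average waiting time    = %.2f ms\n", total_wt / N);
    return 0;
}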
Example (shortest remaining time next, SRTN, scheduling): Consider again the processes of the previous example (arrival times 0, 1, 3 and 4 ms; CPU bursts of 7, 5, 2 and 3 ms). The processes will be scheduled as depicted in the following Gantt chart.
Gantt chart: P1 from 0 to 1, P2 from 1 to 3, P3 from 3 to 5, P2 from 5 to 8, P4 from 8 to 11, P1 from 11 to 17
Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. While it is executing, at time t = 1, P2 with a CPU burst of 5 ms enters the queue. At that time the remaining CPU burst of P1 is 6 ms, which is greater than that of P2. Therefore, the CPU is taken back from P1 and allocated to P2. During execution of P2, P3 enters at t = 3 with a CPU burst of 2 ms. Again the CPU is switched from P2 to P3, as the remaining CPU burst of P2 at t = 3 is 3 ms, which is greater than that of P3. When, at time t = 4, P4 with a CPU burst of 3 ms enters the queue, the CPU is not allocated to it because at that time the remaining CPU burst of the currently running process (that is, P3) is 1 ms, which is shorter than that of P4. When P3 completes, there are three processes P1 (6 ms), P2 (3 ms) and P4 (3 ms) in the queue. To break the tie between P2 and P4, the scheduler takes into consideration their arrival order and the CPU is allocated first to P2, then to P4 and finally, to P1.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (17 - 0) = 17 ms
Turnaround time for P2 = (8 - 1) = 7 ms
Turnaround time for P3 = (5 - 3) = 2 ms
Turnaround time for P4 = (11 - 4) = 7 ms
Average turnaround time = (17 + 7 + 2 + 7)/4 = 8.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (17 - 7) = 10 ms
Waiting time for P2 = (7 - 5) = 2 ms
Waiting time for P3 = (2 - 2) = 0 ms
Waiting time for P4 = (7 - 3) = 4 ms
Average waiting time = (10 + 2 + 0 + 4)/4 = 4 ms
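The SRTN schedule above can be reproduced with a millisecond-by-millisecond simulation: before every tick, the ready process with the smallest remaining burst is selected, and ties are broken in favour of the earlier arrival, as described in the example. The following C sketch, with the example's data hard-coded, is illustrative only.

#include <stdio.h>

#define N 4

int main(void)
{
    /* Data from the SRTN example above. */
    int arrival[N]    = {0, 1, 3, 4};
    int burst[N]      = {7, 5, 2, 3};
    int remaining[N]  = {7, 5, 2, 3};
    int completion[N] = {0};
    int finished = 0;
    double total_tat = 0.0, total_wt = 0.0;

    /* Advance time one millisecond at a time; before every tick,
       re-select the ready process with the shortest remaining burst. */
    for (int time = 0; finished < N; time++) {
        int p = -1;
        for (int i = 0; i < N; i++) {
            if (arrival[i] > time || remaining[i] == 0)
                continue;
            /* strict '<' keeps the earlier-arrived process on a tie
               (processes are indexed in arrival order here)          */
            if (p == -1 || remaining[i] < remaining[p])
                p = i;
        }
        if (p == -1)
            continue;            /* no ready process: CPU stays idle */
        if (--remaining[p] == 0) {
            completion[p] = time + 1;
            finished++;
        }
    }

    for (int i = 0; i < N; i++) {
        int tat = completion[i] - arrival[i];
        int wt  = tat - burst[i];
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: turnaround = %2d ms, waiting = %2d ms\n", i + 1, tat, wt);
    }
    printf("Average turnaround time = %.2f ms\n", total_tat / N);
    printf("Average waiting time    = %.2f ms\n", total_wt / N);
    return 0;
}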
Advantages
• A long process that is near completion may be favoured over short processes entering the system. This results in an improvement in the turnaround time of the long process.
Disadvantages
• Like SJF, it also requires an estimate of the next CPU burst of a process in advance.
• Favouring a long process that is nearing completion over the several short processes entering the system may adversely affect the turnaround times of the short processes.
• It favours only those long processes that are just about to complete and not those that have just started their operation. Thus, starvation of long processes may still occur.
Example (priority scheduling): Consider four processes P1, P2, P3 and P4 that enter the ready queue at t = 0, 1, 3 and 4 ms with CPU bursts of 7, 4, 3 and 2 ms and priorities 4, 3, 1 and 2, respectively. Assuming that the lower priority number represents the higher priority, how will these processes be scheduled according to a non-preemptive as well as a preemptive priority scheduling algorithm? Compute the average waiting time and average turnaround time in both cases.
Solution:
Non-preemptive priority scheduling algorithm
The processes will be scheduled as depicted in the following Gantt chart.
Gantt chart: P1 from 0 to 7, P3 from 7 to 10, P4 from 10 to 12, P2 from 12 to 16
Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other
processes in the queue. While it is executing, P2, P3 and P4 enter the queue at t = 1, t
= 3, and t = 4, respectively. When the CPU becomes free, that is, at t = 7, it is
allocated to P3 because it has the highest priority (that is, 1) among the three
processes. When P3 completes, CPU is allocated to the next lower priority process,
that is, P4 and finally, the lowest priority process P2 is executed.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (7 - 0) = 7 ms
Turnaround time for P2 = (16 - 1) = 15 ms
Turnaround time for P3 = (10 - 3) = 7 ms
Turnaround time for P4 = (12 - 4) = 8 ms
Average turnaround time = (7 + 15 + 7 + 8)/4 = 9.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (7 - 7) = 0 ms
Waiting time for P2 = (15 - 4) = 11 ms
Waiting time for P3 = (7 - 3) = 4 ms
Waiting time for P4 = (8 - 2) = 6 ms
Average waiting time = (0 + 11 + 4 + 6)/4 = 5.25 ms
Preemptive priority scheduling algorithm
The processes will be scheduled as depicted in the following Gantt chart.
Gantt chart: P1 from 0 to 1, P2 from 1 to 3, P3 from 3 to 6, P4 from 6 to 8, P2 from 8 to 10, P1 from 10 to 16
Initially, P1 of priority 4 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. At t = 1, P2, whose priority (3) is higher than that of the currently running process P1, enters the queue. Therefore, P1 is preempted (with a remaining CPU burst of 6 ms) and the CPU is allocated to P2. During its execution, P3 enters the queue at t = 3 and the CPU is switched to it, since P3 enjoys higher priority than P2. However, when at time t = 4, P4 with priority 2 enters the queue, the CPU is not assigned to it because it has lower priority than the currently running process P3. When P3 completes, there are three processes P1, P2, and P4 in the ready queue with priorities 4, 3, and 2, respectively. The CPU is allocated first to P4, then to P2 and finally to P1.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (16 - 0) = 16 ms
Turnaround time for P2 = (10 - 1) = 9 ms
Turnaround time for P3 = (6 - 3) = 3 ms
Turnaround time for P4 = (8 - 4) = 4 ms
Average turnaround time = (16 + 9 + 3 + 4)/4 = 8 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (16 - 7) = 9 ms
Waiting time for P2 = (9 - 4) = 5 ms
Waiting time for P3 = (3 - 3) = 0 ms
Waiting time for P4 = (4 - 2) = 2 ms
Average waiting time = (9 + 5 + 0 + 2)/4 = 4 ms
Advantages
• Important processes are never made to wait merely because less important processes are
under execution.
Disadvantages
• It suffers from the problem of starvation of lower priority processes, since the continuous arrival of higher priority processes will indefinitely prevent lower priority processes from acquiring the CPU. One possible solution to this problem is aging, which is a process of gradually raising the priority of a low priority process in step with the increase in its waiting time. If the priority of a low priority process is increased after each fixed time interval, it is ensured that at some point of time it will become the highest priority process and will finally get executed.
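A minimal sketch of aging is shown below. The aging interval of 10 ms and the priority step of 1 are arbitrary values chosen only for illustration; real systems pick their own parameters.

#include <stdio.h>

#define AGING_INTERVAL 10   /* raise priority after every 10 ms of waiting */

typedef struct {
    int pid;
    int priority;           /* smaller number = higher priority            */
    int waiting_time;       /* time spent waiting in the ready queue (ms)  */
} process_t;

/* Called periodically for every process sitting in the ready queue:
   each time a process has waited another AGING_INTERVAL milliseconds,
   its priority number is decreased (i.e. its priority is raised). */
static void age_process(process_t *p, int elapsed_ms)
{
    p->waiting_time += elapsed_ms;
    while (p->waiting_time >= AGING_INTERVAL && p->priority > 1) {
        p->waiting_time -= AGING_INTERVAL;
        p->priority--;
        printf("P%d aged: new priority = %d\n", p->pid, p->priority);
    }
}

int main(void)
{
    process_t low = {5, 7, 0};       /* a low priority process            */
    for (int t = 0; t < 60; t += 5)  /* it keeps waiting for 60 ms        */
        age_process(&low, 5);
    return 0;
}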
4.4.5 Highest Response Ratio Next (HRN) Scheduling
The highest response ratio next scheduling is a non-preemptive scheduling algorithm that schedules the processes according to their response ratio. Whenever the CPU becomes available, the process having the highest value of response ratio among all the ready processes is scheduled next. The response ratio of a process in the queue is computed by using the following equation:
Response ratio = (waiting time + CPU burst time) / CPU burst time
Example: Consider the following set of processes:
Process: P1, P2, P3, P4
Arrival time (ms): 0, 2, 3, 4
CPU burst (ms): 3, 4, 5, 2
How will these processes be scheduled according to HRN scheduling algorithm? Compute the
average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart.
Gantt chart: P1 from 0 to 3, P2 from 3 to 7, P4 from 7 to 9, P3 from 9 to 14
Initially, P1 enters the ready queue at t = 0 and the CPU is allocated to it. By the time P1 completes, P2 and P3 have arrived at t = 2 and t = 3, respectively. At t = 3, the response ratio of P2 is ((3-2)+4)/4 = 1.25 and that of P3 is 1, as it has just arrived. Therefore, P2 is scheduled next. During execution of P2, P4 arrives at t = 4. When P2 completes at t = 7, the response ratio of P3 is ((7-3)+5)/5 = 1.8 while that of P4 is ((7-4)+2)/2 = 2.5, so P4 is scheduled next and P3 is scheduled last.
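The selection rule of HRN is easy to express in code. The sketch below recomputes the response ratios of this example at t = 3 and at t = 7 and picks the process with the highest ratio; the function names are illustrative assumptions.

#include <stdio.h>

#define N 4

/* Response ratio = (waiting time + CPU burst) / CPU burst. */
static double response_ratio(int arrival, int burst, int now)
{
    int waiting = now - arrival;
    return (double)(waiting + burst) / burst;
}

/* Pick the ready, unfinished process with the highest response ratio. */
static int pick_hrn(const int arrival[], const int burst[],
                    const int done[], int now)
{
    int best = -1;
    double best_ratio = -1.0;
    for (int i = 0; i < N; i++) {
        if (done[i] || arrival[i] > now)
            continue;
        double r = response_ratio(arrival[i], burst[i], now);
        if (r > best_ratio) {
            best_ratio = r;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    /* Data from the HRN example above. */
    int arrival[N] = {0, 2, 3, 4};
    int burst[N]   = {3, 4, 5, 2};
    int done[N]    = {1, 0, 0, 0};   /* P1 has already finished at t = 3 */

    printf("At t = 3, schedule P%d\n", pick_hrn(arrival, burst, done, 3) + 1);
    done[1] = 1;                     /* P2 finishes at t = 7             */
    printf("At t = 7, schedule P%d\n", pick_hrn(arrival, burst, done, 7) + 1);
    return 0;
}

As expected from the example, the sketch selects P2 at t = 3 and P4 at t = 7.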
Advantages
• It favours short processes. This is because, with the increase in waiting time, the response ratio of short processes increases more rapidly as compared to long processes. Thus, they are scheduled earlier than long processes.
• Unlike SJF, starvation does not occur since, with the increase in waiting time, the response ratio of long processes also increases and eventually they are scheduled.
Disadvantages
• Like SJF and SRTN, it also requires an estimate of the expected service time (CPU burst) of a process in advance.
4.4.6 Round Robin (RR) Scheduling
The round robin scheduling is one of the most widely used preemptive scheduling algorithms.
Example: Consider four processes P1, P2, P3 and P4 with arrival times 0, 1, 3 and 4 ms and CPU bursts of 10, 5, 2 and 3 ms, respectively. Assuming that the time slice is 3 ms, how will these processes be scheduled according to the round robin scheduling algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart.
Gantt chart: P1 from 0 to 3, P2 from 3 to 6, P3 from 6 to 8, P1 from 8 to 11, P4 from 11 to 14, P2 from 14 to 16, P1 from 16 to 20 (time-slice boundary at t = 19)
Initially, P1 enters the ready queue at t = 0 and gets the CPU for 3 ms. While it executes, P2 and P3 enter the queue at t = 1 and t = 3, respectively. Since P1 does not execute completely within 3 ms, an interrupt occurs when the time slice gets over. P1 is preempted (with a remaining CPU burst of 7 ms) and put back in the queue after P3, because P4 has not entered the queue yet, and the CPU is allocated to P2. During execution of P2, P4 enters the queue at t = 4 and is put at the end of the queue, behind P1. When P2 times out, it is preempted (with a remaining CPU burst of 2 ms) and sent to the end of the queue, behind P4. The CPU is then allocated to the next process in the queue, that is, to P3, which executes completely before the time slice expires. Thus, the CPU is allocated to the next process in the queue, which is P1. P1 again executes for 3 ms, is then preempted (with a remaining CPU burst of 4 ms) and joins the end of the queue, behind P2. The CPU is now allocated to P4, which executes completely within its time slice, so the CPU is allocated to the process at the head of the queue, that is, P2. As P2 completes before the time out occurs, the CPU is switched to P1 at t = 16 for another 3 ms. When the time slice expires, the CPU is reallocated to P1 since it is the only process remaining; there is no other process in the queue.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (20 - 0) = 20 ms
Turnaround time for P2 = (16 - 1) = 15 ms
Turnaround time for P3 = (8 - 3) = 5 ms
Turnaround time for P4 = (14 - 4) = 10 ms
Average turnaround time = (20 + 15 + 5 + 10)/4 = 12.5 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (20 - 10) = 10 ms
Waiting time for P2 = (15 - 5) = 10 ms
Waiting time for P3 = (5 - 2) = 3 ms
Waiting time for P4 = (10 - 3) = 7 ms
Average waiting time = (10 + 10 + 3 + 7)/4 = 7.5 ms
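The round robin schedule above can be reproduced with a small FIFO-queue simulation in C. The data (arrivals 0, 1, 3, 4 ms; bursts 10, 5, 2, 3 ms; 3 ms quantum) are those of the example, and, as in the example, a process arriving during a time slice is queued ahead of the process being preempted at the end of that slice. The sketch is illustrative only.

#include <stdio.h>

#define N 4
#define QUANTUM 3

int main(void)
{
    /* Data from the round robin example above. */
    int arrival[N]    = {0, 1, 3, 4};
    int burst[N]      = {10, 5, 2, 3};
    int remaining[N]  = {10, 5, 2, 3};
    int completion[N] = {0};

    int queue[16], head = 0, tail = 0;   /* simple FIFO ready queue          */
    int time = 0, finished = 0;
    int next_arrival = 1;                /* index of the next arriving process */
    double total_tat = 0.0, total_wt = 0.0;

    queue[tail++] = 0;                   /* P1 is already there at t = 0     */

    while (finished < N) {
        int p = queue[head++];
        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;

        time += slice;
        remaining[p] -= slice;

        /* Processes arriving during (or exactly at the end of) this slice
           join the queue before the preempted process, as in the example. */
        while (next_arrival < N && arrival[next_arrival] <= time)
            queue[tail++] = next_arrival++;

        if (remaining[p] == 0) {
            completion[p] = time;
            finished++;
        } else {
            queue[tail++] = p;           /* preempted: back of the queue */
        }
    }

    for (int i = 0; i < N; i++) {
        int tat = completion[i] - arrival[i];
        int wt  = tat - burst[i];
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: turnaround = %2d ms, waiting = %2d ms\n", i + 1, tat, wt);
    }
    printf("Average turnaround time = %.2f ms\n", total_tat / N);
    printf("Average waiting time    = %.2f ms\n", total_wt / N);
    return 0;
}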
The performance of round robin scheduling is greatly affected by the size of the time
quantum. If the time quantum is too small, a number of context switches occur which, in
turn, increase the system overhead. More time will be spent in performing context
switching rather than executing the processes. On the other hand, if the time quantum is
too large, the performance of round robin simply degrades to FCFS.
Note: If the time quantum is too small, say 1 ms, the round robin scheduling is called
processor sharing.
Advantages
• It is efficient for time sharing systems where the CPU time is divided among the
competing processes.
• It increases impartiality, as no priority is given to any process; every process gets an equal share of the CPU time.
Disadvantages
• Processes (even short ones) may take a long time to execute. This decreases the
system throughput.
• It needs some extra hardware support, such as a timer to cause interrupt after each time
out.
Note: Ideally, the time quantum should be such that 80% of the processes can complete their execution
within the given time quantum.
4.4.7 Multilevel Queue Scheduling
To implement the multilevel queue scheduling algorithm, the ready queue is partitioned into as many separate queues as there are groups of processes. Whenever a new process enters the system, it is assigned permanently to one of the queues depending on its properties, including memory requirements, type and priority. Each ready queue has its own scheduling algorithm. For example, for batch processes, the FCFS scheduling algorithm may be used, while for interactive processes one may use the round robin scheduling algorithm. In addition, the processes in higher priority queues are executed before those in lower priority queues. This implies that no batch process can run unless all the system and interactive processes have been executed completely. Moreover, if a process enters into a higher priority queue while a process of a lower priority queue is executing, the lower priority process is preempted in order to allocate the CPU to the higher priority process.
Figure: Multilevel queue scheduling, with interactive processes in the highest priority queue and batch processes in the lowest priority queue, all served by the CPU in priority order.
Another possibility is to time-slice among the queues: each queue gets a certain share of the CPU time, which it then schedules among the processes in it. Note that the time slices of different priority queues may differ.
4.4.8 Multilevel Feedback Queue Scheduling
The multilevel feedback queue scheduling, also known as multilevel adaptive scheduling, allows processes to move between the queues. In this scheduling algorithm, a process that consumes too much CPU time is moved to a lower priority queue; similarly, a process that has been waiting too long in a lower priority queue is moved to a higher priority queue.
The scheduler then picks the first process in Q2, that is, P1. While P1 is executing, P3 enters Q1 at t = 25, so P1 is preempted, placed in Q2 after P2, and P3 starts executing. As P3 executes completely within its time slice, the scheduler picks the first process in Q2, which is P2, at t = 29. While P2 is executing, P4 enters Q1 at t = 32, because of which P2 is preempted and placed after P1 in Q2. The CPU is assigned to P4 for 5 ms and, at t = 37, P4 is moved to Q2 and placed after P2. At the same time, the CPU is allocated to P1 (the first process in Q2).
Advantages
• It is fair to I/O-bound (short) processes as these processes need not wait too
long and are executed quickly.
• It prevents starvation by moving a lower priority process to a higher priority queue if it has
been waiting for too long.
Disadvantages
• It is the most complex scheduling algorithm.
• Moving the processes between queues causes a number of context switches, which results in
an increased overhead.
• The turnaround time for long processes may increase significantly.
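The two policy decisions of multilevel feedback queue scheduling, demotion when a process uses up its quantum and promotion when it has waited too long, can be sketched as plain decision functions. The three levels and the 50 ms promotion threshold below are assumed values chosen only for illustration; they are not taken from the example above.

#include <stdio.h>

#define LEVELS 3          /* assumed: three queues, level 0 = highest priority */
#define PROMOTE_AFTER 50  /* assumed: promote after 50 ms of waiting           */

typedef struct {
    int pid;
    int level;          /* queue the process currently sits in                */
    int waiting_time;   /* ms spent waiting since it last ran                 */
} process_t;

/* A process that uses up its entire time slice is demoted one level,
   so CPU-bound processes gradually sink to the lower priority queues. */
static void on_quantum_expired(process_t *p)
{
    if (p->level < LEVELS - 1)
        p->level++;
    p->waiting_time = 0;
    printf("P%d used its full quantum, now in queue %d\n", p->pid, p->level);
}

/* A process that has waited too long in a lower queue is promoted,
   which is what prevents starvation. */
static void on_clock_tick(process_t *p, int elapsed_ms)
{
    p->waiting_time += elapsed_ms;
    if (p->waiting_time >= PROMOTE_AFTER && p->level > 0) {
        p->level--;
        p->waiting_time = 0;
        printf("P%d waited too long, promoted to queue %d\n", p->pid, p->level);
    }
}

int main(void)
{
    process_t cpu_bound = {1, 0, 0};
    on_quantum_expired(&cpu_bound);   /* burns its whole slice: queue 0 -> 1 */
    on_quantum_expired(&cpu_bound);   /* and again:             queue 1 -> 2 */

    process_t io_bound = {2, 2, 0};
    for (int t = 0; t < 60; t += 10)  /* keeps waiting in the lowest queue   */
        on_clock_tick(&io_bound, 10);
    return 0;
}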
Figure: A multiprocessor system with processors CPU1, CPU2, CPU3, ..., CPUn.
It may happen, at any moment of time, that one or more processors are sitting idle while others are overloaded, with a number of processes waiting for them. Thus, in order to achieve better utilization of the multiple processors, load balancing is required, which means keeping the workload evenly distributed among the multiple processors. There are two techniques to perform load balancing, namely, push migration and pull migration.
In the push migration technique, the load is balanced by periodically checking the load of each processor and shifting the processes from the ready queues of overloaded processors to those of less loaded or idle processors. On the other hand, in pull migration, an idle processor itself pulls a waiting process from a busy processor.
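A push-migration balancer can be sketched as a periodic routine that moves work from the most loaded ready queue to the least loaded one. The per-CPU queue lengths and the function name below are illustrative assumptions, not part of any particular operating system.

#include <stdio.h>

#define NCPU 4

/* Number of processes currently sitting in each per-CPU ready queue. */
static int queue_len[NCPU] = {6, 1, 0, 3};

/* Push migration: periodically find the most and least loaded CPUs and
   shift one process from the former to the latter. */
static void push_migrate_once(void)
{
    int busiest = 0, idlest = 0;
    for (int i = 1; i < NCPU; i++) {
        if (queue_len[i] > queue_len[busiest]) busiest = i;
        if (queue_len[i] < queue_len[idlest])  idlest  = i;
    }
    if (queue_len[busiest] - queue_len[idlest] > 1) {
        queue_len[busiest]--;
        queue_len[idlest]++;
        printf("Moved one process from CPU%d to CPU%d\n", busiest, idlest);
    }
}

int main(void)
{
    for (int round = 0; round < 4; round++)  /* run the balancer a few times */
        push_migrate_once();
    for (int i = 0; i < NCPU; i++)
        printf("CPU%d ready-queue length: %d\n", i, queue_len[i]);
    return 0;
}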
Note: Load balancing is often unnecessary on SMP systems with a single shared ready queue.