Scheduling Algorithm


4

Scheduling

Learning Objectives
After reading this chapter, you will be able to:
• Understand the basic concepts of scheduling.
• Discuss the criteria for scheduling.
• Explain various scheduling algorithms.
• Discuss scheduling for multiprocessor systems.
• Explain real-time scheduling.
• Evaluate various scheduling algorithms.

4.1 INTRODUCTION
As discussed in Chapter 2, CPU scheduling is the procedure employed for deciding which of
the ready processes the CPU should be allocated to. CPU scheduling plays a pivotal role in the
basic framework of the operating system owing to the fact that the CPU is one of the primary
resources of the computer system. The algorithm used by the scheduler to carry out the selection
of a process for execution is known as a scheduling algorithm. A number of scheduling algorithms
are available for CPU scheduling. Each scheduling algorithm influences the resource utilization,
overall system performance, and quality of service provided to the user. Therefore, one has to
reason out a number of criteria to be considered while selecting an algorithm on a particular
system.

4.2 SCHEDULING CONCEPTS


Before we start discussing the scheduling criteria and scheduling algorithms in a comprehensive
manner, we will first take into account some relatively important concepts of scheduling, which
are mentioned below.

(a) Process Behaviour


CPU scheduling is greatly affected by how a process behaves during its execution. Almost all
processes continue to switch between the CPU (for processing) and I/O devices (for performing
I/O) during their execution. The time period elapsed in processing before performing the next I/O
operation is known as the CPU burst, and the time period elapsed in performing I/O before the
next CPU burst is known as the I/O burst. Generally, the process execution starts with a CPU
burst, followed by an I/O burst, then another CPU burst, and so on until the termination of the
process. Thus, we can say that the process execution comprises alternate cycles of CPU and I/O
bursts. Figure 4.1 shows the sequence of CPU and I/O bursts involved in the execution of the
following code segment written in C language:

sum = 0;                      CPU burst
scanf("%d", &num);            I/O burst
while (i < 10)                CPU burst
{
    sum = sum + num * i;
    i = i + 1;
}
printf("%d", sum);            I/O burst

Figure 4.1: Alternate Cycles of CPU and I/O Bursts


The length of a CPU burst and an I/O burst varies from process to process, depending on whether
the process is CPU-bound or I/O-bound. If the process is CPU-bound, it will have longer CPU
bursts as compared to I/O bursts, and vice versa in case the process is I/O-bound. From the
scheduling point of view, only the length of the CPU burst is taken into consideration, and not
the length of the I/O burst.

(b) When to Schedule
An important facet of scheduling is to determine when the scheduler should make scheduling
decisions. The following circumstances may require the scheduler to make scheduling decisions:
• When a process switches from running to waiting state. This situation may occur in case the
process has to wait for I/O or for the termination of its child process, or for some other reason.
In such situations, the scheduler has to select some ready process for execution.
• When a process switches from running to ready state due to the occurrence of an interrupt.
In such situations, the scheduler may decide to run a process from the ready queue. If the
interrupt was caused by some I/O device that has now completed its task, the scheduler may
choose the process that was blocked and waiting for the I/O.
• When a process switches from waiting state to ready state, for example, in cases where the
process has completed its I/O operation. In such situations, the scheduler may select either
the process that has now come to the ready state, or the current process, which may be
continued.
• When a process terminates and exits the system. In this case, the scheduler has to select a
process for execution from the set of ready processes.

(c) Dispatcher
The CPU scheduler only selects a process to be executed next on the CPU; it cannot assign the
CPU to the selected process. The function of setting up the execution of the selected process on
the CPU is performed by another module of the operating system, known as the dispatcher. The
dispatcher involves the following three steps to perform this function:
1. Context switching is performed. The kernel saves the context of the currently running process
and restores the saved state of the process selected by the CPU scheduler. In case the process
selected by the short-term scheduler is new, the kernel loads its context.
2. The system switches from the kernel mode to user mode, as a user process is to be executed.
3. The execution of the user process selected by the CPU scheduler is started by transferring the
control either to the instruction that was supposed to be executed at the time the process was
interrupted, or to the first instruction, if the process is going to be executed for the first time
after its creation.

Note: The amount of time required by the dispatcher to suspend execution of one process and
resume execution of another process is known as dispatch latency. Low dispatch latency implies
faster start of process execution.

4.3 SCHEDULING CRITERIA


The scheduler must consider the following performance measures and optimization criteria in order
to maximize the performance of the system:
• Fairness: It is defined as the degree to which each process gets an equal chance to execute. The
scheduler must ensure that each process gets a fair share of CPU time. However, it may
treat different categories of processes (batch, real-time, or interactive) in a different manner.
• CPU utilization: It is defined as the percentage of time the CPU is busy in executing processes.
For higher utilization, the CPU must be kept running at all times.
• Balanced utilization: It is defined as the percentage of time all the system resources are busy.
It considers not only the CPU utilization but also the utilization of I/O devices, memory, and all
other resources. To get more work done by the system, the CPU and I/O devices must be kept
running simultaneously. For this, it is desirable to load a mixture of CPU-bound and I/O-bound
processes in the memory.
• Throughput: It is defined as the total number of processes that the system completes per unit
time. By and large, it depends on the average length of the processes to be executed. For the systems
running long processes, throughput will be less as compared to the systems running short processes.
• Turnaround time: It is defined as the amount of time that has rolled by from the time of
creation to the termination of a process. To put it differently, it is the difference between the
time a process enters the system and the time it exits the system. It includes all the time the
process spends waiting to be admitted into memory, waiting in the ready queue to get CPU access,
running on the CPU, and in I/O queues. It is inversely proportional to throughput, that is, the
more the turnaround time, the less will be the throughput.
• Waiting time: It is defined as the time spent by a process while waiting in the ready queue.
However, it does not take into account the execution time or time consumed for I/O. Thus, the waiting
time of a process can be determined as the difference between turnaround time and processing
time. In practice, waiting time is a more accurate measure as compared to turnaround time.
• Response time: It is defined as the time elapsed between the moment when a user initiates a
request and the instant when the system starts responding to this request. For interactive systems,
it is one of the best metrics employed to gauge performance. This is because in such systems,
only the speed with which the system responds to a user's request matters, not the time it takes to
output the response.
The basic purpose of a CPU scheduling algorithm is that it should tend to maximize fairness, CPU
utilization, balanced utilization, and throughput, and minimize turnaround, waiting, and response
time. Practically speaking, no scheduling algorithm optimizes all of these criteria at once. In
general, the performance of an algorithm is evaluated on the basis of average measures. For example,
an algorithm that minimizes the average waiting time is considered to be a good algorithm because
this improves the overall efficiency of the system. However, in case of response time, the average
is not a good criterion; rather, it is the variance in the response time of the processes that
should be minimized. This is because it is not desirable to have a process with a long response time
as compared to other processes.

4.4 SCHEDULING ALGORITHMS


A wide variety of algorithms are used for CPU scheduling. These scheduling algorithms fall into
two categories, namely, non-preemptive and preemptive.
Non-preemptive scheduling algorithms: Once the CPU is allocated to a process, it cannot
be withdrawn until the process voluntarily releases it (in case the process has to wait for I/O
or some other event) or the process terminates. In other words, we can say the decision to
schedule a process is made only when the currently running process either switches to the
waiting state or terminates. In both cases, the CPU executes some other process from the set of
ready processes. Some examples of non-preemptive scheduling algorithms are first come first
served (FCFS), shortest job first (SJF), priority-based scheduling, and highest response ratio
next (HRN) scheduling.
Note: A non-preemptive scheduling algorithm is also known as a cooperative or voluntary scheduling
algorithm.

Preemptive scheduling algorithms: The CPU can be forcibly taken back from the currently
running process before its completion and allocated to some other process. The preempted process
is put back in the ready queue and resumes its execution when it is scheduled again. Thus, a
process may be scheduled many times before its completion. In preemptive scheduling, the
decision to schedule another process is made whenever an interrupt occurs causing the currently
running process to switch to ready state, or when a process having higher priority than the currently
running process enters the system. Some examples of preemptive scheduling algorithms are
shortest remaining time next (SRTN) scheduling and round robin (RR) scheduling.

4.4.1 First-Come First-Served (FCFS) Scheduling


FCFS is one of the simplest scheduling algorithms. As the name implies, the processes are executed
in the order of their arrival in the ready queue, which means the process that enters the ready queue
first gets the CPU first. FCFS is a non-preemptive scheduling algorithm. Therefore, once a process
gets the CPU, it retains control of the CPU until it blocks or terminates.
To implement FCFS scheduling, the ready queue is managed as a FIFO
(First-in First-out) queue. When the first process enters the ready queue, it immediately gets the
CPU and starts executing. Meanwhile, other processes enter the system and are added to the end
of the queue by inserting their PCBs in the queue. When the currently running process completes or
blocks, the CPU is allocated to the process at the forefront of the queue and its PCB is removed
from the queue. In case a currently running process was blocked and later comes to the ready state,
its PCB is linked to the end of the queue.
Example 4.1: Consider four processes P1, P2, P3, and P4 with their arrival times and required
CPU burst (in milliseconds) as shown in the following table:

Process          P1   P2   P3   P4
Arrival time      0    2    3    5
CPU burst (ms)   15    6    7    5
How will these processes be scheduled according to FCFS scheduling algorithm? Compute the
average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart:

|   P1   |  P2  |  P3  |  P4  |
0        15     21     28     33

Initially, P1 enters the ready queue at t = 0 and the CPU is allocated to it. While P1 is executing,
P2, P3, and P4 enter the ready queue at t = 2, t = 3, and t = 5, respectively. When P1 completes,
the CPU is allocated to P2 as it has entered before P3 and P4. When P2 completes, P3 gets the CPU,
after which P4 gets the CPU.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (15 - 0) = 15 ms
Turnaround time for P2 = (21 - 2) = 19 ms
Turnaround time for P3 = (28 - 3) = 25 ms
Turnaround time for P4 = (33 - 5) = 28 ms
Average turnaround time = (15 + 19 + 25 + 28)/4 = 21.75 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (15 - 15) = 0 ms
Waiting time for P2 = (19 - 6) = 13 ms
Waiting time for P3 = (25 - 7) = 18 ms
Waiting time for P4 = (28 - 5) = 23 ms
Average waiting time = (0 + 13 + 18 + 23)/4 = 13.5 ms
The performance of the FCFS scheduling algorithm largely depends on the order of arrival of processes
in the ready queue. The average waiting time may vary substantially if a process having a long
CPU burst enters the ready queue before processes having short CPU bursts, or vice versa. To
illustrate this, assume that the processes (shown in Example 4.1) enter the ready queue in the
order P4, P2, P3, and P1. Now, the processes will be scheduled as shown in the following Gantt chart:

|  P4  |  P2  |  P3  |   P1   |
0      5      11     18       33

Using the above formulae, the average turnaround time and average waiting time can be computed as:
Average turnaround time = [(5 - 0) + (11 - 2) + (18 - 3) + (33 - 5)]/4 = 14.25 ms
Average waiting time = [(5 - 5) + (9 - 6) + (15 - 7) + (28 - 15)]/4 = 6 ms
It is clear that if processes having shorter CPU bursts execute before those having longer CPU
bursts, the average waiting time may reduce significantly.

Advantages
• It is easy to understand and implement, as processes are simply added at the end of the
queue and removed from the front of the queue. No process in between these two points in the
queue needs to be accessed.
• It is well suited for batch systems, where longer time periods for each process are often acceptable.

Disadvantages
• The average waiting time is not minimal. Therefore, this scheduling algorithm is never
recommended where performance is a major issue.
• It reduces the CPU and I/O devices utilization under some circumstances. For example, assume
that there is one long CPU-bound process and many short I/O-bound processes in the ready queue.
Now, it may happen that while the CPU-bound process is executing, the I/O-bound processes
complete their I/O and come to the ready queue for execution. There they have to wait till the
CPU-bound process releases the CPU; besides, the I/O devices also remain idle during this time.
When the CPU-bound process needs to perform I/O, it comes to the device queue and the CPU
is allocated to the I/O-bound processes. As they execute quickly and come back to the device
queue, the CPU is left idle. Then the CPU-bound process enters the ready queue and is allocated
the CPU, which again results in having I/O-bound processes waiting in the ready queue at some
point of time. This happens again and again until the CPU-bound process is done, thus resulting
in low CPU and I/O devices utilization.
• It is not suitable for time-sharing systems, where each process needs the same amount of CPU
time.

4.4.2 Shortest Job First (SJF) Scheduling


The shortest job first, also known as shortest process next (SPN) or shortest request next (SRN), is
a non-preemptive scheduling algorithm that schedules the processes according to the length of CPU
burst they require. At any point of time, among all the ready processes, the one having the shortest
CPU burst is scheduled first. Thus, a process has to wait until all the processes shorter than it have
been executed. In case two processes have the same CPU burst, they are scheduled in the FCFS order.
Example 4.2: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU
burst (in milliseconds) as shown in the following table:

Process          P1   P2   P3   P4
Arrival time      0    1    3    4
CPU burst (ms)    7    5    2    3
How will these processes be scheduled according to SJF scheduling algorithm? Compute the average
waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart:

|   P1   | P3 |  P4  |  P2  |
0        7    9      12     17

Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the
queue. While it is executing, P2, P3, and P4 enter the queue at t = 1, t = 3, and t = 4, respectively.
When the CPU becomes free, that is, at t = 7, it is allocated to P3 because it has the shortest CPU
burst among the three waiting processes. When P3 completes, the CPU is allocated first to P4 and
then to P2.
burst among the
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (7 - 0) = 7 ms
Turnaround time for P2 = (17 - 1) = 16 ms
Turnaround time for P3 = (9 - 3) = 6 ms
Turnaround time for P4 = (12 - 4) = 8 ms
Average turnaround time = (7 + 16 + 6 + 8)/4 = 9.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (7 - 7) = 0 ms
Waiting time for P2 = (16 - 5) = 11 ms
Waiting time for P3 = (6 - 2) = 4 ms
Waiting time for P4 = (8 - 3) = 5 ms
Average waiting time = (0 + 11 + 4 + 5)/4 = 5 ms
Advantages
• It reduces the variance in waiting and turnaround times. In fact, it is optimal with respect to
average waiting time if all processes are available at the same time. This is due to the fact that
short processes are made to run before longer ones, which decreases the waiting time for short
processes and increases the waiting time for long processes. However, the reduction in waiting
time is more than the increment and thus, the average waiting time decreases.

Disadvantages
• It is difficult to implement, as it needs to know the length of the CPU burst of processes in advance.
In practice, it is difficult to obtain prior knowledge of the required processing times of processes.
Many systems expect users to provide estimates of the CPU burst of processes, which may not
always be correct.
• It does not favour processes having longer CPU bursts. This is because long processes will not
be allowed to get the CPU as long as short processes continue to enter the ready queue. This
results in starvation of long processes.

4.4.3 Shortest Remaining Time Next (SRTN) Scheduling


The shortest remaining time next, also known as shortest time to go (STG), is a preemptive version
of the SJF scheduling algorithm. It takes into account the remaining CPU burst of the
processes rather than the whole length, in order to schedule them. The scheduler always chooses for
execution the process that has the shortest remaining processing time. While a process is being
executed, the CPU can be taken back from it and assigned to a newly arrived process, provided
the CPU burst of the new process is shorter than the remaining CPU burst of the current process.
Note that if at any point of time the remaining CPU bursts of two processes become equal, they are
scheduled in the FCFS order.
Example 4.3: Consider the same set of processes, their arrival times and CPU burst as in
Example4.2. How will these processes be scheduled according to SRTN scheduling algorithm?
Compute the average waiting time and average turnaround time.

Solution: The processes will be scheduled as depicted in the following Gantt chart.

| P1 |  P2  | P3 |  P2  |  P4  |   P1   |
0    1      3    5      8      11       17

Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the
queue. While it is executing, at time t = 1, P2 with a CPU burst of 5 ms enters the queue. At that
time, the remaining CPU burst of P1 is 6 ms, which is greater than that of P2. Therefore, the CPU is
taken back from P1 and allocated to P2. During execution of P2, P3 enters at t = 3 with a CPU burst
of 2 ms. Again, the CPU is switched from P2 to P3, as the remaining CPU burst of P2 at t = 3 is 3 ms,
which is greater than that of P3. When, at time t = 4, P4 with a CPU burst of 3 ms enters
the queue, the CPU is not switched to it because at that time the remaining CPU burst of the currently
running process (that is, P3) is 1 ms, which is shorter than that of P4. When P3 completes, there are
three processes P1 (6 ms), P2 (3 ms), and P4 (3 ms) in the queue. To break the tie between P2 and P4,
the scheduler takes into consideration their arrival order, and the CPU is allocated first to P2, then
to P4, and finally to P1.

Since turnaround time = exit time - entry time, therefore:

Turnaround time for P1 = (17 - 0) = 17 ms
Turnaround time for P2 = (8 - 1) = 7 ms
Turnaround time for P3 = (5 - 3) = 2 ms
Turnaround time for P4 = (11 - 4) = 7 ms
Average turnaround time = (17 + 7 + 2 + 7)/4 = 8.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (17 - 7) = 10 ms
Waiting time for P2 = (7 - 5) = 2 ms
Waiting time for P3 = (2 - 2) = 0 ms
Waiting time for P4 = (7 - 3) = 4 ms
Average waiting time = (10 + 2 + 0 + 4)/4 = 4 ms

Advantages
• A long process that is near completion may be favoured over short processes entering the system.
This results in an improvement in the turnaround time of the long process.

Disadvantages
• Like SJF, it also requires an estimate of the next CPU burst of a process in advance.
• Favouring a long process that is nearing completion over the several short processes entering the
system may adversely affect the turnaround times of short processes.
• It favours only those long processes that are just about to complete and not those who have just
started their operation. Thus, starvation of long processes may still occur.

4.4.4 Priority-based Scheduling


In a priority-based scheduling algorithm, each process is assigned a priority, with higher priority
processes being scheduled before lower priority processes. At any point of time, the process with
the highest priority among all the ready processes is scheduled first. In case two processes enjoy
the same priority, they are executed in FCFS order.
Priority scheduling may be either preemptive or non-preemptive. The choice is made whenever
a new process enters the ready queue while some process is executing. If the newly arrived process
has a higher priority than the currently running process, the preemptive priority scheduling algorithm
preempts the currently running process and allocates the CPU to the new process. On the other hand, a
non-preemptive scheduling algorithm allows the currently running process to complete its execution,
and the new process has to wait for the CPU.
Note: Both SJF and SRTN are special cases of priority-based scheduling where the priority of a
process is equal to the inverse of the next CPU burst: the lower the CPU burst, the higher will be
the priority.
A major design issue related with priority scheduling is how to compute the priorities of the
processes. The priority can be assigned to a process either internally, defined by the system
depending on the process's characteristics like memory usage, I/O frequency, usage cost, etc., or
externally, defined by the user executing that process.
Example 4.4: Consider four processes P1, P2, P3, and P4 with their arrival times, required CPU burst
(in milliseconds), and priorities as shown in the following table:

Process          P1   P2   P3   P4
Arrival time      0    1    3    4
CPU burst (ms)    7    4    3    2
Priority          4    3    1    2

Assuming that a lower priority number represents a higher priority, how will these processes be
scheduled according to a non-preemptive as well as a preemptive priority scheduling algorithm?
Compute the average waiting time and average turnaround time in both cases.
Solution:
Non-preemptive priority scheduling algorithm: The processes will be scheduled as depicted in the
following Gantt chart.

|   P1   |  P3  | P4 |  P2  |
0        7      10   12     16

Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes
in the queue. While it is executing, P2, P3, and P4 enter the queue at t = 1, t = 3, and t = 4,
respectively. When the CPU becomes free, that is, at t = 7, it is allocated to P3 because it has the
highest priority (that is, 1) among the three waiting processes. When P3 completes, the CPU is
allocated to the next lower priority process, that is, P4, and finally, the lowest priority process
P2 is executed.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (7 - 0) = 7 ms
Turnaround time for P2 = (16 - 1) = 15 ms
Turnaround time for P3 = (10 - 3) = 7 ms
Turnaround time for P4 = (12 - 4) = 8 ms

Average turnaround time = (7 + 15 + 7 + 8)/4 = 9.25 ms

Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (7 - 7) = 0 ms
Waiting time for P2 = (15 - 4) = 11 ms
Waiting time for P3 = (7 - 3) = 4 ms
Waiting time for P4 = (8 - 2) = 6 ms

Average waiting time = (0 + 11 + 4 + 6)/4 = 5.25 ms



Preemptive priority scheduling algorithm: The processes will be scheduled as depicted in the
following Gantt chart.

| P1 |  P2  |  P3  |  P4  |  P2  |   P1   |
0    1      3      6      8      10       16

Initially, P1 of priority 4 enters the ready queue at t = 0 and gets the CPU as there are no other
processes in the queue. While it is executing, at time t = 1, P2 of priority 3 (higher than that of
the currently running process P1) enters the queue. Therefore, P1 is preempted (with remaining
CPU burst of 6 ms) and the CPU is allocated to P2. During execution of P2, P3 of priority 1
enters at t = 3. Again, the CPU switches from P2 (with remaining CPU burst of 2 ms) to P3,
since P3 enjoys higher priority than P2. However, when at time t = 4, P4 with priority 2 enters
the queue, the CPU is not assigned to it because it has lower priority than the currently running
process P3. When P3 completes, there are three processes P1, P2, and P4 in the ready queue with
priorities 4, 3, and 2, respectively. The CPU is allocated first to P4, then to P2, and finally to P1.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (16 - 0) = 16 ms
Turnaround time for P2 = (10 - 1) = 9 ms
Turnaround time for P3 = (6 - 3) = 3 ms
Turnaround time for P4 = (8 - 4) = 4 ms
Average turnaround time = (16 + 9 + 3 + 4)/4 = 8 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (16 - 7) = 9 ms
Waiting time for P2 = (9 - 4) = 5 ms
Waiting time for P3 = (3 - 3) = 0 ms
Waiting time for P4 = (4 - 2) = 2 ms
Average waiting time = (9 + 5 + 0 + 2)/4 = 4 ms

Advantages
• Important processes are never made to wait merely because less important processes are under
execution.

Disadvantages
• It suffers from the problem of starvation of lower priority processes, since the continuous arrival
of higher priority processes will indefinitely prevent lower priority processes from acquiring the
CPU. One possible solution to this problem is aging, which is a process of gradually raising the
priority of a low priority process in step with the increase in its waiting time. If the priority of a low
priority process is increased after each fixed time interval, it is ensured that at some time it will
become the highest priority process and will finally get executed.

4.4.5 Highest Response Ratio Next (HRN) Scheduling

The highest response ratio next scheduling is a non-preemptive scheduling algorithm that schedules
the processes according to their response ratio. Whenever the CPU becomes available, the process
having the highest value of response ratio among all the ready processes is scheduled next. The
response ratio of a process in the queue is computed by using the following equation:

Response ratio = (Time since arrival + CPU burst) / CPU burst

Initially, when a process enters, its response ratio is 1. It goes on increasing at the rate of (1/CPU
burst) as the process's waiting time increases.
Example 4.5: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU
burst (in milliseconds) as shown in the following table:

Process          P1   P2   P3   P4
Arrival time      0    2    3    4
CPU burst (ms)    3    4    5    2

How will these processes be scheduled according to HRN scheduling algorithm? Compute the
average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart.

|  P1  |  P2  | P4 |   P3   |
0      3      7    9        14

Initially, P1 enters the ready queue at t = 0 and the CPU is allocated to it. By the time P1 completes,
P2 and P3 have arrived at t = 2 and t = 3, respectively. At t = 3, the response ratio of P2 is
((3 - 2) + 4)/4 = 1.25 and that of P3 is 1, as it has just arrived. Therefore, P2 is scheduled next.
During execution of P2, P4 enters the queue at t = 4. When P2 completes at t = 7, the response ratio
of P3 is ((7 - 3) + 5)/5 = 1.8 and that of P4 is ((7 - 4) + 2)/2 = 2.5. As P4 has the higher response
ratio, the CPU is allocated to it and, after its completion, P3 is executed.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (3 - 0) = 3 ms
Turnaround time for P2 = (7 - 2) = 5 ms
Turnaround time for P3 = (14 - 3) = 11 ms
Turnaround time for P4 = (9 - 4) = 5 ms
Average turnaround time = (3 + 5 + 11 + 5)/4 = 6 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (3 - 3) = 0 ms
Waiting time for P2 = (5 - 4) = 1 ms
Waiting time for P3 = (11 - 5) = 6 ms
Waiting time for P4 = (5 - 2) = 3 ms
Average waiting time = (0 + 1 + 6 + 3)/4 = 2.5 ms

Advantages
• It favours short processes. This is because, with increase in waiting time, the response ratio of
short processes increases speedily as compared to long processes. Thus, they are scheduled
earlier than long processes.
• Unlike SJF, starvation does not occur, since with increase in waiting time, the response ratio of
long processes also increases and eventually they are scheduled.

Disadvantages
• Like SJF and SRTN, it also requires an estimate of the expected service time (CPU burst) of a
process in advance.

4.4.6 Round Robin (RR) Scheduling


The round robin scheduling is one of the most widely used preemptive scheduling algorithms. It
considers all processes equally important and treats them in an impartial manner. Each process in
the ready queue gets a small unit of CPU time, known as time slice or time quantum (generally from 10
to 100 ms), for its execution. If the process has not executed completely when
the time slice ends, it is preempted and the CPU is allocated to the next process in the ready queue.
However, if the process blocks or terminates before the time slice expires, the CPU is immediately
switched to the next process in the ready queue.
To implement the round robin scheduling algorithm, the ready queue is treated as a circular
queue. All the processes arriving in the ready queue are put at the end of the queue. The CPU is allocated
to the first process in the queue, which executes until its time slice expires. If the CPU burst of the
process is less than one time quantum, the process itself releases the CPU and
is deleted from the queue. The CPU is then allocated to the next process in the queue. However, if
the process does not execute completely within the time slice, an interrupt occurs when the time
slice expires. The currently running process is preempted, put back at the end of the queue, and the
CPU is allocated to the next process in the queue. The preempted process again gets the CPU after
all the processes before it in the queue have been allocated their CPU time slice. The whole process
continues until all the processes in the queue have been executed.
Example 4.6: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU
burst (in milliseconds) as shown in the following table:

Process          P1   P2   P3   P4
Arrival time      0    1    3    4
CPU burst (ms)   10    5    2    3

Assuming that the time slice is 3 ms, how will these processes be scheduled according to round
robin scheduling algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart.

|  P1  |  P2  | P3 |  P1  |  P4  | P2 |  P1  | P1 |
0      3      6    8      11     14   16     19   20
Initially, P1 enters the ready queue at t = 0 and gets the CPU for 3 ms. While it executes, P2 and
P3 enter the queue at t = 1 and t = 3, respectively. Since P1 does not execute completely within 3 ms,
an interrupt occurs when the time slice gets over. P1 is preempted (with remaining CPU burst of
7 ms), put back in the queue behind P3 (because P4 has not entered yet), and the CPU is allocated
to P2. During execution of P2, P4 enters the queue at t = 4 and is put at the end of the queue,
behind P1. When P2 times out, it is preempted (with remaining CPU burst of 2 ms) and sent to the
end of the queue, behind P4. The CPU is allocated to the next process in the queue, that is, to P3, which
executes completely before the time slice expires. Thus, the CPU is allocated to the next process
in the queue, which is P1. P1 again executes for 3 ms, is then preempted (with remaining CPU burst
of 4 ms), and joins the end of the queue, behind P2. The CPU is now allocated to P4. P4 executes
completely within its time slice, so the CPU is allocated to the process at the head of the queue,
that is, P2. As P2 completes before the time out occurs, the CPU is switched to P1 at t = 16 for
another 3 ms. When the time slice expires, the CPU is reallocated to P1 since it is the only process
remaining; there is no other process left in the queue.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (20 - 0) = 20 ms
Turnaround time for P2 = (16 - 1) = 15 ms
Turnaround time for P3 = (8 - 3) = 5 ms
Turnaround time for P4 = (14 - 4) = 10 ms
Average turnaround time = (20 + 15 + 5 + 10)/4 = 12.5 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (20 - 10) = 10 ms
Waiting time for P2 = (15 - 5) = 10 ms
Waiting time for P3 = (5 - 2) = 3 ms
Waiting time for P4 = (10 - 3) = 7 ms
Average waiting time = (10 + 10 + 3 + 7)/4 = 7.5 ms
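The schedule above can be reproduced with a short simulation. The sketch below is an illustration, not code from the text; it follows the convention used in this example, namely that processes arriving during a time slice join the queue before the preempted process is re-appended.

```python
from collections import deque

def round_robin(procs, quantum):
    """Simulate round robin scheduling.
    procs maps name -> (arrival, burst); returns name -> (turnaround, waiting)."""
    remaining = {p: b for p, (a, b) in procs.items()}
    arrivals = sorted(procs, key=lambda p: procs[p][0])
    queue, t, done, i = deque(), 0, {}, 0

    def admit(now):
        nonlocal i
        while i < len(arrivals) and procs[arrivals[i]][0] <= now:
            queue.append(arrivals[i]); i += 1

    admit(t)
    while queue or i < len(arrivals):
        if not queue:                      # CPU idle until the next arrival
            t = procs[arrivals[i]][0]; admit(t); continue
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        admit(t)                           # arrivals during the slice queue first
        if remaining[p] > 0:
            queue.append(p)                # preempted process re-joins at the tail
        else:
            arrival, burst = procs[p]
            done[p] = (t - arrival, t - arrival - burst)
    return done

times = round_robin({'P1': (0, 10), 'P2': (1, 5), 'P3': (3, 2), 'P4': (4, 3)}, 3)
```

Running this with the data of Example 4.6 yields the same per-process turnaround and waiting times as the hand computation above.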
The performance of round robin scheduling is greatly affected by the size of the time quantum.
If the time quantum is too small, a number of context switches occur which, in turn, increase the
system overhead. More time will be spent in performing context switching rather than executing
the processes. On the other hand, if the time quantum is too large, the performance of round robin
simply degrades to FCFS.
Note: If the time quantum is extremely small, say 1 ms, the round robin scheduling is called processor
sharing.

Advantages
• It is efficient for time sharing systems where the CPU time is divided among the competing
processes.
• It treats all processes impartially, as no priority is given to any process.

Disadvantages
• Processes (even short ones) may take a long time to execute. This decreases the system
throughput.
• It needs some extra hardware support, such as a timer to cause an interrupt after each time out.

Note: Ideally, the time quantum should be chosen such that 80% of the processes can complete their
execution within the given time quantum.

4.4.7 Multilevel Queue Scheduling

Multilevel queue scheduling is designed for environments where processes can be categorized into
different groups on the basis of their different response time requirements or different scheduling
needs. One possible categorization may be based on whether the process is a system process, batch
process, or an interactive process (see Figure 4.2). Each group of processes is associated with a
specific priority. For example, the system processes may have the highest priority whereas the batch
processes may have the lowest priority.
To implement the multilevel queue scheduling algorithm, the ready queue is partitioned into as many
separate queues as there are groups. Whenever a new process enters, it is assigned permanently
to one of the ready queues depending on its properties, including memory requirements, type, and
priority. Each ready queue has its own scheduling algorithm. For example, for batch processes, the
FCFS scheduling algorithm may be used, and for interactive processes, one may use the round robin
scheduling algorithm. In addition, the processes in higher priority queues are executed before those
in lower priority queues. This implies that no batch process can run unless all the system processes
and interactive processes have been executed completely. Moreover, if a process enters into a higher
priority queue while a process in a lower priority queue is executing, the lower priority process is
preempted in order to allocate the CPU to the higher priority process.

[Figure: three ready queues feeding the CPU: system processes (highest priority), interactive
processes, and batch processes (lowest priority)]

Figure 4.2: Multilevel Queue Scheduling
Advantages
• Processes are permanently assigned to their respective queues and do not move between queues,
which keeps the scheduling overhead low.
Disadvantages
• The processes in lower priority queues may be starved of CPU access in a situation where
processes are continuously arriving in higher priority queues. One possible way to prevent such
starvation is to time slice among the queues. Each queue gets a certain share of CPU time, which
it schedules among the processes in it. Note that the time slices of different priority queues may
differ.
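The strict-priority dispatch rule just described can be sketched as follows. This is an illustrative sketch, not code from the text; the queue names follow Figure 4.2.

```python
from collections import deque

# Ready queues ordered from highest to lowest priority, as in Figure 4.2.
queues = {'system': deque(), 'interactive': deque(), 'batch': deque()}

def pick_strict_priority():
    """Strict multilevel queue dispatch: a lower priority queue is served
    only when every higher priority queue is empty. Note that this rule
    can starve the 'batch' queue if higher queues keep receiving work."""
    for q in queues.values():            # dict preserves insertion order
        if q:
            return q.popleft()
    return None                          # nothing is ready to run
```

Time slicing among the queues, mentioned above as a starvation remedy, would instead give each queue a fixed share of CPU time before moving to the next.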
4.4.8 Multilevel Feedback Queue Scheduling

The multilevel feedback queue scheduling, also known as multilevel adaptive scheduling, is an
improved version of the multilevel queue scheduling algorithm. In this scheduling algorithm,
processes are not permanently assigned to queues; instead, they are allowed to move between queues.
The decision to move a process between queues is based on the time taken by it in execution and
its waiting time. If a process uses too much CPU time, it is moved to a lower priority queue.
Similarly, a process that has been waiting too long in a lower priority queue is moved to a higher
priority queue.
While P1 is executing, P3 enters Q1 at t = 25, so P1 is preempted and placed after the last process
in Q2, that is, P2, and P3 starts executing. As P3 executes completely within its time slice, the
scheduler picks up the first process in Q2, which is P2, at t = 29. While P2 is executing, P4 enters
Q1 at t = 32, because of which P2 is preempted and placed after P1 in Q2. The CPU is assigned to P4
for 5 ms, and at t = 37, P4 is moved to Q2 and placed after P2. At the same time, the CPU is
allocated to P1 (the first process in Q2). When it completes at t = 42, the next process in Q2,
which is P2, starts executing. When it completes, the last process in Q2, that is, P4, is executed.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (42 - 0) = 42 ms
Turnaround time for P2 = (52 - 12) = 40 ms
Turnaround time for P3 = (29 - 25) = 4 ms
Turnaround time for P4 = (57 - 32) = 25 ms
Average turnaround time = (42 + 40 + 4 + 25)/4 = 27.75 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (42 - 25) = 17 ms
Waiting time for P2 = (40 - 18) = 22 ms
Waiting time for P3 = (4 - 4) = 0 ms
Waiting time for P4 = (25 - 10) = 15 ms
Average waiting time = (17 + 22 + 0 + 15)/4 = 13.5 ms
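This example can also be checked with a small simulation. The beginning of the example appears on the preceding page; the sketch below assumes the configuration implied by the timings quoted above: two queues, Q1 scheduled round robin with a 5 ms quantum and Q2 scheduled FCFS, with arrivals P1 (t = 0, 25 ms), P2 (t = 12, 18 ms), P3 (t = 25, 4 ms), and P4 (t = 32, 10 ms). It is an illustration, not the book's code.

```python
def mlfq(procs, quantum=5):
    """procs: name -> (arrival, burst). New processes enter Q1 (round
    robin, 5 ms quantum); on quantum expiry they are demoted to Q2
    (FCFS). A Q1 arrival preempts whatever Q2 process is running."""
    order = sorted(procs, key=lambda p: procs[p][0])
    rem = {p: b for p, (a, b) in procs.items()}
    q1, q2, t, i, finish = [], [], 0, 0, {}
    while len(finish) < len(procs):
        while i < len(order) and procs[order[i]][0] <= t:
            q1.append(order[i]); i += 1            # arrivals join Q1
        if not q1 and not q2:
            t = procs[order[i]][0]; continue       # CPU idle until next arrival
        if q1:
            p = q1.pop(0)
            run = min(quantum, rem[p])             # Q1 runs its full slice
            t += run; rem[p] -= run
            if rem[p] == 0:
                finish[p] = t
            else:
                q2.append(p)                       # used its slice: demote to Q2
        else:
            p = q2.pop(0)
            nxt = procs[order[i]][0] if i < len(order) else float('inf')
            if t + rem[p] <= nxt:                  # finishes before any arrival
                t += rem[p]; rem[p] = 0; finish[p] = t
            else:                                  # preempted by a Q1 arrival
                rem[p] -= nxt - t; t = nxt
                q2.append(p)                       # re-joins the tail of Q2
    return finish

finish = mlfq({'P1': (0, 25), 'P2': (12, 18), 'P3': (25, 4), 'P4': (32, 10)})
```

Under these assumptions the simulation reproduces the completion times used above: P3 at 29, P1 at 42, P2 at 52, and P4 at 57.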

Advantages
• It is fair to I/O-bound (short) processes as these processes need not wait too long and are
executed quickly.
• It prevents starvation by moving a lower priority process to a higher priority queue if it has been
waiting for too long.

Disadvantages
• It is the most complex scheduling algorithm.
• Moving the processes between queues causes a number of context switches, which results in an
increased overhead.
• The turnaround time for long processes may increase significantly.

4.5 MULTIPLE PROCESSOR SCHEDULING

So far we have discussed the scheduling of a single processor among a number of processes in
the queue. In case there is more than one processor, different scheduling mechanisms need to be
incorporated. In this section, we will concentrate on homogeneous multiprocessor systems, that is,
systems in which all the processors are identical, and hence any process in the queue can be
assigned to any available processor.
The scheduling criteria for multiprocessor scheduling are the same as those for single processor
scheduling. But there are also some new considerations, which are discussed here.
(a) Implementation of Ready Queue
In multiprocessor systems, the ready queue can be implemented in two ways: either there may
be a separate ready queue for each processor (see Figure 4.4(a)), or there may be a single shared
ready queue for all the processors (see Figure 4.4(b)). In the former case, it may happen that at
any given moment, the ready queue of one processor is empty while the other processor is very busy
executing processes. To obviate this sort of situation, the latter implementation is preferred, in
which all the processes enter into one queue and are scheduled on any available processor.

[Figure: (a) a separate ready queue feeding each of CPU1 through CPUn; (b) a single shared ready
queue feeding CPU1 through CPUn]

Figure 4.4: Implementation of Ready Queue in Multiprocessor Systems

(b) Scheduling Approaches


The next issue is how to schedule the processes from the ready queue to multiple processors. For
this, one of the following scheduling approaches may be used:
• Symmetric multiprocessing (SMP): In this approach, each processor is self-scheduling. For
each processor, the scheduler selects a process for execution from the ready queue. Since multiple
processors need to access a common data structure, access to the shared ready queue must be
synchronized among the multiple processors. This is required in order that no two processors choose
the same process and no process is lost from the ready queue.
• Asymmetric multiprocessing: This approach is based on a master-slave structure of the
processors. The responsibility of making scheduling decisions, I/O processing, and other system
activities is delegated to only one processor (called the master), and the other processors (called
slaves) simply execute the user's code. Whenever a processor becomes available, the master
processor selects a process for it. This approach is easier to implement than symmetric
multiprocessing since only one processor has access to the system data structures. But this
approach is also inefficient because a number of processes may block on the master processor.
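The synchronization requirement in the SMP approach can be sketched with a lock around the shared ready queue. This is an illustrative sketch; the class and method names are invented here, not taken from the text.

```python
import threading
from collections import deque

class SharedReadyQueue:
    """A single ready queue shared by self-scheduling (SMP) processors.
    The lock ensures that no two processors dequeue the same process
    and that no enqueue is lost to a concurrent dequeue."""
    def __init__(self):
        self._queue = deque()
        self._lock = threading.Lock()

    def add(self, process):
        with self._lock:
            self._queue.append(process)

    def take(self):
        with self._lock:                 # each processor schedules itself
            return self._queue.popleft() if self._queue else None
```

Each processor would call take() whenever it becomes free; the lock serializes those calls so the queue stays consistent.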

(c) Load Balancing

On SMP systems having a separate ready queue for each processor, it could happen, at a certain
moment of time, that one or more processors are sitting idle while others are overloaded with a
number of processes waiting for them. Thus, in order to achieve better utilization of the multiple
processors, load balancing is required, which means keeping the workload evenly distributed among
the multiple processors. There are two techniques to perform load balancing, namely, push migration
and pull migration.
In the push migration technique, the load is balanced by periodically checking the load of each
processor and shifting the processes from the ready queues of overloaded processors to those of
less overloaded or idle processors. On the other hand, in pull migration, the idle processor itself
pulls a waiting process from a busy processor.
Note: Load balancing is often unnecessary on SMP systems with a single shared ready queue.
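Both techniques amount to moving entries between per-processor ready queues. A minimal sketch of pull migration follows; the function name and queue representation are assumptions made for illustration.

```python
def pull_migration(queues, idle_cpu):
    """The idle CPU pulls the oldest waiting process from the busiest
    ready queue. queues is a list of per-CPU ready queues (lists)."""
    busiest = max(range(len(queues)), key=lambda c: len(queues[c]))
    if busiest != idle_cpu and queues[busiest]:
        queues[idle_cpu].append(queues[busiest].pop(0))

ready = [[], ['P1', 'P2', 'P3'], ['P4']]   # CPU0 is idle, CPU1 is overloaded
pull_migration(ready, 0)                    # CPU0 steals P1 from CPU1
```

Push migration would instead run periodically on some processor, scanning all the queues and pushing work away from the overloaded ones.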

(d) Processor Affinity

Processor affinity means an effort to make a process run on the same processor that executed it
last time. When a process executes on a processor, the data most recently accessed by it is kept in
the cache memory of that processor. The next time the process is run on the same processor, most
of its memory accesses are satisfied in the cache memory only, and as a result, the process execution
speeds up. However, if the process is run on a different processor next time, the cache of the older
processor becomes invalid and the cache of the new processor has to be re-populated, thus delaying
process execution. Therefore, an attempt should be made by the operating system to run a process
on the same processor each time instead of migrating it to another processor.
When an operating system tries to make a process run on the same processor but does not
guarantee to do so always, it is referred to as soft affinity. On the other hand, when an operating
system provides system calls that force a process to run on the same processor, it is referred to as
hard affinity. In soft affinity,there is a possibilityof process migration from one processor to
another whereas in hard affinity, the process is never migrated to another processor.
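The difference can be sketched in the scheduler's CPU-selection step. This is a hypothetical illustration; the function and parameter names are invented, not part of any real scheduler API.

```python
def pick_cpu(idle_cpus, last_cpu, hard=False):
    """Choose a CPU for a process that last ran on last_cpu.
    Soft affinity prefers last_cpu but may migrate; hard affinity
    insists on last_cpu even if that means waiting for it."""
    if last_cpu in idle_cpus:
        return last_cpu                  # warm cache: best case either way
    if hard:
        return None                      # hard affinity: never migrate, so wait
    return min(idle_cpus)                # soft affinity: migrate to an idle CPU
```

With soft affinity the process may occasionally pay the cache re-population cost; with hard affinity it never migrates, at the price of possibly waiting for its preferred processor.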

4.6 REAL-TIME SCHEDULING


In real-time systems, the correctness of the computations not only depends on the output of the
computation but also on the time at which the output is generated. A real-time system has well-
defined, fixed time constraints, and if these constraints are not met, the system is said to have
failed in spite of producing the correct output. In other words, a real-time system functions
correctly only if it returns the correct result within its time constraints. Real-time systems are
of two types: hard real-time systems and soft real-time systems.

4.6.1 Hard Real-time Systems


In hard real-time systems, a process must be accomplished within the specified deadlines;
otherwise, the system is considered to have failed. A process serviced after its deadline has
passed does not make any sense. Industrial control and robotics are two examples of hard real-time
systems.
In hard real-time systems, the scheduler requires that a process declare its deadline requirements
before entering the system. It then employs a technique known as admission control, which uses
a special algorithm to decide whether the process should be admitted. The process is admitted
if the scheduler can ensure that it will be accomplished by its deadline; otherwise, it is rejected.
The scheduler can give assurance of process completion on time only if it knows the exact time
taken by each function the operating system is to perform, and performance of each function is
