Scheduling Algorithm

The document discusses key concepts in CPU scheduling such as process behavior involving alternate cycles of CPU and I/O bursts, when scheduling decisions need to be made, and the role of the dispatcher in executing the selected process. It also outlines important criteria for scheduling algorithms including fairness, CPU utilization, balanced utilization, throughput, turnaround time, waiting time, and response time. The goal of scheduling algorithms is to maximize fairness, CPU and balanced utilization, and throughput while minimizing turnaround, waiting, and response times.

Uploaded by Sujal Bhavsar

Learning Objectives

After reading this chapter, you will be able to:


• Understand the basic concepts of scheduling.
• Discuss the criteria for scheduling.
• Explain various scheduling algorithms.
• Discuss scheduling for multiprocessor systems.
• Explain real-time scheduling.
• Evaluate various scheduling algorithms.

4 Scheduling

4.1 INTRODUCTION

As discussed in Chapter 2, CPU scheduling is the procedure employed for deciding which of the ready processes the CPU should be allocated to. CPU scheduling plays a pivotal role in the basic framework of the operating system owing to the fact that the CPU is one of the primary resources of the computer system. The algorithm used by the scheduler to carry out the selection of a process for execution is known as a scheduling algorithm. A number of scheduling algorithms are available for CPU scheduling. Each scheduling algorithm influences the resource utilization, overall system performance, and quality of service provided to the user. Therefore, one has to consider a number of criteria while selecting a scheduling algorithm for a particular system.
4.2 SCHEDULING CONCEPTS
Before we start discussing the scheduling criteria and scheduling algorithms in a comprehensive manner, we will first take into account some relatively important concepts of scheduling, which are mentioned below.

(a) Process Behaviour


CPU scheduling is greatly affected by how a process behaves during its execution. Almost all processes continue to switch between the CPU (for processing) and I/O devices (for performing I/O) during their execution. The time period elapsed in processing before performing the next I/O operation is known as a CPU burst, and the time period elapsed in performing I/O before the next CPU burst is known as an I/O burst. Generally, the process execution starts with a CPU burst, followed by an I/O burst, then another CPU burst, and so on until the termination of the process. Thus, we can say that process execution comprises alternate cycles of CPU and I/O bursts. Figure 4.1 shows the sequence of CPU and I/O bursts involved in the execution of the following code segment written in C language:

sum = 0;
scanf("%d", &num);
while (i < 10)
{
    sum = sum + num * i;
    i++;
}
printf("%d", sum);

Figure 4.1: Alternate Cycles of CPU and I/O Bursts


The length of the CPU burst and I/O burst varies from process to process, depending on whether the process is CPU-bound or I/O-bound. If the process is CPU-bound, it will have longer CPU bursts as compared to I/O bursts, and vice versa in case the process is I/O-bound. From the scheduling point of view, the length of the CPU burst is taken into consideration and not the length of the I/O burst.

(b) When to Schedule
An important facet of scheduling is to determine when the scheduler should make scheduling decisions. The following circumstances may require the scheduler to make scheduling decisions:
• When a process switches from running to waiting state. This situation may occur in case the process has to wait for I/O or for the termination of its child process, or for some other reason. In such situations, the scheduler has to select some ready process for execution.
• When a process switches from running to ready state due to the occurrence of an interrupt. In such situations, the scheduler may decide to run a process from the ready queue. If the interrupt was caused by some I/O device that has now completed its task, the scheduler may choose the process that was blocked and waiting for the I/O.
• When a process switches from waiting state to ready state, for example, in cases where the process has completed its I/O operation. In such situations, the scheduler may select either the process that has now come to the ready state, or the current process, which may be continued.
• When a process terminates and exits the system. In this case, the scheduler has to select a process for execution from the set of ready processes.

(c) Dispatcher
The CPU scheduler only selects a process to be executed next on the CPU; it cannot assign the CPU to the selected process. The function of setting up the execution of the selected process on the CPU is performed by another module of the operating system, known as the dispatcher. The dispatcher involves the following three steps to perform this function:
1. Context switching is performed. The kernel saves the context of the currently running process and restores the saved state of the process selected by the scheduler. In case the process selected by the short-term scheduler is new, the kernel loads its context.
2. The system switches from kernel mode to user mode, as a user process is to be executed.
3. The execution of the user process selected by the CPU scheduler is started by transferring control either to the instruction that was supposed to be executed at the time the process was interrupted, or to the first instruction, if the process is going to be executed for the first time after its creation.
Note: The amount of time required by the dispatcher to suspend execution of one process and resume execution of another process is known as dispatch latency. Low dispatch latency implies faster start of process execution.

4.3 SCHEDULING CRITERIA


The scheduler must consider the following performance measures and optimization criteria in order to maximize the performance of the system:
• Fairness: It is defined as the degree to which each process gets an equal chance to execute. The scheduler must ensure that each process gets a fair share of CPU time. However, it may treat different categories of processes (batch, real-time, or interactive) in a different manner.
• CPU utilization: It is defined as the percentage of time the CPU is busy in executing processes. For higher utilization, the CPU must be kept as busy as possible, that is, some process should be kept running at all times.
• Balanced utilization: It is defined as the percentage of time all the system resources are busy. It considers not only the CPU utilization but also the utilization of I/O devices, memory, and all other resources. To get more work done by the system, the CPU and I/O devices must be kept running simultaneously. For this, it is desirable to load a mixture of CPU-bound and I/O-bound processes in the memory.
• Throughput: It is defined as the total number of processes that the system completes per unit time. By and large, it depends on the average length of the processes to be executed. For the systems running long processes, throughput will be less as compared to the systems running short processes.
• Turnaround time: It is defined as the amount of time that has rolled by from the time of creation to the termination of a process. To put it differently, it is the difference between the time a process enters the system and the time it exits the system. It includes all the time the process spends waiting in the ready queue to get CPU access, running on the CPU, and waiting in I/O queues. It is inversely proportional to throughput, that is, the more the turnaround time, the less will be the throughput.
• Waiting time: It is defined as the time spent by a process while waiting in the ready queue. It does not take into account the execution time or the time consumed for I/O. Thus, the waiting time of a process can be determined as the difference between turnaround time and processing time. In practice, waiting time is a more accurate measure as compared to turnaround time.
• Response time: It is defined as the time elapsed between the moment when a user
initiates a request and the instant when the system starts responding to this request. For
interactive systems, it is one of the best metrics employed to gauge performance. This is
because in such systems, only the speed with which the system responds to user's request
matters, not the time it takes to output the response.
The basic purpose of a CPU scheduling algorithm is that it should tend to maximize fairness, CPU utilization, balanced utilization, and throughput, and minimize turnaround, waiting, and response times. Practically speaking, no scheduling algorithm optimizes all these criteria simultaneously. In general, the performance of an algorithm is evaluated on the basis of average measures. For example, an algorithm that minimizes the average waiting time is considered to be a good algorithm because this improves the overall efficiency of the system. However, in case of response time, minimizing the average is not a good criterion; rather, it is the variance in the response time of the processes that should be minimized. This is because it is not desirable to have a process with a long response time as compared to other processes.
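The turnaround-time and waiting-time definitions above can be checked with a minimal Python sketch. The function names `turnaround` and `waiting` are my own, and the sample figures reuse the data of Example 4.1, which appears later in this chapter:

```python
# Sketch: computing scheduling metrics from a finished schedule.
# Each tuple is (arrival time, CPU burst, exit time), all in ms.

def turnaround(arrival, exit_time):
    # turnaround time = exit time - entry time
    return exit_time - arrival

def waiting(arrival, burst, exit_time):
    # waiting time = turnaround time - processing time
    return turnaround(arrival, exit_time) - burst

procs = [(0, 15, 15), (2, 6, 21), (3, 7, 28), (5, 5, 33)]
avg_tat = sum(turnaround(a, e) for a, b, e in procs) / len(procs)
avg_wt = sum(waiting(a, b, e) for a, b, e in procs) / len(procs)
print(avg_tat, avg_wt)  # -> 21.75 13.5
```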
4.4 SCHEDULING ALGORITHMS
A wide variety of algorithms are used for CPU scheduling. These scheduling algorithms fall into two categories, namely, non-preemptive and preemptive.
Non-preemptive scheduling algorithms: Once the CPU is allocated to a process, it cannot be withdrawn until the process voluntarily releases it (in case the process has to wait for I/O or some other event) or the process terminates. In other words, we can say that the decision to schedule a process is made only when the currently running process either switches to the waiting state or terminates. In both cases, the CPU executes some other process from the set of ready processes. Some examples of non-preemptive scheduling algorithms are first come first served (FCFS), shortest job first (SJF), priority-based scheduling, and highest response ratio next (HRN) scheduling.
Note: A non-preemptive scheduling algorithm is also known as a cooperative or voluntary scheduling algorithm.
Preemptive scheduling algorithms: The CPU can be forcibly taken back from the currently running process before its completion and allocated to some other process. The preempted process is put back in the ready queue and resumes its execution when it is scheduled again. Thus, a process may be scheduled many times before its completion. In preemptive scheduling, the decision to schedule another process is made whenever an interrupt occurs causing the currently running process to switch to the ready state, or when a process having higher priority than the currently running process enters the system. Some examples of preemptive scheduling algorithms are shortest remaining time next (SRTN) scheduling and round robin (RR) scheduling.

4.4.1 First-Come First-Served (FCFS) Scheduling


FCFS is one of the simplest scheduling algorithms. As the name implies, the processes are executed in the order of their arrival in the ready queue, which means the process that enters the ready queue first gets the CPU first. FCFS is a non-preemptive scheduling algorithm.
Therefore, once a process gets the CPU, it retains the control of CPU until it blocks or terminates.
To implement FCFS scheduling, the implementation of ready queue is managed as a FIFO
(First-in First-out) queue. When the first process enters the ready queue, it immediately gets
the CPU and starts executing. Meanwhile, other processes enter the system and are added to
the end of queue by inserting their PCBs in the queue. When the currently running process
completes or blocks, the CPU is allocated to the process at the forefront of the queue and its
PCB is removed from the queue. In case a currently running process was blocked and later comes
to the ready state, its PCB is linked to the end of queue.
Example 4.1: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU burst (in milliseconds) as shown in the following table.

Process   Arrival time   CPU burst (ms)
P1        0              15
P2        2              6
P3        3              7
P4        5              5
How will these processes be scheduled according to the FCFS scheduling algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart:

P1: 0-15, P2: 15-21, P3: 21-28, P4: 28-33
Initially, P1 enters the ready queue at t = 0 and the CPU is allocated to it. While P1 is executing, P2, P3, and P4 enter the ready queue at t = 2, t = 3, and t = 5, respectively. When P1 completes, the CPU is allocated to P2 as it has entered before P3 and P4. When P2 completes, P3 gets the CPU, after which P4 gets the CPU.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (15 - 0) = 15 ms
Turnaround time for P2 = (21 - 2) = 19 ms
Turnaround time for P3 = (28 - 3) = 25 ms
Turnaround time for P4 = (33 - 5) = 28 ms
Average turnaround time = (15 + 19 + 25 + 28)/4 = 21.75 ms
Since waiting time = turnaround time — processing time, therefore:
Waiting time for P1 = (15 - 15) = 0 ms
Waiting time for P2 = (19 - 6) = 13 ms
Waiting time for P3 = (25 - 7) = 18 ms
Waiting time for P4 = (28 - 5) = 23 ms
Average waiting time = (0+ 13 + 18 + 23)/4 = 13.5 ms
The performance of the FCFS scheduling algorithm largely depends on the order of arrival of processes in the ready queue: the average waiting time is high when a process having a long CPU burst enters before processes having short CPU bursts, and lower in the reverse case. To illustrate this, assume that the processes (shown in Example 4.1) enter the ready queue in the order P4, P2, P3, and P1. Now, the processes will be scheduled as shown in the following Gantt chart.

P4: 0-5, P2: 5-11, P3: 11-18, P1: 18-33

Using the above formulae, the average turnaround time and average waiting time can be computed as:
Average turnaround time = [(5 - 0) + (11 - 2) + (18 - 3) + (33 - 5)]/4 = 14.25 ms
Average waiting time = [(5 - 5) + (9 - 6) + (15 - 7) + (28 - 15)]/4 = 6 ms
It is clear that if processes having shorter CPU bursts execute before those having longer CPU bursts, the average waiting time may reduce significantly.
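The FCFS policy described above can be reproduced with a small Python simulation. This is only a sketch; the function name `fcfs` is my own, and the data is that of Example 4.1:

```python
def fcfs(processes):
    """Run processes to completion in arrival order (non-preemptive).
    processes: list of (name, arrival time, CPU burst) tuples, in ms."""
    time, exits = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU idles until the process arrives
        time += burst               # runs to completion once scheduled
        exits[name] = time
    return exits

procs = [("P1", 0, 15), ("P2", 2, 6), ("P3", 3, 7), ("P4", 5, 5)]
exits = fcfs(procs)                 # {'P1': 15, 'P2': 21, 'P3': 28, 'P4': 33}
tat = {n: exits[n] - a for n, a, b in procs}   # turnaround times
wt = {n: tat[n] - b for n, a, b in procs}      # waiting times
print(sum(tat.values()) / 4, sum(wt.values()) / 4)  # -> 21.75 13.5
```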

Advantages
• It is easy to understand and implement, as processes are simply added at the end of the ready queue and removed from its front; no process in between these two points in the queue needs to be accessed.
• It is well suited for batch systems, where longer waiting times for each process are often acceptable.
Disadvantages
• The average waiting time is not minimal. Therefore, this scheduling algorithm is never recommended where performance is a major issue.
• It reduces the CPU and I/O device utilization under some circumstances. For example, assume that there is one long CPU-bound process and many short I/O-bound processes in the ready queue. Now, it may happen that while the CPU-bound process is executing, the I/O-bound processes complete their I/O and come to the ready queue for execution. There they have to wait until the CPU-bound process releases the CPU; besides, the I/O devices also remain idle during this time. When the CPU-bound process needs to perform I/O, it comes to the device queue and the CPU is allocated to the I/O-bound processes. As they need little CPU time, they execute quickly and come back to the device queue, thereby leaving the CPU idle. Then the CPU-bound process enters the ready queue and is allocated the CPU, which again results in the I/O-bound processes waiting in the ready queue at some point of time. This happens again and again until the CPU-bound process is done, thus resulting in low CPU and I/O device utilization.
• It is not suitable for time-sharing systems, where each process should get the same amount of CPU time.

4.4.2 Shortest Job First (SJF) Scheduling


The shortest job first, also known as shortest process next (SPN) or shortest request next (SRN), is a non-preemptive scheduling algorithm that schedules the processes according to the length of the CPU burst they require. At any point of time, among all the ready processes, the one having the shortest CPU burst is scheduled first. Thus, a process has to wait until all the processes shorter than it have been executed. In case two processes have the same CPU burst, they are scheduled in the FCFS order.
Example 4.2: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU burst (in milliseconds) as shown in the following table:

Process   Arrival time   CPU burst (ms)
P1        0              7
P2        1              5
P3        3              2
P4        4              3

How will these processes be scheduled according to the SJF scheduling algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart:

P1: 0-7, P3: 7-9, P4: 9-12, P2: 12-17

Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. While it is executing, P2, P3, and P4 enter the queue at t = 1, t = 3, and t = 4, respectively. When the CPU becomes free, that is, at t = 7, it is allocated to P3 because it has the shortest CPU burst among the three waiting processes. When P3 completes, the CPU is allocated first to P4 and then to P2.
Since turnaround time = exit time — entry time, therefore:
Turnaround time for P1 = (7 - 0) = 7 ms
Turnaround time for P2 = (17 - 1) = 16 ms
Turnaround time for P3 = (9 - 3) = 6 ms
Turnaround time for P4 = (12 - 4) = 8 ms
Average turnaround time = (7 + 16 + 6 + 8)/4 = 9.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (7 - 7) = 0 ms
Waiting time for P2 = (16 - 5) = 11 ms
Waiting time for P3 = (6 - 2) = 4 ms
Waiting time for P4 = (8 - 3) = 5 ms
Average waiting time = (0 + 11 + 4 + 5)/4 = 5 ms
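The SJF selection rule can be sketched in Python as follows (the function name `sjf` is my own; the data reproduces Example 4.2):

```python
def sjf(processes):
    """Non-preemptive SJF: whenever the CPU is free, run the ready
    process with the shortest CPU burst (FCFS order breaks ties)."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, exits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        # min() returns the first minimum, so equal bursts keep FCFS order
        proc = min(ready, key=lambda p: p[2])
        time += proc[2]
        exits[proc[0]] = time
        pending.remove(proc)
    return exits

procs = [("P1", 0, 7), ("P2", 1, 5), ("P3", 3, 2), ("P4", 4, 3)]
exits = sjf(procs)                    # {'P1': 7, 'P3': 9, 'P4': 12, 'P2': 17}
avg_tat = sum(exits[n] - a for n, a, b in procs) / 4
avg_wt = sum(exits[n] - a - b for n, a, b in procs) / 4
print(avg_tat, avg_wt)  # -> 9.25 5.0
```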
Advantages
• It minimizes the variance in waiting and turnaround times. In fact, it is optimal with respect to average waiting time if all processes are available at the same time. This is due to the fact that short processes are made to run before longer ones, which decreases the waiting time for short processes and increases the waiting time for long processes. However, the reduction in waiting time is more than the increment and thus, the average waiting time decreases.

Disadvantages
• It is difficult to implement, as it needs to know the length of the CPU burst of processes in advance. In practice, it is difficult to obtain prior knowledge of the required processing times of processes. Many systems expect users to provide estimates of the CPU burst of processes, which may not always be correct.
• It does not favour processes having longer CPU bursts. This is because long processes will not be allowed to get the CPU as long as short processes continue to enter the ready queue. This results in starvation of long processes.

4.4.3 Shortest Remaining Time Next (SRTN) Scheduling


The shortest remaining time next, also known as shortest time to go (STG), is a preemptive version of the SJF scheduling algorithm. It takes into account the remaining CPU burst of the processes rather than the whole length, in order to schedule them. The scheduler always chooses for execution the process that has the shortest remaining processing time. While a process is being executed, the CPU can be taken back from it and assigned to some newly arrived process, provided the CPU burst of the new process is shorter than the remaining CPU burst of the current process. Note that if, at any point of time, the remaining CPU burst of two processes becomes equal, they are scheduled in the FCFS order.
Example 4.3: Consider the same set of processes, their arrival times and CPU burst as in
Example 4.2. How will these processes be scheduled according to SRTN scheduling
algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart.

P1: 0-1, P2: 1-3, P3: 3-5, P2: 5-8, P4: 8-11, P1: 11-17

Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. While it is executing, at time t = 1, P2 with a CPU burst of 5 ms enters the queue. At that time, the remaining CPU burst of P1 is 6 ms, which is greater than that of P2. Therefore, the CPU is taken back from P1 and allocated to P2. During the execution of P2, P3 enters at t = 3 with a CPU burst of 2 ms. Again, the CPU is switched from P2 to P3, as the remaining CPU burst of P2 at t = 3 is 3 ms, which is greater than that of P3. When, at time t = 4, P4 with a CPU burst of 3 ms enters the queue, the CPU is not taken from the currently running process (that is, P3), because its remaining CPU burst is 1 ms, which is shorter than that of P4. When P3 completes, there are three processes P1 (6 ms), P2 (3 ms), and P4 (3 ms) in the queue. To break the tie between P2 and P4, the scheduler takes into consideration their arrival order, and the CPU is allocated first to P2, then to P4, and finally to P1.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (17 - 0) = 17 ms
Turnaround time for P2 = (8 - 1) = 7 ms
Turnaround time for P3 = (5 - 3) = 2 ms
Turnaround time for P4 = (11 - 4) = 7 ms
Average turnaround time = (17 + 7 + 2 + 7)/4 = 8.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (17 - 7) = 10 ms
Waiting time for P2 = (7 - 5) = 2 ms
Waiting time for P3 = (2 - 2) = 0 ms
Waiting time for P4 = (7 - 3) = 4 ms
Average waiting time = (10 + 2 + 0 + 4)/4 = 4 ms
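The preemptive behaviour of SRTN can be sketched by simulating in 1 ms ticks (the function name `srtn` is my own; the data reproduces Example 4.3):

```python
def srtn(processes):
    """SRTN: simulate in 1 ms ticks; always run the ready process with
    the least remaining burst (earlier arrival wins ties, as in the text)."""
    arrival = {n: a for n, a, b in processes}
    remaining = {n: b for n, a, b in processes}
    time, exits = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idle until some process arrives
            time += 1
            continue
        cur = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[cur] -= 1           # run the chosen process for one tick
        time += 1
        if remaining[cur] == 0:
            del remaining[cur]
            exits[cur] = time
    return exits

procs = [("P1", 0, 7), ("P2", 1, 5), ("P3", 3, 2), ("P4", 4, 3)]
exits = srtn(procs)                   # {'P3': 5, 'P2': 8, 'P4': 11, 'P1': 17}
avg_tat = sum(exits[n] - a for n, a, b in procs) / 4
avg_wt = sum(exits[n] - a - b for n, a, b in procs) / 4
print(avg_tat, avg_wt)  # -> 8.25 4.0
```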

Advantages
• A long process that is near completion may be favoured over short processes entering the system. This results in an improvement in the turnaround time of the long process.

Disadvantages
• Like SJF, it also requires an estimate of the next CPU burst of a process in advance.
• Favouring a long process that is nearing completion over the several short processes entering the system may adversely affect the turnaround times of the short processes.
• It favours only those long processes that are just about to complete, and not those that have just started their operation. Thus, starvation of long processes may still occur.

4.4.4 Priority-based Scheduling


In a priority-based scheduling algorithm, each process is assigned a priority, with higher priority processes being scheduled before lower priority processes. At any point of time, the process with the highest priority among all the ready processes is scheduled. In case two processes have the same priority, they are scheduled in the FCFS order.
Priority scheduling may be preemptive or non-preemptive. The choice is made whenever a new process enters the ready queue while some process is executing. If the newly arrived process has higher priority than the currently running process, the preemptive priority scheduling algorithm preempts the currently running process and allocates the CPU to the new process. On the other hand, a non-preemptive scheduling algorithm allows the currently running process to complete its execution, and the new process has to wait for the CPU.
Note: Both SJF and SRTN are special cases of priority-based scheduling, where the priority of a process is equal to the inverse of the next CPU burst: the lower the CPU burst, the higher will be the priority.
A major design issue related with priority scheduling is how to compute the priorities of the processes. The priority can be assigned to a process either internally, defined by the system depending on the process's characteristics like memory usage, I/O frequency, usage cost, etc., or externally, defined by the user executing the process.
Example 4.4: Consider four processes P1, P2, P3, and P4 with their arrival times, required CPU burst (in milliseconds), and priorities as shown in the following table:

Process   Arrival time   CPU burst (ms)   Priority
P1        0              7                4
P2        1              4                3
P3        3              3                1
P4        4              2                2

Assuming that the lower priority number represents higher priority, how will these processes be scheduled according to a non-preemptive as well as a preemptive priority scheduling algorithm? Compute the average waiting time and average turnaround time in both cases.
Solution:
Non-preemptive priority scheduling algorithm
The processes will be scheduled as depicted in the following Gantt chart.

P1: 0-7, P3: 7-10, P4: 10-12, P2: 12-16

Initially, P1 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. While it is executing, P2, P3, and P4 enter the queue at t = 1, t = 3, and t = 4, respectively. When the CPU becomes free, that is, at t = 7, it is allocated to P3 because it has the highest priority (that is, 1) among the three waiting processes. When P3 completes, the CPU is allocated to the next lower priority process, that is, P4, and finally the lowest priority process P2 is executed.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (7 - 0) = 7 ms
Turnaround time for P2 = (16 - 1) = 15 ms
Turnaround time for P3 = (10 - 3) = 7 ms
Turnaround time for P4 = (12 - 4) = 8 ms
Average turnaround time = (7 + 15 + 7 + 8)/4 = 9.25 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (7 - 7) = 0 ms
Waiting time for P2 = (15 - 4) = 11 ms
Waiting time for P3 = (7 - 3) = 4 ms
Waiting time for P4 = (8 - 2) = 6 ms
Average waiting time = (0 + 11 + 4 + 6)/4 = 5.25 ms
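The non-preemptive variant can be sketched in Python as below (the function name `priority_np` is my own; the data reproduces Example 4.4, with a lower number meaning higher priority):

```python
def priority_np(processes):
    """Non-preemptive priority: when the CPU is free, run the ready
    process with the smallest priority number (lower number = higher)."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, exits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        proc = min(ready, key=lambda p: p[3])   # highest priority first
        time += proc[2]                         # runs to completion
        exits[proc[0]] = time
        pending.remove(proc)
    return exits

# (name, arrival, burst, priority) as in Example 4.4
procs = [("P1", 0, 7, 4), ("P2", 1, 4, 3), ("P3", 3, 3, 1), ("P4", 4, 2, 2)]
exits = priority_np(procs)            # {'P1': 7, 'P3': 10, 'P4': 12, 'P2': 16}
avg_tat = sum(exits[n] - a for n, a, b, pr in procs) / 4
avg_wt = sum(exits[n] - a - b for n, a, b, pr in procs) / 4
print(avg_tat, avg_wt)  # -> 9.25 5.25
```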
Preemptive priority scheduling algorithm
The processes will be scheduled as depicted in the following Gantt chart.

P1: 0-1, P2: 1-3, P3: 3-6, P4: 6-8, P2: 8-10, P1: 10-16

Initially, P1 of priority 4 enters the ready queue at t = 0 and gets the CPU as there are no other processes in the queue. While it is executing, at time t = 1, P2 of priority 3 (higher than that of the currently running process P1) enters the queue. Therefore, P1 is preempted (with a remaining CPU burst of 6 ms) and the CPU is allocated to P2. During the execution of P2, P3 of priority 1 enters at t = 3. Again, the CPU switches from P2 (with a remaining CPU burst of 2 ms) to P3, since P3 enjoys higher priority than P2. However, when, at time t = 4, P4 with priority 2 enters the queue, the CPU is not assigned to it because it has lower priority than the currently running process P3. When P3 completes, there are three processes P1, P2, and P4 in the ready queue with priorities 4, 3, and 2, respectively. The CPU is allocated first to P4, then to P2, and finally to P1.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (16 - 0) = 16 ms
Turnaround time for P2 = (10 - 1) = 9 ms
Turnaround time for P3 = (6 - 3) = 3 ms
Turnaround time for P4 = (8 - 4) = 4 ms
Average turnaround time = (16 + 9 + 3 + 4)/4 = 8 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (16 - 7) = 9 ms
Waiting time for P2 = (9 - 4) = 5 ms
Waiting time for P3 = (3 - 3) = 0 ms
Waiting time for P4 = (4 - 2) = 2 ms
Average waiting time = (9 + 5 + 0 + 2)/4 = 4 ms
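The preemptive variant can likewise be sketched with a tick-by-tick simulation (the function name `priority_preemptive` is my own; the data again reproduces Example 4.4):

```python
def priority_preemptive(processes):
    """Preemptive priority in 1 ms ticks: at every tick, run the ready
    process with the smallest priority number (arrival order on ties)."""
    info = {n: (a, pr) for n, a, b, pr in processes}
    remaining = {n: b for n, a, b, pr in processes}
    time, exits = 0, {}
    while remaining:
        ready = [n for n in remaining if info[n][0] <= time]
        if not ready:
            time += 1
            continue
        cur = min(ready, key=lambda n: (info[n][1], info[n][0]))
        remaining[cur] -= 1           # run the chosen process for one tick
        time += 1
        if remaining[cur] == 0:
            del remaining[cur]
            exits[cur] = time
    return exits

procs = [("P1", 0, 7, 4), ("P2", 1, 4, 3), ("P3", 3, 3, 1), ("P4", 4, 2, 2)]
exits = priority_preemptive(procs)    # {'P3': 6, 'P4': 8, 'P2': 10, 'P1': 16}
avg_tat = sum(exits[n] - a for n, a, b, pr in procs) / 4
avg_wt = sum(exits[n] - a - b for n, a, b, pr in procs) / 4
print(avg_tat, avg_wt)  # -> 8.0 4.0
```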

Advantages
• Important processes are never made to wait merely because less important processes are
under execution.

Disadvantages
• It suffers from the problem of starvation of lower priority processes, since the continuous arrival of higher priority processes will indefinitely prevent lower priority processes from acquiring the CPU. One possible solution to this problem is aging, which is a process of gradually raising the priority of a low priority process in step with the increase in its waiting time. If the priority of a low priority process is increased after each fixed time interval, it is ensured that at some time it will become the highest priority process and will finally get executed.

4.4.5 Highest Response Ratio Next (HRN) Scheduling

The highest response ratio next scheduling is a non-preemptive scheduling algorithm that schedules the processes according to their response ratio. Whenever the CPU becomes available, the process having the highest value of response ratio among all the ready processes is scheduled next. The response ratio of a process in the queue is computed by using the following equation:

Response ratio = (Time since arrival + CPU burst) / CPU burst

Initially, when a process enters, its response ratio is 1. It goes on increasing at the rate of (1/CPU burst) as the process's waiting time increases.
Example 4.5: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU burst (in milliseconds) as shown in the following table:

Process   Arrival time   CPU burst (ms)
P1        0              3
P2        2              4
P3        3              5
P4        4              2

How will these processes be scheduled according to the HRN scheduling algorithm? Compute the average waiting time and average turnaround time.
Solution: The processes will be scheduled as depicted in the following Gantt chart.

P1: 0-3, P2: 3-7, P4: 7-9, P3: 9-14
Initially, P1 enters the ready queue at t = 0 and the CPU is allocated to it. By the time P1 completes, P2 and P3 have arrived at t = 2 and t = 3, respectively. At t = 3, the response ratio of P2 is ((3-2)+4)/4 = 1.25 and that of P3 is 1, as it has just arrived. Therefore, P2 is scheduled next. During the execution of P2, P4 enters the queue at t = 4. When P2 completes at t = 7, the response ratio of P3 is ((7-3)+5)/5 = 1.8 and that of P4 is ((7-4)+2)/2 = 2.5. As P4 has the higher response ratio, the CPU is allocated to it and, after its completion, P3 is executed.
Since turnaround time = exit time — entry time, therefore:
Turnaround time for P1 = (3 - 0) = 3 ms
Turnaround time for P2 = (7 - 2) = 5 ms
Turnaround time for P3 = (14 - 3) = 11 ms
Turnaround time for P4 = (9 - 4) = 5 ms
Average turnaround time = (3 + 5 + 11 + 5)/4 = 6 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (3 - 3) = 0 ms
Waiting time for P2 = (5 - 4) = 1 ms
Waiting time for P3 = (11 - 5) = 6 ms
Waiting time for P4 = (5 - 2) = 3 ms
Average waiting time = (0 + 1 + 6 + 3)/4 = 2.5 ms
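The response-ratio rule can be sketched in Python as below (the function name `hrn` is my own; the data reproduces Example 4.5):

```python
def hrn(processes):
    """Non-preemptive HRN: when the CPU frees up, run the ready process
    with the highest response ratio = (time since arrival + burst) / burst."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, exits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        proc = max(ready, key=lambda p: ((time - p[1]) + p[2]) / p[2])
        time += proc[2]               # runs to completion once scheduled
        exits[proc[0]] = time
        pending.remove(proc)
    return exits

procs = [("P1", 0, 3), ("P2", 2, 4), ("P3", 3, 5), ("P4", 4, 2)]
exits = hrn(procs)                    # {'P1': 3, 'P2': 7, 'P4': 9, 'P3': 14}
avg_tat = sum(exits[n] - a for n, a, b in procs) / 4
avg_wt = sum(exits[n] - a - b for n, a, b in procs) / 4
print(avg_tat, avg_wt)  # -> 6.0 2.5
```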

Advantages
• It favours short processes. This is because, with an increase in waiting time, the response ratio of short processes increases more rapidly than that of long processes. Thus, they are scheduled earlier than long processes.
• Unlike SJF, starvation does not occur, since with an increase in waiting time, the response ratio of long processes also increases and eventually they are scheduled.

Disadvantages
• Like SJF and SRTN, it also requires an estimate of the expected service time (CPU burst) of a process in advance.

4.4.6 Round Robin (RR) Scheduling

The round robin scheduling is one of the most widely used preemptive scheduling algorithms. It considers all processes equally important and treats them in an impartial manner. Each process in the ready queue gets a small amount of CPU time (generally from 10 to 100 ms), known as a time slice or time quantum, for its execution. If the process has not executed completely when the time slice ends, it is preempted and the CPU is allocated to the next process in the ready queue. However, if the process blocks or terminates before the time slice expires, the CPU is immediately switched to the next process in the ready queue.
To implement the round robin scheduling algorithm, the ready queue is treated as a circular queue. All the processes arriving in the ready queue are put at the end of the queue. The CPU is allocated to the first process in the queue, which executes until its time slice expires. If the CPU burst of the process is less than one time quantum, the process itself releases the CPU and is deleted from the queue; the CPU is then allocated to the next process in the queue. However, if the process does not execute completely within the time slice, an interrupt occurs when the time slice expires. The currently running process is preempted, put back at the end of the queue, and the CPU is allocated to the next process in the queue. The preempted process again gets the CPU after all the processes before it in the queue have been allocated their CPU time slice. The whole process continues until all the processes in the queue have been executed.
Example 4.6: Consider four processes P1, P2, P3, and P4 with their arrival times and required CPU burst (in milliseconds) as shown in the following table:

Process   Arrival time   CPU burst (ms)
P1        0              10
P2        1              5
P3        3              2
P4        4              3

Assuming that the time slice is 3 ms how will these processes be scheduled
according to round robin scheduling algorithm? Compute the average waiting
time and average turnaround time. Solution: Tije processes will be scheduled as
following
depicted in the Gantt chart.

| P1 | P2 | P3 | P1 | P4 | P2 | P1 | P1 |
0    3    6    8    11   14   16   19   20

Initially, P1 enters the ready queue at t = 0 and gets the CPU for 3 ms. While it executes, P2 and P3 enter the queue at t = 1 and t = 3, respectively. Since P1 does not execute completely within 3 ms, an interrupt occurs when the time slice gets over. P1 is preempted (with remaining CPU burst of 7 ms) and put back in the queue after P3, because P4 has not yet entered the queue. The CPU is then allocated to P2. During the execution of P2, P4 enters the queue at t = 4 and is put at the end of the queue, behind P1. When P2 times out, it is preempted (with remaining CPU burst of 2 ms) and sent to the end of the queue, behind P4. The CPU is allocated to the next process in the queue, that is, P3, which executes completely before its time slice expires. Thus, the CPU is allocated to the next process in the queue, which is P1. P1 again executes for 3 ms, is then preempted (with remaining CPU burst of 4 ms) and joins the end of the queue, behind P2. The CPU is now allocated to P4, which executes completely within its time slice, so the CPU is allocated to the process at the head of the queue, that is, P2. As P2 completes before the time out occurs, the CPU is switched to P1 at t = 16 for another 3 ms. When the time slice expires, the CPU is reallocated to P1, since it is the only process remaining; there is no queue left.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (20 - 0) = 20 ms
Turnaround time for P2 = (16 - 1) = 15 ms
Turnaround time for P3 = (8 - 3) = 5 ms
Turnaround time for P4 = (14 - 4) = 10 ms
Average turnaround time = (20 + 15 + 5 + 10)/4 = 12.5 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (20 - 10) = 10 ms
Waiting time for P2 = (15 - 5) = 10 ms
Waiting time for P3 = (5 - 2) = 3 ms
Waiting time for P4 = (10 - 3) = 7 ms
Average waiting time = (10 + 10 + 3 + 7)/4 = 7.5 ms
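The schedule of Example 4.6 can be reproduced with a short simulation. The sketch below is a minimal plain-Python illustration (the function name `round_robin` and the tuple layout are choices of this sketch, not taken from the text): it admits arrivals, runs each process for at most one time quantum, and re-queues preempted processes at the tail, with processes arriving during a slice joining the queue ahead of the preempted one, exactly as in the walkthrough above.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round robin simulation. processes: list of (name, arrival, burst);
    returns {name: (turnaround_time, waiting_time)} in ms."""
    procs = sorted(processes, key=lambda p: p[1])
    burst = {n: b for n, _, b in procs}
    arrive = {n: a for n, a, _ in procs}
    remaining = dict(burst)
    ready, done, t, i = deque(), {}, 0, 0
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= t:   # admit arrivals up to t
            ready.append(procs[i][0]); i += 1
        if not ready:                                # CPU idle until next arrival
            t = procs[i][1]; continue
        name = ready.popleft()
        run = min(quantum, remaining[name])          # one time slice at most
        t += run; remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:   # arrivals during the slice
            ready.append(procs[i][0]); i += 1        # queue ahead of the preempted one
        if remaining[name] == 0:
            tat = t - arrive[name]                   # turnaround = exit - entry
            done[name] = (tat, tat - burst[name])    # waiting = turnaround - burst
        else:
            ready.append(name)                       # preempted: back to the end
    return done

stats = round_robin([("P1", 0, 10), ("P2", 1, 5), ("P3", 3, 2), ("P4", 4, 3)], 3)
print(sum(t for t, _ in stats.values()) / 4)   # 12.5 (average turnaround)
print(sum(w for _, w in stats.values()) / 4)   # 7.5  (average waiting)
```

Running it with the four processes of Example 4.6 and a 3 ms quantum yields the same per-process turnaround and waiting times computed above.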
The performance of round robin scheduling is greatly affected by the size of the time
quantum. If the time quantum is too small, a number of context switches occur which, in
turn, increase the system overhead. More time will be spent in performing context
switching rather than executing the processes. On the other hand, if the time quantum is
too large, the performance of round robin simply degrades to FCFS.
Note: If the time quantum is too small, say 1 ms, the round robin scheduling is called processor sharing.

Advantages
• It is efficient for time sharing systems where the CPU time is divided among the
competing processes.
• It ensures impartiality, as all processes are treated equally regardless of priority.

Disadvantages
• Processes (even short ones) may take a long time to execute. This decreases the
system throughput.
• It needs some extra hardware support, such as a timer to cause interrupt after each time
out.
Note: Ideally, the time quantum should be such that 80% of the processes can complete their execution
within the given time quantum.

4.4.7 Multilevel Queue Scheduling

Multilevel queue scheduling is designed for environments where processes can be categorized into different groups on the basis of their different response time requirements or different scheduling needs. One possible categorization may be based on whether the process is a system process, batch process or interactive process (see Figure 4.2). Each group of processes is associated with a specific priority. For example, the system processes may have the highest priority whereas the batch processes may have the lowest priority.

To implement the multilevel queue scheduling algorithm, the ready queue is partitioned into as many separate queues as there are groups. Whenever a new process enters, it is assigned permanently to one of the queues depending on its properties, including memory requirements, type and priority. Each ready queue has its own scheduling algorithm. For example, the FCFS scheduling algorithm may be used for batch processes, while the round robin scheduling algorithm may be used for interactive processes. In addition, the processes in higher priority queues are executed before those in lower priority queues. This implies that no batch process can run unless all the system and interactive processes have been executed completely. Moreover, if a process enters a higher priority queue while a process from a lower priority queue is executing, the lower priority process is preempted in order to allocate the CPU to the higher priority process.
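The dispatch rule just described, always serving the highest priority non-empty queue, can be sketched as follows. This is a minimal illustration: the queue names and the `pick_next` helper are inventions of this sketch, and each queue's own internal discipline (FCFS, round robin) is abstracted away.

```python
from collections import deque

# Separate ready queues, listed from highest to lowest priority.
system_q, interactive_q, batch_q = deque(), deque(), deque()
queues = [system_q, interactive_q, batch_q]

def pick_next():
    """Scan the queues in priority order and dispatch the head of the
    first non-empty one; batch work runs only when the others are empty."""
    for q in queues:
        if q:
            return q.popleft()
    return None            # every queue empty: the CPU idles

batch_q.append("backup_job")
interactive_q.append("editor")
system_q.append("pager_daemon")
print(pick_next())   # pager_daemon  (system processes go first)
print(pick_next())   # editor
print(pick_next())   # backup_job
```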
[Figure 4.2: Multilevel Queue Scheduling. Separate ready queues for system processes (highest priority), interactive processes and batch processes (lowest priority), all feeding the CPU.]

Advantages

• Processes are permanently assigned to their respective queues and do not move between queues.

Disadvantages
• The processes in lower priority queues may be starved of CPU access in a situation where processes are continuously arriving in higher priority queues. One possible way to prevent such starvation is to time slice among the queues: each queue gets a certain share of CPU time, which it schedules among the processes in it. Note that the time slices of different priority queues may differ.

4.4.8 Multilevel Feedback Queue Scheduling
The multilevel feedback queue scheduling, also known as multilevel adaptive scheduling, is an improved version of the multilevel queue scheduling algorithm. In this scheduling, processes are not permanently assigned to queues; instead, they are allowed to move between queues. The decision to move a process between queues is based on the time taken by it in execution and its waiting time. If a process uses too much CPU time, it is moved to a lower priority queue. Similarly, a process that has been waiting too long in a lower priority queue is moved to a higher priority queue.
in Q2, that is, P1. While P1 is executing, P3 enters Q1 at t = 25, so P1 is preempted, placed in Q2 after P2, and P3 starts executing. As P3 executes completely within its time slice, the scheduler picks the first process in Q2, which is P2, at t = 29. While P2 is executing, P4 enters Q1 at t = 32, because of which P2 is preempted and placed after P1 in Q2. The CPU is assigned to P4 for 5 ms, and at t = 37, P4 is moved to Q2 and placed after P2. At the same time, the CPU is allocated to P1 (the first process in Q2). When it completes at t = 42, the next process in Q2, which is P2, starts executing. When it completes, the last process in Q2, that is, P4, is executed.
Since turnaround time = exit time - entry time, therefore:
Turnaround time for P1 = (42 - 0) = 42 ms
Turnaround time for P2 = (52 - 12) = 40 ms
Turnaround time for P3 = (29 - 25) = 4 ms
Turnaround time for P4 = (57 - 32) = 25 ms
Average turnaround time = (42 + 40 + 4 + 25)/4 = 27.75 ms
Since waiting time = turnaround time - processing time, therefore:
Waiting time for P1 = (42 - 25) = 17 ms
Waiting time for P2 = (40 - 18) = 22 ms
Waiting time for P3 = (4 - 4) = 0 ms
Waiting time for P4 = (25 - 10) = 15 ms
Average waiting time = (17 + 22 + 0 + 15)/4 = 13.5 ms

Advantages
• It is fair to I/O-bound (short) processes as these processes need not wait too
long and are executed quickly.
• It prevents starvation by moving a lower priority process to a higher priority queue if it has
been waiting for too long.

Disadvantages
• It is the most complex scheduling algorithm.
• Moving the processes between queues causes a number of context switches, which results in
an increased overhead.
• The turnaround time for long processes may increase significantly.
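The movement between queues can be sketched with two levels. This is an illustration only: the slice lengths, the demotion-on-timeout rule, and the `promote` aging helper are assumptions of this sketch, not parameters given in the text.

```python
from collections import deque

QUANTUM = {0: 5, 1: 10}    # assumed time slices: short at high priority

def mlfq_step(queues):
    """Run the head of the highest-priority non-empty queue for one
    slice. A process that needs more than its slice is demoted one
    level; queues maps level -> deque of (name, remaining_burst)."""
    for level in sorted(queues):
        if queues[level]:
            name, rem = queues[level].popleft()
            run = min(QUANTUM[level], rem)
            rem -= run
            if rem > 0:                            # used the whole slice
                lower = min(level + 1, max(queues))
                queues[lower].append((name, rem))  # demote
            return name, run
    return None, 0

def promote(queues, name):
    """Aging: move a process that has waited too long up one level."""
    for level in sorted(queues):
        if level == 0:
            continue
        for entry in list(queues[level]):
            if entry[0] == name:
                queues[level].remove(entry)
                queues[level - 1].append(entry)
                return

queues = {0: deque([("P1", 12)]), 1: deque()}
print(mlfq_step(queues))   # ('P1', 5): slice used up, P1 demoted with 7 ms left
print(mlfq_step(queues))   # ('P1', 7): finishes from the lower queue
```

Demotion keeps CPU-bound processes out of the way of short interactive ones, while `promote` is the anti-starvation mechanism the text describes.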

4.5 MULTIPLE PROCESSOR SCHEDULING

So far, we have discussed the scheduling of a single processor among the processes in the ready queue. In case there is more than one processor, different scheduling mechanisms need to be incorporated. In this section, we will concentrate on homogeneous multiprocessor systems, that is, systems in which all processors are identical; hence, any process in the ready queue can be assigned to any available processor. The scheduling criteria for multiprocessor scheduling are the same as those for single processor scheduling, but there are also some new considerations, which are discussed here.

(a) Implementation of Ready Queue

In multiprocessor systems, the ready queue can be implemented in two ways. Either there may be a separate ready queue for each processor (see Figure 4.4(a)), or there may be a single shared ready queue for all the processors (see Figure 4.4(b)). In the former case, it may happen that at any given moment the ready queue of one processor is empty while another processor is very busy executing processes. To obviate this sort of situation, the latter arrangement is preferred, in which all the processes enter into one queue and are scheduled on any available processor.

[Figure 4.4: Implementation of Ready Queue in Multiprocessor Systems. (a) A separate ready queue per processor (CPU1 to CPUn); (b) a single shared ready queue serving all processors.]

(b) Scheduling Approaches

The next issue is how to schedule the processes from the ready queue on multiple processors. For this, one of the following scheduling approaches may be used.
• Symmetric multiprocessing (SMP): In this approach, each processor is self-scheduling. For each processor, its scheduler selects a process for execution from the ready queue. Since multiple processors need to access a common data structure, the scheduler must be programmed carefully to synchronize access among multiple processors. This is required so that no two processors choose the same process and no process is lost from the ready queue.
• Asymmetric multiprocessing: This approach is based on a master-slave structure of the processors. The responsibility of making scheduling decisions, I/O processing and other system activities is delegated to only one processor (called the master), and the other processors (called slaves) simply execute the user's code. Whenever a processor becomes available, the master processor selects a process for it. This approach is easier to implement than symmetric multiprocessing, since only one processor has access to the system data structures. But this approach is also inefficient, because a number of processes may block on the master processor.
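The synchronization requirement in the SMP approach can be illustrated with a shared ready queue protected by a lock. This is only a sketch under the assumption that each "processor" is modelled as a thread; real kernels use far more refined data structures.

```python
import threading
from collections import deque

ready_queue = deque(["P1", "P2", "P3", "P4"])
queue_lock = threading.Lock()
executed = []

def self_scheduling_cpu(cpu_id):
    """Each 'processor' schedules itself: it takes the next process
    from the shared ready queue under the lock, so no two CPUs can
    pick the same process and none is lost."""
    while True:
        with queue_lock:
            if not ready_queue:
                return
            proc = ready_queue.popleft()
        executed.append((cpu_id, proc))   # stand-in for actually running proc

cpus = [threading.Thread(target=self_scheduling_cpu, args=(i,)) for i in range(2)]
for c in cpus: c.start()
for c in cpus: c.join()
print(sorted(p for _, p in executed))  # ['P1', 'P2', 'P3', 'P4'], each exactly once
```

Without the lock, two threads could pop the same head element or corrupt the queue, which is precisely the hazard the text warns about.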
(c) Load Balancing

On SMP systems having a ready queue for each processor, it could happen at a certain moment of time that one or more processors are sitting idle while others are overloaded, with a number of processes waiting for them. Thus, in order to achieve better utilization of multiple processors, load balancing is required, which means keeping the workload evenly distributed among multiple processors. There are two techniques to perform load balancing, namely, push migration and pull migration.

In the push migration technique, the load is balanced by periodically checking the load of each processor and shifting processes from the ready queues of overloaded processors to those of less overloaded or idle processors. On the other hand, in pull migration, an idle processor itself pulls a waiting process from a busy processor.

Note: Load balancing is often unnecessary on SMP systems with a single shared ready queue.
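Push migration as described above can be sketched as a periodic rebalancing pass. The function name, the per-CPU queue layout, and the stopping threshold of one process are assumptions of this sketch, not details from the text.

```python
from collections import deque

def push_migrate(run_queues):
    """Periodically invoked balancer: move processes from the most
    loaded ready queue to the least loaded one until their lengths
    differ by at most one."""
    while True:
        busiest = max(run_queues, key=lambda q: len(run_queues[q]))
        idlest = min(run_queues, key=lambda q: len(run_queues[q]))
        if len(run_queues[busiest]) - len(run_queues[idlest]) <= 1:
            return
        # Push one waiting process from the overloaded CPU's queue.
        run_queues[idlest].append(run_queues[busiest].pop())

queues = {"cpu0": deque(["P1", "P2", "P3", "P4"]), "cpu1": deque()}
push_migrate(queues)
print(len(queues["cpu0"]), len(queues["cpu1"]))  # 2 2
```

Pull migration would invert the initiative: the idle CPU itself would call into the busy CPU's queue and take a process.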

(d) Processor Affinity

Processor affinity means an effort to make a process run on the same processor that executed it last time. When a process executes on a processor, the data most recently accessed by it is kept in the cache memory of that processor. The next time the process is run on the same processor, most of its memory accesses are satisfied from the cache memory, and as a result the process execution speeds up. However, if the process is run on a different processor the next time, the cache contents of the older processor become useless and the cache of the new processor has to be re-populated, thus delaying process execution. Therefore, an attempt should be made by the operating system to run a process on the same processor each time instead of migrating it to another processor.

When an operating system tries to make a process run on the same processor but does not guarantee to do so always, it is referred to as soft affinity. On the other hand, when an operating system provides system calls that force a process to run on the same processor, it is referred to as hard affinity. In soft affinity, there is a possibility of process migration from one processor to another, whereas in hard affinity, the process is never migrated to another processor.
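On Linux, hard affinity can be requested directly. The sketch below uses Python's `os.sched_setaffinity` wrapper around the Linux system call; the interface is Linux-only, hence the guard, and `pin_to_cpu` is a name invented for this sketch.

```python
import os

def pin_to_cpu(cpu):
    """Hard affinity: restrict the calling process to one CPU so the
    scheduler never migrates it. Soft affinity, by contrast, is only a
    preference inside the scheduler and needs no system call."""
    os.sched_setaffinity(0, {cpu})    # pid 0 means "the calling process"

if hasattr(os, "sched_setaffinity"):  # Linux-only interface
    pin_to_cpu(0)
    print(os.sched_getaffinity(0))    # {0}: may now run only on CPU 0
```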

4.6 REAL-TIME SCHEDULING

In real-time systems, the correctness of the computations depends not only on the logical result of the computation but also on the time at which the output is generated. A real-time system has well-defined, fixed time constraints; if these constraints are not met, the system is said to have failed in spite of producing the correct output. Thus, a real-time system must produce the correct result within its time constraints. Real-time systems are of two types: hard real-time systems and soft real-time systems.

4.6.1 Hard Real-time Systems

In hard real-time systems, a process must be accomplished within the specified deadlines; a process serviced after its deadline has passed is of no use in any sense. Industrial control and robotics are two examples of hard real-time systems. In hard real-time systems, the scheduler requires that a process declare its deadline requirements before entering the system. It then employs a technique known as admission control, which uses a special algorithm to decide whether the process should be admitted. The process is admitted only if the scheduler can ensure that it will be accomplished by its deadline; otherwise, it is rejected. The scheduler can give assurance of completion on time only if it knows the exact time taken by each function of the operating system, and if the performance of each function is guaranteed.