04 Scheduling C
Contents
2 CPU Scheduling (Review & Expansion) ...................................................................... 2
2.1 CPU Scheduling Algorithms: background concepts ......................................... 2
2.1.1 Evaluation Criteria .............................................................................................. 2
2.1.1.1 User oriented ............................................................................................... 2
2.1.1.2 System oriented ........................................................................................... 2
2.1.1.3 Performance related.................................................................................... 2
2.1.1.4 Non-performance related .......................................................................... 2
2.1.2 Priorities ............................................................................................................... 3
2.1.3 Service Burst time ............................................................................................... 3
2.1.4 Scheduling Algorithms: Pre-emptive or Non-pre-emptive? ........................ 3
2.1.5 Processes: CPU-bound or I/O-bound? ............................................................ 4
2.1.6 Interactions between scheduling algorithm and process types ................... 4
2.1.7 Starvation ............................................................................................................. 4
2.1.8 Algorithm comparison ....................................................................................... 4
2.2 Algorithms ............................................................................................................... 5
2.2.1 First Come, First Served (FCFS) ........................................................................ 5
2.2.2 Round Robin (RR) ............................................................................................... 6
2.2.2.1 Design issue: Quantum Size ...................................................................... 7
2.2.3 Shortest Process Next (SPN) .............................................................................. 8
2.2.3.1 Design issue: Guessing service needs ...................................................... 8
2.2.4 Shortest Remaining Time (SRT) ...................................................................... 11
2.2.5 Highest Response Ratio Next (HRRN) .......................................................... 11
2.2.6 Multi-Level Feedback (MLF) ........................................................................... 13
2.3 Appendix ................................................................................................................ 14
Page 1 of 24 pages
Operating Systems: CPU Scheduling Module SOFT7006
We will examine a number of basic algorithms that try to achieve this. Some are used in practice (e.g.
RR, HRRN) and some are purely investigative (e.g. SPN, SRT). The purpose here is twofold:
• to understand how scheduling is done and
• to understand the nature of the problem
By examining the basic algorithms we can see what needs to be done to solve this essential
problem. Any actual implementation will use these basic ideas sometimes in combination.
2.1.2 Priorities
Many systems assign a priority to processes; schedulers may choose higher priority processes
over lower ones. So there may be a separate ready queue for each priority: RQ0, RQ1, RQ2,
etc. The scheduler will process RQ0 first according to some algorithm, then RQ1 second
perhaps using a different algorithm, etc.
Sometimes this can starve low-priority processes, so systems often introduce a scheme where a
process’s priority rises the longer it waits.
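The multi-queue idea with rising priority can be sketched as follows. This is a minimal sketch in Python; the queue layout, the AGING_THRESHOLD value, and the function names are illustrative assumptions, not from any particular operating system.

```python
from collections import deque

AGING_THRESHOLD = 5  # promote a process after it has waited this long (assumed value)

def pick_next(queues):
    """Serve RQ0 first, then RQ1, etc.: return the next process from the
    highest-priority non-empty ready queue."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # all queues empty

def age(queues, wait_ticks):
    """Anti-starvation scheme: a process that has waited too long in a
    lower-priority queue has its priority raised by one level."""
    for level in range(1, len(queues)):
        for proc in list(queues[level]):
            if wait_ticks.get(proc, 0) >= AGING_THRESHOLD:
                queues[level].remove(proc)
                queues[level - 1].append(proc)
                wait_ticks[proc] = 0  # reset the wait clock after promotion
```

With queues [RQ0, RQ1, RQ2], pick_next always drains RQ0 before touching RQ1; age() is what stops a busy RQ0 from starving RQ2 forever.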
2.1.4 Scheduling Algorithms: Pre-emptive or Non-pre-emptive?
A non-pre-emptive scheduling algorithm is one that only changes the running process when
a convenient interruption happens. Interrupts can happen either because the process itself
asks for I/O service or does some other system call, or some other system event happens
outside the process’s or scheduler’s control that means the process must leave the CPU (e.g.
the user plugs in a USB device that has to be installed or the wifi receiver needs attention,
etc.).
A pre-emptive scheduling algorithm is one that will interrupt the process in the CPU when it
decides. It does not wait for something else to interrupt the process on the CPU; rather, it
will interrupt the running process when it decides it is time for some other process to get
the CPU.
2.1.5 Processes: CPU-bound or I/O-bound?
A process that spends most of its time computing is ‘CPU-bound’. If a process spends a lot of
its time doing I/O it is ‘I/O-bound’. I/O-bound processes tend to have very short service
burst times because no sooner do they get the CPU than they initiate another I/O call which
blocks them.
2.1.6 Interactions between scheduling algorithm and process types

CPU-bound processes:
• pre-emptive: advantage of disallowing monopolisation of the CPU; disadvantage of
increasing the number of process switches.
• non-pre-emptive: disadvantage of allowing monopolisation of the CPU; advantage of not
increasing the number of process switches.

I/O-bound processes:
• pre-emptive: no danger of monopolisation so no advantage; still has the possible
disadvantage of increasing the number of process switches.
• non-pre-emptive: no danger of monopolisation and the advantage of not increasing the
number of process switches.
A system may have mostly I/O bound processes or mostly CPU bound processes. Or it may
have a mixture. Depending on the profile of the system it is more or less advantageous to use
one or other of the scheduling algorithm types. A system with a lot of CPU bound processes
is better served by pre-emptive algorithms. A system with mostly I/O bound processes is
better served by a non-pre-emptive algorithm.
2.1.7 Starvation
If a scheduling algorithm could possibly allow a situation to arise where a ready process
never gets access to the CPU it is said to allow starvation. If a process is starved of access to
the CPU it cannot run. This is totally unacceptable.
2.1.8 Algorithm comparison

To compare the algorithms we will use the following benchmark data:

Process   Arrival time   Service burst time
1         0              3
2         2              6
3         4              4
4         6              5
5         8              2
Process 4 arrives at time 6 and requires 5 units of execution time for this ‘burst’ of activity.
When simulating an algorithm we can assess how it performs by calculating values for the rest
of the table:
Wait time is the length of time spent waiting for the CPU.
Turnaround time is the total time spent in the system for this burst of service (i.e. wait
time + service burst time).
NTT ratio is the Normalised Turnaround Time ratio which is turnaround time divided
by service burst time; this gives a good indication of the relative penalty incurred by
each process under the algorithm in question because it takes into account the amount
of service sought when measuring turnaround.
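Given a finish time from a simulation, all three measures follow directly from the definitions above. A small Python helper (the function name is mine):

```python
def burst_metrics(arrival, service, finish):
    """Wait, turnaround and NTT ratio for one service burst, as defined above."""
    turnaround = finish - arrival        # total time spent in the system
    wait = turnaround - service          # time spent waiting for the CPU
    ntt = turnaround / service           # relative penalty for this process
    return wait, turnaround, ntt
```

For example, under FCFS the benchmark’s process 3 (arrival 4, service 4) finishes at time 13, giving wait 5, turnaround 9 and NTT 2.25.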
Note that, in reality, these service burst times are unknown by the scheduling algorithm.
2.2 Algorithms
2.2.1 First Come, First Served (FCFS)
A simple queue: processes get the CPU in the order they arrive in the ready queue.
Non-pre-emptive (i.e. once a process gets served it runs to the end of its required
service burst time without interruption.)
Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              3                    0           3                 1.00 (= 3/3)
2         2              6                    1           7                 1.17 (= 7/6)
3         4              4                    5           9                 2.25 (= 9/4)
4         6              5                    7           12                2.40 (= 12/5)
5         8              2                    10          12                6.00 (= 12/2)
FCFS performs well when all processes have similar service burst times. But when there is a
mix of short processes behind long ones the short processes in the queue may suffer (see
process 3 below).
Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              1                    0           1                 1.00
2         1              100                  0           100               1.00
3         2              1                    99          100               100.00
4         3              100                  99          199               1.99
Even in this extreme case FCFS performs OK for long processes (see processes 2 & 4 above).
Advantages
Fair in a simple minded way.
Simple algorithm with low administration overhead (no extra process switches).
No possibility of starvation.
Disadvantages
FCFS favours CPU-bound processes over I/O-bound ones because I/O-bound processes tend
to need shorter bursts of service time. This leads to inefficient use of I/O devices.
In situations where there is a mix of long and short processing burst times, FCFS is
unfair for short processes.
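FCFS is simple enough to simulate in a few lines. A sketch in Python, which reproduces the benchmark table above:

```python
def fcfs(procs):
    """procs: list of (arrival, service) pairs, sorted by arrival time.
    Returns (wait, turnaround, NTT ratio) per process, in the same order."""
    t, results = 0, []
    for arrival, service in procs:
        start = max(t, arrival)          # queue may be empty: CPU idles until arrival
        wait = start - arrival
        t = start + service              # non-pre-emptive: runs to completion
        turnaround = wait + service
        results.append((wait, turnaround, turnaround / service))
    return results
```

fcfs([(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]) yields waits 0, 1, 5, 7, 10, matching the table above.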
2.2.2 Round Robin (RR)
Next process is the one waiting longest (recently serviced processes go to the back of
the queue).
Pre-emptive (once a process gets service it runs either until it finishes its burst or a
time limit is reached, whichever is sooner).
RR is FCFS with ‘time slice’ clock interrupts.
With time slice (ts) = 1:

Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              3                    1           4                 1.33
2         2              6                    10          16                2.67
3         4              4                    9           13                3.25
4         6              5                    9           14                2.80
5         8              2                    5           7                 3.50
With time slice (ts) = 4:

Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              3                    0           3                 1.00
2         2              6                    9           15                2.50
3         4              4                    3           7                 1.75
4         6              5                    9           14                2.80
5         8              2                    9           11                5.50
1 0 1 0 1 1.0
2 1 100 3 4 4.0
3 2 1 97 197 1.97
Advantages
Short processes move through more quickly.
Maintains the basic fairness of a queue
Disadvantages
May increase the number of clock interrupts => more process switches so larger
overhead
2.2.2.1 Design issue: Quantum Size
Very short? Smaller slices improve response time for typical interactions. However, this
increases the number of process switches.
Longer? Longer slices means less process switching but I/O bound processes get a raw deal.
They will have to wait longer in the queue and, when they win the CPU, they tend not to use
their full slice before leaving the CPU, waiting for the I/O, and then re-joining ready queue.
When they do re-join they will have another longer wait. This can lead to poor performance
of I/O bound processes and thus poor I/O device use – the I/O cannot be requested and so,
although the device may be idle, it will not be in use.
If slices are longer than the longest running process then effectively you have FCFS.
Guideline: slice should be slightly bigger than the time needed for a typical interaction.
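The RR mechanics can be sketched as follows, including the tie rule used in the appendix traces: a process arriving at the very instant the running process is preempted enters the queue ahead of it. The function name and data layout are my own.

```python
from collections import deque

def round_robin(procs, slice_):
    """procs: {name: (arrival, service)}. Returns {name: (wait, turnaround)}."""
    arrivals = sorted(procs.items(), key=lambda kv: kv[1][0])
    rem = {name: s for name, (a, s) in procs.items()}
    q, t, i, done = deque(), 0, 0, {}
    while len(done) < len(procs):
        # admit everything that has arrived by now
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            q.append(arrivals[i][0])
            i += 1
        if not q:                        # CPU idle: jump to the next arrival
            t = arrivals[i][1][0]
            continue
        name = q.popleft()
        run = min(slice_, rem[name])     # run a full slice or to completion
        t += run
        rem[name] -= run
        # arrivals during (or at the end of) the slice queue up first ...
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            q.append(arrivals[i][0])
            i += 1
        if rem[name] == 0:
            a, s = procs[name]
            done[name] = (t - a - s, t - a)
        else:
            q.append(name)               # ... then the preempted process re-joins
    return done
```

round_robin({1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}, 1) reproduces the ts=1 results above; slice_=4 reproduces the ts=4 results.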
2.2.3 Shortest Process Next (SPN)
Next process is the one that requires the least amount of processing time (this must be
guessed – see later).
Non-pre-emptive. When the scheduler has to choose a process, the waiting processes
are ranked according to processing time required. The process that requires the least
processing time gets the highest ranking of waiting processes and will therefore be
served next once the running process leaves.
Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              3                    0           3                 1.00
2         2              6                    1           7                 1.17
3         4              4                    7           11                2.75
4         6              5                    9           14                2.80
5         8              2                    1           3                 1.50
Advantages
Better overall response times
Much better for shorter jobs
Disadvantages
Need to estimate the processing burst requirements
predictability reduced
risks starvation of longer jobs.
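A sketch of SPN, assuming (as the tables here do) that the service burst times are known exactly rather than estimated:

```python
def spn(procs):
    """Shortest Process Next. procs: {name: (arrival, service)}.
    Returns {name: (wait, turnaround)}. Non-pre-emptive: the ready pool
    is re-ranked only when the CPU becomes free."""
    pending = sorted(procs.items(), key=lambda kv: kv[1][0])
    ready, out, t, i = [], {}, 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1][0] <= t:
            ready.append(pending[i])
            i += 1
        if not ready:                    # CPU idle: jump to the next arrival
            t = pending[i][1][0]
            continue
        ready.sort(key=lambda kv: kv[1][1])  # shortest service burst first
        name, (a, s) = ready.pop(0)
        t += s                           # runs to completion
        out[name] = (t - a - s, t - a)
    return out
```

On the benchmark data this yields the waits 0, 1, 7, 9, 1 seen in the table above: process 5 jumps ahead of processes 3 and 4 at time 9.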
2.2.3.1 Design issue: Guessing service needs
The simplest guess is the running average of the bursts observed so far:

Sn+1 = (T1 + T2 + … + Tn) / n

OR, equivalently, in a form that can be computed incrementally:

Sn+1 = (1/n) × Tn + ((n−1)/n) × Sn

Where:
Sn+1 is the average of the previous bursts, used as the estimate of the next burst;
S1 is the estimated value of the first burst (not calculated);
Sn is the estimate of the previous burst;
Ti is the actual processor execution time for the ith burst;
Tn is the actual processor execution time for the last burst.
E.g.
Consider the following data for a process:
5 previous burst times (T5 = most recent burst; T1 = first burst):
T5 T4 T3 T2 T1
4 2 3 2 4
S6 = (4+2+3+2+4) / 5 = 15/5 = 3 = estimated next burst time at time n+1 (i.e. at time 6)
OR
Given that the estimate of the previous burst time (i.e. S5 when n=4) would be calculated as
follows:
S5 = (2+3+2+4) / 4 = 11/4 = 2.75 = estimated 5th burst time,
And given that the actual burst time at time 5 (T5) was = 4
Then using the second (incremental) formula with n = 5:

S6 = (1/5) × T5 + (4/5) × S5 = (1/5) × 4 + (4/5) × 2.75 = 0.8 + 2.2 = 3
This is the same answer as the first method. This means we can calculate a reasonable guess
from less data. In the first method we must store all the previous burst times for all of the
processes. With the second method it is only necessary to store the last estimate, the last burst
time and the number of bursts so far for each process.
However the guess still gives equal weight to each burst. Better to give more weight to recent
bursts as the next one is likely to be more like them.
Suppose instead that a process’s five previous bursts average out at 22.4 because its most
recent burst (T5) was very large while the earlier bursts were small. The average of 22.4
falls between the very low and very high burst times and so is not a good guess – this
process has recently (at T5) had a very high burst time and so is more likely to behave the
same way in the near future. The simple average of 22.4 does not reflect this.
Better, then, to weight recent bursts more heavily. A common scheme is exponential
averaging:

Sn+1 = α × Tn + (1 − α) × Sn, where 0 < α < 1

Then, for the example data above, if the previous guess was say 3 (= S5) and the last burst
was actually 100 (= T5), taking α = 0.8, say:

S6 = 0.8 × 100 + 0.2 × 3 = 80 + 0.6 = 80.6

– an estimate pulled strongly towards the recent large burst.
Thus the older the observation, the less it affects the average.
Higher values of α give more emphasis to recent data and greater differences between the
weights applied to successive terms.
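The exponential averaging update is a one-line loop. A sketch (the burst list is ordered oldest first, and the function name is mine):

```python
def estimate_next(bursts, s1, alpha):
    """Exponentially weighted estimate of the next burst time.
    alpha in (0, 1): higher alpha weights recent bursts more heavily."""
    s = s1                      # initial guess for the first burst
    for t in bursts:            # oldest burst first
        s = alpha * t + (1 - alpha) * s
    return s
```

With a previous estimate of 3 and a last burst of 100, alpha = 0.8 gives 0.8 × 100 + 0.2 × 3 = 80.6: the estimate chases the recent large burst, unlike the plain average.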
2.2.4 Shortest Remaining Time (SRT)
Pre-emptive. As new processes arrive, the processes (including the process in the CPU) are
ranked again according to remaining processing time required. If the new process requires
less processing time than all other processes (including the running process) then it gets the
highest ranking and is thus served next, i.e. if necessary the running process is pre-empted
(interrupted and removed from the CPU) and the new process takes over.
Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              3                    0           3                 1.00
2         2              6                    7           13                2.17
3         4              4                    0           4                 1.00
4         6              5                    9           14                2.80
5         8              2                    0           2                 1.00
Advantages
Better overall response times
Much better for shorter jobs
Disadvantages
Need to estimate the processing burst requirements
predictability reduced
risks starvation of longer jobs.
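Because SRT only needs to re-rank when a process arrives, it can be simulated event by event. A sketch using a heap keyed on remaining time (ties broken by arrival time; names and layout are my own):

```python
import heapq

def srt(procs):
    """Shortest Remaining Time. procs: {name: (arrival, service)}.
    Returns {name: (wait, turnaround)}."""
    arrivals = sorted((a, s, name) for name, (a, s) in procs.items())
    heap, out, t, i = [], {}, 0, 0
    while i < len(arrivals) or heap:
        if not heap:                     # CPU idle: jump to the next arrival
            t = max(t, arrivals[i][0])
        while i < len(arrivals) and arrivals[i][0] <= t:
            a, s, name = arrivals[i]
            heapq.heappush(heap, (s, a, name))   # keyed on remaining time
            i += 1
        rem, a, name = heapq.heappop(heap)
        nxt = arrivals[i][0] if i < len(arrivals) else float('inf')
        run = min(rem, nxt - t)          # run until done or the next arrival
        t += run
        rem -= run
        if rem == 0:
            out[name] = (t - a - procs[name][1], t - a)
        else:
            heapq.heappush(heap, (rem, a, name))  # preempted with less remaining
    return out
```

On the benchmark data this reproduces the table above: processes 1, 3 and 5 all run with zero wait, at the cost of processes 2 and 4.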
2.2.5 Highest Response Ratio Next (HRRN)
Non-pre-emptive. When the current process completes or is blocked the scheduler chooses
the process from the pool of candidates that has the highest anticipated NTT (response ratio).
Process   Arrival time   Service burst time   Wait time   Turnaround time   NTT ratio
1         0              3                    0           3                 1.00
2         2              6                    1           7                 1.17
3         4              4                    5           9                 2.25
4         6              5                    9           14                2.80
5         8              2                    5           7                 3.50
Advantages
Favours short jobs
Avoids starvation: HRRN accounts for the time spent waiting so far. So longer jobs
get through once they have waited long enough.
Non-pre-emptive so does not increase the number of process switches needed
Disadvantages
requires a guess about the future processing needs of a process.
Essentially HRRN keeps account of two aspects of the problem: waiting time and service
need. Thus it keeps elements of fairness from a queueing arrangement but is clever enough to
allow short jobs to jump the queue within reason.
Imagine a process requiring 2 units of service and another requiring 20. If neither has been
waiting then their respective NTT ratios are equal:
(0 + 2) / 2 = 1 and (0 + 20) / 20 = 1
However, as time goes on their ratios will rise (but notice that the smaller process’s ratio rises
faster):
Time units passed Ratio of short Process Ratio of long Process
0 (0 + 2) / 2 = 1 (0 + 20) / 20 = 1
2 (2 + 2) / 2 = 2 (2 + 20) / 20 = 1.1
4 (4 + 2) / 2 = 3 (4 + 20) / 20 = 1.2
So shorter processes will rise to the top of the ranking more quickly and so will get to win the
competition more quickly than longer processes. But what about the danger of starvation?
No danger of that. Starvation tends to happen when new shorter processes that have not
waited jump ahead of waiting longer processes. However, with HRRN, if a long process has
waited just 1 millisecond then its ratio will be >1 [ e.g (w+s)/s = (1+100)/100 = 1.01 ]. And a
short process that has done no waiting will have a ratio = 1 [ e.g. (w+s)/s = (0+2)/2 = 1 ].
This means that if a longer process is in competition with a newly arrived short process it will
win. So a long process cannot be starved by a stream of new arrivals forever. So there is no
danger of starvation.
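HRRN’s selection rule translates directly into code. A sketch in which the ratios are recomputed at each selection point, exactly as in the worked calculations in the appendix:

```python
def hrrn(procs):
    """Highest Response Ratio Next. procs: {name: (arrival, service)}.
    Returns {name: (wait, turnaround)}. Non-pre-emptive: ratios are
    recomputed only when the CPU becomes free."""
    pending = sorted(procs.items(), key=lambda kv: kv[1][0])
    ready, out, t, i = [], {}, 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1][0] <= t:
            ready.append(pending[i])
            i += 1
        if not ready:                    # CPU idle: jump to the next arrival
            t = pending[i][1][0]
            continue
        # response ratio = (wait so far + service) / service
        ready.sort(key=lambda kv: (t - kv[1][0] + kv[1][1]) / kv[1][1],
                   reverse=True)
        name, (a, s) = ready.pop(0)      # highest ratio wins
        t += s                           # runs to completion
        out[name] = (t - a - s, t - a)
    return out
```

On the benchmark data, process 3 wins the three-way competition at time 9 (ratio 2.25 against 1.6 and 1.5), then process 5 overtakes process 4 at time 13, as in the table above.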
2.2.6 Multi-Level Feedback (MLF)
It is possible to maintain a few ready queues that operate under different rules. Waiting
processes can be assigned to the different queues as required and the queues can be given
different priorities.
Choose the process from the head of the highest priority queue that contains waiting
processes. Here we try to favour shorter processes but at the same time avoid having to rely
on guesswork about the processing time required by processes. Instead we depend on the
amount of time a process has already spent executing as a measure of its length. Instead of
favouring short jobs we penalise long jobs (essentially similar things).
There are several variations. In general there are a number of priority queues. When a process
enters the system it joins the top priority queue (RQ0) and when it gets the CPU it is allocated
n time units. If it doesn’t complete it is then assigned to the next lower queue (RQ1) where it
will get m units, and so on. If a process is very long it may end up in the lowest priority
queue. The scheduler deals with all processes in the higher queues before moving to the
lower ones.
Thus, longer processes drift down the queues and shorter ones are favoured. The different
queues can be administered using different queuing policies although RR is favoured.
To counteract possible starvation of long processes two strategies can be employed:
Firstly, the CPU allocation can be increased as you go down the queues e.g. RQ0 gets Time
Slice (ts)=1, RQ1 gets ts=2, RQ2 gets ts=4, and so on. This strategy gives longer processes a
better opportunity of finishing earlier. But starvation is still possible.
Secondly, a process that has spent a long time waiting in a lower queue can be promoted
back up to a higher queue (this is the rule used in the benchmark example in the appendix,
where a process that remains in RQn long enough is moved up to join RQn-1).
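A simplified MLF sketch: queue i uses time slice 2**i, and a process that uses its full slice drops one level. The promotion (moving-up) rule from the appendix benchmark is deliberately omitted here for brevity, so this sketch will not reproduce that benchmark exactly; it shows the demotion mechanism only.

```python
from collections import deque

def mlf(procs, levels=3):
    """Simplified multi-level feedback. procs: {name: (arrival, service)}.
    Queue i runs round robin with time slice 2**i; a process that uses
    its full slice is demoted one level (promotion/aging omitted).
    Returns {name: (wait, turnaround)}."""
    arrivals = sorted(procs.items(), key=lambda kv: kv[1][0])
    queues = [deque() for _ in range(levels)]
    rem = {name: s for name, (a, s) in procs.items()}
    t, i, out = 0, 0, {}
    while len(out) < len(procs):
        # new arrivals always enter the top-priority queue (RQ0)
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            queues[0].append(arrivals[i][0])
            i += 1
        level = next((l for l in range(levels) if queues[l]), None)
        if level is None:                # CPU idle: jump to the next arrival
            t = arrivals[i][1][0]
            continue
        name = queues[level].popleft()
        run = min(2 ** level, rem[name])  # slice grows as priority drops
        t += run
        rem[name] -= run
        # arrivals during the slice queue up before the demoted process
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            queues[0].append(arrivals[i][0])
            i += 1
        if rem[name] == 0:
            a, s = procs[name]
            out[name] = (t - a - s, t - a)
        else:
            queues[min(level + 1, levels - 1)].append(name)  # drift downwards
    return out
```

A lone 5-unit process drifts down all three queues (slices 1, 2 then 4) and still finishes with zero wait; a short newcomer in RQ0 overtakes a longer process already demoted to RQ1.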
2.3 Appendix
All algorithms can be simulated according to the following steps:
1 Calculate the total service units to account for (=m) and draw up a grid with
m+1 columns [1 extra column for the process number].
2 Enter the process numbers/names in the first column to assign one row for
each process and label the remaining columns from 0 – m-1
3 Mark arrival times for each process in the grid with *.
4 Every time the processor becomes available determine the processes in the
competition for the CPU at the start of the next available time slot. [Note that
the processor becomes available for non-pre-emptive algorithms whenever
the running process completes its service burst; for pre-emptive algorithms
the CPU becomes available either when a time limit is reached (RR, MLF with
RR) or when a new process arrives (SRT)]:
a. First include newly arrived processes in the pool of candidates;
b. Then, if the process that has just stopped running is not finished and
needs more time, include it in the pool of candidates. (The order of
steps ’a’ and ‘b’ is important - sometimes the time of arrival in the pool
of candidates affects a process’s chance of selection);
5 Decide which process is next by ordering the pool of candidates according to
the algorithm and remove it from the pool of candidates;
6 Record that process as running – either for its entire service need [non-pre-
emptive algorithms] or for the next n milliseconds depending on the
algorithm.
7 Record the other processes in the pool as waiting while the running process is
running.
8 Repeat steps 4 – 7 until complete.
9 Make sure to mark wait times for each process from (and including) time of
arrival to last millisecond before running [and any other pauses between
runs].
10 Count wait times and record in table.
11 Calculate Turnaround times (turnaround = wait time + service burst time).
12 Calculate NTT ratio (AKA Response ratio) = turnaround/service burst time.
FCFS (queue contents at the start of each time instant):

Time    0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19
        1       2*  2   3*  3   3   3   3   3   4   4   4   4   5   5   5   5   5
QUEUE                           4*  4   4   4   5   5   5   5
                                        5*  5
*=JUST ARRIVED. QUEUE IN COLUMN 0 IS STATE OF QUEUE AS AT START OF FIRST TIME INSTANT
Note that at time 2 below process 1 is behind process 2 because process 2 is a new arrival at time 2 and process 1 tries to re-enter the queue at
precisely the same time. In these cases the new arrival goes ahead of the process that has already had some service. In contrast, at time 4 process
3 is a new arrival but is placed behind process 2. This is because, while process 2 has had some service, it was already queueing at time 3 before
process 3 arrived.
Process 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1 *R R X R
2 *R X R X R X X R X X X R X X X R
3 *X R X X R X X X R X X X R
4 *X R X X X R X X X R X X R R
5 *X X R X X X R
ROUND 1* 1 2* 1 2 3 2 4 3 2 5 4 3 2 5 4 3 2 4 4
ROBIN 1 2 3* 2 4* 3 2 5 4 3 2 5 4 3 2 4
QUEUE 3 2 5* 4 3 2 5 4 3 2 4
TS=1 4 3 2 5 4 3 2
ROUND 1* 2* 2 3* 3 3 3 4 4 4 4 2 2 2 2 5 5 4 4
ROBIN 4* 4 2 2 2 2 5 5 5 5 4 4
QUEUE 2 5* 5 5 5 4
TS=4
In this non pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives so that the shortest process is always ranked
highest. The running process is NOT included in this re-ranking. Here the running process is not interrupted but left to complete its burst.
SPN
1* 2* 2 3* 3 3 3 5* 5 3 3 4 4 4 4
ranked
POOL 4* 4 3 3 4 4
4 4
In this pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives so that the process with the shortest remaining
burst time is always ranked highest. The running process is included in this re-ranking. Here the running process is interrupted if it is no longer
the highest ranking process.
SRT
1* 1 1 2 3* 3 3 3 5* 5 2 2 2 2 2 4 4 4 4 4
ranked
POOL 2* 2 2 2 2 2 2 4 4 4 4 4
4* 4 4 4
In this non pre-emptive algorithm the ranking of the waiting processes happens only when the currently running process is complete and the
processor becomes available. See calculations below.
HRRN
1 2 3 5 4
ranked
POOL 4 4
5
Choice of next process occurs when running process completes (or becomes blocked).
No competition for 1 or 2. Then 3, 4 and 5 compete and 3 wins. Then 4 and 5 compete and 5 wins. Finally no competition for 4.
CPU becomes available at time 9 with processes 3,4, and 5 all eager:
Process 3. (w+s)/s = (5+4)/4 = 9/4 = 2.25 = Response ratio (aka NTT)
The ‘5’ in the ‘(5+4)’ above is the time spent by process 3 waiting up to the beginning of time point 9. Process 3 arrived at the beginning of time
point 4 and so waited during time points 4,5,6,7,8 = 5 wait periods.
Process 4. (w+s)/s = (3+5)/5 = 8/5 = 1.6 = Response ratio (aka NTT)
Process 5. (w+s)/s = (1+2)/2 = 3/2 = 1.5 = Response ratio (aka NTT)
Note that the relative ranking of processes 4 and 5 swaps between the competition at time 9 and that at time 13. This is because, although both
have done the same amount of extra waiting (4 time units), that extra 4 units of wait is a larger proportion of the service burst time for process 5
than it is for process 4. Process 5 adds 4/2 = 2 [i.e. (extra wait)/(service burst time)] to its NTT whereas process 4 only adds 4/5 = 0.8.
MULTI-LEVEL FEEDBACK
Using the following benchmark data with the following version of MLF:
RQ0 run on a round robin basis with time slice of 1 (2^0).
RQ1 run on a round robin basis with time slice of 2 (2^1).
RQ2 run on a round robin basis with time slice of 4 (2^2).
A process that remains in RQn for a period of consecutive time equal to twice the time slice of RQn is moved up to join RQn-1.
Process   Arrival time   Service burst time   Wait time   Turn-around time   NTT ratio
1         0              13                   23          36                 36/13 (2.77)
2         2              15                   21          36                 36/15 (2.40)
3         4              4                    8           12                 12/4 (3.00)
4         6              3                    4           7                  7/3 (2.33)
5         7              2                    5           7                  7/2 (3.50)
6         8              1                    1           2                  2/1 (2.00)
R=running (with the time slice of the queue the process was dispatched from: TS=1 for Q0, TS=2 for Q1, TS=4 for Q2); Q0=waiting in q0, Q1=waiting in q1, etc.
A highlighted entry indicates a process that has just moved up a queue (e.g. process 2).
MLF RQi TS=2i
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
1 R R R Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q1 Q1 Q1 Q1 Q0 R Q1 Q1 R R Q2 Q2 Q2 Q2 R R R R Q2 Q2 Q2 Q2 R R R END
2 Q0 R Q1 R R Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q1 Q1 R R Q2 Q2 R R R R Q2 Q2 Q2 Q2 R R R R Q2 Q2 Q2 R R
3 R Q1 Q1 Q1 Q1 Q0 R Q1 Q1 Q1 R R
4 Q0 R Q1 Q1 Q1 R R
5 Q0 R Q1 Q1 Q1 Q1 R
6 Q0 R
4, 5, 6, 1 1
RQ0 1* 2* 2 3* 4 5 6 3 3
4, 5, 5, 3, 2 2 2, 1 1
5, 3, 3, 1 1
2, 3, 4, 4, 3, 1 1
RQ1 1 2 3 3 3 4 5 5 1
1, 1, 1, 1, 2 2 2 2 2 2,1 1 1 1 1, 2 2 2 2, 1,
RQ2 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2 2 2