
Operating Systems: CPU Scheduling Module SOFT7006

Contents
2 CPU Scheduling (Review & Expansion) ...................................................................... 2
2.1 CPU Scheduling Algorithms: background concepts ......................................... 2
2.1.1 Evaluation Criteria .............................................................................................. 2
2.1.1.1 User oriented ............................................................................................... 2
2.1.1.2 System oriented ........................................................................................... 2
2.1.1.3 Performance related.................................................................................... 2
2.1.1.4 Non-performance related .......................................................................... 2
2.1.2 Priorities ............................................................................................................... 3
2.1.3 Service Burst time ............................................................................................... 3
2.1.4 Scheduling Algorithms: Pre-emptive or Non-pre-emptive? ........................ 3
2.1.5 Processes: CPU-bound or I/O-bound? ............................................................ 4
2.1.6 Interactions between scheduling algorithm and process types ................... 4
2.1.7 Starvation ............................................................................................................. 4
2.1.8 Algorithm comparison ....................................................................................... 4
2.2 Algorithms ............................................................................................................... 5
2.2.1 First Come, First Served (FCFS) ........................................................................ 5
2.2.2 Round Robin (RR) ............................................................................................... 6
2.2.2.1 Design issue: Quantum Size ...................................................................... 7
2.2.3 Shortest Process Next (SPN) .............................................................................. 8
2.2.3.1 Design issue: Guessing service needs ...................................................... 8
2.2.4 Shortest Remaining Time (SRT) ...................................................................... 11
2.2.5 Highest Response Ratio Next (HRRN) .......................................................... 11
2.2.6 Multi-Level Feedback (MLF) ........................................................................... 13
2.3 Appendix ................................................................................................................ 14

Page 1 of 24 pages

2 CPU Scheduling (Review & Expansion)


The aim of CPU scheduling is to share out CPU access among all processes so that the
objectives of the system are met. These objectives include:
 Response time (e.g. response to user commands),
 Throughput (i.e. number of processes completed over a fixed time), and
 Processor efficiency (ideally, constant use of the processor without ill effects on processes).

We assume a uniprocessor. CPU scheduling is also known as dispatching or short-term scheduling.

2.1 CPU Scheduling Algorithms: background concepts


The objective of short term scheduling algorithms is to allocate processor time so as to
optimise one or more aspects of system behaviour. First some background concepts.

We will examine a number of basic algorithms that try to achieve this. Some are used in practice (e.g. RR, HRRN) and some are purely investigative (e.g. SPN, SRT). The purpose here is twofold:
• to understand how scheduling is done and
• to understand the nature of the problem

By examining the basic algorithms we can see what needs to be done to solve this essential problem. Any actual implementation will use these basic ideas, sometimes in combination.

2.1.1 Evaluation Criteria


Scheduling algorithms are evaluated against criteria of two types: user oriented and system oriented.

Of these, some are performance related and some are non-performance related.

2.1.1.1 User oriented


Behaviour of system from user point of view.
E.g.
 response time for interactive user.
 Predictability (provision of same service in different conditions)

2.1.1.2 System oriented


Focus on effective and efficient use of CPU.
E.g.
 Throughput (rate at which processes are completed)

2.1.1.3 Performance related


Quantitative and easily measured.
e.g.
 response time,
 throughput.

2.1.1.4 Non-performance related


Qualitative and hard to measure.


e.g.
 Predictability

2.1.2 Priorities
Many systems assign a priority to processes; schedulers may choose higher priority processes
over lower ones. So there may be a separate ready queue for each priority: RQ0, RQ1, RQ2,
etc. The scheduler will process RQ0 first according to some algorithm, then RQ1 second
perhaps using a different algorithm, etc.

Sometimes this can starve low priority processes, so a scheme is often introduced where a process’s priority rises the longer it has been waiting.

2.1.3 Service Burst time


AKA Processing Burst Time
When a process is ready to run and gains access to the CPU it will have a small amount of its
overall work to do and will then either be forced out of the CPU by some external interrupt or
it will leave voluntarily because it is waiting for some system call request (e.g. I/O). That
small amount of work is called its service or processing burst. Typically it is between 1 and 5
milliseconds of work on the CPU.

2.1.4 Scheduling Algorithms: Pre-emptive or Non-pre-emptive?


A scheduling algorithm runs a competition to decide which of the waiting processes should
get the CPU. Once it gives the CPU to a process it can either:
 Wait until that process finishes its service burst before running the competition again
to pick the next process, or
 Interrupt the process itself after a certain amount of time or because something has
happened to make it worth running the competition for the CPU again.

A non-pre-emptive scheduling algorithm is one that only changes the running process when
a convenient interruption happens. Interrupts can happen either because the process itself
asks for I/O service or does some other system call, or some other system event happens
outside the process’s or scheduler’s control that means the process must leave the CPU (e.g.
the user plugs in a USB device that has to be installed or the wifi receiver needs attention,
etc.).

A pre-emptive scheduling algorithm is one that will interrupt the process in the CPU when it decides. It does not wait for something else to interrupt the process on the CPU; rather it makes sure to interrupt the running process whenever it decides it is time for some other process to get the CPU.

Pre-emptive scheduling algorithms provide better service overall by preventing monopolisation of the CPU. However, they are more expensive: they run more often and require more context switching. If very efficient context switching is used [e.g. using lots of h/w] this can be OK.

To understand an algorithm it is essential to know whether it is pre-emptive or non-pre-emptive.


2.1.5 Processes: CPU-bound or I/O-bound?


If a process spends a lot of its time using the processor it is ‘CPU-bound’. A CPU-bound process will tend to have larger service burst times because it has less need for OS system call requests: it already has the resources it needs and just executes its own instructions to get work done.

If a process spends a lot of its time doing I/O it is ‘I/O-bound’. I/O-bound processes tend to
have very short service burst times because no sooner do they get the CPU than they initiate
another I/O call which blocks them.

2.1.6 Interactions between scheduling algorithm and process types

CPU-bound processes:
 Pre-emptive: advantage of disallowing monopolisation of the CPU; disadvantage of increasing the number of process switches.
 Non-pre-emptive: disadvantage of allowing monopolisation of the CPU; advantage of not increasing the number of process switches.

I/O-bound processes:
 Pre-emptive: no danger of monopolisation so no advantage; still has the possible disadvantage of increasing the number of process switches.
 Non-pre-emptive: no danger of monopolisation and the advantage of not increasing the number of process switches.

A system may have mostly I/O bound processes or mostly CPU bound processes. Or it may
have a mixture. Depending on the profile of the system it is more or less advantageous to use
one or other of the scheduling algorithm types. A system with a lot of CPU bound processes
is better served by pre-emptive algorithms. A system with mostly I/O bound processes is
better served by a non-pre-emptive algorithm.

2.1.7 Starvation
If a scheduling algorithm could possibly allow a situation to arise where a ready process
never gets access to the CPU it is said to allow starvation. If a process is starved of access to
the CPU it cannot run. This is totally unacceptable.

2.1.8 Algorithm comparison


In order to illustrate the scheduling algorithms we use the following kind of benchmark data:

Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3
   2          2                 6
   3          4                 4
   4          6                 5
   5          8                 2

Process 4 arrives at time 6 and requires 5 units of execution time for this ‘burst’ of activity.

When simulating an algorithm we can assess how it performs by calculating values for the rest of the table:

 Wait time is the length of time spent waiting for the CPU.

 Turnaround time is the total time spent in the system for this burst of service (i.e. wait
time + service burst time).

 NTT ratio is the Normalised Turnaround Time ratio which is turnaround time divided
by service burst time; this gives a good indication of the relative penalty incurred by
each process under the algorithm in question because it takes into account the amount
of service sought when measuring turnaround.

Note that, in reality, these service burst times are unknown by the scheduling algorithm.
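The two derived columns can be computed directly from these definitions. A minimal sketch in Python (the notes contain no code, so the function name is illustrative):

```python
# A sketch of the two derived columns: turnaround = wait + burst and
# NTT = turnaround / burst, per the definitions above.

def metrics(wait_time, burst_time):
    """Return (turn-around time, NTT ratio) for one service burst."""
    turnaround = wait_time + burst_time
    return turnaround, turnaround / burst_time

# e.g. a process that waited 5 units for a 4-unit burst:
print(metrics(5, 4))    # (9, 2.25)
```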

2.2 Algorithms
2.2.1 First Come, First Served (FCFS)
 A simple queue: processes get the CPU in the order they arrive in the ready queue.
 Non-pre-emptive (i.e. once a process gets served it runs to the end of its required
service burst time without interruption.)

Consider the following example:


Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3              0              3          1.00 (= 3/3)
   2          2                 6              1              7          1.17 (= 7/6)
   3          4                 4              5              9          2.25 (= 9/4)
   4          6                 5              7             12          2.40 (= 12/5)
   5          8                 2             10             12          6.00 (= 12/2)

FCFS performs well when all processes have similar Service burst times. But when there is a
mix of short processes behind long ones the short processes in the queue may suffer (see
process 3 below).

Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 1              0              1           1.00
   2          1               100              0            100           1.00
   3          2                 1             99            100         100.00
   4          3               100             99            199           1.99

Even in this extreme case FCFS performs OK for long processes (see processes 2 & 4 above).

Advantages
 Fair, in a simple-minded way.
 Simple algorithm with low administration overhead (no extra process switches).
 No possibility of starvation.

Disadvantages
 FCFS favours CPU-bound processes over I/O-bound ones because I/O-bound processes tend
to need shorter bursts of service time. This leads to inefficient use of I/O devices.
 In situations where there is a mix of long and short processing burst times, FCFS is
unfair for short processes.

Sometimes FCFS is combined with a priority system to avoid its problems.
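A minimal FCFS simulation reproduces the first table above (a sketch: function and variable names are illustrative, not from the notes):

```python
# A minimal FCFS simulation: processes are served strictly in arrival
# order and each runs its whole burst without interruption.

def fcfs(procs):
    """procs: list of (arrival, burst) sorted by arrival time.
    Returns (wait, turnaround, NTT rounded to 2 places) per process."""
    time, results = 0, []
    for arrival, burst in procs:
        start = max(time, arrival)    # CPU may sit idle until arrival
        wait = start - arrival
        time = start + burst          # non-pre-emptive: runs to completion
        results.append((wait, wait + burst, round((wait + burst) / burst, 2)))
    return results

benchmark = [(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]
print(fcfs(benchmark))
# [(0, 3, 1.0), (1, 7, 1.17), (5, 9, 2.25), (7, 12, 2.4), (10, 12, 6.0)]
```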

2.2.2 Round Robin (RR)


FCFS is non-pre-emptive. RR is pre-emptive and thus avoids the problems of FCFS. The reason short jobs are penalised with FCFS is that they must wait until long jobs are finished. So we introduce time slices or ‘quanta’ to level the playing field.

 Next process is the one waiting for longest (recently serviced processes go to back of
queue.)
 Pre-emptive (Once a process gets service it runs either until it finishes its burst or a
time limit is reached, whichever is sooner.)
 RR is FCFS with ‘time slice’ clock interrupts.

With time slice/quantum = 1


Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3              1              4           1.33
   2          2                 6             10             16           2.66
   3          4                 4              9             13           3.25
   4          6                 5              9             14           2.80
   5          8                 2              5              7           3.50


With time slice/quantum = 4


Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3              0              3           1.00
   2          2                 6              9             15           2.50
   3          4                 4              3              7           1.75
   4          6                 5              9             14           2.80
   5          8                 2              9             11           5.50

Earlier data with time slice/quantum = 4


Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 1              0              1           1.00
   2          1               100             97            197           1.97
   3          2                 1              3              4           4.00
   4          3               100             99            199           1.99

Advantages
 Short processes move through more quickly.
 Maintains the basic fairness of a queue

Disadvantages
 May increase the number of clock interrupts => more process switches so larger
overhead

2.2.2.1 Design issue: Quantum Size


How big is a slice? (or what is the length of the time quantum?)

Very short? Smaller slices improve response time for typical interactions. However, this
increases the number of process switches.

Longer? Longer slices mean less process switching but I/O-bound processes get a raw deal.
They will have to wait longer in the queue and, when they win the CPU, they tend not to use
their full slice before leaving the CPU, waiting for the I/O, and then re-joining ready queue.
When they do re-join they will have another longer wait. This can lead to poor performance
of I/O bound processes and thus poor I/O device use – the I/O cannot be requested and so,
although the device may be idle, it will not be in use.

If slices are longer than the longest running process then effectively you have FCFS.


Guideline: slice should be slightly bigger than the time needed for a typical interaction.
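The RR tables above can be reproduced with a small simulation, parameterised by quantum size (a sketch; names are illustrative). It uses the tie rule from the worked examples in the appendix: arrivals at an instant join the ready queue ahead of a pre-empted process re-entering at the same instant. It also shows that a quantum longer than any burst degenerates to FCFS.

```python
from collections import deque

# A Round Robin sketch for the benchmark data, with a configurable quantum.

def round_robin(procs, quantum):
    """procs: list of (arrival, burst). Returns (wait, turnaround) per process."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    queue, time, arrived = deque(), 0, 0

    def admit(now):
        nonlocal arrived
        while arrived < n and procs[arrived][0] <= now:
            queue.append(arrived)
            arrived += 1

    admit(0)
    while any(remaining):
        if not queue:                      # CPU idle until the next arrival
            time = procs[arrived][0]
            admit(time)
        p = queue.popleft()
        run = min(quantum, remaining[p])   # run a full slice, or finish early
        time += run
        remaining[p] -= run
        admit(time)                        # new arrivals first...
        if remaining[p]:
            queue.append(p)                # ...then the pre-empted process
        else:
            finish[p] = time
    return [(finish[i] - a - b, finish[i] - a) for i, (a, b) in enumerate(procs)]

benchmark = [(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]
print(round_robin(benchmark, 1))    # matches the quantum = 1 table
print(round_robin(benchmark, 100))  # a huge quantum degenerates to FCFS
```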

2.2.3 Shortest Process Next (SPN)


SPN is another way of avoiding the bias against short jobs in FCFS. RR was pre-emptive; SPN is non-pre-emptive, i.e. it doesn’t force processes off the CPU. Instead the job with the shortest expected processing burst time is selected next, i.e. short jobs jump the queue.

 Next process is the one that requires the least amount of processing time (this must be
guessed – see later).
 Non-pre-emptive. When the scheduler has to choose a process, the waiting processes
are ranked according to processing time required. The process that requires the least
processing time gets the highest ranking of waiting processes and will therefore be
served next once the running process leaves.

Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3              0              3           1.00
   2          2                 6              1              7           1.17
   3          4                 4              7             11           2.75
   4          6                 5              9             14           2.80
   5          8                 2              1              3           1.50

Advantages
 Better overall response times
 Much better for shorter jobs

Disadvantages
 Need to estimate the processing burst requirements
 predictability reduced
 risks starvation of longer jobs.

Is it possible to keep these advantages and remove the risk of starvation?

2.2.3.1 Design issue: Guessing service needs


How do you estimate the future processing need of a process?
Keep a running average of processing bursts for each process and use it as a guess of the next burst:

Sn+1 = (T1 + T2 + ... + Tn) / n

OR, equivalently, computed incrementally:

Sn+1 = (1/n)Tn + ((n-1)/n)Sn

Where:
 Sn+1 is average of previous bursts as estimate of next burst;
 S1 is estimated value of first burst (not calculated);
 Sn is estimate of previous burst;
 Ti is actual processor execution time for the ith burst;
 Tn is actual processor execution time for the last burst.

E.g.
Consider the following data for a process:
5 previous burst times (T5 = most recent burst; T1 = first burst):

T5 T4 T3 T2 T1
4 2 3 2 4

So n=5 as there are 5 previous bursts. We can then calculate S6 using the full average:

S6 = (4+2+3+2+4) / 5 = 15/5 = 3 = estimated next burst time at time n+1 (i.e. at time 6)

OR

Given that the estimate of the previous burst time (i.e. S5 when n=4) would be calculated as
follows:
S5 = (2+3+2+4) / 4 = 11/4 = 2.75 = estimated 5th burst time,

And given that the actual burst time at time 5 (T5) was 4, then using the incremental formula

Sn+1 = (1/n)Tn + ((n-1)/n)Sn

to calculate the next burst estimate when n=5:

S6 = (1/5 * 4) + (4/5 * 2.75) = 4/5 + 11/5 = 15/5 = 3

This is the same answer as the first method. This means we can calculate a reasonable guess
from less data. In the first method we must store all the previous burst times for all of the
processes. With the second method it is only necessary to store the last estimate, the last burst
time and the number of bursts so far for each process.
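A quick sketch confirms that the incremental update reproduces the plain running average without storing every past burst. Seeding with S2 = T1 is an assumption made here so that the estimate equals the exact mean; the notes leave S1 as a free guess.

```python
# Check that S(n+1) = (1/n)*T(n) + ((n-1)/n)*S(n) reproduces the mean.

def update_estimate(s_prev, t_last, n):
    """Fold the n-th observed burst t_last into the estimate s_prev."""
    return (1 / n) * t_last + ((n - 1) / n) * s_prev

bursts = [4, 2, 3, 2, 4]          # T1..T5, oldest first
s = bursts[0]                     # seed: S2 = T1 (assumption, see above)
for n, t in enumerate(bursts[1:], start=2):
    s = update_estimate(s, t, n)  # after the loop, s is S6
print(s)                          # approx. 3.0, the mean of all five bursts
```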

However the guess still gives equal weight to each burst. Better to give more weight to recent
bursts as the next one is likely to be more like them.

Consider the following data:


5 previous burst times:
T5 T4 T3 T2 T1
100 2 5 1 4

Here the average is (100+2+5+1+4) / 5 = 112 / 5 = 22.4

But 22.4 seems to fall between the very low and very high burst times and so is not a good
guess – this process has recently (at T5) had a very high burst time and so is more likely to
behave the same way in the near future. The average of 22.4 does not reflect this.

So, use an exponential average:

Sn+1 = αTn + (1 − α)Sn    for α = some constant between 0 and 1.

This is equivalent to:

Sn+1 = αTn + α(1 − α)Tn-1 + ... + α(1 − α)^i Tn-i + ... + (1 − α)^n S1

For example, if α = 0.8:

Sn+1 = 0.8Tn + 0.16Tn-1 + 0.032Tn-2 + ...

In other words the last burst contributes 80% to the guess, the previous burst contributes 16%, and its predecessor only contributes a negligible 3.2%, and so on. Each successive term contributes a smaller amount to the next guess.

So, if Sn+1 = αTn + (1 − α)Sn for α = 0.8, then with n=5:

S6 = 0.8 * T5 + 0.2 * S5

Then, for the example data above, if the previous guess was say 3 (= S5) and the last burst was actually 100 (= T5) then:

S6 = 80% of last burst + 20% of previous guess
   = (0.8 * 100) + (0.2 * 3)
   = 80 + 0.6
   = 80.6, a much better estimate

Thus the older the observation, the less it affects the average.


Higher values of α give more emphasis to recent data and greater differences between the weights applied to successive terms.
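The worked example above can be checked directly (a sketch; the function name is illustrative):

```python
# The exponential average S(n+1) = a*T(n) + (1-a)*S(n), reproducing the
# worked example: alpha = 0.8, previous guess 3, last observed burst 100.

def exp_average(s_prev, t_last, alpha=0.8):
    """Weight the most recent burst by alpha, older history by (1-alpha)."""
    return alpha * t_last + (1 - alpha) * s_prev

print(exp_average(3, 100))            # approx. 80.6

# The weight on the burst i steps back is alpha * (1 - alpha)**i:
print([round(0.8 * 0.2 ** i, 4) for i in range(3)])   # [0.8, 0.16, 0.032]
```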

2.2.4 Shortest Remaining Time (SRT)


SRT is the pre-emptive version of SPN. The job with the shortest remaining processing time is selected next, i.e. shortest-remaining-time jobs get the CPU immediately. The next process is the one that has the least amount of (estimated) processing time remaining.

Pre-emptive. As new processes arrive, the processes (including the process in the CPU) are
ranked again according to remaining processing time required. If the new process requires
less processing time than all other processes (including the running process) then it gets the
highest ranking and is thus served next i.e. if necessary the running process is pre-empted
(i.e. interrupted and removed from the CPU) and the new process takes over.

Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3              0              3           1.00
   2          2                 6              7             13           2.17
   3          4                 4              0              4           1.00
   4          6                 5              9             14           2.80
   5          8                 2              0              2           1.00

Advantages
 Better overall response times
 Much better for shorter jobs

Disadvantages
 Need to estimate the processing burst requirements
 predictability reduced
 risks starvation of longer jobs.
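The SRT table above can be reproduced with a unit-time simulation (a sketch; names are illustrative). Burst estimates are taken as exact here, which the notes point out is unrealistic.

```python
# Shortest Remaining Time, one time unit at a time: the CPU goes to the
# arrived, unfinished process with the least remaining work (ties broken
# by arrival time).

def srt(procs):
    """procs: list of (arrival, burst). Returns (wait, turnaround) per process."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    time = 0
    while any(remaining):
        ready = [i for i in range(n) if procs[i][0] <= time and remaining[i]]
        if not ready:                       # CPU idle until a process arrives
            time += 1
            continue
        p = min(ready, key=lambda i: (remaining[i], procs[i][0]))
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            finish[p] = time
    return [(finish[i] - a - b, finish[i] - a) for i, (a, b) in enumerate(procs)]

benchmark = [(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]
print(srt(benchmark))   # [(0, 3), (7, 13), (0, 4), (9, 14), (0, 2)]
```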

Still have the risk of starvation – is there another way?

2.2.5 Highest Response Ratio Next (HRRN)


Both SPN and SRT have very good performance but both risk starvation of processes. HRRN
maintains a good performance and avoids starvation altogether. The goal of the scheduler is
to minimise the NTT ratio so this algorithm keeps an eye on the NTT ratios so far and if a
process’s NTT is the highest it is given service at the next opportunity – this keeps the NTT
ratio values down for all processes.

HRRN can estimate the NTT so far:

Estimated NTT so far = (w + s) / s

where w = time spent waiting so far and s = expected service burst time.
The next process is the one that has the highest anticipated NTT.


Non-pre-emptive. When the current process completes or is blocked the scheduler chooses the process from the pool of candidates that has the highest anticipated NTT.

Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                 3              0              3           1.00
   2          2                 6              1              7           1.17
   3          4                 4              5              9           2.25
   4          6                 5              9             14           2.80
   5          8                 2              5              7           3.50

Advantages
 Favours short jobs
 Avoids starvation: HRRN accounts for the time spent waiting so far. So longer jobs
get through once they have waited long enough.
 Non-pre-emptive so does not increase the number of process switches needed

Disadvantages
 requires a guess about the future processing needs of a process.

Essentially HRRN keeps account of two aspects of the problem: waiting time and service need. Thus it keeps elements of fairness from a queueing arrangement but is clever enough to allow short jobs to jump the queue within reason.

Imagine a process requiring 2 units of service and another requiring 20. If neither has been
waiting then their respective NTT ratios are equal:
(0 + 2) / 2 = 1 and (0 + 20) / 20 = 1
However, as time goes on their ratios will rise (but notice that the smaller process’s ratio rises
faster):
Time units passed   Ratio of short process   Ratio of long process
        0             (0 + 2) / 2 = 1          (0 + 20) / 20 = 1
        1             (1 + 2) / 2 = 1.5        (1 + 20) / 20 = 1.05
        2             (2 + 2) / 2 = 2          (2 + 20) / 20 = 1.1
        3             (3 + 2) / 2 = 2.5        (3 + 20) / 20 = 1.15
        4             (4 + 2) / 2 = 3          (4 + 20) / 20 = 1.2

So shorter processes will rise to the top of the ranking more quickly and so will get to win the
competition more quickly than longer processes. But what about the danger of starvation?


No danger of that. Starvation tends to happen when new shorter processes that have not
waited jump ahead of waiting longer processes. However, with HRRN, if a long process has
waited just 1 millisecond then its ratio will be >1 [ e.g (w+s)/s = (1+100)/100 = 1.01 ]. And a
short process that has done no waiting will have a ratio = 1 [ e.g. (w+s)/s = (0+2)/2 = 1 ].

This means that if a longer process is in competition with a newly arrived short process it will
win. So a long process cannot be starved by a stream of new arrivals forever. So there is no
danger of starvation.
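The selection rule is simple to express in code. This sketch (names are illustrative) reproduces the competition at time 9 in the appendix, where processes 3, 4 and 5 are all waiting:

```python
# HRRN selection: rank the waiting processes by (w + s) / s and serve
# the highest ratio.

def response_ratio(wait, burst):
    return (wait + burst) / burst

# (process number, wait so far, expected service burst) at time 9:
candidates = [(3, 5, 4), (4, 3, 5), (5, 1, 2)]
winner = max(candidates, key=lambda c: response_ratio(c[1], c[2]))
print(winner[0], response_ratio(winner[1], winner[2]))   # 3 2.25
```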

2.2.6 Multi-Level Feedback (MLF)

It is possible to maintain a few ready queues that operate under different rules. Waiting
processes can be assigned to the different queues as required and the queues can be given
different priorities.

Choose the process from the head of the highest priority queue that contains waiting processes. Here we try to favour shorter processes but at the same time avoid having to rely on guesswork about the processing time required by processes. Instead we depend on the amount of time a process has already spent executing as a measure of its length. Instead of favouring short jobs we penalise long jobs (essentially the same thing).

There are several variations. In general there are a number of priority queues. When a process
enters the system it joins the top priority queue (RQ0) and when it gets the CPU it is allocated
n time units. If it doesn’t complete it is then assigned to the next lower queue (RQ1) where it
will get m units, and so on. If a process is very long it may end up in the lowest priority
queue. The scheduler deals with all processes in the higher queues before moving to the
lower ones.

Thus, longer processes drift down the queues and shorter ones are favoured. The different
queues can be administered using different queuing policies although RR is favoured.

To counteract possible starvation of long processes there are two strategies employed:

Firstly, the CPU allocation can be increased as you go down the queues e.g. RQ0 gets Time
Slice (ts)=1, RQ1 gets ts=2, RQ2 gets ts=4, and so on. This strategy gives longer processes a
better opportunity of finishing earlier. But starvation is still possible.

A second improvement to avoid the danger of starvation involves allowing a process to ascend the priority queues based on its waiting time. If a process has been waiting a long time in a lower priority queue it can move to a higher priority queue. As time goes by a process will ascend the queues until it gets served. This means that eventually even a very long process will be treated on an equal footing in a queue with all other newly arriving processes.

A possible MLF arrangement is as follows:


RQ0 run on round robin basis with time slice of 1 (2^0).
RQ1 run on round robin basis with time slice of 2 (2^1).
RQ2 run on round robin basis with time slice of 4 (2^2).
A process that remains in RQn for a period of consecutive time equal to twice the time slice of RQn is moved up to join RQn-1.
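The core of this arrangement can be sketched as follows (class and method names are illustrative, not from the notes). The time slice at level i is 2^i and a process that uses up its slice without finishing is demoted one level; the promotion-on-waiting rule is omitted for brevity.

```python
from collections import deque

# A minimal multi-level feedback sketch: new processes enter RQ0,
# level i has time slice 2**i, demotion on quantum expiry.

class MLFScheduler:
    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def admit(self, proc):
        self.queues[0].append(proc)              # new arrivals join RQ0

    def next_process(self):
        """Return (process, time slice, level) from the highest priority
        non-empty queue, or None if all queues are empty."""
        for level, q in enumerate(self.queues):
            if q:
                return q.popleft(), 2 ** level, level
        return None

    def quantum_expired(self, proc, level):
        """Slice used up without finishing: demote one level
        (the lowest queue just recycles)."""
        self.queues[min(level + 1, len(self.queues) - 1)].append(proc)

sched = MLFScheduler()
sched.admit("P1")
proc, slice_, level = sched.next_process()
print(proc, slice_)              # P1 1  (time slice 1 from RQ0)
sched.quantum_expired(proc, level)
proc, slice_, level = sched.next_process()
print(proc, slice_)              # P1 2  (demoted to RQ1, slice 2)
```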

2.3 Appendix
All algorithms can be simulated according to the following steps:
1 Calculate the total service units to account for (=m) and draw up a grid with
m+1 columns [1 extra column for the process number].
2 Enter the process numbers/names in the first column to assign one row for
each process and label the remaining columns from 0 – m-1
3 Mark arrival times for each process in the grid with *.
4 Every time the processor becomes available determine the processes in the
competition for the CPU at the start of the next available time slot. [Note that
the processor becomes available for non-pre-emptive algorithms whenever
the running process completes its service burst; for pre-emptive algorithms
the CPU becomes available either when a time limit is reached (RR, MLF with
RR) or when a new process arrives (SRT)]:
a. First include newly arrived processes in the pool of candidates;
b. Then, if the process that has just stopped running is not finished and
needs more time, include it in the pool of candidates. (The order of
steps ’a’ and ‘b’ is important - sometimes the time of arrival in the pool
of candidates affects a process’s chance of selection);
5 Decide which process is next by ordering the pool of candidates according to
the algorithm and remove it from the pool of candidates;


6 Record that process as running – either for its entire service need [non-pre-
emptive algorithms] or for the next n milliseconds depending on the
algorithm.
7 Record the other processes in the pool as waiting while the running process is
running.
8 Repeat steps 4 – 7 until complete.
9 Make sure to mark wait times for each process from (and including) time of
arrival to last millisecond before running [and any other pauses between
runs].
10 Count wait times and record in table.
11 Calculate Turnaround times (turnaround = wait time + service burst time).
12 Calculate NTT ratio (AKA Response ratio) = turnaround/service burst time.

Operating Systems: Review of Processes Module SOFT7006

FIRST COME FIRST SERVED


Process 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1 *R R R
2 *X R R R R R R
3 *X X X X X R R R R
4 *X X X X X X X R R R R R
5 *X X X X X X X X X X R R

1 2* 2 3* 3 3 3 3 3 4 4 4 4 5 5 5 5 5
QUEUE 4* 4 4 4 5 5 5 5
5* 5
*=JUST ARRIVED. QUEUE IN COLUMN 0 IS STATE OF QUEUE AS AT START OF FIRST TIME INSTANT

Note that at time 2 below process 1 is behind process 2 because process 2 is a new arrival at time 2 and process 1 tries to re-enter the queue at
precisely the same time. In these cases the new arrival goes ahead of the process that has already had some service. In contrast, at time 4 process
3 is a new arrival but is placed behind process 2. This is because, while process 2 has had some service, it was already queueing at time 3 before
process 3 arrived.


ROUND ROBIN; TIME SLICE=1

Process 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

1 *R R X R

2 *R X R X R X X R X X X R X X X R

3 *X R X X R X X X R X X X R

4 *X R X X X R X X X R X X R R

5 *X X R X X X R

ROUND 1* 1 2* 1 2 3 2 4 3 2 5 4 3 2 5 4 3 2 4 4

ROBIN 1 2 3* 2 4* 3 2 5 4 3 2 5 4 3 2 4

QUEUE 3 2 5* 4 3 2 5 4 3 2 4

TS=1 4 3 2 5 4 3 2


ROUND ROBIN; TIME SLICE=4


0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1 *R R R
2 *X R R R R X X X X X X X X R R
3 *X X X R R R R
4 *X X X X X R R R R X X X X R
5 *X X X X X X X X X R R

ROUND 1* 2* 2 3* 3 3 3 4 4 4 4 2 2 2 2 5 5 4 4

ROBIN 4* 4 2 2 2 2 5 5 5 5 4 4

QUEUE 2 5* 5 5 5 4

TS=4


In this non pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives so that the shortest process is always ranked
highest. The running process is NOT included in this re-ranking. Here the running process is not interrupted but left to complete its burst.

SHORTEST PROCESS NEXT


Process 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1 *R R R
2 *X R R R R R R
3 *X X X X X X X R R R R
4 *X X X X X X X X X R R R R R
5 *X R R

SPN
1* 2* 2 3* 3 3 3 5* 5 3 3 4 4 4 4
ranked
POOL 4* 4 3 3 4 4
4 4


In this pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives so that the process with the shortest remaining
burst time is always ranked highest. The running process is included in this re-ranking. Here the running process is interrupted if it is no longer
the highest ranking process.

SHORTEST REMAINING TIME


Process 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1 *R R R
2 *X R X X X X X X R R R R R
3 *R R R R
4 *X X X X X X X X X R R R R R
5 *R R

SRT
1* 1 1 2 3* 3 3 3 5* 5 2 2 2 2 2 4 4 4 4 4
ranked
POOL 2* 2 2 2 2 2 2 4 4 4 4 4
4* 4 4 4

Here running processes are shown in ranking while running.


In this non pre-emptive algorithm the ranking of the waiting processes happens only when the currently running process is complete and the
processor becomes available. See calculations below.

HIGHEST RESPONSE RATIO NEXT


Process 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1 *R R R
2 *X R R R R R R
3 *X X X X X R R R R
4 *X X X X X X X X X R R R R R
5 *X X X X X R R

HRRN
1 2 3 5 4
ranked
POOL 4 4
5

Choice of next process occurs when running process completes (or becomes blocked).
No competition for 1 or 2. Then 3, 4 and 5 compete and 3 wins. Then 4 and 5 compete and 5 wins. Finally no competition for 4.

CPU becomes available at time 9 with processes 3,4, and 5 all eager:
Process 3. (w+s)/s = (5+4)/4 = 9/4 = 2.25 = Response ratio (aka NTT)
The ‘5’ in the ‘(5+4)’ above is the time spent by process 3 waiting up to the beginning of time point 9. Process 3 arrived at the beginning of time
point 4 and so waited during time points 4,5,6,7,8 = 5 wait periods.
Process 4. (w+s)/s = (3+5)/5 = 8/5 = 1.6 = Response ratio (aka NTT)
Process 5. (w+s)/s = (1+2)/2 = 3/2 = 1.5 = Response ratio (aka NTT)


So ranking by HRRN is 3,4,5 and process 3 gets the CPU.

CPU becomes available at time 13 with processes 4 and 5 competing:


Process 4. (w+s)/s = (7+5)/5 = 12/5 = 2.4 = Response ratio (aka NTT)
Process 5. (w+s)/s = (5+2)/2 = 7/2 = 3.5 = Response ratio (aka NTT)

So ranking by HRRN is 5,4 and process 5 gets the CPU.

Note that the relative ranking of processes 4 and 5 swaps between the competition at time 9 and that at time 13. This is because, although both
have done the same amount of extra waiting (4 time units), that extra 4 units of wait is a larger proportion of the service burst time for process 5
than it is for process 4. Process 5 adds 4/2 = 2 [i.e. (extra wait)/(service burst time)] to its NTT whereas process 4 only adds 4/5 = 0.8.


MULTI-LEVEL FEEDBACK
Using the following benchmark data on the following version of MLF.
RQ0 run on round robin basis with time slice of 1 (2^0).
RQ1 run on round robin basis with time slice of 2 (2^1).
RQ2 run on round robin basis with time slice of 4 (2^2).
A process that remains in RQn for a period of consecutive time equal to twice the time slice of RQn is moved up to join RQn-1.

Process  Arrival time  Service burst time  Wait time  Turn-around time  NTT ratio
   1          0                13             23             36          28/13 (2.15)
   2          2                15             21             36          31/16 (1.93)
   3          4                 4              8             14          14/4 (3.5)
   4          6                 3              4              7          7/3 (2.33)
   5          7                 2              5              7          7/2 (3.5)
   6          8                 1              1              2          2/1 (2)

R=running for TS=1 from Q0, R=running for TS=2 from Q1, R=running for TS=4 from Q2, Q0=waiting in q0, Q1=waiting in q1, etc.
2 indicates process 2 moved up a queue
MLF RQi TS=2i
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
1 R R R Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q1 Q1 Q1 Q1 Q0 R Q1 Q1 R R Q2 Q2 Q2 Q2 R R R R Q2 Q2 Q2 Q2 R R R END
2 Q0 R Q1 R R Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q2 Q1 Q1 R R Q2 Q2 R R R R Q2 Q2 Q2 Q2 R R R R Q2 Q2 Q2 R R


3 R Q1 Q1 Q1 Q1 Q0 R Q1 Q1 Q1 R R
4 Q0 R Q1 Q1 Q1 R R
5 Q0 R Q1 Q1 Q1 Q1 R
6 Q0 R
4, 5, 6, 1 1
RQ0 1* 2* 2 3* 4 5 6 3 3
4, 5, 5, 3, 2 2 2, 1 1
5, 3, 3, 1 1
2, 3, 4, 4, 3, 1 1
RQ1 1 2 3 3 3 4 5 5 1
1, 1, 1, 1, 2 2 2 2 2 2,1 1 1 1 1, 2 2 2 2, 1,
RQ2 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2 2 2

