CSTRACKB SYSTEM FUNDAMENTALS Batch 1
Priority Scheduling
Easy to implement in batch systems where the required CPU time is known in advance.
Impossible to implement in interactive systems where the required CPU time is not known.
The processor should know in advance how much time each process will take.
Each process is assigned a priority. The process with the highest priority is executed first,
and so on.
Given: a table of processes with their arrival time, execution time, and priority. Here we
consider 1 to be the lowest priority.
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
Impossible to implement in interactive systems where required CPU time is not known.
It is often used in batch environments where short jobs need to be given preference.
Once a process has executed for a given time period, it is preempted and another process
executes for a given time period.
FCFS Scheduling
FCFS is considered the simplest CPU-scheduling algorithm. In the FCFS algorithm, the process that
requests the CPU first is allocated the CPU first. The implementation of the FCFS algorithm is
managed with a FIFO (first in, first out) queue. FCFS scheduling is non-preemptive. Non-
preemptive means that once the CPU has been allocated to a process, that process keeps the CPU
until it completes its task and releases the CPU, either by terminating or by requesting I/O.
The process that arrives in the ready queue first is allocated the CPU first.
It does not require any prior knowledge about the processes. Also, if the run-time
behaviour of the processes changes dynamically, there is no impact on the FCFS
scheduling algorithm.
Though this assures fair scheduling, it may result in long waiting times for individual
processes.
Some terms are used while solving CPU-scheduling problems, so for conceptual purposes these
terms are discussed as follows −
Arrival time (AT) − Arrival time is the time at which the process arrives in ready queue.
Burst time (BT) or CPU time of the process − Burst time is the amount of CPU time a
particular process needs to complete its execution.
Completion time (CT) − Completion time is the time at which the process has been
terminated.
Turn-around time (TAT) − The total time from arrival time to completion time is known
as turn-around time. TAT can be written as,
Turn-around time (TAT) = Completion time (CT) − Arrival time (AT), or TAT = Burst time (BT) +
Waiting time (WT)
Waiting time (WT) − Waiting time is the time the process waits for its CPU allocation
while other processes execute in the CPU. WT is written as,
Waiting time (WT) = Turn-around time (TAT) − Burst time (BT)
Response time (RT) − Response time is the time at which the CPU is allocated to a
particular process for the first time.
Gantt chart − A Gantt chart is a visualization that helps in scheduling and managing
particular tasks in a project. It is used while solving scheduling problems to show
how the processes are allocated to the CPU under different algorithms.
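To make these terms concrete, the following is a minimal Python sketch that computes CT, TAT, WT and RT for processes scheduled under FCFS. The process names and the arrival and burst values are hypothetical and are not taken from any problem in this module.

# Minimal FCFS sketch: processes are served in order of arrival (FIFO).
# Each process is (name, arrival_time, burst_time); values are hypothetical.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

processes.sort(key=lambda p: p[1])          # FIFO order by arrival time
time = 0
for name, at, bt in processes:
    start = max(time, at)                   # CPU may sit idle until the process arrives
    ct = start + bt                         # Completion time
    tat = ct - at                           # Turn-around time = CT - AT
    wt = tat - bt                           # Waiting time = TAT - BT
    rt = start                              # Response time: instant the CPU is first allocated
    print(f"{name}: CT={ct} TAT={tat} WT={wt} RT={rt}")
    time = ct

Each value follows directly from the formulas above: TAT = CT − AT and WT = TAT − BT.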
Problem 1
Consider the given table below and find Completion time (CT), Turn-around time (TAT), Waiting
time (WT), Response time (RT), Average Turn-around time and Average Waiting time.
It is an easy algorithm to implement since it does not involve any complex logic.
FCFS does not give priority to any particular tasks, so it is a fair scheduling policy.
FCFS can result in the convoy effect: if a process with a long burst time comes first in the
ready queue, processes with shorter burst times get blocked behind it and may not be able to
get the CPU for a very long time.
If a process with a long burst time comes first in the queue, the other short burst time
processes have to wait for a long time, so FCFS is not well suited for time-sharing systems.
Since it is non-preemptive, a process does not release the CPU before it completes its
execution.
*The convoy effect and starvation sound similar, but there is a slight difference, so it is
advised not to treat the two terms as the same.
Conclusion
Although FCFS is simple to implement and understand, it is not good for interactive systems
and is not used in modern operating systems. The convoy effect in FCFS can be prevented by
using preemptive CPU-scheduling algorithms such as Round Robin scheduling.
A CPU scheduling strategy is a procedure that selects one process from the ready queue and
assigns it to the CPU so that it can be executed. There are a number of scheduling algorithms.
In this section, we will learn about Shortest Job First or SJF scheduling algorithm.
In the Shortest Job First scheduling algorithm, the processes are scheduled in ascending order
of their CPU burst times, i.e. the CPU is allocated to the process with the shortest execution
time.
In the non-preemptive version, once a process is assigned to the CPU, it runs to completion.
Here, the short term scheduler is invoked when a process completes its execution or when a
new process(es) arrives in an empty ready queue.
This is the preemptive version of SJF scheduling and is also referred to as Shortest Remaining
Time First (SRTF) scheduling algorithm. Here, if a short process enters the ready queue while a
longer process is executing, process switch occurs by which the executing process is swapped
out to the ready queue while the newly arrived shorter process starts to execute. Thus the short
term scheduler is invoked either when a new process arrives in the system or an existing
process completes its execution.
In cases where two or more processes have the same burst time, arbitration is done
among these processes on a first come first serve basis.
It may cause starvation of long processes if short processes continue to come in the
system.
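As a rough illustration of the non-preemptive version, here is a minimal Python sketch. The process data are hypothetical; ties on burst time are broken on a first come first serve basis, as noted above.

# Non-preemptive SJF sketch: at each scheduling point, pick the arrived
# process with the shortest burst time; ties broken by arrival time.
# (name, arrival_time, burst_time) -- hypothetical values.
procs = [("P1", 0, 6), ("P2", 2, 8), ("P3", 3, 3), ("P4", 4, 6)]

time, done = 0, []
remaining = list(procs)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                       # CPU idle until the next arrival
        time = min(p[1] for p in remaining)
        continue
    name, at, bt = min(ready, key=lambda p: (p[2], p[1]))
    time += bt                          # runs to completion (non-preemptive)
    done.append((name, time, time - at, time - at - bt))  # CT, TAT, WT
    remaining.remove((name, at, bt))

for name, ct, tat, wt in done:
    print(f"{name}: CT={ct} TAT={tat} WT={wt}")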
We can understand the workings of the two versions of this scheduling strategy through the aid
of the following examples.
Example 1
Suppose that we have a set of four processes that have arrived at the same time in the order
P1, P2, P3 and P4. The burst time in milliseconds of each process is given by the following
table −
Let us draw the GANTT chart and find the average turnaround time and average waiting time
using non-preemptive SJF algorithm.
Process P3 has the shortest burst time and so it executes first. Then we find that P1 and P4
have equal burst time of 6ms. Since P1 arrived before, CPU is allocated to P1 and then to P4.
Finally P2 executes. Thus the order of execution is P3, P1, P4, P2 and is given by the following
GANTT chart −
Let us compute the average turnaround time and average waiting time from the above chart.
Example 2
In the previous example, we assumed that all the processes arrived at the same time, a
situation which is practically impossible. Here, we consider the circumstance where the
processes arrive at different times. Suppose we have a set of four processes whose arrival
times and CPU burst times are as follows −
Let us draw the GANTT chart and find the average turnaround time and average waiting time
using non-preemptive SJF algorithm.
GANTT Chart
While drawing the GANTT chart, we will consider which processes have arrived in the system
when the scheduler is invoked. At time 0ms, only P1 is there and so it is assigned to CPU. P1
completes execution at 6ms and at that time P2 and P3 have arrived, but not P4. P3 is assigned
to CPU since it has the shortest burst time among current processes. P3 completes execution at
time 10ms. By that time P4 has arrived and so SJF algorithm is run on the processes P2 and
P4. Hence, we find that the order of execution is P1, P3, P4, P2 as shown in the following
GANTT chart −
Let us now perform preemptive SJF (SRTF) scheduling on the following processes, draw the
GANTT chart and find the average turnaround time and average waiting time.
GANTT Chart
Since this is a preemptive scheduling algorithm, the scheduler is invoked when a process
arrives and when it completes execution. The scheduler computes the remaining completion time
for each of the processes in the system and selects the process having the shortest remaining
time left for execution.
Initially, only P1 has arrived and so it is assigned to the CPU. At time 4ms, P2 and P3 arrive. The
scheduler computes the remaining times of the processes P1, P2 and P3 as 4ms, 10ms and
3ms. Since P3 has the shortest remaining time, P1 is preempted by P3. P3 completes execution at 7ms and
the scheduler is invoked. Among the processes in the system, P1 has the shortest remaining time and so it
executes. At time 10ms, P4 arrives and the scheduler again computes the remaining times left
for each process. Since the remaining time of P1 is the least, no process switch occurs and P1
continues to execute. In a similar fashion, the rest of the processes complete execution.
From the GANTT chart, we compute the average turnaround time and the average waiting time.
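The same logic can be sketched in Python as below. This is a minimal SRTF illustration that advances time in 1 ms steps; the data loosely follow the trace above, but P4's burst time is chosen arbitrarily since it is not restated here, so the numbers should be treated as hypothetical.

# Preemptive SJF (SRTF) sketch: at every time unit the arrived process with the
# least remaining burst time runs; a shorter new arrival preempts the current one.
# name: (arrival_time, burst_time) -- hypothetical values.
procs = {"P1": (0, 8), "P2": (4, 10), "P3": (4, 3), "P4": (10, 5)}

remaining = {name: bt for name, (at, bt) in procs.items()}
time, completion = 0, {}
while remaining:
    ready = [n for n in remaining if procs[n][0] <= time]
    if not ready:                                      # CPU idle, advance the clock
        time += 1
        continue
    current = min(ready, key=lambda n: remaining[n])   # shortest remaining time
    remaining[current] -= 1                            # run for one time unit
    time += 1
    if remaining[current] == 0:
        completion[current] = time
        del remaining[current]

for name, ct in completion.items():
    at, bt = procs[name]
    print(f"{name}: CT={ct} TAT={ct - at} WT={ct - at - bt}")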
In both preemptive and non-preemptive methods, the average waiting time is reduced
substantially in SJF when compared to FCFS scheduling.
In situations where there is an incoming stream of processes with short burst times,
longer processes in the system may be waiting in the ready queue indefinitely leading to
starvation.
In preemptive SJF, i.e. SRTF, if processes arrive at different times and at frequent
intervals, the scheduler may be invoked constantly, and consequently the processor may be
more engaged in process switching than in actual execution of the processes.
Correct estimation of the burst time of a process is complicated. Since the
effectiveness of the algorithm is entirely based upon the burst times, an erroneous
estimate may cause inefficient scheduling.
Conclusion
Shortest Job First scheduling can be termed the optimal scheduling algorithm due to its
theoretically best results. However, the implementation is much more complex and the execution
is more unpredictable than First Come First Serve or Round Robin scheduling.
Among the CPU scheduling strategies, Round Robin Scheduling is one of the most efficient and
most widely used scheduling algorithms, finding use not only in process scheduling in
operating systems but also in network scheduling.
This scheduling strategy derives its name from the age-old round-robin principle which
advocates that all participants are entitled to an equal share of assets or opportunities in a
turn-wise manner. In RR scheduling, each process gets an equal time slice (or time quantum) for
which it executes in the CPU in a turn-wise manner. When a process gets its turn, it executes for
the assigned time slice and then relinquishes the CPU for the next process in the queue. If the
process has burst time left, it is sent to the end of the queue. Processes enter the queue on a
first come first serve basis.
Round Robin scheduling is preemptive, which means that a running process can be interrupted
by another process and sent to the ready queue even when it has not completed its entire
execution in the CPU. It is a preemptive version of the First Come First Serve (FCFS) scheduling
algorithm.
RR is a fair scheduling strategy where all processes get an equal share of the CPU in a
turn-wise manner.
Any new process that arrives in the system is inserted at the end of the ready queue in
FCFS manner.
The first process in the queue is removed and assigned to the CPU.
If the required burst time is less than or equal to the time quantum, the process runs to
completion. The scheduler is invoked when the process completes execution, to let the
next process in the ready queue into the CPU.
If the required burst time is more than the time quantum, the process executes up to the
allotted time quantum. Then its PCB (process control block) status is updated and it is
added to the end of the queue. Context switch occurs and the next process in the ready
queue is assigned to the CPU.
The above steps are repeated until there are no more processes in the ready queue.
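A minimal Python sketch of these steps is shown below. The process list and the 2 ms quantum are hypothetical, and all processes are assumed to arrive at time 0 so that the ready queue is simply a FIFO of the remaining processes.

from collections import deque

# Round Robin sketch: each process runs for at most one time quantum, then
# goes to the back of the ready queue if it still has burst time left.
# (name, burst_time) -- hypothetical values; all assumed to arrive at time 0.
quantum = 2
queue = deque([("P1", 5), ("P2", 3), ("P3", 8)])

time, completion = 0, {}
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)        # run for the quantum or until done
    time += run
    remaining -= run
    if remaining > 0:
        queue.append((name, remaining))  # re-queue with updated remaining time
    else:
        completion[name] = time          # record completion time

print(completion)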
We can understand the workings of the RR scheduling algorithm with the aid of the
following example.
Let us consider a time quantum of 2ms and perform RR scheduling. We will draw the
GANTT chart and find the average turnaround time and average waiting time.
GANTT Chart with time quantum of 2ms
In order to calculate the waiting time of each process, we multiply the time quantum by the
number of time slices the process was waiting in the ready queue.
Round Robin scheduling is a fair scheduling algorithm whereby all processes
are given an equal time quantum for execution.
It does not require any complicated method to calculate the CPU burst time of each
process prior to scheduling.
The convoy effect does not occur in RR scheduling as it does in First Come First Serve
(FCFS) scheduling.
The performance of Round Robin scheduling is highly dependent upon the chosen time
quantum. This requires prudent analysis before implementation, failing which the required
results are not obtained.
If the chosen time quantum is very large, most of the processes will complete within a single
time quantum. In effect, RR scheduling will act as FCFS scheduling, and all the limitations
of FCFS will come into the system.
If the chosen time quantum is too small, the CPU will be very busy with context switching,
i.e. swapping processes in and out of the CPU and memory. This would
reduce the throughput of the system since more time will be expended on context
switching than on actual execution of the processes.
RR scheduling does not give any scope to assign priorities to processes. So, system
processes which need high priority get the same preference as background processes.
This may often hamper the overall performance of a system.
Conclusion
Round Robin scheduling, if properly implemented, provides one of the simplest and best solutions to
scheduling problems. A number of variations of RR scheduling are being researched and
implemented in order to avoid the disadvantages of this algorithm. One variant that helps to
provide a near perfect time quantum is Dynamic Time Quantum Scheduling. Here, the time
quantum dynamically varies according to the behaviour of the processes in the system. Another
variant, Selfish Round Robin scheduling, assigns priorities to processes and provides more
CPU slices to higher priority processes.
CPU scheduling algorithms are strategies that decide which process in the ready queue should
execute next in a multiprogramming environment. There are a number of scheduling strategies,
among which Highest Response Ratio Next (HRRN) scheduling aims to provide one of the most
optimal scheduling solutions.
HRRN algorithm is a non-preemptive scheduling strategy which chooses the next process to
execute based on a parameter called Response Ratio. Response ratio is calculated by the
formula −
Response Ratio = (W + S) / S
Here, W is the waiting time of the process until now and S is the burst time of the process.
When multiple processes are ready to execute, the scheduler calculates response ratio for each
process and allocates the CPU to the process having the highest value. Since HRRN is a non-
preemptive algorithm, once a process gets CPU access, it executes to completion before
another process can gain CPU access.
The HRRN algorithm is regarded as an optimal algorithm since it chooses processes to execute
based upon their response ratio.
HRRN gives preference both to shorter processes and to processes which have
been waiting in the ready queue for a longer time.
It is envisaged as a modification of Shortest Job First algorithm that solves the starvation
problem of SJF algorithm.
It does not require frequent context switches. This helps the CPU to focus mainly on
execution of the processes.
Initially when the CPU is idle, the scheduler is invoked when one or more new processes
arrive in the ready queue. The process with the shortest burst time is let in.
When the running process completes its execution, response ratio is calculated for each
process waiting in the ready queue. The process with highest response ratio is assigned
the CPU.
Step 2 is repeatedly executed until there are no processes in the ready queue.
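The steps above can be sketched in Python as follows. The process data are hypothetical; the response ratio (W + S) / S is recomputed for every waiting process each time the CPU becomes free.

# Non-preemptive HRRN sketch: when the CPU is free, compute (W + S) / S for each
# arrived process and run the one with the highest ratio to completion.
# (name, arrival_time, burst_time) -- hypothetical values.
procs = [("P1", 0, 6), ("P2", 2, 8), ("P3", 3, 3), ("P4", 4, 6)]

time, remaining, order = 0, list(procs), []
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                       # CPU idle until the next arrival
        time = min(p[1] for p in remaining)
        continue
    # Response ratio = (waiting time so far + burst time) / burst time
    name, at, bt = max(ready, key=lambda p: ((time - p[1]) + p[2]) / p[2])
    time += bt                          # non-preemptive: runs to completion
    order.append((name, time))
    remaining.remove((name, at, bt))

print(order)                            # execution order with completion times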
Let us consider a system that has four processes which have arrived at the same time in
the order P1, P2, P3 and P4. The burst time in milliseconds of each process is given by
the following table −
Let us perform HRRN scheduling on this. We will draw the GANTT chart and find the average
turnaround time and average waiting time.
GANTT Chart Generation
To generate the GANTT charts, we will find the response ratio at each instance when
the scheduler is invoked.
At Time = 0ms
Processes in the system: P1.
Since P1 is the only process, it is scheduled immediately. It runs to completion at
Time = 6ms.
GANTT chart up to this time is −
It prefers shorter processes and so has all the advantages of Shortest Job First
scheduling algorithm.
It solves the starvation problem, as the response ratio computes to a higher value for
processes which have been waiting in the ready queue for a longer time, so they are
eventually assigned the CPU.
Since it is non-preemptive in nature, it does not require frequent context switches. So the
CPU cycles are not wasted on context switches and are instead used for execution of
the processes.
This works only when the CPU burst times are known in advance. The dynamic nature
of most processes makes it difficult to ascertain the burst time prior to execution. An
error in calculation of the burst time may render the entire algorithm erroneous.
The CPU is burdened with the added logic of calculating the response ratio of each
process before assigning it to the CPU. If there are a large number of short
processes in the system, the CPU gets more engaged in the scheduling process than in the
actual execution of the processes.
HRRN scheduling does not assign priorities to processes. If a high priority long process
arrives in the system, it may have to wait a considerable time before it is let in. This
may often hamper the overall performance of a system.
Conclusion
Highest Response Ratio Next scheduling algorithm is basically a modified Shortest Job First
algorithm where the problem of starvation has been eliminated to a considerable extent.
Theoretically, it is the most optimal algorithm. However, its practical applicability is restricted
since burst times are unpredictable.
Priority scheduling is a CPU scheduling strategy that decides which process in the ready queue
should execute next based on the priorities assigned to the processes. It is commonly used in
systems where processes are executed in batches.
Priority Scheduling
In systems where Priority Scheduling strategies are implemented, each process is assigned a
priority value. Some systems follow the scheme that the lower the priority value, the higher the
priority, while other systems follow the scheme that the higher the priority value, the higher the
priority. The process with the highest priority is selected for execution.
In the non-preemptive version, once a process is assigned to the CPU, it runs to completion.
Here, the scheduler is invoked when a process completes its execution or when a new
process(es) arrives in an empty ready queue. The scheduler chooses the process with highest
priority for execution.
In the preemptive version, if a high priority process enters the system while a lower priority
process is executing, process switch occurs by which the executing process is swapped out
while the newly arrived higher priority process starts to execute. Thus the scheduler is invoked
either when a new process arrives in the system or an existing process completes its execution.
Static Priority: In this system, once a priority value is assigned to a process, it remains
constant as long as the process remains in the system.
Dynamic Priority: Here, the priority value changes according to the nature of the
process or its waiting time in the system.
Non-preemptive priority scheduling with static priorities is typically used in batch
processing.
Preemptive priority scheduling that uses dynamic priority is used in most operating
systems.
If two processes have the same highest priority, then the scheduler arbitrates between
them on a first come first serve basis.
Since most systems have some high priority system processes, priority scheduling finds
its wide implementation, often in conjunction with other scheduling algorithms.
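Before working through the examples, here is a minimal Python sketch of non-preemptive priority scheduling. The process data are hypothetical; a lower priority value means higher priority, and ties are broken on a first come first serve basis.

# Non-preemptive priority scheduling sketch: when the CPU is free, pick the
# arrived process with the best priority (lowest value); ties -> earliest arrival.
# (name, arrival_time, burst_time, priority) -- hypothetical values.
procs = [("P1", 0, 6, 2), ("P2", 2, 4, 1), ("P3", 3, 7, 3), ("P4", 4, 6, 2)]

time, remaining, schedule = 0, list(procs), []
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                            # CPU idle until the next arrival
        time = min(p[1] for p in remaining)
        continue
    name, at, bt, prio = min(ready, key=lambda p: (p[3], p[1]))
    time += bt                               # runs to completion (non-preemptive)
    schedule.append((name, time))
    remaining.remove((name, at, bt, prio))

print(schedule)                              # execution order with completion times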
We can understand the workings of the Priority scheduling algorithm with the aid of the following
examples −
Example 1
Suppose that we have a set of four processes that have arrived at the same time in the order
P1, P2, P3 and P4. The burst times in milliseconds and the priorities of the processes are given
by the following table −
Considering that a lower priority value means higher priority, let us perform non-preemptive
priority scheduling on it. We will draw the GANTT chart and find the average turnaround time
and average waiting time.
GANTT Chart
Process P2 has the highest priority and so it executes first. Then we find that both P1 and P4
have equal priority value of 2. Since P1 arrived before, CPU is allocated to P1 and then to P4.
Finally P3 executes. Thus the order of execution is P2, P1, P4, P3 and is given by the following
GANTT chart −
Example 2
In this example, we consider a situation when the processes arrive at different times. Suppose
we have a set of four processes whose arrival times, CPU burst times and priorities are as
follows −
GANTT Chart
At time 0ms, only P1 is there and so it is assigned to CPU. P1 completes execution at 6ms and
at that time P2 and P3 have arrived. P2 has higher priority and hence assigned to CPU. P2
completes execution at time 10ms. By that time P4 has arrived having priority value 1 and so it
is assigned to CPU. Once P4 completes execution, P3 is assigned to CPU. So the order of
execution is P1, P2, P4, P3 as shown in the following GANTT chart −
In preemptive priority scheduling, if a process arrives that has a higher priority than the executing
process, then the higher priority process preempts the lower priority process. Let us consider
the following set of processes whose arrival times, burst times and priorities are given in the
following table −
GANTT Chart
At time 0ms, P1 is the only process and so it starts executing. At time 4ms, P2 and P3 arrive.
Since P2 has higher priority than P1, P2 preempts P1. At 10ms, P4 arrives which has higher
priority than P2 and so pre-empts P2. P4 completes execution at 14ms and leaves. The
processes in the system are P1, P2 and P3, among which P2 has highest priority and so is
assigned to CPU. At 18ms, P2 completes execution and so P1 and P3 are the processes in the
system. Since both processes are of same priority, the scheduler selects P1 by FCFS method.
When P1 completes execution, finally P3 executes. The following GANTT chart shows the order
of execution −
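In code form, the preemptive behaviour traced above can be sketched in Python as follows. Time advances in 1 ms steps, which has the same effect as re-evaluating priorities on each arrival; the process data are hypothetical and, again, a lower value means higher priority.

# Preemptive priority sketch: at every time unit, the arrived process with the
# best (lowest) priority value runs; a newly arrived higher-priority process
# preempts the running one.  name: (arrival, burst, priority) -- hypothetical.
procs = {"P1": (0, 8, 3), "P2": (4, 6, 2), "P3": (4, 7, 3), "P4": (10, 4, 1)}

remaining = {n: bt for n, (at, bt, pr) in procs.items()}
time, completion = 0, {}
while remaining:
    ready = [n for n in remaining if procs[n][0] <= time]
    if not ready:                           # CPU idle, advance the clock
        time += 1
        continue
    current = min(ready, key=lambda n: (procs[n][2], procs[n][0]))
    remaining[current] -= 1                 # run for one time unit
    time += 1
    if remaining[current] == 0:
        completion[current] = time
        del remaining[current]

print(completion)                           # process name -> completion time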
Implementation is simple since the scheduler does not need to perform any prior calculations.
Once the relative importance (priorities) of the processes is defined, the order of
execution is easily predictable.
Priority scheduling is particularly helpful in systems that have a variety of processes, each
with its own needs.
In static priority systems, lower priority processes may need to wait indefinitely since the
system is busy executing higher priority processes. This results in starvation.
Dynamic priorities solve the starvation problem. However, the added logic of dynamically
updating priority values according to the state of the system requires additional CPU cycles and
thus increases the load on the system.
Conclusion
The priority scheduling algorithm paves the way for more complex scheduling methods like multilevel
queue scheduling. Assigning priorities to the processes helps the CPU to complete its important
work first and leave the rest of the time for background processes. Priorities can be assigned to
the processes as a function of their burst time, type of process, waiting time etc., and can thus
incorporate the advantages of other basic scheduling algorithms.
ACTIVITY:
Determine the Average Turnaround Time and Average Waiting Time using FCFS, SJF
(Preemptive and Non-preemptive), Priority (Preemptive and Non-preemptive), and Round Robin
using a time quantum of 4 ms.