Chapter No 4
11/27/2021
Course Code: 22516
Program: Computer Technology
MNS
Semester: CM-5I
Class: TYCM
Subject Teacher: Mohaseen N. Shaikh
Teaching & Examination Scheme
Course Outcomes
• CO-01: Install an operating system & configure it.
• CO-02: Use operating system tools to perform various functions.
• CO-03: Execute process commands for performing process management operations.
• CO-04: Apply scheduling algorithms to calculate turnaround time & average waiting time.
• CO-05: Calculate the efficiency of different memory management techniques.
• CO-06: Apply file management techniques.
Course Content
• Chapter No 04 Marks (14)
• CPU Scheduling & Algorithms
• 4.1 Scheduling Types
• Scheduling Objectives
• CPU & I/O burst cycles
• Pre-emptive, Non-pre-emptive Scheduling, Scheduling criteria
• 4.2 Types of Scheduling Algorithms
• First Come First Served (FCFS)
• Shortest Job First (SJF)
• Shortest Remaining Time Next (SRTN)
• Round Robin (RR)
• Priority Scheduling
• Multilevel Queue Scheduling
• 4.3 Deadlock
• System Models
• Necessary Conditions leading to Deadlocks
• Deadlock Handling
• Prevention
• Avoidance
Unit Outcomes
• UO-01: Justify the need & objectives of job scheduling with a relevant example.
• UO-02: Explain with an example the procedure of allocating the CPU to the given process using the specified OS.
• UO-03: Calculate turnaround time & average waiting time for the given scheduling algorithm.
• UO-04: Explain the functioning of the given necessary condition leading to deadlock.
Learning Outcomes
• To understand the basic concepts of CPU scheduling.
• To study the types of scheduling.
• To learn the types of scheduling algorithms.
• To study the basic concepts of deadlock.
INTRODUCTION
• The assignment of physical processors to processes allows processors to accomplish work.
• In a multiprogramming environment, many processes reside in memory.
• The problem of determining when processors should be assigned, and to which process, is called processor scheduling or CPU scheduling.
• CPU scheduling determines when, & on which processor, each process is to run.
• In other words, CPU scheduling is the process of selecting a process & allocating the processor to the selected process for execution.
• CPU scheduling is important because overall system utilization & performance, as well as the response time of processes, depend on how the processes are scheduled to run.
Cont…
• Concept of Scheduling:
• Scheduling is an important function of an OS.
• Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU & the selection of another process on the basis of a particular strategy.
• The scheduler is the kernel component (module/program) responsible for deciding which program should be executed on the CPU.
• A scheduler selects the job/task which is to be submitted next for execution.
• The function of deciding which program should be given the CPU, & for how long, is called scheduling.
• The figure shows a schematic of CPU scheduling.
• Several programs await allocation of the CPU. The scheduler selects one of these programs for execution on the CPU.
• A preempted program is added to the set of programs waiting for the CPU.
Cont…

Fig. A Schematic of Scheduling
Cont…
• Scheduling Model:
• CPU scheduling is the process of selecting a process from the ready queue & assigning the CPU to it.
• A CPU scheduling algorithm decides which of the processes in the ready queue is to be allocated the CPU.
• Every process in the OS that requests CPU service carries out the following sequence of actions:
• First, join the ready queue & wait for CPU service.
• Second, execute for the duration of the current CPU burst or for the duration of the time slice (timeout).
• Third, join the I/O queue to wait for I/O service, or return to the ready queue to wait for more CPU service.
• Fourth, terminate & exit if service is completed, i.e., if there are no more CPU or I/O bursts. If more service is required, return to the ready queue to wait for more CPU service.
• The CPU scheduler is the part of the OS that selects the next process to which the CPU will be allocated, de-allocates the CPU from the currently executing process, & allocates the CPU to the newly selected process.
Cont…
• The algorithm used by the scheduler to carry out the selection of a process for execution is known as a scheduling algorithm.
• A number of scheduling algorithms are available for CPU scheduling, such as FCFS, RR, etc.
• Each scheduling algorithm influences resource utilization, overall system performance & the quality of service provided to the user.
4.1 Scheduling Types
• In a single-processor system, only one process can run at a time; any others must wait until the CPU is free & can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• When a process is executing & requires some I/O operation, in a simple computer system the CPU just sits idle.
• All this waiting time is wasted; no useful work is accomplished. With multiprogramming we try to use this time productively.
• Several processes are kept in memory at one time. When one process has to perform I/O, the OS takes the CPU away from that process & gives the CPU to another process. This pattern continues.
• Every time one process has to wait, another can take over use of the CPU.
4.1.1 Scheduling Objectives
• A system user/designer must consider a variety of factors when developing a scheduling discipline, such as the type of system & the user's needs.
• Example:
• The scheduling discipline for a real-time system should differ from that for an interactive desktop system; users expect different results from these kinds of systems.
• Depending on the system, the user & designer might expect the scheduler to meet the following objectives:
• Fairness:
• Fairness is defined as the degree to which each process gets an equal chance to execute.
• A scheduler makes sure that each process gets its fair share of the CPU time.
• Maximum Resource Utilization:
• The scheduling technique should keep the resources of the system busy.
Cont…
• Policy Enforcement:
• The scheduler has to make sure that the system's policy is enforced.
• For example, if the local policy is safety, then the safety-control processes must be able to run whenever they want to, even if it means a delay in payroll processes.
• Avoid Indefinite Postponement:
• A process should not experience an unbounded wait time before or while receiving service.
• Response Time:
• A scheduler should minimize the response time for interactive users.
• Minimize Overhead:
• Overhead often results in wasted resources. But a certain portion of system resources effectively invested as overhead can greatly improve overall system performance.
• Turnaround:
• A scheduler should minimize the time batch users must wait for output.
Cont…
• Maximize Throughput:
• A scheduling discipline should attempt to service the maximum number of processes per unit time.
• A scheduler should maximize the number of jobs processed per unit time.
• Ensure Predictability:
• By minimizing the statistical variance in process response times, a system can guarantee that processes will receive predictable service levels.
• Efficiency:
• The scheduler should keep the system busy one hundred percent of the time when possible.
• Enforce Priorities:
• If the system assigns priorities to processes, the scheduling mechanism should favor the higher-priority processes.
4.1.2 CPU-I/O Burst Cycles
• CPU scheduling is greatly affected by how a process behaves during its execution.
• Almost all processes continue to switch between the CPU (for processing) & I/O devices (for performing I/O) during their execution.
• The success of CPU scheduling depends upon an observed property of processes: process execution is a cycle of CPU execution & I/O wait.
• Processes alternate back & forth between these two states.
• Process execution begins with a CPU burst. It is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
• In short, process execution comprises alternating cycles of CPU bursts & I/O bursts.
• To realize a system's scheduling objectives, the scheduler should consider process behaviour.
Cont…
• CPU-Bound Process:
• A process which spends more time in computation, or with the CPU, & very rarely with the I/O devices is called a CPU-bound process.
• I/O-Bound Process:
• A process which spends more time in I/O operations than in computation is called an I/O-bound process.

Fig. Execution is an Alternating Sequence of CPU & I/O Bursts
4.1.3 Scheduling Criteria
• For scheduling purposes, the scheduler may consider some performance measures & optimization criteria.
• Different CPU scheduling algorithms have different properties & may favor one class of processes over another.
• In choosing which algorithm to use in a particular situation, the properties of the various algorithms must be considered.
• The following criteria have been suggested for comparing CPU scheduling algorithms:
• CPU Utilization:
• CPU utilization is defined as the percentage of time the CPU is busy executing processes.
• For higher utilization, the CPU must be kept as busy as possible, i.e. there must be some process running at all times.
• CPU utilization may range from 0 to 100%.
• In a real system it should range from 40% to 90%.
Cont…
• Throughput:
• If the CPU is busy executing processes, then work is being done.
• One measure of work is the number of processes that are completed per time unit, called throughput.
• For long processes, this rate may be one process per hour; for short transactions, throughput might be 10 processes per second.
• Throughput is defined as the total number of processes that a system can execute per unit of time.
• Turnaround Time (TAT):
• It is the difference between the time a process enters the system & the time it exits the system.
• From the point of view of a process, the important criterion is how long it takes to execute that process.
• The interval from the time of submission of a process to the time of completion is the turnaround time.
• Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU & doing I/O.
Cont…
• Waiting Time:
• The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue.
• Waiting time is defined as the time spent by a process while waiting in the ready queue.
• Balanced Utilization:
• Balanced utilization is defined as the percentage of time all the system resources are busy.
• It considers not only CPU utilization but also the utilization of I/O devices, memory & all other resources.
• To get more work done by the system, the CPU & I/O devices must be kept running simultaneously.
• Response Time:
• Response time is defined as the time elapsed between the moment a user initiates a request & the instant the system starts responding to this request.
• In an interactive system, TAT may not be the best criterion. Thus another measure is the time from submission of a request until the first response is produced.
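The turnaround-time and waiting-time criteria above reduce to two one-line formulas. The following minimal sketch (function names are illustrative, not from the slides) expresses them directly:

```python
# Minimal sketch of two of the criteria defined above (illustrative names).

def turnaround_time(arrival, completion):
    # interval from submission of the process to its completion
    return completion - arrival

def waiting_time(arrival, burst, completion):
    # time in the ready queue = turnaround minus actual CPU service received
    return turnaround_time(arrival, completion) - burst

# A process arriving at t=2 with a 9 ms CPU burst that completes at t=21:
print(turnaround_time(2, 21))    # 19
print(waiting_time(2, 9, 21))    # 10
```

These two helpers are all that the worked examples in Section 4.2 apply, algorithm by algorithm.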
4.1.4 Types of Scheduling
• A scheduler is an OS module that selects the job which is to be admitted next for execution.
• CPU scheduling is the process of selecting a process & allocating the processor to the selected process for execution.
• Scheduling types depend upon how the process is executed.
• CPU scheduling decisions may take place under the following conditions:
• When a process switches from the running state to the waiting state (an I/O request, or invocation of wait for the termination of one of the child processes).
• When a process switches from the running state to the ready state (when an interrupt occurs).
• When a process switches from the waiting state to the ready state (completion of I/O).
• When a process terminates.
4.1.4.1 Pre-emptive Scheduling
• Pre-emptive scheduling allows a higher-priority process to replace a currently running process, even if its time slot is not completed or it has not requested any I/O.
• If a higher-priority process enters the system, the currently running process is stopped & the CPU transfers control to the higher-priority process.
• The currently running process may be interrupted & moved to the ready state by the OS. Windows 95 introduced pre-emptive scheduling, & all subsequent versions of the Windows OS have used pre-emptive scheduling.
• Pre-emptive scheduling algorithms are based on priority, where a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
• The advantage of pre-emptive scheduling algorithms is that they allow real multiprogramming.
• The disadvantages of pre-emptive scheduling are that the algorithms are complex & they can lead the system to race conditions.
4.1.4.2 Non-preemptive Scheduling
• In non-preemptive scheduling, once the CPU is assigned to a process, the processor is not released until the completion of that process.
• It means the running process retains control of the CPU & other allocated resources until the normal termination of that process.
• In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
• This scheduling method was used by Microsoft Windows 3.x.
• Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time.
• The advantages of non-preemptive scheduling algorithms are that they are simple & cannot lead the system to race conditions.
• The disadvantage of non-preemptive scheduling algorithms is that they do not allow real multiprogramming.
4.1.4.3 Difference between Pre-emptive & Non-preemptive Scheduling
4.1.4.4 Concept of Dispatcher
• The dispatcher is a component involved in CPU scheduling.
• The dispatcher is the module that actually gives control of the CPU to the process selected by the short-term scheduler.
• The module of the OS that performs the function of setting up the execution of the selected process on the CPU is known as the dispatcher.
• The CPU scheduler only selects a process to be executed next on the CPU; actually setting up its execution is performed by another module of the OS, known as the dispatcher.
• The following functions are performed by the dispatcher:
• Loading the registers of the process.
• Switching the operating system to user mode.
• Restarting the program by jumping to the proper location in the user program.
• The dispatcher needs to be as fast as possible, as it is run on every context switch.
• The time taken by the dispatcher to stop one process & start another process running is called the dispatch latency time.
4.2 SCHEDULING ALGORITHMS
• CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
• The algorithm used by the scheduler to carry out the selection of a process for execution is known as a scheduling algorithm.
• Scheduling algorithms are either non-preemptive or preemptive.
• The different CPU scheduling algorithms are:
• First Come First Serve (FCFS)
• Shortest Job First (SJF)
• Priority Scheduling Algorithm
• Round Robin (RR) Algorithm & so on.
• A process/CPU scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms.
4.2.1 First Come First Serve (FCFS)
• It is the simplest type of algorithm, in which the process that requests the CPU first is allocated the CPU first.
• In the FCFS scheduling algorithm, processes are scheduled in the order they are received.
• The FCFS algorithm is easily implemented by using a queue data structure for the ready queue.
• Jobs are processed in the order of their arrival in the ready queue. It can be implemented with a FIFO (First In First Out) queue.
• The FCFS scheduling algorithm is non-preemptive.
• Once the CPU has been allocated to a process, that process keeps the CPU until it wants to release the CPU, either by terminating or by requesting I/O.
• In a time-sharing system it is not useful, because a process will hold the CPU until it finishes or changes its state to the wait state.
• In FCFS, once the CPU is given to a process, it keeps it until the completion of execution.
Cont…

Fig. Concept of FCFS

• A process with a long CPU burst will hold up the other processes.
• Moreover, it can affect overall throughput, since I/O on processes in the waiting state may complete while a CPU-bound process is still running.
• The average waiting time for the FCFS algorithm is not minimal, & it also varies substantially if the process CPU burst times vary greatly.
Cont…
• Advantages:
• FCFS is easy to understand & implement, as processes are simply added at the end & removed from the front of the queue.
• FCFS is well suited for batch systems, where longer time periods for each process are often acceptable.
• Disadvantages:
• The average waiting time is very large.
• FCFS is not an attractive alternative on its own for a single-processor system.
• Another difficulty is that FCFS tends to favor processor-bound processes over I/O-bound processes & may result in inefficient use of both the processor & the I/O devices.
Cont…
• Example 1: Consider four jobs scheduled for execution (all jobs arrived at the same time). Find the average waiting time & turnaround time.

Process    CPU Burst Time
P1         8
P2         4
P3         9
P4         5

• Solution:
• Gantt Chart:
| P1 | P2 | P3 | P4 |
0    8    12   21   26

Waiting Time                               Turnaround Time
Waiting time of Process P1 = 0             Turnaround time of Process P1 = 8
Waiting time of Process P2 = 8             Turnaround time of Process P2 = 12
Waiting time of Process P3 = 12            Turnaround time of Process P3 = 21
Waiting time of Process P4 = 21            Turnaround time of Process P4 = 26

Average waiting time = (sum of all processes' waiting times)/(number of processes)
AWT = (0+8+12+21)/4 = 41/4 = 10.25 msec.
Average turnaround time = (sum of all processes' TAT)/(number of processes)
ATAT = (8+12+21+26)/4 = 67/4 = 16.75 msec.
Cont…
• Example 2: Consider four jobs scheduled for execution with the arrival times shown. Find the average waiting time & turnaround time.

Process    Arrival Time    CPU Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5

• Solution:
• Gantt Chart:
| P1 | P2 | P3 | P4 |
0    8    12   21   26

Waiting Time = Starting time - Arrival time      Turnaround Time = Ending time - Arrival time
Waiting time of Process P1 = 0-0 = 0             Turnaround time of Process P1 = 8-0 = 8
Waiting time of Process P2 = 8-1 = 7             Turnaround time of Process P2 = 12-1 = 11
Waiting time of Process P3 = 12-2 = 10           Turnaround time of Process P3 = 21-2 = 19
Waiting time of Process P4 = 21-3 = 18           Turnaround time of Process P4 = 26-3 = 23

Average waiting time = (0+7+10+18)/4 = 35/4 = 8.75 msec.
Average turnaround time = (8+11+19+23)/4 = 61/4 = 15.25 msec.
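Both FCFS examples can be checked with a short simulation. The sketch below (function and variable names are my own, not from the slides) reproduces Example 2; with all arrival times set to 0 it reproduces Example 1 as well:

```python
def fcfs(processes):
    """FCFS simulation; processes = [(name, arrival, burst)].
    Returns {name: (waiting_time, turnaround_time)}."""
    time, result = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)     # CPU idles until the job has arrived
        time += burst                 # non-preemptive: run to completion
        result[name] = (time - arrival - burst, time - arrival)
    return result

r = fcfs([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
awt = sum(w for w, _ in r.values()) / len(r)     # 8.75 ms, as in Example 2
atat = sum(t for _, t in r.values()) / len(r)    # 15.25 ms
```

Because `sorted()` is stable, jobs with equal arrival times keep their input order, which matches the FIFO-queue behaviour described above.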
Cont…
• Example 3: Consider three jobs scheduled for execution (all arrived at the same time). Find the average waiting time & turnaround time.

Process    CPU Burst Time
P1         24
P2         3
P3         3
4.2.2 Shortest Job First (SJF)
• The SJF scheduling algorithm is also known as the Shortest Process Next (SPN) or Shortest Request Next (SRN) scheduling algorithm; it schedules processes according to the length of the CPU burst they require.
• In the SJF algorithm, the process with the shortest expected processing time is assigned to the CPU. Hence the name shortest job first.
• Jobs or processes are processed in ascending order of their CPU burst times.
• Each time, the job with the smallest CPU burst time is selected from the ready queue.
• If two processes have the same CPU burst time, they are scheduled according to the FCFS algorithm.
• The performance of this algorithm is very good in comparison to FCFS.
• The SJF scheduling algorithm is provably optimal; it gives the minimal average waiting time for a given set of processes. By moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases.
• Consequently, the average waiting time decreases.
Cont…
• In the SJF scheduling algorithm, the process in the ready queue with the shortest expected processing time is assigned to the CPU next.
• SJF can be evaluated in two different manners:
• Non-preemptive SJF:
• In this method, if the CPU is executing one job, it is not stopped before completion.
• Pre-emptive SJF:
• In this method, while the CPU is executing a job, if a new job arrives with a smaller burst time, then the current job is pre-empted (sent back to the ready queue) & the new job is executed.
• It is also called Shortest Remaining Time First (SRTF).
• The SJF algorithm may be either pre-emptive or non-preemptive.
• The choice arises when a new process arrives at the ready queue while a previous process is executing.
• The new process may have a shorter next CPU burst than what is left of the currently executing process.
• A pre-emptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst.
Cont…
• Advantages:
• Overall performance is significantly improved in terms of response time.
• The SJF algorithm reduces the variance in waiting & turnaround times.
• Disadvantages:
• There is a risk of starvation of longer processes.
• It is very difficult to know the length of the next CPU burst.
Cont…
• Example: Consider the following table; find the turnaround time & waiting time.

Process    Arrival Time    CPU Burst Time
P1         0               4
P2         1               1
P3         2               2
P4         3               1

• Non-preemptive SJF:
• At time 0, the only job in the ready queue is P1, so we must start with P1.
• P2 & P4 have the same burst time; applying FCFS, we consider P2 before P4.
• As it is non-preemptive, P1 will not be preempted before completion.

Gantt Chart:
| P1 | P2 | P4 | P3 |
0    4    5    6    8
Cont…
Waiting Time                                   Turnaround Time
Waiting time of Process P1 = 0 - 0 = 0         Turnaround time of Process P1 = 4 - 0 = 4
Waiting time of Process P2 = 4 - 1 = 3         Turnaround time of Process P2 = 5 - 1 = 4
Waiting time of Process P3 = 6 - 2 = 4         Turnaround time of Process P3 = 8 - 2 = 6
Waiting time of Process P4 = 5 - 3 = 2         Turnaround time of Process P4 = 6 - 3 = 3

Average waiting time = (0+3+4+2)/4 = 9/4 = 2.25 msec.
Average turnaround time = (4+4+6+3)/4 = 17/4 = 4.25 msec.
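The non-preemptive SJF schedule above can be verified with a small simulation. This is a sketch with my own names; ties on burst time fall back to FCFS order, as the slides state:

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF; processes = [(name, arrival, burst)].
    Returns {name: (waiting_time, turnaround_time)}."""
    pending = sorted(processes, key=lambda p: p[1])   # arrival (FCFS) order
    time, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idles until next arrival
            time = min(p[1] for p in pending)
            continue
        # shortest burst wins; min() keeps the first (earliest-arrived) on ties
        job = min(ready, key=lambda p: p[2])
        pending.remove(job)
        name, arrival, burst = job
        time += burst
        result[name] = (time - arrival - burst, time - arrival)
    return result

r = sjf_nonpreemptive([("P1", 0, 4), ("P2", 1, 1), ("P3", 2, 2), ("P4", 3, 1)])
# r["P2"] == (3, 4): the P2-before-P4 tie-break from the worked example
```

The resulting per-process (waiting, turnaround) pairs match the table above, giving the same 2.25 ms average wait.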
• Pre-emptive SJF:
• At time 0 there is only one job in the ready queue, so we must start with P1. At time 1, a new job P2 arrives with a smaller burst time, so P1 is preempted & P2 completes its execution.
• At time 2, P3 arrives & the CPU starts executing it. P3 has a burst time of two units. After one unit is over (i.e. at time 3), P4 arrives with a burst time of 1.
• Now if we compare the remaining burst time of P3 (i.e. 2 - 1 = 1) & the burst time of P4 (i.e. 1), both are exactly the same.
• Applying FCFS, P3 continues. At the end, P4 is executed & then the balance of P1.
Cont…
Gantt Chart:
| P1 | P2 | P3 | P3 | P4 | P1 |
0    1    2    3    4    5    8

Process    Waiting Time    Turnaround Time
P1         (5-1) = 4       (8-0) = 8
P2         (1-1) = 0       (2-1) = 1
P3         (2-2) = 0       (4-2) = 2
P4         (4-3) = 1       (5-3) = 2

Average waiting time = (4+0+0+1)/4 = 5/4 = 1.25 msec.
Average turnaround time = (8+1+2+2)/4 = 13/4 = 3.25 msec.
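The pre-emptive schedule can be reproduced by simulating one millisecond at a time, always running the ready process with the least remaining time; ties are broken by earlier arrival, which is what keeps P3 running at time 3. A sketch with illustrative names:

```python
def srtf(processes):
    """Pre-emptive SJF (SRTF) simulated in 1 ms ticks.
    processes = [(name, arrival, burst)]; returns (per-tick order, finish times)."""
    arrival = {n: a for n, a, b in processes}
    remaining = {n: b for n, a, b in processes}
    time, ticks, finish = 0, [], {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        # least remaining time first; earlier arrival breaks ties
        current = min(ready, key=lambda n: (remaining[n], arrival[n]))
        ticks.append(current)
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return ticks, finish

ticks, finish = srtf([("P1", 0, 4), ("P2", 1, 1), ("P3", 2, 2), ("P4", 3, 1)])
# ticks == ["P1", "P2", "P3", "P3", "P4", "P1", "P1", "P1"], matching the Gantt chart
```

Completion times come out as P2 = 2, P3 = 4, P4 = 5, P1 = 8, which yields the same 1.25 ms average waiting time computed above.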
Cont…
• Example: Consider the following table; find the turnaround time & waiting time using pre-emptive SJF.

Process    Arrival Time    CPU Burst Time
P1         0               24
P2         5               3
P3         7               3
4.2.3 Shortest Remaining Time Next
• The SRTN scheduling algorithm is a pre-emptive form of the Shortest Job First (SJF) scheduling algorithm.
• SRTN is also known as the Shortest Time to Go (STG) scheduling algorithm.
• SRTN is a scheduling discipline in which the next scheduling entity, a job or a process, is selected on the basis of the shortest remaining execution time.
• SRTN scheduling may be implemented in either the non-preemptive or the pre-emptive variety.
• In this algorithm, the job chosen is the one whose remaining run time is the shortest. For a new job, its run time is compared with the remaining time of the current job.
• If the new job needs less time to complete than the current job, then the current job is blocked & the new job is run.
• It is used for batch systems & provides an advantage to new jobs.
Cont…
• Consider the following example, which contains four processes P1, P2, P3, P4 with their arrival times & required burst times (in milliseconds).

Process    Arrival Time    CPU Burst Time
P1         0               7
P2         1               5
P3         2               2
P4         3               3

Gantt Chart:
| P1 | P2 | P3 | P4 | P2 | P1 |
0    1    2    4    7    11   17
Cont…
• SRTN favors only those processes that are just about to complete, & not those that have just started their operation. Thus starvation may occur.
• Like SJF, SRTN also requires an estimate of the next CPU burst of a process in advance.
• Favoring a long process nearing its completion over several short processes entering the system may affect the turnaround times of the short processes.
4.2.4 Priority Scheduling Algorithm
• A priority is associated with each process & the CPU is allocated to the process with the highest priority; hence it is called priority scheduling.
• The scheduler always picks the highest-priority process for execution from the ready queue.
• Equal-priority processes are scheduled in FCFS order.
• The SJF algorithm is a special case of the priority scheduling algorithm, where the priority is the inverse of the next CPU burst, i.e. a lower priority is assigned to a larger CPU burst.
• Priorities are roughly categorized into the following types:
• Internal priorities: based on measurable quantities such as burst time, memory requirements, number of open files, etc.
• External priorities: set by human criteria, e.g. seniority, the influence of a person, etc.
• Priority scheduling can be pre-emptive or non-preemptive.
• When a process enters the ready queue, its priority is compared with the priority of the currently running process.
• In the pre-emptive priority scheduling algorithm, at any time the CPU is allocated to one process, & if the priority of a newly arrived process is higher than the priority of the currently running process, the running process is interrupted & returned to the queue.
• The higher-priority job is started for execution.
• A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
Cont…
• Example: Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, P3, P4, P5, with the length of the CPU burst time given in milliseconds. Calculate the average TAT & AWT. (Assume a lower number means a higher priority.)

Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             3
P4         1             4
P5         5             2

Gantt Chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Waiting Time                          Turnaround Time
Waiting time for process P1 = 6       TAT of process P1 = 16
Waiting time for process P2 = 0       TAT of process P2 = 1
Waiting time for process P3 = 16      TAT of process P3 = 18
Waiting time for process P4 = 18      TAT of process P4 = 19
Waiting time for process P5 = 1       TAT of process P5 = 6

Average waiting time = (6+0+16+18+1)/5 = 41/5 = 8.2 ms
Average TAT = (16+1+18+19+6)/5 = 60/5 = 12 ms
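The schedule can be checked with a non-preemptive priority simulation. In this sketch (my own names) all processes arrive at t = 0, a lower priority number runs first, and equal priorities fall back to input order, i.e. FCFS:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling; processes = [(name, burst, priority)],
    all arriving at t=0. Lower number = higher priority (assumed convention).
    Returns {name: (waiting_time, turnaround_time)}."""
    time, result = 0, {}
    # sorted() is stable, so equal priorities keep their original (FCFS) order
    for name, burst, prio in sorted(processes, key=lambda p: p[2]):
        result[name] = (time, time + burst)
        time += burst
    return result

r = priority_schedule(
    [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3), ("P4", 1, 4), ("P5", 5, 2)])
awt = sum(w for w, _ in r.values()) / len(r)    # 8.2 ms, as computed above
```

The stable sort is what places P1 before P3 (both priority 3), reproducing the Gantt chart order P2, P5, P1, P3, P4.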
Cont…
• Note:
• If there is more than one job with the same priority, apply FCFS.
• If it is not specified whether the highest or lowest number denotes the highest priority, do not forget to make your own assumption & specify it before you solve the problem.
• Example: Consider the following table & find the AWT & ATAT.

Jobs    Burst Time    Priority
J1      4             4
J2      6             1
J3      2             3
J4      3             2

• Advantages:
• Simple to use.
• Important processes are never made to wait because of the execution of less important processes.
• Suitable for applications with varying time & resource requirements.
• Disadvantages:
• It suffers from the problem of starvation of lower-priority processes.
4.2.5 Round Robin Scheduling Algorithm
• The Round Robin (RR) scheduling algorithm is designed especially for time-sharing systems.
• RR is a pre-emptive type of scheduling algorithm.
• Round robin scheduling is a pre-emptive version of FCFS scheduling.
• Processes are dispatched in a First In First Out (FIFO) sequence, but each process is allowed to run for only a limited amount of time.
• A small unit of time called a time quantum or time slice is defined.
• A time quantum is generally from 10 to 100 milliseconds.
• The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time called a time slice or quantum.
• If a process does not complete before its quantum expires, the system preempts it & gives the processor to the next waiting process.
• The system then places the preempted process at the back of the ready queue.
Cont…
• In the figure, process P1 is dispatched to a processor, where it executes either until completion, in which case it exits the system, or until its time slice expires, at which point it is preempted & placed at the tail of the ready queue.
• The scheduler then dispatches process P2.

Fig. RR Scheduling

• Advantages:
• RR increases fairness among the processes.
• In the RR scheduling algorithm, the overhead on the processor is low.
• Disadvantages:
• In RR, processes may take a long time to execute. This decreases system throughput.
• RR requires some extra hardware support, such as a timer, to cause an interrupt after each time-out.
• In RR scheduling, care must be taken in choosing the quantum value.
• Throughput in the RR scheduling algorithm is low if the time quantum is too small.
Cont…
• Example 1: Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds (time quantum = 4 ms).

Jobs    Burst Time
J1      24
J2      3
J3      3

Gantt Chart:
| J1 | J2 | J3 | J1 |
0    4    7    10   30
Cont…
• Example 2: Consider the following set of processes (time quantum = 3 ms).

Process    Arrival Time    CPU Burst Time
P1         0               5
P2         1               3
P3         2               8
P4         3               6

Gantt Chart:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    3    6    9    12   14   17   20   22
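Round robin is easy to simulate with a FIFO queue. The sketch below (illustrative names; all processes assumed to arrive at t = 0) reproduces Example 1; J1's consecutive slices from t = 10 onward appear merged as 10 to 30 in the slide's Gantt chart:

```python
from collections import deque

def round_robin(bursts, quantum):
    """RR simulation; bursts = {name: burst}, all arriving at t=0.
    Returns (time slices, completion times)."""
    queue, remaining = deque(bursts), dict(bursts)
    time, slices, finish = 0, [], {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run at most one quantum
        slices.append((name, time, time + run))
        time += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)                # preempted: back of the queue
        else:
            finish[name] = time
    return slices, finish

slices, finish = round_robin({"J1": 24, "J2": 3, "J3": 3}, quantum=4)
# finish == {"J1": 30, "J2": 7, "J3": 10}
```

From the completion times, the waiting times are J1 = 6, J2 = 4, J3 = 7 ms (completion minus burst, since all arrive at t = 0).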
Multilevel Queue Scheduling
• In multilevel queue (MLQ) scheduling, processes are divided into more than one queue, depending upon their properties such as the size of the memory, the type of process, or the priority of the process.
• Each queue follows a separate scheduling algorithm.
• In the multilevel queue scheduling algorithm, processes are classified into different groups such as system processes, interactive processes, interactive editing processes, batch processes, user processes, etc.
• The interactive processes are known as foreground processes & the batch processes are known as background processes.
• These two types of processes have different response-time requirements & so may have different scheduling needs.
• Foreground processes may have (externally defined) priority over background processes.
• A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
Cont…
• Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes & interactive editing processes were all empty.
• If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
• Another possibility is to time-slice among the queues.

System processes        (Highest priority)
Interactive processes
Batch processes
Student processes       (Lowest priority)

Fig. Multilevel Priority Queue Scheduling
Cont…
• Advantages:
• In MLQ, processes are permanently assigned to their respective queues & do not move between queues. This results in low scheduling overhead.
• In MLQ, one can apply different algorithms to different processes.
• There are many processes which cannot all be placed in one single queue; MLQ scheduling solves this, as we can now put them in different queues.
• Disadvantages:
• Processes in lower-priority queues may starve for the CPU if processes are continuously arriving in higher-priority queues.
• In MLQ, processes do not move from one queue to another queue.
4.2.6 Multi Level Feedback Queue Scheduling
• The MLFQ scheduling, also known as multilevel adaptive scheduling, is an improved version of the MLQ scheduling algorithm.
• In MLQ, processes cannot move from one queue to another because processes do not change their foreground or background nature.
• But in MLFQ scheduling, processes are not permanently assigned to a queue on entry to the system.
• They are allowed to move between queues.
• The idea is to separate processes with different CPU burst characteristics.
• If a process uses too much CPU time, it will be moved to a lower priority queue.
• Similarly, a process that waits too long in a low priority queue will be moved to a higher priority queue. This helps to reduce starvation.
Cont…
• A multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher priority queue.
• The method used to determine when to demote a process to a lower priority queue.
• The method used to determine which queue a process will enter when that process needs service.
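The five parameters above can be illustrated with a small sketch. The names and rules here are illustrative assumptions, not taken from any real OS: three queues, round-robin quanta that grow at lower levels, entry at the top queue, demotion after a full quantum, and promotion after waiting too long.

```python
from collections import deque

NUM_QUEUES = 3                      # parameter 1: the number of queues
QUANTA = [2, 4, 8]                  # parameter 2: RR per queue, growing quantum
queues = [deque() for _ in range(NUM_QUEUES)]

def admit(pid):
    """Parameter 5 (entry rule): new processes start in the highest-priority queue."""
    queues[0].append(pid)

def demote(level):
    """Parameter 4 (demotion rule): a process that used its full quantum moves one queue down."""
    pid = queues[level].popleft()
    queues[min(level + 1, NUM_QUEUES - 1)].append(pid)

def promote(level):
    """Parameter 3 (promotion rule): a process that waited too long moves one queue up."""
    pid = queues[level].popleft()
    queues[max(level - 1, 0)].append(pid)

admit("P1")     # P1 enters queue 0
demote(0)       # used its full quantum: falls to queue 1
demote(1)       # still CPU-bound: falls to queue 2
promote(2)      # waited too long: rises back to queue 1
print([list(q) for q in queues])
```

The demote/promote pair is what makes the algorithm "adaptive": a process's queue reflects its observed CPU burst behaviour rather than a fixed classification.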
Cont…
• Advantages:
• A process that has been waiting too long is moved to a higher priority queue.
• It is fair to I/O bound (short) processes, as these processes need not wait too long & are executed quickly.
• It improves the overall performance of the system.
• Disadvantages:
• The turnaround time for long processes may increase significantly.
• It is the most complex scheduling algorithm.
• In this algorithm, moving the processes between queues causes a number of context switches, which results in increased overhead.
4.3 DEADLOCK
• Deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other.
• Deadlock involves conflicting needs for resources by two or more processes.
• For example, in a multiprogramming system, suppose two processes each want to print a very large file. Process A requests permission to use the printer & is granted. Process B then requests permission to use the tape drive & is also granted.
• Now A asks for the tape drive, but the request is denied until B releases it. Instead of releasing the tape drive, B asks for the printer. At this point both processes are blocked & will remain so forever. This situation is called a deadlock.
• The blockage is permanent unless the OS takes some extraordinary action, such as killing one or more processes or forcing one or more processes to back track.
4.3.1 System Model
• A system consists of a finite number of resources to be distributed among a number of competing processes.
• The resources are partitioned into several types, each consisting of some number of identical instances.
• Some examples of resources are memory, CPU cycles, files & I/O devices.
• If a system has two CPUs, then we can say that there are two instances of the resource type CPU.
• When a process requests a resource, any instance of that resource type can be used to satisfy the request.
• If allocating any instance will not satisfy the request, then the instances are not identical.
• A process must request a resource before using it & must release the resource after using it.
• A process may request as many resources as it requires to carry out its designated task.
Cont…
• A process must request a resource before using it, & release it when done, in the following sequence:
• Request: The process first requests the resource. If the request cannot be granted immediately, the process has to wait until the requested resource is available.
• Use: The process can operate on the resource (if the resource is a printer, the process can print on the printer).
• Release: The process releases the resource.
• The request & release of resources are system calls provided by the OS.
• Examples of such system calls are request() & release() for devices, open() & close() for files, & allocate() & free() for memory.
• A system table records whether each resource is free or allocated; for each resource that is allocated, the table also records the process to which it is allocated.
• If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.
4.3.2 Necessary Conditions to Deadlock
• Deadlock is a situation when two or more processes get blocked & cannot proceed further because of inter-dependency.
• In the real world, a deadlock can arise when two persons each wait for a phone call from the other.
• Deadlock is defined as, "a situation where a set of processes are blocked because each process is holding a resource & waiting for another resource acquired by some other process".
• Processes involved in a deadlock remain blocked permanently & this affects OS performance indices like throughput & resource efficiency.
• In a multiprogramming environment, multiple processes may try to access a resource.
Fig. Deadlock
Cont…
• A deadlock arises when the following four conditions hold true simultaneously in a system.
• Mutual Exclusion:
• At least one resource is held in a non-sharable mode, that is, only one process at a time can use the resource.
• If another process requests that resource, the requesting process must be delayed until the resource has been released.
• Each resource is either currently assigned to exactly one process or it is available.
• Hold & Wait:
• There must exist a process that is holding at least one resource & is waiting to acquire additional resources that are currently being held by another process.
• A process currently holding resources granted earlier can request new resources.
Cont…
• No pre-emption:
• Resources cannot be pre-empted; i.e. a resource can only be released voluntarily by the process holding it, after the process has completed its task.
• Resources previously granted cannot be forcibly taken from a process.
• They must be explicitly released by the process holding them.
• Circular Wait:
• There exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource which is held by P1, P1 is waiting for a resource which is held by P2, ..., Pn-1 is waiting for a resource which is held by Pn, & Pn is waiting for a resource which is held by P0.
• Thus there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
4.3.3 Deadlock Handling
• A deadlock in an OS can be handled in the following four different ways:
• Adopt methods for avoiding the deadlock.
• Prevent the deadlock from occurring (use a protocol).
• Ignore the deadlock.
• Allow the deadlock to occur, detect it & recover from it.
• To ensure that deadlocks never occur, the system can use either deadlock prevention or deadlock avoidance techniques.
• If neither of these two techniques is used, a deadlock may occur. In this case, an algorithm can be provided for detecting the deadlock & then recovering the system from the deadlock.
• Otherwise, a method must be provided either to prevent the deadlock from occurring or to detect the deadlock & take appropriate action if a deadlock occurs.
• If deadlocks occur less frequently in a system, then it is better to ignore the deadlock instead of adopting expensive techniques for deadlock prevention, avoidance or deadlock detection & recovery.
4.3.3.1 Deadlock Prevention
• Deadlock can be prevented by not allowing all four conditions to be satisfied simultaneously, i.e. by making sure that at least one of the four conditions does not hold.
• In simple words, deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions cannot hold.
• Eliminating Mutual Exclusion Condition:
• The mutual exclusion condition must hold for non-sharable types of resources.
• Example: several processes cannot simultaneously share a printer.
• A process never needs to wait for a sharable resource.
• In general, it is not possible to prevent deadlock by denying the mutual exclusion condition, since some resources are intrinsically non-sharable.
• Read only files are a good example of a sharable resource.
• If several processes attempt to open a read only file at the same time, they can be granted simultaneous access to the file.
Cont…
• Eliminating Hold & Wait:
• In order to ensure that the hold & wait condition never holds in the system, one must guarantee that whenever a process requests a resource it does not hold any other resources.
• In other words, the hold & wait condition can be eliminated by not allowing any process to request a resource until it releases the resources held by it, which is impractical as a process may require the resources simultaneously.
• One protocol that can be used requires each process to request & be allocated all of its resources before it begins execution.
• Example: Consider a process which copies data from a card reader to a disk file, sorts the disk file, prints the results to a line printer & copies them to a magnetic tape.
• If all resources are to be requested at the beginning of the process, then the process must initially request the card reader, disk file, line printer & tape drive.
Cont…
• Eliminating No-preemption Condition:
• The third necessary condition for deadlock is that there is no pre-emption of resources that have already been allocated.
• Eliminating the no-preemption condition means a resource can be taken back from the process holding it.
• If a process requests a resource held by some other process then, instead of making it wait, all the resources currently held by the requesting process can be preempted.
• The process will be restarted only when it is allocated the requested as well as the preempted resources.
• Note that only those resources can be preempted whose current working state can be saved & later restored.
• For example, resources such as printers & tape drives cannot generally be preempted.
Cont…
• Eliminating Circular Wait:
• The circular wait condition of deadlock can be eliminated by assigning a number to each available resource; a process can then request resources only in increasing order of these numbers.
• Whenever a process requests a resource, the number of the required resource is compared with the numbers of the resources already held by it.
• If the number of a requested resource is greater than that of all the currently held resources, the request is granted.
• If the number of a requested resource is less than that of a currently held resource, all the resources with greater numbers must be released first, before acquiring the new resource.
• Thus the circular wait condition can be prevented by imposing a linear ordering on resource types, where each type is allocated an integer number.

Number  Resource Name
0       Tape drive
1       Printer
2       Plotter
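The numbering scheme in the table above can be enforced in code by always acquiring locks in increasing resource number. A minimal sketch, with the resource names taken from the table and the helper `acquire_in_order` being a hypothetical name:

```python
import threading

# Resource numbering as in the table above.
ORDER = {"tape_drive": 0, "printer": 1, "plotter": 2}
locks = {name: threading.Lock() for name in ORDER}

def acquire_in_order(*names):
    """Acquire resources strictly in increasing order of their number,
    so no circular chain of waiting processes can form."""
    ordered = sorted(names, key=ORDER.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

held = acquire_in_order("plotter", "tape_drive")
print(held)  # tape_drive (0) is always taken before plotter (2)
for name in held:
    locks[name].release()
```

Because every process takes locks in the same global order, no process can hold a higher-numbered resource while waiting for a lower-numbered one, so a cycle of waits is impossible.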
4.3.3.2 Deadlock Avoidance
• A deadlock can be prevented by eliminating any one of the four necessary conditions of deadlock, but this results in inefficient use of resources.
• Deadlock avoidance approaches ensure that deadlock cannot arise in the system, by not allowing the conditions for deadlock to hold simultaneously.
• Deadlock avoidance requires that the operating system be given information in advance regarding the resources a process will request and use.
• This information is used by the operating system to schedule the allocation of resources so that no process has to wait indefinitely for a resource.
• Deadlock prevention, in contrast, prevents deadlocks by restraining how requests can be made.
• The restraints ensure that at least one of the necessary conditions for deadlock cannot occur, and hence that deadlock cannot occur.
• A side effect of preventing deadlock by this method is possibly low device utilization and reduced system throughput.
Cont…
• Safe State:
• A state is safe if the system can allocate all resources requested by all processes without entering a deadlock state.
• A state is safe if there exists a safe sequence of processes <P1, P2, ..., Pn> such that all of the resource requests for each Pi can be granted using the currently available resources plus the resources held by all Pj with j < i.
• Unsafe State:
• If a safe sequence does not exist, then the system is in an unsafe state, which may lead to deadlock (all safe states are deadlock free, but not all unsafe states lead to deadlock).
Fig. Safe, Unsafe & Deadlock State Space
Cont…
• Deadlock Avoidance Example:
• Let us consider a system having 12 magnetic tapes and three processes P1, P2, P3.
• Process P1 requires 10 magnetic tapes, process P2 may need as many as 4 tapes, and process P3 may need up to 9 tapes.
• Suppose at time t0, process P1 is holding 5 tapes, process P2 is holding 2 tapes and process P3 is holding 2 tapes. (There are 3 free magnetic tapes.)
• So at time t0, the system is in a safe state. The sequence <P2, P1, P3> satisfies the safety condition.
• Process P2 can immediately be allocated all its tape drives and then return them.
• After the return the system will have 5 available tapes; then process P1 can get all its tapes and return them (the system will then have 10 tapes).
• Finally, process P3 can get all its tapes and return them (the system will then have 12 available tapes).
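The safe-sequence reasoning above can be checked mechanically. This is a sketch for the single-resource-type case, using the tape counts from the example; `is_safe` is an illustrative name, not a standard API.

```python
def is_safe(available, held, max_need):
    """Look for a safe sequence with a single resource type.
    held and max_need are dicts keyed by process name."""
    need = {p: max_need[p] - held[p] for p in held}
    finished, order = set(), []
    while len(finished) < len(held):
        # Find some unfinished process whose remaining need fits in what is free.
        ran = next((p for p in held
                    if p not in finished and need[p] <= available), None)
        if ran is None:
            return False, order          # no process can finish: unsafe state
        available += held[ran]           # the process runs and returns its tapes
        finished.add(ran)
        order.append(ran)
    return True, order

# Snapshot at time t0: 12 tapes total, 3 free.
safe, seq = is_safe(available=3,
                    held={"P1": 5, "P2": 2, "P3": 2},
                    max_need={"P1": 10, "P2": 4, "P3": 9})
print(safe, seq)
```

Running this confirms the state is safe and recovers the sequence <P2, P1, P3> from the slides.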
Cont…
• Banker's Algorithm:
• The banker's algorithm is used to avoid deadlock and allocate resources safely to each process in the computer system.
• Before an allocation is granted, it examines whether the resulting state would still be safe, and allows the allocation only in that case.
• It also helps the operating system to safely share the resources between all the processes.
• The banker's algorithm is so named because it models how a bank decides whether a loan can be safely sanctioned.
• Suppose the number of account holders in a particular bank is 'n', and the total money in the bank is 'T'.
• If an account holder applies for a loan, the bank first checks whether the cash remaining after granting the loan would still be enough to satisfy the needs of its other customers; only then is the loan approved.
• These steps are taken so that if another person applies for a loan or withdraws some amount from the bank, the bank can still manage and operate everything without any restriction in the functionality of the banking system.
Cont…
• There are four data structures used to implement the Banker's algorithm:
• Available
• Max
• Allocation
• Need
• Available
• Available is a one-dimensional array of size 'm', which gives the number of available resources of each kind.
• Available[j] = k indicates that we have 'k' instances of resource type 'Rj'.
• Max
• Max is a two-dimensional array of size 'n*m'. The Max data structure is used to define the maximum number of resources that each process may request.
• Max[i, j] = k indicates that 'Pi' can demand or request a maximum of 'k' instances of resource type 'Rj'.
• Allocation
• Allocation is a two-dimensional array of size 'n*m', which is used to define the number of resources of each kind presently assigned to each process.
• Allocation[i, j] = k indicates that currently process 'Pi' is assigned 'k' instances of resource type 'Rj'.
• Need
• Need is a two-dimensional array of size 'n*m'. Need is used to define the remaining resources which are required by each process.
• Need[i, j] = k indicates that for the execution of process 'Pi', 'k' more instances of resource type 'Rj' are presently required.
Cont…
• Example of Banker's Algorithm
• Consider the following snapshot of a system:
• Calculate the content of the need matrix.
• Is the system in a safe state?
• Determine the total amount of resources of each type.
• 1. The content of the need matrix can be calculated by using the formula:
• Need = Max – Allocation
Cont…
2. Now, we check for a safe state.
Safe sequence:
1. For process P0, Need = (3, 2, 1) and
Available = (2, 1, 0)
Need <= Available = False
So, the system will move to the next process.
2. For process P1, Need = (1, 1, 0) and
Available = (2, 1, 0)
Need <= Available = True
Request of P1 is granted.
Available = Available + Allocation
= (2, 1, 0) + (2, 1, 2)
= (4, 2, 2) (New Available)
Cont…
3. For process P2, Need = (5, 0, 1) and
Available = (4, 2, 2)
Need <= Available = False
So, the system will move to the next process.
4. For process P3, Need = (7, 3, 3) and
Available = (4, 2, 2)
Need <= Available = False
So, the system will move to the next process.
5. For process P4, Need = (0, 0, 0) and
Available = (4, 2, 2)
Need <= Available = True
Request of P4 is granted.
Available = Available + Allocation
= (4, 2, 2) + (1, 1, 2)
= (5, 3, 4) (New Available)
6. Now again check process P2, Need = (5, 0, 1) and
Available = (5, 3, 4)
Need <= Available = True
Request of P2 is granted.
Available = Available + Allocation
= (5, 3, 4) + (4, 0, 1)
= (9, 3, 5) (New Available)
7. Now again check process P3, Need = (7, 3, 3) and
Available = (9, 3, 5)
Need <= Available = True
Request of P3 is granted.
Available = Available + Allocation
= (9, 3, 5) + (0, 2, 0) = (9, 5, 5)
8. Now again check process P0, Need = (3, 2, 1) and
Available = (9, 5, 5)
Need <= Available = True
So, the request of P0 is granted.
Safe sequence: <P1, P4, P2, P3, P0>
The system can allocate all the needed resources to each process, so the system is in a safe state.
3. The total amount of resources = sum of the columns of Allocation + Available
= [8 5 7] + [2 1 0] = [10 6 7]
Cont…
• The Banker's algorithm comprises two algorithms:
• Safety algorithm
• Resource request algorithm
• Safety algorithm
• The safety algorithm is used to check whether the system is in a safe state or not.
• The safety algorithm contains the following steps:
• 1. Let Work & Finish be vectors of length m & n, respectively. Initialize Work = Available & Finish[i] = false for i = 0, 1, ..., n-1.
• 2. Find an index i such that both
• Finish[i] == false
• Needi <= Work
• If no such i exists, go to step 4.
• 3. Work = Work + Allocationi;
• Finish[i] = true;
• Go to step 2.
• 4. If Finish[i] == true for all i, then the system is in a safe state.
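The four steps above can be sketched directly in code. The snapshot values are taken from the worked example (P0's allocation of (1, 1, 2) is inferred from the total allocation [8 5 7] minus the other rows); note that this loop scans processes in index order, so it can report a different, equally valid safe sequence than the one derived on the slides.

```python
def safety(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process indices, or None."""
    m, n = len(available), len(allocation)
    work = list(available)                  # step 1: Work = Available
    finish = [False] * n                    # step 1: Finish[i] = false
    sequence = []
    progressed = True
    while progressed:                       # steps 2-3: repeat until no process qualifies
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # Pi can finish and return its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None    # step 4

# Snapshot from the worked example (Available = (2, 1, 0)).
allocation = [[1, 1, 2], [2, 1, 2], [4, 0, 1], [0, 2, 0], [1, 1, 2]]
need       = [[3, 2, 1], [1, 1, 0], [5, 0, 1], [7, 3, 3], [0, 0, 0]]
order = safety([2, 1, 0], allocation, need)
print(order)
```

The returned sequence is non-None, confirming the snapshot is in a safe state just as the manual check found.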
Cont…
• Resource request algorithm:
• Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
• 1. If Requesti <= Needi, go to step 2; else raise an error condition, since the process has exceeded its maximum claim.
• 2. If Requesti <= Available, go to step 3; else Pi must wait, as the resources are not available.
• 3. Now assume that the resources are assigned to process Pi and perform the following steps:
• Available = Available - Requesti;
• Allocationi = Allocationi + Requesti;
• Needi = Needi - Requesti;
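The three steps above translate to a short routine that mutates the Banker's data structures in place. This is a sketch: after the tentative allocation in step 3, the caller is expected to run the safety algorithm and roll the changes back if the new state is unsafe. The single-process snapshot uses P1's Allocation/Need rows from the worked example.

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request algorithm for process `pid` (lists mutated in place)."""
    m = len(available)
    if any(request[j] > need[pid][j] for j in range(m)):      # step 1
        raise ValueError("process has exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):      # step 2
        return False                                          # Pi must wait
    for j in range(m):                                        # step 3: tentative allocation
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    return True

avail = [2, 1, 0]
alloc = [[2, 1, 2]]
need  = [[1, 1, 0]]
granted = request_resources(0, [1, 1, 0], avail, alloc, need)
print(granted, avail, alloc, need)
```

After the grant, the process's Need row reaches zero, meaning it can run to completion and return everything it holds.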
4.3.4 Deadlock Detection
• Deadlock detection detects a deadlock by checking whether all conditions necessary for deadlock hold simultaneously.
• A deadlock detection algorithm is required in a system which employs neither a deadlock prevention nor a deadlock avoidance algorithm.
• In this case, the system may enter a deadlock, and hence an algorithm for detection of deadlock is necessary.
• Moreover, an algorithm for recovering from the deadlock is also necessary.
• We consider two cases: a single instance of each resource type and multiple instances of each resource type.
• Single instance of each resource type:
• If all resources have only a single instance, then we can define a deadlock detection algorithm that uses a variant of the resource allocation graph called the wait-for graph.
• We obtain this graph from the resource allocation graph by removing the resource nodes and collapsing the associated edges.
• An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi needs.
• An edge Pi -> Pj exists in a wait-for graph if and only if the corresponding resource allocation graph contains the two edges Pi -> Rq & Rq -> Pj for some resource Rq.
Cont…
• As before, a deadlock exists in the system if and only if the wait-for graph contains a cycle.
• To detect deadlocks, the system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for a cycle in the graph.
• An algorithm to detect a cycle in the graph requires on the order of n^2 operations, where n is the number of vertices in the graph.
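The cycle search described above is an ordinary graph traversal. A minimal sketch using depth-first search with three node colours (the process names are hypothetical; the wait-for graph is given as an adjacency dict):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it waits on]}."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited, on current DFS path, fully explored
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:     # back edge: a cycle, hence deadlock
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a circular wait.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```

The DFS visits each vertex and edge once, consistent with the polynomial cost noted above.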
Cont…
• Several Instances of a Resource Type:
• The wait-for graph scheme is not applicable to a resource-allocation system with multiple instances of each resource type.
• Here we use a deadlock detection algorithm that is applicable to such a system.
• The algorithm employs several time-varying data structures that are similar to those used in the Banker's algorithm:
• Available: A vector of length m indicates the number of available resources of each type.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.
• Request: An n x m matrix indicates the current request of each process. If Request[i][j] equals k, then process Pi is requesting k more instances of resource type Rj.
• Algorithm:
• 1. Let Work & Finish be vectors of length m & n, respectively. Initialize Work = Available. For i = 0, 1, ..., n-1, if Allocationi != 0, then Finish[i] = false; otherwise, Finish[i] = true.
• 2. Find an index i such that both
• Finish[i] == false
• Requesti <= Work
• If no such i exists, go to step 4.
• 3. Work = Work + Allocationi;
• Finish[i] = true;
• Go to step 2.
• 4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then process Pi is deadlocked.
• The above algorithm requires on the order of m x n^2 operations to detect whether the system is in a deadlocked state.
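The four steps of the multi-instance detection algorithm can be sketched as follows. The tiny snapshot at the bottom is an assumption for illustration: one resource type with both instances held, where P0 and P1 each request the instance the other holds.

```python
def detect_deadlock(available, allocation, request):
    """Multi-instance deadlock detection; returns the list of deadlocked process indices."""
    m, n = len(available), len(allocation)
    work = list(available)                              # step 1: Work = Available
    # Step 1: a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                                   # steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # optimistically assume Pi finishes and releases all
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    # Step 4: any process still unfinished is deadlocked.
    return [i for i in range(n) if not finish[i]]

# P0 and P1 each hold one instance and request one more; P2 holds nothing.
print(detect_deadlock([0], [[1], [1], [0]], [[1], [1], [0]]))
```

With no free instance, neither P0's nor P1's request can ever be satisfied, so both are reported as deadlocked, while P2 is not.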
4.3.5 Deadlock Recovery
• Once the system has detected a deadlock, some method is needed to recover the system from the deadlock and continue with the processing.
• There are two ways or methods to recover from deadlock: manually and automatically.
• Manual:
• In the manual deadlock recovery method, the system can report to the operator that a deadlock has occurred, who in turn can take suitable action.
• Automatic:
• In the automatic deadlock recovery method, either some of the processes can be terminated to break the circular wait or some of the resources can be preempted from processes.
• Process termination:
• Abort all deadlocked processes:
• This method clearly breaks the deadlock cycle, but at great expense; the deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably will have to be recomputed later.
Cont…
• Abort one process at a time until the deadlock cycle is eliminated:
• This method incurs considerable overhead, since after each process is aborted, the deadlock detection algorithm must be invoked to determine whether any processes are still deadlocked.
• Many factors are used to decide which process is to be terminated next:
• What is the priority of the process?
• How long has the process computed, and how much longer will the process compute before completing its designated task?
• How many and what type of resources has the process used?
• How many more resources does the process need in order to complete?
• How many processes will need to be terminated?
• Is the process interactive or batch?
Cont…
• Resource Preemption:
• Another method to recover the system from the state of deadlock is to preempt resources from processes one by one & allocate them to other processes until the circular wait condition is eliminated.
• Select the process for preemption:
• The choice of resources & processes in the system must be such that they incur minimum cost to the system.
• All the factors must be considered while making the choice (priority, number of resources held, type of resources & number of resources required to complete).
• Rollback of the process:
• After preempting the resources, the corresponding process must be rolled back properly so that it does not leave the system in an inconsistent state.
• In case no such safe state can be achieved, the process must be totally rolled back.
• However, partial rollback is always preferred over total rollback.
• Prevent Starvation:
• In case the selection of a process is based on the cost factor, it is quite possible that the same process is selected repeatedly for rollback, leading to the situation of starvation.
• This starvation can be avoided by including the number of rollbacks of a given process in the cost factor.