Unit 4 OSY

Uploaded by yhoc610

Unit-IV

CPU Scheduling & Algorithms


Marks:14
4.1 Scheduling Types
Scheduling Objectives
• In a single-processor system, only one process can
run at a time. Others must wait until the CPU is free
and can be rescheduled.
• The objective of multiprogramming is to have some
process running at all times, to maximize CPU
utilization.
• A process is executed until it must wait for the
completion of some I/O request. In a single processor
system, the CPU then just sits idle. All this waiting
time is wasted; no useful work is accomplished.
Scheduling Objectives cont…
• With multiprogramming, we try to use this time
productively. Several processes are kept in memory
at one time.
• When one process has to wait, the operating system takes
the CPU away from that process and gives it to another
process. This pattern continues: every time one process
has to wait, another process can take over use of the CPU.
• Almost all computer resources are scheduled before use.
The CPU is one of the primary computer resources. Thus,
its scheduling is central to operating-system design.
 CPU–I/O Burst Cycle
• Process execution consists of a cycle of CPU execution and
I/O wait.
• Process execution begins with a CPU burst.
• Each cycle consists of a CPU burst followed by an I/O burst.
• Process execution ends with a final CPU burst.
• CPU-bound processes have longer CPU bursts than I/O-bound
processes.
• I/O-bound process: a process that spends more time performing
I/O operations than computation.
• CPU-bound process: a process that spends more time in
computation, i.e. with the CPU, and very rarely uses the I/O
devices.
CPU–I/O Burst Cycle
 CPU Scheduler
• Whenever the CPU becomes idle, the
operating system must select one of the
processes in the ready queue to be executed.
The selection process is carried out by the
short-term scheduler, or CPU scheduler.
• Short-term scheduler selects from among the
processes in ready queue, and allocates the
CPU to one of them
 CPU Scheduler cont…
• CPU scheduling decisions may take place
when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under circumstances 1 and 4 is non-preemptive.
• All other scheduling is preemptive.
 CPU Scheduler cont…
• Non-preemptive Scheduling:
– once the CPU has been allocated to a process, the
process keeps the CPU until it releases the CPU either
by terminating or by switching to the waiting state.
– Run each process to completion
– Not efficient for I/O bound applications
– It does not require special hardware.
– Easy to understand and implement
– Examples: Microsoft Windows 3.x; versions of Mac OS prior to Mac OS X.
 CPU Scheduler cont…
• Preemptive Scheduling:
– the CPU is allocated to the processes for a specific time
period
– Preemptive scheduling is used when a process switches
from running state to ready state or from the waiting
state to ready state.
– Scheduler suspends a running process
– A running process can be preempted to run another process,
if the priority of the newly arrived process is higher than
that of the running process, or if the burst time of the newly
arrived process is shorter than the remaining burst of the
running process.
– It requires special hardware (e.g. a timer).
– Examples: Microsoft Windows 95 and later, Mac OS X.
 Dispatcher
• The dispatcher is the module that gives control of
the CPU to the process selected by the short-
term scheduler.
• This function involves the following:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program
to restart that program
• The dispatcher should be as fast as possible.
• The time taken by dispatcher to stop one
process and start another running is known as
the dispatch latency.
 Preemptive vs Non-preemptive
Sr.no   Preemptive scheduling                     Non-preemptive scheduling
1       One process can be preempted to           The process keeps the CPU until
        run another process                       it releases the CPU
2       Reduced throughput                        Increased throughput
3       Suited for online / real-time             Not suited for real-time systems
        processing
4       Requires special hardware                 Does not require special hardware
        (e.g. timer)
5       More overhead in context switching        Less overhead in context switching
6       Process switches from running to          Process switches from running to
        ready state or waiting to ready state     waiting or terminates
7       Windows 95 and subsequent versions        Windows 3.x
        of Windows, Mac OS X
8       Scheduling algorithms:                    Scheduling algorithms:
        1. Preemptive SJF                         1. FCFS
        2. Preemptive Priority                    2. Non-preemptive SJF
        3. Round Robin                            3. Non-preemptive Priority
 Scheduling Criterion
The criteria include the following:
• CPU utilization:
 In multiprogramming the main objective is to keep CPU as busy as
possible.
 On a real system, CPU usage should range from 40% (lightly loaded)
to 90% (heavily loaded).
 A scheduler should maximize the CPU utilization.
• Throughput:
 It is a measure of work done in the system. When CPU is busy in
executing processes, then work is being done in the system.
 The number of processes that are completed per time unit is
called the throughput.
 For long processes, throughput can be one process per unit time
whereas for short processes it may be 10 processes per unit time.
 A scheduler should maximize the throughput.
 Scheduling Criterion cont…
• Turnaround time:
 Amount of time required for a particular process to complete.
 The time interval from the time of submission of a process to the
time of completion of that process is called as turnaround time.
 It is the sum of the time spent waiting in ready queue, doing I/O,
and executing on the CPU.
 Turn Around Time = Completion Time – Arrival Time
 A scheduler should minimize the turnaround time.
• Waiting time:
 Waiting time is the sum of time periods spent in the ready queue.
 The CPU-scheduling algorithm does not affect the amount of time
during which a process executes or does I/O.
 It affects only the amount of time that a process spends waiting in
the ready queue.
 A scheduler should minimize the waiting time.
 Waiting Time = Turn Around Time – Burst Time
 Scheduling Criterion cont…
• Response time:
 The time period from the submission of a request until the first
response is produced is called as response time.
 It is the time when system starts responding not the completion of a
process.
 In the system, a process can produce some output fairly early and
can continue computing new results while previous results are being
output to the user.
 A scheduler should minimize the response time.

• It is desirable to maximize CPU utilization and throughput
and to minimize turnaround time, waiting time, and
response time.
4.2 Types of Scheduling Algorithms
• CPU scheduling selects a process to be allocated to CPU for
execution from the list of processes in the ready queue.
• There are many different CPU-scheduling algorithms, such as:
1. First Come First Served(FCFS)
2. Shortest Job First (SJF)
3. Shortest Remaining Time(SRTN)
4. Round Robin(RR)
5. Priority Scheduling
6. Multi level Queue Scheduling
1.First Come First Served Scheduling (FCFS)
• FCFS algorithm is the simplest scheduling algorithm.
• It maintains a FIFO (first-in, first-out) ready queue.
• New processes go to the end of the ready queue.
• It is non-preemptive
• The process that requests the CPU first is allocated the CPU first.
• The job/process which comes first in the ready queue will get the CPU
first.
• It is not optimal because the average waiting time is higher.
• Advantages:
1. Simple and Easy to implement
2. First come, First Served
• Disadvantages:
1. A process runs to completion once it starts.
2. Short processes may wait a long time behind long ones (the convoy effect).
3. Poor in performance, since the average waiting time is high.
4. Not suitable for interactive jobs / time-sharing environments.
First Come First Served Scheduling (FCFS) cont…
• Example:
• Suppose processes arrive in the order P1, P2, P3, with burst
times in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

• Gantt chart using FCFS is as follows


P1 P2 P3
0 24 27 30
• Waiting time and turnaround time of each process is shown in
the following table.

Process   Waiting Time       Turnaround Time
P1        24 - 24 = 0 ms     24 ms
P2        27 - 3 = 24 ms     27 ms
P3        30 - 3 = 27 ms     30 ms

• Thus, Average Waiting time = (0 + 24 + 27) / 3 = 17 ms.
• Thus, Average Turnaround time = (24 + 27 + 30) / 3 = 27 ms.
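The FCFS computation above can be sketched as a short Python simulation (a minimal sketch, assuming all processes arrive at time 0; the function name is illustrative):

```python
# Minimal FCFS simulation (assumes all processes arrive at time 0).
def fcfs(processes):
    """processes: list of (name, burst). Returns {name: (waiting, turnaround)}."""
    time, result = 0, {}
    for name, burst in processes:       # ready queue in arrival (FIFO) order
        waiting = time                  # time spent waiting before first run
        time += burst                   # non-preemptive: runs to completion
        result[name] = (waiting, time)  # turnaround = completion - arrival (0)
    return result

r = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
avg_wait = sum(w for w, _ in r.values()) / len(r)  # (0 + 24 + 27) / 3 = 17
avg_tat = sum(t for _, t in r.values()) / len(r)   # (24 + 27 + 30) / 3 = 27
```

Running this reproduces the table: each process waits exactly as long as the total burst time of the processes ahead of it.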
2. Shortest Job First Scheduling (SJF)
• The SJF algorithm associates with each process the length of its next CPU burst.
• SJF selects the process that has the shortest burst time.
• When CPU is available, it is assigned to the process that has smallest
next CPU burst.
• Once a process gets the CPU, it releases the CPU only when the burst
time is over.
• If the CPU bursts of two processes are the same, FCFS scheduling is
used to break the tie.
• SJF can be either preemptive or non-preemptive.
• SJF is optimal, because it gives minimum average waiting time.
• Advantages:
1. It is optimal algorithm, gives minimum average waiting time.
2. Appropriate for Batch Jobs.
• Disadvantages:
1. It requires precise knowledge of process run time
2. Starvation possible (Long jobs may never be scheduled )
Shortest Job First Scheduling(SJF) cont…
• Example:
• Suppose processes arrive in the order P1, P2, P3, P4, with burst
times in milliseconds:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3
• Gantt chart using SJF is as follows


P4 P1 P3 P2
0 3 9 16 24

• Waiting time and turnaround time of each process is shown in
the following table.

Process   Waiting Time      Turnaround Time
P1        9 - 6 = 3 ms      9 ms
P2        24 - 8 = 16 ms    24 ms
P3        16 - 7 = 9 ms     16 ms
P4        3 - 3 = 0 ms      3 ms

• Thus, Average Waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms.
• Thus, Average Turnaround time = (9 + 24 + 16 + 3) / 4 = 13 ms.
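The non-preemptive SJF schedule above can be sketched in Python (a minimal sketch, assuming all processes arrive at time 0; ties are broken in FCFS order by the stability of Python's sort):

```python
# Minimal non-preemptive SJF simulation (all processes arrive at time 0).
def sjf(processes):
    """processes: list of (name, burst). Returns {name: (waiting, turnaround)}."""
    time, result = 0, {}
    # Sort by burst time; Python's stable sort keeps FCFS order on ties.
    for name, burst in sorted(processes, key=lambda p: p[1]):
        result[name] = (time, time + burst)  # (waiting, turnaround)
        time += burst
    return result

r = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
avg_wait = sum(w for w, _ in r.values()) / len(r)  # (3 + 16 + 9 + 0) / 4 = 7
```

The sorted execution order P4, P1, P3, P2 matches the Gantt chart above.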


3. Shortest Remaining Time Scheduling (SRTN)
• A preemptive SJF algorithm will preempt the currently executing
process.
• Preemptive SJF scheduling is also known as shortest-remaining-
time-first (SRTF) or shortest-remaining-time-next (SRTN).
• While a process is running, if a process with a smaller burst time
arrives, it preempts the currently running process.
• Whenever a new process arrives, check whether its burst time is
less than the remaining burst of the currently running process.
• The process having the smallest amount of remaining time is
selected first to execute.
• It is useful in time sharing environment.
• It must keep track of the elapsed time of the running process
Shortest Remaining Time Scheduling(SRTN) Cont..
• Advantages:
1. Short processes are served faster than under non-preemptive SJF.
• Disadvantages:
1. context switching is done a lot more times
2. It requires precise knowledge of process run time
3. Starvation possible (Long jobs may never be scheduled )
Shortest Remaining Time Scheduling(SRTN) Cont..
• Example:
• Suppose processes arrive in the order P1, P2, P3, P4, with arrival
and burst times in milliseconds:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Gantt chart using SRTN is as follows:

P1 | P2 | P4 | P1 | P3
0    1    5    10   17   26
• At time 0, P1 is the only process in the ready queue, so P1 starts first.
• At time 1, P2 arrives. Its burst time is 4 ms, which is smaller than the
remaining time of P1 (7 ms). Hence P1 is preempted and P2 starts its execution.
• At time 2, P3 arrives. Its burst time is 9 ms, which is larger than the
remaining time of P2 (3 ms). So P2 is not preempted and continues its execution.
• At time 3, P4 arrives. Its burst time is 5 ms, which is larger than the
remaining time of P2 (2 ms). So P2 is not preempted and continues its execution.
• After P2 completes, the remaining processes are scheduled using the SJF rule.
Shortest Remaining Time Scheduling(SRTN) Cont..
• Waiting time and turnaround time of each process is shown in
the following table.

Process   Waiting Time                Turnaround Time
          (turnaround - burst)        (completion - arrival)
P1        17 - 8 = 9 ms               17 - 0 = 17 ms
P2        4 - 4 = 0 ms                5 - 1 = 4 ms
P3        24 - 9 = 15 ms              26 - 2 = 24 ms
P4        7 - 5 = 2 ms                10 - 3 = 7 ms

• Thus, Average Waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms.
• Thus, Average Turnaround time = (17 + 4 + 24 + 7) / 4 = 13 ms.
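The SRTN trace above can be reproduced tick by tick in Python (a minimal sketch; the function name is illustrative, and ties go to the earlier-listed process because `min` returns the first minimum):

```python
# Minimal SRTN (preemptive SJF) simulation, one time unit per step.
def srtn(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)}."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining = dict(burst)
    time, result = 0, {}
    while len(result) < len(processes):
        ready = [n for n in remaining
                 if arrival[n] <= time and remaining[n] > 0]
        if not ready:                # CPU idle until the next arrival
            time += 1
            continue
        # Run the ready process with the smallest remaining time for 1 ms.
        n = min(ready, key=lambda p: remaining[p])
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            tat = time - arrival[n]
            result[n] = (tat - burst[n], tat)
    return result

r = srtn([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
# Matches the table: P1 -> (9, 17), P2 -> (0, 4), P3 -> (15, 24), P4 -> (2, 7)
```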
4. Round Robin Scheduling(RR)
• Round robin scheduling is a preemptive version of FCFS
scheduling.
• Processes are dispatched in a first-in-first-out sequence but each
process is allowed to run for only a limited amount of time.
• The CPU is switched to the next process after a fixed interval of
time called the time quantum (or time slice), usually 10–100
milliseconds.
• When time quantum expires, the process is preempted and added
to the end of the ready queue.
• The ready queue is maintained as a circular queue.
• The CPU is assigned to all the processes one by one, on a first
come first served basis, for a fixed time period. Each process
executes for the specified time period, and the CPU is given to
the next process when the time quantum expires.
• Used when all processes are equally important.
Round Robin Scheduling(RR)
• Advantage:
1. All the jobs get a fair and efficient allocation of CPU
2. It is effective in time sharing environment.
3. Better response time.
4. No starvation
• Disadvantage:
1. context switching overhead
2. Its performance heavily depends on time quantum.
Round Robin Scheduling(RR)
• Example: Suppose all processes arrive at time 0.
Use time quantum = 4 ms.

Process   Burst Time
P1        24
P2        3
P3        3

• Gantt chart using RR is as follows


P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
• Waiting time and turnaround time of each process is shown in
the following table.

Process   Waiting Time       Turnaround Time
P1        30 - 24 = 6 ms     30 ms
P2        7 - 3 = 4 ms       7 ms
P3        10 - 3 = 7 ms      10 ms

• Thus, Average Waiting time = (6 + 4 + 7) / 3 = 5.66 ms.
• Thus, Average Turnaround time = (30 + 7 + 10) / 3 = 15.66 ms.
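The round-robin schedule above can be sketched with a circular ready queue in Python (a minimal sketch, assuming all processes arrive at time 0 and quantum = 4 ms):

```python
from collections import deque

# Minimal Round Robin simulation (all processes arrive at time 0).
def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns {name: (waiting, turnaround)}."""
    queue = deque(processes)            # circular FIFO ready queue
    burst = dict(processes)
    time, result = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run one quantum, or less if finishing
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            tat = time                  # arrival = 0, so turnaround = completion
            result[name] = (tat - burst[name], tat)
    return result

r = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
# Matches the table: P1 -> (6, 30), P2 -> (4, 7), P3 -> (7, 10)
```

A `deque` gives the O(1) dequeue/enqueue behavior that the circular ready queue of RR describes.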
5. Priority Scheduling
• A priority number (integer) is associated with each process.
• Whenever two processes compete for a processor, the process
with the higher priority wins.
• Equal-Priority processes are scheduled in FCFS order or Round
Robin.
• The lower the number, the higher the priority (the convention used here).
• Priority scheduling can be either preemptive or non-preemptive.
• Priorities can be assigned either internally or externally.
• Used when all processes are not equally important.
• Windows XP uses a priority-based preemptive scheduling
algorithm.
• Types of Priority Scheduling
1. Non-preemptive Priority scheduling :
2. Preemptive Priority scheduling
Priority Scheduling cont…
• Advantages-
1. It considers the priority of the processes and allows the
important processes to run first.
2. Priority scheduling in preemptive mode is best suited for real
time operating system.
• Disadvantage:
1. Starvation occurs. (low priority processes may never execute)
 Non-preemptive Priority scheduling :
• The CPU is allocated to the ready process having the highest
priority.
• That process runs till completion.
• Non-preemptive algorithm will not preempt the currently
executing process. If during the execution of a process, another
process with a higher priority arrives for execution, even then
the currently executing process will not be disturbed
• The queue of jobs is sorted by priority, so that higher priority
process runs first.
 Non-preemptive Priority scheduling Example: Suppose all
processes arrive at time 0.

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

• Gantt chart using Non-preemptive Priority is as follows:


P2 P5 P1 P3 P4
0 1 6 16 18 19
• Waiting time and turnaround time of each process is shown in
the following table.

Process   Waiting Time       Turnaround Time
P1        16 - 10 = 6 ms     16 ms
P2        1 - 1 = 0 ms       1 ms
P3        18 - 2 = 16 ms     18 ms
P4        19 - 1 = 18 ms     19 ms
P5        6 - 5 = 1 ms       6 ms

• Thus, Average Waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms.
• Thus, Average Turnaround time = (16 + 1 + 18 + 19 + 6) / 5 = 12 ms.
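The non-preemptive priority schedule above can be sketched in Python (a minimal sketch, assuming all processes arrive at time 0 and a lower number means higher priority):

```python
# Minimal non-preemptive priority scheduling (all processes arrive at
# time 0; lower number = higher priority).
def priority_np(processes):
    """processes: list of (name, burst, priority).
    Returns {name: (waiting, turnaround)}."""
    time, result = 0, {}
    # Sort by priority number; each selected process runs to completion.
    for name, burst, _ in sorted(processes, key=lambda p: p[2]):
        result[name] = (time, time + burst)   # (waiting, turnaround)
        time += burst
    return result

r = priority_np([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                 ("P4", 1, 5), ("P5", 5, 2)])
avg_wait = sum(w for w, _ in r.values()) / len(r)  # (6+0+16+18+1) / 5 = 8.2
```

The sorted execution order P2, P5, P1, P3, P4 matches the Gantt chart above.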
 Preemptive Priority scheduling
• At the time of arrival of a process in the ready queue, its
Priority is compared with the priority of the other processes
present in the ready queue as well as with the one which is
being executed by the CPU at that point of time.
• A process is preempted whenever a higher priority process is
available in the ready list.
• One with the highest priority among all the available processes
will be given the CPU next.
 Preemptive Priority scheduling Example:

Process   Arrival   Burst Time   Priority
P1        0         10           3
P2        1         1            1
P3        2         2            4
P4        3         1            5
P5        4         5            2

• Gantt chart using Preemptive Priority is as follows:

P1 | P2 | P1 | P5 | P1 | P3 | P4
0    1    2    4    9    16   18   19

• Waiting time and turnaround time of each process is shown in
the following table.

Process   Waiting Time       Turnaround Time
P1        16 - 10 = 6 ms     16 - 0 = 16 ms
P2        1 - 1 = 0 ms       2 - 1 = 1 ms
P3        16 - 2 = 14 ms     18 - 2 = 16 ms
P4        16 - 1 = 15 ms     19 - 3 = 16 ms
P5        5 - 5 = 0 ms       9 - 4 = 5 ms

• Thus, Average Waiting time = (6 + 0 + 14 + 15 + 0) / 5 = 7 ms.
• Thus, Average Turnaround time = (16 + 1 + 16 + 16 + 5) / 5 = 10.8 ms.
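The preemptive trace above can be reproduced tick by tick in Python (a minimal sketch; lower number = higher priority, and ties go to the earlier-listed process):

```python
# Minimal preemptive priority simulation, one time unit per step.
def priority_p(processes):
    """processes: list of (name, arrival, burst, priority).
    Returns {name: (waiting, turnaround)}."""
    arrival = {n: a for n, a, b, p in processes}
    burst = {n: b for n, a, b, p in processes}
    prio = {n: p for n, a, b, p in processes}
    remaining = dict(burst)
    time, result = 0, {}
    while len(result) < len(processes):
        ready = [n for n in remaining
                 if arrival[n] <= time and remaining[n] > 0]
        if not ready:
            time += 1
            continue
        # Run the highest-priority ready process (lowest number) for 1 ms;
        # a newly arrived higher-priority process wins on the next tick.
        n = min(ready, key=lambda p: prio[p])
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            tat = time - arrival[n]
            result[n] = (tat - burst[n], tat)
    return result

r = priority_p([("P1", 0, 10, 3), ("P2", 1, 1, 1), ("P3", 2, 2, 4),
                ("P4", 3, 1, 5), ("P5", 4, 5, 2)])
# Matches the table: P1 -> (6, 16), P2 -> (0, 1), P5 -> (0, 5)
```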
6. Multilevel Queue Scheduling (MLQ)
• A multi-level queue scheduling algorithm partitions the
ready queue into several separate queues.
• This allows us to categorize and separate system
processes, interactive processes, low-priority interactive
processes, and background non-interactive processes.
• Processes are permanently assigned to one queue. Jobs
cannot switch from queue to queue.
• Each queue has its own scheduling algorithm.
• For example, The foreground queue might be scheduled
by the Round Robin algorithm, while the background
queue is scheduled by an FCFS algorithm.
• Scheduling must be done between the queues. Commonly
used algorithm is Fixed priority preemptive scheduling
Multilevel Queue Scheduling (MLQ) cont…
A multilevel queue scheduling algorithm with five queues, listed
below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Multilevel Queue Scheduling (MLQ) cont…
• Advantage : Low scheduling overhead
• Disadvantage:
1. It is inflexible (Jobs cannot switch from queue to
queue)
2. Starvation possible.
4.3 Deadlock
 Introduction:
• In a multiprogramming environment, several processes
may compete for a finite number of resources.
• A process requests resources; if the resources are not
available at that time, the process enters a waiting state.
• Sometimes, a waiting process is never again able to
change state, because the resources it has requested are
held by other waiting processes. This situation is called a
deadlock.
System Model
• System consists of resources
• Examples of Resource Types
– CPU cycles, files, and I/O devices (such as printers and DVD
drives)
• Resource may have its ‘N’ number of instances
– If a system has 5 printers then resource type printer has five
instances
• Each process utilizes a resource as follows:
• Request : The process requests the resource.
• Use : The process can operate on the resource.
• Release: The process releases the resource.
Deadlock
• Definition: A deadlock consists of a set of blocked processes,
each holding a resource and waiting to acquire a resource held
by another process in the set.

• Example: Process 1 holds non-shareable resource A, say, a tape
drive, and Process 2 holds non-shareable resource B, say, a printer.
• Now, Process 1 needs resource B (the printer) to proceed, and
Process 2 needs resource A (the tape drive) to proceed.
• These are the only two processes in the system. Each blocks the
other, and all useful work in the system stops.
Necessary Conditions for deadlock
• For a deadlock to occur, each of the following four
conditions must hold.
1. Mutual Exclusion – At least one resource must be held in a
non-sharable mode. That is Only one process at a time can
use a resource. If any other process requests this resource,
then that process must wait for the resource to be released.
2. Hold and Wait – A process must be holding a resource and
waiting for another.
3. No preemption – Resource cannot be preempted. That is a
resource can be released only after that process has
completed its task.
4. Circular Wait - A set of processes {P0, P1, P2, …, PN} must exist
such that P0 is waiting for a resource held by P1; P1 is waiting
for a resource held by P2, and so on.
Deadlock Handling
Method for Handling Deadlocks :
• There are three different methods for dealing
with the deadlock problem:
1. Deadlock Prevention & Avoidance
2. Deadlock Detection & Recovery
3. Deadlock Ignorance
Deadlock Handling cont…
1. Deadlock Prevention & Avoidance :
– To ensure that deadlocks never occur, the system can use
either a deadlock prevention or a deadlock-avoidance
scheme.
– Deadlock prevention provides a set of methods to ensure
that at least one of the necessary conditions cannot hold.
– Deadlock avoidance, requires that additional information
concerning which resources a process will request and use
during its lifetime is given to the operating system in
advance.
With this additional knowledge, OS can decide whether the
current request can be satisfied or must be delayed.
System must consider the resources currently available, the
resources currently allocated to each process, and the future
requests and releases of each process.
Deadlock Handling cont…
2. Deadlock Detection & Recovery:
– If a system does not employ either a deadlock-prevention or a
deadlock avoidance algorithm, then a deadlock situation may
arise.
– In such case, the system can provide an algorithm to detect
deadlock has occurred and an algorithm to recover from the
deadlock.
3. Deadlock Ignorance:
– If a system does not provide a mechanism for deadlock
detection and recovery.
– This strategy involves ignoring the concept of deadlock and
assuming as if it does not exist.
– This strategy helps to avoid the extra overhead of handling
deadlock.
– Windows and Linux use this strategy and it is the most widely
used method.
Deadlock Prevention
• We can Prevent deadlocks by limiting how requests can
be made.
• This ensures that at least one of the four conditions
cannot hold. So we can prevent the occurrence of a
deadlock.

• Deadlock prevention conditions:-


1. Preventing Mutual exclusion condition
2. Preventing Hold and wait condition
3. Preventing No preemption condition
4. Preventing Circular wait condition
Deadlock Prevention cont..
1. Elimination of mutual exclusion :
• A mutual exclusion means that non-shareable resources
cannot be accessed simultaneously by processes.
• Sharable resources do not require mutually exclusive access
and thus cannot be involved in a deadlock.
• Read-only files are a good example of a sharable resource.
• For Example: read operation on a file can be done
simultaneously by multiple processes, but write operation
cannot. Write operation requires sequential access, so, some
processes have to wait while another process is doing a write
operation.
• However, it is not possible to prevent deadlocks by denying the
mutual-exclusion condition, because some resources are inherently
non-shareable and must be accessed exclusively (e.g. a tape drive
or printer).
Deadlock Prevention cont…
2. Elimination of hold and wait condition:
• To ensure that the hold-and-wait condition never occurs in the system, we
must guarantee that, whenever a process requests a resource, it does not
hold any other resources.
• There are two ways to eliminate hold and wait:-
a) By eliminating wait:
• The process specifies the resources it requires in advance so that it does
not have to wait for allocation after execution starts.
• For Example: Process1 declares in advance that it requires both
Resource1 and Resource2.
• This can be wasteful of system resources
b) By eliminating hold:
• The process has to release all resources it is currently holding before
making a new request. And then re-acquire the released resources along
with the new ones.
• For Example: Process1 has to release Resource1 before making request
for Resource2.
• This can be a problem if a process has partially completed an operation
using a resource and then fails to get it re-allocated after releasing it.
Deadlock Prevention cont…
• Both these methods have two main disadvantages.
1. resource utilization may be low
2. starvation is possible
Deadlock Prevention cont…
3. Elimination of No Preemption:
• To ensure that this condition does not hold, we can use
the following protocol.
– If a process is holding some resources and requests
another resource that cannot be immediately
allocated to it then all resources the process is
currently holding are preempted.
– The preempted resources are added to the list of
resources for which the process is waiting.
– The process will only be restarted when it can regain
its old resources, as well as the new ones that it is
requesting.
Deadlock Prevention cont…
4. Elimination of Circular wait :
• If a circular wait condition is prevented, the problem of
the deadlock can be prevented too.
• One way to ensure that this condition never holds is to
impose a total ordering of all resource types and to
require that each process requests resources in an
increasing order of enumeration.
• Consider that all resources are numbered; for example, F(tape drive) = 0, F(printer) = 1, F(plotter) = 2.
Deadlock Prevention cont…
1. Each process can request resources only in an increasing
order.
– A process can initially request any number of instances of a
resource type —say, Ri
– After that, the process can request instances of resource type
Rj if and only if F(Rj ) > F (Ri ).
– Ex: A process may request first tape drive and then a printer
(order: 0, 1), but it may not request first a plotter and then a
printer (order: 2, 1).
2. Alternatively, require that a process requesting an instance of
resource type Rj must first have released any resources Ri such
that F(Ri ) ≥ F(Rj ).
• If either of these two protocols is used, then the circular-wait
condition cannot hold.
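The ordered-request protocol can be sketched as a small Python checker (an illustrative sketch; the numbering F and the resource names follow the example above and are not a real API):

```python
# Sketch of circular-wait prevention by total resource ordering.
# F assigns each resource type a number; a process may only request
# resources in strictly increasing order of F.
F = {"tape drive": 0, "printer": 1, "plotter": 2}  # illustrative ordering

def request_sequence_ok(requests):
    """requests: resource names in the order a process asks for them.
    Valid only if each request has a strictly higher number than the last."""
    numbers = [F[r] for r in requests]
    return all(a < b for a, b in zip(numbers, numbers[1:]))

ok = request_sequence_ok(["tape drive", "printer"])  # order 0, 1 -> allowed
bad = request_sequence_ok(["plotter", "printer"])    # order 2, 1 -> rejected
```

Because every process acquires resources in the same global order, no cycle of waits can form.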
Deadlock Avoidance
• Avoiding deadlocks requires additional information about
how resources are to be requested.
• With this knowledge of the complete sequence of
requests and releases for each process, the system can
decide for each request whether or not the process
should wait in order to avoid a possible future deadlock.
• The simplest and most useful model requires that each
process declare the maximum number of resources of
each type that it may need.
• A deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that a circular-wait
condition can never exist.
• The resource-allocation state is defined by the number of
available and allocated resources and the maximum demands of
the processes.
Deadlock Avoidance cont…
• Safe State:
– When a process requests an available resource, system must
decide if immediate allocation leaves the system in a safe state.
– A system is in safe state if the system can allocate all resources
requested by all processes (up to their stated maximums) without
entering a deadlock state.
– A state is safe if there exists a safe sequence of processes { P0,
P1, P2, ..., PN } such that for each Pi , the resources that Pi can
still request can be satisfied by currently available resources +
resources held by all the Pj , with j < i.
– That is:
1. If Pi resource needs are not immediately available, then Pi can
wait until all Pj have finished.
2. When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate
3. When Pi terminates, Pi +1 can obtain its needed resources, and
so on
Deadlock Avoidance cont…
– If a safe sequence does not exist, then the system is
in an unsafe state, which may lead to deadlock.
– If a system is in safe state , no deadlocks
– If a system is in unsafe state , possibility of deadlock
– Avoidance : ensure that a system will never enter an
unsafe state.
Deadlock Avoidance Algorithms
 Two algorithms:
 Resource Allocation Graph
• Used when there is single instances of a resource type
• efficient
 Bankers Algorithm
• Used when there are multiple instances of a resource type
• Less efficient
Deadlock Avoidance Algorithms
 Resource Allocation Graph Algorithm
– Deadlock avoidance algorithm.
– If there are no cycles in the resource-allocation graph,
then there are no deadlocks.
– If there are cycles, there may be a deadlock.
– A claim edge denotes that a request may be made in the
future and is represented as a dashed line.
– An edge from a process to a resource is a request edge.
– An edge from a resource to a process is an assignment edge.
– Resources must be claimed a priori in the system. Based on
claim edges we can see if there is a chance of a cycle, and a
request is granted only if the system will remain in a safe state.
Deadlock Avoidance Algorithms
 Banker's Algorithm
– Deadlock avoidance algorithm
– Used when there are multiple instances of a resource
type.
– When a new process enters the system, it must
declare the maximum number of instances of each
resource type that it may need. This number may not
exceed the total number of resources in the system.
– Allocation of resources is made only, if the allocation
ensures a safe state; else the processes need to wait.
– The Banker’s algorithm can be divided into two parts:
1. Safety algorithm
2. Resource request algorithm
 Data Structures for the Banker's Algorithm:
Let n = number of processes, and m = number of resource types.
• Available: Vector of length m. If available [j] = k, there
are k instances of resource type Rj available
• Max: n x m matrix. If Max [i,j] = k, then process Pi may
request at most k instances of resource type Rj
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is
currently allocated k instances of Rj
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k
more instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j]
 Example of Banker's Algorithm:
• Consider 5 processes P0 through P4 and 3 resource types:
A (10 instances), B (5 instances), and C (7 instances).
• Snapshot of the system at time T0:

          Allocation   Max        Available
          A  B  C      A  B  C    A  B  C
P0        0  1  0      7  5  3    3  3  2
P1        2  0  0      3  2  2
P2        3  0  2      9  0  2
P3        2  1  1      2  2  2
P4        0  0  2      4  3  3

• The matrix Need is defined to be Max − Allocation, and is as follows:

          Need
          A  B  C
P0        7  4  3
P1        1  2  2
P2        6  0  0
P3        0  1  1
P4        4  3  1

• The system is in a safe state, since the sequence
<P1, P3, P4, P2, P0> satisfies the safety criteria.
• Suppose process P1 requests 1 instance of A and 2 instances of C:
Request1 = (1, 0, 2).
• Check that Request1 ≤ Available, i.e. (1, 0, 2) ≤ (3, 3, 2): true.
• Executing the safety algorithm then shows that the sequence
<P1, P3, P4, P0, P2> satisfies the safety requirement, so the
request can be granted.

• A request for (3, 3, 0) by P4 cannot be granted, since those
resources are not available (after granting P1's request,
Available = (2, 3, 0)).
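The safety check used above can be sketched in Python (a minimal sketch of the safety algorithm; the matrices are the snapshot values from the example, and the function name is illustrative):

```python
# Minimal Banker's safety algorithm: find a safe sequence, if one exists.
def safe_sequence(available, max_need, allocation):
    """available: list of m ints; max_need, allocation: n x m matrices.
    Returns a safe sequence of process indices, or None if unsafe."""
    n, m = len(allocation), len(available)
    # Need = Max - Allocation, computed element-wise.
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progress = False
        for i in range(n):
            # Pi can run to completion if Need_i <= Work, component-wise.
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi terminates and releases everything it was allocated.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                progress = True
        if not progress:
            return None        # no process can proceed: the state is unsafe
    return sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
seq = safe_sequence([3, 3, 2], max_need, allocation)
# A safe sequence exists, so the state is safe: P1, P3, P4, P0, P2.
```

The resource-request part of the Banker's algorithm simply pretends to grant a request (subtracting it from Available and adding it to the process's Allocation) and then runs this safety check on the resulting state.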