Deadlock
If deadlocks occur on the average once every five years, but
system crashes due to hardware failures, compiler errors, and
operating system bugs occur once a week, most engineers would
not be willing to pay a large penalty in performance or
convenience to eliminate deadlocks.
Deadlock Prevention
This technique tries to ensure that at least one of the necessary
conditions stated by Coffman et al. is not fulfilled.
Addressing Mutual Exclusion: The mutual-exclusion condition can hold only for non-sharable resources. A sharable resource, such as a read-only file, never requires mutually exclusive access and therefore can never be involved in a deadlock.
However, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.
Addressing Hold and Wait: If we can prevent processes that hold
resources from waiting for more resources, we can eliminate
deadlocks.
Deadlock Prevention
One way to achieve this goal is to require all processes to request all
their resources before starting execution.
If everything is available, the process will be allocated whatever it
needs and can run to completion.
If one or more resources are busy, nothing will be allocated and the process will simply wait.
An immediate problem with this approach is that many processes do
not know how many resources they will need until after they have
started running.
A second drawback is that the resources a process does acquire stay tied up for its entire run, reducing utilization.
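As a rough sketch of this "request everything up front" policy (the resource names and the single allocator lock below are illustrative assumptions, not from the slides), a process could be made to acquire its entire resource set atomically before doing any work:

    import threading

    allocator_lock = threading.Lock()                    # guards the whole resource table
    free_resources = {"scanner": 1, "tape_drive": 2, "plotter": 1}

    def acquire_all_or_nothing(request):
        """Grant the whole request atomically, or grant nothing at all."""
        with allocator_lock:
            if all(free_resources[r] >= n for r, n in request.items()):
                for r, n in request.items():
                    free_resources[r] -= n
                return True
            return False          # the caller holds nothing while it waits

    # A process declares everything it will ever need before starting:
    while not acquire_all_or_nothing({"scanner": 1, "tape_drive": 1}):
        pass                      # busy-waiting shown only for brevity

Because a process either gets everything or nothing, it never holds some resources while waiting for others, so the hold-and-wait condition cannot arise.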
Deadlock Prevention
A slightly different way to break the hold-and-wait condition is to require a
process requesting a resource to first temporarily release all the resources it
currently holds.
Then it tries to get everything it needs all at once.
Addressing no-preemption: If a process is holding some resources and
requests another resource that cannot be immediately allocated to it (that is,
the process must wait), then all resources currently being held are preempted.
In other words, these resources are implicitly released.
The preempted resources are added to the list of resources for which the
process is waiting.
The process will be restarted only when it can regain its old resources, as well
as the new ones that it is requesting.
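A minimal sketch of this preemption rule (the Counter-based bookkeeping and function name are illustrative assumptions, not from the slides):

    from collections import Counter

    def request_or_release_all(free, held, wanted):
        """No-preemption rule: grant 'wanted' if it fits; otherwise release
        (preempt) everything in 'held'. The process must then wait until
        held + wanted can all be granted together."""
        if all(free[r] >= n for r, n in wanted.items()):
            free.subtract(wanted)         # grant the new request immediately
            return held + wanted
        free.update(held)                 # deny: implicitly release old resources
        return Counter()                  # process now waits for held + wanted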
Deadlock Prevention
Alternatively, if a process requests some resources, we first check
whether they are available.
If they are, we allocate them.
If they are not, we check whether they are allocated to some other
process that is waiting for additional resources.
If so, we preempt the desired resources from the waiting process and
allocate them to the requesting process.
If the resources are neither available nor held by a waiting process, the
requesting process must wait.
While it is waiting, some of its resources may be preempted, but only if
another process requests them.
Deadlock Prevention
Addressing Circular Wait: One way to eliminate the circular-wait condition is to impose a global numbering on all resources.
Now the rule is this: processes can request resources whenever they want
to, but all requests must be made in numerical order.
If we go by the following numbering, a process may request first a scanner
and then a tape drive, but it may not request first a plotter and then a
scanner.
1. Imagesetter
2. Scanner
3. Plotter
4. Tape drive
5. CD-ROM drive
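Using the numbering above, the ordering rule can be sketched as a small check performed at request time (the function and its error handling are illustrative assumptions):

    ORDER = {"imagesetter": 1, "scanner": 2, "plotter": 3,
             "tape_drive": 4, "cd_rom_drive": 5}

    def request_in_order(already_held, new_resource):
        """Allow a request only if its number exceeds that of every
        resource the process already holds."""
        highest_held = max((ORDER[r] for r in already_held), default=0)
        if ORDER[new_resource] <= highest_held:
            raise RuntimeError("out-of-order request: risks circular wait")
        already_held.add(new_resource)

    held = set()
    request_in_order(held, "scanner")       # allowed: 2 > nothing held
    request_in_order(held, "tape_drive")    # allowed: 4 > 2
    # request_in_order(held, "plotter")     # rejected: 3 < 4, as the rule requires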
Deadlock Prevention
With this rule, the resource-allocation graph can never have cycles.
Let us see why this is true for the case of two processes.
Suppose process A holds resource i and process B holds resource j. We can get a deadlock only if A requests resource j and B requests resource i.
If i > j, then A is not allowed to request j, because j is lower-numbered than the resource it already holds.
If i < j, then B is not allowed to request i, because i is lower-numbered than the resource it already holds.
Either way, deadlock is impossible.
Deadlock Prevention
With multiple processes, the same logic holds.
At every instant, one of the assigned resources will have the highest number.
The process holding that resource will never ask for a resource already
assigned.
It will either finish, or at worst, request even higher numbered resources, all
of which are available.
Eventually, it will finish and free its resources.
At this point, some other process will hold the highest resource and can
also finish.
In short, there exists a scenario in which all processes finish, so no
deadlock is present.
Deadlock Prevention
Although numerically ordering the resources eliminates the
problem of deadlocks, it may be impossible to find an ordering
that satisfies everyone.
Moreover, under such a rule a perfectly good and available instance of a resource may be unusable simply because the requesting process already holds a higher-numbered resource.
Deadlock Avoidance
Deadlock can be avoided not by imposing rigid rules on processes but by carefully analyzing each resource request to see whether it can be safely granted.
The question arises: is there an algorithm that can always avoid
deadlock by making the right choice all the time?
The answer is a qualified yes, but only if certain information is
available in advance.
We will discuss two deadlock-avoidance algorithms that handle two different situations.
Deadlock Avoidance
A state is safe if the system can allocate resources to each
process (up to its maximum) in some order and still avoid a
deadlock.
More formally, a system is in a safe state only if there exists a safe sequence.
A safe state is not a deadlocked state.
Conversely, a deadlocked state is an unsafe state. Not all unsafe
states are deadlocks, however.
An unsafe state may lead to a deadlock.
Deadlock Avoidance
To illustrate, let us consider a system with 12 magnetic tape drives
and three processes: P0, P1, and P2.
Process P0 requires 10 tape drives, process P1 may need as many
as 4 tape drives, and process P2 may need up to 9 tape drives.
Suppose that, at time t0, process P0 is holding 5 tape drives, process P1 is holding 2 tape drives, and process P2 is holding 2 tape drives. (Thus, there are 3 free tape drives.)

         Maximum Needs    Currently Holds
    P0        10                 5
    P1         4                 2
    P2         9                 2
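One way to convince ourselves that this state is safe is to simulate a candidate ordering of the processes. The short check below is only a sketch; the sequence <P1, P0, P2> is one that the numbers above happen to admit. Each process in turn acquires its remaining drives from the free pool, finishes, and releases everything it holds:

    max_needs = {"P0": 10, "P1": 4, "P2": 9}
    holds     = {"P0": 5,  "P1": 2, "P2": 2}
    free = 12 - sum(holds.values())           # 3 drives are currently free

    for p in ["P1", "P0", "P2"]:              # candidate safe sequence
        remaining = max_needs[p] - holds[p]
        assert remaining <= free, f"{p} cannot obtain its remaining drives"
        free -= remaining                     # p acquires what it still needs ...
        free += max_needs[p]                  # ... finishes, and releases everything
    print("Sequence <P1, P0, P2> is safe; free drives at the end:", free)   # 12

Process P1 can finish with the 3 free drives; after it releases its 4 drives, P0 can obtain its remaining 5; and once P0 releases its 10 drives, P2 can obtain its remaining 7. Hence the state at time t0 is safe.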
Deadlock Avoidance
For a system in which every resource type has only a single instance, avoidance can be based on the resource-allocation graph: a request is granted only if converting the corresponding request edge into an assignment edge does not create a cycle. For example, even though a resource R2 may currently be free, we cannot allocate it to a process P2 if doing so would create a cycle in the graph.
Banker’s Algorithm
The resource-allocation-graph based algorithm is not
applicable to a resource allocation system with multiple
instances of each resource type.
In such situations an algorithm named Banker’s
algorithm is used.
The name was chosen because the algorithm could be
used in a banking system to ensure that the bank never
allocated its available cash in such a way that it could
no longer satisfy the needs of all its customers.
Banker’s Algorithm
When a new process enters the system, it must declare the
maximum number of instances of each resource type that it
may need.
This number may not exceed the total number of resources in
the system.
When a user requests a set of resources, the system must
determine whether the allocation of these resources will leave
the system in a safe state.
If it will, the resources are allocated; otherwise, the process
must wait until some other process releases enough resources.
Banker’s Algorithm
Assuming n processes and m resource types, the following data structures are used to encode the resource-allocation system:
Available: A vector of length m indicates the number of
available resources of each type. If Available[j] equals k,
there are k instances of resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i][j] equals k, then process Pi may request at most k instances of resource type Rj.
Banker’s Algorithm
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of resource type Rj.
Need: An n x m matrix indicates the remaining resource need of each process. If Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note that Need[i][j] equals Max[i][j] - Allocation[i][j].
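A tiny illustrative configuration of these structures, with n = 3 processes and m = 2 resource types, might look like this in Python (the numbers are made up for demonstration, not taken from the slides):

    n, m = 3, 2
    Available  = [3, 2]                        # free instances of R0 and R1
    Max        = [[7, 5], [3, 2], [9, 0]]      # Max[i][j]: most Pi may request of Rj
    Allocation = [[0, 1], [2, 0], [3, 0]]      # currently allocated to each Pi
    Need = [[Max[i][j] - Allocation[i][j] for j in range(m)]
            for i in range(n)]                 # Need = Max - Allocation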
Banker’s Safety Algorithm
1) Let Work and Finish be vectors of length m and n, respectively. Initialize Work =
Available and Finish[i] =false for i = 0, 1, ..., n – 1.
2) Find an i such that both
Finish[i] == false
Needi <= Work
If no such i exists, go to step 4.
3) Work =Work + Allocationi
Finish[i] = true
Go to step 2
4) If Finish[i] == true for all i, then the system is in a safe state, otherwise it is
unsafe.
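The four steps above translate almost line for line into code. The sketch below assumes the list-of-lists representation from the previous example:

    def is_safe(available, allocation, need):
        """Banker's safety check, following steps 1-4 above."""
        n, m = len(allocation), len(available)
        work = list(available)                  # step 1
        finish = [False] * n
        while True:
            for i in range(n):                  # step 2: find a runnable process
                if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                    break
            else:
                break                           # no such i exists: go to step 4
            for j in range(m):                  # step 3: Pi finishes and releases
                work[j] += allocation[i][j]
            finish[i] = True
        return all(finish)                      # step 4: safe iff everyone can finish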
Banker’s Safety Algorithm
This algorithm may require on the order of m x n² operations to determine whether a state is safe.
Banker’s Resource-Request Algorithm
1) If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2) If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are
not available.
3) Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi =Needi – Requesti;
4) If the resulting resource-allocation state is safe, the transaction is completed, and
process Pi is allocated its resources.
5) However, if the new state is unsafe, then Pi must wait for Requesti, and the old resource-
allocation state is restored.
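Steps 1-5 can be sketched as follows, reusing the is_safe() check from the safety-algorithm sketch above; updating the lists in place stands in for "pretending" to allocate:

    def request_resources(i, request, available, allocation, need):
        """Banker's resource-request algorithm for process Pi (sketch).
        Returns True if the request is granted, False if Pi must wait."""
        m = len(available)
        if any(request[j] > need[i][j] for j in range(m)):     # step 1
            raise ValueError("process has exceeded its maximum claim")
        if any(request[j] > available[j] for j in range(m)):   # step 2
            return False                                       # must wait
        for j in range(m):                                     # step 3: pretend to allocate
            available[j]     -= request[j]
            allocation[i][j] += request[j]
            need[i][j]       -= request[j]
        if is_safe(available, allocation, need):               # step 4: grant
            return True
        for j in range(m):                                     # step 5: unsafe, roll back
            available[j]     += request[j]
            allocation[i][j] -= request[j]
            need[i][j]       += request[j]
        return False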
Detection and Recovery
If a system does not employ either a deadlock-prevention or a
deadlock avoidance algorithm then a deadlock situation may
occur. In this environment, the system must provide:
An algorithm that examines the state of the system to determine
whether a deadlock has occurred
An algorithm to recover from the deadlock
Detection and Recovery
If all resources have only a single instance, then we can define a
deadlock detection algorithm that uses a variant of the resource-
allocation graph, called a wait-for graph.
We obtain this graph from the resource-allocation graph by
removing the resource nodes and collapsing the appropriate edges.
More precisely, an edge from Pi to Pj in a wait-for graph implies that
process Pi is waiting for process Pj to release a resource that Pi
needs.
An edge Pi -> Pj exists in a wait-for graph if and only if the
corresponding resource allocation graph contains two edges Pi ->
R and R-> Pj for some resource R
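The collapsing step and the cycle search can be sketched directly from this definition (representing edges as (process, resource) and (resource, process) pairs is an assumption made for illustration):

    def wait_for_edges(request_edges, assignment_edges):
        """Collapse a single-instance resource-allocation graph:
        Pi -> Pj exists iff Pi -> R and R -> Pj for some resource R."""
        holder = {r: pj for (r, pj) in assignment_edges}        # R -> Pj
        return {(pi, holder[r]) for (pi, r) in request_edges if r in holder}

    def has_cycle(edges):
        """Depth-first search for a cycle in the wait-for graph."""
        graph = {}
        for u, v in edges:
            graph.setdefault(u, []).append(v)
        state = {}                                              # node -> "visiting" / "done"
        def visit(u):
            state[u] = "visiting"
            for v in graph.get(u, []):
                if state.get(v) == "visiting" or (v not in state and visit(v)):
                    return True
            state[u] = "done"
            return False
        return any(visit(u) for u in list(graph) if u not in state)

    # P1 waits for R1 held by P2, and P2 waits for R2 held by P1: a deadlock.
    print(has_cycle(wait_for_edges({("P1", "R1"), ("P2", "R2")},
                                   {("R1", "P2"), ("R2", "P1")})))   # True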
Detection and Recovery
As before, a deadlock exists in the system if and only if the wait-
for graph contains a cycle.
To detect deadlocks, the system needs to maintain the wait-for
graph and periodically invoke an algorithm that searches for a
cycle in the graph.
An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the number of vertices in the graph.
When a deadlock is detected, the system recovers by aborting one or more of the deadlocked processes.
Detection and Recovery
The wait-for graph scheme is not applicable to a resource-
allocation system with multiple instances of each resource type.
We turn now to a deadlock detection algorithm that is applicable
to such a system.
The algorithm employs several time-varying data structures that
are similar to those used in the banker's algorithm
Available
Allocation
Request
Detection and Recovery
1) Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available. For i = 0, 1, ..., n-1, if Requesti != 0, then Finish[i] = false; otherwise, Finish[i] = true.
2) Find an index i such that both
a. Finish[i]== false
b. Requesti<= Work
If no such i exists, go to step 4.
3) Work=Work + Allocationi
Finish[i] =true
Go to step 2.
4) If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then process Pi is deadlocked.
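A direct transcription of these steps, again as a sketch over the list-of-lists representation used for the banker's algorithm, returns the set of deadlocked processes:

    def deadlocked_processes(available, allocation, request):
        """Deadlock detection for multiple-instance resources, steps 1-4 above.
        Returns the indices of deadlocked processes (empty list if none)."""
        n, m = len(allocation), len(available)
        work = list(available)                                       # step 1
        finish = [all(request[i][j] == 0 for j in range(m)) for i in range(n)]
        progressed = True
        while progressed:                                            # steps 2-3
            progressed = False
            for i in range(n):
                if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                    for j in range(m):
                        work[j] += allocation[i][j]
                    finish[i] = True
                    progressed = True
        return [i for i in range(n) if not finish[i]]                # step 4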
Detection and Recovery
This algorithm requires on the order of m x n² operations to detect whether the system is in a deadlocked state.
When should we invoke the detection algorithm?
Of course, if the deadlock-detection algorithm is invoked for every resource request, it will incur a considerable overhead in computation time.
A less expensive alternative is simply to invoke the algorithm at less frequent intervals, for example, once per hour or whenever CPU utilization drops below 40 percent.