
3.2 INTRODUCTION TO DEADLOCKS


Deadlock can be defined formally as follows: A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause. Because all the processes are waiting, none of them will ever cause any of the events that could wake up any of the other members of the set, and all the processes continue to wait forever. For this model, we assume that processes have only a single thread and that there are no interrupts possible to wake up a blocked process. The no-interrupts condition is needed to prevent an otherwise deadlocked process from being awakened by, say, an alarm, and then causing events that release other processes in the set.

In most cases, the event that each process is waiting for is the release of some resource currently possessed by another member of the set. In other words, each member of the set of deadlocked processes is waiting for a resource that is owned by a deadlocked process. None of the processes can run, none of them can release any resources, and none of them can be awakened. The number of processes and the number and kind of resources possessed and requested are unimportant. This result holds for any kind of resource, including both hardware and software.
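As a concrete, informal illustration (not taken from the text), the following minimal Python sketch uses two threads standing in for two processes: each thread holds one lock and then blocks waiting for the lock held by the other, so each is waiting for an event (a release) that only the other can cause. The names resource_t, resource_u, process_c, and process_d are invented for the example; the timeout exists only so the demo terminates and reports the deadlock instead of waiting forever, as a real deadlocked process would.

```python
import threading
import time

# Two resources, each usable by one thread at a time (mutual exclusion).
resource_t = threading.Lock()
resource_u = threading.Lock()

def process_c():
    with resource_t:                      # C holds T ...
        time.sleep(0.1)                   # give D time to grab U
        # ... and now waits for U, which D holds.
        if not resource_u.acquire(timeout=2):
            print("C: still waiting for U -- deadlocked")
        else:
            resource_u.release()

def process_d():
    with resource_u:                      # D holds U ...
        time.sleep(0.1)
        # ... and now waits for T, which C holds (circular wait).
        if not resource_t.acquire(timeout=2):
            print("D: still waiting for T -- deadlocked")
        else:
            resource_t.release()

c = threading.Thread(target=process_c)
d = threading.Thread(target=process_d)
c.start(); d.start()
c.join(); d.join()
```

With the timeouts removed, both threads would simply wait forever, which is exactly the situation the formal definition describes.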

3.2.1 Conditions for Deadlock


Coffman et al. (1971) showed that four conditions must hold for there to be a deadlock:

1. Mutual exclusion condition. Each resource is either currently assigned to exactly one process or is available.

2. Hold and wait condition. Processes currently holding resources granted earlier can request new resources.

3. No preemption condition. Resources previously granted cannot be forcibly taken away from a process. They must be explicitly released by the process holding them.

4. Circular wait condition. There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.

All four of these conditions must be present for a deadlock to occur. If one of them is absent, no deadlock is possible. It is worth noting that each condition relates to a policy that a system can have or not have. Can a given resource be assigned to more than one process at once? Can a process hold a resource and ask for another? Can resources be preempted?

Can circular waits exist? Later on we will see how deadlocks can be attacked by trying to negate some of these conditions.

3.2.2 Deadlock Modeling


Holt (1972) showed how these four conditions can be modeled using directed graphs. The graphs have two kinds of nodes: processes, shown as circles, and resources, shown as squares. An arc from a resource node (square) to a process node (circle) means that the resource has previously been requested by, granted to, and is currently held by that process. In Fig. 3-1(a), resource R is currently assigned to process A.
Figure 3-1. Resource allocation graphs. (a) Holding a resource. (b) Requesting a resource. (c) Deadlock.

An arc from a process to a resource means that the process is currently blocked waiting for that resource. In Fig. 3-1(b), process B is waiting for resource S. In Fig. 3-1(c) we see a deadlock: process C is waiting for resource T, which is currently held by process D. Process D is not about to release resource T because it is waiting for resource U, held by C. Both processes will wait forever. A cycle in the graph means that there is a deadlock involving the processes and resources in the cycle (assuming that there is one resource of each kind). In this example, the cycle is C → T → D → U → C.
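A resource-allocation graph of this kind is easy to represent and check in code. The following Python sketch is an illustration of the idea, not code from the text; the dictionary encoding and the has_cycle helper are our own. It encodes the arcs of Fig. 3-1(c) as a directed graph and uses a depth-first search to look for a cycle.

```python
# Directed graph: node -> set of nodes it has an arc to.
# Arc resource -> process : the resource is assigned to that process.
# Arc process  -> resource: the process is blocked waiting for that resource.

def has_cycle(graph):
    """Depth-first search for a cycle in a directed graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:    # back edge: cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# Figure 3-1(c): C waits for T, T is held by D, D waits for U, U is held by C.
fig_3_1c = {
    "C": {"T"},   # process C requests resource T
    "T": {"D"},   # resource T is assigned to process D
    "D": {"U"},   # process D requests resource U
    "U": {"C"},   # resource U is assigned to process C
}

print(has_cycle(fig_3_1c))   # True -> deadlock
```

Because C → T → D → U → C forms a cycle, the check reports a deadlock, just as inspection of the graph in Fig. 3-1(c) does.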

Now let us look at an example of how resource graphs can be used. Imagine that we have three processes, A, B, and C, and three resources, R, S, and T. The requests and releases of the three processes are given in Fig. 3-2(a)-(c). The operating system is free to run any unblocked process at any instant, so it could decide to run A until A finished all its work, then run B to completion, and finally run C. This ordering does not lead to any deadlocks (because there is no competition for resources), but it also has no parallelism at all. In addition to requesting and releasing resources, processes compute and do I/O. When the processes are run sequentially, there is no possibility that while one process is waiting for I/O, another can use the CPU. Thus running the processes strictly sequentially may not be optimal. On the other hand, if none of the processes does any I/O at all, shortest job first is better than round robin, so under some circumstances running all processes sequentially may be the best way.

Let us now suppose that the processes do both I/O and computing, so that round robin is a reasonable scheduling algorithm. The resource requests might occur in the order of Fig. 3-2(d). If these six requests are carried out in that order, the six resulting resource graphs are shown in Fig. 3-2(e)-(j). After request 4 has been made, A blocks waiting for S, as shown in Fig. 3-2(h). In the next two steps B and C also block, ultimately leading to a cycle and the deadlock of Fig. 3-2(j).

However, as we have already mentioned, the operating system is not required to run the processes in any special order. In particular, if granting a particular request might lead to deadlock, the operating system can simply suspend the process without granting the request (i.e., just not schedule the process) until it is safe. In Fig. 3-2, if the operating system knew about the impending deadlock, it could suspend B instead of granting it S. By running only A and C, we would get the requests and releases of Fig. 3-2(k) instead of Fig. 3-2(d). This sequence leads to the resource graphs of Fig. 3-2(l)-(q), which do not lead to deadlock. After step (q), process B can be granted S because A is finished and C has everything it needs. Even if B should eventually block when requesting T, no deadlock can occur; B will just wait until C is finished.

Later in this chapter we will study a detailed algorithm for making allocation decisions that do not lead to deadlock. For the moment, the point to understand is that resource graphs are a tool that lets us see whether a given request/release sequence leads to deadlock. We just carry out the requests and releases step by step, and after every step check the graph to see if it contains any cycles. If so, we have a deadlock; if not, there is no deadlock. (A small sketch of this step-by-step check appears at the end of this section.) Although our treatment of resource graphs has been for the case of a single resource of each type, resource graphs can also be generalized to handle multiple resources of the same type (Holt, 1972).

In general, four strategies are used for dealing with deadlocks.

1. Just ignore the problem altogether. Maybe if you ignore it, it will ignore you.

2. Detection and recovery. Let deadlocks occur, detect them, and take action.

3. Dynamic avoidance by careful resource allocation.

4. Prevention, by structurally negating one of the four conditions necessary to cause a deadlock.

We will examine each of these methods in turn in the next four sections.
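As the promised illustration of the step-by-step check, here is a small Python sketch. It is our own reconstruction, not code from the text: it grants each request if the resource is free and otherwise adds a waiting arc, then checks for a cycle after every step. The request order used below is of the kind the text attributes to Fig. 3-2(d) (each of A, B, and C first obtains one of R, S, and T and then asks for a resource held by a neighbor) and is assumed here rather than quoted from the figure.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: set(successors)}."""
    state = {}                                   # node -> "visiting" or "done"

    def visit(node):
        state[node] = "visiting"
        for succ in graph.get(node, ()):
            if state.get(succ) == "visiting":
                return True
            if succ not in state and visit(succ):
                return True
        state[node] = "done"
        return False

    return any(node not in state and visit(node) for node in list(graph))

def simulate(steps):
    """Carry out request steps one at a time, checking for deadlock.

    Each step is (process, resource).  A resource is granted if free
    (arc resource -> process); otherwise the process blocks on it
    (arc process -> resource).
    """
    graph = {}
    owner = {}                                   # resource -> owning process
    for proc, res in steps:
        if res not in owner:
            owner[res] = proc
            graph.setdefault(res, set()).add(proc)   # res held by proc
        else:
            graph.setdefault(proc, set()).add(res)   # proc waits for res
        print(f"{proc} requests {res}: deadlock={has_cycle(graph)}")

# An assumed request order in the spirit of Fig. 3-2(d).
simulate([("A", "R"), ("B", "S"), ("C", "T"),
          ("A", "S"), ("B", "T"), ("C", "R")])
```

Running this reports no deadlock after the first five requests and a deadlock as soon as the sixth request closes the cycle, matching the progression the text describes from Fig. 3-2(e) through Fig. 3-2(j).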
