Unit III - OS

The document discusses deadlocks in operating systems. It defines deadlock as a situation where multiple processes are waiting for resources held by each other in a cyclic manner, such that no progress can be made. It then describes the four necessary conditions for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Finally, it discusses methods for handling deadlocks, including prevention protocols that avoid one of the necessary conditions, and detection and recovery approaches.

UNIT III

DEADLOCKS

In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time, the
process enters a wait state. Waiting processes may never again change state, because the resources
they have requested are held by other waiting processes. This situation is called a deadlock.
A process must request a resource before using it, and must release the resource after using
it. A process may request as many resources as it requires to carry out its task. The number of
resources requested may not exceed the total number of resources available in the system. For
example, a process cannot request three printers if the system has only two.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:

1. Request: If the request cannot be granted immediately (for example, if the resource is being
used by another process), then the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).
3. Release: The process releases the resource.

A set of processes is in a deadlock state when every process in the set is waiting for an event
that can be caused only by another process in the set.
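This definition can be demonstrated with a short sketch (illustrative only; the two locks stand in for two single-instance resources and the process names are made up). Two Python threads each hold one resource and wait for the other, producing exactly the circular wait described above:

```python
import threading

a, b = threading.Lock(), threading.Lock()   # two single-instance resources
start = threading.Barrier(2)   # both processes hold their first resource here
done = threading.Barrier(2)    # neither releases until both have tried the other

outcome = {}

def worker(name, first, second):
    first.acquire()                              # hold one resource...
    start.wait()
    # ...then wait for the other: each process waits for an event that
    # can be caused only by the other process in the set
    outcome[name] = second.acquire(timeout=0.2)
    done.wait()
    first.release()

t1 = threading.Thread(target=worker, args=("P1", a, b))
t2 = threading.Thread(target=worker, args=("P2", b, a))
t1.start(); t2.start(); t1.join(); t2.join()
print(outcome)   # {'P1': False, 'P2': False} - neither request can be granted
```

(The timeouts exist only so the demonstration terminates; truly deadlocked processes would wait forever.)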

DEADLOCK CHARACTERIZATION

I. Necessary Conditions:

A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily, after the process holding it has completed its task.
4. Circular wait: A set {P0, P1, P2, …, Pn} of waiting processes must exist such that
P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2,
P2 is waiting for a resource that is held by P3,
…,
Pn-1 is waiting for a resource that is held by Pn, and
Pn is waiting for a resource that is held by P0.
II. Resource Allocation Graph:
Deadlocks can be described in terms of a directed graph called a system resource-allocation graph.
This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned
into two different types of nodes:
• P = {P1, P2, …, Pn}, the set consisting of all active processes in the system
• R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that
process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
This edge is called a request edge.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that
an instance of resource type Rj has been allocated to process Pi. This edge is called an assignment
edge.
Pictorially each process Pi is represented by a circle, and each resource type Rj as a square.
Since resource type Rj may have more than one instance, we represent each such instance as a dot
within the square.
The resource allocation graph shown in figure depicts the following situation:
Fig. Resource-allocation graph (processes P1, P2, P3 as circles; resource types R1–R4 as squares)
• The sets P, R and E:
o P = {P1, P2, P3}
o R = {R1, R2, R3, R4}
o E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
• Resource instances:
o One instance of resource type R1
o Two instances of resource type R2
o One instance of resource type R3
o Three instances of resource type R4

• Process states:
o Process P1 is holding an instance of resource type R2, and is waiting for an instance
of resource type R1.
o Process P2 is holding an instance of R1 and R2, and is waiting for an instance of
resource type R3.
o Process P3 is holding an instance of R3.
➢ Given the definition of a resource-allocation graph, if the graph contains no cycle, then no
process in the system is deadlocked.
➢ If the graph contains a cycle, then a deadlock may exist.
➢ If each resource type has exactly one instance, then a cycle implies that a deadlock
has occurred.
➢ If each resource type has several instances, then a cycle does not necessarily imply that a
deadlock has occurred.
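The cycle test above amounts to ordinary cycle detection in a directed graph. A minimal sketch (the edge lists encode the example graph; adding the request edge P3 → R2 produces the deadlocked version discussed next):

```python
# Edges of the example resource-allocation graph:
# request edges P -> R, assignment edges R -> P.
edges = {
    "P1": ["R1"], "P2": ["R3"], "P3": [],
    "R1": ["P2"], "R2": ["P2", "P1"], "R3": ["P3"], "R4": [],
}

def has_cycle(graph):
    """Depth-first search; an edge back to a node on the current path is a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def dfs(v):
        color[v] = GREY                      # v is on the current path
        for w in graph.get(v, []):
            if color[w] == GREY:             # reached an ancestor: cycle found
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK                     # fully explored, not in any cycle
        return False
    return any(color[v] == WHITE and dfs(v) for v in graph)

print(has_cycle(edges))   # False: the first example graph has no cycle
edges["P3"] = ["R2"]      # add the request edge P3 -> R2 (the deadlocked graph)
print(has_cycle(edges))   # True: now P2 -> R3 -> P3 -> R2 -> P2 is a cycle
```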
Fig. Resource allocation graph with a deadlock

Two minimal cycles exist in the system:

P1 → R1 → P2 → R3 → P3 → R2 → P1

P2 → R3 → P3 → R2 → P2

Processes P1, P2 and P3 are deadlocked. Process P2 is waiting for the resource R3, which is
held by process P3. Process P3, on the other hand, is waiting for either process P1 or process
P2 to release resource R2. In addition, process P1 is now waiting for process P2 to release
resource R1.

Methods for Handling Deadlocks


The deadlock problem can be dealt with in one of three ways:
• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
• We can allow the system to enter a deadlock state, detect it, and recover.
• We can ignore the problem altogether, and pretend that deadlocks never occur in the
system.

To ensure that deadlocks never occur, the system can use either a deadlock prevention or
a deadlock avoidance scheme.

Deadlock prevention: a set of methods for ensuring that at least one of the necessary
conditions cannot hold.
Deadlock avoidance: requires that the OS be given in advance additional information concerning
which resources a process will request and use during its lifetime. With this additional knowledge,
we can decide for each request whether it can be satisfied or must be delayed. To make this
decision, the system must consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process.

DEADLOCK PREVENTION

Deadlocks can be prevented by ensuring that at least one of the four necessary conditions cannot
hold. The conditions are:
• Mutual Exclusion
• Hold and Wait
• No Preemption
• Circular Wait

1. Mutual Exclusion:

The mutual exclusion condition must hold for non-sharable resources. For example, a printer
cannot be simultaneously shared by several processes. Sharable resources, on the other hand, do
not require mutually exclusive access, and thus cannot be involved in a deadlock. Read-only files
are a good example of a sharable resource. If several processes attempt to open a read-only file at
the same time, they can be granted simultaneous access to the file. A process never needs to wait
for a sharable resource.

2. Hold and Wait:

To ensure that the hold and wait condition never occurs in the system, we must guarantee that,
whenever a process requests a resource, it does not hold any other resources.
One protocol that can be used requires each process to request and be allocated all its resources
before it begins execution.
Another protocol allows a process to request resources only when the process has none. A process
may request some resources and use them. Before it can request any additional resources, it must
release all the resources that it is currently allocated.
Examples to illustrate the two protocols:
Consider a process that copies data from a tape drive to a disk file, sorts the disk file and then
prints the results to a printer.

Protocol one - If all resources must be requested at the beginning of the process, then the process
must initially request the tape drive, disk file and printer. It will hold the printer for its entire
execution, even though it needs the printer only at the end.

Protocol two – The second method allows the process to request initially only the tape drive and
disk file. It copies from the tape drive to the disk, then releases both the tape drive and the disk
file. The process must then again request the disk file and the printer. After copying the disk file
to the printer, it releases these two resources and terminates.
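Protocol one can be sketched as an all-or-nothing acquisition (an illustrative Python sketch; the resource names follow the example): a request is granted only if every resource is free at once, so the process never holds some resources while waiting for others.

```python
import threading

tape, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()

def acquire_all(*resources):
    """Grant the request only if every resource is free right now;
    otherwise release anything grabbed so far and report failure."""
    got = []
    for res in resources:
        if res.acquire(blocking=False):
            got.append(res)
        else:                       # some resource is busy: roll back completely,
            for held in got:        # so the process never holds-and-waits
                held.release()
            return None
    return got

held = acquire_all(tape, disk, printer)
print(held is not None)     # True: all three were free, granted atomically
for res in held:
    res.release()
```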

Disadvantages of two protocols:

1. Resource utilization may be low, since many of the resources may be allocated but unused
for a long period.
2. Starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to some
other process.

3. No Preemption

The third necessary condition is that there be no preemption of resources that have already been
allocated. To ensure that this condition cannot hold, the following protocol can be used.
If a process is holding some resources and requests another resource that cannot be
immediately allocated to it, then all resources the process is currently holding are preempted, i.e.
implicitly released and added to the list of resources for which the process is waiting. The process
will be restarted only when it can regain its old resources, as well as the new ones that it is
requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not available, we check whether they are allocated to some
other process that is waiting for additional resources. If so, we preempt the desired resources from
the waiting process and allocate them to the requesting process. If the resources are neither
available nor held by a waiting process, the requesting process must wait. While it is waiting, some
of its resources may be preempted, but only if another process requests them. A process can be
restarted only when it is allocated the new resources it is requesting and recovers any resources
that were preempted while it was waiting.

4. Circular Wait

The fourth condition for deadlock is circular wait. One way to ensure that this condition never
holds is to impose a total ordering of all resource types, and to require that each process requests
resources in increasing order of enumeration.
Let R = {R1, R2, …, Rm} be the set of resource types. We assign to each resource type a
unique number, which allows us to compare two resources and determine whether one precedes
the other in our ordering.

Example: F(tape drive) = 1,


F(disk drive) = 5
F(printer) = 12.

Now consider the following protocol to prevent deadlocks: each process requests resources in
increasing order of enumeration. If a process has requested resource type Ri, it can subsequently
request instances of resource type Rj if and only if F(Rj) > F(Ri). For example, using the function
defined above, a process that wants to use a tape drive and printer at the same time must first
request the tape drive and then request the printer.
Alternatively, we can require that, whenever a process requests an instance of resource type
Rj, it has released any resources Ri such that F(Ri) >= F(Rj). If either of these protocols is used,
then the circular-wait condition cannot hold.
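The ordering protocol can be sketched in a few lines (illustrative Python; the numbering F is the one from the example above). Whatever order the caller names the resources in, they are acquired in increasing F order, so no circular wait can form:

```python
import threading

# Numbering from the example: F(tape) = 1, F(disk) = 5, F(printer) = 12
F = {"tape": 1, "disk": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Always take resources in increasing F order, whatever order was asked for."""
    held = []
    for name in sorted(names, key=lambda n: F[n]):
        locks[name].acquire()
        held.append(name)
    return held

order = acquire_in_order("printer", "tape")
print(order)   # ['tape', 'printer']: tape first, since F(tape) < F(printer)
for name in order:
    locks[name].release()
```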

DEADLOCK RECOVERY

There are two approaches to solve the deadlock problem. They are

• Suspend/Resume a Process
• Kill a Process

1. Suspend/Resume a Process:

In this method a process is selected based on a variety of criteria (for example, low priority) and is
suspended for a long time. The resources are reclaimed from that process and then allocated to
other processes that are waiting for them. When one of the waiting processes gets over, the original
suspended process is resumed.
This strategy cannot be used in on-line or real-time systems, because the response time of
some processes then becomes unpredictable.
Suspend/Resume operations are not easy to manage. Consider, for example, a tape that is
read halfway through when the process holding the tape drive is suspended. The operator will have
to dismount that tape and mount the new tape for the new process to which the tape drive is now
allocated. After this new process is over, when the old process is resumed, the tape for the original
process will have to be mounted again and, more importantly, it will have to be exactly positioned.

2. Kill a Process:
The operating system decides to kill a process and reclaim all its resources after ensuring that such
action will resolve the deadlock. This solution is simple but involves the loss of at least one process.
Choosing a process to be killed, again, depends on the scheduling policy and the process priority.
It is safest to kill a lowest-priority process which has just begun, so that the loss is not very heavy.

DEADLOCK AVOIDANCE

Deadlock avoidance starts with an environment where a deadlock is possible, but some algorithm
in the operating system ensures, before allocating any resource, that after the allocation a deadlock
can still be avoided. If that cannot be guaranteed, the operating system does not grant the request
of the process for that resource.
Dijkstra was the first person to propose an algorithm for deadlock avoidance, in 1965. It is
known as the "Banker's algorithm" due to its similarity to the problem of a banker wanting to
disburse loans to various customers within limited resources.
This algorithm in the OS is such that it can know in advance, before a resource is allocated
to a process, whether the allocation can lead to a deadlock ("unsafe state") or whether a deadlock
can be avoided ("safe state").
Banker’s algorithm maintains two matrices.
Matrix A – consists of the resources allocated to different processes at a given time.
Matrix B – maintains the resources still needed by different processes at the same time.

Matrix A – Resources assigned:

Process   Tape drives   Printers   Plotters
P0        2             0          0
P1        0             1          0
P2        1             2          1
P3        1             0          1

Matrix B – Resources still required:

Process   Tape drives   Printers   Plotters
P0        1             0          0
P1        1             1          0
P2        2             1          1
P3        1             1          1

Vectors

Total Resources (T) = 543


Held Resources (H) = 432
Free Resources (F) = 111

Matrix A shows that process P0 is holding 2 tape drives, while P1 is holding 1 printer, and so on.
The total resources held by the various processes are: 4 tape drives, 3 printers and 2 plotters.

This says that at a given moment, the total resources held by various processes are: 4 tape
drives, 3 printers and 2 plotters. This should not be confused with the decimal number 432; that
is why it is called a vector. By the same logic, the figure shows that the vector for the Total
Resources (T) is 543. This means that in the whole system, there are physically 5 tape drives,
4 printers and 3 plotters. These resources are made known to the operating system at the time of
system generation. By subtracting (H) from (T) column-wise, we get a vector (F) of free
resources, which is 111. This means that the resources available to the operating system for further
allocation are: 1 tape drive, 1 printer and 1 plotter at that juncture.
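The column-wise subtraction is simple vector arithmetic, sketched below with the numbers from the figure:

```python
T = [5, 4, 3]                      # total: tape drives, printers, plotters
H = [4, 3, 2]                      # held: column-wise sums of Matrix A
F = [t - h for t, h in zip(T, H)]  # free = total - held, component by component
print(F)                           # [1, 1, 1]
```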

Matrix B gives, process-wise, the additional resources that are expected to be required during
the execution of each process. For instance, process P2 will require 2 tape drives, 1 printer and
1 plotter, in addition to the resources already held by it. It means that process P2 requires in all
1+2=3 tape drives, 2+1=3 printers and 1+1=2 plotters. If the vector of all the resources required
by all the processes (the vector addition of Matrix A and Matrix B) is less than the vector T for
each of the resources, there will be no contention and therefore no deadlock. However, if that is
not so, a deadlock has to be avoided.

Having maintained these two matrices, the algorithm for the deadlock avoidance works as
follows:

(i) Each process declares the total required resources to the operating system at the
beginning. The operating system puts this figure in Matrix B (resources required
for completion) against each process. For a newly created process, the row in
Matrix A is all zeros to begin with, because no resources are yet assigned to that
process. For instance, at the beginning of process P2, the figures for the row P2 in
Matrix A will be all 0s; and those in Matrix B will be 3, 3 and 2 respectively.

(ii) When a process requests the operating system for a resource, the operating system
finds out whether the resource is free and whether it can be allocated, by using the
vector F. If it can be allocated, the operating system does so, and updates Matrix A
by adding 1 to the appropriate slot. It simultaneously subtracts 1 from the
corresponding slot of Matrix B. For instance, starting from the beginning, if the
operating system allocates a tape drive to P2, the row for P2 in Matrix A will become
1, 0 and 0. The row for P2 in Matrix B will correspondingly become 2, 3 and 2. At
any time, the sum of these two rows, i.e. the addition of the corresponding numbers
in the two rows, is always constant and is equivalent to the total resources needed
by P2, which in this case is 3, 3 and 2.

(iii) However, before making the actual allocation, whenever a process makes a
request to the operating system for any resource, the operating system goes through
the Banker's algorithm to ensure that after the imaginary allocation there need not
be a deadlock, i.e. that after the allocation the system will still be in a 'safe state'. The
operating system actually allocates the resource only after ensuring this. If it finds
that there can be a deadlock after the imaginary allocation at some point in time, it
postpones the decision to allocate that resource. It calls the state of the system that
would result after the possible allocation 'unsafe'. Remember: the unsafe state
is not actually a deadlock; it is a situation of a potential deadlock.

The point is: how does the operating system conclude whether a state is safe or unsafe? It uses an
interesting method. It looks at vector F and each row of Matrix B. It compares them on a vector-
to-vector basis, i.e. within the vector it compares each digit separately, to conclude whether all the
resources that a process is going to need to complete are available at that juncture or not. For
instance, the figure shows F = 111. It means that at that juncture, the system has 1 tape drive, 1
printer and 1 plotter free and allocable. (The first row in Matrix B, for P0, is 100.) This means that
if the operating system decides to allocate all needed resources to P0, P0 can go to completion,
because 111 > 100 on a vector basis. Similarly, the row for P1 in Matrix B is 110. Therefore, if the
operating system decides to allocate resources to P1 instead of to P0, P1 can also complete. The
row for P2 is 211. Therefore, P2 cannot complete unless one more tape drive becomes available,
because 211 is greater than 111 on a vector basis.
The vector comparison should not be confused with arithmetic comparison. For
instance, if F were 411 and a row in Matrix B were 322, it might appear that 411 > 322 and
therefore the process can go to completion. But that is not true. As 4 > 3, the tape drives would be
allocable; but as 1 < 2, the printer as well as the plotter would both fall short.
The operating system now does the following to ensure the safe state:

(a) After the process requests a resource, the operating system allocates it on a 'trial' basis.
(b) After this trial allocation, it updates all the matrices and vectors, i.e. it arrives at the new
values of F and Matrix B, as if the allocation were actually done. Obviously, this updating
will have to be done by the operating system in a separate work area in memory.
(c) It then compares vector F with each row of Matrix B on a vector basis.
(d) If F is smaller than each of the rows in Matrix B on a vector basis, i.e. even if all of F were
made available to any of the processes in Matrix B, none would be guaranteed to complete,
the operating system concludes that it is an 'unsafe state'. Again, this does not mean that a
deadlock has resulted. However, it means that one can take place if the operating system
actually goes ahead with the allocation.
(e) If F is greater than or equal to the row for some process in Matrix B, the operating system
proceeds as follows:

• It allocates all the needed resources for that process on a trial basis.
• It assumes that after this trial allocation, that process will eventually get completed,
and, in fact, release all its resources on completion. These resources now will be
added to the free pool (F). It now recalculates all the matrices and F after this trial
allocation and the imaginary completion of this process. It removes the row for the
completed process from both the matrices.
• It repeats the procedure from step (c) above. If, in the process, all the rows in the
matrices get eliminated, i.e. all the processes can go to completion, it concludes that
it is a 'safe state', i.e. even after the allocation a deadlock can be avoided.
Otherwise, it concludes that it is an 'unsafe state'.
(f) For each request for any resource by a process, the operating system goes through all these
trial or imaginary allocations and updates, and if it finds that after the trial allocation the
state of the system would be 'safe', it actually goes ahead and makes the allocation, after
which it updates the various matrices and tables in a real sense. The operating system may
need to maintain two sets of matrices for this purpose. At any time, before any allocation,
it could copy the first set of matrices (the real one) into the other, carry out all trial
allocations and updates in the other, and if a safe state results, update the former set with
the allocations.
Banker’s Algorithm

The resource-allocation graph algorithm is not applicable to a resource-allocation system with
multiple instances of each resource type. The deadlock-avoidance algorithm that we describe next
is applicable to such a system, but is less efficient than the resource-allocation graph scheme. This
algorithm is commonly known as the banker's algorithm. The name was chosen because this
algorithm could be used in a banking system to ensure that the bank never allocates its available
cash in such a way that it can no longer satisfy the needs of all its customers.

When a new process enters the system, it must declare the maximum number of instances
of each resource type that it may need. This number may not exceed the total number of resources
in the system. When a user requests a set of resources, the system must determine whether the
allocation of these resources will leave the system in a safe state. If it will, the resources are
allocated; otherwise, the process must wait until some other process releases enough resources.

Safety Algorithm

The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work :=
Available and Finish[i] := false for i = 1, 2, …, n.
2. Find an i such that both

a. Finish[i] = false
b. Needi <= Work.

If no such i exists, go to step 4.

3. Work := Work + Allocationi;
Finish[i] := true
Go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.

This algorithm may require an order of m x n operations to decide whether a state is safe.
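The steps above can be sketched directly in Python (an illustrative sketch, using the Allocation and Need matrices from the earlier figure, where row i of each matrix describes process Pi):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices,
    or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)              # Work := Available
    finish = [False] * n                # Finish[i] := false
    sequence = []
    progressed = True
    while progressed:                   # step 2: find an i with Needi <= Work
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # step 3: pretend Pi runs to completion
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None   # step 4

# Values from the figure: F = 111, Matrix A and Matrix B
allocation = [[2, 0, 0], [0, 1, 0], [1, 2, 1], [1, 0, 1]]   # Matrix A
need       = [[1, 0, 0], [1, 1, 0], [2, 1, 1], [1, 1, 1]]   # Matrix B
print(is_safe([1, 1, 1], allocation, need))   # [0, 1, 2, 3]: the state is safe
```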

Resource-Request Algorithm

Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:

1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:

Available := Available – Requesti;
Allocationi := Allocationi + Requesti;
Needi := Needi – Requesti;

If the resulting resource-allocation state is safe, the transaction is completed and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the
old resource-allocation state is restored.

DEADLOCK DETECTION

The graphs (DRAGs) provide good help in doing this, as we have seen. However, normally
a realistic DRAG is not as straightforward as a DRAG between two processes (P1, P2) and two
resources (R1 and R2) as depicted. In reality, there could be a number of resource types such as
printers, plotters, tapes, and so on. For instance, the system could have two identical printers, and
the operating system must be told about this so that it can allocate either of them when a printer is
requested. The complexity arises from the fact that the allocation to a process is made of a specific
resource by the operating system, depending upon availability, but the process normally makes its
request to the operating system for only a resource type. A very large number of processes can
make the DRAG look more complex and deadlock detection more time consuming.

We will denote multiple instances of the same resource type by means of multiple symbols
within the square. For example, consider the DRAG shown in figure.
Figure: multiple resources of a type
R1 is a resource type – say, a tape drive of a certain kind – and let us assume that there are two
tape drives, R10 and R11, of the same kind known to the system. R2 may be a resource of a certain
type of which only one instance, say R20, is available in the system. In the figure, R10 is allocated
to P1, P1 is waiting for R20, and R20 is allocated to P2. Now comes the question of the last
resource in the diagram. Let us assume that R11 is free and P2 wants it. In this case, P2 can actually
grab R11, and if it does so, an arrow will be drawn from R11 to P2 as shown in the figure. If you
traverse from a node, following the arrows, you will not arrive back at the starting node. This
violates the requirement for a circular wait. Therefore, even if it gives an impression of a deadlock
at first sight, it is NOT a deadlock situation.
Figure: no deadlock situation.

Therefore, P2 in this case need not wait for R11; it can go to completion. The point is that the
visual illusion of a cycle should not deceive us: it is not a circular-wait condition. If R11, however,
is also not free and has already been grabbed by, say, P1, it can lead to a deadlock if P2 requests R11.
The operating system, in that case, could do the following to detect a deadlock:

(i) Number all processes as P0, P1, …, PN.

(ii) Number each resource separately, using a meaningful coding scheme. For instance,
the first character could always be "R", denoting a resource. The second character could
denote the resource type (0 = tape, 1 = printer, etc.) and the third character could denote the
resource number, or instance, within the type; e.g. R00, R01, R02, … could be different
tape drives of the same type; R10, R11, R12, … different printers.
(iii) Maintain two tables as shown in fig. One is a resource-wise table giving, for each
resource, its type, allocation status, the process to which it is allocated and the processes
that are waiting for it.
The other is a process-wise table, giving, for each process, the resources held
by it and the resources it is waiting for. This is normally held along with the PCB. Logically,
it is a part of the PCB, but an operating system could choose to maintain it in a separate table
linked to the PCB for that process. The operating system can use this information to
detect any deadlock, as we shall see later.

(iv) Whenever a process requests the operating system for a resource, the request is
obviously for a resource belonging to a resource type. The user does not really care which
one is exactly allocated (if he did, a new resource type should have been created). The
operating system then goes through the resource-wise table to see if there is any free
resource of that type, and allocates it if there is. This also necessitates updating both tables.
When a process releases a resource, again both tables are updated accordingly.

(v) At any time, the operating system can use these tables to detect a circular wait or a
deadlock. Typically, whenever a resource is demanded by a process, before actually
allocating it, the operating system could use this algorithm to see whether the allocation
can potentially lead to a deadlock or not.
The working is as follows:

(a) Go through the resource wise table entries one by one (e.g. R00, R01 …etc.), each time
storing the resource numbers processed. This is useful in detecting a circular wait, i.e.
in finding out whether we have reached the same node again or not.
(b) Ignore entries for free resources.
(c) For each of the entries, access the process to which the resource is allocated. In this
case, store the numbers R01 and P1 in separate lists, called the resource list and the process
list respectively.
(d) Access the entry in the process wise table for that process obtained in step (c) (P1 in
this case).
(e) Access, one by one, the resources this process (P1) is waiting for. For example, P1 is
waiting for resource R20. Check if this is the same as one already encountered, i.e.
if R20 is the same as R01 stored in step (c); in short, check if a circular wait has already
been found. If yes, a deadlock is detected. If no, store this resource (e.g. R20) in the
resource list. This list will now contain R01 and R20; the process list still contains
only P1. Check whether there is any other resource apart from R20 that process P1 is
waiting for. If there is, this procedure will have to be repeated. In this example, there is no
such resource. Therefore, the operating system goes to the next step (f).
(f) Access the next entry in the resource list maintained in step (e). This entry is R20. Now
access the resource wise table for R20 to find that R20 is allocated to P5.
(g) Check if this process (i.e. P5) is one already encountered in the process list
maintained in step (c). If it is the same, a deadlock is confirmed. In this case, P5 is not
the same as P1, so it only stores P5 after P1 in the process list and proceeds. The process
list now contains P1 and P5; the resource list is still R01, R20 as in step (e). After this,
the operating system will have to choose R10 and R23, as they are the resources
process P5 is waiting for. It finds that R10 is allocated to P1, which already exists in the
process list. Hence, a deadlock (P1←R20←P5←R10←P1) has been detected.
Therefore, the operating system has to maintain two lists – one list of
resources already encountered and a separate list of all the waiting processes already
encountered. Any time the operating system encounters the same entry again in either
list while going through the algorithm, the deadlock is confirmed.
(h) If a deadlock is not confirmed, continue this procedure for all the permutations and
combinations, e.g. for all the resources that a process is waiting for, and then, for each of
those resources, the processes to which they are allocated. This procedure has to be
repeated until both the lists are exhausted. If all the paths lead to resources
which are free and allocable, there is no deadlock. If any path makes the operating
system repeatedly go through the same process or resource, it is a deadlock situation.
Having finished one row, go to the next one and repeat this procedure for all the rows where the
status is NOT=free.
PROCESS SYNCHRONIZATION
A system consists of n processes {P0, P1, …, Pn-1}. Each process has a segment of code called a
critical section, in which the process may be changing common variables, updating a table,
writing a file and so on. Each process must request permission to enter its critical section. The
section of code implementing this request is the entry section. The critical section may be followed
by an exit section. The remaining code is the remainder section.
Fig. General structure of a typical process Pi.
do{

entry section

critical section

exit section

remainder section
} while(1);

A solution to the critical section problem must satisfy the following requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other process
can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder
sections can participate in the decision on which will enter its critical section next, and this
selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

Two Process Solutions:

I. Algorithm 1:

do{
while (turn != i);
critical section

turn = j;
remainder section
} while (1);

Fig. The structure of process Pi in algorithm 1.

The two processes P0 and P1 share a common integer variable turn, initialized to 0 or 1. If turn == i,
then process Pi is allowed to execute in its critical section. The structure of Pi is shown in the figure.
This solution ensures that only one process at a time can be in its critical section. But it does not
satisfy the progress requirement: it forces strict alternation, so if turn == 0 and P1 wishes to enter
its critical section while P0 is in its remainder section, P1 cannot enter even though no process is in
its critical section.

II. Algorithm 2

Fig. The structure of process Pi in algorithm 2.

do{
flag[i] = true;
while (flag[j]);

critical section
flag[i] = false;
remainder section
} while(1);

The remedy to algorithm 1 is we can replace the variable turn with the following array:

boolean flag[2];

The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is
ready to enter the critical section. The structure of Pi is shown in figure.
In this algorithm, process Pi first sets flag[i] to true, signaling that it is ready to enter its
critical section. Then Pi checks to verify that process Pj is not also ready to enter its critical section.
If Pj were ready, then Pi would wait until Pj indicated that it no longer needed to be in the critical
section (that is, until flag[j] became false). At that point Pi would enter the critical section, and on
exiting it Pi would set flag[i] to false, allowing the other process to enter its critical section.
In this algorithm mutual exclusion is satisfied, but progress is not. Consider the following
execution:
T0: P0 sets flag[0] = true
T1: P1 sets flag[1] = true
If a timing interrupt occurs just after T0 and the CPU switches to P1, then after T1 both flags are
true, and P0 and P1 loop forever in their while statements, each waiting for the other.

III. Algorithm 3:

By combining the key ideas of algorithms 1 and 2, we obtain a correct solution to the critical
section problem, in which all three requirements are met.

The processes share the two variables:


boolean flag[2];
int turn;

Fig. The structure of process Pi in algorithm 3.

do{
flag[i] = true;
turn = j;
while (flag[j] && turn == j);

critical section
flag[i] = false;
remainder section
} while (1);

Initially flag[0] = flag[1] = false, and the value of turn is 0 or 1.


To enter the critical section, process Pi first sets flag[i] = true and then sets turn = j, so that the
other process can enter the critical section if it wishes to. If both processes try to enter at the same
time, turn is assigned both i and j at roughly the same time; only one of these assignments lasts,
and its final value decides which of the two processes enters its critical section first.

To prove that the solution is correct, we show that:

1. Mutual exclusion is preserved


2. The progress requirement is satisfied
3. The bounded waiting requirement is met.
To prove property 1:

Pi enters its critical section only if either flag[j] == false or turn == i. Note that if both processes
were executing in their critical sections at the same time, we would have flag[0] == flag[1] == true;
but turn can be either 0 or 1, not both, so only one of the two processes can have successfully
passed its while test. Hence mutual exclusion is preserved.

To prove property 2:

Process Pi can be prevented from entering the critical section only while the condition
flag[j] == true and turn == j holds.
If Pj is not ready to enter the critical section, then flag[j] == false and Pi can enter the critical
section.
If Pj has set flag[j] = true and is also executing in its while loop, then either turn == i or
turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical
section. When Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter the
critical section.
If Pj then resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the
value of the variable turn while executing its while statement, Pi will enter the critical
section (progress) after at most one entry by Pj (bounded waiting).

CLASSIC PROBLEMS OF SYNCHRONIZATION

I. Bounded Buffer Problem:

This illustrates the concept of cooperating processes, consider the producer and consumer
problem. A producer process produces information that is consumed by a consumer. Example,
a print program produces characters that is consumed by the printer device. We must have available
buffer of items that can be filled by the producer and emptied by the consumer. A producer can
produce one item while the consumer is consuming another item. The producer and the consumer
must be synchronized so that the consumer does not try to consume an item that has not yet been
produced.
In the unbounded-buffer producer-consumer problem, there is no limit on the size of the
buffer. The consumer may have to wait for new items, but the producer can always produce
new items.
The bounded-buffer producer-consumer problem assumes a fixed buffer size. In this case
the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
In the bounded buffer problem, we assume the pool consists of n buffers, each capable of
holding one item. The mutex semaphore provides mutual exclusion for access to the buffer pool
and is initialized to the value 1. The empty and full semaphores count the number of empty and
full buffers. The semaphore empty is initialized to value n, and full initialized to the value 0.

Fig. The structure of the producer process.

do{
….
Produce an item
….
wait(empty);
wait(mutex);
….
add to buffer
….
signal(mutex);
signal(full);
}while(1);

Fig. The structure of consumer process.


do{
wait(full);
wait(mutex);
….
Remove an item from buffer
….
signal(mutex);
signal(empty);
….
Consume the item
….
}while(1);
