Lecture Notes Study Materials 66277
Process Synchronization
Process synchronization means coordinating the way processes share system resources so that
concurrent access to shared data is handled safely, minimizing the chance of inconsistent data.
Maintaining data consistency requires mechanisms that ensure the orderly execution of
cooperating processes.
Process synchronization was introduced to handle problems that arise when multiple processes
execute concurrently. Some of these problems are discussed below.
A solution to the critical section problem must satisfy the following three conditions:
Prof dhanapalan college of science and management
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a
given point of time.
2. Progress
If no process is in its critical section, and one or more processes want to execute their
critical sections, then one of those processes must be allowed to enter its critical section.
3. Bounded Waiting
After a process requests entry to its critical section, there is a bound on the number of
other processes that may enter their critical sections before that request is granted. Once the
bound is reached, the system must grant the process permission to enter its critical section.
Two-process solutions:
The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to
denote the other process; that is, j = 1 − i.
Algorithm one:
Our first approach is to let the processes share a common integer variable turn, initialized to 0 or
1. If turn == i, then process Pi is allowed to execute in its critical section. The structure of process
Pi is shown.

do {
    while (turn != i);
        critical section
    turn = j;
        remainder section
} while (1);
This solution ensures that only one process at a time can be in its critical section. It does not
satisfy the progress requirement, since it requires strict alternation of processes in the execution
of the critical section.
Algorithm two:
The problem with algorithm 1 is that it does not retain sufficient information about the state of
each process; it remembers only which process is allowed to enter its critical section. We can
replace the variable turn with the following array:
boolean flag[2];
The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is
ready to enter its critical section. The structure of process Pi is

do {
    flag[i] = true;
    while (flag[j]);
        critical section
    flag[i] = false;
        remainder section
} while (1);
In this algorithm, process Pi first sets flag[i] to true, signaling that it is ready to enter its critical
section. Then Pi checks to verify that process Pj is not also ready to enter its critical section. If Pj
were ready, then Pi would wait until flag[j] was false. On exiting the critical section, Pi sets
flag[i] to false, allowing the other process to enter its critical section.
Algorithm 3:
By combining the key ideas of algorithm 1 and algorithm 2, we obtain a correct solution to the
critical section problem, where all three requirements are met. The processes share two variables:
boolean flag[2]; int
turn;
Initially flag[0]=flag[1]=false, and the value of turn is immaterial. The structure of process Pi is
shown.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while (1);
To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j,
thereby asserting that if the other process wishes to enter the critical section it can do so. If both
processes try to enter at the same time, turn will be set to both i and j at roughly the same time.
Only one of these assignments will last.
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
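Algorithm 3 is Peterson's algorithm, and its mutual-exclusion property can be checked mechanically. The sketch below is a small hypothetical model checker (not part of these notes): it treats each labelled statement as one atomic step, enumerates every interleaving of P0 and P1, and asserts that both processes are never in the critical section at once.

```python
# Exhaustively explore all interleavings of Peterson's algorithm.
# State: (flag, turn, pc) where pc[i] is process i's program counter:
#   0: flag[i] = true    1: turn = j    2: while (flag[j] && turn == j)
#   3: in critical section (the exit step sets flag[i] = false)    4: done

def step(state, i):
    """Execute one atomic step of process i, returning the new state."""
    flag, turn, pc = state
    f, p, j = list(flag), list(pc), 1 - i
    if p[i] == 0:
        f[i] = True; p[i] = 1
    elif p[i] == 1:
        turn = j; p[i] = 2
    elif p[i] == 2:
        if not (f[j] and turn == j):   # guard false: enter critical section
            p[i] = 3                   # guard true: spin (state unchanged)
    elif p[i] == 3:
        f[i] = False; p[i] = 4         # leave critical section
    return (tuple(f), turn, tuple(p))

def explore():
    """DFS over all reachable states; assert mutual exclusion in each."""
    start = ((False, False), 0, (0, 0))
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        _, _, pc = s
        assert not (pc[0] == 3 and pc[1] == 3), "both in critical section!"
        for i in (0, 1):
            if pc[i] < 4:
                stack.append(step(s, i))
    return seen

states = explore()
```

No reachable state violates the mutual-exclusion assertion, and a state with both program counters equal to 4 is reachable, so both processes can finish.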
Semaphores
The solutions of the critical section problem presented in this section are not easy to
generalize to more complex problems. To overcome this difficulty, we can use a synchronization
tool called a semaphore. A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal. These operations were originally termed
P (for wait; from the Dutch proberen, to test) and V (for signal; from verhogen, to increment).
The classical definitions of wait and signal are
wait(S) {
    while (S <= 0)
        ;               // no-op (busy wait)
    S = S - 1;
}
signal(S) {
    S = S + 1;
}
Modifications to the integer value of the semaphore in the wait and signal operations must be executed
indivisibly. That is, when one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.
In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its
possible modification (S = S − 1) must also be executed without interruption.
Usage:
We can use semaphores to deal with the n-process critical-section problem. The n
processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1. Each
process Pi is organized as follows:
do {
wait(mutex);
critical section
signal(mutex);
remainder section
}while(1);
We use semaphores to solve various synchronization problems.
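As a concrete illustration (a hypothetical sketch, not from the notes), Python's threading.Semaphore behaves like the semaphore described here: acquire corresponds to wait and release to signal. Initialized to 1 it serves as mutex, and the shared counter below stays consistent only because every increment is bracketed by the two operations.

```python
import threading

mutex = threading.Semaphore(1)   # the mutual-exclusion semaphore
counter = 0                      # shared data updated in the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # wait(mutex)
        counter += 1             # critical section
        mutex.release()          # signal(mutex)

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 4 threads x 10000 increments = 40000
```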
Implementation:
When a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the entry code. This continual looping is clearly a problem in a real
multiprogramming system, where a single CPU is shared among many processes. Busy waiting
wastes CPU cycles that some other process might be able to use productively. This type of
semaphore is also called a spinlock. Spinlocks are useful in multiprocessor systems.
A process that is blocked, waiting on a semaphore S, should be restarted when some other
process executes a signal operation. The process is restarted by a wakeup operation, which
changes the process from the waiting state to the ready state. The process is then placed in the
ready queue.
typedef struct {
    int value;
    struct process *L;
} semaphore;
Each semaphore has an integer value and a list of processes. When a process must wait on
a semaphore, it is added to the list of processes. A signal operation removes one process from the
list of waiting processes and awakens that process.
The wait semaphore operation can now be defined as

void wait(semaphore S)
{
    S.value--;
    if (S.value < 0)
    {
        add this process to S.L;
        block();
    }
}
void signal(semaphore S)
{
    S.value++;
    if (S.value <= 0)
    {
        remove a process P from S.L;
        wakeup(P);
    }
}
The block operation suspends the process that invokes it. The wakeup(P) operation
resumes the execution of a blocked process P. These two operations are provided by the operating
system as basic system calls.
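The value-and-list definition above can be translated into a runnable sketch (hypothetical; Python's threading primitives stand in for the kernel's block() and wakeup() calls, with one Event per blocked thread):

```python
import threading
from collections import deque

class BlockingSemaphore:
    """Semaphore per the definition above: a negative value counts waiters."""
    def __init__(self, value):
        self.value = value
        self.L = deque()                  # list of blocked "processes"
        self._lock = threading.Lock()     # makes wait/signal indivisible

    def wait(self):
        with self._lock:
            self.value -= 1
            if self.value < 0:
                ev = threading.Event()    # stands in for block()
                self.L.append(ev)
            else:
                ev = None
        if ev is not None:
            ev.wait()                     # suspended until some signal

    def signal(self):
        with self._lock:
            self.value += 1
            if self.value <= 0:           # someone is waiting
                self.L.popleft().set()    # wakeup(P): move P to ready

# Use it as a mutex among three threads.
sem, counter = BlockingSemaphore(1), 0

def worker():
    global counter
    for _ in range(1000):
        sem.wait()
        counter += 1
        sem.signal()

ts = [threading.Thread(target=worker) for _ in range(3)]
for t in ts: t.start()
for t in ts: t.join()
```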
Binary semaphores:
The semaphore construct described in the previous section is commonly known as a counting
semaphore, since its integer value can range over an unrestricted domain. A binary semaphore is
a semaphore with an integer value that can range only between 0 and 1.
The binary semaphore can be simpler to implement than a counting semaphore, depending on
the underlying hardware architecture. A counting semaphore can be implemented using binary
semaphores.
Let S be a counting semaphore. To implement it in terms of binary semaphores, we need the
following data structures:

binary-semaphore S1, S2;
int C;

Initially S1 = 1, S2 = 0, and the value of the integer C is set to the initial value of the counting
semaphore S.
The wait operation on the counting semaphore S can be implemented as follows:

wait(S1);
C--;
if (C < 0)
{
    signal(S1);
    wait(S2);
}
signal(S1);
The signal operation on the counting semaphore S can be implemented as follows:

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);
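The two-binary-semaphore construction can be exercised directly. In this hypothetical sketch, threading.Semaphore objects initialized to 1 and 0 play S1 and S2. Note the "baton passing" in signal: when a waiter is released through S2, S1 is handed over without being released, and the waiter releases it at the end of wait.

```python
import threading

class CountingFromBinary:
    """Counting semaphore built from two binary semaphores, as in the text."""
    def __init__(self, initial):
        self.S1 = threading.Semaphore(1)   # binary: protects C
        self.S2 = threading.Semaphore(0)   # binary: parks waiters
        self.C = initial

    def wait(self):
        self.S1.acquire()                  # wait(S1)
        self.C -= 1
        if self.C < 0:
            self.S1.release()              # signal(S1)
            self.S2.acquire()              # wait(S2): blocks until a signal
        self.S1.release()                  # signal(S1)

    def signal(self):
        self.S1.acquire()                  # wait(S1)
        self.C += 1
        if self.C <= 0:
            self.S2.release()              # signal(S2): baton to one waiter
        else:
            self.S1.release()              # signal(S1)

# Exercise it as a mutex (initial value 1).
sem, counter = CountingFromBinary(1), 0

def worker():
    global counter
    for _ in range(2000):
        sem.wait()
        counter += 1
        sem.signal()

ts = [threading.Thread(target=worker) for _ in range(4)]
for t in ts: t.start()
for t in ts: t.join()
```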
Classical problems of synchronization:
Semaphores can be used in other synchronization problems besides mutual exclusion.
Below are some of the classical problems depicting flaws of process synchronization in systems
where cooperating processes are present. We will discuss the following problems:
The Readers-Writers Problem:
In this problem, a number of concurrent processes require access to some object (such as a
file). Some processes extract information from the object and are called readers; others change or
insert information in the object and are called writers. The Bernstein conditions state that many
readers may access the object concurrently, but if a writer is accessing the object, no other
processes (readers or writers) may access the object. There are two possible policies for doing
this:
1. First Readers-Writers Problem. Readers have priority over writers; that is, unless a writer
has permission to access the object, any reader requesting access to the object will get it.
Note this may result in a writer waiting indefinitely to access the object.
2. Second Readers-Writers Problem. Writers have priority over readers; that is, when a
writer wishes to access the object, only readers which have already obtained permission
to access the object are allowed to complete their access; any readers that request access
after the writer has done so must wait until the writer is done. Note this may result in
readers waiting indefinitely to access the object.
So there are two concerns: first, enforcing the Bernstein conditions among the processes, and
second, enforcing the appropriate policy of whether the readers or the writers have priority. A
typical example of this occurs with databases, when several processes are accessing data; some
will want only to read the data, others to change it. The database must implement some
mechanism that solves the readers-writers problem.
The semaphores mutex and wrt are initialized to 1, and readcount is initialized to 0. The
semaphore wrt is common to both the reader and writer processes. The mutex semaphore is used
to ensure mutual exclusion when the variable readcount is updated. The readcount variable keeps
track of how many processes are currently reading the object.
The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also
used by the first or last reader that enters or exits the critical section. It is not used by readers that
enter or exit while other readers are in their critical sections.
Code for the writer process:

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (true);

Code for the reader process:

do {
    wait(mutex);
    readcnt++;
    if (readcnt == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcnt--;
    if (readcnt == 0)
        signal(wrt);
    signal(mutex);
} while (true);
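A hypothetical Python rendering of this first readers-writers solution (names as in the text): the writer keeps two copies of a value in step, and each reader checks that invariant while it holds read access; with the semaphores in place, the check never fails.

```python
import threading

mutex = threading.Semaphore(1)   # protects readcnt
wrt = threading.Semaphore(1)     # writers' mutual-exclusion semaphore
readcnt = 0
a = b = 0                        # writers keep a == b at all times
violations = 0                   # incremented if a reader sees a != b

def writer(writes):
    global a, b
    for _ in range(writes):
        wrt.acquire()            # wait(wrt)
        a += 1
        b += 1                   # invariant a == b restored before release
        wrt.release()            # signal(wrt)

def reader(reads):
    global readcnt, violations
    for _ in range(reads):
        mutex.acquire()
        readcnt += 1
        if readcnt == 1:
            wrt.acquire()        # first reader locks out writers
        mutex.release()
        if a != b:               # reading is performed
            violations += 1
        mutex.acquire()
        readcnt -= 1
        if readcnt == 0:
            wrt.release()        # last reader lets writers back in
        mutex.release()

ts = [threading.Thread(target=writer, args=(500,)) for _ in range(2)]
ts += [threading.Thread(target=reader, args=(500,)) for _ in range(3)]
for t in ts: t.start()
for t in ts: t.join()
```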
The Dining-Philosophers Problem:
In this problem, five philosophers sit around a circular table eating spaghetti and thinking.
In front of each philosopher is a plate and to the left of each plate is a fork (so there are five
forks, one to the right and one to the left of each philosopher's plate).
When a philosopher wishes to eat, he picks up the forks to the right and to the left of his
plate. When done, he puts both forks back on the table. The problem is to ensure that no
philosopher will be allowed to starve because he cannot ever pick up both forks.
There are two issues here: first, deadlock (where each philosopher picks up one fork so
none can get the second) must never occur; and second, no set of philosophers should be able to
act to prevent another philosopher from ever eating. A solution must prevent both.
semaphore chopstick[5];

do {
    wait(chopstick[i]);              // left chopstick
    wait(chopstick[(i + 1) % 5]);    // right chopstick
    ...
    eat
    ...
    signal(chopstick[i]);            // left chopstick
    signal(chopstick[(i + 1) % 5]);  // right chopstick
    ...
    think
    ...
} while (TRUE);
The following remedies to the dining-philosophers problem ensure freedom from deadlock:
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
• Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick
and then her right chopstick, whereas an even philosopher picks up her right chopstick and
then her left chopstick.
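The asymmetric remedy is easy to demonstrate. In this hypothetical sketch each chopstick is a lock, odd philosophers take the left chopstick first and even philosophers the right one first, and all five finish their meals without deadlock.

```python
import threading

N, MEALS = 5, 50
chopstick = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # asymmetry: odd philosophers take left first, even take right first
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(MEALS):
        with chopstick[first]:
            with chopstick[second]:
                meals[i] += 1            # eat
        # think

ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in ts: t.start()
for t in ts: t.join()
```

Because not every philosopher requests the chopsticks in the same rotational order, no cycle of hold-and-wait can close, so every thread terminates.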
Critical Regions
• A critical region is a section of code that is always executed under mutual exclusion.
• Critical regions shift the responsibility for enforcing mutual exclusion from the programmer
(where it resides when semaphores are used) to the compiler.
• They consist of two parts:
1. Variables that must be accessed under mutual exclusion.
2. A new language statement that identifies a critical region in which the variables are accessed.
The critical-region high-level language synchronization construct requires that a variable v of
type T, which is to be shared among many processes, be declared as
v: shared T;
The variable v can be accessed only inside a region statement of the following form:
region v when B do S;
While statement S is being executed, no other process can access the variable v. The expression
B is a Boolean expression that governs access to the critical region. When a process tries to
enter the critical region, the Boolean expression B is evaluated. If the expression is true,
statement S is executed. If it is false, the process relinquishes the mutual exclusion and is delayed
until B becomes true and no other process is in the region associated with v. Thus, if the two
statements
region v when (true) S1;
region v when (true) S2;
are executed concurrently in distinct sequential processes, the result will be equivalent to the
sequential execution S1 followed by S2, or S2 followed by S1.
The critical-region construct can be effectively used to solve certain general synchronization
problems; let us code the bounded-buffer scheme.

struct buffer
{
    item pool[n];
    int count, in, out;
};

The producer process inserts a new item nextp into the shared buffer by executing

region buffer when (count < n)
{
    pool[in] = nextp;
    in = (in + 1) % n;
    count++;
}
The consumer process removes an item from the shared buffer and puts it in nextc by executing

region buffer when (count > 0)
{
    nextc = pool[out];
    out = (out + 1) % n;
    count--;
}
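Python has no region statement, but the "when B" guard can be imitated with a condition variable (a hypothetical sketch): the predicate is re-tested in a loop after every wakeup, just as a delayed process re-evaluates B.

```python
import threading

n = 4
pool = [None] * n
count = in_ = out = 0            # "in" is a Python keyword, hence in_
cond = threading.Condition()     # plays the role of the region's guard

def produce(item):
    global count, in_
    with cond:                           # enter the region on buffer
        while not (count < n):           # when (count < n)
            cond.wait()
        pool[in_] = item
        in_ = (in_ + 1) % n
        count += 1
        cond.notify_all()                # B may now hold for a waiter

def consume():
    global count, out
    with cond:                           # enter the region on buffer
        while not (count > 0):           # when (count > 0)
            cond.wait()
        item = pool[out]
        out = (out + 1) % n
        count -= 1
        cond.notify_all()
        return item

consumed = []
prod = threading.Thread(target=lambda: [produce(i) for i in range(20)])
cons = threading.Thread(
    target=lambda: consumed.extend(consume() for _ in range(20)))
prod.start(); cons.start(); prod.join(); cons.join()
```

Since the buffer is FIFO with one producer and one consumer, the items come out in the order they went in.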
The region construct can be implemented by the compiler with a semaphore mutex (initialized to
1), two delay semaphores first_delay and second_delay (initialized to 0), and two counters
first_count and second_count (initialized to 0) that track the processes waiting on each delay
semaphore:

wait(mutex);
while (!B)
{
    first_count++;
    if (second_count > 0)
        signal(second_delay);
    else
        signal(mutex);
    wait(first_delay);
    first_count--;
    second_count++;
    if (first_count > 0)
        signal(first_delay);
    else
        signal(second_delay);
    wait(second_delay);
    second_count--;
}
S;
if (first_count > 0)
    signal(first_delay);
else if (second_count > 0)
    signal(second_delay);
else
    signal(mutex);
Monitors
A monitor is a high-level synchronization construct that allows the safe sharing of an abstract
data type among concurrent processes.
monitor monitor-name
{
    shared variable declarations

    procedure body P1 (...) {
        ...
    }
    procedure body P2 (...) {
        ...
    }
    ...
    procedure body Pn (...) {
        ...
    }
    {
        initialization code
    }
}
The representation of a monitor type cannot be used directly by the various processes.
Thus, a procedure defined within a monitor can access only those variables declared locally
within the monitor and its formal parameters. Similarly, the local variables of a monitor can be
accessed by only the local procedures.
To allow a process to wait within the monitor, a condition variable must be declared, as
condition x, y;
Condition variables can be used only with the operations wait and signal.
The operation
x.wait();
means that the process invoking this operation is suspended until another process invokes
x.signal();
The x.signal operation resumes exactly one suspended process. If no process is suspended, then
the signal operation has no effect.
monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i)
    {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i)
    {
        state[i] = thinking;
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i)
    {
        if ((state[(i + 4) % 5] != eating) && (state[i] == hungry) &&
            (state[(i + 1) % 5] != eating))
        {
            state[i] = eating;
            self[i].signal();
        }
    }

    void init()
    {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
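The dp monitor translates naturally to Python (a hypothetical sketch): a single lock plays the monitor's implicit lock, and one Condition per philosopher plays self[i]. Python conditions have Mesa semantics rather than Hoare's, so the signalled philosopher re-tests state[i] in a while loop instead of the monitor's single if.

```python
import threading

THINKING, HUNGRY, EATING = 0, 1, 2

class DiningMonitor:
    def __init__(self):
        self.state = [THINKING] * 5
        self._lock = threading.Lock()    # the monitor's implicit lock
        self.self_ = [threading.Condition(self._lock) for _ in range(5)]

    def _test(self, i):                  # monitor-private; lock already held
        if (self.state[(i + 4) % 5] != EATING and self.state[i] == HUNGRY
                and self.state[(i + 1) % 5] != EATING):
            self.state[i] = EATING
            self.self_[i].notify()       # self[i].signal()

    def pickup(self, i):
        with self._lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:
                self.self_[i].wait()     # self[i].wait()

    def putdown(self, i):
        with self._lock:
            self.state[i] = THINKING
            self._test((i + 4) % 5)      # maybe wake the left neighbor
            self._test((i + 1) % 5)      # maybe wake the right neighbor

dp = DiningMonitor()
meals = [0] * 5

def philosopher(i):
    for _ in range(20):
        dp.pickup(i)
        meals[i] += 1                    # eat
        dp.putdown(i)

ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in ts: t.start()
for t in ts: t.join()
```

Like the monitor in the notes, this solution is deadlock-free but does not by itself rule out starvation of one philosopher by the others.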
DeadLock:
A process requests resources; if the resources are not available at that time, the process
enters a waiting state. Sometimes, a waiting process is never again able to change state, because
the resources it has requested are held by other waiting processes. This situation is called a
deadlock.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request. The process requests the resource. If the request cannot be granted immediately (for
example, if the resource is being used by another process), then the requesting process must
wait until it can acquire the resource.
2. Use. The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).
3. Release. The process releases the resource.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
2. Hold and wait. A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption. Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for
a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set
of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting
of all resource types in the system.
Given the definition of a resource-allocation graph, it can be shown that, if the graph
contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle,
then a deadlock may exist. If each resource type has exactly one instance, then a cycle implies
that a deadlock has occurred. If the cycle involves only a set of resource types, each of which has
only a single instance, then a deadlock has occurred.
Each process involved in the cycle is deadlocked. In this case, a cycle in the graph is both
a necessary and a sufficient condition for the existence of deadlock. If each resource type has
several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this
case, a cycle in the graph is a necessary but not a sufficient condition for the existence of
deadlock.
Suppose that process P3 requests an instance of resource type R2. Since no resource instance is
currently available, we add a request edge P3 → R2 to the graph.
Deadlock Prevention
i) Mutual Exclusion
The mutual-exclusion condition must hold. That is, at least one resource must be
nonsharable. Sharable resources, in contrast, do not require mutually exclusive access and thus
cannot be involved in a deadlock.
Read-only files are a good example of a sharable resource. If several processes attempt to
open a read-only file at the same time, they can be granted simultaneous access to the file. A
process never needs to wait for a sharable resource.
ii) Hold and Wait
To ensure that the hold-and-wait condition never occurs, we must guarantee that whenever a
process requests a resource, it does not hold any other resources. One protocol that we
can use requires each process to request and be allocated all its resources before it begins
execution. We can implement this provision by requiring that system calls requesting resources
for a process precede all other system calls.
iii) No Preemption
The third necessary condition for deadlocks is that there be no preemption of resources
that have already been allocated. To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all resources the
process is currently holding are preempted. In other words, these resources are implicitly
released.
The preempted resources are added to the list of resources for which the process is
waiting. The process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting.
iv) Circular Wait
The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types and to
require that each process requests resources in an increasing order of enumeration.
To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two resources and to
determine whether one precedes another in our ordering. Formally, we define a one-to-one
function F: R → N, where N is the set of natural numbers. For example, if the set of resource
types R includes tape drives, disk drives, and printers, then the function F might be defined as
follows:
F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12
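The ordering F can be enforced mechanically: acquire every needed resource in increasing F order, regardless of the order the caller lists them. A hypothetical sketch (the ranks mirror the example values of F above):

```python
import threading

rank = {"tape drive": 1, "disk drive": 5, "printer": 12}   # the function F
locks = {name: threading.Lock() for name in rank}

def with_resources(names, work):
    """Acquire the named resources in increasing F order, run work, release."""
    ordered = sorted(names, key=lambda name: rank[name])
    for name in ordered:
        locks[name].acquire()
    try:
        work()
    finally:
        for name in reversed(ordered):
            locks[name].release()

done = []
# Both threads name the resources in opposite orders, but both acquire the
# tape drive before the printer, so no circular wait can form.
t1 = threading.Thread(target=with_resources,
                      args=(["printer", "tape drive"], lambda: done.append(1)))
t2 = threading.Thread(target=with_resources,
                      args=(["tape drive", "printer"], lambda: done.append(2)))
t1.start(); t2.start(); t1.join(); t2.join()
```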
Deadlock Avoidance
Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in
some order and still avoid a deadlock. More formally, a system is in a safe state only if there
exists a safe sequence. A sequence of processes is a safe sequence for the current allocation state
if, for each Pi , the resource requests that Pi can still make can be satisfied by the currently
available resources plus the resources held by all Pj , with j < i. In this situation, if the resources
that Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they
have finished, Pi can obtain all of its needed resources, complete its designated task, return its
allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and
so on. If no such sequence exists, then the system state is said to be unsafe.
A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state.
Not all unsafe states are deadlocks.
we consider a system with twelve magnetic tape drives and three processes: P0, P1, and P2.
Process P0 requires ten tape drives, process P1 may need as many as four tape drives, and
process P2 may need up to nine tape drives. Suppose that, at time t0, process P0 is holding five
tape drives, process P1 is holding two tape drives, and process P2 is holding two tape drives.
(Thus, there are three free tape drives.)
Banker’s Algorithm
The resource-allocation-graph algorithm is not applicable to a resource-allocation system
with multiple instances of each resource type. The deadlock-avoidance algorithm that we describe
next is applicable to such a system but is less efficient than the resource-allocation-graph scheme.
This algorithm is commonly known as the banker’s algorithm.
Several data structures must be maintained to implement the banker’s algorithm. These
data structures encode the state of the resource-allocation system. We need the following data
structures, where n is the number of processes in the system and m is the number of resource
types:
• Available. A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, then k instances of resource type Rj are available.
• Max. An n × m matrix defines the maximum demand of each process. If Max[i][j] equals
k, then process Pi may request at most k instances of resource type Rj .
• Allocation. An n × m matrix defines the number of resources of each type currently
allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k
instances of resource type Rj .
• Need. An n × m matrix indicates the remaining resource need of each process. If
Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to complete its
task. Note that Need[i][j] equals Max[i][j] − Allocation[i][j].
Safety Algorithm
We can now present the algorithm for finding out whether or not a system is in a safe state. This
algorithm can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available
and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i exists, go to
step 4.
3. Work = Work + Allocationi ; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm
Next, we describe the algorithm for determining whether requests can be safely granted.
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process
has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available − Requesti ;
Allocationi = Allocationi + Requesti ;
Needi = Needi − Requesti ;
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti , and
the old resource-allocation state is restored.
An Illustrative Example
To illustrate the use of the banker’s algorithm, consider a system with five processes P0 through
P4 and three resource types A, B, and C. Resource type A has ten instances, resource type B has
five instances, and resource type C has seven instances. Suppose that, at time T0, the following
snapshot of the system has been taken:
         Allocation    Max      Available
         A B C         A B C    A B C
P0       0 1 0         7 5 3    3 3 2
P1       2 0 0         3 2 2
P2       3 0 2         9 0 2
P3       2 1 1         2 2 2
P4       0 0 2         4 3 3
The content of the matrix Need is defined to be Max − Allocation and is as follows:
Need
ABC
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
We claim that the system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0>
satisfies the safety criteria. Suppose now that process P1 requests one additional instance of
resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 ≤ Available—that is, that
(1,0,2) ≤ (3,3,2), which is true. We then pretend that this request has been fulfilled, and we arrive
at the following new state:

         Allocation    Need     Available
         A B C         A B C    A B C
P0       0 1 0         7 4 3    2 3 0
P1       3 0 2         0 2 0
P2       3 0 2         6 0 0
P3       2 1 1         0 1 1
P4       0 0 2         4 3 1
We must determine whether this new system state is safe. To do so, we execute our safety
algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement.
Hence, we can immediately grant the request of process P1.
You should be able to see, however, that when the system is in this state, a request for (3,3,0) by
P4 cannot be granted, since the resources are not available. Furthermore, a request for (0,2,0) by
P0 cannot be granted, even though the resources are available, since the resulting state is unsafe.
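The safety and resource-request algorithms can be coded in a few lines and run on exactly this example (a hypothetical sketch; the matrices are the ones in the tables above):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return (safe?, one safe sequence of process indices)."""
    n, m = len(allocation), len(available)
    work, finish, seq = list(available), [False] * n, []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                  # Pi runs to completion,
                    work[j] += allocation[i][j]     # then returns its resources
                finish[i] = True
                seq.append(i)
                progressed = True
    return all(finish), seq

def grant(available, allocation, need, i, req):
    """Resource-request algorithm: may Requesti = req be granted now?"""
    m = len(req)
    if any(req[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    if any(req[j] > available[j] for j in range(m)):
        return False              # resources not available: Pi must wait
    avail = [available[j] - req[j] for j in range(m)]   # pretend to allocate,
    alloc = [row[:] for row in allocation]              # then check whether
    nd = [row[:] for row in need]                       # the state is safe
    for j in range(m):
        alloc[i][j] += req[j]
        nd[i][j] -= req[j]
    return is_safe(avail, alloc, nd)[0]

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available = [3, 3, 2]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)]
        for i in range(5)]
```

On this state, is_safe finds the safe sequence [1, 3, 4, 0, 2], i.e. <P1, P3, P4, P0, P2>, and grant confirms that Request1 = (1,0,2) may be granted, while in the post-grant state the (3,3,0) request by P4 and the (0,2,0) request by P0 are both refused, matching the text.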
Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur. In this environment, the system may provide:
• An algorithm that examines the state of the system to determine whether a deadlock has
occurred
• An algorithm to recover from the deadlock.
If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. We obtain
this graph from the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges.
More precisely, an edge from Pi to Pj in a wait-for graph implies that process Pi is
waiting for process Pj to release a resource that Pi needs. An edge Pi → Pj exists in a wait-for
graph if and only if the corresponding resource allocation graph contains two edges Pi → Rq and
Rq → Pj for some resource Rq . In Figure 7.9, we present a resource-allocation graph and the
corresponding wait-for graph.
A deadlock exists in the system if and only if the wait-for graph contains a cycle. To
detect deadlocks, the system needs to maintain the wait-for graph and periodically invoke an
algorithm that searches for a cycle in the graph. An algorithm to detect a cycle in a graph
requires an order of n^2 operations, where n is the number of vertices in the graph.
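Collapsing a resource-allocation graph into a wait-for graph and searching it for a cycle takes only a few lines (hypothetical sketch):

```python
def wait_for_edges(holder, requests):
    """Collapse Pi -> Rq -> Pj into the wait-for edge Pi -> Pj.
    holder: resource -> process holding it; requests: (process, resource) pairs."""
    return {(p, holder[r]) for (p, r) in requests if r in holder}

def has_cycle(edges):
    """DFS with three colors: a back edge to a gray vertex means a cycle."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False

    return any(dfs(u) for u in graph if color.get(u, WHITE) == WHITE)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1. The wait-for
# graph has a cycle, so (with single-instance resources) there is a deadlock.
holder = {"R1": "P1", "R2": "P2"}
deadlocked = has_cycle(wait_for_edges(holder, [("P1", "R2"), ("P2", "R1")]))
no_deadlock = has_cycle(wait_for_edges(holder, [("P1", "R2")]))
```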
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available.
For i = 0, 1, ..., n − 1, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both Finish[i] == false and Requesti ≤ Work. If no such i exists, go
to step 4.
3. Work = Work + Allocationi ; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, 0 ≤ i < n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then process Pi is deadlocked.
To illustrate this algorithm, we consider a system with five processes P0 through P4 and three
resource types A, B, and C. Resource type A has seven instances, resource type B has two
instances, and resource type C has six instances. Suppose that, at time T0, we have the following
resource-allocation state:
Allocation Request Available
ABC ABC ABC
P0 0 1 0 0 0 0 0 0 0
P1 2 0 0 2 0 2
P2 3 0 3 0 0 0
P3 2 1 1 1 0 0
P4 0 0 2 0 0 2
We claim that the system is not in a deadlocked state. Indeed, if we execute our
algorithm, we will find that the sequence <P0, P2, P3, P1, P4> results in Finish[i] == true for
all i. Suppose now that process P2 makes one additional request for an instance of type C. The
Request matrix is modified as follows:
Request
ABC
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
We claim that the system is now deadlocked. Although we can reclaim the resources held
by process P0, the number of available resources is not sufficient to fulfill the requests of the
other processes. Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.
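Running the detection algorithm on this example confirms both claims (a hypothetical sketch; the matrices are the ones above, with request2 reflecting P2's extra request for one instance of type C):

```python
def detect(available, allocation, request):
    """Deadlock-detection algorithm: return the list of deadlocked processes."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(x == 0 for x in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):               # optimistically assume Pi
                    work[j] += allocation[i][j]  # finishes and releases all
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]

allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
available = [0, 0, 0]
request1 = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]   # first snapshot
request2 = [[0,0,0], [2,0,2], [0,0,1], [1,0,0], [0,0,2]]   # after P2's request
```

The first snapshot yields no deadlocked processes; the second yields {P1, P2, P3, P4}, exactly as claimed.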
i) Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods,
the system reclaims all resources allocated to the terminated processes.
• Abort all deadlocked processes. This method clearly will break the deadlock cycle, but
at great expense. The deadlocked processes may have computed for a long time, and the results
of these partial computations must be discarded and probably will have to be recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated. This method incurs
considerable overhead, since after each process is aborted, a deadlock-detection algorithm must
be invoked to determine whether any processes are still deadlocked.
ii) Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources
from processes and give them to other processes until the deadlock cycle is broken. Three issues
need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? We must
determine the order of preemption so as to minimize cost.
2. Rollback. If we preempt a resource from a process, what should be done with that
process? Clearly, it cannot continue with its normal execution; it is missing some needed
resource. We must roll back the process to some safe state and restart it from that state. Since, in
general, it is difficult to determine what a safe state is, the simplest solution is a total rollback:
abort the process and then restart it. Although it is more effective to roll back the process only as
far as necessary to break the deadlock, this method requires the system to keep more information
about the state of all running processes.
3. Starvation. How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be preempted from the same process?
In a system where victim selection is based primarily on cost factors, it may happen that
the same process is always picked as a victim. As a result, this process never completes its
designated task, a starvation situation that any practical system must address. Clearly, we must ensure
that a process can be picked as a victim only a (small) finite number of times. The most common
solution is to include the number of rollbacks in the cost factor.