Unit_3_OS
MODULE 3
PROCESS SYNCHRONIZATION
A cooperating process is one that can affect or be affected by other processes
executing in the system. Cooperating processes can either directly share a logical
address space (that is, both code and data) or be allowed to share data only through
files or messages.
Concurrent access to shared data may result in data inconsistency. To maintain data
consistency, various mechanisms are required to ensure the orderly execution of
cooperating processes that share a logical address space.
Example: the consumer process in the bounded-buffer solution, which shares the integer variable counter with the producer:
while (true) {
    while (counter == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
1
Operating Systems
Race Condition
While the producer and consumer routines shown above are correct separately, they may
not function correctly when executed concurrently.
Illustration:
Suppose that the value of the variable counter is currently 5 and that the producer and
consumer processes execute the statements "counter++" and "counter--" concurrently. The
value of the variable counter may then be 4, 5, or 6, but the only correct result is counter == 5,
which is generated correctly if the producer and consumer execute separately.
The statements "counter++" and "counter--" are each implemented as a load, a modify, and a store. One possible interleaving is:
T0: producer executes register1 = counter      {register1 = 5}
T1: producer executes register1 = register1 + 1 {register1 = 6}
T2: consumer executes register2 = counter      {register2 = 5}
T3: consumer executes register2 = register2 - 1 {register2 = 4}
T4: producer executes counter = register1      {counter = 6}
T5: consumer executes counter = register2      {counter = 4}
Note: This interleaving arrives at the incorrect state "counter == 4", indicating that four buffers are full,
when, in fact, five buffers are full. If we reversed the order of the statements at T4 and T5,
we would arrive at the incorrect state "counter == 6".
Definition Race Condition: A situation where several processes access and manipulate the
same data concurrently and the outcome of the execution depends on the particular order in
which the access takes place, is called a Race Condition.
To guard against the race condition, we must ensure that only one process at a time can be
manipulating the variable counter. To make such a guarantee, the processes must be
synchronized in some way.
THE CRITICAL-SECTION PROBLEM
Consider a system of n processes, each having a segment of code, called a critical section, in
which the process may be changing common variables, updating a table, writing a file, and so on.
Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section.
The critical section may be followed by an exit section. The remaining code is the
remainder section.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder
sections can participate in deciding which will enter its critical section next, and this
selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to
enter its critical section and before that request is granted.
PETERSON'S SOLUTION
Peterson's solution is a classic software-based solution to the critical-section problem, restricted
to two processes, Pi and Pj, that share two data items:
turn: The variable turn indicates whose turn it is to enter its critical section. Ex:
if turn == i, then process Pi is allowed to execute in its critical section.
flag: The flag array is used to indicate if a process is ready to enter its critical
section. Ex: if flag[i] is true, this value indicates that Pi is ready to enter its
critical section.
Pi                                      Pj
do {                                    do {
    flag[i] = true;                         flag[j] = true;
    turn = j;                               turn = i;
    while (flag[j] && turn == j);           while (flag[i] && turn == i);
    /* critical section */                  /* critical section */
    flag[i] = false;                        flag[j] = false;
    /* remainder section */                 /* remainder section */
} while (true);                         } while (true);
Figure: The structure of processes Pi and Pj in Peterson's solution
To enter the critical section, process Pi first sets flag [i] to be true and then sets turn
to the value j, thereby asserting that if the other process wishes to enter the critical
section, it can do so.
If both processes try to enter at the same time, turn will be set to both i and j at
roughly the same time. Only one of these assignments will last; the other will
occur but will be overwritten immediately.
The eventual value of turn determines which of the two processes is allowed
to enter its critical section first.
SYNCHRONIZATION HARDWARE
The solution to the critical-section problem requires a simple tool: a lock.
Race conditions are prevented by requiring that critical regions be protected by locks.
That is, a process must acquire a lock before entering a critical section and it releases the
lock when it exits the critical section.
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
Figure: Solution to the critical-section problem using locks.
Pi and Pj each execute the following code, with the shared Boolean variable lock initialized to FALSE:
do {
    while (TestAndSet(&lock))
        ;   // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Figure: Mutual-exclusion implementation with TestAndSet()
Definition:
void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Figure: The definition of the Swap() instruction
The Swap() instruction operates on the contents of two words; it is defined as shown
above and, like TestAndSet(), it is executed atomically. If the machine supports the Swap()
instruction, then mutual exclusion can be provided as follows. A global Boolean variable
lock is declared and initialized to FALSE; in addition, each process has a local Boolean
variable key.
Pi and Pj each execute:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Figure: Mutual-exclusion implementation with Swap()
These algorithms satisfy mutual exclusion but not the bounded-waiting requirement. The
following algorithm using TestAndSet() satisfies all the critical-section requirements. The
processes share the arrays boolean waiting[n] and boolean lock, both initialized to FALSE.
Each process Pi executes:
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);
Figure: Bounded-waiting mutual exclusion with TestAndSet()
SEMAPHORES
A semaphore S is an integer variable that, apart from initialization, is accessed only through
two standard atomic operations: wait() and signal().
Definition of wait(): the process busy-waits while the semaphore value is not positive, then decrements it.
wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}
Definition of signal(): a process has completed its critical section or a shared resource has to be
released.
signal(S) {
    S++;
}
All modifications to the integer value of the semaphore in the wait () and signal() operations
must be executed indivisibly. That is, when one process modifies the semaphore value, no
other process can simultaneously modify that same semaphore value.
Binary semaphore
The value of a binary semaphore can range only between 0 and 1.
Binary semaphores are known as mutex locks, as they are locks that provide mutual
exclusion. Binary semaphores can be used to deal with the critical-section problem for
multiple processes: the n processes share a semaphore, mutex, initialized to 1.
Each process Pi is organized as shown in below figure
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Figure: Mutual-exclusion implementation with semaphores
Counting semaphore
The value of a counting semaphore can range over an unrestricted domain.
Counting semaphores can be used to control access to a given resource consisting of a
finite number of instances.
The semaphore is initialized to the number of resources available. Each process that
wishes to use a resource performs a wait() operation on the semaphore. When a process
releases a resource, it performs a signal()operation.
When the count for the semaphore goes to 0, all resources are being used. After that,
processes that wish to use a resource will block until the count becomes greater than 0.
Implementation
The main disadvantage of this semaphore definition is that it requires busy waiting.
While a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the entry code.
This continual looping is clearly a problem in a real multiprogramming system, where a
single CPU is shared among many processes.
Busy waiting wastes CPU cycles that some other process might be able to use productively.
This type of semaphore is also called a spinlock because the process "spins" while waiting
for the lock.
To overcome busy waiting, the wait() operation can block the process and place it into a
waiting queue associated with the semaphore; a blocked process is restarted by a wakeup()
operation, which changes the process from the waiting state to the ready state. The process
is then placed in the ready queue.
To implement semaphores under this definition, we define a semaphore as a C struct:
typedef struct {
    int value;
    struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes, list. When a process must wait on a
semaphore, it is added to the list of processes. A signal() operation removes one process from
the list of waiting processes and awakens that process.
The wait() semaphore operation can now be defined as:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
The signal() semaphore operation can now be defined as:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlock and Starvation
The implementation of a semaphore with a waiting queue may result in a situation where two
or more processes are waiting indefinitely for an event that can be caused only by one of the
waiting processes. Consider two processes, P0 and P1, each accessing two semaphores, S and
Q, set to the value 1:
P0              P1
wait(S);        wait(Q);
wait(Q);        wait(S);
...             ...
signal(S);      signal(Q);
signal(Q);      signal(S);
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it
must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0
executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.
Another problem related to deadlocks is indefinite blocking, or starvation: a situation
in which processes wait indefinitely within the semaphore.
Indefinite blocking may occur if we remove processes from the list associated with a
semaphore in LIFO (last-in, first-out) order.
Bounded-Buffer Problem
What is the Problem Statement?
There is a buffer of n slots, and each slot is capable of storing one unit of data. There are two
processes running, namely, producer and consumer, which are operating on the buffer.
A producer tries to insert data into an empty slot of the buffer. A consumer tries to remove data from
a filled slot in the buffer.
Challenges:
Producer must not insert data when buffer is full. Consumer must not remove data when buffer is
empty. Producer and consumer should not insert and remove data simultaneously.
Solution to the bounded-buffer problem: 3 semaphores are used:
Binary semaphore mutex, initialized to the value 1
Counting semaphore full, initialized to the value 0
Counting semaphore empty, initialized to the value N
The structure of the producer process: The structure of the consumer process:
do do
{ {
wait(empty); //wait until empty>0 wait(full); //wait until full>0
wait(mutex); // acquire lock wait(mutex); //acquire lock
// add data to buffer // remove data from buffer
signal(mutex); // release lock signal(mutex); //release lock
signal(full); // increment *full* signal(empty); //increment *empty*
} while(TRUE);                          } while(TRUE);
The producer first waits until there is at least one empty slot. Then it decrements the empty
semaphore because there will now be one less empty slot, since the producer is going to insert
data in one of those slots. Then, it acquires the lock on the buffer, so that the consumer cannot
access the buffer until the producer completes its operation. After performing the insert
operation, the lock is released and the value of full is incremented because the producer has
just filled a slot in the buffer.
The consumer waits until there is at least one full slot in the buffer. Then it decrements the
full semaphore because the number of occupied slots will be decreased by one after the
consumer completes its operation. After that, the consumer acquires the lock on the buffer and
completes the removal operation, so that the data from one of the full slots is removed. Then,
the consumer releases the lock. Finally, the empty semaphore is incremented by 1, indicating
that the consumer has just removed data from an occupied slot.
Readers-Writers Problem
The Problem Statement
There is a shared resource which should be accessed by multiple processes. There are two types of
processes in this context: readers and writers.
Readers – Any number of readers can read from the shared resource simultaneously. They
can only read the data set; they do not perform any updates.
Writers – can both read and write.
Challenges
Allow multiple readers to read at the same time. Only one single writer can access the
shared data at the same time.
When a writer is writing data to the resource, no other process can access the resource.
A writer cannot write to the resource if there is a nonzero number of readers
accessing the resource at that time.
Solution using semaphore:
Binary semaphore mutex, initialized to 1.
Binary semaphore w, initialized to 1.
Integer variable readcount, used to maintain the number of readers currently
accessing the resource, initialized to 0.
Writer:
while(TRUE)
{
    wait(w);            // acquire lock
    /* perform the write operation */
    signal(w);          // release lock
}
Reader:
while(TRUE)
{
    wait(mutex);        // acquire lock
    readcount++;
    if (readcount == 1)
        wait(w);        // first reader locks out writers
    signal(mutex);      // release lock
    /* perform the reading operation */
    wait(mutex);        // acquire lock
    readcount--;
    if (readcount == 0)
        signal(w);      // last reader lets writers in
    signal(mutex);      // release lock
}
As seen above in the code for the writer, the writer just waits on the w semaphore until it gets a
chance to write to the resource.
After performing the write operation, it signals w so that the next writer can access the resource.
In the code for the reader, the lock is acquired whenever readcount is updated by a process.
When a reader wants to access the resource, first it increments the readcount value, then accesses the
resource, and then decrements the readcount value.
The semaphore w is used by the first reader which enters the critical section and the last reader which
exits the critical section. The reason for this is that when the first reader enters the critical section, the
writer is blocked from the resource; only other readers can access the resource now.
Similarly, when the last reader exits the critical section, it signals the writer using the w semaphore,
because there are zero readers now and a writer can have the chance to access the resource.
Dining-Philosophers Problem
The dining philosophers’ problem is another classic synchronization problem which is used to
evaluate situations where there is a need of allocating multiple resources to multiple processes.
What is the Problem Statement?
Consider five philosophers who spend their lives thinking and eating. The philosophers share a
circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is
a bowl of rice, and the table is laid with five single chopsticks.
A philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the
chopsticks that are between her and her left and right neighbors). A philosopher may pick up
only one chopstick at a time. When a hungry philosopher has both her chopsticks at the same
time, she eats without releasing the chopsticks. When she is finished eating, she puts down both
chopsticks and starts thinking again.
It is a simple representation of the need to allocate several resources among several processes in
a deadlock-free and starvation-free manner.
Solution: One simple solution is to represent each chopstick with a semaphore. A philosopher
tries to grab a chopstick by executing a wait() operation on that semaphore. She releases her
chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared
data are
semaphore stick[5];
where all the elements of stick (binary semaphores representing the chopsticks) are initialized
to 1. The structure of philosopher i is shown below.
while(TRUE)
{
    wait(stick[i]);
    wait(stick[(i+1) % 5]);
    /* to find the adjacent stick, mod is used because if i=4, the next chopstick is 0 (the dining table is circular) */
    /* eat */
    signal(stick[i]);
    signal(stick[(i+1) % 5]);
    /* think */
}
Although this solution guarantees that no two neighbors are eating simultaneously, it could
create a deadlock: suppose that all five philosophers become hungry simultaneously and each
grabs her left chopstick. All the elements of stick will then be 0, and when each philosopher
tries to grab her right chopstick, she will wait forever.
Monitor
A monitor is a high-level abstraction that provides a convenient and effective mechanism for
process synchronization.
Consider implementing the monitor mechanism using semaphores. For each monitor, a
semaphore mutex (initialized to 1) is provided: a process must execute wait(mutex) before
entering the monitor and signal(mutex) after leaving the monitor.
Since a signaling process must wait until the resumed process either leaves or waits, an
additional semaphore, next, is introduced, initialized to 0. The signaling processes can use
next to suspend themselves. An integer variable next_count is also provided to count the
number of processes suspended on next.
For each condition x, we introduce a semaphore x_sem and an integer variable x_count,
both initialized to 0.
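The figures with the actual code did not survive in this excerpt. In the standard semaphore-based scheme (as in Silberschatz et al.), each external function F is replaced by:

```
wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
```

the operation x.wait() is implemented as:

```
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
```

and x.signal() as:

```
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
```

The signaling process hands the monitor directly to a resumed waiter (via x_sem) and suspends itself on next until the waiter leaves or waits again.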
Conditional wait: a process's wait can take the form x.wait(c), where c is an integer expression
that is evaluated when the wait() operation is executed. The value of c, which is called a
priority number, is then stored with the name of the process that is suspended. When
x.signal() is executed, the process with the smallest priority number is resumed next.
The ResourceAllocator monitor, shown in the above figure, controls
the allocation of a single resource among competing processes.
A process that needs to access the resource in question must observe the
following sequence:
R.acquire(t);
...
access the resource;
...
R.release();
where R is an instance of type ResourceAllocator.
The monitor concept cannot guarantee that the preceding access sequence will be
observed. In particular, the following problems can occur:
A process might access a resource without first gaining access permission to the
resource.
A process might never release a resource once it has been granted access to the
resource.
A process might attempt to release a resource that it never requested.
A process might request the same resource twice (without first releasing the resource).
DEADLOCKS
SYSTEM MODEL
Under the normal mode of operation, a process may utilize a resource in only the following sequence:
1. Request: The process requests the resource. If the request cannot be granted immediately,
then the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
A set of processes is in a deadlocked state when every process in the set is waiting for an event that can
be caused only by another process in the set. The events with which we are mainly concerned here are
resource acquisition and release. The resources may be either physical resources or logical resources.
DEADLOCK CHARACTERIZATION
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for
a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource
held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
Deadlocks can be described in terms of a directed graph called System Resource-Allocation Graph
The graph consists of a set of vertices V and a set of edges E. The set of vertices V is
partitioned into two different types of nodes:
P = {P1, P2, ...,Pn}, the set consisting of all the active processes in the system.
R = {R1, R2, ..., Rm} the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj it signifies that process Pi has
requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi it signifies that an instance of
resource type Rj has been allocated to process Pi.
A directed edge Pi → Rj is called a Request Edge.
A directed edge Rj → Pi is called an Assignment Edge.
Pictorially, each process Pi is represented as a circle and each resource type Rj as a rectangle.
Since resource type Rj may have more than one instance, each instance is represented as a dot
within the rectangle.
A request edge points only to the rectangle Rj, whereas an assignment edge must also designate
one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-
allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed to
an assignment edge. When the process no longer needs access to the resource, it releases the resource;
as a result, the assignment edge is deleted.
The resource-allocation graph shown in Figure depicts the following situation.
Resource instances:
One instance of resource type R1
Two instances of resource type R2
One instance of resource type R3
Three instances of resource type R4
Process states:
Process P1 is holding an instance of resource type R2 and is waiting for an instance
of resource type R1.
Process P2 is holding an instance of R1 and an instance of R2 and is waiting for
an instance of R3.
Process P3 is holding an instance of R3.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph does
contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred. If the cycle involves only a set of resource types, each of which has only a single
instance, then a deadlock has occurred. Each process involved in the cycle is deadlocked.
If each resource type has several instances, then a cycle does not necessarily imply that a
deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient
condition for the existence of deadlock.
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by
process P3. Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition,
process P1 is waiting for process P2 to release resource R1.
Consider the resource-allocation graph as depicted in below Figure. In this example also have a cycle:
P1→R1→P3→R2→P1
However, there is no deadlock. Observe that process P4 may release its instance of resource type
R2. That resource can then be allocated to P3, breaking the cycle.
To ensure that deadlocks never occur, the system can use either deadlock prevention or a
deadlock-avoidance scheme.
Deadlock prevention provides a set of methods for ensuring that at least one of the necessary
conditions cannot hold. These methods prevent deadlocks by constraining how requests for resources
can be made.
Deadlock-avoidance requires that the operating system be given in advance additional information
concerning which resources a process will request and use during its lifetime. With this additional
knowledge, it can decide for each request whether or not the process should wait. To decide whether
the current request can be satisfied or must be delayed, the system must consider the resources currently
available, the resources currently allocated to each process, and the future requests and releases of each
process.
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm, then a
deadlock situation may arise. In this environment, the system can provide an algorithm that examines
the state of the system to determine whether a deadlock has occurred and an algorithm to recover from
the deadlock.
In the absence of algorithms to detect and recover from deadlocks, the system is in a deadlock state yet
has no way of recognizing what has happened. In this case, the undetected deadlock will result in
deterioration of the system's performance, because resources are being held by processes that cannot
run and because more and more processes, as they make requests for resources, will enter a deadlocked
state. Eventually, the system will stop functioning and will need to be restarted manually.
DEADLOCK PREVENTION
Deadlock can be prevented by ensuring that at least one of the four necessary conditions cannot hold.
Mutual Exclusion
The mutual-exclusion condition must be held for non-sharable resources. Sharable resources do
not require mutually exclusive access and thus cannot be involved in a deadlock.
Ex: Read-only files are example of a sharable resource. If several processes attempt to open a
read-only file at the same time, they can be granted simultaneous access to the file. A process
never needs to wait for a sharable resource.
Deadlocks cannot be prevented by denying the mutual-exclusion condition, because some
resources are intrinsically non-sharable.
Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that
whenever a process requests a resource, it does not hold any other resources. One protocol
requires each process to request and be allocated all its resources before it begins execution.
An alternative protocol allows a process to request resources only when it has none.
Ex: Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then
prints the results to a printer. If all resources must be requested at the beginning of the process,
then the process must initially request the DVD drive, disk file, and printer. It will hold the
printer for its entire execution, even though it needs the printer only at the end.
The second method allows the process to request initially only the DVD drive and disk file. It
copies from the DVD drive to the disk and then releases both the DVD drive and the disk file.
The process must then again request the disk file and the printer. After copying the disk file to
the printer, it releases these two resources and terminates.
No Preemption
The third necessary condition for deadlocks is that there be no preemption of resources that
have already been allocated.
To ensure that this condition does not hold, the following protocols can be used:
If a process is holding some resources and requests another resource that cannot be immediately
allocated to it, then all resources the process is currently holding are preempted.
The preempted resources are added to the list of resources for which the process is waiting. The
process will be restarted only when it can regain its old resources, as well as the new ones that it
is requesting.
If a process requests some resources, first check whether they are available. If they are, allocate them.
If they are not available, check whether they are allocated to some other process that is waiting for
additional resources. If so, preempt the desired resources from the waiting process and allocate them to
the requesting process.
If the resources are neither available nor held by a waiting process, the requesting process must wait.
While it is waiting, some of its resources may be preempted, but only if another process requests them.
A process can be restarted only when it is allocated the new resources it is requesting and recovers any
resources that were preempted while it was waiting.
Circular Wait
One way to ensure that this condition never holds is to impose a total ordering of all resource types and
to require that each process requests resources in an increasing order of enumeration.
To illustrate, let R = {R1, R2, ..., Rm} be the set of resource types. We assign a unique integer
number to each resource type, which allows us to compare two resources and to determine
whether one precedes another in our ordering. Formally, it is defined as a one-to-one function
F: R -> N, where N is the set of natural numbers.
Example: if the set of resource types R includes tape drives, disk drives, and printers, then the function
F might be defined as follows:
F (tape drive) = 1
F (disk drive) = 5
F (printer) = 12
Now consider the following protocol to prevent deadlocks. Each process can request resources only in
an increasing order of enumeration. That is, a process can initially request any number of instances of a
resource type Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) >
F(Ri).
DEADLOCK AVOIDANCE
To avoid deadlocks additional information is required about how resources are to be requested.
With the knowledge of the complete sequence of requests and releases for each process, the
system can decide for each request whether or not the process should wait in order to avoid a
possible future deadlock.
Each request requires that in making this decision the system consider the resources currently
available, the resources currently allocated to each process, and the future requests and releases
of each process.
The various algorithms that use this approach differ in the amount and type of information
required. The simplest model requires that each process declare the maximum number of
resources of each type that it may need. Given this a priori information, it is possible to construct
an algorithm that ensures that the system will never enter a deadlocked state. Such an algorithm
defines the deadlock-avoidance approach.
Safe State
Safe state: A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock. A system is in a safe state only if there
exists a safe sequence.
Safe sequence: A sequence of processes <P1, P2, ... , Pn> is a safe sequence for the current
allocation state if, for each Pi, the resource requests that Pi can still make can be satisfied by the
currently available resources plus the resources held by all Pj, with j < i.
In this situation, if the resources that Pi needs are not immediately available, then Pi can wait until all Pj
have finished. When they have finished, Pi can obtain all of its needed resources, complete its
designated task, return its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its
needed resources, and so on. If no such sequence exists, then the system state is said to be unsafe.
A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not all unsafe
states are deadlocks as shown in figure. An unsafe state may lead to a deadlock. As long as the state is
safe, the operating system can avoid unsafe states.
Resource-Allocation-Graph Algorithm
If a resource-allocation system has only one instance of each resource type, then a variant of
the resource-allocation graph is used for deadlock avoidance.
In addition to the request and assignment edges, a new type of edge is introduced, called a
claim edge.
A claim edge Pi ->Rj indicates that process Pi may request resource Rj at some time in the
future. This edge resembles a request edge in direction but is represented in the graph by a
dashed line.
When process Pi requests resource Rj, the claim edge Pi ->Rj is converted to a request edge.
When a resource Rj is released by Pi the assignment edge Rj->Pi is reconverted to a claim
edge Pi->Rj.
Note that the resources must be claimed a priori in the system. That is, before process Pi starts
executing, all its claim edges must already appear in the resource-allocation graph.
We can relax this condition by allowing a claim edge Pi ->Rj to be added to the graph only if all the
edges associated with process Pi are claim edges.
Now suppose that process Pi requests resource Rj. The request can be granted only if converting the
request edge Pi ->Rj to an assignment edge Rj->Pi does not result in the formation of a cycle in the
resource-allocation graph.
There is a need to check for safety by using a cycle-detection algorithm. An algorithm for detecting a
cycle in this graph requires an order of n^2 operations, where n is the number of processes in the system.
If no cycle exists, then the allocation of the resource will leave the system in a safe state.
If a cycle is found, then the allocation will put the system in an unsafe state. In that case,process
Pi will have to wait for its requests to be satisfied.
Banker's Algorithm
The Banker’s algorithm is applicable to a resource allocation system with multiple instances of each
resource type.
When a new process enters the system, it must declare the maximum number of instances of
each resource type that it may need. This number may not exceed the total number of resources
in the system.
When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
To implement the banker's algorithm, the following data structures are used:
Available: A vector of length m indicates the number of available resources of each type. If
Available[j] = k, there are k instances of resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then
process Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi
may need k more instances of Rj to complete its task.
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be
described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available
and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i exists, go
to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require an order of m × n² operations to determine whether a state is safe.
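The four steps above can be sketched in Python. This is a minimal illustration; the variable names and the list-of-lists encoding of the matrices are assumptions, not from the text.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return (is_safe, safe sequence of process indices)."""
    n, m = len(allocation), len(available)
    work = list(available)        # Step 1: Work = Available,
    finish = [False] * n          #         Finish[i] = false for all i
    sequence = []
    while True:
        # Step 2: find i with Finish[i] == false and Need_i <= Work.
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Work = Work + Allocation_i; Finish[i] = true.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                 # no such i exists: go to step 4
    # Step 4: safe if and only if Finish[i] == true for all i.
    return all(finish), sequence

# Two processes sharing 3 free instances of one resource type:
# P0 may still need 1 more instance, P1 may need 2 more.
print(is_safe([3], [[1], [2]], [[1], [2]]))   # (True, [0, 1])
```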
Resource-Request Algorithm
When a request for resources is made by process Pi, let Requesti be its request vector. The
following actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to allocate the requested resources to Pi by modifying the state
as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
Example
Consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource type
A has ten instances, resource type B has five instances, and resource type C has seven instances.
Suppose that, at time T0 the following snapshot of the system has been taken:
The system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0> satisfies the safety
criteria.
Suppose now that process P1 requests one additional instance of resource type A and two instances of
resource type C, so Request1 = (1,0,2). To decide whether this request can be immediately granted,
first check that Request1 ≤ Available; then pretend that the request has been fulfilled and examine
the resulting new state.
Executing the safety algorithm on the new state shows that the sequence <P1, P3, P4, P0, P2>
satisfies the safety requirement, so the request can be granted immediately.
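This example can be replayed in code. Since the snapshot table itself is not reproduced in the text, the values below are the standard textbook snapshot usually paired with this example and should be treated as an assumption; the helper functions are likewise illustrative sketches.

```python
def is_safe(avail, alloc, need):
    """Banker's safety check: return (is_safe, one safe sequence)."""
    work, finish, seq = list(avail), [False] * len(alloc), []
    progress = True
    while progress:
        progress = False
        for i in range(len(alloc)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i], progress = True, True
                seq.append(i)
    return all(finish), seq

def grant(pid, req, avail, alloc, need):
    """Resource-request algorithm: True if req can be granted safely."""
    if any(r > x for r, x in zip(req, need[pid])):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(r > x for r, x in zip(req, avail)):
        return False                          # step 2: P_i must wait
    # Step 3: pretend to allocate, then run the safety algorithm.
    new_avail = [a - r for a, r in zip(avail, req)]
    new_alloc = [row[:] for row in alloc]
    new_need = [row[:] for row in need]
    new_alloc[pid] = [a + r for a, r in zip(alloc[pid], req)]
    new_need[pid] = [x - r for x, r in zip(need[pid], req)]
    return is_safe(new_avail, new_alloc, new_need)[0]

# Assumed snapshot at time T0 (A, B, C have 10, 5, 7 instances):
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available  = [3, 3, 2]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]

print(is_safe(available, allocation, need))  # (True, [1, 3, 4, 0, 2])
print(grant(1, [1, 0, 2], available, allocation, need))  # True
```

The greedy search happens to return the safe sequence <P1, P3, P4, P0, P2>; the sequence <P1, P3, P4, P2, P0> quoted in the text is equally valid, since a safe state may admit several safe orderings.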
DEADLOCK DETECTION
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm, then a
deadlock situation may occur. In this environment, the system may provide:
An algorithm that examines the state of the system to determine whether a deadlock has
occurred.
An algorithm to recover from the deadlock.
If all resources have only a single instance, then we can define a deadlock-detection algorithm
that uses a variant of the resource-allocation graph, called a wait-for graph.
This graph is obtained from the resource-allocation graph by removing the resource nodes and
collapsing the appropriate edges.
An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to release
a resource that Pi needs. An edge Pi → Pj exists in a wait-for graph if and only if the corresponding
resource-allocation graph contains two edges Pi → Rq and Rq → Pj for some resource Rq.
Example: In the figure below, a resource-allocation graph and the corresponding wait-for graph
are presented.
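The collapsing step described above can be sketched as follows. The dictionary encoding and node names are illustrative assumptions: request edges map each process to the resources it waits for, and assignment edges map each resource to the processes holding it.

```python
def wait_for_graph(request_edges, assignment_edges):
    """Collapse a resource-allocation graph into a wait-for graph.
    Pi -> Pj exists iff Pi -> Rq (request) and Rq -> Pj (assignment)
    for some resource Rq."""
    wfg = {p: set() for p in request_edges}
    for p, resources in request_edges.items():
        for r in resources:
            for holder in assignment_edges.get(r, []):
                if holder != p:
                    wfg[p].add(holder)
    return wfg

# P1 waits for R1, which P2 holds; P2 waits for R2, which P1 holds.
requests = {"P1": ["R1"], "P2": ["R2"]}
holders  = {"R1": ["P2"], "R2": ["P1"]}
print(wait_for_graph(requests, holders))
# {'P1': {'P2'}, 'P2': {'P1'}} — the cycle P1 -> P2 -> P1 means deadlock
```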
A deadlock-detection algorithm that is applicable to several instances of a resource type is described
next. The algorithm employs several time-varying data structures that are similar to those used in the
banker's algorithm:
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
Request: An n x m matrix indicates the current request of each process. If Request[i][j]
equals k, then process Pi is requesting k more instances of resource type Rj.
Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available.
For i = 0, 1, ..., n-1, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both Finish[i] == false and Requesti ≤ Work. If no such i exists,
go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if
Finish[i] == false, then Pi is deadlocked.
The algorithm requires an order of O(m × n²) operations to detect whether the system is in a
deadlocked state.
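The detection steps can be sketched in Python. The matrix values in the usage example are the standard textbook snapshot (after P2's extra request for one instance of C, as in the example that follows) and are an assumption, since the tables themselves are not reproduced in the text.

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances.
    Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)                               # step 1
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:                                      # steps 2-3
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    # Step 4: any Finish[i] == false means process P_i is deadlocked.
    return [i for i in range(n) if not finish[i]]

# Assumed snapshot: A, B, C have 7, 2, 6 instances, nothing available,
# and P2 has already made its additional request for one C.
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,1], [1,0,0], [0,0,2]]
available  = [0, 0, 0]
print(detect_deadlock(available, allocation, request))  # [1, 2, 3, 4]
```

Only P0 can finish (its request is zero); reclaiming P0's resources leaves too little for the rest, so P1 through P4 are reported deadlocked, matching the discussion below.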
Consider a system with five processes P0 through P4 and three resource types A, B, and C.
Resource type A has seven instances, resource type B has two instances, and resource type C has six
instances. Suppose that, at time T0, we have the following resource-allocation state:
After executing the algorithm, the sequence <P0, P2, P3, P1, P4> results in Finish[i] = true for all i.
Suppose now that process P2 makes one additional request for an instance of type C. The Request
matrix is modified as follows:
The system is now deadlocked. Although we can reclaim the resources held by process P0, the
number of available resources is not sufficient to fulfill the requests of the other processes.
Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
If deadlocks occur frequently, then the detection algorithm should be invoked frequently. Resources
allocated to deadlocked processes will be idle until the deadlock can be broken.
If the detection algorithm is invoked at arbitrary points in time, there may be many cycles in the
resource graph, and so we would not be able to tell which of the many deadlocked processes “caused”
the deadlock.
When a detection algorithm determines that a deadlock exists, one possibility is to let the system
recover from the deadlock automatically. There are two options for breaking a deadlock: one is simply
to abort one or more processes to break the circular wait; the other is to preempt some resources
from one or more of the deadlocked processes.
Process Termination
To eliminate deadlocks by aborting a process, use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated processes.
1. Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great
expense; the deadlocked processes may have computed for a long time, and the results of these
partial computations must be discarded and probably will have to be recomputed later.
2. Abort one process at a time until the deadlock cycle is eliminated: This method incurs
considerable overhead, since after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.
If the partial termination method is used, then we must determine which deadlocked process (or
processes) should be terminated. Many factors may affect which process is chosen, including the
priority of the process, how long it has computed and how much longer it needs, how many and what
types of resources it has used, how many more resources it needs to complete, how many processes
will need to be terminated, and whether the process is interactive or batch.
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources from
processes and give these resources to other processes until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? As in process
termination, we must determine the order of preemption to minimize cost. Cost factors may
include such parameters as the number of resources a deadlocked process is holding and the
amount of time the process has thus far consumed during its execution.
2. Rollback. If we preempt a resource from a process, what should be done with that process?
Clearly, it cannot continue with its normal execution; it is missing some needed resource. We
must roll back the process to some safe state and restart it from that state. Since it is difficult to
determine what a safe state is, the simplest solution is a total rollback: abort the process and then
restart it.
3. Starvation. How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be preempted from the same process?