
Operating System (Unit 4) Prepared By: Sujesh Manandhar

UNIT 4
Basic Synchronization Principles
Process synchronization is the coordination of processes that share system resources so that concurrent access to shared data is handled in a controlled way, minimizing the chance of data inconsistency. Maintaining data consistency demands a mechanism to ensure the synchronized execution of cooperating processes. It handles the problems that arise when multiple processes execute concurrently.
When multiple processes are executing, a process may be interrupted at any point in its instruction stream and the processing core may be assigned to execute instructions of another process. In this case several processes might be accessing and manipulating the same data resources (from a common area), i.e. different parallel processes may be reading and writing the same data. This can leave the outcome of such a resource inconsistent. Such a case is known as a race condition.
Race Condition: when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, the situation is known as a race condition.
To handle such cases we need process synchronization, which decides the order in which processes should execute, i.e. which process will read first and which will write first from the common data resource. Synchronization is required, for example, in the following case:
One process may be writing data to a certain main memory area while another process is reading the data from that area and sending it to a printer. The reader and writer must be synchronized so that the writer does not overwrite data that has not yet been read.

Critical Section Problem:

A critical section is a piece of code that accesses a shared resource (data structure) that must not be concurrently accessed by more than one thread of execution. Each process has a segment of code, called a critical section, in which the process may be changing common variables, or accessing a shared variable, table, file, etc. If a process at any point of time wants to access a shared variable, table or file, then the process is trying to enter its critical section. When one process is executing in its critical section, no other process is allowed to execute in its critical section, i.e. no two processes execute in their critical sections at the same time.

The general structure of a typical process is shown below; it contains the following sections:

• Entry section: section of code that handles a process's request to enter its critical section.
• Critical section: section of code in which only one process can execute at a time. If the OS grants permission, the process enters this section.
• Exit section: indicates the end of the critical section, releasing the process from the critical section.
• Remainder section: the remaining code after the critical section.
A solution to the critical section problem must satisfy the following requirements:
1. Mutual exclusion: if one process is executing in its critical section then no other process is allowed to enter its critical section. At any time only one process can be inside its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely. Only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section.
3. Bounded waiting: there exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Solution of Critical Section Problem:


1. Peterson's Solution:
Peterson's solution is a classic software-based solution to the critical section problem, restricted to two processes that alternate execution between their critical sections and remainder sections. Let the two processes be Pi and Pj.
The following figure shows the structure of process Pi in Peterson's solution:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;                          /* busy wait */

    /* critical section */

    flag[i] = false;

    /* remainder section */
} while (true);
Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter its critical section, i.e. if turn == i then process Pi is allowed to execute in its critical section. The flag array is used to indicate whether a process is ready to enter its critical section, i.e. if flag[i] == true then process Pi is ready to enter its critical section.
To enter the critical section, process Pi first sets flag[i] to true and then sets turn to j, indicating that if the other process wishes to enter the critical section it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time, but only one of these assignments will last; the other will occur but be overwritten immediately.
To prove that this solution is correct, the following properties must be met:
• Mutual exclusion is preserved.
• The progress requirement is satisfied.
• The bounded waiting requirement is met.
To prove the above properties, let us consider the code for each process:
For process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] == true && turn == j)
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);

For process Pj:
do {
    flag[j] = true;
    turn = i;
    while (flag[i] == true && turn == i)
        ;
    /* critical section */
    flag[j] = false;
    /* remainder section */
} while (true);

Proving mutual exclusion:

For process Pi to enter its critical section, it sets flag[i] to true and turn = j. After that, the condition of the while loop is checked. If both conditions hold, process Pi keeps spinning in the while loop; if either condition is false, it proceeds into the critical section.
Let us suppose that while process Pi is executing in its critical section, process Pj wishes to enter its critical section. In this situation flag[i] == true, because process Pi has not yet completed its critical section; process Pj sets flag[j] to true (because it wants to enter its critical section) and sets turn = i. Now the condition of process Pj's while loop is checked. Here flag[i] is true and turn is i, so process Pj stays in its while loop (not in the critical section).
After process Pi completes its critical section, flag[i] is set to false. Now the while-loop condition in process Pj becomes false, since flag[i] is false. Process Pj, which was spinning inside the while loop, now enters its critical section.
Therefore, if one process is executing in its critical section then no other process can enter its critical section at the same time. Hence, mutual exclusion is preserved.

Proving Progress:
Note that process Pi can be prevented from entering the critical section only if it is stuck in its while loop with the condition flag[j] == true && turn == j. If process Pj is not ready to enter its critical section then flag[j] is false and process Pi enters the critical section. If process Pj has set flag[j] to true and is executing in its while loop, then turn is either i or j. If turn is i then process Pi enters the critical section, and if turn is j then process Pj enters the critical section.
This shows that whenever the critical section is free, a process that has requested entry will get the chance to enter it, and the selection of the entering process is not postponed indefinitely. This proves the progress requirement.
Proving Bounded Waiting:
When process Pi wants to enter the critical section it sets its flag to true and turn to j, and it enters the critical section if the condition flag[j] == true && turn == j does not hold. If process Pj makes a request at this time, it will stay in its while loop. In this situation flag[i] is true, flag[j] is true and turn == i.
If process Pi completes its critical section it sets flag[i] to false. Now we have flag[i] == false, flag[j] == true and turn == i. If process Pi again wants to enter its critical section, it sets flag[i] to true and turn to j. Now we have flag[i] == true, flag[j] == true (Pj is still in its while loop) and turn == j. Therefore process Pi will not get the chance to enter the critical section, because the condition of its while loop holds, and process Pj will enter the critical section instead.
This implies that process Pi's request to re-enter the critical section is denied until process Pj completes its critical section. Hence the bounded waiting requirement is met.
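As a concrete illustration, the sketch below runs Peterson's algorithm with two POSIX threads incrementing a shared counter. The thread function, the counter variable and the iteration count are illustrative choices, not part of these notes; also note that on modern hardware the flags and turn would strictly need atomic types and memory barriers, which this simple sketch only approximates with volatile.

#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

/* Shared data for Peterson's algorithm (two "processes": 0 and 1). */
static volatile bool flag[2] = { false, false };
static volatile int turn = 0;
static long counter = 0;              /* shared resource */

static void *worker(void *arg) {
    int i = *(int *)arg;              /* my index: 0 or 1 */
    int j = 1 - i;                    /* the other process */
    for (int k = 0; k < 100000; k++) {
        flag[i] = true;               /* entry section */
        turn = j;
        while (flag[j] && turn == j)
            ;                         /* busy wait */
        counter++;                    /* critical section */
        flag[i] = false;              /* exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}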

2. Semaphore:
A semaphore is a synchronization tool containing an integer value that allows processes to synchronize by testing and setting this value in a single atomic operation. It is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). It is one kind of tool to prevent race conditions. The wait() operation is used for testing (and decrementing), whereas signal() is used for incrementing. A process that tests the value of a semaphore and sets it to a different value is guaranteed that no other process will interfere with the operation in the middle.
The definition of wait() is:

wait(S) {
    while (S <= 0)
        ;          // busy wait
    S--;
}

The definition of signal() is:

signal(S) {
    S++;
}
When one process modifies the value of a semaphore, no other process can simultaneously modify that same semaphore value. In the case of wait(), the testing of the integer value (S <= 0) as well as its possible modification (S--) must be executed without interruption.

Work of wait() and signal():

The wait() operation (also called semWait() or down()) decrements the value of the semaphore variable S once the condition of its while loop is false, and indicates that the process is interested in entering the critical section. The signal() operation (also called semSignal() or up()) increments the value of the semaphore variable S and indicates that the process has terminated or has come out of its critical section.

Working Procedure of Semaphore:

Let us consider processes P1, P2, …, Pn. Before any process requests the critical section, the value of the semaphore is initialized to one, i.e. S = 1. When process P1 requests to enter the critical section, the wait(S) operation is executed, where the condition S <= 0 is checked. Here S = 1, so the condition does not hold; P1 enters the critical section and the value of S is decremented, so now S = 0.
While P1 is executing in the critical section, if process P2 requests to enter the critical section then the wait(S) operation is executed first, where the condition S <= 0 is checked. At this time S = 0 (P1 is still in the critical section), so the condition holds; P2 does not enter the critical section and is trapped in the while loop.
At some point P1 exits the critical section and executes the signal() operation, where the value of semaphore S is incremented by one; S becomes 1 again. The condition S <= 0 is now false, so process P2, which was trapped in the while loop, comes out of it and enters the critical section.
Therefore, whenever one process is executing in its critical section, no other process is allowed to enter the critical section at the same time. Hence, mutual exclusion is preserved.

The progress condition is automatically satisfied here, because the processes that want to enter the critical section simply manipulate the value of S (increment and decrement) and no particular process order is imposed.
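As a concrete illustration, the sketch below uses a POSIX semaphore (sem_init / sem_wait / sem_post) initialized to 1 so that two threads take turns in a critical section. The worker function and iteration counts are illustrative, not part of these notes.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;             /* semaphore guarding the critical section */
static int shared = 0;      /* shared data */

static void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        sem_wait(&s);       /* wait(): decrement, or block while S == 0 */
        shared++;           /* critical section */
        sem_post(&s);       /* signal(): increment, waking one waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);     /* initialize S = 1 (binary use of a counting semaphore) */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&s);
    return 0;
}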

Types of Semaphore:
I. Counting semaphore:
Counting semaphores are used to control access to a given resource consisting of a finite number of instances. The value of the semaphore is initialized to the number of resources available. Any process that wishes to use a resource performs a wait() operation, thereby decrementing the value of S. When a process releases a resource it performs a signal() operation, i.e. the value of S is incremented. When the value of the semaphore goes to zero, any process that wishes to use a resource is blocked until S becomes greater than 0 again. For example, if there are 10 resource instances then the value of the semaphore is also 10; 10 processes can enter their critical sections, and after that all further processes are blocked. The definition of a counting semaphore is as follows:
wait(semaphore S) {
    S = S - 1;
    if (S < 0) {
        /* put the process in the suspend list, sleep() */
    } else {
        return;
    }
}

signal(S) {
    S = S + 1;
    if (S <= 0) {
        /* select a process from the suspend list, wakeup() */
    }
}

In this case, the semaphore value is set to the number of resources available. Processes requesting to enter the critical section can enter as long as the condition S < 0 is false. When the condition S < 0 becomes true, a process requesting the critical section is blocked and put into the suspend list. Whenever a process wants to exit the critical section, the signal operation is executed, where the value of S is incremented; if the condition S <= 0 is still true, a blocked process from the suspend list is moved to the ready queue using the wakeup() operation, and such a process can now proceed with its request for the critical section.
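A minimal sketch of such a blocking counting semaphore is shown below, assuming a POSIX mutex and condition variable stand in for the suspend list and the sleep()/wakeup() calls. The type and function names are illustrative; unlike the bookkeeping above, the value here never goes negative, but the blocking behaviour is the same.

#include <pthread.h>

/* A blocking counting semaphore built from a mutex and a condition variable. */
typedef struct {
    int value;                  /* current semaphore value S (number of free instances) */
    pthread_mutex_t lock;       /* protects value */
    pthread_cond_t  cond;       /* "suspended" processes sleep here */
} csem_t;

void csem_init(csem_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void csem_wait(csem_t *s) {     /* wait(): block while no instance is free, then take one */
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)
        pthread_cond_wait(&s->cond, &s->lock);   /* sleep() in the suspend list */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s) {   /* signal(): release one instance and wake a waiter */
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);               /* wakeup() one suspended process */
    pthread_mutex_unlock(&s->lock);
}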

II. Binary semaphore:
A binary semaphore is used to control access to a single resource, taking the value of either 0 (indicating the resource is in use) or 1 (indicating the resource is available). It is used to enforce mutual exclusion. The definition of a binary semaphore is:
wait(semaphore S) {
    if (S == 1) {
        S = 0;
    } else {
        /* block the process and place it in the suspend list, sleep() */
    }
}

signal(S) {
    if (suspend list is empty) {
        S = 1;
    } else {
        /* select a process from the suspend list and wakeup() */
    }
}

The value of the semaphore S is 1 at first. Whenever a process makes a request to enter the critical section, the condition S == 1 is checked. If it is true then the value of S is changed to 0 and the process enters the critical section. If the condition does not hold, the process is blocked. Whenever a process wants to exit the critical section, the signal operation is executed: it checks whether the suspend list is empty, and if it is empty the value of S is changed back to 1. If the suspend list is not empty, a blocked process from the suspend list is resumed using wakeup() (brought to the ready queue).
The disadvantage of the simple (spinning) semaphore is busy waiting: when one process is in its critical section, other processes that request the critical section loop indefinitely until that process exits the critical section.

3. Mutex locks:
A mutex lock is a simple tool that protects the critical region and thus prevents race conditions. A process must acquire the lock before entering a critical section and release the lock when it exits from the critical section. The acquire() function acquires the lock and the release() function releases it. The definitions of acquire() and release() are shown below:

acquire() {
    while (!available)
        ;                 /* busy wait */
    available = false;
}

release() {
    available = true;
}
A solution to the critical section problem using a mutex lock is shown below:

do {
    acquire lock

        critical section

    release lock

        remainder section
} while (true);
Whenever process P1 requests to enter the critical section, acquire() is executed first. If available is false then the process spins in the while loop; if available is true, P1 enters the critical section and the value of available is set to false. While P1 is executing in the critical section, if another process P2 requests to enter the critical section, it will spin in the while loop (not in the critical section), because available is false (P1 is still in the critical section).
When P1 exits the critical section, release() is executed, where the value of available is set to true. Now P2, which was spinning in the while loop, exits the loop and enters the critical section, because available is now true (P1 has already exited the critical section). Therefore, when one process is executing in the critical section, no other process is allowed to enter the critical section at the same time. Hence, mutual exclusion is preserved.
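For comparison, a minimal sketch using the POSIX mutex API, which provides the same acquire/release behaviour without a hand-rolled busy-wait loop; the variable and function names are illustrative, not taken from the notes.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int balance = 0;                 /* shared data */

static void *deposit(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        pthread_mutex_lock(&lock);      /* acquire(): blocks if the lock is held */
        balance++;                      /* critical section */
        pthread_mutex_unlock(&lock);    /* release() */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d (expected 200000)\n", balance);
    return 0;
}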

4. Synchronization Hardware:
The critical section problem could be solved in a single-processor environment if interrupts could be prevented from occurring while a shared variable was being modified. In this way no other instructions would run, so no unexpected modification could be made to the shared variable. But this solution is not feasible in a multiprocessor environment, because disabling interrupts on all processors is time consuming and decreases system efficiency.
To solve the critical section problem, many modern computer systems provide special hardware instructions that allow us either to test and modify the content of a word, or to swap the contents of two words, atomically, i.e. as one uninterruptible unit. Here the test_and_set() instruction is used, defined as shown below:

boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = true;

    return rv;
}

If the machine supports the test_and_set() instruction, then mutual exclusion can be implemented by declaring a boolean variable lock, initialized to false. The structure is shown below:
do {
    while (test_and_set(&lock))
        ;                    /* do nothing */

    /* critical section */

    lock = false;

    /* remainder section */
} while (true);

Initially the value of lock is false. The first process requesting to enter the critical section calls test_and_set(&lock), which returns false (so the while loop exits and the process enters the critical section) and atomically sets lock to true. Now, if any other process requests to enter the critical section, it keeps spinning, because test_and_set() returns true as long as the lock is held. When the first process leaves the critical section it sets lock back to false, allowing one waiting process to enter.
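On real hardware this pattern maps onto atomic instructions. A minimal sketch of an equivalent spinlock using C11's atomic_flag (whose atomic_flag_test_and_set() behaves like the test_and_set() above) is shown below; the lock and counter names are illustrative.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* starts cleared ("false") */
static int counter = 0;                       /* shared data */

static void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&lock))
            ;                                 /* spin: returns true while the lock is held */
        counter++;                            /* critical section */
        atomic_flag_clear(&lock);             /* lock = false */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}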

Deadlock:
Deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process. In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state. If the resources it has requested are held by other waiting processes, then the waiting process may never again be able to change state. Such a situation is called deadlock.
For example, when two trains approach each other at a crossing, both come to a full stop and neither can start up again until the other has gone.
Example 2: process 1 is holding resource 1 and waiting for resource 2, which is acquired by process 2, while process 2 is waiting for resource 1, as shown in the figure below:

Figure: Deadlock Condition

A process must request a resource before using it and must release the resource after using it. The number of resources requested may not exceed the total number of resources available in the system. A set of processes is in a deadlock state when every process in the set is waiting for an event (resource acquisition and release) that can be caused only by another process in the set. The resources may be either physical (printer, tape drive, memory space, etc.) or logical (semaphores, mutex locks and files).

Deadlock Characterization:
1. Necessary Conditions:
A deadlock situation can arise only if the following conditions hold simultaneously in a system:
• Mutual exclusion: at least one resource must be held in a non-sharable mode, i.e. only one process at a time can use the resource. If another process requests such a resource, the requesting process must be delayed until the resource has been released.
• Hold and wait: a process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
• No preemption: resources cannot be preempted, i.e. a resource can be released only voluntarily by the process holding it, after that process has completed its task.
• Circular wait: a set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
All four conditions must hold for deadlock to occur.
2. Resource Allocation Graph:
Deadlock can be described more precisely in terms of a directed graph called a system resource allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of node: the set containing all of the active processes, P = {P1, P2, …, Pn}, and the set containing all of the resource types in the system, R = {R1, R2, …, Rm}. Each process is represented as a circle and each resource type as a rectangle. A resource type may contain more than one instance, each represented as a dot inside the rectangle.
If process Pi requests resource Rj then this is shown as Pi → Rj (known as a request edge).
If resource type Rj has been allocated to Pi then this is shown as Rj → Pi (known as an assignment edge; in this case the edge points from one of the instances of the resource).
The following figure shows a resource allocation graph:

Figure: resource allocation graph

In the figure above: P1 is holding an instance of R2 (R2 → P1) and requesting (waiting for) resource R1 (P1 → R1). Process P2 is holding an instance of R2 (R2 → P2) and an instance of R1 (R1 → P2) and requesting resource R3 (P2 → R3). Process P3 is holding an instance of R3 (R3 → P3).

Condition for Deadlock:

If the graph contains a cycle then a deadlock might exist. If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred, and each process in the cycle is deadlocked. In this case a cycle in the graph is both a necessary and a sufficient condition for the existence of deadlock.
If a resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.
The following figure shows a deadlocked state. Here, two minimal cycles exist in the system:
• P1 → R1 → P2 → R3 → P3 → R2 → P1
• P2 → R3 → P3 → R2 → P2

Figure: graph with deadlock


Processes P1, P2 and P3 are deadlocked, because P2 is waiting for the resource R3, which is held by P3. Similarly, P3 is waiting for either P1 or P2 to release R2, and P1 is waiting for P2 to release R1.

Condition for No Deadlock:

If the graph does not contain any cycle, then no process is deadlocked. Even if there is a cycle, there might not be a deadlock if the resources involved have multiple instances. The following figure shows a graph with a cycle but no deadlock.
Here, a cycle exists in the system but there is no deadlock:
• P1 → R1 → P3 → R2 → P1
P4 may release its instance of resource type R2. That instance can then be allocated to P3, breaking the cycle.

Figure: Graph with a cycle but no deadlock.

Methods for Handling Deadlocks:

The deadlock problem can be handled in one of three ways:
• Use a protocol to prevent or avoid deadlock, ensuring that the system will never enter a deadlock state.
• Allow the system to enter a deadlock state, detect it, and recover.
• Ignore the problem and pretend that deadlocks never occur in the system.

1. Deadlock Prevention:
Deadlock prevention provides a set of methods to ensure that at least one of the necessary conditions (mutual exclusion, hold and wait, no preemption and circular wait) cannot hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of deadlock.
i. Mutual exclusion:
This condition must hold for intrinsically non-sharable resources, i.e. at least one resource is non-sharable. Sharable resources do not require mutually exclusive access and thus cannot be involved in a deadlock; a process never needs to wait for a sharable resource.

If we are able to stop a resource from behaving in a mutually exclusive manner, we can prevent deadlock on it. For this, spooling can be used, in which separate storage is associated with the device and the jobs from each process are stored into it. With this mechanism a process does not have to wait for the device and can continue with whatever else it is doing.

ii. Hold and Wait:
To ensure that the hold and wait condition never occurs in the system, it must be guaranteed that whenever a process requests a resource it does not hold any other resources. Two types of protocol are used for this purpose:
• One protocol requires each process to request and be allocated all of its resources before it begins execution. This can be implemented by requiring that the system calls requesting resources for a process precede all its other system calls.
• An alternative protocol allows a process to request resources only when it has none. A process may request resources and use them, but before it can request any additional resources it must release all the resources it is currently holding.
To illustrate these protocols, consider a process that needs to copy data from a DVD drive to a disk and then print the result on a printer.
Under the first protocol the process initially requests all the resources, i.e. the DVD drive, the disk and the printer. It will hold the printer for its entire execution even though it needs the printer only at the end.
Under the second protocol the process initially requests only the DVD drive and the disk, copies the data, and then releases both. The process then requests the disk and the printer to print the file. After the work is done the process releases both resources and terminates.
Disadvantages:
• Resource utilization may be low, since resources may be allocated but remain unused for long periods of time.
• Starvation is possible. A process that needs several resources may have to wait indefinitely, because at least one of the resources it needs is always allocated to some other process.

iii. No preemption:
The third condition for the possibility of deadlock is that there be no preemption of resources that have already been allocated. To ensure that this condition does not hold, the following protocols can be used:
• If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all the resources the process is currently holding are preempted. The process will be restarted only when it can regain its old resources as well as the new one it requested.
• Alternatively, if a process requests some resources, their availability is checked first. If they are not available, a check is made whether they are allocated to some other process that is itself waiting for additional resources. If so, the desired resources are preempted from that waiting process and allocated to the requesting process.

iv. Circular Wait:
One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration. To illustrate this, let R = {R1, R2, …, Rn} be the set of resource types. A unique integer number is assigned to each resource type, which allows us to compare two resources and determine whether one precedes another in the ordering. For example, if the resource set contains three resources (tape drive, disk drive and printer) then the ordering function might be defined as:
F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12
To prevent deadlock, each process can request resources only in an increasing order of enumeration. A process can initially request any instance of a resource type, say Ri. After that, the process can request instances of another resource type Rj if and only if F(Rj) >= F(Ri). For example, if a process wants the tape drive and the disk drive at the same time, it must first request the tape drive and then the disk drive; a sketch of this ordered-acquisition discipline is shown after this subsection.
Alternatively, we can require that a process requesting an instance of resource type Rj must have released any resource Ri such that F(Ri) >= F(Rj). For example, if a process is holding the printer and then requires the disk drive, then since F(printer) > F(disk drive) the process must first release the printer before requesting the disk drive.
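As an illustration of this resource-ordering rule, here is a minimal sketch in which two pthread mutexes stand in for two resource types with F(res_low) < F(res_high); because every thread acquires them in the same increasing order, a circular wait cannot form. The names are illustrative, not from the notes.

#include <pthread.h>

/* Two "resource types", ordered so that F(res_low) < F(res_high). */
static pthread_mutex_t res_low  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res_high = PTHREAD_MUTEX_INITIALIZER;

/* Every thread that needs both resources acquires them in increasing
 * order of enumeration: res_low first, then res_high. Because no thread
 * ever holds res_high while waiting for res_low, circular wait is impossible. */
static void *task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&res_low);    /* lower-numbered resource requested first */
    pthread_mutex_lock(&res_high);   /* higher-numbered resource requested second */
    /* ... use both resources ... */
    pthread_mutex_unlock(&res_high);
    pthread_mutex_unlock(&res_low);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}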

2. Deadlock Avoidance:
An alternative method for avoiding deadlock is to require additional information about how resources are to be requested. Each request requires that the system consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
The various algorithms that use this approach differ in the amount and type of information required. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. Given this information, an algorithm can be constructed that ensures the system will never enter a deadlock state.
A deadlock avoidance algorithm dynamically examines the resource allocation state to ensure that a circular-wait condition can never exist. The resource allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
The following are the deadlock avoidance algorithms:
a) Resource Allocation Graph Algorithm:
In addition to the request and assignment edges of the resource allocation graph, a new type of edge, known as a claim edge, is used here. A claim edge Pi → Rj indicates that process Pi may request resource Rj at some point in time. A claim edge is represented by a dashed line in the graph. When process Pi actually requests resource Rj, the claim edge is converted to a request edge (the dashed line becomes a solid line). When resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj.
When process Pi requests resource Rj, the request is granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not form a cycle. If no cycle exists, the allocation of the resource will leave the system in a safe state. If a cycle is found, the allocation would put the system in an unsafe state, and in that case Pi has to wait for its request to be satisfied.
The following figure shows the resource allocation graph for deadlock avoidance:

Figure: Resource allocation graph for deadlock avoidance.


To illustrate this algorithm, suppose that P2 requests R2. Although R2 is currently free, it cannot be allocated to P2, because doing so would create a cycle in the graph, i.e. the system would be in an unsafe state.

Figure: An unsafe state in a resource allocation graph
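The check behind this rule is simply cycle detection in the directed graph after tentatively adding the assignment edge. A minimal depth-first-search sketch over an adjacency-matrix graph is shown below; the single vertex set (processes and resources numbered together) and the function names are illustrative assumptions, not part of the original notes.

#include <stdbool.h>

#define MAX_NODES 16   /* processes and resources together form the vertex set */

/* edge[u][v] == true means there is a directed edge u -> v
 * (a request edge Pi -> Rj or an assignment edge Rj -> Pi). */
static bool edge[MAX_NODES][MAX_NODES];

/* Depth-first search: returns true if a cycle is reachable from node u. */
static bool dfs(int u, int n, bool on_stack[], bool visited[]) {
    visited[u] = true;
    on_stack[u] = true;
    for (int v = 0; v < n; v++) {
        if (!edge[u][v])
            continue;
        if (on_stack[v])                      /* back edge: cycle found */
            return true;
        if (!visited[v] && dfs(v, n, on_stack, visited))
            return true;
    }
    on_stack[u] = false;
    return false;
}

/* Returns true if the graph with n nodes contains any cycle.
 * The avoidance algorithm would call this after tentatively turning a
 * request edge into an assignment edge, and grant the request only if
 * the result is false. */
bool has_cycle(int n) {
    bool visited[MAX_NODES] = { false };
    bool on_stack[MAX_NODES] = { false };
    for (int u = 0; u < n; u++)
        if (!visited[u] && dfs(u, n, on_stack, visited))
            return true;
    return false;
}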

b) Banker's Algorithm:
This algorithm is applicable to resources with multiple instances and is used to find out whether the system is in a safe state and whether a request can be safely granted. It uses two algorithms: the safety algorithm and the resource-request algorithm.
When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. When a set of resources is requested, the system must determine whether allocating these resources will leave the system in a safe state. If it will, the resources are allocated; if not, the process has to wait until other processes release enough resources.

Data structures:
Here n is the number of processes in the system and m is the number of resource types.
• Available: a vector of length m indicating the number of available resources of each type. If Available[j] equals k, then k instances of resource type Rj are available.
• Max: an n × m matrix defining the maximum demand of each process. If Max[i][j] equals k, then process Pi may request at most k instances of resource type Rj.
• Allocation: an n × m matrix defining the number of resources of each type currently allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of resource type Rj.
• Need: an n × m matrix indicating the remaining resource need of each process. If Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to complete its task.
Mathematically: Need[i][j] = Max[i][j] - Allocation[i][j]

1. Safety algorithm:
This algorithm finds out whether the system is in a safe state. Informally, it can be described as:
• Step 1: Compute the Need matrix: Need = Max - Allocation.
• Step 2: Find an unfinished process whose Need is less than or equal to the currently available resources (need <= available). If such a process exists, assume it runs to completion and releases everything it holds:
new available = available + allocation of that process,
and mark the process as finished.
• Step 3: Repeat Step 2 for the remaining processes. If all processes can be finished in some order, the system is in a safe state and that order is a safe sequence; otherwise the system is in an unsafe state.
2. Resource Request Algorithm:
It determines whether a request can be safely granted.
• Step 1: if request <= need, go to Step 2; otherwise raise an error (the process has exceeded its maximum claim).
• Step 2: if request <= available, go to Step 3; otherwise the process must wait.
• Step 3: pretend that the requested resources are allocated:
Available = available - request
Allocation = allocation + request
Need = need - request
Then run the safety algorithm on this new state; the request is granted only if the resulting state is safe.
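A minimal sketch of the safety algorithm in C is shown below. The number of processes, the number of resource types, and the matrix and function names are illustrative choices; the code simply searches for a safe sequence as described above.

#include <stdbool.h>

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

/* Returns true and fills seq[] with a safe sequence if the state
 * (available, max, allocation) is safe; returns false otherwise. */
bool is_safe(int available[M], int max[N][M], int allocation[N][M], int seq[N]) {
    int need[N][M], work[M];
    bool finished[N] = { false };

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];   /* Need = Max - Allocation */
    for (int j = 0; j < M; j++)
        work[j] = available[j];                          /* Work = Available */

    int count = 0;
    while (count < N) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finished[i])
                continue;
            bool can_run = true;                         /* is Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < M; j++)              /* Work = Work + Allocation_i */
                    work[j] += allocation[i][j];
                finished[i] = true;
                seq[count++] = i;
                progress = true;
            }
        }
        if (!progress)
            return false;                                /* no process can proceed: unsafe */
    }
    return true;                                         /* all processes can finish: safe */
}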

Note: Numerical questions on the Banker's Algorithm were done in class, so refer to your class notes for numerical problems.

Recovery from Deadlock:

When it has been determined that a deadlock exists in the system, several alternatives are available. There are two options for breaking a deadlock. One is simply to abort one or more processes to break the circular wait. The other is to preempt some resources from one or more of the deadlocked processes.
1) Process Termination:
Two methods are used to eliminate the deadlock by aborting a process.
o Abort all deadlocked processes:
This method will break the deadlock cycle, but at great expense. The deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
o Abort one process at a time until the deadlock cycle is eliminated:
After each process is aborted, a deadlock detection algorithm must be invoked to determine whether any processes are still deadlocked, so this method incurs considerable overhead.
Disadvantage:
o If a process was in the middle of updating a file, terminating it will leave that file in an incorrect state. The deadlocked process to be terminated therefore has to be chosen carefully in advance.

2) Resource Preemption:
With this method, some resources are preempted from processes and given to other processes until the deadlock cycle is broken.
To use this method, the following three issues must be addressed:
o Selecting a victim: which resources and which processes are to be preempted? The order of preemption should be determined so as to minimize cost.
o Rollback: if a resource is preempted from a process, what should be done with that process? It cannot continue its normal execution because a resource has been taken from it, so the process must be rolled back to a safe state and restarted from that state.
o Starvation: how can we guarantee that resources will not always be preempted from the same process? We must ensure that a process can be picked as a victim only a finite number of times. The most common solution is to include the number of rollbacks in the cost factor.

Note: For further material, scan the following QR code:
