OS (Unit 4)
UNIT 4
Basic Synchronization Principles
Process synchronization means sharing system resources among processes in such a
way that concurrent access to shared data is coordinated, thereby minimizing the
chance of inconsistent data. Maintaining data consistency demands mechanisms
that ensure synchronized execution of cooperating processes. Synchronization
handles the problems that arise while multiple processes execute.
When multiple processes are executing, a process may be interrupted at any
point in its instruction stream and the processing core may be assigned to execute
instructions of another process. In this case several processes might be accessing
and manipulating the same data resources (from a common area), i.e. different
parallel processes may be reading from and writing to the same data. This can make
the outcome for such a resource inconsistent. Such a case is known as a race
condition.
Race condition: when several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the particular order in
which the accesses take place, the situation is known as a race condition.
To handle such cases we need process synchronization, which decides the order
in which processes should execute, i.e. which process will read first and which
will write first from the common data resource. For example:
One process may be writing data to a certain main-memory area while another
process is reading the data from that area and sending it to a printer. The
reader and the writer must be synchronized so that the writer does not overwrite
data that has not yet been read.
The general structure of a typical process is shown below; it contains three
sections: an entry section, the critical section, and an exit section, followed
by the remainder section. A correct solution must also bound waiting: there must
be a limit on the number of times other processes are allowed to enter their
critical sections after a process has made a request to enter its critical
section and before that request is granted.
do{
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
Critical section
flag[i] = false;
Remainder section
} while (true);
Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section, i.e.
if turn == i then process i is allowed to execute in its critical section. The
flag array is used to indicate whether or not a process is ready to enter its
critical section, i.e. if flag[i] == true then process i is ready to enter its
critical section.
To enter the critical section, process i first sets flag[i] to true and then sets
turn to j, indicating that if the other process wishes to enter the critical
section it can do so. If both processes try to enter at the same time, turn will
be set to both i and j at roughly the same time, but only one of these
assignments will last; the other will occur but be overwritten immediately.
To prove this solution correct, the following properties must be met:
• Mutual exclusion is preserved.
• The progress requirement is satisfied.
• The bounded waiting requirement is met.
To prove the above properties, let us consider the following example:
For process i:
do {
flag[i] = true;
turn = j;
while (flag[j] == true && turn == j);
Critical section
flag[i] = false;
Remainder section;
} while (true);

For process j:
do {
flag[j] = true;
turn = i;
while (flag[i] == true && turn == i);
Critical section
flag[j] = false;
Remainder section;
} while (true);
Proving Progress:
Note that process i can be prevented from entering the critical section only
if it is stuck in its while loop with the condition flag[j] == true &&
turn == j. If process j is not ready to enter its critical section, then
flag[j] is false and process i enters the critical section. If process j has
set flag[j] to true and is executing in its while loop, then turn is either i
or j. If turn is i, process i enters the critical section; if turn is j,
process j enters the critical section.
This shows that whenever the critical section is free, the process that made
the request first gets the chance to enter, and if the other process also
requests, it is not allowed to enter at the same time. This proves the
progress requirement.
Proving Bounded Waiting:
When process i wants to enter the critical section, it sets its flag to true
and turn to j, and enters the critical section if the condition flag[j] == true
&& turn == j does not hold. If process j makes a request at this time, it
enters its while loop. In this situation flag[i] is true, flag[j] is true and
turn == i.
When process i completes its critical section, it sets flag[i] to false. Now
flag[i] is false, flag[j] is true and turn == i. If process i immediately
wants to enter its critical section again, it sets flag[i] to true and turn to
j. Now flag[i] == true, flag[j] == true (process j may still be in its while
loop) and turn == j. Therefore process i will not get a chance to enter the
critical section, because the condition of its while loop holds, and process j
enters the critical section first.
This implies that process i's request to re-enter the critical section is
denied until process j completes its critical section. Hence the bounded
waiting requirement is met.
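The protocol above can be tried in runnable form. The following is a minimal Python sketch rather than the notes' C-like pseudocode; the shared counter and the iteration count are illustrative, and CPython's GIL supplies the sequentially consistent memory behaviour that Peterson's algorithm assumes on real hardware.

```python
import threading

# Shared state for Peterson's algorithm (two processes, i = 0 and i = 1).
flag = [False, False]   # flag[i] == True means process i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the critical section
N = 20_000              # iterations per process (illustrative)

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True          # entry section
        turn = j
        while flag[j] and turn == j:
            pass                # busy wait
        counter += 1            # critical section
        flag[i] = False         # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increment is lost, so mutual exclusion held
```

If the entry/exit sections are removed, increments from the two threads can interleave and some updates may be lost; with them, the final count is always exactly 2 × N.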
2. Semaphore:
A semaphore is a synchronization tool that contains an integer value and
allows processes to synchronize by testing and setting this value in a
single atomic operation. It is an integer variable that, apart from
initialization, is accessed only through two standard atomic operations:
wait() and signal(). It is one kind of tool to prevent race conditions. The
wait() operation is used for testing and decrementing, whereas signal() is
used for incrementing: in the simplest (busy-waiting) definition, wait(S)
loops while S <= 0 and then decrements S, while signal(S) increments S.
Suppose a semaphore S is initialized to 1. When process P1 performs
wait(S), S becomes 0 and P1 enters its critical section. If process P2 now
performs wait(S), it is trapped in the while loop because S <= 0. When P1
leaves the critical section and performs signal(S), S becomes 1; P2, which
was trapped in the while loop, now comes out as the condition S <= 0
becomes false, and P2 enters the critical section.
Therefore, whenever one process is executing in its critical section, no
other process is allowed to enter the critical section at the same time.
Hence, mutual exclusion is preserved.
Types of Semaphore:
I. Counting semaphore:
Counting semaphores are used to control access to a given resource
consisting of a finite number of instances. The value of the semaphore is
initialized to the number of resources available. Any process that wishes to
use a resource performs a wait() operation, thereby decrementing the value
of S. When a process releases a resource, it performs a signal() operation,
i.e. the value of S is incremented. When the value of the semaphore reaches
zero, any process that wishes to use a resource is blocked until S becomes
greater than 0. For example, if there are 10 instances of the resource, the
value of the semaphore is also 10; 10 processes can enter the critical
section, and after that all further processes are blocked. The definition of
a counting semaphore (using a suspend list instead of busy waiting) is as
follows:
wait(semaphore S) {
S = S - 1;
if (S < 0) {
/* put process in suspend list */
sleep();
}
else
return;
}

signal(semaphore S) {
S = S + 1;
if (S <= 0) {
/* select a process from suspend list */
wakeup();
}
}
A process suspended in wait() is later restarted using the wakeup()
operation, and such a process can then enter the critical section.
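As a runnable illustration, the sketch below uses Python's built-in counting semaphore (the pool size, worker count, and bookkeeping counters are illustrative). The semaphore is initialized to 3 resource instances, and the test confirms that no more than 3 workers ever hold an instance at the same time:

```python
import threading
import time

INSTANCES = 3
sem = threading.Semaphore(INSTANCES)  # S initialized to the number of instances

in_use = 0
peak = 0
meta = threading.Lock()  # protects the bookkeeping counters themselves

def worker():
    global in_use, peak
    sem.acquire()            # wait(): decrement S, block if no instance is free
    with meta:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)         # pretend to use the resource
    with meta:
        in_use -= 1
    sem.release()            # signal(): increment S, wake a suspended worker

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```

Ten workers compete, but acquire() admits at most three at a time; the rest are suspended until some worker calls release().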
3. Mutex locks:
A mutex lock is a simple tool that protects the critical region and thus
prevents race conditions. A process must acquire the lock before entering a
critical section and release the lock when it exits from the critical
section. The acquire() function acquires the lock and the release() function
releases the lock. The definitions of acquire() and release() are shown
below:
acquire() {
while (!available)
; /* busy wait */
available = false;
}

release() {
available = true;
}

do {
acquire lock
Critical section
release lock
Remainder section
} while (true);
Whenever process P1 requests to enter the critical section, acquire() is
executed first. If available is false, the process spins in the while loop;
if available is true, P1 enters the critical section and the value of
available is set to false. If another process P2 requests to enter the
critical section while P1 is executing inside it, P2 will spin in the while
loop (not in the critical section), because available is false (P1 is still
in the critical section).
When P1 exits the critical section, release() is executed, where the value
of available is set to true. Now P2, which was spinning in the while loop,
exits the loop and enters the critical section, because available is true
(P1 has already exited the critical section). Therefore, when one process is
executing in the critical section, no other process is allowed to enter the
critical section at the same time. Hence, mutual exclusion is preserved.
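The same acquire()/release() pattern can be demonstrated with a real mutex. This Python sketch (the shared counter and loop sizes are illustrative) uses threading.Lock; because the increments are mutually exclusive, none are lost:

```python
import threading

lock = threading.Lock()   # plays the role of `available` via acquire()/release()
balance = 0               # shared data
ITERS = 50_000

def worker():
    global balance
    for _ in range(ITERS):
        lock.acquire()    # entry: blocks while another thread holds the lock
        balance += 1      # critical section
        lock.release()    # exit: the lock becomes available again

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 100000
```

In idiomatic Python the acquire/release pair is usually written as `with lock:`, which releases the lock even if the critical section raises an exception.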
4. Synchronization Hardware:
The critical-section problem could be solved in a single-processor
environment if interrupts could be prevented from occurring while a shared
variable was being modified. In this way no other instruction would run, so
no unexpected modification could be made to the shared variable. But this
approach is not feasible in a multiprocessor environment, since disabling
interrupts on every processor is time-consuming. Instead, many modern
machines provide special atomic hardware instructions such as
test_and_set(), which executes as one uninterruptible unit and can be
defined as:

boolean test_and_set(boolean *target) {
boolean rv = *target;
*target = true;
return rv;
}
If the machine supports the test_and_set() instruction, mutual exclusion can
be implemented by declaring a boolean variable lock, initialized to false.
The structure is shown below:
do {
while (test_and_set(&lock)); //do nothing
critical section
lock = false;
remainder section;
} while (true);
Initially lock is false. The first process that requests the critical
section calls test_and_set(&lock), which returns false (so the process
enters) and atomically sets lock to true. If any other process now requests
the critical section, it spins in the while loop, because test_and_set(&lock)
keeps returning true. When the first process leaves the critical section, it
sets lock back to false; one waiting process's test_and_set() then returns
false, and that process enters.
Deadlock:
Deadlock is a situation where a set of processes is blocked because each process
is holding a resource and waiting for another resource acquired by some other
process. In a multiprogramming environment, several processes may compete for
a finite number of resources. A process requests resources; if the resources are
not available at that time, the process enters a waiting state. If the resources
it has requested are held by other waiting processes, the waiting process may
never again be able to change state. Such a situation is called deadlock.
For example, when two trains approach each other at a crossing, both come to a
full stop and neither can start up again until the other has gone.
Example 2: process 1 is holding resource 1 and waiting for resource 2, which is
acquired by process 2, while process 2 is waiting for resource 1, as shown in
the figure below:
A process must request a resource before using it and must release the resource
after using it. The number of resources requested may not exceed the total
number of resources available in the system. A set of processes is in a deadlock
state when every process in the set is waiting for an event (resource
acquisition and release) that can be caused only by another process in the set.
The resources may be either physical (printer, tape drive, memory space, etc.)
or logical (semaphores, mutex locks and files).
Deadlock Characterization:
1. Necessary Condition:
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
• Mutual exclusion: at least one resource must be held in a non-sharable
mode; only one process at a time can use it.
• Hold and wait: a process must be holding at least one resource while
waiting to acquire additional resources held by other processes.
• No preemption: resources cannot be preempted; a resource can be released
only voluntarily by the process holding it, after that process has
completed its task.
• Circular wait: a set {P0, P1, ..., Pn} of waiting processes must exist
such that P0 is waiting for a resource held by P1, P1 for a resource held
by P2, ..., and Pn for a resource held by P0.
1. Deadlock Prevention:
Deadlock prevention provides a set of methods to ensure that at least one of
the necessary conditions (mutual exclusion, hold and wait, no preemption and
circular wait) cannot hold. By ensuring that at least one of these
conditions cannot hold, we can prevent the occurrence of deadlock.
i. Mutual exclusion:
This condition must hold for non-sharable resources, i.e. at least one
resource must be non-sharable. Sharable resources do not require mutually
exclusive access and thus cannot be involved in a deadlock; a process never
needs to wait for a sharable resource. In general, however, we cannot
prevent deadlocks by denying this condition, because some resources are
intrinsically non-sharable.
ii. Hold and wait:
To ensure that this condition never holds, we must guarantee that a
process does not hold any resource while requesting another. One protocol
requires each process to request and be allocated all of its resources
before it begins execution; alternatively, a process may request resources
only when it is holding none, releasing everything it holds before each
new request.
iii. No preemption:
The third condition for the possibility of deadlock is that there be no
preemption of resources that have already been allocated. To ensure that
this condition does not hold, the following protocols can be used:
• If a process is holding some resources and requests another resource
that cannot be immediately allocated to it, then all the resources the
process is currently holding are preempted. The process is restarted
only when it can regain its old resources as well as the new ones it
requested.
• Alternatively, if a process requests some resources, we first check
whether they are available. If they are not, we check whether they are
allocated to some other process that is itself waiting for additional
resources. If so, the desired resources are preempted from the waiting
process and allocated to the requesting process.
iv. Circular wait:
To ensure that this condition does not hold, impose a total ordering on
all resource types by assigning each resource type a number through a
function F, and require that each process requests resources only in
increasing order of enumeration. For example, suppose F(disk drive) = 5
and F(printer) = 12. A process holding the disk drive may request the
printer, but if a process holding the printer requires the disk drive,
then since F(printer) > F(disk drive) the process first has to release
the printer before it can request the disk drive.
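In code, this ordering rule amounts to sorting locks by their assigned number before acquiring them. The sketch below is an illustrative Python version (the resource names, F values, and the helper use_resources are invented for the example): two threads that want the same pair of resources in opposite orders still acquire them in the same global order, so no circular wait can form.

```python
import threading

# Illustrative ordering function F over resource types.
F = {"disk_drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def use_resources(needed):
    """Acquire the needed resources in increasing order of F, then release."""
    ordered = sorted(needed, key=lambda name: F[name])
    for name in ordered:
        locks[name].acquire()
    try:
        return ordered            # ... the resources would be used here ...
    finally:
        for name in reversed(ordered):
            locks[name].release()

results = []
t1 = threading.Thread(target=lambda: results.append(use_resources(["printer", "disk_drive"])))
t2 = threading.Thread(target=lambda: results.append(use_resources(["disk_drive", "printer"])))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both threads acquired in the order ['disk_drive', 'printer']
```

Without the sort, one thread grabbing the printer first while the other grabs the disk drive first could deadlock exactly as in the two-trains example.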
2. Deadlock Avoidance:
An alternative method for avoiding deadlock is to require additional
information about how resources are to be requested. Each request requires
that the system consider the resources currently available, the resources
currently allocated to each process, and the future requests and releases of
each process.
The various algorithms that use this approach differ in the amount and type of
information required. The simplest and most useful model requires each
process to declare the maximum number of resources of each type that it may
need. Given this information, an algorithm can be constructed that ensures the
system will never enter a deadlock state.
A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that a circular-wait condition can never exist. The
resource-allocation state is defined by the number of available and allocated
resources and the maximum demands of the processes.
The deadlock-avoidance algorithms are as follows:
a) Resource Allocation graph Algorithm:
In addition to the request and assignment edges of the resource-allocation
graph, a new type of edge known as a claim edge is used here. A claim edge
Pi → Rj indicates that process Pi may request resource Rj at some point in
time. A claim edge is represented by a dashed line in the graph. When
process Pi actually requests resource Rj, the claim edge is converted to a
request edge (the dashed line becomes a solid line). When resource Rj is
released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge
Pi → Rj.
When process Pi requests resource Rj, the request is granted only if
converting the request edge Pi → Rj to an assignment edge Rj → Pi does not
form a cycle. If no cycle exists, the allocation of the resource will leave
the system in a safe state. If a cycle is found, the allocation would put
the system in an unsafe state; in that case Pi has to wait for its request
to be satisfied.
The following figure shows the resource-allocation graph for deadlock
avoidance:
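The cycle check at the heart of this algorithm can be sketched as follows. This is an illustrative Python version (the graph encoding and the names has_cycle and safe_to_grant are invented for the example): it tentatively turns the request edge Pi → Rj into the assignment edge Rj → Pi and tests whether the resulting graph is acyclic.

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as {node: [successor, ...]}."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def dfs(n):
        color[n] = GREY
        for m in edges.get(n, []):
            if color.get(m, WHITE) == GREY:
                return True           # back edge: a cycle exists
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in edges)

def safe_to_grant(edges, process, resource):
    """Grant process -> resource only if the assignment leaves the graph acyclic."""
    trial = {n: list(s) for n, s in edges.items()}
    trial[process] = [n for n in trial.get(process, []) if n != resource]
    trial.setdefault(resource, []).append(process)   # assignment edge R -> P
    return not has_cycle(trial)

# R1 is assigned to P1; P1 requests R2; P2 has an outstanding request on R1.
g1 = {"R1": ["P1"], "P1": ["R2"], "P2": ["R1"], "R2": []}
print(safe_to_grant(g1, "P1", "R2"))  # True: granting R2 forms no cycle

# Here granting R2 to P2 would close the cycle P2 -> R1 -> P1 -> R2 -> P2.
g2 = {"P2": ["R2", "R1"], "R1": ["P1"], "P1": ["R2"], "R2": []}
print(safe_to_grant(g2, "P2", "R2"))  # False: P2 must wait
```

Note that this single-instance technique only works when each resource type has one instance; with multiple instances per type, the Banker's algorithm below is used instead.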
b) Banker’s Algorithm:
Based on the safety algorithm, this algorithm is applicable to resources
with multiple instances and is used to find out whether the system is in a
safe state and whether a request can be safely granted. It uses two
algorithms: the safety algorithm and the resource-request algorithm.
When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need. When a set of resources is
requested, the system must determine whether the allocation of these
resources will leave the system in a safe state. If it will, the resources
are allocated; if not, the process has to wait until other processes release
enough resources.
Data structure:
Here n is the number of processes in the system and m is the number of
resource types.
• Available: a vector of length m indicating the number of available
resources of each type. If Available[j] equals k, then k instances of
resource type Rj are available.
• Max: an n × m matrix defining the maximum demand of each process. If
Max[i][j] equals k, then process Pi may request at most k instances of
resource type Rj.
• Allocation: an n × m matrix defining the number of resources of each
type currently allocated to each process. If Allocation[i][j] equals k,
then process Pi is currently allocated k instances of resource type Rj.
• Need: an n × m matrix indicating the remaining resource need of each
process. If Need[i][j] equals k, then process Pi may need k more
instances of resource type Rj to complete its task.
Mathematically: Need[i][j] = Max[i][j] - Allocation[i][j]
1. Safety algorithm:
This algorithm finds out whether the system is in a safe state. It can be
described as:
• Step 1: Compute Need = Max - Allocation, let Work = Available, and
mark every process unfinished.
• Step 2: Find an unfinished process Pi with Need_i <= Work. If one
exists, Pi can run to completion; mark it finished, set
Work = Work + Allocation_i (its resources are returned), and repeat
Step 2.
• Step 3: If every process can be marked finished in this way, the
system is in a safe state; otherwise it is unsafe.
2. Resource Request Algorithm:
It determines whether a request can be safely granted.
• Step 1: If Request <= Need, go to Step 2; otherwise raise an error,
since the process has exceeded its maximum claim.
• Step 2: If Request <= Available, go to Step 3; otherwise the process
must wait, since the resources are not available.
• Step 3: Pretend to allocate the requested resources:
Available = Available - Request
Allocation = Allocation + Request
Need = Need - Request
Then run the safety algorithm on this new state. If the state is safe,
the request is granted; if not, the allocation is rolled back and the
process must wait.
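The two algorithms can be combined into a short runnable sketch. The following Python version is illustrative (the function names are invented, and the figures are the classic 5-process, 3-resource-type exercise): it implements the safety check and the resource-request check exactly as in the steps above.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: can every process finish in some order?"""
    work = list(available)
    finish = [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion and then returns its allocation.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                break
        else:
            return all(finish)   # no runnable process was found

def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm: should this request be granted?"""
    if any(request[j] > need[i][j] for j in range(len(request))):
        raise ValueError("process exceeded its declared maximum claim")
    if any(request[j] > available[j] for j in range(len(request))):
        return False             # resources not available: the process waits
    # Pretend to allocate, then check that the resulting state is safe.
    avail2 = [available[j] - request[j] for j in range(len(request))]
    alloc2 = [row[:] for row in allocation]
    need2 = [row[:] for row in need]
    for j in range(len(request)):
        alloc2[i][j] += request[j]
        need2[i][j] -= request[j]
    return is_safe(avail2, alloc2, need2)

# Example: 5 processes, 3 resource types (A, B, C).
available  = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

print(is_safe(available, allocation, need))                          # True: safe state
print(request_resources(1, [1, 0, 2], available, allocation, need))  # True: grant
print(request_resources(4, [3, 3, 0], available, allocation, need))  # False: must wait
```

In the example, the system starts in a safe state; P1's request (1, 0, 2) leaves it safe and is granted, while P4's request (3, 3, 0) would leave no safe sequence, so P4 must wait.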
2) Resource preemption:
Using this method, some resources are preempted from processes and given to
other processes until the deadlock cycle is broken. To use this method, the
following three issues should be addressed:
o Selecting a victim: which resources and which processes are to be
preempted? The order of preemption should be determined so as to
minimize the cost.
o Rollback: if a resource is preempted from a process, what should be
done with that process? It cannot continue its normal execution,
because a resource it needs has been taken from it. The process should
therefore be rolled back to some safe state and restarted from that
state.
o Starvation: how do we guarantee that resources will not always be
preempted from the same process? We must ensure that a process can be
picked as a victim only a finite number of times. The most common
solution is to include the number of rollbacks in the cost factor.