UNIT-3 Process Synchronization and Deadlock
Process Synchronization and Deadlock
--AVANISH GOSWAMI
Critical Section Problem
• A critical section is the part of a program that accesses shared resources. The shared resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or an I/O device.
• The critical section cannot be executed by more than one process at the same time, so the operating system has to decide which processes are allowed to enter the critical section and which must wait.
• The critical section problem is about designing a set of protocols that ensure a race condition among the processes can never arise.
• In order to synchronize the cooperating processes, our main task is to solve the critical section problem. We need to provide a solution in such a way that the following conditions are satisfied.
Mutual Exclusion
• Only one process can be inside its critical section at any time; any other process that wants to enter must wait until the section is free.
Progress
• If no process is inside the critical section, a process that wishes to enter must eventually be allowed to do so; the decision cannot be postponed indefinitely.
Bounded Waiting
• There must be a bound on how many times other processes may enter their critical sections after a process has requested entry and before that request is granted.
Architectural Neutrality
• Our mechanism must be architecturally neutral. It means that if our solution works correctly on one architecture, then it should also run on the other ones as well.
Two general approaches are used to handle critical sections in operating systems: preemptive kernels and non-preemptive kernels.
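• A minimal sketch of the entry section / critical section / exit section structure, assuming POSIX threads are available (the shared counter, the four worker threads, and the loop count are illustrative, not part of the original notes):

#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* entry section */
        shared_counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);           /* exit section */
    }                                          /* remainder section */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", shared_counter); /* always 400000 with the lock in place */
    return 0;
}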
Semaphore
• Semaphores are integer variables that are used to solve the critical section problem by
using two atomic operations wait and signal that are used for process synchronization.
• The definitions of wait and signal are as follows −
Wait
• The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process waits until S becomes positive before decrementing it.
wait(S)
{
    while (S <= 0)
        ;      /* wait (here, busy-wait) until S becomes positive */
    S--;       /* claim one unit of the resource */
}
Signal
• The signal operation increments the value of its argument S.
signal(S)
{
    S++;       /* release one unit of the resource */
}
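• The pseudocode above is not atomic as written; a real implementation must perform the test and the decrement as one indivisible step, and should block rather than busy-wait. A rough sketch of such an implementation, assuming POSIX threads (the type and function names here are illustrative, not a standard API):

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore_t;

void sem_init_custom(semaphore_t *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void wait_op(semaphore_t *s)                  /* P / wait */
{
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                     /* block instead of busy-waiting */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void signal_op(semaphore_t *s)                /* V / signal */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);         /* wake one waiting process */
    pthread_mutex_unlock(&s->lock);
}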
Types of Semaphores
• There are two main types of semaphores, namely counting semaphores and binary semaphores. Details about these are given as follows:
Counting Semaphores:
• These are integer-valued semaphores with an unrestricted value domain.
• These semaphores are used to coordinate resource access, where the semaphore count is the number of available resources.
• If resources are added, the semaphore count is automatically incremented, and if resources are removed, the count is decremented.
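• A brief sketch of a counting semaphore guarding a pool of identical resources, assuming POSIX unnamed semaphores; the pool size of 3 and the five client threads are illustrative values:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 3                  /* number of identical resource instances */

static sem_t pool;                   /* counting semaphore, initialised to POOL_SIZE */

static void *client(void *arg)
{
    sem_wait(&pool);                 /* acquire one instance (count--) */
    printf("thread %ld using a resource\n", (long)arg);
    /* ... use the resource ... */
    sem_post(&pool);                 /* release it (count++) */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&pool, 0, POOL_SIZE);   /* second argument 0 = shared between threads */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, client, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}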
Binary Semaphores:
• Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1.
• The wait operation succeeds only when the semaphore value is 1 (and sets it to 0), and the signal operation sets the value back to 1 when it is 0.
• It is sometimes easier to implement binary semaphores than counting semaphores.
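• A short sketch of a binary semaphore used for mutual exclusion, again assuming a POSIX unnamed semaphore initialised to 1 as a stand-in; the shared variable and function names are illustrative:

#include <semaphore.h>

static sem_t mutex;                 /* binary semaphore: value is only ever 0 or 1 */
static int shared_data;

void setup(void)
{
    sem_init(&mutex, 0, 1);         /* start at 1: the critical section is free */
}

void update_shared(int v)
{
    sem_wait(&mutex);               /* value 1 -> 0: enter the critical section */
    shared_data = v;                /* at most one thread executes this at a time */
    sem_post(&mutex);               /* value 0 -> 1: leave the critical section */
}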
Advantages of Semaphores
• Semaphores allow only one process at a time into the critical section.
• They enforce the mutual exclusion principle strictly and are more efficient than some other methods of synchronization.
• Blocking semaphores avoid the resource wastage of busy waiting: processor time is not wasted repeatedly checking whether a process may be allowed into the critical section.
• Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.
Disadvantages of Semaphores
• Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
• Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
• Semaphores may lead to priority inversion, where a high-priority process is forced to wait because a lower-priority process currently holds the semaphore it needs.
Counting Semaphore vs. Binary Semaphore
• Counting semaphore: no mutual exclusion by itself. Binary semaphore: provides mutual exclusion.
• Counting semaphore: may hold any integer value. Binary semaphore: value only 0 and 1.
• Counting semaphore: more than one slot (several identical resource instances). Binary semaphore: only one slot.
• Counting semaphore: coordinates a set of processes sharing a resource pool. Binary semaphore: acts as a mutual exclusion mechanism.
Deadlock
• A deadlock happens in an operating system when two or more processes need some resource to complete their execution that is held by another process.
• Every process needs some resources to complete its execution. However, resources are granted in a sequential order.
Circular Wait
• A deadlock can arise only when four conditions hold at the same time: mutual exclusion, hold and wait, no pre-emption, and circular wait.
• Circular wait means all the processes are waiting for resources in a cyclic manner, so that the last process is waiting for a resource held by the first process.
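• As an illustration, the following deliberately broken sketch (assuming POSIX threads; the two mutexes stand in for two resources) can deadlock, because the two threads acquire the same locks in opposite order and create a circular wait:

#include <pthread.h>

static pthread_mutex_t resource_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource_b = PTHREAD_MUTEX_INITIALIZER;

static void *task1(void *arg)
{
    pthread_mutex_lock(&resource_a);   /* holds A ... */
    pthread_mutex_lock(&resource_b);   /* ... and waits for B */
    pthread_mutex_unlock(&resource_b);
    pthread_mutex_unlock(&resource_a);
    return NULL;
}

static void *task2(void *arg)
{
    pthread_mutex_lock(&resource_b);   /* holds B ... */
    pthread_mutex_lock(&resource_a);   /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&resource_a);
    pthread_mutex_unlock(&resource_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task1, NULL);
    pthread_create(&t2, NULL, task2, NULL);
    pthread_join(t1, NULL);            /* may never return if the deadlock occurs */
    pthread_join(t2, NULL);
    return 0;
}

• If both threads acquired the locks in the same order, the cycle could not form; this is exactly the idea used later under deadlock prevention.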
Methods for Handling Deadlocks
Generally speaking, there are three ways of handling deadlocks:
• Deadlock prevention or avoidance – Do not allow the system to get
into a deadlocked state.
• Deadlock detection and recovery – Abort a process or pre-empt
some resources when deadlocks are detected.
• Ignore the problem altogether – If deadlocks only occur once a year
or so, it may be better to simply let them happen and reboot as
necessary than to incur the constant overhead and system
performance penalties associated with deadlock prevention or
detection. This is the approach that both Windows and UNIX take.
Deadlock Detection
A deadlock can be detected by a resource scheduler as it keeps track of
all the resources that are allocated to different processes. After a
deadlock is detected, it can be resolved using the following methods:
• All the processes that are involved in the deadlock are terminated.
This is not a good approach as all the progress made by the processes
is destroyed.
• Resources can be pre-empted from some processes and given to others until the deadlock is resolved.
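• A small sketch of how a detector can look for deadlock by searching for a cycle in a wait-for graph; the graph below is hard-coded and purely illustrative, and an edge i → j means process i is waiting for a resource held by process j:

#include <stdio.h>
#include <stdbool.h>

#define NPROC 4

/* wait_for[i][j] == true means process i waits for a resource held by j */
static bool wait_for[NPROC][NPROC] = {
    /* P0 waits for P1, P1 for P2, P2 for P0: a cycle, hence a deadlock */
    [0][1] = true, [1][2] = true, [2][0] = true,
};

static bool on_path[NPROC], visited[NPROC];

static bool has_cycle_from(int p)
{
    visited[p] = on_path[p] = true;
    for (int q = 0; q < NPROC; q++) {
        if (!wait_for[p][q]) continue;
        if (on_path[q]) return true;              /* back edge: cycle found */
        if (!visited[q] && has_cycle_from(q)) return true;
    }
    on_path[p] = false;
    return false;
}

int main(void)
{
    for (int p = 0; p < NPROC; p++)
        if (!visited[p] && has_cycle_from(p)) {
            printf("deadlock detected\n");
            return 0;
        }
    printf("no deadlock\n");
    return 0;
}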
Deadlock Prevention
• Deadlock prevention algorithms ensure that at least one of the
necessary conditions (Mutual exclusion, hold and wait, no pre-
emption, and circular wait) does not hold true.
• We do this by attacking each of the four conditions individually so that at least one of them cannot hold. However, most prevention algorithms have poor resource utilization and hence result in reduced throughput.
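• One common way to break the circular-wait condition is to impose a global ordering on resources and always acquire them in that order. A brief sketch reworking the earlier deadlock-prone example (the ordering rule and lock names are illustrative):

#include <pthread.h>

/* Global rule for this sketch: resource_a is always acquired before resource_b. */
static pthread_mutex_t resource_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource_b = PTHREAD_MUTEX_INITIALIZER;

static void use_both_resources(void)
{
    pthread_mutex_lock(&resource_a);   /* every thread locks A first ... */
    pthread_mutex_lock(&resource_b);   /* ... then B, so no cycle can form */
    /* ... work with both resources ... */
    pthread_mutex_unlock(&resource_b);
    pthread_mutex_unlock(&resource_a);
}

static void *task(void *arg)
{
    use_both_resources();              /* both threads follow the same order */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}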
Deadlock Avoidance
• In deadlock avoidance, the operating system is given advance information about the resources each process may request, and it grants a request only if doing so leaves the system in a safe state; the Banker's algorithm is the classic example of this approach.
Classical Problems of Synchronization
• There are three classical problems used to study process synchronization: the Bounded Buffer (Producer-Consumer) problem, the Dining Philosophers problem, and the Readers-Writers problem.
The Bounded Buffer (Producer-Consumer) Problem
• This problem is generalised in terms of the Producer Consumer problem, where a finite buffer pool is used to
exchange messages between producer and consumer processes.
• Solution to this problem is, creating two counting semaphores "full" and "empty" to keep track of the current
number of full and empty buffers respectively.
• In this problem, the producer produces items and the consumer consumes them, but both use one buffer slot (container) at a time.
• The main complexity of this problem is that we must maintain counts of both the empty and the full containers that are available, as the sketch below illustrates.
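• A compact sketch of the bounded buffer solution with the "empty", "full", and mutex semaphores, assuming POSIX semaphores and pthreads; the buffer size and integer items are illustrative:

#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 8

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;          /* next free slot / next filled slot */

static sem_t empty_slots;            /* counts empty buffer slots, starts at BUFFER_SIZE */
static sem_t full_slots;             /* counts filled buffer slots, starts at 0 */
static sem_t mutex;                  /* binary semaphore protecting the buffer */

void buffers_init(void)
{
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
}

void produce(int item)
{
    sem_wait(&empty_slots);          /* wait for an empty slot */
    sem_wait(&mutex);
    buffer[in] = item;               /* critical section: add the item */
    in = (in + 1) % BUFFER_SIZE;
    sem_post(&mutex);
    sem_post(&full_slots);           /* announce a newly filled slot */
}

int consume(void)
{
    sem_wait(&full_slots);           /* wait for a filled slot */
    sem_wait(&mutex);
    int item = buffer[out];          /* critical section: remove the item */
    out = (out + 1) % BUFFER_SIZE;
    sem_post(&mutex);
    sem_post(&empty_slots);          /* announce a newly freed slot */
    return item;
}

• Producer and consumer threads would simply call produce() and consume() in a loop after buffers_init() has run.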
Dining Philosophers Problem
• The dining philosopher's problem involves the allocation of limited
resources to a group of processes in a deadlock-free and starvation-
free manner.
• There are five philosophers sitting around a table, with five chopsticks/forks kept beside them and a bowl of rice in the centre. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks back in their original places.
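• A short sketch of the philosopher loop using one pthread mutex per chopstick; the asymmetric pick-up order used here is one well-known way to avoid deadlock, not necessarily the one intended in the original slides:

#include <pthread.h>

#define N 5

static pthread_mutex_t chopstick[N];   /* one mutex per chopstick */

static void *philosopher(void *arg)
{
    long i = (long)arg;
    long first = i, second = (i + 1) % N;

    /* Break the potential cycle: the last philosopher picks up the
       right chopstick first, everyone else the left one first. */
    if (i == N - 1) { first = (i + 1) % N; second = i; }

    for (;;) {
        /* think */
        pthread_mutex_lock(&chopstick[first]);
        pthread_mutex_lock(&chopstick[second]);
        /* eat */
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&chopstick[i], NULL);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);      /* philosophers run forever in this sketch */
    return 0;
}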
The Readers Writers Problem
• In this problem there are some processes (called readers) that only read the shared data and never change it, and there are other processes (called writers) that may change the data in addition to reading it, or instead of reading it.
• The main complexity of this problem arises from allowing more than one reader to access the data at the same time while still giving writers exclusive access.
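• A minimal sketch of the classic "first readers-writers" solution, which favours readers, assuming POSIX semaphores:

#include <semaphore.h>

static sem_t rw_mutex;        /* exclusive access for writers (taken by the first reader) */
static sem_t count_mutex;     /* protects read_count */
static int read_count = 0;    /* number of readers currently reading */

void rw_init(void)
{
    sem_init(&rw_mutex, 0, 1);
    sem_init(&count_mutex, 0, 1);
}

void reader(void)
{
    sem_wait(&count_mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex);      /* first reader locks out writers */
    sem_post(&count_mutex);

    /* ... read the shared data (many readers may be here at once) ... */

    sem_wait(&count_mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex);      /* last reader lets writers in again */
    sem_post(&count_mutex);
}

void writer(void)
{
    sem_wait(&rw_mutex);          /* writers get exclusive access */
    /* ... modify the shared data ... */
    sem_post(&rw_mutex);
}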
Thank you