OS U-III Process Coordination
Operating System
by
Prof. A.A. Salunke
M.Tech. Computer Science
Process Coordination
08 Hrs.
1. Synchronization:
Principles of Concurrency
Requirements for Mutual Exclusion
Mutual Exclusion: Hardware Support, Operating System Support (Semaphores and Mutex), Programming
Language Support (Monitors).
2. Classical synchronization problems:
Readers/Writers Problem,
Producer and Consumer problem,
Inter-process communication (Pipes, shared memory: system V)
3. Deadlock:
Deadlock Characterization,
Methods for Handling Deadlocks,
Deadlock Prevention,
Deadlock Avoidance,
Deadlock Detection,
Recovery from Deadlock
Principles of Concurrency
Concurrency means that multiple processes execute, or appear to execute, at the same time.
Such processes may share resources, for example by accessing the same file.
The amount of time it takes for a process to execute cannot easily be predicted, so the relative speed of concurrent processes cannot be relied upon.
Non-atomic operations
Operations that are non-atomic, and can therefore be interrupted and interleaved by multiple processes, can cause
problems. (An atomic operation is one that runs to completion without interruption; an operation that can be interrupted part-way
is non-atomic.)
Race conditions
A situation where several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the particular order
in which the accesses take place, is called a race condition.
Blocking
A process that is blocked is one that is waiting for some event, such as a resource becoming available.
A process could be blocked for a long period of time waiting for input from a terminal.
If the process is required to periodically update some data, such long blocking would be very undesirable.
Issues of Concurrency :
Starvation
A problem encountered in concurrent computing where a process is perpetually
denied necessary resources to process its work.
Starvation may be caused by errors in a scheduling or mutual exclusion algorithm,
but can also be caused by resource leaks
Deadlock
In concurrent computing, a deadlock is a state in which each member of a group
waits for another member, including itself, to take action, such as sending a
message or more commonly releasing a lock.
Deadlocks are a common problem in multiprocessing systems, parallel computing,
and distributed systems, where software and hardware locks are used to arbitrate
shared resources and implement process synchronization
Requirements for Mutual Exclusion
Mutual exclusion is the requirement that a process cannot enter its critical section
while another concurrent process is currently present or executing in its own critical
section,
i.e., only one process is allowed to execute in the critical section at any given
instant of time.
REQUIREMENTS FOR MUTUAL EXCLUSION:
1) Only one process at a time is allowed in the critical section for a resource.
2) A process that halts in its non-critical section must do so without interfering with
other processes.
3) A process must not be delayed access to a critical section when there is no other
process using it.
4) A process remains inside its critical section for a finite time only.
Critical Section Problem
The part of the process where the code for accessing the shared resource is written is called the critical section.
We know that there are multiple processes in the system and these processes access
shared resources.
When these processes access the shared resources simultaneously then the results
obtained are inconsistent.
To avoid such inconsistency in the result the processes must cooperate while accessing
the shared resources.
Therefore, the code to access shared resources is written under the critical section of the
process.
Let us understand the inconsistency in the result while accessing the shared resource
simultaneously with the help of an example.
Suppose there are two processes, P0 and P1.
Both share a common variable A = 0.
While accessing A, each process increments the value of A by 1.
First case
Process P0 reads the value of A=0, increments it by 1 (A=1) and writes the incremented
value in A.
Now, process P1 reads the value of A =1, increments its value by 1 (A=2) and writes the
incremented value in A.
So, after both processes P0 and P1 finish accessing the variable A, its value is
2.
Second case
1. Consider that process P0 has read the variable A = 0. Suddenly a context switch
happens and P1 takes charge and starts executing. P1 increments the value of
A and writes it back (A = 1).
2. After its execution, P1 gives charge back to P0. But P0 had already read A = 0, so it
increments that stale value and writes A = 1, overwriting P1's update.
So here, when both processes P0 and P1 finish accessing the variable A, its value is 1
instead of the expected 2.
This type of condition, where the result depends on the sequence in which the processes
happen to execute, is called a race condition.
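As an illustration (not part of the original slides), the following minimal C sketch using POSIX threads reproduces this race: two threads each increment the shared variable A once, and depending on how their reads and writes interleave, the final value may be 2 or 1.

/* Illustrative sketch only: two unsynchronized threads updating a shared variable. */
#include <pthread.h>
#include <stdio.h>

int A = 0;                          /* shared variable, initially 0 */

void *increment(void *arg)
{
    (void)arg;
    A = A + 1;                      /* read, add 1, write back: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t p0, p1;
    pthread_create(&p0, NULL, increment, NULL);
    pthread_create(&p1, NULL, increment, NULL);
    pthread_join(p0, NULL);
    pthread_join(p1, NULL);
    printf("A = %d\n", A);          /* usually 2, but 1 if the two updates interleave */
    return 0;
}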
Each process has a segment of code, called a critical section, in which the process
may be changing common variables, updating a table, writing a file, and so on.
The important feature of the system is that, when one process is executing in its
critical section, no other process is to be allowed to execute in its critical section.
The critical-section problem is to design a protocol that the processes can use to
cooperate.
Each process must request permission to enter its critical section.
Synchronization Hardware
Synchronization hardware is a hardware-based solution to resolve the critical section
problem.
It addresses how multiple processes sharing common resources must be synchronized to avoid
inconsistent results.
The hardware provides special instructions that can be used to resolve the critical section
problem effectively.
Hardware solutions are often simpler and also improve the efficiency of the system.
The hardware-based solution to the critical section problem is based on a simple idea of locking.
The solution implies that before entering its critical section a process
must acquire a lock, and must release the lock when it exits its critical section.
Example
1. Globally declare a Boolean variable lock and initialize it to false.
2. Consider that two processes, P0 and P1, are interested in entering their critical
sections.
3. Now, when P0 is already in its critical section, process P1 also wants to enter
its critical section.
4. P1 will execute the do-while loop and invoke the TestAndSet() instruction,
only to see that the lock is already set to true, which means some process is in
the critical section.
5. This makes P1 repeat the while loop until P0 turns the lock back to false.
6. Once process P0 completes executing its critical section, it will set the
lock variable to false.
7. Then P1 can set the lock variable to true using the TestAndSet() instruction
and enter its critical section.
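The code for the do-while structure itself is not reproduced in these notes. A minimal sketch of the classic TestAndSet() pattern is given below; it assumes C11's stdatomic.h, where atomic_flag_test_and_set() plays the role of the hardware TestAndSet() instruction.

/* Illustrative sketch of the TestAndSet() do-while structure. */
#include <stdatomic.h>
#include <stdbool.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* global Boolean lock, initially false */

void process(void)
{
    do {
        /* atomic_flag_test_and_set() sets the flag to true and returns its
           previous value as one atomic step, like TestAndSet(). */
        while (atomic_flag_test_and_set(&lock))
            ;                           /* lock was already true: spin until released */

        /* ---- critical section ---- */

        atomic_flag_clear(&lock);       /* set lock back to false on exit */

        /* ---- remainder section ---- */
    } while (true);
}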
This is how mutual exclusion is achieved with the do-while structure above,
i.e. it lets only one process execute its critical section at a time.
Swap Hardware Instruction
1. Like TestAndSet() instruction the swap() hardware instruction is also an atomic
instruction.
2. With a difference that it operates on two variables provided in its parameter.
3. The structure of the swap() instruction is:
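The listing is missing from the extracted slides; the sketch below only shows the semantics of swap(), on the assumption that on real hardware the whole body executes as a single atomic instruction (e.g. an exchange instruction).

/* Illustrative sketch: swap() exchanges the two values; the hardware guarantees
   the whole exchange happens atomically. */
#include <stdbool.h>

void swap(bool *a, bool *b)
{
    bool temp = *a;
    *a = *b;
    *b = temp;
}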
To achieve mutual exclusion using the swap() instruction, the structure
we use is as follows:
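Again the original listing is not reproduced here; the following is the usual textbook structure, written in C for readability, on the assumption that swap() is executed atomically by the hardware (plain C variables alone would not give that guarantee).

/* Illustrative sketch of mutual exclusion built on an atomic swap(). */
#include <stdbool.h>

/* swap() as sketched above; the hardware would execute it atomically. */
static void swap(bool *a, bool *b) { bool t = *a; *a = *b; *b = t; }

bool lock = false;                   /* global shared lock */

void process(void)
{
    bool key;                        /* local to each process */
    do {
        key = true;
        while (key == true)
            swap(&lock, &key);       /* spin until lock was false and we set it to true */

        /* ---- critical section ---- */

        lock = false;                /* release the lock on exit */

        /* ---- remainder section ---- */
    } while (true);
}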
1. The structure above operates on a global shared Boolean variable lock and a
local Boolean variable key.
2. A process P0 interested in executing its critical section executes the code above, sets
lock to true, and enters its critical section.
3. This blocks the other processes from executing their critical sections, satisfying
mutual exclusion.
Mutual Exclusion : Operating System Support (Semaphores and Mutex)
Semaphores
1. Semaphores serve two important purposes: mutual exclusion and
condition synchronization. A semaphore is a type of signaling mechanism.
2. Semaphores are integer variables that are used to solve the
critical section problem
3. Semaphores use two atomic operations, wait and signal that are used for
process synchronization.
4. The wait and signal operations can modify a semaphore.
5. A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal.
Semaphores
Three kinds of operations are performed on semaphores;
1. To initialize the semaphore
2. To increment the semaphore value
3. To decrement the semaphore value
Binary Semaphores
Binary Semaphore strictly provides mutual exclusion.
The semaphore can have only two values, 0 or 1.
It is used to implement the solution of critical section problems with multiple
processes.
Let P1, P2, P3, ..., PN be the processes that want to enter the critical
section.
Initially S = 1 (semaphore); P denotes wait and V denotes signal.
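A minimal sketch of this structure, assuming POSIX semaphores from semaphore.h, where sem_wait() acts as P and sem_post() acts as V:

/* Illustrative sketch: N processes (here threads) sharing one binary semaphore S = 1. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;                                /* binary semaphore, initialized to 1 */

void *Pi(void *arg)
{
    sem_wait(&S);                       /* P(S): decrement; block if S is already 0 */
    printf("process %ld in its critical section\n", (long)arg);
    sem_post(&S);                       /* V(S): increment; wake one waiting process */
    /* remainder section */
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    sem_init(&S, 0, 1);                 /* S = 1: at most one process in the critical section */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, Pi, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&S);
    return 0;
}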
Some points regarding the P and V operations:
P operation is also called wait, sleep, or down operation, and V operation is also
called signal, wake-up, or up operation.
Both operations are atomic, and the semaphore S is initialized to one. Here
atomic means that the read, modify and update of the variable happen as one
indivisible step with no pre-emption, i.e., in between the read, modify and
update no other operation that may change the variable is performed.
Positive value indicates the number of processes that can be present in the critical
section at the same time.
Negative value indicates the number of processes that are blocked in the waiting
list (Queue).
Counting semaphores
The wait operation is executed when a process tries to enter the critical section.
Wait operation decrements the value of counting semaphore by 1.
If the resulting value of counting semaphore is greater than or equal to 0, process is allowed
to enter the critical section.
If the resulting value of counting semaphore is less than 0, process is not allowed to enter
the critical section. In this case, process is put to sleep in the waiting list (queue) .
Counting semaphores
The signal operation is executed when a process takes exit from the critical
section.
Signal operation increments the value of counting semaphore by 1.
If the resulting value of the counting semaphore is less than or equal to 0, a process is chosen
from the waiting list and woken up to execute.
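The slides do not show an implementation; the sketch below follows the behaviour just described, using a pthread mutex and condition variable to stand in for the waiting list. The type csem_t and its field names are illustrative, not a standard API.

/* Illustrative counting semaphore; a negative value = number of blocked threads. */
#include <pthread.h>

typedef struct {
    int value;
    int wakeups;                 /* pending wake-ups, guards against spurious wake-ups */
    pthread_mutex_t lock;
    pthread_cond_t  queue;       /* stands in for the waiting list */
} csem_t;

void csem_init(csem_t *s, int value)
{
    s->value = value;
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void csem_wait(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value--;                               /* wait: decrement by 1 */
    if (s->value < 0) {                       /* result < 0: sleep in the queue */
        do {
            pthread_cond_wait(&s->queue, &s->lock);
        } while (s->wakeups < 1);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;                               /* signal: increment by 1 */
    if (s->value <= 0) {                      /* someone is waiting: wake one up */
        s->wakeups++;
        pthread_cond_signal(&s->queue);
    }
    pthread_mutex_unlock(&s->lock);
}

The wakeups counter is only there to absorb spurious wake-ups from pthread_cond_wait(), so the accounting in value matches the description above.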
Mutex
1. A Mutex is different from a semaphore: it is a locking mechanism, while a
semaphore is a signaling mechanism.
2. A binary semaphore can be used as a Mutex, but a Mutex can never be used as a
semaphore.
3. The Mutex is a locking mechanism that makes sure only one thread can acquire the
Mutex, and hence enter the critical section, at a time.
4. This thread only releases the Mutex when it exits the critical section.
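For comparison, a minimal pthread sketch of the Mutex discipline described above: only one thread holds the lock at a time, and the thread that locked it is the one that unlocks it.

/* Illustrative sketch of lock/unlock around a critical section. */
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;

void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);        /* acquire: only one thread gets past this line at a time */
    shared_counter++;              /* critical section */
    pthread_mutex_unlock(&m);      /* release: done by the same thread that locked */
    return NULL;
}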
Programming Language Support (Monitors).
1. With the help of programming languages, we can use a monitor to achieve mutual
exclusion among the processes.
2. Example of monitors: Java synchronized methods; Java also offers the notify() and
wait() constructs.
3. A monitor is a group of procedures, variables, and condition variables that are merged
together into a single module.
4. If a process is running outside the monitor, then it cannot access the monitor's
internal variables, but it can call the procedures of the monitor.
5. Only one process can be active at a time inside the monitor.
Components of Monitor
There are four main components of the monitor:
1. Initialization
2. Private data
3. Monitor procedure
4. Monitor entry queue
Initialization: – Initialization comprises the code, and when the monitors are created, we use
this code exactly once.
Private Data: – Private data is another component of the monitor. It comprises all the private
data, and the private data contains private procedures that can only be used within the
monitor. So, outside the monitor, private data is not visible.
Monitor Procedure: – Monitors Procedures are those procedures that can be called from
outside the monitor.
Monitor Entry Queue: – The monitor entry queue is another essential component of the
monitor; it holds all the threads that have called a monitor procedure and are waiting to become active inside the monitor.
Condition Variables
There are two types of operations that we can perform on the condition variables of
the monitor:
Wait
Signal
Let us say we have two condition variables, x and y.
Wait operation
x.wait(): A process performing the wait operation on a condition variable is
suspended.
The suspended processes are placed in the block queue of that condition variable.
Signal operation
x.signal(): When a process performs the signal operation on a condition variable, one
of the blocked processes is given a chance to proceed.
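Monitors are a language-level construct (e.g. Java's synchronized methods with wait()/notify()). Purely as an illustration, the C sketch below approximates x.wait() and x.signal() with a pthread condition variable protected by a mutex that plays the role of the monitor lock; the names are invented for the example.

/* Illustrative monitor-style sketch: monitor_lock keeps one thread active at a time. */
#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's lock */
pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;   /* condition variable x */
bool condition_holds = false;

void monitor_procedure_consume(void)
{
    pthread_mutex_lock(&monitor_lock);          /* enter the monitor */
    while (!condition_holds)
        pthread_cond_wait(&x, &monitor_lock);   /* x.wait(): suspend, releasing the monitor */
    condition_holds = false;
    pthread_mutex_unlock(&monitor_lock);        /* leave the monitor */
}

void monitor_procedure_produce(void)
{
    pthread_mutex_lock(&monitor_lock);
    condition_holds = true;
    pthread_cond_signal(&x);                    /* x.signal(): give one blocked thread its chance */
    pthread_mutex_unlock(&monitor_lock);
}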
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language.
2. Classical synchronization problems:
1. Readers/Writers Problem,
2. Producer and Consumer problem,
3. Inter-process communication (Pipes, shared memory: system V)
Classical synchronization problems Readers/Writers Problem
1. The readers-writers problem relates to an object, such as a file, that is shared between
multiple processes.
2. Some of these processes are readers, i.e. they only want to read the data from the object,
and some of the processes are writers, i.e. they want to write into the object.
3. For example, if two readers access the object at the same time there is no problem.
However, if two writers, or a reader and a writer, access the object at the same time,
there may be problems.
4. To solve this, a writer should get exclusive access to the object, i.e. when a writer
is accessing the object, no other reader or writer may access it.
5. However, multiple readers can access the object at the same time.
6. This can be implemented using semaphores. The code for the reader and writer processes
in the readers-writers problem is given as follows:
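The original listing is not reproduced here; a minimal sketch of the classic first readers-writers solution is shown below. The names wrt, mutex and read_count are the usual textbook choices, assumed here rather than taken from the slides.

/* Illustrative sketch of the first readers-writers solution with semaphores. */
#include <pthread.h>
#include <semaphore.h>

sem_t wrt;             /* writer exclusion: held by a writer or by the first reader */
sem_t mutex;           /* protects read_count */
int   read_count = 0;

void *writer(void *arg)
{
    (void)arg;
    sem_wait(&wrt);            /* exclusive access: no readers or other writers */
    /* ... write to the shared object ... */
    sem_post(&wrt);
    return NULL;
}

void *reader(void *arg)
{
    (void)arg;
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)       /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared object (many readers may be here at once) ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)       /* last reader lets writers in again */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

int main(void)
{
    sem_init(&wrt, 0, 1);
    sem_init(&mutex, 0, 1);
    /* threads running reader()/writer() would be created and joined here */
    return 0;
}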
Producer and Consumer problem
The producer-consumer problem is a classical synchronization problem.
There is a fixed size buffer and the producer produces items and enters them into
the buffer.
The consumer removes the items from the buffer and consumes them.
A producer should not put items into the buffer while the consumer is
consuming an item from the buffer, and vice versa. So the buffer should only be
accessed by either the producer or the consumer at a time.
The code for the producer and consumer processes is given as follows:
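The original listing is again not reproduced; a minimal bounded-buffer sketch using three POSIX semaphores is given below. The names empty, full and mutex, and the buffer size, are the usual textbook choices, assumed here for illustration.

/* Illustrative sketch of the bounded-buffer producer-consumer solution. */
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 5

int   buffer[BUFFER_SIZE];
int   in = 0, out = 0;
sem_t empty;        /* counts empty slots, initially BUFFER_SIZE */
sem_t full;         /* counts filled slots, initially 0 */
sem_t mutex;        /* protects the buffer itself, initially 1 */

void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);                   /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;                  /* put the item into the buffer */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);                    /* one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);                    /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];             /* remove the item */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                   /* one more empty slot */
        (void)item;                         /* ... consume the item ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}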
What are the Problems in the Producer-Consumer Problem?
1. The producer and the consumer cannot access the buffer at the same time.
2. The producer cannot produce the data if the memory buffer is full. It means when the
memory buffer is not full, then only the producer can produce the data.
3. The consumer can only consume the data if the memory buffer is not empty. When the
memory buffer is empty, the consumer is not allowed to take data from it.
3. Deadlock
A deadlock arises when there are two or more processes that each hold some resources and wait for
resources held by the others.
For example, Process 1 is holding Resource 1 and waiting for Resource 2, which is held by
Process 2, while Process 2 is waiting for Resource 1.
Deadlock Characterization: Deadlock can arise if the following four conditions hold simultaneously.
Mutual Exclusion: One or more resources are non-shareable (only one
process can use such a resource at a time).
Hold and Wait: A process is holding at least one resource and waiting for other
resources also.
No Preemption: A resource cannot be taken from a process unless the process
releases the resource.
1. A resource cannot be preempted from a process by force.
2. A process can only release a resource voluntarily.
3. For example, Process 2 cannot preempt Resource 1 from Process 1. It will
only be released when Process 1 relinquishes it voluntarily after its execution is
complete.
Circular Wait: A set of processes are waiting for each other in circular form.
Methods for Handling Deadlocks
1. Deadlock Ignorance
1. Deadlock ignorance is the most widely used approach among all the mechanisms.
2. In this approach, the operating system assumes that deadlock never occurs.
3. It simply ignores deadlock.
4. This approach is best suited to single end-user systems where the user uses the
system only for browsing and other everyday tasks.
5. Operating systems like Windows and Linux mainly focus on performance.
6. The performance of the system decreases if it runs a deadlock handling
mechanism all the time; if deadlock happens only 1 time out of 100, it is completely
unnecessary to use the deadlock handling mechanism all the time.
7. In these types of systems, the user simply has to restart the computer in the case of
deadlock. Windows and Linux mainly use this approach.
Methods for Handling Deadlocks
2. Deadlock prevention
1. Deadlock happens only when mutual exclusion, hold and wait, no preemption
and circular wait hold simultaneously.
2. The idea behind this approach is simple: we have to defeat one of the four
conditions, although there can be a big argument about its practical implementation in the
system.
Methods for Handling Deadlocks
3. Deadlock avoidance
1. Resource allocation is allowed to continue only as long as the system stays in a safe state.
If an allocation would move the system to an unsafe state, the OS has to back off and refuse it.
2. In simple words, the OS reviews each allocation request so that the allocation does not
cause a deadlock in the system.
Methods for Handling Deadlocks
4. Deadlock detection and recovery
The OS lets deadlock occur and periodically checks whether the system is deadlocked.
If it has occurred, the OS applies one of the recovery methods to the system to get rid of the deadlock.
Deadlock Prevention
1. Mutual Exclusion
1. Mutual exclusion means a resource can never be used by more than one process
simultaneously, which is fair enough, but that is one of the main reasons behind
deadlock.
2. If a resource could be used by more than one process at the same
time, then processes would never have to wait for that resource.
We cannot force a resource to be used by more than one process at the same time,
since that would not be fair and serious correctness and performance problems may arise.
Therefore, we cannot practically violate mutual exclusion for a process.
Deadlock Prevention
2. Hold and Wait Deadlock Prevention
1. Deadlock occurs because there can be more than one process holding one resource
and waiting for another in a cyclic order. To break hold and wait, a process must not
hold some resources while waiting for others.
2. That means a process must be assigned all the necessary resources before its
execution starts.
3. A process must not wait for any resource once its execution has started.
4. Although this sounds practical, it can't be done in a computer system,
because a process can't determine all its necessary resources initially.
Deadlock Prevention
3. No Preemption
1. This is not a good approach at all, since if we take a resource away from a
process that is using it, then all the work it has done so far can
become inconsistent.
Deadlock Prevention
4. Circular Wait
1. To violate circular wait, we can assign a priority number to each resource and
require every process to request resources only in increasing order of priority.
2. This ensures that no process can request a resource of lower priority than one it
already holds, so no cycle can be formed.
3. Among all the methods, violating circular wait is the only approach that can be
implemented practically.
Deadlock Avoidance
Deadlock avoidance is the simplest and most useful model: it requires that each
process declare, in advance, the maximum number of resources of each type
that it may need.
Banker’s Algorithm
The banker's algorithm is a resource allocation and deadlock avoidance algorithm: it grants a resource request only if doing so leaves the system in a safe state.
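As an illustration only, the safety check at the heart of the banker's algorithm can be sketched as follows; the matrices and numbers are invented for the example and are not from the slides.

/* Illustrative sketch of the banker's algorithm safety check (example data invented). */
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes (illustrative) */
#define R 2   /* number of resource types (illustrative) */

int available[R]   = {3, 2};
int max_need[P][R] = {{7, 4}, {3, 2}, {6, 3}};
int alloc[P][R]    = {{1, 0}, {2, 1}, {3, 1}};

bool is_safe(void)
{
    int  work[R];
    bool finished[P] = {false};
    int  need[P][R];

    for (int j = 0; j < R; j++) work[j] = available[j];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max_need[i][j] - alloc[i][j];

    /* Repeatedly look for a process whose remaining need fits in work. */
    for (int done = 0; done < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                /* Pretend process i runs to completion and returns its resources. */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;   /* no process can finish: unsafe state */
    }
    return true;                    /* all processes can finish in some order: safe */
}

int main(void)
{
    printf("state is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}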