Module 3 – Part 1

Process Coordination and Deadlock


What is Process Synchronization?
• When two or more processes cooperate with each other, their order of
execution must be coordinated; otherwise their executions can conflict
and produce incorrect output.
• A cooperative process is one which can affect the execution of another
process or can be affected by the execution of another process. Such
processes need to be synchronized so that their order of execution can
be guaranteed.
• The procedure involved in preserving the appropriate order of
execution of cooperative processes is known as Process
Synchronization.
Why is it needed?

• Consider process A changing the data in a memory location while another
process B is trying to read the data from the same memory location.
• There is a high probability that the data read by the second process will be
erroneous.
First case
The order of execution of the processes is P0, P1
respectively.
Process P0 reads the value of A=0, increments it by 1
(A=1) and writes the incremented value to A.
Then process P1 reads the value of A=1, increments it
by 1 (A=2) and writes the incremented value to A.
So, after both processes P0 and P1 finish accessing
the variable A, its value is 2.
Second case
Suppose process P0 has read the variable A=0. A
context switch then occurs and P1 takes over and
starts executing. P1 increments the value of A (A=1).
After executing, P1 hands control back to P0.
P0 still holds its stale copy A=0, so when it resumes
it increments that copy from 0 to 1 and writes A=1.
So here, after both processes P0 and P1 finish
accessing the variable A, its value is 1, which differs
from the value A=2 in the first case.
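The two interleavings above can be reproduced by modelling the increment `A = A + 1` as separate read and write steps. The `run` helper and the schedule lists below are illustrative names, not part of the original notes:

```python
def run(schedule):
    """Execute the read/write steps of P0 and P1 in the given order.

    schedule is a list of (process, step) pairs; each increment is
    really two steps: 'read' A into a private register, then 'write'
    the incremented private copy back to A.
    """
    A = 0
    local = {"P0": None, "P1": None}  # each process's private register
    for proc, step in schedule:
        if step == "read":
            local[proc] = A
        else:  # 'write' stores the incremented private copy
            A = local[proc] + 1
    return A

# First case: P0 runs to completion, then P1.
case1 = [("P0", "read"), ("P0", "write"), ("P1", "read"), ("P1", "write")]
# Second case: a context switch lets P1 run after P0's read.
case2 = [("P0", "read"), ("P1", "read"), ("P1", "write"), ("P0", "write")]

print(run(case1))  # 2
print(run(case2))  # 1 – P0's write clobbers P1's update
```

Because the result depends only on the interleaving, the same two programs can yield either answer, which is exactly the race condition discussed next.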
Race Condition
• When more than one process executes the same code, or accesses the same
memory or shared variable, at the same time, there is a possibility that the
output or the value of the shared variable is wrong.
• This condition is commonly known as a race condition.
• Since several processes access the same data concurrently, the outcome
depends on the particular order in which the accesses take place.
• This condition typically arises inside the critical section.
Critical Section
• The regions of a program that access shared resources, and may therefore
cause race conditions, are called critical sections.
• To avoid race conditions among the processes, we must ensure that only one
process at a time executes within its critical section.
• Because the critical section must not be executed by more than one process
at the same time, the operating system faces the difficulty of deciding when
to allow and disallow processes to enter the critical section.
• The critical section problem is therefore to design a set of protocols which
ensure that a race condition among the processes can never arise.
Solution to Critical Section Problem
A solution to the critical section problem must satisfy the following
three conditions:
• Mutual Exclusion: Out of a group of cooperating processes, only one
process can be in its critical section at a given point of time.
• Progress: If no process is executing in its critical section, then a process
that wishes to enter must eventually be allowed to do so; processes running
outside their critical sections must not block others from entering.
• Bounded Waiting: One process must not be able to enter its critical section
repeatedly while other processes wait endlessly to get into their critical
sections.
Entry Section-
• It acts as a gateway for a process to enter the critical section.
• It ensures that only one process is present inside the critical
section at any time.
• It does not allow any other process to enter the critical section
if one process is already inside it.

Exit Section-
• It acts as an exit gate for a process to leave the critical
section.
• When a process exits the critical section, changes are made so
that other processes can enter the critical section.
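A minimal sketch of this entry/critical/exit structure, using Python's `threading.Lock` to play the role of the entry and exit sections (the `worker` function and `shared` list are illustrative):

```python
import threading

lock = threading.Lock()   # stands in for the entry/exit machinery
shared = []               # data touched only inside the critical section

def worker(item):
    lock.acquire()            # entry section: blocks while another thread is inside
    try:
        shared.append(item)   # critical section
    finally:
        lock.release()        # exit section: lets one waiting thread in
    # remainder section

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3] – every update survives
```

The `try`/`finally` mirrors the requirement that the exit section always runs, even if the critical section fails, so other processes are not locked out forever.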
Peterson’s Solution
• Peterson’s Solution is a software solution restricted to two processes,
which alternate execution between their critical sections and remainder
sections. Let’s call the processes – Process i and Process j.
• We use two shared data items – turn and flag.
• The variable turn indicates whose turn it is to enter its critical
section.
• Each process has a flag entry that can be either true or false; the value
true indicates that the process is ready to enter its critical section.
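A sketch of Peterson’s algorithm using Python threads (the names `N` and `counter` are illustrative). CPython’s interpreter effectively provides the sequentially consistent memory ordering the algorithm assumes; in C or C++ on modern hardware you would additionally need memory barriers:

```python
import sys
import threading

sys.setswitchinterval(5e-5)  # switch threads often so busy-wait handoffs are quick

flag = [False, False]  # flag[p] = True: process p is ready to enter
turn = 0               # whose turn it is to yield to the other process
counter = 0            # shared variable protected by the algorithm
N = 500                # increments per process

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        # entry section
        flag[i] = True
        turn = j                    # politely give the other process priority
        while flag[j] and turn == j:
            pass                    # busy-wait while j is ready and has priority
        # critical section
        counter += 1
        # exit section
        flag[i] = False

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 1000 – no increment is lost
```

Setting `turn = j` before checking is the key step: if both processes arrive together, whichever wrote `turn` last loses the tie and waits.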
Synchronization Hardware
• This is a hardware solution to the synchronization problem.
• It relies on an atomic Test-and-Set-Lock (TSL) instruction.
• The shared lock variable can take either of two values – 0 or 1.
• If the lock value is 0 (unlocked), a process can take the lock, change its
value to 1, and then execute in its critical section.
• If the lock value is 1 (locked), a process cannot take the lock; it has to
wait till the lock value becomes 0.
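The instruction’s behaviour can be modelled in Python as below. The inner `threading.Lock` only stands in for the atomicity the hardware guarantees in a single instruction; the class and method names are illustrative:

```python
import threading

class TestAndSetLock:
    """Spinlock built on a simulated atomic test-and-set instruction."""

    def __init__(self):
        self._atomic = threading.Lock()  # stands in for hardware atomicity
        self.value = 0                   # 0 = unlocked, 1 = locked

    def test_and_set(self):
        # Atomically return the old value and set the lock to 1,
        # mimicking what the TSL instruction does in one step.
        with self._atomic:
            old, self.value = self.value, 1
            return old

    def acquire(self):
        while self.test_and_set() == 1:
            pass  # lock was already held: spin and retry

    def release(self):
        self.value = 0  # unlock; some spinning process will now succeed

lock = TestAndSetLock()
lock.acquire()   # old value was 0, so we enter the critical section
# ... critical section ...
lock.release()
```

If the old value returned is 0, the caller both observed “unlocked” and locked it in one indivisible step, which is exactly why no two processes can take the lock simultaneously.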
Semaphores
• Semaphores are a synchronization mechanism used to coordinate the
activities of multiple processes in a computer system.
• They are used to enforce mutual exclusion, avoid race conditions and
implement synchronization between processes.
• Semaphores provide two operations: wait (P) and signal (V).
• The wait operation decrements the value of the semaphore, and the signal
operation increments the value of the semaphore.
• When the value of the semaphore is zero, any process that performs a wait
operation will be blocked until another process performs a signal
operation.
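Python’s `threading.Semaphore` exposes exactly these two operations, with `acquire` as wait (P) and `release` as signal (V):

```python
import threading

sem = threading.Semaphore(1)  # initial value 1: one process may proceed

sem.acquire()                        # wait / P: value 1 -> 0
# The value is now 0, so another wait would block. A non-blocking
# attempt reports this without actually suspending the caller:
print(sem.acquire(blocking=False))   # False – a wait here would block

sem.release()                        # signal / V: value 0 -> 1,
                                     # waking one blocked process, if any
print(sem.acquire(blocking=False))   # True – the wait succeeds again
```

The `blocking=False` probe is only used here to make the value changes visible; real cooperating processes would call plain `acquire()` and sleep until signalled.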
Types of Semaphores
Binary Semaphore:
• It is a special form of semaphore used for implementing mutual exclusion,
hence it is often called a mutex.
• A binary semaphore is initialized to 1 and only takes the values 0 and 1
during the execution of a program.
• In a binary semaphore, the wait operation succeeds only when the value of
the semaphore is 1 (setting it to 0); otherwise the caller blocks. The
signal operation sets the value back from 0 to 1, waking a blocked process
if there is one.
• Binary semaphores are easier to implement than counting semaphores.
Counting Semaphore:
• These are used to implement bounded concurrency.
• They can be used to control access to a given resource that consists of a
finite number of instances.
• Here the semaphore count indicates the number of available resource
instances.
• When a resource instance is released the count is incremented, and when one
is acquired the count is decremented.
• A counting semaphore by itself does not provide mutual exclusion, since
several processes can hold resource instances at the same time.
• In implementations that track waiting processes, the value may even become
negative, its magnitude giving the number of blocked processes; the count
therefore ranges over an unrestricted domain rather than just 0 and 1.
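A sketch of bounded concurrency with a counting semaphore: ten workers compete for a pool of three identical resource instances (the pool size, `worker`, and the bookkeeping counters are illustrative; the extra lock only protects the counters used to observe the behaviour):

```python
import threading

pool = threading.Semaphore(3)  # count 3: three identical resource instances
in_use = 0                     # how many instances are held right now
peak = 0                       # the most instances ever held at once
guard = threading.Lock()       # protects the two observation counters above

def worker():
    global in_use, peak
    pool.acquire()             # wait: blocks once all 3 instances are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    # ... use the resource instance ...
    with guard:
        in_use -= 1
    pool.release()             # signal: return the instance to the pool

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # True – never more than 3 concurrent holders
```

Note that several workers are inside the “resource in use” region at once, which is the sense in which a counting semaphore does not give mutual exclusion.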
Monitors
• A monitor is one of the ways to achieve process synchronization. Monitors
are supported by programming languages to achieve mutual exclusion between
processes.
• It is a collection of shared variables, condition variables and procedures
combined together in a special kind of module or package.
• Only the procedures defined inside the monitor can access the shared
variables declared in the monitor.
• Only one process at a time can be active inside the monitor.
Difference between Semaphores and Monitors
• With semaphores, processes access shared variables directly, whereas with
monitors they must go through the monitor’s procedures.
• With semaphores, the cooperating processes themselves must be coded to
implement mutual exclusion, whereas a monitor already contains the code
that implements it.
• Two operations are performed on the condition variables of a monitor – wait
and signal.
• Say we have one condition variable – x.
Wait operation
• x.wait(): A process performing a wait operation on a condition variable is
suspended and placed in the block queue of that condition variable until
some other process signals it.
• Note: Each condition variable has its own block queue.
Signal operation
• x.signal(): When a process performs a signal operation on a condition
variable, one of the blocked processes is given a chance to resume:
  if (x's block queue is empty)
      // ignore the signal
  else
      // resume a process from the block queue
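Python’s `threading.Condition` behaves like a monitor condition variable. A sketch of a small bounded-buffer monitor, where `not_full` plays the role of x (the class and names are illustrative, and `take` assumes the buffer is non-empty for brevity):

```python
import threading

class BoundedBuffer:
    """A tiny monitor: shared data plus one condition variable,
    accessible only through the methods below."""

    def __init__(self, capacity):
        self._lock = threading.Lock()                    # one process active at a time
        self.not_full = threading.Condition(self._lock)  # condition variable x
        self.items = []
        self.capacity = capacity

    def put(self, item):
        with self.not_full:                    # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()           # x.wait(): join x's block queue
            self.items.append(item)

    def take(self):
        with self.not_full:                    # enter the monitor
            item = self.items.pop(0)
            self.not_full.notify()             # x.signal(): resume one blocked process
            return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")          # buffer now full: a further put would wait on not_full
print(buf.take())     # a
```

The `while` (rather than `if`) around `wait()` matches the signal semantics above: a woken process re-checks the condition, since the signal may also be ignored or overtaken.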
