
UNIT-3

Process synchronization
and deadlock
--AVANISH GOSWAMI
Critical Section Problem
• A critical section is the part of a program that accesses shared resources. That resource
may be any resource in the computer, such as a memory location, a data structure, the CPU, or an I/O device.

• The critical section cannot be executed by more than one process at the same time, so the
operating system faces difficulty in deciding when to allow and when to disallow processes from entering the critical
section.

• The critical section problem is to design a set of protocols that ensure a race
condition among the processes can never arise.

• In order to synchronize cooperating processes, our main task is to solve the critical section
problem: we must provide a solution that satisfies the conditions listed below.
Two general approaches are used to handle critical sections:

• Pre-emptive kernels: A pre-emptive kernel allows a process to be pre-empted while it is running in kernel mode.
• Non pre-emptive kernels: A non pre-emptive kernel does not allow a
process running in kernel mode to be pre-empted; a kernel-mode
process will run until it exits kernel mode, blocks, or voluntarily
yields control of the CPU. A non pre-emptive kernel is essentially free
from race conditions on kernel data structures, as only one process is
active in the kernel at a time.
Critical Section Problem
The use of critical sections in a
program can cause a number of
issues, including
• Starvation
• Deadlock
• Overhead
Advantages of critical section in
process synchronization:
• Prevents race conditions
• Provides mutual exclusion
• Reduces CPU utilization
• Simplifies synchronization
Disadvantages of critical section
in process synchronization:
• Overhead
• Deadlocks
• Can limit parallelism
• Can cause contention
Important points related to
critical section in process
synchronization are:
• Understanding the concept of critical section and why it’s important for
synchronization.
• Familiarity with the different synchronization mechanisms used to
implement critical sections, such as semaphores, mutexes, and monitors.
• Knowledge of common synchronization problems that can arise in critical
sections, such as race conditions, deadlocks, and livelocks.
• Understanding how to design and implement critical sections to ensure
proper synchronization of shared resources and prevent synchronization-
related issues.
• Familiarity with best practices for using critical sections in concurrent
programming.
Any solution to the critical section
problem must satisfy the following
properties
• Mutual Exclusion: If process Pi is executing in its critical section, then
no other processes can be executing in their critical sections.
• Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes
that are not executing in their remainder sections can participate in
deciding which will enter its critical section next, and this selection
cannot be postponed indefinitely.
• Bounded Waiting: There exists a bound, or limit, on the number of
times that other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section and
before that request is granted.
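As an illustration of all three properties, here is a sketch (in Python, a language the slides do not use) of Peterson's algorithm, a classic two-process software solution not named on the slides. The variable names `flag` and `turn` are the conventional ones; mutual exclusion, progress, and bounded waiting all hold for this construction.

```python
import threading

# Sketch of Peterson's algorithm for two processes (ids 0 and 1).
# flag[i] means "process i wants to enter"; turn says who must yield.
flag = [False, False]
turn = 0
counter = 0  # the shared resource the critical section protects

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True            # entry section: announce intent
        turn = other              # politely let the other process go first
        while flag[other] and turn == other:
            pass                  # busy-wait (bounded: the other resets flag)
        counter += 1              # critical section: mutual exclusion holds
        flag[i] = False           # exit section

N = 5000
t0 = threading.Thread(target=worker, args=(0, N))
t1 = threading.Thread(target=worker, args=(1, N))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 10000: no increment is lost
```

Without the entry/exit sections, the two threads could interleave their updates and lose increments; with them, every update is serialized.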
Requirements of
Synchronization mechanisms
Primary
Mutual Exclusion
• Our solution must provide mutual exclusion. By mutual exclusion, we
mean that if one process is executing inside the critical section, then no
other process may enter the critical section.
Progress
• Progress means that if a process does not need to enter the
critical section, it must not stop other processes from getting into the
critical section.
Secondary
Bounded Waiting
• There must be a bound on how long every process waits to get
into the critical section. No process should be left endlessly waiting to
enter the critical section.

Architectural Neutrality
• Our mechanism must be architecture-neutral. That is, if our
solution works fine on one architecture, it should also run on
the other ones as well.
Semaphore
• Semaphores are integer variables used for process synchronization; they solve the
critical section problem through two atomic operations, wait and signal.
• The definitions of wait and signal are as follows:
Wait
• The wait operation decrements the value of its argument S if it is positive. While
S is zero or negative, the process loops (busy-waits) and performs no decrement.
wait(S)
{
    while (S <= 0)
        ;       // busy-wait until S becomes positive
    S--;
}
Signal
• The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
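The pseudocode above busy-waits; the same wait/signal semantics can also be realised by sleeping on a queue. A minimal Python sketch (the class name `SimpleSemaphore` is ours, built on the standard `threading.Condition`):

```python
import threading

class SimpleSemaphore:
    """Sketch of a blocking semaphore: the slides' wait() and signal(),
    but sleeping on a condition variable instead of spinning."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                      # the slides' wait(S)
        with self._cond:
            while self._value <= 0:      # instead of spinning, sleep
                self._cond.wait()
            self._value -= 1             # S--

    def signal(self):                    # the slides' signal(S)
        with self._cond:
            self._value += 1             # S++
            self._cond.notify()          # wake one waiting process

# Usage: mutual exclusion around a shared counter.
sem = SimpleSemaphore(1)
count = 0

def bump(n):
    global count
    for _ in range(n):
        sem.wait()
        count += 1    # critical section
        sem.signal()

ts = [threading.Thread(target=bump, args=(5000,)) for _ in range(4)]
for t in ts: t.start()
for t in ts: t.join()
print(count)  # 20000
```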
Types of Semaphores
• There are two main types of semaphores i.e. counting semaphores and
binary semaphores. Details about these are given as follows:
Counting Semaphores:
• These are integer value semaphores and have an unrestricted value
domain.
• These semaphores are used to coordinate the resource access, where the
semaphore count is the number of available resources.
• If resources are added, the semaphore count is automatically incremented;
if resources are removed, the count is decremented.
Binary Semaphores:
• Binary semaphores are like counting semaphores, but their value
is restricted to 0 and 1.
• The wait operation succeeds only when the semaphore is 1,
and the signal operation succeeds when the semaphore is 0.
• It is sometimes easier to implement binary semaphores than counting
semaphores.
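A small sketch of a counting semaphore used as the slides describe, with the count initialized to the number of available resources (Python's `threading.Semaphore` stands in for the abstract semaphore; the scenario and names are illustrative):

```python
import threading, time

# Counting semaphore initialized to 3: at most 3 threads may use the
# resource pool at once. "peak" records the highest observed concurrency.
pool = threading.Semaphore(3)
active, peak = 0, 0
meter = threading.Lock()   # protects the two counters themselves

def use_resource():
    global active, peak
    pool.acquire()             # wait: take one of the 3 slots
    with meter:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)           # pretend to use the resource
    with meter:
        active -= 1
    pool.release()             # signal: free the slot

ts = [threading.Thread(target=use_resource) for _ in range(10)]
for t in ts: t.start()
for t in ts: t.join()
print(peak <= 3)  # True: never more than 3 concurrent users
```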
Advantages of Semaphores
• Semaphores allow only one process into the critical section.
• They follow the mutual exclusion principle strictly and are much more
efficient than some other methods of synchronization.
• When semaphores are implemented with a wait queue rather than busy waiting,
there is no resource wastage: processor time is not spent repeatedly checking
whether a condition allows a process into the critical section.
• Semaphores are implemented in the machine-independent code of
the microkernel, so they are machine independent.
Disadvantages of Semaphores
• Semaphores are complicated, so the wait and signal operations must
be implemented in the correct order to prevent deadlocks.
• Semaphores are impractical for large-scale use, as their use leads to loss
of modularity.
• This happens because the wait and signal operations prevent the
creation of a structured layout for the system.
• Semaphores may lead to priority inversion, where low-priority
processes access the critical section first and high-priority
processes later.
Counting Semaphore vs. Binary
Semaphore
Counting Semaphore:
• No mutual exclusion guarantee
• Any integer value
• More than one slot
• Provides a set of processes
Binary Semaphore:
• Mutual exclusion
• Value only 0 and 1
• Only one slot
• Has a mutual exclusion mechanism
Deadlock
• A deadlock happens in an operating system when two or more processes
need some resource, held by another process, to complete their execution.
• Every process needs some resources to complete its execution.
However, resources are granted in a sequential order:
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.
Necessary conditions for
Deadlocks
Mutual Exclusion:
• A resource can only be shared in a mutually exclusive manner: two
processes cannot use the same resource at the same time.

Hold and Wait
• A process waits for some resources while holding another resource at
the same time.
No pre-emption:
• A resource cannot be forcibly taken away from the process holding it.
The process releases the resource only voluntarily, once it has finished
with it.

Circular Wait
• All the processes must be waiting for the resources in a cyclic manner
so that the last process is waiting for the resource which is being held
by the first process.
Deadlock
Methods for Handling Deadlocks
Generally speaking there are three ways of handling deadlocks:
• Deadlock prevention or avoidance – Do not allow the system to get
into a deadlocked state.
• Deadlock detection and recovery – Abort a process or pre-empt
some resources when deadlocks are detected.
• Ignore the problem altogether – If deadlocks only occur once a year
or so, it may be better to simply let them happen and reboot as
necessary than to incur the constant overhead and system
performance penalties associated with deadlock prevention or
detection. This is the approach that both Windows and UNIX take.
Deadlock Detection
A deadlock can be detected by a resource scheduler as it keeps track of
all the resources that are allocated to different processes. After a
deadlock is detected, it can be resolved using the following methods:
• All the processes that are involved in the deadlock are terminated.
This is not a good approach as all the progress made by the processes
is destroyed.
• Resources can be pre-empted from some processes and given to
others till the deadlock is resolved.
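Detection is often phrased in terms of a wait-for graph: an edge from P to Q means P is waiting for a resource held by Q, and a cycle means deadlock. A sketch (the helper name `has_deadlock` and the process names are ours):

```python
# Sketch: deadlock detection on a wait-for graph via depth-first search.
# An edge P -> Q means "process P waits for a resource held by Q".
def has_deadlock(wait_for):
    visited, on_stack = set(), set()
    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, []):
            if q in on_stack or (q not in visited and dfs(q)):
                return True      # back edge: a cycle, hence deadlock
        on_stack.discard(p)
        return False
    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2, P2 on P3, P3 on P1: a circular wait.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
# Same chain without the back edge: no deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```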
Deadlock Prevention
• Deadlock prevention algorithms ensure that at least one of the
necessary conditions (Mutual exclusion, hold and wait, no pre-
emption, and circular wait) does not hold true.
• We do this by attacking each of the four conditions separately.
However, most prevention algorithms have poor resource
utilization and hence result in reduced throughput.
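One common way to break the circular-wait condition is to impose a global order on resources and always acquire them in that order. A sketch in Python (the helper `acquire_in_order` and the two-lock scenario are illustrative, not from the slides):

```python
import threading

# Sketch: preventing circular wait with a global lock order. Both threads
# ask for the two locks in opposite orders, but acquire_in_order always
# takes them in ascending rank, so no cycle of waits can form.
lock_a, lock_b = threading.Lock(), threading.Lock()
rank = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    ordered = sorted(locks, key=lambda l: rank[id(l)])
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(ordered):
    for lk in reversed(ordered):
        lk.release()

results = []

def t1():
    held = acquire_in_order(lock_a, lock_b)
    results.append("t1")           # uses both resources
    release_all(held)

def t2():
    held = acquire_in_order(lock_b, lock_a)   # opposite request order
    results.append("t2")
    release_all(held)

ts = [threading.Thread(target=f) for f in (t1, t2)]
for t in ts: t.start()
for t in ts: t.join()
print(sorted(results))  # ['t1', 't2']: both finished, no deadlock
```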
Deadlock Avoidance

• To avoid deadlocks, we can make use of prior knowledge about the
usage of resources by processes, including resources available,
resources allocated, future requests, and future releases by
processes.
• Most deadlock avoidance algorithms need every process to tell in
advance the maximum number of resources of each type that it may
need.
• Based on this information, we may decide whether a process should wait for a
resource, and thus avoid the possibility of circular wait.
Deadlock Avoidance

• If a system is already in a safe state, we can try to stay away from an
unsafe state and avoid deadlock.
• Deadlocks cannot be avoided in an unsafe state. A system is in a
safe state if it is not deadlocked and can satisfy, in some order, the
maximum resource demand of every process.
• A safe sequence of processes and allocation of resources ensures a
safe state.
• Deadlock avoidance algorithms try not to allocate resources to a
process if it makes the system go into an unsafe state.
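The safe-state test described above is essentially the safety check of the Banker's algorithm (not named on the slide). A Python sketch, using the classic textbook example matrices as illustrative data:

```python
# Sketch of the Banker's algorithm safety check: a state is safe if some
# order lets every process obtain its maximum need and then finish,
# returning its allocation to the pool.
def is_safe(available, allocated, maximum):
    n = len(allocated)
    need = [[m - a for m, a in zip(maximum[i], allocated[i])]
            for i in range(n)]
    work, finished = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # process i can run to completion and release its resources
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# 5 processes, 3 resource types (a standard textbook configuration).
available = [3, 3, 2]
allocated = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum   = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
print(is_safe(available, allocated, maximum))   # True: safe sequence exists
print(is_safe([0, 0, 0], allocated, maximum))   # False: nothing can finish
```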
Recovery from deadlock
• If the system is in a deadlock state, some methods for recovering it
from the deadlock state must be applied. There are various ways for
recovery:
• Allocate one resource to several processes, by violating mutual
exclusion.
• Pre-empt some resources from some of the deadlocked processes.
• Abort one or more processes in order to break the deadlock.
• If pre-emption is used:
• 1. Select a victim. (Which resource(s) is/are to be pre-empted from
which process?)
• 2. Rollback: If we pre-empt a resource from a process, roll the process
back to some safe state and make it continue.
• Here the OS will probably encounter the problem of starvation:
how can we guarantee that resources will not always be pre-empted
from the same process?
In selecting a victim, important parameters are:
• Process priorities
• How long the process has been running
• How much longer the process needs to finish its job
• How many resources of what type the process has used
• How many more resources the process needs to finish its job
• How many processes will be rolled back? (More than one victim may
be selected.)
• For rollback, the simplest solution is a total rollback. A better solution
is to roll the victim process back only as far as it’s necessary to break
the deadlock. However, the OS needs to keep more information about
process states to use the second solution.
• To avoid starvation, ensure that a process can be picked as a victim for
only a small number of times. So, it is a wise idea to include the
number of rollbacks as a parameter.
Hardware Synchronization
• Process synchronization problems occur when two processes running concurrently share
the same data or the same variable. The value of that variable may not be updated correctly
before it is used by the second process; such a situation is known as a race condition.
There are software as well as hardware solutions to this problem. Here we discuss
the hardware approach to solving process synchronization problems and its implementation.

• There are three algorithms in the hardware approach to solving the Process Synchronization
problem:
• Test and Set
• Swap
• Unlock and Lock
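A sketch of a Test and Set spinlock. Real hardware provides test-and-set as a single atomic instruction; in this Python model a small internal `Lock` merely stands in for that atomicity (an assumption of the sketch, not how hardware works):

```python
import threading

# Sketch: a spinlock built on test-and-set. _test_and_set atomically
# reads the old flag value and sets the flag to True, as the hardware
# instruction would.
class TestAndSetLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # simulates instruction atomicity

    def _test_and_set(self):
        with self._atomic:                # atomically: old = flag; flag = True
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():       # spin while someone else holds it
            pass

    def release(self):
        self._flag = False

lock = TestAndSetLock()
total = 0

def add(n):
    global total
    for _ in range(n):
        lock.acquire()
        total += 1       # critical section
        lock.release()

ts = [threading.Thread(target=add, args=(2000,)) for _ in range(4)]
for t in ts: t.start()
for t in ts: t.join()
print(total)  # 8000: no lost updates
```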
Advantages of Process
Synchronization
• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources
Disadvantages of Process
Synchronization
• Adds overhead to the system
• This can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlocks if not implemented properly.
Monitors
• A monitor is a programming language construct that controls access to shared data
◆ Synchronization code added by compiler, enforced at runtime
◆ Why is this an advantage?
A monitor is a module that encapsulates
◆ Shared data structures
◆ Procedures that operate on the shared data structures
◆ Synchronization between concurrent procedure invocations
• A monitor protects its data from unstructured access
• It guarantees that threads accessing its data through its procedures
interact only in legitimate ways
Monitor Semantics

• A monitor guarantees mutual exclusion
◆ Only one thread can execute any monitor procedure at any
time (the thread is “in the monitor”)
◆ If a second thread invokes a monitor procedure when a first
thread is already executing one, it blocks
» So the monitor has to have a wait queue…
◆ If a thread within a monitor blocks, another one can enter
» What are the implications in terms of parallelism in the
monitor?
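These semantics can be sketched in Python (the slides name no language): one `Lock` enforces "only one thread in the monitor", and a `Condition` built on that lock lets a thread block inside the monitor while releasing the lock so another thread can enter. The class `BoundedCounter` is a made-up example:

```python
import threading

# Sketch of a monitor: every procedure runs under one internal lock,
# and a condition variable lets a thread block "in the monitor".
class BoundedCounter:
    def __init__(self, limit):
        self._lock = threading.Lock()              # one thread in the monitor
        self._nonzero = threading.Condition(self._lock)
        self._value, self._limit = 0, limit

    def increment(self):
        with self._lock:                           # enter the monitor
            self._value = min(self._value + 1, self._limit)
            self._nonzero.notify()                 # wake a blocked decrementer

    def decrement(self):
        with self._lock:                           # enter the monitor
            while self._value == 0:
                self._nonzero.wait()               # block, releasing the lock
            self._value -= 1
            return self._value

c = BoundedCounter(5)
c.increment()
c.increment()
print(c.decrement())  # 1
```

Because `wait()` releases the monitor lock, another thread can enter and call `increment()`, which is exactly the behaviour the bullet points above describe.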
Critical Regions
• Shared variable v
region v do …. done
Critical Regions
Producer-consumer code with CRs
Producer:
• while (true) region buff do if (!full) produce done
Consumer:
• while (true) region buff do if (!empty) consume done
Conditional Critical Regions
• region v when C do … done
Producer-consumer code with CCRs
Producer:
While (true) region buff when (!full) do
produce done
Consumer:
While (true) region buff when (!empty) do
consume done
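A conditional critical region "region buff when C do S done" maps naturally onto a condition variable: enter the region under mutual exclusion, wait until the guard C holds, then run the body S. A Python sketch (the helper `region_when` and the capacity-2 buffer are our illustration):

```python
import threading

# Sketch: CCR semantics via a condition variable over a shared buffer.
buff_lock = threading.Lock()
buff_changed = threading.Condition(buff_lock)
buff, CAPACITY = [], 2

def region_when(condition, body):
    with buff_changed:               # enter the region (mutual exclusion)
        while not condition():       # "when C": wait until the guard holds
            buff_changed.wait()
        body()                       # the region's body S
        buff_changed.notify_all()    # guards in other regions may now hold

def produce(item):
    region_when(lambda: len(buff) < CAPACITY, lambda: buff.append(item))

def consume(out):
    region_when(lambda: len(buff) > 0, lambda: out.append(buff.pop(0)))

out = []
consumer = threading.Thread(target=lambda: [consume(out) for _ in range(5)])
consumer.start()
for i in range(5):
    produce(i)
consumer.join()
print(out)  # [0, 1, 2, 3, 4]
```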
Readings
• Hoare: Towards a theory of parallel programming, 1971
• Hansen: Structured multiprogramming, 1972
Classical Problems of
Synchronization
• Here we discuss various classical problems of synchronization.
• Semaphores can be used in other synchronization problems besides
mutual exclusion.
• Below are some of the classical problems depicting flaws of process
synchronization in systems where cooperating processes are present.
We will discuss the following three problems:
Bounded Buffer (Producer-Consumer) Problem
• Because the buffer pool has a maximum size, this problem is often called the Bounded buffer problem.

• This problem is generalised in terms of the Producer Consumer problem, where a finite buffer pool is used to
exchange messages between producer and consumer processes.

• The solution to this problem is to create two counting semaphores, "full" and "empty", to keep track of the
current number of full and empty buffers respectively.

• Here, producers produce a product and consumers consume the product, but each may use only one of
the containers at a time.

• The main complexity of this problem is that we must maintain the count of both the empty and full
containers that are available.
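The "full"/"empty" solution above can be sketched directly in Python: two counting semaphores track used and free slots, and a mutex protects the buffer itself (the buffer size and item counts are illustrative):

```python
import threading

# Sketch of the bounded buffer: "empty" counts free slots, "full" counts
# produced items, and a mutex guards the buffer list itself.
BUFFER_SIZE = 4
buffer = []
empty = threading.Semaphore(BUFFER_SIZE)   # initially all slots free
full = threading.Semaphore(0)              # initially no items
mutex = threading.Lock()
consumed = []

def producer(items):
    for item in items:
        empty.acquire()        # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()         # one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait for an item
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()        # one more free slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))  # True: every item delivered, in order
```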
Dining Philosophers Problem
• The dining philosopher's problem involves the allocation of limited
resources to a group of processes in a deadlock-free and starvation-
free manner.

• There are five philosophers sitting around a table, with five
chopsticks/forks kept beside them and a bowl of rice in the
centre. When a philosopher wants to eat, he uses two chopsticks:
one from his left and one from his right. When a philosopher
wants to think, he puts both chopsticks down at their original places.
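A deadlock-free sketch in Python. Having each philosopher pick up the lower-numbered chopstick first is one standard strategy (it breaks the circular wait); the slides do not prescribe a particular solution:

```python
import threading

# Sketch: five philosophers, five chopsticks. Acquiring the
# lower-numbered chopstick first imposes a global order on the locks,
# so no circular wait (and hence no deadlock) can occur.
N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global order
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1        # eating with both chopsticks
        # thinking: both chopsticks are back in place

ts = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in ts: t.start()
for t in ts: t.join()
print(meals)  # [50, 50, 50, 50, 50]: everyone ate, no deadlock
```

If every philosopher instead grabbed the left chopstick first, all five could hold one chopstick and wait forever for the second: exactly the circular wait described earlier.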
The Readers Writers Problem
• In this problem there are some processes (called readers) that only read the
shared data and never change it, and there are other processes (called
writers) that may change the data in addition to reading it, or instead of
reading it.

• There are various types of readers-writers problems, most centred on the relative
priorities of readers and writers.

• The main complexity of this problem arises from allowing more than one
reader to access the data at the same time.
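A sketch of the classic readers-preference solution: the first reader to arrive locks out writers, and the last reader to leave readmits them, so any number of readers may read at once (variable names follow the usual `read_count` convention, not the slides):

```python
import threading

# Sketch: readers-preference solution. "resource" is held either by one
# writer or collectively by the group of readers.
read_count = 0
read_count_lock = threading.Lock()   # protects read_count itself
resource = threading.Lock()          # exclusive access to the shared data
data, reads = [0], []

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()       # first reader blocks writers
    reads.append(data[0])            # many readers may read concurrently
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()       # last reader readmits writers

def writer(value):
    with resource:                   # writers need exclusive access
        data[0] = value

ts = [threading.Thread(target=writer, args=(7,))] + \
     [threading.Thread(target=reader) for _ in range(5)]
for t in ts: t.start()
for t in ts: t.join()
print(len(reads), data[0])  # 5 7: five reads completed, write applied
```

This variant can starve writers if readers keep arriving, which is why the relative-priority variants mentioned above exist.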
Thank you
