
UNIT-3

Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process system to
ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the problem
of race conditions and other synchronization issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple processes access shared resources
without interfering with each other, and to prevent the possibility of inconsistent data due to concurrent access.
To achieve this, various synchronization techniques such as semaphores, monitors, and critical sections are
used.

In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid
the risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of
modern operating systems, and it plays a crucial role in ensuring the correct and efficient functioning of multi-
process systems.

On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: The execution of one process does not affect the execution of other processes.

Cooperative Process: A process that can affect or be affected by other processes executing in the system.

The process synchronization problem arises with cooperative processes, because cooperative processes share
resources.

Race Condition:

When more than one process executes the same code or accesses the same memory or shared variable, there is a
possibility that the output or the value of the shared variable will be wrong; all of the processes are in effect
racing to have their result count, which is why this situation is known as a race condition. When several
processes access and manipulate the same data concurrently, the outcome depends on the particular order in
which the accesses take place. A race condition is a situation that may occur inside a critical section: the
result of executing multiple threads in the critical section differs according to the order in which the threads
execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.

Critical Section Problem:


A critical section is a code segment that can be accessed by only one process at a time. The critical section
contains shared variables that need to be synchronized to maintain the consistency of data variables. So the
critical section problem means designing a way for cooperative processes to access shared resources without
creating data inconsistencies.
In the entry section, the process requests entry into the critical section.

Any solution to the critical section problem must satisfy three requirements:

Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in
the critical section.

Progress: If no process is executing in the critical section and other processes are waiting outside it, then
only those processes that are not executing in their remainder section can participate in deciding which will
enter the critical section next, and this selection cannot be postponed indefinitely.

Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson’s Solution:

Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s solution,
we have two shared variables:

boolean flag[i]: Initialized to FALSE; initially no process is interested in entering the critical section.

int turn: Indicates whose turn it is to enter the critical section.
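
As a sketch, here is the classical textbook form of the entry and exit protocol in C (the function names enter_region and leave_region are ours for illustration). Note that on modern hardware, volatile alone is not enough; real implementations need atomic operations or memory barriers, as discussed under the disadvantages below.

#include <stdbool.h>

// Shared between the two processes/threads:
volatile bool flag[2] = {false, false}; // flag[k]: process k wants to enter
volatile int turn;                      // which process may enter next

// Entry protocol for process i (the other process is j = 1 - i).
void enter_region(int i) {
    int j = 1 - i;
    flag[i] = true;              // declare interest
    turn = j;                    // politely give the other process the turn
    while (flag[j] && turn == j)
        ;                        // busy wait while the other process is
                                 //   interested and it is its turn
}

// Exit protocol:
void leave_region(int i) {
    flag[i] = false;             // no longer interested
}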

Peterson’s Solution preserves all three conditions:

Mutual Exclusion is assured as only one process can access the critical section at any time.

Progress is also assured, as a process outside the critical section does not block other processes from entering
the critical section.
Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s solution:

It involves busy waiting. (In Peterson's solution, the statement “while(flag[j] && turn == j);” is responsible
for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.)

It is limited to 2 processes.

Peterson’s solution is not guaranteed to work on modern CPU architectures, which may reorder memory operations unless explicit atomic or memory-barrier support is used.

Semaphores:

A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another
thread. This differs from a mutex, which can be released only by the thread that called the wait (lock)
function.

A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.

Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as they can provide
mutual exclusion. All the processes can share the same mutex semaphore, which is initialized to 1. A process
must wait until the semaphore's value is 1; it then sets the value to 0 and starts its critical section. When it
completes its critical section, it resets the value of the mutex semaphore to 1 so that some other process can
enter its critical section.

Counting Semaphores: They can have any value and are not restricted over a certain domain. They can be
used to control access to a resource that has a limitation on the number of simultaneous accesses. The
semaphore can be initialized to the number of instances of the resource. Whenever a process wants to use that
resource, it checks if the number of remaining instances is more than zero, i.e., the process has an instance
available. Then, the process can enter its critical section thereby decreasing the value of the counting semaphore
by 1. After the process is over with the use of the instance of the resource, it can leave the critical section
thereby adding 1 to the number of available instances of the resource.
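
As a concrete illustration, here is a minimal sketch using POSIX counting semaphores (the names slots and worker are invented for this example). sem_init's last argument sets the initial count to the number of resource instances; sem_wait takes an instance, and sem_post returns it.

#include <semaphore.h>
#include <pthread.h>

sem_t slots; // counting semaphore guarding the resource instances

void *worker(void *arg) {
    sem_wait(&slots);   // take one instance; blocks when the count is 0
    /* ... use one instance of the resource ... */
    sem_post(&slots);   // return the instance, count goes back up
    return NULL;
}

int main(void) {
    sem_init(&slots, 0, 4); // e.g. a resource with 4 instances
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    sem_destroy(&slots);
    return 0;
}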

Advantages and Disadvantages:


Advantages of Process Synchronization:

Ensures data consistency and integrity

Avoids race conditions

Prevents inconsistent data due to concurrent access

Supports efficient and effective use of shared resources

Disadvantages of Process Synchronization:

Adds overhead to the system


Can lead to performance degradation

Increases the complexity of the system

Can cause deadlocks if not implemented properly.

Hardware Synchronization Algorithms: Test and Set, Swap, Unlock and Lock
Process synchronization problems occur when two processes running concurrently share the
same data or the same variable. The value of that variable may not be updated correctly before it
is used by a second process; such a condition is a race condition. There are software as
well as hardware solutions to this problem. Here we discuss the most common hardware
solutions to the process synchronization problem and their implementation.

There are three algorithms in the hardware approach of solving Process Synchronization
problem:

Test and Set

Swap

Unlock and Lock

Hardware instructions on many architectures help in effectively solving the critical
section problem.

1. Test and Set:


Here, the shared variable is lock, which is initialized to false. TestAndSet(lock) works in
this way: it returns the current (old) value of the variable passed to it and sets that variable to
true. The first process will enter the critical section at once, since TestAndSet(lock) returns
false and it breaks out of the while loop. The other processes cannot enter now, as lock is set to
true and the while loop condition remains true; mutual exclusion is ensured. Once the first
process gets out of the critical section, lock is changed to false, so the other processes can enter
one by one; progress is also ensured. However, after the first process, any process can go in:
there is no queue maintained, so any new process that finds lock to be false can enter. So
bounded waiting is not ensured.

Test and Set Pseudocode –


//Shared variable lock initialized to false
boolean lock = false;

boolean TestAndSet (boolean &target){
    boolean rv = target; // return the old value...
    target = true;       // ...and set the lock, in one atomic step
    return rv;
}

while(1){
    while (TestAndSet(lock))
        ; // spin until the returned (old) value is false
    critical section
    lock = false;
    remainder section
}
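
On real hardware, the atomicity of TestAndSet comes from a CPU instruction; C11 exposes the same primitive as atomic_flag_test_and_set. A minimal sketch (the acquire/release helper names are ours):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT; // clear means unlocked

void acquire(void) {
    // Returns the previous value and sets the flag in one
    // indivisible step, exactly like TestAndSet above.
    while (atomic_flag_test_and_set(&lock))
        ; // spin while the lock was already held
}

void release(void) {
    atomic_flag_clear(&lock); // lock = false
}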
2. Swap:
The Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to
true in the swap function, key is set to true and then swapped with lock. The first process sets
key = true and enters while(key); since key is true, the swap takes place, making lock = true and
key = false. On the next iteration of while(key), key is false, so the loop breaks and the first
process enters the critical section. Now when another process tries to enter, it sets key = true,
the while(key) loop runs, and the swap leaves lock = true and key = true (since lock was already
true in the first process). On every following iteration while(key) is still true, so the loop keeps
executing and the other process cannot enter the critical section; mutual exclusion is ensured.
On leaving the critical section, the process sets lock to false, so any process that finds it gets to
enter the critical section; progress is ensured. However, bounded waiting is again not ensured,
for the very same reason as before.

Swap Pseudocode –
// Shared variable lock initialized to false;
// each process has its own local variable key

boolean lock = false;
boolean key; // local to each process

void swap(boolean &a, boolean &b){
    boolean temp = a;
    a = b;
    b = temp;
}

while (1){
    key = true;
    while(key)
        swap(lock, key); // atomically exchange lock and key
    critical section
    lock = false;
    remainder section
}
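
The modern counterpart of Swap is an atomic exchange instruction; C11 exposes it as atomic_exchange. A minimal sketch (the acquire/release helper names are ours):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock = false;

void acquire(void) {
    // atomic_exchange stores true and returns the old value in one
    // step — the swap(lock, key) loop above collapsed into one call.
    while (atomic_exchange(&lock, true))
        ; // spin until the old value was false (lock was free)
}

void release(void) {
    atomic_store(&lock, false);
}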
3. Unlock and Lock:
The Unlock and Lock algorithm uses TestAndSet to regulate the value of lock, but adds another
variable, waiting[i], for each process, which records whether that process is waiting. A ready
queue is maintained for the processes wanting the critical section; processes are recorded in it
by process number, not necessarily in arrival order. Once the ith process gets out of the critical
section, it does not simply set lock to false so that any process can grab the critical section,
which was the problem with the previous algorithms. Instead, it checks whether any process is
waiting in the queue. The queue is treated as circular: j is the next process in line, and the while
loop scans from the jth process around to the (i-1)th process looking for one that is waiting to
access the critical section. If no process is waiting, lock is set to false and whichever process
arrives next can enter the critical section. If one is found, that process's waiting value is set to
false, so its first while loop terminates and it can enter the critical section. This ensures bounded
waiting, so the problem of process synchronization can be solved through this algorithm.

Unlock and Lock Pseudocode –


// Shared variable lock initialized to false;
// key is local to each process, waiting[] is shared

boolean lock = false;
boolean key;        // local to each process
boolean waiting[n]; // shared, one slot per process

while(1){
    waiting[i] = true;
    key = true;
    while(waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;
    critical section
    // scan the other processes in circular order for a waiter
    j = (i+1) % n;
    while(j != i && !waiting[j])
        j = (j+1) % n;
    if(j == i)
        lock = false;       // nobody is waiting: release the lock
    else
        waiting[j] = false; // hand the critical section to process j
    remainder section
}

Mutex in Operating System
A mutex lock in an OS is essentially a binary variable that provides a code-level mechanism for mutual
exclusion. At times, multiple threads may try to access the same resource, such as memory or an I/O device. To
make sure there is no overwriting, a mutex provides a locking mechanism.

Only one thread at a time can take ownership of a mutex and apply the lock. Once it is done utilising the
resource, it releases the mutex lock.
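
As a minimal POSIX-threads sketch (the worker function and shared_counter variable are invented for illustration):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0; // resource shared by several threads

void *worker(void *arg) {
    pthread_mutex_lock(&m);   // only one thread owns the mutex at a time
    shared_counter++;         // critical section: no overwriting possible
    pthread_mutex_unlock(&m); // release the lock for the other threads
    return NULL;
}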

Mutex Highlights

A mutex is quite different from a semaphore; see the semaphore section below for the differences between the
two.

Mutex is Binary in nature

Operations like Lock and Release are possible

Mutexes are typically used between threads, while semaphores are often used between processes.

Mutexes can often be handled in user space, while semaphore operations involve the kernel.

Mutex provides locking mechanism

A thread may acquire more than one mutex

Binary Semaphore and mutex are different

Semaphores in OS
While a mutex is a lock (wait) and release mechanism, semaphores are signalling mechanisms that signal the
state of the critical section to processes and grant access to the critical section accordingly.

Semaphores use the following methods to control access to critical section code –

Wait
Signal

Semaphore Types

We have two types of semaphores –

Binary Semaphore –

Only True/False or 0/1 values

Counting Semaphore –

Non-negative value

Semaphore Implementation

Wait and Signal are the two methods associated with semaphores. Some articles write them as wait(s) and
signal(s), while others write p(s) for wait and v(s) for signal.

Wait p(s) or wait(s)

Wait decrements the value of semaphore by 1

Signal v(s) or signal(s)

Signal increments the value of semaphore by 1

Semaphore

A semaphore can only take non-negative values.

Before the program starts, it is initialized to n for a counting semaphore (where n is the number of
processes allowed to enter the critical section simultaneously),

or to 1 in the case of a binary semaphore.

Signal Operations

Increments semaphore by 1

Signals that the process has completed its critical section execution

signal(S)
{
    S++;
}

Wait Operations

The wait operation decrements the value of semaphore S if S is a positive number.

If S is 0 or negative, the code gets stuck at the while loop, which keeps iterating indefinitely.

The semicolon after the while makes the loop spin in place as long as S is 0 or negative.

Thus the code does not move ahead until the value of S is increased by some signal operation
elsewhere.

Code Logic for Incrementing – Decrementing value of Semaphore –

wait(S)
{
    while (S <= 0)
        ; // busy wait until S becomes positive
    S--;
}
How the two operations work together to protect access to the critical section –
// some code

wait(s);

// critical section code

signal(s);

// remainder code
The eventual goal is to protect the critical section code using wait and signal operations.

Semaphores in Process Synchronization


Semaphores are just normal variables used to coordinate the activities of multiple processes in a computer
system. They are used to enforce mutual exclusion, avoid race conditions and implement synchronization
between processes.

Semaphores provide two operations: wait (P) and signal (V). The wait operation decrements the value of the
semaphore, and the signal operation increments it. When the value of the semaphore is zero, any process that
performs a wait operation will be blocked until another process performs a signal operation.

Semaphores are used to implement critical sections, which are regions of code that must be executed by only
one process at a time. By using semaphores, processes can coordinate access to shared resources, such as shared
memory or I/O devices.

Semaphores are of two types:


Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1. Its value is initialized to 1. It is used
to implement the solution of critical section problems with multiple processes.

Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple
instances.
Now let us see how it does so.

First, look at two operations that can be used to access and change the value of the semaphore variable.

Some points regarding P and V operation:

P operation is also called wait, sleep, or down operation, and V operation is also called signal, wake-up, or up
operation.

Both operations are atomic, and the semaphore s is initialized to one. Here atomic means that the read, modify
and update of the variable happen together, as one uninterruptible step: in between the read, modify and update,
no other operation is performed that may change the variable.

A critical section is surrounded by both operations to implement process synchronization: the critical section
of a process sits between the P and V operations on the semaphore.

Now, let us see how this implements mutual exclusion. Let there be two processes P1 and P2, and a semaphore s
initialized to 1. If P1 enters its critical section, the value of semaphore s becomes 0. If P2 now wants to
enter its critical section, it must wait until s > 0, which can only happen when P1 finishes its critical
section and calls the V operation on semaphore s.

This is how mutual exclusion is achieved with a binary semaphore.
Implementation: Binary semaphores

struct semaphore {

    // 1 when the semaphore is free, 0 when taken
    int value;

    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down operation.
    Queue<process> q;

};

P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    }
    else {
        // add the calling process p to the waiting queue
        s.q.push(p);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q is empty) {
        s.value = 1;
    }
    else {

        // select a process from the waiting queue
        Process p = s.q.front();
        // remove it from the queue as it has been
        // sent to the critical section
        s.q.pop();
        wakeup(p);
    }
}
The description above is for binary semaphore which can take only two values 0 and 1 and ensure mutual
exclusion. There is one other type of semaphore called counting semaphore which can take values greater than
one.

Now suppose there is a resource with 4 instances. We initialize S = 4, and the rest is the same as for the
binary semaphore. Whenever a process wants the resource, it calls the P (wait) operation, and when it is done,
it calls the V (signal) operation. If the value of S becomes zero, a process has to wait until S becomes
positive again. For example, suppose there are 4 processes P1, P2, P3, P4, and they all call the wait operation
on S (initialized to 4). If another process P5 wants the resource, it must wait until one of the four processes
calls the signal function, making the value of the semaphore positive.

Limitations:

One of the biggest limitations of semaphores is priority inversion.

Deadlock: suppose a process tries to wake up another process that is not yet in a sleep state; the wakeup is
lost, and processes may then block indefinitely.

The operating system has to keep track of all calls to wait and signal on the semaphore.

Problem with this implementation of a semaphore:

The main problem with the semaphore above is that it requires busy waiting. If a process is in the critical
section, other processes trying to enter will wait until the critical section is free; while waiting, a process
continuously checks the semaphore value (the line "while (S <= 0);" in the P operation) and wastes CPU cycles.

This busy waiting is also called a "spinlock," since the process keeps spinning while waiting for the lock. In
order to avoid this, another implementation is provided below.

Implementation: Counting semaphore

struct Semaphore {

    int value;

    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down operation.
    Queue<process> q;

};

P(Semaphore s)
{
    s.value = s.value - 1;
    if (s.value < 0) {

        // add the currently executing process p to the queue
        s.q.push(p);
        block();
    }
    else
        return;
}

V(Semaphore s)
{
    s.value = s.value + 1;
    if (s.value <= 0) {

        // remove a blocked process p from the queue
        Process p = s.q.pop();
        wakeup(p);
    }
    else
        return;
}
In this implementation, whenever a process waits, it is added to the waiting queue of processes associated with
that semaphore through the block() system call. When a process completes its work, it calls the signal
function, and one process in the queue is resumed using the wakeup() system call.

Advantages of Semaphores:

A simple and effective mechanism for process synchronization

Supports coordination between multiple processes

Provides a flexible and robust way to manage shared resources.

It can be used to implement critical sections in a program.

It can be used to avoid race conditions.

Disadvantages of Semaphores:

Can lead to performance degradation due to the overhead associated with wait and signal operations.

Can result in deadlock if used incorrectly.

Can cause performance issues in a program if not used properly.

Can be difficult to debug and maintain.

Can be prone to race conditions and other synchronization problems if not used correctly.

Can be vulnerable to certain types of attacks, such as denial of service attacks.

Note: the semaphore was proposed by Dijkstra in 1965 as a very significant technique for managing concurrent
processes using a simple integer value. A semaphore is simply an integer variable that is shared between
threads, used to solve the critical section problem and to achieve process synchronization in a multiprocessing
environment.

Monitors in Process Synchronization


Monitors are a higher-level synchronization construct that simplifies process synchronization by providing a
high-level abstraction for data access and synchronization. Monitors are implemented as programming language
constructs, typically in object-oriented languages, and provide mutual exclusion, condition variables, and data
encapsulation in a single construct.

A monitor is essentially a module that encapsulates a shared resource and provides access to that resource
through a set of procedures. The procedures provided by a monitor ensure that only one process can access the
shared resource at any given time, and that processes waiting for the resource are suspended until it becomes
available.
Monitors are used to simplify the implementation of concurrent programs by providing a higher-level
abstraction that hides the details of synchronization. Monitors provide a structured way of sharing data and
synchronization information, and eliminate the need for complex synchronization primitives such as
semaphores and locks.

The key advantage of using monitors for process synchronization is that they provide a simple, high-level
abstraction that can be used to implement complex concurrent systems. Monitors also ensure that
synchronization is encapsulated within the module, making it easier to reason about the correctness of the
system.

However, monitors have some limitations. For example, they can be less efficient than lower-level
synchronization primitives such as semaphores and locks, as they may involve additional overhead due to their
higher-level abstraction. Additionally, monitors may not be suitable for all types of synchronization problems,
and in some cases, lower-level primitives may be required for optimal performance.

The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming
languages to achieve mutual exclusion between processes; for example, Java synchronized methods, together with
Java's wait() and notify() constructs.

It is a collection of condition variables and procedures combined together in a special kind of module or
package.

Processes running outside the monitor can't access the monitor's internal variables, but they can call its
procedures.

Only one process at a time can execute code inside a monitor.

Condition Variables: Two different operations are performed on the condition variables of a monitor:

Wait.

Signal.

Say we have two condition variables: condition x, y; // declaring the variables

Wait operation x.wait(): A process performing a wait operation on a condition variable is suspended and placed
in the block queue of that condition variable. (Each condition variable has its own block queue.)

Signal operation x.signal(): When a process performs a signal operation on a condition variable, one of the
blocked processes is given a chance:

if (x's block queue is empty)
    // ignore the signal
else
    // resume a process from the block queue
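
Monitors are not a C-language feature, but the same discipline can be emulated with a mutex plus a condition variable. Below is a minimal sketch using POSIX threads (the names monitor_lock, ready, produce and consume are invented for this example): the mutex enforces "one process inside the monitor at a time," and the condition variable plays the role of x.wait()/x.signal().

#include <pthread.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;
static int ready = 0; // shared state encapsulated by the "monitor"

void consume(void) {
    pthread_mutex_lock(&monitor_lock);        // enter the monitor
    while (!ready)                            // x.wait(): suspend and
        pthread_cond_wait(&x, &monitor_lock); //   release the monitor
    ready = 0;                                // use the shared state
    pthread_mutex_unlock(&monitor_lock);      // leave the monitor
}

void produce(void) {
    pthread_mutex_lock(&monitor_lock);
    ready = 1;
    pthread_cond_signal(&x); // x.signal(): wake one blocked process
    pthread_mutex_unlock(&monitor_lock);
}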

Advantages of Monitors: Monitors have the advantage of making parallel programming easier and less error-prone
than techniques such as semaphores.

Disadvantages of Monitors: Monitors have to be implemented as part of the programming language; the compiler
must generate code for them. This gives the compiler the additional burden of having to know what operating
system facilities are available to control access to critical sections in concurrent processes. Some languages
that do support monitors are Java, C#, Visual Basic, Ada and Concurrent Euclid.

What is Deadlock in Operating System (OS)?


Every process needs some resources to complete its execution. However, resources are granted in a sequential
order:

1. The process requests a resource.

2. The OS grants the resource if it is available; otherwise the process waits.

3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process waits for a resource that is assigned to some other process. In
this situation, none of the processes gets executed, since the resource it needs is held by another process
which is itself waiting for some other resource to be released.

Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3.
R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't complete
without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its execution because it can't
continue without R3. P3 then demands R1, which is being used by P1, so P3 also stops its execution.

In this scenario, a cycle is formed among the three processes. None of the processes is progressing; they
are all waiting. The computer becomes unresponsive since all the processes are blocked.
Difference between Starvation and Deadlock

1. Deadlock is a situation where the processes involved block one another and none proceeds. Starvation is a
situation where low priority processes get blocked while high priority processes proceed.

2. Deadlock is an infinite waiting. Starvation is a long waiting, but not infinite.

3. Every deadlock is always a starvation. Every starvation need not be a deadlock.

4. In deadlock, the requested resource is blocked by another process. In starvation, the requested resource is
continuously used by higher priority processes.

5. Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously.
Starvation occurs due to uncontrolled priority and resource management.

Necessary conditions for Deadlocks


1. Mutual Exclusion

A resource can only be shared in a mutually exclusive manner: two processes cannot use the same
resource at the same time.

2. Hold and Wait

A process holds some resources while waiting for other resources at the same time.

3. No preemption

A resource, once allocated, cannot be forcibly taken away from a process; it is released only
voluntarily, after the process has finished with it.

4. Circular Wait

All the processes must be waiting for resources in a cyclic manner, so that the last process is waiting
for a resource which is being held by the first process.
Strategies for handling Deadlock

1. Deadlock Ignorance

Deadlock Ignorance is the most widely used approach among all the mechanisms, and is used by many operating
systems, mainly for end-user machines. In this approach, the operating system assumes that deadlock never
occurs; it simply ignores deadlock. This approach is best suited for a single end-user system where the user
uses the machine only for browsing and other normal activities.

There is always a tradeoff between correctness and performance. Operating systems like Windows and Linux mainly
focus on performance. The performance of the system decreases if it runs a deadlock handling mechanism all the
time; if deadlock happens only, say, 1 time out of 100, then it is completely unnecessary to pay that cost
continually.

In these types of systems, the user simply has to restart the computer in the case of a deadlock. Windows and
Linux mainly use this approach.

2. Deadlock prevention

Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait hold
simultaneously. If it is possible to violate one of these four conditions at all times, then deadlock can never
occur in the system.

The idea behind the approach is simple: we have to defeat one of the four conditions. However, there can be a
big argument about its physical implementation in the system.

We will discuss it later in detail.

3. Deadlock avoidance

In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at
every step it performs. The process continues as long as the system remains in a safe state; the moment the
system would move to an unsafe state, the OS backtracks one step.

In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.

We will discuss Deadlock avoidance later in detail.

4. Deadlock detection and recovery

This approach lets the processes fall into deadlock and then periodically checks whether a deadlock has
occurred in the system. If one has, it applies some recovery method to the system to get rid of the deadlock.

We will discuss deadlock detection and recovery later in more detail since it is a matter of discussion.

Deadlock Prevention

If we liken deadlock to a table standing on its four legs, then the four legs correspond to the four conditions
which, when they occur simultaneously, cause the deadlock.
If we break one of the legs, the table will certainly fall. The same happens with deadlock: if we can violate
one of the four necessary conditions and keep them from occurring together, we can prevent the deadlock.

Let's see how we can prevent each of the conditions.

1. Mutual Exclusion

Mutual exclusion, from the resource point of view, is the fact that a resource can never be used by more than
one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could
be used by more than one process at the same time, no process would ever have to wait for it.

However, if we can keep resources from behaving in a mutually exclusive manner, the deadlock can be
prevented.

Spooling

For a device like a printer, spooling can work. There is memory associated with the printer that stores jobs
from each process. The printer then collects all the jobs and prints each one according to FCFS (first come,
first served). With this mechanism, a process doesn't have to wait for the printer; it can continue with
whatever it was doing and collect its output later, once it has been produced.

Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds of
problems:

1. It cannot be applied to every resource.

2. After some time, a race condition may arise between processes competing for space in the spool.

We cannot force a resource to be used by more than one process at the same time, since that would not be fair
and serious problems could arise in performance. Therefore, we cannot practically violate mutual exclusion.
2. Hold and Wait

The hold and wait condition arises when a process holds a resource while waiting for some other resource to
complete its task. Deadlock occurs because there can be more than one process holding one resource and waiting
for another, in cyclic order.

However, we have to find some mechanism by which a process either doesn't hold any resource or doesn't wait.
That means a process must be assigned all the necessary resources before its execution starts, and must not
wait for any resource once execution has begun.

!(Hold and wait) = !hold or !wait (the negation of hold and wait is: either you don't hold or you don't wait)

This could be implemented if a process declared all its resources initially. However, although this sounds
practical, it can't be done in a computer system, because a process can't determine its necessary resources in
advance.

A process is a set of instructions executed by the CPU, and each instruction may demand multiple resources at
multiple times. The need cannot be fixed by the OS in advance.

The problems with the approach are:

1. It is practically not possible.

2. The possibility of starvation increases, due to the fact that some process may hold a resource for
a very long time.

3. No Preemption

Deadlock arises due to the fact that a resource can't be taken away from a process once allocated. However, if
we take the resource away from the process that is causing the deadlock, we can prevent the deadlock.

This is not a good approach in general, since if we take away a resource that is being used by a process, all
the work it has done so far can become inconsistent.

Consider a printer being used by some process. If we take the printer away from that process and assign it to
some other process, all the data that has been printed can become inconsistent and ineffective; moreover, the
process can't resume printing from where it left off, which causes performance inefficiency.

4. Circular Wait

To violate circular wait, we can assign a priority number to each resource and require that a process request
resources only in increasing order of priority; a process can't request a resource with a lower priority number
than one it already holds. This ensures that no cycle can form among processes waiting for resources (see the
sketch after this section).
Among all the methods, violating circular wait is the only approach that can be implemented practically.
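
As a sketch of the idea, here is the lock-ordering discipline in C, with POSIX mutexes standing in for resources (the resource numbering and process names are illustrative): every process must acquire resources in increasing number order, so a cycle of waits cannot form.

#include <pthread.h>

// Resources numbered by priority; the rule: always lock the
// lower-numbered resource first.
pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER; // resource #1
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER; // resource #2

void *process_a(void *arg) {
    pthread_mutex_lock(&r1);   // lower number first
    pthread_mutex_lock(&r2);   // then the higher number
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

// Every other process must follow the same order (r1 before r2).
// If some process locked r2 first and then waited for r1, a
// circular wait with process_a would again become possible.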

Deadlock avoidance

In deadlock avoidance, a request for a resource is granted only if the resulting state of the system doesn't
cause a deadlock. The state of the system is continuously checked for safe and unsafe states.

In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to
complete its execution.

The simplest and most useful approach states that each process should declare the maximum number of resources
of each type it may ever need. The deadlock avoidance algorithm then examines the resource allocations so that
a circular wait condition can never arise.

Safe and Unsafe States

The resource allocation state of a system can be defined by the instances of available and allocated resources,
and the maximum instance of the resources demanded by the processes.

A state of a system recorded at some random time is shown below.

Resources Assigned
Process Type 1 Type 2 Type 3 Type 4
A 3 0 2 2
B 0 0 1 1
C 1 1 1 0
D 2 1 4 0

Resources still needed


Process Type 1 Type 2 Type 3 Type 4
A 1 1 0 0
B 0 1 1 2
C 1 2 1 0
D 2 1 1 2

E = (7 6 8 4)
P = (6 2 8 3)
A = (1 4 0 1)

The tables above and the vectors E, P and A describe the resource allocation state of a system. There are 4
processes and 4 types of resources. Table 1 shows the instances of each resource assigned to each process.

Table 2 shows the instances of the resources, each process still needs. Vector E is the representation of total
instances of each resource in the system.

Vector P represents the instances of resources that have been assigned to processes. Vector A represents the
number of resources that are not in use.

A state of the system is called safe if the system can allocate all the resources requested by all the processes
without entering into deadlock.

If the system cannot fulfill the request of all processes then the state of the system is called unsafe.

The key to the deadlock avoidance approach is that when a request for resources is made, it must only be
approved if the resulting state is also a safe state.
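
To make the safe-state check concrete, here is a sketch in C of the standard safety test run on the numbers from the tables above (the array names need, alloc and avail are ours; this is only the core of the Banker's-style check, not a full implementation):

#include <stdio.h>
#include <string.h>

#define N 4 // processes A..D
#define M 4 // resource types 1..4

// Data from the tables above.
int need[N][M]  = {{1,1,0,0},{0,1,1,2},{1,2,1,0},{2,1,1,2}};
int alloc[N][M] = {{3,0,2,2},{0,0,1,1},{1,1,1,0},{2,1,4,0}};
int avail[M]    = {1,4,0,1}; // vector A

int is_safe(void) {
    int work[M], finish[N] = {0};
    memcpy(work, avail, sizeof(work));
    for (int done = 0; done < N; ) {
        int progressed = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) { // process i can run to completion...
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j]; // ...and release what it holds
                finish[i] = 1; done++; progressed = 1;
            }
        }
        if (!progressed) return 0; // no process can finish: unsafe
    }
    return 1; // a safe sequence exists
}

int main(void) {
    printf("state is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}

With the data above, the processes can finish in the order A, B, C, D, so the program reports a safe state.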

Resource Allocation Graph

The resource allocation graph is the pictorial representation of the state of a system. As its name suggests, the
resource allocation graph is the complete information about all the processes which are holding some resources
or waiting for some resources.

It also contains the information about all the instances of all the resources whether they are available or being
used by the processes.

In Resource allocation graph, the process is represented by a Circle while the Resource is represented by a
rectangle. Let's see the types of vertices and edges in detail.
Vertices are mainly of two types, Resource and process. Each of them will be represented by a different shape.
Circle represents process while rectangle represents resource.

A resource can have more than one instance. Each instance will be represented by a dot inside the rectangle.

Edges in a RAG are also of two types: one represents assignment, and the other represents a process waiting for
a resource.

A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the
resource and the head is attached to a process.
A process is shown as waiting for a resource if the tail of the arrow is attached to the process while the head
points towards the resource.

Example

Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, with one instance of each
resource.

Suppose R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3 is waiting for both R1 and R2.

This graph is deadlock free, since no cycle is formed in it.
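
For single-instance resources, checking for deadlock reduces to finding a cycle in the wait-for graph. Here is a small sketch of that check in C on the example above (the adjacency matrix waits_for is our encoding of "Pi waits for a resource held by Pj"):

#include <stdio.h>

#define N 3
// waits_for[i][j] = 1 if process i waits for a resource held by j.
// From the example (0-indexed): P2 -> P1, P3 -> P1, P3 -> P2.
int waits_for[N][N] = {
    {0,0,0},   // P1 waits for nobody
    {1,0,0},   // P2 waits for P1
    {1,1,0},   // P3 waits for P1 and P2
};
int state[N]; // 0 = unvisited, 1 = on current DFS path, 2 = done

int dfs(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++)
        if (waits_for[u][v]) {
            if (state[v] == 1) return 1;           // back edge: cycle
            if (state[v] == 0 && dfs(v)) return 1;
        }
    state[u] = 2;
    return 0;
}

int main(void) {
    for (int i = 0; i < N; i++)
        if (state[i] == 0 && dfs(i)) {
            printf("deadlock: cycle found\n");
            return 0;
        }
    printf("no cycle: deadlock free\n");
    return 0;
}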

Deadlock Detection and Recovery

In this approach, the OS doesn't apply any mechanism to avoid or prevent deadlocks; the system accepts that
deadlock will eventually occur. In order to get rid of deadlocks, the OS periodically checks the system for any
deadlock. If it finds one, the OS recovers the system using some recovery technique.

The main task of the OS here is detecting deadlocks, which it can do with the help of the resource allocation
graph.

For single-instance resource types, if a cycle forms in the graph then there will definitely be a deadlock. For
multiple-instance resource types, on the other hand, detecting a cycle is not enough: we have to apply the
safety algorithm, converting the resource allocation graph into an allocation matrix and a request matrix.

In order to recover the system from deadlock, the OS acts on either resources or processes.

For Resource

Preempt the resource

We can snatch one of the resources from its owner (a process) and give it to another process, with the
expectation that it will complete its execution and release the resource sooner. Choosing which resource to
snatch can, however, be difficult.

Rollback to a safe state

A system passes through various states before it gets into the deadlock state. The operating system can roll
back the system to a previous safe state; for this purpose, the OS needs to implement checkpointing at every
state.

The moment we get into deadlock, we roll back all the allocations to get back to the previous safe state.

For Process

Kill a process

Killing a process can solve our problem, but the bigger concern is deciding which process to kill. Generally,
the operating system kills the process that has done the least amount of work so far.

Kill all processes

This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all
processes leads to inefficiency in the system, because all the processes must then execute again from the
start.
