Unit 2 Notes Srmcem


UNIT – II

Introduction to Process Synchronization
On the basis of synchronization, processes are categorized as one of the following
two types:
 Independent Process: The execution of one process does not affect the
execution of other processes.
 Cooperative Process: A process that can affect or be affected by other
processes executing in the system.
The process synchronization problem arises with cooperative processes because
cooperative processes share resources.

Race Condition:

A race condition arises when more than one process executes the same code or accesses
the same memory or shared variable concurrently: the output or the final value of the
shared variable may then be wrong, with each process effectively "racing" to have its own
result recorded. When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order in which the accesses take
place. A race condition is a situation that may occur inside a critical section: the result of
executing multiple threads in the critical section differs according to the order in which
the threads execute. Race conditions in critical sections can be avoided if the critical
section is treated as an atomic instruction; proper thread synchronization using locks or
atomic variables also prevents race conditions.

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a
time. The critical section contains shared variables that need to be synchronized to
maintain the consistency of data variables. So the critical section problem means
designing a way for cooperative processes to access shared resources without
creating data inconsistencies.

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy the following requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes
are waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
in the critical section next, and the selection can not be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted. In other words,
a process gets bounded access to the critical section rather than unbounded access,
which would otherwise starve the other processes.
 No assumptions about hardware or speed: the solution should not depend on the
hardware configuration, such as a 32-bit or 64-bit processor, or on the relative
processing speeds of the processes.
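The general structure of a process that uses a critical section can be sketched as follows (a schematic outline only, written in C; the concrete entry and exit sections are what the algorithms in the rest of this unit provide):

/* Schematic outline of a process that uses a critical section. */
void process(void)
{
    while (1) {
        /* entry section:     request permission to enter the critical section */
        /* critical section:  access the shared variables                      */
        /* exit section:      announce that the critical section is free       */
        /* remainder section: do independent work                              */
    }
}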
Example 1: understanding the need for synchronization

int shared = 5;        // shared variable (the initial value 5 is implied by the discussion below)

Process 1:
    int x = shared;
    x++;
    sleep(1);          // context switch / preemption point
    shared = x;

Process 2:
    int y = shared;
    y--;
    sleep(1);          // context switch / preemption point
    shared = y;

In the above example, if Process 1 is preempted at sleep(1) and control is transferred to
Process 2, and Process 2 is then also preempted at its sleep(1), the final value of the
shared variable comes out as 4 (or 6, depending on which process writes back last)
rather than 5. Since Process 1 increases the value by 1 and Process 2 decreases it by 1,
the final value should have been 5.
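A minimal runnable sketch of the same race using POSIX threads is shown below (an illustration, not part of the notes: the thread names, the initial value 5, and the sleep(1) calls simply mirror the example above, with sleep(1) standing in for the preemption point):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 5;                    /* shared variable, initially 5 */

void *process1(void *arg) {        /* mirrors Process 1: increments shared */
    int x = shared;
    x++;
    sleep(1);                      /* preemption point: the other thread runs here */
    shared = x;
    return NULL;
}

void *process2(void *arg) {        /* mirrors Process 2: decrements shared */
    int y = shared;
    y--;
    sleep(1);
    shared = y;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* expected 5, but prints 4 or 6 */
    return 0;
}

Compiling with gcc race.c -pthread and running it typically prints 4 or 6 instead of 5, depending on which thread writes back last.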

Producer-Consumer problem
The Producer-Consumer problem is a classical multi-process synchronization
problem, that is we are trying to achieve synchronization between more than one
process.

In the producer-consumer problem there is one Producer producing some items and
one Consumer consuming the items produced by the Producer. Both share the same
memory buffer, which is of fixed size.

The task of the Producer is to produce the item, put it into the memory buffer,
and again start producing items. Whereas the task of the Consumer is to consume
the item from the memory buffer.
Let's understand the problem.
The following points describe the constraints of the Producer-Consumer problem:

o The producer should produce data only when the buffer is not full. In case it
is found that the buffer is full, the producer is not allowed to store any data
into the memory buffer.
o Data can only be consumed by the consumer if and only if the memory
buffer is not empty. In case it is found that the buffer is empty, the
consumer is not allowed to use any data from the memory buffer.
o Accessing memory buffer should not be allowed to producer and consumer
at the same time.

Let's see the code for the above problem:

Shared variables:

int buffer[n];               // shared buffer with n slots (slots 0 .. n-1)
int in = 0, out = 0;         // next slot to fill / next slot to empty
int count = 0;               // number of items currently in the buffer

Producer code:

void producer(void) {
    int itemp;
    while (1) {
        produceitem(itemp);          // produce a new item
        while (count == n);          // busy-wait: buffer full
        buffer[in] = itemp;
        in = (in + 1) % n;
        count++;
    }
}

Consumer code:

void consumer(void) {
    int itemc;
    while (1) {
        while (count == 0);          // busy-wait: buffer empty
        itemc = buffer[out];
        out = (out + 1) % n;
        count--;
        consumeitem(itemc);          // consume the item
    }
}

Process synchronization is not achieved with this code: if the producer or the consumer
is preempted in the middle of updating the shared variable count, a race condition is
generated.
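The root cause is that count++ and count-- are not atomic: each is really a load, an arithmetic step, and a store. The small C sketch below (illustrative, not part of the notes) writes the two updates out step by step:

int count = 0;              /* the shared counter */

void increment(void)        /* what the producer's count++ really does */
{
    int temp = count;       /* 1. load the shared counter               */
    temp = temp + 1;        /* 2. increment the private copy            */
    count = temp;           /* 3. store the result back                 */
}

void decrement(void)        /* what the consumer's count-- really does */
{
    int temp = count;       /* 1. load                                  */
    temp = temp - 1;        /* 2. decrement                             */
    count = temp;           /* 3. store                                 */
}

If the producer is preempted between its load and its store, the consumer works on a stale value and one of the two updates is lost, which is exactly the race condition described above.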

Hardware Synchronization Algorithms: Unlock and Lock, Test and Set, Swap
Process synchronization problems occur when two processes running concurrently
share the same data or the same variable. The value of that variable may not be updated
correctly before it is used by a second process; such a condition is known as a race
condition. There are software as well as hardware solutions to this problem. This section
discusses the hardware solutions to the process synchronization problem and their
implementation.
There are three algorithms in the hardware approach of solving Process
Synchronization problem:
1. Test and Set
2. Swap
3. Unlock and Lock
Hardware instructions in many operating systems help in effective solution of
critical section problems.
1. Test and Set :
Here, the shared variable is lock which is initialized to false. TestAndSet(lock)
algorithm works in this way – it always returns whatever value is sent to it and sets
lock to true. The first process will enter the critical section at once as
TestAndSet(lock) will return false and it’ll break out of the while loop. The other
processes cannot enter now as lock is set to true and so the while loop continues to
be true. Mutual exclusion is ensured. Once the first process gets out of the critical
section, lock is changed to false. So, now the other processes can enter one by one.
Progress is also ensured. However, after the first process any process can go in.
There is no queue maintained, so any new process that finds the lock to be false
again, can enter. So bounded waiting is not ensured.
Test and Set Pseudocode –

// Shared variable lock initialized to false
boolean lock = false;

boolean TestAndSet (boolean &target) {
    boolean rv = target;     // save the old value of the lock
    target = true;           // set the lock
    return rv;               // return the old value
}

while (1) {
    while (TestAndSet(lock));    // entry section: spin until the old value is false
    // critical section
    lock = false;                // exit section
    // remainder section
}

To see why the test and the set must be a single instruction, consider the naive software
version of the same entry and exit code:

Entry section:
1. while (lock == 1);
2. lock = 1;
3. Critical section
Exit section:
4. lock = 0;

Case 1: P1 executes steps 1, 2, 3, 4 and then P2 executes 1, 2, 3, 4. Mutual exclusion and
progress hold.

Case 2: P1 executes step 1 and is preempted before step 2; P2 then executes 1, 2, 3; when
P1 resumes, it also executes 2 and 3. There is no mutual exclusion, as more than one
process is inside the critical section at the same time.

Solution: convert the two entry instructions into one instruction, so that the entry
condition is completely executed before any preemption; this is exactly what the test
and set instruction does.
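On real hardware the same idea is exposed through an atomic instruction. A minimal sketch using C11 <stdatomic.h> is shown below (an assumption about the toolchain; the names spin_lock and spin_unlock are illustrative and not from the notes):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;     /* shared lock, initially clear (false) */

void spin_lock(void)
{
    /* atomic_flag_test_and_set returns the old value and sets the flag,
       just like the TestAndSet(lock) pseudocode above. */
    while (atomic_flag_test_and_set(&lock))
        ;                                /* busy-wait while another thread holds the lock */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);            /* lock = false in the exit section */
}

A thread calls spin_lock() in its entry section and spin_unlock() in its exit section; as discussed above, mutual exclusion and progress hold, but bounded waiting does not.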

2. Swap :
Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting
lock to true in the swap function, key is set to true and then swapped with lock. So,
again, when a process is in the critical section, no other process gets to enter it as
the value of lock is true. Mutual exclusion is ensured. Again, when a process leaves the
critical section it sets lock to false, so any process that finds it false gets to enter the
critical section. Progress is ensured. However, bounded waiting is again not ensured, for
the very same reason.
Swap Pseudocode –
// Shared variable lock initialized to false
// and an individual key (one per process) initialized to false

boolean lock = false;
boolean key;                     // individual to each process

void swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

while (1) {
    key = true;
    while (key)
        swap(lock, key);         // entry section
    // critical section
    lock = false;                // exit section
    // remainder section
}
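A hedged C11 sketch of the same Swap-based lock is shown below, using atomic_exchange so that the swap of lock and key happens as one atomic step (the names enter_cs and leave_cs are illustrative, not from the notes):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock;                            /* shared lock; zero-initialized, i.e. false */

void enter_cs(void)
{
    bool key = true;                         /* individual key, one per process */
    while (key)
        key = atomic_exchange(&lock, key);   /* swap(lock, key) as one atomic step */
}

void leave_cs(void)
{
    atomic_store(&lock, false);              /* lock = false */
}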
3. Unlock and Lock :
Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it
adds another value, waiting[i], for each process which checks whether or not a
process has been waiting. A ready queue is maintained with respect to the process in
the critical section. All the processes coming in next are added to the ready queue
with respect to their process number, not necessarily sequentially. Once the ith
process gets out of the critical section, it does not turn lock to false so that any
process can avail the critical section now, which was the problem with the previous
algorithms. Instead, it checks if there is any process waiting in the queue. The queue
is taken to be a circular queue. j is considered to be the next process in line and the
while loop checks from jth process to the last process and again from 0 to (i-1)th
process if there is any process waiting to access the critical section. If there is no
process waiting then the lock value is changed to false and any process which
comes next can enter the critical section. If there is, then that process’ waiting value
is turned to false, so that the first while loop becomes false and it can enter the
critical section. This ensures bounded waiting. So the problem of process
synchronization can be solved through this algorithm.
Unlock and Lock Pseudocode –
// Shared variable lock initialized to false
// and an individual key initialized to false

boolean lock = false;
boolean key;                     // individual to each process
boolean waiting[n];              // one entry per process, all initialized to false

while (1) {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;          // process i is no longer waiting
    // critical section
    j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
    // remainder section
}

4. Turn Variable (Strict Alternation): This is generally a 2-process solution.

Let there be a turn variable whose value is initially set to 0:

int turn = 0;

Process 1:
    while (turn != 0);      // wait until it is Process 1's turn
    // critical section
    turn = 1;               // hand the turn to Process 2

Process 2:
    while (turn != 1);      // wait until it is Process 2's turn
    // critical section
    turn = 0;               // hand the turn back to Process 1

1. Mutual exclusion is satisfied.
2. Progress is not satisfied: since turn is initialized to 0, Process 2 cannot enter the
critical section until Process 1 has entered first.
3. Bounded waiting is satisfied: once P1 has executed its critical section, it cannot enter
again until P2 has taken its turn.
Semaphores in Process
Synchronization
To get rid of the problem of wasting the wake-up signals, Dijkstra proposed an
approach which involves storing all the wake-up calls. Dijkstra states that, instead
of giving the wake-up calls directly to the consumer, producer can store the wake-
up call in a variable. Any of the consumers can read it whenever it needs to do so.

A semaphore is a variable that stores all the wake-up calls that are transferred from
producer to consumer. It is a variable on which the read, modify and update steps
happen atomically in kernel mode.

Semaphore cannot be implemented in the user mode because race condition may
always arise when two or more processes try to access the variable
simultaneously. It always needs support from the operating system to be
implemented.

According to the demand of the situation, Semaphore can be divided into two
categories.

1. Counting Semaphore
2. Binary Semaphore or Mutex

We will discuss each one in detail.

Counting Semaphore

There are scenarios in which more than one process needs to execute in the critical
section simultaneously. A counting semaphore is used when we need to allow more
than one process in the critical section at the same time.

The programming code of the semaphore implementation is shown below. It includes
the structure of the semaphore and the logic with which entry to and exit from the
critical section are performed.

struct Semaphore
{
    int value;       // number of processes that can enter the critical section simultaneously
    queue type L;    // L contains the set of processes that are blocked
};

// Entry section code for a process
P/Down(Semaphore S)
{
    S.value = S.value - 1;      // the semaphore's value is decreased when a new
                                // process wants to enter the critical section
    if (S.value < 0)
    {
        put_process(PCB) in L;  // if the value is negative, the process
                                // goes into the blocked state
        sleep();
    }
    else
        return;                 // otherwise the process enters the critical section
}

// Exit section code for a process
V/Up(Semaphore S)
{
    S.value = S.value + 1;      // the semaphore's value is increased when a process
                                // exits the critical section
    if (S.value <= 0)
    {
        select a process from L;  // if the value is still not positive, at least one
                                  // process is blocked, so wake one of them up
        wakeup();
    }
}
In this mechanism, the entry and exit in the critical section are performed on the
basis of the value of counting semaphore. The value of counting semaphore at
any point of time indicates the maximum number of processes that can enter in
the critical section at the same time.

A process that wants to enter the critical section first decreases the semaphore value
by 1 and then checks whether the value has become negative. If it is negative, the
process is pushed into the list of blocked processes (i.e. L); otherwise it enters the
critical section.

When a process exits the critical section, it increases the counting semaphore by 1 and
then checks whether the value is negative or zero. If it is, at least one process is
waiting in the blocked state, so, to ensure bounded waiting, the first process among the
list of blocked processes is woken up and enters the critical section.

The processes in the blocked list are woken in the order in which they slept. If the
value of the counting semaphore is negative, its magnitude gives the number of
processes in the blocked state, while if it is positive it gives the number of slots
still available in the critical section.
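A short trace makes the sign convention concrete (the initial value 2 and the process names are illustrative, not from the notes). Suppose S.value is initialized to 2, so at most two processes may be inside the critical section at once:

1. P1 performs Down: value 2 → 1, P1 enters.
2. P2 performs Down: value 1 → 0, P2 enters.
3. P3 performs Down: value 0 → -1; the value is negative, so P3 is added to L and sleeps.
4. P4 performs Down: value -1 → -2; P4 also blocks, and the value -2 says that two processes are blocked.
5. P1 performs Up: value -2 → -1, which is still not positive, so the first blocked process (P3) is woken up and enters.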

Problem on Counting Semaphore


Questions on counting semaphores are asked in GATE. Generally the questions are
very simple and involve only subtraction and addition.

1. Wait → Decrement → Down → P
2. Signal → Increment → Up → V

The following type of question can be asked in GATE.


A counting semaphore was initialized to 12. Then 10 P (wait)
and 4 V (signal) operations were performed on this semaphore.
What is the result?
1. S = 12 (initial value)
2. After 10 P (wait) operations: S = S - 10 = 12 - 10 = 2
3. After 4 V (signal) operations: S = S + 4 = 2 + 4 = 6

Hence, the final value of counting semaphore is 6.

Binary Semaphore or Mutex


In a counting semaphore, mutual exclusion is not necessarily provided, because a set of
processes may be allowed to execute in the critical section simultaneously.

A binary semaphore, however, strictly provides mutual exclusion. Instead of having
more than one slot available in the critical section, we can have at most one process in
the critical section. The semaphore can take only two values, 0 or 1.

Let's see the programming implementation of Binary Semaphore.

struct Bsemaphore
{
    enum value {0, 1};   // value is an enumerated type that can take only the values 0 and 1
    queue type L;        // L contains the PCBs of all processes blocked after an
                         // unsuccessful down operation
};

Down(Bsemaphore S)
{
    if (S.value == 1)    // a slot is available in the critical section,
    {                    // so let the process enter
        S.value = 0;     // set the value to 0 so that no other process reads it as 1
    }
    else
    {
        put the process (PCB) in S.L;   // no slot is available, so the process
                                        // waits in the blocked queue
        sleep();
    }
}

Up(Bsemaphore S)
{
    if (S.L is empty)    // an empty blocked list means no process is currently
    {                    // waiting to enter the critical section
        S.value = 1;
    }
    else
    {
        select a process from S.L;
        wakeup();        // otherwise wake the first process of the blocked queue
    }
}
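For comparison, a minimal runnable sketch of a binary semaphore using the POSIX semaphore API is shown below (the shared counter and thread names are illustrative; initializing the semaphore's value to 1 gives the binary behaviour described above):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                        /* binary semaphore guarding the critical section */
int shared_count = 0;           /* some shared data, for illustration */

void *worker(void *arg)
{
    sem_wait(&s);               /* Down / P: blocks if the value is already 0 */
    shared_count++;             /* critical section */
    sem_post(&s);               /* Up / V: wakes one blocked thread, if any */
    return NULL;
}

int main(void)
{
    sem_init(&s, 0, 1);         /* initial value 1 -> at most one thread inside */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_count = %d\n", shared_count);
    sem_destroy(&s);
    return 0;
}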
Producer Consumer Problem using
Semaphores
Problem Statement – We have a buffer of fixed size. A producer can
produce an item and place it in the buffer. A consumer can pick items
and consume them. We need to ensure that when the producer is
placing an item in the buffer, the consumer is not consuming an item
at the same time. In this problem, the buffer is the critical section.
To solve this problem, we need two counting semaphores – Full and Empty – and a
binary semaphore mutex. “Full” keeps track of the number of items in the buffer at any
given time and “Empty” keeps track of the number of unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer –
do{

//produce an item

wait(empty);
wait(mutex);

//place in buffer

signal(mutex);
signal(full);

}while(true)

When the producer produces an item, the value of “empty” is reduced by 1 because
one slot will now be filled. The value of mutex is also reduced, to prevent the
consumer from accessing the buffer. Once the producer has placed the item, the value
of “full” is increased by 1. The value of mutex is also increased by 1 because the
producer's task is complete and the consumer can now access the buffer.
Solution for Consumer –
do{

wait(full);
wait(mutex);

// remove item from buffer

signal(mutex);
signal(empty);

// consumes item

}while(true)

As the consumer removes an item from the buffer, the value of “full” is reduced by 1
and the value of mutex is also reduced, so that the producer cannot access the buffer
at this moment. Once the consumer has consumed the item, the value of “empty” is
increased by 1. The value of mutex is also increased, so that the producer can access
the buffer again.
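A runnable sketch of the same scheme with POSIX semaphores and a small fixed-size buffer is given below (the buffer size 5, the ten items, and the names are illustrative assumptions, not part of the notes):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                                  /* buffer size (assumed) */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots, full_slots, mutex;        /* empty = N, full = 0, mutex = 1 */

void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);              /* wait(empty) */
        sem_wait(&mutex);                    /* wait(mutex) */
        buffer[in] = item;                   /* place the item in the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                    /* signal(mutex) */
        sem_post(&full_slots);               /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);               /* wait(full) */
        sem_wait(&mutex);                    /* wait(mutex) */
        int item = buffer[out];              /* remove the item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);                    /* signal(mutex) */
        sem_post(&empty_slots);              /* signal(empty) */
        printf("consumed %d\n", item);       /* consume the item outside the lock */
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}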
Readers-Writers Problem | Set 1
(Introduction and Readers Preference
Solution)
Consider a situation where we have a file shared between many people.

 If one of the people tries editing the file, no other person should be reading or
writing at the same time, otherwise changes will not be visible to him/her.
 However if some person is reading the file, then others may read it at the same
time.
In operating systems, this situation is known as the readers-writers problem.
Problem parameters:

 One set of data is shared among a number of processes


 Once a writer is ready, it performs its write. Only one writer may write at a time
 If a process is writing, no other process can read it
 If at least one reader is reading, no other process can write
 Readers may not write and only read

Solution when Reader has the Priority over Writer


Here priority means that no reader should be kept waiting if the shared file is
currently open for reading.
Three variables are used to implement the solution: mutex, wrt, and readcnt.

1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion


when readcnt is updated i.e. when any reader enters or exit from the critical
section and semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt tells the number of processes performing read in the
critical section, initially 0
Functions for semaphore :
– wait() : decrements the semaphore value.
– signal() : increments the semaphore value.
Writer process:

1. Writer requests the entry to critical section.


2. If allowed i.e. wait() gives a true value, it enters and performs the write. If not
allowed, it keeps on waiting.
3. It exits the critical section.

do {
// writer requests for critical section
wait(wrt);

// performs the write

// leaves the critical section


signal(wrt);

} while(true);

Reader process:

1. Reader requests the entry to critical section.


2. If allowed:
 it increments the count of number of readers inside the critical section. If this
reader is the first reader entering, it locks the wrt semaphore to restrict the
entry of writers if any reader is inside.
 It then, signals mutex as any other reader is allowed to enter while others are
already reading.
 After performing reading, it exits the critical section. When exiting, it checks
if no more reader is inside, it signals the semaphore “wrt” as now, writer can
enter the critical section.
3. If not allowed, it keeps on waiting.

do {

// Reader wants to enter the critical section


wait(mutex);

// The number of readers has now increased by 1


readcnt++;
// there is at least one reader in the critical section
// this ensure no writer can enter if there is even one reader
// thus we give preference to readers here
if (readcnt==1)
wait(wrt);

// other readers can enter while this current reader is inside


// the critical section
signal(mutex);

// current reader performs reading here


wait(mutex); // a reader wants to leave

readcnt--;

// that is, no reader is left in the critical section,


if (readcnt == 0)
signal(wrt); // writers can enter

signal(mutex); // reader leaves

} while(true);
Thus, the semaphore ‘wrt‘ is queued on both readers and writers in a manner such
that preference is given to readers if writers are also there. Thus, no reader is
waiting simply because a writer has requested to enter the critical section.
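The same readers-preference scheme can be sketched with POSIX semaphores as follows (the number of threads and the bodies of the read and write steps are illustrative assumptions):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex, wrt;               /* mutex protects readcnt; wrt excludes writers */
int readcnt = 0;

void *writer(void *arg)
{
    sem_wait(&wrt);             /* writer requests the critical section */
    printf("writing\n");        /* performs the write */
    sem_post(&wrt);             /* leaves the critical section */
    return NULL;
}

void *reader(void *arg)
{
    sem_wait(&mutex);
    readcnt++;
    if (readcnt == 1)           /* first reader locks out the writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    printf("reading\n");        /* reading happens concurrently */

    sem_wait(&mutex);
    readcnt--;
    if (readcnt == 0)           /* last reader lets the writers in again */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t r1, r2, w;
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(w,  NULL);
    pthread_join(r2, NULL);
    return 0;
}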
Dining Philosophers Problem (DPP)

The dining philosophers problem states that there are 5 philosophers sharing a
circular table, and they eat and think alternately. There is a bowl of rice for each of
the philosophers and 5 chopsticks. A philosopher needs both their right and left
chopsticks to eat. A hungry philosopher may eat only if both chopsticks are
available; otherwise the philosopher puts down the chopsticks and begins thinking
again.
The dining philosopher is a classic synchronization problem as it demonstrates a
large class of concurrency control problems.
Solution of Dining Philosophers Problem
A solution to the Dining Philosophers Problem is to use a semaphore to represent a
chopstick. A chopstick can be picked up by executing a wait operation on the
semaphore and released by executing a signal operation on the semaphore.
The structure of the chopstick is shown below −
semaphore chopstick [5];
Initially the elements of the chopstick are initialized to 1 as the chopsticks are on
the table and not picked up by a philosopher.
The structure of a random philosopher i is given as follows −
do {
    wait( chopstick[i] );
    wait( chopstick[ (i+1) % 5] );
    /* EATING THE RICE */
    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5] );
    /* THINKING */
} while(1);
In the above structure, wait operations are first performed on chopstick[i] and
chopstick[ (i+1) % 5], meaning that philosopher i has picked up the chopsticks on
either side. Then the eating function is performed.
After that, signal operation is performed on chopstick[i] and chopstick[ (i+1) % 5].
This means that the philosopher i has eaten and put down the chopsticks on his
sides. Then the philosopher goes back to thinking.
Difficulty with the solution
The above solution makes sure that no two neighboring philosophers can eat at the
same time. But this solution can lead to a deadlock. This may happen if all the
philosophers pick their left chopstick simultaneously. Then none of them can eat
and deadlock occurs.
Some of the ways to avoid deadlock are as follows −
 There should be at most four philosophers at the table.
 An even-numbered philosopher should pick up the right chopstick first and then the
left chopstick, while an odd-numbered philosopher should pick up the left chopstick
first and then the right chopstick (see the sketch after this list).
 A philosopher should only be allowed to pick their chopstick if both are
available at the same time.
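A sketch of the second idea, in the same semaphore notation as above, is shown below (it assumes the convention that chopstick[i] is philosopher i's left chopstick; the asymmetry itself is the point, not the exact labelling):

do {
    if (i % 2 == 0) {                        /* even philosopher: right chopstick first */
        wait( chopstick[(i + 1) % 5] );
        wait( chopstick[i] );
    } else {                                 /* odd philosopher: left chopstick first */
        wait( chopstick[i] );
        wait( chopstick[(i + 1) % 5] );
    }
    /* EATING THE RICE */
    signal( chopstick[i] );
    signal( chopstick[(i + 1) % 5] );
    /* THINKING */
} while (1);

Because neighbouring philosophers now disagree about which chopstick to pick up first, the situation in which every philosopher holds exactly one chopstick and waits for the other can no longer arise.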
Deadlock System model

Overview :
A deadlock occurs when a set of processes is stalled because each process is holding a
resource while waiting for a resource held by another process. For example, if Process 1
is holding Resource 1 and waiting for Resource 2, while Process 2 is holding Resource 2
and waiting for Resource 1, neither process can proceed.

System Model :
 For the purposes of deadlock discussion, a system can be modeled as a
collection of limited resources that can be divided into different categories and
allocated to a variety of processes, each with different requirements.
 Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources
are examples of resource categories.
 By definition, all resources within a category are equivalent, and any of the
resources within that category can equally satisfy a request from that category. If
this is not the case (i.e. if there is some difference between the resources within a
category), then that category must be subdivided further. For example, the term
“printers” may need to be subdivided into “laser printers” and “color inkjet
printers.”
 Some categories may only have one resource.
 The kernel keeps track of which resources are free and which are allocated, to
which process they are allocated, and a queue of processes waiting for this
resource to become available, for all kernel-managed resources. Mutexes or
wait() and signal() calls can be used to control application-managed resources
(i.e. binary or counting semaphores).
 When every process in a set is waiting for a resource that is currently assigned to
another process in the set, the set is said to be deadlocked.

Operations :
In normal operation, a process must request a resource before using it and release it
when finished, as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the
resource(s) required to become available. The system, for example, uses the
functions open(), malloc(), new(), and request ().
2. Use –
The process makes use of the resource, such as printing to a printer or reading
from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by other processes.

Necessary Conditions or Characterization:


There are four conditions that must be met in order to achieve deadlock as follows.
1. Mutual Exclusion –
At least one resource must be kept in a non-shareable state; if another process
requests it, it must wait for it to be released.

2. Hold and Wait –


A process must hold at least one resource while also waiting for at least one
resource that another process is currently holding.

3. No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource
cannot be taken away from that process until the process voluntarily releases it.

4. Circular Wait –
There must be a set of processes P0, P1, P2, …, PN such that every P[i] is
waiting for P[(i + 1) % (N + 1)]. (It is important to note that this condition
implies the hold-and-wait condition, but dealing with the four conditions is
easier if they are considered separately.)
Deadlock Prevention And Avoidance
Deadlock Characteristics
As discussed in the previous section, a deadlock has the following characteristics.

1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait
Deadlock Prevention
We can prevent Deadlock by eliminating any of the above four conditions.

Eliminate Mutual Exclusion


It is not possible to violate mutual exclusion for some resources, such as tape drives and
printers, because they are inherently non-shareable.
Eliminate Hold and wait
1. Allocate all required resources to the process before the start of its execution. This
eliminates the hold and wait condition, but it leads to low device utilization. For
example, if a process requires a printer only at a later time and the printer is allocated
before the start of its execution, the printer will remain blocked until the process has
completed its execution.

2. The process will make a new request for resources after releasing the current set
of resources. This solution may lead to starvation.

Eliminate No Preemption
Preempt resources from a process when those resources are required by other, higher
priority processes.
Eliminate Circular Wait
Each resource is assigned a numerical value, and a process may request resources only in
increasing order of that numbering.
For example, if process P1 has been allocated resource R5, then a later request by P1 for
R4 or R3 (numbered lower than R5) will not be granted; only requests for resources
numbered higher than R5 will be granted.
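A small C sketch of the numbering idea with two POSIX mutexes is shown below (the resource names R1 and R2 and their ordering are illustrative assumptions):

#include <pthread.h>

/* Resources numbered by convention: R1 < R2. */
pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void *task(void *arg)
{
    /* Every process acquires resources only in increasing order of their
       number, so a circular chain of waiting processes can never form. */
    pthread_mutex_lock(&R1);
    pthread_mutex_lock(&R2);
    /* use both resources */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}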

Deadlock Avoidance
Deadlock avoidance can be done with Banker’s Algorithm.
Banker’s Algorithm
Banker's Algorithm is a resource allocation and deadlock avoidance algorithm that tests
every request made by a process for resources. It checks for a safe state: if the system
remains in a safe state after granting the request, the request is allowed; if granting it
would leave no safe state, the request is not allowed.
Inputs to Banker’s Algorithm:

1. Maximum need of resources by each process.
2. Currently allocated resources of each process.
3. Maximum free resources available in the system.
The request will only be granted under the conditions below:
1. The request made by the process is less than or equal to the maximum need of that
process.
2. The request made by the process is less than or equal to the freely available resources
in the system.
Example:
Total resources in system:
A B C D
6 5 7 6

Available system resources are:


A B C D
3 1 1 2

Processes (currently allocated resources):


A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Processes (maximum resources):
A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0

Need = maximum resources - currently allocated resources.


Processes (need resources):
A B C D
P1 2 1 0 1
P2 0 2 0 1
P3 0 1 4 0
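Continuing the example, the safety check of Banker's Algorithm can be traced by hand: start with Work = Available = (3, 1, 1, 2) and repeatedly pick a process whose need fits within Work.

1. P1: Need (2, 1, 0, 1) ≤ Work (3, 1, 1, 2), so P1 can finish.
   Work = Work + Allocation(P1) = (3+1, 1+2, 1+2, 2+1) = (4, 3, 3, 3).
2. P2: Need (0, 2, 0, 1) ≤ (4, 3, 3, 3), so P2 can finish.
   Work = (4+1, 3+0, 3+3, 3+3) = (5, 3, 6, 6).
3. P3: Need (0, 1, 4, 0) ≤ (5, 3, 6, 6), so P3 can finish.
   Work = (5+1, 3+2, 6+1, 6+0) = (6, 5, 7, 6), which equals the total resources.

All processes can finish in the order P1, P2, P3, so the system is in a safe state, and a request is granted only if the state that results from granting it still passes this check.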

Deadlock Detection And Recovery

In the previous section, we discussed Deadlock Prevention and Avoidance. Here, the
Deadlock Detection and Recovery technique for handling deadlock is discussed.
Deadlock Detection :
1. If resources have a single instance –
In this case for Deadlock detection, we can run an algorithm to check for the cycle
in the Resource Allocation Graph. The presence of a cycle in the graph is a
sufficient condition for deadlock.

2. For example, suppose resources R1 and R2 each have a single instance, R1 is allocated
to P1, P1 is waiting for R2, R2 is allocated to P2, and P2 is waiting for R1. The resource
allocation graph then contains the cycle R1 → P1 → R2 → P2 → R1, so deadlock is
confirmed.

3. If there are multiple instances of resources –
Detection of a cycle is a necessary but not a sufficient condition for deadlock; in this
case the system may or may not be in deadlock, depending on the situation.

Deadlock Recovery :
A traditional operating system such as Windows doesn’t deal with deadlock
recovery as it is a time and space-consuming process. Real-time operating systems
use Deadlock recovery.
1. Killing the process –
Either kill all the processes involved in the deadlock, or kill them one by one: after
killing each process, check for deadlock again and keep repeating until the system
recovers. Killing processes one by one breaks the circular wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock and allocated
to other processes, so that there is a possibility of recovering the system from
deadlock. In this case, the preempted processes may suffer starvation.
