
Unit III

Process Synchronization and Deadlocks

S.E.
Operating System

Dr. Jyoti Chavhan


Assistant Professor



Concurrency

Concurrency is when multiple tasks can start, run, and complete in overlapping time periods, even if they aren't running at the same instant.

In computer science, concurrency is when multiple processes are in progress at the same time, apparently in parallel, within an operating system.

E.g.

You open a browser and have 100 tabs open in Chrome/Firefox. Each tab works in its own process or
thread and runs its own JavaScript code on its own web page. This is an example of
concurrent execution in software.



What is concurrent programming?

Concurrency generally refers to events or circumstances that are happening or existing at the same time.

In programming terms, concurrent programming is a technique in which two or more processes start,
run in an interleaved fashion through context switching, and complete in an overlapping time period by
managing access to shared resources, e.g. on a single CPU core.

This doesn't necessarily mean that multiple processes will be running at the same instant, even if the
results might make it seem like it.



Principles of Concurrency
Interleaving − Interleaving refers to the interleaved execution of multiple processes or
threads. The operating system uses a scheduler to determine which process or thread to
execute at any given time. Interleaving allows for efficient use of CPU resources and
ensures that all processes or threads get a fair share of CPU time.
Synchronization − Synchronization refers to the coordination of multiple processes or
threads to ensure that they do not interfere with each other. This is done through the use
of synchronization primitives such as locks, semaphores, and monitors. These
primitives allow processes or threads to coordinate access to shared resources such as
memory and I/O devices.
Mutual exclusion − Mutual exclusion refers to the principle of ensuring that only one
process or thread can access a shared resource at a time. This is typically
implemented using locks or semaphores so that multiple processes or threads do
not access a shared resource simultaneously (a code sketch follows this list).



Deadlock avoidance − A deadlock is a situation in which two or more processes or threads are
each waiting for another to release a resource, so none of them can proceed. Operating systems
use various techniques such as resource-allocation graphs and deadlock prevention algorithms
to avoid deadlock.
Process or thread coordination − Processes or threads may need to coordinate their activities
to achieve a common goal. This is typically achieved using synchronization primitives such
as semaphores or message passing mechanisms such as pipes or sockets.
Resource allocation − Operating systems must allocate resources such as memory, CPU time,
and I/O devices to multiple processes or threads in a fair and efficient manner. This is
typically achieved using scheduling algorithms such as round-robin, priority-based, or
real-time scheduling.
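
As a sketch of the interleaving and mutual-exclusion principles above, here is a minimal example (not from the slides) using POSIX threads in C, compiled with -pthread. Two threads increment a shared counter; the mutex guarantees that only one of them executes the critical section at a time, so no update is lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter 1,000,000 times. The mutex
   ensures only one thread is in the critical section at any moment. */
static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* entry section    */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock); /* exit section     */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 2000000 */
    return 0;
}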



Advantages of Concurrency
● It allows running many applications at the same time.
● It allows for the better functioning of the operating system.
● It provides a better average response time.
● It allows resources that are not in use by one application to be used by other applications.

Problems in Concurrency

Race conditions occur when the output of a system depends on the order and timing of the events,
which leads to unpredictable behavior. Multiple processes or threads accessing shared resources
simultaneously can cause race conditions.
Deadlocks occur when two or more processes or threads are waiting for each other to release
resources, resulting in a circular wait. Deadlocks can occur when multiple processes or threads
compete for exclusive access to shared resources.



Starvation occurs when a process or thread cannot access the resource it needs to complete its
task because other processes or threads monopolize that resource. The affected process or
thread is then unable to make progress indefinitely.
Deadlock avoidance techniques can prevent deadlocks from occurring, but they can lead to
inefficient use of resources or even starvation of certain processes or threads.



What is Race Condition in OS?
A race condition is a problem that occurs in an operating system (OS) where two or more processes or
threads are executing concurrently. The outcome of their execution depends on the order in which they are
executed. In a race condition, the exact timing of events is unpredictable, and the outcome of the execution
may vary based on the timing. This can result in unexpected or incorrect behavior of the system.
Race condition in OS with example
If two threads are simultaneously accessing and changing the same shared resource, such as a variable or a
file, the final state of that resource depends on the order in which the threads execute. If the threads are not
correctly synchronized, they can overwrite each other's changes, causing incorrect results or even system
crashes.



Examples of Race Condition
We can understand race conditions by taking real-life examples.
Consider a situation where two employees are working on the same document in a company's shared folder.
Employee A opens the document and starts editing it, but before they can save their changes,
Employee B opens the same document and starts making changes.
When Employee A tries to save their changes, they cannot do so because the document has already been
updated by Employee B, and their changes are lost.
This is an example of a race condition because the two employees are "competing" to access and modify the
same resource (in this case, the document). Depending on the timing and order of the actions performed by
employees, the final state of the document can be unpredictable and inconsistent.
To avoid race conditions in this situation, the company may implement a version control system or a
check-out/check-in mechanism that allows employees to work on different copies of the document and
merge their changes later.
What is an example of a race condition in software?
A common race condition example is a banking application where multiple users attempt to withdraw funds from the
same account simultaneously. If not properly synchronized, the balance may be incorrect due to simultaneous
withdrawals.
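
The lost-update effect in the banking example can be reproduced with a short C sketch (hypothetical code, not a real banking API): two threads each perform an unsynchronized read-modify-write on a shared balance, and some withdrawals are lost.

#include <pthread.h>
#include <stdio.h>

static int balance = 200000; /* shared account balance */

/* Withdraw WITHOUT synchronization: the read-modify-write is not atomic,
   so two concurrent withdrawals can both read the same old balance and
   one of the two updates is silently lost. */
static void *withdraw(void *arg) {
    for (int i = 0; i < 100000; i++) {
        int tmp = balance; /* read                                     */
        tmp -= 1;          /* modify                                   */
        balance = tmp;     /* write: may overwrite the other's update  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* 200000 withdrawals of 1 should leave 0, but the printed balance
       is usually greater than 0 because updates were lost. */
    printf("final balance = %d\n", balance);
    return 0;
}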



What is Interprocess Communication?

Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process know that
some event has occurred or the transferring of data from one process to another.

A diagram illustrating interprocess communication between two processes appears on the slide (not reproduced here).



● Signal
Signals are useful in interprocess communication in a limited way. They are system
messages that are sent from one process to another. Normally, signals are not used to
transfer data but are used for remote commands between processes.
● Shared Memory
Shared memory is memory that can be simultaneously accessed by multiple
processes, allowing them to communicate with each other. All
POSIX systems, as well as Windows operating systems, support shared memory (a code sketch follows this list).
● Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient retrieves
them. Message queues are quite useful for interprocess communication and are used
by most operating systems.
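
As a sketch of the shared-memory mechanism, assuming a POSIX system that supports MAP_ANONYMOUS (e.g. Linux), a parent and a forked child can communicate through a shared mapping:

#define _DEFAULT_SOURCE     /* exposes MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create one int of memory shared between parent and child. */
    int *shared = mmap(NULL, sizeof(int),
                       PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *shared = 0;

    if (fork() == 0) {      /* child: writer */
        *shared = 42;       /* visible to the parent through the shared page */
        return 0;
    }
    wait(NULL);             /* parent: wait for the child to finish */
    printf("value written by child: %d\n", *shared); /* prints 42 */
    munmap(shared, sizeof(int));
    return 0;
}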



Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control
mechanism or handled by the communicating processes. Some of the methods to provide synchronization are as follows −
● Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes. The two
types of semaphores are binary semaphores and counting semaphores.
● Mutual Exclusion
Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is useful
for synchronization and also prevents race conditions.
● Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel
languages and collective routines impose barriers.
● Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if the lock is
available or not. This is known as busy waiting because the process is not doing any useful operation even
though it is active.
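
A spinlock of the kind just described can be sketched in a few lines of C11 (a minimal illustration, not a production lock):

#include <stdatomic.h>

/* A minimal spinlock: acquire() busy-waits (spins) until the flag is
   clear — exactly the "busy waiting" described above. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;  /* spin: the CPU stays busy doing no useful work */
}

void release(void) {
    atomic_flag_clear(&lock);
}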



Process Synchronization in OS

Process synchronization is the way by which processes that share the same memory space are
managed in an operating system.

It helps maintain the consistency of data by using variables or hardware so that only one process can make changes to the
shared memory at a time.

There are various solutions for the same such as semaphores, mutex locks, synchronization hardware, etc.



What is Process Synchronization in OS?

For example, consider a bank that stores the account balance of each customer in the same
database. Now suppose you initially have x rupees in your account. Now, you take out some
amount of money from your bank account, and at the same time, someone tries to look at the
amount of money stored in your account. As you are taking out some money from your account,
after the transaction, the total balance left will be lower than x. But, the transaction takes time, and
hence the person reads x as your account balance which leads to inconsistent data. If in some
way, we could make sure that only one process occurs at a time, we could ensure consistent data.
In the image on the slide (not reproduced here), if Process 1 and Process 2 happen at
the same time, user 2 will get the wrong account balance Y because Process 1 is still
being transacted while the balance is X.

Inconsistency of data can occur when various processes share a common resource in a
system, which is why there is a need for process synchronization in the operating
system.



How Process Synchronization in OS Works?

Let us take a look at why exactly we need process synchronization. For example, if process 1 is trying
to read the data present in a memory location while process 2 is trying to change the data present
at the same location, there is a high chance that the data read by process 1 will be incorrect.



Different elements/sections of a program:

● Entry Section: The entry section is the code in which a process requests permission to enter its
critical section.
● Critical Section: The critical section is the part of the program in which shared data is accessed;
the synchronization mechanism makes sure that only one process is modifying the shared data at a time.
● Exit Section: The exit section releases the critical section, handling the entry of other processes
to the shared data after one process finishes its execution.
● Remainder Section: The remaining part of the code, not categorized as above, is contained
in the remainder section.
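
A schematic sketch of the four sections, in the same wait()/signal() pseudocode style used later in these slides (an illustration, not compilable as-is):

do {
    wait(mutex);      // entry section: request permission to enter
    shared_data++;    // critical section: the only place shared data is modified
    signal(mutex);    // exit section: let the next waiting process in
    do_other_work();  // remainder section: all other, non-shared work
} while (true);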



Race Condition

When more than one process is running the same code or modifying the same memory or other
shared data, there is a risk that the resulting value of the shared data may be incorrect, because all
processes try to access and modify this shared resource concurrently.
The processes effectively race against each other, and since many processes use the same data, the
results may depend on the order of their execution.
This condition is called a race condition. It is mostly a situation that arises within the critical section.

In the critical section, a race condition occurs when the end result of multiple thread executions varies
depending on the sequence in which the threads execute.

But how to avoid this race condition? There is a simple solution:


● by treating the critical section as a section that can be accessed by only a single process at a time.
This kind of section is called an atomic section.



What is the Critical Section Problem?

Why do we need to have a critical section? What problems occur if we remove it?

A part of code that can be accessed by only a single process at any moment is known as a critical section.
This means that when many processes want to access and change the same shared data, only one process
is allowed to make changes at any given moment. The other processes have to wait until the data is free to
be used.

The wait() function mainly handles the entry to the critical section, while the signal() function handles
the exit from the critical section. If we remove the protection of the critical section, we cannot guarantee
the consistency of the end result after all the processes finish executing concurrently.

We'll look at some solutions to Critical Section Problem but before we move on to that, let us take a look
at what conditions are necessary for a solution to Critical Section Problem.



Requirements of Synchronization (Synchronization Mechanism)
The following three requirements must be met by a solution to the critical section problem:

● Mutual exclusion: If a process is running in the critical section, no other process should be
allowed to run in that section at that time.

● Remember that mutual exclusion is essential for maintaining consistency and avoiding conflicts in
concurrent systems
● Progress: If no process is in the critical section and other processes are waiting outside it
to execute, then one of the waiting processes must be permitted to enter the critical
section. Only processes that are not executing in their remainder sections take part in
deciding which process will enter next.
● No starvation: Starvation means a process keeps waiting forever to access the critical section but
never gets a chance. No starvation is also known as Bounded Waiting.
○ A process should not wait forever to enter inside the critical section.
○ When a process submits a request to access its critical section, there should be a limit or
bound, which is the number of other processes that are allowed to access the critical section
before it.
○ After this bound is reached, this process should be allowed to access the critical section.



Example: (a worked example is shown on the slide; figure not reproduced here)


How can race conditions be avoided?

Answer:
Race conditions can be avoided by implementing mutual exclusion techniques, such
as locks, semaphores, or monitors. These techniques ensure that only one process or
thread can access a shared resource at a time, preventing simultaneous conflicting
accesses and maintaining data integrity.



What is Semaphore?

Concurrent processes are processes that are executed simultaneously or in parallel, and they may
or may not depend on other processes.
Process synchronization can be defined as the coordination between two or more processes that have
access to common resources such as a shared section of code, shared data, or other resources.
For example: some resource may be shared by 3 different processes, and no process may change
the resource at certain times, since that might ruin the results of the other processes sharing the
same resource.
Process synchronization is therefore required for concurrent processes.
For any number of processes that are executing simultaneously, let's say all of them need to access
a section of the code.
This section is called the Critical Section.
We have 2 concurrent processes, and since we are talking about process synchronization,
let's say they share a variable "shared" which has a value of 5.
What is our goal here? We want to achieve mutual exclusion, meaning that we want to prevent
simultaneous access to a shared resource, the resource here being the variable "shared" with
value 5.



int shared = 5;

Process 1:
int x = shared; // copy the value of the shared variable into x
x++;            // increment the local copy (x becomes 6)
sleep(1);       // process 1 is suspended here
shared = x;     // write the local copy back

Process 2:
int y = shared; // also reads 5: process 1 has not written back yet
y--;            // decrement the local copy (y becomes 4)
sleep(1);       // process 2 is suspended here
shared = y;     // overwrites process 1's update: shared ends up as 4
We start with the execution of process 1, in which we declare a variable x that initially holds the value of the shared variable, which is 5.
The value of x is then incremented to 6, and after that the process goes into the sleep state. Since the processing is concurrent, the
CPU does not wait; it starts processing process 2. The integer y takes the value of the shared variable, which is still unchanged at 5.
Then we decrement the value of y, and process 2 goes into the sleep state. We move back to process 1, and the value of the shared variable becomes 6.
Once that process is complete, process 2 resumes and the value of the shared variable is changed to 4.
One would think that if we increment and then decrement a number, its value should be unchanged, and that is exactly what was happening in the
two processes; however, in this case the value of the "shared" variable is 4, and this is undesired.



For example: suppose we track 5 instances of a resource with a counter, and one process uses an
instance, decrementing the counter by 1, just as in our example. If another process concurrently
releases an instance it had taken earlier, a similar interleaving can occur, and the counter may end
up at 4 when it should have been 5.

This is called a race condition, and because of it, problems such as deadlock may occur. Hence
we need proper synchronization between processes, and to achieve it we use a signaling integer
variable called a semaphore.

So to formally define Semaphore we can say that it is an integer variable which is used in a mutually
exclusive manner by concurrent processes, to achieve synchronization.

Since Semaphores are integer variables, their value acts as a signal, which allows or does not allow a
process to access the critical section of code or certain other resources.



Types of Semaphores
There are mainly two types of Semaphores, or two types of signaling integer variables:
1. Binary Semaphores (Mutex Locks):
In this type of semaphore, the integer value of the semaphore can only be 0 or 1. If the
value of the semaphore is 1, a process may proceed to the critical section (the
common section that the processes need to access). If the value of the binary semaphore is
0, the process cannot continue to the critical section of the code. When a process is using the
critical section, we change the semaphore value to 0, and when the critical section becomes
free again (so a process may be allowed to access it), we change the value of the semaphore to 1.
A binary semaphore is also called a mutex lock.



Binary Semaphores:

Initially, the value of the semaphore is 1. When process P1 enters the critical section, the value of the
semaphore becomes 0. If P2 wants to enter the critical section at this time, it cannot,
since the value of the semaphore is not greater than 0. It will have to wait until the semaphore value is
greater than 0, and this will happen only once P1 leaves the critical section and executes the signal
operation, which increments the value of the semaphore.

This is how mutual exclusion is achieved using binary semaphore i.e. both processes cannot access the
critical section at the same time.



struct Semaphore {
    int value; // 0 or 1 for a binary semaphore
    /* This queue contains the process control blocks (PCBs) of the
       processes that got blocked while performing the wait operation */
    Queue<process> processes;
};

wait(Semaphore mutex) {
    if (mutex.value == 1) {
        mutex.value = 0; // lock acquired: enter the critical section
    }
    else {
        // the process cannot access the critical section: add it to the waiting queue
        mutex.processes.push(P);
        sleep();
    }
}



signal(Semaphore mutex) {
    if (mutex.processes is empty) {
        mutex.value = 1; // nobody is waiting: release the lock
    }
    else {
        // select a process from the waiting queue that can next access the critical section
        process p = mutex.processes.pop();
        wakeup(p);
    }
}

In the code above we have implemented a binary semaphore, which provides mutual exclusion.



2. Counting Semaphores
Counting semaphores are signaling integers that can take on any integer value. Using these semaphores we
can coordinate access to resources, where the semaphore count is the number of resources available. If the
value of the semaphore is above 0, processes can access the critical section or the shared resources, and the
number of processes that may do so concurrently is the value of the semaphore. If the value is 0, there are
no resources available, or the critical section is already being accessed by the maximum number of processes
and cannot be accessed by more. Counting semaphores are generally used when the number of instances of
a resource is more than 1 and multiple processes can access the resource.
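
A counting-semaphore version of wait and signal can be sketched in the same pseudocode style as the binary semaphore above (an illustration, not from the slides). In this variant the value may go negative, and its magnitude then counts the blocked processes:

wait(Semaphore S) {
    S.value--;
    if (S.value < 0) {
        // no resource instance is free: block the calling process
        S.processes.push(P);
        sleep();
    }
}

signal(Semaphore S) {
    S.value++;
    if (S.value <= 0) {
        // at least one process was blocked: wake the next one
        process p = S.processes.pop();
        wakeup(p);
    }
}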



Solving Producer-Consumer with Semaphores
Now that we have understood the working of semaphores, we can take a look at the real life
application of semaphores in classic synchronization problems.

The producer consumer problem is one of the classic process synchronization problems.

Problem Statement

The problem statement says that we have a buffer of fixed size: the producer produces
items and places them in the buffer, while the consumer picks items from the buffer and
consumes them. Our task is to ensure that the producer and the consumer do not access the
buffer at the same time: the consumer must not remove an item at the same moment the
producer is placing one into the buffer. The critical section here is the buffer.



Solution
To solve this problem, we will use two counting semaphores, named "full" and "empty",
together with a binary semaphore "mutex" whose value is initially 1 and which guards access
to the buffer itself. The counting semaphore "full" keeps track of the slots in the buffer that
are used, i.e. of the items in the buffer, and the "empty" semaphore keeps track of the slots
in the buffer that are empty.
Initially, the value of the semaphore "full" is 0, since all slots in the buffer are unoccupied,
and the value of "empty" is n, where n is the size of the buffer, since all slots are initially empty.



For example, if the size of the buffer is 5, then initially full = 0, since all the slots in the buffer are unoccupied, and empty = 5.
The deduced solution for the producer section of the problem is:

do {
    // producer produces an item
    wait(empty);
    wait(mutex);
    // put the item into the buffer
    signal(mutex);
    signal(full);
} while (true);

In the above code, we call the wait operations on the empty and mutex semaphores when the producer produces an item.
Since an item is produced, it must be placed in the buffer, reducing the number of empty slots by 1; hence we call the wait
operation on the empty semaphore. We must also decrement the mutex so as to prevent the consumer from accessing
the buffer in the meantime.
After this, the producer has placed the item into the buffer, so we can increase the value of the "full" semaphore by 1
and also increment the mutex, as the producer has completed its task and the consumer will now be able to
access the buffer.



Solution to the consumer section of the problem:

do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consumer now consumes the item
} while (true);

The consumer needs to consume the items that are produced by the producer. So when the consumer
removes an item from the buffer to consume it, we need to decrement the "full" semaphore by 1,
since one slot will be emptied, and we also need to decrement the mutex so that the producer
does not access the buffer in the meantime.

Now that the consumer has consumed the item, we can increment the value of the empty semaphore
along with the mutex by 1.

Thus, we have solved the producer-consumer problem.
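
Putting the two halves together, here is a runnable sketch of this solution using POSIX threads and semaphores in C (assuming a POSIX system such as Linux, compiled with -pthread; the buffer layout and item counts are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                  /* buffer size */

int buffer[N];
int in = 0, out = 0;         /* next slot to fill / to empty */

sem_t empty_slots;           /* counts empty slots, starts at N  */
sem_t full_slots;            /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);       /* wait(empty)  */
        pthread_mutex_lock(&mutex);   /* wait(mutex)  */
        buffer[in] = item;            /* put the item into the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex); /* signal(mutex) */
        sem_post(&full_slots);        /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);        /* wait(full)   */
        pthread_mutex_lock(&mutex);   /* wait(mutex)  */
        int item = buffer[out];       /* remove the item from the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex); /* signal(mutex) */
        sem_post(&empty_slots);       /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);     /* all N slots empty initially */
    sem_init(&full_slots, 0, 0);      /* no items yet */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}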



Advantages of Semaphores
As we have read throughout the article, semaphores have proven to be extremely useful in process
synchronization. Here are the advantages summarized:

● They allow processes into the critical section one by one and provide strict mutual exclusion (in
the case of binary semaphores).
● No processor time is wasted on busy waiting: with blocking semaphores, a process that cannot
enter the critical section is put to sleep rather than repeatedly checking whether a condition is
fulfilled.
● The implementation/code of semaphores is written in the machine-independent code section
of the microkernel, and hence semaphores are machine independent.



Disadvantages of Semaphores
We have already discussed the advantages of the semaphores, however semaphores also have some disadvantages.
They are:

● Semaphores are slightly complicated and the implementation of the wait and signal operations should be done
in such a manner, that deadlocks are prevented.
● The usage of semaphores may cause priority inversion where the high priority processes might get access to
the critical section after the low priority processes.



What is Deadlock in OS?
All the processes in a system require resources such as the central processing
unit (CPU), file storage, input/output devices, etc. to execute. Once execution is
finished, a process releases the resources it was holding. However, when many
processes run on a system, they compete for the resources they require for
execution. This may give rise to a deadlock situation.

A deadlock is a situation in which more than one process is blocked because it is
holding a resource while also requiring some resource that is held by another
process. As a result, none of the processes gets executed.



Necessary Conditions for Deadlock
The four necessary conditions for a deadlock to arise are as follows.
● Mutual Exclusion: Only one process can use a resource at any given time, i.e. the
resources are non-sharable.
● Hold and wait: A process holds at least one resource while waiting to
acquire other resources held by some other process.
● No preemption: A resource can be released only voluntarily by the process holding
it, i.e. after that process has finished with it.
● Circular Wait: A set of processes wait for each other in a circular fashion. For
example, in a set of processes {P0, P1, P2, P3}, P0 waits on P1, P1 on P2, P2 on P3,
and P3 on P0. This creates a circular relation among the processes, and they have to
wait forever to be executed.
● Example (described in the text below; a code sketch follows the explanation):



All four conditions are satisfied in the figure on the slide (not reproduced here): there are two
processes and two resources. Process 1 holds "Resource 1" and needs "Resource 2", while
Process 2 holds "Resource 2" and requires "Resource 1". This creates a deadlock because
neither of the two processes can be executed. Since the resources are non-shareable, they can
only be used by one process at a time (mutual exclusion). Each process is holding a resource
and waiting for the other process to release the resource it requires. Neither process releases
its resource before finishing its execution, and this creates a circular wait. Therefore, all four
conditions are satisfied.
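
The same situation can be reproduced with two POSIX mutexes standing in for the two resources (an illustrative sketch, not from the slides): each thread holds one lock and waits for the other, satisfying all four conditions.

#include <pthread.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 plays Process 1: it holds resource 1, then waits for resource 2. */
void *process1(void *arg) {
    pthread_mutex_lock(&resource1);   /* hold ...     */
    pthread_mutex_lock(&resource2);   /* ... and wait */
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

/* Thread 2 plays Process 2: it holds resource 2, then waits for resource 1,
   closing the circular wait. */
void *process2(void *arg) {
    pthread_mutex_lock(&resource2);
    pthread_mutex_lock(&resource1);
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    /* With unlucky timing the program hangs here forever: each thread holds
       one mutex and waits for the other. Locking the mutexes in the same
       order in both threads breaks the circular wait and prevents this. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}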



Methods of Handling Deadlocks in Operating System
The first two methods are used to ensure the system never enters a deadlock.
Deadlock Prevention
This is done by restraining the ways a request can be made. Since deadlock occurs when
all the above four conditions are met, we try to prevent any one of them, thus preventing
a deadlock.
Deadlock Avoidance
When a process requests a resource, the deadlock avoidance algorithm examines the
resource-allocation state. If allocating that resource sends the system into an unsafe
state, the request is not granted.
Therefore, it requires additional information, such as how many resources of each type are
required by a process. If granting a request would send the system into an unsafe state, the
system takes a step back to avoid deadlock.



Deadlock Detection and Recovery
We let the system fall into a deadlock and if it happens, we detect it using a detection algorithm
and try to recover.
Some ways of recovery are as follows.
● Aborting all the deadlocked processes.
● Abort one process at a time until the system recovers from the deadlock.
● Resource Preemption: Resources are taken one by one from a process and assigned to
higher priority processes until the deadlock is resolved.

Deadlock Ignorance
In this method, the system assumes that deadlock never occurs. Since deadlock situations
are infrequent, some systems simply ignore them. Operating systems such as UNIX and
Windows follow this approach. If a deadlock does occur, we can reboot the system, and the
deadlock is resolved.



Difference between Deadlock and Starvation

● Deadlock: a situation in which more than one process is blocked because it holds a resource
while also requiring a resource acquired by some other process. Starvation: a situation in which
low-priority processes are postponed indefinitely because the resources they need are never
allocated to them.
● Deadlock: resources are blocked by a set of processes in a circular fashion. Starvation:
resources are continuously used by high-priority processes.
● Deadlock: prevented by negating any one of the necessary conditions for deadlock, or
recovered from using a recovery algorithm. Starvation: can be prevented by aging. (Aging is a
scheduling technique used to prevent resource starvation and maintain fair allocation of
resources among processes.)
● Deadlock: none of the processes gets executed. Starvation: higher-priority processes execute
while lower-priority processes are postponed.
● Deadlock: also called circular wait. Starvation: also called livelock.



Advantages of Deadlock Handling Methods

● No preemption is needed for deadlocks.


● It is a good method if the state of the resource can be saved and restored easily.
● It is good for activities that perform a single burst of activity.
● It does not need run-time computations because the problem is solved in system design.

Disadvantages of Deadlock Handling Methods

● The processes must know the maximum resources of each type required for their execution.
● Preemptions are frequently encountered.
● It delays the process initiation.
● There are inherent pre-emption losses.
● It does not support incremental request of resources.



Resource Allocation Graph

We use the resource-allocation graph for a pictographic depiction of a system's state. It contains
all the details about the processes that are holding or waiting for resources, as well as information
about all instances of all resources, whether available or in use by processes.

In the resource-allocation graph, a process is represented by a circle, whereas a resource is
represented by a rectangle. Let's take a closer look at the various types of vertices and edges.



(The next slides show figures of the vertex and edge types in a resource-allocation graph; not reproduced here.)
Example

Let's consider 3 processes P1, P2, and P3, and two types of resources R1 and R2, each with a single instance.

According to the graph on the slide, R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3 is waiting for both R1 and R2.

The graph is deadlock-free, since no cycle is formed in it.



Multiple Instances of Each Resource Type (Banker's Algorithm)

The wait-for graph scheme described above is not applicable to a resource-allocation system
having multiple instances of each resource type, so we now move to a deadlock detection
algorithm that is applicable to such systems.
This algorithm mainly uses several time-varying data structures that are similar to those used
in Banker's Algorithm, as follows:
1. Available
It is an array of length m. It represents the number of available resources of each
type.
2. Allocation
It is an n x m matrix which represents the number of resources of each type
currently allocated to each process.



3. Request
It is an n x m matrix used to indicate the request of each process; if Request[i][j] equals k, then process Pi is
requesting k more instances of resource type Rj.
The rows of Allocation and Request are treated as vectors and referred to as Allocation_i and Request_i. The
detection algorithm simply investigates every possible allocation sequence for the processes that remain to be
completed:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   Work = Available
   For i = 0, 1, ..., n-1: if Allocation_i != 0, then Finish[i] = false; otherwise Finish[i] = true.
2. Find an index i such that both:
   Finish[i] == false
   Request_i <= Work
   If no such i exists, go to step 4.
3. Perform the following:
   Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state.
   Moreover, if Finish[i] == false, then process Pi is deadlocked.

This algorithm may require on the order of m x n^2 operations to determine whether the system is in a deadlocked state.
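
The steps above translate directly into code. The following is a minimal C sketch (the process and resource-type counts and the array layout are illustrative assumptions):

#include <stdbool.h>

#define N 3   /* number of processes (illustrative)      */
#define M 2   /* number of resource types (illustrative) */

/* Returns true if the system described by available, allocation and
   request is deadlocked; deadlocked[i] marks each stuck process Pi. */
bool detect_deadlock(int available[M], int allocation[N][M],
                     int request[N][M], bool deadlocked[N])
{
    int work[M];
    bool finish[N];

    /* Step 1: Work = Available; Finish[i] = true only if Pi holds nothing. */
    for (int j = 0; j < M; j++) work[j] = available[j];
    for (int i = 0; i < N; i++) {
        finish[i] = true;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0) { finish[i] = false; break; }
    }

    /* Steps 2-3: repeatedly find an unfinished Pi with Request_i <= Work
       and let it run to completion, returning its resources to Work. */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }

    /* Step 4: every process still unfinished is deadlocked. */
    bool deadlock = false;
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];
        if (deadlocked[i]) deadlock = true;
    }
    return deadlock;
}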



Recovery From Deadlock
When a detection algorithm determines that a deadlock exists then there are several available
alternatives. There one possibility and that is to inform the operator about the deadlock and let him deal
with this problem manually.
Another possibility is to let the system recover from the deadlock automatically. These are two options
that are mainly used to break the deadlock.



Process Termination

In order to eliminate deadlock by aborting the process, we will use one of two methods given below. In
both methods, the system reclaims all resources that are allocated to the terminated processes.

● Aborting all deadlocked processes: Clearly, this method breaks the deadlock cycle, but it is
an expensive approach. It is not advisable except when the problem becomes very serious,
since all the partial work of the killed processes is lost and they must execute again from the
beginning.
● Aborting one process at a time until the deadlock cycle is eliminated: This method can be
used, but we have to decide which process to kill, and the method incurs considerable
overhead. Typically, the operating system first kills the process that has done the least
amount of work.



Resource Preemption
In order to eliminate the deadlock by using resource preemption, we successively preempt resources from some processes and give
them to other processes until the deadlock cycle is broken, giving the system a chance to recover from the deadlock. However, there
is a risk that the preempted processes go into starvation.

Deadlock Recovery through Rollback


In deadlock recovery through rollback, whenever a deadlock is detected, it is easy to see which resources are needed.
To recover from the deadlock, a process that owns a needed resource is rolled back to a point in time before it acquired
that resource, by restarting it from one of its earlier checkpoints.

Deadlock Recovery through Killing Processes


This method of deadlock recovery via killing processes is the most basic method of deadlock recovery.
Sometimes it is best to kill a process that can be restarted from the beginning with no ill effects.
For example, suppose we have opened an app, say Google Chrome, and while browsing, the browser crashes and stops
working, leaving the system at a point where we are unable to do anything. Restarting the app is the best way to resolve
the issue: simply open the task manager and terminate the process.
On a Windows operating system, press "Ctrl+Alt+Del" and then click the "Task Manager" button to open it. You can also
access the task manager by pressing the "Windows" key, typing "Task Manager", and pressing the "ENTER" key.



Dining Philosophers Problem

● Dining Philosophers Problem in OS is a classical synchronization problem in the operating system.


● With the presence of more than one process and limited resources in the system, synchronization problems
arise.
● If one resource is shared between more than one process at the same time, it can lead to data inconsistency.
● The dining philosophers problem is another classic synchronization problem, used to evaluate situations
where multiple resources must be allocated to multiple processes.



Dining Philosophers Problem in OS
Consider two processes P1 and P2 executing simultaneously while trying to access the same resource R1. This raises the question:
who will get the resource, and when? This problem is solved using process synchronization.
The act of synchronizing process execution such that no two processes simultaneously access the same shared data and resources is
referred to as process synchronization in operating systems.
It is particularly critical in a multi-process system where multiple processes are executing at the same time and trying to access the
very same shared resource or data.
Otherwise, this can lead to discrepancies in data sharing: modifications made by one process may or may not be reflected when the
other processes access the same shared data. The processes must be synchronized with one another to avoid data inconsistency.



The Dining Philosophers Problem is a typical example of the limitations of process synchronization in systems with multiple
processes and limited resources. According to the problem, assume there are K philosophers seated around a circular table, with one
chopstick placed between each pair of adjacent philosophers. A philosopher can eat only if he/she can pick up both chopsticks next
to him/her. An adjacent philosopher may take up one of the chopsticks, but not both.

For example, let's consider P0, P1, P2, P3, and P4 as the philosophers or processes, and C0, C1, C2, C3, and C4 as the 5 chopsticks
or resources between the philosophers. Now if P0 wants to eat, both resources/chopsticks C0 and C1 must be free, which would
leave P1 and P4 without a resource, so their processes would not execute. This indicates that there are limited resources (C0, C1, ...)
for multiple processes (P0, P1, ...), and this problem is known as the Dining Philosophers Problem.



The Solution of the Dining Philosophers Problem
The solution to this process synchronization problem is semaphores. A semaphore is an integer used in solving critical sections.
The critical section is a segment of the program that accesses shared variables or resources. In a critical section, an atomic action is
needed, which means that only a single process can run in that section at a time.
A semaphore has 2 atomic operations: wait() and signal(). The wait() operation decrements the value of its argument S if it is
positive; it is used to acquire the resource on entry (if S is zero or negative, the process must wait). The signal() operation increments
the value of its argument S; it is used to release the resource on exit, once the critical section has executed.
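
A classic semaphore-based sketch of a solution, in C with POSIX threads (an illustration; the asymmetric locking order that breaks the circular wait is one of several known strategies). The philosophers run forever, so this demo never terminates:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define K 5                  /* number of philosophers */

sem_t chopstick[K];          /* one binary semaphore per chopstick */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % K;
    /* Asymmetry breaks the circular wait: the last philosopher picks up
       the right chopstick first, so not all K can hold one chopstick
       each and wait forever for the next. */
    if (i == K - 1) { first = (i + 1) % K; second = i; }
    for (;;) {
        sem_wait(&chopstick[first]);   /* pick up first chopstick  */
        sem_wait(&chopstick[second]);  /* pick up second chopstick */
        printf("philosopher %d is eating\n", i);
        sleep(1);
        sem_post(&chopstick[second]);  /* put both chopsticks down */
        sem_post(&chopstick[first]);
        printf("philosopher %d is thinking\n", i);
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t t[K];
    int id[K];
    for (int i = 0; i < K; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < K; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < K; i++) pthread_join(t[i], NULL);
    return 0;
}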



Banker's Algorithm Examples (Deadlock Avoidance)

(Worked examples are shown on the slides; not reproduced here.)