OS Unit 3

The document discusses deadlock in computer processes, defining it as a situation where processes are unable to proceed because each is waiting for a resource held by another. It outlines the differences between deadlock and starvation, necessary conditions for deadlocks, and strategies for handling them, including prevention, avoidance, and detection. Additionally, it introduces the Banker's Algorithm as a method for deadlock avoidance, detailing its advantages, disadvantages, and operational requirements.


UNIT-3

Deadlock

Every process needs some resources to complete its execution, and resources are granted in a sequential order:

1. The process requests a resource.

2. The OS grants the resource if it is available; otherwise, the process waits.

3. The process uses the resource and releases it on completion.

A deadlock is a situation in which each process waits for a resource that is assigned to another process. None of the processes can make progress, since the resource each one needs is held by another process that is itself waiting for some resource to be released.

Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its execution. P3 then demands R1, which is held by P1, so P3 stops as well.

In this scenario, a cycle is formed among the three processes. None of the processes is progressing; they are all waiting. The computer becomes unresponsive since all the processes are blocked.
Difference between Starvation and Deadlock

Sr.  Deadlock                                        Starvation

1    Deadlock is a situation where every process     Starvation is a situation where a low-priority
     is blocked and none proceeds.                   process is blocked while high-priority
                                                     processes proceed.

2    Deadlock is an infinite wait.                   Starvation is a long wait, but not infinite.

3    Every deadlock is also a starvation.            Not every starvation is a deadlock.

4    The requested resource is blocked by            The requested resource is continuously used
     another process.                                by the higher-priority processes.

5    Deadlock happens when mutual exclusion,         Starvation occurs due to uncontrolled
     hold and wait, no preemption and circular       priority and resource management.
     wait occur simultaneously.

Necessary conditions for Deadlocks

1. Mutual Exclusion

A resource can be used only in a mutually exclusive manner; that is, two processes cannot use the same resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No preemption

A resource, once allocated to a process, cannot be taken away from it; the process holds the resource until it releases it voluntarily.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.

2
Strategies for handling Deadlock
1. Deadlock Ignorance

Deadlock ignorance is the most widely used approach of all the mechanisms, and it is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs and simply ignores it. This approach is best suited to a single-user system where the machine is used for browsing and other everyday tasks.

There is always a tradeoff between correctness and performance, and operating systems like Windows and Linux mainly focus on performance. Running a deadlock-handling mechanism all the time degrades performance; if a deadlock happens only 1 time out of 100, it is unnecessary to pay that cost continuously.

In these types of systems, the user simply restarts the computer in the case of a deadlock. Windows and Linux mainly use this approach.

2. Deadlock prevention

Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait all hold simultaneously. If we can ensure that at least one of the four conditions never holds, then deadlock can never occur in the system. The idea behind the approach is simple: defeat one of the four conditions. The real argument is over how to implement that in a practical system.

3. Deadlock avoidance

In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. Processing continues as long as the system remains in a safe state; if an allocation would move the system to an unsafe state, the OS backtracks that step. In simple words, the OS reviews each allocation so that the allocation cannot cause a deadlock in the system.

4. Deadlock detection and recovery

This approach lets processes fall into deadlock and then periodically checks whether a deadlock has occurred in the system. If one is found, the OS applies some recovery method to the system to get rid of the deadlock.

Deadlock Prevention
If we picture deadlock as a table standing on four legs, then the four legs are the four conditions which, when they occur simultaneously, cause the deadlock.

If we break one of the legs, the table will certainly fall. The same is true of deadlock: if we can violate one of the four necessary conditions and prevent them from occurring together, we can prevent the deadlock.

Let's see how we can prevent each of the conditions.

1. Mutual Exclusion

Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever have to wait for it.

So if we can stop resources from behaving in a mutually exclusive manner, deadlock can be prevented.

Spooling

For a device like a printer, spooling can work. A memory area associated with the printer stores jobs from each process. The printer then collects all the jobs and prints each one in FCFS order. With this mechanism, a process doesn't have to wait for the printer; it can continue with whatever it was doing and collect the output once it is produced.

Although spooling can be an effective way to violate mutual exclusion, it suffers from two kinds of problems.

1. This cannot be applied to every resource.


2. After some point of time, there may arise a race condition between the processes to get
space in that spool.

We cannot force a resource to be used by more than one process at the same time, since that would not be fair and could cause serious performance problems. Therefore, mutual exclusion cannot practically be violated.

2. Hold and Wait

The hold and wait condition arises when a process holds one resource while waiting for another to complete its task. Deadlock occurs because several processes may each hold one resource and wait for another in a cyclic order.

To prevent this, we need a mechanism by which a process either doesn't hold any resource or doesn't wait. That means a process must be assigned all the necessary resources before its execution starts, and must not wait for any resource once execution has begun.

!(Hold and Wait) = !Hold or !Wait (the negation of hold-and-wait is: either you don't hold, or you don't wait)

This could be implemented if every process declared all its resources up front. That sounds simple, but it can't be done in a real computer system, because a process can't determine its necessary resources in advance.

A process is a set of instructions executed by the CPU, and each instruction may demand multiple resources at multiple times. The OS cannot fix the need in advance.

The problem with the approach is:

1. It is practically not possible.

2. The possibility of starvation increases, because some process may hold a resource for a very long time.

3. No Preemption

Deadlock arises partly because a resource, once allocated, cannot be taken back. If we could take resources away from the processes involved in a deadlock, we could prevent the deadlock.

This is not a good approach in general, because if we take away a resource that a process is using, all the work it has done so far may become inconsistent.

Consider a printer being used by some process. If we take the printer away from that process and assign it to another, the output printed so far becomes inconsistent and ineffective; moreover, the process cannot resume printing from where it left off, which causes performance inefficiency.

4. Circular Wait

To violate circular wait, we can assign a priority number to each resource and require that a process never request a resource with a lower priority number than one it already holds. Since requests are always made in increasing order, no cycle of waiting processes can ever form.

Among all the methods, violating circular wait is the only approach that can be implemented practically.
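The resource-ordering idea can be sketched with threads and locks. This is a minimal illustration, not from the notes; the resource numbers and process names are invented:

```python
import threading

# Hypothetical resources, each tagged with a fixed priority number.
# Every process must acquire locks in increasing priority order,
# so a cycle of waits can never form.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire the requested resources in ascending priority order."""
    for rid in sorted(needed):
        resources[rid].acquire()

def release_all(needed):
    for rid in sorted(needed, reverse=True):
        resources[rid].release()

result = []

def worker(name, needed):
    acquire_in_order(needed)
    result.append(name)          # simulated critical-section work
    release_all(needed)

# Two processes that would risk deadlock if each locked in its own order;
# with the global ordering, both always complete.
t1 = threading.Thread(target=worker, args=("P1", [1, 2]))
t2 = threading.Thread(target=worker, args=("P2", [2, 1]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(result))   # ['P1', 'P2'] -> both processes finished
```

Note that P2 asks for [2, 1], but `acquire_in_order` sorts the request, so both threads lock resource 1 before resource 2.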

Deadlock avoidance

In deadlock avoidance, a request for a resource is granted only if the resulting state of the system does not lead to deadlock. The state of the system is continuously checked for safe and unsafe states.

In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to complete its execution.

The simplest and most useful approach states that the process should declare the maximum number
of resources of each type it may ever need. The Deadlock avoidance algorithm examines the
resource allocations so that there can never be a circular wait condition.

Safe and Unsafe States

The resource allocation state of a system can be defined by the instances of available and allocated
resources, and the maximum instance of the resources demanded by the processes.

A state of a system recorded at some random time is shown below.

Resources Assigned
Process Type 1 Type 2 Type 3 Type 4

A 3 0 2 2

B 0 0 1 1

C 1 1 1 0

D 2 1 4 0

Resources still needed


Process Type 1 Type 2 Type 3 Type 4

A 1 1 0 0

B 0 1 1 2

C 1 2 1 0

D 2 1 1 2

E = (7, 6, 8, 4)
P = (6, 2, 8, 3)
A = (1, 4, 0, 1)

The tables and the vectors E, P and A above describe the resource allocation state of a system with 4 processes and 4 types of resources. Table 1 shows the instances of each resource assigned to each process.

Table 2 shows the instances of each resource that each process still needs. Vector E represents the total instances of each resource in the system.

Vector P represents the instances of resources that have been assigned to processes, and vector A represents the number of resources that are not in use.
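As a quick check (a small illustrative script, not part of the original notes), P and A can be recomputed from Table 1 and the given vector E:

```python
# Allocation table from the text: one row per process A..D,
# one column per resource type 1..4.
allocation = [[3, 0, 2, 2],
              [0, 0, 1, 1],
              [1, 1, 1, 0],
              [2, 1, 4, 0]]
E = [7, 6, 8, 4]                      # total instances of each type (given)

# P is the column-wise sum of the allocation table;
# A is whatever is left unassigned.
P = [sum(col) for col in zip(*allocation)]
A = [e - p for e, p in zip(E, P)]
print(P)   # [6, 2, 8, 3]
print(A)   # [1, 4, 0, 1]
```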

A state of the system is called safe if the system can allocate all the resources requested by all the
processes without entering into deadlock.

If the system cannot fulfill the request of all processes then the state of the system is called unsafe.

The key to the deadlock avoidance approach is that a request for resources is approved only if the resulting state is also a safe state.

Banker’s Algorithm
The Banker's Algorithm is used to avoid deadlock and allocate resources safely to each process in the computer system. Before permitting an allocation, it tests whether the resulting state is safe, and in this way it helps the operating system share resources successfully among all the processes. The algorithm is so named because it mirrors how a bank decides whether a loan can be safely sanctioned without endangering its other commitments. In this section, we will study the Banker's Algorithm in detail and solve problems based on it. To understand it, we will first look at a real-world example.

Suppose the number of account holders in a particular bank is 'n', and the total money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts the loan amount from the total cash and then checks that the remaining cash is still enough to serve future requests. It takes these steps because if another person applies for a loan or withdraws some amount, the bank must still be able to manage and operate without any restriction on the functionality of the banking system.

The operating system works similarly. When a new process is created in a computer system, the process must provide information to the operating system such as which resources it will request, how many, and for how long. Based on these criteria, the operating system decides in which sequence processes should be executed or made to wait so that no deadlock occurs in the system. Therefore, it is also known as a deadlock avoidance or deadlock detection algorithm in the operating system.

Advantages
Following are the essential characteristics of the Banker's algorithm:

1. It tracks the various resources needed to meet the requirements of each process.

2. Each process provides information to the operating system about upcoming resource requests, the number of resources, and how long the resources will be held.

3. It helps the operating system manage and control process requests for each type of resource in the computer system.

4. The algorithm has a Max resource attribute that indicates the maximum number of resources each process can hold in the system.

Disadvantages
1. It requires a fixed number of processes; no additional process can be started in the system while the algorithm is running.

2. The algorithm no longer allows a process to change its maximum need while it is processing its tasks.

3. Each process has to know and state its maximum resource requirement in advance.

4. Every resource request must be granted within a finite time, but the algorithm itself places no practical bound on how long a process may wait.

When working with the Banker's Algorithm, the system needs to know three things:

1. How much of each resource each process could possibly request. This is denoted by the [MAX] request.

2. How much of each resource each process is currently holding. This is denoted by the [ALLOCATED] resource.

3. How much of each resource the system currently has available. This is denoted by the [AVAILABLE] resource.

The important data structures used in the Banker's Algorithm are as follows. Suppose n is the number of processes, and m is the number of resource types in the computer system.

1. Available: An array of length 'm' that records how many instances of each resource type are free. Available[j] = k means that 'k' instances of resource type R[j] are available in the system.

2. Max: An n x m matrix that indicates the maximum number of instances of each resource type R[j] that process P[i] may request.

3. Allocation: An n x m matrix that records the resources currently allocated to each process. Allocation[i][j] = k means that process P[i] is currently allocated k instances of resource type R[j].

4. Need: An n x m matrix representing the remaining resources each process requires. Need[i][j] = k means that process P[i] may require k more instances of resource type R[j] to complete its assigned work.
Need[i][j] = Max[i][j] - Allocation[i][j].

5. Finish: A vector of length n. It holds a Boolean value (true/false) for each process, indicating whether the process has been able to obtain its requested resources and release all of them after finishing its task.

The Banker's Algorithm is the combination of the safety algorithm and the resource request algorithm
to control the processes and avoid deadlock in a system:

Safety Algorithm

The safety algorithm checks whether the system is in a safe state, i.e., whether a safe sequence exists:

1. There are two vectors, Work of length m and Finish of length n.

Initialize: Work = Available
Finish[i] = false for i = 0, 1, 2, ..., n - 1.

2. Find a process i such that both conditions hold:

Need[i] <= Work
Finish[i] == false

If no such i exists, go to step 4.

3. Work = Work + Allocation[i] // process i finishes and releases its resources

Finish[i] = true

Go to step 2 to look for the next such process.

4. If Finish[i] == true for all i, the system is in a safe state.
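The four steps above can be sketched directly in code. This is an illustrative implementation; the tiny two-process state used at the end is made up for the demonstration:

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm: return (True, safe_sequence) or (False, [])."""
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # step 1: Work = Available
    finish = [False] * n            # Finish[i] = false
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            # step 2: find i with Need[i] <= Work and Finish[i] == false
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # step 3: process i runs to completion, releasing everything
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, []        # step 4 fails: some Finish[i] stays false
    return True, sequence           # all Finish[i] true: the state is safe

# A tiny two-process, two-resource-type state (made-up numbers):
print(is_safe([1, 1], [[2, 1], [1, 2]], [[1, 0], [0, 1]]))
# -> (True, [0, 1])
```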

Resource Request Algorithm


The resource request algorithm checks how the system will behave when a process makes a resource request.

Let Request[i] be the request array for process P[i]. Request[i][j] = k means that process P[i] wants k instances of resource type R[j].

1. If the number of requested instances of each type is no more than the process's remaining need, go to step 2; otherwise process P[i] has exceeded its maximum claim for the resource, which is an error:

If Request[i] <= Need[i]
Go to step 2;

2. If the number of requested instances of each type is no more than what is available, go to step 3:

If Request[i] <= Available
Else process P[i] must wait, since the resources are not available.

3. Pretend to allocate the requested resources to the process by changing the state:

Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]

If the resulting resource-allocation state is safe, the resources are allocated to process P[i]. If the new state is unsafe, P[i] must wait for Request[i], and the old resource-allocation state is restored.
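The three steps, including the roll-back on an unsafe state, can be sketched as below. This is an illustrative version with made-up demo numbers, not a definitive implementation:

```python
def try_request(pid, request, available, allocation, need):
    """Banker's resource-request check; keeps the new state only if safe."""
    m = len(available)
    # Step 1: the request must not exceed the process's declared need.
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    # Step 2: the resources must actually be free right now.
    if any(request[j] > available[j] for j in range(m)):
        return False                              # P[pid] must wait
    # Step 3: pretend to allocate, then test the safety of the new state.
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    if _is_safe(available, allocation, need):
        return True                               # grant the request
    # Unsafe: roll back to the old state and make P[pid] wait.
    for j in range(m):
        available[j] += request[j]
        allocation[pid][j] -= request[j]
        need[pid][j] += request[j]
    return False

def _is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    while True:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)

# Demo with a tiny made-up state: two processes, two resource types.
available = [2, 1]
allocation = [[1, 0], [0, 1]]
need = [[1, 1], [1, 0]]
granted = try_request(0, [1, 1], available, allocation, need)
print(granted, available)   # True [1, 0]
```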

Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances and C has 7 instances.

Process Allocation Max Available


A B C A B C A B C

P1 0 1 0 7 5 3 3 3 2

P2 2 0 0 3 2 2

P3 3 0 2 9 0 2

P4 2 1 1 2 2 2

P5 0 0 2 4 3 3

Answer the following questions using the banker's algorithm:

1. What is the content of the need matrix?

2. Determine whether the system is in a safe state or not.

3. If process P2 makes the resource request (1, 0, 2), can the system accept this request immediately?

Ans. 1: The content of the need matrix is as follows:

Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = (7, 4, 3)
Need for P2: (3, 2, 2) - (2, 0, 0) = (1, 2, 2)
Need for P3: (9, 0, 2) - (3, 0, 2) = (6, 0, 0)
Need for P4: (2, 2, 2) - (2, 1, 1) = (0, 1, 1)
Need for P5: (4, 3, 3) - (0, 0, 2) = (4, 3, 1)

Process Need
A B C

P1 7 4 3

P2 1 2 2

P3 6 0 0

P4 0 1 1

P5 4 3 1

Hence, we created the context of need matrix.

Ans. 2: Apply the Banker's Algorithm:

Available Resources of A, B and C are 3, 3, and 2.

Now we check, process by process, whether each one's remaining need can be satisfied from the available resources.

Step 1: For Process P1:

Need <= Available

7, 4, 3 <= 3, 3, 2 condition is false.

So, we examine another process, P2.

Step 2: For Process P2:

Need <= Available

1, 2, 2 <= 3, 3, 2 condition true

New available = available + Allocation

(3, 3, 2) + (2, 0, 0) => 5, 3, 2

Similarly, we examine another process P3.

Step 3: For Process P3:

P3 Need <= Available

6, 0, 0 <= 5, 3, 2 condition is false.

Similarly, we examine another process, P4.

Step 4: For Process P4:

P4 Need <= Available

0, 1, 1 <= 5, 3, 2 condition is true

New Available resource = Available + Allocation

5, 3, 2 + 2, 1, 1 => 7, 4, 3

Similarly, we examine another process P5.

Step 5: For Process P5:

P5 Need <= Available

4, 3, 1 <= 7, 4, 3 condition is true

New available resource = Available + Allocation

7, 4, 3 + 0, 0, 2 => 7, 4, 5

Now, we again examine each type of resource request for processes P1 and P3.

Step 6: For Process P1:

P1 Need <= Available

7, 4, 3 <= 7, 4, 5 condition is true

New Available Resource = Available + Allocation

7, 4, 5 + 0, 1, 0 => 7, 5, 5

So, we examine the remaining process, P3.

Step 7: For Process P3:

P3 Need <= Available

6, 0, 0 <= 7, 5, 5 condition is true

New Available Resource = Available + Allocation

7, 5, 5 + 3, 0, 2 => 10, 5, 7

Hence, the Banker's Algorithm finds that the system is in a safe state, with the safe sequence P2, P4, P5, P1, P3.
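The whole worked example can be replayed in a few lines of code. This is an illustrative script that scans the processes in P1..P5 order, pass after pass, exactly as the steps above do:

```python
# State from the example above: processes P1..P5, resource types A, B, C.
allocation = {"P1": (0, 1, 0), "P2": (2, 0, 0), "P3": (3, 0, 2),
              "P4": (2, 1, 1), "P5": (0, 0, 2)}
max_need   = {"P1": (7, 5, 3), "P2": (3, 2, 2), "P3": (9, 0, 2),
              "P4": (2, 2, 2), "P5": (4, 3, 3)}
work = [3, 3, 2]                                   # Available
need = {p: tuple(m - a for m, a in zip(max_need[p], allocation[p]))
        for p in allocation}

sequence, pending = [], list(allocation)           # scan P1..P5 repeatedly
progress = True
while pending and progress:
    progress = False
    for p in list(pending):
        if all(n <= w for n, w in zip(need[p], work)):
            # p can finish: reclaim its allocation and mark it done
            work = [w + a for w, a in zip(work, allocation[p])]
            sequence.append(p)
            pending.remove(p)
            progress = True

print(sequence)   # ['P2', 'P4', 'P5', 'P1', 'P3']
print(work)       # [10, 5, 7] -- every instance back in the pool
```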

Ans. 3: To grant the request (1, 0, 2) made by P2, we first check that Request <= Need, i.e. (1, 0, 2) <= (1, 2, 2), and that Request <= Available, i.e. (1, 0, 2) <= (3, 3, 2); both hold. We then pretend to allocate, making Available = (2, 3, 0), Allocation for P2 = (3, 0, 2) and Need for P2 = (0, 2, 0), and re-run the safety algorithm. The resulting state is still safe (the sequence P2, P4, P5, P1, P3 works), so the system can accept this request immediately.

Resource Allocation Graph
The resource allocation graph (RAG) is a pictorial representation of the state of a system. As its name suggests, the resource allocation graph holds complete information about all the processes that are holding or waiting for resources. It also contains information about all the instances of all the resources, whether they are available or in use by processes.

In Resource allocation graph, the process is represented by a Circle while the Resource is
represented by a rectangle. Let's see the types of vertices and edges in detail.

Vertices are mainly of two types, Resource and process. Each of them will be represented by a
different shape. Circle represents process while rectangle represents resource.

A resource can have more than one instance. Each instance will be represented by a dot inside the
rectangle.

Edges in a RAG are also of two types: one represents assignment, and the other represents a process waiting for a resource. The image above shows each of them. A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the resource and the head is attached to a process. A process is shown as waiting for a resource if the tail of the arrow is attached to the process while the head points towards the resource.

Example

Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, each having 1 instance.

According to the graph, R1 is being used by P1; P2 is holding R2 and waiting for R1; and P3 is waiting for both R1 and R2.

The graph is deadlock free since no cycle is being formed in the graph.

Deadlock Detection using RAG


If a cycle is formed in a resource allocation graph in which every resource has a single instance, then the system is deadlocked.

In the case of a resource allocation graph with multi-instance resource types, a cycle is a necessary condition for deadlock but not a sufficient one.

The following example contains three processes P1, P2, P3 and three resources R1, R2, R3, each with a single instance.

If we analyze the graph, we find that a cycle is formed; since every resource is single-instance, the system satisfies all four conditions of deadlock.
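Cycle detection in a single-instance RAG is just cycle detection in a directed graph. A sketch (the edge set below is assumed to mirror the three-process cycle described in the example, with R->P meaning "assigned to" and P->R meaning "waiting for"):

```python
# RAG edges: R1 assigned to P1, P1 waits for R2, and so on around the cycle.
edges = {
    "R1": ["P1"], "P1": ["R2"],
    "R2": ["P2"], "P2": ["R3"],
    "R3": ["P3"], "P3": ["R1"],
}

def has_cycle(graph):
    """Depth-first search with a recursion stack to detect a back edge."""
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:
                return True                  # back edge -> cycle found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(edges))   # True -> with single instances, deadlock
```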

Allocation Matrix

The allocation matrix can be formed from the resource allocation graph of a system. In the allocation matrix, an entry is made for each resource that is assigned. For example, in the following matrix, an entry is made in the row for P1 under R3, since R3 is assigned to P1.

Process R1 R2 R3

P1 0 0 1

P2 1 0 0

P3 0 1 0

Request Matrix

In request matrix, an entry will be made for each of the resource requested. As in the following
example, P1 needs R1 therefore an entry is being made in front of P1 and below R1.

Process R1 R2 R3

P1 1 0 0

P2 0 1 0

P3 0 0 1

Available = (0, 0, 0)

No resource is available in the system, and no process is going to release one. Each process needs at least one more resource to complete, so every process keeps holding what it has.

Since we cannot fulfill the demand of even one process with the available resources, the system is deadlocked, as we already determined when we detected the cycle in the graph.
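The same conclusion can be reached mechanically from the two matrices. This sketch repeatedly finishes any process whose request can be met and reclaims its allocation; whoever remains is deadlocked:

```python
# Allocation and Request matrices from the text; rows P1..P3, cols R1..R3.
allocation = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
request    = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
available  = [0, 0, 0]

work = list(available)
finished = [False] * 3
progress = True
while progress:
    progress = False
    for i in range(3):
        # A process can finish only if its outstanding request fits in work.
        if not finished[i] and all(r <= w for r, w in zip(request[i], work)):
            work = [w + a for w, a in zip(work, allocation[i])]
            finished[i] = True
            progress = True

deadlocked = [f"P{i + 1}" for i in range(3) if not finished[i]]
print(deadlocked)   # ['P1', 'P2', 'P3'] -> every process is deadlocked
```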

Deadlock Detection and Recovery


In this approach, the OS applies no mechanism to avoid or prevent deadlocks; it accepts that deadlock may well occur. To get rid of deadlocks, the OS periodically checks the system for deadlock. If it finds one, it recovers the system using some recovery technique.

The main task of the OS here is detecting deadlocks, which it can do with the help of the resource allocation graph.

With single-instance resource types, if a cycle forms in the graph then there is definitely a deadlock. With multi-instance resource types, on the other hand, detecting a cycle is not enough by itself; we have to apply the safety algorithm, converting the resource allocation graph into the allocation matrix and request matrix.

To recover the system from deadlock, the OS acts either on resources or on processes.

For Resource

Preempt the resource

We can take one of the resources away from its owner (a process) and give it to another process, with the expectation that the second process will complete its execution and release the resource soon. Choosing which resource to preempt, however, is difficult.

Rollback to a safe state

The system passes through various states before reaching the deadlock state. The operating system can roll back the system to a previous safe state. For this purpose, the OS needs to implement checkpointing at every state.

The moment we get into deadlock, we roll back all the allocations to return to the previous safe state.

For Process

Kill a process

Killing a process can solve the problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process that has done the least amount of work so far.

Kill all processes

This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all processes leads to inefficiency in the system, because every process must then execute again from the beginning.

Process Management and Synchronization
On the basis of synchronization, processes are categorized as one of the following two types:
• Independent Process: The execution of one process does not affect the execution of other
processes.
• Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises in the case of cooperative processes, because resources are shared between them.

Race Condition:

When more than one process executes the same code, or accesses the same memory or any shared variable, there is a possibility that the value of the shared variable ends up wrong; the processes effectively race, each claiming its own result is correct. This is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: the result of multiple threads executing in the critical section differs according to the order in which the threads run. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables also prevents race conditions.
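The lock-based fix can be shown with two threads sharing a counter. The unprotected `read, add, write` sequence can interleave and lose updates; holding a lock around it makes the increment effectively atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Without the lock, two threads could both read the same old
        # value and one update would be lost; the lock prevents that.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 200000 -- always correct with the lock held
```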

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their
remainder section can participate in deciding which will enter in the critical section next,
and the selection can not be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

Peterson’s Solution:

Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s solution, we have two shared variables:

• boolean flag[i]: initialized to FALSE; initially no one is interested in entering the critical section.
• int turn: indicates whose turn it is to enter the critical section.

Peterson’s Solution preserves all three conditions:


• Mutual Exclusion is assured as only one process can access the critical section at any time.
• Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
• Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s solution:


• It involves busy waiting.(In the Peterson’s solution, the code statement- “while(flag[j] &&
turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU
cycles that could be used to perform other tasks.)
• It is limited to 2 processes.
• Peterson’s solution cannot be relied upon on modern CPU architectures, which may reorder memory reads and writes.
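The entry and exit sections can be sketched with two Python threads. This is only a demonstration: CPython's interpreter lock makes the bytecode effectively sequentially consistent here, which is exactly the guarantee real reordering hardware does not give (hence the last drawback above). The iteration count and switch interval are arbitrary choices:

```python
import sys
import threading

sys.setswitchinterval(1e-4)        # force frequent thread switches

flag = [False, False]
turn = 0
shared = 0
N = 2000                           # increments per process

def process(i):
    global turn, shared
    j = 1 - i
    for _ in range(N):
        flag[i] = True             # entry section: declare interest
        turn = j                   # politely give the other side priority
        while flag[j] and turn == j:
            pass                   # busy wait (the drawback noted above)
        shared += 1                # critical section
        flag[i] = False            # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(shared)   # 4000 -- mutual exclusion preserved
```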

Semaphores:
A semaphore is a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. This differs from a mutex, which can be released only by the thread that acquired it.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait()
and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
• Binary Semaphores: They can take only the values 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, initialized to 1. A process must wait until the semaphore's value is 1; it then sets the value to 0 and enters its critical section. When it completes its critical section, it resets the value to 1 so that some other process can enter.
• Counting Semaphores: They can have any value and are not restricted over a certain
domain. They can be used to control access to a resource that has a limitation on the number
of simultaneous accesses. The semaphore can be initialized to the number of instances of the
resource. Whenever a process wants to use that resource, it checks if the number of
remaining instances is more than zero, i.e., the process has an instance available. Then, the
process can enter its critical section thereby decreasing the value of the counting semaphore
by 1. After the process is over with the use of the instance of the resource, it can leave the
critical section thereby adding 1 to the number of available instances of the resource.
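As a sketch of a counting semaphore guarding a resource with three instances (the resource itself is simulated with a short sleep, and the counters are illustrative bookkeeping):

```python
import threading
import time

pool = threading.Semaphore(3)  # three instances of the resource
active = 0                     # processes currently using an instance
peak = 0                       # highest concurrency observed
guard = threading.Lock()       # protects the two counters above

def use_resource():
    global active, peak
    pool.acquire()             # wait(): blocks when no instance is free
    with guard:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)           # use the resource instance
    with guard:
        active -= 1
    pool.release()             # signal(): return the instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, even with 10 competing threads
```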

Synchronization Hardware

Many systems provide hardware support for critical section code. The critical section problem
could be solved easily in a single-processor environment if we could prevent interrupts from
occurring while a shared variable or resource is being modified.

In this manner, we could be sure that the current sequence of instructions would be allowed to
execute in order without pre-emption. Unfortunately, this solution is not feasible in a
multiprocessor environment. Disabling interrupts on a multiprocessor can be time-consuming,
as the message must be passed to all the processors.

This message transmission lag delays the entry of threads into the critical section, and the system
efficiency decreases.
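One such hardware primitive is the atomic test-and-set instruction, which reads a lock flag and sets it in a single indivisible step. The model below is only a sketch: a Python `Lock` stands in for the hardware's atomicity, and the spinning `worker` loop is illustrative.

```python
import threading

_atomic = threading.Lock()  # stands in for the hardware's atomicity
lock_flag = False           # the lock word that TestAndSet operates on

def test_and_set():
    """Return the old value of lock_flag and set it to True, atomically."""
    global lock_flag
    with _atomic:
        old = lock_flag
        lock_flag = True
        return old

counter = 0

def worker():
    global counter, lock_flag
    for _ in range(1000):
        while test_and_set():   # spin until the old value was False
            pass
        counter += 1            # critical section
        lock_flag = False       # release the lock with a plain store

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```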

Mutex Locks

As the synchronization hardware solution is not easy to implement for everyone, a strict software
approach called Mutex Locks was introduced. In this approach, in the entry section of code, a
LOCK is acquired over the critical resources modified and used inside the critical section, and in
the exit section that LOCK is released.

As the resource is locked while a process executes its critical section hence no other process can
access it.
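A minimal sketch of this discipline, using Python's `threading.Lock` as the LOCK (the shared balance and the `deposit` routine are illustrative names):

```python
import threading

lock = threading.Lock()
balance = 0  # shared resource

def deposit(times):
    global balance
    for _ in range(times):
        lock.acquire()   # entry section: acquire the LOCK
        balance += 1     # critical section
        lock.release()   # exit section: release the LOCK

threads = [threading.Thread(target=deposit, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 4000: no update was lost
```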

Classical Problems of Synchronization


1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem

Bounded Buffer (Producer-Consumer) Problem

The producer–consumer problem is a classical synchronization problem. We can solve this
problem by using semaphores.
A semaphore S is an integer variable that can be accessed only through two standard
operations : wait() and signal().
The wait() operation reduces the value of semaphore by 1 and the signal() operation increases
its value by 1.
wait(S)
{
while(S<=0); // busy waiting
S--;
}

signal(S)
{
S++;
}
Semaphores are of two types:
1. Binary Semaphore – This is similar to mutex lock but not the same thing. It can have only
two values – 0 and 1. Its value is initialized to 1. It is used to implement the solution of
critical section problem with multiple processes.

2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.

Problem Statement – We have a buffer of fixed size. A producer can produce an item and
place it in the buffer. A consumer can pick items and consume them. We need to ensure
that while the producer is placing an item in the buffer, the consumer is not simultaneously
consuming an item. In this problem, the buffer is the critical section.
To solve this problem, we need two counting semaphores – Full and Empty. “Full” keeps
track of number of items in the buffer at any given time and “Empty” keeps track of number
of unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially

Solution for Producer –


do{

//produce an item

wait(empty);
wait(mutex);

//place in buffer

signal(mutex);
signal(full);

} while(true);
When the producer produces an item, the value of “empty” is reduced by 1 because one slot
will now be filled. The value of mutex is also reduced, to prevent the consumer from accessing
the buffer. The producer then places the item and increases the value of “full” by 1. The value
of mutex is increased by 1 as well, because the producer's task is complete and the consumer
may access the buffer.

Solution for Consumer –

do{

wait(full);
wait(mutex);

// remove item from buffer

signal(mutex);
signal(empty);

// consumes item

} while(true);
As the consumer removes an item from the buffer, the value of “full” is reduced by 1 and the
value of mutex is also reduced, so that the producer cannot access the buffer at this moment.
Once the consumer has consumed the item, it increases the value of “empty” by 1. The value
of mutex is also increased, so that the producer can access the buffer again.
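The producer and consumer pseudocode translates directly to Python's `threading.Semaphore`. This sketch uses a single producer and consumer with a buffer of five slots:

```python
import threading

N = 5                             # buffer size
buffer = []
mutex = threading.Semaphore(1)    # mutual exclusion on the buffer
empty = threading.Semaphore(N)    # counts free slots
full = threading.Semaphore(0)     # counts filled slots
consumed = []

def producer():
    for item in range(20):
        empty.acquire()           # wait(empty)
        mutex.acquire()           # wait(mutex)
        buffer.append(item)       # place in buffer
        mutex.release()           # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()            # wait(full)
        mutex.acquire()           # wait(mutex)
        consumed.append(buffer.pop(0))  # remove item from buffer
        mutex.release()           # signal(mutex)
        empty.release()           # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: all items arrive, in order
```

The semaphores guarantee that the consumer never runs ahead of the producer and that the buffer never holds more than N items.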

The Dining Philosopher Problem

The Dining Philosophers Problem states that K philosophers are seated around a circular table
with one chopstick between each pair of philosophers. A philosopher may eat only if he can
pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its
adjacent philosophers, but not by both at once.

Semaphore Solution to Dining Philosopher –
Each philosopher is represented by the following pseudocode:

process P[i]
while true do
{ THINK;
PICKUP(CHOPSTICK[i], CHOPSTICK[i+1 mod 5]);
EAT;
PUTDOWN(CHOPSTICK[i], CHOPSTICK[i+1 mod 5])
}
There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there
are two semaphores: Mutex and a semaphore array for the philosophers. Mutex is used such
that no two philosophers may access the pickup or putdown at the same time. The array is
used to control the behavior of each philosopher. But, semaphores can result in deadlock due
to programming errors.
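A runnable sketch of the idea: each chopstick is a lock, and picking chopsticks up in a fixed global order (lower index first) imposes a total order on the resources, breaking the circular wait that makes the naive pseudocode deadlock-prone.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    # Lower-numbered chopstick first: a total order on resources
    # prevents circular wait, so no deadlock can occur.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with chopsticks[first]:        # PICKUP first chopstick
            with chopsticks[second]:   # PICKUP second chopstick
                meals[i] += 1          # EAT
        # both chopsticks PUTDOWN here; THINK until next round

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher ate 100 times, no deadlock
```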

Readers-Writers Problem

Consider a situation where we have a file shared between many people.

• If one of the people tries editing the file, no other person should be reading or writing it at
the same time; otherwise the changes will not be visible to them.
• However, if some person is reading the file, then others may read it at the same time.
In OS terms, this situation is called the readers-writers problem.

Problem parameters:

• One set of data is shared among a number of processes


• Once a writer is ready, it performs its write. Only one writer may write at a time
• If a process is writing, no other process can read
• If at least one reader is reading, no other process can write
• Readers only read; they may not write

Solution when Reader has the Priority over Writer


Here priority means that no reader should wait if the shared file is currently open for reading.
Three variables are used: mutex, wrt, readcnt to implement solution

1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion
when readcnt is updated i.e. when any reader enters or exit from the critical section and
semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt tells the number of processes performing read in the critical
section, initially 0

Functions for semaphore :


– wait() : decrements the semaphore value.
– signal() : increments the semaphore value.

Writer process:

1. Writer requests the entry to critical section.


2. If allowed, i.e., wait(wrt) returns, it enters and performs the write. If not allowed, it
keeps on waiting.
3. It exits the critical section.

do {
// writer requests for critical section
wait(wrt);

// performs the write

// leaves the critical section


signal(wrt);

} while(true);

Reader process:

1. Reader requests the entry to critical section.


2. If allowed:
• it increments the count of number of readers inside the critical section. If this reader is
the first reader entering, it locks the wrt semaphore to restrict the entry of writers if any
reader is inside.
• It then signals mutex, as any other reader is allowed to enter while others are already
reading.
• After performing reading, it exits the critical section. When exiting, it checks if no more
reader is inside, it signals the semaphore “wrt” as now, writer can enter the critical
section.
3. If not allowed, it keeps on waiting.
do {

// Reader wants to enter the critical section


wait(mutex);

// The number of readers has now increased by 1


readcnt++;

// there is at least one reader in the critical section


// this ensures no writer can enter if there is even one reader
// thus we give preference to readers here
if (readcnt==1)
wait(wrt);

// other readers can enter while this current reader is inside


// the critical section
signal(mutex);

// current reader performs reading here


wait(mutex); // a reader wants to leave

readcnt--;

// that is, no reader is left in the critical section,


if (readcnt == 0)
signal(wrt); // writers can enter

signal(mutex); // reader leaves

} while(true);
Thus, the semaphore ‘wrt‘ is shared by readers and writers, with preference given to readers
when both are waiting. No reader waits simply because a writer has requested to enter the
critical section.
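The two processes above can be sketched with Python semaphores; the structure mirrors the pseudocode, with `shared` standing in for the file being read and written:

```python
import threading

mutex = threading.Semaphore(1)  # protects readcnt
wrt = threading.Semaphore(1)    # held by a writer, or by the first reader
readcnt = 0
shared = 0                      # the shared "file"
reads = []

def reader():
    global readcnt
    mutex.acquire()
    readcnt += 1
    if readcnt == 1:
        wrt.acquire()           # first reader locks writers out
    mutex.release()
    reads.append(shared)        # perform the read
    mutex.acquire()
    readcnt -= 1
    if readcnt == 0:
        wrt.release()           # last reader lets writers back in
    mutex.release()

def writer():
    global shared
    wrt.acquire()
    shared += 1                 # perform the write
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared, len(reads))  # all 5 writes applied, all 5 reads completed
```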

Monitors
The monitor is one of the ways to achieve Process synchronization. The monitor is supported
by programming languages to achieve mutual exclusion between processes. For example Java
Synchronized methods. Java provides wait() and notify() constructs.

1. It is the collection of condition variables and procedures combined together in a special
kind of module or a package.
2. The processes running outside the monitor can’t access the internal variable of the monitor
but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
Syntax:
monitor {
//shared variable declarations
data variables;
Procedure P1() { ... }
Procedure P2() { ... }
.
.
Procedure Pn() { ... }
}

Condition Variables:
Two different operations are performed on the condition variables of the monitor:
wait.
signal.
Say we have two condition variables:
condition x, y; // Declaring variables

Wait operation
x.wait(): A process performing the wait operation on a condition variable is suspended.
Suspended processes are placed in the blocked queue of that condition variable.
Note: Each condition variable has its own blocked queue.

Signal operation
x.signal(): When a process performs the signal operation on a condition variable, one of the
blocked processes is given a chance to resume.
If (x block queue empty)
// Ignore signal
else
// Resume a process from block queue.
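Python's `threading.Condition` can model a monitor's condition variable. The `BoundedCounter` class below is a hypothetical example: the lock gives one-thread-at-a-time entry, `wait()` suspends the caller into the blocked queue, and `notify()` plays the role of signal.

```python
import threading

class BoundedCounter:
    """Monitor-style object: one lock guards the state; a condition
    variable lets callers wait until the state permits them to proceed."""

    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)  # condition variable
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:                      # only one thread inside the monitor
            while self._value >= self._limit:
                self._not_full.wait()         # x.wait(): suspend in blocked queue
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1
            self._not_full.notify()           # x.signal(): resume one waiter

c = BoundedCounter(2)
c.increment()
c.increment()
threading.Timer(0.05, c.decrement).start()    # another thread frees a slot shortly
c.increment()                                 # blocks until that decrement runs
print(c._value)  # 2
```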
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error-prone than
techniques such as semaphores.

Disadvantages of Monitor:

Monitors have to be implemented as part of the programming language: the compiler must
generate code for them. This gives the compiler the additional burden of having to know what
operating system facilities are available to control access to critical sections in concurrent
processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada, and
Concurrent Euclid.

Difference between the Semaphore and Monitor

Various head-to-head comparisons between the semaphore and monitor are as follows:

• Definition: A semaphore is an integer variable that allows many processes in a parallel
system to manage access to a common resource, as in a multitasking OS. A monitor is a
synchronization construct that enables threads to have mutual exclusion and to wait for a
given condition to become true.

• Syntax:
Semaphore –
// Wait Operation
wait(Semaphore S) {
while (S <= 0);
S--;
}
// Signal Operation
signal(Semaphore S) {
S++;
}
Monitor –
monitor {
//shared variable declarations
data variables;
Procedure P1() { ... }
Procedure P2() { ... }
.
.
Procedure Pn() { ... }
}

• Basic: A semaphore is an integer variable; a monitor is an abstract data type.

• Access: When a process uses shared resources guarded by a semaphore, it calls the wait()
method on S, and when it releases them, it uses the signal() method on S. When a process
uses shared resources in a monitor, it has to access them via the monitor's procedures.

• Action: The semaphore's value shows the number of shared resources available in the
system. The monitor type includes shared variables as well as a set of procedures that
operate on them.

• Condition Variable: A semaphore has no condition variables; a monitor has condition
variables.

Inter Process Communication
In general, Inter Process Communication is a type of mechanism usually provided by the operating
system (or OS). The main aim or goal of this mechanism is to provide communications in between
several processes. In short, the intercommunication allows a process letting another process know
that some event has occurred.

Let us now look at the general definition of inter-process communication, which will explain the
same thing that we have discussed above.

Definition

"Inter-process communication is used for exchanging useful information between numerous


threads in one or more processes (or programs)."

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter process communication. Typically, this is provided by
interprocess communication control mechanisms, but sometimes it can also be controlled by
communication processes.

These are the following methods that used to provide the synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-

It is generally required that only one process thread can enter the critical section at a time. This
also helps in synchronization and creates a stable state to avoid the race condition.

Semaphore:-

Semaphore is a type of variable that usually controls the access to the shared resources by several
processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore
2. Counting Semaphore

Barrier:-

A barrier does not allow an individual process to proceed until all processes have reached it. It
is used by many parallel languages, and collective routines impose barriers.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in
a loop, repeatedly checking whether the lock is available. This is known as busy waiting because,
even though the process is active, it does not perform any useful work.

Approaches to Interprocess Communication

These are a few different approaches to inter-process communication:

1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a data channel that is unidirectional in nature: data in this type of channel can move in
only a single direction at a time. Still, one can use two channels of this type to send and receive
data between two processes. Typically, a pipe uses the standard methods for input and output.
Pipes are used in all types of POSIX systems and in different versions of Windows operating
systems as well.
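A minimal sketch with Python's `os.pipe`, which returns the read and write ends of one unidirectional channel (shown within a single process for brevity; in practice the two ends are typically shared with a child process):

```python
import os

r, w = os.pipe()          # read end, write end: data flows one way only
os.write(w, b"hello")     # the producer writes into the write end
os.close(w)               # closing signals end-of-data to the reader
data = os.read(r, 100)    # the consumer reads from the read end
os.close(r)
print(data)  # b'hello'
```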

Shared Memory:-

It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Therefore the shared memory is used by almost all POSIX and Windows operating systems as
well.
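Python exposes this mechanism through `multiprocessing.shared_memory`. The sketch below creates a named segment and attaches to it by name, just as a second process would (both handles live in one process here for brevity):

```python
from multiprocessing import shared_memory

# Create a named shared-memory segment.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"                 # one process writes into it

# Another process would attach using the same name.
peer = shared_memory.SharedMemory(name=shm.name)
data = bytes(peer.buf[:5])             # and reads the same bytes back
print(data)  # b'hello'

peer.close()
shm.close()
shm.unlink()                           # remove the segment when done
```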

Message Queue:-

In general, several different processes are allowed to read and write data to the message queue.
In the message queue, the messages are stored or stay in the queue unless their recipients retrieve
them. In short, we can also say that the message queue is very helpful in inter-process
communication and used by all operating systems.

To understand the concept of Message queue and Shared memory in more detail, let's take a look
at its diagram given below:
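The queue semantics can be sketched with Python's thread-safe `queue` module (shown between threads here for simplicity; `multiprocessing.Queue` offers the same put/get interface between separate processes):

```python
import queue
import threading

q = queue.Queue()  # messages stay in the queue until a recipient retrieves them

def sender():
    for i in range(3):
        q.put(f"message {i}")       # write a message into the queue

t = threading.Thread(target=sender)
t.start()
received = [q.get() for _ in range(3)]  # get() blocks until a message arrives
t.join()
print(received)  # ['message 0', 'message 1', 'message 2']
```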

Message Passing:-

It is a type of mechanism that allows processes to synchronize and communicate with each other.
By using message passing, processes can communicate with each other without resorting to
shared variables.

Usually, the inter-process communication mechanism provides two operations that are as follows:

o send (message)
o receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication:-

In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link can
exist.

Indirect Communication

Indirect communication can only exist or be established when processes share a common mailbox,
and each pair of these processes shares multiple communication links. These shared links can be
unidirectional or bi-directional.

FIFO:-

A FIFO (named pipe) is used for general communication between two unrelated processes. It
can also be considered full-duplex, meaning the first process can communicate with the second
process and vice versa.

Some other different approaches

o Socket:-

It acts as an endpoint for sending or receiving data in a network. It works both for data sent
between processes on the same computer and for data sent between different computers on the
same network. Hence, it is used by several types of operating systems.
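A sketch using `socket.socketpair`, which returns two already-connected endpoints (the same send/recv calls apply to sockets connected across a network):

```python
import socket

a, b = socket.socketpair()  # two connected endpoints
a.sendall(b"ping")          # one end sends
request = b.recv(4)         # the other receives
b.sendall(b"pong")          # and replies
reply = a.recv(4)
a.close()
b.close()
print(request, reply)  # b'ping' b'pong'
```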

o File:-

A file is a data record or a document stored on the disk that can be acquired on demand from the
file server. Another important point is that several processes can access the same file as required.

o Signal:-

As the name implies, signals are a minimal form of inter-process communication. Typically,
they are system messages sent from one process to another. They are not used to send data but
to send remote commands between processes.

