OS Unit - II

Operating System

CHAPTER 2
PERFORMANCE AND DEADLOCK

PERFORMANCE COMPARISON
The performance of an operating system is dependent on a variety of factors,
such as the hardware specifications of the computer, the design and
implementation of the operating system, the type and number of applications
running on it, and the workload and environment. These elements can have
varying degrees of impact on the performance of an operating system, depending on
the situation and goal. For instance, the CPU, memory, disk, and network can
influence the kernel, scheduler, file system, security features, user interface,
background processes, and network services. Moreover, user input, system load,
and external events can also affect its performance.
There are many scheduling algorithms, each with its own parameters. As a
result, selecting an algorithm can be difficult. The first problem is defining the
criteria to be used in selecting an algorithm. Criteria are often defined in terms of
CPU utilization, response time, or throughput. To select an algorithm, we must first
define the relative importance of these measures. Our criteria may include several
measures, such as:
• Maximizing CPU utilization under the constraint that the maximum response
time is 1 second
• Maximizing throughput such that turnaround time is (on average) linearly
proportional to total execution time
Once the selection criteria have been defined, we want to evaluate the
algorithms under consideration. Following are the various evaluation methods we
can use.

DETERMINISTIC MODELLING
In a deterministic model, running the model from the same initial conditions
always produces the same result: the model involves no randomness, and its
output is fully determined by its initial conditions and inputs. For a
well-defined linear model, a unique input produces a unique output, while a
non-linear model may produce multiple outputs. Such models can be described at
different stages of temporal variation, viz. time-independent, time-dependent,
and dynamic. A deterministic system assumes an exact relationship between
variables, and this relationship makes it possible to predict and observe how
one variable affects the others.

Page 46 of 159

This method takes a particular predetermined workload and defines the
performance of each algorithm for that workload. For example, assume that we
have the workload shown below. All five processes arrive at time 0, in the order
given, with the length of the CPU burst given in milliseconds:

    Process   Burst Time
    P1        10
    P2        29
    P3        3
    P4        7
    P5        12
Consider the FCFS, SJF, and RR (quantum = 10 milliseconds) scheduling
algorithms for this set of processes. Which algorithm would give the minimum
average waiting time?
For the FCFS algorithm, we would execute the processes as

    P1 (0-10), P2 (10-39), P3 (39-42), P4 (42-49), P5 (49-61)
The waiting time is 0 milliseconds for process P1, 10 milliseconds for process
P2, 39 milliseconds for process P3, 42 milliseconds for process P4, and 49
milliseconds for process P5. Thus, the average waiting time is (0 + 10 + 39 + 42 +
49)/5 = 28 milliseconds.
With non-preemptive SJF scheduling, we execute the processes as

    P3 (0-3), P4 (3-10), P1 (10-20), P5 (20-32), P2 (32-61)
The waiting time is 10 milliseconds for process P1, 32 milliseconds for
process P2, 0 milliseconds for process P3, 3 milliseconds for process P4, and 20
milliseconds for process P5. Thus, the average waiting time is (10 + 32 + 0 + 3 +
20)/5 = 13 milliseconds.
With the RR algorithm, we execute the processes as

    P1 (0-10), P2 (10-20), P3 (20-23), P4 (23-30), P5 (30-40), P2 (40-50), P5 (50-52), P2 (52-61)
The waiting time is 0 milliseconds for process P1, 32 milliseconds for process
P2, 20 milliseconds for process P3, 23 milliseconds for process P4, and 40
milliseconds for process P5. Thus, the average waiting time is (0 + 32 + 20 + 23 +
40)/5 = 23 milliseconds.
We see that the average waiting time obtained with the SJF policy is less
than half that obtained with FCFS scheduling; the RR algorithm gives us an
intermediate value.
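The three averages above can be reproduced with a short sketch. All processes arrive at time 0 as in the example; the helper names (fcfs_wait, sjf_wait, rr_wait) are ours, not from any library.

```python
bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}  # all arrive at t=0

def fcfs_wait(bursts):
    """FCFS: a process waits for the bursts of everything served before it."""
    wait, elapsed = {}, 0
    for p, b in bursts.items():      # dict preserves arrival order
        wait[p] = elapsed
        elapsed += b
    return wait

def sjf_wait(bursts):
    """Non-preemptive SJF: serve in order of increasing burst length."""
    ordered = dict(sorted(bursts.items(), key=lambda kv: kv[1]))
    return fcfs_wait(ordered)

def rr_wait(bursts, quantum=10):
    """Round robin: waiting time = completion time - burst (arrival is 0)."""
    remaining = dict(bursts)
    queue = list(bursts)
    t, wait = 0, {}
    while queue:
        p = queue.pop(0)
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            wait[p] = t - bursts[p]  # waiting = turnaround - burst
        else:
            queue.append(p)          # preempted: go to the back of the queue
    return wait

def avg(w):
    return sum(w.values()) / len(w)

print(avg(fcfs_wait(bursts)), avg(sjf_wait(bursts)), avg(rr_wait(bursts)))
# 28.0 13.0 23.0
```

The round-robin loop reproduces exactly the schedule P1, P2, P3, P4, P5, P2, P5, P2 shown above, because every preempted process rejoins the tail of the ready queue.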
Deterministic modelling is simple and fast. It gives us exact numbers,
allowing us to compare the algorithms. However, it requires exact numbers for
input. The main uses of deterministic modelling are in describing scheduling
algorithms and providing examples. In cases where we are running the same
program over and over again and can measure the program's processing
requirements exactly, we may be able to use deterministic modelling to select a


scheduling algorithm. Furthermore, over a set of examples, deterministic modelling
may indicate trends that can then be analysed and proved separately. For example,
it can be shown that, for the environment described (all processes and their times
available at time 0), the SJF policy will always result in a minimum waiting time.

QUEUING ANALYSIS
On many systems, the processes that are run vary from day to day, so there
is no static set of processes (or times) to use for deterministic modelling. These
distributions can be measured and then approximated or simply estimated. The
result is a mathematical formula describing the probability of a particular CPU
burst. Commonly, this distribution is exponential and is described by its mean.
Similarly, we can describe the distribution of times when processes arrive in the
system (the arrival-time distribution). From these two distributions, it is possible to
compute the average throughput, utilization, waiting time, and so on for most
algorithms.
The computer system is described as a network of servers. Each server has a
queue of waiting processes. The CPU is a server with its ready queue, as is the I/O
system with its device queues. Knowing arrival rates and service rates, we can
compute utilization, average queue length, average wait time, and so on. This area
of study is called queueing network analysis.
As an example, let n be the average queue length (excluding the process
being serviced), let W be the average waiting time in the queue, and let λ be the
average arrival rate for new processes in the queue (such as three processes per
second). During the time W that a process waits, λ × W new processes will arrive
in the queue. If the system is in a steady state, then the number of processes
leaving the queue must equal the number of processes that arrive. Thus,
n = λ × W
This equation, known as Little's formula, is particularly useful because it is valid
for any scheduling algorithm and arrival distribution.
We can use Little's formula to compute one of the three variables if we know
the other two. For example, if we know that 7 processes arrive every second (on
average) and that there are normally 14 processes in the queue, then we can
compute the average waiting time per process as 2 seconds.
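The rearrangement used in that example can be written as a one-line sketch (the function name is illustrative):

```python
def littles_wait(n, lam):
    """Little's formula n = lam * W, solved for the waiting time W."""
    return n / lam

# 7 processes arrive per second, 14 processes are in the queue on average:
print(littles_wait(n=14, lam=7))  # 2.0 seconds
```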
Queueing analysis can be useful in comparing scheduling algorithms, but it
also has limitations. At the moment, the classes of algorithms and distributions
that can be handled are fairly limited. The mathematics of complicated algorithms
and distributions can be difficult to work with. Thus, arrival and service
distributions are often defined in mathematically tractable but unrealistic ways. It
is also generally necessary to make several independent assumptions, which may
not be accurate. As a result of these difficulties, queueing models are often only
approximations of real systems, and the accuracy of the computed results may be
questionable.


SIMULATORS
The OS Simulator is designed to support two main aspects of a computer
system’s resource management: process management and memory management.
Once a compiled code is loaded into the CPU Simulator’s memory, its image is also
available to the OS Simulator. It is then possible to create multiple instances of the
program images as separate processes. The OS Simulator displays the running
processes, the ready processes, and the waiting processes. Each process is
assigned a separate Process Control Block (PCB) that contains information on the
process state. This information is displayed in a separate window. The main
memory display demonstrates the dynamic nature of page allocations according to
the currently selected placement policy. The OS maintains a separate page table
for each process, which can also be displayed. The simulator demonstrates how
data memory is relocated and how the page tables are maintained as pages are
moved in and out of main memory, illustrating virtual-memory activity.
The process scheduler includes various selectable scheduling policies that
include priority-based, pre-emptive, and round-robin scheduling with variable time
quanta. The OS can carry out context-switching which can be visually enhanced by
slowing down or suspending the progress at some key stage to enable the students
to study the states of CPU registers, stack, cache, pipeline, and PCB contents.
The simulator incorporates an input-output console device with a virtual
keyboard, used to display text and accept input data.
The OS simulator supports dynamic library simulation, backed by the
appropriate language constructs in the teaching language. The benefits of
sharing code between multiple processes are visually demonstrated. There is also a
facility to link static libraries demonstrating the differences between the two types
of libraries and their benefits and drawbacks.
The simulator allows manual allocation and de-allocation of resources to
processes. This facility is used to create and demonstrate
process deadlocks associated with resources and enables experimentation with
deadlock prevention, detection, and resolution.

STARVATION
Starvation is a resource-management problem in which a process does not get
the resources it needs because they are being used by other processes. This
problem occurs mainly in priority-based scheduling algorithms, in which
high-priority requests get processed first while the lowest-priority processes
may wait a very long time to be processed.


It is a problem when a low-priority process gets stuck for a long duration of
time because high-priority requests are being executed. A stream of high-priority
requests stops the low-priority process from obtaining the processor or other
resources. Starvation usually happens when a process is delayed for an indefinite
period. Resources the operating system needs in order to handle process requests
include:
• Memory
• CPU time
• Disk space
• Bandwidth of network
• I/O access to disk or network

What Causes Starvation in OS? Here are a few reasons why starvation in
OS occurs:
• A process with low priority might wait forever if processes with higher
priority use the processor constantly.
• Because the low-priority processes are kept waiting rather than holding
resources, no deadlock occurs, but there is a chance of starvation, as the
low-priority processes remain in a wait state.
• In this sense starvation is a side effect of a fail-safe method: keeping
low-priority processes waiting prevents deadlock, but it affects the system in
general.
• An important cause of starvation is that there are not enough resources to
provide for every process.
• If process selection is random, there is a possibility that a process might
have to wait for a long duration.
• Starvation can also occur when a resource is never granted to a process due to
faulty allocation of resources.

Various Methods to Handle Starvation in OS: Here are the ways in which a
starvation situation in the OS can be handled:
• The allocation of resources should be handled by an independent resource
manager to ensure an even distribution of resources.
• Random selection of processes should be avoided, since it can lead to
starvation.
• The age of each process should be taken into consideration during resource
allocation (aging), so that long-waiting processes are not starved.
• A scheduling algorithm with a priority queue can also be used to handle
starvation.


• If the random technique is to be used, use it together with a priority queue
to handle starvation.
• A multilevel feedback queue can also be used to avoid starvation in the
operating system.
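The aging idea from the list above can be sketched as follows. The class, the priority scale, and the aging_boost parameter are illustrative, not taken from any real scheduler: each time the dispatcher runs, every waiting process gains a little priority, so a long-waiting low-priority process eventually outranks newcomers.

```python
class Process:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority    # higher value = scheduled sooner

def pick_next(ready, aging_boost=1):
    """Age every waiting process, then dispatch the highest priority one."""
    for p in ready:
        p.priority += aging_boost   # aging: waiting raises priority
    best = max(ready, key=lambda p: p.priority)
    ready.remove(best)
    return best

ready = [Process("P1", priority=1), Process("P2", priority=10)]
order = [pick_next(ready).name for _ in range(2)]
print(order)  # ['P2', 'P1'] - P1 waits, but is not postponed forever
```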

Example of Starvation:

In the given example, process P2 has the highest priority and process P1 has
the lowest priority. In the diagram, it can be seen that n processes are ready
for execution. The process with the highest priority, P2, enters the CPU first,
and process P1 keeps waiting for its turn because all other processes have a
higher priority than P1. This situation, in which a process waits indefinitely,
is called starvation.

DEADLOCK
All the processes in a system require some resources, such as the central
processing unit (CPU), file storage, and input/output devices, in order to
execute. Once execution is finished, the process releases the resources it was
holding. However, when many processes run on a system, they also compete for the
resources they require for execution. This may cause a deadlock situation.
A deadlock is a situation in which more than one process is blocked
because it is holding a resource and also requires some resource that is acquired
by some other process. Therefore, none of the processes gets executed.
Example Of Deadlock


In the above figure, there are two processes and two resources. Process 1
holds "Resource 1" and needs "Resource 2" while Process 2 holds "Resource 2" and
requires "Resource 1". This creates a situation of deadlock because none of the two
processes can be executed. Since the resources are non-shareable, they can only be
used by one process at a time (Mutual Exclusion). Each process is holding a
resource and waiting for the other process to release the resource it requires.
Neither process releases its resource before finishing execution, and this
creates a circular wait. Therefore, all four conditions for deadlock are satisfied.

Necessary Conditions for Deadlock: The four necessary conditions for a
deadlock to arise are as follows.
• Mutual Exclusion: Only one process can use a resource at any given time i.e.
the resources are non-sharable.
• Hold and wait: A process is holding at least one resource at a time and is
waiting to acquire other resources held by some other process.
• No Preemption: A resource can be released only voluntarily by the process
holding it, i.e. after that process has finished executing.
• Circular Wait: A set of processes are circularly waiting for each other. For
example, suppose there is a set of processes {P0, P1, P2, P3} such that P0
depends on P1, P1 depends on P2, P2 depends on P3, and P3 depends on P0. This
creates a circular relation among all these processes, and they have to wait
forever to be executed.

Methods of Handling Deadlocks: The first two methods are used to ensure
the system never enters a deadlock.
Deadlock Prevention: This is done by restraining the ways a request can be
made. Since deadlock occurs when all the above four conditions are met, we try to
prevent any one of them, thus preventing a deadlock.
Deadlock Avoidance: When a process requests a resource, the deadlock-avoidance
algorithm examines the resource-allocation state. If allocating that resource
would send the system into an unsafe state, the request is not granted.
Therefore, this method requires additional information, such as how many
resources of each type each process may require. If granting a request would put
the system in an unsafe state, the system holds the request back to avoid
deadlock.


Deadlock Detection and Recovery: We let the system fall into a deadlock and
if it happens, we detect it using a detection algorithm and try to recover. Some
ways of recovery are as follows.
• Aborting all the deadlocked processes.
• Aborting one process at a time until the system recovers from the deadlock.
• Resource Preemption: Resources are taken one by one from a process and
assigned to higher priority processes until the deadlock is resolved.
Deadlock Ignorance: In this method, the system assumes that a deadlock never
occurs. Since deadlock situations are infrequent, some systems simply ignore
them. Operating systems such as UNIX and Windows follow this approach. However,
if a deadlock occurs, we can reboot the system, and the deadlock is resolved
automatically.

Difference between Starvation and Deadlocks


Deadlock:
• A deadlock is a situation in which more than one process is blocked because
each is holding a resource and also requires a resource acquired by some other
process.
• Resources are blocked by a set of processes in a circular fashion.
• It is prevented by negating any one of the necessary conditions for deadlock,
or handled by recovery using a recovery algorithm.
• In a deadlock, none of the processes gets executed.
• Deadlock is also called circular wait.

Starvation:
• Starvation is a situation in which low-priority processes are postponed
indefinitely because the resources they need are never allocated to them.
• Resources are continuously used by high-priority processes.
• It can be prevented by aging.
• In starvation, higher-priority processes execute while lower-priority
processes are postponed.
• Starvation is also called livelock.

RESOURCE ALLOCATION GRAPH


The Resource Allocation Graph (RAG) is a fundamental concept in the field of
Operating Systems (OS). One of the critical roles of the Resource Allocation
Graph is to identify potential deadlocks.
To describe deadlocks more precisely, a directed graph called the system
Resource Allocation Graph is used. This graph acts as a pictorial representation
of the state of the system. The Resource Allocation Graph consists of a set of
vertices V and a set of edges E. The graph contains
all the information related to the processes that are holding some resources and
also contains the information on the processes that are waiting for some more
resources in the system.


Also, this graph contains all the information that is related to all the
instances of the resources which means the information about available resources
and the resources which are being used by the process. In this graph, the circle is
used to represent the process, and the rectangle is used to represent the resource.

Components of Resource Allocation Graph: Given below are the components of RAG:
1. Vertices
2. Edges

Vertices: There are two kinds of vertices used in the resource allocation graph
and these are:
• Process Vertices
• Resource Vertices
Process Vertices: These vertices are used to represent processes. A circle is
used to draw a process vertex, with the name of the process written inside the
circle.
Resource Vertices: These vertices are used to represent resources. A rectangle
is used to draw a resource vertex, with dots inside the rectangle indicating the
number of instances of that resource.

In the system, there may exist several instances and according to them,
there are two types of resource vertices and these are single instances and multiple
instances.
Single Instance: In a single instance resource type, there is a single dot inside
the box. The single dot mainly indicates that there is one instance of the resource.
Multiple Instance: In multiple instance resource types, there are multiple dots
inside the box, and these Multiple dots indicate that there are multiple instances of
the resources.

Edges: In the Resource Allocation Graph, Edges are further categorized into two:


Assign Edges: Assign edges are used to represent the allocation of a resource
to a process. An assign edge is drawn as an arrow whose head points to the
process and whose tail starts at the instance of the resource.

In the above Figure, the resource is assigned to the process


Request Edges: A request edge signifies the waiting state of a process. As with
an assign edge, an arrow is used, but here the arrowhead points to the instance
of the resource and the tail starts at the process.

In the above figure, the process is requesting a resource

Single Instance RAG Example: Suppose there are Four Processes P1, P2,
P3, P4, and two resources R1 and R2, where P1 is holding R1 and P2 is holding R2,
P3 is waiting for R1 and R2 while P4 is waiting for resource R1.

In the above example, there is no circular dependency, so there is no chance of
deadlock occurring. For a single-instance resource type, a cycle in the graph is
both a necessary and a sufficient condition for deadlock.

Multiple Instance RAG Example: Suppose there are four processes P1, P2,
P3, P4 and there are two instances of resource R1 and two instances of resource
R2:


One instance of R2 is assigned to process P1 and the other instance of R2 is
assigned to process P4. Process P1 is waiting for resource R1. One instance of
R1 is assigned to process P2 while the other instance of R1 is assigned to
process P3. Process P3 is waiting for resource R2.

Advantages of RAG
• It is pretty effective in detecting deadlocks.
• The Banker's Algorithm makes extensive use of it.
• It's a graphical depiction of a system.
• A quick peek at the graph may sometimes tell us if the system is in a deadlock
or not.
• Understanding resource allocation using RAG takes less time.

Disadvantages of RAG
• RAG is practical only when there are few processes and resources.
• When dealing with a large number of resources or processes, it is preferable
to keep the data in a table rather than in a RAG.
• When there are a vast number of resources or processes, the graph becomes very
complex and challenging to interpret.

CONDITIONS FOR DEADLOCK


These four conditions must be met for a deadlock to happen in an operating
system.

1. Mutual Exclusion: Two or more processes must compete for the same
resources, and at least one resource must be usable by only one process at a
time. This means the resource is non-sharable. It could be a physical
resource like a printer or an abstract one like a lock on a shared data
structure.
Deadlocks never occur over shared resources like read-only files, but only
over resources requiring exclusive access, like tape drives and printers.


2. Hold and Wait: Hold and wait occurs when a process is holding one resource
while waiting to acquire another resource that it needs but that is currently
held by another process. Each process involved must hold at least one resource
while requesting more; a process that holds nothing simply waits without
blocking anyone else.
When this condition exists, a deadlock can arise because a process
simultaneously holds one or more resources while waiting for another resource.
In the below diagram, Process 1, currently holding Resource 1, is waiting for
the additional Resource 2.

3. No Preemption: A resource can be released only voluntarily by the process
holding it, once that process completes its task. If a process that holds some
resources requests another resource that cannot be allocated immediately, the
resources it already holds are not forcibly taken away.
One way to break this condition is to preempt: all resources the waiting
process currently holds are released and added to the list of available
resources, and the process is restarted only once it can regain its old
resources together with the new one it is requesting.
Whenever a requested resource becomes available, it is given to the
requesting process; if it is held by another process that is itself waiting for
a different resource, we release it and give it to the requesting process.

4. Circular Wait: A circular wait occurs when two or more processes each wait
for a resource held by the next process in the chain, forming a cycle and thus a
deadlock. There must be a cycle in the graph below: process 1 is holding
resource R1, which process 2 in the cycle is waiting for.
To understand this better, consider another example: Process A might be
holding Resource X while waiting for Resource Y, while Process B holds Resource
Y while waiting for Resource Z, and so on around the cycle.


Here,
Process P1 waits for a resource held by process P2.
Process P2 waits for a resource held by process P3.
Process P3 waits for a resource held by process P4.
Process P4 waits for a resource held by process P1.

DEADLOCK PREVENTION
Deadlock prevention means eliminating one of the necessary conditions of
deadlock, so that only safe requests are made to the OS and the possibility of
deadlock is excluded before a request is ever made. Here the OS does not need to
perform any extra work at request time, as it does in deadlock avoidance, where
an algorithm must check every request for the possibility of deadlock.

Deadlock Prevention Techniques: Deadlock prevention techniques refer to
violating any one of the four necessary conditions. We will see, one by one, how
we can violate each of them so that only safe requests are made, and which
approach best prevents deadlock.

Mutual Exclusion: Some resources are inherently unshareable, for example,
printers. For unshareable resources, processes require exclusive control of the
resources. Mutual exclusion means that unshareable resources cannot be accessed
by multiple processes simultaneously. Shared resources do not cause deadlock,
but some resources cannot be shared among processes, and that can lead to a
deadlock.
For example, a read operation on a file can be done simultaneously by multiple
processes, but a write operation cannot. A write operation requires exclusive
access, so some processes have to wait while another process is doing a write
operation.
It is not possible to eliminate mutual exclusion, as some resources are
inherently non-shareable. For Example, tape drive, as only one process can access
data from a Tape drive at a time. For other resources like printers, we can use a
technique called Spooling.
Spooling: It stands for Simultaneous Peripheral Operations On-Line. A printer
has associated memory which can be used as a spooler directory (memory that is
used to store the files that are to be printed next).
In spooling, when multiple processes request the printer, their jobs
(instructions of the processes that require printer access) are added to the queue in
the spooler directory. The printer is allocated to jobs on a first come first
serve (FCFS) basis. In this way, the process does not have to wait for the
printer and it
continues its work after adding its job to the queue. We can understand the
workings of the Spooler directory better with the diagram given below:

Hold and Wait: Hold and wait is a condition in which a process is holding one
resource while simultaneously waiting for another resource that is being held by
another process. The process cannot continue till it gets all the required resources.
In the diagram given below:

• Resource 1 is allocated to Process 2
• Resource 2 is allocated to Process 1
• Resource 3 is allocated to Process 1
• Process 1 is waiting for Resource 1 and holding Resource 2 and Resource 3
• Process 2 is waiting for Resource 2 and holding Resource 1
There are two ways to eliminate hold and wait:
1. By eliminating wait: The process specifies all the resources it requires in
advance, so that it does not have to wait for allocation after execution
starts. For example, Process1 declares in advance that it requires both
Resource1 and Resource2.
2. By eliminating hold: The process has to release all resources it is currently
holding before making a new request. For example, Process1 has to release
Resource2 and Resource3 before making a request for Resource1.
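The "eliminating wait" rule above — a process is granted everything it declared up front, or nothing at all — can be sketched as follows. The allocator function and resource names are hypothetical; real OS interfaces differ.

```python
free = {"R1", "R2", "R3"}   # resources currently unallocated

def request_all(needed):
    """Grant every requested resource atomically, or none at all."""
    if needed <= free:                # every requested resource is free
        free.difference_update(needed)
        return True
    return False                      # process waits while holding nothing

print(request_all({"R1", "R2"}))  # True  - both free, granted together
print(request_all({"R2", "R3"}))  # False - R2 is now held, nothing granted
```

Because a refused process holds nothing while it waits, the hold-and-wait condition can never arise under this rule.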

No Preemption: Preemption means temporarily interrupting an executing task and
later resuming it. For example, if process P1 is using a resource and a
high-priority process P2 requests the resource, process P1 is stopped and the
resource is allocated to P2. There are two ways to eliminate the no-preemption
condition:


1. If a process is holding some resources and waiting for other resources, then it
should release all previously held resources and put a new request for the
required resources again. The process can resume once it has all the required
resources.
For example: If a process has resources R1, R2, and R3 and it is waiting for
resource R4, then it has to release R1, R2, and R3 and put a new request for all
resources again.
2. If a process P1 is waiting for some resource, and another process P2 holds
that resource but is itself blocked waiting for some other resource, then the
resource is taken from P2 and allocated to P1. In this way process P2 is
preempted, and it must request its required resources again to resume its task.
The above approaches are possible for resources whose states are easily restored
and saved, such as memory and registers.

Circular Wait: In circular wait, two or more processes wait for resources in a
circular order. We can understand this better by the diagram given below:

To eliminate circular wait, we assign a number (priority) to each resource. A
process can request resources only in increasing order of this numbering.
In the example above, process P3 is requesting resource R1, whose number is
lower than that of resource R3, which is already allocated to P3. So this
request is invalid and cannot be made: P3 may not request the lower-numbered R1
while holding R3 (and R1 is in any case already allocated to process P1).
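This numbering rule can be sketched in a few lines. The resource numbers and the function name are illustrative:

```python
order = {"R1": 1, "R2": 2, "R3": 3}  # fixed global numbering of resources

def may_request(held, wanted):
    """Allow a request only if `wanted` outranks every resource held."""
    return all(order[wanted] > order[r] for r in held)

print(may_request(held={"R1"}, wanted="R3"))  # True:  3 > 1, allowed
print(may_request(held={"R3"}, wanted="R1"))  # False: would permit a cycle
```

Since every process acquires resources in strictly increasing order, no cycle of waits can form, so circular wait is impossible.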

DEADLOCK DETECTION
If a system does not employ either a deadlock-prevention or a deadlock
avoidance algorithm, then a deadlock situation may occur. In this environment, the
system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock

Single Instance of Each Resource Type- If all resources have only a single
instance, then we can define a deadlock-detection algorithm that uses a variant of
the resource-allocation graph, called a wait-for graph. We obtain this graph from
the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges.


Figure: (a) Resource-allocation graph (b) Corresponding wait-for graph
More precisely, an edge from Pi to Pj in a wait-for graph implies that
process Pi is waiting for process Pj to release a resource that Pi needs. An edge
Pi → Pj exists in a wait-for graph if and only if the corresponding resource-
allocation graph contains the two edges Pi → Rq and Rq → Pj for some resource Rq.
In the figure, we present a resource-allocation graph and the corresponding
wait-for graph.
As before, a deadlock exists in the system if and only if the wait-for graph
contains a cycle. To detect deadlocks, the system needs to maintain the wait-for
graph and periodically invoke an algorithm that searches for a cycle in the graph.
An algorithm to detect a cycle in a graph requires an order of n² operations, where
n is the number of vertices in the graph.
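A minimal cycle check over a wait-for graph can be written as a depth-first search. The sketch below uses a dictionary-based graph representation, which is an assumption for illustration rather than the book's notation.

```python
def has_cycle(wait_for):
    """Return True if the wait-for graph (process -> list of processes it
    waits for) contains a cycle, i.e. a deadlock exists."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge closes a cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# A hypothetical wait-for graph: P1 -> P2 -> P4 -> P1 is a cycle.
g = {"P1": ["P2"], "P2": ["P4"], "P3": ["P4"], "P4": ["P1"], "P5": []}
print(has_cycle(g))   # True
```

The system would run this check periodically over its maintained wait-for graph; any processes on a detected cycle are deadlocked.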

Several Instances of a Resource Type: The wait-for graph scheme does not
apply to a resource-allocation system with multiple instances of each resource type.
We turn now to a deadlock detection algorithm that is applicable to such a system.
The algorithm employs several time-varying data structures that are similar to
those used in the banker's algorithm:
• Available- A vector of length m indicating the number of available resources of
each type.
• Allocation- An n × m matrix defining the number of resources of each type
currently allocated to each process.
• Request- An n × m matrix indicating the current request of each process.
We consider a system with five processes P0 through P4 and three resource
types A, B, and C. Resource type A has seven instances, resource type B has two
instances, and resource type C has six instances. Suppose that, at time T0, we have
the following resource-allocation state:
          Allocation   Request   Available
          A B C        A B C     A B C
P0        0 1 0        0 0 0     0 0 0
P1        2 0 0        2 0 2
P2        3 0 3        0 0 0
P3        2 1 1        1 0 0
P4        0 0 2        0 0 2
Suppose now that process P2 makes one additional request for an instance of
type C. The Request matrix is modified as follows:
          Request
          A B C
P0        0 0 0
P1        2 0 2
P2        0 0 1
P3        1 0 0
P4        0 0 2
We claim that the system is now deadlocked. Although we can reclaim the
resources held by process P0, the number of available resources is not sufficient to
fulfil the requests of the other processes. Thus, a deadlock exists, consisting of
processes P1, P2, P3, and P4.
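This claim can be checked mechanically. The sketch below implements the multi-instance detection algorithm just described and runs it on the chapter's matrices; the function name `detect_deadlock` is an assumption for illustration.

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances.
    Returns the indices of deadlocked processes (empty list if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Pi's outstanding request can be met: optimistically assume
                # it runs to completion and returns everything it holds.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]

# The state after P2's extra request for one instance of C:
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,1], [1,0,0], [0,0,2]]
available  = [0,0,0]
print(detect_deadlock(available, allocation, request))   # [1, 2, 3, 4]
```

Only P0 can finish; even after it returns its resources, none of P1 through P4 can proceed, confirming the deadlock described in the text.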

Detection-Algorithm Usage: When should we invoke the detection algorithm?
The answer depends on two factors:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by a deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be
invoked frequently. Resources allocated to deadlocked processes will be idle until
the deadlock can be broken. In addition, the number of processes involved in the
deadlock cycle may grow.
Deadlocks occur only when some process makes a request that cannot be
granted immediately. This request may be the final request that completes a chain
of waiting processes. In the extreme, we can invoke the deadlock detection
algorithm every time a request for allocation cannot be granted immediately. In this
case, we can identify not only the deadlocked set of processes but also the specific
process that "caused" the deadlock. (In reality, each of the deadlocked processes is
a link in the cycle in the resource graph, so all of them, jointly, caused the
deadlock.) If there are many different resource types, one request may create many
cycles in the resource graph, each cycle completed by the most recent request and
"caused" by the one identifiable process.
If the deadlock-detection algorithm is invoked for every resource request,
this will incur a considerable overhead in computation time. A less expensive
alternative is simply to invoke the algorithm at less frequent intervals - for example,
once per hour or whenever CPU utilization drops below 40 percent. (A deadlock
eventually cripples system throughput and causes CPU utilization to drop.) If the
detection algorithm is invoked at arbitrary points in time, there may be many
cycles in the resource graph. In this case, we would generally not be able to tell
which of the many deadlocked processes “caused” the deadlock.

RECOVERY FROM DEADLOCK


To eliminate deadlocks by aborting a process, we use one of two methods. In


both methods, the system reclaims all resources allocated to the terminated
processes.
• Abort all deadlocked processes- This method clearly will break the deadlock
cycle, but at great expense; the deadlocked processes may have computed for a
long time, and the results of these partial computations must be discarded and
probably recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated- This
method incurs considerable overhead, since, after each process is aborted, a
deadlock-detection algorithm must be invoked to determine whether any
processes are still deadlocked.
Aborting a process may not be easy. If the process was updating a file,
terminating it would leave that file in an incorrect state. Similarly, if the process
was printing data on a printer, the system must reset the printer to the correct
state before printing the next job.
If the partial termination method is used, then we must determine which
deadlocked process (or processes) should be terminated. This determination is a
policy decision, similar to CPU-scheduling decisions. We should abort those
processes whose termination will incur the minimum cost. Many factors may affect
which process is chosen, including:
1. What the priority of the process is
2. How long the process has computed and how much longer the process will take
before completing its designated task
3. How many and what type of resources the process has used (for example,
whether the resources are simple to preempt)
4. How many more resources the process needs to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch
To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other processes until
the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to
be addressed:
1. Selecting a victim- Which resources and which processes are to be preempted?
As in process termination, we must determine the order of preemption to
minimize cost. Cost factors may include such parameters as the number of
resources a deadlocked process is holding and the amount of time the process
has thus far consumed during its execution.
2. Rollback- If we preempt a resource from a process, what should be done with
that process? It cannot continue with its normal execution; it is missing some
needed resources. We must roll back the process to some safe state and restart
it from that state. Since it is difficult to determine what a safe state is, the
simplest solution is a total rollback: Abort the process and then restart it.
Although it is more effective to roll back the process only as far as necessary to


break the deadlock, this method requires the system to keep more information
about the state of all running processes.
3. Starvation- How do we ensure that starvation will not occur? That is, how can
we guarantee that resources will not always be preempted from the same
process?
In a system where victim selection is based primarily on cost factors, the
same process may always be picked as a victim. As a result, this process never
completes its designated task, a starvation situation that must be dealt with in
any practical system. We must ensure that a process can be picked as a victim only
a (small) finite number of times. The most common solution is to include the
number of rollbacks in the cost factor.

Recovery Strategies/Methods:
• Process Termination: One way to recover from a deadlock is to terminate one
or more of the processes involved in the deadlock. By releasing the resources
held by these terminated processes, the remaining processes may be able to
continue executing. However, this approach should be used cautiously, as
terminating processes could lead to loss of data or incomplete transactions.
• Resource Preemption: Resources can be forcibly taken away from one or
more processes and allocated to the waiting processes. This approach can break
the circular wait condition and allow the system to proceed. However, resource
preemption can be complex and needs careful consideration to avoid disrupting
the execution of processes.
• Process Rollback: In situations where processes have checkpoints or states
saved at various intervals, a process can be rolled back to a previously saved
state. This means that the process will release all the resources acquired after
the saved state, which can then be allocated to other waiting processes.
Rollback, though, can be resource-intensive and may not be feasible for all types
of applications.
• Wait-Die and Wound-Wait Schemes: These timestamp-based schemes can also
be used for recovery. Older processes can preempt resources from younger
processes (Wound-Wait), or younger processes are aborted if they request
resources held by older processes (Wait-Die).
• Kill the Deadlock: In some cases, it might be possible to identify a specific
process that is causing the deadlock and terminate it. This is typically a last
resort option, as it directly terminates a process without any complex recovery
mechanisms.

BANKER’S ALGORITHM
The Banker's algorithm in the Operating System is used to avoid deadlock and
to allocate resources safely to each process in the system. As the name suggests,
it models the way a banking system checks whether a loan can be sanctioned to a
person or not.


The Banker's algorithm in OS is a combination of two main algorithms: the
safety algorithm (to check whether the system is in a safe state or not) and the
resource-request algorithm (to check how the system behaves when a process
makes a resource request).

Banker's Algorithm in OS: A process in an OS can request a resource, use the
resource, and release it. A deadlock situation arises when a set of processes is
blocked because each process holds a resource while simultaneously requiring
other resources that are being held by other processes.
So, to avoid such a situation of deadlock, we have the Bankers algorithm in
the Operating System.
The banker’s algorithm in OS is a deadlock avoidance algorithm and it is
also used to safely allocate the resources to each process within the system.
It was designed by Edsger Dijkstra. As the name suggests, the algorithm
works the way a bank decides whether a loan should be given to a person or not.
To understand this algorithm in detail, let us discuss a real-world
scenario. Suppose there are 'n' account holders in a particular bank and the total
sum of their money is 'x'.
Now, if a person applies for a loan (say, to buy a car), the bank first
subtracts the loan amount from the total cash it holds. It sanctions the loan only
if the remaining cash is still enough to meet the maximum possible demands of all
'n' account holders.
This is done keeping in mind the worst case, where all the account holders
come to withdraw their money from the bank at the same time. Thus, the bank
always remains in a safe state.
The Banker's algorithm in OS works similarly. Each process within the system
must provide all the necessary details to the operating system, such as upcoming
resource requests, the number of resources held, and delays.
Based on these details, the OS decides whether to execute the process or keep
it in the waiting state to avoid deadlocks in the system. Thus, the Banker's
algorithm is sometimes also known as the Deadlock Avoidance Algorithm.

Banker's Algorithm consists of two algorithms

1. Safety Algorithm: The safety algorithm checks whether the system is in a safe
state. If the system can run all processes to completion under some ordering of
resource allocations, then deadlock can be avoided. The steps in the safety
algorithm are:
Step 1: Let there be two vectors, Work of length m and Finish of length n. The
Work vector represents the currently available resources of each type
(R1, R2, ..., Rm), and the Finish vector records whether a particular process Pi
has finished execution; it holds a Boolean value of true/false.
Work = Available
Finish[i] = false, for i = 0, 1, 2, ..., n−1
Initially, all processes are assumed to be unfinished, so Finish[i] = false for
all i.
Step 2: Search for a process Pi for which
Need[i] <= Work
Finish[i] = false
If no such process exists, go to Step 4.
Step 3: For the process Pi found in Step 2, assume it runs to completion and
releases the resources allocated to it:
Work = Work + Allocation[i]
Finish[i] = true
Then return to Step 2 and repeat until no further process qualifies.
Step 4: If Finish[i] = true for all i = 0, 1, 2, ..., n−1, then the system is in a
safe state; otherwise the unfinished processes cannot complete their execution
from the available resources, and the system may enter a deadlock state. The
number of operations required in the worst case by the safety algorithm is of
order m × n².
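The four steps above can be condensed into a short function. This is a hedged sketch (the names `is_safe`, `avail`, `alloc`, and `need` are assumptions); it returns a safe sequence of process indices if one exists, or None for an unsafe state.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices,
    or None if the system is in an unsafe state."""
    n, m = len(allocation), len(available)
    work = list(available)              # Step 1: Work = Available
    finish = [False] * n                #         Finish[i] = false for all i
    sequence = []
    while len(sequence) < n:
        for i in range(n):              # Step 2: find Pi with Need[i] <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Pi finishes and releases its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None                 # Step 4: some Finish[i] is still false
    return sequence                     # Step 4: all processes can finish

# A classic five-process state (textbook figures, not this chapter's table):
avail = [3, 3, 2]
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need  = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe(avail, alloc, need))   # [1, 3, 0, 2, 4]
```

Many safe sequences may exist for a given state; this first-fit scan simply reports the first one it finds, which is enough to prove the state safe.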

2. Resource-Request Algorithm: The resource-request algorithm checks whether a
request can be safely granted to the process requesting resources.
If a process Pi requests c instances of resource type j, this can be
represented as Request[i][j] = c. The steps in the resource-request algorithm are:
Step 1: If Request[i] <= Need[i], the process is requesting no more resources
than it needs for execution; proceed to Step 2. If Request[i] > Need[i], an error
flag is raised stating that the process is requesting resources beyond its
declared maximum need.
Step 2: If Request[i] <= Available, the requested resources are available in the
system and can be allocated to the process; proceed to Step 3. Otherwise, Pi has
to wait until the available resources are greater than or equal to the requested
resources.
Step 3: The resources are provisionally allocated to process Pi, and Available,
Allocation, and Need are updated as given below:
Available = Available − Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] − Request[i]
If the above resource allocation is safe then the transaction is completed
and Pi is allocated its resources for execution. But, if the new state of the system is


unsafe then the process Pi has to wait for the resources to fulfil its request
demands, and the old allocation state is again restored.
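The grant-then-check-then-maybe-rollback logic can be sketched as follows. The function names and the internal `_is_safe` helper are assumptions for illustration; the numeric example at the end borrows figures from a classic textbook state, not this chapter's table.

```python
def _is_safe(available, allocation, need):
    """Minimal safety check: True iff a safe sequence exists."""
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = changed = True
    return all(finish)

def request_resources(i, req, available, allocation, need):
    """Resource-request algorithm for process Pi: grant only if the request
    is within Need, within Available, and leaves the system in a safe state."""
    m = len(available)
    if any(req[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    if any(req[j] > available[j] for j in range(m)):
        return False                      # Pi must wait
    for j in range(m):                    # pretend to allocate
        available[j] -= req[j]
        allocation[i][j] += req[j]
        need[i][j] -= req[j]
    if _is_safe(available, allocation, need):
        return True                       # safe: the allocation stands
    for j in range(m):                    # unsafe: restore the old state
        available[j] += req[j]
        allocation[i][j] -= req[j]
        need[i][j] += req[j]
    return False

# P1 requesting (1, 0, 2) in this classic five-process state is granted:
avail = [3, 3, 2]
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need  = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(request_resources(1, [1, 0, 2], avail, alloc, need))   # True
```

If the pretended allocation turns out unsafe, the loop at the end undoes every update, which is exactly the "old allocation state is again restored" step in the text.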

Example of Banker's Algorithm: In this example, we have a process table
with several processes. It contains an Allocation field (showing the number of
resources of types A, B, and C allocated to each process), a Max field (showing
the maximum number of resources of types A, B, and C that each process may
request), and an Available field (showing the currently available resources of
each type).
Processes   Allocation   Max     Available
            A B C        A B C   A B C
P0          1 1 2        5 4 4   3 2 1
P1          2 1 2        4 3 3
P2          3 0 1        9 1 3
P3          0 2 0        8 6 4
P4          1 1 2        2 2 3
Considering the above processing table, we need to calculate the following two
things: Q.1 Calculate the need matrix. Q.2 Is the system in a safe state?
Ans.1 We can easily calculate the entries of the Need matrix using the formula:
(Need)i = (Max)i − (Allocation)i
Process   Need
          A B C
P0        4 3 2
P1        2 2 1
P2        6 1 2
P3        8 4 4
P4        1 1 1
Ans.2 Let us check for the safe sequence:
1. For process P0, Need = (4, 3, 2) and Available = (3, 2, 1) Clearly, the resources
needed are more in number than the available ones. So, now the system will
move to process the next request.
2. For Process P1, Need = (2, 2, 1) and Available = (3, 2, 1) Clearly, the resources
needed are less than equal to the available resources within the system. Hence,
the request of P1 is granted.
Available = Available + Allocation
= (3, 2, 1) + (2, 1, 2) = (5, 3, 3) (New Available)
3. For Process P2, Need = (6, 1, 2) and Available = (5, 3, 3) Clearly, the resources
needed are more in number than the available ones. So, now the system will
move to process the next request.


4. For Process P3, Need = (8, 4, 4) and Available = (5, 3, 3) Clearly, the resources
needed are more in number than the available ones. So, now the system will
move to process the next request.
5. For Process P4, Need = (1, 1, 1) and Available = (5, 3, 3) Clearly, the resources
needed are less than equal to the available resources within the system. Hence,
the request for P4 is granted.
Available = Available + Allocation
= (5, 3, 3) + (1, 1, 2) = (6, 4, 5) (New Available)
6. Now again check for Process P2, Need = (6, 1, 2) and Available = (6, 4,
5) Clearly, resources needed are less than equal to the available resources
within the system. Hence, the request of P2 is granted.
Available = Available + Allocation
= (6, 4, 5) + (3, 0, 1) = (9, 4, 6) (New Available)
7. Now again check for Process P3, Need = (8, 4, 4) and Available = (9, 4,
6) Clearly, the resources needed are less than equal to the available resources
within the system. Hence, the request for P3 is granted.
Available = Available + Allocation
= (9, 4, 6) + (0, 2, 0) = (9, 6, 6) (New Available)
8. Now again check for Process P0, Need = (4, 3, 2), and Available (9, 6, 6) Clearly,
the request for P0 is also granted.
Safe sequence: <P1, P4, P2, P3, P0>
The system has allocated all the required number of resources to each
process in a particular sequence. Therefore, it is proved that the system is in a safe
state.
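The trace above can be reproduced mechanically as a short sketch. Note that a simple first-fit scan finds a different, equally valid safe sequence than the one traced step by step in the text; finding any complete sequence is enough to prove the state safe.

```python
# The chapter's example state (Allocation, Max, Available).
alloc = [[1,1,2], [2,1,2], [3,0,1], [0,2,0], [1,1,2]]
mx    = [[5,4,4], [4,3,3], [9,1,3], [8,6,4], [2,2,3]]
avail = [3, 2, 1]
need  = [[mx[i][j] - alloc[i][j] for j in range(3)] for i in range(5)]

work, finish, seq = list(avail), [False] * 5, []
while len(seq) < 5:
    for i in range(5):
        if not finish[i] and all(need[i][j] <= work[j] for j in range(3)):
            # Pi can finish; it releases its allocation back into Work.
            work = [work[j] + alloc[i][j] for j in range(3)]
            finish[i] = True
            seq.append(i)
            break
    else:
        break   # no runnable process: unsafe state

print(need[0])                    # [4, 3, 2], matching the Need matrix above
print([f"P{i}" for i in seq])     # ['P1', 'P0', 'P2', 'P3', 'P4']
```

Here the scan picks P0 immediately after P1 (its need (4, 3, 2) fits the new Available (5, 3, 3)), yielding <P1, P0, P2, P3, P4>; the text's sequence <P1, P4, P2, P3, P0> is another valid answer for the same safe state.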

Constraints of Banker's Algorithm: The main constraint in Banker's algorithm is
that a request is granted only while Available can still satisfy the need of at
least one process; otherwise the system would land in an unsafe state and the
algorithm would have to roll back to the original allocation state.

Advantages of Banker's Algorithm:

• Banker's algorithm in OS maintains a [Max] array attribute, which indicates the
maximum number of resources of each type that a process can hold. Using the
[Max] array, we can always find the remaining need of a particular process:
[Need] = [Max] − [Allocated]
• This algorithm helps in detecting and avoiding deadlock and also, helps in
managing and controlling process requests of each type of resource within the
system.
• Each process should provide information to the operating system about
upcoming resource requests, the number of resources, delays, and how long the
resources will be held by the process before release. This is also one of the main
characteristics of the Bankers algorithm.
• Various types of resources are maintained by the system while using this
algorithm, which can fulfil the needs of at least one process type.


• This algorithm also consists of two other advanced algorithms for maximum
resource allocation.

Disadvantages of Banker's Algorithm:

• Banker's algorithm in OS doesn't allow a process to change its maximum
resource need while processing.
• Another disadvantage of this algorithm is that all the processes within the
system must know about the maximum resource needs in advance.
• It requires a fixed number of processes for processing and no additional process
can be started in between.
• This algorithm guarantees that every resource request is granted within a
finite period, but that period may be long (a bound such as one year is
sometimes quoted for allocating the resources).

Difference Between Program and Process

1. Program: A program contains a set of instructions designed to complete a
   specific task.
   Process: A process is an instance of an executing program.
2. Program: A program is a passive entity, as it resides in secondary memory.
   Process: A process is an active entity, as it is created during execution and
   loaded into main memory.
3. Program: A program exists in a single place and continues to exist until it
   is deleted.
   Process: A process exists for a limited period, as it gets terminated after
   the completion of its task.
4. Program: A program is a static entity.
   Process: A process is a dynamic entity.
5. Program: A program has no resource requirement; it only requires memory space
   for storing its instructions.
   Process: A process has a high resource requirement; it needs resources like
   the CPU, memory address space, and I/O during its lifetime.
6. Program: A program does not have a control block.
   Process: A process has its own control block, called the Process Control
   Block.
7. Program: A program has two logical components: code and data.
   Process: In addition to the program's data, a process also requires
   additional information needed for its management and execution.
8. Program: A program does not change itself.
   Process: Many processes may execute a single program; their program code may
   be the same, but their program data may differ and are never the same.
9. Program: A program contains instructions.
   Process: A process is a sequence of instruction execution.


REVIEW QUESTIONS
1. What is deadlock? Explain various conditions arising from deadlock.
2. Write short notes on (1) Resource Allocation Graph and (2) Simulators.
3. What is deadlock detection? Explain recovery from deadlock.
4. Write short notes on (1) Deterministic Modeling (2) Queuing Analysis.
5. Explain starvation.
6. Explain mutual exclusion in detail.
7. What is deadlock? Explain the resource allocation graph.
8. Explain the methods for recovery from deadlock.
9. Explain the Hold and Wait condition in brief.
10. What is deadlock? Explain the conditions for deadlock.
11. What is Deadlock Prevention? How can we prevent a deadlock?
12. Write a note on Multithreading.
13. Explain Banker's algorithm for deadlock avoidance.
14. Differentiate between program and process.
15. Explain circular wait conditions with examples.
