OS Unit - II
CHAPTER 2
PERFORMANCE AND DEADLOCK
PERFORMANCE COMPARISON
The performance of an operating system is dependent on a variety of factors,
such as the hardware specifications of the computer, the design and
implementation of the operating system, the type and number of applications
running on it, and the workload and environment. These elements can have
varying degrees of impact on the performance of an operating system, depending on
the situation and goal. For instance, the CPU, memory, disk, and network can
influence the kernel, scheduler, file system, security features, user interface,
background processes, and network services. Moreover, user input, system load,
and external events can also affect its performance.
There are many scheduling algorithms, each with its own parameters. As a
result, selecting an algorithm can be difficult. The first problem is defining the
criteria to be used in selecting an algorithm. Criteria are often defined in terms of
CPU utilization, response time, or throughput. To select an algorithm, we must first
define the relative importance of these measures. Our criteria may include several
measures, such as:
• Maximizing CPU utilization under the constraint that the maximum response
time is 1 second
• Maximizing throughput such that turnaround time is (on average) linearly
proportional to total execution time
Once the selection criteria have been defined, we want to evaluate the
algorithms under consideration. Following are the various evaluation methods we
can use.
DETERMINISTIC MODELLING
In a deterministic model, running the model with the same initial conditions
always produces the same result. A deterministic model involves no randomness:
when some work starts at a particular time and proceeds at the same pace every
time, the output depends only on the initial conditions. For a well-defined
linear model, a unique input produces a unique output, while a non-linear model
may produce multiple outputs. Such models can be classified by their temporal
behaviour as time-independent, time-dependent, or dynamic. A deterministic
system assumes an exact relationship between variables, and this relationship
makes it possible to predict how the variables affect one another.
Page 46 of 159
Operating System
Consider five processes P1 through P5, all arriving at time 0, with CPU-burst
times of 10, 29, 3, 7, and 12 milliseconds, respectively. With FCFS scheduling,
we execute the processes as

| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61

The waiting time is 0 milliseconds for process P1, 10 milliseconds for process
P2, 39 milliseconds for process P3, 42 milliseconds for process P4, and 49
milliseconds for process P5. Thus, the average waiting time is (0 + 10 + 39 + 42 +
49)/5 = 28 milliseconds.
With non-preemptive SJF scheduling, we execute the processes as
| P3 | P4 | P1 | P5 | P2 |
0    3    10   20   32   61
The waiting time is 10 milliseconds for process P1, 32 milliseconds for
process P2, 0 milliseconds for process P3, 3 milliseconds for process P4, and 20
milliseconds for process P5. Thus, the average waiting time is (10 + 32 + 0 + 3 +
20)/5 = 13 milliseconds.
With the RR algorithm, we execute the processes as
| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61
The waiting time is 0 milliseconds for process P1, 32 milliseconds for process
P2, 20 milliseconds for process P3, 23 milliseconds for process P4, and 40
milliseconds for process P5. Thus, the average waiting time is (0 + 32 + 20 + 23 +
40)/5 = 23 milliseconds.
We see that the average waiting time obtained with the SJF policy is less
than half that obtained with FCFS scheduling; the RR algorithm gives us an
intermediate value.
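The three averages above can be checked with a short sketch. The helper names are ours, and the burst times (10, 29, 3, 7, and 12 milliseconds) are read off the Gantt charts; all processes are assumed to arrive at time 0.

```python
from collections import deque

def avg_wait_fcfs(bursts):
    # FCFS: each process waits for the sum of all earlier bursts
    waits, t = [], 0
    for b in bursts:
        waits.append(t)
        t += b
    return sum(waits) / len(bursts)

def avg_wait_sjf(bursts):
    # Non-preemptive SJF: run in order of increasing burst length
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, t = [0] * len(bursts), 0
    for i in order:
        waits[i] = t
        t += bursts[i]
    return sum(waits) / len(bursts)

def avg_wait_rr(bursts, quantum):
    # Round robin: waiting time = completion time - burst (all arrive at 0)
    remaining = list(bursts)
    completion = [0] * len(bursts)
    ready, t = deque(range(len(bursts))), 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)      # unfinished: back to the end of the queue
        else:
            completion[i] = t
    waits = [completion[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(bursts)

bursts = [10, 29, 3, 7, 12]  # P1..P5
```

For this workload, `avg_wait_fcfs(bursts)` gives 28.0, `avg_wait_sjf(bursts)` gives 13.0, and `avg_wait_rr(bursts, 10)` gives 23.0, matching the deterministic analysis above.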
Deterministic modelling is simple and fast. It gives us exact numbers,
allowing us to compare the algorithms. However, it requires exact numbers for
input. The main uses of deterministic modelling are in describing scheduling
algorithms and providing examples. In cases where we are running the same
program over and over again and can measure the program's processing
requirements exactly, we may be able to use deterministic modelling to select a
scheduling algorithm.
QUEUING ANALYSIS
On many systems, the processes that are run vary from day to day, so there
is no static set of processes (or times) to use for deterministic modelling. What
can be determined, however, is the distribution of CPU and I/O bursts. These
distributions can be measured and then approximated or simply estimated. The
result is a mathematical formula describing the probability of a particular CPU
burst. Commonly, this distribution is exponential and is described by its mean.
Similarly, we can describe the distribution of times when processes arrive in the
system (the arrival-time distribution). From these two distributions, it is possible to
compute the average throughput, utilization, waiting time, and so on for most
algorithms.
The computer system is described as a network of servers. Each server has a
queue of waiting processes. The CPU is a server with its ready queue, as is the I/O
system with its device queues. Knowing arrival rates and service rates, we can
compute utilization, average queue length, average wait time, and so on. This area
of study is called queueing network analysis.
As an example, let n be the average queue length (excluding the process
being serviced), let W be the average waiting time in the queue, and let λ be the
average arrival rate for new processes in the queue (such as three processes per
second). We expect that during the time W that a process waits, λ × W new
processes will arrive in the queue. If the system is in a steady state, then the
number of processes leaving the queue must be equal to the number of processes
that arrive. Thus,
n = λ × W
This equation, known as Little's formula, is particularly useful because it is valid
for any scheduling algorithm and arrival distribution.
We can use Little's formula to compute one of the three variables if we know
the other two. For example, if we know that 7 processes arrive every second (on
average) and that there are normally 14 processes in the queue, then we can
compute the average waiting time per process as 2 seconds.
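Little's formula reduces to simple algebra; the tiny helper below (the function name is ours) returns whichever of the three quantities is missing.

```python
def littles_formula(n=None, lam=None, w=None):
    """Little's formula n = lam * w: supply any two of average queue
    length n, arrival rate lam, and average waiting time w; the
    missing third quantity is returned."""
    if n is None:
        return lam * w
    if lam is None:
        return n / w
    return n / lam
```

For the example above, `littles_formula(n=14, lam=7)` returns 2.0, the average waiting time per process in seconds.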
Queueing analysis can be useful in comparing scheduling algorithms, but it
also has limitations. At the moment, the classes of algorithms and distributions
that can be handled are fairly limited. The mathematics of complicated algorithms
and distributions can be difficult to work with. Thus, arrival and service
distributions are often defined in mathematically tractable but unrealistic ways. It
is also generally necessary to make several independent assumptions, which may
not be accurate. As a result of these difficulties, queueing models are often only
approximations of real systems, and the accuracy of the computed results may be
questionable.
SIMULATORS
The OS Simulator is designed to support two main aspects of a computer
system’s resource management: process management and memory management.
Once a compiled code is loaded into the CPU Simulator’s memory, its image is also
available to the OS Simulator. It is then possible to create multiple instances of the
program images as separate processes. The OS Simulator displays the running
processes, the ready processes, and the waiting processes. Each process is
assigned a separate Process Control Block (PCB) that contains information on the
process state. This information is displayed in a separate window. The main
memory display demonstrates the dynamic nature of page allocations according to
the currently selected placement policy. The OS maintains a separate page table for
each process, which can also be displayed. The simulator demonstrates how data
memory is relocated and the page tables are maintained as pages are moved in
and out of the main memory, illustrating virtual-memory activity.
The process scheduler includes various selectable scheduling policies that
include priority-based, pre-emptive, and round-robin scheduling with variable time
quanta. The OS can carry out context-switching which can be visually enhanced by
slowing down or suspending the progress at some key stage to enable the students
to study the states of CPU registers, stack, cache, pipeline, and PCB contents.
The simulator incorporates an input-output console device with a virtual
keyboard, used to display text and accept input data.
The OS simulator supports dynamic library simulation, backed by the
appropriate language constructs in the teaching language. The benefits of
sharing code between multiple processes are visually demonstrated. There is also a
facility to link static libraries demonstrating the differences between the two types
of libraries and their benefits and drawbacks.
The simulator allows manual allocation and de-allocation of resources to
processes. This facility is used to create and demonstrate
process deadlocks associated with resources and enables experimentation with
deadlock prevention, detection, and resolution.
STARVATION
Starvation is a resource-management problem in which a process does not
get the resources it needs for a long time because they are continuously
allocated to other processes. The problem occurs mainly in priority-based
scheduling algorithms, in which requests with high priority get processed first
while the lowest-priority processes may wait indefinitely.
What Causes Starvation in OS? Here are a few reasons why starvation in
OS occurs:
• In starvation, a process with low priority might wait forever if processes with
higher priority use the processor constantly.
• Because the low-priority processes merely wait without holding resources in a
cycle, deadlock does not occur, but there is a chance of starvation as the
low-priority processes are kept in a wait state.
• Hence avoiding deadlock in this way is not a complete safeguard: the deadlock
is prevented, but the system in general still suffers because some processes
never make progress.
• An important cause of starvation is that there are not enough resources to
provide for every process.
• If process selection is random, then there is a possibility that a process
might have to wait for a long duration.
• Starvation can also occur when a resource is never granted to a process
due to faulty allocation of resources.
Example of Starvation:
In the given example, process P2 has the highest priority and
process P1 has the lowest priority. In the diagram, it can be seen that n
processes are ready for execution. The process with the highest priority, P2,
enters the CPU first, and process P1 keeps waiting for its turn because all the
other processes have a higher priority than P1. This situation, in which a
process waits indefinitely, is called starvation.
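The effect can be sketched with a toy scheduler. The simulation below is hypothetical (the job names and numbers are ours): a fresh high-priority job arrives every tick, so a strict priority scheduler never reaches the low-priority job unless aging, the standard remedy in which waiting jobs gradually gain priority, is enabled.

```python
def first_run_tick(ticks, low_priority=9, high_priority=1, aging=0):
    """Each tick a new high-priority job arrives, then the scheduler runs
    the best ready job (lower number = higher priority). Returns the tick
    at which the low-priority job finally runs, or None if it starves."""
    ready = {"P_low": low_priority}
    for t in range(ticks):
        ready[f"H{t}"] = high_priority
        # strict priority pick; ties broken by name for determinism
        pick = min(ready, key=lambda name: (ready[name], name))
        if pick == "P_low":
            return t
        del ready[pick]
        for name in ready:      # aging: every passed-over job gains priority
            ready[name] -= aging
    return None
```

With `aging=0`, `first_run_tick(100)` returns `None`: P_low starves for the whole simulation. With `aging=1`, its priority number falls each tick and it runs after a finite wait.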
DEADLOCK
All the processes in a system require resources such as the central
processing unit (CPU), file storage, input/output devices, etc., to execute. Once
execution is finished, the process releases the resources it was holding. However,
when many processes run on a system, they also compete for the resources they
require for execution. This may cause a deadlock situation.
A deadlock is a situation in which two or more processes are blocked
because each is holding a resource while also requiring a resource that is held
by another process. Therefore, none of the processes gets executed.
Example Of Deadlock
In the above figure, there are two processes and two resources. Process 1
holds "Resource 1" and needs "Resource 2" while Process 2 holds "Resource 2" and
requires "Resource 1". This creates a situation of deadlock because none of the two
processes can be executed. Since the resources are non-shareable, they can only be
used by one process at a time (Mutual Exclusion). Each process is holding a
resource and waiting for the other process to release the resource it requires.
Neither of the two processes releases its resources before its execution
completes, and this creates a circular wait. Therefore, all four necessary
conditions for a deadlock are satisfied.
Methods of Handling Deadlocks: The first two methods are used to ensure
the system never enters a deadlock.
Deadlock Prevention: This is done by restraining the ways a request can be
made. Since deadlock occurs when all the above four conditions are met, we try to
prevent any one of them, thus preventing a deadlock.
Deadlock Avoidance: When a process requests a resource, the deadlock-
avoidance algorithm examines the resource-allocation state. If allocating that
resource would send the system into an unsafe state, the request is not granted.
Therefore, this method requires additional information, such as how many
resources of each type each process may require. If granting a request would
move the system into an unsafe state, the system must hold the request back to
avoid deadlock.
Deadlock Detection and Recovery: We let the system fall into a deadlock and
if it happens, we detect it using a detection algorithm and try to recover. Some
ways of recovery are as follows.
• Abort all the deadlocked processes.
• Abort one process at a time until the system recovers from the deadlock.
• Resource Preemption: Resources are taken one by one from a process and
assigned to higher priority processes until the deadlock is resolved.
Deadlock Ignorance: In this method, the system assumes that deadlock never
occurs. Since deadlock situations are not frequent, some systems simply ignore
the problem. Operating systems such as UNIX and Windows follow this
approach. However, if a deadlock occurs, we can reboot the system, and the
deadlock is resolved.
RESOURCE ALLOCATION GRAPH (RAG)
A resource allocation graph contains all the information about all the
instances of the resources, that is, which resources are available and which
resources are being used by which process. In this graph, a circle is used to
represent a process, and a rectangle is used to represent a resource.
Vertices: There are two kinds of vertices used in the resource allocation graph
and these are:
• Process Vertices
• Resource Vertices
Process Vertices: These vertices represent processes. A circle is used to draw
a process vertex, and the name of the process is written inside the circle.
Resource Vertices: These vertices represent resources. A rectangle is used to
draw a resource vertex, and dots inside the rectangle indicate the number of
instances of that resource.
In the system, there may exist several instances of a resource, and
accordingly there are two types of resource vertices: single instance and
multiple instance.
Single Instance: In a single-instance resource type, there is a single dot inside
the box, indicating that there is one instance of the resource.
Multiple Instance: In a multiple-instance resource type, there are multiple dots
inside the box, indicating that there are multiple instances of the resource.
Edges: In the Resource Allocation Graph, Edges are further categorized into two:
Request Edges: A request edge represents a pending request of a process for a
resource. It is drawn as an arrow from the process to the instance of the
resource.
Assign Edges: Assign edges represent the allocation of resources to processes.
An assign edge is drawn as an arrow whose tail starts at the instance of the
resource and whose arrowhead points to the process.
Single Instance RAG Example: Suppose there are Four Processes P1, P2,
P3, P4, and two resources R1 and R2, where P1 is holding R1 and P2 is holding R2,
P3 is waiting for R1 and R2 while P4 is waiting for resource R1.
Multiple Instance RAG Example: Suppose there are four processes P1, P2,
P3, P4 and there are two instances of resource R1 and two instances of resource
R2:
Advantages of RAG
• It is pretty effective in detecting deadlocks.
• The Banker's Algorithm makes extensive use of it.
• It's a graphical depiction of a system.
• A quick peek at the graph may sometimes tell us if the system is in a deadlock
or not.
• Understanding resource allocation using RAG takes less time.
Disadvantages of RAG
• RAG is practical only when there are few processes and resources.
• When dealing with a large number of resources or processes, it is preferable to
keep the data in a table rather than in a RAG.
• When there are a vast number of resources or processes, the graph becomes
challenging to interpret and very complex.
NECESSARY CONDITIONS FOR DEADLOCK
1. Mutual Exclusion: Two or more processes must compete for the same
resources, and some resources can be used by only one process at a time.
This means the resource is non-sharable. It could be a physical resource
like a printer or an abstract one like a lock on a shared data structure.
Deadlocks never occur with shareable resources such as read-only files, but
they can occur with resources that require exclusive access, such as tape
drives and printers.
2. Hold and Wait: Hold and wait occurs when a process is holding at least one
resource while waiting to acquire another resource that is currently held by
some other process. Each of the deadlocked processes must hold at least one
of the resources being requested; a process that holds no resources cannot
take part in this condition.
In the diagram below, Process 1, currently holding Resource 1, is waiting
for the additional Resource 2.
3. No Preemption: A resource cannot be forcibly taken away from the process
holding it. A resource can be released only voluntarily by the holding
process after it has completed its task.
4. Circular Wait: A circular wait exists when a set of processes wait for each
other in a cycle, each holding a resource that the next process in the cycle
is requesting, creating a deadlock. There must be a cycle in the graph below:
process 1 holds resource R1, which process 2 in the cycle is waiting for. To
understand this better, consider another example: Process A holds Resource X
while waiting for Resource Y, Process B holds Resource Y while waiting for
Resource Z, and so on around the cycle.
Here,
Process P1 waits for a resource held by process P2.
Process P2 waits for a resource held by process P3.
Process P3 waits for a resource held by process P4.
Process P4 waits for a resource held by process P1.
DEADLOCK PREVENTION
Deadlock prevention means eliminating one of the necessary conditions of
deadlock so that only safe requests are made to the OS and the possibility of
deadlock is excluded before any request is made. Here the OS does not need to
do any additional work, as it does in deadlock avoidance, where it runs an
algorithm on every request to check for the possibility of deadlock.
Mutual Exclusion: This condition can be eliminated for some resources by
spooling. For example, instead of letting processes access the printer directly,
print jobs are added to the spooler directory and printed on a first-come,
first-served (FCFS) basis. In this way, the process does not have to wait for the
printer, and it continues its work after adding its job to the queue. We can
understand the working of the spooler directory better with the diagram given
below:
Hold and Wait: Hold and wait is a condition in which a process is holding one
resource while simultaneously waiting for another resource that is being held by
another process. The process cannot continue till it gets all the required resources.
In the diagram given below:
No Preemption: To eliminate this condition, resources must be preempted. Two
approaches are possible:
1. If a process is holding some resources and waiting for other resources, then it
should release all previously held resources and put in a new request for all
the required resources again. The process can resume once it has all the
required resources.
For example: If a process has resources R1, R2, and R3 and it is waiting for
resource R4, then it has to release R1, R2, and R3 and put a new request for all
resources again.
2. If a process P1 is waiting for some resource, and there is another process P2
that is holding that resource while blocked waiting for some other resource,
then the resource is taken from P2 and allocated to P1. In this way, process
P2 is preempted, and it must request its required resources again to resume
the task.
The above approaches are possible for resources whose states are easily restored
and saved, such as memory and registers.
Circular Wait: In circular wait, two or more processes wait for resources in a
circular order. We can understand this better by the diagram given below:
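In real code, the usual way to break the circular-wait condition is to impose one global ordering on all locks and require every thread to acquire them in that order. A minimal sketch, assuming two lock-protected resources (the lock names and the `with_both` helper are hypothetical):

```python
import threading

lock_r1 = threading.Lock()        # stands in for Resource 1
lock_r2 = threading.Lock()        # stands in for Resource 2
LOCK_ORDER = [lock_r1, lock_r2]   # every thread acquires in this fixed order

def with_both(action):
    # Acquiring in one global order makes a cycle of waits impossible:
    # no thread can hold lock_r2 while waiting for lock_r1.
    for lock in LOCK_ORDER:
        lock.acquire()
    try:
        return action()
    finally:
        for lock in reversed(LOCK_ORDER):
            lock.release()

results = []
threads = [
    threading.Thread(target=with_both, args=(lambda name=n: results.append(name),))
    for n in ("t1", "t2")
]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Both threads run to completion in some order; had one thread taken the locks in the reverse order, the two could deadlock exactly as in the two-process example earlier in the chapter.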
DEADLOCK DETECTION
If a system does not employ either a deadlock-prevention or a deadlock-
avoidance algorithm, then a deadlock situation may occur. In this environment, the
system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock
Single Instance of Each Resource Type- If all resources have only a single
instance, then we can define a deadlock-detection algorithm that uses a variant of
the resource-allocation graph, called a wait-for graph. We obtain this graph from
the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges.
Figure: (a) Resource-allocation graph (b) Corresponding wait-for graph
More precisely, an edge from Pi to Pj in a wait-for graph implies that
process Pi is waiting for process Pj to release a resource that Pi needs. An edge
Pi → Pj exists in a wait-for graph if and only if the corresponding resource-
allocation graph contains the two edges Pi → Rq and Rq → Pj for some resource
Rq. In the figure, we present a resource-allocation graph and the corresponding
wait-for graph.
As before, a deadlock exists in the system if and only if the wait-for graph
contains a cycle. To detect deadlocks, the system needs to maintain the wait-for
graph and periodically invoke an algorithm that searches for a cycle in the graph.
An algorithm to detect a cycle in a graph requires an order of n² operations, where
n is the number of vertices in the graph.
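Collapsing the resource nodes and searching for a cycle can be sketched directly. The edge representation below (pairs of strings) is our own choice, not a standard API:

```python
def wait_for_graph(assign_edges, request_edges):
    """assign_edges: (resource, holder) pairs; request_edges: (process,
    resource) pairs. Collapses resource nodes: Pi -> Pj exists iff Pi
    requests a resource currently held by Pj."""
    holder = {r: p for r, p in assign_edges}
    wfg = {}
    for p, r in request_edges:
        if r in holder and holder[r] != p:
            wfg.setdefault(p, set()).add(holder[r])
    return wfg

def has_cycle(graph):
    """Depth-first search with three colours; a back edge to a GRAY
    (in-progress) node means the wait-for graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    colour = {}
    def dfs(u):
        colour[u] = GRAY
        for v in graph.get(u, ()):
            state = colour.get(v, WHITE)
            if state == GRAY:
                return True
            if state == WHITE and dfs(v):
                return True
        colour[u] = BLACK
        return False
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    return any(colour.get(u, WHITE) == WHITE and dfs(u) for u in nodes)
```

With the two-process example from earlier in the chapter (P1 holds R1 and requests R2, P2 holds R2 and requests R1), the wait-for graph is P1 → P2 → P1 and `has_cycle` reports a deadlock.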
Several Instances of a Resource Type: The wait-for graph scheme does not
apply to a resource-allocation system with multiple instances of each resource type.
We turn now to a deadlock detection algorithm that is applicable to such a system.
The algorithm employs several time-varying data structures that are similar to
those used in the banker's algorithm:
• Available- The number of available resources of each type.
• Allocation- The number of resources of each type currently allocated to each
process.
• Request- The current request of each process.
We consider a system with five processes P0 through P4 and three resource
types A, B, and C. Resource type A has seven instances, resource type B has two
instances, and resource type C has six instances. Suppose that, at time T0, we have
the following resource-allocation state:

       Allocation    Request    Available
       A  B  C       A  B  C    A  B  C
P0     0  1  0       0  0  0    0  0  0
P1     2  0  0       2  0  2
P2     3  0  3       0  0  0
P3     2  1  1       1  0  0
P4     0  0  2       0  0  2

We claim that the system is not in a deadlocked state: if we execute the sequence
<P0, P2, P3, P1, P4>, then Finish[i] becomes true for all i.
Suppose now that process P2 makes one additional request for an instance of
type C. The Request matrix is modified as follows:

       Request
       A  B  C
P0     0  0  0
P1     2  0  2
P2     0  0  1
P3     1  0  0
P4     0  0  2
We claim that the system is now deadlocked. Although we can reclaim the
resources held by process P0, the number of available resources is not sufficient to
fulfil the requests of the other processes. Thus, a deadlock exists, consisting of
processes P1, P2, P3, and P4.
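The detection algorithm sketched above can be written out as a short function (the function name is ours); the matrices below reproduce the chapter's example.

```python
def detect_deadlock(available, allocation, request):
    """Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Assume Pi can finish and return its allocation to the pool
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request_t0 = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
available  = [0, 0, 0]
```

At time T0, `detect_deadlock(available, allocation, request_t0)` returns an empty list: no deadlock. Changing P2's request to (0, 0, 1) makes it return `[1, 2, 3, 4]`, the deadlocked set P1 through P4 claimed in the text.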
If preemption is used to deal with deadlocks, then three issues need to be
addressed:
1. Victim Selection- Which resources and which processes are to be preempted?
We must determine the order of preemption so as to minimize cost. Cost factors
may include the number of resources a deadlocked process is holding and the
amount of time it has consumed so far.
2. Rollback- If we preempt a resource from a process, the process cannot
continue its normal execution; it must be rolled back to some safe state and
restarted from that state. Since it is generally difficult to determine a safe
state, the simplest solution is a total rollback: abort the process and restart
it. It is more effective to roll the process back only as far as necessary to
break the deadlock, but this method requires the system to keep more information
about the state of all running processes.
3. Starvation- How do we ensure that starvation will not occur? That is, how can
we guarantee that resources will not always be preempted from the same
process?
In a system where victim selection is based primarily on cost factors, the
same process may always be picked as the victim. As a result, this process never
completes its designated task, a starvation situation that must be dealt with in
any practical system. We must ensure that a process can be picked as a victim
only a (small) finite number of times. The most common solution is to include
the number of rollbacks in the cost factor.
Recovery Strategies/Methods:
• Process Termination: One way to recover from a deadlock is to terminate one
or more of the processes involved in the deadlock. By releasing the resources
held by these terminated processes, the remaining processes may be able to
continue executing. However, this approach should be used cautiously, as
terminating processes could lead to loss of data or incomplete transactions.
• Resource Preemption: Resources can be forcibly taken away from one or
more processes and allocated to the waiting processes. This approach can break
the circular wait condition and allow the system to proceed. However, resource
preemption can be complex and needs careful consideration to avoid disrupting
the execution of processes.
• Process Rollback: In situations where processes have checkpoints or states
saved at various intervals, a process can be rolled back to a previously saved
state. This means that the process will release all the resources acquired after
the saved state, which can then be allocated to other waiting processes.
Rollback, though, can be resource-intensive and may not be feasible for all types
of applications.
• Wait-Die and Wound-Wait Schemes: These timestamp-based schemes can also
be used for recovery. An older process may preempt resources from a younger
process (Wound-Wait), or a younger process is terminated if it requests a
resource held by an older process (Wait-Die).
• Kill the Deadlock: In some cases, it might be possible to identify a specific
process that is causing the deadlock and terminate it. This is typically a last
resort option, as it directly terminates a process without any complex recovery
mechanisms.
BANKER’S ALGORITHM
The Banker's algorithm in the operating system is used to avoid deadlock
and to allocate resources safely to each process in the system. As the name
suggests, it is modelled on the way a banker decides whether a loan can be
sanctioned to a person without leaving the bank unable to meet the needs of its
other customers.
1. Safety Algorithm: The safety algorithm checks whether the system is in a
safe state. If at least one ordering of resource allocation keeps the system in
a safe state, then deadlock can be avoided. The steps in the safety algorithm
are:
Step 1: Let there be two vectors, Work and Finish, of length m and n,
respectively. The Work vector represents the available resources of each type
(R0, R1, ..., Rm-1), and the Finish vector represents whether a particular
process Pi has finished execution or not; each entry holds a Boolean value
true/false. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, ..., n-1
Step 2: Find an index i such that Finish[i] == false and Needi ≤ Work. If no
such i exists, go to Step 4.
Step 3: Set Work = Work + Allocationi and Finish[i] = true, then go to Step 2.
Step 4: If Finish[i] == true for all i, then the system is in a safe state.
2. Resource-Request Algorithm: When a process Pi makes a request, the system
pretends to allocate the requested resources and runs the safety algorithm on
the resulting state. If the resulting state is safe, the resources are actually
allocated to Pi. If the new state is unsafe, then the process Pi has to wait
for the resources to fulfil its request demands, and the old allocation state
is again restored.
Page 67 of 159
Operating System
4. For Process P3, Need = (8, 4, 4) and Available = (5, 3, 3). Clearly, the
resources needed exceed the available ones, so the system moves on to the next
request.
5. For Process P4, Need = (1, 1, 1) and Available = (5, 3, 3). Clearly, the
resources needed are less than or equal to the available resources within the
system. Hence, the request of P4 is granted.
Available = Available + Allocation
= (5, 3, 3) + (1, 1, 2) = (6, 4, 5) (New Available)
6. Now again check for Process P2: Need = (6, 1, 2) and Available = (6, 4, 5).
Clearly, the resources needed are less than or equal to the available resources
within the system. Hence, the request of P2 is granted.
Available = Available + Allocation
= (6, 4, 5) + (3, 0, 1) = (9, 4, 6) (New Available)
7. Now again check for Process P3: Need = (8, 4, 4) and Available = (9, 4, 6).
Clearly, the resources needed are less than or equal to the available resources
within the system. Hence, the request of P3 is granted.
Available = Available + Allocation
= (9, 4, 6) + (0, 2, 0) = (9, 6, 6)
8. Now again check for Process P0: Need = (4, 3, 2) and Available = (9, 6, 6).
Clearly, the request of P0 is also granted.
Safe sequence: <P1, P4, P2, P3, P0>
The system has allocated all the required number of resources to each
process in a particular sequence. Therefore, it is proved that the system is in a safe
state.
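The safety check that drives the walkthrough above can be sketched as follows. Since the chapter's initial Allocation and Need tables are not fully reproduced here, the data used in the usage example is a classic illustrative data set, not this chapter's example.

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: returns (is_safe, one safe sequence of
    process indices, found by always picking the lowest satisfiable index)."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend Pi runs to completion and returns its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, sequence   # no process can proceed: unsafe
    return True, sequence
```

For example, with Available = (3, 3, 2), Allocation = [(0,1,0), (2,0,0), (3,0,2), (2,1,1), (0,0,2)], and Need = [(7,4,3), (1,2,2), (6,0,0), (0,1,1), (4,3,1)], the function reports a safe state with the sequence <P1, P3, P0, P2, P4>.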
• The algorithm consists of two component algorithms, the safety algorithm and
the resource-request algorithm, for safe maximum resource allocation.
REVIEW QUESTIONS
1. What is deadlock? Explain various conditions arising from deadlock.
2. Write short notes on (1) Resource Allocation Graph and (2) Simulators.
3. What is deadlock detection? Explain recovery of deadlock.
4. Write short notes on (1) Deterministic Modeling (2) Queuing Analysis.
5. Explain starvation.
6. Explain mutual exclusion in detail.
7. What is deadlock? Explain the resource allocation graph.
8. Explain the methods for recovery from deadlock.
9. Explain the Hold and Wait condition in brief.
10. What is deadlock? Explain the conditions for deadlock.
11. What is Deadlock Prevention? How can we prevent a deadlock?
12. Write a note on Multithreading.
13. Explain Banker's algorithm for deadlock avoidance.
14. Differentiate between program and process.
15. Explain circular wait conditions with examples.