
OS UNIT-3 ONLY THEORY LONGS

1a) Interpret the solution to the Readers-Writers problem using semaphores.
A) The Readers-Writers problem involves synchronizing
access to a shared resource (such as a database) between
multiple readers (who only need to read the resource) and
writers (who need to modify the resource). The challenge is
to allow concurrent reads while ensuring that writers have
exclusive access, preventing conflicts between readers and
writers.
Solution Using Semaphores:
Semaphores can be used to manage access to shared
resources and to enforce the mutual exclusion conditions
needed for synchronization.
Here is how we can use semaphores to solve the Readers-
Writers problem:

Semaphores and Shared Variables Used:
1. mutex: A binary semaphore that ensures mutual exclusion when readers update the shared variable read_count.
2. rw_mutex: A binary semaphore that gives a writer exclusive access to the shared resource; the first reader acquires it and the last reader releases it on behalf of the whole group of readers.
3. read_count: An integer variable (not a semaphore) that keeps track of the number of active readers; it is read and updated only while holding mutex.
Algorithm Explanation:
Readers:
 Entry:
o Acquire the mutex semaphore to update the
read_count variable safely.
o If it's the first reader (i.e., read_count == 0), acquire
the rw_mutex to prevent writers from accessing
the shared resource.
o Increment the read_count to indicate a new reader.
o Release the mutex to allow other readers to check
their conditions.
 Exit:
o Acquire the mutex semaphore to safely update the
read_count variable.
o Decrement read_count. If the last reader (i.e.,
read_count == 0), release the rw_mutex to allow
writers to access the shared resource.
o Release the mutex.
Writers:
 Entry:
o Acquire the rw_mutex semaphore to ensure
exclusive access to the shared resource.
 Exit:
o Release the rw_mutex semaphore to allow other
writers or readers to access the shared resource.
semaphore mutex = 1;     // Protects read_count
semaphore rw_mutex = 1;  // Writers' (and first/last reader's) exclusive access
int read_count = 0;      // Number of active readers

// Reader Process
Reader() {
    while (true) {
        wait(mutex);             // Enter critical section for read_count
        read_count++;            // One more active reader
        if (read_count == 1)
            wait(rw_mutex);      // First reader acquires the write lock
        signal(mutex);           // Exit critical section

        // ... read the shared resource ...

        wait(mutex);             // Enter critical section for read_count
        read_count--;            // One less active reader
        if (read_count == 0)
            signal(rw_mutex);    // Last reader releases the write lock
        signal(mutex);           // Exit critical section
    }
}

// Writer Process
Writer() {
    while (true) {
        wait(rw_mutex);          // Wait for exclusive access to the resource

        // ... write to the shared resource ...

        signal(rw_mutex);        // Release the write lock
    }
}
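A minimal runnable sketch of the same scheme, assuming POSIX semaphores and pthreads in C; the shared_data counter, thread counts, and print statements are illustrative assumptions (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;          /* protects read_count                 */
static sem_t rw_mutex;       /* exclusive access for writers        */
static int read_count = 0;   /* number of active readers            */
static int shared_data = 0;  /* the shared resource (assumed)       */

static void *reader(void *arg) {
    sem_wait(&mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex);         /* first reader blocks writers  */
    sem_post(&mutex);

    printf("reader %ld sees %d\n", (long)arg, shared_data);

    sem_wait(&mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex);         /* last reader admits writers   */
    sem_post(&mutex);
    return NULL;
}

static void *writer(void *arg) {
    sem_wait(&rw_mutex);             /* exclusive access             */
    shared_data += 1;
    printf("writer %ld wrote %d\n", (long)arg, shared_data);
    sem_post(&rw_mutex);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);
    sem_init(&rw_mutex, 0, 1);
    pthread_create(&t[0], NULL, reader, (void *)1L);
    pthread_create(&t[1], NULL, writer, (void *)1L);
    pthread_create(&t[2], NULL, reader, (void *)2L);
    pthread_create(&t[3], NULL, writer, (void *)2L);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}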

Key Points in the Solution:


1. Reader-Writer Locking: The rw_mutex semaphore
ensures that only one writer can access the resource at a
time, while multiple readers can access the resource
concurrently.
2. Synchronization: The mutex semaphore ensures that
updates to the read_count are done atomically to avoid
race conditions.
3. Efficiency: Multiple readers can read concurrently, but
when a writer needs access, no readers can access the
resource, and the writer will wait until all readers have
finished.
Advantages of Using Semaphores:
1. Concurrency: Allows multiple readers to access the
resource simultaneously without conflicts, while
maintaining mutual exclusion for writers.
2. Deadlock-Free: The careful ordering of semaphore
acquisition ensures that no process gets stuck
indefinitely (no deadlock).
3. Fairness: The version shown above is the "first readers-writers" solution, which favours readers; a waiting writer can be starved if new readers keep arriving. Fair access between readers and writers requires additional mechanisms, such as an extra queueing semaphore or a priority scheme for pending requests.

1b) Discuss in detail about deadlock detection with multiple resource types.
B) Deadlock Detection with Multiple Resource Types:
Deadlock occurs when a set of processes are in a state where
each process is waiting for an event that can only be caused
by another process in the set. In the case of multiple resource
types, deadlock detection becomes more complex, but it can
still be managed using appropriate techniques.
Key Concepts in Deadlock Detection with Multiple Resource
Types:
1. Resource Allocation Graph (RAG): This is a graphical
representation where nodes represent processes and
resources, and edges represent relationships between
them.
o Request Edge: From a process to a resource,
indicating a request for that resource.
o Assignment Edge: From a resource to a process,
indicating that the resource is allocated to the
process.
o Cycle Detection: When every resource type has a single instance, a cycle in this graph implies deadlock. When resource types have multiple instances, a cycle is necessary but not sufficient for deadlock, so a matrix-based detection algorithm is used instead.
2. Deadlock in the context of multiple resource types:
When multiple types of resources are involved, we must
detect a cycle involving not just one type of resource but
combinations of different resource types.
Steps in Deadlock Detection for Multiple Resource Types:
1. Resource Allocation Matrix: We need to maintain a
resource allocation matrix and a resource request
matrix:
o Allocation Matrix (A): Shows the number of
resources of each type allocated to each process.
o Request Matrix (R): Shows the number of
resources of each type that each process is
requesting.
Let’s assume:
o m: The number of resource types.
o n: The number of processes.
These matrices help track which resources are allocated to
which processes and which resources are still being
requested by processes.
2. Availability Vector:
An availability vector tracks the number of instances of
each resource type that are currently available in the
system.
3. Deadlock Detection Algorithm for Multiple Resources:
The key challenge is to identify if there exists a cycle in
the resource allocation graph where resources are held
and requested simultaneously by different processes.
Here's a simplified approach to detecting deadlock in
systems with multiple resource types:
Step-by-Step Algorithm:
o Step 1: Construct the resource allocation graph
with processes and resources.
 The processes are represented by nodes.
 The resources are represented by nodes as
well, and edges between them represent
allocation and request relationships.
o Step 2: Check if the system is in a safe or unsafe
state:
 A safe state is one where there is a sequence
of processes such that each process can
eventually obtain all the resources it needs to
complete and release those resources.
 An unsafe state could potentially lead to
deadlock.
o Step 3: Apply a detection algorithm (a variant of the safety algorithm used inside the Banker's Algorithm):
 The algorithm simulates the completion of processes and checks whether there exists a sequence in which every process can obtain its outstanding requests, finish, and release its resources. If such a sequence exists, the system is not deadlocked; any processes that can never be included in such a sequence are deadlocked.
o Step 4: Cycle Detection:
 Use a cycle-detection algorithm such as Depth-First Search (DFS) on the resource allocation graph (or on the derived wait-for graph).
 If every resource type has a single instance, a cycle means that each process in the cycle is waiting for a resource held by another process in the cycle, so the cycle implies deadlock.
 If resource types have multiple instances, a cycle only signals a possible deadlock, and the matrix-based check from the previous steps decides whether a deadlock actually exists.
4. Deadlock Detection Algorithm (Basic Outline for Multiple Resource Types):
o Initialize a Work vector to the Available vector, and initially mark as unfinished every process that holds at least one resource.
o Find an unfinished process whose Request row is less than or equal to Work. If one is found, assume it runs to completion: add its Allocation row to Work, mark it finished, and repeat this step.
o When no such process remains, every process that is still unfinished is deadlocked. If all processes finish, the system contains no deadlock.
5. Handling Deadlock: Once a deadlock is detected, several
strategies can be employed to handle the situation:
o Abort a process: Terminate one or more processes
involved in the deadlock to break the cycle.
o Resource Preemption: Preempt resources from
processes and allocate them to other processes to
break the deadlock.
Example:
Consider a system with two resource types, R1 and R2, and three processes, P1, P2, and P3.
 Total instances: R1 = 3, R2 = 2. After the allocations below, every instance is allocated, so the Available vector is (0, 0).
 Allocation Matrix:
o P1: R1 = 1, R2 = 1
o P2: R1 = 2, R2 = 0
o P3: R1 = 0, R2 = 1
 Request Matrix:
o P1: R1 = 0, R2 = 1
o P2: R1 = 1, R2 = 1
o P3: R1 = 1, R2 = 0
In this case:
 P1 is waiting for one instance of R2.
 P2 is waiting for one instance of R1 and one instance of R2.
 P3 is waiting for one instance of R1.
Since Available = (0, 0), none of the outstanding requests can be satisfied, no process can finish and release its resources, and the detection algorithm reports a deadlock involving P1, P2 and P3 (for example, the wait-for cycle P1 → P3 → P2 → P1, since P3 holds an instance of R2, P2 holds instances of R1, and P1 holds an instance of R2).
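A minimal sketch of this matrix-based detection in C, assuming the hard-coded Allocation and Request matrices from the example above (the array names and output format are illustrative):

#include <stdio.h>
#include <stdbool.h>

#define N 3   /* processes      */
#define M 2   /* resource types */

int main(void) {
    int alloc[N][M]   = {{1,1},{2,0},{0,1}};   /* Allocation matrix  */
    int request[N][M] = {{0,1},{1,1},{1,0}};   /* Request matrix     */
    int work[M]       = {0,0};                 /* Available = (0,0)  */
    bool finish[N]    = {false,false,false};

    bool progress = true;
    while (progress) {
        progress = false;
        for (int p = 0; p < N; p++) {
            if (finish[p]) continue;
            bool can_run = true;
            for (int r = 0; r < M; r++)
                if (request[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                      /* pretend p finishes and  */
                for (int r = 0; r < M; r++)     /* releases its resources  */
                    work[r] += alloc[p][r];
                finish[p] = true;
                progress = true;
            }
        }
    }

    for (int p = 0; p < N; p++)
        if (!finish[p])
            printf("P%d is deadlocked\n", p + 1);
    return 0;
}

Running this on the example prints that P1, P2 and P3 are all deadlocked, matching the conclusion above.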
Challenges in Deadlock Detection with Multiple Resource
Types:
1. Complexity: The detection process is more complex
when there are multiple resource types and numerous
processes involved. Graph algorithms may require
significant computational time and space.
2. Efficiency: Continuously checking for deadlock can be
computationally expensive, especially in large systems.
3. Resource Preemption: Preempting resources to break
deadlocks may require careful management to avoid
other issues like starvation or priority inversion.
2a) Interpret the solution to the Dining Philosophers problem using semaphores.
A) Solution to the Dining Philosophers Problem Using
Semaphores:
The Dining Philosophers problem is a classic synchronization
problem that illustrates the challenges of resource sharing
between competing processes. The problem involves a set of
philosophers who spend their time thinking and eating, but
they need two resources (forks) to eat. Philosophers can pick
up a fork, but they must pick up both forks to eat. The
challenge is to prevent deadlock and ensure that no
philosopher is starved (i.e., waits forever to eat).
The solution to this problem using semaphores can help
manage synchronization and resource allocation.
Key Concepts:
1. Philosophers: Each philosopher can either think or eat.
When thinking, the philosopher does not need any
resources (forks). When eating, the philosopher needs
two forks (one on the left and one on the right).
2. Forks: There are five forks, and each fork is shared
between two philosophers (the one on the left and the
one on the right).
3. Semaphore: Semaphores are used to ensure mutual
exclusion and synchronization between the philosophers
and the forks.
Solution Approach Using Semaphores:
We need to use semaphores to control access to the shared
resources (forks) and ensure that:
 No philosopher picks up two forks at the same time.
 Philosophers do not deadlock (i.e., all philosophers do
not wait for each other indefinitely).
 Philosophers can eat and think in an orderly manner,
without starvation.
Semaphores Used:
1. mutex Semaphore:
o This is used to ensure mutual exclusion when a
philosopher is picking up or putting down forks. It
protects the critical section where a philosopher
changes their state (thinking, eating).
2. fork[i] Semaphore (for each fork):
o These are binary semaphores (initialized to 1) that
represent the availability of each fork. A
philosopher can pick up a fork if the corresponding
semaphore allows it (i.e., the fork is available). After
the philosopher finishes eating, the fork semaphore
is released (set to 1), making the fork available to
others.
Algorithm:
Philosopher's Process:
semaphore mutex = 1;                  // Mutual exclusion while picking up forks
semaphore fork[5] = {1, 1, 1, 1, 1};  // One binary semaphore per fork (initially available)

void philosopher(int i) {
    while (true) {
        think();                      // Philosopher is thinking

        wait(mutex);                  // Enter critical section to pick up forks
        wait(fork[i]);                // Pick up left fork
        wait(fork[(i + 1) % 5]);      // Pick up right fork
        signal(mutex);                // Exit critical section; others may now try to pick up forks

        eat();                        // Philosopher is eating

        signal(fork[i]);              // Put down left fork
        signal(fork[(i + 1) % 5]);    // Put down right fork
    }
}
Detailed Explanation of the Algorithm:
1. Thinking:
o The philosopher spends time thinking. No
synchronization is needed during this phase, so the
philosopher does not interact with the forks.
2. Picking up Forks:
o The philosopher first waits for the mutex to enter
the critical section.
o They then pick up their left fork (i.e., fork[i]) and
right fork (i.e., fork[(i + 1) % 5]).
o The wait(fork[i]) ensures that a philosopher cannot
pick up a fork if it is already being used by another
philosopher (binary semaphore ensures mutual
exclusion).
o After picking up both forks, the philosopher exits
the critical section by signaling mutex.
3. Eating:
o After picking up both forks, the philosopher starts
eating. This is the actual critical task that requires
both forks.
4. Putting down Forks:
o After finishing eating, the philosopher releases the
forks by signaling the corresponding fork
semaphores (signal(fork[i]) and signal(fork[(i + 1) %
5])).
o This makes the forks available for other
philosophers.
Challenges Addressed by this Solution:
1. Deadlock Prevention:
o The mutex semaphore allows only one philosopher at a time to be in the fork-pickup phase, and that philosopher acquires both forks before releasing mutex. A circular chain of philosophers each holding one fork and waiting for the other therefore cannot form, so deadlock is prevented.
2. Starvation Prevention:
o The solution does not strictly guarantee freedom from starvation: an unlucky philosopher may repeatedly find a fork busy. However, because each philosopher releases both forks immediately after eating, waiting philosophers regularly get a chance to proceed; stronger fairness requires additional mechanisms (see the variations below).
3. Mutual Exclusion:
o The semaphores for each fork (fork[i]) ensure that
only one philosopher can use a fork at a time,
preventing multiple philosophers from eating using
the same fork simultaneously.
Key Points of the Solution:
 Mutual Exclusion: Each fork is used by only one philosopher at a time.
 Avoidance of Deadlock: Because fork pickup is serialized by mutex and a philosopher acquires both forks before any other philosopher can start picking up, the circular wait condition necessary for deadlock cannot arise.
 Reduced Starvation: Forks are put down as soon as a philosopher finishes eating, so blocked philosophers eventually proceed, although strict freedom from starvation is not formally guaranteed.
Improvements or Variations:
 Odd/Even (asymmetric) solution: To remove the need for a global pickup mutex, odd-numbered philosophers pick up their left fork first while even-numbered philosophers pick up their right fork first. Because not everyone reaches for the same side, a circular wait cannot form, so deadlock is prevented (a runnable sketch of this variation follows below).
 Priority or Timeout Mechanisms: Additional mechanisms such as timeouts or priority schemes can be used to improve fairness and prevent starvation, for example by giving philosophers a maximum time to wait before giving up and releasing any fork they hold.
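A minimal runnable sketch of the odd/even (asymmetric) variation described above, assuming POSIX semaphores and pthreads in C; the meal count, the usleep() stand-in for think(), and the printf stand-in for eat() are illustrative assumptions (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
static sem_t fork_sem[N];               /* one binary semaphore per fork */

static void *philosopher(void *arg) {
    int i = (int)(long)arg;
    int first  = (i % 2 == 0) ? (i + 1) % N : i;   /* even: right fork first */
    int second = (i % 2 == 0) ? i : (i + 1) % N;   /* odd: left fork first   */

    for (int meal = 0; meal < 3; meal++) {
        usleep(1000);                   /* think()                       */

        sem_wait(&fork_sem[first]);     /* pick up first fork            */
        sem_wait(&fork_sem[second]);    /* pick up second fork           */

        printf("philosopher %d is eating\n", i);   /* eat()             */

        sem_post(&fork_sem[second]);    /* put down both forks           */
        sem_post(&fork_sem[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Because adjacent philosophers reach for their shared fork in different orders, it is impossible for all five philosophers to each hold exactly one fork, so no deadlock cycle can form.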

2b) Explain in detail the necessary conditions for deadlock.
B) Deadlock in a computing system refers to a situation where
a set of processes are blocked because each process is
waiting for a resource held by another process in the set. For
deadlock to occur, all four of the following necessary
conditions must be true simultaneously. These are known as
the Coffman Conditions:
1. Mutual Exclusion
 Definition: At least one resource must be held in a non-
shareable mode. That is, only one process at a time can
use a resource, and if another process requests the
same resource, it must wait for the resource to be
released.
 Example: In the Dining Philosophers problem, each fork
can only be held by one philosopher at a time. If two
philosophers try to pick up the same fork
simultaneously, they must wait for it to be released by
the other philosopher.
 Relevance to Deadlock: If a resource can be shared
among multiple processes simultaneously, deadlock
cannot occur because the processes would not need to
wait for exclusive access to the resource. The mutual
exclusion condition ensures that processes need
exclusive access to resources, which sets up the
potential for deadlock.
2. Hold and Wait
 Definition: A process that is holding at least one
resource is waiting to acquire additional resources that
are currently being held by other processes.
 Example: Process A holds resource R1 and waits for
resource R2, which is held by process B. Process B holds
resource R2 and waits for resource R1, held by process
A.
 Relevance to Deadlock: The "hold and wait" condition
allows the possibility for circular waiting, which is a key
component of deadlock. Without this condition, a
process would either hold all resources it needs at once
or would not hold any resources while waiting.
3. No Preemption
 Definition: Once a process has been allocated a
resource, the resource cannot be forcibly taken away
from the process holding it. It must release the resource
voluntarily.
 Example: If process A holds resource R1 and process B
holds resource R2, the system cannot preempt (take
away) R1 from process A to give it to process B, or vice
versa, to allow the processes to continue. The resources
can only be released by the processes themselves.
 Relevance to Deadlock: The no preemption condition
means that processes are stuck holding resources until
they complete their tasks. If a process needs resources
held by other processes, and those processes are also
waiting for resources, no one can intervene to release
resources and allow progress. This condition is critical in
enabling deadlock because it prevents the system from
recovering by reallocating resources forcibly.
4. Circular Wait
 Definition: A set of processes exists such that each
process in the set is waiting for a resource that is held by
another process in the set. In other words, there is a
circular chain of processes where each process is waiting
for a resource held by the next process in the chain.
 Example: Process A waits for resource R1, which is held
by process B. Process B waits for resource R2, which is
held by process C. Process C waits for resource R3, which
is held by process A. This creates a circular wait, as each
process is waiting for a resource held by another process
in the chain.
 Relevance to Deadlock: Circular wait is the most critical
condition for deadlock. It represents the essence of
deadlock, where processes are waiting for each other in
a circular manner, leading to a situation where no
process can make progress.

Summary of Coffman Conditions:


1. Mutual Exclusion: Resources cannot be shared.
2. Hold and Wait: Processes holding resources can request
additional resources.
3. No Preemption: Resources cannot be forcibly taken
away from a process.
4. Circular Wait: A cycle of processes exists, where each
process is waiting for a resource held by the next.
For deadlock to occur in a system, all four conditions must be
present simultaneously. If any of these conditions is
eliminated, deadlock cannot occur. For instance, if resources
can be preempted (i.e., forcibly taken away), or if processes
cannot hold resources while waiting for others, then deadlock
is prevented.
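As a concrete illustration of how these conditions combine, the following sketch (assuming pthread mutexes in C; the lock names and the sleep used to widen the race window are illustrative) has two threads acquire two locks in opposite orders, so it will usually hang in a deadlock when run:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg) {
    pthread_mutex_lock(&lock_a);   /* hold A ...                        */
    sleep(1);                      /* widen the race window             */
    pthread_mutex_lock(&lock_b);   /* ... and wait for B                */
    puts("thread1 got both locks");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    pthread_mutex_lock(&lock_b);   /* hold B ...                        */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and wait for A: circular wait */
    puts("thread2 got both locks");
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);        /* with the opposite lock order above,  */
    pthread_join(t2, NULL);        /* this program will usually never exit */
    return 0;
}

All four conditions are present: the mutexes are non-shareable (mutual exclusion), each thread holds one lock while waiting for the other (hold and wait), a held mutex cannot be taken away (no preemption), and the two threads wait on each other in a cycle (circular wait).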
Strategies to Prevent Deadlock:
1. Eliminate Mutual Exclusion: Some resources can be
shared by multiple processes at the same time, reducing
the likelihood of deadlock. However, this is not always
feasible (e.g., printers or databases).
2. Eliminate Hold and Wait: This can be done by requiring
processes to request all resources they need at once,
preventing them from holding some resources while
waiting for others.
3. Allow Preemption: Resources can be preempted from
processes if needed, which can break the circular wait
condition and allow processes to continue.
4. Eliminate Circular Wait: One approach to breaking
circular wait is to impose an ordering of resource
requests. Processes must request resources in a fixed
order (e.g., R1, R2, R3), which prevents cycles from
forming.
4a) Discuss in detail about deadlock prevention.
A) Deadlock prevention is a set of strategies used to ensure
that deadlock will not occur in a system by eliminating one or
more of the four necessary conditions for deadlock. The goal
is to proactively design systems to avoid deadlock scenarios
by controlling how resources are allocated to processes.
As discussed earlier, deadlock requires the simultaneous
presence of four necessary conditions:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Deadlock prevention techniques aim to prevent at least one
of these conditions from being true, thereby avoiding the
possibility of deadlock.
1. Mutual Exclusion Prevention
 Condition: Mutual exclusion is the condition where
resources are non-shareable; only one process can use a
resource at any time. While this condition cannot be
completely eliminated for certain types of resources
(e.g., printers, disk drives), we can reduce the likelihood
of deadlock by making some resources shareable.
 Solution: In certain cases, resources can be shared by
multiple processes, eliminating the need for mutual
exclusion. For example, many read-only resources can be
shared by multiple processes simultaneously. However,
for exclusive resources like printers, mutual exclusion
cannot be completely eliminated, and hence, this
condition is typically managed in other ways.
 Conclusion: Although mutual exclusion cannot be
entirely avoided, shared resources can be managed to
reduce deadlock risk. It is more about limiting exclusive
access to resources that are truly necessary.
2. Hold and Wait Prevention
 Condition: The hold and wait condition occurs when a
process holding one resource is waiting to acquire
additional resources that are currently being held by
other processes. This leads to potential deadlock
situations where processes wait indefinitely for
resources.
 Solution: To prevent hold and wait, a system can
implement the following strategies:
o Request all resources at once: A process must request all the resources it will need before starting execution, so it never holds some resources while waiting for others (an all-or-nothing acquisition sketch is given at the end of this subsection).
o Block processes that hold resources: Alternatively,
the system could block processes that hold some
resources and are waiting for others. These
processes must release all their held resources
before requesting more.
 Disadvantages: While this strategy prevents deadlock, it
may reduce system efficiency since processes must wait
to acquire all resources upfront, potentially leading to
resource underutilization.
 Conclusion: This approach ensures that processes do
not hold resources while waiting for others, but it may
lead to inefficiencies in the system as processes have to
wait for all required resources, even if some are not
immediately needed.
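A minimal sketch of the all-or-nothing acquisition mentioned above, assuming pthread mutexes in C (the resource names res1/res2 and the retry loop are illustrative): a thread either obtains both resources or releases what it has and retries, so it never holds one resource while blocked on another.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both resources or none, so the hold-and-wait condition never holds. */
static void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&res1);                 /* take the first resource         */
        if (pthread_mutex_trylock(&res2) == 0)     /* try the second without blocking */
            return;                                /* got both: proceed               */
        pthread_mutex_unlock(&res1);               /* otherwise give the first back   */
        sched_yield();                             /* ... and retry a little later    */
    }
}

static void release_both(void) {
    pthread_mutex_unlock(&res2);
    pthread_mutex_unlock(&res1);
}

int main(void) {
    acquire_both();
    printf("holding both resources\n");
    release_both();
    return 0;
}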
3. No Preemption Prevention
 Condition: The no preemption condition occurs when a
process holding resources cannot be forcibly taken away.
For example, once a process acquires a resource, it is not
preempted until it finishes using it. This leads to
situations where processes cannot make progress if they
are waiting for resources held by other processes.
 Solution: To prevent deadlock through no preemption,
the system can introduce a preemption policy:
o Preempt resources from processes: If a process
holding some resources is waiting for others, and a
deadlock situation seems imminent, the system can
preempt some resources from the process, return
them to the pool, and allow other processes to use
them.
o Roll back and retry: A process holding some
resources can be rolled back to a safe state if it is
waiting for additional resources, allowing the
system to reallocate resources to avoid deadlock.
 Disadvantages: Preempting resources can be complex
because it may involve undoing the work done by the
process (e.g., rolling back transactions) and can lead to a
loss of progress. Additionally, preemption may not
always be feasible for some types of resources.
 Conclusion: Although this strategy can avoid deadlock
by allowing the system to take control of resource
allocation, it can add significant overhead and
complexity.
4. Circular Wait Prevention
 Condition: The circular wait condition occurs when a set
of processes exists such that each process is waiting for
a resource held by the next process in the set, forming a
cycle.
 Solution: To prevent circular wait, we need to break the
cycle of waiting. The system can implement the
following strategies:
o Impose an ordering of resource requests: One way
to prevent circular wait is by enforcing an ordering
of resource acquisition. For example, processes are
required to request resources in a predefined order
(e.g., always request resource R1 before resource
R2, etc.). If all processes follow the same order,
then a circular wait cannot form because there will
always be a clear progression of resource requests.
o Resource hierarchy: Resources can be assigned a numerical hierarchy, and processes must request resources in increasing order of that hierarchy. For instance, if resource R1 has a lower number than R2, a process must request R1 before R2, which ensures that no circular wait can occur (a sketch of this ordered acquisition is given at the end of this subsection).
 Disadvantages: Imposing an ordering or hierarchy can
sometimes limit flexibility in resource allocation. It may
also lead to increased waiting times for some processes,
especially if resources with higher priority are frequently
in use.
 Conclusion: Circular wait prevention strategies, such as
imposing a resource hierarchy or requesting resources in
a specific order, can effectively prevent deadlock by
ensuring that no cycles of waiting occur. However, these
strategies can reduce flexibility and efficiency in resource
allocation.
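A minimal sketch of the ordered-acquisition idea, assuming pthread mutexes in C (the rank assignment and lock names are illustrative): every thread locks the lower-ranked resource first, so two threads can never hold the locks in opposite orders and a circular wait cannot form.

#include <pthread.h>
#include <stdio.h>

/* Resource ordering: res1 has rank 1, res2 has rank 2.
 * Every thread must lock the lower-ranked resource first. */
static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;   /* rank 1 */
static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;   /* rank 2 */

static void *worker(void *arg) {
    pthread_mutex_lock(&res1);     /* always lower rank first ...   */
    pthread_mutex_lock(&res2);     /* ... then the higher rank      */
    printf("thread %ld holds R1 and R2\n", (long)arg);
    pthread_mutex_unlock(&res2);   /* release in reverse order      */
    pthread_mutex_unlock(&res1);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}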

Deadlock Prevention Summary:


To prevent deadlock, we aim to eliminate or control one or
more of the four necessary conditions:
1. Mutual Exclusion: Share resources when possible.
2. Hold and Wait: Require processes to request all required
resources upfront.
3. No Preemption: Allow resources to be preempted or
rolled back if necessary.
4. Circular Wait: Impose a resource ordering or hierarchy
to prevent cycles.
While deadlock prevention guarantees that deadlock will not
occur, it often comes at the cost of system efficiency and
flexibility. For example, processes may have to wait longer to
acquire resources, or resources may not be fully utilized.
The deadlock prevention strategies should be carefully
chosen based on the specific needs of the system and the
types of resources being managed. In practice, deadlock
prevention is usually applied selectively, balancing the risk of
deadlock with the need for efficient resource allocation.

4b) Describe the two methods for deadlock recovery.
B) Once deadlock has occurred in a system, recovery becomes
crucial to bring the system back to a state where processes
can continue execution. There are two primary methods for
deadlock recovery:
1. Process Termination
2. Resource Preemption
Each method has its own advantages, disadvantages, and
strategies for resolving deadlock. Let’s explore them in detail:

1. Process Termination
The process termination method involves aborting processes
involved in the deadlock, either by terminating one or more
of the processes completely or by rolling them back to a safe
state. This method is based on the idea that if processes
involved in a deadlock are terminated or rolled back, the
system can break the cycle and free up resources for the
remaining processes.
Subtypes of Process Termination
 Terminate all processes involved in the deadlock:
o This approach is the most drastic. All the processes
in the deadlock cycle are terminated. By
terminating these processes, the resources they
hold are released, and the deadlock cycle is broken.
o Advantages:
 Simple to implement and ensures that all
processes involved in deadlock are removed
from the cycle.
 No need to track the order of resource
allocation or waiting.
o Disadvantages:
 High overhead because all processes involved
in deadlock are lost.
 Potential for significant loss of work, especially
if processes have performed a lot of work and
are terminated midway.
 Terminate only one process at a time:
o This approach involves selectively terminating
processes that are part of the deadlock. The system
might consider the priority, cost, or progress of
each process and then decide which one to
terminate.
o Advantages:
 More controlled than terminating all
processes. Some processes may continue to
execute if they are not part of the deadlock.
 Less work is lost compared to terminating all
processes.
o Disadvantages:
 The system must carefully determine which
processes to terminate, which can be complex.
 The terminated processes will need to be
restarted or rolled back, potentially leading to
delays and inefficiencies.
Considerations:
 When processes are terminated, resources are freed,
and other processes can proceed. However, there might
be rollback mechanisms involved if the processes have
executed some actions that need to be undone (e.g.,
transactions).
 Process termination often causes wasted CPU time and
resource wastage.

2. Resource Preemption
Resource preemption involves taking resources away from
processes involved in the deadlock cycle and assigning them
to other processes in order to break the cycle. The
preempted processes may be restarted later when the
required resources become available.
How Resource Preemption Works
 Identify processes involved in deadlock:
o The system needs to determine which processes
are involved in the deadlock cycle. This could be
achieved using techniques like wait-for graphs or
resource allocation graphs.
 Preempt resources from processes:
o Once the deadlock processes are identified, the
system preempts (takes away) one or more
resources from a process and assigns it to another
process that is waiting for resources.
 Rollback or restart preempted processes:
o The preempted processes are either rolled back (if
they have made some progress that needs to be
undone) or suspended and later resumed once the
required resources are available.
Subtypes of Resource Preemption
 Preempt the smallest number of resources:
o Preempting only the minimum necessary resources
can help reduce the impact on the system while
breaking the deadlock. This can ensure that
deadlock is resolved with minimal disruption.
 Rollback processes:
o The system may also choose to rollback a process
that is holding a critical resource and is causing a
deadlock. This allows the system to restart that
process and attempt a different approach to
resource allocation.
Advantages of Resource Preemption:
 Less work lost: Since not all processes are terminated,
less work is lost compared to the process termination
approach. Only the preempted processes may need to
restart or roll back.
 Controlled resource allocation: Resources are freed for
use by other processes, allowing the system to recover
from deadlock without wasting all resources.
Disadvantages of Resource Preemption:
 Overhead in preemption: Preempting resources is
expensive in terms of system overhead. The system
must track the resources held by processes, decide
which resources to preempt, and manage the process
states.
 Rollback or re-execution overhead: If a process has
already executed some work, rolling back or restarting it
might result in overhead as the work must be undone
and redone.
 Potential starvation: There is a risk of starvation where
a process may continuously be preempted and not get
enough resources to complete its execution. This might
result in that process being indefinitely postponed.
Considerations for Resource Preemption:
 When preempting resources, it is important to ensure
that the resources are not held in such a way that
preemption could lead to inconsistent states (e.g., in
database systems). Special attention must be given to
the state of the system when preempting resources to
avoid inconsistencies.

Summary of Deadlock Recovery Methods:


Recovery Method: Process Termination
Description: Terminate the processes involved in the deadlock, or roll them back to a safe state, so that the resources they hold are released.
Advantages: Simple to apply; ensures that all resources held by the terminated processes are released.
Disadvantages: Loss of completed work; potential for significant overhead and system disruption.

Recovery Method: Resource Preemption
Description: Preempt resources from deadlocked processes and allocate them to others, possibly rolling back the preempted processes.
Advantages: Less work is lost; more controlled; unaffected processes may continue.
Disadvantages: High system overhead; potential for starvation; risk of inconsistent states.

5b) Discuss in detail about the Resource Allocation Graph, with and without deadlock, using a suitable example.
B) A Resource Allocation Graph (RAG) is a graphical
representation of the relationships between processes and
resources in a system. It helps in detecting and preventing
deadlock by visually showing how resources are allocated to
processes and how processes are requesting resources.
In a Resource Allocation Graph, there are two types of
entities:
1. Processes (P): Represented by circles or nodes.
2. Resources (R): Represented by squares or nodes.
The graph consists of two types of edges:
 Request edges: An edge from a process to a resource
indicates that the process is requesting the resource.
 Assignment edges: An edge from a resource to a process
indicates that the resource is assigned to that process.

Resource Allocation Graph with Deadlock


Deadlock in a Resource Allocation Graph occurs when there is
a cycle in the graph. This cycle indicates a set of processes
that are each waiting for a resource held by another process,
resulting in a deadlock condition.
Example of a Resource Allocation Graph with Deadlock:
Let’s consider a system with two processes (P1 and P2) and
two resources (R1 and R2).
1. Process P1 requests R1.
2. Process P2 requests R2.
3. Resource R1 is assigned to P1.
4. Resource R2 is assigned to P2.
5. P1 now requests R2, and P2 requests R1.
The Resource Allocation Graph would look like this:
 Request Edge from P1 to R2 (P1 is requesting R2).
 Request Edge from P2 to R1 (P2 is requesting R1).
 Assignment Edge from R1 to P1 (R1 is assigned to P1).
 Assignment Edge from R2 to P2 (R2 is assigned to P2).
Now, the graph would have a cycle:
 P1 → R2 → P2 → R1 → P1.
This cycle shows that P1 is waiting for R2, while P2 is waiting
for R1, and neither process can proceed. Therefore, the
system is in a deadlock.
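A small sketch in C that encodes the wait-for relationships from this example as an adjacency matrix and detects the cycle with a depth-first search (the matrix encoding and process numbering are illustrative assumptions):

#include <stdio.h>
#include <stdbool.h>

#define N 2   /* processes P1 and P2 */

/* wait_for[i][j] == true means process i waits for a resource held by j.
 * From the example: P1 waits for R2 (held by P2), P2 waits for R1 (held by P1). */
static bool wait_for[N][N] = {
    { false, true  },   /* P1 -> P2 */
    { true,  false }    /* P2 -> P1 */
};

static bool visited[N], on_stack[N];

/* Depth-first search: an edge back to a node on the current stack is a cycle. */
static bool dfs(int p) {
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < N; q++) {
        if (!wait_for[p][q]) continue;
        if (on_stack[q]) return true;            /* cycle found: deadlock */
        if (!visited[q] && dfs(q)) return true;
    }
    on_stack[p] = false;
    return false;
}

int main(void) {
    for (int p = 0; p < N; p++)
        if (!visited[p] && dfs(p)) {
            printf("Deadlock: cycle detected in the wait-for graph\n");
            return 0;
        }
    printf("No cycle: the system is deadlock-free\n");
    return 0;
}

Running it on this example reports a deadlock, because P1 → P2 → P1 forms a cycle in the wait-for graph.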

Resource Allocation Graph without Deadlock


A Resource Allocation Graph without deadlock would be free
of any cycles. The absence of a cycle indicates that there is no
circular waiting condition, and hence the system is deadlock-
free.
Example of a Resource Allocation Graph without Deadlock:
Let us consider the same two processes (P1 and P2) and two resources (R1 and R2), but with a different request pattern.
1. Resource R1 is assigned to P1.
2. Resource R2 is assigned to P2.
3. Process P1 requests R2, which is currently held by P2.
4. Process P2 is not requesting any resource; it is running and will eventually release R2.
The Resource Allocation Graph would look like this:
 Assignment Edge from R1 to P1 (R1 is assigned to P1).
 Assignment Edge from R2 to P2 (R2 is assigned to P2).
 Request Edge from P1 to R2 (P1 is requesting R2).
In this case, the graph has no cycle because there is no circular waiting:
 P1 → R2 → P2 is a chain, but P2 is not waiting for anything held by P1.
 When P2 finishes and releases R2, P1's request can be granted.
There is no cycle in this system, so it is deadlock-free.

Key Points to Note:


1. Deadlock Detection using the Resource Allocation Graph:
o Cycle in the graph: A cycle indicates a set of processes that are each waiting for a resource held by another process in the set. If every resource type has a single instance, a cycle means the system is deadlocked; with multiple instances per type, a cycle only indicates a possible deadlock.
o Resource Allocation Graph without a cycle: If there is no cycle, no deadlock is present and the system is functioning normally.
2. Handling Deadlock:
o Deadlock Occurrence: When a cycle is detected, it
suggests that deadlock has occurred, and action
should be taken to recover from the deadlock by
either terminating processes or preempting
resources.
o Deadlock Prevention: To prevent deadlock, the
system can ensure that one of the conditions
(mutual exclusion, hold and wait, no preemption,
circular wait) is not satisfied, possibly by controlling
the order of resource allocation or enforcing
restrictions on how processes request resources.

Summary of Resource Allocation Graph with and without Deadlock:

Graph Representation:
 With Deadlock: Contains a cycle between processes and resources.
 Without Deadlock: Contains no cycle; processes and resources do not form a waiting circle.
Process-Resource Relationships:
 With Deadlock: Processes wait for resources held by other processes, creating a circular wait.
 Without Deadlock: Processes do not form a circular waiting chain; no deadlock situation exists.
Example:
 With Deadlock: P1 → R2 → P2 → R1 → P1 (deadlock cycle).
 Without Deadlock: P1 → R2 → P2, with P2 not waiting for anything held by P1; no cycle, the system is safe.
Deadlock Present:
 With Deadlock: Yes, deadlock occurs due to circular waiting.
 Without Deadlock: No, there is no circular wait; the system is deadlock-free.
Action Needed:
 With Deadlock: Deadlock detection and recovery are required to resolve the situation.
 Without Deadlock: No action is needed; the system is functioning normally.
