Distributed System 2
conditions. It is the requirement that a process cannot enter its critical section while
another concurrent process is currently executing in its critical section, i.e., only
one process is allowed to execute the critical section at any given instant of time.
In file-sharing systems, many users may try to access or edit the same file at once. To avoid
conflicts and ensure data integrity, mutual exclusion ensures that only one user can write to a file
at a time. Distributed file-locking mechanisms or decision-making techniques are used to control
access.
In distributed resource scheduling, processes may compete for shared resources like memory,
CPU, or network bandwidth. Mutual exclusion ensures that only one task can use a resource at a
time, preventing resource conflicts. Techniques like distributed locks or resource allocation rules
help manage access.
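As a concrete sketch of the file-locking idea above, the snippet below uses an advisory `flock` lock so that only one process at a time can write to a shared file. This is a minimal illustration for Unix-like systems; the file path and helper name are assumptions, not from the original text.

```python
import fcntl

# Hypothetical shared file: only one writer may hold the lock at a time.
SHARED_FILE = "/tmp/shared_data.txt"

def write_exclusively(text):
    """Append to the shared file while holding an exclusive advisory lock."""
    with open(SHARED_FILE, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # blocks until no other process holds the lock
        try:
            f.write(text + "\n")
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # release so other writers can proceed

write_exclusively("update from this process")
```

Because the lock is advisory, every writer must cooperate by calling `flock` before writing; the kernel does not block writers that skip the lock.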
Deadlock
In distributed systems, deadlock is a major problem: the resources requested by a process are
unavailable because other processes are holding on to them. A distributed system contains a set
of processes p1, p2, p3, …, pn that do not share a common memory, and communication is made
only by passing messages through the network. There is no global clock, and the communication
medium introduces delays. Each process is in one of two states: a running state, in which the
process holds all the resources it needs and is ready for execution, and a blocked state, in
which the process is waiting for some resources.
Below are the four conditions that need to be met for a deadlock to occur −
Hold and wait − a process holds some resources while waiting for additional resources held by other processes.
Mutual exclusion − only one process is allowed to use a resource at a time.
No preemption − a process cannot be preempted until it completes the execution of its task.
Circular wait − the processes wait for the required resources in a cyclic manner, where
the last process in the chain waits for a resource held by the first process.
Resource Deadlock
Resource deadlock occurs when a process is waiting for a set of resources held by other
processes, and it must receive all the requested resources before it becomes unblocked.
A set of processes waiting for resources in this way is said to be in a resource deadlock state.
Consider an example with two processes, P1 and P2, which need resources X and Y. P1 holds
resource X and waits for resource Y, while P2 holds resource Y and waits for resource X. So, in a
closed manner, P1 needs resource Y and waits for P2 to release it, and P2 needs X and waits for
P1 to release it.
The scenario above is a resource deadlock, since each process waits for another
process before it can acquire all the resources it needs. Distributed deadlocks are more
difficult to handle because the resources and processes are distributed across sites and the
deadlock cannot be detected at a common place. A few approaches are used to handle distributed
deadlocks, such as detection, prevention, avoidance, and the ostrich approach (ignoring the problem).
Communication deadlock
Communication deadlock happens among processes that need to communicate and wait for
one another to proceed. A process in the waiting state can unblock itself only when it
receives a communication request from another process in the group. When each process in the
set waits for another process to communicate, no process can start further
communication until it receives a message from another process.
Consider a scenario where process X waits for a message from process Y, and process Y in turn
waits for a message from another process Z. Process Z, in turn, waits for the initial process X.
This deadlock occurs when communication among the processes is blocked by each other.
The table below defines the major differences between resource and communication deadlocks:
Definition − Resource deadlock: the process waits for several resources that are held by other processes. Communication deadlock: the process waits for another process in the group to initiate communication with it.
Process state − Resource deadlock: a process waiting for needed resources cannot continue to its execution state until it has acquired all of them. Communication deadlock: the process does not enter an execution state until it receives a communication message from another process.
Waiting for − Resource deadlock: the process waits for resources to perform its task. Communication deadlock: the process waits for messages from another process.
Examples − Resource deadlock: process P1 holds resource X and waits for resource Y, while P2 holds resource Y and waits for resource X. Communication deadlock: process X waits for a message from process Y, which in turn waits for process Z, and Z waits for the initial process X.
When two or more processes trying to access the critical section get stuck waiting for
each other indefinitely, so that none of them can proceed, this condition is known as
Deadlock.
Deadlock prevention and avoidance are strategies used in computer systems to
ensure that different processes can run smoothly without getting stuck waiting
for each other forever. Think of it like a traffic system where cars (processes)
need to move through intersections (resources) without getting into a gridlock.
In this article, we are going to discuss deadlock prevention and avoidance
strategies in detail.
Deadlock Characteristics
The deadlock has the following characteristics:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
Deadlock Prevention
We can prevent a Deadlock by eliminating any of the above four conditions.
Eliminate Mutual Exclusion: It is not possible to eliminate mutual exclusion in general,
because some resources, such as the tape drive and printer, are
inherently non-shareable.
Eliminate Hold and Wait: Allocate all required resources to the process before
the start of its execution; this eliminates the hold-and-wait condition, but it
leads to low device utilization. For example, if a process requires a printer only at a
later time and we allocate the printer before the start of its execution, the
printer will remain blocked until the process has completed its execution. Alternatively,
a process may make a new request for resources only after releasing its current set of
resources. This solution may lead to starvation.
Eliminate No Preemption: Preempt resources from a process when those
resources are required by other, higher-priority processes.
Eliminate Circular Wait: Each resource is assigned a numerical number, and
a process can request resources only in increasing order of
numbering. For example, if process P1 has been allocated resource R5, a subsequent
request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only
a request for resources numbered higher than R5 will be granted.
Detection and Recovery: Another approach to dealing with deadlocks is to
detect and recover from them when they occur. This can involve killing one or
more of the processes involved in the deadlock or releasing some of the
resources they hold.
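The circular-wait elimination described above can be sketched as follows: resources carry numbers, and every process acquires the locks it needs in ascending numerical order, so no cycle of waits can form. The resource numbering and helper names here are invented for the illustration.

```python
import threading

# Numbered resources; all processes must acquire locks in increasing order.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire the numbered resources in ascending order, breaking circular wait."""
    for rid in sorted(needed):
        resources[rid].acquire()

def release_all(needed):
    """Release in descending order (the reverse of acquisition)."""
    for rid in sorted(needed, reverse=True):
        resources[rid].release()

acquire_in_order({3, 1})   # actually taken as 1, then 3
release_all({3, 1})
```

Because every process that needs both resource 1 and resource 3 takes 1 first, no process can hold 3 while waiting for 1, so the circular-wait condition can never arise.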
Centralized Deadlock Detection
In the centralized approach of deadlock detection, two techniques are used, namely the
completely centralized algorithm and the Ho-Ramamoorthy algorithms (one-phase and two-
phase).
Completely Centralized Algorithm –
In a network of n sites, one site is chosen as the control site. This site is responsible for
deadlock detection and has control over all resources of the system. If a site requires a
resource, it requests the control site; the control site allocates and de-allocates
resources and maintains a wait-for graph. At regular intervals of time, it checks
the wait-for graph to detect a cycle. If a cycle exists, it declares the system
deadlocked; otherwise the system continues working. The major drawbacks of this
technique are as follows:
1. A site has to send requests even for using its own resources.
2. There is a possibility of phantom deadlock.
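The control site's periodic cycle check on the wait-for graph can be sketched as a depth-first search. The adjacency-set representation below is an assumption chosen for the illustration.

```python
# Wait-for graph at the control site: an edge P -> Q means "P waits for Q".
def find_cycle(wfg):
    """Depth-first search for a cycle in the wait-for graph (dict of adjacency sets)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wfg}

    def dfs(p):
        color[p] = GRAY
        for q in wfg.get(p, ()):
            if color.get(q, WHITE) == GRAY:
                return True            # back edge -> cycle -> deadlock
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wfg)

wfg = {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}   # circular wait
print(find_cycle(wfg))   # True -> the control site declares a deadlock
```

In the real algorithm the graph is built from the requests and releases that sites report to the control site; delayed messages are what make phantom deadlocks possible.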
Ho-Ramamoorthy (Two-Phase Algorithm) –
In this technique a resource status table is maintained by the central or control site. If a
cycle is detected, the system is not declared deadlocked at first, because in a distributed
system some resources may be vacated or freed by sites at any instant of time. The cycle
is checked again after an interval, and only if it is detected a second time is the
system declared deadlocked. This technique reduces the possibility of phantom
deadlock, but on the other hand it consumes more time.
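A minimal sketch of the two-phase idea, under the simplifying assumption that each process waits for at most one other process: a cycle is only declared a deadlock if the same cycle appears in two successive readings of the status table.

```python
import time

def detect_cycle(status_table):
    """Return the set of processes on a cycle in the wait-for relation, else empty.
    status_table maps each blocked process to the single process it waits for."""
    on_cycle = set()
    for start in status_table:
        seen, p = [], start
        while p in status_table and p not in seen:
            seen.append(p)
            p = status_table[p]
        if p == start:                 # walk returned to where it began: a cycle
            on_cycle.update(seen)
    return on_cycle

def two_phase_detect(read_status, delay=0.1):
    """Declare deadlock only if the same cycle shows up in two successive checks."""
    first = detect_cycle(read_status())
    if not first:
        return False
    time.sleep(delay)                  # let transient (phantom) waits clear
    second = detect_cycle(read_status())
    return first == second and bool(second)

# A persistent circular wait: P1 -> P2 -> P1
table = {"P1": "P2", "P2": "P1"}
print(two_phase_detect(lambda: table))   # True: the same cycle is seen in both phases
```

A transient wait that disappears between the two readings would make the second check come back clean, which is how the algorithm filters out phantom deadlocks.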
Advantages:
Centralized deadlock detection techniques are easy to implement as they require only
one site to be responsible for deadlock detection.
These techniques can efficiently detect deadlocks in large and complex distributed
systems.
They can prevent the wastage of resources in a system due to deadlocks.
Disadvantages:
Centralized deadlock detection techniques can lead to a single point of failure as the
control site can become a bottleneck in the system.
These techniques can cause high network traffic as all the requests and responses are
sent to the control site.
Centralized deadlock detection techniques are not suitable for systems where
resources are widely distributed.
Path pushing algorithms in distributed systems are a class of algorithms designed to propagate
information across a network efficiently. These algorithms rely on the idea of "pushing"
information along paths from a source to other nodes in the network. They are commonly used in
scenarios like distributed graph algorithms, routing, and information dissemination. Here's a
detailed explanation:
Core Concept
Path pushing algorithms are built around the notion that each node in a distributed system can:
o Receive information from its neighbors.
o Update its local state based on the received information.
o Push the updated information onward to its own neighbors.
The propagation continues until all nodes in the relevant subset of the network have the required
information or a termination condition is met.
Key Features
1. Routing Protocols:
o Used in network routing, where nodes update and share routing tables until
convergence (e.g., Distance Vector Routing).
2. Consensus Algorithms:
o Algorithms like Paxos or Raft use path pushing to propagate agreement proposals
among nodes.
How It Works
1. Initialization:
o A source node begins with a piece of information (e.g., distance, state, message).
o It pushes this information to its immediate neighbors.
2. Propagation:
o Each receiving node processes the information and updates its local state.
o If the new information is relevant or leads to an improvement (e.g., shorter path, better
route), it is forwarded to its neighbors.
3. Termination:
o The process ends when no more updates are propagated, achieving convergence.
Challenges
1. Scalability:
o Large networks may lead to excessive message passing, increasing overhead.
2. Staleness:
o Nodes might act on outdated information if updates are not properly synchronized.
3. Fault Tolerance:
o Handling failures (e.g., node or link failures) is critical to ensure the correctness of the
algorithm.
4. Convergence Time:
o Some algorithms may take a long time to converge, especially in asynchronous systems.
Example: Distributed Shortest-Path Computation
Goal: Compute the shortest paths from a source node to all other nodes.
Process:
1. Source node initializes its distance to 0 and all others to infinity.
2. It sends distance updates to its neighbors.
3. Neighbors update their distances if the received distance is shorter than their current
estimate.
4. They forward the updated distances to their neighbors.
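The steps above amount to a push-based, distance-vector-style shortest-path computation. The sketch below simulates the message passing sequentially in a single process; the graph, weights, and function name are assumptions made for the illustration.

```python
from collections import deque
import math

# Weighted graph as an adjacency dict: node -> {neighbor: edge_cost}
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},
}

def push_shortest_paths(graph, source):
    """Each node pushes an improved distance estimate to its neighbors."""
    dist = {node: math.inf for node in graph}
    dist[source] = 0
    pending = deque([source])          # nodes that still have updates to push
    while pending:                     # terminates when no improvement propagates
        u = pending.popleft()
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:  # shorter path found: update and push onward
                dist[v] = dist[u] + w
                pending.append(v)
    return dist

print(push_shortest_paths(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Convergence here is the moment the pending queue drains: no node has an update left to push, matching the termination condition described above.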
The Edge Chasing Algorithm is a method used in distributed systems to detect deadlocks by
analyzing a Wait-For Graph (WFG).
1. Basic Idea:
Each process sends a probe message along the edges of the WFG to check if a cycle
exists. If the probe returns to the originator, a deadlock is detected.
2. Steps:
o A process (P) that suspects a deadlock initiates the algorithm by sending a probe.
o The probe contains:
The initiator's ID.
The current process's ID.
The destination process's ID.
o Each process receiving the probe:
Forwards it to the next process it is waiting for, according to the WFG.
If the probe returns to the initiator, it confirms a deadlock.
o If no probe returns, there is no deadlock.
3. Key Points:
o Simple and effective for distributed systems.
o Can be used dynamically, as the WFG changes.
o Relies on detecting cycles in the dependency graph.
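The probe mechanism above can be simulated in one process as follows, with each probe represented as an (initiator, sender, receiver) tuple as described. The wait-for graph layout and function name are assumptions for the sketch.

```python
# Wait-for graph: process -> set of processes it is waiting for.
def edge_chasing_detect(wfg, initiator):
    """Send probes (initiator, sender, receiver) along WFG edges;
    a deadlock is detected if a probe arrives back at the initiator."""
    sent = set()                                   # avoid re-sending identical probes
    queue = [(initiator, initiator, dep) for dep in wfg.get(initiator, ())]
    while queue:
        probe = queue.pop()
        if probe in sent:
            continue
        sent.add(probe)
        origin, sender, receiver = probe
        if receiver == initiator:
            return True                            # probe returned -> cycle -> deadlock
        for dep in wfg.get(receiver, ()):          # forward the probe along the WFG
            queue.append((origin, receiver, dep))
    return False

wfg = {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}
print(edge_chasing_detect(wfg, "P1"))   # True: the probe came back to P1
```

Deduplicating probes in `sent` keeps the simulation finite even when the graph contains multiple overlapping cycles.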
The Path Pushing Algorithm is another method for detecting deadlocks in distributed systems.
It works by exchanging information about dependencies between processes.
1. Basic Idea:
Instead of sending probes, processes exchange dependency information (paths) to detect
cycles in the WFG.
2. Steps:
o Each process maintains a record of its dependencies (a path list).
o When a process P requests a resource held by another process Q:
P sends its current dependency path to Q.
Q merges this path with its own dependencies and checks for cycles.
o If a cycle is found in the combined path, a deadlock is detected.
o If no cycle is found, the dependency path is forwarded further.
3. Key Points:
o More detailed than edge chasing since it tracks entire dependency paths.
o Requires more communication and storage overhead compared to edge chasing.
o Useful for detecting deadlocks proactively by sharing global dependency information.
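The path-merging steps above can be sketched as a single-process simulation: each process pushes its dependency paths to the processes it waits for, and a deadlock is declared when a path loops back to its own head. The data layout and function name are assumptions for the illustration.

```python
def path_pushing_detect(wfg):
    """wfg maps each process to the set of processes it waits for.
    Processes push their dependency paths downstream until a cycle appears."""
    paths = {p: [[p]] for p in wfg}                # start: one single-node path each
    changed = True
    while changed:                                 # repeat until no new paths appear
        changed = False
        for p, deps in wfg.items():
            for q in deps:
                for path in list(paths[p]):        # P sends its known paths to Q
                    if q == path[0]:
                        return True                # path returned to its head: cycle
                    merged = path + [q]            # Q merges the path with itself
                    if merged not in paths.get(q, []):
                        paths.setdefault(q, []).append(merged)
                        changed = True
    return False

print(path_pushing_detect({"P1": {"P2"}, "P2": {"P1"}}))   # True: P1 -> P2 -> P1
```

Compared with the edge-chasing probe, each message here carries a whole path rather than a fixed-size tuple, which is exactly the extra communication and storage overhead noted above.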
Comparison:
Feature Edge Chasing Path Pushing