Distributed System UNIT - III
PART -A
1.What are the different models of deadlocks?
Deadlock models are classified by the kind of resource requests a blocked process can make:
• Single-resource model: a process can have at most one outstanding request at a time.
• AND model: a request is satisfied only after all of the requested resources are granted.
• OR model: a request is satisfied if any one of the requested resources is granted.
• AND-OR (p-out-of-q) model: allows a combination of AND and OR requests (a p-out-of-q request asks for any p resources out of a pool of q).
• Unrestricted model: no assumption is made about the structure of resource requests; only the stability of the deadlock is assumed.
2.What is the purpose of the wait-for-graph (WFG)? Give an example for WFG?
In the blocked state, a process is waiting to acquire some resource. The state of the system can be
modeled by a directed graph, called a wait-for graph (WFG).
In a WFG, nodes are processes and there is a directed edge from node P1 to node P2 if P1 is
blocked and is waiting for P2 to release some resource.
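As an illustration only (the process names and the graph below are assumed for the example, not taken from these notes), a WFG can be stored as an adjacency list, and a deadlock then shows up as a cycle:

```python
# Minimal sketch: a wait-for graph (WFG) as an adjacency list.
# An edge P1 -> P2 means "P1 is blocked waiting for P2".
# A cycle in this graph indicates a deadlock (for single-resource/AND requests).

def has_cycle(wfg):
    """Detect a cycle in the WFG using depth-first search."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wfg.get(node, []):
            if nxt in on_stack:          # back edge -> cycle -> deadlock
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wfg if n not in visited)

# Hypothetical example: P1 waits for P2, P2 waits for P3, P3 waits for P1.
wfg = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(has_cycle(wfg))   # True -> the three processes are deadlocked
```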
3.What is the purpose of associating timestamp with events in Lamport’s algorithm?
Lamport timestamps can be used to create a total ordering of events in a distributed system: events are
ordered by their timestamps, and ties between equal timestamps are broken by an arbitrary but fixed rule,
typically by comparing process identifiers.
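A minimal sketch of this idea, assuming the usual Lamport clock update rules and using the process id as the tie-breaker (the class name and the example events are illustrative, not from the notes):

```python
# Sketch of Lamport logical clocks with a (timestamp, process_id) total order.
# The process id breaks ties between events with equal timestamps.

class LamportClock:
    def __init__(self, pid):
        self.pid = pid
        self.time = 0

    def tick(self):                      # local event or message send
        self.time += 1
        return (self.time, self.pid)

    def receive(self, msg_time):         # on message receipt
        self.time = max(self.time, msg_time) + 1
        return (self.time, self.pid)

a, b = LamportClock(1), LamportClock(2)
e1 = a.tick()            # (1, 1)
e2 = b.receive(e1[0])    # (2, 2)
e3 = b.tick()            # (3, 2)
# Sorting (time, pid) pairs yields a total order consistent with causality.
print(sorted([e3, e1, e2]))   # [(1, 1), (2, 2), (3, 2)]
```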
4.Define deadlock?
Deadlock is a state of a system in which two or more transactions (or processes) are each waiting for a
data item that is locked by some other member of the set, so that none of them can proceed. A deadlock is
indicated by a cycle in the wait-for graph.
5.Name the two types of messages used by Ricart-agrawala algorithm ?
The Ricart-Agrawala algorithm assumes the communication channels are FIFO. The algorithm uses
two types of messages: REQUEST and REPLY.
6.What are the conditions for deadlock?
The four necessary conditions for a deadlock situation are
• Mutual exclusion
• No preemption
• Hold and wait
• Circular wait
7. What is meant by Deadlock prevention?
It is achieved by either having a process acquire all the needed resources simultaneously before it
begins execution or by pre-empting a process that holds the needed resource.
8. Write about deadlock avoidance?
In the deadlock avoidance approach to distributed systems, a resource is granted to a process only if the
resulting global system state is safe, i.e., there is at least one order in which all processes can run to
completion. (Deadlock detection, in contrast, requires an examination of the status of the process-resource
interactions for the presence of a deadlock condition.)
9. Define mutual exclusion?
Mutual exclusion in a distributed system states that only one process is allowed to execute the critical
section (CS) at any given time.
10. Define Message complexity?
This is the number of messages that are required per CS execution by a site.
11. What is Synchronization delay?
This is the time required, after a site leaves the CS, before the next site enters the CS.
12. Give the Performance of Lamport's algorithm?
Synchronization delay is equal to maximum message transmission time. It requires 3(N-1) messages
per CS execution. Algorithm can be optimized to 2(N-1) messages by omitting the REPLY message in
some situations.
13.Define Response time?
This is the time interval a request waits for its CS execution to be over after its request messages
have been sent out. Thus, response time does not include the time a request waits at a site before its
request messages have been sent out.
14.Define System throughput?
This is the rate at which the system executes requests for the CS. If SD is the synchronization delay
and E is the average critical section execution time, then
System throughput = 1/(SD+E).
For example, if SD = 2 ms and E = 8 ms, the system can execute at most 1/(0.002 + 0.008) = 100 CS requests per second.
15.What is the Message Complexity of Lamport's algorithm?
Lamport's Algorithm requires invocation of 3(N-1) messages per critical section execution.
These 3(N-1) messages involve
• (N-1) request messages
• (N-1) reply messages
• (N-1) release messages
16. List the drawbacks of Lamport's Algorithm?
Unreliable approach: failure of any one of the processes will halt the progress of the entire system.
High message complexity: Algorithm requires 3(N-1) messages per critical section invocation.
17.Give the Performance of Lamport's algorithm?
Synchronization delay is equal to maximum message transmission time. It requires 3(N-1) messages
per CS execution. Algorithm can be optimized to 2(N-1) messages by omitting the REPLY message in
some situations.
18.What is the use of Ricart-Agrawala algorithm?
It is an algorithm for mutual exclusion in a distributed system proposed by Glenn Ricart and Ashok
Agrawala. This algorithm is an extension and optimization of Lamport's distributed mutual exclusion
algorithm. It follows a permission-based approach to ensure mutual exclusion.
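The decision rule can be sketched as follows; this is only an outline under assumptions (a single process per site and an external send(dest, msg) transport helper that is not defined here), not an official implementation:

```python
# Sketch of the Ricart-Agrawala decision rule at one site (illustrative only).
# send(dest, msg) is an assumed transport helper, not defined here.

class RicartAgrawalaSite:
    def __init__(self, site_id, all_sites, send):
        self.id = site_id
        self.others = [s for s in all_sites if s != site_id]
        self.send = send
        self.clock = 0
        self.requesting = False
        self.my_ts = None
        self.replies = 0
        self.deferred = []          # sites whose REPLY we postpone

    def request_cs(self):
        self.clock += 1
        self.requesting, self.my_ts, self.replies = True, (self.clock, self.id), 0
        for s in self.others:
            self.send(s, ("REQUEST", self.my_ts))

    def on_request(self, sender, ts):
        self.clock = max(self.clock, ts[0]) + 1
        # Reply immediately unless we are requesting with higher priority
        # (the smaller (timestamp, id) pair wins).
        if self.requesting and self.my_ts < ts:
            self.deferred.append(sender)
        else:
            self.send(sender, ("REPLY", None))

    def on_reply(self):
        self.replies += 1
        if self.replies == len(self.others):
            self.enter_cs()

    def release_cs(self):
        self.requesting = False
        for s in self.deferred:     # deferred REPLYs double as release notices
            self.send(s, ("REPLY", None))
        self.deferred = []

    def enter_cs(self):
        pass                        # critical section body goes here
```

Note how the deferred REPLY sent on exit replaces a separate RELEASE message, which is why only 2(N-1) messages are needed per CS execution.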
19.Give the Drawbacks and performance of Ricart-Agrawala algorithm?
Drawbacks:
Unreliable approach: failure of any one node in the system can halt the progress of the whole system. In
this situation, a requesting process may starve forever. The problem of node failure can be solved by
detecting the failure after some timeout.
Performance:
Synchronization delay is equal to the maximum message transmission time. It requires 2(N-1) messages per
critical section execution.
20.What is Maekawa's Algorithm?
It is a quorum-based approach to ensure mutual exclusion in distributed systems. In permission-based
algorithms like Lamport's Algorithm and the Ricart-Agrawala Algorithm, a site requests permission from
every other site, but in the quorum-based approach, a site does not request permission from every other
site, only from a subset of sites, which is called its quorum.
21.What is the Message Complexity of Maekawa's Algorithm?
It requires invocation of 3√N messages per critical section execution, since the size of a request set is
√N. These 3√N messages involve
• √N request messages
• √N reply messages
• √N release messages
22.List the drawbacks of Maekawa's Algorithm?
This algorithm is deadlock prone because a site is exclusively locked by other sites and requests are
not prioritized by their timestamp.
23.Give the Performance of Maekawa's Algorithm?
Synchronization delay is equal to twice the message propagation delay time. It requires 3√N
messages per critical section execution.
24.What is Suzuki-Kasami algorithm?
It is a token-based algorithm for achieving mutual exclusion in distributed systems. It is a
modification of the Ricart-Agrawala algorithm, a permission-based (non-token-based) algorithm which
uses REQUEST and REPLY messages to ensure mutual exclusion.
25.Give the Message Complexity of Suzuki-Kasami algorithm?
The algorithm requires 0 messages if the site already holds the idle token at the time of its
critical section request, and a maximum of N messages per critical section execution otherwise.
These N messages involve
• (N-1) REQUEST messages broadcast to all other sites
• 1 message to transfer the token to the requesting site
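A rough sketch of the token logic behind these counts, under assumptions (single process per site, an external send(dest, msg) transport helper, and a token represented as a small dictionary; all names are illustrative):

```python
# Compact sketch of Suzuki-Kasami (token-based) mutual exclusion at one site.
# RN[j]  : highest request number this site has seen from site j.
# token  : {"LN": [...], "Q": [...]}  -- LN[j] is the request number of
#          site j's most recently granted request; Q queues waiting sites.
# send(dest, msg) is an assumed transport helper, not defined here.

class SuzukiKasamiSite:
    def __init__(self, site_id, n, send, has_token=False):
        self.id, self.n, self.send = site_id, n, send
        self.RN = [0] * n
        self.token = {"LN": [0] * n, "Q": []} if has_token else None
        self.in_cs = False

    def request_cs(self):
        if self.token is not None:          # idle token already held: 0 messages
            self.in_cs = True
            return
        self.RN[self.id] += 1
        for j in range(self.n):             # broadcast REQUEST: (N-1) messages
            if j != self.id:
                self.send(j, ("REQUEST", self.id, self.RN[self.id]))

    def on_request(self, j, rn):
        self.RN[j] = max(self.RN[j], rn)
        # Pass the idle token if site j has an outstanding (ungranted) request.
        if self.token is not None and not self.in_cs and self.RN[j] == self.token["LN"][j] + 1:
            tok, self.token = self.token, None
            self.send(j, ("TOKEN", tok))    # the 1 token-transfer message

    def on_token(self, tok):
        self.token, self.in_cs = tok, True

    def release_cs(self):
        self.in_cs = False
        self.token["LN"][self.id] = self.RN[self.id]
        for j in range(self.n):             # enqueue newly outstanding requests
            if j != self.id and self.RN[j] == self.token["LN"][j] + 1 and j not in self.token["Q"]:
                self.token["Q"].append(j)
        if self.token["Q"]:
            j = self.token["Q"].pop(0)
            tok, self.token = self.token, None
            self.send(j, ("TOKEN", tok))
```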
PART-B
1.Explain Maekawa’s algorithm for mutual exclusion in distributed system and its drawbacks?
MAEKAWA‘s ALGORITHM
• Maekawa's Algorithm is a quorum-based approach to ensure mutual exclusion in distributed
systems.
• In permission-based algorithms like Lamport's Algorithm and the Ricart-Agrawala Algorithm, a site
requests permission from every other site, but in the quorum-based approach, a site requests
permission only from a subset of sites, which is called its quorum.
• Three types of messages (REQUEST, REPLY and RELEASE) are used.
• A site sends a REQUEST message to all other sites in its request set (quorum) to get their
permission to enter the critical section.
• A site sends a REPLY message to a requesting site to give its permission to enter the critical section.
• A site sends a RELEASE message to all other sites in its request set (quorum) upon exiting the
critical section.
The following conditions hold for the request sets (quorums) Ri in Maekawa's algorithm:
• M1: Ri ∩ Rj ≠ ∅ for any two sites Si and Sj (any two quorums share at least one site).
• M2: Si ∈ Ri (a site is contained in its own request set).
• M3: |Ri| = K for every site Si (all quorums have the same size).
• M4: every site is contained in exactly K request sets.
Maekawa used the theory of projective planes and showed that N = K(K - 1) + 1. This relation gives
|Ri| = √N (approximately).
Drawback: the algorithm is deadlock prone, because a site can be exclusively locked by other sites and
requests are not prioritized by their timestamps.
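To make the relation N = K(K - 1) + 1 concrete, here is a minimal sketch; the specific quorums for N = 7 (built from the Fano plane) are an illustrative assumption, not part of these notes, and the check simply verifies the pairwise-intersection property the algorithm relies on:

```python
# Illustrative quorums for N = 7 sites: K = 3, since K*(K-1) + 1 = 7.
# The quorums below are the lines of the Fano plane (an assumed example).
from itertools import combinations

quorums = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

# M3: every quorum has size K = 3, roughly sqrt(7).
assert all(len(q) == 3 for q in quorums)
# M1: any two quorums share at least one common site, which is what lets
# that common site arbitrate between the two requesting sites.
assert all(q1 & q2 for q1, q2 in combinations(quorums, 2))
print("every pair of quorums intersects")
```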
2.Discuss with a suitable example to show that a deadlock cannot occur if any one of the four
conditions is absent? (or) How can we achieve deadlock detection in a distributed system? Provide the
various models used to carry out the same?
Deadlock can, in practice, neither be prevented nor avoided in a distributed system, because the system
is so vast that doing so is infeasible. Therefore, only deadlock detection can be implemented. The
techniques of deadlock detection in the distributed system require the following:
• Progress: The method should be able to detect all the deadlocks in the system.
• Safety: The method should not report false or phantom deadlocks.
There are three approaches to detect deadlocks in distributed systems:
• Centralized approach: a single designated control site maintains the global wait-for graph and searches
it for cycles. It is simple to implement, but the control site is a single point of failure and a
communication bottleneck.
• Distributed approach: all sites share the responsibility of deadlock detection equally, and deadlocks are
detected co-operatively, for example by circulating probe messages along the edges of the WFG.
• Hierarchical approach: sites are arranged in a hierarchy, and a site detects only those deadlocks that
involve the sites below it in the hierarchy.
3.Explain Lamport's algorithm for mutual exclusion in a distributed system and prove its correctness?
LAMPORT'S ALGORITHM
• Every site Si keeps a request queue ordered by the timestamps of the requests.
• Requesting the CS: Si broadcasts a timestamped REQUEST message to all other sites and places the
request on its own request queue. On receiving the REQUEST, a site Sj returns a timestamped REPLY
and places Si's request on its own request queue.
• Executing the CS: Si enters the CS when the following two conditions hold:
L1: Si has received a message with a timestamp larger than its own request's timestamp from every
other site.
L2: Si's request is at the top of its own request queue.
• Releasing the CS: on exiting the CS, Si removes its request from its queue and broadcasts a
timestamped RELEASE message to all other sites; a site receiving the RELEASE removes Si's request
from its queue.
Correctness:
Theorem:
Lamport’s algorithm achieves mutual exclusion.
Proof:
Proof is by contradiction.
▪ Suppose two sites Si and Sj are executing the CS concurrently. For this to happen, conditions L1
and L2 must hold at both sites concurrently.
▪ This implies that at some instant in time, say t, both Si and Sj have their own requests at the top
of their request queues and condition L1 holds at them. Without loss of generality, assume that Si's
request has a smaller timestamp than the request of Sj.
▪ From condition L1 and the FIFO property of the communication channels, it is clear that at instant t
the request of Si must be present in the request queue of Sj when Sj was executing its CS. This implies
that Sj's own request is at the top of its own request queue when a smaller-timestamp request, Si's
request, is present in the request queue of Sj – a contradiction!
Theorem:
Lamport’s algorithm is fair.
Proof:
The proof is by contradiction.
▪ Suppose a site Si's request has a smaller timestamp than the request of another site Sj, and Sj is
able to execute the CS before Si.
▪ For Sj to execute the CS, it has to satisfy conditions L1 and L2. This implies that at some
instant in time, say t, Sj has its own request at the top of its queue and it has also received a
message with a timestamp larger than the timestamp of its request from all other sites.
▪ But the request queue at a site is ordered by timestamp, and according to our assumption Si has
the lower timestamp. So Si's request must be placed ahead of Sj's request in the request queue of Sj.
This is a contradiction!
Message Complexity:
Lamport’s Algorithm requires invocation of 3(N – 1) messages per critical section execution.
These 3(N – 1) messages involve
• (N – 1) request messages
• (N – 1) reply messages
• (N – 1) release messages
Drawbacks of Lamport’s Algorithm:
• Unreliable approach: failure of any one of the processes will halt the progress of the entire system.
• High message complexity: Algorithm requires 3(N-1) messages per critical section invocation.
Performance:
Synchronization delay is equal to maximum message transmission time. It requires 3(N – 1)
messages per CS execution. Algorithm can be optimized to 2(N – 1) messages by omitting the
REPLY message in some situations.
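A rough single-site sketch of the steps described above (REQUEST/REPLY/RELEASE over a timestamp-ordered queue); the class name and the send(dest, msg) transport helper are assumptions made for illustration, and FIFO channels are assumed:

```python
# Sketch of Lamport's mutual exclusion at one site (illustrative only).
# send(dest, msg) is an assumed transport helper, not defined here.

class LamportSite:
    def __init__(self, site_id, all_sites, send):
        self.id, self.send = site_id, send
        self.others = [s for s in all_sites if s != site_id]
        self.clock = 0
        self.queue = []            # request queue ordered by (timestamp, site id)
        self.last_seen = {}        # timestamp of the last message from each site
        self.my_req = None

    def request_cs(self):
        self.clock += 1
        self.my_req = (self.clock, self.id)
        self.queue.append(self.my_req)
        self.queue.sort()
        for s in self.others:                        # (N - 1) REQUEST messages
            self.send(s, ("REQUEST", self.my_req))

    def on_request(self, req):
        self.clock = max(self.clock, req[0]) + 1
        self.queue.append(req)
        self.queue.sort()
        self.send(req[1], ("REPLY", (self.clock, self.id)))   # (N - 1) REPLYs

    def on_reply(self, ts):
        self.clock = max(self.clock, ts[0]) + 1
        self.last_seen[ts[1]] = ts
        self.try_enter()

    def on_release(self, req, ts):
        self.clock = max(self.clock, ts[0]) + 1
        self.last_seen[ts[1]] = ts
        self.queue.remove(req)
        self.try_enter()

    def try_enter(self):
        if self.my_req is None:
            return
        # L1: a larger-timestamp message has arrived from every other site.
        l1 = all(s in self.last_seen and self.last_seen[s] > self.my_req
                 for s in self.others)
        # L2: own request is at the head of the request queue.
        if l1 and self.queue and self.queue[0] == self.my_req:
            self.enter_cs()

    def release_cs(self):
        req, self.my_req = self.my_req, None
        self.queue.remove(req)
        self.clock += 1
        for s in self.others:                        # (N - 1) RELEASE messages
            self.send(s, ("RELEASE", req, (self.clock, self.id)))

    def enter_cs(self):
        pass                                         # critical section body
```

The three broadcasts visible in the sketch (REQUEST, REPLY, RELEASE) account for the 3(N – 1) messages per CS execution noted above.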
4.Discuss in detail the requirements that a mutual exclusion algorithm should satisfy, and discuss
the metrics we use to measure the performance of mutual exclusion algorithms?
DISTRIBUTED MUTUAL EXCLUSION ALGORITHMS :
• Mutual exclusion is a concurrency control property which is introduced to prevent race
conditions.
• It is the requirement that a process should not access a shared resource while another concurrent
process is currently using it. Mutual exclusion in a distributed system states that only one process is
allowed to execute the critical section (CS) at any given time.
• Message passing is the sole means for implementing distributed mutual exclusion.
• The decision as to which process is allowed access to the CS next is arrived at by message
passing, in which each process learns about the state of all other processes in some consistent
way.
• There are three basic approaches for implementing distributed mutual exclusion:
1. Token-based approach:
− A unique token is shared among all the sites.
− If a site possesses the unique token, it is allowed to enter its critical section
− This approach uses sequence numbers to order requests for the critical section.
− Each request for the critical section contains a sequence number. This sequence number is used
to distinguish old and current requests.
− This approach ensures mutual exclusion as the token is unique.
− Eg: Suzuki-Kasami’s Broadcast Algorithm.
2.Non-token-based approach:
− A site communicates with other sites in order to determine which site should execute the
critical section next. This requires the exchange of two or more successive rounds of messages
among sites.
− This approach uses timestamps instead of sequence numbers to order requests for the critical
section.
− Whenever a site makes a request for the critical section, it gets a timestamp. The timestamp is
also used to resolve any conflict between critical section requests.
− All algorithms that follow the non-token-based approach maintain a logical clock. Logical
clocks get updated according to Lamport’s scheme.
− Eg: Lamport's algorithm, Ricart–Agrawala algorithm.
3.Quorum-based approach:
− Instead of requesting permission to execute the critical section from all other sites, each
site requests permission only from a subset of sites, which is called a quorum.
− Any two quorums contain at least one common site.
− This common site is responsible for ensuring mutual exclusion.
− Eg: Maekawa’s Algorithm.
Preliminaries:
• The system consists of N sites, S1, S2, S3, …, SN.
• Assume that a single process is running on each site.
• The process at site Si is denoted by pi. All these processes communicate asynchronously over an
underlying communication network.
• A process wishing to enter the CS requests all other or a subset of processes by sending REQUEST
messages, and waits for appropriate replies before entering the CS.
• While waiting the process is not allowed to make further requests to enter the CS.
• A site can be in one of the following three states: requesting the CS, executing the CS, or neither
requesting nor executing the CS.
• In the requesting the CS state, the site is blocked and cannot make further requests for the CS.
• In the idle state, the site is executing outside the CS.
• In the token-based algorithms, a site can also be in a state where a site holding the token is
executing outside the CS. Such state is referred to as the idle token state.
• At any instant, a site may have several pending requests for CS. A site queues up these requests
and serves them one at a time.
• N denotes the number of processes or sites involved in invoking the critical section, T denotes the
average message delay, and E denotes the average critical section execution time.
Requirements of mutual exclusion algorithms:
• Safety property:
The safety property states that at any instant, only one process can execute the critical section. This
is an essential property of a mutual exclusion algorithm.
• Liveness property:
This property states the absence of deadlock and starvation. Two or more sites should not endlessly
wait for messages that will never arrive. In addition, a site must not wait indefinitely to execute the
CS while other sites are repeatedly executing the CS. That is, every requesting site should get an
opportunity to execute the CS in finite time.
• Fairness:
Fairness in the context of mutual exclusion means that each process gets a fair chance to execute
the CS. In mutual exclusion algorithms, the fairness property generally means that the CS execution
requests are executed in order of their arrival in the system.
Performance metrics
➢ Message complexity: This is the number of messages that are required per CS execution by a
site.
➢ Synchronization delay: This is the time required, after a site leaves the CS, before the next site
enters the CS. (Figure 3.1)
➢ Response time: This is the time interval a request waits for its CS execution to be over after its
request messages have been sent out. Thus, response time does not include the time a request
waits at a site before its request messages have been sent out. (Figure 3.2)
➢ System throughput: This is the rate at which the system executes requests for the CS. If SD is the
synchronization delay and E is the average critical section execution time, then system
throughput = 1/(SD + E).
OR Model
• In the OR model, a passive process becomes active only after a message from any process in its
dependent set has arrived.
• This models classical nondeterministic choices of receive statements.
• A process can make a request for numerous resources simultaneously and the request is satisfied
if any one of the requested resources is granted.
• The requested resources may exist at different locations.
• If all requests in the WFG are OR requests, then the nodes are called OR nodes.
• Presence of a cycle in the WFG of an OR model does not imply a deadlock in the OR model.
• In the OR model, the presence of a knot indicates a deadlock.
Deadlock in the OR model:
A process Pi is blocked if it has a pending OR request to be satisfied.
• With every blocked process, there is an associated set of processes called its dependent set.
• A process moves from an idle to an active state on receiving a grant message from any of the
processes in its dependent set.
• A process is permanently blocked if it never receives a grant message from any of the processes
in its dependent set.
• A set of processes S is deadlocked if all the processes in S are permanently blocked.
• In short, a process is deadlocked or permanently blocked if the following conditions are met:
1. Each of the processes in the set S is blocked.
2. The dependent set of each process in S is a subset of S.
3. No grant message is in transit between any two processes in set S.
• A blocked process P in the set S becomes active only after receiving a grant message
from a process in its dependent set, which is a subset of S.
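The three conditions above can be checked mechanically against a recorded snapshot; the sketch below is an illustrative assumption of such a check (the snapshot format, the process names, and the function name are invented for the example):

```python
# Sketch: check whether a set S of processes is deadlocked in the OR model.
# blocked            : set of processes currently blocked on an OR request
# dependent_sets     : map process -> set of processes it is waiting on
# grants_in_transit  : set of (sender, receiver) pairs for GRANT messages

def or_model_deadlocked(S, blocked, dependent_sets, grants_in_transit):
    # 1. Every process in S is blocked.
    if not all(p in blocked for p in S):
        return False
    # 2. The dependent set of each process in S is a subset of S.
    if not all(dependent_sets[p] <= S for p in S):
        return False
    # 3. No grant message is in transit between any two processes in S.
    if any(src in S and dst in S for (src, dst) in grants_in_transit):
        return False
    return True

# Hypothetical snapshot: P1 and P2 each wait only on the other, no grants pending.
S = {"P1", "P2"}
print(or_model_deadlocked(
    S,
    blocked={"P1", "P2"},
    dependent_sets={"P1": {"P2"}, "P2": {"P1"}},
    grants_in_transit=set(),
))   # True
```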
Unrestricted model :
• No assumptions are made regarding the underlying structure of resource requests.
• In this model, the only assumption made is that the deadlock is stable; hence it is the most
general model.
• This way of looking at the deadlock problem helps in separation of concerns: concerns about
properties of the problem are separated from underlying distributed systems computations.
Hence, these algorithms can be used to detect other stable properties as they deal with this
general model.
• These algorithms are of more theoretical value for distributed systems, since making no further
assumptions about the underlying distributed computation leads to a great deal of overhead.