
Module – 4

Distributed mutual exclusion algorithms:


Distributed Mutual Exclusion Algorithms are algorithms used in distributed systems to ensure
that multiple processes or nodes, which do not share memory, can safely access shared resources
or critical sections in a manner that prevents conflicts and ensures consistency. These algorithms
help in managing concurrent access to shared resources in a way that only one process can access
the critical section at any given time.

System Model
A system model in a distributed system is the set of assumptions and abstractions that describe how the components of the system behave and interact: how many sites there are, how they communicate, and what guarantees the message-passing layer provides. For the algorithms in this module, the key assumptions are that sites do not share memory and that they coordinate only by exchanging messages over the network.

Lamport’s Algorithm
Lamport's Algorithm is a distributed algorithm used to establish a logical ordering of events in a
distributed system, and it underlies one of the classic permission-based mutual exclusion
algorithms. It is best known for providing logical clocks for tracking events in systems where
physical clocks may not be synchronized.
Here are key aspects of Lamport’s algorithm:
1. Logical Clocks:
 Lamport's logical clock is a way of ensuring the sequence of events in a distributed
system respects a partial order. It assigns a number to each event that occurs in the
system, ensuring that events are ordered in a way that respects causality.
 Lamport introduced a counter for each process, and this counter is incremented every
time an event occurs in that process.
2. Event Ordering:
 According to Lamport's algorithm, there are two important rules to maintain event
ordering:
1. If an event e1 happens before event e2 (causally), then the timestamp of e1 must be
less than the timestamp of e2.
2. If two events are unrelated (no causal relation), their timestamps may be assigned in
any order, as long as the first rule is still respected.
This way, Lamport's algorithm allows the system to maintain a logical ordering of events,
even if they occur at different times on different machines.
3. Mutual Exclusion with Lamport’s Algorithm:
Lamport’s algorithm can also be applied to mutual exclusion, i.e., ensuring that no two
processes access a shared resource simultaneously. The idea is to use the logical clocks to
communicate and request access to a shared resource.
The basic steps of Lamport’s mutual exclusion algorithm are as follows (a minimal code sketch is given after this list):
 Requesting access: A process sends a timestamped REQUEST message to all other
processes and also places the request in its own request queue, which is ordered by
timestamp.
 Handling a request: Each process that receives a REQUEST places it in its own
request queue and sends back a timestamped REPLY message.
 Critical Section: A process enters the critical section once its own request is at the
head of its request queue and it has received a message with a larger timestamp from
every other process.
 Release: After finishing, the process removes its request from its queue and sends a
RELEASE message to all other processes; on receiving the RELEASE, they remove that
request from their queues, allowing the next request in timestamp order to proceed.
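The following is a minimal Python sketch of one site's side of this logic, assuming an external send(destination, message) callback for message delivery; the class and helper names are illustrative, not part of Lamport's original description.

import heapq

class LamportMutexSite:
    # Sketch of the per-site state used by Lamport's mutual exclusion algorithm.
    def __init__(self, site_id, all_sites):
        self.site_id = site_id
        self.others = [s for s in all_sites if s != site_id]
        self.clock = 0                                  # Lamport logical clock
        self.request_queue = []                         # min-heap of (timestamp, site_id)
        self.last_seen = {s: 0 for s in self.others}    # largest timestamp seen per site
        self.my_request = None

    def request_cs(self, send):
        # Broadcast a timestamped REQUEST and place it in the local queue.
        self.clock += 1
        self.my_request = (self.clock, self.site_id)
        heapq.heappush(self.request_queue, self.my_request)
        for s in self.others:
            send(s, ("REQUEST", self.my_request))

    def on_request(self, ts, sender, send):
        # Queue the incoming request and answer with a timestamped REPLY.
        self.clock = max(self.clock, ts) + 1
        self.last_seen[sender] = max(self.last_seen[sender], ts)
        heapq.heappush(self.request_queue, (ts, sender))
        send(sender, ("REPLY", self.clock))

    def on_reply(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        self.last_seen[sender] = max(self.last_seen[sender], ts)

    def can_enter_cs(self):
        # Enter the CS only when (1) every other site has been heard from with a
        # larger timestamp and (2) our own request heads the local request queue.
        ts, _ = self.my_request
        return (all(seen > ts for seen in self.last_seen.values())
                and self.request_queue[0] == self.my_request)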
4. Clock Adjustment:
 The logical clock of a process is adjusted as follows: if a process p receives a
message from another process q, the logical clock of p is updated to the maximum of
its own logical clock and the timestamp in the message, and is then incremented (to
account for events that may have happened earlier at q).
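As a one-line sketch of this rule (the function name is illustrative):

def adjust_clock(own_clock, msg_timestamp):
    # Take the maximum of the local clock and the incoming timestamp, then
    # advance by one so the receive event itself gets a fresh, larger timestamp.
    return max(own_clock, msg_timestamp) + 1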
Advantages of Lamport’s Algorithm:
 Simple and efficient for determining the order of events in distributed systems.
 Provides a way to handle synchronization in distributed systems, where it is difficult
or impossible to have a common global clock.
 Works in asynchronous systems with no shared memory or physical clocks.
Limitations:
 Does not capture causality perfectly: if one event causally precedes another, its
timestamp is smaller, but the converse does not hold; from the timestamps alone it is
impossible to tell whether two events are causally related or merely concurrent.
 Potential for message delays: In a real-world system, message delays or network
partitions might complicate the precise order of events.

Ricart–Agrawala algorithm:

Ricart–Agrawala algorithm is a distributed mutual exclusion algorithm that ensures that


only one process at a time can access a critical section in a distributed system. It is an
extension and improvement of Lamport's algorithm, designed specifically to address the
mutual exclusion problem in distributed systems where multiple processes are trying to
access shared resources concurrently.

Steps in the Ricart–Agrawala Algorithm:


1. Requesting the Critical Section:
o When a process Pi wants to enter the critical section, it sends a REQUEST
message to all other processes in the system. The request message contains the
process’s timestamp (logical clock value) to maintain the correct order of
events.
2. Receiving Request Messages:
o Each process Pj that receives a REQUEST message decides whether to reply
immediately or to defer its reply:
 If Pj is neither requesting nor executing the critical section, it
immediately replies with a REPLY (grant) message.
 If Pj is itself requesting the critical section, it compares the timestamp
of the incoming request with the timestamp of its own pending request:
if the incoming request has the smaller timestamp (higher priority), Pj
replies immediately; otherwise it defers its reply and queues the request.
 If Pj is currently executing the critical section, it defers its reply until it
exits.
3. Entering the Critical Section:
o A process Pi can enter the critical section when it has received a REPLY
message from every other process in the system. This ensures that no other
process is in the critical section at the same time, satisfying the mutual
exclusion requirement.
4. Releasing the Critical Section:
o Once a process has finished executing in the critical section, it sends the
deferred REPLY messages to all processes whose requests it had queued,
allowing the highest-priority waiting process to enter. (Unlike Lamport’s
algorithm, no separate RELEASE broadcast is needed, which is why only
2(N − 1) messages are required per CS execution.)
5. Granting Access to the Critical Section:
o A process grants access based on the priority determined by the timestamps,
with ties broken by process identifier. If it is neither using nor waiting for the
critical section, it sends a REPLY immediately; otherwise it defers the reply as
described above. A minimal sketch of this reply rule follows.
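A minimal Python sketch of this reply rule, assuming a site object carrying the flags in_cs and requesting, its own pending request (my_ts, my_id), a deferred set, and a send callback (all of these names are illustrative):

def on_request(site, req_ts, req_id, send):
    # Ricart-Agrawala reply rule: grant immediately unless our own use of the
    # CS (current, or pending with higher priority) requires deferring.
    if site.in_cs:
        site.deferred.add(req_id)                      # defer until we leave the CS
    elif site.requesting and (site.my_ts, site.my_id) < (req_ts, req_id):
        site.deferred.add(req_id)                      # our pending request has priority
    else:
        send(req_id, "REPLY")                          # grant immediately

def release_cs(site, send):
    # On exiting the CS, send every deferred REPLY.
    site.in_cs = False
    for j in site.deferred:
        send(j, "REPLY")
    site.deferred.clear()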

Theorem: Ricart-Agrawala algorithm achieves mutual exclusion.


Proof:
o The proof is by contradiction. Suppose two sites Si and Sj are executing the CS concurrently,
and suppose Si’s request has higher priority than Sj’s request. Clearly, Si received Sj’s request
after it had made its own request.
o Thus, Sj can execute the CS concurrently with Si only if Si returned a REPLY to Sj (in response
to Sj’s request) before Si exited the CS.
o However, this is impossible because Sj’s request has lower priority, so Si defers its REPLY until
after it exits the CS. Therefore, the Ricart-Agrawala algorithm achieves mutual exclusion.

Performance
o For each CS execution, Ricart-Agrawala algorithm requires (N − 1) REQUEST
messages and (N − 1) REPLY messages.
o Thus, it requires 2(N − 1) messages per CS execution.
o Synchronization delay in the algorithm is T .
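For example, in a system of N = 10 sites, each CS execution involves 9 REQUEST and 9 REPLY messages, i.e., 2 × 9 = 18 messages in total.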

Singhal’s Dynamic Information-Structure Algorithm

o Most mutual exclusion algorithms use a static approach to invoke mutual exclusion.
o These algorithms always take the same course of actions to invoke mutual exclusion
no matter what is the state of the system.
o These algorithms lack efficiency because they fail to exploit the changing conditions
in the system.
o An algorithm can exploit dynamic conditions of the system to improve the
performance.
o For example, if few sites are invoking mutual exclusion very frequently and other
sites invoke mutual exclusion much less frequently, then
 A frequently invoking site need not ask for the permission of a less frequently
invoking site every time it requests access to the CS.
 It only needs to take permission from all the other frequently invoking sites.
o Singhal developed an adaptive mutual exclusion algorithm based on this observation.
o The information-structure of the algorithm evolves with time as sites learn about the
state of the system through messages.
Challenges
o The design of adaptive mutual exclusion algorithms is challenging:
o How does a site efficiently know what sites are currently actively invoking mutual
exclusion?
o When a less frequently invoking site needs to invoke mutual exclusion, how does it
do it?
o How does a less frequently invoking site make the transition to a more frequently
invoking site, and vice versa?
o How can it be ensured that mutual exclusion is guaranteed when a site does not take
the permission of every other site?
o How can it be ensured that a dynamic mutual exclusion algorithm does not spend so
much time and so many resources collecting system state that it offsets any gain?
Quorum-Based Mutual Exclusion Algorithms
Quorum-based mutual exclusion algorithms are different in the following two ways:

A site does not request permission from all other sites, but only from a subset of the sites. The
request sets of the sites are chosen such that ∀i ∀j : 1 ≤ i, j ≤ N :: Ri ∩ Rj ≠ ∅. Consequently,
every pair of sites has a site which mediates conflicts between that pair.
A site can send out only one REPLY message at any time. A site can send a REPLY message
only after it has received a RELEASE message for the previous REPLY message.
Since these algorithms are based on the notion of ‘Coteries’ and ‘Quorums’, we next describe
the idea of coteries and quorums.
A coterie C is defined as a set of sets, where each set g ∈ C is called a quorum. The
following properties hold for quorums in a coterie:
Intersection property: For every pair of quorums g, h ∈ C, g ∩ h ≠ ∅.
For example, the sets {1,2,3}, {2,5,7} and {5,7,9} cannot all be quorums in a coterie because the
first and third sets do not have a common element.
Minimality property: There should be no quorums g, h in coterie C such that g ⊃ h. For
example, the sets {1,2,3} and {1,3} cannot both be quorums in a coterie because the first set is a
superset of the second.
Coteries and quorums can be used to develop algorithms to ensure mutual exclusion in a
distributed environment. A simple protocol works as follows:
Let ‘a’ be a site in quorum ‘A’. If ‘a’ wants to invoke mutual exclusion, it requests permission
from all sites in its quorum ‘A’.
Every site does the same to invoke mutual exclusion. Due to the Intersection Property,
quorum ‘A’ contains at least one site that is common to the quorum of every other site.
These common sites send permission to only one site at any time. Thus, mutual exclusion is
guaranteed.
Note that the Minimality property ensures efficiency rather than correctness.
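Both properties can be checked mechanically. The following illustrative Python sketch (the function name is an assumption) tests a candidate coterie against them:

def is_coterie(quorums):
    # Check the Intersection and Minimality properties for a candidate coterie.
    sets = [frozenset(q) for q in quorums]
    for g in sets:
        for h in sets:
            if not (g & h):             # Intersection: every pair must overlap
                return False
            if g != h and g >= h:       # Minimality: no quorum may contain another
                return False
    return True

# The examples from the text:
print(is_coterie([{1, 2, 3}, {2, 5, 7}, {5, 7, 9}]))   # False: {1,2,3} and {5,7,9} are disjoint
print(is_coterie([{1, 2, 3}, {1, 3}]))                 # False: {1,2,3} is a superset of {1,3}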
Maekawa’s Algorithm
Maekawa’s algorithm was the first quorum-based mutual exclusion algorithm. The request
sets for sites (i.e., quorums) in Maekawa’s algorithm are constructed to satisfy the following
conditions:

M1: (∀i ∀j : i ≠ j, 1 ≤ i, j ≤ N :: Ri ∩ Rj ≠ ∅)
M2: (∀i : 1 ≤ i ≤ N :: Si ∈ Ri)
M3: (∀i : 1 ≤ i ≤ N :: |Ri| = K)
M4: Any site Sj is contained in exactly K of the request sets Ri, 1 ≤ i, j ≤ N.
Maekawa used the theory of projective planes and showed that
N = K(K − 1) + 1. This relation gives |Ri| = √N.
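For example, K = 3 gives N = 3 × 2 + 1 = 7 sites, and each request set then has |Ri| = K = 3 ≈ √7.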
Conditions M1 and M2 are necessary for correctness; whereas conditions M3 and M4
provide other desirable features to the algorithm.
Condition M3 states that the sizes of the request sets of all sites must be equal, implying that
all sites have to do an equal amount of work to invoke mutual exclusion.
Condition M4 enforces that exactly the same number of sites should request permission from
any site implying that all sites have “equal responsibility” in granting permission to other
sites.
The Algorithm
A site Si executes the following steps to execute the CS.
Requesting the critical section
(a) A site Si requests access to the CS by sending REQUEST(i ) messages to all sites in
its request set Ri .
(b) When a site Sj receives the REQUEST(i ) message, it sends a REPLY(j) message to
Si provided it hasn’t sent a REPLY message to a site since its receipt of the last RELEASE
message. Otherwise, it queues up the REQUEST(i ) for later consideration.
Executing the critical section
(c) Site Si executes the CS only after it has received a REPLY message from every site in Ri .
Releasing the critical section
(d) After the execution of the CS is over, site Si sends a RELEASE(i ) message to every
site in Ri .
(e) When a site Sj receives a RELEASE(i ) message from site Si , it sends a REPLY
message to the next site waiting in the queue and deletes that entry from the queue. If the
queue is empty, then the site updates its state to reflect that it has not sent out any REPLY
message since the receipt of the last RELEASE message.
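A minimal Python sketch of steps (b) and (e) as seen by a single quorum member Sj; the queue and flag names, and the send callback, are illustrative assumptions:

from collections import deque

class MaekawaQuorumMember:
    # A site arbitrates REQUESTs from its quorum with at most one outstanding REPLY.
    def __init__(self):
        self.locked_for = None          # site currently holding this member's REPLY
        self.waiting = deque()          # queued REQUESTs awaiting a RELEASE

    def on_request(self, i, send):
        if self.locked_for is None:     # no REPLY outstanding since the last RELEASE
            self.locked_for = i
            send(i, "REPLY")
        else:
            self.waiting.append(i)      # step (b): queue the request for later

    def on_release(self, i, send):
        # Step (e): pass the permission to the next waiting site, if any.
        if self.waiting:
            self.locked_for = self.waiting.popleft()
            send(self.locked_for, "REPLY")
        else:
            self.locked_for = None      # no REPLY outstanding any more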
Correctness
Theorem: Maekawa’s algorithm achieves mutual exclusion.
Proof:
Proof is by contradiction. Suppose two sites Si and Sj are concurrently executing the CS.
This means site Si received a REPLY message from all sites in Ri and concurrently site Sj
was able to receive a REPLY message from all sites in Rj .
If Ri ∩ Rj = {Sk}, then site Sk must have sent REPLY messages to both Si
and Sj concurrently, which is impossible because a site may have at most one outstanding
REPLY at any time. This is a contradiction.
Performance
For each CS execution, Maekawa’s algorithm requires √N REQUEST,
√N REPLY, and √N RELEASE messages, i.e., 3√N messages per CS execution.

Agarwal-El Abbadi Quorum-Based Algorithm


Agarwal-El Abbadi quorum-based algorithm uses ‘tree-structured quorums’.
All the sites in the system are logically organized into a complete binary tree.
For a complete binary tree of level ‘k’, we have 2^(k+1) − 1 sites, with the root at level k and the
leaves at level 0.
The number of sites in a path from the root to a leaf is equal to the level of the tree plus one,
i.e., k + 1, which is O(log n).
A path in a binary tree is a sequence a1, a2, ..., ai, ai+1, ..., ak such that ai is the parent of
ai+1.
Algorithm for constructing a tree-structured quorum
The algorithm tries to construct quorums in a way that each quorum represents any path from
the root to a leaf.
If it fails to find such a path (say, because node ’x’ has failed), the control goes to the ELSE
block which specifies that the failed node ‘x’ is substituted by two paths both of which start
with the left and right children of ‘x’ and end at leaf nodes.
If the leaf site is down or inaccessible due to any reason, then the quorum cannot be formed
and the algorithm terminates with an error condition.
The sets that are constructed using this algorithm are termed as tree quorums.
FUNCTION GetQuorum (Tree: NetworkHierarchy): QuorumSet;
VAR left, right : QuorumSet;
BEGIN
  IF Empty (Tree) THEN
    RETURN ({});
  ELSE IF GrantsPermission(Tree↑.Node) THEN
    RETURN ((Tree↑.Node) ∪ GetQuorum (Tree↑.LeftChild));
    OR
    RETURN ((Tree↑.Node) ∪ GetQuorum (Tree↑.RightChild));
  ELSE
    left ← GetQuorum (Tree↑.LeftChild);
    right ← GetQuorum (Tree↑.RightChild);
    IF (left = ∅ ∨ right = ∅) THEN
      (* Unsuccessful in establishing a quorum *)
      EXIT(-1);
    ELSE
      RETURN (left ∪ right);
    END; (* IF *)
  END; (* IF *)
END GetQuorum
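The same idea can be expressed as a short Python sketch. The numbering of sites as positions 1..n of a complete binary tree (children of i at 2i and 2i+1), the alive set standing in for GrantsPermission, and the interpretation of the nondeterministic OR as "try the left path first, then the right" are all assumptions of this sketch:

def get_quorum(node, n, alive):
    # One tree quorum for the complete binary tree of sites 1..n rooted at `node`,
    # or None if no quorum can be formed. `alive` plays the role of GrantsPermission.
    if node > n:                               # Empty(Tree): empty subtree
        return set()
    left, right = 2 * node, 2 * node + 1
    if node in alive:
        # Take this node together with a quorum from one of its subtrees.
        for child in (left, right):
            sub = get_quorum(child, n, alive)
            if sub is not None:
                return {node} | sub
        return None
    # Node has failed: substitute it with quorums from BOTH of its subtrees.
    l = get_quorum(left, n, alive)
    r = get_quorum(right, n, alive)
    if not l or not r:                         # a failed leaf, or a deeper failure: EXIT(-1)
        return None
    return l | r

# Example: the 15-site tree from the text, with site 3 failed.
print(get_quorum(1, 15, set(range(1, 16)) - {3}))      # {1, 2, 4, 8}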
Examples of Tree-Structured Quorums
When there is no node failure, the number of quorums formed is equal to the number of leaf
sites.
Consider the tree of height 3 shown in Figure 3, constructed from 15 (= 2^(3+1) − 1) sites.
In this case 8 quorums are formed from 8 possible root-leaf paths: 1-2-4-8, 1-2-4-9, 1-2-5-10,
1-2-5-11, 1-3-6-12, 1-3-6-13, 1-3-7-14 and 1-3-7-15.
If any site fails, the algorithm substitutes for that site two possible paths starting from the
site’s two children and ending in leaf nodes.
For example, when node 3 fails, we consider possible paths starting from children 6 and 7
and ending at leaf nodes. The possible paths starting from child 6 are 6-12 and 6-13, and from
child 7 are 7-14 and 7-15.
So, when node 3 fails, the following eight quorums can be formed:
{1,6,12,7,14}, {1,6,12,7,15}, {1,6,13,7,14}, {1,6,13,7,15}, {1,2,4,8},
{1,2,4,9},{1,2,5,10}, {1,2,5,11}.
Token-Based Algorithms
A token-based algorithm is a type of algorithm where "tokens" are used as a means of representing
or managing various computational resources, such as access to data, computation steps, or rights to
perform actions in a system. Tokens are typically discrete entities that provide a mechanism for
coordination, control, or resource management in various computing contexts.
Key Features of Token-Based Algorithms
1. Token: A token is a special message or data structure that grants permission to enter
the critical section. Only the process holding the token can access the CS, and it
passes the token to another process once it exits the CS.
2. Distributed System: The algorithm works in a distributed setting, where processes
are not necessarily aware of each other’s states except through communication. The
processes are often connected via a network and do not have a global memory or
synchronization point.
3. Mutual Exclusion: The algorithm ensures that only one process can enter the critical
section at any given time, preventing race conditions and ensuring data consistency.
4. Fairness: Token-based algorithms often aim for fairness, meaning that every process
will eventually receive the token and get a chance to enter the critical section.
5. Deadlock and Starvation Freedom: These algorithms are designed to avoid
situations where processes are blocked indefinitely (deadlock) or where some
processes are perpetually denied access to the critical section (starvation).
Working of Token-Based Algorithms
In token-based algorithms, there are usually two key actions that a process can perform:
 Request for Critical Section: A process requests permission to enter the critical
section, which is granted only if it holds the token.
 Release the Token: Once a process exits the critical section, it passes the token to
another process, which may be waiting to enter the CS.
Properties of Token-Based Algorithms
1. Mutual Exclusion: The token ensures that only one process can access the critical
section at any given time.
2. Deadlock Freedom: The circulation of the token ensures that processes are not
blocked forever and can eventually access the critical section.
3. Starvation Freedom: Every process will eventually get the token and be able to enter
the critical section.
4. Fairness: Typically, the token-based algorithms provide fairness by ensuring that
processes get a chance to access the critical section in a round-robin or equal manner.
Suzuki-Kasami’s Broadcast Algorithm
If a site wants to enter the CS and it does not have the token, it broadcasts a REQUEST
message for the token to all other sites.
A site which possesses the token sends it to the requesting site upon the receipt of its
REQUEST message.
If a site receives a REQUEST message when it is executing the CS, it sends the token only
after it has completed the execution of the CS.
This algorithm must efficiently address the following two design issues:
(1) How to distinguish an outdated REQUEST message from a current REQUEST
message:
Due to variable message delays, a site may receive a token request message after the
corresponding request has been satisfied.
If a site cannot determine whether the request corresponding to a token request has been satisfied,
it may dispatch the token to a site that does not need it.
This does not violate correctness; however, it may seriously degrade performance.
(2) How to determine which site has an outstanding request for the CS:
After a site has finished the execution of the CS, it must determine what sites have an
outstanding request for the CS so that the token can be dispatched to one of them.
The first issue is addressed in the following manner:
A REQUEST message of site Sj has the form REQUEST(j, n) where n (n=1, 2, ...) is a
sequence number which indicates that site Sj is requesting its nth CS execution.
A site Si keeps an array of integers RNi [1..N] where RNi [j] denotes the largest sequence
number received in a REQUEST message so far from site Sj .
When site Si receives a REQUEST(j, n) message, it sets RNi [j]:= max(RNi [j], n).
When a site Si receives a REQUEST(j, n) message, the request is outdated if
RNi [j]>n.
The second issue is addressed in the following manner:
The token consists of a queue of requesting sites, Q, and an array of integers LN[1..N], where
LN[j] is the sequence number of the request which site Sj executed most recently.
After executing its CS, a site Si updates LN[i]:=RNi[i] to indicate that its request
corresponding to sequence number RNi [i] has been executed.
At site Si, if RNi [j] = LN[j] + 1, then site Sj is currently requesting the token.
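A minimal Python sketch of this bookkeeping; the Token class, the dictionaries keyed by site identifier, and the method names are illustrative assumptions:

from collections import deque

class Token:
    def __init__(self, sites):
        self.Q = deque()                       # queue of requesting sites
        self.LN = {j: 0 for j in sites}        # LN[j]: sequence number of j's last executed request

class SuzukiKasamiSite:
    def __init__(self, i, sites):
        self.i = i
        self.RN = {j: 0 for j in sites}        # RN[j]: largest request number seen from site j

    def on_request(self, j, n):
        # Record the request; it is outdated if RN[j] > n (already satisfied).
        self.RN[j] = max(self.RN[j], n)
        return self.RN[j] > n

    def on_exit_cs(self, token):
        # After executing the CS: update LN[i] and enqueue outstanding requesters.
        token.LN[self.i] = self.RN[self.i]
        for j in self.RN:
            if j != self.i and self.RN[j] == token.LN[j] + 1 and j not in token.Q:
                token.Q.append(j)              # site j is currently requesting the token
        return token.Q.popleft() if token.Q else None   # next token holder, if any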

Raymond’s Tree-Based Algorithm


This algorithm uses a spanning tree to reduce the number of messages exchanged per critical
section execution.
The network is viewed as a graph; a spanning tree of a network is a tree that contains all
N nodes.
The algorithm assumes that the underlying network guarantees message delivery and that all
nodes of the network are completely reliable.
How Raymond's Algorithm Works

Step-by-Step Process:

1. Requesting Access:
o When a process Pi wants to enter the critical section, it adds its own request
to its local request queue and sends a REQUEST to its parent in the tree, i.e.,
the neighbour that lies in the direction of the current token holder.
o A process that receives a REQUEST from a neighbour records it in its own
request queue and, if it does not hold the token and has not already asked for
it, forwards a REQUEST of its own toward the token holder.
2. Token Movement:
o The token starts at one process (initially, the root of the tree), and only the
process currently holding the token may enter the critical section.
o When the token holder has pending requests and is not using the critical
section, it passes the token to the neighbour at the head of its request queue;
the tree edges are re-oriented so that every process always points toward the
current token holder.
3. Release and Passing:
o Once a process Pi finishes executing in the critical section, it checks its
request queue: if the queue is non-empty, it passes the token to the neighbour
at the head of the queue (and, if further requests remain queued, sends that
neighbour a new REQUEST on their behalf); if the queue is empty, it simply
keeps the token until a request arrives. (A minimal code sketch of this
holder/queue bookkeeping is given after step 4.)
4. Reclaiming the Token:
o If a process loses its connection to the tree (for example, if its parent fails or if
the tree structure is reorganized), the algorithm ensures that the token can still
be passed among remaining processes, maintaining mutual exclusion. A
failure detection and recovery mechanism is generally required to handle such
situations.
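The following is a minimal Python sketch of the holder/queue bookkeeping at a single node, ignoring failures; the attribute names (holder, request_q, in_cs), the send callback, and the simplification that a node forwards only one REQUEST while its queue is non-empty are assumptions of this sketch, not a complete statement of Raymond's algorithm:

from collections import deque

class RaymondNode:
    def __init__(self, node_id, holder):
        self.id = node_id
        self.holder = holder              # neighbour in the direction of the token (self.id if held here)
        self.request_q = deque()          # pending requests: own id or requesting neighbours' ids
        self.in_cs = False

    def request_cs(self, send):
        self.request_q.append(self.id)
        self._serve_or_forward(send)

    def on_request(self, neighbour, send):
        self.request_q.append(neighbour)
        self._serve_or_forward(send)

    def _serve_or_forward(self, send):
        if self.holder == self.id and not self.in_cs:
            self._pass_token(send)        # idle holder: serve the head of the queue at once
        elif self.holder != self.id and len(self.request_q) == 1:
            send(self.holder, ("REQUEST", self.id))   # ask once toward the current holder

    def on_token(self, send):
        self.holder = self.id
        self._pass_token(send)

    def _pass_token(self, send):
        head = self.request_q.popleft()
        if head == self.id:
            self.in_cs = True             # enter the CS; release_cs() is called on exit
        else:
            self.holder = head            # re-orient this edge toward the new holder
            send(head, ("TOKEN",))
            if self.request_q:            # more requests are queued behind it
                send(head, ("REQUEST", self.id))

    def release_cs(self, send):
        self.in_cs = False
        if self.request_q:
            self._pass_token(send)        # pass the token on to the next requester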

Properties of Raymond's Tree-Based Algorithm

1. Mutual Exclusion:
o The algorithm ensures that only one process at a time can access the critical
section, thus preventing race conditions and maintaining data consistency.
2. Deadlock-Free:
o Since the token keeps circulating in the tree, a process that requests the critical
section will eventually receive the token. Hence, deadlock is avoided.
3. Starvation-Free:
o Every process that requests the critical section will eventually receive the
token, preventing starvation (where a process might be indefinitely denied
access to the critical section).
4. Fairness:
o The algorithm provides fairness by serving requests in the order in which they
arrive in each node’s local request queue (FIFO), so every requesting process
is eventually granted access to the critical section.
5. Efficient Token Circulation:
o The token is passed in a way that limits unnecessary communication,
minimizing network overhead. Since only parent-child communication is
required, the algorithm is more efficient compared to broadcast-based
approaches.
6. Scalability:
o The tree structure allows the algorithm to scale well in large systems. The
token only needs to move within the tree, which is a more efficient
communication pattern compared to other algorithms where tokens might have
to be passed through all processes.
7. Fault Tolerance:
o While Raymond's algorithm assumes a reliable communication network and
no node failures, it is possible to implement fault-tolerant versions where the
system detects a failure and regenerates the token or reconfigures the tree
structure. However, failure handling adds complexity to the basic algorithm.

Advantages of Raymond’s Tree-Based Algorithm


 Low Message Overhead: Compared to other distributed mutual exclusion algorithms
(like the Lamport's or Ricart-Agrawala algorithm), Raymond's tree-based approach
generally has lower communication overhead because it restricts the communication
to a tree structure instead of broadcasting messages to all processes.
 Fairness: The algorithm ensures that every process will eventually get a fair opportunity
to access the critical section, as requests are served in the order in which they appear in
the nodes’ local request queues.
 Scalability: The algorithm scales well in systems with a large number of processes, as
the token only needs to move among the processes in the tree rather than involving all
processes directly.
 No Central Coordinator: Unlike some centralized approaches, Raymond’s algorithm is
fully distributed and doesn't rely on a central node to coordinate access to the critical
section.
Disadvantages of Raymond’s Tree-Based Algorithm
 Token Loss: If the token is lost due to network failures or crashes, the system will not
function properly until the token is regenerated, requiring a fault-tolerant recovery
mechanism.
 Tree Construction Complexity: The initial construction of the tree may require
significant effort, especially in large systems, and reorganizing the tree in the event of
failures or changes in the system can be complex.
 Single Point of Failure: Although the algorithm doesn't rely on a central node, the tree
structure still has a hierarchical dependency. A failure in a critical node (like the root)
can disrupt the entire system unless recovery mechanisms are in place.
