Unit 3 DC - VDP
SYNCHRONIZATION
Unit-III 08 Hrs.
Synchronization
• Distributed Mutual Exclusion: Classification of Mutual
Exclusion Algorithms, Requirements of Mutual Exclusion
Algorithms, Performance Measures.
• Non Token based Algorithms: Lamport Algorithm, Ricart–Agrawala’s
Algorithm, Maekawa’s Algorithm.
• Token Based Algorithms: Suzuki–Kasami's Broadcast Algorithm,
Singhal's Heuristic Algorithm, Raymond's Tree-Based Algorithm,
Comparative Performance Analysis.
CO3: Evaluate the various techniques used for clock synchronization and
mutual exclusion.
Distributed Mutual Exclusion
• Mutual exclusion is a fundamental problem in distributed computing
systems.
• Distributed mutual exclusion algorithms must deal with unpredictable message delays
and incomplete knowledge of the system state.
• Two basic approaches for distributed mutual exclusion:
1. In the token-based approach, a unique token (also known as the PRIVILEGE
message) is shared among the sites. A site is allowed to enter its CS (Critical
Section) if it possesses the token and it continues to hold the token until the
execution of the CS is over. Mutual exclusion is ensured because the token is unique.
2. In the non-token based approach, two or more successive rounds of messages are
exchanged among the sites to determine which site will enter the CS next. A site
enters the Critical Section (CS) when an assertion, defined on its local variables,
becomes true. Mutual exclusion is enforced because the assertion becomes true at
only one site at any given time.
3. Freedom from starvation: every requesting site should get to enter the CS in finite
time (desirable).
5. Fault tolerance: a failure in the distributed system should be recognized, so that
sites can continue to operate despite the failure (desirable).
Preliminaries
• We describe here:
1. System model
2. Requirements of mutual exclusion algorithms
3. Performance measures
1. SYSTEM MODEL
The system consists of N sites, S1, S2, ..., SN. We assume that a single process is running on each
site.
A process wishing to enter the CS, requests all other or a subset of processes by sending REQUEST
messages, and waits for appropriate replies before entering the CS. While waiting the process is
not allowed to make further requests to enter the CS.
In the ‘requesting the CS’ state, the site is blocked and cannot make further requests for the CS. In
the ‘idle’ state, the site is executing outside the CS.
In the token-based algorithms, a site can also be in a state where a site holding the token is
executing outside the CS. Such a state is referred to as the idle token state.
2. REQUIREMENTS OF MUTUAL EXCLUSION ALGORITHMS
a. Safety Property: The safety property states that at any instant, only one process can execute
the critical section.
b. Liveness Property: This property states the absence of deadlock and starvation. Two or
more sites should not endlessly wait for messages which will never arrive.
In addition, a site must not wait indefinitely to execute the CS while other sites are repeatedly
executing the CS. That is, every requesting site should get an opportunity to execute the CS
in finite time.
c. Fairness: Fairness in the context of mutual exclusion means that each process gets a fair chance
to execute the CS; requests are generally executed in the order in which they arrive (i.e., in the
order of their timestamps).

3. PERFORMANCE MEASURES
a. Message complexity: It is the number of messages that are required per CS execution by a site.
b. Synchronization delay: It is the time required after a site leaves the CS and before the next
site enters the CS.
Figure: Synchronization Delay.
c. Response time: It is the time interval a request waits for its CS execution to be over after its
request messages have been sent out.
d. System throughput: It is the rate at which the system executes requests for the CS:
system throughput = 1/(SD + E)
where SD is the synchronization delay and E is the average critical section execution
time.
• We often study the performance of mutual exclusion algorithms under two special loading
conditions:
• Under low load conditions, there is rarely more than one request for the critical section
present in the system simultaneously.
• Under heavy load conditions, there is always a pending request for the critical section at a site.
• Lamport's Algorithm - To synchronize logical clocks, Lamport defined a relation called
happens-before. The expression a → b is read "a happens before b" and means that all
processes agree that first event a occurs, then afterward, event b occurs.
• If a and b are events in the same process, and a occurs before b, then a → b is true.
• If a is the event of a message being sent by one process, and b is the event of the
message being received by another process, then a → b is also true. A message cannot be
received before it is sent, or even at the same time it is sent, since it takes a finite, nonzero
amount of time to arrive.
• The algorithm is fair in the sense that requests for the CS are executed in the order of
their timestamps.
• When a site processes a request for the CS, it updates its local clock and assigns
the request a timestamp.
• Communication channels are assumed to deliver messages in FIFO order.
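The clock-update rules that make happens-before hold can be sketched in a few lines. This is a minimal illustration (class and method names are our own, not from the slides): each process ticks its counter on local and send events, and on a receive it jumps past the sender's timestamp, so a → b always implies C(a) < C(b).

```python
# Minimal sketch of Lamport logical clocks (illustrative names).
class LamportClock:
    def __init__(self):
        self.time = 0

    def send_event(self):
        # Sending is an event: tick, then attach the timestamp to the message.
        self.time += 1
        return self.time

    def receive_event(self, msg_time):
        # Receive must be ordered after the send: jump past the sender's timestamp.
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
ts = p1.send_event()          # p1 sends a message at logical time 1
recv = p2.receive_event(ts)   # p2 timestamps the receive at logical time 2
assert ts < recv              # C(send) < C(receive), as a -> b requires
```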
• The Algorithm
1. Requesting the critical section: When a site Si wants to enter the CS, it broadcasts a
REQUEST(tsi, i) message to all other sites and places the request on request_queuei.
When a site Sj receives the REQUEST(tsi, i) message from site Si, it places site Si's
request on request_queuej and returns a timestamped REPLY message to Si.
2. Executing the critical section: Site Si enters the CS when the following two
conditions hold:
L1: Si has received a message with timestamp larger than (tsi, i) from all other
sites.
L2: Si's request is at the top of request_queuei.
3. Releasing the critical section:
Site Si, upon exiting the CS, removes its request from the top of its request queue and
broadcasts a timestamped RELEASE message to all other sites.
When a site Sj receives a RELEASE message from site Si, it removes Si's request from
its request queue.
When a site removes a request from its request queue, its own request may come to the
top of the queue, enabling it to enter the CS.
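The three steps above can be sketched as a small simulation. This is a hedged sketch, not a full implementation: class and field names (`latest`, `my_req`, etc.) are our own, and messages are delivered by direct method calls to mimic FIFO channels.

```python
# Sketch of Lamport's mutual exclusion: request queue + conditions L1/L2.
import heapq

class Site:
    def __init__(self, sid, n):
        self.sid = sid
        self.clock = 0
        self.queue = []                                         # request_queue of (ts, id)
        self.latest = {j: (0, j) for j in range(n) if j != sid} # newest msg (ts, id) per site

    def tick(self, ts=0):
        self.clock = max(self.clock, ts) + 1

    def request(self):                      # 1. Requesting: queue own request, broadcast it
        self.tick()
        self.my_req = (self.clock, self.sid)
        heapq.heappush(self.queue, self.my_req)
        return self.my_req

    def on_request(self, req):              # receiver queues it and sends a timestamped REPLY
        self.tick(req[0])
        heapq.heappush(self.queue, req)
        self.latest[req[1]] = req
        self.tick()
        return (self.clock, self.sid)

    def on_reply(self, rep):
        self.tick(rep[0])
        self.latest[rep[1]] = rep

    def can_enter(self):                    # 2. Executing: check L1 and L2
        l1 = all(m > self.my_req for m in self.latest.values())
        l2 = self.queue[0] == self.my_req
        return l1 and l2

s0, s1 = Site(0, 2), Site(1, 2)
req = s0.request()            # S0 broadcasts REQUEST(1, 0)
rep = s1.on_request(req)      # S1 queues it and REPLYs with a larger timestamp
s0.on_reply(rep)
assert s0.can_enter()         # L1 and L2 now hold, so S0 may enter the CS
```

Note how ties on timestamps are broken by site id, since the queue orders `(ts, id)` tuples lexicographically.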
• Correctness
Theorem 1: Lamport's algorithm achieves mutual exclusion.
Proof: The proof is by contradiction. Suppose two sites Si and Sj are executing the CS
concurrently. This implies that at some instant in time, say t, both Si and Sj have their own
requests at the top of their request queues and condition L1 holds at them. Without
loss of generality, assume that Si's request has a smaller timestamp than the request of
Sj.
From condition L1 and FIFO property of the communication channels, it is clear that at
instant t the request of Si must be present in request queuej when Sj was executing its
CS. This implies that Sj ’s own request is at the top of its own request queue when a
smaller timestamp request, Si ’s request, is present in the request queuej – a
contradiction!
Theorem 2: Lamport’s algorithm is
fair. Proof:
The proof is by contradiction. Suppose a site Si's request has a smaller timestamp than
the request of another site Sj, and yet Sj is able to execute the CS before Si.
For Sj to execute the CS, it has to satisfy the conditions L1 and L2. This implies that at
some instant in time say t, Sj has its own request at the top of its queue and it has also
received a message with timestamp larger than the timestamp of its request from all
other sites.
Since the communication channels are FIFO and Sj has received a message from Si with a
timestamp larger than its own request, Si's request must already be present in
request_queuej at instant t. The request queue is ordered by timestamps, and by
assumption Si has the lower timestamp. So Si's request must be placed ahead of Sj's
request in request_queuej. This is a contradiction!
• Performance
For each CS execution, Lamport's algorithm requires (N − 1) REQUEST messages, (N − 1)
REPLY messages, and (N − 1) RELEASE messages. Thus, Lamport's algorithm requires
3(N − 1) messages per CS invocation. The synchronization delay in the algorithm is T
(the average message delay).
An optimization
• In Lamport's algorithm, REPLY messages can be omitted in certain situations. For
example, if site Sj receives a REQUEST message from site Si after it has sent its own
REQUEST message with timestamp higher than the timestamp of site Si's request,
then a REPLY message from Sj to Si is not needed.
This is because when site Si receives site Sj's request with timestamp higher than
its own, it can conclude that site Sj does not have any smaller timestamp request which
is still pending. With this optimization, Lamport's algorithm requires between 2(N − 1)
and 3(N − 1) messages per CS execution.
Ricart-Agrawala Algorithm
• When a process wants to access a shared resource, it builds a message containing the
name of the resource, its process number, and the current (logical) time.
• It then sends the message to all other processes, conceptually including itself.
• When a process receives a request message from another process, Three different
cases have to be clearly distinguished:
1. If the receiver is not accessing the resource and does not want to access it, it
sends back an OK message to the sender.
2. If the receiver already has access to the resource, it simply does not reply.
Instead, it queues the request.
3. If the receiver wants to access the resource as well but has not yet done so, it
compares the timestamp of the incoming message with the one contained in the
message that it has sent everyone. The lowest one wins. If the incoming message
has a lower timestamp, the receiver sends back an OK message. If its own
message has a lower timestamp, the receiver queues the incoming request and
sends nothing.
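The three-way decision a receiver makes can be sketched as a single function. This is an illustrative sketch (the state names and the `deferred` list are our own); ties on timestamps are broken by process id, so comparisons use `(timestamp, pid)` pairs.

```python
# Sketch of the receiver's decision in Ricart-Agrawala (illustrative names).
RELEASED, WANTED, HELD = "released", "wanted", "held"

def on_request(state, my_ts, my_pid, req_ts, req_pid, deferred):
    """Return True to send OK now; otherwise defer (queue) the request."""
    if state == RELEASED:                          # case 1: not using, not wanting
        return True
    if state == HELD:                              # case 2: in the CS -> queue it
        deferred.append((req_ts, req_pid))
        return False
    # case 3: both want it -> lowest (timestamp, pid) wins
    if (req_ts, req_pid) < (my_ts, my_pid):
        return True                                # incoming request wins: send OK
    deferred.append((req_ts, req_pid))             # our request wins: defer theirs
    return False

deferred = []
assert on_request(RELEASED, 0, 1, 5, 2, deferred) is True
assert on_request(HELD, 0, 1, 5, 2, deferred) is False       # queued
assert on_request(WANTED, 7, 1, 5, 2, deferred) is True      # 5 < 7: they win
assert on_request(WANTED, 3, 1, 5, 2, deferred) is False     # 3 < 5: we win
```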
RICART-AGRAWALA ALGORITHM
1. Requesting the critical section:
a. When a site Si wants to enter the CS, it broadcasts a time stamped
REQUEST message to all other sites.
b. When site Sj receives a REQUEST message from site Si, it sends a REPLY message
to site Si if site Sj is neither requesting nor executing the CS, or if site Sj is
requesting and Si's request's timestamp is smaller than site Sj's own
request's timestamp. Otherwise, the reply is deferred.
c. Site Si enters the CS after it has received a REPLY message from every site it sent
a REQUEST message to.
With this algorithm, the number of messages required per CS entry is 2(n − 1), where n is
the total number of processes in the system.
Problem 1 - If any process crashes, it will fail to respond to requests. This silence will
be interpreted incorrectly as denial of permission, blocking all subsequent attempts by
other processes to enter the critical section. Solution: when a request comes
in, the receiver always sends a reply, either granting or denying permission.
Another problem is that either a group
communication primitive must be used, or each process must maintain the group
membership list itself, including processes entering the group, leaving the group, and
crashing.
MAEKAWA'S ALGORITHM
1. Requesting the critical section
• (a) A site Si requests access to the CS by sending REQUEST(i) messages to all sites in
its request set Ri.
• (b) When a site Sj receives the REQUEST (i) message, it sends a REPLY(j) message to Si
provided it hasn’t sent a REPLY message to a site since its receipt of the last RELEASE
message. Otherwise, it queues up the REQUEST(i) for later consideration.
• (c) Site Si executes the CS only after it has received a REPLY message from every site
in its request set Ri.
• (d) After the execution of the CS is over, site Si sends a RELEASE(i) message to every
site in Ri.
• (e) When a site Sj receives a RELEASE(i) message from site Si, it sends a REPLY
message to the next site waiting in the queue and deletes that entry from the queue.
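The key structural property behind Maekawa's algorithm is that every two request sets intersect. A simple (non-optimal) way to see this is the grid construction sketched below, which is an illustration of the intersection property rather than Maekawa's original set construction: arrange N = k × k sites in a k × k grid and let each site's request set be its row plus its column. Any two such sets overlap, and each has size 2k − 1, i.e. O(√N).

```python
# Grid-quorum sketch: pairwise-intersecting request sets of size O(sqrt(N)).
from itertools import combinations

def grid_quorums(k):
    """Request set of site s = its row plus its column in a k x k grid."""
    quorums = []
    for s in range(k * k):
        r, c = divmod(s, k)
        row = {r * k + j for j in range(k)}   # all sites in s's row
        col = {i * k + c for i in range(k)}   # all sites in s's column
        quorums.append(row | col)
    return quorums

qs = grid_quorums(3)                  # N = 9 sites
assert all(len(q) == 5 for q in qs)   # each request set has 2k - 1 = 5 sites
# Every pair of request sets overlaps: the overlap site arbitrates between
# the two requesters, which is what enforces mutual exclusion.
assert all(a & b for a, b in combinations(qs, 2))
```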
• Performance
• Note that the size of a request set is √N. Therefore, an execution of the CS requires √N
REQUEST, √N REPLY, and √N RELEASE messages, resulting in 3√N messages per CS
execution. Synchronization delay in this algorithm is 2T. This is because after a site Si
exits the CS, it first releases all the sites in Ri and then one of those sites sends a
REPLY message to the next site that executes the CS. Thus, two sequential message
transfers are required between one site leaving the CS and the next site entering it.
• Problem of Deadlocks
• Without loss of generality, assume three sites Si, Sj, and Sk simultaneously invoke
mutual exclusion. Suppose Ri ∩ Rj= {Sij}, Rj∩ Rk= {Sjk}, and Rk∩ Ri= {Ski}. Since
sites do not send REQUEST messages to the sites in their request sets in any
particular order and message delays are arbitrary, the following scenario is
possible: Sij has been locked by Si (forcing Sj to wait at Sij), Sjk has been locked by
Sj(forcing Sk to wait at Sjk), and Ski has been locked by Sk(forcing Si to wait at Ski).
This state represents a deadlock involving sites Si, Sj, and Sk.
• Handling Deadlocks
• FAILED: A FAILED message from site Si to site Sj indicates that Si cannot grant Sj’s
request because it has currently granted permission to a site with a higher priority
request.
• INQUIRE: An INQUIRE message from Si to Sj indicates that Si would like to find out
from Sj if it has succeeded in locking all the sites in its request set.
• YIELD: A YIELD message from site Si to Sj indicates that Si is returning the
permission to Sj (to yield to a higher priority request at Sj).
TOKEN BASED ALGORITHMS
SUZUKI-KASAMI'S BROADCAST ALGORITHM
• The Suzuki–Kasami algorithm is a token-based algorithm. In token-based algorithms, a site
is allowed to enter its critical section only if it possesses the unique token.
• Non-token-based algorithms, in contrast, use timestamps to order requests for the critical section.
• Each request for the critical section contains a sequence number. This sequence
number is used to distinguish old requests from current ones.
• In Suzuki-Kasami's algorithm, if a site that wants to enter the CS does not have the
token, it broadcasts a REQUEST message for the token to all other sites. A site
which possesses the token sends it to the requesting site upon the receipt of its
REQUEST message. If the token-holding site is executing the
CS, it sends the token only after it has completed the execution of the CS.
• When a site Si finishes executing the CS while holding the token, it performs the
following steps:
• (d) It sets the LN[i] element of the token array equal to RNi[i] (to indicate
that its request corresponding to sequence number RNi[i] has
been executed).
• (e) For every site Sj whose id is not in the token queue, it appends its id to the token
queue if RNi[j]=LN[j]+1. (site Sj is currently requesting token)
• (f) If the token queue is nonempty after the above update, Si deletes the top site id
from the token queue and sends the token to the site indicated by the id.
Thus, after executing the CS, a site gives priority to other sites with outstanding
requests for the CS.
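The release-time bookkeeping in steps (d)–(f) can be sketched directly. This is a hedged sketch using the usual data-structure names (RN, the per-site request-number array; LN, the token's "last executed" array; and the token's FIFO queue), with the function name `release` our own.

```python
# Sketch of Suzuki-Kasami token release: steps (d), (e), (f) above.
from collections import deque

def release(i, RN, LN, token_queue):
    """Run by site i after it exits the CS, while it holds the token.
    Returns the id of the next token holder, or None if no one is waiting."""
    LN[i] = RN[i]                          # (d) mark i's latest request as served
    for j in range(len(RN)):               # (e) append every site with a pending request
        if j != i and j not in token_queue and RN[j] == LN[j] + 1:
            token_queue.append(j)
    if token_queue:                        # (f) hand the token to the head of the queue
        return token_queue.popleft()
    return None                            # keep the (idle) token

RN = [1, 1, 0]          # site 1 has an outstanding request (RN[1] == LN[1] + 1)
LN = [0, 0, 0]
q = deque()
nxt = release(0, RN, LN, q)
assert LN[0] == 1       # site 0's request is now recorded as executed
assert nxt == 1         # the token goes to site 1
```

The check `RN[j] == LN[j] + 1` is what distinguishes a current, unserved request from an old one that the token has already satisfied.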
Performance
• The beauty of the Suzuki–Kasami algorithm lies in its simplicity and efficiency. No message is
needed and the synchronization delay is zero if a site holds the idle token at the time
of its request. If a site does not hold the token when it makes a request, the algorithm
requires N messages to obtain the token. Synchronization delay in this algorithm is
0 or T.
RAYMOND'S ALGORITHM
SUZUKI–KASAMI ALGORITHM SUMMARY
Token-Based: Only the process holding the token can access the critical
section.
Requesting Critical Section: A process sends a request to all other
processes when it wants to enter the critical section.
Token Passing: The process with the token forwards it to the next
requesting process, chosen using the sequence numbers of outstanding requests.
Critical Section Access: The process with the token enters the critical
section and releases it after completion.
Efficiency: Reduces message complexity, particularly efficient in high-
demand scenarios.
EXAMPLE - Raymond's Tree-Based Algorithm
Initially, P0 holds the token. Also, P0 is the current root. P3 wants the token to get into its critical section.
So, P3 adds itself to its own FIFO queue and sends
a request message to its parent P2.
P2 receives the request from P3. It adds P3 to its FIFO queue and passes the request
message to its parent P1.
P1 receives the request from P2. It adds P3 to its FIFO queue and passes the request
message to its parent P0.
At this point, P2 also wants the token. Since its FIFO queue is not empty, it adds
itself to its own FIFO queue.
P0 receives the request message from P3 through P1. It surrenders the token and passes
it on to P1. It also changes the direction of the arrow between them, making P1 the
root, temporarily.
P1 removes the top element of its FIFO queue to see which node requested the token.
Since the token needs to go to P3, P1 surrenders the token and passes it on to P2. It
also changes the direction of the arrow between them, making P2 the root, temporarily.
P2 removes the top element of its FIFO queue to see which node requested the token.
Since the token needs to go to P3, P2 surrenders the token and passes it on to P3. It
also changes the direction of the arrow between them, making P3 the root.
Now, P3 holds the token and can execute its critical section. It is able to clear the
top (and only) element of its FIFO queue. Note that P3 is the current root. In the
meantime, P2 checks the top element of its FIFO queue and realizes that it also needs
to request the token. So, P2 sends a request message to its current parent, P3, who
appends the request to its FIFO queue.
As soon as P3 completes its critical section, it checks the top element of its FIFO
queue to see if it is needed elsewhere. In this case, P2 has requested it, so P3 sends
it back to P2. It also changes the direction of the arrow between them, making P2 the
new root.
P2 holds the token and is able to complete its critical
section. Then it checks its FIFO queue, which is empty. So
it waits until some other node requests the token.
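The first half of the walkthrough (the token travelling from P0 down to P3, reversing each edge along the way) can be sketched as a small node class. This is a deliberately simplified sketch with our own names: it omits CS release and the "re-request if my queue is still nonempty after passing the token" step that the full algorithm performs (the step P0 would need once P2 also wants the token).

```python
# Sketch of a Raymond's-algorithm node: parent pointer + FIFO request queue.
from collections import deque

entered = []                                  # records which node enters the CS

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent # parent is None only at the token holder
        self.queue = deque()

    def request_cs(self):                     # this node wants its critical section
        self.queue.append(self)
        if self.parent is not None:
            self.parent.on_request(self)
        else:
            self.pass_token()                 # already root: serve immediately

    def on_request(self, sender):
        had_pending = bool(self.queue)
        self.queue.append(sender)
        if self.parent is not None:
            if not had_pending:               # forward at most one request upward
                self.parent.on_request(self)
        else:
            self.pass_token()                 # root holds an idle token: release it

    def pass_token(self):
        nxt = self.queue.popleft()
        if nxt is self:
            entered.append(self.name)         # token arrived here: enter the CS
            return
        self.parent, nxt.parent = nxt, None   # reverse the edge: nxt is the new root
        nxt.pass_token()                      # token keeps moving toward the requester

# Build the chain from the walkthrough: P0 (root) <- P1 <- P2 <- P3.
p0 = Node("P0")
p1 = Node("P1", parent=p0)
p2 = Node("P2", parent=p1)
p3 = Node("P3", parent=p2)
p3.request_cs()
assert entered == ["P3"] and p3.parent is None   # P3 got the token and is the new root
```

After the run, each parent pointer along the path has been reversed (P0 → P1 → P2 → P3), exactly matching the arrow flips in the walkthrough.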