
UNIT-III: SYNCHRONIZATION
Unit-III 08 Hrs.
Synchronization
• Mutual Exclusion, Distributed Mutual Exclusion: Classification of Mutual Exclusion Algorithms, Requirements of Mutual Exclusion Algorithms, Performance Measures.
• Non-Token Based Algorithms: Lamport’s Algorithm, Ricart–Agrawala’s Algorithm, Maekawa’s Algorithm.
• Token Based Algorithms: Suzuki–Kasami’s Broadcast Algorithm, Singhal’s Heuristic Algorithm, Raymond’s Tree-Based Algorithm, Comparative Performance Analysis.

CO3: Evaluate the various techniques used for clock synchronization and mutual exclusion. (L5: Evaluate)

 Distributed Mutual Exclusion
• Mutual exclusion is a fundamental problem in distributed computing
systems.

• Mutual exclusion ensures that concurrent access of processes to a


shared resource or data is serialized, that is, executed in mutually
exclusive manner.

• Distributed mutual exclusion ensures that only one process in a distributed system can access a shared resource at a time. Without a central authority, processes coordinate using message passing to prevent conflicts.

Figure 1: Three processes accessing a shared resource (critical section) simultaneously.

• Mutual exclusion in a distributed system states that only one


process is allowed to execute the critical section (CS) at any given
time.
• Message passing is the sole means for implementing distributed mutual exclusion.
 Classification of Mutual Exclusion Algorithms

• Distributed mutual exclusion algorithms must deal with unpredictable message delays
and incomplete knowledge of the system state.
• Two basic approaches for distributed mutual exclusion:

1. Token based approach

2. Non-token based approach

1. In the token-based approach, a unique token (also known as the PRIVILEGE message) is shared among the sites. A site is allowed to enter its CS (Critical Section) if it possesses the token, and it continues to hold the token until the execution of the CS is over. Mutual exclusion is ensured because the token is unique.

2. In the non-token based approach, two or more successive rounds of messages are exchanged among the sites to determine which site will enter the CS next. A site enters the Critical Section (CS) when an assertion, defined on its local variables, becomes true. Mutual exclusion is enforced because the assertion becomes true only at one site at any given time.


• Objectives of Mutual Exclusion Algorithms

1. Guarantee mutual exclusion (required).

2. Freedom from deadlocks (desirable).

3. Freedom from starvation: every requesting site should get to enter the CS in finite time (desirable).

4. Fairness: requests should be executed in the order of their arrival, which would be based on logical clocks (desirable).

5. Fault tolerance: a failure in the distributed system will be recognized and therefore will not cause any unduly prolonged disruption (desirable).

 Preliminaries

• We describe here:

1. System model

2. Requirements of mutual exclusion algorithms

3. Metrics we use to measure the performance of mutual exclusion algorithms.

1. SYSTEM MODEL
 The system consists of N sites, S1, S2, ..., SN. We assume that a single process is running on each
site.

 The process at site Si is denoted by pi.

 A process wishing to enter the CS sends REQUEST messages to all other processes, or to a subset of them, and waits for appropriate replies before entering the CS. While waiting, the process is not allowed to make further requests to enter the CS.

 A site can be in one of the following three states:

1. Requesting the Critical Section.

2. Executing the Critical Section.

3. Neither requesting nor executing the CS (i.e., idle).

 In the ‘requesting the CS’ state, the site is blocked and cannot make further requests for the CS. In the ‘idle’ state, the site is executing outside the CS.

 In the token-based algorithms, a site can also be in a state where a site holding the token is executing outside the CS. Such a state is referred to as the idle token state.
2. REQUIREMENTS OF MUTUAL EXCLUSION ALGORITHMS

• A mutual exclusion algorithm should satisfy the following properties:

a. Safety Property: The safety property states that at any instant, only one process can execute

the critical section. This is an essential property of a mutual exclusion algorithm.

b. Liveness Property: This property states the absence of deadlock and starvation. Two or

more sites should not endlessly wait for messages which will never arrive.

In addition, a site must not wait indefinitely to execute the CS while other sites are repeatedly

executing the CS. That is, every requesting site should get an opportunity to execute the CS

in finite time.

c. Fairness: Fairness in the context of mutual exclusion means that each process gets a fair chance to execute the CS.


3. PERFORMANCE METRICS
• The performance of mutual exclusion algorithms is generally measured by the following four
metrics:

a. Message complexity
b. Synchronization delay
c. Response time
d. System throughput

a. Message complexity: It is the number of messages that are required per CS execution by a site.

b. Synchronization delay: the time required after a site leaves the CS and before the next site enters the CS (see Figure below).

Figure: Synchronization Delay.
c. Response time: The time interval a request waits for its CS execution to be over after its

request messages have been sent out (see Figure below).

Figure 2: Response Time


d. System throughput: The rate at which the system executes requests for the CS.

system throughput = 1/(SD + E)

where SD is the synchronization delay and E is the average critical section execution time.
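For example (illustrative values, not from the source): if SD = 2 ms and E = 8 ms, the system throughput is 1/(0.002 + 0.008) = 100 CS executions per second.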

• Low and High Load Performance:

• We often study the performance of mutual exclusion algorithms under two special loading

conditions, viz., “low load” and “high load”.

• The load is determined by the arrival rate of CS execution requests.

• Under low load conditions - there is rarely more than one request for the critical section present in the system at any time.

• Under heavy load conditions - there is always a pending request for critical section at a site.

• Lamport Algorithm - To synchronize logical clocks, Lamport defined a relation called happens-before. The expression a → b is read "a happens before b" and means that all processes agree that first event a occurs, then afterward, event b occurs.

• The happens-before relation can be observed directly in two situations:

• If a and b are events in the same process, and a occurs before b, then a → b is true.

• If a is the event of a message being sent by one process, and b is the event of the message being received by another process, then a → b is also true. A message cannot be received before it is sent, or even at the same time it is sent, since it takes a finite, nonzero amount of time to arrive.
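A minimal sketch of the corresponding clock-update rules in Python (the class and method names are illustrative, not from the source):

```python
class LamportClock:
    """Lamport's logical clock: local events tick the clock; a receive
    jumps past the sender's timestamp, so the clock ordering respects
    the happens-before relation."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule for a local event (including a send): advance the clock.
        self.time += 1
        return self.time

    def send(self):
        # A send is an event; the message carries this timestamp.
        return self.tick()

    def receive(self, msg_time):
        # On receipt, move strictly past both our own clock and the
        # sender's timestamp, preserving send -> receive ordering.
        self.time = max(self.time, msg_time) + 1
        return self.time
```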


• Lamport’s Algorithm

• The algorithm is fair in the sense that requests for the CS are executed in the order of their timestamps, where time is determined by logical clocks.

• When a site processes a request for the CS, it updates its local clock and assigns

the request a timestamp.

• The algorithm executes CS requests in the increasing order of timestamps.

• Every site Si keeps a queue, request_queuei, which contains mutual exclusion requests ordered by their timestamps.

• This algorithm requires the communication channels to deliver messages in FIFO order.
• The Algorithm

1. Requesting the critical section:


 When a site Si wants to enter the CS, it broadcasts a REQUEST(tsi, i) message to all other sites and places the request on request_queuei. ((tsi, i) denotes the timestamp of the request.)

 When a site Sj receives the REQUEST(tsi, i) message from site Si, it places site Si’s request on request_queuej and returns a timestamped REPLY message to Si.

2. Executing the critical section: Site Si enters the CS when the following two conditions hold:

 L1: Si has received a message with timestamp larger than (tsi, i) from all other sites.

 L2: Si’s request is at the top of request_queuei.
3. Releasing the critical section:

 Site Si, upon exiting the CS, removes its request from the top of its request queue and

broadcasts a time stamped RELEASE message to all other sites.

 When a site Sj receives a RELEASE message from site Si , it removes Si ’s request from

its request queue.

When a site removes a request from its request_queue, its own request may come to the top of the queue, enabling it to enter the CS.
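A minimal sketch of one site’s side of these three steps, in Python. The `send(dest, msg)` callback, the tuple message format, and the surrounding dispatch loop are assumptions for illustration; channels are assumed to be FIFO, as the algorithm requires:

```python
import heapq

class LamportMutex:
    """One site in Lamport's algorithm (sketch). Messages are
    ('REQUEST'|'REPLY'|'RELEASE', timestamp, sender) tuples."""
    def __init__(self, site_id, n_sites, send):
        self.i, self.n, self.send = site_id, n_sites, send
        self.clock = 0
        self.queue = []          # min-heap of (timestamp, site_id)
        self.replies = set()

    def _recv(self, ts):
        self.clock = max(self.clock, ts) + 1

    def request_cs(self):
        self.clock += 1
        heapq.heappush(self.queue, (self.clock, self.i))
        self.replies.clear()
        for j in range(self.n):
            if j != self.i:
                self.send(j, ('REQUEST', self.clock, self.i))

    def on_request(self, ts, j):
        self._recv(ts)
        heapq.heappush(self.queue, (ts, j))
        self.send(j, ('REPLY', self.clock, self.i))

    def on_reply(self, ts, j):
        self._recv(ts)
        self.replies.add(j)

    def can_enter(self):
        # L1: a larger-timestamped message from every other site
        # (REPLYs suffice under FIFO channels); L2: own request on top.
        return len(self.replies) == self.n - 1 and self.queue[0][1] == self.i

    def release_cs(self):
        heapq.heappop(self.queue)            # own request is on top
        self.clock += 1
        for j in range(self.n):
            if j != self.i:
                self.send(j, ('RELEASE', self.clock, self.i))

    def on_release(self, ts, j):
        self._recv(ts)
        self.queue = [e for e in self.queue if e[1] != j]
        heapq.heapify(self.queue)
```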

• Correctness

Theorem 1: Lamport’s algorithm achieves mutual exclusion.

 Proof is by contradiction. Suppose two sites Si and Sj are executing the CS


concurrently. For this to happen conditions L1 and L2 must hold at both the sites
concurrently.

 This implies that at some instant in time, say t, both Si and Sj have their own
requests at the top of their request queues and condition L1 holds at them. Without
loss of generality, assume that Si ’s request has smaller timestamp than the request of
Sj .

 From condition L1 and the FIFO property of the communication channels, it is clear that at instant t the request of Si must be present in request_queuej when Sj was executing its CS. This implies that Sj’s own request is at the top of its own request_queue while a smaller timestamp request, Si’s request, is present in request_queuej – a contradiction!
Theorem 2: Lamport’s algorithm is fair.

Proof:

 The proof is by contradiction. Suppose a site Si ’s request has a smaller timestamp than

the request of another site Sj and Sj is able to execute the CS before Si .

 For Sj to execute the CS, it has to satisfy the conditions L1 and L2. This implies that at

some instant in time say t, Sj has its own request at the top of its queue and it has also

received a message with timestamp larger than the timestamp of its request from all

other sites.

 But the request queue at a site is ordered by timestamps, and according to our assumption Si has the lower timestamp. So Si’s request must be placed ahead of Sj’s request in request_queuej. This is a contradiction!
• Performance

 For each CS execution, Lamport’s algorithm requires (N − 1) REQUEST messages,

(N − 1) REPLY messages, and (N − 1) RELEASE messages.

 Thus, Lamport’s algorithm requires 3(N − 1) messages per CS invocation.
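(With N = 5 sites, for example, this comes to 3 × 4 = 12 messages per CS invocation.)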

 Synchronization delay in the algorithm is T, where T is the average message delay.
 An optimization

 In Lamport’s algorithm, REPLY messages can be omitted in certain situations. For

example, if site Sj receives a REQUEST message from site Si after it has sent its own

REQUEST message with timestamp higher than the timestamp of site Si ’s request,

then site Sj need not send a REPLY message to site Si .

 This is because when site Si receives site Sj ’s request with timestamp higher than

its own, it can conclude that site Sj does not have any smaller timestamp request which

is still pending.

 Ricart-Agrawala Algorithm

• When a process wants to access a shared resource, it builds a message containing the
name of the resource, its process number, and the current (logical) time.

• It then sends the message to all other processes, conceptually including itself.

• When a process receives a request message from another process, three different cases have to be clearly distinguished:

1. If the receiver is not accessing the resource and does not want to access it, it
sends back an OK message to the sender.

2. If the receiver already has access to the resource, it simply does not reply.
Instead, it queues the request.

3. If the receiver wants to access the resource as well but has not yet done so, it compares the timestamp of the incoming message with the one contained in the message that it has sent everyone. The lowest one wins. If the incoming message has a lower timestamp, the receiver sends back an OK message. If its own message has a lower timestamp, the receiver queues the incoming request.
RICART-AGRAWALA ALGORITHM
1. Requesting the critical section:
(a) When a site Si wants to enter the CS, it broadcasts a timestamped REQUEST message to all other sites.
(b) When site Sj receives a REQUEST message from site Si, it sends a REPLY message to site Si if site Sj is neither requesting nor executing the CS, or if site Sj is requesting and Si’s request’s timestamp is smaller than site Sj’s own request’s timestamp. Otherwise, the reply is deferred.

2. Executing the critical section:
(c) Site Si enters the CS after it has received a REPLY message from every site it sent a REQUEST message to.

3. Releasing the critical section:
(d) When site Si exits the CS, it sends all the deferred REPLY messages.
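A minimal sketch of one site in Python, under the same assumptions as the Lamport sketch earlier (a hypothetical `send` primitive and tuple messages); ties on timestamps are broken by site id:

```python
class RicartAgrawala:
    """One site in the Ricart-Agrawala algorithm (sketch)."""
    def __init__(self, site_id, n_sites, send):
        self.i, self.n, self.send = site_id, n_sites, send
        self.clock = 0
        self.requesting = False   # true from request until release
        self.my_ts = None
        self.replies = 0
        self.deferred = []        # sites answered only on exit

    def request_cs(self):
        self.clock += 1
        self.my_ts, self.requesting, self.replies = self.clock, True, 0
        for j in range(self.n):
            if j != self.i:
                self.send(j, ('REQUEST', self.my_ts, self.i))

    def on_request(self, ts, j):
        self.clock = max(self.clock, ts) + 1
        # Defer the REPLY if our own pending request has priority
        # (smaller timestamp, site id as tie-breaker); else reply now.
        if self.requesting and (self.my_ts, self.i) < (ts, j):
            self.deferred.append(j)
        else:
            self.send(j, ('REPLY', self.i))

    def on_reply(self, j):
        self.replies += 1         # enter the CS once replies == n - 1

    def release_cs(self):
        self.requesting = False
        for j in self.deferred:   # send all deferred REPLY messages
            self.send(j, ('REPLY', self.i))
        self.deferred.clear()
```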
RICART-AGRAWALA ALGORITHM
 Advantage - mutual exclusion is guaranteed without deadlock or starvation. The

number of messages required per entry is now 2(n - 1), where the total number of

processes in the system is n.

 Problem 1 - If any process crashes, it will fail to respond to requests. This silence will

be interpreted (incorrectly) as denial of permission – Solution: When a request comes

in, the receiver always sends a reply, either granting or denying permission.

 Problem 2 - Another problem with this algorithm is that either a multicast communication primitive must be used, or each process must maintain the group membership list itself, including processes entering the group, leaving the group, and crashing.
MAEKAWA’S ALGORITHM

MAEKAWA’S ALGORITHM

1. Requesting the critical section

• (a) A site Si requests access to the CS by sending REQUEST(i) messages to all sites in
its request set Ri.

• (b) When a site Sj receives the REQUEST (i) message, it sends a REPLY(j) message to Si
provided it hasn’t sent a REPLY message to a site since its receipt of the last RELEASE
message. Otherwise, it queues up the REQUEST(i) for later consideration.

2. Executing the critical section


• (c) Site Si executes the CS only after it has received a REPLY message from every site
in Ri.

3. Releasing the critical section

• (d) After the execution of the CS is over, site Si sends a RELEASE (i) message to every
site in Ri.
• (e) When a site Sj receives a RELEASE(i) message from site Si, it sends a REPLY message to the next site waiting in the queue and deletes that entry from the queue.
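A minimal sketch of one site in Python, combining the requester and arbiter roles above. `send` and the request set Ri are assumed inputs, and the FAILED/INQUIRE/YIELD deadlock-handling messages described later are omitted:

```python
from collections import deque

class Maekawa:
    """One site in Maekawa's basic algorithm (sketch): each site is
    both a requester and an arbiter for the sites whose request sets
    include it."""
    def __init__(self, site_id, request_set, send):
        self.i, self.Ri, self.send = site_id, request_set, send
        self.locked_for = None    # site currently holding our grant
        self.waiting = deque()    # queued REQUESTs
        self.grants = set()

    def request_cs(self):
        self.grants.clear()
        for j in self.Ri:
            self.send(j, ('REQUEST', self.i))

    def on_request(self, j):
        # Grant only if no REPLY is outstanding since the last RELEASE.
        if self.locked_for is None:
            self.locked_for = j
            self.send(j, ('REPLY', self.i))
        else:
            self.waiting.append(j)

    def on_reply(self, j):
        self.grants.add(j)        # enter the CS once grants == set(Ri)

    def release_cs(self):
        for j in self.Ri:
            self.send(j, ('RELEASE', self.i))

    def on_release(self, j):
        # Pass our permission to the next waiting site, if any.
        self.locked_for = self.waiting.popleft() if self.waiting else None
        if self.locked_for is not None:
            self.send(self.locked_for, ('REPLY', self.i))
```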
• Performance

• Note that the size of a request set is √N. Therefore, an execution of the CS requires √N

REQUEST, √N REPLY, and √N RELEASE messages, resulting in 3√N messages per CS

execution. Synchronization delay in this algorithm is 2T. This is because after a site Si

exits the CS, it first releases all the sites in Ri and then one of those sites sends a

REPLY message to the next site that executes the CS. Thus, two sequential message

transfers are required between two successive CS executions.
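For illustration only (not in the source): a simple grid construction shows how pairwise-intersecting request sets of size O(√N) can be built. Maekawa’s original construction uses finite projective planes to get sets of size about √N; the grid version below gives 2√N − 1 but is easier to see:

```python
def grid_request_set(i, k):
    """Request set of site i when N = k*k sites are arranged in a
    k x k grid: the union of i's row and column. Any two such sets
    intersect, which is what enforces mutual exclusion."""
    row, col = divmod(i, k)
    return {row * k + c for c in range(k)} | {r * k + col for r in range(k)}

# e.g. N = 9 (k = 3): the request set of site 4 (the grid centre)
print(sorted(grid_request_set(4, 3)))   # -> [1, 3, 4, 5, 7]
```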

• Problem of Deadlocks

• Without the loss of generality, assume three sites Si, Sj, and Sk simultaneously invoke

mutual exclusion. Suppose Ri ∩ Rj = {Sij}, Rj ∩ Rk = {Sjk}, and Rk ∩ Ri = {Ski}. Since

sites do not send REQUEST messages to the sites in their request sets in any

particular order and message delays are arbitrary, the following scenario is

possible: Sij has been locked by Si (forcing Sj to wait at Sij), Sjk has been locked by

Sj (forcing Sk to wait at Sjk), and Ski has been locked by Sk (forcing Si to wait at Ski).

This state represents a deadlock involving sites Si, Sj, and Sk.

• Handling Deadlocks

• FAILED: A FAILED message from site Si to site Sj indicates that Si cannot grant Sj’s

request because it has currently granted permission to a site with a higher priority

request.

• INQUIRE: An INQUIRE message from Si to Sj indicates that Si would like to find out

from Sj if it has succeeded in locking all the sites in its request set.

• YIELD: A YIELD message from site Si to Sj indicates that Si is returning the permission to Sj (to yield to a higher priority request at Sj).
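An illustrative exchange (an assumed scenario, not from the source): suppose arbiter Sij has granted Si’s request and then receives a higher-priority (smaller-timestamp) request from Sj. Sij sends an INQUIRE to Si; if Si cannot currently succeed in locking its whole request set (for example, it has received a FAILED from some other member), it replies with a YIELD, after which Sij grants Sj and re-queues Si’s request.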

TOKEN BASED ALGORITHMS

1. Suzuki–Kasami’s Broadcast Algorithm

2. Singhal’s Heuristic Algorithm

3. Raymond’s Tree-Based Algorithm

SUZUKI–KASAMI’S BROADCAST ALGORITHM
• The Suzuki–Kasami algorithm is a token-based algorithm. In token-based algorithms, a site is allowed to enter its critical section if it possesses the unique token.

• Non-token-based algorithms use timestamps to order requests for the critical section, whereas sequence numbers are used in token-based algorithms.

• Each request for the critical section contains a sequence number. This sequence number is used to distinguish old and current requests.

• In Suzuki–Kasami’s algorithm, if a site that wants to enter the CS does not have the

token, it broadcasts a REQUEST message for the token to all other sites. A site

which possesses the token sends it to the requesting site upon the receipt of its

REQUEST message. If a site receives a REQUEST message when it is executing the

CS, it sends the token only after it has completed the execution of the CS.

• The following two design issues must be addressed:

• How to distinguish an outdated REQUEST message from a current REQUEST message.

• How to determine which site has an outstanding request for the CS.


SUZUKI–KASAMI ALGORITHM
1) Requesting the critical section
• (a) If the requesting site Si does not have the token, then it increments its sequence number, RNi[i], and sends a REQUEST(i, sn) message to all other sites. (‘sn’ is the updated value of RNi[i].)
• (b) When a site Sj receives this message, it sets RNj[i] to max(RNj[i], sn). If Sj has the idle token, then it sends the token to Si if RNj[i] = LN[i] + 1 (i.e., site Si is currently requesting the token).

2) Executing the critical section


• (c) Site Si executes the CS after it has received the token.
3) Releasing the critical section - site Si takes the following actions:
• (d) It sets the LN[i] element of the token array equal to RNi[i] (to indicate that its request corresponding to sequence number RNi[i] has been executed).
• (e) For every site Sj whose id is not in the token queue, it appends Sj’s id to the token queue if RNi[j] = LN[j] + 1 (i.e., site Sj is currently requesting the token).
• (f) If the token queue is nonempty after the above update, Si deletes the top site id from the token queue and sends the token to the site indicated by that id.

Thus, after executing the CS, a site gives priority to other sites with outstanding requests for the CS.
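A minimal sketch of one site in Python. The token is modeled as a dict carrying the LN array and the queue Q; `send` is an assumed primitive, and for brevity `on_request` hands over an idle token immediately (a fuller implementation would also defer the token while the holder is inside the CS):

```python
from collections import deque

class SuzukiKasami:
    """One site in the Suzuki-Kasami algorithm (sketch). The token
    carries the LN array and a FIFO queue Q of waiting sites."""
    def __init__(self, site_id, n_sites, send, has_token=False):
        self.i, self.n, self.send = site_id, n_sites, send
        self.RN = [0] * n_sites            # highest request number seen
        self.token = ({'LN': [0] * n_sites, 'Q': deque()}
                      if has_token else None)

    def request_cs(self):
        if self.token is not None:
            return                          # idle token held: enter CS
        self.RN[self.i] += 1
        for j in range(self.n):
            if j != self.i:
                self.send(j, ('REQUEST', self.i, self.RN[self.i]))

    def on_request(self, j, sn):
        self.RN[j] = max(self.RN[j], sn)    # ignores outdated requests
        tok = self.token
        if tok is not None and self.RN[j] == tok['LN'][j] + 1:
            self.token = None               # hand over the idle token
            self.send(j, ('TOKEN', tok))

    def release_cs(self):
        tok = self.token
        tok['LN'][self.i] = self.RN[self.i]      # own request satisfied
        for j in range(self.n):                  # enqueue outstanding
            if j != self.i and j not in tok['Q'] \
                    and self.RN[j] == tok['LN'][j] + 1:
                tok['Q'].append(j)
        if tok['Q']:
            nxt = tok['Q'].popleft()
            self.token = None
            self.send(nxt, ('TOKEN', tok))
```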
 Performance
• The beauty of the Suzuki–Kasami algorithm lies in its simplicity and efficiency. No message is needed and the synchronization delay is zero if a site holds the idle token at the time of its request. If a site does not hold the token when it makes a request, the algorithm requires N messages to obtain the token. Synchronization delay in this algorithm is 0 or T.
RAYMOND’S ALGORITHM

SUZUKI–KASAMI ALGORITHM: SUMMARY
 Token-Based: Only the process holding the token can access the critical
section.
 Requesting Critical Section: A process sends a request to all other
processes when it wants to enter the critical section.
 Token Passing: The process with the token forwards it to the next requesting process, as recorded by the sequence numbers and the token queue.
 Critical Section Access: The process with the token enters the critical
section and releases it after completion.
 Efficiency: Reduces message complexity, particularly efficient in high-
demand scenarios.
EXAMPLE (Raymond’s tree-based algorithm) -

Initially, P0 holds the token. Also, P0 is the current root. P3 wants the token to get into its critical section. So, P3 adds itself to its own FIFO queue and sends a request message to its parent P2.
P2 receives the request from P3. It adds P3 to its FIFO queue and passes the request message to its parent P1.

P1 receives the request from P2. It adds P3 to its FIFO queue and passes the request message to its parent P0.
At this point, P2 also wants the token. Since its FIFO queue is not empty, it adds itself to its own FIFO queue.

P0 receives the request message from P3 through P1. It surrenders the token and passes it on to P1. It also changes the direction of the arrow between them, making P1 the root, temporarily.
P1 removes the top element of its FIFO queue to see which node requested the token. Since the token needs to go to P3, P1 surrenders the token and passes it on to P2. It also changes the direction of the arrow between them, making P2 the root, temporarily.

P2 removes the top element of its FIFO queue to see which node requested the token. Since the token needs to go to P3, P2 surrenders the token and passes it on to P3. It also changes the direction of the arrow between them, making P3 the root.
Now, P3 holds the token and can execute its critical section. It is able to clear the top (and only) element of its FIFO queue. Note that P3 is the current root. In the meantime, P2 checks the top element of its FIFO queue and realizes that it also needs to request the token. So, P2 sends a request message to its current parent, P3, who appends the request to its FIFO queue.

As soon as P3 completes its critical section, it checks the top element of its FIFO queue to see if it is needed elsewhere. In this case, P2 has requested it, so P3 sends it back to P2. It also changes the direction of the arrow between them, making P2 the new root.
P2 holds the token and is able to complete its critical
section. Then it checks its FIFO queue, which is empty. So
it waits until some other node requests the token.
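A minimal sketch of one node in Python matching this walkthrough: each node keeps a FIFO queue and a parent pointer (`parent is None` marks the current root/token holder), and `send` is an assumed primitive. For brevity, a root that is idle when a request arrives hands over the token on release rather than immediately:

```python
from collections import deque

class RaymondNode:
    """One node in Raymond's tree-based algorithm (sketch)."""
    def __init__(self, node_id, parent, send):
        self.i, self.parent, self.send = node_id, parent, send
        self.queue = deque()      # FIFO queue of pending requesters
        self.asked = False        # REQUEST already forwarded upward?

    def _forward_request(self):
        if self.parent is not None and not self.asked:
            self.send(self.parent, ('REQUEST', self.i))
            self.asked = True

    def request_cs(self):
        self.queue.append(self.i)
        self._forward_request()

    def on_request(self, j):
        self.queue.append(j)
        self._forward_request()

    def on_token(self):
        self.asked = False
        head = self.queue.popleft()
        if head == self.i:
            self.parent = None            # we become the root: enter CS
        else:
            self.send(head, ('TOKEN',))   # pass the token toward head
            self.parent = head            # arrow now points at the token
            if self.queue:                # others are still waiting here
                self._forward_request()

    def release_cs(self):
        if self.queue:
            nxt = self.queue.popleft()
            self.send(nxt, ('TOKEN',))
            self.parent = nxt
            if self.queue:
                self._forward_request()
```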
