
Distributed System Lecture 6

The document discusses distributed mutual exclusion in concurrent systems, emphasizing the need for mutual exclusion to ensure that only one process can access a critical section at a time. It outlines three basic approaches: token-based, non-token-based, and quorum-based methods, along with the requirements and performance metrics for mutual exclusion algorithms. Lamport's algorithm is presented as a method for ensuring mutual exclusion through timestamped requests, and a proof of its correctness and fairness is provided.


LECTURE 6

DISTRIBUTED MUTUAL EXCLUSION

Prof. D. S. Yadav
Department of Computer Science
IET Lucknow
Distributed Computing: Principles, Algorithms, and Systems

INTRODUCTION
• Mutual exclusion: concurrent access by processes to a shared resource or data is executed in a mutually exclusive manner.
• Only one process is allowed to execute the critical section (CS) at any given time.
• In a distributed system, shared variables (semaphores) or a local kernel cannot be used to implement mutual exclusion.
• Message passing is the sole means for implementing distributed mutual exclusion.


Distributed mutual exclusion algorithms must deal with unpredictable message delays and incomplete knowledge of the system state.

Three basic approaches for distributed mutual exclusion:

1. Token-based approach
2. Non-token-based approach
3. Quorum-based approach

Token-based approach:
◮ A unique token is shared among the sites.
◮ A site is allowed to enter its CS if it possesses the token.
◮ Mutual exclusion is ensured because the token is unique.


Non-token-based approach:
◮ Two or more successive rounds of messages are exchanged among the sites to determine which site will enter the CS next.

Quorum-based approach:
◮ Each site requests permission to execute the CS from a subset of sites (called a quorum).
◮ Any two quorums contain a common site.
◮ This common site is responsible for making sure that only one request executes the CS at any time.
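The intersection property can be checked mechanically. A minimal sketch, assuming simple majority quorums (one common construction; the site names and quorum size below are illustrative, not from the lecture):

```python
from itertools import combinations

def majority_quorums(sites):
    """All subsets of size floor(N/2) + 1; any two such subsets must overlap."""
    k = len(sites) // 2 + 1
    return [set(q) for q in combinations(sites, k)]

sites = ["S1", "S2", "S3", "S4", "S5"]
quorums = majority_quorums(sites)

# Verify the quorum intersection property: every pair of quorums shares a site.
assert all(q1 & q2 for q1, q2 in combinations(quorums, 2))
print(len(quorums), "quorums; pairwise intersection holds")
```

Majority quorums are only one way to satisfy the property; smaller quorums (e.g., grid-based constructions) also work, as long as every pair of quorums intersects.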


PRELIMINARIES

System Model
• The system consists of N sites, S1, S2, ..., SN.
• We assume that a single process runs on each site; the process at site Si is denoted by pi.
• A site can be in one of three states: requesting the CS, executing the CS, or neither requesting nor executing the CS (i.e., idle).
• In the 'requesting the CS' state, the site is blocked and cannot make further requests for the CS. In the 'idle' state, the site is executing outside the CS.
• In token-based algorithms, a site can also be in an additional state in which it holds the token while executing outside the CS (called the idle token state).
• At any instant, a site may have several pending requests for the CS. A site queues up these requests and serves them one at a time.
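The site states above (the usual three plus the token-specific one) can be captured in a small enumeration; a sketch with hypothetical names, just to make the state space explicit:

```python
from enum import Enum, auto

class SiteState(Enum):
    REQUESTING = auto()   # blocked; waiting to enter the CS, no further requests allowed
    EXECUTING = auto()    # currently inside the CS
    IDLE = auto()         # neither requesting nor executing (outside the CS)
    IDLE_TOKEN = auto()   # token-based algorithms only: holds the token while outside the CS

print([s.name for s in SiteState])
```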


REQUIREMENTS

Requirements of Mutual Exclusion Algorithms

1. Safety: At any instant, only one process can execute the critical section.
2. Liveness: This property states the absence of deadlock and starvation. Two or more sites should not endlessly wait for messages that will never arrive.
3. Fairness: Each process gets a fair chance to execute the CS. Fairness generally means that CS execution requests are executed in the order of their arrival in the system (time is determined by a logical clock).


PERFORMANCE METRICS
The performance of a mutual exclusion algorithm is generally measured by the following four metrics:

• Message complexity: the number of messages required per CS execution by a site.
• Synchronization delay: the time after one site leaves the CS and before the next site enters the CS.

[Figure: synchronization delay, shown on a time axis as the interval between the last site exiting the CS and the next site entering it.]
• Response time: the time interval a request waits for its CS execution to be over after its request messages have been sent out.

[Figure: response time, shown on a time axis from the moment the request messages are sent out until the site exits the CS; the final portion is the CS execution time.]

• System throughput: the rate at which the system executes requests for the CS:

      system throughput = 1 / (SD + E)

  where SD is the synchronization delay and E is the average critical-section execution time.
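Plugging hypothetical numbers into the throughput formula (the delay and execution-time values below are assumed, purely for illustration):

```python
SD = 0.002   # synchronization delay in seconds (assumed value)
E = 0.008    # average CS execution time in seconds (assumed value)

# system throughput = 1 / (SD + E)
throughput = 1 / (SD + E)
print(f"{throughput:.0f} CS executions per second")  # prints "100 CS executions per second"
```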

Low- and High-Load Performance:

• The performance of mutual exclusion algorithms is often studied under two special loading conditions, viz., "low load" and "high load".
• The load is determined by the arrival rate of CS execution requests.
• Under low-load conditions, there is seldom more than one request for the critical section present in the system simultaneously.
• Under high-load conditions, there is always a pending request for the critical section at a site.


LAMPORT’S ALGORITHM

• Requests for the CS are executed in increasing order of their timestamps, and time is determined by logical clocks.
• Every site Si keeps a queue, request_queue_i, which contains mutual exclusion requests ordered by their timestamps.
• The algorithm requires communication channels to deliver messages in FIFO order.
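Because two sites can draw the same logical-clock value, a request timestamp is the pair (ts_i, i), ordered first by timestamp and then by site id. In Python this total order falls out of lexicographic tuple comparison (the concrete values below are made up):

```python
# Requests as (timestamp, site_id) pairs; the values are illustrative only.
requests = [(3, 2), (1, 3), (3, 1), (2, 2)]

# Sorting orders by timestamp first, breaking ties by the smaller site id.
requests.sort()
print(requests)  # [(1, 3), (2, 2), (3, 1), (3, 2)]
```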


THE ALGORITHM

Requesting the critical section:

• When a site Si wants to enter the CS, it broadcasts a REQUEST(ts_i, i) message to all other sites and places the request on request_queue_i. ((ts_i, i) denotes the timestamp of the request.)
• When a site Sj receives the REQUEST(ts_i, i) message from site Si, it places site Si's request on request_queue_j and returns a timestamped REPLY message to Si.

Executing the critical section: Site Si enters the CS when the following two conditions hold:

L1: Si has received a message with timestamp larger than (ts_i, i) from all other sites.
L2: Si's request is at the top of request_queue_i.

Releasing the critical section:

• Site Si, upon exiting the CS, removes its request from the top of its request queue and broadcasts a timestamped RELEASE message to all other sites.
• When a site Sj receives a RELEASE message from site Si, it removes Si's request from its request queue.
• When a site removes a request from its request queue, its own request may come to the top of the queue, enabling it to enter the CS.
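The three phases above can be stitched together in a single-threaded sketch. Everything here is scaffolding of my own (the Site class, the channel layout, and the three-site driver are illustrative, not from the lecture); per-pair deques stand in for FIFO channels:

```python
from collections import defaultdict, deque

class Site:
    def __init__(self, sid, n, channels):
        self.sid, self.n, self.channels = sid, n, channels
        self.clock = 0          # Lamport logical clock
        self.queue = []         # request_queue: sorted list of (ts, site_id)
        self.last = [0] * n     # highest timestamp received from each site

    def send(self, dst, kind, req=None):
        self.clock += 1
        self.channels[(self.sid, dst)].append((self.clock, self.sid, kind, req))

    def recv_all(self):
        for src in range(self.n):
            chan = self.channels[(src, self.sid)]
            while chan:                      # FIFO delivery per channel
                ts, s, kind, req = chan.popleft()
                self.clock = max(self.clock, ts) + 1
                self.last[s] = ts
                if kind == "REQUEST":        # enqueue the request, answer with a REPLY
                    self.queue.append(req)
                    self.queue.sort()
                    self.send(s, "REPLY")
                elif kind == "RELEASE":      # drop the releasing site's request
                    self.queue.remove(req)

    def request_cs(self):
        self.clock += 1
        req = (self.clock, self.sid)
        self.queue.append(req)
        self.queue.sort()
        for j in range(self.n):
            if j != self.sid:                # broadcast REQUEST(ts_i, i)
                self.channels[(self.sid, j)].append((req[0], self.sid, "REQUEST", req))
        return req

    def can_enter(self, req):
        # L1: a later-timestamped message has arrived from every other site.
        l1 = all(self.last[j] > req[0] for j in range(self.n) if j != self.sid)
        # L2: own request is at the top of the local request queue.
        return l1 and bool(self.queue) and self.queue[0] == req

    def release_cs(self, req):
        self.queue.remove(req)
        for j in range(self.n):
            if j != self.sid:                # broadcast a timestamped RELEASE
                self.send(j, "RELEASE", req)

channels = defaultdict(deque)
sites = [Site(i, 3, channels) for i in range(3)]
r0, r1 = sites[0].request_cs(), sites[1].request_cs()
for _ in range(3):                           # pump message delivery a few rounds
    for s in sites:
        s.recv_all()
assert sites[0].can_enter(r0) and not sites[1].can_enter(r1)  # only S0 may enter
sites[0].release_cs(r0)
for s in sites:
    s.recv_all()
assert sites[1].can_enter(r1)                # after the RELEASE, it is S1's turn
print("mutual exclusion and timestamp order respected")
```

Note how the tie between the two equal-timestamp requests is broken by site id via tuple comparison, so S0's request sits at the top of every queue until its RELEASE arrives.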


THE EXAMPLE

[Figure: an example execution of Lamport's algorithm; the figure is not reproduced in this text.]

CORRECTNESS
Theorem: Lamport’s algorithm achieves mutual exclusion.
Proof:
The proof is by contradiction. Suppose two sites Si and Sj are executing the CS concurrently. For this to happen, conditions L1 and L2 must hold at both sites concurrently.

This implies that at some instant in time, say t, both Si and Sj have their own requests at the top of their request queues and condition L1 holds at both of them. Without loss of generality, assume that Si's request has a smaller timestamp than the request of Sj.

From condition L1 and the FIFO property of the communication channels, it is clear that at instant t the request of Si must be present in request_queue_j while Sj is executing its CS. This implies that Sj's own request is at the top of its own request queue while a request with a smaller timestamp, Si's request, is present in request_queue_j. This is a contradiction!


Theorem: Lamport’s algorithm is fair.


Proof:
The proof is by contradiction. Suppose site Si's request has a smaller timestamp than the request of another site Sj, yet Sj is able to execute the CS before Si.

For Sj to execute the CS, it has to satisfy conditions L1 and L2. This implies that at some instant in time, say t, Sj has its own request at the top of its queue and has also received a message with a timestamp larger than that of its own request from all other sites.

But the request queue at a site is ordered by timestamp, and by our assumption Si has the lower timestamp, so Si's request must be placed ahead of Sj's request in request_queue_j. This is a contradiction!

PERFORMANCE

• For each CS execution, Lamport's algorithm requires (N − 1) REQUEST messages, (N − 1) REPLY messages, and (N − 1) RELEASE messages.
• Thus, Lamport's algorithm requires 3(N − 1) messages per CS invocation. The synchronization delay of the algorithm is T, the average message delay.
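The message count is easy to tabulate; a trivial sketch:

```python
def lamport_messages(n):
    # (N-1) REQUEST + (N-1) REPLY + (N-1) RELEASE messages per CS invocation
    return 3 * (n - 1)

for n in (2, 5, 10):
    print(n, "sites ->", lamport_messages(n), "messages")  # 3, 12, 27
```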
