
G H Raisoni College of Engineering, Nagpur

Department of Computer Science & Engineering


Session 2015-16, summer 2016

DOS TAE 2
Report On Nptel Lecture On Distributed Operating System

Name of Topic: Distributed Mutual Exclusion


Name of Faculty: Prof. Pallab Dasgupta (C.S.E., IIT Kharagpur)
---------------------------------------------------------------------------------------------------------------------
Mutual exclusion refers to the requirement of ensuring that no two concurrent processes are in
their critical section at the same time.
Requirements:
– at most one process in critical section (safety)
– if more than one requesting process, someone enters (liveness)
– a requesting process enters within a finite time (no starvation)
– requests are granted in order (fairness)

Types of Distributed Mutual Exclusion Algorithms


• Non-token based / Permission based
– Permission from all processes:
E.g. Lamport, Ricart-Agrawala algorithms
– Permission from a subset: ex. Maekawa Algorithm
• Token based
– ex.: Suzuki-Kasami

Lamport's Algorithm

• Every node i has a request queue qi


– keeps requests sorted by logical timestamps (total ordering enforced by
including process id in the timestamps)
• To request critical section:
– send timestamped REQUEST(tsi, i) to all other nodes
– put (tsi, i) in its own queue
• On receiving a request (tsi, i):
– send timestamped REPLY to the requesting node i
– put request (tsi, i) in the queue
• To enter critical section:
– Process i enters critical section if: (tsi, i) is at the top of its own queue, and Process i has
received a message (any message) with timestamp larger than (tsi, i) from ALL other nodes.
• To release critical section:
- Process i removes its request from its own queue and sends a
timestamped RELEASE message to all other nodes
- On receiving a RELEASE message from i, i's request is removed from the local request queue
Some notable points
- Purpose of REPLY messages from node i to j is to ensure that j knows of all requests of i made
prior to sending the REPLY (and therefore, in particular, any request of i with a timestamp lower
than j's request)
- Requires FIFO channels.
- 3( n – 1 ) messages per critical section invocation
- Synchronization delay = maximum message transmission time
- Requests are granted in order of increasing timestamps (a minimal Python sketch of one node's logic follows)
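
To make the steps above concrete, here is a minimal Python sketch of one node's local logic. It is a sketch under stated assumptions, not the lecture's code: the send(dest_id, message) transport, the message-dispatch loop, and all class and field names are invented for illustration, and FIFO channels between nodes are still required as noted.

import heapq

class LamportNode:
    def __init__(self, node_id, all_ids, send):
        self.id = node_id
        self.others = [j for j in all_ids if j != node_id]
        self.send = send                      # send(dest_id, message): assumed transport
        self.clock = 0                        # Lamport logical clock
        self.queue = []                       # request queue, ordered by (timestamp, id)
        self.last_seen = {j: 0 for j in self.others}   # highest timestamp seen from each node
        self.my_req = None

    def request_cs(self):
        self.clock += 1
        self.my_req = (self.clock, self.id)
        heapq.heappush(self.queue, self.my_req)         # put own request in local queue
        for j in self.others:
            self.send(j, ("REQUEST", self.my_req))

    def on_message(self, sender, kind, ts_pair):
        ts, origin = ts_pair
        self.clock = max(self.clock, ts) + 1            # logical-clock update
        self.last_seen[sender] = max(self.last_seen[sender], ts)
        if kind == "REQUEST":
            heapq.heappush(self.queue, ts_pair)
            self.clock += 1
            self.send(sender, ("REPLY", (self.clock, self.id)))
        elif kind == "RELEASE":
            self.queue = [r for r in self.queue if r[1] != origin]
            heapq.heapify(self.queue)
        # a REPLY only updates the clock and last_seen, which is all the entry test needs

    def can_enter_cs(self):
        # Enter iff own request heads the queue and some message with a larger
        # timestamp has been received from every other node.
        return (self.my_req is not None and self.queue and self.queue[0] == self.my_req
                and all(self.last_seen[j] > self.my_req[0] for j in self.others))

    def release_cs(self):
        self.queue = [r for r in self.queue if r[1] != self.id]
        heapq.heapify(self.queue)
        self.my_req = None
        self.clock += 1
        for j in self.others:
            self.send(j, ("RELEASE", (self.clock, self.id)))

The can_enter_cs() test mirrors the two entry conditions above: the node's own request heads its queue, and a message with a larger timestamp has been received from every other node.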

The Ricart-Agrawala Algorithm


• Improvement over Lamport’s
• Main Idea: node j need not send a REPLY to node i if j has a request with timestamp
lower than the request of i (since i cannot enter before j anyway in this case)
– Does not require FIFO channels
– 2(n – 1) messages per critical section invocation
– Synchronization delay = max. message transmission time
– Requests granted in order of increasing timestamps
To request critical section:
– send timestamped REQUEST message (tsi, i) to all other nodes
• On receiving request (tsi, i) at j:
– send REPLY to i if j is neither requesting nor executing the critical section, or if j is requesting
and i's request timestamp is smaller than j's request timestamp. Otherwise, defer the request.
• To enter critical section:
– i enters critical section on receiving REPLY from all nodes
• To release critical section:
– send REPLY to all deferred requests (a sketch of these rules follows)
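
Under the same assumptions (hypothetical send() transport and invented names), here is a compact sketch of one node's Ricart-Agrawala logic; the deferred list is what replaces Lamport's unconditional REPLY.

class RANode:
    def __init__(self, node_id, all_ids, send):
        self.id = node_id
        self.others = [j for j in all_ids if j != node_id]
        self.send = send            # send(dest_id, message): assumed transport
        self.clock = 0
        self.requesting = False
        self.in_cs = False
        self.my_ts = None           # (timestamp, id) of own pending request
        self.replies = set()
        self.deferred = []          # nodes whose REQUESTs were postponed

    def request_cs(self):
        self.clock += 1
        self.requesting = True
        self.my_ts = (self.clock, self.id)
        self.replies.clear()
        for j in self.others:
            self.send(j, ("REQUEST", self.my_ts))

    def on_request(self, sender, ts_pair):
        self.clock = max(self.clock, ts_pair[0]) + 1
        # Defer if executing the CS, or if our own pending request has the
        # smaller timestamp (the tuple comparison breaks ties by node id).
        if self.in_cs or (self.requesting and self.my_ts < ts_pair):
            self.deferred.append(sender)
        else:
            self.send(sender, ("REPLY", None))

    def on_reply(self, sender):
        self.replies.add(sender)
        if self.requesting and len(self.replies) == len(self.others):
            self.in_cs = True       # all n-1 REPLYs received: enter the CS

    def release_cs(self):
        self.in_cs = False
        self.requesting = False
        for j in self.deferred:     # answer every deferred request now
            self.send(j, ("REPLY", None))
        self.deferred.clear()

Deferring instead of replying immediately is what removes the (n – 1) RELEASE messages of Lamport's algorithm, giving 2(n – 1) messages per invocation.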

Maekawa's Algorithm
• Permission obtained from only a subset of other processes, called the Request Set (or Quorum)
• Separate Request Set, Ri, for each process i
• Requirements:
– for all i, j: Ri ∩ Rj ≠ Φ
– for all i: i ∈ Ri
– for all i: |Ri| = K, for some K
– any node i is contained in exactly D Request Sets, for some D, with K = D ≈ sqrt(N)
• To request critical section:
– i sends a REQUEST message to all processes in Ri
• On receiving a REQUEST message:
– Send a REPLY message if no REPLY message has been sent since the last RELEASE message was
received, and update status to indicate that a REPLY has been sent
– Otherwise, queue up the REQUEST
• To enter critical section:
– i enters critical section after receiving a REPLY from all nodes in Ri (a sketch of the request sets and voting rule follows)
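
The sketch below, again with a hypothetical send() transport and invented names, illustrates two pieces: a simple grid-based Request Set construction, which satisfies Ri ∩ Rj ≠ Φ for all i, j but yields |Ri| ≈ 2·sqrt(N) rather than Maekawa's optimal ≈ sqrt(N) (N is assumed to be a perfect square for simplicity), and the "at most one outstanding REPLY" voting rule at each node. The extra messages the full algorithm needs to avoid deadlock are omitted.

import math
from collections import deque

def grid_request_set(i, n):
    # Request set of node i: its whole row and column in a sqrt(N) x sqrt(N) grid
    # (N assumed to be a perfect square), so any two such sets share at least one node.
    side = math.isqrt(n)
    row, col = divmod(i, side)
    row_members = set(range(row * side, (row + 1) * side))
    col_members = set(range(col, n, side))
    return row_members | col_members

class MaekawaVoter:
    # Voting logic at one node: at most one outstanding REPLY (its "vote") at a time.
    def __init__(self, node_id, send):
        self.id = node_id
        self.send = send            # send(dest_id, message): assumed transport
        self.voted = False          # True after sending a REPLY, until a RELEASE arrives
        self.pending = deque()      # queued REQUESTs while the vote is out

    def on_request(self, sender):
        if not self.voted:
            self.voted = True
            self.send(sender, "REPLY")
        else:
            self.pending.append(sender)

    def on_release(self, sender):
        if self.pending:
            nxt = self.pending.popleft()
            self.send(nxt, "REPLY")     # the vote passes to the next queued requester
        else:
            self.voted = False

For example, grid_request_set(0, 9) is {0, 1, 2} ∪ {0, 3, 6} = {0, 1, 2, 3, 6}; any two row-plus-column sets share at least one node, which is exactly the intersection requirement above.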

Token based Algorithms


• Single token circulates, enter CS when token is present
• Mutual exclusion obvious
• Algorithms differ in how to find and get the token
• Uses sequence numbers rather than timestamps to differentiate between old and current
requests

Suzuki-Kasami Algorithm


• Broadcast a request for the token
• Process with the token sends it to the requestor if it does not need it

•Issues:
– Current versus outdated requests
– Determining sites with pending requests
– Deciding which site to give the token to

• The token:
– Queue (FIFO) Q of requesting processes
– LN[1..n]: LN[j] is the sequence number of the request that node j executed most recently

•The request message:


– REQUEST(i, k): request message from node i for its critical section execution, where k is i's
request sequence number
• Other data structures:
– RNi[1..n] for each node i, where RNi[j] is the largest sequence number received so far by i in a
REQUEST message from j.

•To request critical section:


– If i does not have the token, increment RNi[i] and send REQUEST(i, RNi[i]) to all other nodes
– If i already has the token, enter the critical section if the token is idle (no pending requests),
else follow the rule to release the critical section

•On receiving REQUEST(i, sn) at j:


– Set RNj[i] = max(RNj[i], sn)
– If j has the token and the token is idle, then send it to i if RNj[i] = LN[i] + 1. If the token is
not idle, the request is handled when j follows the rule to release the critical section

•To enter critical section:


– Enter CS if token is present

•To release critical section:


– Set LN[i] = RNi[i]
– For every node j which is not in Q (in the token), add j to Q if RNi[j] = LN[j] + 1
– If Q is non-empty after the above, delete the first node from Q and send the token to that node

•No. of messages:
– 0 if node holds the token already, n otherwise

•Synchronization delay:
– 0 (node has the token) or max. message delay (token elsewhere)

• No starvation (a minimal sketch of the token bookkeeping follows)
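
A minimal sketch of the Suzuki-Kasami bookkeeping at one node, under the same assumptions (hypothetical send() transport, invented Token/SKNode names, and exactly one node created holding the token):

from collections import deque

class Token:
    def __init__(self, n):
        self.Q = deque()        # FIFO queue of requesting node ids
        self.LN = [0] * n       # LN[j]: sequence number of j's most recently executed request

class SKNode:
    def __init__(self, node_id, n, send, token=None):
        self.id = node_id
        self.n = n
        self.send = send        # send(dest_id, message): assumed transport
        self.RN = [0] * n       # RN[j]: largest request sequence number seen from j
        self.token = token      # exactly one node starts out holding the token
        self.in_cs = False

    def request_cs(self):
        if self.token is not None:
            self.in_cs = True                   # token already here and (by the release rule) idle
            return
        self.RN[self.id] += 1
        for j in range(self.n):
            if j != self.id:
                self.send(j, ("REQUEST", self.id, self.RN[self.id]))

    def on_request(self, i, sn):
        self.RN[i] = max(self.RN[i], sn)
        if self.token is not None and not self.in_cs:
            if self.RN[i] == self.token.LN[i] + 1:      # current, not outdated, request
                tok, self.token = self.token, None
                self.send(i, ("TOKEN", tok))

    def on_token(self, token):
        self.token = token
        self.in_cs = True

    def release_cs(self):
        self.in_cs = False
        tok = self.token
        tok.LN[self.id] = self.RN[self.id]              # record the request just served
        for j in range(self.n):                         # add nodes with outstanding requests
            if j != self.id and j not in tok.Q and self.RN[j] == tok.LN[j] + 1:
                tok.Q.append(j)
        if tok.Q:                                       # pass the token on, else keep it idle
            nxt = tok.Q.popleft()
            self.token = None
            self.send(nxt, ("TOKEN", tok))

Because RN and LN carry sequence numbers rather than timestamps, an outdated REQUEST (one whose sequence number does not equal LN[i] + 1) never triggers a token transfer, which is how old and current requests are told apart.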

Submitted By:
Nidhi Laddha (08)
