Mutual Exclusion Complete

The document discusses distributed mutual exclusion, emphasizing the need for exclusive access to critical sections in distributed systems, which cannot rely on shared variables. It outlines various algorithms for achieving mutual exclusion, including non-token based, token-based, and quorum-based approaches, along with their performance metrics such as message complexity and response time. Additionally, it highlights specific algorithms like Lamport's and Ricart-Agrawala, detailing their mechanisms and complexities.

1

Distributed Mutual Exclusion


2
Distributed Mutual Exclusion
❖ Mutual exclusion: Concurrent access of processes to a shared
resource or data is mutually exclusive.
❖ Only one process is allowed to execute the critical section (CS) at
any given time.
❖ In a distributed system, shared variables (semaphores) or a local
kernel cannot be used to implement mutual exclusion.
❖ Message passing is the sole means for implementing distributed
mutual exclusion.
3
Requirements of Mutual Exclusion Algorithms
❖ Safety Property:
At any instant, only one process can execute the critical section.
❖ Liveness Property:
This property states the absence of deadlock and starvation: two or more
sites should not wait endlessly for messages that will never arrive, i.e., a
process requesting entry to the critical section is eventually granted it.
❖ Fairness:
Each process gets a fair chance to execute the CS. Fairness property
generally means the CS execution requests are executed in the order
of their arrival (time is determined by a logical clock) in the system.
4
Types of Distributed Mutual Exclusion Algorithms
5
Distributed Mutual Exclusion ...
❖ Non token based
❖ A site/process can enter a critical section when an assertion (condition)
becomes true.
❖ Algorithm should ensure that the assertion will be true in only one
site/process.
❖ Two or more successive rounds of messages are exchanged among the sites to
determine which site will enter the CS next.
7
Distributed Mutual Exclusion ...
❖ Token-based Mutual Exclusion
❖ A unique token (a known, unique message) is shared among cooperating
sites/processes.
❖ A site is allowed to enter its CS if it possesses the token.
❖ Mutual exclusion is ensured because the token is unique.
❖ Need to take care of conditions such as loss of token, crash of token holder,
possibility of multiple tokens, etc.
8
Distributed Mutual Exclusion ...
❖ Quorum based approach:
❖ Each site requests permission to execute the CS from a subset of sites (called
a quorum).
❖ Any two quorums contain a common site.
❖ This common site is responsible to make sure that only one request executes
the CS at any time.
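The intersection requirement above can be checked mechanically. Below is a small Python sketch with hypothetical quorums for seven sites (the quorum sets are made up for illustration, not taken from any particular algorithm):

```python
from itertools import combinations

# Hypothetical quorums for 7 sites; any two quorums must share a site.
quorums = [{1, 2, 3}, {3, 4, 5}, {1, 5, 6}, {2, 5, 7}]

for qa, qb in combinations(quorums, 2):
    common = qa & qb
    assert common, f"no common site between {qa} and {qb}"
    print(sorted(qa), sorted(qb), "share", sorted(common))
```

Any such common site can then serialize the two conflicting CS requests.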
9
Performance Metrics

The performance is generally measured by the following four metrics:


❖ Message complexity: The number of messages required per CS execution by a
site. The number of messages sent and received during the execution of an
algorithm.
❖ Synchronization delay: The time between one site leaving the CS and the
next site entering the CS (see Figure 1).
10
Performance Metrics
❖ Response time: The time interval a request waits for its CS execution to be over
after its request messages have been sent out (see Figure 2).
❖ System throughput: The rate at which the system executes requests for the CS:
system throughput = 1 / (SD + E)
where SD is the synchronization delay and E is the average critical section
execution time.
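As a quick numerical check of the throughput formula (the delay and execution values below are hypothetical):

```python
# Hypothetical values: synchronization delay SD = 2 ms, CS execution E = 8 ms.
SD = 0.002  # seconds
E = 0.008   # seconds

throughput = 1 / (SD + E)  # CS executions per second
print(throughput)  # 100.0
```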
11
Distributed Mutual Exclusion ...
Figure 1: Synchronization delay, the interval between the last site exiting the
CS and the next site entering the CS.
Figure 2: Response time, the interval from the moment a site sends out its
request until it exits the CS; it includes the CS execution time E.
12
Distributed Mutual Exclusion ...
❖ Performance under Low and High Load
The performance of mutual exclusion algorithms in distributed systems can vary
with the load. Two special loading conditions are considered: “low load” (few
requests for the critical section) and “high load” (many concurrent requests).
▪ The load is determined by the arrival rate of CS execution requests.
▪ Under low load conditions, there is rarely more than one request for the critical
section present in the system simultaneously.
▪ Under heavy load conditions, there is always a pending request for critical
section at a site.
13
General System Model
14
1. Centralized Algorithm

Coordinator Based Algorithm


The Main Idea
− One of the processes in the system is selected as the coordinator.
− The coordinator is responsible for deciding the order in which critical section
requests are fulfilled.
− Every process sends its request for critical section to the coordinator and waits to
receive permission from it.
− Requests are fulfilled in the order in which they arrive at the coordinator.
− The coordinator grants permission to requests one at a time.
− All other requests are queued in a FIFO queue.
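The coordinator's decision logic above can be sketched in Python. This is a single-process simulation (class and method names are illustrative; a real deployment would exchange REQUEST/GRANT/RELEASE messages over the network):

```python
from collections import deque

class Coordinator:
    """Sketch of the coordinator's decision logic; messages are
    modelled as method calls and return values."""
    def __init__(self):
        self.queue = deque()   # FIFO queue of waiting process ids
        self.holder = None     # process currently granted the CS

    def on_request(self, pid):
        if self.holder is None:
            self.holder = pid
            return "GRANT"         # permission granted immediately
        self.queue.append(pid)     # otherwise wait in FIFO order
        return "QUEUED"

    def on_release(self, pid):
        assert pid == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder         # next process to receive GRANT (or None)

c = Coordinator()
print(c.on_request(1))  # GRANT
print(c.on_request(2))  # QUEUED
print(c.on_release(1))  # 2
```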
15
Coordinator Based Algorithm …
Complexity Analysis
Parameters:
• N: Number of processes in the system
• T: Message transmission time
• E: Critical section execution time
✔ Message complexity: 3
✔ 1 REQUEST message + 1 GRANT message + 1 RELEASE message
✔ Message-size complexity: O(1) (Refers to the amount of data in each message. So
messages carry only basic information e.g., process ID, request type)
✔ The size of each message is constant, making the message-size complexity O(1).
29
Coordinator Based Algorithm …
Complexity Analysis

✔ Response time (under light load): 2T + E (the REQUEST message reaching
the coordinator, the GRANT message returning to the requesting process,
plus the CS execution itself)
✔ Synchronization delay (under heavy load): 2T (a RELEASE message followed
by a GRANT message)
30
2. Distributed Algorithms (non-token based)
a) Lamport’s Algorithm
Notations:
– Si: Site i
– Ri: Request set, containing the ids of all Si ’s from which permission must be
received before accessing CS.
– Non-token based approaches use timestamps to order requests for CS.
– Smaller time stamps get priority over larger ones.
Lamport’s Algorithm
– Ri = {S1, S2, …, Sn}, i.e., all sites.
– Request queue: maintained at each Si. Ordered (Sorted) by time stamps.
– Assumption: message delivered in FIFO order.
31
Lamport’s Algorithm …
The Main Idea
Assumes that all channels are FIFO.
Processes implement Lamport’s logical clock.
Requests are timestamped using the logical clock.
Requests are fulfilled in the order of their timestamps.
Each process maintains a priority queue of all requests that are still
outstanding as per its knowledge.
32
Lamport’s Algorithm …
Steps for Process Pi:
❖ On generating a critical section request:
▪ Insert the request into the priority queue.
▪ Broadcast the request to all processes.
❖ On receiving a critical section request from another process:
▪ Insert the request into the priority queue.
▪ Send a REPLY message to the requesting process.
❖ Conditions for critical section entry (unoptimized version):
▪ L1': Pi has received a REPLY from all other processes.
▪ Any request received by Pi in the future will have timestamp larger than that of Pi ’s own request.
▪ L2: Pi ’s own request is at the top of its queue.
▪ Pi ’s request has the smallest timestamp among all requests received by Pi so far.
33
Lamport’s Algorithm …
Steps for Process Pi (Contd.):
❖ On leaving the critical section:
▪ Remove the request from the queue.
▪ Broadcast a RELEASE message to all processes.
❖ On receiving a RELEASE message from another process:
▪ Remove the request of that process from the queue.
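The per-site state in the steps above can be sketched as follows. This is a simulation of one site's bookkeeping only (message transport, FIFO channels and the RELEASE handling are assumed to happen elsewhere; names are illustrative):

```python
import heapq

class LamportSite:
    """Single-site sketch of Lamport's algorithm state (simulation only)."""
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.clock = 0
        self.queue = []            # priority queue of (timestamp, pid)
        self.replies = set()

    def request_cs(self):
        self.clock += 1
        self.my_req = (self.clock, self.pid)
        heapq.heappush(self.queue, self.my_req)
        return self.my_req         # broadcast REQUEST(ts, pid) to all others

    def on_request(self, ts, pid):
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.queue, (ts, pid))
        return ("REPLY", self.pid) # send REPLY to the requesting process

    def on_reply(self, pid):
        self.replies.add(pid)

    def can_enter(self):
        # L1': REPLY from all others; L2: own request at the head of the queue
        return len(self.replies) == self.n - 1 and self.queue[0] == self.my_req

a, b = LamportSite(0, 2), LamportSite(1, 2)
req = a.request_cs()
print(b.on_request(*req))  # ('REPLY', 1)
a.on_reply(1)
print(a.can_enter())       # True
```

Ties in the priority queue are broken by process id, matching the usual (timestamp, pid) total order.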
34
Lamport’s Algorithm: Proof…
❖ Theorem: Lamport’s algorithm is fair.
❖ Proof:
− The proof is by contradiction. Suppose a site Si ’s request has a smaller
timestamp than the request of another site Sj and Sj is able to execute the CS
before Si .
− For Sj to execute the CS, it has to satisfy the conditions L1' and L2. This implies
that at some instant in time say t, Sj has its own request at the top of its queue
and it has also received a REPLY from all other sites.
− But request queue at a site is ordered by timestamp, and according to our
assumption Si has lower timestamp. So Si ’s request must be placed ahead of the
Sj ’s request in the REQUEST_QUEUEj . This is a contradiction!
44
Lamport’s Algo: Performance
Parameters:
• N: Number of processes in the system
• T: Message transmission time
• E: Critical section execution time
❖ Message complexity: 3(N − 1)
❖ N − 1 REQUEST messages + N − 1 REPLY messages + N − 1 RELEASE
messages
❖ Message-size complexity: O(1)
❖ Response time (under light load): 2T + E
❖ Synchronization delay (under heavy load): T
45
Inefficiencies in Lamport’s Algo
❖ Scenario 1: Assume Pi and Pj concurrently generate requests for critical section and Pi ’s
request has smaller timestamp than Pj ’s request.
− Lamport’s algorithm behavior: Pi first sends a REPLY message to Pj and later sends a
RELEASE message to Pj . Pj enters its critical section only after it has received the
RELEASE message from Pi .
− Improvement: Pi ’s REPLY message can be omitted.
❖ Scenario 2: Pi generates a request for critical section but Pj does not generate any request
for some time.
− Lamport’s algorithm behavior: Pi sends a RELEASE message to Pj on leaving the critical
section.
− Improvement: If Pj generates a critical section request in the future, it will anyway contact
Pi via a REQUEST message. So, there is no need for Pi to send a RELEASE message to
Pj .
46
Drawbacks of Lamport Algorithm
47
Ricart-Agrawala Algo …
❖ The Main Idea
− It is an optimization of Lamport's algorithm.
− Combine REPLY and RELEASE messages.
− On leaving the critical section, only send a REPLY/RELEASE message to
those processes that have unfulfilled requests for critical section.
− Eliminate priority queue.
48
Ricart-Agrawala Algo …
❖ The Ricart-Agrawala algorithm assumes the communication channels are FIFO.
❖ The algorithm uses two types of messages: REQUEST and REPLY.
❖ A process sends a REQUEST message to all other processes to request their
permission to enter the critical section.
❖ A process sends a REPLY message to a process to give its permission to that
process.
❖ Processes use Lamport-style logical clocks to assign a timestamp to critical
section requests and timestamps are used to decide the priority of requests.
❖ Each process pi maintains the Request-Deferred array, RDi , the size of which is
the same as the number of processes in the system.
❖ Initially, ∀i ∀j: RDi[j] = 0. Whenever pi defers the request sent by pj , it sets
RDi[j] = 1, and after it has sent a REPLY message to pj , it sets RDi[j] = 0.
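The decision logic described above, including the Request-Deferred array, can be sketched as a single-site simulation (message transport is modelled by return values; names are illustrative):

```python
class RASite:
    """Sketch of Ricart-Agrawala decision logic at one site (simulation)."""
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.clock = 0
        self.requesting = False
        self.my_ts = None
        self.rd = [0] * n          # Request-Deferred array RD_i

    def request_cs(self):
        self.clock += 1
        self.requesting, self.my_ts = True, self.clock
        return (self.my_ts, self.pid)    # broadcast REQUEST(ts, pid)

    def on_request(self, ts, pid):
        self.clock = max(self.clock, ts) + 1
        # defer iff we are requesting and our own request has priority
        if self.requesting and (self.my_ts, self.pid) < (ts, pid):
            self.rd[pid] = 1
            return None                   # REPLY deferred
        return ("REPLY", self.pid)

    def release_cs(self):
        self.requesting = False
        deferred = [j for j in range(self.n) if self.rd[j]]
        self.rd = [0] * self.n            # REPLY now sent to all deferred sites
        return deferred

a, b = RASite(0, 2), RASite(1, 2)
ra, rb = a.request_cs(), b.request_cs()
print(a.on_request(*rb))   # None: a defers b's lower-priority request
print(b.on_request(*ra))   # ('REPLY', 1): b replies to the older request
print(a.release_cs())      # [1]: on exit, a replies to b
```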
49
Ricart-Agrawala Algo …
50
Example:
53
Sample Execution (Practice Question)
Suppose there are 4 nodes. Process 1 chooses timestamp 20, process 2 chooses 23
and process 3 chooses 15. All processes want to enter their critical sections.

Time 1: Processes 1, 2 and 3 attempt to enter their critical sections

Node  Timestamp  Sends to  Receives from  Status
1     20         2, 3, 4   2, 4           Waiting for reply from 3
2     23         1, 3, 4   4              Waiting for reply from 1, 3
3     15         1, 2, 4   1, 2, 4        Executes
4     -          -         -              -
60
Sample Execution …

Time 2: Process 3 terminates, sends reply to processes 1 and 2

Node  Timestamp  Sends to  Receives from  Status
1     20         2, 3, 4   2, 4 and 3     Executes
2     23         1, 3, 4   4 and 3        Waiting for reply from 1
3     -          -         -              Terminated
4     -          -         -              -

Time 3: Process 1 terminates, sends reply to process 2

Node  Timestamp  Sends to  Receives from  Status
1     -          -         -              Terminated
2     23         1, 3, 4   4, 3 and 1     Executes
3     -          -         -              Terminated
4     -          -         -              -
61
Ricart-Agrawala Algo: Proof
❖ Theorem:
Ricart-Agrawala algorithm achieves mutual exclusion.
❖ Proof:
▪ Proof is by contradiction. Suppose two sites Si and Sj are executing the CS
concurrently and Si ’s request has lower timestamp than the request of Sj. Clearly,
Si received Sj ’s request after it has made its own request.
▪ Thus, Sj can concurrently execute the CS with Si only if Si returns a REPLY to Sj
(in response to Sj ’s request) before Si exits the CS.
▪ However, this is impossible because Sj ’s request has higher timestamp.
Therefore, Ricart-Agrawala algorithm achieves mutual exclusion.
62
Ricart-Agrawala: Performance
Parameters:
• N: Number of processes in the system
• T: Message transmission time
• E: Critical section execution time
✔ Message complexity: 2(N − 1)
✔ N − 1 REQUEST messages + N − 1 REPLY messages
✔ Message-size complexity: O(1)
✔ Response time (under light load): 2T + E
✔ Synchronization delay (under heavy load): T
63

Token-passing Algorithms

❖ In token-based algorithms, a unique token is shared among the sites.
❖ A site is allowed to enter its CS if it possesses the token.
❖ Token-based algorithms use sequence numbers instead of timestamps
(used to distinguish between old and current requests).
64

Token-passing Algorithms

A few inherent limitations:
❖ Single point of failure.
❖ Token regeneration is expensive.
However, a token can carry global information that can be useful for process
coordination. The control token for mutual exclusion is centralized and can be
used to serialize requests to critical sections.
(Suzuki-Kasami’s broadcast algorithm)
65
Token-passing Algorithms
a) Suzuki-Kasami algorithm
The Main Idea
❖ Completely connected network of processes.
❖ There is one token in the network. The holder of the token has the
permission to enter the CS.
❖ Any other process trying to enter the CS must acquire that token. Thus the
token will move from one process to another based on demand.
66

Suzuki-Kasami algorithm
❖ If a site wants to enter the CS and it does not have the token, it broadcasts a
REQUEST message for the token to all other sites.
❖ A site which possesses the token sends it to the requesting site upon receipt
of its REQUEST message.
❖ If a site receives a REQUEST message when it is executing the CS, it sends
the token only after it has completed the execution of the CS.
67

Suzuki-Kasami Algorithm
68

Suzuki-Kasami Algorithm
Process i broadcasts (i, num) when making its request, where num is the
sequence number of the request.
❖ Each process maintains
- an array req: req[j] denotes the sequence number of the latest request made
by process j (some requests will soon be stale)
❖ Additionally, the token contains
- an array last: last[j] denotes the sequence number of the last visit to the CS
by process j
- a queue Q of waiting processes
req: array[0..n-1] of integer
last: array[0..n-1] of integer
69

Suzuki-Kasami Algorithm
When a process j receives a request (k, num) from process k, it sets req[k] to
max(req[k], num).

When a process j is the holder of the token and completes its CS, it
-- sets last[j] := its own num
-- updates Q by retaining each process k only if 1 + last[k] = req[k]
(this guarantees the freshness of the request)
-- sends the token to the head of Q, along with the array last and the tail of Q
In fact, token ≡ (Q, last)
req: array[0..n-1] of integer
last: array[0..n-1] of integer
70

Suzuki-Kasami’s algorithm
{Program of process j}
Initially, ∀i: req[i] = last[i] = 0

* Entry protocol *
req[j] := req[j] + 1
num := req[j]
Send (j, num) to all
Wait until token (Q, last) arrives
Critical Section

* Exit protocol *
last[j] := req[j]
∀k ≠ j: k ∉ Q ∧ req[k] = last[k] + 1 → append k to Q;
if Q is not empty → send (tail-of-Q, last) to head-of-Q fi

* Upon receiving a request (k, num) *
req[k] := max(req[k], num)
71
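The entry and exit protocols translate fairly directly into Python. A minimal single-process sketch (sending is modelled by return values; the idle-token shortcut is folded into exit_cs for brevity, and names are illustrative):

```python
from collections import deque

class SKSite:
    """Sketch of one Suzuki-Kasami site; the token is (Q, last)."""
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.req = [0] * n
        self.token = None          # (deque Q, list last) when held

    def request_cs(self):
        self.req[self.pid] += 1
        return (self.pid, self.req[self.pid])  # broadcast (j, num)

    def on_request(self, k, num):
        self.req[k] = max(self.req[k], num)

    def exit_cs(self):
        Q, last = self.token
        last[self.pid] = self.req[self.pid]
        for k in range(self.n):    # append fresh, not-yet-queued requests
            if k != self.pid and k not in Q and self.req[k] == last[k] + 1:
                Q.append(k)
        if Q:
            nxt = Q.popleft()
            self.token = None
            return nxt, (Q, last)  # send token to the head of Q
        return None, None          # keep the idle token

s0, s1 = SKSite(0, 3), SKSite(1, 3)
s0.token = (deque(), [0, 0, 0])   # site 0 starts with the idle token
msg = s1.request_cs()             # broadcast (1, 1)
s0.on_request(*msg)
nxt, token = s0.exit_cs()         # site 0 releases: token goes to site 1
print(nxt)  # 1
```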

Example

Initial state: process 0 holds the token, with last = [0,0,0,0,0] and Q = [ ].
Every process 0..4 has req = [0,0,0,0,0].
72

Example

Process 0 sends Request(0,1) to all other processes to access the CS. Every
process now has req = [1,0,0,0,0] (5 entries, one per process); process 0 still
holds the token with last = [0,0,0,0,0] and Q = [ ].
73

Example

Processes 1 and 2 send requests. Every process now has req = [1,1,1,0,0];
process 0 holds the token with last = [0,0,0,0,0].
74

Example

Process 0 prepares to exit the CS: the token now has last = [1,0,0,0,0] and
Q = (1, 2). Every process has req = [1,1,1,0,0].
75

Example

Process 0 passes the token (Q and last) to process 1: the token now has
last = [1,0,0,0,0] and Q = (2). Every process has req = [1,1,1,0,0].
76

Example

Processes 0 and 3 send requests. Every process now has req = [2,1,1,1,0];
the token at process 1 has last = [1,1,0,0,0] and Q = (2, 0, 3).
77

Example

Process 1 sends the token to process 2: the token has last = [1,1,0,0,0] and
Q = (0, 3). Every process has req = [2,1,1,1,0].
78

Suzuki-Kasami’s algorithm
❖ Performance

N messages = (N − 1 REQUEST messages + 1 TOKEN message)


79

Suzuki-Kasami’s algorithm
❖ Performance
– Average messages: 0 , if a site holds the idle token at the time of
its request; otherwise N (N − 1 REQUEST messages + 1
TOKEN message) messages needed to obtain the token i.e. per
CS invocation, where N denotes the number of processes.
– Synchronization delay: 0, if a site holds the idle token at the time
of its request; otherwise T, where T denotes the average message
propagation delay.
– Disadvantage: Requires broadcast. Therefore, message overhead
is high.
Suzuki-Kasami’s algorithm (Practice yourself)
1. S1 requests the CS.
2. S3 and S4 simultaneously request the CS after S1.
81

Raymond’s Algorithm

❖ This algorithm uses a spanning tree to reduce the number of messages
exchanged per critical section execution.
❖ The network is viewed as a graph; a spanning tree of a network is a tree
that contains all N nodes.
❖ The algorithm assumes that the underlying network guarantees message
delivery and that all nodes are completely reliable.
82

Raymond’s Algorithm

❖ The algorithm assumes the network nodes to be arranged in an unrooted
tree structure.
❖ Messages between nodes traverse along the undirected edges of the tree.
83
Raymond’s Algorithm

A B C D

E F G

Figure 1: An Unrooted Spanning Tree of 7 sites


84

Raymond’s Algorithm
❖ A node needs to hold information about and communicate only to its
immediate neighbouring nodes.
❖ Similar to the concept of tokens used in token-based algorithms, this
algorithm uses a concept of privilege.
❖ Only one node can be in possession of the privilege (called the privileged
node) at any time, except when the privilege is in transit from one node to
another in the form of a PRIVILEGE message.
❖ When there are no nodes requesting for the privilege, it remains in
possession of the node that last used it.
❖ A node cannot pass the token to the nodes waiting in its queue unless it
holds the token.
85
Raymond’s Algorithm …
❖ The HOLDER Variables
− Each node maintains a HOLDER variable that provides information
about the placement of the privilege in relation to the node itself.
− A node stores the identity of a node that it thinks has the privilege or
leads to the node having the privilege in its HOLDER variable .
− For two nodes X and Y, if HOLDERX = Y, then we can redraw the
undirected edge between the nodes X and Y as a directed edge from X to
Y.
− For instance, if node G holds the privilege, Figure 1 can be redrawn with
logically directed edges as shown in the Figure 2.
86
Raymond’s Algorithm …
A B C D

E F G
Figure 2: Tree with logically directed edges, all pointing in a direction towards node G - the privileged node.
•The shaded node in Figure 2 represents the privileged node.
•The following will be the values of the HOLDER variables of various nodes:
HOLDERA = B
HOLDERB = C
HOLDERC = G
HOLDERD = C
HOLDERE = A
HOLDERF = B
HOLDERG = self
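Following the HOLDER pointers from any node must lead to the privileged node. A tiny Python check using the HOLDER values of Figure 2:

```python
# HOLDER values from Figure 2 (node G holds the privilege).
holder = {"A": "B", "B": "C", "C": "G", "D": "C",
          "E": "A", "F": "B", "G": "self"}

def find_privileged(node):
    # Follow HOLDER pointers until reaching the node that points to itself.
    while holder[node] != "self":
        node = holder[node]
    return node

print(find_privileged("E"))  # G (path E -> A -> B -> C -> G)
print(find_privileged("D"))  # G (path D -> C -> G)
```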
87
Raymond’s Algorithm …
A B C D

E F G

Figure 3: Tree with logically directed edges, all pointing in a direction towards
node B - the privileged node.
88

Raymond’s Algorithm
❖ Now suppose node B that does not hold the privilege wants to execute
the critical section.
❖ B sends a REQUEST message to HOLDERB, i.e., C, which in turn
forwards the REQUEST message to HOLDERC, i.e., G.
❖ The privileged node G, if it no longer needs the privilege, sends the
PRIVILEGE message to its neighbor C, which made a request for the
privilege, and resets HOLDERG to C.
❖ Node C, in turn, forwards the PRIVILEGE to node B, since it had
requested the privilege on behalf of B. Node C also resets HOLDERC
to B.
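The request-forwarding and privilege-passing steps just described can be sketched as a simplified single-process simulation. The ASKED flag and the follow-up REQUEST that may accompany a forwarded privilege are omitted, and all names are illustrative:

```python
class RaymondNode:
    """Simplified per-node state for Raymond's algorithm (sketch only)."""
    def __init__(self, name, holder):
        self.name = name
        self.holder = holder   # neighbour towards the privilege, or "self"
        self.queue = []        # REQUEST_Q; own name means a request for self

    def on_request(self, frm):
        self.queue.append(frm)
        if self.holder != "self":
            return ("REQUEST", self.holder)  # forward towards the privilege
        return None                           # privileged node serves it later

    def grant_next(self):
        # Called by the privileged node when it no longer needs the privilege.
        if not self.queue:
            return None
        nxt = self.queue.pop(0)
        if nxt == self.name:
            return None                       # this node enters its own CS
        self.holder = nxt                     # privilege moves down the tree
        return ("PRIVILEGE", nxt)

    def on_privilege(self):
        self.holder = "self"
        return self.grant_next()

# Replaying the B -> C -> G walkthrough from the slides:
nodes = {"B": RaymondNode("B", "C"),
         "C": RaymondNode("C", "G"),
         "G": RaymondNode("G", "self")}
print(nodes["B"].on_request("B"))  # ('REQUEST', 'C')
print(nodes["C"].on_request("B"))  # ('REQUEST', 'G')
nodes["G"].on_request("C")         # G is privileged and queues C
print(nodes["G"].grant_next())     # ('PRIVILEGE', 'C'); HOLDER_G becomes C
print(nodes["C"].on_privilege())   # ('PRIVILEGE', 'B'); HOLDER_C becomes B
```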
89
Raymond’s Algorithm
90

Raymond’s Algorithm

❖ The value “self” is placed in REQUEST Q if the node makes a request for the
privilege for its own use.
❖ The maximum size of REQUEST Q of a node is the number of immediate
neighbors + 1 (for “self”).
❖ ASKED prevents the sending of duplicate requests for privilege, and also
makes sure that the REQUEST Qs of the various nodes do not contain any
duplicate elements.
91

Raymond’s Algorithm
92

Initial State
96

Two messages are possible:
• REQUEST (token request)
• PRIVILEGE (token grant)

D wants to enter its critical section.
97

D sends a REQUEST to its immediate neighbour.


98

Now B sends a REQUEST to its immediate neighbour.


99
100

Now the value of A’s HOLDER changes from self to B, and A’s queue becomes
empty. The token is with node B now, so its HOLDER is self.
101
102
103

Suppose D is executing in the critical section and E makes a request. The
previously empty queues now change:
RQB = E
RQD = B
104

After getting the token, the HOLDER and queue values change accordingly:
❖ the value of HB changes to self and B’s queue is E.
105

The value of HE changes to self and E’s queue is empty.
106

Raymond’s Algorithm
❖ Performance
▪ Message overhead:- The worst-case cost = 2 x longest path length of the tree.
This happens when the privilege is to be passed between nodes at either ends
of the longest path of the minimal spanning tree. The worst possible topology
for this algorithm will be one where all nodes are arranged in a straight line. In
a straight line, the longest path length will be N-1; Thus, the number of
messages exchanged per CS execution would be 2 x (N-1).

▪ The best topology for the algorithm is the radiating star topology. Extensive
empirical measurements show that average diameter of randomly chosen trees
of size n is O(log n). The message complexity is O(diameter) of the tree.
Therefore, the average message complexity is O(log n).
107
Raymond’s Algorithm …

Node A <−−−− REQUEST −−−− Node B
Node A −−−− PRIVILEGE −−−−> Node B
Node A −−−− REQUEST −−−−> Node B
Node A <−−−− PRIVILEGE −−−− Node B

<pattern repeats>

Figure 4: Logical pattern of message flow between neighboring
nodes A and B.
108
Raymond’s Algorithm …
❖ Performance contd..
▪ Under heavy load condition, when all nodes are sending privilege requests,
PRIVILEGE messages travel along all N – 1 edges of the minimal spanning tree,
exactly twice, to grant the privilege to all N nodes.
▪ Each of these PRIVILEGE messages travel in response to a REQUEST message
(Figure 4).
▪ Thus, a total of 4 * (N - 1) messages travel across the minimal spanning tree.
Hence, the total number of messages exchanged per critical section execution is
4(N-1)/N ≈ 4
109
Raymond’s Algorithm Complexity: at a glance
Parameters:
• N: Number of processes in the system
• T: Message transmission time to one hop.
• E: Critical section execution time
• D: Diameter of the tree
✔ Message complexity:
✔ Best case: 0
✔ Worst case: 2D (D REQUEST messages + D TOKEN message)
✔ Message-size complexity: O(1)
✔ Response time (under light load):
✔ Best case: E
✔ Worst case: 2D × T + E
✔ Synchronization delay (under heavy load): D × T (Diameter of tree*Time required for a msg
to travel one hop)
110
Comparison

Non-Token        Resp. Time (ll)  Sync. Delay  Messages (ll)  Messages (hl)
Lamport          2T+E             T            3(N-1)         3(N-1)
Ricart-Agrawala  2T+E             T            2(N-1)         2(N-1)
Maekawa          2T+E             2T           3√N            5√N

Token            Resp. Time (ll)  Sync. Delay  Messages (ll)  Messages (hl)
Suzuki-Kasami    2T+E             T            N              N
Singhal          2T+E             T            N/2            N
Raymond          T log(N)+E       T log(N)/2   log(N)         4
111
Practice Question
✔ Node G (First)
✔ Node E,D (Next)
✔ Node C (immediately)
112
Practice Question
113
Practice Question
114
Practice Question
115
Practice Question
❖ C has forwarded its request because it lacks the token.
❖ C does not hold the token, so it cannot process requests in its queue.
116
Practice Question

❖ C has forwarded its request because it lacks the token.
❖ C does not hold the token, so it cannot process requests in its queue.
❖ Requests in the queue can only be processed once the token arrives
at C.
❖ Forwarding the request up ensures that the token will eventually be
passed down, allowing all waiting processes (E and D) to be served
later.
❖ After the token is forwarded to C, C can then process E and D’s
requests.
❖ This ensures a hierarchical, structured passing of requests in
Raymond’s Algorithm.
117
Practice Question
118
Practice Question
119
Practice Question
120
Practice Question
121

Raymond’s algorithm

Nodes 1, 4 and 7 want to enter their CS. As the requests propagate toward the
token holder, the REQUEST_Qs along the tree hold entries such as (1), (4),
(1,4) and (4,7).


122

Raymond’s Algorithm

3 sends the token to 6 (pending queue entries: 1,4 and 4,7)


123

Raymond’s Algorithm

6 forwards the token to 1 (pending queue entries: 4 and 4,7)
