Week-6 Lecture 2, 3

The document outlines the content of a lecture series on Parallel and Distributed Computing, focusing on causal ordering of messages and various algorithms such as BSS, SES, and Matrix Algorithm. It explains the importance of causal relationships in message processing and details how each algorithm handles message delivery and timestamping. Additionally, it provides examples and mechanisms for implementing these algorithms in distributed systems.


CS-432

Parallel & Distributed Computing

Week 05 – Lecture 1, 2 & 3


Spring 2025-B.Sc

Dr. Shah Khalid


Email: [email protected]
Last Lecture
❏ Physical Clock
❏ Cristian Algorithm
❏ The Berkeley Algorithm
❏ NTP-Network Time Protocol

❏ Logical Clock Synchronization


❏ Why logical clock Synchronization?
❏ Algorithms…

❏ Lamport’s Logical Clock


❏ Vector Clock Algorithm

Today's Lecture
➢ Causal ordering examples
➢ Algorithms

1) BSS- Birman-Schiper Stephenson Algorithm

2) SES- Schiper-Eggli-Sandoz Algorithm

3) Matrix Algorithm

The purpose of causal ordering of messages is to ensure that the causal relationship between "message send" events is preserved by the corresponding "message receive" events (i.e., all messages are processed in the order in which they were created).

Deliver a message only if the preceding one has already been delivered; otherwise, buffer it.

[Space-time diagram with processes P1, P2, and P3 illustrating the scenario below.]

Send(M1) → Send(M2) → Receive(M2) → Send(M3)

So, Send(M1) and Send(M3) are causally related. When two messages sent to a process are causally related, they should be received in the same order. A minimal buffering sketch follows.
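To make the buffering rule concrete, here is a minimal sketch (not from the slides) that delivers messages from each sender in order and buffers anything that arrives early. Per-sender sequence numbers and the function name are illustrative assumptions standing in for the causal relation.

```python
from collections import defaultdict

# Deliver a message only if the preceding one has been delivered; otherwise buffer it.
# Per-sender sequence numbers stand in for the causal relation in this toy example.
expected = defaultdict(lambda: 1)   # next sequence number expected from each sender
buffered = {}                       # out-of-order messages, keyed by (sender, seq)

def on_receive(sender, seq, payload):
    buffered[(sender, seq)] = payload
    # Flush every buffered message that is now next in order for this sender.
    while (sender, expected[sender]) in buffered:
        msg = buffered.pop((sender, expected[sender]))
        print(f"deliver {msg!r} from {sender}")
        expected[sender] += 1
```

For example, calling on_receive("P3", 2, "M2") first buffers M2; a later on_receive("P3", 1, "M1") then delivers M1 followed by M2.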
Algorithms for Causal ordering of Messages

1) BSS- Birman-Schiper Stephenson Algorithm


2) SES- Schiper-Eggli-Sandoz Algorithm
3) Matrix Algorithm

BSS- Birman-Schiper Stephenson Algorithm
❏ Broadcast based: when a process P1 has to send a message to process P2, the message has to be sent to all the processes in the system.
❏ Deliver a message to a process only if the message preceding it has
been delivered to the process
❏ Otherwise, buffer the message
❏ No clock increment on receiving
❏ Accomplished by using a vector accompanying the message

BSS- Birman-Schiper Stephenson Algorithm

1) Process Pi increments the vector time VTpi[i], timestamps, and broadcasts the message m.
VTpi[i] - 1 denotes the number of messages preceding m.
2) Pj != Pi receives m. m is delivered when:
a. VTpj[i] == VTm[i] - 1
b. VTpj[k] >= VTm[k] for all k in {1,2,...,n} - {i}, where n is the total number of processes.
Delayed messages are queued in a sorted manner.
c. Concurrent messages are ordered by their time of receipt.
3) When m is delivered at Pj, VTpj is updated according to Rule 2 of vector clocks.
Condition 2(a) means Pj has received all of Pi's messages preceding m.
Condition 2(b) means Pj has received all other messages received by Pi before sending m.
A sketch of this delivery test is shown below.

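A minimal sketch of the delivery test in Rule 2, assuming 0-indexed vector timestamps stored as Python lists (the function name and signature are assumptions, not part of the slides):

```python
def can_deliver_bss(vt_receiver, vt_msg, sender, n):
    """Return True if message m, timestamped vt_msg and broadcast by process
    `sender`, may be delivered at a process whose vector time is vt_receiver."""
    # Rule 2a: the receiver has already delivered every earlier message from the sender.
    if vt_receiver[sender] != vt_msg[sender] - 1:
        return False
    # Rule 2b: the receiver has seen everything the sender had seen when it sent m.
    return all(vt_receiver[k] >= vt_msg[k] for k in range(n) if k != sender)
```

A process that receives m but finds this test false simply buffers m and re-checks its buffer whenever another message is delivered.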
Example for BSS Algorithm
[Space-time diagram with P1, P2, and P3; vector timestamps (0,0,0), (0,0,1), and (0,1,1) are shown, and a message arriving out of causal order at P1 is buffered until the preceding one is delivered.]
Note: there is NO clock increment upon receiving.


Example (extended) for BSS Algorithm
[Space-time diagram with events e11–e14 on P1 and e31–e34 on P3; vector timestamps (0,0,0), (0,0,1), (0,1,1), (1,1,1), and (2,1,1) are shown.]
Class Activity- BSS
[Space-time diagram left as an in-class exercise on the BSS algorithm.]
● Last Lecture
○ Causal Ordering
■ BSS Algorithm- examples
● Today
○ SES Algorithm- Examples
○ Matrix Clock - Examples
[Recap diagram: Send(M1) → Send(M2) → Receive(M2) → Send(M3); Send(M1) and Send(M3) are causally related, so when two messages sent to a process are causally related, they should be received in the same order.]
SES- Schiper-Eggli-Sandoz Algorithm

❏ No need to broadcast - uses only unicast
❏ Few messages
❏ Larger message size
❏ Lots of state information
❏ Clock increment on receiving
SES- Schiper-Eggli-Sandoz Algorithm
Sending a message:
1. All messages are timestamped and sent out with a list of the timestamps of messages sent to other processes.
2. Locally store the timestamp that the message was sent with.
Receiving a message:
• A message cannot be delivered if its list of timestamps mentions an earlier message, destined for this process, that predates it.
• Otherwise, a message can be delivered by performing the following steps:
1. Merge in the list of timestamps from the message:
• Add knowledge of messages destined for other processes to our list if we did not already know about a message destined for one of them.
• If the new list has a timestamp greater than one we already had stored, update our stored timestamp to match.
2. Update the local logical clock.
3. Check all locally buffered messages to see if they can now be delivered.
A sketch of the receive-side check is shown after this list.

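A minimal sketch of the receive-side check, assuming the dependency list travelling with a message is a dict that maps a destination process id to the vector timestamp of the latest message already sent to that destination (names and structures are assumptions for illustration):

```python
def can_deliver_ses(my_id, my_clock, dep_list):
    """dep_list arrives with the incoming message. The message may be delivered
    only if no earlier message destined for this process is still undelivered,
    i.e. our local clock already dominates the recorded timestamp."""
    if my_id not in dep_list:
        return True                      # nothing destined for us predates this message
    earlier = dep_list[my_id]
    return all(mine >= theirs for mine, theirs in zip(my_clock, earlier))
```

If the check fails, the message is buffered and re-examined after the next delivery updates the local clock.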
SES Algorithm Example 1
[Space-time diagram with P1, P2, and P3; events carry vector timestamps such as 001, 002, 012, 022, 101, and 222, and dependency entries such as (P1, 001) and (P2, 002).]
SES Algorithm Example 2
[Space-time diagram with P1, P2, and P3; events carry vector timestamps such as 100, 200, 201, 202, and 222, and dependency entries such as (P2, 100), (P2, 202), and (P3, 200).]
Note: there IS clock increment upon receiving.
SES Algorithm Example 3
[Space-time diagram with events e11–e13 on P1 and e31–e34 on P3; on P3 the timestamps are 001, 002, 023, and 334, and dependency entries such as (P1, 001) and (P2, 002) are dropped once the local clock dominates them (no need of P2 since 022 ≥ 002; no need of P1 since 332 ≥ 001).]
Note: there IS clock increment upon receiving.
Causal Ordering Algorithms
• BSS: Birman-Schiper-Stephenson algorithm
- Broadcast-based: even when a process P1 has to send a message only to another process P2, the message has to be sent to all the processes in the system.
- More messages, smaller size for the messages, limited state information
• SES: Schiper-Eggli-Sandoz Algorithm
- No need for broadcast (works only by unicasting)
- Few messages, moderate to larger size for the messages, lots of state information
• Matrix Algorithm
- Works for unicast, multicast, and broadcast
- Few messages, larger size for the messages, lots of state information
Matrix Clock
[Diagram: Pi's matrix clock, annotated with "messages received by Pi from others" and "messages sent by Pi to others".]
❏ Motivation
❏ My vector clock describes what I "see"
❏ In some applications, I also want to know what other people see
❏ Matrix clock
❏ Each event has n vector clocks, one for each process
❏ The ith vector on process i is called process i's principal vector
❏ The principal vector is the same as the vector clock described before
❏ Non-principal vectors are just piggybacked on messages to update "knowledge"
Matrix Clock

❏ For the principal vector C on process i
❏ Increment C[i] at each "local computation" and "send" event
❏ When sending a message, all n vectors are attached to the message
❏ At each "receive" event, let V be the principal vector of the sender; then C = pairwise-max(C, V)
❏ For a non-principal vector C on process i, suppose it corresponds to process j
❏ At each "receive" event, let V be the vector corresponding to process j in the received message; then C = pairwise-max(C, V)
A sketch of these update rules is shown below.

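A minimal sketch of these update rules, with the whole n×n matrix sent alongside every message (class and method names are assumptions, not from the slides):

```python
class MatrixClock:
    def __init__(self, pid, n):
        self.pid = pid
        self.n = n
        # M[j] is process j's vector clock as known to this process;
        # M[pid] is this process's principal vector.
        self.M = [[0] * n for _ in range(n)]

    def local_event(self):
        self.M[self.pid][self.pid] += 1      # tick only the principal vector

    def send(self):
        self.local_event()                   # a send also counts as a tick
        return [row[:] for row in self.M]    # attach all n vectors to the message

    def receive(self, sender, M_msg):
        # Principal vector: pairwise max with the sender's principal vector.
        self.M[self.pid] = [max(a, b) for a, b in zip(self.M[self.pid], M_msg[sender])]
        # Non-principal vectors: pairwise max with the corresponding vectors in the message.
        for j in range(self.n):
            if j != self.pid:
                self.M[j] = [max(a, b) for a, b in zip(self.M[j], M_msg[j])]
```

Note that, following the rules above, no increment happens at a receive event; the receive only merges knowledge via pairwise maxima.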
Example 1 For Matrix Algorithm
[Space-time diagram with P1, P2, and P3; each event is annotated with the full 3×3 matrix clock (values shown include 010/000/000, 011/000/000, and 011/000/010).]
Example 2 For Matrix Algorithm
[Space-time diagram; each event is annotated with the full 3×3 matrix clock, with rows such as 000, 101, and 110.]
Practice- Matrix Clock
[Space-time diagram with user1 (process1), user2 (process2), and user3 (process3) for practicing matrix clock updates.]
