CS8603 Distributed Systems – Regulation 2017
UNIT II
MESSAGE ORDERING & SNAPSHOTS
Several orderings on messages have been defined, from the least to the most constrained:
(i) non-FIFO
(ii) FIFO
(iii) causal order
(iv) synchronous order
There is always a trade-off between concurrency and ease of use and implementation.
Asynchronous Executions
An asynchronous execution (or A-execution) is an execution (E, ≺) for which the causality relation
is a partial order.
In an asynchronous execution, there is no restriction on the order in which messages are delivered: on any logical link, messages may be delivered in any order, not necessarily FIFO.
Though there is a physical link that delivers the messages sent on it in FIFO order due
to the physical properties of the medium, a logical link may be formed as a composite
of physical links and multiple paths may exist between the two end points of the
logical link.
FIFO executions
A FIFO execution is an A-execution in which, for each pair of send events s and s′ at the same sender with corresponding receive events r and r′ at the same receiver, s ≺ s′ implies r ≺ r′; that is, each logical link delivers the messages sent on it in the order in which they were sent.
Causally ordered (CO) executions
If two send events s and s′ are related by the causality ordering (not physical time
ordering), then a causally ordered execution requires that their corresponding receive
events r and r′ occur in the same order at all common destinations.
If s and s′ are not related by causality, then CO is vacuously satisfied.
Causal order is used in applications such as updating shared data, distributed shared
memory, and fair resource allocation.
A message that arrives may have to be delayed until the messages sent to the same process causally before it have been delivered. The delayed message m is then given to the application for processing. The event of an application processing an arrived message is referred to as a delivery event.
No message can be overtaken by a chain of messages between the same (sender, receiver)
pair.
If send(m1) ≺ send(m2), then for each common destination d of messages m1 and m2,
deliver_d(m1) ≺ deliver_d(m2) must be satisfied.
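The CO delivery rule can be enforced in practice with vector clocks that count multicasts. The following is a minimal sketch, not part of the original notes; the function and variable names (can_deliver, on_receive, pending) and the counting-vector representation are assumptions made for illustration:

    # Sketch: causal-order delivery check using counting vector clocks.
    # msg_vc[k]    = number of multicasts from process k that causally precede
    #                this message (the sender's vector at send time).
    # delivered[k] = number of multicasts from process k already delivered here.
    def can_deliver(msg_vc, sender, delivered):
        if delivered[sender] != msg_vc[sender] - 1:
            return False          # an earlier multicast from the same sender is missing
        return all(delivered[k] >= msg_vc[k] for k in msg_vc if k != sender)

    def on_receive(sender, msg_vc, msg, delivered, pending, deliver):
        pending.append((sender, msg_vc, msg))
        progress = True
        while progress:           # deliver every queued message whose CO condition now holds
            progress = False
            for s, vc, m in list(pending):
                if can_deliver(vc, s, delivered):
                    deliver(m)                    # deliver_d(m): hand the message to the application
                    delivered[s] += 1
                    pending.remove((s, vc, m))
                    progress = True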
Empty-interval execution: An execution (E, ≺) is an empty-interval (EI)
execution if for each pair of events (s, r) ∈ T, the open interval set
{x ∈ E | s ≺ x ≺ r} in the partial order is empty.
Synchronous Execution
When all the communication between pairs of processes uses synchronous send and
receive primitives, the resulting order is the synchronous order.
Synchronous communication always involves a handshake between the receiver
and the sender; the handshake events may appear to be occurring instantaneously and
atomically.
The instantaneous communication property of synchronous executions requires a
modified definition of the causality relation because for each (s, r) ∈ T, the send
event is not causally ordered before the receive event.
The two events are viewed as being atomic and simultaneous, and neither event
precedes the other.
The synchronous causality relation ≪ on E includes, among its defining conditions:
S2: If (s, r) ∈ T, then for all x ∈ E, [(x ≪ s ⟺ x ≪ r) and (s ≪ x ⟺ r ≪ x)].
Timestamps can be assigned to a synchronous execution such that, for any message M, T(s(M)) = T(r(M)), i.e., the send and the receive of a message carry the same timestamp.
An execution can be modeled to give a total order that extends the partial order
(E, ≺).
In an A-execution, the messages can be made to appear instantaneous if there exists a
linear extension of the execution such that each send event is immediately followed
by its corresponding receive event in this linear extension. Such an execution is said to be
realizable with synchronous communication (RSC).
A non-separated linear extension of (E, ≺) is a linear extension of (E, ≺) such that
for each pair (s, r) ∈ T, the interval {x ∈ E | s ≺ x ≺ r} is empty.
An A-execution (E, ≺) is an RSC execution if and only if there exists a non-separated linear
extension of the partial order (E, ≺).
In the non-separated linear extension, if the adjacent send event and its corresponding
receive event are viewed atomically, then that pair of events shares a common past
and a common future with each other.
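A linear extension can be checked for the non-separated property mechanically. The sketch below is illustrative only; it assumes events are tagged as sends or receives of a message id and simply verifies that every send is immediately followed by its matching receive:

    # Sketch: test whether a linear extension is non-separated.
    # Events are assumed to be ("send", msg_id) or ("recv", msg_id) tuples.
    def is_non_separated(linear_extension):
        for i, (kind, msg) in enumerate(linear_extension):
            if kind == "send":
                # the matching receive must be the very next event
                if i + 1 == len(linear_extension):
                    return False
                nxt_kind, nxt_msg = linear_extension[i + 1]
                if nxt_kind != "recv" or nxt_msg != msg:
                    return False
        return True

    # s1 r1 s2 r2 is non-separated; s1 s2 r1 r2 is not.
    print(is_non_separated([("send", 1), ("recv", 1), ("send", 2), ("recv", 2)]))  # True
    print(is_non_separated([("send", 1), ("send", 2), ("recv", 1), ("recv", 2)]))  # False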
Crown
Let E be an execution. A crown of size k in E is a sequence ⟨(si, ri), i ∈ {0, …, k−1}⟩ of pairs of
corresponding send and receive events such that s0 ≺ r1, s1 ≺ r2, …, sk−2 ≺ rk−1, and sk−1 ≺ r0.
For example, the crown ⟨(s1, r1), (s2, r2)⟩ of size 2 arises when s1 ≺ r2 and s2 ≺ r1. Cyclic dependencies
may exist in a crown. The crown criterion states that an A-computation is RSC, i.e., it can be
realized on a system with synchronous communication, if and only if it contains no crown.
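The crown criterion can be checked mechanically: build a directed graph with one node per message and an edge from message i to message j whenever si ≺ rj (i ≠ j); a cycle in this graph is a crown, so the computation is RSC if and only if the graph is acyclic. A sketch follows, assuming the causality relation is supplied as a predicate precedes(a, b); all names are illustrative:

    # Sketch: crown detection as cycle detection on the "crown graph".
    # messages      : list of (send_event, recv_event) pairs
    # precedes(a, b): assumed predicate implementing a ≺ b
    def is_rsc(messages, precedes):
        n = len(messages)
        adj = {i: [j for j in range(n)
                   if j != i and precedes(messages[i][0], messages[j][1])]
               for i in range(n)}
        WHITE, GREY, BLACK = 0, 1, 2
        colour = [WHITE] * n

        def has_cycle(u):                     # recursive depth-first search
            colour[u] = GREY
            for v in adj[u]:
                if colour[v] == GREY:
                    return True               # back edge => a crown exists
                if colour[v] == WHITE and has_cycle(v):
                    return True
            colour[u] = BLACK
            return False

        return not any(colour[i] == WHITE and has_cycle(i) for i in range(n))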
Hierarchy of ordering paradigms
The execution classes non-FIFO (A), FIFO, causally ordered (CO), and synchronous (SYNC) are related by the following results:
For an A-execution, A is RSC if and only if A is an S-execution.
RSC ⊂ CO ⊂ FIFO ⊂ A
This hierarchy is illustrated in Figure 2.3(a), and example executions of each class are
shown side-by-side in Figure 2.3(b)
The above hierarchy implies that some executions belonging to a class X will not
belong to any of the classes included in X. The degree of concurrency is most in A
and least in SYNC.
A program using synchronous communication is easiest to develop and verify.
A program using non-FIFO communication, resulting in an A-execution, is hardest to
design and verify.
2.2.3 Simulations
The events in the RSC execution are scheduled as per some non-separated linear
extension, and adjacent (s, r) events in this linear extension are executed sequentially
in the synchronous system.
The partial order of the asynchronous execution remains unchanged.
If an A-execution is not RSC, then there is no way to schedule the events to make
them RSC, without actually altering the partial order of the given A-execution.
However, the following indirect strategy that does not alter the partial order can be
used.
Each channel Ci,j is modeled by a control process Pi,j that simulates the channel buffer.
An asynchronous communication from i to j becomes a synchronous communication
from i to Pi,j followed by a synchronous communication from Pi,j to j.
This enables the decoupling of the sender from the receiver, a feature that is essential
in asynchronous systems.
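A rough sketch of this idea in Python threads follows; it is illustrative only, since queue.Queue is used as a stand-in for a synchronous link and therefore only approximates synchronous message passing, and the class and queue names are assumptions:

    # Sketch: a control process P_ij that simulates the channel buffer between
    # sender i and receiver j.
    import queue
    import threading

    class ChannelProcess(threading.Thread):
        def __init__(self, from_sender: queue.Queue, to_receiver: queue.Queue):
            super().__init__(daemon=True)
            self.from_sender = from_sender    # models the link i -> P_ij
            self.to_receiver = to_receiver    # models the link P_ij -> j

        def run(self):
            while True:
                m = self.from_sender.get()    # "synchronous" receive from process i
                self.to_receiver.put(m)       # "synchronous" send to process j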
2.3.1 Rendezvous
Rendezvous systems are a form of synchronous communication among an arbitrary
number of asynchronous processes. All the processes involved meet with each other, i.e.,
communicate synchronously with each other at one time. Two types of rendezvous systems
are possible:
Binary rendezvous: When two processes agree to synchronize.
Multi-way rendezvous: When more than two processes agree to synchronize.
Send and receive commands may be individually disabled or enabled. A command is
disabled if it is guarded and the guard evaluates to false. The guard would likely
contain an expression on some local variables.
Synchronous communication is implemented by scheduling messages under the
covers using asynchronous communication.
Scheduling involves pairing of matching send and receive commands that are both
enabled. The communication events for the control messages under the covers do not
alter the partial order of the execution.
The message types used are: M, ack(M), request(M), and permission(M). Execution
events in the synchronous execution are only the send of the message M and the receive of the
message M. The send and receive events for the other message types – ack(M), request(M),
and permission(M) – are control-message events that occur under the covers and are not execution events. The messages request(M), ack(M), and
permission(M) use M's unique tag; the message M itself is not included in these messages.
Rule 1: Pi wishes to execute SEND(M) to a lower priority process Pj:
Pi executes send(M) and blocks until it receives ack(M) from Pj. The send event SEND(M) now completes.
Any message M′ (from a higher priority process) and any request(M′) for synchronization (from a lower priority process) received during the blocking period are queued.
Rule 2: Pi wishes to execute SEND(M) to a higher priority process Pj:
(2a) Pi seeks permission from Pj by executing send(request(M)).
(2b) While Pi is waiting for the permission, it remains unblocked:
(i) If a message M′ arrives from a higher priority process Pk, Pi accepts M′ by scheduling a RECEIVE(M′) event and then executes send(ack(M′)) to Pk.
(ii) If a request(M′) arrives from a lower priority process Pk, Pi executes send(permission(M′)) to Pk and blocks waiting for the message M′. When M′ arrives, the RECEIVE(M′) event is executed.
(2c) When the permission(M) arrives, Pi knows partner Pj is synchronized and Pi executes send(M).
The SEND(M) now completes.
Rule 3: When a request(M) is processed by Pi, process Pi executes send(permission(M)) to Pj and blocks
waiting for the message M. When M arrives, the RECEIVE(M) event is executed and the process
unblocks.
Rule 4: When a message M is processed by Pi, process Pi executes RECEIVE(M) (which is assumed to
be always enabled) and then send(ack(M)) to Pj.
Rule 5: When Pi is unblocked, it dequeues the next (if any) message from the queue and processes it as a
message arrival (as per Rules 3 or 4).
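A compact sketch of the sender-side decision in Rules 1 and 2 follows. It is illustrative only: send and wait_for are assumed asynchronous primitives, a smaller pid is taken to mean lower priority, and the queuing of messages that arrive while blocked is omitted:

    # Sketch: sender side of the priority-based binary rendezvous.
    def SEND(my_pid, dst_pid, m, send, wait_for):
        if dst_pid < my_pid:
            # Rule 1: destination has lower priority - send M, block for ack(M)
            send(dst_pid, "M", m)
            wait_for("ack", m)              # SEND(M) completes here
        else:
            # Rule 2: destination has higher priority - ask permission first
            send(dst_pid, "request", m)     # (2a)
            wait_for("permission", m)       # (2b) simplified: the real rule stays unblocked here
            send(dst_pid, "M", m)           # (2c) SEND(M) completes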
For optimality, the information "d ∈ M.Dests" must be propagated as long as, and only as long as, both of the following hold:
Propagation Constraint I: it is not known that the message M has been delivered to d.
Propagation Constraint II: it is not known that a message has been sent to d in the causal
future of Send(M), and hence it is not guaranteed using a reasoning based on transitivity that
the message M will be delivered to d in CO.
The Propagation Constraints also imply that if either (I) or (II) is false, the information
"d ∈ M.Dests" must not be stored or propagated, even to remember that (I) or (II) has been
falsified. In other words, the information "d ∈ Mi,a.Dests" must be available in the causal future of event (i, a), but:
not in the causal future of Deliver_d(Mi,a), and
not in the causal future of ek,c, where d ∈ Mk,c.Dests and there is no other
message sent causally between Mi,a and Mk,c to the same destination d.
The data structures maintained are sorted row–major and then column–major:
1. Explicit tracking:
Tracking of (source, timestamp, destination) information for messages (i) not known to be
delivered and (ii) not guaranteed to be delivered in CO is done explicitly, using the l.Dests
field of entries in local logs at nodes and the o.Dests field of entries in messages.
Sets li,a.Dests and oi,a.Dests contain explicit information about the destinations to which Mi,a is not
guaranteed to be delivered in CO and is not known to be delivered.
The information about d ∈ Mi,a.Dests is propagated up to the earliest events on all causal
paths from (i, a) at which it is known that Mi,a is delivered to d or is guaranteed to be
delivered to d in CO.
2. Implicit tracking:
Tracking of messages that are either (i) already delivered, or (ii) guaranteed to be
delivered in CO, is performed implicitly.
The information about messages (i) already delivered or (ii) guaranteed to be delivered
in CO is deleted and not propagated, because it is redundant as far as enforcing CO is
concerned.
It is useful in determining what information that is being carried in other messages, and
is being stored in logs at other nodes, has become redundant and thus can be purged.
The semantics are implicitly stored and propagated. This information about messages
that are (i) already delivered or (ii) guaranteed to be delivered in CO is tracked
without explicitly storing it.
The algorithm derives it from the existing explicit information about messages (i) not
known to be delivered and (ii) not guaranteed to be delivered in CO, by examining
only oi,a.Dests or li,a.Dests, which is a part of the explicit information.
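The pruning at the heart of this bookkeeping can be sketched as follows. This is an illustrative simplification rather than the full algorithm: only the merging of piggybacked Dests entries into the local log is shown, using set intersection to discard destinations that are implicitly known to be delivered or guaranteed:

    # Sketch: merging piggybacked Dests information into the local log.
    def merge_piggyback(log, piggyback):
        """log, piggyback: dict mapping a multicast id such as ("P5", 1) to the
           set of destinations not yet known/guaranteed to be delivered in CO."""
        for msg_id, o_dests in piggyback.items():
            if msg_id not in log:
                log[msg_id] = set(o_dests)       # new explicit information
            else:
                # destinations missing from either set are implicitly resolved,
                # so only the intersection still needs explicit tracking
                log[msg_id] &= set(o_dests)
        return log

    # Example from the notes: Log3 holds {P6} for M5,1; M4,3 piggybacks M5,1.Dests = {}.
    log3 = {("P5", 1): {"P6"}}
    merge_piggyback(log3, {("P5", 1): set()})
    print(log3)   # {('P5', 1): set()}  -- nothing further needs explicit tracking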
Multicast M4,3
At event (4, 3), the information P6 ∈ M5,1.Dests in Log4 is propagated on multicast M4,3 only to
process P6, to ensure causal delivery using the Delivery Condition. The piggybacked
information on message M4,3 sent to process P3 must not contain this information because of
constraint II. As long as any future message sent to P6 is delivered in causal order w.r.t.
M4,3 sent to P6, it will also be delivered in causal order w.r.t. M5,1. And as M5,1 is already
delivered to P4, the information M5,1.Dests = ∅ is piggybacked on M4,3 sent to P3. Similarly,
the information P6 ∈ M5,1.Dests must be deleted from Log4 as it will no longer be needed,
because of constraint II. M5,1.Dests = ∅ is stored in Log4 to remember that M5,1 has been
delivered or is guaranteed to be delivered in causal order to all its destinations.
Learning implicit information at P2 and P3
When message M4,2 is received by processes P2 and P3, they insert the (new) piggybacked
information M5,1.Dests = {P6} in their local logs. They both continue to store
this in Log2 and Log3 and propagate this information on multicasts until they learn, at events
(2, 4) and (3, 2) on receipt of messages M3,3 and M4,3, respectively, that any future message is
expected to be delivered in causal order to process P6 w.r.t. M5,1 sent to P6. Hence, by
constraint II, this information must be deleted from Log2 and Log3. The flow of events is
given by:
When M4,3 with piggybacked information M5,1.Dests = ∅ is received by P3 at (3, 2), this
is inferred to be valid current implicit information about multicast M5,1, because the log
Log3 already contains the explicit information P6 ∈ M5,1.Dests about that multicast.
Therefore, the explicit information in Log3 is inferred to be old and must be deleted to
achieve optimality. M5,1.Dests is set to ∅ in Log3.
The logic by which P2 learns this implicit knowledge on the arrival of M3,3 is
identical.
Processing at P6
When message M5,1 is delivered to P6, only M5,1.Dests = {P4} is added to Log6. Further, P6
propagates only M5,1.Dests = {P4} on message M6,2, and this conveys the current implicit
information that M5,1 has been delivered to P6, by its very absence in the explicit information.
When the information P6 ∈ M5,1.Dests arrives on M4,3, piggybacked as M5,1.Dests = {P6},
it is used only to ensure causal delivery of M4,3 using the Delivery Condition,
and is not inserted in Log6 (constraint I). Further, the presence of M5,1.Dests = {P4}
in Log6 implies the implicit information that M5,1 has already been delivered to
P6. Also, the absence of P4 in M5,1.Dests in the explicit piggybacked information
implies the implicit information that M5,1 has been delivered or is guaranteed to be
delivered in causal order to P4, and therefore M5,1.Dests is set to ∅ in Log6.
When the information P6 ∈ M5,1.Dests arrives on M5,2, piggybacked as M5,1.Dests
= {P4, P6}, it is used only to ensure causal delivery of M5,2 using the Delivery
Condition, and is not inserted in Log6 because Log6 contains M5,1.Dests = ∅,
which gives the implicit information that M5,1 has been delivered or is guaranteed
to be delivered in causal order to both P4 and P6.
Processing at P1
When M2,2 arrives carrying piggybacked information M5,1.Dests = {P6}, this (new)
information is inserted in Log1.
When M6,2 arrives with piggybacked information M5,1.Dests = {P4}, P1 learns the implicit
information that M5,1 has been delivered to P6 by the very absence of the explicit information
P6 ∈ M5,1.Dests in the piggybacked information, and hence marks the information P6 ∈
M5,1.Dests for deletion from Log1. Simultaneously, M5,1.Dests = {P6} in Log1 implies
the implicit information that M5,1 has been delivered or is guaranteed to be delivered in
causal order to P4. Thus, P1 also learns that the explicit piggybacked information
M5,1.Dests = {P4} is outdated. M5,1.Dests in Log1 is set to ∅.
The information "P6 ∈ M5,1.Dests" piggybacked on M2,3, which arrives at P1, is
inferred to be outdated using the implicit knowledge derived from M5,1.Dests = ∅ in
Log1.
2.7 TOTAL ORDER
For each pair of processes Pi and Pj and for each pair of messages Mx and My that are delivered to
both the processes, Pi is delivered Mx before My if and only if Pj is delivered Mx before My.
Centralized algorithm
Each process sends the message it wants to broadcast to a centralized process, which
relays all the messages it receives to every other process over FIFO channels.
Complexity: Each message transmission takes two message hops and exactly n messages
in a system of n processes.
Drawbacks: A centralized algorithm has a single point of failure and congestion, and is
not an elegant solution.
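A minimal sketch of this centralized approach follows (illustrative; the class name, queues, and sequence numbers are assumptions): the sequencer relays every submitted message, in arrival order, over FIFO channels, so all processes deliver in the same total order.

    # Sketch: centralized total-order broadcast via a sequencer process.
    import queue

    class Sequencer:
        def __init__(self, member_channels):
            self.members = member_channels       # one FIFO queue per process
            self.inbox = queue.Queue()           # messages awaiting relay
            self.seqno = 0

        def submit(self, sender, msg):
            """First hop: a process sends the message it wants to broadcast."""
            self.inbox.put((sender, msg))

        def relay_one(self):
            """Second hop: relay the next message to every process in arrival order."""
            sender, msg = self.inbox.get()
            self.seqno += 1
            for ch in self.members:
                ch.put((self.seqno, sender, msg))   # same order on every FIFO channel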
Three-phase distributed algorithm
Sender side
Phase 1
In the first phase, a process multicasts the message M with a locally unique tag and
the local timestamp to the group members.
Phase 2
The sender process awaits a reply from all the group members who respond with a
tentative proposal for a revised timestamp for that message M.
The await call is non-blocking.
Phase 3
The process multicasts the final timestamp to the group.
Receiver side
Phase 1
The receiver receives the message with a tentative (proposed) timestamp. It updates the
variable priority that tracks the highest proposed timestamp so far, revises the proposed
timestamp to this priority, and places the message with its tag and the revised timestamp at
the tail of the queue temp_Q, where the entry is marked as undeliverable.
Phase 2
The receiver sends the revised timestamp back to the sender. The receiver then waits
in a non-blocking manner for the final timestamp.
Phase 3
The final timestamp is received from the multicaster. The corresponding message
entry in temp_Q is identified using the tag, and is marked as deliverable after the
revised timestamp is overwritten by the final timestamp.
The queue is then resorted using the timestamp field of the entries as the key. As the
queue is already sorted except for the modified entry for the message under
consideration, that message entry has to be placed in its sorted position in the queue.
If the message entry is at the head of the temp_Q, that entry, and all consecutive
subsequent entries that are also marked as deliverable, are dequeued from temp_Q,
and enqueued in deliver_Q.
Complexity
This algorithm uses three phases and, to send a message to n − 1 processes, it uses 3(n − 1)
messages and incurs a delay of three message hops.
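A condensed sketch of the three-phase exchange follows (illustrative Python: multicast, recv_all, and reply are assumed transport helpers, and the temp_Q handling is simplified):

    # Sketch: three-phase total-order multicast (sender and receiver roles).
    def sender(msg, tag, clock, group, multicast, recv_all):
        multicast(group, ("PROPOSE", tag, msg, clock))    # Phase 1: propose a timestamp
        proposals = recv_all(group, tag)                  # Phase 2: one revised timestamp per member
        multicast(group, ("FINAL", tag, max(proposals)))  # Phase 3: announce the final timestamp

    class Receiver:
        def __init__(self):
            self.priority = 0       # highest timestamp proposed so far
            self.temp_q = {}        # tag -> [timestamp, deliverable?, msg]
            self.deliver_q = []

        def on_propose(self, tag, msg, ts, reply):
            self.priority = max(self.priority, ts) + 1
            self.temp_q[tag] = [self.priority, False, msg]
            reply(self.priority)                 # Phase 2: revised timestamp back to the sender

        def on_final(self, tag, final_ts):
            entry = self.temp_q[tag]
            entry[0], entry[1] = final_ts, True  # Phase 3: overwrite timestamp, mark deliverable
            while self.temp_q:                   # dequeue deliverable entries at the head
                head = min(self.temp_q, key=lambda t: self.temp_q[t][0])
                ts, ok, m = self.temp_q[head]
                if not ok:
                    break
                self.deliver_q.append(m)
                del self.temp_q[head]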
2.8 GLOBAL STATE AND SNAPSHOT RECORDING ALGORITHMS
A distributed computing system consists of processes that do not share a common
memory and communicate asynchronously with each other by message passing.
Each component of the system has a local state. The state of a process is characterized by
its local memory and a history of its activity.
The state of a channel is characterized by the set of messages sent along the channel
less the messages received along the channel. The global state of a distributed system is
a collection of the local states of its components.
If shared memory were available, an up-to-date state of the entire system would be
available to the processes sharing the memory.
The absence of shared memory necessitates ways of getting a coherent and complete
view of the system based on the local states of individual processes.
A meaningful global snapshot can be obtained if the components of the distributed
system record their local states at the same time.
This would be possible if the local clocks at processes were perfectly synchronized or
if there were a global system clock that could be instantaneously read by the processes.
If processes read time from a single common clock, various indeterminate transmission
delays during the read operation will cause the processes to identify various physical
instants as the same time.
Law of conservation of messages: Every message mij that is recorded as sent in the local state of a
process pi must be captured in the state of the channel Cij or in the collected local state of the
receiver process pj.
In a consistent global state, every message that is recorded as received is also recorded
as sent. Such a global state captures the notion of causality that a message cannot be
received if it was not sent.
Consistent global states are meaningful global states, and inconsistent global states are
not meaningful in the sense that a distributed system can never be in an
inconsistent state.
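The conservation law can be checked mechanically on a collected state. The sketch below assumes a simple record format (sets of message ids per ordered process pair), which is an illustration rather than part of the notes:

    # Sketch: checking the law of conservation of messages.
    # sent[i][j], received[j][i], channel[i][j] are sets of message ids.
    def is_consistent(sent, received, channel):
        for i in sent:
            for j in sent[i]:
                recv = received.get(j, {}).get(i, set())
                in_transit = channel.get(i, {}).get(j, set())
                if not recv <= sent[i][j]:
                    return False     # a message recorded as received was never sent
                if sent[i][j] - recv != in_transit:
                    return False     # a sent message is neither received nor captured in C_ij
        return True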
Issue 2:
How to determine the instant when a process takes its snapshot?
Answer:
A process pj must record its snapshot before processing a message mij that was sent by
process pi after recording its snapshot.
2.9 SNAPSHOT ALGORITHMS FOR FIFO CHANNELS
Each distributed application has a number of processes running on different physical
servers. These processes communicate with each other through messaging channels.
A snapshot captures the local states of each process along with the state of each communication channel.
Initiating a snapshot
Process Pi initiates the snapshot as follows:
Pi records its own state and prepares a special marker message.
Pi sends the marker message to all other processes over its outgoing channels.
Pi starts recording all incoming messages on channels Cji for j not equal to i.
Propagating a snapshot
For every other process Pj, consider the arrival of a marker message on channel Ckj:
If the marker message is seen for the first time:
Pj records its own state and marks the state of channel Ckj as empty.
Pj sends the marker message to all other processes over its outgoing channels.
Pj records all incoming messages on channels Clj for l not equal to j or k.
Otherwise (a marker was seen before), Pj records the messages that have arrived on Ckj since it recorded its own state as the state of channel Ckj.
Terminating a snapshot
All processes have received a marker (and recorded their own state).
All processes have received a marker on all the N−1 incoming channels (and recorded the channel states).
A central server can then gather the partial states to build a global snapshot.
Complexity
The recording part of a single instance of the algorithm requires O(e) messages
and O(d) time, where e is the number of edges in the network and d is the diameter of the
network.
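A minimal sketch of the marker-handling rules at a single process follows (illustrative: the class, the send_marker helper, and the bookkeeping fields are assumptions, not a prescribed implementation):

    # Sketch: marker-based snapshot recording at one process (FIFO channels).
    class SnapshotProcess:
        def __init__(self, pid, peers, send_marker):
            self.pid = pid
            self.peers = peers                 # ids of all other processes
            self.send_marker = send_marker     # assumed helper: send a marker to process j
            self.local_state = None
            self.recording = {}                # incoming channel -> recorded messages
            self.done = set()                  # channels on which a marker has arrived

        def initiate(self, state):
            self.local_state = state                       # record own state
            self.recording = {k: [] for k in self.peers}   # start recording all incoming channels
            for j in self.peers:
                self.send_marker(j)

        def on_marker(self, k, state_now):
            if self.local_state is None:                   # marker seen for the first time
                self.local_state = state_now
                self.recording = {l: [] for l in self.peers}
                for j in self.peers:
                    self.send_marker(j)
            self.done.add(k)                               # state of C_k,i is now fixed (empty if first)
            return len(self.done) == len(self.peers)       # True when locally complete

        def on_message(self, k, msg):
            if self.local_state is not None and k not in self.done:
                self.recording[k].append(msg)              # message belongs to the state of C_k,i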