DC Unit-2


UNIT-2


Physical Clock Synchronization


In distributed systems, physical clock synchronization ensures that all nodes operate with
consistent time references. This is critical for coordinating distributed processes and
ensuring causality.

Goals of Physical Clock Synchronization


1. Minimize discrepancies between clocks across nodes.
2. Synchronize with an accurate global reference (e.g., UTC).
3. Facilitate coordination of time-sensitive events across nodes.

Key Algorithms for Clock Synchronization


1. Cristian’s Algorithm
• Use Case: Synchronizes a client's clock with a single time server.
• Assumptions: Symmetric message delays between the client and the server.
Steps:
1. The client sends a time request to the server, noting the send time T_o.
2. The server responds with its current time T_s.
3. The client notes the reply's arrival time T_r and computes the Round-Trip Time: RTT = T_r − T_o.
4. The client sets its clock to: T_client = T_s + RTT/2.
Key Point: Effective for systems with consistent network delays.
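As a rough illustration, the adjustment step above can be sketched in Python (the timestamps are invented example values, not a real server exchange):

```python
def cristian_sync(t_o, t_s, t_r):
    """Cristian's algorithm: estimate the corrected client time.

    t_o: client clock when the request was sent
    t_s: time reported by the server
    t_r: client clock when the reply arrived
    """
    rtt = t_r - t_o
    # Assume the reply took half the round trip to travel back.
    return t_s + rtt / 2.0

# Request sent at t=100.0, server replied 105.0, reply arrived at t=100.4:
print(cristian_sync(100.0, 105.0, 100.4))  # ~105.2
```

The halving of RTT is exactly where the symmetric-delay assumption enters: if the reply path is slower than the request path, the estimate is biased.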

2. Berkeley’s Algorithm
• Use Case: Synchronizes all clocks in a distributed system.
• Assumptions: Works without a central reference clock (e.g., UTC).
Steps:
1. A master node polls all nodes for their current time.
2. The master computes the average time: T_avg = (sum of all clock times) / (number of nodes).
3. The master tells each node the adjustment (offset) it must apply to reach the average time.
Key Point: Useful in peer-based systems where no single external clock is available.
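A minimal sketch of the master's computation (the clock readings are invented; a real master would also discount network delay when reading each clock):

```python
def berkeley_adjustments(master_time, slave_times):
    """Berkeley's algorithm: average all clocks, return per-node offsets.

    Returns the offset each node (index 0 = master) must add to its clock;
    sending offsets rather than absolute times avoids errors from the
    delay of the adjustment message itself.
    """
    all_times = [master_time] + slave_times
    avg = sum(all_times) / len(all_times)
    return {i: avg - t for i, t in enumerate(all_times)}

# Master reads 36000 s; the two slaves report 36010 s and 35990 s:
print(berkeley_adjustments(36000, [36010, 35990]))
# {0: 0.0, 1: -10.0, 2: 10.0}
```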

3. Network Time Protocol (NTP)


• Use Case: Synchronizes clocks across large-scale networks with hierarchical time
servers.
• Architecture:
o Stratum 0: Reference clocks (e.g., atomic clocks, GPS).
o Stratum 1: Servers connected to Stratum 0 clocks.
o Stratum 2+: Servers synchronized to higher strata.
How it works:
NTP exchanges four timestamps per round: the client's transmit time T1, the server's receive time T2, the server's transmit time T3, and the client's receive time T4. From these it calculates:
1. Clock Offset (θ): θ = ((T2 − T1) + (T3 − T4)) / 2
2. Round-Trip Delay (δ): δ = (T4 − T1) − (T3 − T2)
Key Features:
• Hierarchical and scalable.
• Tolerates moderately asymmetric network delays.
• Filters out errors by sampling repeatedly and averaging.
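The offset and delay formulas can be checked with a small sketch (the four timestamps below are invented example values):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Compute NTP clock offset (theta) and round-trip delay (delta).

    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive.
    """
    theta = ((t2 - t1) + (t3 - t4)) / 2.0
    delta = (t4 - t1) - (t3 - t2)
    return theta, delta

# Here the client's clock runs 4.5 s behind the server's:
print(ntp_offset_delay(10, 15, 16, 12))  # (4.5, 1)
```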

Comparison

Feature          | Cristian's Algorithm | Berkeley's Algorithm | Network Time Protocol (NTP)
Reference Clock  | Centralized (Server) | None                 | Hierarchical Strata
Scalability      | Limited              | Moderate             | High
Fault Tolerance  | Low                  | Moderate             | High
Network Delays   | Assumes Symmetry     | Can Handle Skews     | Tolerant of Asymmetry

Conclusion
1. Cristian's Algorithm: Effective for small-scale systems with reliable network delays.
2. Berkeley's Algorithm: Best for peer systems with no external clock reference.
3. NTP: Ideal for large, hierarchical systems requiring high accuracy and fault tolerance.

A snapshot algorithm for FIFO (First-In, First-Out) channels typically applies in distributed
systems to capture a consistent global state. One commonly used algorithm for this is
Chandy-Lamport's Snapshot Algorithm. Below is an outline of the algorithm tailored for FIFO
channels:

Chandy-Lamport Snapshot Algorithm (for FIFO Channels)


Assumptions:
1. The system consists of N processes that communicate via unidirectional FIFO
channels.
2. Each process can send and receive messages.
3. There is no shared memory; communication happens only through message passing.
4. Communication channels are reliable.

Steps of the Algorithm


1. Initiating the Snapshot:
o A process P_i initiates the snapshot by recording its own state (e.g., local
variables, program counter).
o P_i then sends a marker message along all its outgoing channels.
2. Recording States:
o When a process P_j receives a marker for the first time, say on channel C:
▪ It records its own state.
▪ It records the state of channel C as empty (FIFO guarantees that every
message sent on C before the marker has already been received).
▪ It propagates the snapshot by sending marker messages on all its
outgoing channels.
o When P_j later receives a marker on another incoming channel C':
▪ It records the state of C' as the set of messages received on C' after
P_j recorded its own state and before the marker arrived.
3. Message Handling During the Snapshot:
o Messages that arrive on a channel before the process records its own state
belong to the pre-snapshot computation and are not recorded as channel state.
o Messages that arrive after the process records its state but before the marker
on that channel are recorded as that channel's state.
4. Completion:
o The snapshot is complete when every process has:
▪ Recorded its local state.
▪ Recorded the state of all its incoming channels.
5. Global State Assembly:
o The recorded local states and channel states are collected to form the global
state of the system.

Key Points for FIFO Channels


• FIFO channels simplify the algorithm because markers are always received in the
order they are sent. This ensures no ambiguity in separating messages sent before
and after the marker.
• The state of a channel can be computed by collecting messages arriving after the
process has recorded its state but before it receives the marker.

Example Scenario:
1. Processes: P_1, P_2, P_3, fully connected by FIFO channels.
2. P_1 initiates the snapshot:
o Records its state and sends markers to P_2 and P_3.
3. P_2 receives the marker from P_1:
o Records its state and sends markers to P_1 and P_3.
4. P_3 receives the marker from P_1:
o Records its state and sends markers to P_1 and P_2.
5. Each process records the state of an incoming channel as the messages received
between recording its own state and receiving the marker on that channel.

This approach ensures a consistent global state snapshot in a system with FIFO channels.
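The marker-handling rules above can be sketched from a single process's point of view (the channel names, local state, and delivery order below are an invented example; message transport is assumed to happen elsewhere):

```python
MARKER = object()  # distinguished marker message

class SnapshotProcess:
    """One process's view of the Chandy-Lamport algorithm (sketch only).

    After _record_local(), a real process would also send MARKER on all
    of its outgoing channels; that step is omitted here.
    """
    def __init__(self, incoming_channels, local_state):
        self.live_state = local_state
        self.snapshot_state = None            # None = not yet recorded
        self.channel_state = {}               # channel -> in-transit messages
        self.incoming = list(incoming_channels)
        self.recording = set(incoming_channels)

    def _record_local(self):
        self.snapshot_state = dict(self.live_state)
        self.channel_state = {c: [] for c in self.incoming}

    def receive(self, channel, msg):
        if msg is MARKER:
            if self.snapshot_state is None:
                self._record_local()          # first marker: record own state
            self.recording.discard(channel)   # marker closes this channel
        elif self.snapshot_state is not None and channel in self.recording:
            # In transit at snapshot time: part of the channel's state.
            self.channel_state[channel].append(msg)

# P2 has incoming channels from P1 and P3:
p2 = SnapshotProcess(["P1->P2", "P3->P2"], {"x": 7})
p2.receive("P1->P2", MARKER)   # records local state; P1->P2 recorded empty
p2.receive("P3->P2", "m1")     # in transit on P3->P2: recorded
p2.receive("P3->P2", MARKER)   # P2's part of the snapshot is complete
print(p2.snapshot_state, p2.channel_state)
# {'x': 7} {'P1->P2': [], 'P3->P2': ['m1']}
```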
Message ordering paradigms:
Message ordering paradigms in distributed computing define how messages are delivered
and processed in a distributed system. These paradigms are crucial for maintaining
consistency, ensuring correctness, and achieving synchronization in applications. Below are
the key message ordering paradigms:

1. FIFO Ordering (First In, First Out)


• Ensures that messages sent by the same sender to the same receiver are delivered in
the order they were sent.
• Does not guarantee ordering across different senders.
• Example: If sender A sends messages M1 and M2, M1 will always be
delivered before M2.
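FIFO delivery is commonly implemented with per-sender sequence numbers and a hold-back buffer; a sketch (the senders and payloads are invented):

```python
class FifoReceiver:
    """Hold back out-of-order messages; deliver each sender's in send order."""
    def __init__(self):
        self.next_seq = {}   # sender -> next expected sequence number
        self.held = {}       # sender -> {seq: payload} awaiting delivery

    def receive(self, sender, seq, payload):
        self.held.setdefault(sender, {})[seq] = payload
        expected = self.next_seq.get(sender, 0)
        delivered = []
        # Deliver every consecutive message we now have from this sender.
        while expected in self.held[sender]:
            delivered.append(self.held[sender].pop(expected))
            expected += 1
        self.next_seq[sender] = expected
        return delivered

r = FifoReceiver()
print(r.receive("A", 1, "M2"))  # [] -- M2 arrived first, held back
print(r.receive("A", 0, "M1"))  # ['M1', 'M2'] -- delivered in send order
```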

2. Causal Ordering
• Maintains the causal relationship between messages. If M1 causally affects
M2, then M1 is delivered before M2.
• Based on Lamport's happens-before relation (→).
• Does not impose an order on messages that are independent of each other.
• Example: If M1 is a request and M2 is its acknowledgment, M1 will be
delivered first.
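Causal delivery is typically enforced with vector clocks; a sketch of the deliverability test (process names and clock values are invented, and this follows the style of the Birman-Schiper-Stephenson rule):

```python
def causally_deliverable(msg_vc, sender, local_vc):
    """Return True if a message stamped with vector clock msg_vc from
    `sender` may be delivered to a receiver whose clock is local_vc."""
    for proc, count in msg_vc.items():
        if proc == sender:
            if count != local_vc.get(proc, 0) + 1:
                return False  # not the next message from the sender
        elif count > local_vc.get(proc, 0):
            return False      # a causally earlier message is still missing
    return True

local = {"A": 1, "B": 0}  # receiver has seen 1 message from A, none from B
print(causally_deliverable({"A": 2, "B": 0}, "A", local))  # True
print(causally_deliverable({"A": 2, "B": 1}, "B", local))  # False: needs A's 2nd msg
```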

3. Total Ordering
• Ensures that all messages are delivered in the same order to all processes in the
system.
• The order may not match the sending order but is consistent across all recipients.
• Example: If M1 is delivered before M2 to one process, all other processes will
receive M1 before M2.
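One common way to realize total ordering is a sequencer node; a sketch showing that every receiver delivers the same order regardless of arrival order (the messages are invented):

```python
class Sequencer:
    """Assigns a global sequence number to every message (sketch)."""
    def __init__(self):
        self.seq = 0

    def stamp(self, msg):
        self.seq += 1
        return (self.seq, msg)

def deliver_in_total_order(received):
    # Every receiver sorts by the sequencer's stamp, so all receivers
    # deliver the same order even if messages arrived differently.
    return [msg for _, msg in sorted(received)]

s = Sequencer()
a, b = s.stamp("M1"), s.stamp("M2")
print(deliver_in_total_order([a, b]))  # ['M1', 'M2']
print(deliver_in_total_order([b, a]))  # ['M1', 'M2'] -- same everywhere
```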

4. Global Ordering
• A stricter form of ordering where a global clock or sequence is used to assign
timestamps to messages.
• Messages are delivered in increasing order of timestamps.
• Example: Logical or physical clocks, such as Lamport clocks or vector clocks, may be
used.

5. Group Ordering
• Guarantees message ordering within a specific group of processes but not
necessarily across different groups.
• Useful for applications with subgroup communication, such as chat applications.

6. Arbitrary Ordering
• No specific order is enforced on message delivery.
• Messages may arrive in any sequence.
• Example: UDP communication, where ordering is not guaranteed.

Applications and Use Cases


• FIFO Ordering: Chat systems, basic messaging applications.
• Causal Ordering: Collaborative systems (e.g., Google Docs), distributed databases.
• Total Ordering: Consensus algorithms (e.g., Paxos, Raft), event logging.
• Global Ordering: Distributed ledgers, transaction processing.
• Group Ordering: Multiplayer games, group messaging systems.
• Arbitrary Ordering: Real-time streaming, non-critical data transfers.
Understanding and choosing the appropriate message ordering paradigm is essential for
designing reliable distributed systems.
Types of Group Communication
In a distributed system, group communication refers to the mechanisms used to enable
multiple processes or nodes to exchange information and coordinate tasks. The types of
group communication commonly used are:

1. Broadcast Communication
• Definition: Sends a message to all nodes in the network.
• Use Case: Used in scenarios like network discovery or disseminating global
information.
• Example: Sending a request to all servers in a distributed database to sync states.

2. Multicast Communication
• Definition: Sends a message to a specific subset of nodes (a group) instead of all
nodes.
• Use Case: Efficient for applications where only a group of nodes need the
information, such as in collaborative applications or group-based replication.
• Example: Streaming multimedia content to a group of subscribers.
3. Unicast Communication
• Definition: Direct one-to-one communication between two nodes.
• Use Case: Used for private communication or request-reply interactions.
• Example: A client requesting data from a single server.

4. Anycast Communication
• Definition: A message is sent to one of the members of a group, typically the nearest
or most suitable node.
• Use Case: Useful for load balancing or finding the nearest service.
• Example: Sending a DNS request to the nearest DNS server.

5. Gossip Communication
• Definition: Nodes communicate in a peer-to-peer manner, spreading information like
a rumor.
• Use Case: Used for large-scale information dissemination or state synchronization in
decentralized systems.
• Example: Peer-to-peer file sharing systems like BitTorrent.

6. Publish-Subscribe (Pub-Sub) Communication


• Definition: A system where publishers send messages without knowing the receivers,
and subscribers receive messages they are interested in through topics or channels.
• Use Case: Ideal for event-driven architectures and messaging systems.
• Example: Notification services in social media platforms.
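A minimal in-process broker illustrates the publisher/subscriber decoupling (topic names and messages are invented):

```python
from collections import defaultdict

class PubSubBroker:
    """Minimal topic-based publish-subscribe broker (illustrative sketch)."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher never learns who (if anyone) receives the message.
        for callback in self.subscribers[topic]:
            callback(message)

broker = PubSubBroker()
inbox = []
broker.subscribe("alerts", inbox.append)
broker.publish("alerts", "disk full")
broker.publish("metrics", "cpu=70%")  # no subscribers: silently dropped
print(inbox)  # ['disk full']
```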

7. Consensus-Based Communication
• Definition: A group of nodes communicates to agree on a common value or state.
• Use Case: Used in distributed databases, blockchain systems, and leader election.
• Example: The Raft consensus algorithm for leader election in a distributed database.

8. Point-to-Multipoint Communication
• Definition: A single node sends messages to multiple specific nodes without
broadcasting to the entire network.
• Use Case: Useful for selective communication like updating a set of replicas.
• Example: Sending updates to only the active replicas in a distributed system.

9. Reliable Group Communication


• Definition: Ensures all members of a group receive messages reliably, even in the
presence of node or network failures.
• Use Case: Used in critical systems where message delivery guarantees are essential.
• Example: Distributed database replication systems.

Each of these communication types serves specific needs in a distributed system and is often
implemented through protocols or frameworks such as Multicast UDP, Kafka (for Pub-Sub),
or Raft and Paxos (for Consensus). The choice of method depends on the system's
requirements, including scalability, reliability, and latency.
