DC Unit-2
2. Berkeley’s Algorithm
• Use Case: Synchronizes all clocks in a distributed system.
• Assumptions: No node has access to an accurate external time source (e.g., UTC); the nodes agree on an average time among themselves.
Steps:
1. A master node polls all nodes to get their current time.
2. Master computes the average time:
T_avg = (sum of all clock times) / (number of nodes)
3. Master sends each node the offset (the amount by which to advance or slow its clock) needed to bring it to the average time.
Key Point: Useful in peer-based systems where no single external clock is available.
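The averaging step above can be sketched in a few lines of Python. This is a minimal illustration only: it assumes the master has already polled the other nodes for their clock values and it ignores round-trip delay compensation; the names berkeley_sync and reported_times are invented for the example.

```python
# Minimal sketch of Berkeley's averaging step (polling and round-trip
# delay compensation are omitted for brevity).

def berkeley_sync(master_time, reported_times):
    """Return the clock offset each node should apply, given the master's
    clock value and the clock values polled from the other nodes."""
    all_times = [master_time] + list(reported_times.values())
    t_avg = sum(all_times) / len(all_times)          # average of all clocks

    # The master sends each node an offset (average - node's time) rather
    # than an absolute time, so the correction tolerates message delay.
    offsets = {node: t_avg - t for node, t in reported_times.items()}
    offsets["master"] = t_avg - master_time
    return offsets

# Example: clock values expressed in seconds.
print(berkeley_sync(36000.0, {"P2": 36010.0, "P3": 35950.0}))
# approximately {'P2': -23.3, 'P3': +36.7, 'master': -13.3}
```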
Comparison and Conclusion
1. Cristian's Algorithm: Effective for small-scale systems where network delays are small and roughly symmetric.
2. Berkeley's Algorithm: Best for peer systems with no external clock reference.
3. NTP: Ideal for large, hierarchical systems requiring high accuracy and fault tolerance.
Snapshot Algorithm for FIFO Channels
A snapshot algorithm for FIFO (First-In, First-Out) channels captures a consistent global state of a distributed system. A commonly used algorithm for this is the Chandy-Lamport snapshot algorithm; an example run over FIFO channels is outlined below:
Example Scenario:
1. Processes: P1, P2, P3.
2. P1 initiates the snapshot:
o Records its state and sends markers to P2 and P3.
3. P2 receives the marker from P1:
o Records its state and sends markers to P1 and P3.
4. P3 receives the marker from P1:
o Records its state and sends markers to P1 and P2.
5. Each process records the state of an incoming channel as the messages that arrive on that channel after it has recorded its own state and before it receives the marker on that channel.
This approach ensures a consistent global state snapshot in a system with FIFO channels.
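A compact Python sketch of this marker rule is shown below. It assumes FIFO channels and an externally supplied send(src, dst, msg) callable for message transport; the class name SnapshotProcess and its fields are illustrative, not part of any library.

```python
# Minimal sketch of the Chandy-Lamport marker rule for FIFO channels.
MARKER = "MARKER"

class SnapshotProcess:
    def __init__(self, pid, peers, send):
        self.pid = pid
        self.peers = peers          # ids of the other processes
        self.send = send            # send(src, dst, msg) delivers in FIFO order
        self.local_value = 0        # stand-in for real application state
        self.state = None           # recorded local state (None = not yet recorded)
        self.channel_open = {}      # src -> still recording the channel from src?
        self.channel_state = {}     # src -> in-flight messages recorded so far

    def initiate_snapshot(self):
        self._record_and_send_markers()

    def on_message(self, src, msg):
        if msg == MARKER:
            if self.state is None:
                # First marker seen: record own state; the channel from src
                # is recorded as empty.
                self._record_and_send_markers()
                self.channel_open[src] = False
            else:
                # Marker closes the channel from src; its recorded state is
                # whatever arrived between our recording and this marker.
                self.channel_open[src] = False
        else:
            # Application message: record it if this channel is being snapshotted.
            if self.state is not None and self.channel_open.get(src):
                self.channel_state[src].append(msg)
            # ...normal application processing would follow here...

    def _record_and_send_markers(self):
        self.state = self.local_value        # record local state
        for p in self.peers:
            self.channel_open[p] = True      # start recording channel p -> self
            self.channel_state[p] = []
            self.send(self.pid, p, MARKER)   # marker on every outgoing channel
```

Wiring three such objects (P1, P2, P3) to an in-memory FIFO queue and calling initiate_snapshot() on P1 reproduces the scenario above.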
Message ordering paradigms:
Message ordering paradigms in distributed computing define how messages are delivered
and processed in a distributed system. These paradigms are crucial for maintaining
consistency, ensuring correctness, and achieving synchronization in applications. Below are
the key message ordering paradigms:
1. FIFO Ordering
• Messages sent by one process to another are delivered in the order in which they were sent.
• Places no constraint on messages from different senders.
• Example: Messages sent over a single TCP connection arrive in send order.
2. Causal Ordering
• Maintains the causal relationship between messages: if M1 causally affects M2, then M1 is delivered before M2 at every process.
• Based on Lamport's happens-before relation (→).
• Does not impose an order on messages that are causally independent of each other.
• Example: If M1 is a request and M2 is its acknowledgment, M1 will be delivered first. (A short vector-clock sketch of the causal delivery check appears after this list.)
3. Total Ordering
• Ensures that all messages are delivered in the same order to all processes in the
system.
• The order may not match the sending order but is consistent across all recipients.
• Example: If M1 is delivered before M2 at one process, all other processes will also receive M1 before M2.
4. Global Ordering
• A stricter form of ordering where a global clock or sequence is used to assign
timestamps to messages.
• Messages are delivered in increasing order of timestamps.
• Example: Timestamps from logical clocks (e.g., Lamport clocks or vector clocks) or from synchronized physical clocks may be used.
5. Group Ordering
• Guarantees message ordering within a specific group of processes but not
necessarily across different groups.
• Useful for applications with subgroup communication, such as chat applications.
6. Arbitrary Ordering
• No specific order is enforced on message delivery.
• Messages may arrive in any sequence.
• Example: UDP communication, where ordering is not guaranteed.
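As mentioned under item 2, the following Python sketch shows a causal delivery check based on vector clocks. The functions can_deliver and on_receive, the message format (sender id plus the sender's vector clock at send time), and the buffering scheme are assumptions made for this example.

```python
# Minimal sketch: causal delivery with vector clocks (N processes, ids 0..N-1).

def can_deliver(msg_vc, sender, local_vc):
    """A message is deliverable if it is the next one expected from its sender
    and the sender had not seen anything we have not yet delivered."""
    for i, t in enumerate(msg_vc):
        if i == sender:
            if t != local_vc[i] + 1:      # must be the very next message from sender
                return False
        elif t > local_vc[i]:             # sender knew of a message we still lack
            return False
    return True

def on_receive(sender, msg_vc, local_vc, pending):
    """Buffer the message, then deliver every buffered message whose causal
    predecessors have already been delivered. Returns what was delivered now."""
    pending.append((sender, msg_vc))
    delivered, progress = [], True
    while progress:
        progress = False
        for entry in list(pending):
            s, vc = entry
            if can_deliver(vc, s, local_vc):
                local_vc[s] += 1          # we have now delivered s's next message
                delivered.append(entry)
                pending.remove(entry)
                progress = True
    return delivered

# Example at P0: P2's message arrives before the P1 message it depends on.
local, buf = [0, 0, 0], []
print(on_receive(2, [0, 1, 1], local, buf))   # [] -> buffered, waiting for P1
print(on_receive(1, [0, 1, 0], local, buf))   # both delivered, in causal order
```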
Types of communication in distributed systems:
1. Broadcast Communication
• Definition: Sends a message to all nodes in the network.
• Use Case: Used in scenarios like network discovery or disseminating global
information.
• Example: Sending a request to all servers in a distributed database to sync states.
2. Multicast Communication
• Definition: Sends a message to a specific subset of nodes (a group) instead of all
nodes.
• Use Case: Efficient for applications where only a group of nodes need the
information, such as in collaborative applications or group-based replication.
• Example: Streaming multimedia content to a group of subscribers.
3. Unicast Communication
• Definition: Direct one-to-one communication between two nodes.
• Use Case: Used for private communication or request-reply interactions.
• Example: A client requesting data from a single server.
4. Anycast Communication
• Definition: A message is sent to one of the members of a group, typically the nearest
or most suitable node.
• Use Case: Useful for load balancing or finding the nearest service.
• Example: Sending a DNS request to the nearest DNS server.
5. Gossip Communication
• Definition: Nodes communicate in a peer-to-peer manner, spreading information like
a rumor.
• Use Case: Used for large-scale information dissemination or state synchronization in
decentralized systems.
• Example: Peer-to-peer file sharing systems like BitTorrent. (A small push-gossip simulation is sketched after this list.)
6. Publish-Subscribe Communication
• Definition: Publishers post messages to named topics, and subscribers receive only the topics they have subscribed to, decoupling senders from receivers.
• Use Case: Event-driven systems and asynchronous notification, such as event or log streaming.
• Example: Apache Kafka delivering topic messages to multiple consumer groups.
7. Consensus-Based Communication
• Definition: A group of nodes communicates to agree on a common value or state.
• Use Case: Used in distributed databases, blockchain systems, and leader election.
• Example: The Raft consensus algorithm for leader election in a distributed database.
8. Point-to-Multipoint Communication
• Definition: A single node sends messages to multiple specific nodes without
broadcasting to the entire network.
• Use Case: Useful for selective communication like updating a set of replicas.
• Example: Sending updates to only the active replicas in a distributed system.
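To illustrate the gossip style from item 5 above, here is a minimal Python simulation of push gossip in synchronous rounds. The fanout value, the function name gossip_rounds, and the random peer selection are assumptions for this sketch, not a production protocol.

```python
import random

def gossip_rounds(num_nodes, fanout=2, seed=0):
    """Simulate push gossip: in each round, every informed node forwards the
    rumor to `fanout` randomly chosen peers. Returns the number of rounds
    needed until every node is informed."""
    random.seed(seed)
    informed = {0}                    # node 0 starts the rumor
    rounds = 0
    while len(informed) < num_nodes:
        rounds += 1
        for node in list(informed):
            for peer in random.sample(range(num_nodes), fanout):
                if peer != node:
                    informed.add(peer)
    return rounds

# The rumor typically reaches all nodes in roughly O(log N) rounds.
print(gossip_rounds(1000))
```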
Each of these communication types serves specific needs in a distributed system and is often
implemented through protocols or frameworks such as Multicast UDP, Kafka (for Pub-Sub),
or Raft and Paxos (for Consensus). The choice of method depends on the system's
requirements, including scalability, reliability, and latency.
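As a concrete example of one of the protocols named above, the snippet below shows a minimal UDP multicast sender and receiver using Python's standard socket module. The group address 224.1.1.1 and port 5007 are arbitrary values chosen for the sketch.

```python
import socket
import struct

MCAST_GRP, MCAST_PORT = "224.1.1.1", 5007     # example multicast group and port

def send_multicast(message: bytes):
    """Send one datagram to every member of the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)  # keep it near the LAN
    sock.sendto(message, (MCAST_GRP, MCAST_PORT))
    sock.close()

def receive_multicast():
    """Join the multicast group and block until one datagram arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(1024)
    sock.close()
    return data, addr
```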