DC Module 3
| Feature | Bully Algorithm | Ring Algorithm |
| --- | --- | --- |
| Use Case | Suitable for smaller networks where quick leader election is needed. | Suitable for stable systems where processes do not fail frequently. |
| Election Initiation | Any process can start the election. | Only one process starts the election. |
| Message Complexity | O(N²) in the worst case. | O(N) in the best case, O(2N) in the worst case. |
| Coordinator Announcement | The new leader immediately sends a COORDINATOR message. | The new leader sends a COORDINATOR message through the ring. |
| Handling Process Failures | If the highest process fails, another election starts. | If multiple processes fail, the ring may break. |
4. Conclusion
The Bully Algorithm is better for small systems where a fast election is needed, but it has high message complexity.
The Ring Algorithm is more message-efficient in larger systems but slower, because messages must circulate around the ring.
Final Verdict:
The choice depends on the system’s needs – Bully for fast, priority-based selection, and Ring for fair and efficient elections.
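To make the Bully side of the comparison concrete, here is a minimal sketch of its election rule. This is an illustrative assumption-laden model, not the full protocol: processes are plain integer IDs, liveness is a set we can query directly, and a real system would detect failures with timeouts on ELECTION/OK messages.

```python
# Minimal single-machine sketch of the Bully election rule (illustrative).
# Assumption: "alive" is directly queryable; real systems use timeouts.

def bully_election(initiator: int, processes: list[int], alive: set[int]) -> int:
    """Return the ID the Bully algorithm would elect as coordinator."""
    # Challenge every process with a higher ID than the initiator.
    responders = [p for p in processes if p > initiator and p in alive]
    if not responders:
        # No higher process answered: the initiator wins and would now
        # broadcast a COORDINATOR message to everyone else.
        return initiator
    # Any live higher process takes over; the highest one ultimately wins.
    return bully_election(max(responders), processes, alive)

# Example: the old coordinator P5 has crashed and P2 notices.
print(bully_election(2, [1, 2, 3, 4, 5], alive={1, 2, 3, 4}))  # -> 4
```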
| Feature | Lamport Clock | Vector Clock |
| --- | --- | --- |
| Definition | A single integer counter per process, incremented with each event. | A vector of counters, one for each process, maintaining causality. |
| Event Ordering | Ensures the happens-before (→) relation but cannot detect concurrency. | Detects causal ordering and can identify concurrent events. |
| Concurrency Detection | Cannot detect concurrent events. | Can detect concurrent events (`A ∥ B` when neither `A → B` nor `B → A` holds). |
Example:
Vector Clock:
If P1 sends a message to P2, the vector timestamps might be: P1 increments its own entry and sends with timestamp [1, 0]; P2, previously at [0, 0], takes the component-wise maximum with the message and increments its own entry, ending at [1, 1].
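A minimal runnable sketch of the update rules behind this example (the two-process setup, function names, and event sequence are illustrative assumptions):

```python
# Vector clocks for a fixed set of processes (here: P1 = index 0, P2 = index 1).

def tick(clock: list[int], me: int) -> list[int]:
    """Local or send event: increment this process's own entry."""
    clock = clock.copy()
    clock[me] += 1
    return clock

def receive(clock: list[int], msg: list[int], me: int) -> list[int]:
    """Receive event: component-wise max with the message, then tick."""
    return tick([max(a, b) for a, b in zip(clock, msg)], me)

def concurrent(a: list[int], b: list[int]) -> bool:
    """A ∥ B: neither vector dominates the other."""
    return any(x > y for x, y in zip(a, b)) and any(y > x for x, y in zip(a, b))

p1 = tick([0, 0], me=0)            # P1's send event -> [1, 0]
p2 = receive([0, 0], p1, me=1)     # P2's receive    -> [1, 1]
print(p1, p2, concurrent(p1, p2))  # [1, 0] [1, 1] False (causally ordered)
```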
Key Takeaways:
Vector clocks provide better causality tracking but require more memory and computation
Q5. Comparison of Lamport’s, Ricart-Agrawala’s, and Maekawa’s Non-Token-Based Mutual Exclusion Algorithms
| Feature | Lamport’s Algorithm | Ricart-Agrawala’s Algorithm | Maekawa’s Algorithm |
| --- | --- | --- | --- |
| Messages per Request | 3(N−1) messages. | 2(N−1) messages. | Approximately 3√N messages. |
| Reply Mechanism | Replies are sent based on timestamp priority; a reply is deferred while another process is in the CS. | If a process is not in the CS, it replies immediately; otherwise, it delays the reply. | Replies come only from the quorum group, reducing message overhead. |
| Scalability | Low – requires communication with all processes. | Moderate – still needs global communication, but with fewer messages. | High – requires communication with only a subset, making it more scalable. |
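The √N term comes from the quorum size. As a concrete illustration, here is a hedged sketch of the classic grid-quorum construction often used with Maekawa’s algorithm (assumptions for illustration: N is a perfect square and processes are laid out row-major in a √N × √N grid):

```python
import math

# Grid quorums: a process's quorum is its grid row plus its grid column,
# so |quorum| = 2*sqrt(N) - 1 and any two quorums always intersect.
def grid_quorum(pid: int, n: int) -> set[int]:
    k = math.isqrt(n)
    assert k * k == n, "this illustration assumes N is a perfect square"
    row, col = divmod(pid, k)
    return {row * k + c for c in range(k)} | {r * k + col for r in range(k)}

# With N = 16, each quorum has 2*4 - 1 = 7 members, and the quorums of any
# two processes share at least one member that can arbitrate between them.
q5, q10 = grid_quorum(5, 16), grid_quorum(10, 16)
print(len(q5), q5 & q10)  # 7 and a non-empty intersection
```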
Conclusion:
Maekawa’s Algorithm is the best choice for large distributed systems due to its lower message complexity.
Need for Election Algorithms:
1. No Shared Memory:
o In distributed systems, each computer (or node) has its own memory.
2. Need for a Coordinator:
o This coordinator helps organize and manage activities among all nodes.
3. Equal Nodes:
o All nodes in the system are equal (uniform); any one can act as a coordinator.
4. Handling Failures:
o If the current coordinator fails or crashes, the system must elect a new one.
5. Selection Criterion:
o The chosen node is usually the one with the highest priority or ID.
6. Ensures Continuity:
o Ensures that system functions (like clock sync, task assignment) continue smoothly.
Summary:
Election algorithms help distributed systems choose a leader (coordinator) when needed, especially
after failures, ensuring reliability, order, and coordination among independent nodes.
Logical Clocks:
What usually matters is that processes agree on the order in which events occur, rather than the real time at which they occurred.
Computers generally maintain logical time using interrupts to update a software clock; the more interrupts, the higher the overhead.
The most common logical clock synchronization scheme for distributed systems (D.S.) is Lamport’s algorithm.
It is used in situations where ordering is important, not time.
Happened-before (a → b):
If a and b are events in the same process and a occurs before b, then a → b is true.
If a is the sending of a message and b is the receipt of that same message, then a → b is true.
If a and b occur on different processes that do not exchange messages, then neither a → b nor b → a is true; they are concurrent events.
In the accompanying figure, unsynchronized local clocks violate this ordering: e → h but 5 > 2, and f → k but 6 > 2.
Lamport's Algorithm
Each message carries a timestamp from the sender’s clock. When a message arrives, if the receiver’s clock is behind the timestamp, the receiver sets its clock to timestamp + 1.
The clock must be advanced between any two events in the same process:
time = time + 1;
timestamp = time;
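A minimal runnable sketch of both rules (the class and method names are illustrative; the notes give only the update rules):

```python
# Lamport logical clock: a single integer counter per process.

class LamportClock:
    def __init__(self) -> None:
        self.time = 0

    def local_event(self) -> int:
        """Advance the clock between any two events in the same process."""
        self.time += 1
        return self.time

    def send(self) -> int:
        """Sending is an event; the message carries the resulting timestamp."""
        return self.local_event()

    def receive(self, timestamp: int) -> int:
        """On arrival, jump past the sender's timestamp if we are behind."""
        self.time = max(self.time, timestamp) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t = p1.send()         # P1's clock -> 1; the message carries timestamp 1
print(p2.receive(t))  # P2's clock -> max(0, 1) + 1 = 2
```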
🌳 Concept Overview (Raymond’s tree-based token algorithm):
Only one process holds the token (required to enter the critical section).
Each process maintains:
o A Holder (H): the node from which it expects to get the token.
o A request queue (req-que) of pending requests.
Passing requests and the token:
o If a process hasn’t already forwarded a request, it sends a REQUEST to its own holder (up the tree).
o When the token holder finishes, it sends the token to the process at the front of its request queue.
o Before entering, a process removes its own name from the queue and starts executing the critical section.
🔍 Diagram Breakdown:
Request chain: P3 → P2 → P1
Each process updates its req-que to reflect who made the request:
o req-que of P3 = P3
o req-que of P2 = P3
o req-que of P1 = P2
Now P3 holds the token and can enter the critical section.
The Holder (H) pointer of every node gets updated to reflect the new path to the token:
o H1 = P2
o H2 = P3
o H3 = P3
o H4 = P2
o H5 = P3
o H6 = P3
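To tie the diagram together, here is a compact simulation of this request/token flow. It is a hedged sketch under strong assumptions (single-threaded, no failures, one request at a time); real implementations are message-driven and concurrent.

```python
# Simplified simulation of Raymond's tree-based token algorithm.

class Node:
    def __init__(self, name: str, holder: "Node | None" = None) -> None:
        self.name = name
        self.holder = holder             # None means "this node holds the token"
        self.req_que: list["Node"] = []  # pending requests (may include self)

    def request(self, requester: "Node") -> None:
        """A REQUEST travels up the holder chain toward the token."""
        first = not self.req_que         # no request forwarded yet
        self.req_que.append(requester)
        if self.holder is None:
            self.release()               # token is here and idle: hand it out
        elif first:
            self.holder.request(self)    # forward a single REQUEST up the tree

    def release(self) -> None:
        """Send the token to the front of the request queue."""
        nxt = self.req_que.pop(0)
        if nxt is self:
            print(f"{self.name} enters the critical section")
        else:
            self.holder = nxt            # holder pointer flips toward the token
            nxt.holder = None            # nxt now holds the token
            nxt.release()                # nxt serves its own queue front

# The chain from the diagram: P1 holds the token; P3 requests via P2.
p1 = Node("P1")                 # holder=None: P1 starts with the token
p2 = Node("P2", holder=p1)
p3 = Node("P3", holder=p2)
p3.request(p3)                  # req-que of P3 = P3, of P2 = P3, of P1 = P2;
                                # then the token moves P1 -> P2 -> P3.
print(p1.holder.name, p2.holder.name)  # P2 P3 (matches H1 = P2, H2 = P3)
```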