PDC 50 Marks Detailed Notes Final

The document outlines key topics in Parallel and Distributed Computing (PDC), including synchronous and asynchronous execution, models of distributed computations, causality, graph algorithms, message ordering, coordination algorithms, consistency and replication, global state recording, self-stabilizing systems, and fault-tolerant message-passing systems. Each topic discusses essential concepts, algorithms, and applications relevant to designing and managing distributed systems. The content emphasizes the importance of reliability, consistency, and coordination in distributed computing environments.


Parallel and Distributed Computing (PDC) - 50 Marks Syllabus Notes

Topic 1: Definition, Synchronous and Asynchronous Execution

Synchronous Execution:
- All processes proceed in lockstep, driven by a shared global clock.
- Events are tightly coordinated.
- Easy to reason about and simulate.
- Example: Time-division multiplexing in hardware.

Asynchronous Execution:
- No global clock; processes execute at independent speeds.
- Communication delays and execution speeds can vary.
- Realistic model for distributed systems (e.g., Internet, cloud).
- More complex to analyze.

Summary: Synchronous = predictable but unrealistic. Asynchronous = realistic but harder to manage.

Topic 2: Model of Distributed Computations

Purpose:
- To model and reason about distributed executions and communication.

Models:
1. **Processes (nodes):** Perform computation.
2. **Events:** Local actions within a process.
3. **Channels (edges):** Represent message-passing links.

Key Concepts:
- **Happened-before (→) relation:** Defines a causal ordering of events.
- **Partial ordering of events:** Not all events can be totally ordered.
- **Communication model:** Reliable/unreliable, synchronous/asynchronous.

Helps to design correct distributed algorithms by modeling behavior and interaction.
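The happened-before relation above can be sketched as reachability in the event graph, whose edges are process order plus send→receive pairs. The event names and edges below are hypothetical, chosen only to illustrate the idea:

```python
def happened_before(edges, a, b):
    """True if there is a causal path (chain of edges) from event a to b."""
    seen, stack = set(), [a]
    while stack:
        e = stack.pop()
        for nxt in edges.get(e, []):
            if nxt == b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# P1 executes e11 then e12; e12 sends a message received at P2's event e22.
edges = {"e11": ["e12"], "e12": ["e22"], "e21": ["e22"]}
happened_before(edges, "e11", "e22")   # True: process order, then the message
happened_before(edges, "e21", "e12")   # False: the two events are concurrent
```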



Topic 3: Causality and Time in Distributed Systems

Causality:
- Events in distributed systems may affect each other.
- If A causes B, then A must be observed before B.

Logical Clocks:
1. **Lamport Clock:**
- Scalar values.
- Ensures that if A → B, then LC(A) < LC(B).
- Doesn't capture concurrency: LC(A) < LC(B) does not imply A → B.

2. **Vector Clock:**
- Vector of timestamps.
- Captures causality and concurrency.
- VC(A) < VC(B) if and only if A happened before B; incomparable vectors mean the events are concurrent.

Useful for ordering events, debugging, and building consistent systems.
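As a sketch of how the two clock types update (the class and method names are illustrative, not a standard API):

```python
class LamportClock:
    """Scalar clock: if A -> B then LC(A) < LC(B), but not conversely."""
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time            # timestamp piggybacked on the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time


class VectorClock:
    """One component per process; captures both causality and concurrency."""
    def __init__(self, n, pid):
        self.v = [0] * n
        self.pid = pid

    def send(self):
        self.v[self.pid] += 1
        return list(self.v)

    def receive(self, msg_vec):
        self.v = [max(a, b) for a, b in zip(self.v, msg_vec)]
        self.v[self.pid] += 1
        return list(self.v)


def happened_before(va, vb):
    """VC(A) < VC(B): every component <=, and the vectors are not equal."""
    return all(a <= b for a, b in zip(va, vb)) and va != vb

p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
ts = p0.send()           # [1, 0]
rv = p1.receive(ts)      # [1, 1]: merge componentwise, then tick own entry
happened_before(ts, rv)  # True
```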



Topic 4: Graph Algorithms

Graphs in Distributed Systems:
- Nodes represent processes; edges represent communication links.

Common Algorithms:
1. **Spanning Tree Construction:**
- Builds minimal structure for message routing.
- Example: Breadth-First Search Tree.

2. **Shortest Path (Dijkstra's Algorithm):**
- Finds the shortest route between nodes.

3. **Minimum Spanning Tree (MST):**
- Used for efficient broadcasting (e.g., Prim's or Kruskal's Algorithm).

Applications:
- Routing protocols, multicast trees, network topology.

Distributed graph algorithms are designed to work using only local information.
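A centralized sketch of the BFS spanning-tree idea follows; a truly distributed version would flood "join" messages, with the first message to reach a node deciding its parent. The 4-node topology is hypothetical:

```python
from collections import deque

def bfs_spanning_tree(adj, root):
    """Breadth-first spanning tree: the first edge to reach a node wins."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u       # v joins the tree under u
                q.append(v)
    return parent                   # maps each node to its tree parent

# Hypothetical 4-node topology; node 0 initiates.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
tree = bfs_spanning_tree(adj, 0)   # {0: None, 1: 0, 2: 0, 3: 1}
```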

Topic 5: Message Ordering and Group Communication

Message Ordering Types:
1. **FIFO Ordering:** Messages from a sender arrive in order.
2. **Causal Ordering:** Preserves causal dependencies between messages.
3. **Total Ordering:** All messages delivered in the same order to all processes.

Group Communication:
- Multiple receivers in a group.
- Requires consistency and reliability.

Reliable Multicast Protocols:
- Ensure delivery and ordering.
- Used in replicated systems and groupware.

Essential for building reliable, consistent distributed systems (e.g., databases, collaborative tools).
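FIFO ordering is commonly implemented with per-sender sequence numbers and a hold-back buffer; a minimal sketch, with illustrative names:

```python
class FifoReceiver:
    """Delivers each sender's messages in send order, buffering early arrivals."""
    def __init__(self):
        self.next_seq = {}     # sender -> next expected sequence number
        self.buffer = {}       # (sender, seq) -> held-back message
        self.delivered = []    # messages handed to the application, in order

    def on_message(self, sender, seq, msg):
        expected = self.next_seq.get(sender, 0)
        if seq == expected:
            self.delivered.append((sender, msg))
            self.next_seq[sender] = expected + 1
            # flush any buffered messages that are now in order
            while (sender, self.next_seq[sender]) in self.buffer:
                held = self.buffer.pop((sender, self.next_seq[sender]))
                self.delivered.append((sender, held))
                self.next_seq[sender] += 1
        elif seq > expected:
            self.buffer[(sender, seq)] = msg   # arrived early: hold back

r = FifoReceiver()
r.on_message("P1", 1, "b")   # held back: seq 0 has not arrived yet
r.on_message("P1", 0, "a")   # delivers "a", then flushes "b"
```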

Topic 6: Coordination Algorithms (Book 5)

Need for Coordination:
- Processes must access shared resources in an orderly fashion.

Mutual Exclusion Algorithms:
1. **Ricart-Agrawala:** Message-based; requests carry Lamport timestamps, and a process enters the critical section once all other processes have replied.
2. **Token Ring:** A token circulates; whoever holds it accesses the resource.

Election Algorithms:
1. **Bully Algorithm:** Highest-ID process becomes coordinator.
2. **Ring Algorithm:** Uses a logical ring for leader selection.

Barrier Synchronization:
- Ensures all processes reach a certain point before continuing.

These algorithms ensure fairness, avoid deadlocks, and improve reliability.
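The Bully election can be sketched as follows, assuming a known id set and an `alive` set standing in for failure detection (both are illustrative simplifications of the message exchange):

```python
def bully_election(initiator, ids, alive):
    """Return the coordinator: the highest-id live process."""
    higher = [p for p in ids if p > initiator and p in alive]
    if not higher:
        return initiator            # nobody higher answered: initiator wins
    # a live higher-id process "bullies" the initiator and restarts the election
    return bully_election(max(alive), ids, alive)

ids = [1, 2, 3, 4, 5]
bully_election(2, ids, alive={1, 2, 4})   # 4: processes 3 and 5 have crashed
```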



Topic 7: Coordination Algorithms (Book 2)

Centralized vs Distributed Coordination:
- Centralized: One coordinator. Simple but a single point of failure.
- Distributed: All nodes participate. More resilient.

Timeout-Based Coordination:
- Detect failures using timeouts.

Quorum-Based Access:
- Read/write allowed only if a quorum (majority) agrees.

Middleware Support:
- Tools like ZooKeeper help with coordination, leader election, etc.

Used in cloud systems, resource scheduling, and data access control.
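The quorum rule reduces to simple arithmetic: with N replicas, a read quorum R and write quorum W must overlap (R + W > N), and any two writes must overlap (2W > N) so writes can be ordered. A sketch:

```python
def quorums_valid(n, r, w):
    """Read/write quorums must intersect; so must any two write quorums."""
    return r + w > n and 2 * w > n

quorums_valid(5, 3, 3)   # True: every read intersects the latest write
quorums_valid(5, 2, 3)   # False: a read of 2 replicas can miss a write of 3
```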



Topic 8: Consistency and Replication

Replication:
- Multiple copies of data to improve reliability and performance.

Consistency Models:
1. **Strong:** Latest write is always seen.
2. **Sequential:** Operations appear in the same order to all.
3. **Causal:** Related updates seen in order.
4. **Eventual:** All replicas eventually become consistent.

Replication Techniques:
- **Primary-Backup:** One node handles all writes.
- **Multi-Master:** Many nodes accept updates; conflicts handled separately.
- **Quorum:** Requires overlap between read/write operations.

CAP Theorem:
- A distributed system cannot simultaneously guarantee all three of Consistency, Availability, and Partition tolerance; when a partition occurs, it must trade one of the first two.

Balance between performance and correctness.
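As one sketch of how eventual consistency can be reached, replicas may converge by last-writer-wins merging on per-key timestamps. The `merge` function below is illustrative, not a specific protocol; ties on timestamp keep the first replica's value:

```python
def merge(replica_a, replica_b):
    """Last-writer-wins: per key, keep the (timestamp, value) with higher ts."""
    merged = dict(replica_a)
    for key, (ts, val) in replica_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, val)
    return merged

a = {"x": (1, "old"), "y": (4, "yes")}
b = {"x": (3, "new")}
state = merge(a, b)   # {"x": (3, "new"), "y": (4, "yes")}
```

Repeatedly merging pairs of replicas this way drives all copies to the same state, which is the "eventually consistent" guarantee.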



Topic 9: Global State and Snapshot Recording Algorithms

Goal:
- Capture a consistent state of all processes and channels.

Chandy-Lamport Snapshot Algorithm:
1. The initiator records its local state and sends a marker on every outgoing channel.
2. On first receiving a marker, a process records its own state and starts recording its incoming channels; a channel's state is the messages received after recording began but before that channel's marker.
3. The result is a consistent cut: no message is missed or duplicated.

Applications:
- Deadlock detection, checkpointing, consistent recovery.

Non-intrusive and doesn't halt system execution.
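A single-process sketch of the marker rules (the Process class is illustrative, and sending markers on to peers is omitted):

```python
class Process:
    """One process's view of Chandy-Lamport marker handling."""
    def __init__(self, pid, in_channels):
        self.pid = pid
        self.state = 0                     # local application state
        self.recorded_state = None
        self.recording = {}                # channel -> in-flight messages
        self.marker_seen = {c: False for c in in_channels}

    def on_marker(self, channel):
        if self.recorded_state is None:    # first marker: take the snapshot
            self.recorded_state = self.state
            for c in self.marker_seen:
                self.recording[c] = []
        self.marker_seen[channel] = True   # this channel's state is now final

    def on_message(self, channel, msg):
        self.state += msg                  # toy application: sum the payloads
        if self.recorded_state is not None and not self.marker_seen[channel]:
            self.recording[channel].append(msg)   # in flight during snapshot

p = Process(1, ["c01", "c21"])
p.on_message("c01", 5)    # pre-snapshot application message
p.on_marker("c01")        # snapshot starts here: recorded_state == 5
p.on_message("c21", 3)    # in flight: recorded as channel state of c21
p.on_marker("c21")        # done: recording == {"c01": [], "c21": [3]}
```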



Topic 10: Self-Stabilizing Distributed System

Definition:
- A system that recovers to a legal state from any arbitrary condition.

Properties:
1. **Convergence:** Eventually reaches correct state.
2. **Closure:** Remains correct unless more faults occur.

Examples:
- Token ring stabilization, clock synchronization.

High fault tolerance, especially in sensor networks and P2P systems.
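Dijkstra's K-state token ring is the classic example behind "token ring stabilization": from any corrupted start state, the ring converges so that exactly one machine has an enabled move (the "token"). A minimal sketch, assuming a central daemon that fires one enabled machine at a time and K greater than the ring size:

```python
def enabled(x, i):
    """Machine i 'holds the token' when its move is enabled."""
    return x[0] == x[-1] if i == 0 else x[i] != x[i - 1]

def fire(x, i, k):
    if i == 0:
        x[0] = (x[0] + 1) % k      # distinguished machine advances the count
    else:
        x[i] = x[i - 1]            # the others copy their left neighbour

def stabilize(x, k):
    """Fire enabled machines until exactly one token remains (a legal state)."""
    steps = 0
    while sum(enabled(x, i) for i in range(len(x))) != 1:
        i = next(j for j in range(len(x)) if enabled(x, j))
        fire(x, i, k)
        steps += 1
    return steps

x = [3, 1, 4, 1]      # arbitrary, possibly corrupted, start state
stabilize(x, k=5)     # converges: afterwards exactly one machine is enabled
```

This exhibits both properties above: convergence (the loop terminates from any state) and closure (a legal state stays legal as the single token circulates).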



Topic 11: Fault-Tolerant Message-Passing Systems

Goal:
- Ensure system continues working correctly despite failures.

Types of Failures:
1. Process failure (crash, omission, Byzantine).
2. Communication failure (loss, delay, duplication).

Mechanisms:
- **Acknowledgments and Retransmissions.**
- **Replication and Checkpointing.**
- **Consensus Protocols** (Paxos, Raft).
- **Failure Detectors.**

Group Communication & Atomic Broadcast:
- Ensures reliable and ordered message delivery.

Essential for distributed databases, payment systems, and cloud services.
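Acknowledgment plus retransmission can be sketched as stop-and-wait over a lossy channel; the `lossy_channel` stand-in below models the network and is purely illustrative:

```python
import random

def send_reliable(msg, channel, max_retries=10):
    """Stop-and-wait: retransmit until an ack returns or retries run out."""
    for attempt in range(1, max_retries + 1):
        ack = channel(msg)                 # None models a lost msg or ack
        if ack == ("ack", msg):
            return attempt                 # number of transmissions used
    raise TimeoutError("receiver presumed failed")

random.seed(42)
def lossy_channel(msg):                    # drops about half of all traffic
    return ("ack", msg) if random.random() > 0.5 else None

send_reliable("commit", lossy_channel)     # succeeds within a few tries
```

Duplicate deliveries caused by retransmission are the reason receivers also need sequence numbers (as in FIFO ordering) to discard repeats.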
