
IAT-1 Answer Key for CS3551 – Distributed Computing [IT Department]

PART A (10 * 2 = 20 Marks)

1. *Distributed system:* A collection of independent computers appearing as a single system to users.

2. *Message passing vs Shared memory:*

- *Message passing:* Communication via network messages.

- *Shared memory:* Communication through a common memory space.

3. *Design issues in distributed systems:* Scalability, fault tolerance, consistency, security.

4. *Synchronous vs Asynchronous execution:*

- *Synchronous:* Processes execute in lock-step, with known bounds on step times and message delays.

- *Asynchronous:* No bounds on process speeds or message delays are assumed.

5. *Scalar time:* A simple counter to order events in a distributed system.

6. *Clock skew:* The difference in time between clocks on different systems.

7. *Logical clock:* A mechanism to order events without physical clocks (e.g., Lamport timestamps); a minimal sketch appears at the end of Part A.

8. *Global state:* The collective state of processes and communication channels at a specific
time.

9. *Deadlock:* A situation where each process waits indefinitely for a resource held by another, so none of them can proceed.

10. *Response time:* The duration a system takes to react to a given input or request, measured from request submission to response delivery.
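
For questions 5 and 7 above, the following is a minimal sketch of a Lamport scalar clock (an illustration, not part of the original key); the class and method names are assumptions.

```python
# Minimal sketch of a Lamport scalar clock (illustrative names).
class LamportClock:
    def __init__(self):
        self.time = 0                              # scalar counter

    def tick(self):
        """Internal event or send: advance the local counter."""
        self.time += 1
        return self.time                           # timestamp attached to an outgoing message

    def on_receive(self, msg_time):
        """Receive rule: jump past the sender's timestamp, then tick."""
        self.time = max(self.time, msg_time) + 1
        return self.time

# Usage: P1 sends to P2.
p1, p2 = LamportClock(), LamportClock()
ts = p1.tick()          # event on P1, timestamp 1
p2.on_receive(ts)       # P2's clock becomes 2, ordering the receive after the send
```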
Part-B
11. a. Difference between message passing and shared memory process communication:

*Message Passing:*
- Processes communicate by sending and receiving messages over a network.
- Suitable for distributed systems where processes are on different machines.
- No shared address space; communication occurs through explicit messages.
- Example: RPC (Remote Procedure Call), MPI (Message Passing Interface).

*Shared Memory:*
- Processes communicate by accessing a shared memory space.
- Suitable for tightly coupled systems like multiprocessors or parallel systems.
- Direct memory access for communication; synchronization is needed (e.g., semaphores, mutexes).
- Example: POSIX shared memory, memory-mapped files.
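
As an illustration of the contrast above (not part of the original answer), here is a minimal Python sketch using the standard multiprocessing module; the function names producer, consumer, and incrementer are assumptions for the example.

```python
# Message passing vs. shared memory with Python's multiprocessing module.
from multiprocessing import Process, Queue, Value, Lock

def producer(q):
    q.put("hello")          # message passing: explicit send over a channel

def consumer(q):
    print(q.get())          # explicit receive; no shared address space needed

def incrementer(counter, lock):
    with lock:              # shared memory: direct access, so synchronization is required
        counter.value += 1

if __name__ == "__main__":
    # Message passing
    q = Queue()
    p1, p2 = Process(target=producer, args=(q,)), Process(target=consumer, args=(q,))
    p1.start(); p2.start(); p1.join(); p2.join()

    # Shared memory
    counter, lock = Value("i", 0), Lock()
    workers = [Process(target=incrementer, args=(counter, lock)) for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()
    print(counter.value)    # 4
```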

11. *b. Design issues and challenges in distributed systems:*

- *Scalability:* The system should perform well as the number of nodes and users grows.
- *Fault Tolerance:* The system should continue operating despite node or network failures.
- *Consistency:* Replicated data and shared state must be kept coherent across nodes.
- *Concurrency:* Simultaneous access to shared resources must be coordinated.
- *Transparency:* Distribution (location, replication, failures) should be hidden from users.
- *Security:* Communication and resources must be protected against unauthorized access.
- *Heterogeneity:* The system must work across different hardware, operating systems, and networks.

12. a. Explain how a parallel system differs from a distributed system:

1. *Definition:*
   - *Parallel System:* Multiple processors work simultaneously on different parts of a problem to achieve faster computation. The processors typically share memory and are located in the same physical system.
   - *Distributed System:* Multiple independent computers work together to solve a problem. They are connected via a network, do not share memory, and communicate through message passing.
2. *Memory:* Parallel systems usually share memory; in distributed systems each node has its own memory.
3. *Communication:* Parallel systems communicate through shared memory; distributed systems communicate by message passing over a network.
4. *Geographical Distribution:* Parallel systems are confined to a single machine or site; distributed systems may span multiple locations.
5. *Fault Tolerance:* A failure in a parallel system typically halts the whole computation; distributed systems can tolerate individual node failures.
6. *Example:*
   - *Parallel System:* Multi-core processors within a single computer.
   - *Distributed System:* Cloud services like Google Cloud or distributed databases like Cassandra.

*b. Explain in detail about the applications of Distributed Computing:*

1. Mobile Systems: distributed computing supports mobile devices that communicate over wireless networks while changing location.

2. Pervasive Computing - Intranet: everyday devices and intranet services cooperate transparently across an organization's network.

3. Multimedia Systems - Webcasting: audio and video content is distributed to many receivers, as in webcasting.

13. a. Explain the Chandy-Lamport Algorithm in detail:

The *Chandy-Lamport Algorithm* is a distributed algorithm used to record a *global state* of a distributed system. It is designed to capture a consistent snapshot of the system even while processes are executing concurrently. It is particularly useful for detecting properties such as deadlocks, where the system's global state must be analyzed.

### Key Concepts:

1. *Global State:* The collection of the local states of all processes and the messages in transit in a distributed system.
2. *Consistent Snapshot:* A snapshot that reflects a state of the system that could have occurred in a real execution.

### Steps

1. *Initiation:* The initiating process records its own local state and sends a marker message along all of its outgoing channels.

2. *Marker Receiving Rule:* When a process receives a marker for the first time, it records its local state, marks the channel on which the marker arrived as empty, and sends markers on all of its outgoing channels.

3. *Recording In-Transit Messages:* After recording its state, a process records every application message that arrives on its other incoming channels until a marker is received on each of them; these messages form the channel states.

4. *Completion:* The snapshot is complete when every process has recorded its local state and has received a marker on all of its incoming channels.

Properties of the Algorithm: the recorded snapshot is consistent, the algorithm does not interfere with the underlying computation, and it assumes FIFO channels.
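
The marker rules above can be summarised in the following hedged sketch of a single process; the SnapshotProcess class, the channel identifiers, and the send callback are assumptions for illustration, not the algorithm's canonical presentation.

```python
# Sketch of the Chandy-Lamport marker rules for one process (FIFO channels assumed).
MARKER = "MARKER"

class SnapshotProcess:
    def __init__(self, pid, in_channels, out_channels, send):
        self.pid = pid
        self.in_channels = in_channels        # ids of incoming channels
        self.out_channels = out_channels      # ids of outgoing channels
        self.send = send                      # send(channel_id, message) callback
        self.local_state = 0                  # stand-in for real application state
        self.recorded_state = None            # snapshot of the local state
        self.channel_state = {}               # in-channel -> recorded in-transit messages
        self.recording = set()                # in-channels still being recorded

    def initiate_snapshot(self):
        self._record_and_send_markers(exclude=None)

    def on_receive(self, channel, msg):
        if msg == MARKER:
            if self.recorded_state is None:
                # First marker: record local state, treat this channel as empty,
                # and propagate markers downstream.
                self._record_and_send_markers(exclude=channel)
                self.channel_state[channel] = []
            else:
                # Subsequent marker: stop recording this channel.
                self.recording.discard(channel)
        elif channel in self.recording:
            # Application message that was in transit when the snapshot started.
            self.channel_state.setdefault(channel, []).append(msg)

    def _record_and_send_markers(self, exclude):
        self.recorded_state = self.local_state
        self.recording = {c for c in self.in_channels if c != exclude}
        for c in self.out_channels:
            self.send(c, MARKER)
```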

13. *b. Elucidate on the Total Causal Order in Distributed Systems with a Neat Diagram:*

In distributed systems, *causal ordering* ensures that related events are processed in the order of
their causality. The *Total Causal Order* extends this concept by guaranteeing that all events in
the system follow a single global order, ensuring that all processes see the events in the same
sequence.

### Key Concepts:

1. *Happens-before Relation (→):* Event a happens-before event b if they occur on the same process with a first, or if a is the sending of a message and b is its receipt. Events related by → are *causal events*.

2. *Causal Ordering:* Messages are delivered in an order consistent with the happens-before relation.

3. *Total Causal Order:* In addition to respecting causality, all processes deliver every message in the same global sequence.

- *Example:* Consider two processes P1 and P2. If P1 sends a message to P2 and this message triggers a subsequent action, all processes in the system should see the initial event before the triggered action.

### Achieving Total Causal Order

- *Vector Clocks:* Each process maintains a vector of counters, one entry per process, which is attached to outgoing messages and used to decide when a received message may be delivered.

### Example with Diagram:

Consider two processes, P1 and P2, and three events *a*, *b*, and *c*, related as a → b → c (for instance, a is a send on P1, b its receipt on P2, and c a later event on P2).

### Diagram:

(Space-time diagram: one horizontal line per process, with a message arrow from event a on P1 to event b on P2, followed by event c on P2.)

### Applications of Total Causal Order:

- *Distributed Databases*
- *Multiplayer Gaming*
- *Messaging Systems*
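
The vector-clock mechanism mentioned above can be outlined as follows; this is an illustrative sketch (the VectorClock class and happened_before helper are not from the key), following the standard update rules.

```python
# Minimal vector-clock sketch for tracking causality between events.
class VectorClock:
    def __init__(self, pid, n):
        self.pid = pid                    # this process's index
        self.clock = [0] * n              # one counter per process

    def on_send(self):
        self.clock[self.pid] += 1         # tick own component on a send event
        return list(self.clock)           # timestamp piggybacked on the message

    def on_receive(self, msg_clock):
        # Component-wise maximum with the incoming timestamp, then tick own component.
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
        self.clock[self.pid] += 1

def happened_before(vc1, vc2):
    """True if the event stamped vc1 causally precedes the event stamped vc2."""
    return all(a <= b for a, b in zip(vc1, vc2)) and vc1 != vc2

# Usage: event a (send on P1) happens-before event b (receive on P2).
p1, p2 = VectorClock(0, 2), VectorClock(1, 2)
ts = p1.on_send()                         # event a
p2.on_receive(ts)                         # event b
print(happened_before(ts, p2.clock))      # True
```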

14. a. What are the four different types of ordering the messages?
In distributed systems, message ordering is crucial to maintain consistency and coordination
between processes. The four main types of message ordering are:

1. *FIFO (First-In-First-Out) Ordering:* Messages from the same sender are delivered in the order in which they were sent.

2. *Causal Ordering:* A message is delivered only after all messages that causally precede it have been delivered.

3. *Total Ordering:* All processes deliver all messages in the same sequence.

4. *Unordered (No specific order):* No delivery constraint is imposed; messages may be delivered in any order.
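
As a hedged illustration of FIFO ordering from the list above, the sketch below enforces per-sender delivery order using sequence numbers and a hold-back buffer; the FifoReceiver class and the deliver_fn callback are assumptions for the example.

```python
# FIFO ordering via per-sender sequence numbers and a hold-back buffer.
class FifoReceiver:
    def __init__(self):
        self.expected = {}                # sender -> next expected sequence number
        self.buffer = {}                  # sender -> {seq: message} held back

    def deliver(self, sender, seq, msg, deliver_fn):
        nxt = self.expected.get(sender, 0)
        if seq == nxt:
            deliver_fn(sender, msg)
            self.expected[sender] = nxt + 1
            # Flush any buffered messages that are now in order.
            held = self.buffer.get(sender, {})
            while self.expected[sender] in held:
                deliver_fn(sender, held.pop(self.expected[sender]))
                self.expected[sender] += 1
        else:
            self.buffer.setdefault(sender, {})[seq] = msg

# Usage: a message arriving out of order is held back until its predecessor arrives.
r, log = FifoReceiver(), []
r.deliver("P1", 1, "second", lambda s, m: log.append(m))   # held back
r.deliver("P1", 0, "first",  lambda s, m: log.append(m))   # delivers both in order
print(log)                                                 # ['first', 'second']
```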

14*b. Explain the types of Group Communication used in Distributed Systems:*

Group communication refers to the mechanisms and protocols that allow multiple processes in a
distributed system to communicate efficiently and in coordination. There are several types of
group communication, each suited to different requirements and use cases:

1. *Unicast Communication:* A message is sent from one process to exactly one other process.

2. *Broadcast Communication:* A message is sent from one process to all processes in the system.

3. *Multicast Communication:* A message is sent from one process to a defined group of processes.

4. *Anycast Communication:* A message is delivered to any one member of a group, typically the nearest or least loaded.

5. *Reliable Group Communication:* Delivery is guaranteed to all correct group members despite message loss or failures.

6. *Ordered Group Communication:* Messages are delivered to group members according to an agreed ordering discipline.

### Types of Ordered Group Communication:

1. *FIFO Ordered Group Communication:* Messages from the same sender are delivered in the order sent.

2. *Causally Ordered Group Communication:* Delivery respects the causal dependencies between messages.

3. *Totally Ordered Group Communication:* All members deliver all messages in the same order.

15. a. Explain about Ricart-Agrawala's Algorithm with an Example:


The *Ricart-Agrawala Algorithm* is a distributed mutual exclusion algorithm that ensures
processes in a distributed system can access a shared resource without conflict. It is a *request-
reply* based algorithm that requires *2(N-1)* messages for N processes to achieve mutual
exclusion.

### Steps:

1. *Requesting Critical Section:* - When a process wants to enter the critical section (CS), it
sends a request message (with a timestamp) to all other processes.

2. *Receiving Requests:* - When a process receives a request, it replies immediately if it is not in the critical section and does not wish to enter. If it is in, or wants to enter, the critical section, it delays the reply until it exits the CS.

3. *Entering Critical Section:* - A process enters the CS when it has received replies from all
other processes.

4. *Releasing the Critical Section:* - After exiting the CS, the process sends all delayed replies
to pending requests.
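
A hedged sketch of one process's state machine for the steps above; the send callback, the message tuples, and the peer set are illustrative assumptions rather than a canonical implementation.

```python
# Ricart-Agrawala: one process's request/reply bookkeeping.
class RicartAgrawala:
    def __init__(self, pid, peers, send):
        self.pid = pid
        self.peers = peers                  # ids of all other processes
        self.send = send                    # send(dest, message) callback
        self.clock = 0                      # Lamport clock for request timestamps
        self.requesting = False
        self.request_ts = None
        self.replies = set()
        self.deferred = []                  # requests answered only after exiting the CS

    def request_cs(self):
        self.clock += 1
        self.requesting, self.request_ts, self.replies = True, self.clock, set()
        for p in self.peers:
            self.send(p, ("REQUEST", self.request_ts, self.pid))

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        if self.requesting and (self.request_ts, self.pid) < (ts, sender):
            self.deferred.append(sender)          # our request has priority: defer the reply
        else:
            self.send(sender, ("REPLY", self.pid))

    def on_reply(self, sender):
        self.replies.add(sender)
        return self.replies == set(self.peers)    # True once the CS may be entered

    def release_cs(self):
        self.requesting = False
        for p in self.deferred:                   # answer all deferred requests on exit
            self.send(p, ("REPLY", self.pid))
        self.deferred.clear()
```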

*b. Analyze Suzuki-Kasami's Broadcast Algorithm for Mutual Exclusion in Distributed Systems:*

The *Suzuki-Kasami Algorithm* is a token-based mutual exclusion algorithm that reduces the message complexity to *N messages* per critical-section entry for N processes. A single *token* is circulated in the system, granting the right to enter the critical section.

1. *Token-Based Approach:* - Mutual exclusion is guaranteed by a unique token. Only the process holding the token can enter the critical section.

2. *Requesting the Token:*- If a process wants to enter the critical section and does not have the
token, it broadcasts a request message to all other processes. It enters the CS when it receives the
token.

3. *Token Passing:* - When the process exits the critical section, it passes the token to the next
process that has requested it.

4. *Request Queue and Sequence Numbers:* - Each process maintains a queue of requests and a
sequence number for each process to track the order of requests.
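
A hedged sketch of the token and per-process data structures described above; the broadcast/send callbacks, the assumption that process 0 initially holds the token, and the omission of an explicit in-CS flag are simplifications for illustration.

```python
# Suzuki-Kasami: request numbers (RN), token with last-satisfied numbers (LN), and a queue.
from collections import deque

class Token:
    def __init__(self, n):
        self.queue = deque()        # pending requesters
        self.last = [0] * n         # LN[i]: sequence number of i's last satisfied request

class SuzukiKasami:
    def __init__(self, pid, n, broadcast, send):
        self.pid = pid
        self.n = n
        self.rn = [0] * n           # RN[i]: highest request number seen from process i
        self.token = Token(n) if pid == 0 else None   # assume process 0 starts with the token
        self.broadcast = broadcast  # broadcast(message) to all other processes
        self.send = send            # send(dest, token)

    def request_cs(self):
        if self.token is None:
            self.rn[self.pid] += 1
            self.broadcast(("REQUEST", self.pid, self.rn[self.pid]))
        # The CS may be entered as soon as self.token is not None.

    def on_request(self, sender, seq):
        self.rn[sender] = max(self.rn[sender], seq)
        # Pass the token if we hold it idle and the request is outstanding.
        if self.token is not None and self.rn[sender] == self.token.last[sender] + 1:
            token, self.token = self.token, None
            self.send(sender, token)

    def release_cs(self):
        self.token.last[self.pid] = self.rn[self.pid]
        # Append outstanding requesters to the token queue, then pass the token on.
        for i in range(self.n):
            if i != self.pid and self.rn[i] == self.token.last[i] + 1 and i not in self.token.queue:
                self.token.queue.append(i)
        if self.token.queue:
            nxt = self.token.queue.popleft()
            token, self.token = self.token, None
            self.send(nxt, token)
```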

Part C
16.*(a) Types of Failures in Distributed Systems:*

1. *Crash Failure*: A process halts and does nothing thereafter.

2. *Omission Failure*: A process or channel fails to send or receive messages it should have handled.

3. *Timing Failure*: A response arrives outside the specified time interval.

4. *Response Failure*: A process returns an incorrect value or takes an incorrect state transition.

5. *Byzantine Failure*: A process behaves arbitrarily or maliciously, possibly sending conflicting information to different processes.

16.*(b) Distributed Communication and Global State:*

*i. Primitives for Distributed Communication* (8 Marks):

1. *Send*: Used to transmit messages from one process to another.

2. *Receive*: Enables a process to accept incoming messages from others.

3. *Remote Procedure Call (RPC)*: Allows a program to execute a procedure on a remote server
as if it were local.

4. *Remote Method Invocation (RMI)*: Facilitates invocation of methods on objects located remotely.

5. *Multicast*: Enables sending a message from one process to a selected group of processes.

6. *Broadcast*: Sends a message from one process to all other processes in the system.

7. *Socket Communication*: Provides a low-level communication channel between processes.

8. *Publish/Subscribe*: Processes publish messages to a channel, and subscribers to the channel receive updates.
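
A minimal sketch of the blocking send and receive primitives (items 1, 2 and 7 above) using TCP sockets; the host, port, and payloads are illustrative assumptions.

```python
# Blocking send/receive over a TCP socket between two local endpoints.
import socket, threading, time

HOST, PORT = "127.0.0.1", 50007

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)            # blocking receive
            conn.sendall(b"ack: " + data)     # reply to the sender

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b"request")                 # blocking send
        print(s.recv(1024).decode())          # wait for the reply

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)                               # crude wait for the server to start listening
client()
t.join()
```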

*ii. Global State of Distributed Systems* (7 Marks):

The global state in distributed systems represents the combined states of all processes and
communication channels at a specific point. Due to the lack of global time, algorithms like the
*Chandy-Lamport snapshot* are used to capture a consistent global state, ensuring that
concurrent operations do not interfere. Capturing the global state is crucial for tasks like
deadlock detection, distributed debugging, and ensuring fault tolerance across distributed
systems.
