Distributed Computing Module 1 Important Topics PYQs
For more notes visit
https://rtpnotes.vercel.app
1. Discuss about the transparency requirements of distributed system.
1. Access Transparency
2. Location Transparency
3. Concurrency Transparency
4. Replication Transparency
5. Failure Transparency
6. Mobility Transparency
7. Performance Transparency
8. Scaling Transparency
2. What do you mean by load balancing in a distributed environment?
Think of it like:
In tech terms:
Common Load Balancing Algorithms:
Types of Load Balancers:
3. What do you mean by a distributed system?
Key Features:
Why Use Distributed Systems?
4. What are the various features of distributed system?
5. List the Characteristics of Distributed System
6. Explain the advantages of distributed system.
7. Define causal precedence relation in distributed executions.
What is Causal Precedence?
Simple Example:
How Does It Work in Distributed Systems?
Logical vs. Physical Concurrency
Why Does This Matter?
Quick Recap:
8. Explain the design issues of a distributed system.
1. Communication
2. Managing Processes
3. Naming Things
4. Synchronization (Keeping Things in Sync)
5. Storing and Accessing Data
6. Consistency and Replication
7. Handling Failures (Fault Tolerance)
8. Security
9. Scalability and Modularity
9. Discuss about various primitives for distributed communication.
How Send() and Receive() Work:
Buffered vs. Unbuffered Communication:
Types of Communication Primitives
1. Synchronous vs. Asynchronous Communication
2. Blocking vs. Non-Blocking Communication
How Non-Blocking Communication Works (Handles & Waits)
Example Scenarios
10. Explain the applications of distributed computing.
11. Explain the models of communication networks.
How These Models Relate to Each Other:
12. Relate a computer system to a distributed system with the aid of neat sketches
What is a Computer System?
What is a Distributed System?
Based on Figure 1.1 – Structure of a Distributed System:
Based on Figure 1.2 – Software Architecture of Each Node:
1. Distributed Application
2. Middleware (Distributed Software)
3. Network Protocol Stack (Bottom Layers)
13. Discuss about the global state of distributed systems.
What is a Global State?
Why Record the Global State?
14. Compare logical and physical concurrency.
15. Which are the different versions of send and receive primitives for distributed
communication? Explain.
Send() and Receive()
Send()
Receive()
Buffering Options
Buffered Send
Unbuffered Send
Synchronous vs Asynchronous Primitives
Synchronous Communication
Asynchronous Communication
16. Explain the three different models of service provided by communication networks.
1. FIFO (First-In First-Out) Model
2. Non-FIFO Model
3. Causal Ordering Model
1. Access Transparency
What it means: Users shouldn’t have to worry about how or where they access resources.
Example: Whether you open a file from your computer or from Google Drive, it feels the
same—you just click and open it.
2. Location Transparency
What it means: You don’t need to know where a resource or service is physically located.
Example: When you visit a website, you don’t know (or care) which data center the server
is in; the website just works.
3. Concurrency Transparency
What it means: Multiple users can use the system at the same time without interfering with
each other.
Example: On Amazon, thousands of people can buy things at once, and no one's orders
get mixed up.
4. Replication Transparency
What it means: The system might have multiple copies (replicas) of data to improve speed
or reliability, but you only see one version.
Example: When you watch a YouTube video, it might come from a server near you, but you
don’t notice—it’s seamless.
5. Failure Transparency
What it means: If a part of the system crashes or fails, you shouldn’t notice any disruption.
Example: If one of Netflix’s servers goes down while you're watching a show, the system
switches to another server without interrupting your stream.
6. Mobility Transparency
What it means: You can move around and still access the system as if nothing changed.
Example: Using WhatsApp on your phone while traveling—you still get your messages no
matter where you are.
7. Performance Transparency
What it means: The system automatically adjusts to provide the best performance, and you
don’t need to manage it.
Example: Google Search feels fast even when millions of people are searching at the same
time because it balances the load across servers.
8. Scaling Transparency
What it means: The system can grow (add more resources) or shrink without affecting how
it works for users.
Example: Adding more servers to a cloud service like Dropbox doesn’t change how you
upload files.
Think of it like:
Imagine a busy restaurant with multiple waiters. If all customers are served by just one waiter,
that waiter gets overwhelmed while others stand idle. Load balancing is like a smart manager
who assigns tables evenly among all the waiters so service stays fast and efficient.
In tech terms:
Load balancing is the process of distributing incoming requests or workloads across multiple servers (nodes) so that no single machine is overloaded while others sit idle. This improves response time, throughput, and availability.
Common Load Balancing Algorithms:
Round Robin: Requests are assigned to the servers one by one, cycling through the list.
Least Connections: A new request goes to the server currently handling the fewest active connections.
IP Hash: The client's address decides which server handles its requests, so the same client keeps reaching the same server.
Types of Load Balancers:
Hardware Load Balancers: Dedicated physical appliances built for distributing traffic.
Software Load Balancers: Programs (often reverse proxies) running on ordinary servers.
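The Round Robin idea can be sketched in a few lines of Python. The server names here are made up purely for illustration:

```python
from itertools import cycle

# Hypothetical pool of servers (names are illustrative only).
servers = ["server-a", "server-b", "server-c"]

def round_robin(servers):
    """Yield servers one after another, cycling forever (Round Robin)."""
    return cycle(servers)

rr = round_robin(servers)

# Assign six incoming requests: each server gets every third request.
assignments = [next(rr) for _ in range(6)]
print(assignments)
# → ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```

Real load balancers add health checks and weighting on top of this basic rotation, but the core assignment loop is exactly this.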
Key Features:
No Shared Memory: Each computer (or node) has its own memory. They don't directly
share information but talk to each other by sending messages.
No Common Clock: There’s no universal clock keeping time for all the computers,
meaning they operate at their own pace.
Geographical Spread: These systems can be spread out globally (like Google’s servers
worldwide) or locally (like a cluster of servers in a data center).
Autonomy & Diversity: Each computer can run different software, have different speeds,
and even be used for different purposes, but they all collaborate.
Resource Sharing: Share data and tools that are too big or expensive to replicate
everywhere.
Reliability: If one computer fails, others can keep things running.
Scalability: Easily add more computers to handle more work.
Remote Access: Get data from faraway places, like accessing a cloud server.
Causal precedence tells us which events depend on each other in a distributed system (like
in a group chat with multiple people sending messages).
If Event A happens and causes Event B, we say A → B (A happens before B).
If Event A and Event B have nothing to do with each other, they are concurrent (they
happen separately).
Simple Example:
You send a message "Want to meet at 5?" (Event A), and your friend replies "Sure!" (Event B).
Here, Event B depends on Event A because your friend is replying to you. So, we say: A → B.
Now imagine: you send a message while, at the same moment, your friend posts a photo online.
These two events have nothing to do with each other, so they are concurrent (happen
independently).
In distributed systems, computers send messages to each other just like people do in a group
chat. These messages (or events) can be connected or independent.
In distributed systems (like cloud servers, online games, or databases), knowing which events
depend on each other is super important. It helps:
Keep messages and updates in the correct order.
Work out what caused what when debugging.
Recognize which events are truly independent (concurrent) and can safely run in parallel.
Quick Recap:
A → B means event A causally precedes (happens before) event B.
Events with no causal path between them are concurrent.
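The A → B and concurrency checks above are usually implemented with vector clocks. Here is a minimal sketch (the clock values are invented for the example, and this omits the update rules a real implementation needs):

```python
# Each process keeps a vector of counters, one slot per process.
# An event's vector clock summarizes everything that causally precedes it.

def happened_before(vc_a, vc_b):
    """True if the event with clock vc_a causally precedes the one with vc_b."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """True if neither event causally precedes the other."""
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

# A = you send a message on process 0; B = your friend replies on process 1
# after receiving it, so B's clock dominates A's; C is unrelated to A.
A = [1, 0]
B = [1, 1]
C = [0, 1]

print(happened_before(A, B))  # → True  (A → B)
print(concurrent(A, C))       # → True  (independent events)
```

Comparing vectors component-wise is what lets the system answer "did A cause B?" without any global clock.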
1. Communication: Processes have no shared memory, so the system needs reliable ways to exchange messages (message passing, remote procedure calls).
2. Managing Processes: Deciding where processes run and how they are created, scheduled, and moved between nodes.
3. Naming Things: Giving resources, files, and processes names that can be looked up no matter where they physically live.
4. Synchronization (Keeping Things in Sync): Coordinating events across machines that have no common clock.
5. Storing and Accessing Data: Designing distributed file systems and databases that stay fast and reliable.
6. Consistency and Replication: Keeping multiple copies of data in agreement while using replicas for speed and availability.
7. Handling Failures (Fault Tolerance): Detecting crashed nodes or lost messages and recovering so the system keeps running.
8. Security: Protecting communication and resources with authentication, authorization, and encryption.
9. Scalability and Modularity: Letting the system grow by adding nodes, with work spread out rather than centralized.
Send(destination, data):
destination : Who you are sending the data to.
data : The message (in the user's buffer) to be sent.
Receive(source, buffer):
source : Who you are expecting data from (can be anyone or a specific process).
buffer : The user buffer where the received data will be stored.
1. Buffered Communication:
Data is first copied from the user's buffer to a temporary system buffer before being
sent over the network.
Safer because if the receiver isn’t ready, the data is still stored temporarily.
2. Unbuffered Communication:
Data goes directly from the user’s buffer to the network.
Faster but riskier—if the receiver isn’t ready, the data could be lost.
Communication primitives can be classified based on how they handle synchronization and
blocking.
Synchronous Primitives:
The sender and receiver must "handshake"—both must be ready for the message
to be sent and received.
The Send() only finishes when the Receive() is also called and completed.
Good for ensuring messages are properly received but can slow things down.
Asynchronous Primitives:
The Send() returns control immediately after copying the data out of the user
buffer, even if the receiver hasn't received it yet.
Receiver doesn't need to be ready immediately.
Faster, but there’s a risk the message might not be delivered right away.
Blocking Primitives:
The process waits (or blocks) until the operation (sending or receiving) is fully done.
Example: In a blocking Send() , the process won’t continue until it knows the data
has been sent.
Non-Blocking Primitives:
The process immediately continues after starting the send or receive operation, even
if it’s not finished.
It gets a handle (like a ticket) that it can use later to check if the message was
successfully sent or received.
Useful for doing other work while waiting for communication to finish.
When you use non-blocking communication, the system gives you a handle (like a reference
number) to check if the operation is complete.
1. Polling: You can keep checking in a loop to see if the operation is done.
2. Wait Operation: You can use a Wait() function with the handle, and it will block until the
communication is complete.
Example Scenarios
Blocking synchronous Send: the sender waits until the receiver has actually accepted the message.
Non-blocking synchronous Send: the sender gets a handle back immediately, but delivery still needs a matching Receive.
Blocking asynchronous Send: the sender waits only until the data has been copied out of its buffer.
Non-blocking asynchronous Send: the sender continues immediately after initiating the copy and checks the handle later.
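One such scenario, a non-blocking send that returns a handle, can be simulated with Python's thread pool. The `send` function and the process name are made up; `time.sleep` stands in for a slow network transfer:

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def send(destination, data):
    """Pretend network send: the sleep simulates a slow transfer."""
    time.sleep(0.1)
    return f"delivered {data!r} to {destination}"

# Non-blocking send: start the operation, immediately get a handle back.
handle = executor.submit(send, "process-B", "hello")

# The sender is free to do other work while the send is in flight.
print("doing other work while the send is in flight")

# Wait(): block on the handle until the communication completes.
result = handle.result()
print(result)  # → delivered 'hello' to process-B
```

`handle.done()` would be the polling variant: call it in a loop instead of blocking on `result()`.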
1. Distributed Application
The program running across the system (e.g., Google Docs editing together).
2. Middleware (Distributed Software)
A special software layer that lets all nodes talk and coordinate with each other.
Hides differences between systems (e.g., one node may use Linux, another Windows).
Global State = The combined information about what’s happening in all processes and
communication channels in the system at a specific time.
Local State: Each process (computer) has its own local state, which includes its
memory, tasks it's working on, and the messages it has sent/received.
Channel State: Each communication channel (the connection between processes)
has its state, which includes messages that have been sent but not yet received.
1. Detecting Problems: Like finding deadlocks (when processes are stuck waiting for each
other) or checking if tasks have finished.
2. Failure Recovery: Saving the system’s state (called a checkpoint) helps restore it after a
crash.
3. System Analysis: Understanding how the system behaves for testing and verifying
correctness.
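A global state is easy to picture as plain data: every process's local state plus the messages still in flight on each channel. This toy snapshot uses invented process names and values:

```python
# Toy global-state snapshot. Process names, balances, and message ids
# are hypothetical; a real snapshot algorithm (e.g. Chandy-Lamport)
# records these consistently while the system keeps running.

local_states = {
    "P1": {"balance": 100, "sent": ["m1"]},
    "P2": {"balance": 50, "received": []},
}

# Channel state: messages sent by P1 but not yet received by P2.
channel_states = {
    ("P1", "P2"): ["m1"],
}

global_state = {"processes": local_states, "channels": channel_states}

# Consistency check: a message belongs to the channel's state exactly
# when it appears as sent but not yet as received.
in_flight = set(local_states["P1"]["sent"]) - set(local_states["P2"]["received"])
print(in_flight)  # → {'m1'}
```

The key point the sketch shows: recording only the processes' memories is not enough; the messages sitting in the channels are part of the state too.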
14. Compare logical and physical concurrency.
1. Logical Concurrency (No Connection):
Two events are logically concurrent if they don’t affect each other.
Example: You send a text, and your friend posts on Instagram at the same time. These
actions don’t affect each other.
2. Physical Concurrency (Same Time in Real Life):
Two events happen at exactly the same time in real life.
Example: You and your friend both press "send" on a message at the exact same
second.
Send()
Used by a process to transmit data to another process. It specifies the destination and the user buffer holding the data.
Receive()
Used by a process to accept data from a sender. It specifies the source and the user buffer where the incoming data will be stored.
Buffering Options
Buffered Send
The message is copied into a system buffer.
The sender does not wait for the receiver to be ready.
Unbuffered Send
Message transfer only happens when both sender and receiver are ready.
Requires synchronization.
Synchronous Communication
How it works:
The sender and receiver must both be ready (a handshake); the Send() completes only when the matching Receive() has also been called and completed.
Asynchronous Communication
How it works:
The Send() returns control as soon as the data has been copied out of the user buffer, even if the receiver has not received it yet.
1. FIFO (First-In First-Out) Model
How it works:
Messages are delivered in the same order they are sent.
📨 If A sends Message 1, then Message 2 → B will receive Message 1 before Message 2.
Example: Like standing in a queue at a shop—first person in line gets served first.
2. Non-FIFO Model
How it works:
Messages may arrive in any order, regardless of when they were sent.
📨 If A sends Message 1, then Message 2 → B might receive Message 2 first, then
Message 1.
Example: Like tossing messages into a box and pulling them out randomly.
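The difference between the two models can be shown with a few lines of Python: a queue preserves send order, while a shuffled list models arbitrary delivery. The message names are illustrative:

```python
import random
from collections import deque

random.seed(0)  # deterministic shuffle for the example

messages = ["Message 1", "Message 2", "Message 3"]

# FIFO channel: messages come out in exactly the order they went in.
fifo = deque(messages)
fifo_delivery = [fifo.popleft() for _ in range(len(messages))]
print(fifo_delivery)  # → ['Message 1', 'Message 2', 'Message 3']

# Non-FIFO channel: delivery order is arbitrary, like pulling
# messages out of a box at random.
non_fifo = messages[:]
random.shuffle(non_fifo)
print(non_fifo)  # some permutation of the three messages
```

Causal ordering sits between the two: delivery order may vary, but never in a way that puts an effect before its cause.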
3. Causal Ordering Model
How it works:
Messages are delivered respecting cause and effect: if the sending of Message 1 causally precedes the sending of Message 2, every receiver gets Message 1 before Message 2.
Benefits:
Makes sure events happen in a logically correct order.
Includes FIFO, but adds more constraints.
Helps simplify complex distributed algorithms by automatically preserving order.