Inter-Process Communication (IPC):
What it means: IPC is how processes on different computers talk to each other.
Two methods of communication:
1. Shared Data Approach: Multiple processes share the same memory space to
exchange information.
2. Message Passing Approach: Processes send messages to each other with the
data they need to share, like texting.
Example (Shared Data Approach): Imagine two people working on the same document saved
in a shared folder. Both can access the document and make changes directly.
Message Passing System:
What it does: It allows processes to send messages easily without worrying about the
complex details of the underlying network.
Example: Think of it like using a messaging app where you don’t need to know how the
internet works; you just send messages, and they get delivered.
Issues in Message Passing:
1. Sender & Receiver: The sender and receiver must be properly coordinated to
exchange messages.
2. Acceptance of Message: The receiver needs to be ready to accept and process the
message.
3. Reply to the Message: The sender might need a confirmation or reply from the receiver.
4. Failures in Communication: Messages can get lost, or one of the processes may crash
during the communication.
5. Buffering by the Receiver: The receiver may store messages temporarily if it’s not
ready to process them immediately.
6. Buffer Size: There must be enough buffer space to store incoming messages.
7. Order of Messages: Messages need to be received in the correct order, which can be
challenging when there are many outstanding (waiting) messages.
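Issue 7 above (message ordering) is often handled with sequence numbers: the receiver holds back out-of-order messages and releases them only when the next expected number arrives. A minimal sketch, with illustrative function and variable names:

```python
import heapq

def deliver_in_order(packets):
    """Reorder (seq, payload) pairs that may arrive out of order.

    A message is delivered only once every message with a smaller
    sequence number has been delivered first.
    """
    held_back = []      # min-heap of (seq, payload) waiting for delivery
    expected = 0        # next sequence number to deliver
    delivered = []
    for seq, payload in packets:
        heapq.heappush(held_back, (seq, payload))
        # Release every message whose turn has come.
        while held_back and held_back[0][0] == expected:
            delivered.append(heapq.heappop(held_back)[1])
            expected += 1
    return delivered

# "b" arrives before "a", but delivery still follows sequence order.
print(deliver_in_order([(1, "b"), (0, "a"), (2, "c")]))  # ['a', 'b', 'c']
```

The heap keeps the cost of holding many outstanding messages low, since the receiver only ever inspects the smallest buffered sequence number.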
Synchronization Primitives:
These are methods that determine how the sending and receiving processes interact.
1. Blocking Send Primitive: The sender stops executing until it gets an acknowledgment
that the message has been received.
o Example: A person sends a letter and waits for the delivery confirmation before
doing anything else.
2. Non-Blocking Send Primitive: The sender continues its work right after the message is
sent, without waiting for confirmation.
o Example: You drop a letter in the mailbox and continue your tasks without
waiting for confirmation that it was delivered.
3. Blocking Receive Primitive: The receiver stops its execution until a message arrives.
o Example: You're standing by your mailbox, waiting for a letter to arrive, doing
nothing else.
4. Non-Blocking Receive Primitive: The receiver can continue its work and doesn't wait
for the message to arrive.
o Example: You check your mailbox occasionally while doing other things.
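The four primitives above can be sketched in a single process using Python's `queue.Queue` to stand in for the communication channel; the names here are illustrative, not a real IPC API:

```python
import queue
import threading
import time

mailbox = queue.Queue()

def sender():
    time.sleep(0.1)            # simulate network delay
    mailbox.put("hello")       # non-blocking send: returns immediately

# Non-blocking receive: check the mailbox and move on if it is empty.
try:
    msg = mailbox.get_nowait()
except queue.Empty:
    msg = None                 # nothing yet; keep doing other work

threading.Thread(target=sender).start()

# Blocking receive: execution stops here until a message arrives.
msg = mailbox.get()
print(msg)                     # prints "hello"
```

A blocking send with acknowledgment would add a second queue in the reverse direction, on which the sender waits after each `put`.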
If the receiver crashes or the message is lost, the sender can remain blocked forever. This
is why timeout values are used, where the sender only waits for a limited time.
o Example: If your friend doesn’t reply to your text after an hour, you assume
something went wrong and stop waiting.
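The timeout idea maps directly onto a bounded wait, as this small sketch shows (again using a `queue.Queue` as a stand-in channel):

```python
import queue

mailbox = queue.Queue()

# The other side never replies. Rather than block forever, wait at
# most one second and then recover (the "assume something went
# wrong" case described above).
try:
    reply = mailbox.get(timeout=1.0)
except queue.Empty:
    reply = None   # timed out: give up instead of waiting indefinitely
```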
Synchronous Communication:
o Easier to implement.
o No separate recovery mechanism is needed, since the sender learns whether the
message arrived, but it limits multitasking because the sender and receiver wait
for each other.
o Can lead to deadlock where both processes are waiting for each other and nothing
happens.
Asynchronous Communication:
o More complex but allows for greater flexibility and multitasking since processes
don't wait for each other.
What is buffering? When messages are sent between processes, they are temporarily
stored in a buffer if the receiver is not ready.
How does buffering work?
o The message is copied from the sender's address space to the receiver's,
sometimes through the operating system's memory.
Buffering Strategies:
1. Synchronous Mode (No Buffer): No buffering is used. The sender and receiver
must be ready at the same time.
2. Asynchronous Mode (Buffer with Unbounded Capacity): Messages are stored
in a buffer until the receiver is ready, and the buffer has no strict size limit.
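The two buffering modes above can be approximated with Python queues. This is an in-process sketch: `Queue(maxsize=1)` is only the closest stdlib stand-in for a true zero-capacity rendezvous channel, and an "unbounded" queue is in practice limited by available memory:

```python
import queue

# Synchronous mode (no buffer): a rendezvous — the sender blocks
# until the receiver takes the item.
rendezvous = queue.Queue(maxsize=1)

# Asynchronous mode (unbounded buffer): put() never blocks; messages
# accumulate until the receiver drains them.
unbounded = queue.Queue()      # maxsize=0 means "no limit"
for i in range(1000):
    unbounded.put(i)           # never blocks, nothing is lost

assert unbounded.qsize() == 1000
```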
In summary, message passing in IPC faces challenges like failures, buffering, and
synchronization. Blocking communication pauses processes until the message is exchanged,
while non-blocking allows processes to continue working. Buffering helps manage message
transfers, especially when the receiver isn’t immediately ready.
First Strategy (message kept in the sender's space):
How it works:
o The message stays in the Sender Process Address Space (SPAS) until the
receiver is ready.
o The sender delays transmission until it receives a signal (ACK) indicating that
the receiver is ready.
o After receiving the ACK, the sender transmits the message.
Problem: This strategy can be slow because the sender must wait for the receiver to be
ready, and the message may need to be sent multiple times.
Second Strategy (timeout and retransmission):
How it works:
o The sender sends the message and waits for an ACK.
o If the ACK is not received within a certain time (timeout), the message is
discarded, and the sender retries sending it.
Problem: This strategy can cause delays and wasted resources if messages keep timing
out and need to be resent frequently.
Synchronous Send with No Buffering
How it works:
o When there is no buffer, the message is transferred directly between the sender
and receiver in synchronous communication.
o However, if the receiver is not ready, the message might have to be sent multiple
times, and the receiver has to wait, causing inefficiency.
Single-Message Buffer
How it works:
o A buffer with the capacity to store one message is used at the receiver's side.
o This buffer holds the message if the receiver is not ready, preventing the need to
resend the message.
o The buffer is located either in the kernel (operating system) space or in the
receiver’s process address space.
Benefits: This strategy ensures the message is immediately available when the receiver is
ready, improving efficiency compared to no-buffering.
Limitation: Only one message can be held in the buffer at a time, so if multiple
messages arrive, they must wait for the buffer to be cleared.
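A single-slot buffer behaves like a queue of capacity one, as in this sketch (illustrative names only):

```python
import queue

# A one-message buffer at the receiver: the message waits here if the
# receiver is not ready, so the sender need not retransmit it.
slot = queue.Queue(maxsize=1)

slot.put("request-1")      # fits: the single slot was free
assert slot.full()         # any second message must now wait

# When the receiver becomes ready, the buffered message is available
# immediately — no resend required.
msg = slot.get()
```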
Unbounded-Capacity Buffer
How it works:
o In asynchronous communication, a buffer with unlimited capacity can store all
messages sent to the receiver, ensuring that none are lost.
Problem: In practice, this strategy is unrealistic because no system can have infinite
storage.
Finite-Bound Buffer
How it works:
o A finite buffer with limited capacity is used in asynchronous communication.
o Messages are stored in the buffer until the receiver is ready to process them.
Problem: The buffer can overflow if too many messages are sent and the receiver does
not process them quickly enough. This can cause messages to be lost or delayed.
Strategies for Handling Buffer Overflow:
1. Unsuccessful Communication:
o When the buffer is full, new message transfers fail, and the sender must retry
later.
2. Flow-Controlled Communication:
o The sender is blocked (paused) until the receiver processes some messages,
freeing up space in the buffer for new ones.
o This ensures messages are not lost, but it can slow down the sender.
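Both overflow-handling strategies can be sketched with a bounded Python queue; the failing send corresponds to `put_nowait`, the flow-controlled send to a blocking `put`:

```python
import queue

buf = queue.Queue(maxsize=2)   # finite-bound buffer
buf.put("m1")
buf.put("m2")                  # buffer is now full

# 1. Unsuccessful communication: the transfer fails immediately and
#    the sender is told to retry later.
try:
    buf.put_nowait("m3")
    accepted = True
except queue.Full:
    accepted = False           # send failed; the caller must retry

# 2. Flow-controlled communication: the sender blocks until the
#    receiver frees a slot, so no message is dropped.
buf.get()                      # receiver processes one message...
buf.put("m3")                  # ...and the pending send can proceed
```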
Summary:
1st strategy: Waits for the receiver to be ready, keeping the message in the sender's
space.
2nd strategy: Uses timeouts and retries, discarding messages if ACKs aren’t received.
Synchronous communication can use no buffering or single-message buffers, while
asynchronous communication often relies on finite-bound buffers with overflow
management strategies.
Buffer-Creation System Call:
What it does: A receiver process can use a buffer-creation system call to create a buffer
of a specified size. This buffer can be placed in either:
1. Kernel Address Space (AS): In the operating system's memory.
2. Receiver Process Address Space: In the memory of the receiving process.
Multidatagram Messages:
What it means: When a message is larger than the Maximum Transmission Unit (MTU),
it cannot be sent in one go, so it is broken down into smaller pieces called
fragments. Each fragment is sent separately, and the Message Passing System (MPS) is
responsible for:
1. Break down the large message into fragments (on the sender's side).
2. Reassemble these fragments into the original message (on the receiver's side).
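Both MPS responsibilities can be sketched in a few lines; the tiny MTU and function names here are purely illustrative:

```python
MTU = 4  # tiny value for illustration; real networks use e.g. 1500 bytes

def fragment(message: bytes, mtu: int = MTU):
    """Sender side: split a large message into numbered fragments."""
    return [(i, message[off:off + mtu])
            for i, off in enumerate(range(0, len(message), mtu))]

def reassemble(fragments):
    """Receiver side: restore the original message, even if the
    fragments arrived out of order, by sorting on fragment number."""
    return b"".join(part for _, part in sorted(fragments))

frags = fragment(b"hello world")
frags.reverse()                        # simulate out-of-order arrival
assert reassemble(frags) == b"hello world"
```

Numbering each fragment is what makes reassembly possible when the network delivers pieces out of order.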
Process Addressing:
1. Explicit Addressing:
o The sender knows exactly who they want to communicate with and provides the
specific process ID in the communication primitive.
o Example: Send(process-id, message) means the sender is sending a message
to a specific process with the given process-id.
2. Implicit Addressing:
o The sender doesn’t specify which process to communicate with, but rather a
service type (e.g., a "printing" service). Any process offering that service can
respond.
o Example: Send_any(service_id, message) sends a message to any process
offering the service identified by service_id.
Functional Addressing:
What it is: Instead of targeting a specific process, the sender uses an address that
identifies a service. It doesn’t matter which particular server or process handles the
request, as long as the service is provided.
1. Migration Problem: When a process moves (or "migrates") from one machine to
another, the original machine-ID might no longer be valid. This is important for load
balancing, where processes from heavily loaded machines may be moved to less busy
machines.
2. Link-based Addressing: To solve this, processes are identified by a combination of
their:
o Original machine-ID.
o Original process-ID.
o Current machine-ID (if they migrate).
During migration, a link is left on the old machine to help find the process at its new
location.
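The link-following lookup can be sketched as a chain of forwarding entries; the registry and names below are hypothetical:

```python
# Hypothetical per-machine links: for a migrated process, each old
# machine remembers where it moved to (None = the process is here).
forwarding = {
    ("m1", "p7"): "m2",   # p7 was created on m1, then moved to m2
    ("m2", "p7"): "m3",   # ...and later moved again to m3
    ("m3", "p7"): None,   # p7 currently runs on m3
}

def locate(machine, pid):
    """Follow the chain of links left behind by each migration."""
    hops = 0
    while forwarding[(machine, pid)] is not None:
        machine = forwarding[(machine, pid)]
        hops += 1          # every extra hop adds locating overhead
    return machine, hops

print(locate("m1", "p7"))  # ('m3', 2)
```

Note that if any machine in the chain (here m1 or m2) is down, the lookup fails — exactly the two drawbacks discussed next.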
Drawbacks of Link-based Addressing:
1. Overhead of Locating a Process: If a process migrates many times, the chain of links
must be followed hop by hop, making location inefficient.
2. Node Failure: If an old machine where the process once lived is down, it might be
impossible to find the process.
Two-Level Naming Scheme:
The two-level naming scheme identifies processes in a distributed system in a flexible and
efficient way.
1. High-Level Name:
o This is machine-independent, meaning it doesn't depend on the specific machine
where the process is running.
o Typically, it’s an ASCII string that identifies a process or service (e.g.,
“OrderProcessor” or “DataService”).
2. Low-Level Name:
o This is machine-dependent, meaning it includes the actual machine where the
process is running.
o It’s a combination of the machine-id and local-id (e.g., machine_id@local_id),
which identifies the process on that specific machine.
How it Works:
When a process wants to communicate with another process, it uses the high-level name.
The operating system's kernel contacts a name server, which stores a table mapping
high-level names to low-level names (machine IDs and process IDs).
If the high-level name is for a service instead of a specific process (e.g., “FileServer”),
the name server can map the service name to one or more processes offering that service.
Example:
If Process A wants to send a message to Process B, it will provide Process B’s high-level
name.
The kernel of Process A's machine asks the name server, "Where is Process B?"
The name server responds with Process B’s low-level name (e.g.,
machine1@process23).
Process A can now send a message directly to Process B.
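The name-server lookup in this example can be sketched as a simple table; the table contents and function name are hypothetical:

```python
# Hypothetical name-server table mapping machine-independent
# high-level names to low-level names (machine_id@local_id).
name_table = {
    "ProcessB": ["machine1@process23"],
    "FileServer": ["machine2@process5", "machine4@process9"],
}

def resolve(high_level_name):
    """Kernel-side lookup: ask the name server for a low-level name.
    For a service name, any registered process will do."""
    candidates = name_table[high_level_name]
    return candidates[0]   # naive choice; a real server might balance load

print(resolve("ProcessB"))  # machine1@process23
```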
Problems with a Single Name Server:
1. Poor Reliability: If the name server fails, the system may not be able to map high-level
names to low-level names.
2. Poor Scalability: As the system grows, a single name server can become overloaded.
Solution:
Replication of the Name Server: Multiple copies of the name server are maintained to
ensure reliability and handle more requests.
Drawbacks of Replication:
Extra Overhead: Replicating the name server adds complexity and overhead in keeping
all copies synchronized.
Failures can happen at various stages of communication. Common failures include loss of
the request message, loss of the response message, and a crash of the sender or receiver
during the exchange.
Handling these failures usually involves timeouts, retry mechanisms, or specialized
failure-recovery processes.
To address the issues of message loss and ensure reliability in communication between
processes, a reliable IPC (Inter-Process Communication) protocol is used. The main idea is to
make sure messages are delivered correctly through retransmissions and acknowledgments
(ACKs). This helps in overcoming failures like message loss and provides a more robust
communication system.
Key Concepts:
1. Internal Retransmission:
o If a message is lost or no acknowledgment (ACK) is received within a set time (due to
network issues or other failures), the message is automatically retransmitted by the
sender.
2. Acknowledgment (ACK):
o After receiving a message, the receiver’s machine sends an ACK back to the sender’s
machine to confirm the message was received successfully.
Four-Message Reliable IPC Protocol:
This protocol uses four messages to ensure that the request is executed reliably.
Steps:
1. Request Message: Sender sends a request message to the receiver.
2. ACK for Request: Receiver sends an acknowledgment (ACK) confirming it received the
request.
3. Response Message: Receiver processes the request and sends the result back to the
sender.
4. ACK for Response: Sender sends an acknowledgment confirming it received the
response.
Waiting Time: The sender waits for a time period slightly longer than the round trip time
(RTT) plus the average time required to execute the request before retransmitting the
message.
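The four steps can be simulated in one process with a pair of queues standing in for the network (no retransmission or loss is modeled here; all names are illustrative):

```python
import queue
import threading

REQUEST_TIMEOUT = 1.0   # in a real system: slightly more than RTT
                        # plus the average request-execution time

to_receiver = queue.Queue()
to_sender = queue.Queue()

def receiver():
    req = to_receiver.get()                 # step 1: receive request
    to_sender.put(("ACK", req))             # step 2: ACK for request
    to_sender.put(("RESPONSE", req * 2))    # step 3: response (here: doubling)
    to_receiver.get()                       # step 4: wait for ACK of response

threading.Thread(target=receiver).start()

to_receiver.put(21)                                           # step 1: request
assert to_sender.get(timeout=REQUEST_TIMEOUT) == ("ACK", 21)  # step 2
kind, result = to_sender.get(timeout=REQUEST_TIMEOUT)         # step 3
to_receiver.put("ACK")                                        # step 4
print(result)  # 42
```

In a full implementation, each `get(timeout=...)` that raises `queue.Empty` would trigger a retransmission of the preceding message.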
Three-Message Reliable IPC Protocol:
This protocol reduces the number of messages to three while still ensuring reliability.
Steps:
1. Request Message: Sender sends the request to the receiver.
2. Response Message: Receiver processes the request and sends the response back to the
sender.
3. ACK for Response: Sender acknowledges the receipt of the response message.
Two-Message Reliable IPC Protocol:
This is the simplest form of the protocol, using only two messages for communication.
Steps:
1. Request Message: Sender sends the request.
2. Response Message: Receiver processes the request and sends the response back.
In this case, there is no acknowledgment of the response, making it less reliable compared to
the other protocols, but faster and more efficient in scenarios where loss is rare.
Summary of Protocols:
Four-Message Protocol: Maximum reliability with four steps (request, ACK for request,
response, ACK for response).
Three-Message Protocol: Moderate reliability with three steps (request, response, ACK for
response).
Two-Message Protocol: Minimal reliability but faster, with only two steps (request, response).