MODULE 1
Blocking/Non-blocking, Synchronous/Asynchronous Primitives
Message send and message receive communication primitives are denoted Send() and Receive().
• Send() has at least two parameters: the destination, and the buffer in the user space containing the data to be sent.
• Receive() has at least two parameters: the source from which the data is to be received, and the user buffer into which the data is to be received.
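As a minimal Go sketch of these two signatures (NodeID, the stub bodies, and the printed output are illustrative assumptions, not a real transport API):

```go
package main

import "fmt"

// NodeID is a hypothetical process identifier; the primitives only
// require that a destination/source be named somehow.
type NodeID int

// Send takes at least two parameters: the destination, and the
// user-space buffer holding the data to be sent. The body is a stub
// standing in for handing the buffer to the transport layer.
func Send(dest NodeID, userBuf []byte) error {
	fmt.Printf("Send: %d bytes to node %d\n", len(userBuf), dest)
	return nil
}

// Receive takes at least two parameters: the source, and the user
// buffer into which the data is to be received. The stub fakes an
// arriving message by copying fixed bytes into the caller's buffer.
func Receive(src NodeID, userBuf []byte) (int, error) {
	n := copy(userBuf, []byte("hello"))
	fmt.Printf("Receive: %d bytes from node %d\n", n, src)
	return n, nil
}

func main() {
	buf := make([]byte, 64)
	_ = Send(2, []byte("hello"))
	_, _ = Receive(2, buf)
}
```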
There are two ways of sending data when the Send primitive is invoked:
1) Buffered option
◦ The buffered option, which is the standard option, copies the data from the user buffer to a kernel buffer; the data later gets copied from the kernel buffer onto the network.
2) Unbuffered option
◦ In the unbuffered option, the data gets copied directly from the user buffer onto the network.
◦ For the Receive primitive, the buffered option is usually required because the data may already have arrived when the primitive is invoked and needs a storage place in the kernel.
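A Go analogy using channels is sketched below: the buffered channel plays the role of the kernel buffer. This is only an analogy for the copying behavior, not the actual kernel mechanism.

```go
package main

import "fmt"

func main() {
	// Buffered option (analogy): the send completes by copying the
	// data into an intermediate buffer even though no receiver is
	// ready yet, just as data is parked in a kernel buffer.
	buffered := make(chan string, 1)
	buffered <- "data" // returns immediately; data sits in the buffer

	// Unbuffered option (analogy): the send completes only when a
	// receiver takes the data directly, like copying the user buffer
	// straight onto the network with no intermediate copy.
	unbuffered := make(chan string)
	go func() { unbuffered <- "data" }() // blocks until the receive below

	fmt.Println(<-buffered, <-unbuffered)
}
```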
Synchronous and Asynchronous Primitives
1. Blocking receive
Does not return until the message has arrived and been copied
into the buffer of the receiver process.
2. Non-blocking receive
Can return before the message has arrived and been copied into
the buffer of the receiver process.
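A minimal Go sketch of both behaviors (a channel stands in for the communication subsystem):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	msgs := make(chan string, 1)

	// Non-blocking receive: returns immediately even though no
	// message has arrived yet.
	select {
	case m := <-msgs:
		fmt.Println("got:", m)
	default:
		fmt.Println("no message yet; returning without blocking")
	}

	go func() {
		time.Sleep(10 * time.Millisecond)
		msgs <- "hello"
	}()

	// Blocking receive: does not return until the message has
	// arrived and been copied into the receiver's variable.
	fmt.Println("got:", <-msgs)
}
```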
PROCESSOR SYNCHRONY
• Processor synchrony indicates that all the processors execute in lock-step with their clocks synchronized.
4. Synchronization
• Mechanisms for synchronization or coordination among the processes are essential.
• Mutual exclusion is the classical example of synchronization, but many other forms of synchronization, such as leader election, are also needed.
5. Data Storage and Access
• Schemes for data storage, and implicitly for accessing the data in a fast and scalable manner across the network, are important for efficiency.
• Traditional issues such as file system design have to be
reconsidered in the setting of a distributed system.
6. Consistency and Replication
• To avoid bottlenecks, to provide fast access to data, and to
provide scalability, replication of data objects is highly
desirable.
• This leads to issues of managing the replicas and dealing with consistency among the replicas/caches in a distributed setting.
• A simple example issue is deciding the level of granularity
(i.e., size) of data access.
7. Fault Tolerance
• All the processes need to agree on which process will play the role of a distinguished process, called a leader process.
• A leader is necessary even for many distributed algorithms because there is often some asymmetry, as in initiating some action such as a broadcast or collecting the state of the system, or in “regenerating” a token that gets “lost” in the system.
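As a toy illustration of the agreement involved (assuming every process already knows the full set of process IDs; real election algorithms, such as ring-based or bully algorithms, must also handle message exchange and failures):

```go
package main

import "fmt"

// electLeader applies the same deterministic rule at every process
// ("highest ID wins"), so all processes that know the same non-empty
// set of IDs agree on the same leader. This is a toy rule, not a
// full distributed election algorithm.
func electLeader(ids []int) int {
	leader := ids[0]
	for _, id := range ids[1:] {
		if id > leader {
			leader = id
		}
	}
	return leader
}

func main() {
	processes := []int{3, 7, 2, 5}
	fmt.Println("agreed leader:", electLeader(processes)) // prints 7
}
```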
8. Group Communication, Multicast, and Ordered Message Delivery
• A group is a collection of processes that share a common
context and collaborate on a common task within an
application domain.
• Specific algorithms need to be designed to enable efficient
group communication and group management wherein
processes can join and leave groups dynamically, or even
fail.
• When multiple processes send messages concurrently,
different recipients may receive the messages in different
orders, possibly violating the semantics of the distributed
program.
• Hence, formal specifications of the semantics of ordered
delivery need to be formulated, and then implemented.
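One simple (assumed) implementation of totally ordered delivery stamps each message with a sequence number from a single sequencer process; every recipient then delivers messages in sequence-number order, so all recipients see the same order regardless of arrival order:

```go
package main

import (
	"fmt"
	"sort"
)

// Message carries a sequence number, assumed here to be assigned by
// a central sequencer process.
type Message struct {
	Seq  int
	Body string
}

// deliverInOrder sorts arrived messages by sequence number before
// delivering them to the application.
func deliverInOrder(arrived []Message) []string {
	sort.Slice(arrived, func(i, j int) bool { return arrived[i].Seq < arrived[j].Seq })
	out := make([]string, 0, len(arrived))
	for _, m := range arrived {
		out = append(out, m.Body)
	}
	return out
}

func main() {
	// Two recipients receive the same three messages in different
	// network orders...
	r1 := []Message{{2, "b"}, {1, "a"}, {3, "c"}}
	r2 := []Message{{3, "c"}, {1, "a"}, {2, "b"}}
	// ...yet both deliver identically: [a b c] [a b c]
	fmt.Println(deliverInOrder(r1), deliverInOrder(r2))
}
```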
9. Monitoring Distributed Events and Predicates
• Predicates are used for specifying conditions on the global system state, and are useful for applications such as debugging, sensing the environment, and industrial process control.
• An important paradigm for monitoring distributed events is
that of event streaming, wherein streams of relevant events
reported from different processes are examined collectively to
detect predicates.
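A toy Go sketch of the event-streaming idea (the event values are hypothetical, and the sketch deliberately sidesteps the hard part of the problem: evaluating predicates over consistent global states in the presence of concurrency):

```go
package main

import "fmt"

// Event reports a new value of a named variable at some process.
type Event struct {
	Process string
	Var     string
	Value   int
}

func main() {
	// A merged stream of events reported from different processes.
	stream := []Event{
		{"P1", "x", 0}, {"P2", "y", 3}, {"P1", "x", 5}, {"P2", "y", 1},
	}

	// The monitor examines the stream collectively, maintaining the
	// latest reported state and checking the predicate x > 0 && y > 0.
	state := map[string]int{}
	for i, e := range stream {
		state[e.Var] = e.Value
		if state["x"] > 0 && state["y"] > 0 {
			fmt.Printf("predicate detected after event %d: %v\n", i, state)
			break
		}
	}
}
```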
10. Distributed Program Design and Verification Tools
• Methodically designed and verifiably correct programs can
greatly reduce the overhead of software design, debugging,
and engineering.
• Designing mechanisms to achieve these design and
verification goals is a challenge.
11. Debugging Distributed Programs