Module 5


Principles of Message Passing Programming

• The logical view of a machine supporting the message-passing paradigm consists of p processes,
each with its own exclusive address space.
• Each data element must belong to one of the partitions of the space; hence, data must be explicitly
partitioned and placed.
• All interactions (read-only or read/write) require cooperation of two processes - the process
that has the data and the process that wants to access the data.
• These two constraints make underlying costs very explicit to the programmer.
• Message-passing programs are often written using the asynchronous or loosely synchronous
paradigms.
• In the asynchronous paradigm, all concurrent tasks execute asynchronously.
• In the loosely synchronous model, tasks or subsets of tasks synchronize to perform
interactions. Between these interactions, tasks execute completely asynchronously.
• Most message-passing programs are written using the single program multiple data (SPMD)
model.

The Building Blocks: Send and Receive


• The prototypes of these operations are as follows:
send(void *sendbuf, int nelems, int dest)
receive(void *recvbuf, int nelems, int source)
• Consider the following code segments:
P0:
a = 100;
send(&a, 1, 1);
a = 0;

P1:
receive(&a, 1, 0);
printf("%d\n", a);
• The semantics of the send operation require that the value received by process P1 must be 100 as
opposed to 0.
• This motivates the design of the send and receive protocols.

Non-Buffered Blocking Message Passing Operations


• A simple method for forcing send/receive semantics is for the send operation to return only
when it is safe to do so.
• In the non-buffered blocking send, the operation does not return until the matching receive has
been encountered at the receiving process.
• Idling and deadlocks are major issues with non-buffered blocking sends.
• A handshake mechanism is used for a blocking non-buffered send/receive operation.
• It is easy to see that in cases where the sender and receiver do not reach the communication point at
about the same time, there can be considerable idling overheads.
Buffered Blocking Message Passing Operations
• A simple solution to the idling and deadlocking problem outlined above is to rely on buffers at
the sending and receiving ends.
• In buffered blocking sends, the sender simply copies the data into the designated buffer and
returns after the copy operation has been completed. The data is copied into a buffer at the receiving
end as well.
• Buffering trades off idling overhead for buffer copying overhead.

Blocking buffered transfer protocols: (a) in the presence of communication hardware with buffers at
send and receive ends; and (b) in the absence of communication hardware, sender interrupts
receiver and deposits data in buffer at receiver end.
• Bounded buffer sizes can have significant impact on performance.
P0:
for (i = 0; i < 1000; i++) {
    produce_data(&a);
    send(&a, 1, 1);
}

P1:
for (i = 0; i < 1000; i++) {
    receive(&a, 1, 0);
    consume_data(&a);
}
What if the consumer were much slower than the producer? The bounded buffer at the receiving end would
fill up, and the sender would eventually have to block, so the program would run at the pace of the consumer.
• Deadlocks are still possible with buffering since receive operations block.
P0:
receive(&a, 1, 1);
send(&b, 1, 1);

P1:
receive(&a, 1, 0);
send(&b, 1, 0);
Non-Blocking Message Passing Operations
• This class of non-blocking protocols returns from the send or receive operation before it is
semantically safe to do so.
• The programmer must therefore ensure the semantics of the send and receive themselves (e.g., not
overwriting a send buffer or reading a receive buffer before the transfer has completed).
• Non-blocking operations are generally accompanied by a check-status operation.
• When used correctly, these primitives are capable of overlapping communication overheads
with useful computations.
• Message passing libraries typically provide both blocking and non-blocking primitives.

Non-blocking non-buffered send and receive operations (a) in absence of communication hardware;
(b) in presence of communication hardware.

Space of possible protocols for send and receive operations


MPI: the Message Passing Interface
• MPI defines a standard library for message-passing that can be used to develop portable
message-passing programs using either C or Fortran.
• The MPI standard defines both the syntax as well as the semantics of a core set of library
routines.
• Vendor implementations of MPI are available on almost all commercial parallel computers.
• It is possible to write fully-functional message-passing programs using only six routines: MPI_Init,
MPI_Finalize, MPI_Comm_size, MPI_Comm_rank, MPI_Send, and MPI_Recv.
• All MPI routines, data-types, and constants are prefixed by “MPI_”. The return code for successful
completion is MPI_SUCCESS.

1) Starting and Terminating the MPI Library


MPI_Init
- called prior to any calls to other MPI routines.
- its purpose is to initialize the MPI environment.
- strips off any MPI related command-line arguments.
- Prototype: int MPI_Init(int *argc, char ***argv)
MPI_Finalize
- called at the end of the computation
- performs various clean-up tasks to terminate the MPI environment.
- Prototype: int MPI_Finalize()
2) Communicators
• A communicator defines a communication domain - a set of processes that are allowed to
communicate with each other.
• Information about communication domains is stored in variables of type MPI_Comm.
• Communicators are used as arguments to all message transfer MPI routines.
• A process can belong to many different (possibly overlapping) communication domains.
• MPI defines a default communicator called MPI_COMM_WORLD which includes all the
processes.
3) Querying Information
• The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of
processes and the label of the calling process, respectively.
• The calling sequences of these routines are as follows:
int MPI_Comm_size(MPI_Comm comm, int *size)
int MPI_Comm_rank(MPI_Comm comm, int *rank)
• The rank of a process is an integer that ranges from zero up to the size of the communicator
minus one.
4) Sending and Receiving Messages
• The basic functions for sending and receiving messages in MPI are MPI_Send and
MPI_Recv, respectively.
• The calling sequences of these routines are as follows:

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm,
MPI_Status *status)

• MPI provides equivalent datatypes for all C datatypes. This is done for portability reasons.
• The message-tag can take values ranging from zero up to the MPI defined constant
MPI_TAG_UB.
• If source is set to MPI_ANY_SOURCE, then any process of the communication domain can be
the source of the message.
• If tag is set to MPI_ANY_TAG, then messages with any tag are accepted.
• On the receive side, the message must be of length equal to or less than the length field
specified.
• On the receiving end, the status variable can be used to get information about the MPI_Recv
operation.
• The corresponding data structure contains:
typedef struct MPI_Status {
    int MPI_SOURCE;
    int MPI_TAG;
    int MPI_ERROR;
};
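
As a concrete illustration (a minimal sketch assuming the program is run with at least two processes; the tag value 0 is arbitrary), the earlier two-process example can be written with MPI_Send and MPI_Recv, using the status object and MPI_Get_count to inspect the received message:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myrank, a, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        a = 100;
        MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        a = 0;                       /* safe: the blocking send has already secured the value */
    } else if (myrank == 1) {
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);    /* number of elements actually received */
        printf("received %d element(s): a = %d (source %d, tag %d)\n",
               count, a, status.MPI_SOURCE, status.MPI_TAG);
    }
    MPI_Finalize();
    return 0;
}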

Avoiding deadlock: deadlock is a common issue in message-passing programs. Consider a scenario where
process 0 sends two messages to process 1, and process 1 receives them in the reverse order.
If MPI_Send is blocking, process 0 will wait for process 1 to receive the message with tag 1.
Simultaneously, process 1 is waiting to receive the message with tag 2 from process 0. Both processes
are waiting on each other, leading to a deadlock.
Solution: Ensure that send and receive operations are matched in order; for instance, process 1 should
receive the messages in the same order in which process 0 sends them.
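A minimal sketch of this scenario (the message sizes and the tags 1 and 2 are illustrative choices, and rank is assumed to have been obtained with MPI_Comm_rank):

int a[10], b[10];              /* message contents omitted for brevity */
MPI_Status status;

if (rank == 0) {
    MPI_Send(a, 10, MPI_INT, 1, 1, MPI_COMM_WORLD);   /* first message, tag 1 */
    MPI_Send(b, 10, MPI_INT, 1, 2, MPI_COMM_WORLD);   /* second message, tag 2 */
} else if (rank == 1) {
    /* Receiving tag 2 before tag 1 could deadlock with blocking sends;
       receiving in the same order as the sends is always safe. */
    MPI_Recv(a, 10, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    MPI_Recv(b, 10, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
}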
Sending and Receiving Messages Simultaneously: in a circular communication pattern, each
process sends a message to one neighbour and receives a message from the other neighbour. If
MPI_Send is blocking, every process waits for its own send to complete before it can post its receive,
but no send can complete because no receive has been posted. All processes end up waiting
indefinitely, causing a circular deadlock.
Solution: Split the processes into two groups (even and odd ranks) so that one group sends first and
receives second while the other group does the opposite, or use MPI_Sendrecv, which combines the send
and receive operations and ensures no deadlock occurs even in such circular communication patterns.
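A short sketch of the circular pattern using MPI_Sendrecv (the single-integer payload and tag 0 are arbitrary choices): each process sends its own rank to the next process and receives from the previous one, with wraparound.

int npes, myrank, sendval, recvval;
MPI_Status status;

MPI_Comm_size(MPI_COMM_WORLD, &npes);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
sendval = myrank;                                   /* value passed around the ring */
MPI_Sendrecv(&sendval, 1, MPI_INT, (myrank + 1) % npes, 0,
             &recvval, 1, MPI_INT, (myrank - 1 + npes) % npes, 0,
             MPI_COMM_WORLD, &status);              /* send and receive in a single call */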

MPI Program:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int npes, myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    printf("From process %d out of %d, Hello World!\n", myrank, npes);
    MPI_Finalize();
    return 0;
}
Topology and Embedding
• MPI allows a programmer to organize processors into logical k-d meshes.
• The processor ids in MPI_COMM_WORLD can be mapped to other communicators (corresponding
to higher-dimensional meshes) in many ways.
• The goodness of any such mapping is determined by the interaction pattern of the underlying
program and the topology of the machine.
• MPI does not provide the programmer any control over these mappings.

Different ways to map a set of processes to a two-dimensional grid.


(a) and (b) show a row- and column-wise mapping of these processes,
(c) shows a mapping that follows a space-filling curve (dotted line), and
(d) shows a mapping in which neighboring processes are directly connected in a hypercube.

Creating and Using Cartesian Topologies


We can create cartesian topologies using the function:

int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart)

This function takes the processes in the old communicator and creates a new communicator with
ndims dimensions, whose sizes are given by the array dims.
Each process can now be identified in this new cartesian topology by a vector of length ndims (its coordinates).
Since sending and receiving messages still require (one-dimensional) ranks, MPI provides routines
to convert ranks to cartesian coordinates and vice-versa.

Rank to Coordinates: int MPI_Cart_coords(MPI_Comm comm_cart, int rank, int maxdims, int *coords)


Coordinates to Rank: int MPI_Cart_rank(MPI_Comm comm_cart, int *coords, int *rank)

The most common operation on cartesian topologies is a shift, for which MPI provides:

int MPI_Cart_shift(MPI_Comm comm_cart, int direction, int disp, int *rank_source, int *rank_dest)
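
A short sketch tying these routines together (the 2 x 2 grid, the non-periodic dimensions, and the shift along dimension 0 are illustrative choices; it assumes the program is run with four processes):

MPI_Comm cart_comm;
int dims[2] = {2, 2};              /* 2 x 2 process grid */
int periods[2] = {0, 0};           /* non-periodic in both dimensions */
int coords[2], cart_rank, src_rank, dst_rank;

MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart_comm);
MPI_Comm_rank(cart_comm, &cart_rank);
MPI_Cart_coords(cart_comm, cart_rank, 2, coords);      /* rank -> coordinates */
MPI_Cart_rank(cart_comm, coords, &cart_rank);          /* coordinates -> rank */
MPI_Cart_shift(cart_comm, 0, 1, &src_rank, &dst_rank); /* neighbours one step along dimension 0 */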


Overlapping Communication with Computation in MPI
In parallel programming, one common goal is to minimize the time processes spend waiting for each
other. This can be achieved by overlapping communication with computation. MPI (Message Passing
Interface) provides non-blocking functions to help with this.
MPI offers non-blocking versions of send and receive functions:

• MPI_Isend: Initiates a non-blocking send.


int MPI_Isend (void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm,
MPI_Request *request);
• MPI_Irecv: Initiates a non-blocking receive.
int MPI_Irecv (void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm,
MPI_Request *request);
Checking and Waiting for Completion
After initiating non-blocking operations, you can continue with other computations and later check
if the communication has completed:
• MPI_Test: Checks if the non-blocking operation is complete.
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status);
• MPI_Wait: Waits for the non-blocking operation to complete.
int MPI_Wait(MPI_Request *request, MPI_Status *status);
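
A minimal sketch of the overlap (the buffer size, the dummy computation, and the use of MPI_ANY_SOURCE/MPI_ANY_TAG are illustrative; it assumes some other process eventually sends a matching message):

MPI_Request request;
MPI_Status status;
int buf[100], i;
double work = 0.0;

MPI_Irecv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &request);

for (i = 0; i < 1000000; i++)      /* computation that does not touch buf */
    work += i * 0.5;

MPI_Wait(&request, &status);       /* only after this is it safe to read buf */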

Example: Avoiding Deadlocks


Consider again the scenario where process 0 sends two messages to process 1.
Potential Issue: a deadlock can occur if MPI_Send and MPI_Recv are blocking and the order of
sends and receives does not match between the processes.
Solution: Use Non-Blocking Operations:
Non-blocking operations allow a process to perform other tasks while waiting for the message to
be sent or received. (Overlap Communication with Computation)
By avoiding the need to wait for each send/receive operation to complete before starting the next
one, non-blocking operations reduce the risk of deadlocks.
In summary, using MPI_Isend and MPI_Irecv allows processes to overlap communication with
computation, making programs more efficient and less prone to deadlocks. These non-blocking
operations are crucial for optimizing performance in parallel computing.
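
A sketch of the two-message scenario rewritten with non-blocking receives (the tags 1 and 2 and the buffer sizes are illustrative): both receives are posted before waiting, so the order in which the messages arrive no longer matters.

int rank, a[10], b[10];
MPI_Request requests[2];

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) {
    MPI_Send(a, 10, MPI_INT, 1, 1, MPI_COMM_WORLD);
    MPI_Send(b, 10, MPI_INT, 1, 2, MPI_COMM_WORLD);
} else if (rank == 1) {
    MPI_Irecv(b, 10, MPI_INT, 0, 2, MPI_COMM_WORLD, &requests[0]);   /* posted first, matches the second send */
    MPI_Irecv(a, 10, MPI_INT, 0, 1, MPI_COMM_WORLD, &requests[1]);
    MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);                   /* completes regardless of arrival order */
}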
Collective Communication and Computation Operations
• MPI provides an extensive set of functions for performing common collective communication
operations.
• Each of these operations is defined over a group corresponding to the communicator.
• All processes in a communicator must call these operations.
1) The barrier synchronization operation is performed in MPI using:
int MPI_Barrier(MPI_Comm comm)
2) The one-to-all broadcast operation is:
int MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int source, MPI_Comm comm)
3) The all-to-one reduction operation is:
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int
target, MPI_Comm comm)
4) The operation MPI_MAXLOC combines pairs of values (v_i, l_i) and returns the pair (v, l) such that
v is the maximum among all v_i and l is the corresponding l_i (if more than one v_i attains the
maximum, the smallest such l_i is returned).
5) MPI_MINLOC does the same, except that it returns the minimum value of v_i and the corresponding l_i.

6) If the result of the reduction operation is needed by all processes, MPI provides:
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op,
MPI_Comm comm)
7) To compute prefix-sums, MPI provides:
int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op,
MPI_Comm comm)
8) The gather operation is performed in MPI using:
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int
recvcount, MPI_Datatype recvdatatype, int target, MPI_Comm comm)
9) MPI also provides the MPI_Allgather function in which the data are gathered at all the processes.
int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf,
int recvcount, MPI_Datatype recvdatatype, MPI_Comm comm)
10) The corresponding scatter operation is:
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int
recvcount, MPI_Datatype recvdatatype, int source, MPI_Comm comm)
Using this core set of collective operations, a number of programs can be greatly simplified.
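
As a small illustration of how these collectives combine (the value 42, the per-process contribution, and the choice of rank 0 as the root are arbitrary), the sketch below broadcasts a value from process 0 and then sums a per-process contribution back at process 0:

int myrank, value = 0, local, global_sum;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0)
    value = 42;                                     /* arbitrary value chosen at the root */
MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* one-to-all broadcast from rank 0 */

local = myrank * value;                             /* each process computes its contribution */
MPI_Reduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);   /* all-to-one sum at rank 0 */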
