Chapter 5: MPI
• In the non-buffered blocking send, the operation does not return until
the matching receive has been encountered at the receiving process.
• In buffered blocking sends, the sender simply copies the data into the
designated buffer and returns after the copy operation has been
completed. The data is also copied into a buffer at the receiving end.
P0:
for (i = 0; i < 1000; i++) {
    produce_data(&a);
    send(&a, 1, 1);
}

P1:
for (i = 0; i < 1000; i++) {
    receive(&a, 1, 0);
    consume_data(&a);
}
P0:
receive(&a, 1, 1);
send(&b, 1, 1);

P1:
receive(&a, 1, 0);
send(&b, 1, 0);

• With blocking non-buffered operations, this code deadlocks: each
process blocks in its receive waiting for the other to send, so neither
send is ever reached. A deadlock-free alternative is sketched below.
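• One way to avoid such deadlocks in MPI is MPI_Sendrecv, which combines
the send and the receive into a single call that the library orders
safely. A minimal sketch, assuming exactly two processes (the buffer
names a and b mirror the pseudocode above and are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, a, b;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    a = rank;                  /* value this process contributes         */
    int partner = 1 - rank;    /* assumes exactly two processes, 0 and 1 */

    /* Send a to the partner and receive the partner's value into b;
       no deadlock regardless of how the library schedules the two.   */
    MPI_Sendrecv(&a, 1, MPI_INT, partner, 0,
                 &b, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, &status);

    printf("process %d received %d\n", rank, b);
    MPI_Finalize();
    return 0;
}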
Non-Blocking Message Passing Operations
• With non-blocking operations, the programmer must ensure the
semantics of the send and receive: a buffer handed to a non-blocking
send must not be modified, and a buffer handed to a non-blocking
receive must not be read, until the operation has completed.
MPI: the Message Passing Interface
• The MPI standard defines both the syntax as well as the semantics
of a core set of library routines.
• All MPI routines, data-types, and constants are prefixed by “MPI_”. The return
code for successful completion is MPI_SUCCESS.
Communicators
• A communicator defines a communication domain - a set of
processes that are allowed to communicate with each other.
• These routines, datatypes, and constants are made available by
including the header file:
#include <mpi.h>
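• A minimal self-contained MPI program: initialize the library, query
the default communicator MPI_COMM_WORLD for the number of processes
and the calling process's rank, print a message, and shut down. (A
sketch; the printed text is arbitrary.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int npes, myrank;

    MPI_Init(&argc, &argv);                  /* must precede all other MPI calls */
    MPI_Comm_size(MPI_COMM_WORLD, &npes);    /* number of processes              */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* rank of the calling process      */
    printf("From process %d out of %d, Hello World!\n", myrank, npes);
    MPI_Finalize();                          /* must follow all other MPI calls  */
    return 0;
}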
• MPI provides equivalent datatypes for all C datatypes. This is done for portability
reasons.
• The message tag can take values ranging from zero up to the
MPI-defined constant MPI_TAG_UB.
MPI Datatypes
MPI Datatype C Datatype
MPI_CHAR signed char
MPI_SHORT signed short int
MPI_INT signed int
MPI_LONG signed long int
MPI_UNSIGNED_CHAR unsigned char
MPI_UNSIGNED_SHORT unsigned short int
MPI_UNSIGNED unsigned int
MPI_UNSIGNED_LONG unsigned long int
MPI_FLOAT float
MPI_DOUBLE double
MPI_LONG_DOUBLE long double
MPI_BYTE (no corresponding C datatype)
MPI_PACKED (no corresponding C datatype)
Sending and Receiving Messages
• MPI allows specification of wildcard arguments for both source and
tag: MPI_ANY_SOURCE matches a message from any sending process, and
MPI_ANY_TAG matches a message with any tag. The actual source and tag
can then be read from the status returned by the receive, as in the
sketch below.
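• A minimal sketch, assuming two processes: process 0 sends one
integer to process 1, which receives it using both wildcards (the
value 42 and the tag 7 are arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int myrank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        /* Accept a message from any source, with any tag; the actual
           source and tag are recorded in the status object.          */
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        printf("received %d from process %d with tag %d\n",
               value, status.MPI_SOURCE, status.MPI_TAG);
    }
    MPI_Finalize();
    return 0;
}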
Topologies and Embeddings
• How the process ranks of a communicator are mapped to physical
processors is implementation dependent; MPI does not provide the
programmer any control over these mappings.
[Figure omitted] Different ways to map a set of processes to a
two-dimensional grid: (a) and (b) show a row- and column-wise mapping
of the processes, (c) shows a mapping that follows a space-filling
curve (dotted line), and (d) shows a mapping in which neighboring
processes are directly connected in a hypercube.
Creating and Using Cartesian Topologies
• We can create cartesian topologies using the function:
int MPI_Cart_create(MPI_Comm comm_old, int ndims,
int *dims, int *periods, int reorder,
MPI_Comm *comm_cart)
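• A minimal sketch of its use: create a 2-D periodic (torus) topology,
letting MPI_Dims_create pick the grid shape and allowing rank
reordering, then look up the calling process's grid coordinates:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int npes, myrank, coords[2];
    int dims[2] = {0, 0};      /* zeros let MPI_Dims_create choose    */
    int periods[2] = {1, 1};   /* wraparound links in both dimensions */
    MPI_Comm comm_2d;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);

    MPI_Dims_create(npes, 2, dims);    /* factor npes into a 2-D grid */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods,
                    1 /* reorder allowed */, &comm_2d);

    MPI_Comm_rank(comm_2d, &myrank);
    MPI_Cart_coords(comm_2d, myrank, 2, coords);  /* rank -> (row, col) */
    printf("rank %d is at (%d, %d)\n", myrank, coords[0], coords[1]);

    MPI_Comm_free(&comm_2d);
    MPI_Finalize();
    return 0;
}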
Overlapping Communication with Computation
• Non-blocking send and receive operations (MPI_Isend and MPI_Irecv)
return before the communication has been completed. The function
MPI_Test tests whether or not the non-blocking send or receive
operation identified by its request has finished.
int MPI_Test(MPI_Request *request, int *flag,
MPI_Status *status)
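• A minimal sketch, assuming two processes: process 1 posts a
non-blocking receive and polls it with MPI_Test while doing other
work; process 0 posts a non-blocking send and later completes it with
MPI_Wait:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int myrank, value = 0, flag = 0;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 0) {
        value = 17;
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... computation that does not touch value can overlap here ... */
        MPI_Wait(&request, &status);  /* send buffer reusable after this */
    } else if (myrank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        while (!flag) {
            /* ... useful computation overlapped with communication ... */
            MPI_Test(&request, &flag, &status);  /* poll for completion */
        }
        printf("received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}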
Collective Communication Operations
• MPI also provides the MPI_Allgather function, in which the data are
gathered at all the processes.
int MPI_Allgather(void *sendbuf, int sendcount,
MPI_Datatype senddatatype, void *recvbuf,
int recvcount, MPI_Datatype recvdatatype,
MPI_Comm comm)
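• A minimal sketch: every process contributes its own rank, and
afterwards each process holds the ranks of all processes, i.e.,
allranks[i] == i:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int npes, myrank, *allranks;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    allranks = (int *)malloc(npes * sizeof(int));

    /* Each process sends one int (its rank); each receives one int
       from every process, so the receive buffer holds npes entries. */
    MPI_Allgather(&myrank, 1, MPI_INT, allranks, 1, MPI_INT,
                  MPI_COMM_WORLD);

    if (myrank == 0)
        printf("allranks[%d] = %d\n", npes - 1, allranks[npes - 1]);

    free(allranks);
    MPI_Finalize();
    return 0;
}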