Introduction To The Message Passing Interface (MPI) : Parallel and High Performance Computing
CPS343
Spring 2020
What is MPI?
MPI, the Message Passing Interface, is a standard that specifies a library of routines for passing messages between the processes of a parallel program, typically on distributed-memory systems.
MPI is an API
MPI is an API (application programming interface), not a language: the standard defines routines, constants, and datatypes, with bindings for C and Fortran, and implementations such as MPICH and Open MPI provide them.
Example MPI routines
The following routines are found in nearly every program that uses MPI:
MPI_Init() starts the MPI runtime environment.
MPI_Finalize() shuts down the MPI runtime environment.
MPI_Comm_size() gets the number of processes, Np.
MPI_Comm_rank() gets the process ID of the current process, which is
between 0 and Np − 1, inclusive.
(These last two routines are typically called right after MPI_Init().)
More example MPI routines
MPI Hello world: hello.c
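A minimal version of hello.c consistent with the output on the next slide (the variable names are assumptions):

#include <stdio.h>
#include <mpi.h>

int main ( int argc , char * argv [] )
{
    int rank , number_of_processes ;

    MPI_Init ( &argc , &argv );
    MPI_Comm_size ( MPI_COMM_WORLD , &number_of_processes );
    MPI_Comm_rank ( MPI_COMM_WORLD , &rank );

    printf ( "hello from process %d of %d\n" ,
             rank , number_of_processes );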
    MPI_Finalize ();
    return 0;
}
MPI Hello world output
Running the program with 8 processes produces output such as
hello from process 3 of 8
hello from process 0 of 8
hello from process 1 of 8
hello from process 7 of 8
hello from process 2 of 8
hello from process 5 of 8
hello from process 6 of 8
hello from process 4 of 8
Note:
All MPI processes (normally) run the same executable
Each MPI process knows which rank it is
Each MPI process knows how many processes are part of the same job
The processes run in a non-deterministic order
Communicators
A communicator identifies a group of processes that can exchange messages. MPI_COMM_WORLD is the predefined communicator containing all processes in the job; it is the one used throughout these examples.
MPI is (usually) SPMD
All processes run the same program (Single Program, Multiple Data); each process selects its role at run time by testing its rank:
if ( rank == SERVER_RANK )
{
/* do server stuff */
}
else
{
/* do compute node stuff */
}
As shown here, the rank 0 process often plays the role of server or
process coordinator.
A second MPI program: greeting.c
The next several slides show the source code for an MPI program that
uses a client-server model.
When the program starts, it initializes the MPI system and then
determines whether it is the server process (rank 0) or a client process.
Each client process will construct a string message and send it to the
server.
The server will receive and display messages from the clients
one-by-one.
greeting.c: main
#include <stdio.h>
#include <mpi.h>
const int SERVER_RANK = 0;
const int MESSAGE_TAG = 0;
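/* The prototypes and the start of main() below are a reconstructed
   sketch; only the includes, constants, and the body that follows
   appear on the slide. */
void do_server_work ( int number_of_processes );
void do_client_work ( int rank );

int main ( int argc , char * argv [] )
{
    int rank , number_of_processes ;

    MPI_Init ( &argc , &argv );
    MPI_Comm_size ( MPI_COMM_WORLD , &number_of_processes );
    MPI_Comm_rank ( MPI_COMM_WORLD , &rank );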
    if ( rank == SERVER_RANK )
        do_server_work ( number_of_processes );
    else
        do_client_work ( rank );

    MPI_Finalize ();
    return 0;
}
greeting.c: server
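A minimal sketch of what do_server_work() could look like, consistent with the receive loop revisited on the "Deterministic operation?" slide; the message buffer size is an assumption:

void do_server_work ( int number_of_processes )
{
    char message [100];                 /* buffer size is an assumption */
    int max_message_length = sizeof ( message );
    MPI_Status status;
    int src;

    for ( src = 0; src < number_of_processes ; src++ )
    {
        if ( src != SERVER_RANK )
        {
            MPI_Recv ( message , max_message_length , MPI_CHAR ,
                       src , MESSAGE_TAG , MPI_COMM_WORLD , &status );
            printf ( "Received: %s\n" , message );
        }
    }
}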
greeting.c: client
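A plausible start of do_client_work(), leading into the fragment below (the buffer size is an assumption):

void do_client_work ( int rank )
{
    char message [100];        /* buffer size is an assumption */
    int message_length ;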
    message_length =
        sprintf ( message , "Greetings from process %d" , rank );
    message_length ++;         /* add one for null char */
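    /* Presumably the client then sends its greeting to the server;
       this call is a sketch. */
    MPI_Send ( message , message_length , MPI_CHAR ,
               SERVER_RANK , MESSAGE_TAG , MPI_COMM_WORLD );
}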
Compiling an MPI program
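MPI implementations provide a wrapper compiler that adds the MPI headers and libraries automatically; with most implementations the C wrapper is called mpicc, so compiling greeting.c typically looks like

mpicc -o greeting greeting.c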
Running an MPI program
Note:
the server process (rank 0) does not send a message, but does display
the contents of messages received from the other processes.
mpirun can be used rather than mpiexec.
the arguments to mpiexec vary between MPI implementations.
mpiexec (or mpirun) may not be available.
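For example, a typical way to launch the greeting program with four processes (the exact option names vary by implementation) is

mpiexec -n 4 ./greeting

or, equivalently on many systems,

mpirun -np 4 ./greeting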
Deterministic operation?
You may have noticed that in the four-process case the greeting messages
were printed in order. Does this mean that the order in which the messages
were sent is deterministic? Look again at the loop that carries out the
server’s work:
for ( src = 0; src < number_of_processes ; src++ )
{
    if ( src != SERVER_RANK )
    {
        MPI_Recv ( message , max_message_length , MPI_CHAR ,
                   src , MESSAGE_TAG , MPI_COMM_WORLD , &status );
        printf ( "Received: %s\n" , message );
    }
}
Because each MPI_Recv() names a specific source rank, the server receives (and prints) the messages in rank order regardless of the order in which the clients actually sent them, so the sending order need not be deterministic.
MPI function return values
Most MPI routines return an int error code; the value MPI_SUCCESS indicates that the call completed without error.
Sample MPI error handler
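A minimal sketch of such a handler, which checks a return code and aborts the job on failure (the function name and message format are assumptions):

#include <stdio.h>
#include <mpi.h>

/* abort the job if an MPI call did not return MPI_SUCCESS (sketch) */
void check_mpi_error ( int rc , const char * msg )
{
    if ( rc != MPI_SUCCESS )
    {
        char error_string [ MPI_MAX_ERROR_STRING ];
        int length;
        MPI_Error_string ( rc , error_string , &length );
        fprintf ( stderr , "%s: %s\n" , msg , error_string );
        MPI_Abort ( MPI_COMM_WORLD , rc );
    }
}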
MPI point-to-point communication routines
The two basic point-to-point routines are MPI_Send() and MPI_Recv().
Both have several variants that we’ll mention here and see some of later.
MPI Send()
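The calling sequence for MPI_Send() is
int MPI_Send (
    void * buf ,             /* pointer to send buffer */
    int count ,              /* number of items to send */
    MPI_Datatype datatype ,  /* datatype of buffer elements */
    int dest ,               /* rank of destination process */
    int tag ,                /* message type identifier */
    MPI_Comm comm )          /* MPI communicator to use */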
MPI Recv()
The calling sequence for MPI_Recv() is
int MPI_Recv (
void * buf , /* pointer to receive buffer */
int count , /* maximum number of items to receive */
MPI_Datatype datatype , /* datatype of buffer elements */
int source , /* rank of sending process */
int tag , /* message type identifier */
MPI_Comm comm , /* MPI communicator to use */
MPI_Status * status ) /* MPI status object */
Communication modes
Standard: Locally blocking, meaning that the routine does not return
until the memory holding the message is available to reuse
(in the case of MPI_Send()) or use (in the case of MPI_Recv()).
Buffered: In this mode the user supplies buffer space sufficient to hold
an outgoing or incoming message. The routine MPI_Bsend()
returns as soon as the message is copied into the buffer.
Synchronous: Similar to the standard mode, except MPI_Ssend() will not
return until the matching receive has been posted.
Essentially this is explicit blocking.
Ready: Similar to the standard mode, except that it is an error to
call MPI_Rsend() before the matching receive has been posted.
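As an illustration of the buffered mode, here is a sketch of attaching a user buffer and sending with MPI_Bsend(); the destination rank, tag, and sizes are placeholders:

/* inside an MPI program, after MPI_Init(); dest and tag are placeholders */
char data [100];
int dest = 1 , tag = 0;
int bufsize = sizeof ( data ) + MPI_BSEND_OVERHEAD;  /* room for one message */
void * buffer = malloc ( bufsize );

MPI_Buffer_attach ( buffer , bufsize );    /* hand the buffer space to MPI */
MPI_Bsend ( data , 100 , MPI_CHAR , dest , tag , MPI_COMM_WORLD );
MPI_Buffer_detach ( &buffer , &bufsize );  /* returns once buffered messages are sent */
free ( buffer );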
MPI Datatypes
Here is a list of the most commonly used MPI datatypes. There are
others and users can construct their own datatypes to handle special
situations.
C/C++ datatype MPI datatype
char MPI_CHAR
int MPI_INT
float MPI_FLOAT
double MPI_DOUBLE
MPI Tags
MPI uses tags to identify messages. Why is this necessary? Isn’t just
knowing the source or destination sufficient? Not always: a process may need
to send several different kinds of messages to the same destination, and the
tag lets the receiving process tell them apart.
Three more communication functions
The next slides look at three collective communication routines: MPI_Bcast(), MPI_Reduce(), and MPI_Allreduce().
MPI Bcast()
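MPI_Bcast() sends data from one process (the root) to every process in the communicator; all processes must call it with the same root. Its calling sequence is
int MPI_Bcast (
    void * buffer ,          /* data to send (on root) or receive (elsewhere) */
    int count ,              /* number of items in buffer */
    MPI_Datatype datatype ,  /* datatype of buffer elements */
    int root ,               /* rank of broadcasting process */
    MPI_Comm comm )          /* MPI communicator to use */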
MPI Bcast() example
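A minimal sketch of what such an example might look like; the variable names and the choice of root = 2 follow the output description on the next slide:

int data = rank;      /* each process starts with its own rank */
int root = 2;

MPI_Bcast ( &data , 1 , MPI_INT , root , MPI_COMM_WORLD );
printf ( "process %d has data %d\n" , rank , data );  /* every process now prints 2 */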
MPI Bcast() example output
Of course, the statements could appear in any order. We see that the data
from the process with rank 2 (since root = 2) was broadcast to all
processes.
MPI Reduce()
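MPI_Reduce() combines a value contributed by every process, using an operation such as MPI_SUM or MPI_MAX, and delivers the result to a single root process. Its calling sequence is
int MPI_Reduce (
    void * sendbuf ,         /* data contributed by this process */
    void * recvbuf ,         /* result (significant only on root) */
    int count ,              /* number of items to reduce */
    MPI_Datatype datatype ,  /* datatype of buffer elements */
    MPI_Op op ,              /* reduction operation, e.g. MPI_SUM */
    int root ,               /* rank of process that receives the result */
    MPI_Comm comm )          /* MPI communicator to use */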
MPI Allreduce()
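MPI_Allreduce() performs the same reduction but delivers the result to every process, so there is no root argument:
int MPI_Allreduce (
    void * sendbuf ,         /* data contributed by this process */
    void * recvbuf ,         /* result (on every process) */
    int count ,              /* number of items to reduce */
    MPI_Datatype datatype ,  /* datatype of buffer elements */
    MPI_Op op ,              /* reduction operation, e.g. MPI_SUM */
    MPI_Comm comm )          /* MPI communicator to use */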
MPI Reduce() and MPI Allreduce() example
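A plausible first half of the example; each process contributes its rank, and only the root (assumed here to be rank 0) receives the sum:

int sum;

sum = 0;
MPI_Reduce ( & rank , & sum , 1 , MPI_INT ,
             MPI_SUM , 0 , MPI_COMM_WORLD );
printf ( "Reduce    : process %d has %3d\n" ,
         rank , sum );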
sum = 0;
MPI_Allreduce ( & rank , & sum , 1 , MPI_INT ,
                MPI_SUM , MPI_COMM_WORLD );
printf ( "Allreduce : process %d has %3d\n" ,
         rank , sum );
MPI Reduce() and MPI Allreduce() example output
Acknowledgements