Point-to-Point Communication
The program in Figure 1.4 is a very simple parallel program. The individual processes
neither exchange data nor coordinate with each other. Point-to-point communication
allows one process to send data directly to another. Data is sent by using routines
such as MPI_Send and is received by using routines such as MPI_Recv (we
mention several specialized forms of both send and receive later).
We illustrate this type of communication in Figure 1.5 with a simple program that
sums contributions from each process. In this program, each process first determines
its rank and initializes the value that it will contribute to the sum. (In this case, the
sum itself is easily computed analytically; this program is used for illustration only.)
Each process then receives the partial sum from the process with rank one higher, adds the
received value to its own contribution, and sends the new value to the process with rank
one lower. The process with rank zero only receives data, and the process with the
largest rank (equal to size−1) only sends data.
The program in Figure 1.5 introduces a number of new points. The most obvious
are the two new MPI routines MPI_Send and MPI_Recv. These have similar
arguments. Each routine uses its first three arguments to specify the data to be sent
or received. The fourth argument specifies the destination (for MPI_Send) or source
(for MPI_Recv) process, by rank. The fifth argument, called a tag, provides a way to
include a single integer with the message; in this case the value is not needed, and zero
is used (the value given by the sender must match the value given by the receiver).
The sixth argument specifies the collection of processes to which the value of rank
is relative; we use MPI_COMM_WORLD, which is the collection of all processes in the
parallel program (determined by the startup mechanism, such as mpiexec in the
“Hello World” example). MPI_Recv takes one additional argument, status, which
contains information about the message that some applications may
need. In this example we do not need that information, but we must still provide the
argument.
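For example, a matched pair of calls might look like the following sketch (assuming
that rank and status have been declared and set as in Figure 1.5, and that the
program runs with at least two processes):

int value = 42;
if (rank == 0)
    /* buffer, count, datatype, destination rank, tag, communicator */
    MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
else if (rank == 1)
    /* buffer, count, datatype, source rank, tag, communicator, status */
    MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );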
The three arguments describing the data to be sent or received are, in order, the
address of the data, the number of items, and the type of the data. Each basic datatype
in the language has a corresponding MPI datatype, as shown in Table 1.1.
MPI also allows the user to define new datatypes that describe noncontiguous
memory, such as the rows of a Fortran array or elements indexed by an integer array
(sometimes called scatter-gather access). The details are beyond the scope of this
chapter, but the following sketch gives the flavor.
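As one illustration (a sketch only; the dimensions N and M are hypothetical), a
datatype describing a single row of an N x M Fortran array of double-precision
values, whose elements are separated by a stride of N in column-major storage,
could be built and used as follows:

const int N = 10, M = 5;   /* hypothetical array dimensions */
MPI_Datatype rowtype;
/* M blocks of 1 element each, successive blocks N elements apart */
MPI_Type_vector( M, 1, N, MPI_DOUBLE, &rowtype );
MPI_Type_commit( &rowtype );
/* rowtype can now be passed as the datatype argument of MPI_Send or MPI_Recv */
MPI_Type_free( &rowtype );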
This program also illustrates an important feature of message-passing programs:
because these are separate, communicating processes, all variables, such as rank
or valOut, are private to each process and may (and often will) contain different
values. That is, each process has its own memory space, and all variables are private
to that process.
#include "mpi.h"
#include <stdio.h>
int main( int argc, char *argv[] )
{
    int size, rank, valIn, valOut;
    MPI_Status status;
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    valOut = rank;  /* this process's contribution; the rank is used here for illustration */
    if (rank < size - 1) {  /* receive the partial sum from the process with rank one higher */
        MPI_Recv( &valIn, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD, &status );
        valOut += valIn;
    }
    if (rank > 0)  /* send the running sum to the process with rank one lower */
        MPI_Send( &valOut, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD );
    else
        printf( "The sum is %d\n", valOut );
    MPI_Finalize( );
    return 0;
}
Fig. 1.5 A simple program that sums contributions from each process
The only way for one process to change or access data in another
process is through the explicit use of MPI routines such as MPI_Send and MPI_Recv.
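For example, if the program is started with four processes (mpiexec -n 4) and each
process contributes its rank, as in the listing above, the rank-zero process prints the
sum 0 + 1 + 2 + 3 = 6.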
MPI provides a number of other ways in which to send and receive messages, in-
cluding nonblocking (sometimes incorrectly called asynchronous) and synchronous
routines. Other routines, such as MPI_Iprobe, can be used to determine whether a
message is available to be received. The nonblocking routines can be important in ap-
plications that have complex communication patterns and that send large messages;
a minimal pattern is sketched below. See [30, Chapter 4] for more details and examples.
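A typical pattern (a sketch only; the variables are assumed to be declared as in
Figure 1.5) posts the receive without blocking, overlaps it with other work, and
then waits for completion:

MPI_Request request;
/* post the receive; this call returns immediately */
MPI_Irecv( &valIn, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD, &request );
/* ... other computation can proceed while the message is in transit ... */
MPI_Wait( &request, &status );  /* complete the receive */
valOut += valIn;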
⁷ One might object that the program in Figure 1.6 doesn’t do exactly what the program in
Figure 1.5 does because, in the latter, all of the intermediate results are computed and available
to those processes. We offer two responses. First, only the value on the rank-zero process
is printed; the others don’t matter. Second, MPI offers the collective routine MPI_Scan to
provide the partial-sum results if that is required.
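In a sketch (with valIn and valOut as in Figure 1.5), a single call suffices; afterward,
valOut on each process holds the sum of the contributions from ranks zero through
its own:

MPI_Scan( &valIn, &valOut, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD );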
#include "mpi.h"
#include <stdio.h>
int main( int argc, char *argv[] )
{
    int rank, valIn, valOut;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    valIn = rank;  /* this process's contribution; the rank is used here for illustration */
    /* a single collective reduction (MPI_Allreduce is assumed here) computes
       the sum of all contributions and returns it on every process */
    MPI_Allreduce( &valIn, &valOut, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD );
    if (rank == 0) printf( "The sum is %d\n", valOut );
    MPI_Finalize( );
    return 0;
}
Fig. 1.6 A program that computes the same sum as Figure 1.5 with a single collective operation
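Besides being shorter, the collective version permits the MPI implementation to use
a more efficient algorithm, such as a tree-based reduction, instead of the linear chain
of sends and receives in Figure 1.5.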
Other Features
The success of MPI created a desire to tackle some of the features absent from the original
MPI (henceforth called MPI-1). The major additions include parallel I/O, the creation
of new processes in a running parallel program, and one-sided (as opposed to point-to-
point) communication. Other important features include bindings for Fortran 90 and