BSP Design Strategy
CPS343
Spring 2018
Acknowledgements
What is MPI?
MPI, the Message Passing Interface, is a standard that defines a library of routines for passing messages between the processes of a parallel program.
MPI is an API
MPI specifies an application programming interface (the routines, their arguments, and their behavior) rather than a particular implementation; widely used implementations include Open MPI and MPICH.
Example MPI routines
The following routines are found in nearly every program that uses MPI:
MPI_Init() starts the MPI runtime environment.
MPI_Finalize() shuts down the MPI runtime environment.
MPI_Comm_size() gets the number of processes, Np .
MPI_Comm_rank() gets the rank (process ID) of the current process, which is
between 0 and Np − 1, inclusive.
(These last two routines are typically called right after MPI_Init().)
More example MPI routines
Other routines found in many MPI programs, all introduced later in these slides, include MPI_Send(), MPI_Recv(), MPI_Bcast(), MPI_Reduce(), and MPI_Allreduce().
MPI Hello world: hello.c
    MPI_Finalize();
    return 0;
}
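Only the tail of the listing survives above. A complete hello.c consistent with the output shown on the next slide might look like the following sketch (the structure is standard MPI boilerplate; the exact variable names in the original are not known):

#include <stdio.h>
#include <mpi.h>

int main( int argc, char* argv[] )
{
    int rank, number_of_processes;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    printf( "hello from process %d of %d\n", rank, number_of_processes );

    MPI_Finalize();
    return 0;
}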
MPI Hello world output
Running the program produces the output
hello from process 3 of 8
hello from process 0 of 8
hello from process 1 of 8
hello from process 7 of 8
hello from process 2 of 8
hello from process 5 of 8
hello from process 6 of 8
hello from process 4 of 8
Note:
All MPI processes (normally) run the same executable
Each MPI process knows which rank it is
Each MPI process knows how many processes are part of the same job
The processes run in a non-deterministic order
Communicators
A communicator identifies a group of MPI processes that may exchange messages with one another. The predefined communicator MPI_COMM_WORLD contains every process in the job and is the only communicator used in these examples.
MPI is (usually) SPMD
MPI programs usually follow the single program, multiple data (SPMD) model: every process runs the same executable, and each process examines its rank to decide what work to do:
if ( rank == SERVER_RANK )
{
/* do server stuff */
}
else
{
/* do compute node stuff */
}
As shown here, the rank 0 process often plays the role of server or
process coordinator.
A second MPI program: greeting.c
The next several slides show the source code for an MPI program that
works on a client-server model.
When the program starts, it initializes the MPI system and then
determines whether it is the server process (rank 0) or a client process.
Each client process will construct a string message and send it to the
server.
The server will receive and display messages from the clients
one-by-one.
greeting.c: main
#include <stdio.h>
#include <mpi.h>

const int SERVER_RANK = 0;
const int MESSAGE_TAG = 0;

void do_server_work( int number_of_processes );  /* defined on a later slide */
void do_client_work( int rank );                 /* defined on a later slide */

int main( int argc, char* argv[] )
{
    int rank, number_of_processes;
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    if ( rank == SERVER_RANK )
        do_server_work( number_of_processes );
    else
        do_client_work( rank );
    MPI_Finalize();
    return 0;
}
greeting.c: server
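The server routine is not listed here; a sketch consistent with the receive loop examined later on the "Deterministic operation?" slide would look roughly like this (the buffer size and local variable names are assumptions):

void do_server_work( int number_of_processes )
{
    char message[256];                              /* assumed buffer size */
    const int max_message_length = sizeof( message );
    MPI_Status status;
    int src;

    /* receive one greeting from every client, in rank order */
    for ( src = 0; src < number_of_processes; src++ )
    {
        if ( src != SERVER_RANK )
        {
            MPI_Recv( message, max_message_length, MPI_CHAR,
                      src, MESSAGE_TAG, MPI_COMM_WORLD, &status );
            printf( "Received: %s\n", message );
        }
    }
}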
greeting.c: client
message_length =
    sprintf( message, "Greetings from process %d", rank );
message_length++;    /* add one for null char */
/* send the greeting to the server (call restored to match the server's MPI_Recv) */
MPI_Send( message, message_length, MPI_CHAR,
          SERVER_RANK, MESSAGE_TAG, MPI_COMM_WORLD );
Compiling an MPI program
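MPI implementations typically provide a compiler wrapper, usually named mpicc for C (mpicxx or mpic++ for C++), that adds the required include paths and libraries. A typical command (the output name here is just an example):

mpicc -o greeting greeting.c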
Running an MPI program
Note:
the server process (rank 0) does not send a message, but does display
the contents of messages received from the other processes.
mpirun can be used rather than mpiexec.
the arguments to mpiexec vary between MPI implementations.
mpiexec (or mpirun) may not be available.
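A typical launch command, assuming four processes and an executable named greeting (both choices are just examples; as noted above, the exact arguments vary by implementation):

mpiexec -n 4 ./greeting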
Deterministic operation?
You may have noticed that in the four-process case the greeting messages
were printed out in-order. Does this mean that the order the messages
were sent is deterministic? Look again at the loop that carries out the
server’s work:
for ( src = 0; src < number_of_processes; src++ )
{
    if ( src != SERVER_RANK )
    {
        MPI_Recv( message, max_message_length, MPI_CHAR,
                  src, MESSAGE_TAG, MPI_COMM_WORLD, &status );
        printf( "Received: %s\n", message );
    }
}
MPI function return values
Nearly every MPI function returns an int result code: MPI_SUCCESS if the call succeeded, or an implementation-defined error code otherwise. (By default most implementations abort the job on error, so these codes matter mainly when the error handler is changed to MPI_ERRORS_RETURN.)
Sample MPI error handler
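The original listing is not reproduced here; one reasonable error-checking helper, sketched under the assumption that the caller passes the return code of an MPI call plus a short description, is:

#include <stdio.h>
#include <mpi.h>

/* Print the MPI error message associated with rc and abort the job
   if rc indicates failure.  (Hypothetical helper, not from the slides.) */
void check_mpi_error( int rc, const char* context )
{
    if ( rc != MPI_SUCCESS )
    {
        char error_string[MPI_MAX_ERROR_STRING];
        int  length;

        MPI_Error_string( rc, error_string, &length );
        fprintf( stderr, "%s: %s\n", context, error_string );
        MPI_Abort( MPI_COMM_WORLD, rc );
    }
}

To actually receive error codes instead of having the runtime abort immediately, the communicator's error handler must be set to MPI_ERRORS_RETURN, e.g. with MPI_Comm_set_errhandler( MPI_COMM_WORLD, MPI_ERRORS_RETURN ).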
MPI Send()
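The calling sequence, shown here in the same style as the MPI_Recv() listing on the next slide (the listing is a reconstruction, but the signature is standard):

int MPI_Send(
    void*        buf,       /* pointer to send buffer         */
    int          count,     /* number of items to send        */
    MPI_Datatype datatype,  /* datatype of buffer elements    */
    int          dest,      /* rank of destination process    */
    int          tag,       /* message type identifier        */
    MPI_Comm     comm )     /* MPI communicator to use        */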
MPI Recv()
The calling sequence for MPI_Recv() is
int MPI_Recv(
    void*        buf,       /* pointer to receive buffer          */
    int          count,     /* maximum number of items to receive */
    MPI_Datatype datatype,  /* datatype of buffer elements        */
    int          source,    /* rank of sending process            */
    int          tag,       /* message type identifier            */
    MPI_Comm     comm,      /* MPI communicator to use            */
    MPI_Status*  status )   /* MPI status object                  */
Here is a list of the most commonly used MPI datatypes. There are
others and users can construct their own datatypes to handle special
situations.
C/C++ datatype    MPI datatype
char              MPI_CHAR
int               MPI_INT
float             MPI_FLOAT
double            MPI_DOUBLE
MPI Tags
MPI uses tags to identify messages. Why is this necessary? Isn't just
knowing the source or destination sufficient? It is not: a pair of processes
may exchange several different kinds of messages, and the tag lets the
receiver select, and keep separate, the messages it expects.
Three more communication functions
The remaining slides introduce three collective communication functions: MPI_Bcast(), MPI_Reduce(), and MPI_Allreduce().
MPI Bcast()
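MPI_Bcast() sends data from one process (the root) to every process in a communicator. Its standard calling sequence, shown in the same style as the other listings (the comments are mine):

int MPI_Bcast(
    void*        buffer,    /* send buffer on root, receive buffer elsewhere */
    int          count,     /* number of items to broadcast                  */
    MPI_Datatype datatype,  /* datatype of buffer elements                   */
    int          root,      /* rank of broadcasting (root) process           */
    MPI_Comm     comm )     /* MPI communicator to use                       */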
MPI Bcast() example
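The example itself is not reproduced here; a sketch consistent with the output described on the next slide (root = 2, each process initially holding its own data) is:

#include <stdio.h>
#include <mpi.h>

int main( int argc, char* argv[] )
{
    int rank, value;
    const int root = 2;                /* matches "root = 2" on the output slide */

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    value = 100 * rank;                /* assumed: each process starts with its own value */
    MPI_Bcast( &value, 1, MPI_INT, root, MPI_COMM_WORLD );

    /* after the broadcast every process holds the root's value */
    printf( "process %d has value %d\n", rank, value );

    MPI_Finalize();
    return 0;
}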
MPI Bcast() example output
Of course, the statements could appear in any order. We see that the data
from the process with rank 2 (since root = 2) was broadcast to all
processes.
MPI Reduce()
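MPI_Reduce() combines a value from every process, using an operation such as MPI_SUM, and leaves the result on a single root process. Its standard calling sequence:

int MPI_Reduce(
    void*        sendbuf,   /* data contributed by this process      */
    void*        recvbuf,   /* result (significant only on root)     */
    int          count,     /* number of items per process           */
    MPI_Datatype datatype,  /* datatype of buffer elements           */
    MPI_Op       op,        /* reduction operation, e.g. MPI_SUM     */
    int          root,      /* rank of process that gets the result  */
    MPI_Comm     comm )     /* MPI communicator to use               */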
MPI Allreduce()
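MPI_Allreduce() performs the same reduction as MPI_Reduce() but delivers the result to every process, so there is no root argument:

int MPI_Allreduce(
    void*        sendbuf,   /* data contributed by this process    */
    void*        recvbuf,   /* result (available on every process) */
    int          count,     /* number of items per process         */
    MPI_Datatype datatype,  /* datatype of buffer elements         */
    MPI_Op       op,        /* reduction operation, e.g. MPI_SUM   */
    MPI_Comm     comm )     /* MPI communicator to use             */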
MPI Reduce() and MPI Allreduce() example
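Only the MPI_Allreduce() part of the example appears below; the MPI_Reduce() part presumably mirrors it, with each process contributing its rank and only the root receiving the sum (the choice of root rank 0 here is an assumption):

sum = 0;
MPI_Reduce( &rank, &sum, 1, MPI_INT,
            MPI_SUM, 0, MPI_COMM_WORLD );
printf( "Reduce   : process %d has %3d\n", rank, sum );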
sum = 0;
MPI_Allreduce( &rank, &sum, 1, MPI_INT,
               MPI_SUM, MPI_COMM_WORLD );
printf( "Allreduce: process %d has %3d\n", rank, sum );
MPI Reduce() and MPI Allreduce() example output