
Introduction to the Message Passing Interface (MPI)
What is MPI?
• MPI stands for Message Passing Interface and is a library specification
for message passing, proposed as a standard by a broadly based
committee of vendors, implementors, and users.
• MPI consists of
1- a header file, mpi.h,
2- a library of routines and functions, and
3- a runtime system.
• MPI is for parallel computers, clusters, and heterogeneous networks.
• MPI is full-featured: the standard defines a large number of routines, though most programs use only a handful of them.
• MPI is designed to provide access to advanced parallel hardware for end users, library writers, and tool developers.
• MPI can be used with C/C++, Fortran, and many other languages.
MPI is an API
MPI is actually just an Application Programming Interface (API). As such, MPI
• specifies what a call to each routine should look like and how each routine should behave,
• does not specify how each routine should be implemented, and
• is sometimes intentionally vague about certain aspects of a routine's behavior.
As a result, implementations are often platform- or vendor-specific, and there are multiple open-source and proprietary implementations.
Example MPI routines
• The following routines are found in nearly every program that uses
MPI:
• MPI_Init() starts the MPI runtime environment.
• MPI_Finalize() shuts down the MPI runtime environment.
• MPI_Comm_size() gets the number of processes, Np.
• MPI_Comm_rank() gets the process ID (rank) of the current process, which is between 0 and Np − 1, inclusive.
• (These last two routines are typically called right after MPI_Init().)
More example MPI routines
Some of the simplest and most common communication routines are:
• MPI_Send() sends a message from the current process to another
process (the destination).
• MPI_Recv() receives a message on the current process from another
process (the source).
• MPI_Bcast() broadcasts a message from one process to all of the
others.
• MPI_Reduce() performs a reduction (e.g. a global sum, maximum, etc.)
of a variable in all processes, with the result ending up in a single
process.
• MPI_Allreduce() performs a reduction of a variable in all processes,
with the result ending up in all processes.
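As a quick illustration (a hypothetical sketch, not from the slides above), the following program uses MPI_Reduce to sum every process's rank, leaving the total on rank 0:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Sum the ranks of all processes; only rank 0 receives the result.
       Using MPI_Allreduce instead would leave the sum on every rank. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of all ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}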
MPI Hello world: hello.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    int number_of_processes;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("hello from process %d of %d\n", rank, number_of_processes);
    MPI_Finalize();
    return 0;
}
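With most MPI implementations the program is compiled and launched along these lines (exact command names and flags vary by implementation and system):

mpicc -o hello hello.c
mpiexec -n 8 ./hello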
MPI Hello world output
Running the program with 8 processes produces output such as:
hello from process 3 of 8
hello from process 0 of 8
hello from process 1 of 8
hello from process 7 of 8
hello from process 2 of 8
hello from process 5 of 8
hello from process 6 of 8
hello from process 4 of 8
Note:
• All MPI processes (normally) run the same executable
• Each MPI process knows which rank it is
• Each MPI process knows how many processes are part of the same job
• The processes run in a non-deterministic order
Communicators
• Recall the MPI initialization sequence:

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

• MPI uses communicators to organize how processes communicate with each other.
• A single communicator, MPI_COMM_WORLD, is created by MPI_Init(), and all the processes running the program have access to it.
• Note that process ranks are relative to a communicator. A program may have multiple communicators; if so, a process may have multiple ranks, one for each communicator it is associated with (see the sketch below).
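For example, MPI_Comm_split can create a new communicator in which a process generally has a different rank than in MPI_COMM_WORLD. This fragment (a sketch not in the original slides; it belongs between MPI_Init() and MPI_Finalize()) splits processes into two groups by rank parity:

int world_rank, sub_rank;
MPI_Comm sub_comm;

MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

/* Processes with the same "color" (here, rank parity) end up in the
   same sub-communicator, each with a new rank within that group. */
MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
MPI_Comm_rank(sub_comm, &sub_rank);

printf("world rank %d has rank %d in its sub-communicator\n",
       world_rank, sub_rank);
MPI_Comm_free(&sub_comm);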
MPI is (usually) SPMD
• Usually MPI is run in SPMD (Single Program, Multiple Data) mode.
• (It is possible to run multiple programs, i.e. MPMD: Multiple Program, Multiple Data.)
• The program can use its rank to determine its role:

const int SERVER_RANK = 0;

if (rank == SERVER_RANK)
{
    /* do server stuff */
}
else
{
    /* do compute node stuff */
}
• As shown here, the rank 0 process often plays the role of server or process coordinator.
A second MPI program: greeting.c
The next several slides show the source code for an MPI program that uses a client-server model.
• When the program starts, it initializes the MPI system then
determines if it is the server process (rank 0) or a client process.
• Each client process will construct a string message and send it to the
server.
• The server will receive and display messages from the clients
one-by-one.
greeting.c: main
#include <stdio.h>
#include <mpi.h>

const int SERVER_RANK = 0;
const int MESSAGE_TAG = 0;

void do_server_work(int number_of_processes);
void do_client_work(int rank);

int main(int argc, char *argv[])
{
    int rank, number_of_processes;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == SERVER_RANK)
        do_server_work(number_of_processes);
    else
        do_client_work(rank);
    MPI_Finalize();
    return 0;
}
greeting.c: server
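A minimal sketch of do_server_work, consistent with the description above; the 100-character buffer and the rank-order receive loop are assumptions, not the original code:

void do_server_work(int number_of_processes)
{
    char message[100];   /* buffer size is an assumption */
    MPI_Status status;
    int source;

    /* Receive and print one greeting from each client, in rank order. */
    for (source = 1; source < number_of_processes; source++)
    {
        MPI_Recv(message, 100, MPI_CHAR, source, MESSAGE_TAG,
                 MPI_COMM_WORLD, &status);
        printf("%s\n", message);
    }
}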
greeting.c: client
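A matching sketch of do_client_work; the message text and buffer size are assumptions, and this version also needs #include <string.h> for strlen:

void do_client_work(int rank)
{
    char message[100];   /* buffer size is an assumption */

    /* Build a greeting and send it, including the trailing '\0',
       to the server process. */
    sprintf(message, "greetings from process %d!", rank);
    MPI_Send(message, strlen(message) + 1, MPI_CHAR, SERVER_RANK,
             MESSAGE_TAG, MPI_COMM_WORLD);
}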
MPI Send()
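For reference, the standard prototype (MPI-3 declares buf as const void *; earlier versions use void *):

int MPI_Send(
    const void *buf,        /* address of the data to send           */
    int count,              /* number of elements to send            */
    MPI_Datatype datatype,  /* element type, e.g. MPI_CHAR, MPI_INT  */
    int dest,               /* rank of the destination process       */
    int tag,                /* user-chosen label for the message     */
    MPI_Comm comm);         /* communicator, e.g. MPI_COMM_WORLD     */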
MPI Recv()
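And the matching receive prototype from the standard:

int MPI_Recv(
    void *buf,              /* buffer in which to place the message  */
    int count,              /* capacity of the buffer, in elements   */
    MPI_Datatype datatype,  /* element type, e.g. MPI_CHAR, MPI_INT  */
    int source,             /* sender's rank, or MPI_ANY_SOURCE      */
    int tag,                /* expected tag, or MPI_ANY_TAG          */
    MPI_Comm comm,          /* communicator, e.g. MPI_COMM_WORLD     */
    MPI_Status *status);    /* holds the actual source, tag, etc.    */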
References
Some material used in creating these slides comes from
• MPI Programming Model: Desert Islands Analogy by Henry Neeman,
University of Oklahoma Supercomputing Center.
• An Introduction to MPI by William Gropp and Ewing Lusk, Argonne
National Laboratory.
