MPI Basic Operations
The Message-Passing Interface (MPI) is a standard that allows tasks executing
on multiple processors to communicate through a set of standardized communication primitives. It
defines a standard library for message passing that one can use to develop message-passing
programs in C, C++, or Fortran. The MPI standard defines both the syntax and the semantics
of this functional interface to message passing.
A minimal set of MPI functions is described below. All MPI functions use the prefix MPI_, and
the first letter of the keyword that follows the prefix is capitalized (for example, MPI_Init).
An MPI program initializes the message-passing environment for its processes by calling MPI_Init
and cleanly terminates it by calling MPI_Finalize. The arguments of MPI_Init are pointers to the
command-line arguments; the MPI implementation may remove the arguments it has used or processed.
Thus command-line processing should only be performed in the program after the execution of
this function call. On success these functions return MPI_SUCCESS; otherwise they return an
implementation-dependent error code.
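The skeleton below is a minimal sketch of this structure, assuming nothing beyond what is described above (mpi.h, MPI_Init, MPI_Finalize, and the MPI_SUCCESS return value):
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    /* Initialize MPI before touching argc/argv: the implementation
       may consume the command-line arguments it recognizes. */
    if (MPI_Init(&argc, &argv) != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Init failed\n");
        return EXIT_FAILURE;
    }

    /* ... application-specific command-line processing and work ... */

    MPI_Finalize();   /* no MPI calls are allowed after this point */
    return 0;
}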
MPI: Communicators and Process Control
Under MPI, a communication domain is a set of processors that are allowed to communicate
with each other. Information about such a domain is stored in a communicator, which uniquely
identifies the processors that participate in a communication operation.
The default communication domain contains all the processors of a parallel execution; it is called
MPI_COMM_WORLD. By using separate communicators for possibly overlapping groups of
processors, we make sure that their messages do not interfere with each other.
#include <mpi.h>
int MPI_Comm_size ( MPI_Comm comm, int *size);
int MPI_Comm_rank ( MPI_Comm comm, int *rank);
Thus
MPI_Comm_size ( MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank ( MPI_COMM_WORLD, &pid );
return the number of processors in nprocs and the id of the calling processor in pid.
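As an illustration, the following is a minimal sketch (reusing the nprocs and pid names above) in which every processor reports its rank and the total number of processors:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int nprocs, pid;
    MPI_Init(&argc, &argv);
    /* Query the size of the default communication domain and
       the rank of the calling processor within it. */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    printf("Processor %d of %d\n", pid, nprocs);
    MPI_Finalize();
    return 0;
}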
Example 1: tag
Suppose process A sends two floats, x and then y, to process B, but B wants to receive y first; the message tags let each receive select the intended message:
#include <mpi.h>
/* Fragment: A and B are process ranks; x, y, tag, and status are
   declared elsewhere.  Assume the system provides some buffering. */
if (my_rank == A) {
    tag = 0;
    MPI_Send(&x, 1, MPI_FLOAT, B, tag, MPI_COMM_WORLD);
    ...
    tag = 1;
    MPI_Send(&y, 1, MPI_FLOAT, B, tag, MPI_COMM_WORLD);
} else if (my_rank == B) {
    /* B receives the messages in the opposite order; the tags
       ensure each MPI_Recv matches the intended message. */
    tag = 1;
    MPI_Recv(&y, 1, MPI_FLOAT, A, tag, MPI_COMM_WORLD, &status);
    ...
    tag = 0;
    MPI_Recv(&x, 1, MPI_FLOAT, A, tag, MPI_COMM_WORLD, &status);
}
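MPI matches a receive with a send on the source, the tag, and the communicator, so B's first receive (tag 1) will not accept the earlier message carrying tag 0. The buffering assumption matters because A's first MPI_Send must be able to complete even though B has not yet posted a matching receive.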
Example 2: communicator
For example, suppose now that the user's code is sending a float, x, from process A to process B,
while the library is sending a float, y, from A to B:
/* Assume system provides some buffering */
void User_function(int my_rank, float* x) {
    MPI_Status status;
    if (my_rank == A) {
        /* MPI_COMM_WORLD is pre-defined in MPI */
        MPI_Send(x, 1, MPI_FLOAT, B, 0, MPI_COMM_WORLD);
    } else if (my_rank == B) {
        MPI_Recv(x, 1, MPI_FLOAT, A, 0, MPI_COMM_WORLD, &status);
    }
    ...
}

void Library_function(float* y) {
    MPI_Comm library_comm;
    MPI_Status status;
    int my_rank;
    /* Create a communicator with the same group as
       MPI_COMM_WORLD, but a different context */
    MPI_Comm_dup(MPI_COMM_WORLD, &library_comm);
    /* Get process rank in new communicator */
    MPI_Comm_rank(library_comm, &my_rank);
    if (my_rank == A) {
        MPI_Send(y, 1, MPI_FLOAT, B, 0, library_comm);
    } else if (my_rank == B) {
        MPI_Recv(y, 1, MPI_FLOAT, A, 0, library_comm, &status);
    }
    ...
}
int main(int argc, char* argv[]) {
    ...
    if (my_rank == A) {
        User_function(A, &x);
        ...
        Library_function(&y);
    } else if (my_rank == B) {
        Library_function(&y);
        ...
        User_function(B, &x);
    }
    ...
}
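Because Library_function communicates over library_comm, a duplicate of MPI_COMM_WORLD with its own context, its messages can never be matched by the receives posted in User_function, even though both use the same ranks and the same tag 0. Without the duplicated communicator, process B, which calls the library before the user function, could mistakenly receive the x sent by User_function on process A into the library's y.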
int MPI_Bcast(void * message, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
The routine MPI_Bcast sends data from the process with rank root to all other processes in the communicator comm.
Simple program that demonstrates MPI_Bcast:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[]) {
    int k, id, p, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (id == 0)
        k = 20;
    else
        k = 10;
    for (p = 0; p < size; p++) {
        if (id == p)
            printf("Process %d: k= %d before\n", id, k);
    }
    /* Note: MPI_Bcast is a collective call; every process in the
       communicator must reach it, not just the root. */
    MPI_Bcast(&k, 1, MPI_INT, 0, MPI_COMM_WORLD);
    for (p = 0; p < size; p++) {
        if (id == p)
            printf("Process %d: k= %d after\n", id, k);
    }
    MPI_Finalize();
    return 0;
}
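Before the broadcast only process 0 holds k = 20 (all other processes hold k = 10); after MPI_Bcast returns, every process holds the root's value, so each process reports k = 20. The order in which the lines from different processes appear is not guaranteed, since standard output is not synchronized across processes.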