MPI Basic Operations

Message Passing Interface (MPI)

The Message-Passing Interface (MPI) is a standard that allows tasks executing
on multiple processors to communicate through a set of standardized communication
primitives. It defines a standard library for message passing that one can use to
develop message-passing programs in C, C++, or Fortran. The MPI standard defines
both the syntax and the semantics of this functional interface to message passing.

(1)Minimum Set of MPI functions

A minimum set of MPI functions is described below. All MPI functions use the prefix
MPI_, and the first keyword after the prefix starts with a capital letter.

(2)MPI: Initialization and Termination

MPI Message-Passing primitives:


#include <mpi.h>
int MPI_Init(int *argc, char ***argv);
int MPI_Finalize(void);

Multiple processes from the same source are created by calling MPI_Init, and these
processes are terminated safely by calling MPI_Finalize. The arguments of MPI_Init are
the command-line arguments minus the ones that were used/processed by the MPI
implementation. Command-line processing should therefore be performed in the program
only after this function call. On success each function returns MPI_SUCCESS; otherwise
an implementation-dependent error code is returned.
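
A minimal sketch of this convention follows (the error-message text is hypothetical;
the check against MPI_SUCCESS itself is standard):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* MPI_Init returns MPI_SUCCESS on success, an error code otherwise */
    if (MPI_Init(&argc, &argv) != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Init failed\n");
        return 1;
    }
    /* ... program body: command-line processing is safe from here on ... */
    MPI_Finalize();
    return 0;
}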
(3)MPI: Communicators and Process Control

Under MPI, a communication domain is a set of processes that are allowed to communicate
with each other. Information about such a domain is stored in a communicator, which
uniquely identifies the processes that participate in a communication operation.
The default communication domain contains all the processes of a parallel execution;
it is called MPI_COMM_WORLD. By using multiple communicators between possibly
overlapping groups of processes we make sure that messages do not interfere with
each other.

MPI Message-Passing primitives

#include <mpi.h>
int MPI_Comm_size ( MPI_Comm comm, int *size);
int MPI_Comm_rank ( MPI_Comm comm, int *rank);

Thus
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &pid);
return the total number of processes nprocs and the rank pid of the calling process.

A "hello world!" program in MPI is shown below.


#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int nprocs, mypid;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &mypid);
    printf("Hello world from process %d of total %d\n", mypid, nprocs);
    MPI_Finalize();
    return 0;
}

(4)MPI Basic Communication Operations

1. MPI Message-Passing primitives

#include <mpi.h>

/* Blocking send and receive */


int MPI_Send(void *buf, int count, MPI_Datatype dtype, int dest, int tag, MPI_Comm comm);
int MPI_Recv(void *buf, int count, MPI_Datatype dtype, int src, int tag, MPI_Comm comm,
MPI_Status *stat);

/* Non-Blocking send and receive */


int MPI_Isend(void *buf, int count, MPI_Datatype dtype, int dest, int tag, MPI_Comm comm,
MPI_Request *req);
int MPI_Irecv(void *buf, int count, MPI_Datatype dtype, int src, int tag, MPI_Comm comm,
MPI_Request *req);
int MPI_Wait(MPI_Request *preq, MPI_Status *stat);

buf - initial address of the send/receive buffer
count - number of elements in the send buffer (nonnegative integer), or maximum
number of elements in the receive buffer
dtype - datatype of each send/receive buffer element (handle)
dest, src - rank of the destination/source (integer)
Wild-card: MPI_ANY_SOURCE for recv only. No wild-card for dest.
tag - message tag (integer). Range 0...32767.
Wild-card: MPI_ANY_TAG for recv only; send must specify a tag.
comm - communicator (handle)
stat - status object, which returns the source and tag of the message that was
actually received; pass MPI_STATUS_IGNORE if the return status is not desired.
Fields of the status struct:
status->MPI_SOURCE,
status->MPI_TAG,
status->MPI_ERROR
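
A minimal sketch of the non-blocking primitives and the status fields follows
(the buffer name and message value are hypothetical; run with at least two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Request req;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Start the send and continue; the buffer must not be
           reused until MPI_Wait completes the request. */
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... computation can overlap with communication here ... */
        MPI_Wait(&req, &stat);
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &stat);
        /* The status fields described above are now filled in */
        printf("Received %d from process %d with tag %d\n",
               value, stat.MPI_SOURCE, stat.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}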

Example 1: tag
Here process A sends x with tag 0 and then y with tag 1, while process B receives
y (tag 1) before x (tag 0); tag matching makes this work, provided the system
buffers the pending message.

/* Assume system provides some buffering */
if (my_rank == A) {
    tag = 0;
    MPI_Send(&x, 1, MPI_FLOAT, B, tag, MPI_COMM_WORLD);
    ...
    tag = 1;
    MPI_Send(&y, 1, MPI_FLOAT, B, tag, MPI_COMM_WORLD);
} else if (my_rank == B) {
    tag = 1;
    MPI_Recv(&y, 1, MPI_FLOAT, A, tag, MPI_COMM_WORLD, &status);
    ...
    tag = 0;
    MPI_Recv(&x, 1, MPI_FLOAT, A, tag, MPI_COMM_WORLD, &status);
}
Example 2: communicator
Suppose the user's code sends a float x from process A to process B, while a library
routine sends a float y from A to B. If both used MPI_COMM_WORLD with the same tag,
the user's receive could match the library's message; duplicating the communicator
gives the library its own communication context:
/* Assume system provides some buffering */
void User_function(int my_rank, float *x)
{
    MPI_Status status;
    if (my_rank == A) {
        /* MPI_COMM_WORLD is pre-defined in MPI */
        MPI_Send(x, 1, MPI_FLOAT, B, 0, MPI_COMM_WORLD);
    } else if (my_rank == B) {
        MPI_Recv(x, 1, MPI_FLOAT, A, 0, MPI_COMM_WORLD, &status);
    }
    ...
}

void Library_function(float *y)
{
    MPI_Comm library_comm;
    MPI_Status status;
    int my_rank;
    /* Create a communicator with the same group as
       MPI_COMM_WORLD, but a different context */
    MPI_Comm_dup(MPI_COMM_WORLD, &library_comm);
    /* Get process rank in the new communicator */
    MPI_Comm_rank(library_comm, &my_rank);
    if (my_rank == A) {
        MPI_Send(y, 1, MPI_FLOAT, B, 0, library_comm);
    } else if (my_rank == B) {
        MPI_Recv(y, 1, MPI_FLOAT, A, 0, library_comm, &status);
    }
    ...
}

int main(int argc, char *argv[])
{
    ...
    if (my_rank == A) {
        User_function(A, &x);
        ...
        Library_function(&y);
    } else if (my_rank == B) {
        Library_function(&y);
        ...
        User_function(B, &x);
    }
    ...
}

2. Data type correspondence between MPI and C

MPI_CHAR           --> signed char
MPI_SHORT          --> signed short int
MPI_INT            --> signed int
MPI_LONG           --> signed long int
MPI_UNSIGNED_CHAR  --> unsigned char
MPI_UNSIGNED_SHORT --> unsigned short int
MPI_UNSIGNED       --> unsigned int
MPI_UNSIGNED_LONG  --> unsigned long int
MPI_FLOAT          --> float
MPI_DOUBLE         --> double
MPI_LONG_DOUBLE    --> long double
MPI_BYTE           --> (no C equivalent: raw bytes)
MPI_PACKED         --> (no C equivalent: data packed with MPI_Pack)
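
To illustrate the correspondence, a minimal sketch (buffer contents are hypothetical)
sending an array of C doubles with the matching MPI_DOUBLE handle:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double values[4] = {1.0, 2.0, 3.0, 4.0};  /* C type: double */
    int rank;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* The MPI datatype must match the C element type */
        MPI_Send(values, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(values, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &stat);
        printf("Process 1 received %.1f ... %.1f\n", values[0], values[3]);
    }

    MPI_Finalize();
    return 0;
}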

3. A simple collective operation:

int MPI_Bcast(void * message, int count, MPI_Datatype datatype, int root, MPI_Comm comm)

The routine MPI_Bcast sends data from the root process to all other processes in
the communicator.

A simple program that demonstrates MPI_Bcast:

#include <mpi.h>

#include <stdio.h>
int main(int argc, char *argv[])
{
    int k, id, p, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (id == 0)
        k = 20;
    else
        k = 10;
    for (p = 0; p < size; p++) {
        if (id == p)
            printf("Process %d: k= %d before\n", id, k);
    }
    /* Note: MPI_Bcast must be placed where all processes
       in the communicator execute it. */
    MPI_Bcast(&k, 1, MPI_INT, 0, MPI_COMM_WORLD);
    for (p = 0; p < size; p++) {
        if (id == p)
            printf("Process %d: k= %d after\n", id, k);
    }
    MPI_Finalize();
    return 0;
}
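
Assuming an MPICH- or Open MPI-style toolchain (the file name bcast.c is
hypothetical; the wrapper commands are not part of the MPI standard itself),
the program can be compiled and launched as:

mpicc bcast.c -o bcast
mpirun -np 4 ./bcast

After the broadcast every process should print k = 20, since process 0's value
is copied to all the others.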
