Imp Points

The document discusses MPI communicators and common MPI routines like MPI_Scatter, MPI_Gather, MPI_Bcast, and MPI_Allreduce. It explains that MPI_COMM_WORLD groups all processes together and each process is given a unique rank. It also provides syntax for common collective operations.

Uploaded by

Nakul Amate

MPI_COMM_WORLD : It is a communicator, through which all the processes communicate. It groups all the processes together.

Size : the number of processes in a communicator.

Rank : each process in a communicator is given a unique id, ranging from 0 to size-1.

MPI_Comm_size(MPI_COMM_WORLD, &size) : gives the size of the communicator and stores it in the size variable; in the same way, MPI_Comm_rank(MPI_COMM_WORLD, &rank) stores the calling process's rank.

mpirun -np 4 ./hello : runs the compiled program with 4 processes.
To send information from one process to another:
int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

MPI_Scatter : Distributes chunks of data from the root process to all processes in a communicator; each process, including the root, receives one chunk.
int MPI_Scatter(const void *sendbuf, int sendcount,
MPI_Datatype sendtype, void *recvbuf, int recvcount,
MPI_Datatype recvtype, int root, MPI_Comm comm)

MPI_Bcast : Sends the same piece of information from the root process to all the processes in the same communicator.
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
MPI_Gather : The inverse of MPI_Scatter. It takes elements from many processes and gathers them to one single process: the information from all the processes is collected at the root. This routine is useful in many parallel algorithms, such as parallel sorting and searching.

MPI_Gather(
void* send_data,
int send_count,
MPI_Datatype send_datatype,
void* recv_data,
int recv_count,
MPI_Datatype recv_datatype,
int root,
MPI_Comm communicator)

MPI_Allreduce : Combines values from all processes with a reduction operation (such as MPI_SUM) and distributes the result back to every process.

MPI_Allreduce(
void* send_data,
void* recv_data,
int count,
MPI_Datatype datatype,
MPI_Op op,
MPI_Comm communicator)

As you might have noticed, MPI_Allreduce is identical to MPI_Reduce except that it does not need a root process id, since the result is distributed to all processes.
