Distributed Systems and Cloud Computing
Lecture 2
Message Passing Interface - Part 2
Message Passing Interface
Core MPI Functions
Most MPI programs can be written using just these six core functions:
● MPI_Init
○ int MPI_Init(int *argc, char ***argv)
○ initialize the MPI library (must be the first routine called)
● MPI_Comm_size
○ int MPI_Comm_size(MPI_Comm comm, int *size)
○ get the size of a communicator
● MPI_Comm_rank
○ int MPI_Comm_rank(MPI_Comm comm, int *rank)
○ get the rank of the calling process in the communicator
● MPI_Send
○ int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
○ send a message to another process
● MPI_Recv
○ int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
○ receive a message from another process
● MPI_Finalize
○ int MPI_Finalize(void)
○ clean up all MPI state (must be the last MPI function called by a process)
MPI Example
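For illustration, here is a minimal sketch that uses all six core functions: every process other than rank 0 sends its rank to process 0, which receives and prints each value in turn.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);               /* must be the first MPI call */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */

    if (rank != 0) {
        /* every non-root process sends its rank to process 0 */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        /* process 0 receives one message from each other process */
        for (int i = 1; i < size; i++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Received rank %d\n", value);
        }
    }

    MPI_Finalize();                       /* must be the last MPI call */
    return 0;
}

Such a program is typically compiled with mpicc and launched with, for example, mpirun -np 4 ./a.out (launcher names vary by MPI implementation).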
MPI Communication Functions
Synchronization (Collective Communication)
Processes wait until all members of the group have reached the synchronization point.
The function used to do this is:
int MPI_Barrier(MPI_Comm comm)
This causes each process, on reaching the MPI_Barrier call, to block until all tasks in the group have reached the same MPI_Barrier call.
Synchronization Example
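A sketch of a barrier in action, assuming a POSIX system for sleep(): each process is artificially staggered, and no process prints its "passed" line until every process has reached the barrier (the exact interleaving of output still depends on buffering).

#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    sleep(rank);  /* stagger arrival to make the barrier's effect visible */
    printf("Process %d reached the barrier\n", rank);

    MPI_Barrier(MPI_COMM_WORLD);  /* block until all processes arrive */

    printf("Process %d passed the barrier\n", rank);

    MPI_Finalize();
    return 0;
}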
Reductions (Collective Computation)
A reduction combines data from all members of the group with an operation (min, max, sum, product, etc.) and delivers the result to one member, the root.
The function to do this is:
int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
The op parameter selects the operation to be performed; predefined operations include MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, etc.
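A minimal sketch of a reduction: every process contributes its own rank, and process 0 (the root) receives the sum.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each process sends its rank; MPI_SUM combines the contributions */
    int local = rank, sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* with n processes the result is 0 + 1 + ... + (n-1) = n(n-1)/2 */
        printf("Sum of all ranks = %d\n", sum);
    }

    MPI_Finalize();
    return 0;
}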