04_a (Distributed Memory Programming with MPI)
Dr. Muhammad Naveed Akhtar
Lecture – 04a
Distributed Memory Programming with MPI
• MPI_Finalize
• Tells MPI that we are done, so it can clean up anything allocated for this program.
• MPI_Comm_size — reports the number of processes in the communicator.
• MPI_Comm_rank — reports my rank (the rank of the process making this call).
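A minimal sketch of how these pieces fit together (file and variable names are illustrative choices, not from the slides): an MPI program brackets its work with MPI_Init and MPI_Finalize, and typically begins by asking the communicator for its size and for the caller's rank.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int comm_sz;   /* number of processes in the communicator            */
    int my_rank;   /* my rank (the rank of the process making the call)  */

    MPI_Init(&argc, &argv);                    /* set up MPI               */
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    printf("Hello from process %d of %d\n", my_rank, comm_sz);

    MPI_Finalize();                            /* clean up MPI allocations */
    return 0;
}

Compile with mpicc and launch with, for example, mpiexec -n 4 ./a.out.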
• A message sent by rank q is received by rank r (q and r are specific rank numbers) only when all of the following hold (see the sketch after this list):
• recv_comm = send_comm
• recv_tag = send_tag
• send_type = recv_type
• recv_buf_sz ≥ send_buf_sz
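A sketch of a matching send/receive pair under these rules (the buffer sizes, the tag value, and the ranks q = 1, r = 0 are assumptions made for the example): the receive names the same communicator, tag, and datatype as the send, and supplies a buffer at least as large as the message.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int my_rank;
    double send_buf[4] = {1.0, 2.0, 3.0, 4.0};
    double recv_buf[8];                  /* recv_buf_sz >= send_buf_sz   */
    const int tag = 0;                   /* recv_tag must equal send_tag */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 1) {                  /* sender, rank q = 1           */
        MPI_Send(send_buf, 4, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD);
    } else if (my_rank == 0) {           /* receiver, rank r = 0         */
        MPI_Recv(recv_buf, 8, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);     /* same comm, tag, and type     */
        printf("received %.1f ... %.1f\n", recv_buf[0], recv_buf[3]);
    }

    MPI_Finalize();
    return 0;
}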
• The receiver can examine the MPI_Status argument for details about the message it actually received:
• status.MPI_SOURCE
• status.MPI_TAG
• status.MPI_ERROR
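These fields matter mainly when the receiver uses the wildcards MPI_ANY_SOURCE or MPI_ANY_TAG and therefore does not know in advance who sent the message or with which tag; MPI_Get_count then recovers how many elements arrived. A hedged sketch (tag 42, a 3-element message, and the buffer size are arbitrary choices for illustration):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int my_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 1) {
        double msg[3] = {1.0, 2.0, 3.0};
        MPI_Send(msg, 3, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD);
    } else if (my_rank == 0) {
        double buf[100];
        int count;
        MPI_Status status;
        /* wildcards: accept a message from any sender, with any tag */
        MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &count);  /* elements received */
        printf("source = %d, tag = %d, count = %d\n",
               status.MPI_SOURCE, status.MPI_TAG, count);
    }

    MPI_Finalize();
    return 0;
}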
[Table: Scenario 1 vs. Scenario 2 — the two orderings of the MPI_Reduce calls discussed below]
• Suppose that each process stores a = 1 and c = 2, that each process calls MPI_Reduce twice with operator MPI_SUM and destination process 0, and that process 1 makes its two calls in the opposite order from processes 0 and 2.
• At first glance, it might seem that after the two calls to MPI_Reduce, the value of b will be 3, and
the value of d will be 6.
• However, the names of the memory locations are irrelevant to the matching of the calls to MPI_Reduce.
• The order of the calls determines the matching: the first reduction combines whatever each process passes in its first call (1, 2, and 1), so the value stored in b will be 1 + 2 + 1 = 4, and the second reduction combines 2, 1, and 2, so the value stored in d will be 2 + 1 + 2 = 5 (as sketched in the code below).
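A sketch of the scenario just described (assuming three processes, a = 1 and c = 2 on every process, and process 1 issuing its two calls in the opposite order from processes 0 and 2):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int my_rank;
    int a = 1, c = 2;
    int b = 0, d = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 1) {
        /* process 1: calls with c first, then a */
        MPI_Reduce(&c, &d, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Reduce(&a, &b, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    } else {
        /* processes 0 and 2: call with a first, then c */
        MPI_Reduce(&a, &b, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Reduce(&c, &d, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    }

    if (my_rank == 0)
        printf("b = %d, d = %d\n", b, d);   /* b = 1+2+1 = 4, d = 2+1+2 = 5 */

    MPI_Finalize();
    return 0;
}

Run with mpiexec -n 3, process 0 prints b = 4 and d = 5, because the reductions match by call order, not by variable name.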
[Code figures: serial program, function prototype, and function definitions; example run on 3 processors]