CS-3006 7 MPI Advanced Topics
Collective Communication
• Broadcast
• Scatter
• Gather
• Reduce
Collective Communications
• Processes may need to communicate with everyone else
• Properties:
– Must be executed by all processes (of the communicator)
– All processes in group call same operation at (roughly)
the same time
– All collective operations are blocking operations
Broadcast
• A one-to-many communication
(Figure: MPI_Bcast with root=1 — before the call, only the root's buffer holds the data; after the call, every process's buffer holds a copy.)
• rank of the sending process (i.e., root process)
• must be given identically by all processes
Collective communication: Broadcast
Broadcasting with MPI_Bcast
Gathering with MPI_Gather
• The root receives data from all processes (from their send buffers)
• It stores the data in its receive buffer, ordered by the ranks of the senders
MPI_Gather - Example
Demo:
Gather.c
MPI_Scatterv
• MPI_Scatterv is a collective routine similar to MPI_Scatter
• It sends variable-sized chunks of an array to the different processes
(Figure: the root's array is split into chunks of different sizes, one for each of Process-0 through Process-3.)
Credits: https://www.cineca.it/
MPI_Scatterv Demo:
ScatterV.c
MPI_Gatherv
• The inverse of MPI_Scatterv: the root collects variable-sized chunks from the processes into one array
MPI_Gatherv Demo:
GatherV.c
Credits: https://medium.com/@jaydesai36/barrier-synchronization-in-threads-3c56f947047
MPI_Barrier
• Blocks each caller until all processes of the communicator have reached the barrier
MPI_Barrier Demo:
Barrier.c
Credits: https://dps.uibk.ac.at
Reduction Operations
• Combine a value from every process into a single result using an operation such as sum, max, or min
Data types:
– Operations are defined only for appropriate data types
MPI_Allreduce
• Like MPI_Reduce, but the combined result is delivered to all processes
Credits: https://dps.uibk.ac.at
Any Questions?