Week12 - L01 and L02
Distributed Computing
Lecture #19,20
Agenda
Odd-Even Sort
Sequential formulation
Parallel formulation
Computational and communication complexity
Collective communication operations in MPI
MPI_Barrier
MPI_Bcast
MPI_Reduce
Predefined Reduction Operations
MPI_Allreduce
MPI_Scan
MPI_Gatherv, MPI_Allgather and MPI_Scatter
MPI_Alltoall
Parallel Odd-Even Sort
* https://www.slideshare.net/richakumari37266/parallel-sorting-algorithm
Collective Communication and Computation Operations
Prefix (scan) operation
Recall 4.3 for prefix-sum: after the operation, every process has the
sum of the buffers of all the previous processes and its own.
MPI_Scan() is the MPI primitive for prefix operations.
All the operators that can be used for reduction can also be used for the
scan operation.
If sendbuf is an array of elements, then recvbuf is also an array,
containing the element-wise prefix at each position.
int MPI_Scan(void *sendbuf, void *recvbuf, int count,
             MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
Program example: scan.c
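A minimal sketch of what scan.c might contain (the actual file is not
reproduced on the slide; the contributed values and variable names are
illustrative assumptions):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value, prefix_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes one value; here simply rank + 1. */
    value = rank + 1;

    /* After MPI_Scan, prefix_sum on process i holds
       value_0 + value_1 + ... + value_i (inclusive prefix sum). */
    MPI_Scan(&value, &prefix_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("Process %d: value = %d, prefix sum = %d\n",
           rank, value, prefix_sum);

    MPI_Finalize();
    return 0;
}

Run with 4 processes, the printed prefix sums are 1, 3, 6 and 10.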
Collective Communication and Computation Operations
Process      P0   P1      P2       P3
Data         32   12,15   4,9,14   20,23,27,31
recvcounts   1    2       3        4
displs       0    0+1=1   1+2=3    3+3=6
Here every process that receives the gathered data (only the root in
MPI_Gatherv; all processes in MPI_Allgatherv) must supply valid,
correctly calculated recvcounts and displs arrays.
Furthermore, each such process must also provide a recvbuf [an array] of
sufficient size to store the elements of all the processes.
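A sketch (an illustration, not taken from the slides) that reproduces the
table above with MPI_Gatherv; it assumes exactly 4 processes:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    /* Per-process send data from the table (illustrative values). */
    int data[4][4] = { {32}, {12, 15}, {4, 9, 14}, {20, 23, 27, 31} };
    int recvcounts[4] = {1, 2, 3, 4};
    int displs[4]     = {0, 1, 3, 6}; /* displs[i] = displs[i-1] + recvcounts[i-1] */
    int recvbuf[10];                  /* large enough for all 10 elements */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Process i sends i+1 elements; the root (rank 0) places each
       process's chunk at the offset given by displs. */
    MPI_Gatherv(data[rank], rank + 1, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT,
                0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < 10; i++)
            printf("%d ", recvbuf[i]); /* 32 12 15 4 9 14 20 23 27 31 */
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}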
Collective Communication and Computation Operations
MPI_Scatter
Scatters the data stored in sendbuf of the source (root) process among
all the processes, as discussed in ch#4.
MPI_Scatterv
Here sendcounts is an array of size P whose ith entry gives the number
of elements to be sent to the ith process.
displs[i] indicates the index in sendbuf from which the sendcounts[i]
values are to be sent to the ith process.
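A sketch (illustrative, not from the slides) of MPI_Scatterv using the
same counts and displacements as the gather example; it assumes exactly
4 processes:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    int sendbuf[10]   = {32, 12, 15, 4, 9, 14, 20, 23, 27, 31};
    int sendcounts[4] = {1, 2, 3, 4}; /* elements destined for P0..P3 */
    int displs[4]     = {0, 1, 3, 6}; /* start of each chunk in sendbuf */
    int recvbuf[4];                   /* big enough for the largest chunk */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* sendbuf, sendcounts and displs are significant only at the root;
       each process i receives i+1 elements. */
    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                 recvbuf, rank + 1, MPI_INT,
                 0, MPI_COMM_WORLD);

    printf("Process %d received %d element(s), first = %d\n",
           rank, rank + 1, recvbuf[0]);

    MPI_Finalize();
    return 0;
}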