Assignment Individual - 1 ParallelProg
STUDENT DECLARATION
I declare that this material, which I now submit for assessment, is entirely my own work and has
not been taken from the work of others, save and to the extent that such work has been cited and
acknowledged within the text of my work.
I understand that plagiarism, collusion, and copying are grave and serious offences in the university
and accept the penalties that would be imposed should I engage in plagiarism, collusion or copying.
I have read and understood the Assignment Regulations set out in the assignment documentation.
I have identified and included the source of all facts, ideas, opinions, and viewpoints of others in
the assignment references. Direct quotations from books, journal articles, internet sources, module
text, or any other source whatsoever are acknowledged and the source cited are identified in the
assignment references.
This assignment, or any part of it, has not been previously submitted by me or any other person for
assessment on this or any other course of study.
Question 1
Answer:
a) Block Distribution

Process   Elements
   0      0  1  2  12
   1      3  4  5  13
   2      6  7  8
   3      9 10 11
b) Cyclic Distribution

Process   Elements
   0      0  4  8  12
   1      1  5  9  13
   2      2  6 10
   3      3  7 11
Question 2
b) Draw a diagram that shows how MPI_Gather can be implemented using tree-structured
communication when an n-element array that has been distributed among comm_sz
processes needs to be gathered onto process 0.
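Answer:
A text rendering of one possible tree, assuming comm_sz = 8 and n/comm_sz elements per
process (at each step the lower-ranked partner receives and the sender drops out):

Step 1:  1 -> 0    3 -> 2    5 -> 4    7 -> 6
Step 2:  2 -> 0    6 -> 4
Step 3:  4 -> 0

After log2(comm_sz) steps, process 0 holds the full n-element array. A minimal C sketch of
the same scheme, assuming comm_sz is a power of two and n is divisible by comm_sz
(tree_gather, local, and local_n are illustrative names, not part of MPI):

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

void tree_gather(int *local, int local_n, int *result, int rank, int comm_sz) {
    int have = local_n;  /* number of ints currently held */
    int *buf = malloc(local_n * comm_sz * sizeof(int));
    memcpy(buf, local, local_n * sizeof(int));

    for (int step = 1; step < comm_sz; step *= 2) {
        if (rank % (2 * step) != 0) {
            /* Higher-ranked partner at this level: ship everything
               accumulated so far to the partner and drop out. */
            MPI_Send(buf, have, MPI_INT, rank - step, 0, MPI_COMM_WORLD);
            break;
        } else {
            /* Lower-ranked partner: append the sender's accumulated blocks,
               which at this level always amount to step * local_n ints. */
            MPI_Recv(buf + have, step * local_n, MPI_INT, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            have += step * local_n;
        }
    }
    if (rank == 0)
        memcpy(result, buf, local_n * comm_sz * sizeof(int));
    free(buf);
}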
Question 3
The following are some collective communication functions available in MPICH. Briefly explain their
use with an example.
MPI_Alltoall(), MPI_Scatterv(), MPI_Gatherv(), MPI_Alltoallv(), MPI_Allgatherv()
Answer:
a. MPI_Alltoall()
- Sends a distinct block of data from every process to every other process.
- E.g. suppose there are four processes, each holding an 8-element array u. After the
all-to-all operation
MPI_Alltoall(u, 2, MPI_INT, v, 2, MPI_INT, MPI_COMM_WORLD);
the data are distributed as shown below in the array v:
Rank   array u                    array v
 0     10 11 12 13 14 15 16 17    10 11 20 21 30 31 40 41
 1     20 21 22 23 24 25 26 27    12 13 22 23 32 33 42 43
 2     30 31 32 33 34 35 36 37    14 15 24 25 34 35 44 45
 3     40 41 42 43 44 45 46 47    16 17 26 27 36 37 46 47
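A runnable sketch of this example, assuming exactly four processes; the fill pattern
(rank + 1) * 10 + i just reproduces the values in the table:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, u[8], v[8];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < 8; i++)
        u[i] = (rank + 1) * 10 + i;  /* rank 0: 10..17, rank 1: 20..27, ... */

    /* Each process sends 2 ints to every process and receives 2 from each. */
    MPI_Alltoall(u, 2, MPI_INT, v, 2, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d: v =", rank);
    for (int i = 0; i < 8; i++)
        printf(" %d", v[i]);
    printf("\n");

    MPI_Finalize();
    return 0;
}

Running with mpiexec -n 4 prints the rows of the array v column above, one per rank.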
b. MPI_Scatterv()
- Scatters a buffer in parts to all tasks in a group; the parts may have different sizes
and displacements.
- Example (a sketch follows below):
o The root process scatters sets of 100 ints to the other processes, but the sets of 100
are stride ints apart in the sending buffer. This requires MPI_Scatterv. Assume
stride >= 100.
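A minimal C sketch of this example, assuming an illustrative stride of 120 (any value
>= 100 works):

#include <mpi.h>
#include <stdlib.h>

#define STRIDE 120  /* illustrative; must be >= 100 */

int main(int argc, char *argv[]) {
    int rank, comm_sz, rbuf[100];
    int *sendbuf = NULL, *sendcounts = NULL, *displs = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    if (rank == 0) {  /* counts and displacements are only significant at root */
        sendbuf = malloc(comm_sz * STRIDE * sizeof(int));
        sendcounts = malloc(comm_sz * sizeof(int));
        displs = malloc(comm_sz * sizeof(int));
        for (int i = 0; i < comm_sz * STRIDE; i++)
            sendbuf[i] = i;          /* some data to scatter */
        for (int i = 0; i < comm_sz; i++) {
            sendcounts[i] = 100;     /* every process gets 100 ints */
            displs[i] = i * STRIDE;  /* blocks sit STRIDE ints apart */
        }
    }

    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                 rbuf, 100, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}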
c. MPI_Gatherv()
- Gathers data into specified locations from all processes in a group.
- Example (a sketch follows below):
o Each process sends 100 ints to the root, but each set of 100 is placed stride ints
apart at the receiving end. Use MPI_Gatherv and the displs argument to achieve this
effect. Assume stride >= 100.
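A minimal C sketch of this example, again assuming an illustrative stride of 120:

#include <mpi.h>
#include <stdlib.h>

#define STRIDE 120  /* illustrative; must be >= 100 */

int main(int argc, char *argv[]) {
    int rank, comm_sz, sendarray[100];
    int *rbuf = NULL, *recvcounts = NULL, *displs = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    for (int i = 0; i < 100; i++)
        sendarray[i] = rank;  /* some data to gather */

    if (rank == 0) {  /* counts and displacements are only significant at root */
        rbuf = malloc(comm_sz * STRIDE * sizeof(int));
        recvcounts = malloc(comm_sz * sizeof(int));
        displs = malloc(comm_sz * sizeof(int));
        for (int i = 0; i < comm_sz; i++) {
            recvcounts[i] = 100;     /* 100 ints from every process */
            displs[i] = i * STRIDE;  /* placed STRIDE ints apart at the root */
        }
    }

    MPI_Gatherv(sendarray, 100, MPI_INT,
                rbuf, recvcounts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}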
d. MPI_Alltoallv()
- Sends data from all processes to all processes, where each block can have a different
count and displacement.
- Example: see the sketch below.
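No example was given above, so the following is a hedged sketch of one common pattern:
process i sends i + 1 ints to every process, so counts and displacements differ per rank:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, comm_sz;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    int *scounts = malloc(comm_sz * sizeof(int));
    int *rcounts = malloc(comm_sz * sizeof(int));
    int *sdispls = malloc(comm_sz * sizeof(int));
    int *rdispls = malloc(comm_sz * sizeof(int));

    int stotal = 0, rtotal = 0;
    for (int j = 0; j < comm_sz; j++) {
        scounts[j] = rank + 1;  /* this process sends rank+1 ints to rank j */
        rcounts[j] = j + 1;     /* and receives j+1 ints from rank j */
        sdispls[j] = stotal;    /* blocks packed back to back */
        rdispls[j] = rtotal;
        stotal += scounts[j];
        rtotal += rcounts[j];
    }

    int *sendbuf = malloc(stotal * sizeof(int));
    int *recvbuf = malloc(rtotal * sizeof(int));
    for (int i = 0; i < stotal; i++)
        sendbuf[i] = rank;  /* tag the data by sender */

    MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_INT,
                  recvbuf, rcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}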
e. MPI_Allgatherv()
- Gathers data from all processes and delivers it to all; each process may contribute a
different amount of data.
- Example: see the sketch below.
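A hedged sketch in the same spirit: rank i contributes i + 1 ints, and every process ends
up with the full concatenation 0, 1, 1, 2, 2, 2, ...:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, comm_sz;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    int mycount = rank + 1;  /* uneven contribution per process */
    int *mydata = malloc(mycount * sizeof(int));
    for (int i = 0; i < mycount; i++)
        mydata[i] = rank;

    /* Unlike MPI_Gatherv, every process needs the counts and displacements. */
    int *recvcounts = malloc(comm_sz * sizeof(int));
    int *displs = malloc(comm_sz * sizeof(int));
    int total = 0;
    for (int i = 0; i < comm_sz; i++) {
        recvcounts[i] = i + 1;  /* rank i contributes i+1 ints */
        displs[i] = total;      /* packed contiguously */
        total += recvcounts[i];
    }

    int *recvbuf = malloc(total * sizeof(int));

    MPI_Allgatherv(mydata, mycount, MPI_INT,
                   recvbuf, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}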