Mid 2 Solution
iii. For a 2×8 mesh, using a naïve solution, how many messages would you expect the sending process to send
for a one-to-all broadcast?
a. Less than 8
b. More than 12
c. Less than 4
d. Any one of the given options
v. The transparency that enables access to local and remote resources using identical operations is
a. Access transparency
b. Location transparency
c. Computation transparency
d. Scaling transparency
vi. With the naïve solution for one-to-all broadcast on a ring, we would expect to send ____ messages
to the other _____ processes, and this may lead to an __________ of the communication network
a. p; p-1; overutilization
b. p-1; p-1; underutilization
c. p-1; p-1; overutilization
d. (p-1)²; p-1; underutilization
True/False
I. The purpose of a community cloud is to combine different clouds, e.g. private and public clouds.
a. True
b. False
II. The total exchange in a linear ring with p nodes would take p steps.
a. True
b. False
I. What should be the total time for sending a message of size m across n hops?
Middleware
III. Which form of Operating System is better suited as a tightly coupled OS for multiprocessors and
homogeneous multicomputers?
Distributed OS
IV. Which form of Operating System is better suited as a loosely coupled OS for heterogeneous multicomputers?
Network OS
Grid Computing
The size of the message doubles at each step in the gather operation, whereas in the reduction operation the
size of the message remains the same.
b) Provide total cost estimation for this operation. You have to consider the size of each message!
At the first step the size of the message is 4, then 2, and finally 1, amounting to 4 + 2 + 1 = 7, which is
equivalent to p - 1 (here p = 8).
The only difference is the message to be sent and received: in this case X is an array of p messages, i.e. 2^d
messages.
Now at sender end, we need to keep the first half of the array and send second half to the receiver.
If (I am a participant)
{
    arraysize = arraysize / 2;
    If (I am the sender)
        Send &X[arraysize], arraysize
    Else
        Receive X, arraysize
}
a) Write the output for the following piece of code assuming that there are 4 MPI processes. Assume
there is no syntax error.
#include <mpi.h>
#include <stdio.h>
int main (int argc, char** argv) {
MPI_Init (NULL, NULL);
MPI_Status status;
int p, b, my_rank;
MPI_Comm_size(MPI_COMM_WORLD, &p);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
int a = my_rank;
int sTag = a;
int rTag = (a-1+p) % p;
int next = (my_rank + 1) % p;
int prev = ((my_rank - 1 + p) % p);
MPI_Sendrecv(&a,1,MPI_INT,next,sTag, &b,1,MPI_INT,prev,rTag, MPI_COMM_WORLD, &status);
printf("I am %d: Got:%d from %d and Sent:%d to %d\n", my_rank, b, prev, a, next);
MPI_Finalize();
}
b) What is the one extra parameter used in MPI_Recv() that is not used in MPI_Send(), and in what
circumstances would it be of greater importance?
The status parameter (of type MPI_Status) is the extra parameter. It is of greater
importance when MPI_Recv() is called with wildcards for the source and/or the tag
(MPI_ANY_SOURCE, MPI_ANY_TAG): the status then tells us the actual (i) source,
(ii) tag, and (iii) error code of the incoming message from the other process.