Cluster Lab Session 03

University of Moratuwa

Department of Information Technology


B18_L4S1_IN 4700 - Semester I Lab Session 03 – 2022

Group Communication

 The Broadcast Group Communication Primitive


o Broadcasting is sending a message to all members (including yourself) of the
group
o Syntax of the MPI_Bcast() call:

MPI_Bcast(void* buffer,
int count,
MPI_Datatype datatype,
int rootID,
MPI_Comm comm )

 On entry (i.e., before the call), only the rootID processor needs to contain the
correct value in buffer
 On exit (i.e., when the call returns), all processors have a copy of buffer

Example 01:

#include <mpi.h>
#include <iostream>
#include <cstdio>
#include <cstdlib>

using namespace std;

int main(int argc, char **argv)
{
    char buff[128];
    int  secret_num;
    int  numprocs;
    int  myid;
    int  i;

    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    // ------------------------------------------
    // Node 0 obtains the secret number from the
    // command line
    // ------------------------------------------
    if ( myid == 0 )
    {
        secret_num = atoi(argv[1]);
    }

    // ------------------------------------------
    // Node 0 shares the secret with everybody
    // ------------------------------------------
    MPI_Bcast(&secret_num, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if ( myid == 0 )
    {
        // Node 0 collects one message from every other node
        for ( i = 1; i < numprocs; i++ )
        {
            MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
            cout << buff << endl;
        }
    }
    else
    {
        // Every other node reports the secret back to node 0
        sprintf(buff, "Processor %d knows the secret code: %d",
                myid, secret_num);
        MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Compile and run the file (the examples use C++ streams, so use the MPI C++ compiler wrapper):

1. mpicxx -o BCast BCast.cpp

2. mpirun -np 4 ./BCast 42    (the secret number is read from argv[1])
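
With 4 processes and 42 as the secret number, node 0 receives and prints one message per worker in rank order, so the output should look roughly like this:

Processor 1 knows the secret code: 42
Processor 2 knows the secret code: 42
Processor 3 knows the secret code: 42
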
 The Scatter Group Communication Primitive
o The MPI_Scatter() is used to split an array into N parts and send one of the
parts to each MPI process
o Syntax of the MPI_Scatter() call:

MPI_Scatter(void* sendbuf,
int sendcount,
MPI_Datatype sendtype,

void* recvbuf,
int recvcount,
MPI_Datatype recvtype,

int rootID,
MPI_Comm comm)

 sendbuf - the data to distribute (must be valid for the rootID processor; it must
hold sendcount items for every process in the communicator)
 sendcount - number of items sent to each process (valid for rootID only)
 sendtype - type of data sent (valid for rootID only)
 recvbuf - buffer for receiving data
 recvcount - number of items to receive
 recvtype - type of data received
 rootID - id of root processor (who is doing the send operation)
 comm - the communicator group

 Example 02:
 Processor 0 distributes 2 integers to every processor
 Each processor adds the two numbers and returns the sum to proc 0

#include <mpi.h>
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    int buff[100];
    int recvbuff[2];
    int numprocs;
    int myid;
    int i, k;
    int mysum;

    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if ( myid == 0 )
    {
        cout << "We have " << numprocs << " processors" << endl;

        // -----------------------------------------------
        // Node 0 prepares 2 numbers for each processor
        // [1][2] [3][4] [5][6] .... etc
        // -----------------------------------------------
        k = 1;
        for ( i = 0; i < 2*numprocs; i += 2 )
        {
            buff[i]   = k++;
            buff[i+1] = k++;
        }
    }

    // ---------------------------------------------------
    // Node 0 scatters the array to the processors.
    // NOTE: every process (including node 0) must make
    // this call; each one receives its own 2-item chunk.
    // ---------------------------------------------------
    MPI_Scatter(buff, 2, MPI_INT, recvbuff, 2, MPI_INT, 0, MPI_COMM_WORLD);

    if ( myid == 0 )
    {
        mysum = recvbuff[0] + recvbuff[1];
        cout << "Processor " << myid << ": sum = " << mysum << endl;

        for ( i = 1; i < numprocs; i++ )
        {
            MPI_Recv(&mysum, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &stat);
            cout << "Processor " << i << ": sum = " << mysum << endl;
        }
    }
    else
    {
        mysum = recvbuff[0] + recvbuff[1];
        MPI_Send(&mysum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
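
Running the scatter example with 4 processes (for instance mpirun -np 4 ./Scatter, assuming the executable is named Scatter) hands out the chunks [1,2], [3,4], [5,6] and [7,8], so the output should look roughly like this:

We have 4 processors
Processor 0: sum = 3
Processor 1: sum = 7
Processor 2: sum = 11
Processor 3: sum = 15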

 The Gather Group Communication Primitive

o The MPI_Gather() call is usually used in conjunction with MPI_Scatter().

It does the reverse of MPI_Scatter(): every process sends a piece of data and
the rootID process collects all the pieces into one array.

o Syntax of the MPI_Gather() call:

MPI_Gather(void* sendbuf,
int sendcount,
MPI_Datatype sendtype,

void* recvbuf,
int recvcount,
MPI_Datatype recvtype,

int rootID,
MPI_Comm comm)

 sendbuf - the data (must be valid for ALL processors)
 sendcount - number of items sent to the rootID process
 sendtype - type of data sent
 recvbuf - buffer for receiving data
 recvcount - number of items to receive (per processor)
 recvtype - type of data received
 rootID - id of root processor who is doing the RECEIVE operation
 comm - the communicator group
Example 03: Same example as above, but using MPI_Gather()

 Processor 0 distributes 2 integers to every processor
 Each processor adds the two numbers and returns the sum to proc 0

#include <mpi.h>
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    int buff[100];
    int recvbuff[2];
    int numprocs;
    int myid;
    int i, k;
    int mysum;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if ( myid == 0 )
    {
        cout << "We have " << numprocs << " processors" << endl;

        // -----------------------------------------------
        // Node 0 prepares 2 numbers for each processor
        // [1][2] [3][4] [5][6] .... etc
        // -----------------------------------------------
        k = 1;
        for ( i = 0; i < 2*numprocs; i += 2 )
        {
            buff[i]   = k++;
            buff[i+1] = k++;
        }
    }

    // ------------------------------------------
    // Node 0 scatters the array to the processors
    // ------------------------------------------
    MPI_Scatter(buff, 2, MPI_INT, recvbuff, 2, MPI_INT, 0, MPI_COMM_WORLD);

    mysum = recvbuff[0] + recvbuff[1];   // Everyone calculates the sum

    // ----------------------------------------------------
    // Node 0 collects one sum from every process in "buff"
    // (buff[i] ends up holding the sum from rank i)
    // ----------------------------------------------------
    MPI_Gather(&mysum, 1, MPI_INT, buff, 1, MPI_INT, 0, MPI_COMM_WORLD);

    // ------------------------------------------
    // Node 0 prints the result
    // ------------------------------------------
    if ( myid == 0 )
    {
        for ( i = 0; i < numprocs; i++ )
        {
            cout << "Processor " << i << ": sum = " << buff[i] << endl;
        }
    }

    MPI_Finalize();
    return 0;
}
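
Because the MPI_Recv() loop is replaced by a single MPI_Gather(), buff[i] on processor 0 ends up holding the sum computed by rank i. A 4-process run therefore prints the same sums as Example 02, roughly:

We have 4 processors
Processor 0: sum = 3
Processor 1: sum = 7
Processor 2: sum = 11
Processor 3: sum = 15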

 The Reduce Group Communication Primitive

 Often, the data gathered by MPI_Gather() must be operated on further to
produce the final result.
 For example, each gathered value could be a partial sum, and the partial
sums need to be added up to produce the final sum.

 MPI provides a convenient "gather and compute on the fly" function to make
the programming easier: MPI_Reduce().
 Syntax of the MPI_Reduce() call:

MPI_Reduce(void* sendbuf,
void* recvbuf,

int count,
MPI_Datatype datatype,
MPI_Op op,

int rootID,
MPI_Comm comm)

 sendbuf - the data each processor contributes (must be valid for ALL processors)
 recvbuf - buffer that receives the combined ("reduced") result; only significant
on the rootID processor
 count - number of items contributed by each processor
 datatype - type of the data
 op - the operation used as the "reducing operation" (e.g., MPI_SUM,
MPI_PROD, MPI_MAX, MPI_MIN)
 rootID - id of root processor who is doing the RECEIVE and REDUCE operation
 comm - the communicator group

Example 04: computing Pi using MPI_Reduce()

 Each processor computes a part of the integral (the numerical idea is sketched below)
 Processor 0 adds the partial sums together via MPI_Reduce() with MPI_SUM
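
The numerical idea: since the integral of 2/sqrt(1 - x*x) from 0 to 1 equals pi, the program applies the midpoint rule

pi ≈ sum over i of w * f(x_i),   where w = 1/N and x_i = w*(i + 0.5)

Each process handles the sample points i = myid, myid + num_procs, myid + 2*num_procs, ..., and MPI_Reduce() with MPI_SUM adds the partial sums on processor 0.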

#include <mpi.h>
#include <iostream>
#include <cmath>
#include <cstdlib>

using namespace std;

double f(double a)
{
    return( 2.0 / sqrt(1.0 - a*a) );
}

/* =======================
   MAIN
   ======================= */

int main(int argc, char *argv[])
{
    int    N;
    double w, x;
    int    i, myid;
    int    num_procs;
    double mypi, final_pi;

    MPI_Init(&argc, &argv);                       // Initialize
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);    // Get # processors
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);         // Get my rank (id)

    // Node 0 reads the number of intervals from the command line
    if ( myid == 0 )
        N = atoi(argv[1]);

    // ... and shares it with everybody
    MPI_Bcast(&N, 1, MPI_INT, 0, MPI_COMM_WORLD);

    w = 1.0 / (double) N;                         // width of one interval

    /* ******************************************************************* */

    // Each process sums its own share of the sample points
    mypi = 0.0;
    for ( i = myid; i < N; i = i + num_procs )
    {
        x = w*(i + 0.5);
        mypi = mypi + w*f(x);
    }

    /* ******************************************************************* */

    // Processor 0 receives the sum of all the partial results
    MPI_Reduce(&mypi, &final_pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if ( myid == 0 )
    {
        cout << "Pi = " << final_pi << endl << endl;
    }

    MPI_Finalize();
    return 0;
}
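
To compile and run this example (assuming the source is saved as Pi.cpp; the file name and the counts below are just illustrative):

mpicxx -o Pi Pi.cpp
mpirun -np 4 ./Pi 1000000

The printed value approximates pi and gets closer to 3.14159... as N (the number of intervals, read from argv[1]) is increased.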
