PDC 5

The document contains code for implementing MPI programs to demonstrate broadcast, send-receive, and calculating message size. It includes the algorithm, source code, and results for each scenario. The algorithm initializes MPI, gets the process rank and size, and uses MPI functions like broadcast, send, receive, and probe. It prints the message and size.


NAME-SHIVAM AHUJA

REGN NO-18BCE2015
SLOT-L49+50

EX 5 (MPI-II)

SCENARIO – I
Implement an MPI program to demonstrate a simple MPI broadcast.
MPI (Message Passing Interface) is a message-passing system that lets
processes on distributed-memory architectures communicate by sending and
receiving messages.
Use the MPI_Bcast(buffer,count,MPI_INT,source,MPI_COMM_WORLD) method.

ALGORITHM:

• Include the MPI header file using #include <mpi.h>.

• Initialize the MPI environment using MPI_Init(). It takes two
arguments, &argc and &argv.

• Use MPI_Comm_rank() to get the rank of the calling process and
MPI_Comm_size() to get the size of the communicator.

• Broadcast the message using MPI_Bcast from the process with rank
"root" to all other processes of the communicator.

• Lastly, MPI_Finalize() is used to clean up the MPI environment. No more
MPI calls can be made after this one.

SOURCE CODE:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int i, myid, numprocs;
    int source, count;
    int buffer[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    source = 0;
    count = 4;
    if (myid == source) {
        /* Only the root process fills the buffer. */
        for (i = 0; i < count; i++)
            buffer[i] = i;
    }

    /* Broadcast the buffer from the root to all other processes. */
    MPI_Bcast(buffer, count, MPI_INT, source, MPI_COMM_WORLD);

    for (i = 0; i < count; i++)
        printf("%d ", buffer[i]);
    printf("\n");

    MPI_Finalize();
    return 0;
}

OUTPUT SCREEN SHOT:



RESULTS:

The message is broadcast from the root process, and every process prints the received buffer.
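The programs in this exercise can be built and launched with an MPI wrapper compiler and launcher. The exact commands depend on the installed MPI distribution; the following is a sketch assuming an MPICH- or Open MPI-style toolchain and that the broadcast program is saved as bcast.c (a hypothetical filename).

```shell
# Compile with the MPI wrapper compiler (name may vary by distribution).
mpicc -o bcast bcast.c

# Launch 4 processes; each rank prints the broadcast buffer "0 1 2 3".
mpirun -np 4 ./bcast
```

The same pattern applies to the send/receive and message-size programs in the later scenarios.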

SCENARIO – II
Implement an MPI program to demonstrate a simple Send and Receive.
MPI (Message Passing Interface) is a message-passing system that lets
processes on distributed-memory architectures communicate by sending and
receiving messages.
Use the following methods:
MPI_Send(&buffer,count,MPI_INT,destination,tag,MPI_COMM_WORLD);
MPI_Recv(&buffer,count,MPI_INT,source,tag,MPI_COMM_WORLD,&status);

ALGORITHM:

• Include the MPI header file using #include <mpi.h>.

• Initialize the MPI environment using MPI_Init(). It takes two
arguments, &argc and &argv.

• Use MPI_Comm_rank() to get the rank of the calling process and
MPI_Comm_size() to get the size of the communicator.

• A message (buffer) is sent and received using MPI_Send and MPI_Recv.

• Lastly, MPI_Finalize() is used to clean up the MPI environment. No more
MPI calls can be made after this one.

SOURCE CODE:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, numprocs;
    int tag, source, destination, count;
    int buffer;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    tag = 1234;
    source = 0;
    destination = 1;
    count = 1;

    if (myid == source) {
        buffer = 5678;
        MPI_Send(&buffer, count, MPI_INT, destination, tag, MPI_COMM_WORLD);
        printf("processor %d sent %d\n", myid, buffer);
    }
    if (myid == destination) {
        MPI_Recv(&buffer, count, MPI_INT, source, tag, MPI_COMM_WORLD, &status);
        printf("processor %d got %d\n", myid, buffer);
    }

    MPI_Finalize();
    return 0;
}

OUTPUT:

RESULTS:

The message along with the rank of the processor is printed.

SCENARIO – III

Implement an MPI program to calculate the size of an incoming message.

MPI (Message Passing Interface) is a message-passing system that lets
processes on distributed-memory architectures communicate by sending and
receiving messages.

ALGORITHM:

• Include the MPI header file using #include <mpi.h>.

• Initialize the MPI environment using MPI_Init(). It takes two
arguments, &argc and &argv.

• Use MPI_Comm_rank() to get the rank of the calling process and
MPI_Comm_size() to get the size of the communicator.

• A message is sent using MPI_Send.

• MPI_Probe is then used on the receiver to block until a matching
message is pending.

• MPI_Get_count is applied to the status returned by the probe to get the
number of entries in the message, and a receive buffer of that size is
allocated.

• The message is received using MPI_Recv.

• Lastly, MPI_Finalize() is used to clean up the MPI environment. No more
MPI calls can be made after this one.

SOURCE CODE:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myid, numprocs;
    MPI_Status status;
    int mytag, ierr, icount, j, *i;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    printf("Hello from c process: %d Numprocs is %d\n", myid, numprocs);

    mytag = 123;
    if (myid == 0) {
        j = 200;
        icount = 1;
        ierr = MPI_Send(&j, icount, MPI_INT, 1, mytag, MPI_COMM_WORLD);
    }
    if (myid == 1) {
        /* Block until a matching message is pending, then read its
           size from the status before allocating the receive buffer. */
        ierr = MPI_Probe(0, mytag, MPI_COMM_WORLD, &status);
        ierr = MPI_Get_count(&status, MPI_INT, &icount);
        i = (int *)malloc(icount * sizeof(int));
        printf("getting %d\n", icount);
        ierr = MPI_Recv(i, icount, MPI_INT, 0, mytag, MPI_COMM_WORLD, &status);
        printf("i= ");
        for (j = 0; j < icount; j++)
            printf("%d ", i[j]);
        printf("\n");
        free(i);
    }

    MPI_Finalize();
    return 0;
}

EXECUTION:

RESULTS:
The rank and size of the processor along with the message size is printed.
