Ms. V. Uma Maheswari,
Assistant Lecturer,
Department of Information Technology,
National Institute of Technology,
Surathkal.
Outline
Reference: https://computing.llnl.gov/tutorials/parallel_comp/#ModelsMessage
Hybrid Model
Reference: https://computing.llnl.gov/tutorials/parallel_comp/#ModelsMessage
Parallel Computation:
[Diagram: a large task/computation is divided into parts that are executed by processes P1–P5.]
○ Portability:
■ An MPI library exists on essentially all parallel computing platforms, so MPI programs are highly portable.
○ Support for heterogeneity
○ High performance through efficient implementations
○ Encourages overlap of communication and computation
○ Reliability
MPI is a Middleware
[Diagram: on each node, the user application/process calls the MPI library, which sits above the OS; the nodes communicate with each other over the network.]
MPI Implementations
● OpenMPI (www.open-mpi.org)
● MPICH (www.mpich.org)
● HP MPI
● Intel MPI
● Scali MPI
● IBM MPI
Outline
● Communicators :
○ To identify the communication world (cluster of processes)
● Getting Information :
○ To get the number of processes and process ids
#include<mpi.h>
MPI_Init(&argc,&argv);
MPI_Finalize();
MPI Start and Terminate Routines
#include<stdio.h>
#include<mpi.h>
int main(int argc,char **argv)
{
-----------
-----------
MPI_Init(&argc,&argv);
-----------
-----------
MPI_Finalize();
-----------
return 0;
}
Communicators
[Diagram: processes P1–P5 grouped into one communication domain (a communicator).]
Getting Information
● MPI_Comm_size
● MPI_Comm_rank
● Syntax :
● int MPI_Comm_size(MPI_Comm comm, int *size)
● int MPI_Comm_rank(MPI_Comm comm, int *rank)
General MPI Program
#include<mpi.h>
int main(int argc,char **argv)
{
-----------
-----------
MPI_Init(&argc,&argv);
-----------
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
-----------
MPI_Finalize();
-----------
return 0;
}
Example: Hello World
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[ ])
{
int size,myrank;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
printf("Process %d of %d, Hello World\n",myrank,size);
MPI_Finalize();
return 0;
}
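Compiling and running (a typical workflow; the exact commands depend on the local MPI installation, so the file name and process count below are illustrative):
# compile with the MPI wrapper compiler, assuming the program is saved as hello.c
mpicc hello.c -o hello
# launch 4 processes; mpiexec -n 4 ./hello is equivalent on most installations
mpirun -np 4 ./hello
Each of the 4 processes then prints one "Hello World" line; the order of the lines is not deterministic.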
MPI Hello World :
MPI Include File
● Point to Point
− Two processes
− Send and Receive are the basic functions
● Collective messages
− Group of processes involved in communication
− Functions like Broadcast, Scatter, Gather, Parallel Reduction
Point to Point Communication
● Two processes are involved: one sends the data and the other receives it.
[Diagram: the sender calls Send(Data) and the receiver calls Receive(Data); the data is copied from the sender's buffer to the receiver's buffer.]
Syntax:
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
Parameters:
buf : initial address of send (or receive) buffer
count : number of elements in the buffer
datatype : type of each element (e.g. MPI_INT)
dest / source : rank of the destination / source process
tag : message tag used to match a send with a receive
comm : communicator (e.g. MPI_COMM_WORLD)
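A minimal sketch of blocking point-to-point communication (assuming exactly two processes; the variable names and tag value are illustrative):
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[])
{
int myrank,x;
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
if(myrank==0)
{
x=10;
MPI_Send(&x,1,MPI_INT,1,99,MPI_COMM_WORLD); /* send one int to rank 1 with tag 99 */
}
else if(myrank==1)
{
MPI_Recv(&x,1,MPI_INT,0,99,MPI_COMM_WORLD,&status); /* blocks until the message arrives */
printf("Process %d received x = %d\n",myrank,x);
}
MPI_Finalize();
return 0;
}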
#include<mpi.h>
int main(int argc,char **argv)
{
...
MPI_Init(&argc,&argv);
...
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Finalize();
...
return 0;
}
MPI Example - 1
[Diagram: Process 0 calls MPI_Isend(A) and Process 1 calls MPI_Irecv(A); both calls return immediately and the processes carry on with their computations while the transfer of A proceeds.]
Non Blocking Send and Receive
MPI_Isend (&buf,count,datatype,dest,tag,comm,&request)
MPI_Irecv (&buf,count,datatype,source,tag,comm,&request)
Parameters:
request : request handle used later (with MPI_Wait or MPI_Test) to complete or check the non-blocking operation
[Diagram: each process has an application buffer holding A; Process 0 calls MPI_Isend(A) and Process 1 calls MPI_Irecv(A), both continue with computations, and both later call MPI_Wait() before reusing or reading A.]
MPI_Wait() and MPI_Test()
Syntax :
int MPI_Wait(MPI_Request *request, MPI_Status *status)
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
if(myrank==0)
{
x=10;
MPI_Isend(&x,1,MPI_INT,1,20,MPI_COMM_WORLD,&request);
printf("Send returned immediately\n");
}
else if(myrank==1)
{
MPI_Irecv(&x,1,MPI_INT,0,25,MPI_COMM_WORLD,&request);
printf("Receive returned immediately\n");
printf("Process %d of %d, Value of x is %d\n",myrank,size,x);
}
What is the risk here?
if(myrank==0)
{
x=10;
MPI_Isend(&x,1,MPI_INT,1,20,MPI_COMM_WORLD,&request);
x=x+10;
}
Make sure that x is available for reuse:
if(myrank==0)
{
x=10;
MPI_Isend(&x,1,MPI_INT,1,20,MPI_COMM_WORLD,&request);
MPI_Wait(&request,&status);
x=x+10;
}
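Putting it together, a self-contained sketch of the non-blocking exchange completed with MPI_Wait (assuming two processes; names and the tag value are illustrative):
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[])
{
int myrank,x=0;
MPI_Request request;
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
if(myrank==0)
{
x=10;
MPI_Isend(&x,1,MPI_INT,1,20,MPI_COMM_WORLD,&request);
/* ... computations that do not touch x ... */
MPI_Wait(&request,&status); /* after this, x may safely be reused */
x=x+10;
}
else if(myrank==1)
{
MPI_Irecv(&x,1,MPI_INT,0,20,MPI_COMM_WORLD,&request);
/* ... computations that do not read x ... */
MPI_Wait(&request,&status); /* after this, x is guaranteed to hold the received value */
printf("Process %d, value of x is %d\n",myrank,x);
}
MPI_Finalize();
return 0;
}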
Communication Modes
NOTE: A user may specify a buffer to be used for buffering messages sent in buffered mode.
Image Reference: https://www.codingame.com/playgrounds/47058/have-fun-with-mpi-in-c/communication-modes
Synchronous Mode
if(myrank==0)
{
// Blocking send expects a matching receive at the destination. In standard mode, MPI_Send
// may return after copying the data to a buffer.
MPI_Send(x,10,MPI_INT,1,1,MPI_COMM_WORLD);
// This send is also initiated; the matching receive (tag 2) is already posted, so the
// program does not deadlock.
MPI_Send(y,10,MPI_INT,1,2,MPI_COMM_WORLD);
}
else if(myrank==1)
{
//P1 will block as it has not received a matching send with tag 2
MPI_Recv(x,10,MPI_INT,0,2,MPI_COMM_WORLD,&status);
MPI_Recv(y,10,MPI_INT,0,1,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
}
MPI Example - 3
[Diagram: Process 0 posts MPI_Send(x, tag 1) and then MPI_Send(y, tag 2); Process 1 first posts MPI_Recv(x, tag 2), which blocks until the tag-2 message arrives, and then MPI_Recv(y, tag 1).]
MPI Example - 4
if(myrank==0) {
MPI_Ssend(x,10,MPI_INT,1,1,MPI_COMM_WORLD);
MPI_Send(y,10,MPI_INT,1,2,MPI_COMM_WORLD);
}
else if(myrank==1)
{
MPI_Recv(x,10,MPI_INT,0,2,MPI_COMM_WORLD,&status);
MPI_Recv(y,10,MPI_INT,0,1,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
}
MPI Example - 4
if(myrank==0) {
// MPI_Ssend will not return until the matching receive (tag 1) has been posted at P1
MPI_Ssend(x,10,MPI_INT,1,1,MPI_COMM_WORLD);
MPI_Send(y,10,MPI_INT,1,2,MPI_COMM_WORLD);
}
else if(myrank==1)
{
// P1 blocks here, as it has not received a matching send with tag 2; since P0 is in turn
// blocked in MPI_Ssend, the program deadlocks
MPI_Recv(x,10,MPI_INT,0,2,MPI_COMM_WORLD,&status);
MPI_Recv(y,10,MPI_INT,0,1,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
}
Outline
[Diagram: a group of processes P1–P6 taking part in a collective communication operation.]
Collective Communication
● Barrier
● Broadcast
● Scatter
● Gather
● Reduce
● Scatterv
● Gatherv
Collective communication: MPI_Barrier
Syntax: int MPI_Barrier(MPI_Comm comm)
Ex: MPI_Barrier(MPI_COMM_WORLD)
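A minimal sketch of a typical barrier use (the printed messages are illustrative):
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[])
{
int myrank;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
printf("Process %d : before the barrier\n",myrank);
MPI_Barrier(MPI_COMM_WORLD); /* no process passes this point until every process has reached it */
printf("Process %d : after the barrier\n",myrank);
MPI_Finalize();
return 0;
}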
Collective Communication: Broadcast
● MPI_Bcast(buf, count, datatype, source, comm)
○ buf : send buffer of sender and receive buffer of
receiver
○ source : process which sends data to others
MPI Example - 5
if(myrank==0)
{
scanf("%d",&x);
}
MPI_Bcast(&x,1,MPI_INT,0,MPI_COMM_WORLD);
printf("Value of x in process %d : %d\n",myrank,x);
MPI_Finalize();
return 0;
}
Bcast():
[Diagram: Process 0 holds x=10 and calls Bcast(x); after the broadcast, Processes 1, 2 and 3 also hold x=10.]
Broadcast Output:
Collective Communication: Scatter
Example:
MPI Example - 6
if(myrank==0)
{
printf("Enter values into array x:\n");
for(i=0;i<8;i++)
scanf("%d",&x[i]);
}
MPI_Scatter(x,2,MPI_INT,y,2,MPI_INT,0,MPI_COMM_WORLD);
for(i=0;i<2;i++)
printf("\nValue of y in process %d : %d\n",myrank,y[i]);
Output
Collective Communication: Gather
Parameters:
x=10, y[50]
MPI_Gather(&x,1,MPI_INT,y,1,MPI_INT,0,MPI_COMM_WORLD);
// Value of x at each process is copied to array y in Process 0
if(myrank==0)
{
for(i=0;i<size;i++)
printf("\nValue of y[%d] in process %d : %d\n",i,myrank,y[i]);
}
Output
Collective Communication: Reduce
Parameters:
operation: reduction operation to apply, e.g. MPI_SUM, MPI_PROD, MPI_MAX, MPI_MIN
MPI Example - 8
x=myrank;
MPI_Reduce(&x,&y,1,MPI_INT,MPI_SUM,0,MPI_COMM_WORLD);
if(myrank==0)
{
printf("Value of y after reduce : %d\n",y);
}
Output
Outline
displacement: array which holds the index from which the data is sent to each process. Ex: disp[0]=0 means process 0 gets elements starting at index 0; disp[1]=10 means process 1 gets elements starting at index 10.
MPI_Scatterv
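A minimal sketch of MPI_Scatterv with uneven chunk sizes (assuming exactly 2 processes; the counts, displacements and array contents below are illustrative and differ from the slide's example):
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[])
{
int i,myrank,x[6]={10,20,30,40,50,60},y[4];
int sendcounts[2]={2,4}; /* process 0 receives 2 elements, process 1 receives 4 */
int disp[2]={0,2};       /* process 0's data starts at index 0, process 1's at index 2 */
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
MPI_Scatterv(x,sendcounts,disp,MPI_INT,y,sendcounts[myrank],MPI_INT,0,MPI_COMM_WORLD);
for(i=0;i<sendcounts[myrank];i++)
printf("Process %d : y[%d] = %d\n",myrank,i,y[i]);
MPI_Finalize();
return 0;
}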
OUTPUT:
MPI_Gatherv
MPI_Gatherv():
Parameters:
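Correspondingly, a minimal MPI_Gatherv sketch, the inverse of the scatterv case above (again assuming 2 processes; counts, displacements and values are illustrative):
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[])
{
int i,myrank,x[4],y[6];
int recvcounts[2]={2,4}; /* process 0 contributes 2 elements, process 1 contributes 4 */
int disp[2]={0,2};       /* where each process's block is placed in y at the root */
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
for(i=0;i<recvcounts[myrank];i++)
x[i]=myrank*100+i; /* each process fills its own contribution */
MPI_Gatherv(x,recvcounts[myrank],MPI_INT,y,recvcounts,disp,MPI_INT,0,MPI_COMM_WORLD);
if(myrank==0)
for(i=0;i<6;i++)
printf("y[%d] = %d\n",i,y[i]);
MPI_Finalize();
return 0;
}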
Ex: MPI_Comm_split(MPI_COMM_WORLD,0,0,&comm1);
MPI_Comm_split(MPI_COMM_WORLD,1,0,&comm2);
MPI_Comm_split(MPI_COMM_WORLD,2,0,&comm3);
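MPI_Comm_split(comm, color, key, &newcomm) partitions the processes of comm into disjoint sub-communicators: processes that pass the same color value end up in the same new communicator, and key decides their rank order within it. A minimal sketch that splits MPI_COMM_WORLD into even-ranked and odd-ranked groups (variable names are illustrative):
#include<mpi.h>
#include<stdio.h>
int main(int argc,char *argv[])
{
int myrank,newrank,newsize;
MPI_Comm newcomm;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
/* processes with the same color (myrank%2) join the same sub-communicator */
MPI_Comm_split(MPI_COMM_WORLD,myrank%2,myrank,&newcomm);
MPI_Comm_rank(newcomm,&newrank);
MPI_Comm_size(newcomm,&newsize);
printf("World rank %d -> rank %d of %d in its sub-communicator\n",myrank,newrank,newsize);
MPI_Comm_free(&newcomm);
MPI_Finalize();
return 0;
}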
MPI Collective Routine
Reference: Introduction to MPI and OpenMP (with Labs), Brandon Barker, Computational Scientist, Cornell University Center for Advanced Computing (CAC). https://www.cac.cornell.edu/
Summary