Lecture # 11
Distributed Memory Programming
Distributed Memory Systems
Message Passing
Synchronous vs. asynchronous
Blocking vs. Non-blocking
Background on MPI
• MPI - Message Passing Interface
• Library standard defined by a committee of vendors, implementers, and parallel programmers
• Used to create parallel SPMD programs based on message passing
• Available on almost all parallel machines in C and Fortran
• About 125 routines including advanced routines
• 6 basic routines
MPI Implementations
• Most parallel machine vendors have optimized versions
• Others:
• http://www-unix.mcs.anl.gov/mpi/mpich/
• GLOBUS:
• http://www3.niu.edu/mpi/
• http://www.globus.org
Key Concepts of MPI
• Used to create parallel SPMD programs based on message passing
• Normally the same program is running on several different nodes
• Nodes communicate using message passing
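A minimal sketch of this SPMD pattern (the printed messages are only illustrative): every process runs the same executable, and the rank returned by MPI_Comm_rank determines what each process does.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Same program on every node; the rank selects the behavior */
    if (rank == 0)
        printf("Rank 0 of %d: acting as coordinator\n", size);
    else
        printf("Rank %d of %d: acting as worker\n", rank, size);

    MPI_Finalize();
    return 0;
}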
Advantages of Message Passing
• Universality
• Expressivity
• Ease of debugging
Advantages of Message Passing
• Performance:
• This is the most compelling reason why message passing will remain a permanent part of the parallel computing environment
• As modern CPUs become faster, management of their caches and the memory hierarchy is the key to getting the most out of them
• Message passing gives the programmer a way to explicitly associate specific data with processes, allowing the compiler and cache-management hardware to function fully
• Memory-bound applications can exhibit super-linear speedup when run on multiple PEs of a message-passing machine, compared to a single PE
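As a sketch of what this data-to-process association looks like in practice (the global size N and the block distribution are chosen arbitrarily, and N is assumed divisible by the number of processes): each process allocates and touches only its own block of a global array, so its working set stays local; the partial sums would later be combined by message passing.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1000000   /* illustrative global problem size */

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process owns one block of the global array, so the data it
       works on is explicitly associated with it (assumes size divides N) */
    int local_n = N / size;
    double *local = malloc(local_n * sizeof(double));
    double local_sum = 0.0;

    for (int i = 0; i < local_n; i++) {
        local[i] = (double)(rank * local_n + i);  /* global index owned here */
        local_sum += local[i];
    }

    printf("Rank %d: local block of %d elements, partial sum %f\n",
           rank, local_n, local_sum);

    free(local);
    MPI_Finalize();
    return 0;
}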
Include files
• The MPI include file
• mpi.h
• Defines many constants used within MPI programs
• In C, it also defines the interfaces for the functions
• Compilers know where to find the include files
Communicators
• All MPI communication takes place within a communicator, a group of processes that may exchange messages
• MPI_COMM_WORLD is the predefined communicator containing all processes of the program
/* Initialize MPI */
ierr = MPI_Init(&argc, &argv);
#include <stdio.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, numprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    printf("Hello from %d\n", myid);
    printf("Numprocs is %d\n", numprocs);

    MPI_Finalize();
    return 0;
}
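With a typical MPI installation, a program like this is built with the wrapper compiler (for example mpicc) and launched with mpiexec or mpirun, e.g. mpiexec -n 4 ./a.out; the exact command names vary by implementation. Each process prints its own rank, and the output lines from different processes may appear in any order.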
Basic Communications in MPI
• Data values are transferred from one processor to another
• One process sends the data
• Another receives the data
• Synchronous
• Call does not return until the message is sent or received
• Asynchronous
• Call indicates the start of a send or receive operation; another call is made to determine whether the operation has finished
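A sketch contrasting the two modes (the tag and data value are illustrative; run with at least two processes): the blocking MPI_Send returns once its buffer can be reused, while MPI_Irecv only starts the receive and a separate MPI_Wait call determines when it has finished.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Synchronous (blocking) call: does not return until the
           message has been handed off and the buffer can be reused */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Asynchronous (non-blocking) call: only starts the receive */
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... other work could be done here while the message is in flight ... */
        /* A second call determines when the operation has finished */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}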
Synchronous Send
• MPI_Send: Sends data to another processor
• Use MPI_Recv to "get" the data
MPI_Send(&buffer, count, datatype, destination, tag, communicator);
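A minimal send/receive pair built around the call shown above (the ranks, tag, and buffer value are illustrative; run with at least two processes): rank 0 sends one integer and rank 1 blocks in MPI_Recv until it arrives.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, buffer;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        buffer = 123;
        /* count=1, datatype=MPI_INT, destination=1, tag=7 */
        MPI_Send(&buffer, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* source=0, tag=7: blocks until the matching message arrives */
        MPI_Recv(&buffer, 1, MPI_INT, 0, 7, MPI_COMM_WORLD, &status);
        printf("Rank 1 got %d from rank 0\n", buffer);
    }

    MPI_Finalize();
    return 0;
}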