03: MPI Program Structure
What is the message passing interface (MPI)?
The message passing interface (MPI) is a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory.
Reference: http://foxtrot.ncsa.uiuc.edu:8900/public/MPI /
MPICH
• MPICH is a widely used implementation of the message passing interface (MPI) standard.
Features of the message passing interface
In distributed computing, MPI (Message Passing Interface) provides these essential features:
1. Inter-process communication: enables data exchange between nodes in a distributed environment (e.g., point-to-point and collective communication).
2. Scalability: efficiently manages thousands of processes across different networked systems.
3. Fault tolerance: allows partial failure handling, helping processes continue where possible.
A Generic MPI Program
Initializing MPI
MPI Header Files
MPI Handles
• MPI defines and maintains its own internal data structures related to
communication, etc. You reference these data structures through
handles. Handles are returned by various MPI calls and may be
used as arguments in other MPI calls.
• In C, handles are pointers to specially defined datatypes (created via
the C typedef mechanism). Arrays are indexed starting at 0.
• Examples:
– MPI_SUCCESS - An integer. Used to test error codes.
– MPI_COMM_WORLD - In C, an object of type MPI_Comm (a
"communicator"); it represents a pre-defined communicator consisting of
all processors.
• Handles may be copied using the standard assignment operation.
MPI Datatypes
Basic MPI Data Types (C)
MPI_CHAR            signed char
MPI_SHORT           signed short int
MPI_INT             signed int
MPI_LONG            signed long int
MPI_UNSIGNED_CHAR   unsigned char
MPI_UNSIGNED_SHORT  unsigned short int
MPI_UNSIGNED        unsigned int
MPI_UNSIGNED_LONG   unsigned long int
MPI_FLOAT           float
MPI_DOUBLE          double
MPI_LONG_DOUBLE     long double
MPI_BYTE            (none)
MPI_PACKED          (none)
Special MPI Datatypes (C)
In C, MPI provides several special datatypes (structures), for example:
– MPI_Comm - a communicator
– MPI_Status - a structure containing status information for received messages
– MPI_Datatype - describes the type of the data in a message
Terminating MPI
Communicators
•A communicator is a handle representing a
group of processors that can communicate with
one another. 1 3
dest
0 4
2
source 5
Communicator
MPI_COMM_WORLD
Sample Program: Hello World!
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int myrank, size;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Get my rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Get the total number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Processor %d of %d: Hello World!\n", myrank, size);

    /* Terminate MPI */
    MPI_Finalize();
    return 0;
}

Sample Program: Output
(run with 4 processes; line order may vary)
Processor 0 of 4: Hello World!
Processor 1 of 4: Hello World!
Processor 2 of 4: Hello World!
Processor 3 of 4: Hello World!
Program: Send and Receive

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // Get the rank of the process
    MPI_Comm_size(MPI_COMM_WORLD, &size); // Get the total number of processes

    if (size < 2) {
        if (rank == 0) {
            printf("This program requires at least two processes.\n");
        }
        MPI_Finalize();
        return 1;
    }

    int number;
    if (rank == 0) {
        // Process 0 sends a number to Process 1
        number = 42; // Example data to send
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Process 0 sent number %d to Process 1\n", number);
    } else if (rank == 1) {
        // Process 1 receives the number from Process 0
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received number %d from Process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}
Output
Process 0 sent number 42 to Process 1
Process 1 received number 42 from Process 0
Program: Broadcast

#include <mpi.h>
#include <stdio.h>

#define ARRAY_SIZE 5

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int numbers[ARRAY_SIZE];
    if (rank == 0) {
        // The root process initializes the array
        printf("Process 0 is initializing the array: ");
        for (int i = 0; i < ARRAY_SIZE; i++) {
            numbers[i] = i + 1;
            printf("%d ", numbers[i]);
        }
        printf("\n");
    }

    // Broadcast the array from process 0 to all processes
    MPI_Bcast(numbers, ARRAY_SIZE, MPI_INT, 0, MPI_COMM_WORLD);

    // Each process prints the received array and calculates the sum
    int sum = 0;
    for (int i = 0; i < ARRAY_SIZE; i++) {
        sum += numbers[i];
    }
    printf("Process %d received the array and calculated sum: %d\n", rank, sum);

    MPI_Finalize();
    return 0;
}
Output
Process 0 is initializing the array: 1 2 3 4 5
Process 0 received the array and calculated sum: 15
Process 1 received the array and calculated sum: 15
Process 2 received the array and calculated sum: 15
Process 3 received the array and calculated sum: 15
Program: Scatter

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_data[4] = {10, 20, 30, 40}; // example values; assumes 4 processes
    int recv_value;

    // Distribute one element of send_data from the root to each process
    MPI_Scatter(send_data, 1, MPI_INT, &recv_value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d received value %d\n", rank, recv_value);

    MPI_Finalize();
    return 0;
}

Output
(run with 4 processes; line order may vary)
Process 0 received value 10
Process 1 received value 20
Process 2 received value 30
Process 3 received value 40
Program: Gather

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_value = rank * rank; // example: each process contributes its rank squared
    int recv_data[4]; // assumes 4 processes

    // Collect one value from every process at the root
    MPI_Gather(&send_value, 1, MPI_INT, recv_data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Root process gathered data: ");
        for (int i = 0; i < 4; i++) {
            printf("%d ", recv_data[i]);
        }
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}

Output
Root process gathered data: 0 1 4 9