03 - MPI Program Structure

The Message Passing Interface (MPI) is a standardized protocol for exchanging messages in parallel programs across distributed memory systems. Key features include inter-process communication, scalability, and fault tolerance, with a typical MPI program structure involving initialization, computation, and finalization. Various MPI functions and data types are defined to facilitate communication and data handling among processes.


Message Passing Interface

What is the message passing interface (MPI)?
The message passing interface (MPI) is a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory.

Reference: http://foxtrot.ncsa.uiuc.edu:8900/public/MPI/
Features of the Message Passing Interface

In distributed computing, MPI (Message Passing Interface) provides these essential features:
1. Inter-process communication: enables data exchange between nodes in a distributed environment (e.g., point-to-point and collective communication).
2. Scalability: efficiently manages thousands of processes across different networked systems.
3. Fault tolerance: allows partial failure handling, helping processes continue where possible.

A Generic MPI Program

• All MPI programs have the following general structure:
  – include the MPI header file
  – variable declarations
  – initialize the MPI environment
  – ...do computation and MPI communication calls...
  – close MPI communications
General MPI Program Structure

/* MPI include file */
#include <mpi.h>

int main(int argc, char *argv[])
{
    /* variable declarations */
    int np, rank, ierr;

    /* initialize the MPI environment */
    ierr = MPI_Init(&argc, &argv);

    /* do work and make message passing calls */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    /* ... do some work ... */

    /* terminate the MPI environment */
    ierr = MPI_Finalize();
    return 0;
}
Initializing MPI

• The first MPI routine called in any MPI program must be the initialization routine MPI_INIT. This routine establishes the MPI environment, returning an error code if there is a problem (a short error-check sketch follows below).

  int ierr;
  ...
  ierr = MPI_Init(&argc, &argv);

• Note that the arguments to MPI_Init are the addresses of argc and argv, the variables that contain the command-line arguments for the program.
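As an added sketch (not from the original slides), the returned error code can be compared against MPI_SUCCESS, and the run might be aborted if initialization failed; this assumes <stdio.h> is also included for fprintf:

  int ierr = MPI_Init(&argc, &argv);
  if (ierr != MPI_SUCCESS) {
      /* initialization failed: report it and abort every process */
      fprintf(stderr, "MPI_Init failed with error code %d\n", ierr);
      MPI_Abort(MPI_COMM_WORLD, ierr);
  }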
MPI Header Files

• MPI header files contain the prototypes for MPI functions/subroutines, as well as definitions of macros, special constants, and data types used by MPI. An appropriate "include" statement must appear in any source file that contains MPI function calls or constants.

  #include <mpi.h>
MPI Handles

• MPI defines and maintains its own internal data structures related to communication, etc. You reference these data structures through handles. Handles are returned by various MPI calls and may be used as arguments in other MPI calls.
• In C, handles are pointers to specially defined datatypes (created via the C typedef mechanism). Arrays are indexed starting at 0.
• Examples:
  – MPI_SUCCESS - an integer. Used to test error codes.
  – MPI_COMM_WORLD - in C, an object of type MPI_Comm (a "communicator"); it represents a pre-defined communicator consisting of all processors.
• Handles may be copied using the standard assignment operation (a short usage sketch follows below).
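A minimal illustration (an addition, not from the slides) of working with handles; my_comm is an invented variable name, and the error-code check uses the MPI_SUCCESS constant mentioned above:

  MPI_Comm my_comm;           /* a handle for a communicator */
  my_comm = MPI_COMM_WORLD;   /* handles may be copied with ordinary assignment */

  int rank;
  int ierr = MPI_Comm_rank(my_comm, &rank);
  if (ierr == MPI_SUCCESS) {
      /* the call completed without error */
  }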
Basic MPI Data Types

MPI Datatype            C Type
MPI_CHAR                signed char
MPI_SHORT               signed short int
MPI_INT                 signed int
MPI_LONG                signed long int
MPI_UNSIGNED_CHAR       unsigned char
MPI_UNSIGNED_SHORT      unsigned short int
MPI_UNSIGNED            unsigned int
MPI_UNSIGNED_LONG       unsigned long int
MPI_FLOAT               float
MPI_DOUBLE              double
MPI_LONG_DOUBLE         long double
MPI_BYTE                (none)
MPI_PACKED              (none)
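As a hedged example (not part of the original table slides), the MPI datatype passed to a communication call should describe the C type of the buffer; here MPI_DOUBLE is paired with a double array, and the destination rank 1 and tag 0 are arbitrary values chosen for illustration:

  double values[10];
  /* ... fill values ... */
  /* send 10 doubles to rank 1 with tag 0; MPI_DOUBLE matches the C type double */
  MPI_Send(values, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);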
Special MPI Datatypes (C)

• In C, MPI provides several special datatypes (structures). Examples include
  – MPI_Comm - a communicator
  – MPI_Status - a structure containing several pieces of status information for MPI calls (used in the sketch below)
  – MPI_Datatype
• These are used in variable declarations, for example,
  MPI_Comm some_comm;
  declares a variable called some_comm, which is of type MPI_Comm (i.e. a communicator).
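A short sketch (added here, not part of the slides) of how an MPI_Status variable might be used after a receive; the buffer size and the use of MPI_ANY_SOURCE / MPI_ANY_TAG are illustrative choices, and <stdio.h> is assumed for printf:

  MPI_Status status;
  int buf[10];
  /* receive up to 10 ints from any sender, with any tag */
  MPI_Recv(buf, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
  /* the status structure records who actually sent the message and which tag it used */
  printf("message came from rank %d with tag %d\n", status.MPI_SOURCE, status.MPI_TAG);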
Terminating MPI

• The last MPI routine called should be MPI_FINALIZE, which
  – cleans up all MPI data structures, cancels operations that never completed, etc.
  – must be called by all processes; if any one process does not reach this statement, the program will appear to hang.
• Once MPI_FINALIZE has been called, no other MPI routines (including MPI_INIT) may be called.

  int err;
  ...
  err = MPI_Finalize();
Communicators

• A communicator is a handle representing a group of processors that can communicate with one another.

[Slide figure: six processes (0-5) inside one communicator, with one process marked as source and another as dest]

• The communicator name is required as an


argument to all point-to-point and collective
operations.
– The communicator specified in the send and receive
calls must agree for communication to take place.
– Processors can communicate only if they share a
communicator.
Communicators

• MPI automatically provides a basic communicator called MPI_COMM_WORLD. It is the communicator consisting of all processors. Using MPI_COMM_WORLD, every processor can communicate with every other processor. You can define additional communicators consisting of subsets of the available processors, as in the figure and sketch below.
[Slide figure: MPI_COMM_WORLD partitioned into two smaller communicators, Comm1 and Comm2; each sub-communicator numbers its member processes starting from 0]
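The slides do not show how such additional communicators are created; one common approach, offered here only as a hedged sketch, is MPI_Comm_split, which groups together all processes that pass the same "color" value. The even/odd split below is an invented example:

  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  /* processes supplying the same color end up in the same new communicator */
  int color = world_rank % 2;              /* 0 = even ranks, 1 = odd ranks */
  MPI_Comm sub_comm;
  MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

  int sub_rank;
  MPI_Comm_rank(sub_comm, &sub_rank);      /* ranks restart from 0 inside sub_comm */

  MPI_Comm_free(&sub_comm);                /* release the communicator when finished */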
Sample Program: Hello World!

• In this modified version of the "Hello World" program, each processor prints its rank as well as the total number of processors in the communicator MPI_COMM_WORLD.
• Notes:
  – Makes use of the pre-defined communicator MPI_COMM_WORLD.
  – Not testing for error status of routines!
Sample Program: Hello World!

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int myrank, size;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Get my rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Get the total number of processors */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Processor %d of %d: Hello World!\n", myrank, size);

    MPI_Finalize();   /* Terminate MPI */
    return 0;
}
Sample Program: Output

• Running this code on four processors will produce a result like:
  Processor 2 of 4: Hello World!
  Processor 1 of 4: Hello World!
  Processor 3 of 4: Hello World!
  Processor 0 of 4: Hello World!
• Each processor executes the same code, including probing for its rank and size and printing the string.
• The order of the printed lines is essentially random!
  – There is no intrinsic synchronization of operations on different processors.
  – Each time the code is run, the order of the output lines may change. (A sketch of one way to impose an order follows.)
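The slides do not show this, but one common workaround, given here only as a hedged sketch, is to loop over the ranks and let each process print only on its own turn, separating the turns with MPI_Barrier (myrank and size are the variables from the sample program above):

  /* print in rank order: each process takes one turn per loop iteration */
  for (int turn = 0; turn < size; turn++) {
      if (myrank == turn) {
          printf("Processor %d of %d: Hello World!\n", myrank, size);
          fflush(stdout);
      }
      MPI_Barrier(MPI_COMM_WORLD);   /* wait until this turn's output has been issued */
  }

Even this only makes rank order likely rather than guaranteed, because the MPI runtime forwards each process's standard output separately.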
Types of MPI Communication
Program (point-to-point: MPI_Send / MPI_Recv)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // Get the rank of the process
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // Get the total number of processes

    if (size < 2) {
        if (rank == 0) {
            printf("This program requires at least two processes.\n");
        }
        MPI_Finalize();
        return 1;
    }

    int number;
    if (rank == 0) {
        // Process 0 sends a number to Process 1
        number = 42;   // Example data to send
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Process 0 sent number %d to Process 1\n", number);
    } else if (rank == 1) {
        // Process 1 receives the number from Process 0
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received number %d from Process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}
Output

Process 0 sent number 42 to Process 1
Process 1 received number 42 from Process 0
Program (broadcast: MPI_Bcast)

#include <mpi.h>
#include <stdio.h>

#define ARRAY_SIZE 5

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int numbers[ARRAY_SIZE];

    // Only Process 0 initializes the array with values
    if (rank == 0) {
        printf("Process 0 is initializing the array:\n");
        for (int i = 0; i < ARRAY_SIZE; i++) {
            numbers[i] = i + 1;   // For example, numbers = [1, 2, 3, 4, 5]
            printf("%d ", numbers[i]);
        }
        printf("\n");
    }

    // Broadcast the array from Process 0 to all other processes
    MPI_Bcast(numbers, ARRAY_SIZE, MPI_INT, 0, MPI_COMM_WORLD);

    // Each process calculates the sum of the received array
    int sum = 0;
    for (int i = 0; i < ARRAY_SIZE; i++) {
        sum += numbers[i];
    }

    printf("Process %d received the array and calculated sum: %d\n", rank, sum);

    MPI_Finalize();
    return 0;
}
Output

Process 0 is initializing the array:
1 2 3 4 5
Process 0 received the array and calculated sum: 15
Process 1 received the array and calculated sum: 15
Process 2 received the array and calculated sum: 15
Process 3 received the array and calculated sum: 15

Program (scatter: MPI_Scatter)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data[4] = {10, 20, 30, 40};   // Data to scatter (only used on the root process)
    int recv_data;

    // Each process receives one element of the root's array
    MPI_Scatter(data, 1, MPI_INT, &recv_data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d received data: %d\n", rank, recv_data);

    MPI_Finalize();
    return 0;
}
Output

Process 0 received data: 10
Process 1 received data: 20
Process 2 received data: 30
Process 3 received data: 40
Program (gather: MPI_Gather)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int send_data = rank * 10;
    int recv_data[4];

    // Gather one int from every process into recv_data on the root (rank 0)
    MPI_Gather(&send_data, 1, MPI_INT, recv_data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Root process gathered data: ");
        for (int i = 0; i < 4; i++) {
            printf("%d ", recv_data[i]);
        }
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}
Output

Root process gathered data: 0 10 20 30

Steps:
1. MPI_Init initializes the MPI environment.
2. MPI_Comm_rank gets the rank (ID) of each process.
3. Each process sets send_data to rank * 10:
   Process 0 has send_data = 0
   Process 1 has send_data = 10
   Process 2 has send_data = 20
   Process 3 has send_data = 30
4. MPI_Gather collects these values into recv_data on Process 0.
End
