
Introduction to Message Passing Interface

Dr. Mian M. Hamayun


[email protected]
http://seecs.nust.edu.pk/faculty/mianhamayun.html
Some material re-used from Mohamed Zahran (NYU)
This is What We Target With MPI

MPI targets distributed-memory systems: each process has its own private memory, and processes cooperate by sending each other messages. We will talk about processes, not threads.

Copyright © 2010, Elsevier Inc. All Rights Reserved
MPI processes
• Identify processes by nonnegative integer ranks.
• p processes are numbered 0, 1, 2, ..., p-1.

Compilation
MPI is NOT a language. It is just a set of libraries called from C/C++, etc., so we compile with a wrapper script:

mpicc -g -Wall -o mpi_hello mpi_hello.c

• mpicc: wrapper script that invokes the C compiler
• -g: produce debugging information
• -Wall: turn on all warnings
• -o mpi_hello: create this executable file name (as opposed to the default a.out)
• mpi_hello.c: source file
Execution
mpiexec -n <number of processes> <executable>

mpiexec -n 1 ./mpi_hello        (run with 1 process)

mpiexec -n 4 ./mpi_hello        (run with 4 processes)
Our first MPI program
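A minimal version of the greetings program, consistent with the output shown on the next slide (a sketch in the style of Pacheco's text; the slide's exact listing may differ in details):

#include <stdio.h>
#include <string.h>
#include <mpi.h>

const int MAX_STRING = 100;

int main(void) {
    char greeting[MAX_STRING];   /* message buffer */
    int comm_sz;                 /* number of processes */
    int my_rank;                 /* my process rank */

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank != 0) {
        /* every process except 0 sends a greeting to process 0 */
        sprintf(greeting, "Greetings from process %d of %d !", my_rank, comm_sz);
        MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        /* process 0 prints its own greeting, then the others in rank order */
        printf("Greetings from process %d of %d !\n", my_rank, comm_sz);
        for (int q = 1; q < comm_sz; q++) {
            MPI_Recv(greeting, MAX_STRING, MPI_CHAR, q, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", greeting);
        }
    }

    MPI_Finalize();
    return 0;
}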
Execution
mpiexec -n 1 ./mpi_hello

Greetings from process 0 of 1 !

mpiexec -n 4 ./mpi_hello

Greetings from process 0 of 4 !
Greetings from process 1 of 4 !
Greetings from process 2 of 4 !
Greetings from process 3 of 4 !
MPI Programs
• Written in C.
  – Has main.
  – Uses stdio.h, string.h, etc.
• Need to add the mpi.h header file.
• Identifiers defined by MPI start with "MPI_".
• The first letter following the underscore is uppercase.
  – For function names and MPI-defined types.
  – Helps to avoid confusion.
• All letters following the underscore are uppercase.
  – MPI-defined macros.
  – MPI-defined constants.
MPI Components
int MPI_Init(int* argc_p, char*** argv_p);

• argc_p and argv_p: pointers to the two arguments of main() (pass NULL if they are not needed).
• Tells MPI to do all the necessary setup.
• No MPI functions should be called before this.
MPI Components
int MPI_Finalize(void);

• Tells MPI we're done, so clean up anything allocated for this program.
• No MPI function should be called after this.
Basic Outline

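A sketch of the skeleton this slide outlines; every MPI program follows it (the "..." parts stand for ordinary serial code):

#include <mpi.h>
...
int main(int argc, char* argv[]) {
    ...
    /* No MPI calls before this */
    MPI_Init(&argc, &argv);
    ...
    MPI_Finalize();
    /* No MPI calls after this */
    ...
    return 0;
}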
Communicators
• A collection of processes that can send messages to each other.
• MPI_Init defines a communicator that consists of all the processes created when the program is started.
• Called MPI_COMM_WORLD.

Communicators
int MPI_Comm_size(MPI_Comm comm, int* comm_sz_p);

int MPI_Comm_rank(MPI_Comm comm, int* my_rank_p);

• comm: the communicator (MPI_COMM_WORLD for now).
• comm_sz_p: out parameter, the number of processes in the communicator.
• my_rank_p: out parameter, my rank (the rank of the process making this call).
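For example, every process typically starts by asking for the communicator's size and its own rank (variable names are illustrative):

int comm_sz;   /* number of processes in MPI_COMM_WORLD */
int my_rank;   /* rank of the calling process */
MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);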
Communication
int MPI_Send(void* msg_buf_p, int msg_size, MPI_Datatype msg_type,
             int dest, int tag, MPI_Comm communicator);

• dest: rank of the receiving process.
• tag: nonnegative integer used to distinguish messages.
• A message sent by a process using one communicator cannot be received by a process in another communicator.
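A sample call (buffer name and sizes are illustrative):

double x[100];
/* on process 0: send 100 doubles to process 1, with tag 0 */
MPI_Send(x, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);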
Data types
The msg_type argument must be one of MPI's predefined datatypes, matching the C type of the buffer:

MPI datatype       C datatype
MPI_CHAR           signed char
MPI_SHORT          signed short int
MPI_INT            signed int
MPI_LONG           signed long int
MPI_FLOAT          float
MPI_DOUBLE         double
MPI_LONG_DOUBLE    long double
MPI_BYTE, MPI_PACKED   (no corresponding C type)
Communication
int MPI_Recv(void* msg_buf_p, int buf_size, MPI_Datatype buf_type,
             int source, int tag, MPI_Comm communicator,
             MPI_Status* status_p);

• source: rank of the sending process.
• status_p: filled in with information about the received message (or pass MPI_STATUS_IGNORE).
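The matching receive for the send sketched earlier:

double y[100];
/* on process 1: receive 100 doubles from process 0, with tag 0 */
MPI_Recv(y, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);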
Message matching
A message sent by process q with MPI_Send is received by process r's MPI_Recv only if:
• recv_comm = send_comm,
• recv_tag = send_tag,
• dest = r and src = q.
The receive buffer must also be big enough: recv_buf_sz >= send_buf_sz.
Scenario 1
Suppose process 0 receives results from the other processes in rank order (source = 1, then 2, ...). What if the message from process 2 arrives before the message from process 1?
Scenario 1
Wildcard: MPI_ANY_SOURCE

The loop will then be:

for (q = 1; q < comm_sz; q++) {
    MPI_Recv(result, result_sz, result_type, MPI_ANY_SOURCE,
             tag, comm, MPI_STATUS_IGNORE);
}
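With MPI_ANY_SOURCE, process 0 no longer knows which process a given result came from. If the sender's identity matters, pass a real status instead of MPI_STATUS_IGNORE; a sketch:

MPI_Status status;
for (q = 1; q < comm_sz; q++) {
    MPI_Recv(result, result_sz, result_type, MPI_ANY_SOURCE,
             tag, comm, &status);
    /* status.MPI_SOURCE holds the rank that actually sent this message */
}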
Scenario 2
What if process 1 sends several messages to process 0, but they arrive out of order?
– Process 0 is waiting for a message with tag = 0, but a tag = 1 message arrives instead!
Scenario 2
Wildcard: MPI_ANY_TAG

The loop will then be:

for (q = 1; q < comm_sz; q++) {
    MPI_Recv(result, result_sz, result_type, q,
             MPI_ANY_TAG, comm, MPI_STATUS_IGNORE);
}
Receiving messages
• A receiver can get a message without knowing:
  – the amount of data in the message,
  – the sender of the message,
  – or the tag of the message.

status_p argument
The last argument of MPI_Recv has type MPI_Status*. MPI_Status is a struct with at least three members:
• MPI_SOURCE
• MPI_TAG
• MPI_ERROR

MPI_Status status;
MPI_Recv(..., &status);
status.MPI_SOURCE   /* who actually sent the message */
status.MPI_TAG      /* the tag it was actually sent with */
How much data am I receiving?
int MPI_Get_count(MPI_Status* status_p, MPI_Datatype type, int* count_p);

Given the status filled in by MPI_Recv and the type of the receive buffer, MPI_Get_count returns (in *count_p) the number of elements actually received.
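Usage sketch, continuing the status example above (buffer names are illustrative):

double msg_buf[100];
int count;
MPI_Status status;
MPI_Recv(msg_buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_DOUBLE, &count);
/* count now holds how many MPI_DOUBLE elements arrived */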
Issues
• MPI_Send() is implementation dependent: it may buffer the message and return immediately, or block until a matching receive is posted .. or both, e.g. depending on the message size!
• MPI_Recv() always blocks.
  – So, if it returns, we are sure the message has been received.
  – Be careful: don't make it block forever!
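A classic consequence, sketched below (a, b, n, and comm are illustrative): if two processes send to each other first and receive second, the program deadlocks whenever MPI_Send decides to block. MPI_Sendrecv performs such an exchange safely.

/* Unsafe exchange: correct only while MPI_Send buffers the message */
if (my_rank == 0) {
    MPI_Send(a, n, MPI_DOUBLE, 1, 0, comm);    /* may block ... */
    MPI_Recv(b, n, MPI_DOUBLE, 1, 0, comm, MPI_STATUS_IGNORE);
} else if (my_rank == 1) {
    MPI_Send(a, n, MPI_DOUBLE, 0, 0, comm);    /* ... while this blocks too: deadlock */
    MPI_Recv(b, n, MPI_DOUBLE, 0, 0, comm, MPI_STATUS_IGNORE);
}

/* Safe exchange between two ranks with MPI_Sendrecv */
int partner = 1 - my_rank;
MPI_Sendrecv(a, n, MPI_DOUBLE, partner, 0,
             b, n, MPI_DOUBLE, partner, 0,
             comm, MPI_STATUS_IGNORE);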
Conclusions
• MPI is the choice when we have a distributed-memory organization.
• It is based on passing messages between processes.
• Your goal: reduce the number of messages while increasing concurrency.
