Lecture05 MPI

HIGH PERFORMANCE COMPUTING
MPI I

Dr. Mohamed Ghetas


Roadmap
 Writing your first MPI program.
 Using the common MPI functions.
 The Trapezoidal Rule in MPI.
 Collective communication.
 MPI derived datatypes.
 Performance evaluation of MPI programs.
 Parallel sorting.
 Safety in MPI programs.

Copyright © 2010, Elsevier Inc. All rights reserved.
A distributed memory system

A shared memory system

Hello World!

(a classic)
Identifying MPI processes
 Common practice to identify processes by
nonnegative integer ranks.

 p processes are numbered 0, 1, 2, ..., p-1

Our first MPI program

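The program itself appeared as an image in the original slides. A minimal sketch, consistent with the compile command and the sample output shown on the following slides, might look like this (the message text mirrors that output):

```c
#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define MAX_STRING 100

int main(void) {
   char greeting[MAX_STRING];   /* message buffer        */
   int  comm_sz;                /* number of processes   */
   int  my_rank;                /* rank of this process  */

   MPI_Init(NULL, NULL);
   MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

   if (my_rank != 0) {
      /* every process except 0 builds a greeting and sends it to process 0 */
      sprintf(greeting, "Greetings from process %d of %d !", my_rank, comm_sz);
      MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
   } else {
      /* process 0 prints its own greeting, then receives the rest in rank order */
      printf("Greetings from process %d of %d !\n", my_rank, comm_sz);
      for (int q = 1; q < comm_sz; q++) {
         MPI_Recv(greeting, MAX_STRING, MPI_CHAR, q, 0, MPI_COMM_WORLD,
                  MPI_STATUS_IGNORE);
         printf("%s\n", greeting);
      }
   }

   MPI_Finalize();
   return 0;
}
```

Compile and run it under an MPI installation with the mpicc and mpiexec commands shown on the next slides.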
Compilation
mpicc: wrapper script to compile a source file

mpicc -g -Wall -o mpi_hello mpi_hello.c

-g: produce debugging information
-Wall: turn on all warnings
-o mpi_hello: create this executable file name (as opposed to the default a.out)

Execution

mpiexec -n <number of processes> <executable>

mpiexec -n 1 ./mpi_hello

run with 1 process


mpiexec -n 4 ./mpi_hello

run with 4 processes


Output
mpiexec -n 1 ./mpi_hello
Greetings from process 0 of 1 !

mpiexec -n 4 ./mpi_hello

Greetings from process 0 of 4 !


Greetings from process 1 of 4 !
Greetings from process 2 of 4 !
Greetings from process 3 of 4 !
MPI Programs
 Written in C.
Has main.
Uses stdio.h, string.h, etc.
 Need to add mpi.h header file.
 Identifiers defined by MPI start with “MPI_”.
 First letter following underscore is uppercase.
For function names and MPI-defined types.
Helps to avoid confusion.

MPI Components
 MPI_Init
Tells MPI to do all the necessary setup.

 MPI_Finalize
Tells MPI we’re done, so clean up anything allocated for
this program.

Basic Outline

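The outline itself was an image in the original slides; the skeleton it describes is, in sketch form:

```c
#include <mpi.h>   /* MPI identifiers and function prototypes */
/* other includes (stdio.h, string.h, ...) as needed */

int main(int argc, char* argv[]) {
   /* no MPI calls before this point */
   MPI_Init(&argc, &argv);    /* set up MPI */

   /* ... communication and computation ... */

   MPI_Finalize();            /* clean up anything MPI allocated */
   /* no MPI calls after this point */
   return 0;
}
```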
Communicators

 A collection of processes that can send messages to each other.
 MPI_Init defines a communicator that consists of all the processes created when the program is started.
 Called MPI_COMM_WORLD.

Communicators

number of processes in the communicator

my rank (the process making this call)
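The two calls this slide annotates are presumably MPI_Comm_size and MPI_Comm_rank; a sketch:

```c
int comm_sz;   /* number of processes in the communicator */
int my_rank;   /* my rank (the process making this call)  */

MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
```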
SPMD
 Single-Program Multiple-Data
 We compile one program.
 Process 0 does something different.
Receives messages and prints them while the other
processes do the work.

 The if-else construct makes our program SPMD.

Communication

Data types

Communication

Message matching

Process q: MPI_Send with dest = r
Process r: MPI_Recv with src = q
Message matching

 MPI_ANY_SOURCE
 MPI_ANY_TAG

for (i = 1; i < comm_sz; i++) {
   MPI_Recv(result, result_sz, result_type, MPI_ANY_SOURCE,
      result_tag, comm, MPI_STATUS_IGNORE);
   Process_result(result);
}

Receiving messages
 A receiver can get a message without knowing:
the amount of data in the message (maximum size),
the sender of the message,
or the tag of the message.

status_p argument

MPI_Status* — a struct with at least the fields MPI_SOURCE, MPI_TAG, and MPI_ERROR.

MPI_Status status;
status.MPI_SOURCE
status.MPI_TAG

How much data am I receiving?

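The answer is the MPI_Get_count function, applied to the status argument filled in by MPI_Recv. A sketch (the buffer name and size are illustrative):

```c
MPI_Status status;
double     recv_buf[100];   /* illustrative receive buffer           */
int        count;           /* number of elements actually received  */

MPI_Recv(recv_buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);

/* how many MPI_DOUBLEs arrived? (may be fewer than the buffer holds) */
MPI_Get_count(&status, MPI_DOUBLE, &count);
```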
Issues with send and receive
 Exact behavior is determined by the MPI
implementation.
 MPI_Send may behave differently with regard to
buffer size, cutoffs and blocking.
 MPI_Recv always blocks until a matching message is
received.
 Know your implementation;
don’t make assumptions!

Issues with send and receive
 Be sure that every receive has a matching send.
 If a process tries to receive a message and there’s no
matching send, the process will block forever (hang).
If the tags don’t match, or if the dest rank doesn’t match the
receiver’s rank, the receive won’t match the send, and the
process will hang.
 Similarly, if a call to MPI_Send blocks and there’s no
matching receive, then the sending process can hang.
If a call to MPI_Send is buffered and there’s no matching
receive, then the message will be lost.
