Parallel Programming

Message Passing (MPI)


Explicit Parallelism


Plays a similar role to multithreading for shared memory.

Explicit parallelism is more common with message passing.

The user has explicit control over the processes.

Good: that control can be exploited for performance.

Bad: the user has to manage it.
Distributed Memory - Message Passing

[Figure: processors proc1, proc2, proc3, ..., procN, each with its own
local memory mem1, mem2, mem3, ..., memN, connected by a network.]
Distributed Memory - Message Passing

A variable x, a pointer p, or an array a[] refers to
a different memory location depending on the
processor.


We discuss message passing as a programming
model (it can run on any hardware).
What does the user have to do?


This is what we said for shared memory:

Decide how to decompose the computation into
parallel parts.

Create (and destroy) processes to support that
decomposition.

Add synchronization to make sure
dependencies are covered.

Is the same true for message passing?
SOR Example
for some number of timesteps/iterations {
  for( i=0; i<n; i++ )
    for( j=0; j<n; j++ )
      temp[i][j] = 0.25 *
        ( grid[i-1][j] + grid[i+1][j] +
          grid[i][j-1] + grid[i][j+1] );
  for( i=0; i<n; i++ )
    for( j=0; j<n; j++ )
      grid[i][j] = temp[i][j];
}
Shared Memory

[Figure: grid and temp in shared memory, each divided into row blocks
1, 2, 3, 4, ...; block i is updated by proci.]


Message-Passing Data Distribution (only
middle processes)

[Figure: proc2 holds local copies of block 2 of grid and block 2 of temp;
proc3 holds block 3 of grid and block 3 of temp.]
Is this going to work?

Same code as we used for shared memory

for( i=from; i<to; i++ )
  for( j=0; j<n; j++ )
    temp[i][j] = 0.25*( grid[i-1][j] + grid[i+1][j]
                      + grid[i][j-1] + grid[i][j+1] );

No, we need extra boundary elements for grid.


Data Distribution (only middle processes)

[Figure: as before, proc2 and proc3 hold their blocks of grid and temp,
but each grid block now includes extra boundary rows.]
Is this going to work?

Same code as we used for shared memory


for( i=from; i<to; i++ )
  for( j=0; j<n; j++ )
    temp[i][j] = 0.25*( grid[i-1][j] + grid[i+1][j]
                      + grid[i][j-1] + grid[i][j+1] );

No, on the next iteration we need boundary
elements from our neighbors.
Data Communication (only middle processes)

[Figure: proc2 and proc3 exchange the boundary rows of their grid blocks
over the network.]
Is this now going to work?

Same code as we used for shared memory


for( i=from; i<to; i++ )
  for( j=0; j<n; j++ )
    temp[i][j] = 0.25*( grid[i-1][j] + grid[i+1][j]
                      + grid[i][j-1] + grid[i][j+1] );

No, we need to translate the indices.


Index Translation

for( i=0; i<n/p; i++ )
  for( j=0; j<n; j++ )
    temp[i][j] = 0.25*( grid[i-1][j] + grid[i+1][j]
                      + grid[i][j-1] + grid[i][j+1] );

Remember, all variables are local.
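
One way to make this concrete (a sketch only; the ghost-row layout and the
names N, P, ROWS, and pid are assumptions, not part of the slides) is to give
each local block one extra row above and below for the neighbors' boundary
elements, and keep the local-to-global row mapping in mind:

#define N    1024              /* global grid is N x N (assumed size)        */
#define P    4                 /* number of processes (assumed)              */
#define ROWS (N / P)           /* rows owned by each process                 */

/* Local block: rows 1..ROWS are owned by this process; rows 0 and ROWS+1
   are ghost copies of the neighbors' boundary rows.                         */
double grid[ROWS + 2][N], temp[ROWS + 2][N];

void update_local_block( int pid )
{
  /* local row i on process pid corresponds to global row pid*ROWS + i - 1   */
  for( int i = 1; i <= ROWS; i++ )
    for( int j = 1; j < N - 1; j++ )
      temp[i][j] = 0.25*( grid[i-1][j] + grid[i+1][j]
                        + grid[i][j-1] + grid[i][j+1] );
}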


Index Translation is Optional


Allocate the full arrays on each processor.

Leave indices alone.

Higher memory use.

Sometimes necessary.
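
A sketch of this no-translation variant (the from/to block bounds, the guards
for the outermost rows, and the names N, p, and pid are assumptions made for
illustration):

#define N 1024                       /* full global grid size (assumed)      */

double grid[N][N], temp[N][N];       /* every process allocates everything   */

void update_owned_rows( int pid, int p )
{
  int from = pid * (N / p);          /* first global row owned by pid        */
  int to   = (pid + 1) * (N / p);    /* one past the last owned row          */
  if( from == 0 ) from = 1;          /* leave the fixed boundary rows of     */
  if( to == N )   to = N - 1;        /* the global grid untouched            */

  for( int i = from; i < to; i++ )   /* global indices, exactly as in the    */
    for( int j = 1; j < N - 1; j++ ) /* shared-memory version                */
      temp[i][j] = 0.25*( grid[i-1][j] + grid[i+1][j]
                        + grid[i][j-1] + grid[i][j+1] );
}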
What does the user need to do?


• Divide up the program into parallel parts.
• Create and destroy processes to do the above.
• Partition and distribute the data.
• Communicate data at the right time.
• (Sometimes) perform index translation.

Still need synchronization?

Sometimes, but it often goes hand in hand
with data communication.
Message Passing Systems


Provide process creation and destruction.

Provide message passing facilities (send
and receive, in various flavors) to distribute
and communicate data.

Provide additional synchronization facilities.
MPI (Message Passing Interface)


Is the de facto message passing standard.


Available on virtually all platforms.


Grew out of an earlier message passing
system, PVM, now outdated.
MPI Process Creation/Destruction
MPI_Init( int *argc, char ***argv )
Initiates a computation.
MPI_Finalize()
Terminates a computation.
MPI Process Identification

MPI_Comm_size( comm, &size )


Determines the number of processes.

MPI_Comm_rank( comm, &pid )


pid is set to the process identifier (rank) of the caller.
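
Putting process creation, destruction, and identification together, a minimal
MPI program might look like this (the printed message is only an
illustration):

#include <mpi.h>
#include <stdio.h>

int main( int argc, char *argv[] )
{
  int size, pid;

  MPI_Init( &argc, &argv );                  /* initiate the computation    */
  MPI_Comm_size( MPI_COMM_WORLD, &size );    /* number of processes         */
  MPI_Comm_rank( MPI_COMM_WORLD, &pid );     /* this process's identifier   */

  printf( "process %d of %d\n", pid, size );

  MPI_Finalize();                            /* terminate the computation   */
  return 0;
}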
MPI Basic Send

MPI_Send(buf, count, datatype, dest, tag, comm)


buf: address of send buffer
count: number of elements
datatype: data type of send buffer elements
dest: process id of destination process
tag: message tag (ignore for now)
comm: communicator (ignore for now)
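
For example (just a fragment, assuming the block layout and the names grid,
ROWS, N, and pid from the earlier sketch), sending one boundary row of N
doubles to the next process could look like:

MPI_Send( &grid[ROWS][0],    /* buf: start of our last owned row            */
          N,                 /* count: N elements                           */
          MPI_DOUBLE,        /* datatype: each element is a double          */
          pid + 1,           /* dest: the neighbor below us                 */
          0,                 /* tag                                         */
          MPI_COMM_WORLD );  /* comm: the default communicator              */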
MPI Basic Receive

MPI_Recv(buf, count, datatype, source, tag, comm, &status)

buf: address of receive buffer
count: size of receive buffer in elements
datatype: data type of receive buffer elements
source: source process id or MPI_ANY_SOURCE
tag and comm: ignore for now
status: status object
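
Putting send and receive together, the boundary-row exchange needed by the
SOR example could look roughly like this for a middle process (a sketch: the
ROWS/N layout, the tags, and the neighbor arithmetic are assumptions, and the
first and last processes would need the obvious guards):

#include <mpi.h>

#define N    1024              /* global grid is N x N (assumed)             */
#define P    4                 /* number of processes (assumed)              */
#define ROWS (N / P)           /* rows owned by each process                 */

double grid[ROWS + 2][N];      /* rows 1..ROWS owned; rows 0 and ROWS+1 are  */
                               /* ghost rows for the neighbors' data         */

void exchange_boundaries( int pid )
{
  MPI_Status status;

  /* send our top owned row up; receive the lower neighbor's top row
     into our bottom ghost row */
  MPI_Send( &grid[1][0],        N, MPI_DOUBLE, pid - 1, 0, MPI_COMM_WORLD );
  MPI_Recv( &grid[ROWS + 1][0], N, MPI_DOUBLE, pid + 1, 0, MPI_COMM_WORLD,
            &status );

  /* send our bottom owned row down; receive the upper neighbor's bottom
     row into our top ghost row */
  MPI_Send( &grid[ROWS][0], N, MPI_DOUBLE, pid + 1, 1, MPI_COMM_WORLD );
  MPI_Recv( &grid[0][0],    N, MPI_DOUBLE, pid - 1, 1, MPI_COMM_WORLD,
            &status );
}

Note that pairing blocking sends and receives in this order relies on MPI
buffering the messages; reordering the calls on alternate processes or using
MPI_Sendrecv avoids that.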
