
CS677: Lecture 4

Parallel Programming

August 13, 2024


References for MPI
• (CSA) D. E. Culler, J. P. Singh, and A. Gupta, Parallel Computer Architecture:
A Hardware/Software Approach, Morgan Kaufmann, 1998.
• (GGKK) A. Grama, A. Gupta, G. Karypis, and V. Kumar, Introduction to
Parallel Computing, 2nd Ed., Addison-Wesley, 2003.
• (MPI) Marc Snir, Steve W. Otto, Steven Huss-Lederman, David W. Walker,
and Jack Dongarra, MPI: The Complete Reference, 2nd Ed., Volume 1: The MPI Core.
• (GLS) William Gropp, Ewing Lusk, and Anthony Skjellum, Using MPI:
Portable Parallel Programming with the Message-Passing Interface, 3rd Ed.,
MIT Press, 2014.
• (PP) Peter S. Pacheco, An Introduction to Parallel Programming, Morgan
Kaufmann, 2011.

MPI Implementations
The MPI standard (the "MPI report") has several implementations:
• MPICH (ANL)
• MVAPICH (OSU)
• OpenMPI
• Intel MPI
• Cray MPI

H.W.: Install MPI on your Laptop

• Linux or Linux VM on Windows


– apt/snap/yum/brew
• Windows
– No support

• https://www.mpich.org/documentation/guides/

Programming
• Shell scripts (e.g. bash)
• ssh basics
– E.g. ssh -X
–…
• Mostly in C/C++
• Compilation, Makefiles, ...
• Linux environment variables
– PATH
– LD_LIBRARY_PATH
–…
MPI Programming

mpicc -o program.x program.c


mpirun -np 2 ./program.x
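It may help to see what a complete, if trivial, MPI program looks like before the communication examples. The following is a minimal sketch (not a course file) using the standard MPI_Init / MPI_Comm_rank / MPI_Comm_size / MPI_Finalize skeleton that every MPI program shares:

/* hello.c : minimal MPI program (illustrative sketch, not a course file) */
#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
    int rank, size;

    MPI_Init (&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);   /* this process's rank (id) */
    MPI_Comm_size (MPI_COMM_WORLD, &size);   /* total number of processes */

    printf ("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize ();                         /* shut down MPI */
    return 0;
}

Compiled with mpicc -o hello.x hello.c and run with mpirun -np 2 ./hello.x, each of the two processes prints its own rank.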
Communication using Messages
[Figure: four processes, each on its own core with its own memory; every process holds local data and steps through the same instructions (Instruction 1, Instruction 2, ...) on that data; the slide labels this SIMD.]
Message Passing
[Figure: timeline of a message exchange between Process 0 and Process 1.]
Simplest Communication Primitives

• MPI_Send
• MPI_Recv

MPI Programming

SENDER:
int MPI_Send (const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm)

RECEIVER:
int MPI_Recv (void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm, MPI_Status *status)
MPI Programming
/* Assumes MPI_Init has been called; char message[20]; MPI_Status status;
   int myrank; are declared; and <mpi.h>, <stdio.h>, <string.h> are included. */
MPI_Comm_rank (MPI_COMM_WORLD, &myrank);

// Sender process
if (myrank == 0)          /* code for process 0 */
{
    strcpy (message, "Hello, there");
    MPI_Send (message, strlen(message)+1, MPI_CHAR, 1, 99,
              MPI_COMM_WORLD);
}
// Receiver process
else if (myrank == 1)     /* code for process 1 */
{
    MPI_Recv (message, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
    printf ("received: %s\n", message);
}
MPI – Parallel Sum
Assume the data array resides in the memory of process 0 initially
/* Assumes int data[N]; int myrank, size, start; MPI_Status status; are declared,
   size holds the number of processes (from MPI_Comm_size), and N is a multiple of size. */
MPI_Comm_rank (MPI_COMM_WORLD, &myrank);

// Sender process
if (myrank == 0)    /* code for process 0 */
{
    for (int rank = 1; rank < size; rank++) {
        start = rank * (N/size);   /* offset in elements: pointer arithmetic on an
                                      int* already accounts for sizeof(int) */
        MPI_Send (data+start, N/size, MPI_INT, rank, 99, MPI_COMM_WORLD);
    }
}
else                /* code for processes 1 … size-1 */
{
    /* datatype must match the send: MPI_INT, not MPI_CHAR */
    MPI_Recv (data, N/size, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
}
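The slide shows only the distribution step. A possible continuation (a sketch, not taken from the slides) in which every process sums its own chunk and rank 0 gathers the partial sums with the same two primitives:

/* Sketch: assumes the distribution above has completed, so rank 0's chunk is
   data[0 .. N/size-1] and every other rank received its chunk at the start of
   its local data array. */
long local_sum = 0;
for (int i = 0; i < N/size; i++)
    local_sum += data[i];

if (myrank == 0) {
    long total = local_sum, partial;
    for (int rank = 1; rank < size; rank++) {
        MPI_Recv (&partial, 1, MPI_LONG, rank, 100, MPI_COMM_WORLD, &status);
        total += partial;
    }
    printf ("sum = %ld\n", total);
} else {
    MPI_Send (&local_sum, 1, MPI_LONG, 0, 100, MPI_COMM_WORLD);
}

This hand-written gather is what the collective MPI_Reduce does in a single call; the explicit version stays within the two primitives introduced so far.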
MPI Timing
double stime = MPI_Wtime();

/* ... code to be timed ... */

double etime = MPI_Wtime();
printf ("elapsed: %f seconds\n", etime - stime);   /* MPI_Wtime returns seconds */

Code 1
• Compile
– mpicc -o simple simple.c
• Execute
– mpirun -np 2 ./simple 4

Code 2
• Compile
– mpicc -o findMin findMin.c
• Execute
– mpirun -np 2 ./findMin 4
– time mpirun -np 2 ./findMin 4
– time mpirun -np 2 ./findMin 4000
– time mpirun -np 8 ./findMin 4000
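findMin.c itself is not reproduced in the slides. The following is only a guess at a program consistent with the commands above; in particular, treating the command-line argument as the number of elements each process searches is an assumption:

/* findMin.c : illustrative sketch only; the actual course file may differ.
   The command-line argument is assumed to be the number of elements per process. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
    int rank, size;
    MPI_Status status;

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    MPI_Comm_size (MPI_COMM_WORLD, &size);

    int n = (argc > 1) ? atoi(argv[1]) : 1000;

    /* Each process generates its own data and finds its local minimum. */
    srand (rank + 1);
    int local_min = rand();
    for (int i = 1; i < n; i++) {
        int v = rand();
        if (v < local_min) local_min = v;
    }

    if (rank == 0) {
        /* Rank 0 collects every other rank's local minimum. */
        int global_min = local_min, m;
        for (int r = 1; r < size; r++) {
            MPI_Recv (&m, 1, MPI_INT, r, 0, MPI_COMM_WORLD, &status);
            if (m < global_min) global_min = m;
        }
        printf ("global minimum = %d\n", global_min);
    } else {
        MPI_Send (&local_min, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize ();
    return 0;
}

Running it under time with -np 2 and -np 8, as above, produces the data for the speedup and efficiency measures on the next slide.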

Performance Measure
• Speedup
    S_P = Time(1 processor) / Time(P processors)

• Efficiency
    E_P = S_P / P
Domain Decomposition

Nearest Neighbour Computations

Code 3
• Compile
– mpicc -o findMinNeighbour findMinNeighbour.c
• Execute
– mpirun -np 2 ./findMinNeighbour 4
– time mpirun -np 2 ./findMinNeighbour 4
– time mpirun -np 2 ./findMinNeighbour 4000
– time mpirun -np 8 ./findMinNeighbour 4000
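findMinNeighbour.c is likewise not shown. The following sketch illustrates the nearest-neighbour exchange pattern from the two previous slides under stated assumptions: rank, size, local_min and status are declared as in the earlier sketches, and each rank exchanges its value only with the ranks directly before and after it:

/* Illustrative neighbour exchange, not the course file. MPI_PROC_NULL turns the
   send/receive into a no-op at the two ends of the process row. */
int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;
int recv_min = local_min;

/* Send my local minimum to the right neighbour, receive the left neighbour's. */
MPI_Sendrecv (&local_min, 1, MPI_INT, right, 0,
              &recv_min,  1, MPI_INT, left,  0,
              MPI_COMM_WORLD, &status);

if (recv_min < local_min)
    local_min = recv_min;   /* combine with the neighbour's value */

MPI_Sendrecv is used rather than separate MPI_Send and MPI_Recv calls so that neighbouring ranks cannot deadlock waiting on each other.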

Thank You

