
84 CHAPTER 3 Distributed-Memory Programming with MPI

[Figure 3.2: A shared-memory system. Multiple CPUs connected through an interconnect to a single, shared memory.]

fundamental issues involved in writing message-passing programs, issues such as data partitioning and I/O in distributed-memory systems. We'll also revisit the issue of parallel program performance.

3.1 GETTING STARTED


Perhaps the first program that many of us saw was some variant of the “hello, world”
program in Kernighan and Ritchie’s classic text [29]:
#include <stdio.h>

int main(void) {
    printf("hello, world\n");
    return 0;
}

Let’s write a program similar to “hello, world” that makes some use of MPI. Instead
of having each process simply print a message, we’ll designate one process to do the
output, and the other processes will send it messages, which it will print.
In parallel programming, it’s common (one might say standard) for the processes
to be identified by nonnegative integer ranks. So if there are p processes, the pro-
cesses will have ranks 0, 1, 2, . . . , p − 1. For our parallel “hello, world,” let’s make
process 0 the designated process, and the other processes will send it messages. See
Program 3.1.

3.1.1 Compilation and execution


The details of compiling and running the program depend on your system, so you
may need to check with a local expert. However, recall that when we need to be
explicit, we’ll assume that we’re using a text editor to write the program source, and
