3
Distributed-Memory Programming with MPI
Recall that the world of parallel multiple instruction, multiple data, or MIMD, computers is, for the most part, divided into distributed-memory and shared-memory systems. From a programmer’s point of view, a distributed-memory system consists of a collection of core-memory pairs connected by a network, and the memory associated with a core is directly accessible only to that core. See Figure 3.1. On the other hand, from a programmer’s point of view, a shared-memory system consists of a collection of cores connected to a globally accessible memory, in which each core can have access to any memory location. See Figure 3.2.

In this chapter we’re going to start looking at how to program distributed-memory systems using message-passing. Recall that in message-passing programs, a program running on one core-memory pair is usually called a process, and two processes can communicate by calling functions: one process calls a send function and the other calls a receive function. The implementation of message-passing that we’ll be using is called MPI, which is an abbreviation of Message-Passing Interface. MPI is not a new programming language. It defines a library of functions that can be called from C, C++, and Fortran programs. We’ll learn about some of MPI’s different send and receive functions. We’ll also learn about some “global” communication functions that can involve more than two processes. These functions are called collective communications. In the process of learning about all of these MPI functions, we’ll also learn about some of the fundamental issues involved in writing message-passing programs.
FIGURE 3.1: A distributed-memory system
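To make the send/receive idea concrete, here is a minimal sketch of a C program in which process 1 sends a short greeting and process 0 receives and prints it. It is not one of the chapter’s own examples, but the MPI functions it uses (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize) are the ones we’ll be studying.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(void) {
   char message[100];   /* buffer for the message             */
   int  my_rank;        /* rank (identifier) of this process  */

   MPI_Init(NULL, NULL);                      /* start up MPI        */
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);   /* which process am I? */

   if (my_rank == 1) {
      /* Process 1 calls the send function ...                       */
      sprintf(message, "Greetings from process %d!", my_rank);
      MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 0,
               MPI_COMM_WORLD);
   } else if (my_rank == 0) {
      /* ... and process 0 calls the matching receive function.      */
      MPI_Recv(message, 100, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);
      printf("%s\n", message);
   }

   MPI_Finalize();                            /* shut down MPI       */
   return 0;
}

The sketch assumes it is compiled with an MPI wrapper compiler (for example, mpicc) and started with at least two processes (for example, mpiexec -n 2); with fewer processes, the receive in process 0 would wait forever for a message that is never sent.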