Distributed Memory Programming:
Message Passing, MPI, PVM
Parallel and Distributed Computing
Arfan Shahzad
{ [email protected] }
Distributed Memory Programming
• When two or more computational processes do not share memory, they
cannot communicate by reading and writing shared variables.
• Instead, such processes must exchange data through an explicit
mechanism: message passing.
• This situation requires distributed memory programming.
Distributed Memory Programming cont…
• The communication could be between processes running on the same
node or between processes running on different nodes in a cluster,
but the underlying communication model is the same.
• Interface standards like Message-Passing Interface (MPI) facilitate
distributed memory programming for cluster machines.
• In distributed memory programming, each task owns part of the data,
and other tasks must send a message to the owner in order to update
that part of the data.
• Whether these tasks are on the same node or different nodes in the
cluster, they do not necessarily have a mechanism to read or write to
each other's memory directly.
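This ownership model can be sketched in plain Python (illustrative only; this is not MPI or PVM code): one process owns an array, and other tasks change it solely by sending update messages to the owner, never by writing to its memory directly.

```python
# Sketch of the data-ownership pattern using Python's multiprocessing
# module. The message format (index, value) is an assumption made for
# this example, not part of any message-passing standard.
from multiprocessing import Process, Queue

def owner(requests, results):
    data = [0, 0, 0, 0]          # this process owns the data
    while True:
        msg = requests.get()     # wait for an update message
        if msg is None:          # sentinel: no more updates
            break
        index, value = msg
        data[index] = value      # only the owner modifies its data
    results.put(data)            # report the final state

if __name__ == "__main__":
    requests, results = Queue(), Queue()
    p = Process(target=owner, args=(requests, results))
    p.start()
    requests.put((1, 42))        # "please set element 1 to 42"
    requests.put((3, 7))
    requests.put(None)
    print(results.get())         # prints [0, 42, 0, 7]
    p.join()
```

Because all updates arrive as messages at a single owner, no locking of the data itself is needed; the message queue serializes the writes.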
• In this regard, the following paradigms play an important role:
1. Message-Passing Interface (MPI)
2. Parallel Virtual Machine (PVM)
Message Passing Interface
• Numerous programming languages and libraries based on the
message-passing paradigm have been developed for explicit parallel
programming.
• The message-passing programming paradigm is one of the oldest and
most widely used approaches for programming parallel computers.
Message Passing Interface: Send and Receive
• Since interactions are accomplished by sending and receiving
messages, the basic operations in the message-passing programming
paradigm are send and receive.
• In their simplest form, the prototypes of these operations are defined
as follows:
• send(void *sendbuf, int nelems, int dest)
• receive(void *recvbuf, int nelems, int source)
The sendbuf parameter points to a buffer that stores the data to be sent
The recvbuf parameter points to a buffer that stores the data to be received
The nelems parameter is the number of data items to be sent or received
The dest parameter is the identifier of the process that receives the data
The source parameter is the identifier of the process that sends the data
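The send/receive pattern behind these prototypes can be illustrated with a small runnable sketch in Python (assumed here purely for demonstration; the real MPI calls are MPI_Send and MPI_Recv from an MPI library): two processes exchange data over a pipe, each blocking until the matching message arrives.

```python
# Minimal sketch of the send/receive pattern using Python's
# multiprocessing module; conn.send/conn.recv play the roles of the
# send(sendbuf, nelems, dest) and receive(recvbuf, nelems, source)
# prototypes above, with the pipe endpoint standing in for dest/source.
from multiprocessing import Process, Pipe

def worker(conn):
    # The receiving process: block until a message arrives,
    # then send a reply back over the same connection.
    data = conn.recv()                # analogous to receive(...)
    conn.send([x * 2 for x in data])  # analogous to send(...)
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send([1, 2, 3])       # send the data to the worker
    print(parent_conn.recv())         # prints [2, 4, 6]
    p.join()
```

As in the prototypes, both operations are blocking in their simplest form: recv() does not return until a matching message has been delivered.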
Parallel Virtual Machine (PVM)
• PVM (Parallel Virtual Machine) is a software package that permits a
heterogeneous collection of Unix and/or Windows computers hooked
together by a network to be used as a single large parallel computer.
• Thus large computational problems can be solved more cost-effectively
by using the aggregate power and memory of many computers.
• PVM enables users to exploit their existing computer hardware to solve
much larger problems at minimal additional cost.
• Hundreds of sites around the world are using PVM to solve important
scientific, industrial, and medical problems in addition to PVM's use as an
educational tool to teach parallel programming.
• With tens of thousands of users, PVM has become a de facto standard
for distributed computing worldwide.
• Advantage: PVM allows applications to use the most appropriate
computing model, either for the entire application or for individual
sub-algorithms.
• The PVM system is composed of a suite of user interface primitives
and supporting software that together enable concurrent computing
on loosely coupled networks of processing elements.