
2.0 Semantic Terms
Before we get started programming MPI, it is important that you know the vocabulary
used in parallel computing and understand the basic workings of a parallel computing
environment. We will start this course with brief but helpful definitions and explanations to
help guide you on your way to becoming a parallel programmer. If you are already
familiar with all the terms, feel free to skip this chapter.
2.1 Message Passing Paradigm
The Beowulf cluster that you will be writing, compiling, and running your MPI programs
on is called a Distributed Memory System. In this system there is a master node, the
computer that you log into. Connected to the master node is a network of several other
nodes. When you run your MPI program on the master node, the same program is launched
on each of the nodes in the cluster. This way we have access to the processor and memory
of every node, and we can transfer data between the nodes, giving the illusion of one giant
computer. See the illustration below for an example.
[Illustration: a Distributed Memory System, with the master node and compute nodes connected by a network]
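To make this concrete, here is a minimal sketch (our own example, not part of the original tutorial) of what running the same program on every node looks like in C. Each launched copy of the program becomes a process; the rank and size calls are explained in section 2.3 below. Compile with mpicc and launch with, for example, mpirun -np 4 ./hello.

/* Minimal SPMD sketch: the same executable runs on every node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);                /* start the MPI environment */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the environment down */
    return 0;
}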
A program instance that runs on a node is called a Process. When your program is run, a
process is started on each processor in the cluster. These processes communicate with each
other using a system of message passing. Messages are packets of data placed in envelopes
that contain routing information. The message passing system lets us copy data from the
memory of one process into the memory of another. Here is an illustration:

[Illustration: a message being copied from the memory of one process to another]
Communication of a message requires that both processes cooperate in a send and receive
operation. The transfer of data out of a process is called a send, and the acceptance of data
by a process is called a receive.
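As a brief sketch of this cooperation (an assumed example, not from the original text), the following C program sends one integer from process 0 to process 1. The destination rank, the tag, and the communicator make up the envelope routing information mentioned above; run it with at least two processes.

/* Sketch of a cooperating send/receive pair. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data = 0;
    if (rank == 0) {
        data = 42;
        /* send one int from rank 0's application buffer to rank 1;
         * the tag (7 here) labels the message */
        MPI_Send(&data, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* matching receive: copies the message into rank 1's
         * application buffer */
        MPI_Recv(&data, 1, MPI_INT, 0, 7, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}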
2.2 Sending and Receiving
There are two different kinds of buffers in MPI. The application buffer is where the data
for each process is held in memory; it is the address space that holds data to be sent and
received. The system buffer is used when messages need to be stored; whether it comes
into play depends on the type of communication method being used. The system buffer
allows us to send messages in asynchronous mode: an asynchronous send operation is
allowed to complete even though the receiving process may not have received the message
yet. In synchronous mode, a send completes only when the receiving process gives
acknowledgement that the message was received.
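MPI exposes these modes directly. As a minimal sketch (an assumed example, not from the original text), MPI_Ssend performs a synchronous send that completes only once the matching receive has started, while the standard MPI_Send is free to copy the message into a system buffer and return earlier; run with at least two processes.

/* Sketch contrasting synchronous and standard (possibly buffered) sends. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 7;
    if (rank == 0) {
        /* synchronous: blocks until rank 1 has begun its receive */
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* standard: MPI may buffer and return before the receive starts */
        MPI_Send(&value, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&value, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received both messages\n");
    }

    MPI_Finalize();
    return 0;
}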
[Illustration: data being sent from Process 1 to Process 2 through the system buffers]

In the illustration above, the variable in the application buffer of Process 1 is sent through
the network and copied into the system buffer on the receiving process. The data in the
receiving system buffer is then copied into the receiving process's application buffer. There
are two methods for sending and receiving:
Blocking: In blocking communication, a call completes only when it is safe to do so. A
blocking send returns once the data in the application buffer has been copied out (to a
system buffer or to the receiver), so the buffer is available for reuse. A blocking receive
returns once the data has been copied into the receive buffer and is ready to be used.
Non-Blocking: In non-blocking communication, a send returns without waiting for the
receiving process to complete. This allows computation to overlap communication, but
keep in mind that it is not safe to modify or use the application buffer while a non-blocking
operation is in flight. It is up to the programmer to test that the operation is complete and
the application buffer is free for reuse, as the sketch after this list shows.
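Here is a minimal sketch (an assumed example, not from the original text) of a non-blocking exchange in C. MPI_Isend returns immediately; the program may compute while the transfer is in flight, and MPI_Wait marks the point after which the application buffer is safe to reuse.

/* Sketch of non-blocking communication overlapping computation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data = 0;
    MPI_Request req;

    if (rank == 0) {
        data = 42;
        /* returns immediately; the transfer proceeds in the background */
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... do useful computation here, but do NOT touch data ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* now data is safe to reuse */
    } else if (rank == 1) {
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* data is now ready to use  */
        printf("rank 1 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}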
2.3 Communicators and Groups
MPI needs a way to identify all the different processes that run in a parallel program. For
this we have the rank: an integer assigned to each process when it initializes. The
programmer can use the rank to specify a destination or source when sending and
receiving messages. Rank integers start at zero and increase by one for every running
process. A communicator is an object that MPI uses to group collections of processes that
are allowed to communicate with each other. All the processes available to us when we
begin our MPI program are ranked and grouped into one single communicator called
MPI_COMM_WORLD. MPI_COMM_WORLD is the default group when the MPI program is
initialized; we can then divide it into separate groups to work with, as sketched below.
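As a minimal sketch (an assumed example, not from the original text), the following C program divides MPI_COMM_WORLD into two separate groups with MPI_Comm_split, putting even-ranked processes in one new communicator and odd-ranked processes in the other; within each new communicator the processes are re-ranked starting from zero.

/* Sketch of splitting MPI_COMM_WORLD into two groups. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* processes that pass the same "color" land in the same new
     * communicator; ranks are reassigned from zero within it */
    int color = world_rank % 2;
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    int sub_rank;
    MPI_Comm_rank(subcomm, &sub_rank);
    printf("world rank %d has rank %d in group %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}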
