HP MPI User's Guide
Eighth Edition
Notice
Reproduction, adaptation, or translation without prior written
permission is prohibited, except as allowed under the copyright laws.
The information contained in this document is subject to change without
notice.
Hewlett-Packard makes no warranty of any kind with regard to this
material, including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose. Hewlett-Packard
shall not be liable for errors contained herein or for incidental or
consequential damages in connection with the furnishing, performance
or use of this material.
Parts of this book came from Cornell Theory Center’s web document.
That document is copyrighted by the Cornell Theory Center.
Parts of this book came from MPI: A Message Passing Interface. That
book is copyrighted by the University of Tennessee. These sections were
copied by permission of the University of Tennessee.
Parts of this book came from MPI Primer/Developing with LAM. That
document is copyrighted by the Ohio Supercomputer Center. These
sections were copied by permission of the Ohio Supercomputer Center.
Contents
Preface
Platforms supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Notational conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Documentation resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Credits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
1. Introduction
The message passing model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
MPI concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Point-to-point communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Collective operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
MPI datatypes and packing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Multilevel parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Advanced topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2. Getting started
Configuring your environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Compiling and running your first application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Building and running on a single host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Directory structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3. Understanding HP MPI
Compiling applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Compilation utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Autodouble functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
64-bit support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Thread-compliant library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Building Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Running applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Running on multiple hosts using remote shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Running on multiple hosts using prun (Quadrics system) . . . . . . . . . . . . . . . . . . . . . 36
Types of applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Runtime environment variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Runtime utility commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
HyperFabric/HyperMessaging Protocol (HMP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Communicating using daemons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
IMPI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Native language support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4. Profiling
Using counter instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Creating an instrumentation profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Viewing ASCII instrumentation data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Using the profiling interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Fortran profiling interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5. Tuning
MPI_FLAGS options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Message latency and bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Multiple network interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Processor subscription . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
MPI routine selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Multilevel parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Coding considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A. Example applications
send_receive.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
send_receive output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
ping_pong.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
ping_pong output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
compute_pi.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
compute_pi output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
master_worker.f90 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
master_worker output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
cart.C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
cart output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
communicator.c. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
communicator output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
multi_par.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
multi_par.f output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
io.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
io output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
thread_safe.c. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
thread_safe output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
sort.C. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
sort.C output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
compute_pi_spawn.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
compute_pi_spawn.f output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
B. Standard-flexibility in HP MPI
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Figures
Figure 3-1. Daemon communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69
Figure 4-1. ASCII instrumentation profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .77
Figure 5-1. Multiple network interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
Figure A-1. Array partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
Tables
Table 1. Revision history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Table 2. Typographic conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Table 1-1. Six commonly used MPI routines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Table 1-2. MPI blocking and nonblocking calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Table 2-1. Organization of the /opt/mpi directory . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
Table 2-2. Man page categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24
Table 3-1. Default compilers for HP-UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
Table 3-2. Default compilers for Linux Itanium2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
Table 3-3. Default compilers for Linux IA-32 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
Table 3-4. Default compilers for Tru64UNIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
Table 3-5. Compilation environment variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
Table 5-1. Subscription types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Table 6-1. Non-buffered messages and deadlock . . . . . . . . . . . . . . . . . . . . . . . . . . . .108
Table A-1. Example applications shipped with HP MPI . . . . . . . . . . . . . . . . . . . . . .116
Table B-1. HP MPI implementation of standard-flexible issues . . . . . . . . . . . . . . . .164
Preface
This guide describes the HP MPI (version 2.0) implementation of the
Message Passing Interface (MPI) standard. The guide helps you use HP
MPI to develop and run parallel applications.
You should already have experience developing UNIX applications. You
should also understand the basic concepts behind parallel processing, be
familiar with MPI, and with the MPI 1.2 and MPI-2 standards (MPI: A
Message-Passing Interface Standard and MPI-2: Extensions to the
Message-Passing Interface, respectively).
You can access HTML versions of the MPI 1.2 and 2 standards at
http://www.mpi-forum.org. This guide supplements the material in the
MPI standards and MPI: The Complete Reference.
Some sections in this book contain command line examples used to
demonstrate HP MPI concepts. These examples use the /bin/csh syntax
for illustration purposes.
Platforms supported
HP MPI 2.0 is supported on:
• Workstations
• Midrange servers
• High-end servers
HP MPI 2.0 for HP-UX is supported on HP-UX 11i or later operating
systems on PA-RISC 2.0; and HP-UX 11i Version 1.6 or later operating
systems on Itanium-based platforms.
HP MPI 2.0 for Linux is supported on Red Hat Linux V7.2 operating
systems on Intel IA-32 and Itanium2 platforms. HP MPI 2.0 for Linux
was built and tested with Kernel series 2.4 and glibc 2.2.
HP MPI 2.0 for Tru64UNIX is supported on AlphaServers.
Notational conventions
This section describes notational conventions used in this book.
Table 2 Typographic conventions
Documentation resources
Documentation resources include:
Credits
HP MPI is based on MPICH from Argonne National Laboratory and
LAM from the University of Notre Dame and Ohio Supercomputer
Center.
HP MPI includes ROMIO, a portable implementation of MPI I/O
developed at the Argonne National Laboratory.
1 Introduction
This chapter contains the syntax for some MPI functions. Refer to MPI: A
Message-Passing Interface Standard for syntax and usage details for all
MPI standard functions. Also refer to MPI: A Message-Passing Interface
Standard and to MPI: The Complete Reference for in-depth discussions of
MPI concepts. The introductory topics covered in this chapter include:
— Point-to-point communication
— Collective operations
— MPI datatypes and packing
— Multilevel parallelism
— Advanced topics
MPI concepts
The primary goals of MPI are efficient communication and portability.
Although several message-passing libraries exist on different systems,
MPI is popular for the following reasons:
• Point-to-point communications
• Collective operations
• Process groups
• Communication contexts
• Process topologies
• Datatype manipulation.
Although the MPI library contains a large number of routines, you can
design a large number of applications by using the six routines listed in
Table 1-1.
Point-to-point communication
Point-to-point communication involves sending and receiving messages
between two processes. This is the simplest form of data transfer in a
message-passing model and is described in Chapter 3, “Point-to-Point
Communication” in the MPI 1.0 standard.
The performance of point-to-point communication is measured in terms
of total transfer time. The total transfer time is defined as
total_transfer_time = latency + (message_size/bandwidth)
where
latency Specifies the time between the initiation of the data
transfer in the sending process and the arrival of the
first byte in the receiving process.
message_size Specifies the size of the message in Mbytes.
bandwidth Denotes the reciprocal of the time needed to transfer a
byte. Bandwidth is normally expressed in Mbytes per
second.
Low latencies and high bandwidths lead to better performance.
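For example, using illustrative values rather than measured figures, a system with a latency of 50 microseconds and a bandwidth of 100 Mbytes per second transfers a 1-Mbyte message in approximately

total_transfer_time = 0.00005 + (1/100) = 0.01005 seconds

With these numbers the bandwidth term dominates the cost of large messages, while latency dominates the cost of very short messages.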
Communicators
A communicator is an object that represents a group of processes and
their communication medium or context. These processes exchange
messages to transfer data. Communicators encapsulate a group of
processes such that communication is restricted to processes within that
group.
The default communicators provided by MPI are MPI_COMM_WORLD and
MPI_COMM_SELF. MPI_COMM_WORLD contains all processes that are
running when an application begins execution. Each process is the single
member of its own MPI_COMM_SELF communicator.
Communicators that allow processes within a group to exchange data are
termed intracommunicators. Communicators that allow processes in two
different groups to exchange data are called intercommunicators.
Many MPI applications depend upon knowing the number of processes
and the process rank within a given communicator. There are several
communication management functions; two of the more widely used are
MPI_Comm_size and MPI_Comm_rank. The process rank is a unique
integer that identifies each process within its communicator.
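For illustration only (this is not one of the shipped example applications), a minimal program that queries both values for MPI_COMM_WORLD might look like this:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    /* Number of processes in the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* This process's rank within the communicator */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}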
NOTE You should not assume message buffering between processes because the
MPI standard does not mandate a buffering strategy. HP MPI does
sometimes use buffering for MPI_Send and MPI_Rsend, but it is
dependent on message size. Deadlock situations can occur when your
code uses standard send operations and assumes buffering behavior for
standard communication mode. Refer to “Frequently asked questions” on
page 112 for an example of how to resolve a deadlock situation.
Table 1-2 MPI blocking and nonblocking calls

Blocking mode     Nonblocking mode
MPI_Send          MPI_Isend
MPI_Bsend         MPI_Ibsend
MPI_Ssend         MPI_Issend
MPI_Rsend         MPI_Irsend
MPI_Recv          MPI_Irecv
Nonblocking calls have the same arguments, with the same meaning as
their blocking counterparts, plus an additional argument for a request.
To code a standard nonblocking send, use
MPI_Isend(void *buf, int count, MPI_Datatype dtype, int dest,
    int tag, MPI_Comm comm, MPI_Request *req);
where
req Specifies the request used by a completion routine
when called by the application to complete the send
operation.
To complete nonblocking sends and receives, you can use MPI_Wait or
MPI_Test. The completion of a send indicates that the sending process is
free to access the send buffer. The completion of a receive indicates that
the receive buffer contains the message, the receiving process is free to
access it, and the status object, that returns information about the
received message, is set.
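The following sketch is illustrative only and is not one of the shipped examples; it shows rank 0 posting a nonblocking send that is later completed with MPI_Wait, while rank 1 posts the matching nonblocking receive. Run it with at least two processes.

#include <mpi.h>

int main(int argc, char *argv[])
{
    int i, rank, buf[100];
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        for (i = 0; i < 100; i++)
            buf[i] = i;
        MPI_Isend(buf, 100, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Work that does not modify buf can overlap the send here */
        MPI_Wait(&req, &status);
    } else if (rank == 1) {
        MPI_Irecv(buf, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);
    }
    MPI_Finalize();
    return 0;
}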
Collective operations
Applications may require coordinated operations among multiple
processes. For example, all processes need to cooperate to sum sets of
numbers distributed among them.
Communication
Collective communication involves the exchange of data among all
processes in a group. The communication can be one-to-many,
many-to-one, or many-to-many.
The single originating process in the one-to-many routines or the single
receiving process in the many-to-one routines is called the root.
Collective communications have three basic patterns:
Broadcast and Scatter Root sends data to all processes,
including itself.
Gather Root receives data from all processes,
including itself.
Allgather and Alltoall Each process communicates with
each process, including itself.
The syntax of the MPI collective functions is designed to be consistent
with point-to-point communications, but collective functions are more
restrictive than point-to-point functions. Some of the important
restrictions to keep in mind are:
• The amount of data sent must exactly match the amount of data
specified by the receiver.
• Collective functions come in blocking versions only.
• Collective functions do not use a tag argument meaning that
collective calls are matched strictly according to the order of
execution.
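As an illustration (not one of the shipped examples), the following program broadcasts a value from the root to every process. Note that every process, including the root, makes the same MPI_Bcast call with identical count and datatype arguments.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        value = 42;    /* the root supplies the data */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Rank %d received %d\n", rank, value);
    MPI_Finalize();
    return 0;
}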
Computation
Computational operations do global reduction operations, such as sum,
max, min, product, or user-defined functions across all members of a
group. There are a number of global reduction functions:
Reduce Returns the result of a reduction at one node.
All–reduce Returns the result of a reduction at all nodes.
Reduce-Scatter Combines the functionality of reduce and scatter
operations.
Scan Performs a prefix reduction on data distributed across
a group.
Section 4.9, “Global Reduction Operations” in the MPI 1.0 standard
describes each of these functions in detail.
Reduction operations are binary and are only valid on numeric data.
Reductions are always associative but may or may not be commutative.
You can select a reduction operation from a predefined list (refer to
section 4.9.2 in the MPI 1.0 standard) or define your own operation. The
operations are invoked by placing the operation name, for example
MPI_SUM or MPI_PROD, in op as described in the MPI_Reduce syntax
below.
To implement a reduction, use
MPI_Reduce(void *sendbuf, void *recvbuf, int count,
MPI_Datatype dtype, MPI_Op op, int root, MPI_Comm comm);
where
sendbuf Specifies the address of the send buffer.
recvbuf Denotes the address of the receive buffer.
count Indicates the number of elements in the send buffer.
dtype Specifies the datatype of the send and receive buffers.
op Specifies the reduction operation.
root Indicates the rank of the root process.
comm Designates the communication context that identifies a
group of processes.
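For illustration only, the following program sums the ranks of all processes; the result is returned at the root, rank 0.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Every process contributes its rank; only rank 0 receives the sum */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks: %d\n", sum);
    MPI_Finalize();
    return 0;
}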
Synchronization
Collective routines return as soon as their participation in a
communication is complete. However, the return of the calling process
does not guarantee that the receiving processes have completed or even
started the operation.
To synchronize the execution of processes, call MPI_Barrier.
MPI_Barrier blocks the calling process until all processes in the
communicator have called it. This is a useful approach for separating two
stages of a computation so messages from each stage do not overlap.
To implement a barrier, use
MPI_Barrier(MPI_Comm comm);
where
comm Identifies a group of processes and a communication
context.
For example, “cart.C” on page 129 uses MPI_Barrier to synchronize data
before printing.
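As a further illustration (not taken from the shipped examples), the following program separates two stages of execution with a barrier; no process executes the second printf until all processes have reached the MPI_Barrier call.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Rank %d: stage one complete\n", rank);
    /* No process continues past this call until all processes
       in MPI_COMM_WORLD have reached it */
    MPI_Barrier(MPI_COMM_WORLD);
    printf("Rank %d: stage two begins\n", rank);
    MPI_Finalize();
    return 0;
}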
Multilevel parallelism
By default, processes in an MPI application can only do one task at a
time. Such processes are single-threaded processes. This means that
each process has an address space together with a single program
counter, a set of registers, and a stack.
A process with multiple threads has one address space, but each process
thread has its own counter, registers, and stack.
Multilevel parallelism refers to MPI processes that have multiple
threads. Processes become multithreaded through calls to multithreaded
libraries, parallel directives and pragmas, and auto-compiler parallelism.
Multilevel parallelism is beneficial for problems you can decompose into
logical parts for parallel execution; for example, a looping construct that
spawns multiple threads to do a computation and joins after the
computation is complete.
The example program, “multi_par.f ” on page 135 is an example of
multilevel parallelism.
Advanced topics
This chapter only provides a brief introduction to basic MPI concepts.
Advanced MPI topics include:
• Error handling
• Process topologies
• User-defined datatypes
• Process grouping
• Communicator attribute caching
• The MPI profiling interface
To learn more about the basic concepts discussed in this chapter and
advanced MPI topics refer to MPI: The Complete Reference and MPI: A
Message-Passing Interface Standard.
2 Getting started
This chapter describes how to get started quickly using HP MPI. The
semantics of building and running a simple MPI program are described
for single and multiple hosts. You learn how to configure your
environment before running your program, and you become familiar with
the HP MPI directory structure.
Compiling and running your first application
#include <stdio.h>
#include <mpi.h>
main(argc, argv)
int argc;
char *argv[];
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
    printf("Hello world! I'm %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    exit(0);
}
Directory structure
All HP MPI files are stored in the /opt/mpi directory. The directory
structure is organized as described in Table 2-1.
If you move the HP MPI installation directory from its default location in
/opt/mpi, set the MPI_ROOT environment variable to point to the new
location. Refer to “Configuring your environment” on page 19.
Subdirectory Contents
mpiclean.1, mpidebug.1, mpienv.1, mpiexec.1, mpijob.1,
mpimtsafe.1, mpirun.1, mpistdio.1: Describe runtime utilities,
runtime environment variables, debugging, and the thread-safe
and diagnostic libraries.
3 Understanding HP MPI
• Compiling applications
— Compilation utilities
— Autodouble functionality
— 64-bit support
— Thread-compliant library
• Running applications
Compiling applications
The compiler you use to build HP MPI applications depends upon which
programming language you use. The HP MPI compiler utilities are shell
scripts that invoke the appropriate native compiler. You can pass the
pathname of the MPI header files using the -I option and link an MPI
library (for example, the diagnostic or thread-compliant library) using
the -Wl, -L or -l option.
By default, HP MPI compiler utilities include a small amount of debug
information in order to allow the TotalView debugger to function.
However, certain compiler options are incompatible with this debug
information. Use the -notv option to exclude debug information. The
-notv option will also disable TotalView usage on the resulting
executable. The -notv option applies to archive libraries only.
HP MPI 2.0 now offers a -show option to the compiler wrappers. When
compiling by hand, run mpicc -show (for example) to print the exact
command line the wrapper would execute, without actually running it.
Compilation utilities
HP MPI provides separate compilation utilities and default compilers for
the languages shown in the following tables.
Table 3-1 Default compilers for HP-UX
C mpicc /opt/ansic/bin/cc
C mpicc /usr/bin/cc
Even though the mpiCC and mpif90 compilation utilities are shipped
with HP MPI, all C++ and Fortran 90 applications use C and Fortran 77
bindings respectively.
If you want to use a compiler other than the default one assigned to each
utility, set the corresponding environment variables shown in Table 3-5.
Table 3-5 Compilation environment variables
Utility     Environment variable
mpicc       MPI_CC
mpiCC       MPI_CXX
mpif77      MPI_F77
mpif90      MPI_F90
Autodouble functionality
HP MPI 2.0 supports Fortran programs compiled 64-bit with any of the
following options:
For HP-UX:
• +i8
• +r8
• +autodbl4
• +autodbl
For Linux Itanium2:
• -i2
Sets the default KIND of integer variables to 2.
• -i4
Sets the default KIND of integer variables to 4.
• -i8
Sets the default KIND of integer variables to 8.
• -r8
Sets the default size of REAL to 8 bytes.
• -r16
Sets the default size of REAL to 16 bytes.
• -autodouble
Same as -r8.
For Tru64UNIX:
• -r8
Defines REAL declarations, constants, functions, and intrinsics as
DOUBLE PRECISION (REAL*8), and defines COMPLEX
declarations, constants, functions, and intrinsics as DOUBLE
COMPLEX (COMPLEX*16). This option is the same as the
-real_size 64 option.
• -r16
Defines REAL and DOUBLE PRECISION declarations, constants,
functions, and intrinsics as REAL*16. For f90, it also defines
COMPLEX and DOUBLE COMPLEX declarations, constants,
functions, and intrinsics as COMPLEX*32. This option is the same
as the -real_size 128 option.
• -i8
Makes default integer and logical variables 8-bytes long (same as the
-integer_size 64 option). The default is -integer_size 32.
The decision of how the Fortran arguments will be interpreted by the
MPI library is made at link time.
If the mpif90 compiler wrapper is supplied with one of the above options
at link time, the necessary object files will automatically link, informing
MPI how to interpret the Fortran arguments.
The following MPI routines accept user-defined callback functions:

• MPI_Op_create()
• MPI_Errhandler_create()
• MPI_Keyval_create()
• MPI_Comm_create_errhandler()
• MPI_Comm_create_keyval()
• MPI_Win_create_errhandler()
• MPI_Win_create_keyval()
The user-defined callbacks passed to these functions should accept
normal-sized arguments; the library calls them internally and passes
them normal-sized data types.
64-bit support
HP-UX 11.i and higher is available as a 32- and 64-bit operating system.
You must run 64-bit executables on the 64-bit system (though you can
build 64-bit executables on the 32-bit system).
HP MPI supports a 64-bit version of the MPI library on platforms
running HP-UX 11.i and higher. Both 32- and 64-bit versions of the
library are shipped with HP-UX 11.i and higher. For HP-UX 11i and
higher, you cannot mix 32-bit and 64-bit executables in the same
application.
The mpicc and mpiCC compilation commands link the 64-bit version of
the library if you compile with the +DA2.0W or +DD64 options. Use the
following syntax:
[mpicc | mpiCC] [+DA2.0W | +DD64] -o filename filename.c
When you use mpif90, compile with the +DA2.0W option to link the 64-bit
version of the library. Otherwise, mpif90 links the 32-bit version. For
example, to compile the program myprog.f90 and link the 64-bit library
enter:
% mpif90 +DA2.0W -o myprog myprog.f90
Thread-compliant library
HP MPI provides a thread-compliant library. By default, the
non-thread-compliant library (libmpi) is used when running HP MPI jobs.
Linking to the thread-compliant library (libmtmpi) is now required only
for applications that have multiple threads making MPI calls
simultaneously. In previous releases, linking to the thread-compliant
library was required for multithreaded applications even if only one
thread was making an MPI call at a time. See Table B-1 on page 164.
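As an illustration only (not one of the shipped examples), an application whose threads make MPI calls concurrently can request the corresponding support level at initialization with the MPI-2 routine MPI_Init_thread, and must be linked against the thread-compliant library:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int provided;

    /* Request full multithreaded support and check what is granted */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("MPI_THREAD_MULTIPLE is not available\n");
    /* Threads created here may call MPI concurrently only if
       MPI_THREAD_MULTIPLE was provided */
    MPI_Finalize();
    return 0;
}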
Building Applications
This example shows how to build hello_world.c prior to running.
On HP-UX:
On Linux:
On Tru64UNIX:
Running applications
This section introduces the methods to run your HP MPI application.
Using one of the mpirun methods is required. The examples below
demonstrate the basic methods. Refer to “mpirun (mpirun.all)” on
page 51 for all the mpirun command line options.
There are three methods you can use to start your application:
• Use mpirun with the -np # option and the name of your program. For
example,
% $MPI_ROOT/bin/mpirun -np 4 hello_world
starts an executable file named hello_world with four processes. This
is the recommended method to run applications on a single host with
a single executable file.
• Use mpirun with an appfile. For example,
% $MPI_ROOT/bin/mpirun -f appfile
where -f appfile specifies a text file (appfile) that is parsed by
mpirun and contains process counts and a list of programs.
You can use an appfile when you run a single executable file on a
single host and you must use this appfile method when you run on
multiple hosts or run multiple executables. For details about
building your appfile, refer to “Creating an appfile” on page 59.
• Use mpirun with -prun using the Quadrics Elan3 communication
processor on Linux or Tru64UNIX. For example,
% $MPI_ROOT/bin/mpirun [mpirun options] -prun [prun
options]
This method is only supported when linking with shared libraries.
This method allows full MPI-2 functionality. Some features like
mpirun -stdio processing are still unavailable.
The -np option is not allowed with -prun. The following mpirun
options are allowed with -prun:
Running on multiple hosts using remote shell

Add an entry for wizard in the .rhosts file on jawbone and an entry for
jawbone in the .rhosts file on wizard. In addition to the entries in the
.rhosts file, ensure that your remote machine permissions are set up so
that you can use the remsh command to that machine. Refer to the
HP-UX remsh(1) man page for details.
Step 2. Ensure that the executable is accessible from each host, either by placing
it in a shared directory or by copying it to a local directory on each host.
The appfile should contain a separate line for each host. Each line
specifies the name of the executable file and the number of processes to
run on the host. The -h option is followed by the name of the host where
the specified processes must be run. Instead of using the host name, you
may use its IP address.
% $MPI_ROOT/bin/mpirun -f my_appfile
Notice that processes 0 and 1 run on jawbone, the local host, while
processes 2 and 3 run on wizard. HP MPI guarantees that the ranks of
the processes in MPI_COMM_WORLD are assigned and sequentially
ordered according to the order the programs appear in the appfile. The
appfile in this example, my_appfile, describes the local host on the first
line and the remote host on the second line.
Step 1. Ensure that the executable is accessible from each host, either by placing
it in a shared directory or by copying it to a local directory on each host.
All options after -prun are processed directly by prun. In this example,
-N to prun specifies 2 hosts are to be used and -n starts 4 total processes.
Types of applications
HP MPI supports two programming styles: SPMD applications and
MPMD applications.
where appfile is the text file parsed by mpirun and contains a list of
programs and process counts.
Suppose you decompose the poisson application into two source files:
poisson_master (uses a single master process) and poisson_child (uses
four child processes).
The appfile for the example application contains the two lines shown
below (refer to “Creating an appfile” on page 59 for details).
-np 1 poisson_master
-np 4 poisson_child
To build and run the example application, use the following command
sequence:
% $MPI_ROOT/bin/mpicc -o poisson_master poisson_master.c
% $MPI_ROOT/bin/mpicc -o poisson_child poisson_child.c
% $MPI_ROOT/bin/mpirun -f appfile
See “Creating an appfile” on page 59 for more information about using
appfiles.
• MPI_COMMD
• MPI_DLIB_FLAGS
• MPI_FLAGS
• MP_GANG
• MPI_GLOBMEMSIZE
• MPI_INSTR
• MPI_LOCALIP
• MPI_MT_FLAGS
• MPI_NOBACKTRACE
• MPI_REMSH
• MPI_SHMEMCNTL
• MPI_TMPDIR
• MPI_WORKDIR
• TOTALVIEW
MPI_COMMD
MPI_COMMD routes all off-host communication through daemons rather
than between processes. The MPI_COMMD syntax is as follows:
out_frags,in_frags
where
MPI_DLIB_FLAGS
MPI_DLIB_FLAGS controls runtime options when you use the diagnostics
library. The MPI_DLIB_FLAGS syntax is a comma separated list as
follows:
[ns,][h,][strict,][nmsg,][nwarn,][dump:prefix,]
[dumpf:prefix][xNUM]
where
ns Disables message signature analysis.
h Disables default behavior in the diagnostic library that
ignores user specified error handlers. The default
considers all errors to be fatal.
strict Enables MPI object-space corruption detection. Setting
this option for applications that make calls to routines
in the MPI-2 standard may produce false error
messages.
MPI_FLAGS
MPI_FLAGS modifies the general behavior of HP MPI. The MPI_FLAGS
syntax is a comma separated list as follows:
[edde,][exdb,][egdb,][eadb,][ewdb,][eladebug,][l,][f,][i,]
[s[a|p][#],][y[#],][o,][+E2,][C,][D,][E,][T,][z]
where
edde Starts the application under the dde debugger. The
debugger must be in the command search path. See
“Debugging HP MPI applications” on page 97 for more
information.
MP_GANG
MP_GANG enables gang scheduling on HP-UX systems only. Gang
scheduling improves the latency for synchronization by ensuring that all
runable processes in a gang are scheduled simultaneously. Processes
waiting at a barrier, for example, do not have to wait for processes that
are not currently scheduled. This proves most beneficial for applications
with frequent synchronization operations. Applications with infrequent
synchronization, however, may perform better if gang scheduling is
disabled.
Process priorities for gangs are managed identically to timeshare
policies. The timeshare priority scheduler determines when to schedule a
gang for execution. While it is likely that scheduling a gang will preempt
one or more higher priority timeshare processes, the gang-schedule
policy is fair overall. In addition, gangs are scheduled for a single time
slice, which is the same for all processes in the system.
MPI_GLOBMEMSIZE
MPI_GLOBMEMSIZE specifies the amount of shared memory allocated for
all processes in an HP MPI application. The MPI_GLOBMEMSIZE syntax is
as follows:
amount
where amount specifies the total amount of shared memory in bytes for
all processes. The default is 2 Mbytes for up to 64-way applications and
4 Mbytes for larger applications.
Be sure that the value specified for MPI_GLOBMEMSIZE is less than the
amount of global shared memory allocated for the host. Otherwise,
swapping overhead will degrade application performance.
MPI_INSTR
MPI_INSTR enables counter instrumentation for profiling HP MPI
applications. The MPI_INSTR syntax is a colon-separated list (no spaces
between options) as follows:
prefix[:l][:nc][:off]
where
prefix Specifies the instrumentation output file prefix. The
rank zero process writes the application’s
measurement data to prefix.instr in ASCII. If the
prefix does not represent an absolute pathname, the
instrumentation output file is opened in the working
directory of the rank zero process when MPI_Init is
called.
NOTE When you enable instrumentation for multihost runs, and invoke mpirun
either on a host where at least one MPI process is running, or on a host
remote from all your MPI processes, HP MPI writes the instrumentation
output file (prefix.instr) to the working directory on the host that is
running rank 0.
MPI_LOCALIP
MPI_LOCALIP specifies the host IP address that is assigned throughout a
session. Ordinarily, mpirun determines the IP address of the host it is
running on by calling gethostbyaddr. However, when a host uses a SLIP
or PPP protocol, the host’s IP address is dynamically assigned only when
the network connection is established. In this case, gethostbyaddr may
not return the correct IP address.
MPI_MT_FLAGS
MPI_MT_FLAGS controls runtime options when you use the
thread-compliant version of HP MPI. The MPI_MT_FLAGS syntax is a
comma separated list as follows:
[ct,][single,][fun,][serial,][mult]
where
ct Creates a hidden communication thread for each rank
in the job. When you enable this option, be careful not
to oversubscribe your system. For example, if you
enable ct for a 16-process application running on a
16-way machine, the result will be a 32-way job.
single Asserts that only one thread executes.
fun Asserts that a process can be multithreaded, but only
the main thread makes MPI calls (that is, all calls are
funneled to the main thread).
serial Asserts that a process can be multithreaded, and
multiple threads can make MPI calls, but calls are
serialized (that is, only one call is made at a time).
mult Asserts that multiple threads can call MPI at any time
with no restrictions.
Setting MPI_MT_FLAGS=ct has the same effect as setting
MPI_FLAGS=s[a][p]# when the value of # is greater than 0.
MPI_MT_FLAGS=ct takes priority over the default MPI_FLAGS=sp0
setting. Refer to “MPI_FLAGS” on page 41.
The single, fun, serial, and mult options are mutually exclusive. For
example, if you specify the serial and mult options in MPI_MT_FLAGS,
only the last option specified is processed (in this case, the mult option).
If no runtime option is specified, the default is mult.
For more information about using MPI_MT_FLAGS with the
thread-compliant library, refer to “Thread-compliant library” on page 31.
MPI_NOBACKTRACE
On PA-RISC systems, a stack trace is printed when the following signals
occur within an application:
• SIGILL
• SIGBUS
• SIGSEGV
• SIGSYS
In the event one of these signals is not caught by a user signal handler,
HP MPI will display a brief stack trace that can be used to locate the
signal in the code.
Signal 10: bus error
PROCEDURE TRACEBACK:
(0) 0x0000489c bar + 0xc [././a.out]
(1) 0x000048c4 foo + 0x1c [././a.out]
(2) 0x000049d4 main + 0xa4 [././a.out]
(3) 0xc013750c _start + 0xa8 [/usr/lib/libc.2]
(4) 0x0003b50 $START$ + 0x1a0 [././a.out]
This feature can be disabled for an individual signal handler by declaring
a user-level signal handler for the signal. To disable for all signals, set
the environment variable MPI_NOBACKTRACE:
% setenv MPI_NOBACKTRACE
See “Backtrace functionality” on page 103 for more information.
MPI_REMSH
MPI_REMSH specifies a command other than the default remsh to start
remote processes. The mpirun, mpijob, and mpiclean utilities support
MPI_REMSH. For example, you can set the environment variable to use a
secure shell:
% setenv MPI_REMSH /bin/ssh
The alternative remote shell command should be a drop-in replacement
for /usr/bin/remsh, that is, the argument syntax for the alternative shell
should be the same as for /usr/bin/remsh.
MPI_SHMEMCNTL
MPI_SHMEMCNTL controls the subdivision of each process’s shared memory
for the purposes of point-to-point and collective communications. The
MPI_SHMEMCNTL syntax is a comma separated list as follows:
nenv, frag, generic
where
nenv Specifies the number of envelopes per process pair. The
default is 8.
frag Denotes the size in bytes of the message-passing
fragments region. The default is 87.5 percent of shared
memory after mailbox and envelope allocation.
generic Specifies the size in bytes of the generic-shared
memory region. The default is 12.5 percent of shared
memory after mailbox and envelope allocation.
MPI_TMPDIR
By default, HP MPI uses the /tmp directory to store temporary files
needed for its operations. MPI_TMPDIR is used to point to a different
temporary directory. The MPI_TMPDIR syntax is
directory
where directory specifies an existing directory used to store temporary
files.
MPI_WORKDIR
By default, HP MPI applications execute in the directory where they are
started. MPI_WORKDIR changes the execution directory. The MPI_WORKDIR
syntax is shown below:
directory
where directory specifies an existing directory where you want the
application to execute.
TOTALVIEW
When you use the TotalView debugger, HP MPI uses your PATH variable
to find TotalView. You can also set the absolute path and TotalView
specific options in the TOTALVIEW environment variable. This
environment variable is used by mpirun.
setenv TOTALVIEW /opt/totalview/bin/totalview
[totalview_options]
Runtime utility commands

• mpirun (mpirun.all)
This section also includes discussion of Appfiles, the Multipurpose
daemon process, and Generating multihost instrumentation profiles.
• prun
• mpiexec
• mpijob
• mpiclean
mpirun (mpirun.all)
The HP MPI start-up provides the following advantages:
CAUTION The -help, -version, -p, and -tv options are not supported with the
bsub pam -mpi mpirun startup method.
Appfiles
An appfile is a text file that contains process counts and a list of
programs. When you invoke mpirun with the name of the appfile, mpirun
parses the appfile to get information for the run. You can use an appfile
when you run a single executable file on a single host, and you must use
an appfile when you run on multiple hosts or run multiple executable
files.
CAUTION Arguments placed after -- are treated as program arguments, and are
not processed by mpirun. Use this option when you want to specify
program arguments for each line of the appfile, but want to avoid editing
the appfile.
(Diagram: placement of processes 0 through 3 across hosta and hostb, illustrating slow communication between the hosts.)
NOTE Because HP MPI sets up one daemon per host (or appfile entry) for
communication, when you invoke your application with -np x, HP MPI
generates x+1 processes.
prun
It is possible to start applications using the Elan on Linux and
Tru64UNIX systems without mpirun. The following is an example using
prun without mpirun:
% prun [options] application
This method has restrictions. It does not support MPI-2 dynamic
processes or one-sided communication. We recommend certain
environment variables be set before using this method. They are:
• For Linux:
LD_LIBRARY_PATH=$MPI_ROOT/lib/linux_[ia32|ia64]
Shared libraries will be linked by default. prun will not execute if
this is not set.
• For Tru64UNIX:
LD_LIBRARY_PATH=$MPI_ROOT/lib/alpha
Shared libraries will be linked by default. prun will not execute if
this is not set.
• For both Linux and Tru64UNIX:
LIBELAN_SHM_ENABLE=0
This tells the Elan system not to allocate its own shared memory.
Since we allocate our own shared memory, the Elan shared memory
would be ignored.
mpiexec
The MPI-2 standard defines mpiexec as a simple method to start MPI
applications. It supports fewer features than mpirun, but it is portable.
mpiexec syntax has three formats:
mpijob
mpijob lists the HP MPI jobs running on the system. Invoke mpijob on
the same host on which you initiated mpirun. mpijob syntax is shown below:
mpijob [-help] [-a] [-u] [-j id] [id id ...]]
where
-help Prints usage information for the utility.
-a Lists jobs for all users.
-u Sorts jobs by user name.
-j id Provides process status for job id. You can list a
number of job IDs in a space-separated list.
When you invoke mpijob, it reports the following information for each
job:
JOB HP MPI job identifier.
mpiclean
mpiclean kills processes in an HP MPI application. Invoke mpiclean on
the host on which you initiated mpirun.
The MPI library checks for abnormal termination of processes while your
application is running. In some cases, application bugs can cause
processes to deadlock and linger in the system. When this occurs, you can
use mpijob to identify hung jobs and mpiclean to kill all processes in the
hung application.
mpiclean syntax has two forms:
HyperFabric/HyperMessaging Protocol (HMP)

The environment variable MPI_HMP can be set to on, off, ON, or OFF by
the user on a per-job basis. The user can override system defaults of on or
off (advisory), but not system defaults of ON or OFF (forced). Some
combinations of settings (in the file and variable) are illegal and will
generate errors.
NOTE All HMP enabled nodes must be on the same HyperFabric network in
order to allow this functionality.
The preferred method for enabling HMP is use of the mpirun option -hmp
which will enable HMP on every host.
If you developed your applications on a system without HMP installed,
the resulting executables cannot use HMP. When HMP is installed, you
will have to link or relink your applications to enable HMP support. We
recommend building your applications using our scripts to ensure your
executable is built with support for HMP.
Existing compilation scripts that do not use our wrappers will have to
relink using the -show option.
If you develop on a system without HyperFabric hardware, you can still
swinstall HyperFabric software to allow creation of HMP applications.
For more information on the HyperFabric product, refer to
http://software.hp.com.
Communicating using daemons

You can also use an indirect approach and specify that all off-host
communication occur between daemons, by specifying the -commd option
to the mpirun command. In this case, the processes on a host use shared
memory to send messages to and receive messages from the daemon. The
daemon, in turn, uses a socket connection to communicate with daemons
on other hosts.
Figure 3-1 shows the structure for daemon communication.
Figure 3-1 Daemon communication: daemon processes on host1 and host2 communicate over a socket connection, while the application processes on each host exchange outbound and inbound shared-memory fragments with their local daemon.
NOTE HP MPI sets up one daemon per host (or appfile entry) for
communication. If you invoke your application with -np x, HP MPI
generates x+1 processes.
IMPI
The Interoperable MPI protocol (IMPI) extends the power of MPI by
allowing applications to run on heterogeneous clusters of machines with
various architectures and operating systems, while allowing the program
to use a different implementation of MPI on each machine.
This is accomplished without requiring any modifications to the existing
MPI specification. That is, IMPI does not add, remove, or modify the
semantics of any of the existing MPI routines. All current valid MPI
programs can be run in this way without any changes to their source
code.
In IMPI, all messages going out of a host go through the daemon. The
messages between daemons have the fixed message format. The
protocols in different IMPI implementations are the same.
Currently, IMPI is not supported in the multithreaded library. If the user
application is multithreaded, it cannot be started as an IMPI job.
An IMPI server is available for download from Notre Dame at:
http://www.lsc.nd.edu/research/impi
The IMPI syntax is:
mpirun [-client # ip:port]
where
-client Specifies this mpirun is an IMPI client.
# Specifies the client number. The first # is 0.
ip Specifies the IP address of the IMPI server.
port Specifies the port number of the IMPI server.
4 Profiling
This chapter provides information about utilities you can use to analyze
HP MPI applications. The topics covered are:
Using counter instrumentation
NOTE If spin-yield time is changed, overhead and blocking times become less
accurate.
Processes: 2
-----------------------------------------------------------------
-------------------- Instrumentation Data --------------------
-----------------------------------------------------------------
-----------------------------------------------------------------
0 0.040000 0.010000( 25.00%) 0.030000( 75.00%)
1 0.030000 0.010000( 33.33%) 0.020000( 66.67%)
-----------------------------------------------------------------
Rank Proc Wall Time User MPI
-----------------------------------------------------------------
0 0.126335 0.008332( 6.60%) 0.118003( 93.40%)
1 0.126355 0.008260( 6.54%) 0.118095( 93.46%)
-----------------------------------------------------------------
-----------------------------------------------------------------
0 0.118003 0.118003(100.00%) 0.000000( 0.00%)
1 0.118095 0.118095(100.00%) 0.000000( 0.00%)
-----------------------------------------------------------------
-----------------------------------------------------------------
0
MPI_Bcast 1 5.397081 0.000000
MPI_Finalize 1 1.238942 0.000000
MPI_Init 1 107.195973 0.000000
MPI_Reduce 1 4.171014 0.000000
-----------------------------------------------------------------
1
MPI_Bcast 1 5.388021 0.000000
MPI_Finalize 1 1.325965 0.000000
MPI_Init 1 107.228994 0.000000
MPI_Reduce 1 4.152060 0.000000
-----------------------------------------------------------------
-----------------------------------------------------------------
0
1 1 (4, 4) 4
1 [0..64] 4
-----------------------------------------------------------------
1
0 1 (8, 8) 8
1 [0..64] 8
-----------------------------------------------------------------
Using the profiling interface
For example:
#include <stdio.h>
#include <mpi.h>

int MPI_Send(void *buf, int count, MPI_Datatype type,
             int to, int tag, MPI_Comm comm)
{
    printf("Calling C MPI_Send to %d\n", to);
    return PMPI_Send(buf, count, type, to, tag, comm);
}

#pragma _HP_SECONDARY_DEF mpi_send mpi_send_

void mpi_send(void *buf, int *count, int *type, int *to,
              int *tag, int *comm, int *ierr)
{
    printf("Calling Fortran MPI_Send to %d\n", *to);
    pmpi_send(buf, count, type, to, tag, comm, ierr);
}
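The same wrapper technique can collect timing data. The sketch below is illustrative only and is not part of the product examples; it intercepts MPI_Bcast and accumulates the time spent in it using MPI_Wtime. The accumulated value can then be reported, for example, from a wrapped MPI_Finalize.

#include <mpi.h>

static double bcast_time = 0.0;   /* total seconds spent in MPI_Bcast */

int MPI_Bcast(void *buf, int count, MPI_Datatype type,
              int root, MPI_Comm comm)
{
    double start = MPI_Wtime();
    int ret = PMPI_Bcast(buf, count, type, root, comm);

    bcast_time += MPI_Wtime() - start;
    return ret;
}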
5 Tuning
• MPI_FLAGS options
MPI_FLAGS options
The function parameter error checking is turned off by default. It can be
turned on by setting MPI_FLAGS=Eon.
If you are running an application stand-alone on a dedicated system,
setting MPI_FLAGS=y allows MPI to busy spin, thereby improving
latency. See “MPI_FLAGS” on page 41 for more information on the y
option.
Message latency and bandwidth
}
MPI_Waitall(size-1, requests, statuses);
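One common way to reduce the effect of latency is to post nonblocking receives ahead of time and complete them together, as the MPI_Waitall call above does. A minimal sketch of that pattern follows; it is illustrative only, and the function and variable names are assumptions rather than HP MPI interfaces.

#include <stdlib.h>
#include <mpi.h>

/* Rank 0 posts one nonblocking receive per peer in advance,
   then completes them all with a single MPI_Waitall */
void collect_from_peers(int rank, int size)
{
    if (rank == 0) {
        int i;
        int *data = (int *) malloc((size - 1) * sizeof(int));
        MPI_Request *requests =
            (MPI_Request *) malloc((size - 1) * sizeof(MPI_Request));
        MPI_Status *statuses =
            (MPI_Status *) malloc((size - 1) * sizeof(MPI_Status));

        for (i = 1; i < size; i++)
            MPI_Irecv(&data[i - 1], 1, MPI_INT, i, 0,
                      MPI_COMM_WORLD, &requests[i - 1]);
        MPI_Waitall(size - 1, requests, statuses);
        free(statuses);
        free(requests);
        free(data);
    } else {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
}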
Multiple network interfaces
• host0-ethernet0
• host0-ethernet1
• host1-ethernet0
• host1-ethernet1
If your executable is called beavis.exe and uses 64 processes, your appfile
should contain the following entries:
-h host0-ethernet0 -np 16 beavis.exe
-h host0-ethernet1 -np 16 beavis.exe
-h host1-ethernet0 -np 16 beavis.exe
-h host1-ethernet1 -np 16 beavis.exe
Now, when the appfile is run, 32 processes run on host0 and 32 processes
run on host1 as shown in Figure 5-1.
(Figure 5-1: processes on host0 and host1 use shared memory (shmem) within each host and the multiple ethernet interfaces between the hosts.)
Processor subscription
Subscription refers to the match of processors and active processes on a
host. Table 5-1 lists possible subscription types.
Table 5-1 Subscription types
Multilevel parallelism
There are several ways to improve the performance of applications that
use multilevel parallelism:
Coding considerations
The following are suggestions and items to consider when coding your
MPI applications to improve performance:
• Use MPI derived datatypes when you exchange several small
messages that have no dependencies, as in the sketch that follows.
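For illustration only (this is not one of the shipped examples), a strided column of a row-major matrix can be described once with a derived datatype and sent as a single message instead of element by element:

#include <mpi.h>

/* Send one column of a 10x10 row-major int matrix as a single message */
void send_column(int matrix[10][10], int col, int dest)
{
    MPI_Datatype column;

    /* 10 blocks of 1 int each, separated by a stride of 10 ints */
    MPI_Type_vector(10, 1, 10, MPI_INT, &column);
    MPI_Type_commit(&column);
    MPI_Send(&matrix[0][col], 1, column, dest, 0, MPI_COMM_WORLD);
    MPI_Type_free(&column);
}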
6 Debugging and troubleshooting
— Building
— Starting
— Running
— Completing
• Frequently asked questions
Debugging HP MPI applications
NOTE Visual MPI usage requires that your application be linked with the MPI
shared libraries and started with the mpirun command.
Step 1. Set the eadb, exdb, edde, ewdb, egdb, or eladebug option in the
MPI_FLAGS environment variable to use the ADB, XDB, DDE, WDB,
GDB, or LADEBUG debugger respectively. Refer to “MPI_FLAGS” on
page 41 for information about MPI_FLAGS options.
Step 2. On remote hosts, set DISPLAY to point to your console. In addition, use
xhost to allow remote hosts to redirect their windows to your console.
(adb) mpi_debug_cont/w 1
NOTE For the ladebug debugger, /usr/bin/X11 may need to be added to the
command search path.
Each process runs and stops at the breakpoint you set after MPI_Init.
Step 7. Continue to debug each process using the appropriate commands for
your debugger.
NOTE When attaching to a running MPI application, you should attach to the
MPI daemon process to enable debugging of all the MPI ranks in the
application. You can identify the daemon process as the one at the top of
a hierarchy of MPI jobs (the daemon also usually has the lowest PID
among the MPI jobs).
Limitations
The following limitations apply to using TotalView with HP MPI
applications:
Backtrace functionality
HP MPI 2.0 handles several common termination signals differently
than earlier versions of HP MPI. If any of the following signals are
generated by an MPI application, a stack trace is printed prior to
termination:
If your application installs its own handler for one of these signals before
MPI_Init, that handler, rather than the backtrace, is responsible for
handling the signal. Any signal handler installed after MPI_Init will also
override the backtrace functionality for that signal after the point it is
established. If multiple processes cause a signal, each of them will print a
backtrace.
In some cases, the prepending and buffering options available in HP MPI
2.0’s standard IO processing are useful in providing more readable
output.
The default behavior is to print a stack trace.
Backtracing can be turned off entirely by setting the environment
variable MPI_NOBACKTRACE. See “MPI_NOBACKTRACE” on page 49.
Backtracing is only supported on HP PA-RISC systems.
Troubleshooting HP MPI applications
• Building
• Starting
• Running
• Completing
• Frequently asked questions
To get information about the version of HP MPI installed on your system,
use the what command. The following is an example of the command and
its output:
% what $MPI_ROOT/bin/mpicc
$MPI_ROOT/bin/mpicc:
HP MPI 02.00.00.00 (dd/mm/yyyy) B6060BA - HP-UX 11.i
This command returns the HP MPI version number, the date this version
was released, HP MPI product numbers, and the operating system
version.
Building
You can solve most build-time problems by referring to the
documentation for the compiler you are using.
If you use your own build script, specify all necessary input libraries. To
determine what libraries are needed, check the contents of the
compilation utilities stored in the HP MPI $MPI_ROOT/bin subdirectory.
Starting
When starting multihost applications, check that:
• All remote hosts are listed in your .rhosts file on each machine and
you can remsh to the remote machines. The mpirun command has the
-ck option you can use to determine whether the hosts and programs
specified in your MPI application are available, and whether there
are access or permission problems. Refer to “mpirun (mpirun.all)” on
page 51.
• Application binaries are available on the necessary remote hosts and
are executable on those machines
• The -sp option is passed to mpirun to set the target shell PATH
environment variable. You can set this option in your appfile
• The .cshrc file does not contain tty commands such as stty if you
are using a /bin/csh-based shell
Running
Run time problems originate from many sources and may include:
• Shared memory
• Message buffering
Shared memory
When an MPI application starts, each MPI process attempts to allocate a
section of shared memory. This allocation can fail if the system-imposed
limit on the maximum number of allowed shared-memory identifiers is
exceeded or if the amount of available physical memory is not sufficient
to fill the request.
After shared-memory allocation is done, every MPI process attempts to
attach to the shared-memory region of every other process residing on
the same host. This attachment can fail if the number of shared-memory
segments attached to the calling process exceeds the system-imposed
limit. In this case, use the MPI_GLOBMEMSIZE environment variable to
reset your shared-memory allocation.
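For example, under a C shell you might set the variable before launching. The value shown here is only a placeholder, and my_app is a hypothetical executable; see the MPI_GLOBMEMSIZE reference for how the value is interpreted.
% setenv MPI_GLOBMEMSIZE 67108864
% mpirun -np 8 my_app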
Furthermore, all processes must be able to attach to a shared-memory
region at the same virtual address. For example, if the first process to
attach to the segment attaches at address ADR, then the virtual-memory
region starting at ADR must be available to all other processes. Arranging
for MPI_Init to execute as early as possible in the program can help avoid
this problem. A process with a large stack size is also prone to this failure,
so choose process stack sizes carefully.
Message buffering
According to the MPI standard, message buffering may or may not occur
when processes communicate with each other using MPI_Send. MPI_Send
buffering is at the discretion of the MPI implementation. Therefore, you
should take care when coding communications that depend upon
buffering to work correctly.
For example, when two processes use MPI_Send to simultaneously send a
message to each other and use MPI_Recv to receive the messages, the
results are unpredictable. If the messages are buffered, communication
works correctly. If the messages are not buffered, however, each process
hangs in MPI_Send waiting for MPI_Recv to take the message.
(Figure: “Deadlock” versus “No Deadlock” message-exchange orderings.)
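The figure itself did not survive extraction, but the two cases it contrasts can be sketched as follows. With both ranks sending first, correctness depends on buffering; reversing the order on one rank removes that dependency. Buffer sizes and the tag are illustrative.
#include <stdio.h>
#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    int        i, rank, size, peer;
    double     sendbuf[N], recvbuf[N];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {       /* only ranks 0 and 1 exchange */
        peer = 1 - rank;
        for (i = 0; i < N; i++)
            sendbuf[i] = rank;

        /*
         * Deadlock-prone version: both ranks send first and rely on
         * buffering to make progress.
         *
         *   MPI_Send(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
         *   MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
         *            &status);
         */

        /* Safe ordering: one rank sends first, the other receives first. */
        if (rank == 0) {
            MPI_Send(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
                     &status);
        } else {
            MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
                     &status);
            MPI_Send(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
        }
        printf("rank %d finished the exchange\n", rank);
    }

    MPI_Finalize();
    return 0;
}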
Interoperability
Depending upon what server resources are available, applications may
run on heterogeneous systems.
For example, suppose you create an MPMD application that calculates
the average acceleration of particles in a simulated cyclotron. The
application consists of a four-process program called sum_accelerations
and an eight-process program called calculate_average.
Because you have access to a K-Class server called K_server and a
V-Class server called V_server, you create the following appfile:
-h K_server -np 4 sum_accelerations
-h V_server -np 8 calculate_average
Then, you invoke mpirun passing it the name of the appfile you created.
Even though the two application programs run on different platforms, all
processes can communicate with each other, resulting in twelve-way
parallelism. The four processes belonging to the sum_accelerations
application are ranked 0 through 3, and the eight processes belonging to
the calculate_average application are ranked 4 through 11 because HP
MPI assigns ranks in MPI_COMM_WORLD according to the order the
programs appear in the appfile.
Completing
In HP MPI, MPI_Finalize is a barrier-like collective routine that waits
until all application processes have called it before returning. If your
application exits without calling MPI_Finalize, pending requests may
not complete.
When running an application, mpirun waits until all processes have
exited. If an application detects an MPI error that leads to program
termination, it calls MPI_Abort instead.
You may want to code your error conditions using MPI_Abort, which
cleans up the application.
Each HP MPI application is identified by a job ID, unique on the server
where mpirun is invoked. If you use the -j option, mpirun prints the job
ID of the application that it runs. Then, you can invoke mpijob with the
job ID to display the status of your application.
If your application hangs or terminates abnormally, you can use
mpiclean to kill any lingering processes and shared-memory segments.
mpiclean uses the job ID from mpirun -j to specify the application to
terminate.
Frequently asked questions
• Time in MPI_Finalize
• MPI clean up
• Application hangs in MPI_Send
Time in MPI_Finalize
QUESTION: When I build with HP MPI and then turn tracing on, the
application takes a long time inside MPI_Finalize. What is causing this?
ANSWER: When you turn tracing on, MPI_Finalize spends time
consolidating the raw trace generated by each process into a single
output file (with a .tr extension).
MPI clean up
QUESTION: How does HP MPI clean up when something goes wrong?
ANSWER: HP MPI uses several mechanisms to clean up job files. Note
that all processes in your application must call MPI_Finalize.
Application hangs in MPI_Send
ANSWER: Deadlock situations can occur when your code uses standard
send operations and assumes buffering behavior for standard
communication mode. You should not assume message buffering between
processes because the MPI standard does not mandate a buffering
strategy. HP MPI does sometimes use buffering for MPI_Send and
MPI_Rsend, but it is dependent on message size and at the discretion of
the implementation.
QUESTION: How can I tell if the deadlock is because my code depends on
buffering?
ANSWER: To quickly determine whether the problem is due to your code
being dependent on buffering, set the z option for MPI_FLAGS. MPI_FLAGS
modifies the general behavior of HP MPI, and in this case converts
MPI_Send and MPI_Rsend calls in your code to MPI_Ssend, without you
having to rewrite your code. MPI_Ssend guarantees synchronous send
semantics, that is, a send can be started whether or not a matching
receive is posted. However, the send completes successfully only if a
matching receive is posted and the receive operation has started to
receive the message sent by the synchronous send.
If your application still hangs after you convert MPI_Send and MPI_Rsend
calls to MPI_Ssend, you know that your code is written to depend on
buffering. You should rewrite it so that MPI_Send and MPI_Rsend do not
depend on buffering.
Alternatively, use nonblocking communication calls to initiate send
operations. A nonblocking send-start call returns before the message is
copied out of the send buffer, but a separate send-complete call is needed
to complete the operation. Refer also to “Sending and receiving
messages” on page 7 for information about blocking and nonblocking
communication. Refer to “MPI_FLAGS” on page 41 for information about
MPI_FLAGS options.
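A sketch of that nonblocking alternative for the two-process exchange discussed above; buffer sizes and the tag are illustrative. Each rank starts its receive and its send without blocking, then completes both together, so neither rank can hang in a send.
#include <stdio.h>
#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    int         i, rank, size, peer;
    double      sendbuf[N], recvbuf[N];
    MPI_Request reqs[2];
    MPI_Status  stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {       /* only ranks 0 and 1 exchange */
        peer = 1 - rank;
        for (i = 0; i < N; i++)
            sendbuf[i] = rank;

        /* Start both transfers; neither call waits for a matching operation. */
        MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Complete the send and the receive together. */
        MPI_Waitall(2, reqs, stats);

        printf("rank %d received data from rank %d\n", rank, peer);
    }

    MPI_Finalize();
    return 0;
}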
A Example applications
Step 2. Copy all files from the help directory to the current writable directory:
% cp $MPI_ROOT/help/* .
To compile and run all the examples in the /help directory, at your UNIX
prompt enter:
% make
% make thread_safe
send_receive.f
In this Fortran 77 example, process 0 sends an array to other processes
in the default communicator MPI_COMM_WORLD.
program main
include 'mpif.h'
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
call MPI_Comm_size(MPI_COMM_WORLD, size, ierr)
do i=1, 10
data(i) = 1
enddo
st_source = status(MPI_SOURCE)
st_tag = status(MPI_TAG)
endif
call MPI_Finalize(ierr)
stop
end
send_receive output
The output from running the send_receive executable is shown below.
The application was run with -np = 10.
Process 0 of 10 is alive
Process 1 of 10 is alive
Process 2 of 10 is alive
Process 3 of 10 is alive
Process 4 of 10 is alive
Process 5 of 10 is alive
Process 6 of 10 is alive
Process 7 of 10 is alive
Process 8 of 10 is alive
Process 9 of 10 is alive
Status info: source = 0 tag = 2001 count = 10
9 received 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
ping_pong.c
This C example is used as a performance benchmark to measure the
amount of time it takes to send and receive data between two processes.
The buffers are aligned and offset from each other to avoid cache
conflicts caused by direct process-to-process byte-copy operations.
To run this example:
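The original run instructions were lost in extraction. A plausible invocation, assuming the program is built with mpicc and takes the message size in bytes as its first argument (as the nbytes variable in the listing suggests), is:
% $MPI_ROOT/bin/mpicc -o ping_pong ping_pong.c
% mpirun -np 2 ping_pong 1000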
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
if (size != 2) {
/*
/*
* Ping-pong.
*/
if (rank == 0) {
printf("ping-pong %d bytes ...\n", nbytes);
/*
* warm-up loop
*/
start = MPI_Wtime();
for (i = 0; i < NLOOPS; i++) {
#ifdef CHECK
for (j = 0; j < nbytes; j++) buf[j] = (char) (j + i);
#endif
MPI_Send(buf, nbytes, MPI_CHAR,1, 1000 + i,
MPI_COMM_WORLD);
#ifdef CHECK
memset(buf, 0, nbytes);
#endif
MPI_Recv(buf, nbytes, MPI_CHAR,1, 2000 + i,
MPI_COMM_WORLD,&status);
#ifdef CHECK
if (nbytes > 0) {
printf("%d bytes: %.2f MB/sec\n",
nbytes,nbytes / 1000000./
((stop - start) / NLOOPS /
2));
}
}
else {
/*
* warm-up loop
*/
for (i = 0; i < 5; i++) {
MPI_Recv(buf, nbytes, MPI_CHAR,0, 1,
MPI_COMM_WORLD, &status);
MPI_Send(buf, nbytes, MPI_CHAR, 0, 1,
MPI_COMM_WORLD);
}
MPI_COMM_WORLD,&status);
MPI_Send(buf, nbytes, MPI_CHAR,0, 2000 + i,
MPI_COMM_WORLD);
}
}
MPI_Finalize();
exit(0);
}
ping_pong output
The output from running the ping_pong executable is shown below. The
application was run with -np = 2.
compute_pi.f
This Fortran 77 example computes pi by integrating f(x) = 4/(1 + x**2).
Each process:
include 'mpif.h'
sizetype = 1
sumtype = 2
do 20 i = myid + 1, n, numprocs
x = h * (dble(i) - 0.5d0)
sum = sum + f(x)
20 continue
mypi = h * sum
C
C Collect all the partial sums.
C
call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION,
+ MPI_SUM, 0, MPI_COMM_WORLD, ierr)
C
C Process 0 prints the result.
C
if (myid .eq. 0) then
write(6, 97) pi, abs(pi - PI25DT)
97 format(' pi is approximately: ', F18.16,
+ ' Error is: ', F18.16)
endif
call MPI_FINALIZE(ierr)
stop
end
compute_pi output
The output from running the compute_pi executable is shown below. The
application was run with -np = 10.
Process 0 of 10 is alive
Process 1 of 10 is alive
Process 2 of 10 is alive
Process 3 of 10 is alive
Process 4 of 10 is alive
Process 5 of 10 is alive
Process 6 of 10 is alive
Process 7 of 10 is alive
Process 8 of 10 is alive
Process 9 of 10 is alive
pi is approximately: 3.1416009869231249
Error is: 0.0000083333333318
master_worker.f90
In this Fortran 90 example, a master task initiates (numtasks - 1)
worker tasks. The master distributes an equal portion of an
array to each worker task. Each worker task receives its portion of the
array and sets the value of each element to (the element’s index + 1).
Each worker task then sends its portion of the modified array back to the
master.
program array_manipulation
include 'mpif.h'
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, taskid, ierr)
call MPI_Comm_size(MPI_COMM_WORLD, numtasks, ierr)
numworkers = numtasks - 1
chunksize = (ARRAYSIZE / numworkers)
arraymsg = 1
indexmsg = 2
int4 = 4
real4 = 4
numfail = 0
do i = 1, numworkers
source = i
call MPI_Recv(index, 1, MPI_INTEGER, source, 1,
MPI_COMM_WORLD, &
status, ierr)
do i = 1, numworkers*chunksize
if (result(i) .ne. (i+1)) then
print *, 'element ', i, ' expecting ', (i+1), ' actual is ', result(i)
numfail = numfail + 1
endif
enddo
call MPI_Finalize(ierr)
master_worker output
The output from running the master_worker executable is shown below.
The application was run with -np = 2.
correct results!
cart.C
This C++ program generates a virtual topology. The class Node
represents a node in a 2-D torus. Each process is assigned a node or
nothing. Each node holds integer data, and the shift operation exchanges
the data with its neighbors. Thus, north-east-south-west shifting returns
the initial data.
#include <stdio.h>
#include <mpi.h>
#define NDIMS 2
// A constructor
Node::Node(void)
{
int i, nnodes, periods[NDIMS];
data = lrank;
MPI_Cart_coords(comm, lrank, NDIMS, coords);
}
}
// A destructor
Node::~Node(void)
{
if (comm != MPI_COMM_NULL) {
MPI_Comm_free(&comm);
}
}
// Shift function
void Node::shift(Direction dir)
{
if (comm == MPI_COMM_NULL) { return; }
if (dir == NORTH) {
direction = 0; disp = -1;
} else if (dir == SOUTH) {
direction = 0; disp = 1;
} else if (dir == EAST) {
direction = 1; disp = 1;
} else {
direction = 1; disp = -1;
}
MPI_Cart_shift(comm, direction, disp, &src, &dest);
MPI_Status stat;
MPI_Sendrecv_replace(&data, 1, MPI_INT, dest, 0, src, 0, comm,
&stat);
}
// Synchronize and print the data being held
void Node::print(void)
{
if (comm != MPI_COMM_NULL) {
MPI_Barrier(comm);
if (lrank == 0) { puts(""); } // line feed
MPI_Barrier(comm);
printf("(%d, %d) holds %d\n", coords[0], coords[1], data);
}
}
MPI_Barrier(comm);
// Program body
//
// Define a torus topology and demonstrate shift operations.
//
void body(void)
{
Node node;
node.profile();
node.print();
node.shift(NORTH);
node.print();
node.shift(EAST);
node.print();
node.shift(SOUTH);
node.print();
node.shift(WEST);
node.print();
}
//
// Main program---it is probably a good programming practice to call
// MPI_Init() and MPI_Finalize() here.
//
int main(int argc, char **argv)
{
MPI_Init(&argc, &argv);
body();
MPI_Finalize();
}
cart output
The output from running the cart executable is shown below. The
application was run with -np = 4.
Dimensions: (2, 2)
global rank 0: cartesian rank 0, coordinate (0, 0)
global rank 1: cartesian rank 1, coordinate (0, 1)
global rank 3: cartesian rank 3, coordinate (1, 1)
global rank 2: cartesian rank 2, coordinate (1, 0)
(0, 0) holds 0
(1, 0) holds 2
(1, 1) holds 3
(0, 1) holds 1
(0, 0) holds 2
(1, 0) holds 0
(0, 1) holds 3
(1, 1) holds 1
(0, 0) holds 3
(0, 1) holds 2
(1, 0) holds 1
(1, 1) holds 0
(0, 0) holds 1
(1, 0) holds 3
(0, 1) holds 0
(1, 1) holds 2
(0, 0) holds 0
(1, 0) holds 2
(0, 1) holds 1
(1, 1) holds 3
communicator.c
This C example shows how to make a copy of the default communicator
MPI_COMM_WORLD using MPI_Comm_dup.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int
main(argc, argv)
int argc;
char *argv[];
{
int rank, size, data;
MPI_Status status;
MPI_Comm libcomm;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
if (size != 2) {
if ( ! rank) printf("communicator: must have two processes\n");
MPI_Finalize();
exit(0);
}
MPI_Comm_dup(MPI_COMM_WORLD, &libcomm);
if (rank == 0) {
data = 12345;
MPI_Send(&data, 1, MPI_INT, 1, 5,
MPI_COMM_WORLD);
data = 6789;
MPI_Send(&data, 1, MPI_INT, 1, 5, libcomm);
} else {
MPI_Recv(&data, 1, MPI_INT, 0, 5, libcomm,
&status);
printf("received libcomm data = %d\n", data);
MPI_Recv(&data, 1, MPI_INT, 0, 5, MPI_COMM_WORLD,
&status);
printf("received data = %d\n", data);
}
MPI_Comm_free(&libcomm);
MPI_Finalize();
return(0);
}
communicator output
The output from running the communicator executable is shown below.
The application was run with -np = 2.
received libcomm data = 6789
received data = 12345
multi_par.f
The Alternating Direction Iterative (ADI) method is often used to solve
differential equations. In this example, multi_par.f, a compiler that
supports OPENMP directives is required in order to achieve multi-level
parallelism.
multi_par.f implements the following logic for a 2-dimensional compute
region:
DO J=1,JMAX
DO I=2,IMAX
A(I,J)=A(I,J)+A(I-1,J)
ENDDO
ENDDO
DO J=2,JMAX
DO I=1,IMAX
A(I,J)=A(I,J)+A(I,J-1)
ENDDO
ENDDO
There are loop carried dependencies on the first dimension (array's row)
in the first innermost DO loop and the second dimension (array's
column) in the second outermost DO loop.
A simple method for parallelizing the first outer loop implies a
partitioning of the array in column blocks, while another for the second
outer-loop implies a partitioning of the array in row blocks.
With message-passing programming, such a method will require massive
data exchange among processes because of the partitioning change.
"Twisted data layout" partitioning is better in this case because the
                      column block
                      0   1   2   3
      row block   0   0   1   2   3
                  1   3   0   1   2
                  2   2   3   0   1
                  3   1   2   3   0
      integer rdtype(:)              ! row block communication datatypes
      integer cdtype(:)              ! column block communication datatypes
      integer twdtype(:)             ! twisted distribution datatypes
      integer ablen(:)               ! array of block lengths
      integer adisp(:)               ! array of displacements
      integer adtype(:)              ! array of datatypes
      allocatable rbs,rbe,cbs,cbe,rdtype,cdtype,twdtype,ablen,adisp,
     *            adtype
      integer rank                   ! rank iteration counter
      integer comm_size              ! number of MPI processes
      integer comm_rank              ! sequential ID of MPI process
      integer ierr                   ! MPI error code
      integer mstat(mpi_status_size) ! MPI function status
      integer src                    ! source rank
      integer dest                   ! destination rank
      integer dsize                  ! size of double precision in bytes
      double precision startt,endt,elapsed  ! time keepers
      external compcolumn,comprow    ! subroutines execute in threads
c
c MPI initialization
c
call mpi_init(ierr)
call mpi_comm_size(mpi_comm_world,comm_size,ierr)
call mpi_comm_rank(mpi_comm_world,comm_rank,ierr)
c
c Data initialization and start up
c
if (comm_rank.eq.0) then
write(6,*) 'Initializing',nrow,' x',ncol,' array...'
call getdata(nrow,ncol,array)
write(6,*) 'Start computation'
endif
call mpi_barrier(MPI_COMM_WORLD,ierr)
startt=mpi_wtime()
c
c Compose MPI datatypes for row/column send-receive
c
c     Note that the numbers from rbs(i) to rbe(i) are the indices
c     of the rows belonging to the i'th block of rows. These indices
c     specify a portion (the i'th portion) of a column and the
c     datatype rdtype(i) is created as an MPI contiguous datatype
c     to refer to the i'th portion of a column. Note this is a
c     contiguous datatype because fortran arrays are stored
c     column-wise.
c
c     For a range of columns to specify portions of rows, the situation
      allocate(rbs(0:comm_size-1),rbe(0:comm_size-1),cbs(0:comm_size-1),
     *         cbe(0:comm_size-1),rdtype(0:comm_size-1),
     *         cdtype(0:comm_size-1),twdtype(0:comm_size-1))
do blk=0,comm_size-1
call blockasgn(1,nrow,comm_size,blk,rbs(blk),rbe(blk))
call mpi_type_contiguous(rbe(blk)-rbs(blk)+1,
* mpi_double_precision,rdtype(blk),ierr)
call mpi_type_commit(rdtype(blk),ierr)
call blockasgn(1,ncol,comm_size,blk,cbs(blk),cbe(blk))
call mpi_type_vector(cbe(blk)-cbs(blk)+1,1,nrow,
* mpi_double_precision,cdtype(blk),ierr)
call mpi_type_commit(cdtype(blk),ierr)
enddo
enddo
enddo
deallocate(adtype,adisp,ablen)
c
c Computation
c
c     Sum up in each column.
c     Each MPI process, or a rank, computes blocks that it is assigned.
c     The column block number is assigned in the variable 'cb'. The
c     starting and ending subscripts of the column block 'cb' are
c     stored in 'cbs(cb)' and 'cbe(cb)', respectively. The row block
c     number is assigned in the variable 'rb'. The starting and ending
c     subscripts of the row block 'rb' are stored in 'rbs(rb)' and
c     'rbe(rb)', respectively, as well.
src=mod(comm_rank+1,comm_size)
dest=mod(comm_rank-1+comm_size,comm_size)
ncb=comm_rank
do rb=0,comm_size-1
cb=ncb
c
c     Compute a block. The function will go thread-parallel if the
c     compiler supports OPENMP directives.
c
call compcolumn(nrow,ncol,array,
* rbs(rb),rbe(rb),cbs(cb),cbe(cb))
         if (rb.lt.comm_size-1) then
c
c     Send the last row of the block to the rank that is to compute the
c     block next to the computed block. Receive the last row of the
c     block that the next block being computed depends on.
c
            nrb=rb+1
            ncb=mod(nrb+comm_rank,comm_size)
            call mpi_sendrecv(array(rbe(rb),cbs(cb)),1,cdtype(cb),dest,
     *           0,array(rbs(nrb)-1,cbs(ncb)),1,cdtype(ncb),src,0,
     *           mpi_comm_world,mstat,ierr)
         endif
enddo
c
c     Sum up in each row.
c     The same logic as the loop above except rows and columns are
c     switched.
c
      src=mod(comm_rank-1+comm_size,comm_size)
      dest=mod(comm_rank+1,comm_size)
      do cb=0,comm_size-1
         rb=mod(cb-comm_rank+comm_size,comm_size)
         call comprow(nrow,ncol,array,
     *        rbs(rb),rbe(rb),cbs(cb),cbe(cb))
         if (cb.lt.comm_size-1) then
            ncb=cb+1
            nrb=mod(ncb-comm_rank+comm_size,comm_size)
            call mpi_sendrecv(array(rbs(rb),cbe(cb)),1,rdtype(rb),dest,
     *           0,array(rbs(nrb),cbs(ncb)-1),1,rdtype(nrb),src,0,
     *           mpi_comm_world,mstat,ierr)
         endif
enddo
c
c Gather computation results
c
call mpi_barrier(MPI_COMM_WORLD,ierr)
endt=mpi_wtime()
      if (comm_rank.eq.0) then
         do src=1,comm_size-1
            call mpi_recv(array,1,twdtype(src),src,0,mpi_comm_world,
     *           mstat,ierr)
         enddo
         elapsed=endt-startt
         write(6,*) 'Computation took',elapsed,' seconds'
      else
         call mpi_send(array,1,twdtype(comm_rank),0,0,mpi_comm_world,
     *        ierr)
      endif
c
c Dump to a file
c
c if (comm_rank.eq.0) then
c print*,'Dumping to adi.out...'
c open(8,file='adi.out')
c write(8,*) array
c close(8,status='keep')
c endif
c
c Free the resources
c
do rank=0,comm_size-1
call mpi_type_free(twdtype(rank),ierr)
enddo
do blk=0,comm_size-1
call mpi_type_free(rdtype(blk),ierr)
call mpi_type_free(cdtype(blk),ierr)
enddo
deallocate(rbs,rbe,cbs,cbe,rdtype,cdtype,twdtype)
c
c Finalize the MPI system
c
call mpi_finalize(ierr)
end
c**********************************************************************
      subroutine blockasgn(subs,sube,blockcnt,nth,blocks,blocke)
c
c     This subroutine:
c     is given a range of subscript and the total number of blocks in
c     which the range is to be divided, assigns a subrange to the caller
c     that is n-th member of the blocks.
c
      implicit none
      integer subs       ! (in) subscript start
      integer sube       ! (in) subscript end
      integer blockcnt   ! (in) block count
      integer nth        ! (in) my block (begin from 0)
      integer blocks     ! (out) assigned block start subscript
      integer blocke     ! (out) assigned block end subscript
c
      integer d1,m1
c
      d1=(sube-subs+1)/blockcnt
      m1=mod(sube-subs+1,blockcnt)
      blocks=nth*d1+subs+min(nth,m1)
      blocke=blocks+d1-1
      if(m1.gt.nth)blocke=blocke+1
      end
c
c**********************************************************************
      subroutine compcolumn(nrow,ncol,array,rbs,rbe,cbs,cbe)
c
c     This subroutine:
c     does summations of columns in a thread.
c
      implicit none
c
c     Local variables
c
      integer i,j
c
c     The OPENMP directive below allows the compiler to split the
c     values for "j" between a number of threads. By making i and j
c     private, each thread works on its own range of columns "j",
c     and works down each column at its own pace "i".
c
c     Note no data dependency problems arise by having the threads all
c     working on different columns simultaneously.
c
c**********************************************************************
      subroutine comprow(nrow,ncol,array,rbs,rbe,cbs,cbe)
c
c     This subroutine:
c     does summations of rows in a thread.
c
      implicit none
c
c     Local variables
c
      integer i,j
c
c     The OPENMP directives below allow the compiler to split the
c     values for "i" between a number of threads, while "j" moves
c     forward lock-step between the threads. By making j shared
c     and i private, all the threads work on the same column "j" at
c     any given time, but they each work on a different portion "i"
c     of that column.
c
c     This is not as efficient as found in the compcolumn subroutine,
c     but is necessary due to data dependencies.
c
      end
c
c**********************************************************************
      subroutine getdata(nrow,ncol,array)
c
c     Enter dummy data
c
      integer nrow,ncol
      double precision array(nrow,ncol)
c
do j=1,ncol
do i=1,nrow
array(i,j)=(j-1.0)*ncol+i
enddo
enddo
end
multi_par.f output
The output from running the multi_par.f executable is shown below. The
application was run with -np = 1.
Initializing 1000 x 1000 array...
Start computation
Computation took 4.088211059570312E-02 seconds
io.c
In this C example, each process writes to a separate file called iodatax,
where x represents each process rank in turn. Then, the data in
iodatax is read back.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <mpi.h>
/* Each process writes to separate files and reads them back. The
   file name is "iodata" and the process rank is appended to it. */
main(argc, argv)
int argc;
char **argv;
{
int *buf, i, rank, nints, len, flag;
char *filename;
MPI_File fh;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_File_open(MPI_COMM_SELF, filename,
MPI_MODE_CREATE | MPI_MODE_RDWR,
MPI_INFO_NULL, &fh);
if (!flag) {
printf("Process %d: data read back is correct\n",
rank);
MPI_File_delete(filename, MPI_INFO_NULL);
}
free(buf);
free(filename);
MPI_Finalize();
exit(0);
}
io output
The output from running the io executable is shown below. The
application was run with -np = 4.
Process 0: data read back is correct
Process 1: data read back is correct
Process 2: data read back is correct
Process 3: data read back is correct
thread_safe.c
In this C example, N clients loop MAX_WORK times. As part of a single
work item, a client must request service from one of N servers at random.
Each server keeps a count of the requests handled and prints a log of the
requests to stdout. Once all the clients are done working, the servers are
shut down.
#include <stdio.h>
#include <mpi.h>
#include <pthread.h>
#define MAX_WORK 40
#define SERVER_TAG 88
#define CLIENT_TAG 99
#define REQ_SHUTDOWN -1
int process_request(request)
int request;
{
if (request != REQ_SHUTDOWN) service_cnt++;
return request;
}
void* server(args)
void *args;
{
int rank, request;
MPI_Status status;
rank = *((int*)args);
while (1) {
MPI_Recv(&request, 1, MPI_INT, MPI_ANY_SOURCE,
SERVER_TAG, MPI_COMM_WORLD, &status);
if (process_request(request) == REQ_SHUTDOWN)
break;
MPI_Send(&rank, 1, MPI_INT,
status.MPI_SOURCE,
CLIENT_TAG, MPI_COMM_WORLD);
return (void*) 0;
}
{
int w, server, ack;
MPI_Status status;
if (ack != server) {
printf("server failed to process my
request\n");
MPI_Abort(MPI_COMM_WORLD, MPI_ERR_OTHER);
}
}
}
void shutdown_servers(rank)
int rank;
{
int request_shutdown = REQ_SHUTDOWN;
MPI_Barrier(MPI_COMM_WORLD);
MPI_Send(&request_shutdown, 1, MPI_INT, rank,
SERVER_TAG, MPI_COMM_WORLD);
}
main(argc, argv)
int argc;
char *argv[];
{
int rank, size, rtn;
pthread_t mtid;
MPI_Status status;
int my_value, his_value;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
client(rank, size);
shutdown_servers(rank);
MPI_Finalize();
exit(0);
}
thread_safe output
The output from running the thread_safe executable is shown below. The
application was run with -np = 2.
server [1]: processed request 1 for client 1
server [0]: processed request 1 for client 1
server [1]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [0]: processed request 0 for client 0
server [1]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [1]: processed request 1 for client 1
server [1]: processed request 1 for client 1
server [0]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [0]: processed request 1 for client 1
server [1]: processed request 1 for client 1
server [1]: processed request 1 for client 1
server [0]: processed request 1 for client 1
server [1]: processed request 1 for client 1
server [0]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [0]: processed request 0 for client 0
server [0]: processed request 0 for client 0
server [1]: processed request 1 for client 1
server [1]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [1]: processed request 0 for client 0
server [0]: processed request 1 for client 1
server [0]: processed request 0 for client 0
server [0]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [0]: processed request 0 for client 0
server [0]: processed request 1 for client 1
server [1]: processed request 0 for client 0
server [1]: processed request 0 for client 0
server [0]: processed request 1 for client 1
server [0]: processed request 0 for client 0
server [0]: processed request 0 for client 0
server [0]: processed request 0 for client 0
server [0]: processed request 0 for client 0
sort.C
This program does a simple integer sort in parallel. The sort input is
built using the "rand" random number generator. The program is
self-checking and can run with any number of ranks.
#define NUM_OF_ENTRIES_PER_RANK 100
#include <stdio.h>
#include <stdlib.h>
#include <iostream.h>
#include <mpi.h>
#include <limits.h>
#include <iostream.h>
#include <fstream.h>
//
// Class declarations.
//
class Entry {
private:
int value;
public:
Entry()
{ value = 0; }
Entry(int x)
{ value = x; }
Entry(const Entry &e)
{ value = e.getValue(); }
Entry& operator= (const Entry &e)
{ value = e.getValue(); return (*this); }
int getValue() const { return value; }
int operator> (const Entry &e) const
{ return (value > e.getValue()); }
};
class BlockOfEntries {
private:
Entry **entries;
int numOfEntries;
public:
BlockOfEntries(int *numOfEntries_p, int offset);
~BlockOfEntries();
int getnumOfEntries()
{ return numOfEntries; }
void setLeftShadow(const Entry &e)
{ *(entries[0]) = e; }
void setRightShadow(const Entry &e)
{ *(entries[numOfEntries-1]) = e; }
void singleStepOddEntries();
void singleStepEvenEntries();
void verifyEntries(int myRank, int baseLine);
void printEntries(int myRank);
};
//
// Class member definitions.
//
const Entry MAXENTRY(INT_MAX);
const Entry MINENTRY(INT_MIN);
//
//BlockOfEntries::BlockOfEntries
//
//Function:- create the block of entries.
//
BlockOfEntries::BlockOfEntries(int *numOfEntries_p, int myRank)
{
//
// Initialize the random number generator's seed based on the caller's
// rank; thus, each rank should (but might not) get different random
// values.
//
srand((unsigned int) myRank);
numOfEntries = NUM_OF_ENTRIES_PER_RANK;
*numOfEntries_p = numOfEntries;
//
// Add in the left and right shadow entries.
//
numOfEntries += 2;
//
// Allocate space for the entries and use rand to initialize the values.
//
entries = new Entry *[numOfEntries];
for(int i = 1; i < numOfEntries-1; i++) {
entries[i] = new Entry;
*(entries[i]) = (rand()%1000) * ((rand()%2 == 0)? 1 : -1);
}
//
// Initialize the shadow entries.
//
entries[0] = new Entry(MINENTRY);
entries[numOfEntries-1] = new Entry(MAXENTRY);
}
//
//BlockOfEntries::~BlockOfEntries
//
//Function:- delete the block of entries.
//
BlockOfEntries::~BlockOfEntries()
{
for(int i = 1; i < numOfEntries-1; i++) {
delete entries[i];
}
delete entries[0];
delete entries[numOfEntries-1];
delete [] entries;
}
//
//BlockOfEntries::singleStepOddEntries
//
//Function: - Adjust the odd entries.
//
void
BlockOfEntries::singleStepOddEntries()
{
for(int i = 0; i < numOfEntries-1; i += 2) {
if (*(entries[i]) > *(entries[i+1]) ) {
Entry *temp = entries[i+1];
entries[i+1] = entries[i];
entries[i] = temp;
}
}
}
//
//BlockOfEntries::singleStepEvenEntries
//
//Function: - Adjust the even entries.
//
void
BlockOfEntries::singleStepEvenEntries()
{
for(int i = 1; i < numOfEntries-2; i += 2) {
if (*(entries[i]) > *(entries[i+1]) ) {
Entry *temp = entries[i+1];
entries[i+1] = entries[i];
entries[i] = temp;
}
}
}
//
//BlockOfEntries::verifyEntries
//
//Function: - Verify that the block of entries for rank myRank
// is sorted and each entry value is greater than
// or equal to argument baseLine.
//
void
BlockOfEntries::verifyEntries(int myRank, int baseLine)
{
for(int i = 1; i < numOfEntries-2; i++) {
if (entries[i]->getValue() < baseLine) {
cout << "Rank " << myRank
<< " wrong answer i = " << i
<< " baseLine = " << baseLine
<< " value = " << entries[i]->getValue()
<< endl;
MPI_Abort(MPI_COMM_WORLD, MPI_ERR_OTHER);
}
//
//BlockOfEntries::printEntries
//
//Function: - Print myRank's entries to stdout.
//
void
BlockOfEntries::printEntries(int myRank)
{
cout << endl;
cout << "Rank " << myRank << endl;
for(int i = 1; i < numOfEntries-1; i++)
cout << entries[i]->getValue() << endl;
}
int
main(int argc, char **argv)
{
int myRank, numRanks;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
MPI_Comm_size(MPI_COMM_WORLD, &numRanks);
//
// Have each rank build its block of entries for the global sort.
//
int numEntries;
BlockOfEntries *aBlock = new BlockOfEntries(&numEntries,
myRank);
//
// Compute the total number of entries and sort them.
//
numEntries *= numRanks;
for(int j = 0; j < numEntries / 2; j++) {
//
// Synchronize and then update the shadow entries.
//
MPI_Barrier(MPI_COMM_WORLD);
int recvVal, sendVal;
MPI_Request sortRequest;
MPI_Status status;
//
// Everyone except numRanks-1 posts a receive for the right's rightShadow.
//
if (myRank != (numRanks-1)) {
MPI_Irecv(&recvVal, 1, MPI_INT, myRank+1,
MPI_ANY_TAG, MPI_COMM_WORLD,
&sortRequest);
}
//
// Everyone except 0 sends its leftEnd to the left.
//
if (myRank != 0) {
sendVal = aBlock->getLeftEnd().getValue();
MPI_Send(&sendVal, 1, MPI_INT,
myRank-1, 1, MPI_COMM_WORLD);
}
if (myRank != (numRanks-1)) {
MPI_Wait(&sortRequest, &status);
aBlock->setRightShadow(Entry(recvVal));
}
//
// Everyone except 0 posts for the left's leftShadow.
//
if (myRank != 0) {
MPI_Irecv(&recvVal, 1, MPI_INT, myRank-1,
MPI_ANY_TAG, MPI_COMM_WORLD,
&sortRequest);
}
//
// Everyone except numRanks-1 sends its rightEnd right.
//
if (myRank != (numRanks-1)) {
sendVal = aBlock->getRightEnd().getValue();
MPI_Send(&sendVal, 1, MPI_INT,
myRank+1, 1, MPI_COMM_WORLD);
}
if (myRank != 0) {
MPI_Wait(&sortRequest, &status);
aBlock->setLeftShadow(Entry(recvVal));
}
//
// Have each rank fix up its entries.
//
aBlock->singleStepOddEntries();
aBlock->singleStepEvenEntries();
}
//
// Print and verify the result.
//
if (myRank == 0) {
int sendVal;
aBlock->printEntries(myRank);
aBlock->verifyEntries(myRank, INT_MIN);
sendVal = aBlock->getRightEnd().getValue();
if (numRanks > 1)
MPI_Send(&sendVal, 1, MPI_INT, 1, 2, MPI_COMM_WORLD);
} else {
int recvVal;
MPI_Status Status;
MPI_Recv(&recvVal, 1, MPI_INT, myRank-1, 2,
MPI_COMM_WORLD, &Status);
aBlock->printEntries(myRank);
aBlock->verifyEntries(myRank, recvVal);
if (myRank != numRanks-1) {
recvVal = aBlock->getRightEnd().getValue();
MPI_Send(&recvVal, 1, MPI_INT, myRank+1, 2,
MPI_COMM_WORLD);
}
}
delete aBlock;
MPI_Finalize();
exit(0);
}
sort.C output
The output from running the sort executable is shown below. The
application was run with -np = 4.
Rank 0
-998
-996
-996
-993
...
-567
-563
-544
-543
Rank 1
-535
-528
-528
...
-90
-90
-84
-84
Rank 2
-78
-70
-69
-69
...
383
383
386
386
Rank 3
386
393
393
397
...
950
965
987
987
compute_pi_spawn.f
This example computes pi by integrating f(x) = 4/(1 + x**2) using
MPI_Comm_spawn. It starts with one process and spawns a new world that does
the computation along with the original process. Each newly spawned
process receives the number of intervals used, calculates the areas of its
rectangles, and synchronizes for a global summation. The original
process 0 prints the result and the time it took.
program mainprog
include 'mpif.h'
double precision PI25DT
parameter(PI25DT = 3.141592653589793238462643d0)
double precision mypi, pi, h, sum, x, f, a
integer n, myid, numprocs, i, ierr
integer parenticomm, spawnicomm, mergedcomm, high
C
C Function to integrate
C
f(a) = 4.d0 / (1.d0 + a*a)
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
call MPI_COMM_GET_PARENT(parenticomm, ierr)
if (parenticomm .eq. MPI_COMM_NULL) then
print *, "Original Process ", myid, " of ", numprocs,
+ " is alive"
call MPI_COMM_SPAWN("./compute_pi_spawn", MPI_ARGV_NULL, 3,
+ MPI_INFO_NULL, 0, MPI_COMM_WORLD, spawnicomm,
+ MPI_ERRCODES_IGNORE, ierr)
call MPI_INTERCOMM_MERGE(spawnicomm, 0, mergedcomm, ierr)
call MPI_COMM_FREE(spawnicomm, ierr)
else
print *, "Spawned Process ", myid, " of ", numprocs,
+ " is alive"
call MPI_INTERCOMM_MERGE(parenticomm, 1, mergedcomm, ierr)
call MPI_COMM_FREE(parenticomm, ierr)
endif
call MPI_COMM_RANK(mergedcomm, myid, ierr)
call MPI_COMM_SIZE(mergedcomm, numprocs, ierr)
print *, "Process ", myid, " of ", numprocs,
+ " in merged comm is alive"
sizetype = 1
sumtype = 2
if (myid .eq. 0) then
n = 100
endif
call MPI_BCAST(n, 1, MPI_INTEGER, 0, mergedcomm, ierr)
C
C Calculate the interval size.
C
h = 1.0d0 / n
sum = 0.0d0
do 20 i = myid + 1, n, numprocs
x = h * (dble(i) - 0.5d0)
sum = sum + f(x)
20 continue
mypi = h * sum
C
C Collect all the partial sums.
C
call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION,
+ MPI_SUM, 0, mergedcomm, ierr)
C
C Process 0 prints the result.
C
if (myid .eq. 0) then
write(6, 97) pi, abs(pi - PI25DT)
97 format(' pi is approximately: ', F18.16,
+ ' Error is: ', F18.16)
endif
call MPI_COMM_FREE(mergedcomm, ierr)
call MPI_FINALIZE(ierr)
stop
end
compute_pi_spawn.f output
The output from running the compute_pi_spawn executable is shown
below. The application was run with -np = 1 and with the -spawn option.
Original Process 0 of 1 is alive
Spawned Process 0 of 3 is alive
Spawned Process 2 of 3 is alive
Spawned Process 1 of 3 is alive
Process 0 of 4 in merged comm is alive
Process 2 of 4 in merged comm is alive
Process 3 of 4 in merged comm is alive
Process 1 of 4 in merged comm is alive
pi is approximately: 3.1416009869231254
Error is: 0.0000083333333323
B Standard-flexibility in HP MPI
MPI standard: MPI does not mandate what an MPI process is. MPI does not
specify the execution model for each process; a process can be sequential
or multithreaded. See MPI-1.2 Section 2.6.
HP MPI implementation: MPI processes are UNIX processes and can be
multithreaded.
MPI standard: The value returned for MPI_HOST gets the rank of the host
process in the group associated with MPI_COMM_WORLD. MPI_PROC_NULL is
returned if there is no host. MPI does not specify what it means for a
process to be a host, nor does it specify that a HOST exists.
HP MPI implementation: HP MPI always sets the value of MPI_HOST to
MPI_PROC_NULL.

MPI standard: The current MPI definition does not require messages to
carry data type information. Type information might be added to messages
to allow the system to detect mismatches. See MPI-1.2 Section 3.3.2.
HP MPI implementation: The default HP MPI library does not carry this
information due to overload, but the HP MPI diagnostic library (DLIB)
does. To link with the diagnostic library, use -ldmpi on the link line.
Glossary
communicator
Global object that groups application processes together. Processes in a
communicator can communicate with each other or with processes in another
group. Conceptually, communicators define a communication context and a
static group of processes within that context.

context
Internal abstraction used to define a safe communication space for
processes. Within a communicator, context separates point-to-point and
collective communications.

data-parallel model
Design model where data is partitioned and distributed to each process in
an application. Operations are performed on each set of data in parallel
and intermediate results are exchanged between processes until a problem
is solved.

derived data types
User-defined structures that specify a sequence of basic data types and
integer displacements for noncontiguous data. You create derived data
types through the use of type-constructor functions that describe the
layout of sets of primitive types in memory. Derived types may contain
arrays as well as combinations of other primitive data types.

determinism
A behavior describing repeatability in observed parameters. The order of
a set of events does not vary from run to run.

domain decomposition
Breaking down an MPI application's computational space into regular data
structures such that all computation on these structures is identical and
performed in parallel.

explicit parallelism
Programming style that requires you to specify parallel constructs
directly. Using the MPI library is an example of explicit parallelism.

functional decomposition
Breaking down an MPI application's computational space into separate
tasks such that all computation on these tasks is performed in parallel.

gather
Many-to-one collective operation where each process (including the root)
sends the contents of its send buffer to the root.

granularity
Measure of the work done between synchronization points. Fine-grained
applications focus on execution at the instruction level of a program.
Such applications are load balanced but suffer from a low
computation/communication ratio. Coarse-grained applications focus on
execution at the program level where multiple programs may be executed in
parallel.

group
Set of tasks that can be used to organize MPI applications. Multiple
groups are useful for solving problems in linear algebra and domain
decomposition.

HMP
HyperMessaging Protocol is a messaging-based protocol that significantly
enhances performance of parallel and technical applications by optimizing
the processing of various communication tasks across interconnected hosts
for HP-UX systems.

implicit parallelism
Programming style where parallelism is achieved by software layering
(that is, parallel constructs are generated through the software). High
Performance Fortran is an example of implicit parallelism.

load balancing
Measure of how evenly the work load is distributed among an application's
processes. When an application is perfectly balanced, all processes share
the total work load and complete at the same time.

locality
Degree to which computations performed by a processor depend only upon
local data. Locality is measured in several ways including the ratio of
local to nonlocal data accesses.

message bin
A message bin stores messages according to message length. You can define
a message bin by defining the byte range of the message to be stored in
the bin; use the MPI_INSTR environment variable.

MPMD
Multiple program multiple data. Implementations of HP MPI that use two or
more separate executables to construct an application. This design style
can be used to simplify the application source and reduce the size of
spawned processes. Each process may run a different executable.

multihost
A mode of operation for an MPI application where a cluster is used to
carry out a parallel application run.

multilevel parallelism
Refers to multithreaded processes that call MPI routines to perform
computations. This approach is beneficial for problems that can be
decomposed into logical parts for parallel execution (for example, a
looping construct that spawns multiple threads to perform a computation
and then joins after the computation is complete).

nonblocking receive
Communication in which the receiving process returns before a message is
stored in the receive buffer. Nonblocking receives are useful when
communication and computation can be effectively overlapped in an MPI
application. Use of nonblocking receives may also avoid system buffering
and memory-to-memory copying.

nonblocking send
Communication in which the sending process returns before a message is
stored in the send buffer. Nonblocking sends are useful when
communication and computation can be effectively overlapped in an MPI
application.

non-determinism
A behavior describing nonrepeatable observed parameters. The order of a
set of events depends on run-time conditions and so varies from run to
run.

parallel efficiency
An increase in speed in the execution of a parallel application.

point-to-point communication
Communication where data transfer involves sending and receiving messages
between two processes. This is the simplest form of data transfer in a
message-passing model.

polling
Mechanism to handle asynchronous events by actively checking to determine
if an event has occurred.

process
Address space together with a program counter, a set of registers, and a
stack. Processes can be single threaded or multithreaded. Single-threaded
processes can only perform one task at a time. Multithreaded processes
can perform multiple tasks concurrently as when overlapping computation
and communication.

race condition
Situation in which multiple processes vie for the same resource and
receive it in an unpredictable manner. Race conditions can lead to cases
where applications do not run correctly from one invocation to the next.

rank
Integer between zero and (number of processes - 1) that defines the order
of a process in a communicator. Determining the rank of a process is
important when solving problems where a master process partitions and
distributes work to slave processes. The slaves perform some computation
and return the result to the master as the solution.

ready send mode
Form of blocking send where the send cannot start until a matching
receive is posted. The sending process returns immediately.

reduction
Binary operations (such as summation, multiplication, and boolean)
applied globally to all processes in a communicator. These operations are
only valid on numeric data and are always associative but may or may not
be commutative.

scalable
Ability to deliver an increase in application performance proportional to
an increase in hardware resources (normally, adding more processors).

scatter
One-to-many operation where the root's send buffer is partitioned into n
segments and distributed to all processes such that the ith process
receives the ith segment. n represents the total number of processes in
the communicator.

SPMD
Single program multiple data. A master process spawns a number of
identical child processes. The master and the children all run the same
executable.

thread
Threads within a process share the process's address space but each has
its own program counter, registers, and stack. This allows rapid context
switching because threads require little or no memory management.

thread-compliant
An implementation where an MPI process may be multithreaded. If it is,
each thread can issue MPI calls. However, the threads themselves are not
separately addressable.
Symbols with mpirun, 34
+DA2 option, 31 appfiles
+DD64 option, 31 adding program arguments, 59
.mpiview file, 75 assigning ranks in, 61
/opt/aCC/bin/aCC, 27 creating, 59
/opt/ansic/bin/cc, 27 improving communication on multihost
/opt/fortran/bin/f77, 27 systems, 61
/opt/fortran90/bin/f90, 27 setting remote environment variables in, 60
/opt/mpi argument checking, enable, 45
subdirectories, 23 array partitioning, 136
/opt/mpi directory ASCII instrumentation profile, 76
organization of, 23 asynchronous communication, 4
/opt/mpi/bin, 23 autodouble, 29
/opt/mpi/doc, 23
/opt/mpi/help, 23 B
/opt/mpi/include, 23
/opt/mpi/lib/alpha, 23 backtrace, 103
/opt/mpi/lib/hpux32, 23 bandwidth, 6, 86, 91
/opt/mpi/lib/hpux64, 23 barrier, 14, 93
/opt/mpi/lib/linux_ia32, 23 blocking communication, 7
/opt/mpi/lib/linux_ia64, 23 buffered mode, 8
/opt/mpi/lib/pa2.0, 23 MPI_Bsend, 8
/opt/mpi/lib/pa20_64, 23 MPI_Recv, 8
/opt/mpi/newconfig/, 23 MPI_Rsend, 8
/opt/mpi/share/man/man1*, 23 MPI_Send, 8
/opt/mpi/share/man/man3*, 23 MPI_Ssend, 8
/usr/bin/cc, 28 read mode, 8
/usr/bin/cxx, 28 receive mode, 8
/usr/bin/f77, 28 send mode, 8
/usr/bin/f90, 28
/usr/bin/g++, 28 standard mode, 8
/usr/bin/g77, 28 synchronous mode, 8
/usr/bin/gcc, 28 blocking receive, 8
blocking send, 8
Numerics broadcast, 11, 12
buf variable, 8, 9, 10, 12
64-bit support, 31 buffered send mode, 8
build
A examples, 117
aCC, 28 MPI on multiple hosts, 59, 66
ADB, 98 MPI on single host, 21
allgather, 11 problems, 105
allows, 85 building applications, 33
all-reduce, 13
alltoall, 11 C
alternating direction iterative method, 116,
135 C compiler, 27
amount variable, 46 C examples
appfile communicator.c, 116, 133
configure for multiple network interfaces, io.c, 146
88 ping_pong.c, 116, 121
description of, 35 thread_safe.c, 148
C++ examples
173
cart.C, 116, 129 -i8, 29, 30
sort.C, 152 -L, 27
cart.C, 116 -l, 27
change -notv, 27
execution location, 50 -r16, 29, 30
code a -r8, 29, 30
blocking receive, 8 -show, 27
blocking send, 8 -Wl, 27
broadcast, 12 compilers
nonblocking send, 10 default, 27
scatter, 12 compiling applications, 27
code error conditions, 111 completing HP MPI, 111
collect profile information completion routine, 8
ASCII report, 76 computation, 13
collective communication, 11 compute_pi.f, 75, 116
all-reduce, 13 configuration files, 23
reduce, 13 configure environment, 19
reduce-scatter, 13 setenv MPI_ROOT, 23
scan, 13 setenv NLSPATH, 71
collective operations, 10, 10–14 constructor functions
communication, 11 contiguous, 15
computation, 13 indexed, 15
synchronization, 14 structure, 15
comm variable, 8, 9, 10, 12, 13 vector, 15
communication context
context, 9, 13 communication, 9, 13
hot spot, 77 context switching, 90
hot spots, 61 contiguous and noncontiguous data, 14
improving interhost, 61 contiguous constructor, 15
using daemons, 68 count variable, 8, 9, 10, 12
communicator counter instrumentation, 46, 75
ASCII format, 76
defaults, 6
determine no. of processes, 7 create profile, 75
create
freeing memory, 42
appfile, 59
communicator.c, 116 ASCII profile, 75
commutative reductions, 93
compilation instrumentation profile, 75
utilities, 24
compilation utilities, 27 D
compiler options daemons
+autodbl, 29 multipurpose, 62
+autodbl4, 29 number of processes, 62
+DA2.0W, 31 daemons, communication, 68
+DD64, 31 DDE, 41, 98, 113
+i8, 29 debug HP MPI, 41, 97, 113
+r8, 29 debuggers, 97
32- and 64-bit library, 31 default compilers, 27
-autobouble, 29 HP-UX, 27
-i2, 29 Linux IA-32, 28
-i4, 29 Linux Itanium2, 28
174
Tru64UNIX, 28 setting via command line, 38, 60
derived data types, 14 TOTALVIEW, 51
dest variable, 8, 10 error checking, enable, 45
determine error conditions, 111
group size, 5 ewdb, 41, 98, 113
no. of processes in communicator, 7 example applications, 115–162
rank of calling process, 5 cart.C, 116, 129
diagnostics library communicator.c, 116, 133
message signature analysis, 102 compiling and running, 117
MPI object-space corruption, 102 compute_pi.f, 75, 116, 125
multiple buffer writes detection, 102 compute_pi_spawn.f, 161
using, 102 copy default communicator, 116, 133
directory structure, MPI, 23 distribute sections/compute in parallel, 116,
distribute sections/compute in parallel, 116, 127
127 generate virtual topology, 116
dtype variable, 8, 9, 10, 12, 13 io.c, 146
dump shmem configuration, 45 master_worker.f90, 116, 127
measure send/receive time, 116
E multi_par.f, 116, 135
eadb, 98 ping_pong.c, 116, 121
ecc, 28 receive operation, 116
edde, 41, 98, 113 send operation, 116
efc, 28 send_receive.f, 119
egdb, 41, 98, 113 sort.C, 152
eladebug, 98
Elan, 34, 52 thread_safe.c, 148
enable use ADI on 2D compute region, 116
instrumentation, 52 exceeding file descriptor limit, 109
enhanced debugging output, 103 exdb, 41, 98, 113
environment variables external input and output, 110
MP_GANG, 45
MPI_2BCOPY, 99 F
MPI_CC, 29 f90, 28
MPI_COMMD, 39 FAQ, 112
MPI_CXX, 29 file descriptor limit, 109
MPI_DLIB_FLAGS, 40 Fortran 77 examples
MPI_F77, 29 array partitioning, 136
MPI_F90, 29 compute_pi.f, 116, 125
MPI_FLAGS, 41, 97 multi_par.f, 116, 135
MPI_GLOBMEMSIZE, 46 send_receive.f, 116, 119
MPI_INSTR, 46, 75 Fortran 90 examples
MPI_LOCALIP, 47 master_worker.f90, 127
MPI_MT_FLAGS, 48, 49, 50 Fortran 90 troubleshooting, 109
Fortran profiling, 80
MPI_REMSH, 49 Fortran77 examples
MPI_SHMEMCNTL, 50 compute_pi_spawn.f, 161
MPI_WORKDIR, 50 freeing memory, 42
NLSPATH, 71 frequently asked questions, 112
run-time, 38
runtime, 38–45 G
setting in appfiles, 60
gang scheduling, 45–46, 90
175
gather, 11
GDB, 41, 98, 113
gethostname, 164
getting started, 17
ght, 75
global reduce-scatter, 13
global reduction, 13
global variables
  MPI_DEBUG_CONT, 98
group membership, 4
group size, 5

H
header files, 23
heart-beat signals, 43
HMP (hypermessaging protocol), 67
hosts
  assigning using LSF, 53
  multiple, 59, 59–66
HP MPI
  building, 105
  change behavior, 41, 113
  clean-up, 112
  completing, 111
  debug, 95
  FAQ, 96, 112
  frequently asked questions, 112
  jobs running, 65
  kill, 66
  multi-process debuggers, 99
  running, 106
  single-process debuggers, 98
  specify shared memory, 46
  starting, 52, 106
  troubleshooting, 105–113
  twisted-data layout, 137
  utility files, 23
HP MPI utility files, 23
HP-UX gang scheduling, 45–46, 90
hyperfabric, 67

I
-i option, 47, 56
I/O, 164
icc, 28
ifc, 28
IMPI, 70
implement
  barrier, 14
  reduction, 13
improve
  bandwidth, 86
  coding HP MPI, 93
  latency, 86
  network performance, 88
improving interhost communication, 61
indexed constructor, 15
initialize MPI environment, 5
instrumentation
  ASCII profile, 77
  counter, 75
  creating profile, 75
  multihost, 63
  output file, 75
intercommunicators, 6
interoperability problems, 108
intracommunicators, 6

K
kill MPI jobs, 66

L
LADEBUG, 98
language bindings, 164
language interoperability, 43
latency, 6, 86, 91
linking thread-compliant library, 31
logical values in Fortran77, 45
LSF (load sharing facility), 53
  invoking, 53
  on Itanium2, 54

M
Makefile, 117
man pages
  categories, 24
  compilation utilities, 24
  general HP MPI, 24
  HP MPI library, 23
  HP MPI utilities, 23
  runtime, 24
master_worker.f90, 116
memory leaks, 42
message bandwidth
  achieve highest, 91
message buffering problems, 107
message label, 9
message latency
  achieve lowest, 91
message latency/bandwidth, 85, 86
message passing
  advantages, 3
message signature analysis, 102
message size, 6
message status, 8
MP_GANG, 39, 45
MPI
  allgather operation, 11
  alltoall operation, 11
  app hangs at MPI_Send, 112
  broadcast operation, 11
  build application on single host, 21
  change execution source, 50
  directory structure, 23
  gather operation, 11
  initialize environment, 5
  prefix, 80
  routine selection, 91
  run application, 21, 34
  run application on multiple hosts, 35, 36
  run application on single host, 21
  scatter operation, 11
  terminate environment, 5
MPI application, starting, 21
MPI concepts, 4–16
MPI library extensions
  32-bit Fortran, 23
  32-bit Linux, 23
  64-bit Fortran, 23
  64-bit Linux, 23
  Tru64UNIX 64-bit, 23
MPI library routines
  MPI_Comm_rank, 5
  MPI_Comm_size, 5
  MPI_Finalize, 5
  MPI_Init, 5
  MPI_Recv, 5
  MPI_Send, 5
  number of, 4
MPI object-space corruption, 102
MPI_2BCOPY, 99
MPI_Abort, 164
MPI_Barrier, 14, 93
MPI_Bcast, 5, 12
MPI_BOTTOM, 43
MPI_Bsend, 8
MPI_Cancel, 44
MPI_Comm_rank, 5, 37
MPI_COMM_SELF, 6
MPI_Comm_size, 5
MPI_COMM_WORLD, 6
MPI_COMMD, 39
MPI_DEBUG_CONT, 98
MPI_DLIB_FLAGS, 39, 40
MPI_Finalize, 5, 112
MPI_FLAGS, 39, 41, 85
  using to troubleshoot, 98
MPI_FLAGS options
  ADB, 98
  DDE, 98
  E, 85
  GDB, 98
  LADEBUG, 98
  WDB, 98
  XDB, 98
  y, 85
MPI_GET_PROCESSOR_NAME, 164
MPI_GLOBMEMSIZE, 39, 46
MPI_handler_function, 164
MPI_Ibsend, 10
MPI_Init, 5
MPI_INSTR, 39, 46, 75
MPI_Irecv, 10
MPI_Irsend, 10
MPI_Isend, 10
MPI_Issend, 10
MPI_LOCALIP, 39, 47
MPI_MT_FLAGS, 48, 49
MPI_NOBACKTRACE, 39
MPI_Recv, 5, 9
  high message bandwidth, 91
  low message latency, 91
MPI_Reduce, 13
MPI_REMSH, 49
MPI_ROOT variable, 23
MPI_Rsend, 8
  convert to MPI_Ssend, 45
MPI_Scatter, 12
MPI_Send, 5, 8, 112
  convert to MPI_Ssend, 45
  high message bandwidth, 91
  low message latency, 91
MPI_SHMCNTL, 45
MPI_SHMEMCNTL, 39, 50
MPI_Ssend, 8
MPI_TMPDIR, 39, 50
MPI_WORKDIR, 39, 50
MPI_XMPI, 39
mpiCC utility, 29
mpicc utility, 27, 28, 29
mpiclean, 51, 66, 111
mpiexec, 51, 64
  command line options, 65
mpif77 utility, 29
mpif90 utility, 29
MPIHP_Trace_off, 76
MPIHP_Trace_on, 76
mpijob, 51, 65
mpirun, 51
  appfiles, 58
  command line options, 51–58
mpirun.all, 51
MPMD, 169
MPMD applications, 37, 59
multi_par.f, 116
multilevel parallelism, 16, 92
multiple buffer writes detection, 102
multiple hosts, 35, 36, 59–63
  assigning ranks in appfiles, 61
  communication, 61
multiple network interfaces, 88
  configure in appfile, 88
  diagram of, 89
  improve performance, 88
  using, 88
multiple threads, 16, 92
multi-process debugger, 99

N
Native Language Support (NLS), 71
network interfaces, 88
NLS, 71
NLSPATH, 71
no clobber, 47
nonblocking communication, 7, 10
  buffered mode, 10
  MPI_Ibsend, 10
  MPI_Irecv, 10
  MPI_Irsend, 10
  MPI_Isend, 10
  MPI_Issend, 10
  ready mode, 10
  receive mode, 10
  standard mode, 10
  synchronous mode, 10
nonblocking send, 10
noncontiguous and contiguous data, 14
nonportable code, uncovering, 45
number of MPI library routines, 4

O
op variable, 13
OPENMP, block partitioning, 137
optimization report, 45
organization of /opt/mpi, 23

P
p2p_bcopy, 99
packing and unpacking, 14
parent process, 11
performance
  collective routines, 93
  communication hot spots, 61
  derived data types, 93
  latency/bandwidth, 85, 86
  polling schemes, 93
  synchronization, 93
ping_pong.c, 116
PMPI prefix, 80
point-to-point communications
  overview, 6
portability, 4
prefix
  for output file, 75
  MPI, 80
  PMPI, 80
problems
  application hangs at MPI_Send, 112
  build, 105
  exceeding file descriptor limit, 109
  external input and output, 110
  Fortran 90 behavior, 109
  interoperability, 108
  message buffering, 107
  performance, 85, 86–93
  propagation of environment variables, 108
  runtime, 106, 109
  shared memory, 107
  UNIX open file descriptors, 109
process
  multi-threaded, 16
  rank, 6
  rank of root, 13
  rank of source, 9
  reduce communications, 86
  single-threaded, 16
process placement
  multihost, 61
processor subscription, 90
process-to-process copy, 99
profiling
  interface, 80
  using counter instrumentation, 75
progression, 87
propagation of environment variables, 108
prun, 51, 52, 63
  MPI on multiple hosts, 36
  with mpirun, 34
pthreads, 32

R
race condition, 98
rank, 6
  of calling process, 5
  of root process, 13
  of source process, 9
  reordering, 45
raw trace files, 112
ready send mode, 8
receive
  message information, 9
  message methods, 7
  messages, 5, 6
receive buffer
  address, 13
  data type of, 13
  data type of elements, 9
  number of elements in, 9
  starting address, 9
recvbuf variable, 12, 13
recvcount variable, 12
recvtype variable, 12
reduce, 13
reduce-scatter, 13
reduction, 13
  operation, 13
release notes, 23
remote shell, 35
remsh command, 49, 106
  secure, 49
remsh, 35
reordering, rank, 45
req variable, 10
rhosts file, 35, 106
root process, 11
root variable, 12, 13
routine selection, 91
run
  application, 21
  MPI application, 34, 106
  MPI on multiple hosts, 35, 36, 51, 59, 59–63
  MPI on single host, 21
  MPI on single hosts, 51
run examples, 117
runtime
  environment variables, 38
  problems, 106–109
  utilities, 24, 51–58
  utility commands, 51
    mpiclean, 66
    mpijob, 65
    mpirun, 51
    mpirun.all, 51
run-time environment variables
  MP_GANG, 45
runtime environment variables
  MP_GANG, 39, 45
  MPI_COMMD, 39
  MPI_DLIB_FLAGS, 39, 40
  MPI_FLAGS, 39, 41
  MPI_GLOBMEMSIZE, 39, 46
  MPI_INSTR, 39, 46
  MPI_LOCALIP, 39, 47
  MPI_MT_FLAGS, 48, 49
  MPI_NOBACKTRACE, 39
  MPI_REMSH, 49
  MPI_SHMCNTL, 45
  MPI_SHMEMCNTL, 39, 50
  MPI_TMPDIR, 39, 50
  MPI_WORKDIR, 39, 50
  MPI_XMPI, 39

S
s, 43
scan, 13
scatter, 11, 12
secure shell, 49
select reduction operation, 13
send buffer
  address, 13
  data type of, 13
  number of elements in, 13
sendbuf variable, 12, 13
sendcount variable, 12
sending
  data in one operation, 5
  messages, 5–7
sendtype variable, 12
setenv
  MPI_ROOT, 23
shared memory
  control subdivision of, 50
  default settings, 45
  MPI_SHMEMCNTL, 50
  specify, 46
  system limits, 107
SIGBUS, 103
SIGILL, 103
SIGSEGV, 103
SIGSYS, 103
single-process debuggers, 98
single-threaded processes, 16
SMP, 171
source variable, 9, 10
spin/yield logic, 44
SPMD, 171
SPMD applications, 37
standard send mode, 8
starting
  HP MPI, 21, 106
  multihost applications, 35, 36, 106
  singlehost applications, 21
start-up, 34, 52
status, 8
status variable, 9
stdargs, 164
stdin, 110
stdio, 110, 164
stdout, 110
storing temp files, 50
structure constructor, 15
subdivision of shared memory, 50
subscription
  definition of, 90
swapping overhead, 46
synchronization, 14
  performance, and, 93
  variables, 4
synchronous send mode, 8

T
-t option, 57
tag variable, 8, 9, 10
terminate MPI environment, 5
thread
  communication, 68
  multiple, 16
thread-compliant library, 31
  +O3, 32
  +Oparallel, 32
total transfer time, 6
TOTALVIEW, 51
troubleshooting, 95
  Fortran 90, 109
  HP MPI, 105–113
  message buffering, 107
  MPI_Finalize, 111
  UNIX file descriptors, 109
  using MPI_FLAGS, 98
  using the what command, 19
  version information, 19, 105
tuning, 83–93
twisted-data layout, 137

U
UNIX open file descriptors, 109
unpacking and packing, 14
using
  counter instrumentation, 75
  gang scheduling, 45
  multiple network interfaces, 88
  profiling interface, 80

V
variables
  buf, 8, 9, 10, 12
  comm, 8, 9, 10, 12, 13
  count, 8, 9, 10, 12
  dest, 8, 10
  dtype, 8, 9, 10, 12, 13
  MPI_DEBUG_CONT, 98
  MPI_ROOT, 23
  op, 13
  recvbuf, 12, 13
  recvcount, 12
  recvtype, 12
  req, 10
  root, 12, 13
  runtime, 38–45
  sendbuf, 12, 13
  sendcount, 12
  sendtype, 12
  source, 9, 10
  status, 9
  tag, 8, 9, 10
vector constructor, 15
version, using what, 19
viewing
  ASCII profile, 76
visual mpi, 97

W
WDB, 41, 98, 113
what command, 19, 105
  with mpirun, 52

X
XDB, 41, 98, 113

Y
yield/spin logic, 44

Z
zero-buffering, 45