Message Passing Interface (MPI)
The message passing interface defines a standard suite of functions for exchanging data between the processes of a parallel program. The term message passing itself typically refers to the sending of a message to an object, parallel process, subroutine, function or thread; the message is then often used to start another process.
MPI isn't endorsed as an official standard by any standards organization, such as the
Institute of Electrical and Electronics Engineers (IEEE) or the International Organization
for Standardization (ISO), but it's generally considered to be the industry standard, and
it forms the basis for most communication interfaces adopted by parallel computing
programmers. Developers have also created various implementations of MPI, including MPICH and Open MPI.
MPI defines the syntax and semantics of library routines, with bindings for programming languages including Fortran, C, C++ and Java.
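As a brief illustration of the C binding, the following minimal program is a sketch that assumes an MPI implementation such as MPICH or Open MPI is installed; it initializes MPI, queries each process's rank and the total process count, and shuts the runtime down:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);                /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the MPI runtime down */
    return 0;
}

Such a program is typically compiled with a wrapper such as mpicc and launched with mpirun or mpiexec, for example mpirun -np 4 ./hello.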
Key MPI concepts and commands
The following list includes some key MPI concepts and commands:
Comm: These are communicator objects that connect groups of processes in MPI. A communicator gives each contained process an independent identifier, its rank, and arranges the processes in an ordered topology. The base communicator, which contains every process, is MPI_COMM_WORLD. (A sketch covering the communicator commands in this list appears after the list.)
Color: When a communicator is split with MPI_Comm_split, each process supplies a color, and all processes that supply the same color are placed in the same new communicator. A related command is MPE_Make_color_array, which changes the available colors; it belongs to the MPE extension library rather than to MPI itself.
Key: The rank, or order, of a process in the new communicator is based on the key it supplies. If two processes supply the same key, the tie is broken by their rank in the old communicator.
Newcomm: This is a command for creating a new communicator. MPI_Comm_dup is an example command; it creates a duplicate of an existing communicator with the same fixed attributes.
Derived data types: MPI functions require a specification of the type of data sent between processes. Predefined constants such as MPI_INT, MPI_CHAR and MPI_DOUBLE describe the basic data types, and derived types can be built up from them.
Point-to-point: This sends a message between two specific processes. MPI_Send and MPI_Recv are two common blocking methods for point-to-point messages. Blocking means that each call waits until the message has been completely sent or received before the process continues. (A combined point-to-point and collective sketch follows this list.)
Collective basics: These are collective functions that involve communication among all processes in a process group. MPI_Bcast is one example; it broadcasts data from one process, the root, to all other processes in the group.
One-sided: This term typically refers to communication operations in which a single process specifies the whole transfer, including MPI_Put, MPI_Get and MPI_Accumulate. These perform, respectively, a write to remote memory, a read from remote memory and a reduction operation on the same memory across tasks. (A one-sided sketch also follows this list.)
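To make the communicator items above concrete, here is a minimal sketch in C. The function names are standard MPI; the even/odd grouping is simply an illustrative choice. The program splits MPI_COMM_WORLD by color and key, then duplicates the resulting communicator:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Color: even- and odd-ranked processes go into separate
       communicators.  Key: the world rank, so ordering is kept. */
    int color = world_rank % 2;
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    int sub_rank;
    MPI_Comm_rank(subcomm, &sub_rank);
    printf("World rank %d has rank %d in subcommunicator %d\n",
           world_rank, sub_rank, color);

    /* Newcomm: duplicate subcomm, e.g. to give a library its own
       communication context. */
    MPI_Comm dupcomm;
    MPI_Comm_dup(subcomm, &dupcomm);

    MPI_Comm_free(&dupcomm);
    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}

Run with mpirun -np 4, for example, world ranks 0 and 2 form one subcommunicator and ranks 1 and 3 the other.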
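The point-to-point, collective and data type items can be illustrated together. In this sketch, where the payload values are arbitrary, rank 0 sends an MPI_INT to rank 1 with the blocking MPI_Send and MPI_Recv calls, then broadcasts an array of MPI_DOUBLE values to every process with MPI_Bcast:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: a blocking send from rank 0 to rank 1.
       The MPI_INT constant tells MPI how to interpret the buffer. */
    if (size >= 2) {
        if (rank == 0) {
            int value = 42;  /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d\n", value);
        }
    }

    /* Collective: the root (rank 0) broadcasts to every process
       in the communicator, including itself. */
    double data[3] = {0.0, 0.0, 0.0};
    if (rank == 0) { data[0] = 1.5; data[1] = 2.5; data[2] = 3.5; }
    MPI_Bcast(data, 3, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}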
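Finally, a sketch of the one-sided operations, using the simple fence-based synchronization style and assuming at least two processes: each process exposes a buffer through an RMA window, and rank 0 writes directly into rank 1's memory with MPI_Put:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process exposes one int through an RMA window. */
    int local = -1;
    MPI_Win win;
    MPI_Win_create(&local, (MPI_Aint)sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);            /* open the access epoch */
    if (rank == 0 && size >= 2) {
        int value = 99;               /* arbitrary payload */
        /* Write directly into rank 1's window; rank 1 posts no
           matching receive. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);            /* complete all transfers */

    if (rank == 1)
        printf("Rank 1's buffer now holds %d\n", local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}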
In November 1992, a draft for MPI-1 was created and in 1993 the standard was
presented at the Supercomputing '93 conference. With additional feedback and
changes, MPI version 1.0 was released in 1994. Since then, MPI has been open to all
members of the high-performance computing community, including more than 40
participating organizations.
The older MPI 1.3 standard, dubbed MPI-1, provides over 115 functions. The later MPI
2.2 standard, or MPI-2, offers over 500 functions and is largely backward compatible
with MPI-1.
However, not all MPI libraries provide a full implementation of MPI-2. MPI-2 added parallel I/O, dynamic process management and remote memory operations; a brief parallel I/O sketch follows this paragraph.
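As an illustration of MPI-2's parallel I/O, the sketch below (the file name is arbitrary) has every process write its rank into a shared file at a rank-determined offset, so the writes never overlap:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* All processes open the same file collectively. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "ranks.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each process writes one int at its own offset. */
    MPI_Offset offset = rank * (MPI_Offset)sizeof(int);
    MPI_File_write_at(fh, offset, &rank, 1, MPI_INT,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}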
The MPI-3 standard, first released in 2012, improves scalability, enhances performance, includes multicore and cluster support and interoperates with more applications. In 2021, the MPI Forum released MPI 4.0, which introduced partitioned communications, a new tool interface, persistent collectives and other additions.
MPI 5.0 is currently under development.