High Performance Computing

Message Passing Interface


The Message Passing Interface (MPI) is a standard specification for message passing in parallel
computation on distributed-memory systems. MPI isn't a programming language; it's a library of
functions that programmers call from C, C++, or Fortran code to write parallel programs. With MPI,
a communicator groups multiple processes that run concurrently, possibly on separate nodes of a
cluster, and communicators can be created dynamically. Each process has a unique MPI rank that
identifies it, has its own memory space, and executes independently of the other processes.
Processes exchange data by passing messages to one another. Parallelism is achieved by
partitioning a program's work into small chunks and distributing those chunks among the
processes, so that each process works on its own part.
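
As a concrete illustration (a minimal sketch added here, not part of the original page), the C program below initializes MPI, queries its own rank and the size of the communicator, and prints both. All function names are standard MPI calls; the file name and launch command in the note afterwards are assumptions.

/* hello.c - a minimal MPI program: each process reports its rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* unique rank of this process  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes    */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime    */
    return 0;
}

Assuming an MPI implementation such as MPICH or OpenMPI is installed, a typical build-and-run sequence is mpicc hello.c -o hello followed by mpirun -np 4 ./hello, which launches four processes with ranks 0 through 3.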

MPI Communication Methods


MPI provides three communication methods that processes can use to communicate with one
another. Each is discussed below.

Point-to-Point Communication

MPI point-to-point communication is the most commonly used communication method in MPI. It
transfers a message from one process to one particular process in the same communicator. MPI
provides both blocking (synchronous) and non-blocking (asynchronous) point-to-point
communication. With blocking communication, an MPI process sends a message to another MPI
process and waits until the message has been received before continuing its work. With
non-blocking communication, the sending process hands the message off and continues its work
immediately, without waiting for confirmation that the message has been received by the
receiving process.
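
The C sketch below (an added illustration, not from the original page) contrasts the two modes: rank 0 posts a blocking MPI_Send, while rank 1 posts a non-blocking MPI_Irecv, is free to do other work in the meantime, and then completes the transfer with MPI_Wait. It assumes the program is launched with exactly two processes.

/* Blocking send on rank 0, non-blocking receive on rank 1.
   Assumes exactly two ranks (e.g. mpirun -np 2). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Blocking: returns only when the send buffer is safe to reuse. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        int received;
        /* Non-blocking: returns immediately; the transfer proceeds
           in the background. */
        MPI_Irecv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... unrelated work could overlap the communication here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* block until data arrives */
        printf("Rank 1 received %d\n", received);
    }

    MPI_Finalize();
    return 0;
}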


Collective Communication

With this type of MPI communication method, a process broadcasts a message to all processes in
the same communicator, including itself.
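
A minimal C sketch of a broadcast (added here as an illustration, not from the original page): rank 0 is chosen as the root, and the broadcast value is arbitrary. Note that every rank, including the root, must make the same MPI_Bcast call.

/* Rank 0 broadcasts an integer to every rank in MPI_COMM_WORLD. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        data = 100;   /* only the root holds the value initially */

    /* After MPI_Bcast returns, every rank holds 100. */
    MPI_Bcast(&data, 1, MPI_INT, 0 /* root */, MPI_COMM_WORLD);

    printf("Rank %d has data = %d\n", rank, data);
    MPI_Finalize();
    return 0;
}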

One-Sided Communication

With the MPI one-sided communication method, a process can directly access the memory space of
another process without that process's active involvement.
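
The sketch below (an added illustration, not from the original page) uses the standard MPI remote-memory-access calls: every rank exposes an integer in a window, and rank 0 fetches rank 1's value with MPI_Get inside a fence epoch, without rank 1 posting any matching receive. It assumes at least two ranks.

/* One-sided communication: rank 0 reads directly from rank 1's
   exposed memory window. Assumes at least two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, local = 0, remote = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)
        local = 99;   /* the value rank 0 will fetch */

    /* Each rank exposes its 'local' integer in a window. */
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open an access epoch     */
    if (rank == 0)
        MPI_Get(&remote, 1, MPI_INT,     /* destination on rank 0    */
                1, 0, 1, MPI_INT, win);  /* target rank 1, disp 0    */
    MPI_Win_fence(0, win);               /* complete the transfer    */

    if (rank == 0)
        printf("Rank 0 fetched %d from rank 1\n", remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}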

Common MPI Distributions


MPI isn’t a library implementation by itself, but rather, There are a vast majority of MPI implementations
available out there. In this section, the most widely used vendor and public MPI implementations are
discussed as follows.

Message Passing Interface Chameleon (MPICH)

Message Passing Interface Chameleon (MPICH) is a high-performance, open-source, portable
implementation of MPI for parallel computation on distributed-memory systems. MPICH is the
basis for a wide range of MPI derivatives, including Intel MPI and MVAPICH, among others. It
supports parallel systems ranging from multi-core nodes to clusters to large supercomputers,
as well as high-speed networks and proprietary high-end computing systems. For more
information, visit the MPICH home page.

Intel MPI Library

Developed by Intel, the Intel MPI Library is an MPI implementation based on MPICH. Programmers
can use it to create advanced, complex parallel applications that run on clusters with Intel
processors. In addition, the Intel MPI Library gives programmers the ability to test and
maintain their MPI applications. For more details, visit the Intel MPI Library home page.

MVAPICH

Developed by Ohio State University, MVAPICH is an MPI implementation built on the InfiniBand,
Omni-Path, Ethernet/iWARP, and RoCE networking technologies. It provides high performance,
scalability, fault-tolerance support, and portability across many networks. For more
information, visit the MVAPICH home page.

Open Message Passing Interface (OpenMPI)

Open Message Passing Interface (OpenMPI) is an open-source implementation of MPI maintained by
a large community from industry and academia. It supports many different 32- and 64-bit
platforms and networks, including heterogeneous networks. Many of the largest systems on the
TOP500 list of supercomputers run OpenMPI. For details, visit the OpenMPI home page.

© 2021 New Mexico State University - Board of Regents.
