
INTRODUCTION TO MPI BASICS

MPI (Message Passing Interface) facilitates communication and coordination among parallel processes across multiple processors or computing nodes, supporting parallel and distributed computing.
KEY CONCEPTS
• MPI Definition:
• Standardized functions for message passing.
• Enables process communication and synchronization.
• Processes:
• An MPI program runs as multiple parallel processes.
• Each process has its own address space and execution context (see the sketch after this list).
• Message Passing:
• Communication achieved through message exchange.
• Processes send and receive messages.
• Scalability:
• Efficient scaling across distributed memory systems.
• Supports parallel execution on thousands of processors.
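A minimal sketch of these concepts in C, using only standard MPI calls: each process discovers its own rank and the total process count.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* unique ID of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Process %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

Run with several processes, each prints a different rank: every process executes the same program in its own address space.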
IMPLEMENTATION OF MPI
• MPI Libraries:
• MPI is implemented through libraries.
• These libraries provide the necessary functions and routines for message passing in MPI programs.
• Common MPI Implementations:
• Examples of popular MPI implementations include:
• MPICH
• Open MPI
• Intel MPI
• These implementations offer platform-independent support for parallel and distributed computing.
• Compatibility Across Platforms:
• MPI implementations are available for a wide range of platforms, including:
• Desktop workstations
• Compute clusters
• Supercomputers
• Cloud environments
• This ensures that MPI-based applications can run seamlessly across different hardware and software configurations.
COMPATIBILITY ACROSS PLATFORMS

Platform-Independent: MPI implementations run on various hardware and software environments. This ensures that MPI-based applications can be developed and deployed across different computing platforms without significant modifications.

• Benefits:
• Desktops: Efficient resource utilization for scientific tasks.
• Clusters: Utilizes distributed computing for large-scale computations.
• Supercomputers: Enables high-performance computing across thousands of processors.
• Cloud: Access scalable resources for parallel computing and data processing.
MPI COMMUNICATION MODES

• Point-to-Point Communication:
• Direct communication between processes.
• Uses MPI send and receive operations.
• Collective Communication:
• Involves groups of processes working together.
• Includes broadcast, scatter, gather, reduce operations.
• Synchronization:
• Ensures effective coordination of processes.
• MPI provides primitives such as barriers for synchronization (see the sketch after this list).
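The fragment below sketches all three modes; it assumes it runs between MPI_Init and MPI_Finalize with at least two processes.

int rank, value = 0;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

if (rank == 0) {
    value = 42;
    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* point-to-point: send to rank 1 */
} else if (rank == 1) {
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);                          /* point-to-point: receive from rank 0 */
}

MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* collective: root 0 broadcasts to all ranks */
MPI_Barrier(MPI_COMM_WORLD);                       /* synchronization: every rank waits here */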
MPI MESSAGE PASSING OPERATIONS

• Send and Receive Operations:
• MPI enables processes to exchange messages explicitly.
• Send: a process posts a message to a destination rank.
• Receive: a process accepts a matching incoming message.
• Blocking vs. Non-blocking Communication:
• Blocking: the call returns only once the message buffer is safe to reuse.
• Non-blocking: the call returns immediately; completion is checked later (see the sketch below).
• Buffering in MPI Message Passing:
• Messages may be buffered by the library until a matching receive is posted.
• MPI guarantees reliable delivery and preserves message order between each pair of processes.
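A sketch of non-blocking communication between ranks 0 and 1 (assumed to run inside an initialized two-process MPI program; the variable names are illustrative):

int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

int outgoing = rank, incoming = -1;
int partner = (rank == 0) ? 1 : 0;  /* pair up ranks 0 and 1 */
MPI_Request reqs[2];

MPI_Isend(&outgoing, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Irecv(&incoming, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

/* useful computation can overlap with the pending communication here */

MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* block until both operations complete */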
GENERAL MPI PROGRAM STRUCTURE

MPI Programming
→ writing MPI programs
→ compiling & linking (commands shown after the listing)

Basic program (hello.c):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* initialize the MPI environment */
    printf("Hello world\n");
    MPI_Finalize();           /* clean up the MPI environment */
    return 0;
}
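To compile, link, and run hello.c with a typical installation (MPICH, Open MPI, and Intel MPI all ship a compiler wrapper and a launcher; the conventional names are used here):

mpicc hello.c -o hello    # compiler wrapper: compiles and links against the MPI library
mpirun -np 4 ./hello      # launches 4 processes, each printing "Hello world"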
