Notes Python Parallel Day Four

The document explains the use of the mpi4py module for parallel computing in Python, detailing functions for communication between processes, such as sending and receiving messages, as well as synchronization mechanisms like barriers. It also introduces the parsl module, which simplifies parallel programming by providing a high-level interface for executing tasks concurrently. Key concepts include defining tasks with decorators and installing the parsl module using pip.

Uploaded by Nikita yadav

from mpi4py import MPI

This imports MPI from the mpi4py module.

comm = MPI.COMM_WORLD
Creates the communicator object used for communication among all the processes.

totalProcess = comm.Get_size()
This function returns the number of processes inside the
communicator, i.e., the total number of processes inside the communicator.

rank = comm.Get_rank()
Each process has a unique identifier called its rank, which is a
number ranging from 0 to totalProcess - 1.

print(f"Hello from process {rank} from {totalProcess} process")


This print statement is executed by every process inside your
communicator.

Output:

D:\MPIPython>mpiexec -n 4 python helloworld.py


Hello from process 2 from 4 process
Hello from process 1 from 4 process
Hello from process 3 from 4 process
Hello from process 0 from 4 process

As you can see, the ranks do not print their messages in order. Whichever rank
reaches the print statement first prints its message first.

MPI.Wtime() --> Wtime stands for wall time. This function returns the current
wall-clock time in seconds, so you can use it to keep track of the time taken
by each section of your code.

Your MPI code can also incorporate numpy.

Let us explore send() and recv():

comm.send(message, dest=<destination rank>, tag=<message ID>)

mesg = comm.recv(source=<source rank>, tag=<message ID>)

comm.Barrier()

Barrier:
A barrier in parallel computing, including MPI, is an important
synchronization mechanism that ensures all the processes in a communicator reach a
certain point in the code before any of them can proceed.

Broadcast:
comm.bcast()

This function is used to send the same data from one process (the root
process) to all the other processes in a communicator.

comm.scatter()
This function distributes distinct (unique) chunks of data from one
process (the root) to all the other processes. Here each process receives a
different portion of the data.

comm.gather()
This function does the reverse (inverse) of what scatter does. It
collects data from all the processes and assembles (merges) it at the root process.

=============================================================
parsl is a parallel programming module in Python designed to simplify the process
of writing parallel applications.
It provides a high-level interface to execute tasks concurrently and manage
workflows, making it easier to use the capabilities of multi-core and distributed
computing environments.

Basic concepts:
Tasks:
	functions that are defined to be executed in parallel. You use the decorator
@parsl.python_app to turn your function into a task.

@parsl.python_app
def Add(a, b):
    return a + b

To install this module:


pip install parsl
