Notes Python Parallel Day Four
comm = MPI.COMM_WORLD
Creates a communicator object that is used to communicate with all the processes launched for the job.
totalProcess = comm.Get_size()
This function returns how many processes are inside the
communicator, i.e., the total number of processes in the communicator.
rank = comm.Get_rank()
Each process has a unique identifier called its rank, which is an
integer ranging from 0 to totalProcess - 1.
Output:
As you can see, the ranks do not print their messages in order. Whichever rank
reaches the print statement first prints its message first.
MPI.Wtime() --> Wtime stands for wall time. We can use this function to keep
track of the time taken by each section of your code.
comm.Barrier()
Barrier:
A barrier in parallel computing, including MPI, is an important
synchronization mechanism: it ensures that all the processes in a communicator reach a
certain point in the code before any of them can proceed past it.
Broadcast:
comm.bcast()
        This function sends the same piece of data from one process (the root) to
all the other processes in the communicator.
comm.scatter()
This function distributes distinct (unique) chunks of data from one
process (the root) to all other processes. Here each process receives a different
portion of the data.
comm.gather()
This function does the reverse (inverse) of what scatter does. It
collects data from all the processes and assembles (merges) it at the root process.
=============================================================
parsl is a parallel programming module in Python designed to simplify the process
of writing parallel applications.
It provides a high-level interface to execute tasks concurrently and manage
workflows, making it easier to use the capabilities of multi-core and distributed
computing environments.
Basic concepts:
Tasks:
functions that are defined to be executed in parallel. You use the decorator
@parsl.python_app to mark your function as a task.
@parsl.python_app
def Add(a, b):
    return a + b