Lab10 Parallel and Distributed Computing
EXPERIMENT NO 10
Assessment criteria: ability to conduct the experiment, data presentation, experimental results, conclusion.
LAB REPORT 10
Date: 12/13/2024
LAB TASKS
Consider a suitable instance that uses MPI routines to assign different tasks to
different processors. For example, parts of an input data set might be divided
and processed by different processors, or a finite difference grid might be
divided among the available processors. This means that the code needs to
identify processors. In this example, each processor is identified by its rank, an
integer from 0 to the total number of processors minus one.
1. Implement the logic using C
2. Build the code
3. Show the screenshots with proper justification
To examine the above scenario, use functions such as the following:
MPI_Init(&argc, &argv) initializes the MPI execution environment; its two
arguments are a pointer to the argument count and a pointer to the argument
vector (both may be NULL if the program does not use command-line arguments).
MPI_Comm_rank() determines the rank of the calling process within the
communicator. It takes the communicator and a pointer to an integer as arguments.
MPI_Comm_size() determines the size of the communicator, i.e., the total
number of processes. It takes the communicator and a pointer to an integer as
arguments.
MPI_Finalize() terminates the MPI execution environment.
Code & Output:
Lab Task 02:
Write a C program that uses MPI_Reduce to combine partial sums computed
independently by each process in the group.
Code:
Output:
Conclusion:
In this lab, we focused on the fundamentals of parallel and distributed computing using
MPI. We gained practical insight into the benefits and challenges of parallel computing. I
used MPI communication primitives such as MPI_Send and MPI_Recv to optimize task
distribution and reduce idle time, ensuring efficient resource utilization across processes.
Finally, these skills are critical for solving real-world problems in high-performance
computing and distributed systems.