
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

EXPERIMENT NO 10

Lab Title: Parallel and Distributed Computing: MPI Programs

Student Name: Muhammad Burhan Ahmed        Reg. No: 210287


Objective: Implement and analyze various MPI programs.
LAB ASSESSMENT:
Attributes                                        Excellent (5)   Good (4)   Average (3)   Satisfactory (2)   Unsatisfactory (1)

Ability to conduct experiment

Ability to assimilate the results

Effective use of lab equipment and follows the lab safety rules

Total Marks: __________        Obtained Marks: __________

LAB REPORT ASSESSMENT:


Attributes                 Excellent (5)   Good (4)   Average (3)   Satisfactory (2)   Unsatisfactory (1)

Data presentation

Experimental results

Conclusion

Total Marks: __________        Obtained Marks: __________

Date: 12/13/2024        Signature: ____________


Air University

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

LAB REPORT 10

SUBMITTED TO: Miss Sidrish Ehsan

SUBMITTED BY: Muhammad Burhan Ahmed

Date: 12/13/2024
LAB TASKS

Lab Task 01:

Consider a suitable instance that uses MPI routines to assign different tasks to
different processors. For example, parts of an input data set might be divided
and processed by different processors, or a finite difference grid might be
divided among the available processors. This means that the code needs to
identify processors. In this example, processors are identified by rank, an
integer ranging from 0 to one less than the total number of processors.
1. Implement the logic using C
2. Build the code
3. Show the screenshots with proper justification
To examine the above scenario, use functions such as the following:
MPI_Init(NULL, NULL) initializes the MPI execution environment; its two
arguments are a pointer to the number of arguments and a pointer to the
argument vector (both may be NULL).
MPI_Comm_rank() determines the rank of the calling process in the
communicator. It takes the communicator as an argument.
MPI_Comm_size() determines the number of processes in the communicator. It
takes the communicator as an argument.
MPI_Finalize() terminates the MPI execution environment.

Code & Output:

Lab Task 02:

Write a C program that uses MPI_Reduce to divide the work among a group of
processors, each computing its part of the addition independently before the
results are combined at the root.

Hint: the function prototype is as follows:

MPI_Reduce(void* send_data, void* recv_data, int count, MPI_Datatype datatype,
           MPI_Op op, int root, MPI_Comm communicator)

Code:
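The submitted code was a screenshot. A minimal sketch using MPI_Reduce,
assuming each process contributes its own rank as its local value (an
illustrative choice, not necessarily the original program):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process computes its local value independently;
       using the rank itself is an illustrative assumption */
    int local_value = rank;
    int global_sum = 0;

    /* Combine the local values into a single sum on the root (rank 0) */
    MPI_Reduce(&local_value, &global_sum, 1, MPI_INT, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Sum across %d processes: %d\n", size, global_sum);
    }

    MPI_Finalize();
    return 0;
}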

Output:
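The original output screenshot is not reproduced here. If the sketch above
were run with four processes (mpirun -np 4 ./task2, file name hypothetical),
the root would print "Sum across 4 processes: 6", since the ranks
0 + 1 + 2 + 3 sum to 6.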

Conclusion:

In this lab, we focused on the fundamentals of parallel and distributed computing using
MPI and gained practical insight into the benefits and challenges of parallel computing.
We used MPI communication primitives such as MPI_Send and MPI_Recv to optimize task
distribution and reduce idle time, ensuring efficient resource utilization across processes.
Finally, these skills are critical for solving real-world problems in high-performance
computing and distributed systems.
