
Master in High Performance Computing

Advanced Parallel Programming


MPI: Remote Memory Access Operations

LABS 4
The labs will be carried out on the Finis Terrae (FT2) supercomputer of the Galicia
Supercomputing Center (CESGA).
For each lab you will have to write a short report explaining what you have done in
each exercise, the resulting codes, and the performance analysis. The report can be
written in English or Spanish. The deadline for each lab will be communicated via
Slack.
You can use the Intel MPI implementation (module load intel impi) or the OpenMPI
one (module load gcc openmpi); there may be some differences between them. The
exercises are based on codes that you have previously parallelized with MPI.

1. Parallelize the code pi_integral.c using MPI RMA operations. Compare it with
the blocking collective version (a minimal RMA sketch of this reduction pattern
is shown after this list).

2. Parallelize the code dotprod.c using MPI RMA operations. Compare it with the
blocking collective version.

3. Parallelize the code mxvnm.c using MPI RMA operations. Compare it with the
blocking collective version. N = M can be assumed.

4. Parallelize the code stencil.c using MPI RMA operations. Compare it with the
implementation using point-to-point communications (see the halo-exchange sketch
after this list).
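As a starting point for exercises 1-3, the sketch below shows one way to replace a
blocking reduction such as MPI_Reduce with RMA: every rank adds its partial result
into a single double exposed by rank 0, using MPI_Accumulate bracketed by
MPI_Win_fence calls. It is only a minimal illustration under assumed names; the
interval count, the variable names, and the midpoint-rule integrand are assumptions,
not taken from the original pi_integral.c.

/* Minimal sketch: pi by numerical integration with MPI RMA instead of a
 * blocking MPI_Reduce. Interval count and variable names are assumptions,
 * not taken from the original pi_integral.c. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    long n = 100000000;          /* number of intervals (assumed) */
    double h, local_sum = 0.0, pi = 0.0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank 0 exposes the accumulation target; other ranks expose nothing. */
    MPI_Win_create(rank == 0 ? &pi : NULL,
                   rank == 0 ? sizeof(double) : 0,
                   sizeof(double), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Midpoint rule for the integral of 4/(1+x^2) on [0,1], cyclic split. */
    h = 1.0 / (double)n;
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* Every rank adds its partial sum into pi on rank 0. */
    MPI_Win_fence(0, win);
    MPI_Accumulate(&local_sum, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE,
                   MPI_SUM, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("pi approx = %.15f\n", pi);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

The reductions in dotprod.c and mxvnm.c can follow the same pattern, accumulating
each rank's partial dot product or partial result vector into a window exposed by
rank 0, and the fence synchronization can then be compared against the blocking
collective version in the performance analysis.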
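For exercise 4, a common RMA pattern is to replace the point-to-point halo exchange
with MPI_Put operations that push boundary rows into the neighbours' ghost rows
inside a fence epoch. The sketch below illustrates this under assumed names and
sizes (N, NROWS, a 1-D row decomposition with ghost rows 0 and NROWS+1); the actual
data layout in stencil.c may differ.

/* Minimal sketch of a halo exchange with MPI_Put and fence synchronization,
 * as one possible RMA replacement for the MPI_Send/MPI_Recv version of
 * stencil.c. The 1-D row decomposition, sizes and names are assumptions. */
#include <mpi.h>
#include <stdlib.h>

#define N      1024             /* columns (assumed) */
#define NROWS  256              /* local rows per rank (assumed) */

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Win win;
    /* Local block with one ghost row above (row 0) and below (row NROWS+1). */
    double (*u)[N] = malloc((NROWS + 2) * sizeof *u);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Expose the whole local block, including ghost rows, in a window. */
    MPI_Win_create(u, (MPI_Aint)(NROWS + 2) * N * sizeof(double),
                   sizeof(double), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Simple initialization so the sketch is self-contained. */
    for (int i = 0; i < NROWS + 2; i++)
        for (int j = 0; j < N; j++)
            u[i][j] = (double)rank;

    /* One halo exchange per iteration, bracketed by fences. */
    MPI_Win_fence(0, win);
    /* My first real row (row 1) becomes the lower ghost row of 'up',
     * i.e. its row NROWS+1, starting at displacement (NROWS+1)*N. */
    MPI_Put(u[1], N, MPI_DOUBLE, up, (MPI_Aint)(NROWS + 1) * N, N,
            MPI_DOUBLE, win);
    /* My last real row (row NROWS) becomes the upper ghost row of 'down',
     * i.e. its row 0, at displacement 0. */
    MPI_Put(u[NROWS], N, MPI_DOUBLE, down, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    /* ... apply the stencil update using rows 0..NROWS+1 ... */

    MPI_Win_free(&win);
    free(u);
    MPI_Finalize();
    return 0;
}

Instead of fences, the exchange can also be synchronized with
MPI_Win_post/start/complete/wait or with passive-target locks; comparing such
variants against the point-to-point implementation is part of the performance
analysis requested in the report.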
