Reporte 2

This document summarizes the results of a parallel computing lab experiment. The experiment involves using MPI to parallelize the calculation of temperature values over time for a 1D heat equation. Performance was measured using different numbers of processes. Plots show that computation time decreases but communication time increases as more processes are used, indicating good scaling up to 8 processes but poor scaling above that.

reporte_2

November 8, 2023

1 Lab 7
1.1 Exercise 1
1.1.1 A
#include <iostream>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <mpi.h>

// Boundary condition: time-varying on the left half of the domain, fixed at 75 on the right
double frontera(double x, double tiempo) {
    double limite;
    if (x < 0.5) limite = 100.0 + 10.0 * sin(tiempo);
    else limite = 75.0;
    return limite;
}

// Initial condition: uniform temperature of 95
double inicial(double x, double tiempo) {
    double limite = 95.0;
    return limite;
}

int main(int argc, char* argv[]) {
    int num_threads = 2;               // unused: the number of processes is set by mpirun/mpiexec
    if (argc > 1) {
        num_threads = atoi(argv[1]);
    }

    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        std::cerr << "This program requires at least 2 MPI processes." << std::endl;
        MPI_Finalize();
        return 1;
    }

    int i, j, j_min = 0, j_max = 400;
    const int n = 10;                              // number of grid points

    double k = 0.002;
    double tiempo, dt, tmax = 10.0, tmin = 0.0, tnew;
    double u[n + 2], unew[n + 2], x[n + 2], dx;    // indices 1..n hold the grid; 0 and n+1 are halo slots
    double x_max = 1.0, x_min = 0.0;

    dt = (tmax - tmin) / (double)(j_max - j_min);
    dx = (x_max - x_min) / (double)(n - 1);
    x[1] = x_min;
    for (i = 2; i <= n; i++) {
        x[i] = x[i - 1] + dx;
    }

    // Calculate the range of interior points (1..n) each process will work on
    int local_n = n / size;
    int local_start = rank * local_n + 1;
    int local_end = (rank == size - 1) ? n + 1 : local_start + local_n;   // one past the last owned point

    // Initialization
    double start_time = MPI_Wtime();
    tiempo = tmin;
    u[0] = 0.0;
    for (i = 1; i <= n; i++) u[i] = inicial(x[i], tiempo);
    u[n + 1] = 0.0;

    // Compute the temperature values at the next time step
    for (j = 1; j <= j_max; j++) {
        tnew = tiempo + dt;

        // Exchange boundary (halo) data between neighboring processes
        if (rank > 0) {
            MPI_Send(&u[local_start], 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&u[local_start - 1], 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        if (rank < size - 1) {
            MPI_Send(&u[local_end - 1], 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&u[local_end], 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        // Update temperature in parallel (explicit finite-difference step)
        for (i = local_start; i < local_end; i++) {
            unew[i] = u[i] + (dt * k / dx / dx) * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        }

        // Apply boundary conditions at the ends of the domain
        if (local_start == 1) {
            unew[1] = frontera(x[1], tnew);
        }
        if (local_end == n + 1) {
            unew[n] = frontera(x[n], tnew);
        }

        // Advance time and copy the new temperatures
        tiempo = tnew;
        for (i = local_start; i < local_end; i++) {
            u[i] = unew[i];
            if (j == j_max) {
                printf("%f %f %f\n", tiempo, x[i], u[i]);
            }
        }
    }
    double end_time = MPI_Wtime();
    double elapsed_time = end_time - start_time;

    if (rank == 0) printf("Tiempo de ejecución: %f\n", elapsed_time);

    MPI_Finalize();
    return 0;
}
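For reference, the update loop above is the standard explicit (forward-time, centered-space) finite-difference scheme for the 1D heat equation:

$$u_i^{\text{new}} = u_i + \frac{k\,\Delta t}{\Delta x^2}\,\bigl(u_{i-1} - 2\,u_i + u_{i+1}\bigr)$$

With k = 0.002, Δt = (10 − 0)/400 = 0.025 and Δx = 1/9 ≈ 0.111, the factor k·Δt/Δx² ≈ 0.004 is well below the usual explicit-scheme stability limit of 1/2, so the time step is safe.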
[2]: import matplotlib.pyplot as plt

n_process = [2, 4, 8, 16]
times = [0.003144, 0.001194, 0.002173, 0.114123]

plt.plot(n_process, times, label='Tiempo de ejecución', marker='o')
plt.legend()
plt.xlabel('Número de procesos')
plt.ylabel('Tiempo (s)')

plt.show()
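To make the scaling easier to read, a minimal sketch that turns these measurements into relative speedups (the choice of the 2-process run as the baseline is just for illustration):

# Relative speedup, taking the 2-process run as the reference
n_process = [2, 4, 8, 16]
times = [0.003144, 0.001194, 0.002173, 0.114123]
for p, t in zip(n_process, times):
    print(f"{p} procesos: speedup {times[0] / t:.2f}x")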

1.1.2 B
The estimated number of floating-point operations is 400 × (10 × 10 + 3) = 41,200 FLOPs.
To visualize the performance, the execution and communication times are plotted against the number of processes (the commented-out line would instead give the rate in FLOPS).
[12]: n_process = [2, 4, 8, 16]
#times = [41200/ 0.003144, 41200/0.001194, 41200/0.002173, 41200/0.114123]
times = [0.003144, 0.001194, 0.002173, 0.114123]
t_comm = [0.001234, 0.001689, 0.002179, 0.239181]

plt.plot(n_process, times, label='Tiempo de ejecución', marker='x')
plt.plot(n_process, t_comm, label='Tiempo de comunicación', marker='o')
plt.legend()
plt.xlabel('Número de procesos')
plt.ylabel('Tiempo (s)')

plt.show()
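As a rough cross-check, a minimal sketch that converts the measured execution times into approximate FLOP rates, reusing the FLOP estimate from above:

# Approximate FLOP rate per run: estimated FLOPs divided by measured execution time
flops = 400 * (10 * 10 + 3)   # = 41200, the estimate from section B
n_process = [2, 4, 8, 16]
times = [0.003144, 0.001194, 0.002173, 0.114123]
for p, t in zip(n_process, times):
    print(f"{p} procesos: {flops / t:.0f} FLOPS")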
