Exercise 2 Solution: Matrix Multiplication
1 Solution
1.1 Code
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#define N 4 // Matrix dimension (example value; assumed even)
// Matrices
int A[N][N], B[N][N], C[N][N];
// Semaphore used to signal completion of each worker thread
sem_t semaphore;
// Print an N x N matrix
void print_matrix(int M[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) printf("%d ", M[i][j]);
        printf("\n");
    }
}
// Each worker thread computes one half of the rows of C
void *compute_rows(void *arg) {
    int id = *(int *)arg;
    for (int i = id * (N / 2); i < (id + 1) * (N / 2); i++) {
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
    }
    // Signal completion
    sem_post(&semaphore);
    pthread_exit(NULL);
}
int main() {
    pthread_t threads[2];
    int thread_ids[2] = {0, 1};
    // Fill A and B with sample values
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = i + j; B[i][j] = i - j; }
    // Semaphore starts at 0; each worker posts once when finished
    sem_init(&semaphore, 0, 0);
    printf("Matrix A:\n");
    print_matrix(A);
    printf("Matrix B:\n");
    print_matrix(B);
    // Create threads
    for (int i = 0; i < 2; i++) {
        pthread_create(&threads[i], NULL, compute_rows, &thread_ids[i]);
    }
    // Wait for both threads to signal completion
    for (int i = 0; i < 2; i++) sem_wait(&semaphore);
    printf("Result matrix C:\n");
    print_matrix(C);
    // Clean up
    sem_destroy(&semaphore);
    return 0;
}
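Assuming the source is saved as matmul.c (a name chosen here only for illustration), the program can be compiled with GCC's pthread support, for example: gcc -pthread matmul.c -o matmul.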
1.2 Report
1.2.1 Understanding the Problem
• Matrix multiplication is a computationally expensive operation that can benefit from parallel processing. The task requires multiplying two matrices A and B to produce a result matrix C. Each element C[i][j] of the resulting matrix is computed by the formula $C[i][j] = \sum_{k=0}^{N-1} A[i][k] \cdot B[k][j]$.
• The computation of the matrix product is split between two threads. Each thread is
responsible for calculating a subset of the rows of matrix C.
• The semaphore is used for synchronization, ensuring that the main thread waits for both
threads to finish before printing the result matrix C.
• Each thread calculates the dot product of each of its assigned rows of matrix A with all columns of matrix B, and stores the results in matrix C.
• The semaphore is initialized with a value of 0. Each thread signals (increments the
semaphore) once it finishes its computation using sem_post.
• The main thread waits for the completion of both threads using sem_wait, ensuring that
the final result matrix C is printed only after both threads have completed their tasks.
• Matrix Indexing: Ensuring that each thread worked on different rows without overlapping was another challenge. The matrix rows were evenly divided between the threads by assigning specific ranges to each thread, thus avoiding conflicts.
• Dynamic Thread Allocation: The current solution is limited to two threads. For larger matrices or more threads, the approach could be modified to assign rows dynamically based on the number of available threads. This would require adjusting the work division and possibly more advanced synchronization; a sketch of such a row division follows this list.
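A minimal sketch of how the work division could be generalized, assuming a NUM_THREADS constant (an illustrative value, not part of the original code) and reusing the includes, global matrices, and semaphore from the listing in Section 1.1:
#define NUM_THREADS 4 // Assumed thread count for illustration
// Each thread computes a contiguous block of rows of C; any remainder
// rows (N % NUM_THREADS) are spread over the first few threads.
void *compute_rows_dynamic(void *arg) {
    int id = *(int *)arg;
    int base = N / NUM_THREADS;
    int extra = N % NUM_THREADS;
    int start = id * base + (id < extra ? id : extra);
    int end = start + base + (id < extra ? 1 : 0);
    for (int i = start; i < end; i++) {
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
    }
    sem_post(&semaphore); // Signal completion, as in the two-thread version
    pthread_exit(NULL);
}
main would then create NUM_THREADS threads with distinct ids and call sem_wait once per thread before printing C.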
1.2.5 Conclusion
• The solution successfully implements parallel matrix multiplication, using a semaphore to synchronize the worker threads with the main thread.
• The parallel approach can significantly reduce the computation time compared to sequential matrix multiplication, especially for larger matrices; the timing sketch after this list shows one way to measure this.
• Future work could involve scaling the solution to handle more threads dynamically, improving load balancing, and optimizing the performance for larger matrices.
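To substantiate the speed-up claim, the multiplication phase can be timed. A minimal, self-contained sketch using the POSIX clock_gettime function; in the actual program the same pattern would bracket the thread-creation loop and the sem_wait calls in main:
#include <stdio.h>
#include <time.h>
// Returns the current monotonic time in seconds.
static double now_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
int main(void) {
    double start = now_seconds();
    // ... work to be timed (thread creation and the sem_wait calls) ...
    double elapsed = now_seconds() - start;
    printf("Elapsed: %.6f s\n", elapsed);
    return 0;
}
Comparing this measurement against a single-threaded run of the same triple loop would show the actual speed-up for a given N.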