PDC Lab 9 Final
Parallel and Distributed Computing: Lab 9
Name: Kartik Shettiwar
Roll: 2022BCS0226
1. Write an MPI program that estimates the value of the integral using the trapezoidal rule for numerical integration.
Area = ∫ f(x) dx from a to b, where f(x) = 3x + 5, a = 0, b = 3, n = 16, 128, 512, 1024.
a) Write the serial version of the program to estimate the value of the integral. Test the result against the classical integration value. Measure the execution time using a library function.
Solution =>
Code:
#include <iostream>
#include <cmath>
#include <chrono>

double f(double x) {
    return 3 * x + 5;
}

// Composite trapezoidal rule on [a, b] with n subintervals
double trapezoid(double a, double b, int n) {
    double h = (b - a) / n;
    double integral = (f(a) + f(b)) / 2.0;
    for (int i = 1; i < n; ++i) integral += f(a + i * h);
    return integral * h;
}

int main() {
    const double a = 0.0, b = 3.0;
    const int n_values[] = {16, 128, 512, 1024};
    for (int i = 0; i < 4; ++i) {
        int n = n_values[i];
        auto start = std::chrono::high_resolution_clock::now();
        double result = trapezoid(a, b, n);
        std::chrono::duration<double> elapsed = std::chrono::high_resolution_clock::now() - start;
        std::cout << "Estimated integral with n = " << n << ": " << result << std::endl;
        std::cout << "Execution time: " << elapsed.count() << " seconds" << std::endl;
    }
    // Classical value: integral of 3x + 5 from 0 to 3 = 13.5 + 15 = 28.5
    double exact_value = 1.5 * (b * b - a * a) + 5.0 * (b - a);
    std::cout << "Exact value of the integral: " << exact_value << std::endl;
    return 0;
}
Output:
b) Write the MPI code for a parallel program that uses MPI built-in functions to estimate the value of the integral. The root process collects the results from the other processes and produces the final output. Measure the execution time using a library function. Assume: number of processes np = 4, root process = MIN(rank of the processes), i.e. rank 0.
Solution =>
Code:
#include <iostream>
#include <vector>
#include <mpi.h>
#include <chrono>

double f(double x) {
    return 3 * x + 5;
}

// Trapezoidal rule on the local sub-interval [local_a, local_b]
double trapezoid(double local_a, double local_b, int local_n) {
    double h = (local_b - local_a) / local_n;
    double integral = (f(local_a) + f(local_b)) / 2.0;
    for (int i = 1; i < local_n; ++i) integral += f(local_a + i * h);
    return integral * h;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const double a = 0.0, b = 3.0;
    const int n_values[] = {16, 128, 512, 1024};

    for (int j = 0; j < 4; ++j) {
        int n = n_values[j];
        auto start = std::chrono::high_resolution_clock::now();

        // Each process integrates an equal sub-interval using n/size trapezoids
        double width = (b - a) / size;
        double local_a = a + rank * width;
        double local_result = trapezoid(local_a, local_a + width, n / size);

        // Root (rank 0, the minimum rank) collects and sums the partial results
        double global_result = 0.0;
        MPI_Reduce(&local_result, &global_result, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        std::chrono::duration<double> elapsed = std::chrono::high_resolution_clock::now() - start;
        if (rank == 0) {
            std::cout << "Estimated integral with n = " << n << ": " << global_result << std::endl;
            std::cout << "Execution time: " << elapsed.count() << " seconds" << std::endl;
        }
    }

    MPI_Finalize();
    return 0;
}
Output:
2. Odd-Even Transposition Sort
a) Write a serial program which reads the number of elements and the elements from the user and sorts them using Odd-Even Transposition Sort.
Solution =>
Code:
#include <iostream>
#include <vector>

// Repeatedly alternate odd and even compare-exchange phases until no swap occurs
void oddEvenTranspositionSort(std::vector<int>& arr, int n) {
    bool isSorted = false;
    while (!isSorted) {
        isSorted = true;
        // Odd phase: pairs (1,2), (3,4), ...
        for (int i = 1; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
        // Even phase: pairs (0,1), (2,3), ...
        for (int i = 0; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
    }
}

int main() {
    int n = 0;
    std::cout << "Enter the number of elements: ";
    std::cin >> n;
    std::vector<int> arr(n);
    // Input elements
    std::cout << "Enter the elements: ";
    for (int i = 0; i < n; ++i) std::cin >> arr[i];
    oddEvenTranspositionSort(arr, n);
    std::cout << "Sorted array: ";
    for (int i = 0; i < n; ++i) std::cout << arr[i] << " ";
    std::cout << std::endl;
    return 0;
}
Output:
b) Parallelise program (a) using MPI.
Solution =>
Code:
#include <iostream>
#include <vector>
#include <mpi.h>

// Local odd-even transposition sort on this process's chunk
void oddEvenTranspositionSort(std::vector<int>& arr, int n) {
    bool isSorted = false;
    while (!isSorted) {
        isSorted = true;
        // Odd phase: pairs (1,2), (3,4), ...
        for (int i = 1; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
        // Even phase: pairs (0,1), (2,3), ...
        for (int i = 0; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
    }
}

// Merge two sorted vectors into one sorted result
std::vector<int> merge(const std::vector<int>& left, const std::vector<int>& right) {
    std::vector<int> result(left.size() + right.size());
    size_t i = 0, j = 0, k = 0;
    while (i < left.size() && j < right.size()) {
        if (left[i] <= right[j]) result[k++] = left[i++];
        else result[k++] = right[j++];
    }
    while (i < left.size()) result[k++] = left[i++];
    while (j < right.size()) result[k++] = right[j++];
    return result;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<int> arr;
    int n = 0;
    if (rank == 0) {
        std::cout << "Enter the number of elements: ";
        std::cin >> n;
        arr.resize(n);
        // Input elements
        std::cout << "Enter the elements: ";
        for (int i = 0; i < n; ++i) std::cin >> arr[i];
    }
    // Every process needs n to size its local chunk
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int chunkSize = n / size;
    std::vector<int> localArr(chunkSize);
    MPI_Scatter(arr.data(), chunkSize, MPI_INT, localArr.data(), chunkSize, MPI_INT, 0, MPI_COMM_WORLD);

    oddEvenTranspositionSort(localArr, chunkSize);

    std::vector<int> sortedChunks;
    if (rank == 0) sortedChunks.resize(n);
    MPI_Gather(localArr.data(), chunkSize, MPI_INT, sortedChunks.data(), chunkSize, MPI_INT, 0, MPI_COMM_WORLD);

    // Only the root process will merge and display the sorted array
    if (rank == 0) {
        std::vector<int> finalSortedArray;
        for (int i = 0; i < size; ++i) {
            std::vector<int> leftChunk(sortedChunks.begin() + i * chunkSize,
                                       sortedChunks.begin() + (i + 1) * chunkSize);
            if (i == 0) {
                finalSortedArray = leftChunk;
            } else {
                std::vector<int> merged = merge(finalSortedArray, leftChunk);
                finalSortedArray = merged;
            }
        }
        std::cout << "Sorted array: ";
        for (int v : finalSortedArray) std::cout << v << " ";
        std::cout << std::endl;
    }

    MPI_Finalize();
    return 0;
}
Output:
c) Check (a) and (b) with the test case: Number of elements = N = 16, A = [151, 29, 106, 213, -14, 415, 178, 192, 246, -118, 110, 7, 11, 10, 25, 334], Comm_size = 4 (Number of Processes).
Solution =>
For Serial
Code:
#include <iostream>
#include <vector>

// Repeatedly alternate odd and even compare-exchange phases until no swap occurs
void oddEvenTranspositionSort(std::vector<int>& arr, int n) {
    bool isSorted = false;
    while (!isSorted) {
        isSorted = true;
        // Odd phase: pairs (1,2), (3,4), ...
        for (int i = 1; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
        // Even phase: pairs (0,1), (2,3), ...
        for (int i = 0; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
    }
}

int main() {
    // Test case: N = 16
    std::vector<int> arr = {151, 29, 106, 213, -14, 415, 178, 192,
                            246, -118, 110, 7, 11, 10, 25, 334};
    int n = (int)arr.size();
    oddEvenTranspositionSort(arr, n);
    std::cout << "Sorted array: ";
    for (int v : arr) std::cout << v << " ";
    std::cout << std::endl;
    return 0;
}
Output:
For Parallel
Code :
#include <iostream>
#include <vector>
#include <mpi.h>

// Local odd-even transposition sort on this process's chunk
void oddEvenTranspositionSort(std::vector<int>& arr, int n) {
    bool isSorted = false;
    while (!isSorted) {
        isSorted = true;
        // Odd phase: pairs (1,2), (3,4), ...
        for (int i = 1; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
        // Even phase: pairs (0,1), (2,3), ...
        for (int i = 0; i < n - 1; i += 2)
            if (arr[i] > arr[i + 1]) { std::swap(arr[i], arr[i + 1]); isSorted = false; }
    }
}

// Merge two sorted vectors into one sorted result
std::vector<int> merge(const std::vector<int>& left, const std::vector<int>& right) {
    std::vector<int> result(left.size() + right.size());
    size_t i = 0, j = 0, k = 0;
    while (i < left.size() && j < right.size()) {
        if (left[i] <= right[j]) result[k++] = left[i++];
        else result[k++] = right[j++];
    }
    while (i < left.size()) result[k++] = left[i++];
    while (j < right.size()) result[k++] = right[j++];
    return result;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = 16;
    std::vector<int> arr;
    if (rank == 0) {
        // Test case: N = 16, Comm_size = 4
        arr = {151, 29, 106, 213, -14, 415, 178, 192,
               246, -118, 110, 7, 11, 10, 25, 334};
    }

    int chunkSize = n / size;
    std::vector<int> localArr(chunkSize);
    MPI_Scatter(arr.data(), chunkSize, MPI_INT, localArr.data(), chunkSize, MPI_INT, 0, MPI_COMM_WORLD);

    oddEvenTranspositionSort(localArr, chunkSize);

    std::vector<int> sortedChunks;
    if (rank == 0) sortedChunks.resize(n);
    MPI_Gather(localArr.data(), chunkSize, MPI_INT, sortedChunks.data(), chunkSize, MPI_INT, 0, MPI_COMM_WORLD);

    // Only the root process will merge and display the sorted array
    if (rank == 0) {
        std::vector<int> finalSortedArray;
        for (int i = 0; i < size; ++i) {
            std::vector<int> leftChunk(sortedChunks.begin() + i * chunkSize,
                                       sortedChunks.begin() + (i + 1) * chunkSize);
            if (i == 0) {
                finalSortedArray = leftChunk;
            } else {
                std::vector<int> merged = merge(finalSortedArray, leftChunk);
                finalSortedArray = merged;
            }
        }
        // Display the sorted array
        std::cout << "Sorted array: ";
        for (int v : finalSortedArray) std::cout << v << " ";
        std::cout << std::endl;
    }

    MPI_Finalize();
    return 0;
}
Output: