ALGORITHM:
C = []
for i in range(len(A)):
    C.append(A[i] + B[i])
return C
SOURCE CODE:
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>

#define ARRAY_SIZE 8
#define NUM_THREADS 4

int main(void)
{
    int *a, *b, *c;
    int n = ARRAY_SIZE;
    int total_threads = NUM_THREADS;
    int n_per_thread;
    int i;

    /* Allocate storage for the three vectors. */
    a = (int *) malloc(sizeof(int) * n);
    b = (int *) malloc(sizeof(int) * n);
    c = (int *) malloc(sizeof(int) * n);

    /* Initialize the input vectors. */
    for (i = 0; i < n; i++) {
        a[i] = i;
        b[i] = i;
    }

    omp_set_num_threads(total_threads);
    n_per_thread = n / total_threads;

    /* Add the vectors in parallel: each thread handles a chunk of iterations. */
    #pragma omp parallel for shared(a, b, c) private(i) schedule(static, n_per_thread)
    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    /* Print the result. */
    printf("i\ta[i]\t+\tb[i]\t=\tc[i]\n");
    for (i = 0; i < n; i++)
        printf("%d\t%d\t\t%d\t\t%d\n", i, a[i], b[i], c[i]);

    free(a);
    free(b);
    free(c);
    return 0;
}
OUTPUT:
RESULTS:
The two input vectors are added element by element, with the loop iterations divided among the threads. Hence an OpenMP program for vector addition is demonstrated.
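Note: this program (and the ones in the following scenarios) can typically be built with any OpenMP-capable C compiler, for example gcc -fopenmp vecadd.c -o vecadd, where the file name here is illustrative. The thread count can also be controlled at run time through the OMP_NUM_THREADS environment variable.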
SCENARIO – II
Write a simple OpenMP program for performing the dot product of two vectors. It is helpful to understand how threads can execute the iterations of a loop in parallel, with each thread computing partial products that are then combined into a single sum. Note: In OpenMP, the directive to parallelize a for loop is: #pragma omp parallel for.
ALGORITHM:
1) Assign storage for the dot product vectors.
2) Initialize the dot product vectors.
3) Perform the dot product in an OpenMP parallel for loop with a sum reduction, as sketched below.
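A minimal sketch of the reduction step (assuming double arrays u and v of length SIZE that have already been initialized; dp holds the running sum, as in the full program below):

dp = 0.0;
/* Each thread accumulates a private copy of dp; OpenMP combines them at the end. */
#pragma omp parallel for reduction(+:dp)
for (i = 0; i < SIZE; i++)
    dp += u[i] * v[i];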
The OpenMP dot product algorithm is a method for calculating the dot product of two vectors. The dot product is a scalar value calculated by multiplying the corresponding elements of the two input vectors and then summing the results. The dot product is often used in linear algebra and physics to represent the angle between two vectors, or to calculate the projection of one vector onto another. The OpenMP dot product algorithm starts by initializing a variable to store the dot product, and then loops through each element of the input vectors, multiplying the corresponding elements together and adding the result to the dot product variable. Here is the pseudo-code for the dot product algorithm:
dot_product = 0
for i in range(len(A)):
    dot_product += A[i] * B[i]
return dot_product
It's important to note that the algorithm assumes the input vectors are of the same length; otherwise, it will fail. To handle vectors of different lengths, check whether the vectors have the same length, and if they don't, either use the length of the shorter vector or zero-pad the shorter one so that both vectors are the same length before calculating the dot product, as sketched below.
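A minimal sketch of the length check in C (the names na, nb, and m are illustrative and not part of the program below):

/* Use the length of the shorter vector so neither array is read out of bounds. */
int m = (na < nb) ? na : nb;
dp = 0.0;
for (i = 0; i < m; i++)
    dp += u[i] * v[i];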
The dot product of two vectors can also be calculated using matrix multiplication, where the first vector is represented as a row vector and the second vector is represented as a column vector.
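For example, with A = [1, 2, 3] and B = [4, 5, 6], the dot product is 1*4 + 2*5 + 3*6 = 32; treating A as a 1x3 row vector and B as a 3x1 column vector, the matrix product gives the same 1x1 result, 32.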
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <math.h>
#include <omp.h>

#define SIZE 10

int main(void)
{
    int i, tid;
    double u[SIZE], v[SIZE], dp = 0.0, dpp = 0.0;

    /* Initialize the input vectors. */
    for (i = 0; i < SIZE; i++) {
        u[i] = 1.0 * (i + 1);
        v[i] = 1.0 * (i + 2);
    }

    /* Each thread accumulates a private partial sum dpp over its
       share of the iterations, then adds it into the shared dp. */
    #pragma omp parallel private(i, tid) firstprivate(dpp) shared(dp)
    {
        tid = omp_get_thread_num();
        printf("thread: %d\n", tid);
        #pragma omp for
        for (i = 0; i < SIZE; i++)
            dpp += u[i] * v[i];
        /* Combine the partial sums one thread at a time. */
        #pragma omp critical
        dp += dpp;
        printf("thread %d done\n", tid);
    }

    printf("dot product = %f\n", dp);
    return 0;
}
OUTPUT:
RESULTS:
Hence an OpenMP program for dot product is demonstrated. In conclusion, the OpenMP dot product program is a simple and efficient way to calculate the dot product of two vectors. It's easy to understand and implement, making it a good choice for many applications that require dot product calculation.
SCENARIO – III
Write a simple OpenMP program to demonstrate the sharing of loop iterations among a number of threads. You can use a chunk size of 10.
ALGORITHM:
1) A parallel do/for loop divides up the iterations of the loop between threads.
2) There is a synchronization point at the end of the loop: all threads must finish their iterations before any thread can proceed.
3) If the loop gives the same answers when it is run in reverse order, then it is almost certainly parallelizable (see the sketch after this list).
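As a minimal sketch of this reverse-order test (the arrays are illustrative), the first loop below gives the same answer in any iteration order, so it is safe to parallelize; the second carries a dependence from each iteration to the next, so it is not:

/* Order-independent: each iteration writes only its own element. */
for (i = 0; i < n; i++)
    c[i] = a[i] + b[i];

/* Order-dependent: iteration i reads the value written by iteration i-1. */
for (i = 1; i < n; i++)
    a[i] = a[i-1] + 1;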
In this algorithm, we first specify the number of iterations for the loop using the variable "n". Then, we use the OpenMP "parallel" directive to specify that the loop should be shared among threads. Within the parallel block, we use the "private" clause to specify that each thread should have its own copy of the "tid" variable, which stores the thread ID. We also use the "omp_get_thread_num()" function to get the current thread ID and the "omp_get_num_threads()" function to get the total number of threads. Next, we use the OpenMP "for" directive to divide the loop iterations among the threads. Within the for loop, each thread will execute a portion of the loop and will print its thread ID and the iteration number it is currently executing.
SOURCE CODE:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#define CHUNKSIZE 10
#define N 100   /* N larger than CHUNKSIZE so the chunks are spread over the threads */
int main(void) {
    int i, chunk = CHUNKSIZE, tid, nthreads;
    float a[N], b[N], c[N];
    for (i = 0; i < N; i++) a[i] = b[i] = i * 1.0;  /* initialize the inputs */
    #pragma omp parallel shared(a, b, c, nthreads, chunk) private(i, tid)
    {
        tid = omp_get_thread_num();
        if (tid == 0) nthreads = omp_get_num_threads();
        printf("Thread %d starting...\n", tid);
        #pragma omp for schedule(static, chunk)   /* hand out chunks of 10 iterations */
        for (i = 0; i < N; i++) {
            c[i] = a[i] + b[i];
            printf("Thread %d: c[%d] = %f\n", tid, i, c[i]);
        }
    }
    return 0;
}
OUTPUT:
RESULTS:
This algorithm demonstrates how OpenMP can be used to share loop iterations among multiple
threads, allowing for more efficient parallel processing. The output of this program will vary
depending on the number of threads and the specific implementation, but it will show that the
iterations are shared among the threads.
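As an illustrative variation (not part of the original exercise), replacing the schedule clause in the program above with dynamic hands each chunk of iterations to whichever thread finishes first, which can balance the load better when iterations take uneven amounts of time:

#pragma omp for schedule(dynamic, chunk)
for (i = 0; i < N; i++)
    c[i] = a[i] + b[i];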
SCENARIO – IV
Write an OpenMP program to demonstrate the sharing of the work of sections among threads. You can perform arithmetic operations on one-dimensional arrays, and this workload can be shared by the threads.
ALGORITHM:
1) Initialize the arrays.
2) Separate blocks of code are executed in parallel (e.g. several independent subroutines), one block per section.
3) There is a synchronization point at the end of the blocks: all threads must finish their blocks before any thread can proceed.
4) Print the output.
SOURCE CODE:
#include <omp.h>
#include <stdlib.h>
#include <stdio.h>
#define SIZE 10
int main(void) {
    int i, thread_id, nThreads;
    float w[SIZE], x[SIZE], y[SIZE], z[SIZE];
    /* Initialize the inputs and clear the outputs. */
    for (i = 0; i < SIZE; i++) {
        w[i] = i * 1.5;
        x[i] = i + 22.35;
        y[i] = 0.0;
        z[i] = 0.0;
    }
    #pragma omp parallel shared(w, x, y, z, nThreads) private(i, thread_id)
    {
        thread_id = omp_get_thread_num();
        if (thread_id == 0) nThreads = omp_get_num_threads();
        printf("Thread %d starts...\n", thread_id);
        #pragma omp sections
        {
            #pragma omp section            /* first block: element-wise sum */
            for (i = 0; i < SIZE; i++) y[i] = w[i] + x[i];
            #pragma omp section            /* second block: element-wise product */
            for (i = 0; i < SIZE; i++) z[i] = w[i] * x[i];
        }
        printf("Thread %d done\n", thread_id);
    }
    return 0;
}
OUTPUT:
RESULTS:
This algorithm demonstrates how OpenMP sections can be used to divide independent blocks of work among multiple threads, allowing for more efficient parallel processing. The output of this program will vary depending on the number of threads and the specific implementation, but it will show that the section blocks are executed by different threads when enough threads are available.