PDC 7

The document discusses OpenMP solutions for race conditions, which occur when multiple threads read and write a shared variable simultaneously: critical sections, which allow only one thread at a time to execute a critical region; atomic operations, which ensure a single statement is executed by only one thread at a time; and reduction clauses, which give each thread a private copy of the variable and combine the per-thread results with the specified operator at the end of the loop. It provides an example of using a reduction clause with the "+" operator to sum values in a parallel for loop without a race condition.

Uploaded by

Uzelia Ahmad

Programming with OpenMP

Race Condition
• When multiple threads simultaneously read/write a shared variable
• Multiple OMP solutions:
  • Reduction
  • Atomic
  • Critical

[Diagram: Thread A and Thread B both apply +1 to the shared variable sum = 0; the interleaved updates overwrite each other, so the result is less than the expected 3]

#pragma omp parallel for private(i) shared(sum)
for (i=0; i<N; i++) {
    sum += i;
}
Critical Section
• One solution: use critical
• Only one thread at a time can execute a critical section

#pragma omp critical
{
    sum += i;
}

[Diagram: Thread 0 and Thread 1 take turns applying +1 to the shared variable sum = 0; while one thread updates, the other waits, so sum correctly reaches 3]

• Downside?
  • SLOOOOWWW
  • Overhead & serialization
OMP Atomic
• Atomic is like a “mini” critical
  • Only one line
  • Certain limitations

#pragma omp atomic
sum += i;

[Diagram: as with critical, Thread 0 and Thread 1 apply +1 to the shared variable sum = 0 one at a time, with the other thread waiting, reaching the correct total of 3]

• Hardware controlled
• Less overhead than critical
OMP Reduction
#pragma omp parallel for reduction(operator: variable)

• Avoids the race condition
• The reduction variable must be shared in the enclosing context
• Makes the variable private to each thread, then applies the operator at the end of the loop
• The operator cannot be overloaded (C++)
• One of: +, *, - (and &, ^, |, &&, ||)
• OpenMP 3.1: added min and max for C/C++
Reduction Example
#include <omp.h>
#include <stdio.h>

int main() {

    int i;
    const int N = 1000;
    int sum = 0;

    #pragma omp parallel for private(i) reduction(+: sum)
    for (i=0; i<N; i++) {
        sum += i;
    }

    printf("reduction sum=%d (expected %d)\n", sum, ((N-1)*N)/2);
    return 0;
}

Output:
reduction sum=499500 (expected 499500)


Homework Update
• Resolve the critical section
• Compare the time
• Give viva next week
