
Parallel Programming with OpenMP

CS240A, T. Yang
A Programmer’s View of OpenMP
• What is OpenMP?
  • Open specification for Multi-Processing
  • “Standard” API for defining multi-threaded shared-memory programs
  • openmp.org – Talks, examples, forums, etc.
• OpenMP is a portable, threaded, shared-memory programming specification with “light” syntax
  • Exact behavior depends on the OpenMP implementation!
  • Requires compiler support (C or Fortran)
• OpenMP will:
  • Allow a programmer to separate a program into serial regions and parallel regions, rather than T concurrently-executing threads
  • Hide stack management
  • Provide synchronization constructs
• OpenMP will not:
  • Parallelize automatically
  • Guarantee speedup
  • Provide freedom from data races
Motivation – OpenMP

#include <stdio.h>

int main() {
    // Do this part in parallel
    printf( "Hello, World!\n" );
    return 0;
}

Motivation – OpenMP

All OpenMP directives begin with #pragma.

#include <stdio.h>
#include <omp.h>

int main() {
    omp_set_num_threads(4);

    // Do this part in parallel: each of the 4 threads executes the printf
    #pragma omp parallel
    {
        printf( "Hello, World!\n" );
    }

    return 0;
}

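A common variant (not on the slide) is to have each thread report its identity, using the runtime query functions introduced later in this lecture; a minimal sketch:

#include <stdio.h>
#include <omp.h>

int main() {
    #pragma omp parallel num_threads(4)
    {
        // omp_get_thread_num() / omp_get_num_threads() identify the calling thread
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
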
OpenMP parallel region construct
• Block of code to be executed by multiple threads in parallel
• Each thread executes the same code redundantly (SPMD)
  • Work within work-sharing constructs is distributed among the threads in a team
• Example with C/C++ syntax:
    #pragma omp parallel [ clause [ clause ] ... ] new-line
        structured-block
• clause can include the following:
    private (list)
    shared (list)
• Example: the OpenMP default is shared variables. To make a variable private, declare it in the pragma (sketched below):
    #pragma omp parallel private (x)
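
A minimal sketch (not from the slide) of what private changes: every thread gets its own copy of x, so the writes below do not race and do not touch the original x.

#include <stdio.h>
#include <omp.h>

int main() {
    int x;
    #pragma omp parallel private(x)    // each thread gets its own copy of x
    {
        x = omp_get_thread_num();      // no data race: every thread writes its own x
        printf("thread %d sees x = %d\n", omp_get_thread_num(), x);
    }
    return 0;
}
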
OpenMP Programming Model - Review
• Fork-Join Model:
  (figure: sequential code runs on the master thread; at each parallel region the master forks a team of threads, e.g. Thread 0 and Thread 1, which join back at the end of the region)
• OpenMP programs begin as a single process (the master thread) and execute sequentially until the first parallel region construct is encountered
• FORK: the master thread then creates a team of parallel threads
  • Statements in the program that are enclosed by the parallel region construct are executed in parallel among the various threads
• JOIN: when the team threads complete the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread

parallel Pragma and Scope – More Examples

#pragma omp parallel num_threads(2)
{
    x = 1;
    y = 1 + x;
}

Both threads execute:   x = 1;   y = 1 + x;

x and y are shared variables. There is a risk of a data race.

parallel Pragma and Scope - Review

#pragma omp parallel
{
    x = 1;
    y = 1 + x;
}

Assume the number of threads is 2:

Thread 0        Thread 1
x = 1;          x = 1;
y = 1 + x;      y = 1 + x;

x and y are shared variables. There is a risk of a data race.

parallel Pragma and Scope - Review

#pragma omp parallel num_threads(2)
{
    x = 1;  y = 1 + x;
}

Thread 0        Thread 1
x = 1;          x = 1;
y = 1 + x;      y = 1 + x;

x and y are shared variables. There is a risk of a data race.

Divide for-loop for parallel sections

// Serial loop, to be run on 4 threads:
for (int i = 0; i < 8; i++) x[i] = 0;

#pragma omp parallel
{   // Assume the number of threads is 4
    int numt = omp_get_num_threads();
    int id = omp_get_thread_num();     // id = 0, 1, 2, or 3
    for (int i = id; i < 8; i += numt)
        x[i] = 0;
}

Thread 0     Thread 1     Thread 2     Thread 3
id=0;        id=1;        id=2;        id=3;
x[0]=0;      x[1]=0;      x[2]=0;      x[3]=0;
x[4]=0;      x[5]=0;      x[6]=0;      x[7]=0;

Use pragma parallel for

// Serial version:
for (int i = 0; i < 8; i++) x[i] = 0;

// Parallel version:
#pragma omp parallel for
for (int i = 0; i < 8; i++)
    x[i] = 0;

The system divides the loop iterations among the threads; with 4 threads, each thread handles 2 of the 8 iterations.

OpenMP Data Parallel Construct: Parallel Loop
• Compiler calculates loop bounds for each thread directly from the serial source (computation decomposition)
• Compiler also manages data partitioning
• Synchronization is also automatic (barrier)

Programming Model – Parallel Loops
• Requirement for parallel loops:
  • No data dependencies (read/write or write/write pairs) between iterations!
• The preprocessor calculates loop bounds and divides iterations among the parallel threads

#pragma omp parallel for
for( i=0; i < 25; i++ )
{
    printf("Foo");
}

Example

for (i=0; i<max; i++) zero[i] = 0;

• Breaks the for loop into chunks, and allocates each chunk to a separate thread
  • e.g. if max = 100 with 2 threads: assign iterations 0-49 to thread 0, and 50-99 to thread 1
• The loop must have a relatively simple “shape” for an OpenMP-aware compiler to be able to parallelize it
  • Necessary for the run-time system to be able to determine how many of the loop iterations to assign to each thread
• No premature exits from the loop are allowed
  • i.e. no break, return, exit, or goto statements
  • In general, don’t jump outside of any pragma block

Parallel Statement Shorthand

#pragma omp parallel
{
    #pragma omp for    // the only directive in the parallel section
    for(i=0; i<max; i++) { … }
}

can be shortened to:

#pragma omp parallel for
for(i=0; i<max; i++) { … }

• This also works for sections

Example: Calculating π

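A note not from the original slide: the idea behind the code on the following slides is the midpoint rule. Since the integral of 4/(1+x^2) from 0 to 1 equals π, the programs sum 4/(1+x_i^2)*step over num_steps rectangles, with x_i = (i - 0.5)*step and step = 1/num_steps.
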
Sequential Calculation of π in C

#include <stdio.h>                    /* Serial Code */

static long num_steps = 100000;
double step;

void main () {
    int i;
    double x, pi, sum = 0.0;
    step = 1.0/(double)num_steps;
    for (i = 1; i <= num_steps; i++) {
        x = (i - 0.5) * step;
        sum = sum + 4.0 / (1.0 + x*x);
    }
    pi = sum / num_steps;
    printf ("pi = %6.12f\n", pi);
}

Parallel OpenMP Version (1)

#include <stdio.h>
#include <omp.h>
#define NUM_THREADS 4
static long num_steps = 100000;
double step;

void main () {
    int i; double x, pi, sum[NUM_THREADS];
    step = 1.0/(double) num_steps;
    omp_set_num_threads(NUM_THREADS);  // the stride below assumes exactly NUM_THREADS threads
    #pragma omp parallel private ( i, x )
    {
        int id = omp_get_thread_num();
        for (i = id, sum[id] = 0.0; i < num_steps; i = i + NUM_THREADS) {
            x = (i + 0.5) * step;
            sum[id] += 4.0/(1.0 + x*x);
        }
    }
    for (i = 1; i < NUM_THREADS; i++)
        sum[0] += sum[i];
    pi = sum[0] / num_steps;
    printf ("pi = %6.12f\n", pi);
}

OpenMP Reduction

double avg, sum=0.0, A[MAX]; int i;
#pragma omp parallel for private ( sum )
for (i = 0; i < MAX; i++)
    sum += A[i];
avg = sum/MAX;     // bug: each thread summed only its own private copy of sum

• Problem: we really want the sum over all threads!
• Reduction: specifies that one or more variables that are private to each thread are the subject of a reduction operation at the end of the parallel region:
  reduction(operation : var) where
  • operation: the operator to perform on the variables (var) at the end of the parallel region
  • var: one or more variables on which to perform the scalar reduction

double avg, sum=0.0, A[MAX]; int i;
#pragma omp parallel for reduction(+ : sum)
for (i = 0; i < MAX; i++)
    sum += A[i];   // per-thread partial sums (e.g. A[0]+A[1] on one thread, A[2]+A[3] on another) are combined at the end
avg = sum/MAX;

OpenMP: Parallel Loops with Reductions
• OpenMP supports the reduce operation:

sum = 0;
#pragma omp parallel for reduction(+:sum)
for (i=0; i < 100; i++) {
    sum += array[i];
}

• Reduction operators and their initialization values (C and C++), illustrated below:

  Operator      Init value      Operator      Init value      Operator       Init value
  +             0               bitwise &     ~0              logical &&     1
  -             0               bitwise |     0               logical ||     0
  *             1               bitwise ^     0

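As an illustration of a non-arithmetic operator from the table (not an example from the slides), a logical-AND reduction starts each private copy at 1 and combines the per-thread results with &&:

#include <omp.h>

// Returns 1 if every element of a[0..n-1] is positive, using a && reduction.
int all_positive(const double *a, int n) {
    int ok = 1;                          // && reduction initializes each private copy to 1
    #pragma omp parallel for reduction(&&:ok)
    for (int i = 0; i < n; i++)
        ok = ok && (a[i] > 0.0);
    return ok;
}
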
Calculating π Version (1) - review

#include <stdio.h>
#include <omp.h>
#define NUM_THREADS 4
static long num_steps = 100000;
double step;

void main () {
    int i; double x, pi, sum[NUM_THREADS];
    step = 1.0/(double) num_steps;
    omp_set_num_threads(NUM_THREADS);  // the stride below assumes exactly NUM_THREADS threads
    #pragma omp parallel private ( i, x )
    {
        int id = omp_get_thread_num();
        for (i = id, sum[id] = 0.0; i < num_steps; i = i + NUM_THREADS) {
            x = (i + 0.5) * step;
            sum[id] += 4.0/(1.0 + x*x);
        }
    }
    for (i = 1; i < NUM_THREADS; i++)
        sum[0] += sum[i];
    pi = sum[0] / num_steps;
    printf ("pi = %6.12f\n", pi);
}

Version 2: parallel for, reduction

#include <omp.h>
#include <stdio.h>
static long num_steps = 100000;
double step;

void main () {
    int i; double x, pi, sum = 0.0;
    step = 1.0/(double) num_steps;
    #pragma omp parallel for private(x) reduction(+:sum)
    for (i = 1; i <= num_steps; i++) {
        x = (i - 0.5) * step;
        sum = sum + 4.0/(1.0 + x*x);
    }
    pi = sum / num_steps;
    printf ("pi = %6.8f\n", pi);
}

Loop Scheduling in the parallel for pragma

#pragma omp parallel for
for (i=0; i<max; i++) zero[i] = 0;

• The master thread creates additional threads, each with a separate execution context
• All variables declared outside the for loop are shared by default, except for the loop index, which is private per thread (Why?)
• There is an implicit “barrier” synchronization at the end of the for loop
• Index regions are divided sequentially per thread
  • Thread 0 gets 0, 1, …, (max/n)-1
  • Thread 1 gets max/n, max/n+1, …, 2*(max/n)-1
  • Why?

Impact of Scheduling Decision
• Load balance
  • Same work in each iteration?
  • Processors working at the same speed?
• Scheduling overhead
  • Static decisions are cheap because they require no run-time coordination
  • Dynamic decisions have overhead that is impacted by the complexity and frequency of decisions
• Data locality
  • Particularly within cache lines for small chunk sizes
  • Also impacts data reuse on the same processor

OpenMP environment variables

OMP_NUM_THREADS
  § sets the number of threads to use during execution
  § when dynamic adjustment of the number of threads is enabled, the value of this environment variable is the maximum number of threads to use
  § For example:
      setenv OMP_NUM_THREADS 16             [csh, tcsh]
      export OMP_NUM_THREADS=16             [sh, ksh, bash]

OMP_SCHEDULE
  § applies only to do/for and parallel do/for directives that have the schedule type RUNTIME
  § sets the schedule type and chunk size for all such loops (see the sketch below)
  § For example:
      setenv OMP_SCHEDULE "GUIDED,4"        [csh, tcsh]
      export OMP_SCHEDULE="GUIDED,4"        [sh, ksh, bash]

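A minimal sketch (not from the slides) of how OMP_SCHEDULE reaches a loop, reusing the zero/max example from earlier; the loop must request the RUNTIME schedule:

// Compile with OpenMP enabled, e.g.:  gcc -fopenmp prog.c
// Then the environment controls this loop at run time, e.g.:
//   OMP_NUM_THREADS=8 OMP_SCHEDULE="dynamic,4" ./a.out
#pragma omp parallel for schedule(runtime)   // schedule type and chunk taken from OMP_SCHEDULE
for (i = 0; i < max; i++)
    zero[i] = 0;
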
Programming Model – Loop Scheduling
• The schedule clause determines how loop iterations are divided among the thread team
  • static([chunk]) divides iterations statically between threads
    • Each thread receives [chunk] iterations, rounding as necessary to account for all iterations
    • Default [chunk] is ceil( # iterations / # threads )
  • dynamic([chunk]) allocates [chunk] iterations per thread, allocating an additional [chunk] iterations when a thread finishes
    • Forms a logical work queue, consisting of all loop iterations
    • Default [chunk] is 1
  • guided([chunk]) allocates dynamically, but [chunk] is exponentially reduced with each allocation (see the sketch below)

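A minimal sketch (not from the slides) of the three clause forms, applied to the zero-initialization loop used earlier:

#pragma omp parallel for schedule(static)      // contiguous blocks of roughly max/#threads iterations
for (i = 0; i < max; i++) zero[i] = 0;

#pragma omp parallel for schedule(dynamic, 4)  // threads grab 4 iterations at a time from a work queue
for (i = 0; i < max; i++) zero[i] = 0;

#pragma omp parallel for schedule(guided, 4)   // chunk sizes shrink over time, but not below 4
for (i = 0; i < max; i++) zero[i] = 0;
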
Loop scheduling options
(figure not reproduced)

Programming Model – Data Sharing
• Parallel programs often employ two types of data
  • Shared data, visible to all threads, similarly named
  • Private data, visible to a single thread (often stack-allocated)
• PThreads:
  • Global-scoped variables are shared
  • Stack-allocated variables are private
• OpenMP:
  • shared variables are shared
  • private variables are private

// shared, globals
int bigdata[1024];

void* foo(void* bar) {
    // private, stack
    int tid;

    #pragma omp parallel \
        shared ( bigdata ) \
        private ( tid )
    {
        /* Calculation goes here */
    }
}

Programming Model - Synchronization
• OpenMP Synchronization
  • OpenMP Critical Sections
    • Named or unnamed
    • No explicit locks / mutexes
        #pragma omp critical
        {
            /* Critical code here */
        }
  • Barrier directives
        #pragma omp barrier
  • Explicit lock functions
    • When all else fails – may require the flush directive
        omp_set_lock( lock l );
        /* Code goes here */
        omp_unset_lock( lock l );
  • Single-thread regions within parallel regions
    • master, single directives
        #pragma omp single
        {
            /* Only executed once */
        }

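For reference (not shown on the slide), the actual lock API takes a pointer to an omp_lock_t and needs explicit init/destroy calls; a minimal sketch:

#include <omp.h>

omp_lock_t l;

void locked_update(void) {
    omp_init_lock(&l);            // create the lock once
    #pragma omp parallel
    {
        omp_set_lock(&l);         // only one thread at a time past this point
        /* Code goes here */
        omp_unset_lock(&l);       // release so other threads can proceed
    }
    omp_destroy_lock(&l);         // clean up
}
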
omp critical vs. atomic

int sum = 0;
#pragma omp parallel for
for (int j = 1; j < n; j++) {
    int x = j*j;
    #pragma omp critical
    {
        sum = sum + x;   // one thread enters the critical section at a time
    }
}

• May also use
    #pragma omp atomic
    x += expr
• Faster, but supports only limited arithmetic operations such as ++, --, +=, -=, *=, /=, &=, |= (see the sketch below)

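A minimal sketch (not from the slide) of the same loop using atomic instead of a critical section:

// Sum of squares 1^2 + ... + (n-1)^2, accumulated with atomic updates.
int sum_of_squares(int n) {
    int sum = 0;
    #pragma omp parallel for
    for (int j = 1; j < n; j++) {
        int x = j*j;
        #pragma omp atomic
        sum += x;        // atomic update of a single scalar; no critical section needed
    }
    return sum;
}

For a plain sum like this, the reduction(+:sum) clause from the earlier slides is usually the better choice; atomic is shown here only to contrast it with critical.
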
OpenMP Timing

• Elapsed wall clock time:
    double omp_get_wtime(void);
  • Returns elapsed wall clock time in seconds
  • Time is measured per thread; no guarantee can be made that two distinct threads measure the same time
  • Time is measured from “some time in the past,” so subtract the results of two calls to omp_get_wtime to get elapsed time

Parallel Matrix Multiply: Run Tasks Ti in parallel on multiple threads
(figures showing tasks T1 and T2 assigned to different threads; not reproduced)

Matrix Multiply in OpenMP

// C[M][N] = A[M][P] × B[P][N]
start_time = omp_get_wtime();
#pragma omp parallel for private(tmp, j, k)
for (i=0; i<M; i++){                 // outer loop spread across N threads;
    for (j=0; j<N; j++){             // inner loops run inside a single thread
        tmp = 0.0;
        for( k=0; k<P; k++){
            /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
            tmp += A[i][k] * B[k][j];
        }
        C[i][j] = tmp;
    }
}
run_time = omp_get_wtime() - start_time;

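An equivalent sketch (not from the slide): declaring the loop indices and the temporary inside the loop makes them automatically private, so the private clause can be dropped:

start_time = omp_get_wtime();
#pragma omp parallel for
for (int i = 0; i < M; i++) {
    for (int j = 0; j < N; j++) {
        double tmp = 0.0;                  // per-iteration locals are private by construction
        for (int k = 0; k < P; k++)
            tmp += A[i][k] * B[k][j];
        C[i][j] = tmp;
    }
}
run_time = omp_get_wtime() - start_time;
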
OpenMP Summary
• OpenMP is a compiler-based technique to create concurrent code from (mostly) serial code
• OpenMP can enable (easy) parallelization of loop-based code with fork-join parallelism
  • #pragma omp parallel
  • #pragma omp parallel for
  • #pragma omp parallel private ( i, x )
  • #pragma omp atomic
  • #pragma omp critical
  • #pragma omp for reduction(+ : sum)
• OpenMP performs comparably to manually-coded threading
  • Not a silver bullet for all applications
