Multiprocessing OpenMP

The document provides an overview of OpenMP, a parallel programming model for shared-memory systems, detailing its components, such as compiler directives and runtime functions. It explains how to create parallel regions, control the number of threads, and parallelize loops while addressing loop-carried dependencies and synchronization issues. Additionally, it discusses scheduling strategies for loop iterations, including static and dynamic scheduling methods.

Multiprocessing using OpenMP

• Introduction to OpenMP
• Basic Components of OpenMP
• OpenMP Parallel Regions
• Controlling the Number of Threads
• Parallelizing Loops with OpenMP
• Loop-Carried Dependencies and Synchronization
• How OpenMP Schedules Loop Iterations
Introduction to OpenMP
• What is OpenMP?
– A parallel programming model for shared-memory systems.
– Provides a simple and flexible syntax for parallel programming.
– Uses compiler directives, runtime library functions, and environment variables to manage parallelism.
• Why OpenMP?
– Easy to implement and efficient for parallelizing loops and tasks.
– Portable across platforms with minimal changes.
Basic Components of OpenMP
• Compiler Directives: Instructions that tell the compiler how to parallelize the code. Every OpenMP directive begins with #pragma omp.
• Runtime Functions: Used to query and control the behavior of parallel sections. Example: omp_get_thread_num(), omp_set_num_threads().
• Environment Variables: Control runtime settings such as the number of threads. Example: OMP_NUM_THREADS sets the number of threads.
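Example (a minimal sketch of the three components working together; assumes GCC, where OpenMP is enabled with -fopenmp):

#include <stdio.h>
#include <omp.h>   // declares the OpenMP runtime functions

int main(void) {
    // Runtime functions: query and set the thread count.
    printf("max threads available: %d\n", omp_get_max_threads());
    omp_set_num_threads(4);   // subsequent parallel regions will request 4 threads

    // Outside a parallel region there is only one thread, number 0.
    printf("this is thread %d\n", omp_get_thread_num());
    return 0;
}

// Build: gcc -fopenmp example.c
// Environment-variable alternative: OMP_NUM_THREADS=4 ./a.out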


OpenMP Parallel Regions
• Parallel Region: A block of code to be executed by multiple threads.

#pragma omp parallel
{
    // Code to be executed by each thread
}
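A runnable sketch of a parallel region; the exact thread count depends on the settings described on the next slide:

#include <stdio.h>
#include <omp.h>

int main(void) {
    printf("before the region: serial, %d thread\n", omp_get_num_threads());  // prints 1

    #pragma omp parallel
    {
        // Every thread in the team executes this block.
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    printf("after the region: serial again\n");  // the team joins at the closing brace
    return 0;
}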
Controlling the Number of Threads
• Set through the OMP_NUM_THREADS environment variable or the omp_set_num_threads() runtime function, as shown in the sketch below.
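A sketch of both mechanisms, plus the per-region num_threads clause (the thread counts here are illustrative):

#include <stdio.h>
#include <omp.h>

int main(void) {
    omp_set_num_threads(2);  // runtime function: applies to subsequent regions

    #pragma omp parallel
    {
        #pragma omp single   // one thread reports for the whole team
        printf("first region: %d threads\n", omp_get_num_threads());  // expected: 2
    }

    #pragma omp parallel num_threads(3)  // clause: overrides the setting for this region only
    {
        #pragma omp single
        printf("second region: %d threads\n", omp_get_num_threads());  // expected: 3
    }
    return 0;
}

// Running as OMP_NUM_THREADS=8 ./a.out sets the default instead of omp_set_num_threads().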
Parallelizing Loops with OpenMP
• For Loop Parallelization:
– The #pragma omp parallel for directive allows parallel execution of for loops.
Parallelizing Loops with OpenMP
• reduction(+:sum): A clause that gives each thread a private copy of sum; at the end of the loop, the private copies are combined into the global sum, as sketched below.
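A runnable sketch of the sum example the slides refer to (the array contents are chosen here for illustration):

#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void) {
    int a[N];
    for (int i = 0; i < N; i++) a[i] = 1;   // fill with known values

    long sum = 0;
    // Each thread accumulates into its own private copy of sum;
    // the private copies are added into the shared sum when the loop ends.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);   // always 1000, regardless of thread count
    return 0;
}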
Loop-Carried Dependencies and Synchronization
• A loop-carried dependency exists when iterations of a loop depend on each other, meaning the result of one iteration affects the next.
• Will it work? (See the sketch below.)
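The slide's code is not preserved in this export; a prefix-sum loop is a typical example of the pattern (an assumed illustration):

#include <stdio.h>
#define N 8

int main(void) {
    int a[N] = {1, 1, 1, 1, 1, 1, 1, 1};

    // Each iteration reads the value the previous iteration just wrote,
    // so iteration i depends on iteration i-1: a loop-carried dependency.
    for (int i = 1; i < N; i++)
        a[i] = a[i] + a[i - 1];

    // Naively adding "#pragma omp parallel for" above would be wrong:
    // a thread may read a[i-1] before the thread handling i-1 has written it.
    for (int i = 0; i < N; i++) printf("%d ", a[i]);  // 1 2 3 4 5 6 7 8
    printf("\n");
    return 0;
}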
Loop-Carried Dependencies and Synchronization
• This type of dependency cannot be easily parallelized with OpenMP without proper synchronization, because threads may need to access data that is still being modified by other threads.
• Solution: Avoid parallelizing such loops, or use the critical or atomic directives to ensure safe access to shared data.
• The critical directive ensures that only one thread at a time can execute the protected block.
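A sketch of safe access to shared data with atomic (the shared counter is an illustrative stand-in); critical works the same way for multi-statement blocks:

#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void) {
    int hits = 0;   // shared by all threads

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        if (i % 2 == 0) {
            // atomic makes the read-modify-write indivisible,
            // so concurrent increments cannot be lost.
            #pragma omp atomic
            hits++;
        }
    }

    // For a larger update, critical serializes a whole block instead:
    //   #pragma omp critical
    //   { ... only one thread at a time executes this ... }

    printf("hits = %d\n", hits);   // always 500
    return 0;
}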
How OpenMP Schedules Loop Iterations
• By default, OpenMP uses static scheduling to distribute the iterations of a loop across threads. It also provides dynamic scheduling, guided scheduling, and other strategies to control how iterations are assigned to threads.
How OpenMP Schedules Loop Iterations
• Static Scheduling (Default)
– How it works: OpenMP assigns a block of consecutive iterations to each thread before execution begins. This is the scheduling most implementations use when no schedule clause is specified.
• For Threads = 4, N = 12:
– Thread 0 gets iterations 0-2.
– Thread 1 gets iterations 3-5.
– Thread 2 gets iterations 6-8.
– Thread 3 gets iterations 9-11.
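A sketch of the distribution described above (the mapping is spelled out in the comments):

#include <stdio.h>
#include <omp.h>

#define N 12

int main(void) {
    // schedule(static): iterations are split into equal consecutive
    // blocks before the loop starts. With 4 threads and N = 12:
    // thread 0 -> 0-2, thread 1 -> 3-5, thread 2 -> 6-8, thread 3 -> 9-11.
    #pragma omp parallel for schedule(static) num_threads(4)
    for (int i = 0; i < N; i++)
        printf("iteration %2d handled by thread %d\n", i, omp_get_thread_num());
    return 0;
}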
How OpenMP Schedules Loop Iterations
• Dynamic Scheduling
– How it works: OpenMP assigns a chunk of iterations to a thread, and threads request additional chunks as they finish processing. This is more flexible and adapts to uneven workloads.
How OpenMP Schedules Loop Iterations
• How dynamic scheduling works for the sum-of-array-elements example with 4 threads, N = 25, and a chunk size of 3 is sketched below.
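A sketch of that setup; which thread grabs which chunk varies from run to run:

#include <stdio.h>
#include <omp.h>

#define N 25

int main(void) {
    int a[N];
    for (int i = 0; i < N; i++) a[i] = i;   // illustrative contents

    long sum = 0;
    // schedule(dynamic, 3): each thread grabs 3 iterations at a time and
    // returns for the next free chunk when done, so faster threads end up
    // doing more chunks. 25 iterations -> 8 chunks of 3, plus one of 1.
    #pragma omp parallel for schedule(dynamic, 3) reduction(+:sum) num_threads(4)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);   // 0 + 1 + ... + 24 = 300
    return 0;
}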
How OpenMP Schedules Loop Iterations
• Static versus Dynamic Scheduling
– Static: iterations are divided up front, so runtime overhead is minimal; best when all iterations take about the same time.
– Dynamic: chunks are handed out as threads become free, adding some runtime overhead but balancing uneven workloads.
