
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING

EXPERIMENT NO. 8

Lab Title: OpenMP & relevant concepts

Student Names: M. Bilal Ijaz, Agha Ammar Khan    Reg. Nos: 210316, 210300

Objective: Implement and analyze various OpenMP Programs

LAB ASSESSMENT:

Attributes                         Excellent (5)  Good (4)  Average (3)  Satisfactory (2)  Unsatisfactory (1)

Ability to Conduct Experiment

Ability to assimilate the results

Effective use of lab equipment and follows the lab safety rules

Total Marks: Obtained Marks:

LAB REPORT ASSESSMENT:

Attributes                         Excellent (5)  Good (4)  Average (3)  Satisfactory (2)  Unsatisfactory (1)

Data Presentation

Experiment Results

Conclusion

Total Marks: Obtained Marks:

Date: 02/12/2024 Signature:


LAB#08
TITLE: Implement and analyze various OpenMP Programs (Reduction,
Critical & Section Clause)

Objective:
• Implement and analyze various OpenMP Programs (Reduction, Critical & Section Clause)

Introduction:
OpenMP (Open Multi-Processing) is an API (Application Programming Interface) for parallel
programming in C, C++, and Fortran. It allows developers to write parallel code by adding simple
compiler directives to loops or sections of code that can be executed concurrently. Using
OpenMP features such as the reduction and critical clauses can significantly enhance
performance in algorithms that run on multiple cores or processors, especially when working
with large datasets or computationally intensive tasks. In OpenMP, the section clause is used to
split code into independent blocks that can be executed concurrently by different threads. It lets
you explicitly specify which portions of the code should run in parallel, which is especially useful
when independent tasks can run concurrently but are not naturally expressed as a loop. Each
section can be executed by a different thread.

OpenMP for Reduction:


In OpenMP, reduction is a technique in which each thread maintains a local copy of a variable,
performs the operation (e.g., sum, product) on it independently, and then the results from all
threads are combined into a single value. This is useful when an operation accumulates values
(e.g., summing an array).
Example: Summing an Array Using OpenMP Reduction:
Let’s say you have a large array of integers, and you want to compute the sum using OpenMP
with the reduction clause:

Explanation:
• #pragma omp parallel for: This directive parallelizes the for loop, allowing iterations to
be divided across threads.
• reduction(+:sum): This tells OpenMP to handle the sum variable in a thread-safe
manner. Each thread maintains a local copy of sum, and the local copies are combined
into the global sum after the loop.

Output:
The sum of the array is: 1000000

In this case, each thread works on a part of the array and reduces its local result into the
final sum, improving performance for large datasets.
OpenMP for Critical Clause:
In OpenMP, a critical section is used to protect a block of code from being executed by more
than one thread at a time. It is used when threads need to access shared data and race
conditions must be avoided: only one thread at a time is allowed to enter the critical section,
so updates to shared variables remain consistent.

Example: OpenMP Critical Section for Finding a Critical Clause in Logic:


Let’s assume we are solving a logical expression and need to perform a critical operation (like
updating a shared result variable). We’ll demonstrate this with a scenario where multiple
threads evaluate different parts of the logic expression and update a shared criticalResult based
on the evaluation:
Explanation:
• #pragma omp parallel: This begins the parallel region where multiple threads are
created.
• #pragma omp for: This parallelizes the loop, allowing each thread to process a portion of
the loop.
• #pragma omp critical: This ensures that criticalResult is updated safely by only one
thread at a time. While threads can compute their localResult values independently, the
update to the shared criticalResult is protected.

Output:
The critical result is: 500000
In this example, the critical clause refers to the section of code that updates a shared variable
(criticalResult). Using #pragma omp critical ensures that only one thread at a time modifies
criticalResult, preventing race conditions.

Using the section Clause in OpenMP:


The sections construct allows you to define multiple blocks of code, each of which is executed
once by one of the threads in the team. The sections are not required to be executed in any
particular order.

Syntax:
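The syntax listing is not reproduced here; the general shape of the construct is:

```c
#pragma omp parallel sections
{
    #pragma omp section
    { /* task A */ }

    #pragma omp section
    { /* task B */ }
}
```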
Example of Using section Clause:
Let’s demonstrate a simple example where we use the section clause to perform multiple independent
tasks in parallel. In this example, we’ll split the task of calculating the sum of an array, printing some text,
and performing some mathematical computation.
Explanation:
• #pragma omp parallel sections: This directive tells OpenMP to parallelize the following
sections of code. Each section directive defines a block of code that can be executed
independently.
• #pragma omp section: Marks a block of code as a section that will be executed by one of
the threads.
• Threads: OpenMP creates a team of threads, each executing a section of the code
concurrently.

Key Points:
• Independent Tasks: Each section can contain independent tasks that don’t require
communication with other sections.
• No Ordering Guarantee: The order in which sections are executed is not guaranteed.
Each section runs on a different thread, but the exact execution order can vary.
• Performance: OpenMP handles the assignment of sections to threads automatically. The
number of threads used is determined by the system’s available resources or can be
specified by the user with the omp_set_num_threads() function.
Output Example:
The sum of the array is: 1000000
This is section 2: Printing a message.
The factorial of 10 is: 3628800

Considerations:
• Number of Threads: The number of threads available will depend on the system’s
resources and OpenMP configuration. If you have more sections than threads, some
threads will handle multiple sections.
• Shared vs Private Variables: As with other OpenMP constructs, variables used inside
sections need to be carefully managed. By default, variables declared outside the
sections are shared between threads, but you can use OpenMP’s private or firstprivate
clauses to control variable visibility.

Use Case Scenarios for sections:


• Independent Computations: Use OpenMP sections when you have several independent
tasks that can be processed in parallel. Each section can execute a completely different
task concurrently.
• Workload Division: When tasks involve separate work units, like in the example where
we sum an array, print a message, and calculate a factorial, sections allow these to run in
parallel without dependency between them.

Advanced Example: Dynamic Workload Using sections:


Sometimes, the number of sections can be determined dynamically or you might want to
process sections of varying sizes. You can do this by adding logic inside the sections, but the
sections themselves should remain independent.
Key Concepts in OpenMP for Reduction and Critical Clauses:

Reduction:
o Ensures that each thread computes its own local value and the results are combined
safely.
o Examples of operations: addition, multiplication, logical operations.
o Syntax: reduction(+:sum), where + is the operation (other operations include *, &, |, &&, ||, etc.).

Critical Clause:
o Protects a section of code from concurrent access by multiple threads.
o Ensures that shared data is modified by only one thread at a time, preventing race
conditions.
o Syntax: #pragma omp critical before the code that needs to be synchronized.

Lab Tasks:
Code and Output:
Task2:
Conclusion:
Using OpenMP for reduction and critical sections allows us to efficiently manage
parallelism in programs that operate on shared data, such as summing large arrays. By
leveraging reduction for independent accumulations and critical for shared-data updates,
OpenMP helps optimize both the performance and correctness of parallel algorithms. The
OpenMP sections construct is a useful tool for executing independent tasks concurrently. It
simplifies parallelizing multiple blocks of code without the need to manage complex thread
creation or synchronization manually. By using sections, you can improve the performance of
applications whose tasks do not depend on each other and can be divided into parallelizable
units.
