Unit III
Compilers
Object Oriented Parallel Programming Model
• The basic entity of this model is the object; all the work in this model is carried out by objects.
• Objects:
• Objects are the fundamental building blocks in OOPP. They encapsulate
data and operations, similar to traditional OOP.
• However, in the parallel programming context, objects are designed to
execute concurrently and can be distributed across multiple processing units.
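• As a rough illustration, the C++ sketch below runs the same operation on two independent objects, each on its own thread (the Accumulator class and its members are made up for this example):

```cpp
#include <thread>
#include <vector>
#include <numeric>
#include <iostream>

class Accumulator {
    std::vector<int> data_;
    long long result_ = 0;
public:
    explicit Accumulator(std::vector<int> data) : data_(std::move(data)) {}
    // The object's operation; several Accumulator instances can run this concurrently.
    void run() { result_ = std::accumulate(data_.begin(), data_.end(), 0LL); }
    long long result() const { return result_; }
};

int main() {
    Accumulator a({1, 2, 3}), b({4, 5, 6});
    std::thread t1(&Accumulator::run, &a);   // each object executes on its own thread
    std::thread t2(&Accumulator::run, &b);
    t1.join(); t2.join();
    std::cout << a.result() + b.result() << "\n";   // prints 21
}
```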
• Synchronization:
• OOPP provides synchronization mechanisms to ensure orderly access
to shared resources and avoid race conditions.
• Synchronization primitives like locks, semaphores, and barriers are
used to coordinate the execution of parallel objects.
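• A minimal sketch of lock-based synchronization, assuming a shared counter updated by several threads (the names counter and work are illustrative):

```cpp
#include <mutex>
#include <thread>
#include <vector>
#include <iostream>

std::mutex m;          // lock guarding the shared counter
long long counter = 0; // shared resource

void work(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::lock_guard<std::mutex> guard(m); // only one thread updates at a time
        ++counter;                            // no race condition on the increment
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(work, 100000);
    for (auto& t : threads) t.join();
    std::cout << counter << "\n"; // always 400000 with the lock; unpredictable without it
}
```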
• Load Balancing:
• Load balancing is an important aspect of parallel programming.
• OOPP models often include load balancing techniques to distribute
computational load evenly across parallel objects, ensuring efficient
resource utilization.
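• One simple load-balancing scheme is a shared pool of tasks from which threads pull work dynamically; the sketch below uses an atomic counter as that pool, so faster threads naturally take on more tasks (the task count and per-task work are placeholders):

```cpp
#include <atomic>
#include <thread>
#include <vector>
#include <iostream>

int main() {
    const int tasks = 1000;
    std::atomic<int> next{0};
    std::atomic<long long> total{0};

    auto worker = [&] {
        int i;
        while ((i = next.fetch_add(1)) < tasks) {      // grab the next unclaimed task
            total += static_cast<long long>(i) * i;    // stand-in for uneven per-task work
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    std::cout << total << "\n";
}
```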
Logic Programming Model
• The logical programming model is based on the concept of logic programming, where programs are expressed as a set of logical statements or rules.
• In this model, the program specifies what should be computed rather than how it should be
computed.
• Rule-Based Programming: Programs consist of a set of rules that define logical relationships
between entities. These rules can be executed concurrently.
• Backtracking:
• The ability to explore alternative solutions when executing a program.
• Parallelism can be achieved by exploring different branches of a computation tree concurrently.
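• The sketch below illustrates that idea in C++ rather than in a logic language: a backtracking subset-sum search whose two top-level branches of the computation tree (include or exclude the first element) are explored concurrently (the function subsetSum and the sample data are invented for this example):

```cpp
#include <future>
#include <vector>
#include <iostream>

// Backtracking search: try including an element, and on failure backtrack and exclude it.
bool subsetSum(const std::vector<int>& v, std::size_t i, int target) {
    if (target == 0) return true;
    if (i == v.size()) return false;
    return subsetSum(v, i + 1, target - v[i])   // branch: include v[i]
        || subsetSum(v, i + 1, target);         // backtrack: exclude v[i]
}

int main() {
    std::vector<int> v{3, 34, 4, 12, 5, 2};
    int target = 9;
    // Explore the two top-level branches of the search tree concurrently.
    auto include = std::async(std::launch::async, subsetSum, std::cref(v), 1, target - v[0]);
    auto exclude = std::async(std::launch::async, subsetSum, std::cref(v), 1, target);
    std::cout << (include.get() || exclude.get() ? "found" : "not found") << "\n";
}
```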
Parallel Language Constructs
• Parallel language constructs refer to features or mechanisms in
programming languages that allow developers to express and utilize
parallelism in their code.
• Threads: Many programming languages provide built-in support for creating and managing threads.
• Threads can share memory and communicate with each other, but proper
synchronization mechanisms are required to avoid data races and ensure thread
safety.
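• A small sketch of two threads sharing memory and communicating through it, using an atomic flag so the communication is free of data races (the variable names are illustrative):

```cpp
#include <atomic>
#include <thread>
#include <iostream>

int main() {
    int value = 0;
    std::atomic<bool> ready{false};

    // Writer publishes a value, then signals via the atomic flag.
    std::thread writer([&] { value = 42; ready.store(true, std::memory_order_release); });
    // Reader waits for the signal before touching the shared value.
    std::thread reader([&] {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        std::cout << value << "\n";   // guaranteed to see 42
    });

    writer.join();
    reader.join();
}
```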
• Fork-Join: Fork-Join is a construct where a task is divided (forked) into multiple subtasks that execute concurrently, and their results are combined once all subtasks complete (the join), as in the sketch below.
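• A fork-join sketch using std::async: the range is split recursively, the halves are summed concurrently, and the partial results are joined (the threshold of 1000 elements is an arbitrary choice):

```cpp
#include <future>
#include <numeric>
#include <vector>
#include <iostream>

long long parallelSum(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 1000)                                   // small enough: do it serially
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
    std::size_t mid = lo + (hi - lo) / 2;
    auto left = std::async(std::launch::async, parallelSum, std::cref(v), lo, mid); // fork
    long long right = parallelSum(v, mid, hi);            // current thread handles the other half
    return left.get() + right;                            // join: combine the results
}

int main() {
    std::vector<int> v(100000, 1);
    std::cout << parallelSum(v, 0, v.size()) << "\n";     // prints 100000
}
```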
• Concurrent Data Structures: These data structures handle concurrent access internally and ensure thread safety without requiring manual synchronization by the caller.
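• C++'s standard library does not ship a concurrent queue, so the sketch below wraps std::queue with an internal lock to illustrate the idea (the ConcurrentQueue name and interface are invented for this example):

```cpp
#include <mutex>
#include <queue>
#include <optional>
#include <thread>
#include <iostream>

// Callers push and pop without doing any synchronization themselves;
// the lock is an internal detail of the data structure.
template <typename T>
class ConcurrentQueue {
    std::queue<T> q_;
    std::mutex m_;
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(value));
    }
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return std::nullopt;
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
};

int main() {
    ConcurrentQueue<int> q;
    std::thread producer([&] { for (int i = 0; i < 5; ++i) q.push(i); });
    producer.join();
    while (auto item = q.try_pop()) std::cout << *item << " ";   // prints 0 1 2 3 4
    std::cout << "\n";
}
```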
• Message Passing: Message passing allows communication and coordination between different parallel tasks or processes. Libraries and standards such as MPI (Message Passing Interface) provide constructs for explicit message passing, enabling communication between processes running on separate nodes in a distributed system.
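• A minimal MPI sketch in which rank 0 sends one integer to rank 1 (compile with an MPI wrapper such as mpicxx and run with at least two processes):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;
        // Explicit send to rank 1 with tag 0.
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload = 0;
        // Matching receive from rank 0.
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```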
• Parallel Patterns:
• Parallel patterns are higher-level constructs that encapsulate common parallel computation patterns, such as map, reduce, and pipeline.
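• As one example of such a pattern, the C++17 parallel algorithms express a parallel map without any manual thread management (support for std::execution::par depends on the toolchain):

```cpp
#include <algorithm>
#include <execution>
#include <vector>
#include <iostream>

int main() {
    std::vector<int> v(1'000'000, 2);
    // Parallel "map": the execution policy lets the library run the per-element
    // work concurrently; no threads are created or joined by hand.
    std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                   [](int x) { return x * x; });
    std::cout << v.front() << "\n";   // prints 4
}
```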
Compiler Support for Parallelism
• Loop-level Parallelism:
• Compilers analyze loops to identify opportunities for parallel execution. They
examine loop dependencies, data accesses, and control flow to determine if
multiple loop iterations can be executed simultaneously.
• Techniques such as loop unrolling, loop fusion, and loop distribution can be
applied to expose parallelism.
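• A sketch of a loop whose iterations are independent, annotated with OpenMP so the iterations can be split across threads (compile with -fopenmp; the array sizes are arbitrary):

```cpp
#include <vector>
#include <iostream>

int main() {
    const int n = 1'000'000;
    std::vector<double> a(n), b(n, 1.5), c(n, 2.5);

    // Each iteration touches only index i, so there is no loop-carried dependency
    // and the iterations can run on different threads.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        a[i] = b[i] + c[i];

    std::cout << a[0] << "\n";   // prints 4
}
```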
• Automatic Vectorization:
• Compilers analyze the data dependencies and memory access patterns within loops to generate vector (SIMD) instructions that process multiple data elements in parallel.
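• The loop below has contiguous, independent accesses, so a compiler can auto-vectorize it (for example at -O3); the optional OpenMP simd pragma asserts that vectorization is safe:

```cpp
#include <cstddef>
#include <vector>
#include <iostream>

int main() {
    const std::size_t n = 1024;
    std::vector<float> x(n, 1.0f), y(n, 2.0f), z(n);

    // One SIMD instruction can process several values of i at once.
    #pragma omp simd
    for (std::size_t i = 0; i < n; ++i)
        z[i] = 2.0f * x[i] + y[i];

    std::cout << z[0] << "\n";   // prints 4
}
```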
• Parallelization of Dependencies: Compilers analyze the dependencies
between different program statements or expressions and attempt to
parallelize them. Techniques like speculative execution, dependency
analysis, and software pipelining are employed to maximize parallel
execution while ensuring correct program semantics.
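• A small sketch contrasting a loop-carried dependency with an independent loop, which is exactly the distinction dependency analysis must establish before parallelizing (array names and sizes are arbitrary):

```cpp
#include <vector>
#include <iostream>

int main() {
    const int n = 8;
    std::vector<int> a(n, 1), b(n, 1);

    // Loop-carried dependency: iteration i reads a[i-1] written by iteration i-1,
    // so these iterations cannot safely run in parallel as written.
    for (int i = 1; i < n; ++i)
        a[i] = a[i - 1] + 1;

    // No dependency between iterations: each one is independent and parallelizable.
    for (int i = 0; i < n; ++i)
        b[i] = b[i] * 2;

    std::cout << a[n - 1] << " " << b[0] << "\n";   // prints "8 2"
}
```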