
Unit 3: Parallel Models, Languages and Compilers
Object Oriented Parallel Programming Model
• The basic entity of this model is the object; all work in this model is carried out by objects.
• This model uses all the features of object-oriented programming.
• The Object-Oriented Parallel Programming (OOPP) model combines the principles of object-oriented programming (OOP) with parallel computing concepts.
• It aims to facilitate the development of parallel programs by providing abstractions and mechanisms for managing concurrency and communication between parallel objects.
• Here are some key features and concepts in the Object-Oriented Parallel Programming model:
• Objects:
• Objects are the fundamental building blocks in OOPP. They encapsulate data and operations, similar to traditional OOP.
• However, in the parallel programming context, objects are designed to execute concurrently and can be distributed across multiple processing units.
• Concurrency: OOPP supports concurrent execution of objects, allowing multiple objects to execute simultaneously.
• Concurrency can be achieved through mechanisms like threads or processes, where each object can have its own execution context (a minimal sketch follows).
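As an illustration of concurrently executing objects, here is a minimal Java sketch (the Worker class, its fields, and the workload are invented for illustration): each object encapsulates its own data and runs in its own thread, so both objects execute at the same time.

// A minimal sketch of "parallel objects": each Worker object encapsulates its own
// data and runs in its own thread, so several objects execute concurrently.
public class ParallelObjects {

    static class Worker implements Runnable {   // illustrative class, not a standard API
        private final int id;                   // encapsulated state
        private long result;

        Worker(int id) { this.id = id; }

        @Override
        public void run() {
            // Each object performs its own computation independently.
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += (long) i * id;
            result = sum;
            System.out.println("Worker " + id + " finished: " + result);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new Worker(1));
        Thread t2 = new Thread(new Worker(2));
        t1.start();   // both objects now execute concurrently
        t2.start();
        t1.join();    // wait for each object to finish
        t2.join();
    }
}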
• Communication:
• Objects in the OOPP model communicate with each other to exchange
data and synchronize their actions. Communication can take place
through message passing, shared memory, or other inter-object
communication mechanisms.
• Object communication is essential for coordination and cooperation
among parallel objects.

• Synchronization:
• OOPP provides synchronization mechanisms to ensure orderly access
to shared resources and avoid race conditions.
• Synchronization primitives like locks, semaphores, and barriers are
used to coordinate the execution of parallel objects.
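A minimal Java sketch of synchronization, assuming a simple shared counter (the class name and constants are illustrative): two threads increment shared state, and a ReentrantLock serializes access to the critical section so no update is lost.

import java.util.concurrent.locks.ReentrantLock;

// Two threads update a shared counter; the lock prevents a race condition.
public class SharedCounter {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();          // acquire the lock before touching shared state
                try {
                    counter++;        // critical section
                } finally {
                    lock.unlock();    // always release, even on exceptions
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println("Counter = " + counter);   // prints 200000
    }
}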
• Load Balancing:
• Load balancing is an important aspect of parallel programming.
• OOPP models often include load balancing techniques to distribute
computational load evenly across parallel objects, ensuring efficient
resource utilization.

• By combining the principles of OOP and parallel computing, the OOPP model aims to simplify the development of parallel programs.
Functional & Logical Model
• Functional and logical parallel programming models are two different approaches to designing parallel programs.
• The first model is based on functional programming languages such as pure LISP, SISAL, and Strand88.
• The second model is based on logic programming languages such as Concurrent Prolog and Parlog.
• Both models are used in artificial intelligence applications.


• Functional Parallel Programming Model:
• In this model, the program is written as a composition of functions that operate on immutable data.
• Parallelism is achieved by dividing the computation into independent tasks that can be executed concurrently.
• Functional Composition: Programs are built by composing smaller functions into larger ones, allowing for modular and reusable code (see the sketch below).
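A minimal Java sketch of the functional style (the functions and input values are invented for illustration): pure functions are composed into a pipeline and applied to immutable data, and because each application is independent, the mapping can run in parallel.

import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Pure functions composed into a pipeline and applied to immutable data in parallel.
public class FunctionalSketch {
    public static void main(String[] args) {
        Function<Integer, Integer> square = x -> x * x;
        Function<Integer, Integer> addOne = x -> x + 1;
        Function<Integer, Integer> pipeline = square.andThen(addOne);  // functional composition

        List<Integer> input = List.of(1, 2, 3, 4, 5, 6, 7, 8);          // immutable data

        // Each element is independent, so the mapping can run concurrently.
        List<Integer> output = input.parallelStream()
                                    .map(pipeline)
                                    .collect(Collectors.toList());

        System.out.println(output);   // [2, 5, 10, 17, 26, 37, 50, 65]
    }
}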
• Logical Parallel Programming Model:
• The logical programming model is based on the concept of logic programming, where programs are expressed as a set of logical statements or rules.
• In this model, the program specifies what should be computed rather than how it should be computed.
• Declarative Programming: Programs focus on defining relationships and constraints rather than specifying a sequence of steps.
• Rule-Based Programming: Programs consist of a set of rules that define logical relationships between entities. These rules can be executed concurrently.
• Backtracking:
• The ability to explore alternative solutions when executing a program.
• Parallelism can be achieved by exploring different branches of a computation tree concurrently (see the sketch below).
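The following Java sketch is only an imperative illustration of the backtracking idea, not actual logic programming (the subset-sum problem and all names are invented for illustration): a recursive search explores "include" and "exclude" branches, and because the two top-level branches are independent, they can be explored concurrently.

import java.util.stream.IntStream;

// Illustrative subset-sum search: counts subsets of `values` that add up to `target`.
public class SubsetSumSearch {

    // Sequential backtracking: at each index, explore "include" and "exclude" branches.
    static int count(int[] values, int index, int remaining) {
        if (remaining == 0) return 1;            // found a solution
        if (index == values.length) return 0;    // dead end, backtrack
        return count(values, index + 1, remaining - values[index])   // include values[index]
             + count(values, index + 1, remaining);                   // exclude it
    }

    public static void main(String[] args) {
        int[] values = {3, 34, 4, 12, 5, 2};
        int target = 9;

        // Parallel variant: the two top-level branches of the search tree are independent,
        // so they can be explored concurrently.
        int total = IntStream.of(0, 1)
                .parallel()
                .map(branch -> branch == 0
                        ? count(values, 1, target - values[0])   // branch that includes values[0]
                        : count(values, 1, target))              // branch that excludes values[0]
                .sum();

        System.out.println("Solutions found: " + total);   // 2
    }
}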
Parallel language constructs
• Parallel language constructs refer to features or mechanisms in programming languages that allow developers to express and utilize parallelism in their code.
• These constructs enable the execution of multiple tasks or operations simultaneously, taking advantage of the capabilities of multi-core processors and distributed computing systems.
• Here are some common parallel language constructs:


• Threads:
• Threads are lightweight units of execution that can run concurrently within a program.
• Many programming languages provide built-in support for creating and managing threads.
• Developers can assign different tasks to different threads, allowing them to execute in parallel (see the sketch below).
• Threads can share memory and communicate with each other, but proper synchronization mechanisms are required to avoid data races and ensure thread safety.
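A minimal Java sketch of assigning different tasks to different threads (the array contents and variable names are illustrative): one thread computes a sum and another computes a maximum over the same shared read-only data, and join() waits for both results.

// Two threads are given different tasks over the same shared array.
// The array is only read, so no synchronization beyond join() is needed here.
public class TwoTasks {
    public static void main(String[] args) throws InterruptedException {
        int[] data = {7, 3, 9, 1, 5, 8, 2, 6};
        long[] sum = new long[1];
        int[] max = new int[1];

        Thread sumThread = new Thread(() -> {
            long s = 0;
            for (int v : data) s += v;
            sum[0] = s;
        });
        Thread maxThread = new Thread(() -> {
            int m = Integer.MIN_VALUE;
            for (int v : data) m = Math.max(m, v);
            max[0] = m;
        });

        sumThread.start();
        maxThread.start();
        sumThread.join();   // join() also guarantees the results are visible here
        maxThread.join();

        System.out.println("sum = " + sum[0] + ", max = " + max[0]);   // sum = 41, max = 9
    }
}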
• Fork-Join: Fork-Join is a construct where a task is divided into multiple subtasks that can be executed concurrently.
• The "fork" operation creates multiple threads or processes to execute these subtasks, and the "join" operation waits for all subtasks to complete before proceeding.
• Fork-Join constructs are commonly used in languages like Java (with the Java Fork/Join framework) and Cilk (see the sketch below).
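Here is a minimal sketch using the Java Fork/Join framework, assuming a simple array-sum task (the threshold and array size are arbitrary): the index range is recursively forked into halves, and the partial sums are combined after join.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// An array sum is recursively split ("fork") and the partial results combined ("join").
public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;   // below this, just compute sequentially
    private final long[] data;
    private final int lo, hi;

    ForkJoinSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        ForkJoinSum left = new ForkJoinSum(data, lo, mid);
        ForkJoinSum right = new ForkJoinSum(data, mid, hi);
        left.fork();                        // run the left half asynchronously
        long rightSum = right.compute();    // compute the right half in this thread
        return left.join() + rightSum;      // wait for the left half and combine
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long total = ForkJoinPool.commonPool().invoke(new ForkJoinSum(data, 0, data.length));
        System.out.println(total);          // 4999950000
    }
}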
• Parallel Loops:
• Many programming languages provide constructs to parallelize loop iterations automatically.
• For example, OpenMP (a directive-based API) allows developers to annotate loops with pragmas or directives to indicate that iterations can be executed in parallel.
• The runtime system then automatically distributes loop iterations among available threads (see the sketch below).
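OpenMP directives target C, C++, and Fortran rather than Java; as a rough Java analogue of a parallelized loop, the sketch below (array size and loop body are illustrative) uses a parallel IntStream, letting the runtime distribute independent iterations across worker threads.

import java.util.stream.IntStream;

// A Java analogue of a parallelized loop: iterations are distributed by the runtime.
public class ParallelLoop {
    public static void main(String[] args) {
        double[] a = new double[1_000_000];

        // Conceptually similar to annotating the loop for parallel execution.
        IntStream.range(0, a.length)
                 .parallel()                          // iterations may run on different threads
                 .forEach(i -> a[i] = Math.sqrt(i));  // each iteration is independent

        System.out.println(a[999_999]);
    }
}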
• Parallel Data Structures:
• Parallel programming often requires special data structures designed for concurrent access.
• Some languages provide libraries or built-in constructs for parallel data structures like concurrent queues, hash tables, and arrays.
• These data structures handle concurrent access and ensure thread safety without requiring manual synchronization (see the sketch below).
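A minimal Java sketch of a concurrent data structure (the word list is invented for illustration): many parallel updates go into a ConcurrentHashMap, which handles synchronization internally so no explicit locking is needed.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

// Many threads update a ConcurrentHashMap without explicit locks.
public class ConcurrentCounting {
    public static void main(String[] args) {
        String[] words = {"map", "reduce", "map", "filter", "map", "reduce"};
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        IntStream.range(0, words.length)
                 .parallel()
                 .forEach(i -> counts.merge(words[i], 1, Integer::sum));  // atomic per key

        System.out.println(counts);   // {filter=1, reduce=2, map=3} (order may vary)
    }
}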
• Message Passing: Message passing allows communication and coordination between different parallel tasks or processes. Libraries and standards such as MPI (Message Passing Interface) provide constructs for explicit message passing, enabling communication between processes running on separate nodes in a distributed system (an in-process sketch of the idea follows).
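MPI itself targets processes on separate nodes; the Java sketch below is only an in-process analogue of the same idea (the queue capacity, message strings, and STOP marker are illustrative): a producer thread sends messages through a BlockingQueue channel and a consumer thread receives them, with no shared mutable state.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A producer sends messages, a consumer receives them; the queue is the channel.
public class MessagePassingSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) channel.put("message " + i);  // send
                channel.put("STOP");                                      // end-of-stream marker
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = channel.take()).equals("STOP")) {          // receive
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}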
• Parallel Patterns:
• Parallel patterns are higher-level constructs that encapsulate common parallel computation patterns.
• Examples include map-reduce, pipeline, divide-and-conquer, and data parallelism.
• Programming languages and libraries often provide abstractions and APIs for expressing these patterns concisely, allowing developers to focus on the algorithm's high-level structure while benefiting from parallel execution (see the sketch below).
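A minimal Java sketch of the map-reduce pattern (the input values are illustrative): the map phase squares each element and the reduce phase sums the results, with the parallel stream scheduling the work across threads.

import java.util.List;

// Map-reduce with a parallel stream: map squares each element, reduce sums them.
public class MapReduceSketch {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6, 7, 8);

        int sumOfSquares = numbers.parallelStream()
                                  .mapToInt(x -> x * x)   // map phase
                                  .sum();                 // reduce phase

        System.out.println(sumOfSquares);   // 204
    }
}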
• It's important to note that the availability and syntax of parallel language constructs may vary across programming languages.
• Some languages provide native support for parallelism, while others require the use of libraries or frameworks to enable parallel programming.
• Additionally, the effectiveness and efficiency of parallelism depend on the underlying hardware architecture and the nature of the problem being solved.
Optimizing Compilers for Parallelism
Fig.: Compilation phases in parallel code generation
• Optimizing compilers play a crucial role in extracting parallelism from sequential
programs and generating efficient code for parallel architectures.
• Here are some techniques and strategies employed by optimizing compilers to
enhance parallelism:

• Loop-level Parallelism:
• Compilers analyze loops to identify opportunities for parallel execution. They
examine loop dependencies, data accesses, and control flow to determine if
multiple loop iterations can be executed simultaneously.
• Techniques such as loop unrolling, loop fusion, and loop distribution can be
applied to expose parallelism.
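As a hand-written illustration of one such transformation, the Java sketch below (array sizes and the unroll factor are arbitrary) shows loop unrolling: the unrolled form performs the same work with less loop overhead and exposes independent operations that can be scheduled in parallel. An actual optimizing compiler would apply this transformation automatically.

// Hand-written illustration of loop unrolling, a transformation a compiler may apply.
public class UnrollingSketch {
    static double[] a = new double[1024], b = new double[1024], c = new double[1024];

    static void original() {
        for (int i = 0; i < 1024; i++) {
            c[i] = a[i] + b[i];
        }
    }

    static void unrolledByFour() {
        for (int i = 0; i < 1024; i += 4) {
            // The four statements are independent of each other, so they can be
            // scheduled in parallel (e.g. mapped to SIMD lanes or separate pipelines).
            c[i]     = a[i]     + b[i];
            c[i + 1] = a[i + 1] + b[i + 1];
            c[i + 2] = a[i + 2] + b[i + 2];
            c[i + 3] = a[i + 3] + b[i + 3];
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1024; i++) { a[i] = i; b[i] = 2 * i; }
        unrolledByFour();
        System.out.println(c[10]);   // 30.0
    }
}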
• Automatic Vectorization:
• Compilers use vectorization to exploit data-level parallelism by transforming scalar operations into SIMD (Single Instruction, Multiple Data) instructions.
• They analyze the data dependencies and memory access patterns within loops to generate vector instructions that can process multiple data elements in parallel.
• Parallelization of Dependencies: Compilers analyze the dependencies between different program statements or expressions and attempt to parallelize them. Techniques like speculative execution, dependency analysis, and software pipelining are employed to maximize parallel execution while ensuring correct program semantics (see the sketch below).
• Task-level Parallelism: Some compilers support the creation and scheduling of tasks as a form of parallelism. They identify regions of code that can be executed independently and generate task-based parallel code. Task parallelism can be useful in exploiting parallelism in irregular or dynamic programs.
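The Java sketch below illustrates the kind of dependence analysis described above (the arrays and values are invented for illustration): the first loop has no cross-iteration dependences and is safe to parallelize, while the second carries a dependence from one iteration to the next, so its iterations cannot simply run in parallel.

import java.util.Arrays;

// The first loop has independent iterations; the second has a loop-carried dependence.
public class DependenceSketch {
    public static void main(String[] args) {
        int n = 8;
        int[] a = new int[n], b = new int[n];

        // Independent iterations: a compiler may run these in parallel.
        for (int i = 0; i < n; i++) {
            a[i] = i * i;
        }

        // Loop-carried dependence: b[i] reads b[i-1], so iterations must run in order
        // (unless the compiler recognizes the prefix-sum pattern and uses a parallel scan).
        b[0] = a[0];
        for (int i = 1; i < n; i++) {
            b[i] = b[i - 1] + a[i];
        }

        System.out.println(Arrays.toString(b));   // [0, 1, 5, 14, 30, 55, 91, 140]
    }
}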
• Language Extensions and Directives: Some programming languages provide
extensions or directives that allow programmers to annotate their code with
parallelism hints. Compilers can then use these annotations to guide parallelization
and optimization. Examples include OpenMP, CUDA, and OpenACC.
• Compiler-Managed Runtime Systems: Some compilers work in conjunction with
runtime systems to dynamically manage parallel execution. They can dynamically
schedule tasks, balance the workload, and handle synchronization and
communication between parallel threads or processes.
• It's important to note that the effectiveness of parallelization heavily depends on the characteristics of the program, the target architecture, and the compiler's capabilities.
• Not all programs can be parallelized effectively, and manual intervention or restructuring of the code may be required in some cases to fully exploit parallelism.
