ACA Unit 8 - 1

This document discusses parallel programming models and languages. It covers five parallel programming models: shared-variable, message-passing, data parallel, object-oriented, and functional/logic. It also discusses issues with the shared-variable model and synchronous/asynchronous message passing approaches. The document outlines features of parallel languages including optimization, availability, synchronization/communication, control of parallelism, data parallelism, and process management. It concludes by discussing parallel language constructs and the role of optimizing compilers in parallel code generation.


Parallel Models, Languages, and Compilers
Parallel programming model
• A programming model is a collection of program abstractions that provides the programmer a simplified and transparent view of the computer hardware and software.
• Parallel programming models are designed for parallel machines such as multiprocessors, multicomputers, and vector computers.
• Fundamental issues in parallel programming include process creation, suspension, reactivation, and termination.
Five models are designed to exploit parallelism:
• Shared-variable model.
• Message-passing model.
• Data parallel model.
• Object-oriented model.
• Functional and logic model.
Shared-variable model
• In the shared-variable model, parallelism depends on how interprocess communication (IPC) is implemented.
• IPC is implemented in parallel programming in two ways:
• IPC using shared variables.
• IPC using message passing.
IPC with shared variables
IPC with message passing
Issues with the shared-variable model
• Critical sections (see the sketch after this list).
• Memory consistency.
• Atomicity of memory operations.
• Fast synchronization.
• Shared data structures.
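The critical-section and atomicity issues are typically handled with mutual exclusion. Below is a minimal sketch in C using POSIX threads (an illustrative choice; the slides do not prescribe any library), where a mutex protects updates to a shared counter:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* enter critical section */
        counter++;                            /* update is atomic w.r.t. other threads */
        pthread_mutex_unlock(&lock);          /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* always 200000 with the mutex */
    return 0;
}
```

Without the mutex, the two threads' read-modify-write sequences would interleave and the final count would be unpredictable; this is exactly the critical-section problem listed above.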
Message-passing model
• Two processes communicate with each other by passing messages through a network.
• The delay caused by message passing is much longer than with shared variables in the same memory.
• Two message-passing approaches are introduced here.
Synchronous message passing
• It synchronizes the sender and receiver processes in time and space, just like a telephone call (illustrated below).
• No shared memory.
• No need for mutual exclusion.
• No buffers are used in the communication channel.
• It can be blocked by the channel being busy.
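As an illustration (MPI is an assumption here; the slides name no specific library), MPI's synchronous send MPI_Ssend completes only once the receiver has started the matching receive, giving the rendezvous behaviour described above:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Synchronous send: blocks until rank 1 has begun the matching receive. */
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```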
Asynchronous message passing
• It does not need to synchronize the sender and receiver in time or space (illustrated below).
• Non-blocking communication can be achieved.
• Buffers are used to hold messages along the path of the connecting channel.
• Message-passing programming is gradually changing as the virtual memories of all nodes are combined into a shared address space.
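A minimal non-blocking counterpart (again assuming MPI purely for illustration): MPI_Isend returns immediately while the message is buffered in flight, so the sender can keep computing until it must confirm completion:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 7;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Non-blocking send: returns at once; the library buffers the message. */
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... sender is free to do other work here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);    /* complete the send before reusing 'value' */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```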
Data parallel model
• It requires the use of pre-distributed data sets. Interconnected data structures are also needed to facilitate data exchange operations.
• It emphasizes local computation and data routing operations such as permutation, replication, reduction, and parallel prefix (illustrated below).
• It can be implemented on either SIMD computers or SPMD multicomputers, depending on the grain size of the program.
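For example, reduction and parallel prefix over pre-distributed data map directly onto collective operations. The SPMD sketch below (MPI is again an illustrative assumption) gives each node one local value and computes both:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, local, sum, prefix;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank + 1;                         /* pre-distributed data: one value per node */

    /* Reduction: every rank obtains the global sum. */
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Parallel prefix: rank i obtains the sum of the values on ranks 0..i. */
    MPI_Scan(&local, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: sum=%d prefix=%d\n", rank, sum, prefix);
    MPI_Finalize();
    return 0;
}
```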
Object-oriented model
• Objects are created and manipulated dynamically.
• Processing is performed using objects.
• Concurrent programming models are built up from low-level objects such as processes, queues, and semaphores.
• Concurrent OOP (C-OOP) achieves parallelism using three methods, listed below.
C-OOP parallelism methods
• Pipeline concurrency.
• Divide-and-conquer concurrency (sketched below).
• Cooperative problem solving.
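A minimal sketch of divide-and-conquer concurrency in C with POSIX threads (an illustrative construction, not prescribed by the slides): the array is split in half, each half is summed in its own thread, and the partial results are combined:

```c
#include <pthread.h>
#include <stdio.h>

struct task { const int *data; int len; long sum; };

static void *sum_half(void *arg)
{
    struct task *t = arg;
    t->sum = 0;
    for (int i = 0; i < t->len; i++)
        t->sum += t->data[i];                 /* conquer: sum one half locally */
    return NULL;
}

int main(void)
{
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    struct task lo = { data,     4, 0 };      /* divide: first half  */
    struct task hi = { data + 4, 4, 0 };      /* divide: second half */
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_half, &lo);
    pthread_create(&t2, NULL, sum_half, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %ld\n", lo.sum + hi.sum); /* combine partial results */
    return 0;
}
```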
Functional and logic model
• Two language-oriented programming models for parallel processing have been proposed.
• Functional programming models: languages such as LISP, SISAL, and Strand 88.
• Logic programming model: languages such as Prolog.
• Based on predicate logic, logic programming is suitable for solving large database queries.
Parallel languages
• Language features for parallel programming fall into six categories according to functionality.
Optimization features
• Used for program restructuring and compilation directives.
• They convert sequentially coded programs into parallel code.
• Automated parallelization.
• Semi-automated parallelization via programmer-supplied directives (sketched below).
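As a concrete example of directive-driven, semi-automated parallelization (OpenMP is an illustrative modern choice, not named in the slides), the programmer annotates a sequential loop and the compiler restructures it into parallel code:

```c
#include <stdio.h>

int main(void)
{
    double a[1000];

    /* The directive asks the compiler to parallelize this loop;
       without it, the same code compiles as ordinary sequential C. */
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++)
        a[i] = i * 2.0;

    printf("a[999] = %f\n", a[999]);
    return 0;
}
```

Built with OpenMP enabled (e.g., cc -fopenmp), the loop iterations are distributed across threads; built without it, the directive is ignored and the program still runs correctly, which is the point of this style of semi-automated parallelization.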
Availability features
• Used to enhance user-friendliness.
• They make the language portable to a large class of parallel computers.
• Scalability.
• Compatibility.
• Portability.
Synchronization/communication features
• Shared variables for IPC.
• Single-assignment languages.
• Send/receive for message passing.
• Logically shared memory, such as the tuple space in Linda.
• Remote procedure call.
• Dataflow languages such as Id.
Control of parallelism
• Coarse, medium, or fine grain.
• Explicit versus implicit parallelism.
• Loop parallelism in iterations.
• Shared task queues (sketched below).
• Divide-and-conquer paradigm.
• Shared abstract data types.
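A minimal sketch of a shared task queue in C with POSIX threads (an illustrative construction; the slides name no specific mechanism): workers repeatedly claim the next task index under a mutex until the queue is drained:

```c
#include <pthread.h>
#include <stdio.h>

#define NTASKS 16

static int next_task = 0;                     /* shared queue: just an index here */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int task = next_task < NTASKS ? next_task++ : -1;  /* claim next task */
        pthread_mutex_unlock(&qlock);
        if (task < 0)
            break;                            /* queue drained */
        printf("worker %ld runs task %d\n", id, task);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

Because each worker pulls the next task whenever it becomes free, the queue also provides a simple form of dynamic load balancing across threads.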
Data parallelism features
• They specify how data are accessed and distributed.
• Runtime automatic decomposition.
• Mapping specification.
• Virtual processor support.
• Direct access to shared data.
Process management features
• These features are needed to support the efficient creation of parallel processes.
• Implementation of multithreading or multitasking.
• Dynamic process creation at runtime.
• Automatic load balancing.
• Lightweight processes.
Parallel language constructs
• Special language constructs and data array expressions exploit parallelism in programs.
• The first is FORTRAN 90 array notation, e.g. A(1:N) = B(1:N) + C(1:N).
• Parallel flow control is achieved using Doall- and Doacross-style loop constructs in parallel Fortran dialects.
• Similarly, the FORK and JOIN method is also used (sketched below).
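The FORK and JOIN pattern maps directly onto thread creation and joining. A minimal C sketch with POSIX threads (an illustrative analogue, since the slides describe the constructs abstractly):

```c
#include <pthread.h>
#include <stdio.h>

static void *child(void *arg)
{
    (void)arg;
    printf("child runs concurrently with parent\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, child, NULL);    /* FORK: spawn a concurrent activity */
    printf("parent continues its own work\n");
    pthread_join(t, NULL);                    /* JOIN: wait for the child to finish */
    printf("both paths have completed\n");
    return 0;
}
```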
Optimizing compilers
• The role of the compiler is to remove the burden of program optimization and code generation from the programmer.
• A parallelizing compiler consists of three major phases:
• Flow analysis.
• Optimization.
• Code generation.
Compilation phases in parallel code generation
