Unit 2.3 Parallel Programming Architecture
Introduction
⚫A collection of programming abstractions.
⚫Designed for multiprocessors, multicomputers,
or vector/SIMD computers.
⚫Five models:
✔Shared-Variable Model
✔Message-Passing Model
✔Data-Parallel Model
✔Object-oriented Model
✔Functional and Logic Models
Shared-Variable Model
✔ To limit the scope and access rights, the process
address space may be shared or restricted.
Mechanisms for IPC:
1. IPC using shared variables: processes A, B, and C
communicate through shared variables in a common memory.
2. IPC using message passing: processes D and E
exchange messages directly.
Following are some issues of the Shared-Variable Model:
▪ Shared-Variable communication:
▪ Critical Section(CS):
⚫ code segment accessing shared variables.
⚫ Requirements are –
⚫ Mutual exclusion
⚫ No deadlock in waiting
⚫ Non-preemption
⚫ Eventual entry
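The critical-section requirements above can be sketched with a simple lock-protected counter. This is an illustrative example (the variable names and thread count are chosen here, not taken from the text): the `with lock:` block is the critical section, and the lock enforces mutual exclusion on the shared variable.

```python
import threading

counter = 0                  # shared variable in the common address space
lock = threading.Lock()      # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:           # critical section: only one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates are lost under mutual exclusion
```

Without the lock, the read-modify-write of `counter` could interleave between threads and lose updates; with it, eventual entry is guaranteed because each holder releases the lock promptly.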
▪ Protected Access: depending on the granularity of protected
access, the following operating modes apply –
⚫ Multiprogramming
⚫ Multiprocessing – two types:
⚫ MIMD mode
⚫ MPMD mode
⚫ Multitasking
⚫ Multithreading
▪ Partitioning and Replication:
▪ Program partitioning is a technique for
decomposing a large program and data set into
many small pieces for parallel execution by multiple
processors.
▪ Program replication refers to duplicating the
same program code for parallel execution on
multiple processors over different data sets.
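The partitioning-and-replication idea above can be sketched with a process pool: the data set is partitioned into small pieces, and the same function (replicated code) runs on each piece in a separate worker process. The partition count and the sum-of-squares workload are illustrative choices, not from the text.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # replication: the same program code runs in every worker,
    # each over a different data partition
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    # partitioning: decompose the data set into 4 small pieces
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # sum of squares 0..999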
▪ Scheduling and Synchronization:
▪ Scheduling of divided program modules on parallel
processors
▪ Two types are :
▪ Static scheduling
▪ Dynamic scheduling
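Dynamic scheduling can be illustrated with a shared work queue: instead of fixing a module-to-processor assignment ahead of time (static scheduling), each worker pulls the next task at run time. The task list and worker count here are illustrative assumptions.

```python
import queue
import threading

tasks = queue.Queue()
for i in range(8):          # 8 program modules awaiting execution
    tasks.put(i)

results = []
results_lock = threading.Lock()

def worker():
    # dynamic scheduling: each processor grabs the next available
    # module at run time, so load balances automatically
    while True:
        try:
            i = tasks.get_nowait()
        except queue.Empty:
            return
        with results_lock:
            results.append(i * i)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Under static scheduling, the same assignment would instead be fixed at compile time (e.g. module `i` always runs on processor `i % 3`), which avoids run-time queue overhead but cannot adapt to uneven task costs.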
▪ Cache Coherence and Protection:
A memory system is coherent if the value returned by a
read instruction is always the value written by the latest
write instruction to the same memory location.
Message-Passing Model
⚫Synchronous Message Passing –
⚫ The sender process and the receiver process must
be synchronized in time and space
⚫Asynchronous Message Passing –
⚫It does not require message sending and
receiving to be synchronized in time and space
⚫Non-blocking communication can be achieved
⚫Distributing the computations:
⚫Computation is distributed at the subprogram
level rather than at the instruction or fine-grain
process level used in a tightly coupled
multiprocessor
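Asynchronous message passing can be sketched with a buffered mailbox between two threads: the sender deposits messages without waiting, while the receiver blocks only until a message arrives. The message contents and the `"DONE"` sentinel are illustrative assumptions; a bounded queue (`queue.Queue(maxsize=1)`) would approximate the rendezvous of synchronous message passing.

```python
import threading
import queue

mailbox = queue.Queue()   # unbounded buffer: asynchronous message passing

def sender():
    for msg in ["task-1", "task-2", "DONE"]:
        mailbox.put(msg)      # non-blocking send: sender never waits

received = []

def receiver():
    while True:
        msg = mailbox.get()   # blocks until a message is available
        if msg == "DONE":
            break
        received.append(msg)

s = threading.Thread(target=sender)
r = threading.Thread(target=receiver)
s.start(); r.start()
s.join(); r.join()
print(received)  # ['task-1', 'task-2']
```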
Data-Parallel Model
⚫It is easier to write and debug because
parallelism is explicitly handled by hardware
synchronization and flow control.
⚫It requires the use of pre-distributed data
sets
⚫Synchronization is done at compile time
rather than run time.
⚫The following are some issues handled:
⚫Data Parallelism-
⚫Array Language Extensions
⚫Compiler support
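The data-parallel style above (one operation applied to every element of pre-distributed data, as in array language extensions) can be sketched with an elementwise vector addition split across workers. The arrays, slice boundaries, and thread pool here are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

a = list(range(8))        # pre-distributed data set
b = list(range(8, 16))

def add_slice(bounds):
    lo, hi = bounds
    # every worker applies the SAME elementwise operation
    # to its own slice of the data (data parallelism)
    return [a[i] + b[i] for i in range(lo, hi)]

with ThreadPoolExecutor(max_workers=2) as ex:
    parts = ex.map(add_slice, [(0, 4), (4, 8)])
c = [x for part in parts for x in part]
print(c)  # [8, 10, 12, 14, 16, 18, 20, 22]
```

In an array language extension (e.g. Fortran 90), the same computation is written simply as `c = a + b`, and the compiler distributes the elementwise work.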
Object-Oriented Model
⚫ Concurrent OOP – 3 application demands
⚫ There is increased use of interacting processes by
individual users
⚫ Workstation networks have become a cost-effective
mechanism
⚫ Multiprocessor technology in several variants has
advanced to the point of providing supercomputing
power
⚫ An actor model
⚫ It is presented as one framework for COOP
⚫ They are self-contained, interactive, independent
components of a computing system.
⚫ Basic primitives are: create, send to, become
⚫ Parallelism in COOP:
⚫ 3 patterns – 1. pipeline concurrency 2. divide-and-
conquer concurrency 3. cooperative problem solving
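A minimal sketch of the actor framework, assuming a thread-per-actor design with a mailbox (the class layout, `None` poison pill, and behaviors below are illustrative, not from the text). It exercises all three primitives: create (constructing the actor), send to (`send`), and become (`become`, which replaces the behavior used for later messages).

```python
import threading
import queue

class Actor:
    """Minimal actor: a mailbox plus a replaceable behavior."""
    def __init__(self, behavior):                  # create primitive
        self.mailbox = queue.Queue()
        self.behavior = behavior
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def send(self, msg):                           # send-to primitive
        self.mailbox.put(msg)                      # asynchronous: never blocks

    def become(self, behavior):                    # become primitive
        self.behavior = behavior                   # applies to future messages

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:                        # poison pill stops the actor
                return
            self.behavior(self, msg)

seen = []

def later(actor, msg):
    seen.append(("later", msg))

def first(actor, msg):
    seen.append(("first", msg))
    actor.become(later)        # change behavior after the first message

a = Actor(first)
a.send("x")
a.send("y")
a.send(None)
a.thread.join()
print(seen)  # [('first', 'x'), ('later', 'y')]
```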
Functional and Logic Models
⚫Two types of language-oriented programming
models are
⚫Functional programming model
⚫ It emphasizes functionality of a program
⚫ No concepts of storage, assignment and branching
⚫ All single-assignment and dataflow languages are
functional in nature
⚫ Some examples are Lisp, SISAL, and Strand 88
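The functional style above can be sketched in Python's functional subset: the program is a composition of side-effect-free functions, with no mutable storage or explicit branching on state (each name is bound exactly once, in single-assignment fashion). The sum-of-squares workload is an illustrative choice:

```python
from functools import reduce

# functional style: pure functions composed together, no assignment
# to mutable storage and no explicit control-flow branching
square = lambda x: x * x
total = reduce(lambda acc, x: acc + x, map(square, range(1, 5)), 0)
print(total)  # 1 + 4 + 9 + 16 = 30
```

Because the functions have no side effects, the applications of `square` carry no ordering constraints and could be evaluated in parallel, which is why dataflow and single-assignment languages fit this model naturally.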
⚫Logic programming model
⚫ Based on logic, the logic programming model is
suitable for dealing with large databases.
⚫ Some examples are:
⚫Concurrent Prolog
⚫Parlog