PDC Lecture 14: MPI, Sockets and Memory Models

Lahore Garrison University
Parallel and Distributed Computing
Session: Spring 2023

Lecture 14
Instructor: Muhammad Arsalan Raza
Name: Muhammad Arsalan Raza
Email: [email protected]
Phone: 0333-7843180 (primary), 0321-4069289
Visiting hours: Monday-Friday 10:00 AM - 1:00 PM; Thursday 11:00 AM - 12:00 PM
Preamble

- Data parallel model
- Task graph model
- Work pool model
- Master-slave model
- Producer-consumer (pipeline) model
- Hybrid model
Lesson Plan

- Memory Models
- Shared Memory & Message Passing
- Message Passing Interface (MPI)
- Making Sockets Easier to Work With
- MPI Primitives and Meanings
Memory Models

Process interaction refers to the mechanisms by which parallel processes communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit. This is also known as the process-centric approach.

- Shared Memory
- Distributed Memory
- Implicit Interaction
Shared Memory

Shared memory is an efficient means of passing data between processes.

- Parallel processes share a global address space that they read from and write to asynchronously.
- Asynchronous concurrent access can lead to race conditions; mechanisms such as mutex locks, counting semaphores, and monitors are used to avoid them.
- Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries are designed to exploit, such as Cilk, OpenMP, and Threading Building Blocks.
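A minimal sketch (not from the slides) of the idea above: two threads share one global address space, and a mutex lock serializes their updates so the asynchronous accesses cannot race.

```python
# Illustrative sketch: a mutex lock protects a shared counter so that
# asynchronous concurrent increments cannot produce a race condition.
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:          # mutual exclusion around the critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: no updates are lost because of the lock
```

Without the lock, the read-modify-write on `counter` could interleave between threads, which is exactly the race condition the slide warns about.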
Shared Memory Representation

(diagram not reproduced)
Distributed Memory

- A distributed memory system consists of multiple computers, known as nodes, interconnected by a message-passing network.
- Each node acts as an autonomous computer with a processor, a local memory, and sometimes I/O devices.
- All local memories are private and are accessible only to the local processors; such machines are therefore called no-remote-memory-access (NORMA) machines.
Distributed Memory Representation

(diagram not reproduced)
Implicit Interaction

- In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it.
- Examples of implicit parallelism are domain-specific languages, where the concurrency within high-level operations is prescribed, and functional programming languages, where the absence of side effects allows non-dependent functions to be executed in parallel.
- However, this kind of parallelism is difficult to manage, and functional languages such as Concurrent Haskell and Concurrent ML provide features to manage parallelism explicitly and correctly.
- Example: if a particular problem involves performing the same operation on a group of numbers, a language that provides implicit parallelism might allow the programmer to write the instruction as a single operation over the whole group (the original slide's code is not reproduced).
The Message Passing Interface

- Connection-oriented communication patterns can be built using sockets.
- A socket is one endpoint of a two-way communication link between two programs running on a network. A socket is bound to a port number so that the TCP layer can identify the application to which data is destined.
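A minimal sketch (not from the slides) of such a connection-oriented exchange: a server socket binds to a port on the loopback interface, and a client connects, sends a message, and reads the reply over the same two-way link.

```python
# Illustrative sketch: a TCP server bound to a port (so the TCP layer
# can route incoming data to it) and a client on the other endpoint.
import socket
import threading

def server(listener: socket.socket) -> None:
    conn, _addr = listener.accept()       # wait for one connection
    with conn:
        data = conn.recv(1024)            # read the client's message
        conn.sendall(data.upper())        # reply on the same socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
t.join()
listener.close()
print(reply)  # b'HELLO'
```

Even this tiny example shows why the next slide calls sockets "rather low level": addresses, ports, buffer sizes, and connection setup are all the programmer's problem.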
Making Sockets Easier to Work With

- Sockets are rather low level, and programming mistakes are easily made. However, the way they are used is often the same (such as in a client-server setting).
- ZeroMQ provides a higher level of expression by pairing sockets: one for sending messages at process P and a corresponding one at process Q for receiving messages.
- All communication is asynchronous.
- Three messaging patterns:
  - Request-reply
  - Publish-subscribe
  - Pipeline
MPI Primitives and Meanings

Some of the most intuitive message-passing primitives of MPI (the slide's table is reconstructed here after Tanenbaum & van Steen, Ch. 4):

Primitive      Meaning
MPI_bsend      Append outgoing message to a local send buffer
MPI_send       Send a message and wait until copied to a local or remote buffer
MPI_ssend      Send a message and wait until transmission starts
MPI_sendrecv   Send a message and wait for a reply
MPI_isend      Pass reference to an outgoing message, and continue
MPI_issend     Pass reference to an outgoing message, and wait until receipt starts
MPI_recv       Receive a message; block if there is none
MPI_irecv      Check whether there is an incoming message, but do not block

Programming Interfaces: MPI
MPI (Message Passing Interface)

- MPI is the standard message-passing programming interface:
  - MPI 1.0 in 1994
  - MPI 2.0 in 1997
  - Library interface (Fortran, C, C++)
- It includes:
  - point-to-point communication
  - collective communication
  - barrier synchronization
  - one-sided communication (MPI 2.0)
  - parallel I/O (MPI 2.0)
  - process creation (MPI 2.0)

MPI Example

(example not reproduced)

MPI C-Program Example

(program listing not reproduced)
Lesson Review

- Memory Models
- Message Passing Interface (MPI)
- Making Sockets Easier to Work With
- MPI Primitives and Meanings
- MPI Program
- Assignment # 2
Next Lesson Preview

Shared Memory: OpenMP

References

To cover this topic, different reference materials have been used for consultation:

- Chapter 4: Communication. Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. van Steen, Prentice Hall, 2nd Edition, 2007.
- Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet, K. Hwang, J. Dongarra and G. C. Fox, Elsevier, 1st Edition.
- URL: https://docs.oracle.com/javase/tutorial/networking/sockets/
- Google Search Engine