Syllabus
L: 2  T: 0  P: 2  Cr: 3.0
Programming Message Passing and Shared Address Space Platforms: Send and Receive
Operations, MPI: the Message Passing Interface, Topologies and Embedding, Overlapping
Communication with Computation, Groups and Communicators.
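For illustration only (not part of the prescribed syllabus text), a minimal C/MPI sketch of the blocking send and receive operations listed above; the message value, tag, and ranks are arbitrary assumptions, and the program is expected to be launched with at least two processes (e.g. mpirun -np 2).

/* Illustrative sketch: a blocking send/receive pair between two processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank within the default communicator */

    if (rank == 0) {
        value = 42;                          /* arbitrary data to send */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}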
CUDA programming model: API functions to transfer data to the parallel computing device;
Concepts of Threads, Blocks, and Grids; developing kernel functions that are executed by
threads in the parallelized part of an application.
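For illustration only, a minimal CUDA C sketch of the ideas listed above: runtime API calls that transfer data to and from the device, and a kernel function executed by a grid of thread blocks. The array size (1024) and block size (256) are arbitrary assumptions.

/* Illustrative sketch: copy data to the device, launch a kernel, copy results back. */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   /* global thread index */
    if (i < n)
        data[i] *= factor;                           /* work done by each thread */
}

int main(void) {
    const int n = 1024;
    float h[1024];
    float *d;
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    cudaMalloc((void **)&d, n * sizeof(float));                  /* device allocation   */
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice); /* host -> device copy */

    int threadsPerBlock = 256;
    int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocksPerGrid, threadsPerBlock>>>(d, 2.0f, n);       /* kernel launch */

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost); /* device -> host copy */
    cudaFree(d);
    printf("h[10] = %f\n", h[10]);
    return 0;
}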
Laboratory work:
To implement parallel programs using CUDA, with emphasis on developing applications
for processors with many computation cores, mapping computations to parallel hardware,
efficient data structures, and paradigms for efficient parallel algorithms.
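For illustration only, a lab-style CUDA C sketch (not an official exercise) of mapping a computation onto parallel hardware: element-wise matrix addition assigned to a 2D grid of 2D thread blocks, one thread per element. The 16x16 block shape and the helper name launch_add are assumptions, and the pointers are assumed to already reside in device memory.

/* Illustrative sketch: one thread per matrix element, row-major layout. */
#include <cuda_runtime.h>

__global__ void add_matrices(const float *a, const float *b, float *c,
                             int rows, int cols) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < rows && col < cols) {
        int idx = row * cols + col;        /* row-major index */
        c[idx] = a[idx] + b[idx];
    }
}

void launch_add(const float *d_a, const float *d_b, float *d_c,
                int rows, int cols) {
    dim3 block(16, 16);                    /* threads per block (x, y) */
    dim3 grid((cols + block.x - 1) / block.x,
              (rows + block.y - 1) / block.y);
    add_matrices<<<grid, block>>>(d_a, d_b, d_c, rows, cols);
}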
Text Books:
1. C. Lin, L. Snyder. Principles of Parallel Programming. USA: Addison-Wesley (2008).
2. A. Grama, A. Gupta, G. Karypis, V. Kumar. Introduction to Parallel Computing. Addison-Wesley (2003).
Reference Books:
1. B. Gaster, L. Howes, D. Kaeli, P. Mistry, D. Schaa. Heterogeneous Computing with OpenCL. Morgan Kaufmann and Elsevier (2011).
2. T. Mattson, B. Sanders, B. Massingill. Patterns for Parallel Programming. Addison-Wesley (2004).
3. M. J. Quinn. Parallel Programming in C with MPI and OpenMP. McGraw-Hill (2004).