
Syllabus

UCS645 is a 3-credit course that introduces the fundamentals of parallel and distributed computing, including parallel architectures, programming models, and algorithms. The course covers topics such as parallelism concepts, parallel architectures like GPUs, parallel decomposition techniques, distributed computing models, programming frameworks like MPI and CUDA, and the design of parallel algorithms. Students learn to develop basic parallel applications using shared memory and message passing paradigms. The course aims to help students apply parallel concepts, analyze performance issues, and develop skills in parallel programming.

Uploaded by Rohit Singla

UCS645: PARALLEL & DISTRIBUTED COMPUTING

L T P Cr
2 0 2 3.0

Course Objectives: To introduce the fundamentals of parallel and distributed programming and application development in different parallel programming environments.

Parallelism Fundamentals: Scope and issues of parallel and distributed computing, Parallelism, Goals of parallelism, Parallelism and concurrency, Multiple simultaneous computations.

Parallel Architecture: Implicit Parallelism, Array Processor, Vector Processor, Dichotomy of Parallel Computing Platforms (Flynn's Taxonomy, UMA, NUMA, Cache Coherence), Feng's Classification, Handler's Classification, Limitations of Memory System Performance, Interconnection Networks, Communication Costs in Parallel Machines, Routing Mechanisms for Interconnection Networks, Impact of Process-Processor Mapping and Mapping Techniques, GPU.

Parallel Decomposition and Parallel Performance: Principles of Parallel Algorithm Design: Decomposition Techniques, Characteristics of Tasks and Interactions, Mapping Techniques for Load Balancing, Critical Paths, Sources of Overhead in Parallel Programs, Performance metrics for parallel algorithm implementations, Performance measurement, The Effect of Granularity on Performance.

Distributed Computing: Introduction: Definition, Relation to parallel systems, Synchronous vs. asynchronous execution, Design issues and challenges, A Model of Distributed Computations, A Model of Distributed Executions, Models of communication networks, Global state of a distributed system, Models of process communication.

Programming Message Passing and Shared Address Space Platforms: Send and Receive
Operations, MPI: the Message Passing Interface, Topologies and Embedding, Overlapping
Communication with Computation, Groups and Communicators.
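As a point of reference for the send and receive operations above, here is a minimal MPI sketch in C. It requires an MPI installation (compile with mpicc, run with e.g. mpirun -np 2); it is an illustration of blocking point-to-point communication, not a prescribed lab exercise:

```c
/* Rank 0 sends one integer to rank 1, which receives and prints it. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        /* blocking send: buffer, count, datatype, dest, tag, communicator */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        /* blocking receive with matching source and tag */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

The same pattern generalizes to the communicators and groups listed above: MPI_COMM_WORLD is simply the default communicator containing every rank.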

CUDA programming model: Overview of CUDA, Isolating data to be used by parallelized code, API function to allocate memory on the parallel computing device, Launching the execution of a kernel function by parallel threads, Transferring data back to the host processor with an API function call.
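The workflow described above (allocate device memory, launch a kernel, copy results back) can be sketched as follows. This requires nvcc and a CUDA-capable GPU; the kernel and scaling factor are illustrative choices, not part of the syllabus:

```cuda
#include <stdio.h>

/* Kernel executed by many parallel threads: each scales one element. */
__global__ void scale(float *d, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n) d[i] *= k;
}

int main(void) {
    const int n = 256;
    float h[256];
    for (int i = 0; i < n; i++) h[i] = (float)i;

    float *d;
    cudaMalloc(&d, n * sizeof(float));                  /* device allocation */
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 127) / 128, 128>>>(d, n, 2.0f);        /* kernel launch */

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);  /* back to host */
    cudaFree(d);

    printf("h[10] = %.1f\n", h[10]);
    return 0;
}
```

The triple-angle-bracket launch configuration gives the grid and block dimensions; threads, blocks, and grids are covered in the self-learning content below.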

Parallel Algorithms Design, Analysis, and Programming: Parallel Algorithms, Parallel Graph Algorithms, Parallel Matrix Computations, Critical paths, Work and span and their relation to Amdahl's law, Speed-up and scalability, Naturally parallel algorithms, Parallel algorithmic patterns like divide and conquer and map and reduce, Specific algorithms like parallel Merge Sort.
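Parallel merge sort, named above as a specific algorithm, can be sketched with the divide-and-conquer pattern using POSIX threads: recursive calls near the root of the recursion run in separate threads, while deeper calls fall back to sequential recursion (the depth cutoff of 2 is an illustrative choice; link with -pthread):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int *a; int lo, hi, depth; } job_t;

/* Merge the sorted halves a[lo..mid) and a[mid..hi) in place. */
static void merge(int *a, int lo, int mid, int hi) {
    int n = hi - lo, i = lo, j = mid, k = 0;
    int *tmp = malloc(n * sizeof(int));
    while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, n * sizeof(int));
    free(tmp);
}

static void *msort(void *arg) {
    job_t *j = arg;
    if (j->hi - j->lo < 2) return NULL;
    int mid = (j->lo + j->hi) / 2;
    job_t left  = {j->a, j->lo, mid,   j->depth + 1};
    job_t right = {j->a, mid,   j->hi, j->depth + 1};
    if (j->depth < 2) {                 /* spawn a thread near the root only */
        pthread_t t;
        pthread_create(&t, NULL, msort, &left);
        msort(&right);                  /* current thread sorts the other half */
        pthread_join(t, NULL);
    } else {                            /* deeper levels: sequential recursion */
        msort(&left);
        msort(&right);
    }
    merge(j->a, j->lo, mid, j->hi);
    return NULL;
}

void parallel_merge_sort(int *a, int n) {
    job_t root = {a, 0, n, 0};
    msort(&root);
}
```

Limiting thread creation to the top levels keeps the number of threads bounded, which is one concrete instance of the granularity trade-off discussed earlier.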
Self-Learning Content:
Programming Message Passing and Shared Address Space Platforms: Thread Basics,
Synchronization Primitives in Pthreads, Controlling Thread and Synchronization Attributes,
Composite Synchronization Constructs, Tips for Designing Asynchronous Programs.
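The Pthreads synchronization primitives listed above can be illustrated with a short sketch, assuming a single shared counter protected by a mutex (the thread and iteration counts are illustrative; link with -pthread):

```c
#include <pthread.h>

#define NTHREADS 4
#define NITERS   100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker increments the shared counter NITERS times;
 * the mutex makes each increment atomic. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

long run_counter_demo(void) {
    pthread_t t[NTHREADS];
    counter = 0;
    for (int i = 0; i < NTHREADS; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
    return counter;  /* NTHREADS * NITERS with the mutex; racy without it */
}
```

Removing the lock/unlock pair typically loses updates to the data race, which is exactly the failure the synchronization primitives exist to prevent.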

CUDA programming model: API function to transfer data to parallel computing device,
Concepts of Threads, Blocks, Grids, developing kernel function that will be executed by
threads in the parallelized part.

Parallel Algorithms Design, Analysis, and Programming: Parallel graph algorithms, Parallel shortest path, Parallel spanning tree, Producer-consumer and pipelined algorithms.

Laboratory work:
To implement parallel programming using CUDA with emphasis on developing applications
for processors with many computation cores, mapping computations to parallel hardware,
efficient data structures, paradigms for efficient parallel algorithms.

Course Learning Outcomes (CLOs) / Course Objectives (COs):
On completion of this course, the students will be able to:
1. Apply the fundamentals of parallel and distributed computing including parallel
architectures and paradigms.
2. Apply parallel algorithms and key technologies.
3. Develop and execute basic parallel applications using basic programming models and
tools.
4. Apply shared address space and message passing paradigms in programming platforms.
5. Analyze the performance issues in parallel computing and trade-offs.

Text Books:
1. C Lin, L Snyder. Principles of Parallel Programming. USA: Addison-Wesley (2008).
2. A Grama, A Gupta, G Karypis, V Kumar. Introduction to Parallel Computing. Addison-Wesley (2003).

Reference Books:
1. B Gaster, L Howes, D Kaeli, P Mistry, and D Schaa. Heterogeneous Computing with OpenCL. Morgan Kaufmann and Elsevier (2011).
2. T Mattson, B Sanders, B Massingill. Patterns for Parallel Programming. Addison-Wesley (2004).
3. M J Quinn. Parallel Programming in C with MPI and OpenMP. McGraw-Hill (2004).
