Bahria University Lahore Campus: Department of Computer Sciences
Instructor Name: Mr. Rohail Shehzad
Designation: Lecturer
Status: □ Visiting
Subject Expert/Cluster Head Name: ______________________
Course Aims/
Course Objectives: Despite the extraordinary advances in computing technology, we continue to need ever greater computing power to address important fundamental scientific questions. Because individual processors have essentially reached their performance limits, the need for greater computing power can only be met through the use of parallel computers. This course is intended for students who are interested in learning how to take advantage of parallel and distributed computing, with a focus on writing parallel code for processor-intensive applications to be run on clusters, the grid, the cloud, or shared infrastructure. The objectives of this course are to give the students an understanding of how they can use parallel computing resources in their research and to enable them to write parallel code for their high-performance computing applications. Extensive use of pertinent and practical examples from scientific computing will be made using popular parallel programming paradigms, including POSIX threads, OpenMP, and MPI. The programming languages used will be C, C++, or C#. Both the shared- and distributed-memory paradigms of parallel computing will be covered via the OpenMP and MPI libraries.
Course Outcomes:
- Show a clear understanding of the basic concepts of parallel computation and parallel programming paradigms
- Demonstrate the performance analysis of parallel programs
- Use POSIX threads, OpenMP, and MPI to develop parallel programs
- Intelligently compare and contrast the use of shared infrastructure, cloud, cluster, and grid
- Study, analyze, and design algorithms for shared- and distributed-memory computer architectures
- Demonstrate the applications of parallel programming in scientific computations
Course Description/Catalogue: Parallel programming paradigms and algorithms for shared- and distributed-memory computer architectures; performance analysis; use of shared infrastructure; OpenMP; MPI library; pthreads; applications in scientific computing.
Lecture Plan (16 Weeks): Week # and Date | Lecture/Contact Hours | Topic to be covered | Learning outcomes | Reference Text
Week 3 | 2 Hours | Software Aspects of Parallel Computations + Quiz # 1 | Efficiency characteristics of parallel computation: speedup, efficiency, scalability; scientific computations; estimating the maximum possible parallelization; computational load balancing; Amdahl's law. | Ch # 2 and Lecture Handouts
Week 11 | 2 Hours | Introduction to Parallel Programming with MPI | Introduction to MPI specifications and MPI libraries: Hello World example, running an MPI program, communicators, the Trapezoidal Rule in MPI, collective communication, MPI derived data types. | Ch # 3 and Lecture Handouts
Week 14 | 2 Hours | MPI Environment Management and Dealing with I/O; thread-level support | Environment management routine API: MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Finalize; function for reading user input. | Ch # 3 and Lecture Handouts
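The environment management routines in this row fit together as in the following sketch. The MPI_Bcast call and the prompt text are illustrative additions for the "reading user input" item; this requires an MPI implementation (build with mpicc, run with mpirun).

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI environment */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */

    int n = 0;
    if (rank == 0) {
        /* Only rank 0 reads user input: under typical launchers
         * stdin is connected only to the first process. */
        printf("Enter n: ");
        fflush(stdout);
        if (scanf("%d", &n) != 1) n = 0;
    }
    /* Share the value read on rank 0 with every other process. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMm_WORLD == MPI_COMM_WORLD ? MPI_COMM_WORLD : MPI_COMM_WORLD);

    printf("Hello from rank %d of %d, n = %d\n", rank, size, n);

    MPI_Finalize();                        /* shut down the MPI environment */
    return 0;
}
```

The pattern of reading on rank 0 and broadcasting is the usual way to deal with I/O in MPI programs, since only one process can safely read from the terminal.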