

Bahria University Lahore Campus

Department of Computer Sciences


Course Code/Number: CSC-342
Course Title/Name: Parallel Programming
Credit Hours/Contact Hours: 2
Degree Program: Bachelor of Information Technology (BSIT)
Prerequisites or Co-requisites: Introduction to Programming; Computer Architecture; Operating Systems; Discrete Mathematics; Numerical Methods

Assessment Methods and Weightage (%):
    Quizzes                                 10
    Assignments/Projects/Presentations      20
    Mid-Term Examination                    20
    Final Examination                       50
    Total                                  100

Textbook (or Laboratory Manual for Laboratory Courses): An Introduction to Parallel Programming by Peter Pacheco.
Reference Material (with Edition, ISBN#):
    Documentation: Pthreads, MPI and OpenMP
    Designing and Building Parallel Programs by Ian Foster, Addison Wesley.
Web Resources/URL (if any):
    www.openmp.org
    https://www.open-mpi.org
Instructor Name/Subject Expert Name:
    Instructor Name: Mr. Rohail Shehzad    Designation: Lecturer    Status: □ Visiting
    Cluster Head Name: ______________________
Course Aims
Course Objectives: Despite the extraordinary advances in computing technology, we continue to need ever-greater computing power to address important fundamental scientific questions. Because individual processors have essentially reached their performance limits, the need for greater computing power can only be met through the use of parallel computers. This course is intended for students who are interested in learning how to take advantage of parallel and distributed computing, with a focus on writing parallel code for processor-intensive applications to be run on clusters, the grid, the cloud, or shared infrastructure. The objectives of this course are to give students an understanding of how they can use parallel computing resources in their research and to enable them to write parallel code for their high-performance computing applications. Extensive use will be made of pertinent, practical examples from scientific computing, using popular parallel programming paradigms including POSIX threads, OpenMP, and MPI. The programming languages used will be C, C++, or C#. Both the shared-memory and distributed-memory paradigms of parallel computing will be covered via the OpenMP and MPI libraries.

Course Outcomes:
- Show a clear understanding of the basic concepts of parallel computation and parallel programming paradigms
- Demonstrate the performance analysis of parallel programs
- Use POSIX threads, OpenMP and MPI to develop parallel programs
- Intelligently compare and contrast the use of shared infrastructure, cloud, cluster and grid computing
- Study, analyze and design algorithms for shared and distributed memory computer architectures
- Demonstrate the applications of parallel programming in scientific computations
Course Description/Catalogue: Parallel programming paradigms and algorithms for shared and distributed memory computer architectures; performance analysis; use of shared infrastructure; OpenMP; MPI library; Pthreads; applications in scientific computing.
Lecture Plan (16 Weeks)
Each week lists the topics to be covered, the learning outcomes, and the reference text.

Week 1 (2 Contact Hours)
Topics: Detailed course description; course outline; books, course plan, evaluation; basic concepts of concurrent, parallel and distributed computations
Learning outcomes: Understanding of the course objectives and of the course policies regarding assessments, academic honesty and class decorum; students will learn the concepts of concurrent, parallel and distributed computation and their performance.
Reference Text: Ch # 1 and Lecture Handouts

Week 2 (2 Contact Hours)
Topics: Basic concepts of parallel and distributed hardware architectures; hardware and software paradigms; shared infrastructure + Assignment # 1
Learning outcomes: Students will be able to differentiate parallel and distributed architectures (SIMD, MIMD); overview of some parallel systems; multiprocessors and multicomputers; network topologies; computer system classification; clusters.
Reference Text: Ch # 2 and Lecture Handouts

Week 3 (2 Contact Hours)
Topics: Software aspects of parallel computations + Quiz # 1
Learning outcomes: Efficiency characteristics of parallel computation: speedup, efficiency, scalability; scientific computations; estimating the maximum possible parallelization; computational load balancing; Amdahl's law.
Reference Text: Ch # 2 and Lecture Handouts
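As a worked illustration of Amdahl's law for the Week 3 material (a sketch using the standard formulation; the exact treatment in the handouts may differ): if a fraction f of a program's running time can be parallelized and the remaining 1 - f is serial, the speedup on p processors is bounded by

    S(p) = 1 / ((1 - f) + f / p)

For example, with f = 0.9 and p = 8 the speedup is at most 1 / (0.1 + 0.9/8) ≈ 4.7, and no number of processors can push it beyond 1 / (1 - f) = 10. This is why estimating the maximum possible parallelization and balancing the computational load matter.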

Week 4 (2 Contact Hours)
Topics: Parallel programming model; introduction to parallel programming using threads; introduction to POSIX threads
Learning outcomes: Background development on threads; what POSIX threads are; why use POSIX threads; design of POSIX threads; Pthread example.
Reference Text: Ch # 4 and Lecture Handouts
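A minimal sketch of the "Pthread example" item in Week 4 (illustrative only, not the textbook's or the handouts' code): the program below creates a few POSIX threads, passes each one an argument, and joins them before exiting.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_THREADS 4

    /* Each thread receives its index through the void* argument. */
    void *hello(void *arg)
    {
        long id = (long)arg;
        printf("Hello from thread %ld\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];

        /* Create the worker threads. */
        for (long t = 0; t < NUM_THREADS; t++) {
            if (pthread_create(&threads[t], NULL, hello, (void *)t) != 0) {
                perror("pthread_create");
                exit(EXIT_FAILURE);
            }
        }

        /* Wait for every worker to finish before exiting. */
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_join(threads[t], NULL);

        return 0;
    }

Compile with gcc -pthread. The same pattern (create, pass arguments, join) carries into the thread-management material of Week 6.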
Week 5 (2 Contact Hours)
Topics: POSIX threads models; communication and data exchange in POSIX threads
Learning outcomes: Threaded programming models; thread safeness; thread limits; Pthread API explanation; naming conventions; example code for communicating threads.
Reference Text: Ch # 4 and Lecture Handouts

Week 6 (2 Contact Hours)
Topics: POSIX threads management + Assignment # 2
Learning outcomes: Thread creation; thread attributes; thread binding and scheduling; terminating threads; passing arguments to threads; joining and detaching threads; demonstration examples.
Reference Text: Ch # 4 and Lecture Handouts
Week 7 (2 Contact Hours)
Topics: POSIX threads synchronization + Quiz # 2
Learning outcomes: Mutex variables: creating and destroying mutexes, locking and unlocking a mutex; condition variables: creating and destroying condition variables, waiting on, signalling and broadcasting condition variables.
Reference Text: Ch # 4 and Lecture Handouts
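A minimal sketch of the Week 7 synchronization primitives (illustrative only): one thread produces a value while holding a mutex and signals a condition variable; another thread waits on the condition variable before consuming the value.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared state protected by the mutex. */
    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static int data = 0;
    static int data_ready = 0;

    void *producer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        data = 42;                    /* produce a value           */
        data_ready = 1;
        pthread_cond_signal(&ready);  /* wake the waiting consumer */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    void *consumer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!data_ready)                 /* re-check on spurious wakeups */
            pthread_cond_wait(&ready, &lock);
        printf("consumed %d\n", data);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The while loop around pthread_cond_wait guards against spurious wakeups, which is the standard idiom for condition variables.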

Week 8 (2 Contact Hours)
Topics: POSIX threads synchronization + Revision
Learning outcomes: Stack management; get and set API for stack size adjustment; example code; miscellaneous routines: pthread_self, pthread_once and pthread_equal.
Reference Text: Ch # 4 and Lecture Handouts
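A small sketch of the Week 8 routines (illustrative only): it reads and adjusts the stack size through a thread-attributes object and exercises pthread_once, pthread_self and pthread_equal.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_once_t once = PTHREAD_ONCE_INIT;

    void init_once(void)
    {
        printf("one-time initialisation\n");
    }

    void *worker(void *arg)
    {
        (void)arg;
        pthread_once(&once, init_once);      /* init_once runs exactly once */
        /* pthread_t is opaque; the cast below is only for illustration. */
        printf("thread id (pthread_self): %lu\n", (unsigned long)pthread_self());
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        size_t stacksize;
        pthread_t t1, t2;

        /* Get and set the stack size through the attributes object. */
        pthread_attr_init(&attr);
        pthread_attr_getstacksize(&attr, &stacksize);
        printf("default stack size: %zu bytes\n", stacksize);
        pthread_attr_setstacksize(&attr, 2 * 1024 * 1024);   /* request 2 MB */

        pthread_create(&t1, &attr, worker, NULL);
        pthread_create(&t2, &attr, worker, NULL);

        /* pthread_equal compares two thread identifiers. */
        printf("t1 and t2 are %s\n",
               pthread_equal(t1, t2) ? "the same thread" : "different threads");

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }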
Week 9: MID TERM EXAM

Week 10 (2 Contact Hours)
Topics: POSIX threads scheduling + Revision
Learning outcomes: API for thread scheduling; example code; thread scheduling clauses.
Reference Text: Ch # 4 and Lecture Handouts
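A sketch of the thread-scheduling attributes touched on in Week 10 (illustrative only; creating a SCHED_FIFO thread normally needs elevated privileges, so the example falls back to default attributes if creation fails):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    void *work(void *arg)
    {
        (void)arg;
        printf("running under the scheduling policy set by main\n");
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        struct sched_param param;
        pthread_t t;

        pthread_attr_init(&attr);
        /* Take the policy from the attribute object, not from the creator. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);   /* real-time FIFO policy */
        param.sched_priority = sched_get_priority_min(SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &param);

        if (pthread_create(&t, &attr, work, NULL) != 0)
            pthread_create(&t, NULL, work, NULL);   /* fall back to defaults */

        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }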

Week 11 (2 Contact Hours)
Topics: Introduction to parallel programming with MPI
Learning outcomes: Introduction to MPI specifications and MPI libraries; Hello World example; running an MPI program; communicators; the trapezoidal rule in MPI; collective communication; MPI derived data types.
Reference Text: Ch # 3 and Lecture Handouts
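A minimal sketch of the Week 11 "Hello World" MPI program (illustrative only): every process reports its rank within the MPI_COMM_WORLD communicator.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the MPI environment  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                          /* shut the environment down  */
        return 0;
    }

Compile with mpicc and run with, for example, mpiexec -n 4 ./hello.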

Week 12 (2 Contact Hours)
Topics: Basic communication model in MPI; communication programming examples
Learning outcomes: Basics of point-to-point communication; blocking and non-blocking communication constructs; MPI program structure and components; library routines.
Reference Text: Ch # 3 and Lecture Handouts
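A sketch of blocking point-to-point communication for Week 12 (illustrative only): process 0 sends an integer to process 1 with MPI_Send, and process 1 receives it with MPI_Recv. The non-blocking variants (MPI_Isend/MPI_Irecv followed by MPI_Wait) follow the same structure.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "run with at least 2 processes\n");
            MPI_Finalize();
            return 1;
        }

        if (rank == 0) {
            value = 123;
            /* Blocking send: returns once the buffer may be reused. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Blocking receive: waits until a matching message arrives. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }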
Week 13 (2 Contact Hours)
Topics: MPI collective communication programming + Quiz # 3
Learning outcomes: Program structure description, pseudo-code and execution; use of MPI_Reduce and MPI broadcast (MPI_Bcast) for collective communication.
Reference Text: Ch # 3 and Lecture Handouts
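A sketch of the Week 13 collective calls (illustrative only): rank 0 broadcasts a value with MPI_Bcast, every process computes a partial result, and MPI_Reduce sums the partial results back on rank 0.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, n = 0, local, global_sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0)
            n = 100;                             /* value known only to rank 0 */

        /* Broadcast n from rank 0 to every process in the communicator. */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        local = n + rank;                        /* each process computes a partial value */

        /* Combine the partial values on rank 0 with a sum reduction. */
        MPI_Reduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %d\n", global_sum);

        MPI_Finalize();
        return 0;
    }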

Week 14 (2 Contact Hours)
Topics: MPI environment management and dealing with I/O; thread-level support
Learning outcomes: Environment management routines API: MPI_Init, MPI_Comm_size, MPI_Finalize, MPI_Comm_rank; function for reading user input.
Reference Text: Ch # 3 and Lecture Handouts

Week 15 (2 Contact Hours)
Topics: Communication methods in MPI in detail + Assignment # 3
Learning outcomes: Collective operations: all-to-one, one-to-all, all-to-all; gathering, reduction, scattering, synchronization using a barrier; point-to-point communication API.
Reference Text: Ch # 3 and Lecture Handouts
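A sketch of the Week 15 collective operations (illustrative only): one-to-all distribution with MPI_Scatter, a barrier synchronization, and all-to-one collection with MPI_Gather.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, size, chunk = 0, *data = NULL, *partial = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* The root prepares one value per process. */
            data = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++)
                data[i] = i * 10;
        }

        /* One-to-all: scatter one int to each process. */
        MPI_Scatter(data, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);

        chunk += 1;                              /* local work on the piece */

        /* All processes wait here before the results are collected. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            partial = malloc(size * sizeof(int));

        /* All-to-one: gather the modified pieces back on the root. */
        MPI_Gather(&chunk, 1, MPI_INT, partial, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("from rank %d: %d\n", i, partial[i]);
            free(data);
            free(partial);
        }

        MPI_Finalize();
        return 0;
    }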
Week 16 (2 Contact Hours)
Topics: OpenMP introduction; OpenMP programs + Quiz # 4
Learning outcomes: OpenMP background and goals; OpenMP programs; variable scope.
Reference Text: Ch # 5 and Lecture Handouts
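A minimal sketch of an OpenMP program and variable scoping for Week 16 (illustrative only): tid is private to each thread, while shared_count is shared across the team and updated atomically.

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        int shared_count = 0;   /* shared by every thread in the parallel region */
        int tid;

        /* private(tid) gives each thread its own copy of tid. */
        #pragma omp parallel private(tid) shared(shared_count)
        {
            tid = omp_get_thread_num();
            printf("Hello from OpenMP thread %d of %d\n",
                   tid, omp_get_num_threads());

            /* Updates to the shared variable must be protected. */
            #pragma omp atomic
            shared_count++;
        }

        printf("threads that ran the region: %d\n", shared_count);
        return 0;
    }

Compile with gcc -fopenmp.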

Week 17 (2 Contact Hours)
Topics: Loops in OpenMP + Revision
Learning outcomes: Bubble sort; odd-even transposition sort.
Reference Text: Ch # 5 and Lecture Handouts
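A sketch of a parallel loop for Week 17, along the lines of the odd-even transposition sort discussed in the textbook (illustrative only, not the textbook's code): each phase compares disjoint pairs, so the inner loop can safely be a parallel for.

    #include <omp.h>
    #include <stdio.h>

    #define N 8

    int main(void)
    {
        int a[N] = {9, 3, 7, 1, 8, 2, 6, 4};

        /* Odd-even transposition sort: N phases; within a phase the
           compared pairs do not overlap, so iterations are independent. */
        for (int phase = 0; phase < N; phase++) {
            if (phase % 2 == 0) {
                #pragma omp parallel for
                for (int i = 1; i < N; i += 2)
                    if (a[i - 1] > a[i]) {
                        int tmp = a[i - 1]; a[i - 1] = a[i]; a[i] = tmp;
                    }
            } else {
                #pragma omp parallel for
                for (int i = 1; i < N - 1; i += 2)
                    if (a[i] > a[i + 1]) {
                        int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                    }
            }
        }

        for (int i = 0; i < N; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }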

Week 18: FINAL TERM EXAM

Semester Calendar for Assignments/Quizzes/Project


Assignments/Projects and Quizzes Plan
Week #   Assignment No.   Quiz No.   Assignment/Project/Quiz Date   Result Date of Assignment/Project/Quiz
1        -                -          -                              -
2        Assignment 1     -          -                              -
3        -                Quiz 1     Assignment 1                   -
4        -                -          -                              -
5        -                -          -                              Assignment 1, Quiz 1
6        Assignment 2     -          -                              -
7        -                Quiz 2     Assignment 2                   -
8        -                -          -                              Assignment 2
10       -                -          -                              Quiz 2
11       -                -          -                              -
12       -                Quiz 3     -                              -
13       -                -          -                              -
14       Assignment 3     -          -                              Quiz 3
15       -                Quiz 4     Assignment 3                   -
16       -                -          -                              Assignment 3, Quiz 4
