DST4030A Lecture 1


United States International University

School of Science & Technology

DST4030A: PARALLEL COMPUTING

Dr. M. O. Asiyo
Email: masiyo@usiu.ac.ke
Phone: 0722 509 217



References

[1] R. Trobec, M. Vajteršic, P. Zinterhof (eds.) (2009). Parallel Computing: Numerics, Applications, and Trends, 1st Ed., Springer-Verlag London. ISBN-13: 9781848824089

[2] C. Bischof, M. Bücker, P. Gibbon, G. Joubert, T. Lippert (2008). Parallel Computing: Architectures, Algorithms and Applications, 1st Ed., IOS Press. ISBN-13: 9781586037963

[3] M. R. Bhujade (2009). Parallel Computing, New Age International (P) Limited. ISBN-13: 9788122423877

[4] A. Grama, V. Kumar, A. Gupta (2003). An Introduction to Parallel Computing: Design and Analysis of Algorithms, 2nd Ed., Addison Wesley. ISBN: 0201648652



Outline

1 Course Learning Objectives
    Course Outline

2 Lesson Objectives

3 Introduction
    Parallelism





Course Learning Objectives

At the end of this course, students should be able to:

1 Describe different parallel architectures, interconnect networks, programming models, and algorithms for common operations such as matrix-vector multiplication.

2 Develop an efficient parallel algorithm to solve a given problem.

3 Analyze the time complexity and the number of processors required by a given parallel algorithm.

4 Given a parallel algorithm, show the steps performed by the algorithm on an input.



Course Learning Objectives

5 Given a parallel algorithm, implement it using MPI, OpenMP, pthreads, or a combination of MPI and OpenMP (a minimal OpenMP sketch follows this list).

6 Given a parallel code, analyze its performance, determine computational bottlenecks, and optimize the performance of the code.

7 Given a parallel code, debug it and fix the errors.
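To give a flavour of the tools named in objective 5, here is a minimal OpenMP sketch in C (a sketch only, assuming a compiler with OpenMP support, e.g. the -fopenmp flag on GCC or Clang; the course's actual lab setup may differ):

    /* hello_omp.c - each thread in the team prints its own ID.
       Build: gcc -fopenmp hello_omp.c -o hello_omp            */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel                       /* fork a team of threads */
        {
            int tid = omp_get_thread_num();        /* this thread's ID   */
            int nthreads = omp_get_num_threads();  /* size of the team   */
            printf("Hello from thread %d of %d\n", tid, nthreads);
        }                                          /* implicit barrier, threads join */
        return 0;
    }

Running it with, for example, OMP_NUM_THREADS=4 should print one line per thread, in no particular order.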



Course Outline

Week Topic
Week 1 Course Outline and Introduction to Parallel Computing
Week 2 Single processor Machines and Parallelism
Week 3 Single processor Machines and Parallelism
Week 4 Introduction to Parallel Machines and Programming Models
Week 5 Parallel Computer Architecture - A Hardware Approach
Week 6 Parallel Computer Architecture - A Software Approach
Week 7 MID - SEMESTER EXAMINATION
Week 8 Distributed memory Machines and Programming
Week 9 Simulation, Cost Model, Mapping, Platforms & Design
Week 10 Analytical Modeling of Parallel Programs
Week 11 Dense Matrix Algorithms, Sorting and Graph Algorithms
Week 12 Search Algorithms for Discrete Optimization Problems
Week 13 Dynamic Programming & Course Project Presentations
Week 14 FINAL SEMESTER EXAMINATION




Teaching Methodology

The course will be conducted through lectures and class discussions, illustrations using computers, and practical lab exercises. The emphasis will be on a 'hands-on' approach, and at least 50% of instruction will take place in the computer lab.




Teaching Methodology

Table: Grading System

Numerical Average (100% Max)   Letter Grade
90 or above                    A
87-89                          A-
84-86                          B+
80-83                          B
77-79                          B-
74-76                          C+
70-73                          C
67-69                          C-
64-66                          D+
62-63                          D
60-61                          D-
0-59                           F

Table: Distribution of Marks

Laboratory exercises   15%
Assignments             5%
Participation           5%
Quizzes                 5%
Project                20%
Mid-Semester Exam      20%
Final Exam             30%


Lesson Objectives

At the end of this sub-unit module, students should be able to:

Define parallel computing and distinguish it from sequential computing.
Give examples of applications of parallel computing in everyday life.
Recognize the main components of parallelism in computing systems.





Introduction: Parallelism

Definition (Parallel Computing)

It refers to a type of computation in which many calculations, or the execution of multiple processes, are carried out simultaneously.

Large problems can therefore often be split into smaller ones, which can then be solved simultaneously, as illustrated below.

Figure: A typical example of parallel processing.



Definition (Parallel Processing)

It is a method in computing in which separate parts of an overall complex task are broken up and run simultaneously on multiple CPUs, hence reducing the total processing time.

Definition (Sequential Computing)

It is also known as serial computation, and it refers to the use of a single processor to execute a program that is divided into a sequence of discrete instructions, each executed one after the other with no overlap at any given time.
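As an illustrative sketch of the contrast (the file name, the partial_sum helper, and the two-thread split are hypothetical choices, not part of the course material), the following C program divides an array sum between two POSIX threads, whereas the sequential version would loop over the whole array on a single processor:

    /* sum_pthreads.c - sum an array by splitting it across two threads.
       Build: gcc sum_pthreads.c -o sum_pthreads -lpthread              */
    #include <stdio.h>
    #include <pthread.h>

    #define N 1000000
    static double data[N];

    struct range { int lo, hi; double partial; };

    static void *partial_sum(void *arg) {
        struct range *r = (struct range *)arg;
        r->partial = 0.0;
        for (int i = r->lo; i < r->hi; i++)   /* each thread sums its own half */
            r->partial += data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) data[i] = 1.0;

        pthread_t t1, t2;
        struct range a = { 0, N / 2, 0.0 };
        struct range b = { N / 2, N, 0.0 };

        pthread_create(&t1, NULL, partial_sum, &a);  /* both halves run ...  */
        pthread_create(&t2, NULL, partial_sum, &b);  /* ... at the same time */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("sum = %.1f\n", a.partial + b.partial); /* combine the results */
        return 0;
    }

The sequential equivalent is a single loop over all N elements; the parallel version trades that simplicity for the chance to use two CPUs at once.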




Advantages of Parallel Computing

1 It saves time and money, since many resources working together reduce the time taken and cut potential costs.

2 Larger problems can be impractical to solve with serial computing.

3 It can take advantage of non-local resources when the local resources are finite.

4 Serial computing 'wastes' potential computing power, whereas parallel computing makes better use of the hardware.




Motivation for Parallelism


Parallel computing is the practice of identifying and exposing parallelism in algorithms, expressing it in our software, and understanding the costs, benefits, and limitations of the chosen implementation.

In the end, parallel computing is about performance. This includes not just speed, but also the size of problem that can be handled and energy efficiency.

One of the performance metrics is speedup, given by

    Speedup = (Execution time using 1 processor) / (Execution time using P processors)
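As a rough, hedged illustration of how this ratio might be measured (a sketch assuming an OpenMP-capable C compiler; the workload and problem size are arbitrary):

    /* speedup.c - time the same loop serially and in parallel, then
       report speedup = T_serial / T_parallel.
       Build: gcc -O2 -fopenmp speedup.c -o speedup                  */
    #include <stdio.h>
    #include <omp.h>

    #define N 100000000L

    int main(void) {
        double sum1 = 0.0, sum2 = 0.0, t;

        t = omp_get_wtime();                      /* T with 1 processor  */
        for (long i = 0; i < N; i++) sum1 += 1.0 / (double)(i + 1);
        double t_serial = omp_get_wtime() - t;

        t = omp_get_wtime();                      /* T with P processors */
        #pragma omp parallel for reduction(+:sum2)
        for (long i = 0; i < N; i++) sum2 += 1.0 / (double)(i + 1);
        double t_parallel = omp_get_wtime() - t;

        printf("sums %.6f / %.6f, speedup = %.2f using up to %d threads\n",
               sum1, sum2, t_serial / t_parallel, omp_get_max_threads());
        return 0;
    }

A speedup close to the number of processors indicates good scaling; in practice it is usually lower because of overheads and the serial portions of the code.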




Examples of Everyday Parallelism

Supermarket stores
House construction - parallel tasks, wiring and plumbing
performed at once
Assembly line manufacture - pipelining, many instances in
process at once
Call centre - independent tasks executed simultaneously




Types of Parallelism

1 Bit-level parallelism: the form of parallel computing that comes from increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first sum the 8 lower-order bits and then add the 8 higher-order bits (plus the carry), thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with a single instruction, as sketched below.
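A C sketch of that example (illustrative only; a real 8-bit processor would do this in assembly with an add-with-carry instruction):

    /* add16.c - add two 16-bit integers using only 8-bit operations,
       mimicking the two steps an 8-bit processor must perform.       */
    #include <stdio.h>
    #include <stdint.h>

    static uint16_t add16_with_8bit_ops(uint16_t a, uint16_t b) {
        uint8_t lo = (uint8_t)(a & 0xFF) + (uint8_t)(b & 0xFF);     /* step 1: low bytes   */
        uint8_t carry = lo < (uint8_t)(a & 0xFF);                   /* low add overflowed? */
        uint8_t hi = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry; /* step 2: high bytes  */
        return (uint16_t)(((uint16_t)hi << 8) | lo);
    }

    int main(void) {
        uint16_t a = 1234, b = 4321;
        printf("%d + %d = %d\n", a, b, add16_with_8bit_ops(a, b)); /* prints 5555 */
        return 0;
    }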




Types of Parallelism

2 Instruction-level parallelism: without it, a processor completes less than one instruction per clock cycle on average. Independent instructions can, however, be re-ordered and grouped so that they are executed concurrently without affecting the result of the program. This is called instruction-level parallelism.

3 Task parallelism: task parallelism decomposes a task into subtasks and allocates each subtask for execution; the processors then execute the subtasks concurrently.

4 Data-level parallelism (DLP): instructions from a single stream operate concurrently on several data elements. It is limited by non-regular data-manipulation patterns and by memory bandwidth (see the sketch after this list).
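A minimal OpenMP sketch of data-level parallelism (the array update and constants are arbitrary; the same #pragma construct is also a common way to express loop-level task decomposition):

    /* saxpy_omp.c - the same operation applied to every element of an
       array, with the iterations split across threads by OpenMP.
       Build: gcc -fopenmp saxpy_omp.c -o saxpy_omp                   */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        #pragma omp parallel for          /* each thread takes a chunk of i   */
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];       /* same instruction, different data */

        printf("y[0] = %.1f, up to %d threads used\n", y[0], omp_get_max_threads());
        return 0;
    }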
Thank You.

