DST4030A Lecture 1
[1] Vajteršic, M., Zinterhof, P. & Trobec, R. (eds.) (2009). Parallel Computing: Numerics, Applications, and Trends, 1st Ed., Springer-Verlag London. ISBN-13: 9781848824089
[3] Bhujade, M.R. (2009). Parallel Computing, New Age International (P) Limited. ISBN-13: 9788122423877
Outline
2 Lesson Objectives
3 Introduction
Parallelism
Week Topic
Week 1 Course Outline and Introduction to Parallel Computing
Week 2 Single processor Machines and Parallelism
Week 3 Single processor Machines and Parallelism
Week 4 Introduction to Parallel Machines and Programming Models
Week 5 Parallel Computer Architecture - A Hardware Approach
Week 6 Parallel Computer Architecture - A Software Approach
Week 7 MID-SEMESTER EXAMINATION
Week 8 Distributed memory Machines and Programming
Week 9 Simulation, Cost Model, Mapping, Platforms & Design
Week 10 Analytical Modeling of Parallel Programs
Week 11 Dense Matrix Algorithms, Sorting and Graph Algorithms
Week 12 Search Algorithms for Discrete Optimization Problems
Week 13 Dynamic Programming & Course Project Presentations
Week 14 FINAL SEMESTER EXAMINATION
Teaching Methodology
Lesson Objectives
Large problems can therefore often be split into smaller ones, which
can then be solved simultaneously, as the examples below show.
Supermarket stores - many checkout tills serving customers at once
House construction - parallel tasks, e.g. wiring and plumbing
performed at once
Assembly line manufacture - pipelining, many instances in
process at once
Call centre - independent tasks executed simultaneously
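The idea behind all of these examples, splitting a large problem into independent sub-problems that are solved at the same time and then combined, can be sketched in Python. This is a minimal illustration, not part of the course material; the function names are hypothetical, and it uses only the standard library:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Solve one sub-problem independently of the others."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split data into chunks, sum each chunk in parallel, combine results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:          # one process per worker
        partials = pool.map(partial_sum, chunks)
    return sum(partials)                 # combine the partial results

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # same answer as sum(range(100))
```

The split-solve-combine shape here is the same pattern as the checkout tills: each worker handles its own share of the load, and the final step merges the independent results.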
Types of Parallelism
2 Instruction-level parallelism (ILP): A processor can execute more
than one instruction during a single clock cycle. Independent
instructions can be re-ordered and grouped, then executed
concurrently without affecting the result of the program.
3 Task parallelism: A task is decomposed into subtasks, and each
subtask is allocated to a processor for execution. The processors
then execute their subtasks concurrently.
4 Data-level parallelism (DLP): Instructions from a single stream
operate concurrently on many data elements. It is limited by
irregular data-access patterns and by memory bandwidth.
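The contrast between task parallelism and data parallelism can be sketched in Python with the standard library. This is an illustrative sketch only: fetch_prices and fetch_volumes are hypothetical subtasks, not part of the course material, and threads stand in for processors.

```python
from concurrent.futures import ThreadPoolExecutor

# Task parallelism: two *different* subtasks run concurrently.
def fetch_prices():      # hypothetical subtask 1
    return {"A": 10, "B": 20}

def fetch_volumes():     # hypothetical subtask 2
    return {"A": 100, "B": 200}

with ThreadPoolExecutor() as pool:
    prices_future = pool.submit(fetch_prices)
    volumes_future = pool.submit(fetch_volumes)
    prices, volumes = prices_future.result(), volumes_future.result()

# Data parallelism: the *same* operation is applied concurrently
# across many data elements.
def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    squares = list(pool.map(square, range(8)))  # one operation, many data
```

In the first half the parallelism comes from having distinct subtasks; in the second it comes from distributing one operation over the data, which is exactly the DLP case above.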
Dr. Asiyo - USIU Parallel Computing May 14, 2024 19 / 20
Thank You!