Lecture Week - 1 Introduction 1 - SP-24
Lecture No. 01
Introduction
Farhad M. Riaz
[email protected]
Grading (roughly):
– 50% Final Exam
– 25% Internal Evaluation
  Quiz: 8 marks
  Assignments: 8 marks
  Project: 9 marks
– 25% Midterm Exam
Books
IBM BG/Q Compute Chip with 18 cores (PU) and 16 L2 Cache units (L2)
Parallel Computers
Networks connect multiple stand-alone computers (nodes) to make larger parallel computer clusters.
Parallel computer cluster
– Each compute node is a multi-processor parallel computer in itself.
– Multiple compute nodes are networked together with an InfiniBand network (see the message-passing sketch below).
– Special-purpose nodes, also multi-processor, are used for other purposes (e.g., login and I/O).
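As a rough illustration of how such a cluster is programmed, here is a minimal MPI sketch (an illustrative example, not from the slides; it assumes an MPI implementation such as MPICH or Open MPI is installed, and the file name and process count are hypothetical). Each process runs on a core of some compute node, has its own private memory, and exchanges data with the others only by passing messages over the interconnect.

/* Minimal MPI sketch (hypothetical file hello_mpi.c).
 * Build: mpicc hello_mpi.c -o hello_mpi
 * Run:   mpirun -np 4 ./hello_mpi      (process count is illustrative) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id, 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    char node[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(node, &len);     /* name of the node this process runs on */

    printf("process %d of %d on node %s\n", rank, size, node);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}

Because each process has its own address space, any actual data exchange between nodes would use explicit calls such as MPI_Send and MPI_Recv over the cluster network.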
Types of Parallel and Distributed Computing
Parallel Computing
– Shared Memory
– Distributed Memory
Distributed Computing
– Cluster Computing
– Grid Computing
– Cloud Computing
– Distributed Pervasive Systems
Parallel Computing
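To make the shared-memory model concrete, here is a minimal OpenMP sketch (an illustrative example, not taken from the lecture; it assumes a compiler with OpenMP support, e.g. gcc -fopenmp). All threads belong to one process and read and write the same memory.

/* Shared-memory parallel sum with OpenMP (hypothetical example). */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)     /* fill the shared array */
        a[i] = 1.0;

    /* The loop iterations are split among threads; every thread reads the
     * same shared array a[], and reduction(+:sum) combines the per-thread
     * partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}

The reduction clause gives each thread a private partial sum and merges them when the loop finishes, which avoids a data race on the shared variable sum.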
Distributed (Cluster) Computing
SAVE TIME AND/OR MONEY (Main Reasons)
In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings (see the speedup ratio below).
Parallel computers can be built from cheap, commodity components.
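A standard way to make the "more resources, shorter time" claim precise is the speedup ratio (standard notation, not introduced on these slides):

S(p) = T_1 / T_p

where T_1 is the execution time on one processor and T_p the time on p processors. In the ideal case T_p = T_1 / p, so S(p) = p; in practice, serial portions of the work and communication overhead keep the measured speedup below this ideal.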
SOLVE LARGER / MORE COMPLEX PROBLEMS (Main Reasons)
Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory.
Example: web search engines/databases processing millions of transactions every second.
PROVIDE CONCURRENCY (Main Reasons)