Lec1-Introduction To Parallel - Distributed System

This document provides an introduction to parallel programming, discussing why parallel computing is important and needed, how problems can be solved using parallel processing across multiple CPUs, and how hardware is evolving to support parallelism through multi-core processors to handle larger and more complex computational problems.

Uploaded by Jawad Ali

An Introduction to Parallel Programming

Lecture 1
Why Parallel Computing?

INTRODUCTION
WEEK 01
Course Objectives
 Learn how to program parallel processors and systems
 Learn how to think in parallel and write correct parallel programs
 Achieve performance and scalability through understanding of architecture and software mapping
 Significant hands-on programming experience
  Develop real applications on real hardware
 Discuss the current parallel computing context
  What are the drivers that make this course timely
  Contemporary programming models and architectures, and where the field is going

Why is this Course Important?
 Multi-core and many-core era is here to stay
  Why? Technology trends
 Many programmers will be developing parallel software
  But still not everyone is trained in parallel programming
 Learn how to put all these vast machine resources to the best use!
 Useful for
  Joining the industry
  Graduate school
 Our focus
  Teach core concepts
  Use common programming models
  Discuss the broader spectrum of parallel computing
Roadmap
 Why we need ever-increasing performance.
 Why we’re building parallel systems.
 Why we need to write parallel programs.
 What we’ll be doing.
 Concurrent, parallel, distributed!

Parallel and Distributed Computing
 Parallel computing (processing): the use of two or more processors (computers), usually within a single system, working simultaneously to solve a single problem.
 Distributed computing (processing): any computing that involves multiple computers remote from each other, each of which has a role in a computation problem or information processing.
 Parallel programming: the human process of developing programs that express what computations should be executed in parallel.
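A tiny, hypothetical sketch of what "expressing which computations should be executed in parallel" can look like, using Python's standard library purely for illustration (the function names here are made up, and courses in this area often use C with MPI, OpenMP, or Pthreads instead):

```python
# Illustrative only: the programmer states WHICH calls may run in
# parallel (each square(x)); the runtime schedules them across workers.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_squares(values, workers=4):
    # map() hands each element to a worker thread; results come back
    # in the original input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, values))

print(parallel_squares([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The key idea is the separation of concerns: the program expresses the available parallelism, while the scheduler decides how to map it onto processors.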

Parallel Computing
To be run using multiple CPUs:
◦A problem is broken into discrete parts that can be solved concurrently
◦Each part is further broken down into a series of instructions
◦Instructions from each part execute simultaneously on different CPUs
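The break-apart / solve-concurrently / combine pattern above can be sketched in a few lines (a hedged Python illustration; note that CPython threads show the structure of the decomposition rather than true CPU speedup, which in practice comes from processes or native threads, one per core):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, parts=4):
    # 1. Break the problem into discrete parts (chunks of the list).
    step = max(1, (len(data) + parts - 1) // parts)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    # 2. Solve the parts concurrently, one worker per chunk.
    with ThreadPoolExecutor(max_workers=parts) as pool:
        partials = list(pool.map(sum, chunks))
    # 3. Combine the partial results into the final answer.
    return sum(partials)

print(chunked_sum(list(range(100))))  # 4950
```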

Parallel Computing Example

Compute Resources
The compute resources can include:
◦A single computer with multiple processors/cores
◦An arbitrary number of computers connected by a network
◦A combination of both

Why we need ever-increasing performance
 Computational power is increasing, but so are our
computation problems and needs.
 Problems we never dreamed of have been
solved because of past increases, such as
decoding the human genome.
 More complex problems are still waiting to be
solved.

Climate modeling
 National Oceanic and Atmospheric Administration
(NOAA) has more than 20PB of data and processes
80TB/day

Climate modeling

One processor computes one block of the grid while another processor computes a neighbouring block in parallel.
Processors in adjacent blocks in the grid communicate their result.
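The communicate-then-compute pattern of the grid example can be mimicked with a barrier (an illustrative Python sketch with made-up names; real climate codes exchange boundary "halo" values between processes, typically with MPI):

```python
import threading

def stencil_step(blocks):
    # Each worker owns one block of a 1-D grid. It reads boundary values
    # from its neighbours, waits at a barrier (the "communication" phase),
    # then averages its block together with the two boundary values.
    n = len(blocks)
    results = [0.0] * n
    barrier = threading.Barrier(n)

    def worker(i):
        left = blocks[i - 1][-1] if i > 0 else blocks[i][0]
        right = blocks[i + 1][0] if i < n - 1 else blocks[i][-1]
        barrier.wait()  # all workers have read neighbour data before computing
        results[i] = (left + sum(blocks[i]) + right) / (len(blocks[i]) + 2)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(stencil_step([[1.0, 1.0], [1.0, 1.0]]))  # [1.0, 1.0]
```

The barrier is what makes the result deterministic: no worker computes until every worker has read its neighbours' old values.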

Data analysis
 CERN’s Large Hadron Collider (LHC) produces about 15PB per year
 High-energy physics workflows involve a range of both data-intensive and
compute-intensive activities.
 The collision data from the detectors on the LHC needs to be filtered to select a
few thousand interesting collisions from as many as one billion that may take
place each second.
 The WLCG produces a massive sample of billions of simulated beam crossings,
trying to predict the response of the detector and compare it to known physics
processes and potential new physics signals.

Drug discovery
 Computational drug discovery and design (CDDD) based on HPC is a
combination of pharmaceutical chemistry, computational chemistry, and
biology using supercomputers, and has become a critical technology in
drug research and development.

Why Parallel Computing?
The Real World is Massively Parallel:
◦Parallel computing attempts to emulate the natural world
◦Many complex, interrelated events happen at the same time, yet within a temporal sequence.

Why Parallel Computing?
To solve larger, more complex problems:
numerical simulations of complex systems and "Grand Challenge Problems" such as:
◦weather and climate forecasting
◦chemical and nuclear reactions
◦geological, seismic activity
◦mechanical devices (spacecraft)
◦electronic circuits
◦manufacturing processes
Why Parallel Computing?

Example applications include:
◦parallel databases, data mining
◦web search engines, web based business services
◦computer-aided diagnosis in medicine
◦management of national and multi-national corporations
◦advanced graphics and virtual reality, particularly in the
entertainment industry

Why Parallel Computing?
◦ To save time
◦ To solve larger problems
◦ To provide concurrency

Why Parallel Computing?

Parallel computing is an attempt to make the best use of that seemingly limited commodity called time!

Who and What?
Top500.org provides statistics on parallel computing
users.
The Future?
During the past 20 years, the trends indicated by ever faster
networks, distributed systems, and multi-processor
architectures clearly show that parallelism is the future of
computing.
In this same time period, there has been a greater than
500,000x increase in supercomputer performance, with no
end currently in sight.
The race is already on for Exascale Computing!
Exaflop = 10^18 calculations per second

Towards parallel hardware

Why we’re building parallel systems
 Up to now, performance increases have been attributable to the increasing density of transistors.
 But there are inherent problems.

A little physics lesson
 Smaller transistors = faster processors.
 Faster processors = increased power
consumption.
 Increased power consumption =
increased heat.
 Increased heat = unreliable processors.

Evolution of processors in the last 50 years

How small is 5nm?

https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_5nm
An intelligent solution
 Instead of designing and building faster
microprocessors, put multiple processors on a
single integrated circuit.
 Move away from single-core systems to
multicore processors.
 Introducing parallelism!!!
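One practical consequence of the multicore shift is that a program can ask how many cores the chip exposes and size its worker pool to match (a small Python illustration with made-up helper names; the count reported depends on the operating system):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def core_count():
    # Number of logical cores the multicore processor exposes to the OS.
    # os.cpu_count() may return None in unusual environments, hence the fallback.
    return os.cpu_count() or 1

def run_on_all_cores(task, items):
    # One worker per core: the simplest way to put every core to use.
    with ThreadPoolExecutor(max_workers=core_count()) as pool:
        return list(pool.map(task, items))

print(run_on_all_cores(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]
```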

Thank You
