
Parallel Computing

Outline - Introduction

• Cost versus Performance
• What is Parallel Computing?
• The Scope of Parallel Computing
• Issues in Parallel Computing

Cost versus Performance
Cost versus Performance curve and its evolution over the decades.

[Figure: cost-versus-performance curves for the 1960s, 1970s, 1980s, 1990s, and 2000s; axes are Cost and Performance.]

What is Parallel Computing?

• Example: a library and a group of workers distributing books to shelves.

1) Dividing the task among the workers by assigning each of them a set of
   books is an instance of task partitioning.
2) Workers passing books to one another is an example of communication
   between subtasks.
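A minimal Pthreads sketch of the library example (the worker count, book count, and helper names are illustrative, not from the slides): each worker thread handles a contiguous block of books (task partitioning), and the shared result array plus the final join stand in for the communication between subtasks.

#include <pthread.h>
#include <stdio.h>

#define NUM_BOOKS   1000      /* illustrative sizes, not from the slides        */
#define NUM_WORKERS 4

static int shelf_of_book[NUM_BOOKS];         /* shared result: target shelf per book */

typedef struct { int first, last; } range_t; /* block of books given to one worker   */

/* Each worker handles only its own block of books (task partitioning). */
static void *worker(void *arg) {
    range_t *r = (range_t *)arg;
    for (int b = r->first; b < r->last; b++)
        shelf_of_book[b] = b % 26;           /* stand-in for deciding where a book goes */
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_WORKERS];
    range_t   part[NUM_WORKERS];
    int block = NUM_BOOKS / NUM_WORKERS;

    /* Partition the pile of books into roughly equal blocks, one per worker. */
    for (int w = 0; w < NUM_WORKERS; w++) {
        part[w].first = w * block;
        part[w].last  = (w == NUM_WORKERS - 1) ? NUM_BOOKS : (w + 1) * block;
        pthread_create(&tid[w], NULL, worker, &part[w]);
    }

    /* Waiting for all workers and then reading the shared array is the
       coordination/communication step between the subtasks.             */
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(tid[w], NULL);

    printf("book 0 -> shelf %d, book %d -> shelf %d\n",
           shelf_of_book[0], NUM_BOOKS - 1, shelf_of_book[NUM_BOOKS - 1]);
    return 0;
}

Here the partitioning is static and equal-sized; the later points on load balancing and communication concern what happens when the blocks are not equally expensive or when workers must exchange intermediate results.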

The Scope of Parallel Computing
• Applications such as weather prediction and pollution monitoring.
• Satellites collect billions of bits per second of data relating to the
  pollution level and the thickness of the ozone layer.

Example: Weather Modeling & Forecasting

• Assumptions:
1) The weather is modeled over an area of 3000 x 3000 miles.
2) The area is modeled up to a height of 11 miles.
3) The 3000 x 3000 x 11 cubic-mile domain is partitioned into segments of
   0.1 x 0.1 x 0.1 miles (0.001 cubic miles each), which gives approximately
   10^11 segments.
4) The weather is modeled over a two-day period, and the parameters must be
   computed once every half hour.

3000 x 3000 x 11 = 99,000,000 cubic miles
99,000,000 / (0.1 x 0.1 x 0.1) = 99,000,000,000 ≈ 10^11 segments


Assumptions: Weather … (Continues)
5) The computation of the parameters inside a segment uses initial values
   and values from neighboring segments.
• Assume that this computation takes 100 instructions; then a single update
  of the parameters over the entire domain requires
• 10^11 segments x 100 instructions = 10^13 instructions
• Since this has to be done approximately 100 times (every half hour for
  two days, i.e. 96 times), the total number of operations (instructions)
  is 10^15.

24 hours x 2 updates per hour x 2 days = 96 ≈ 10^2 times
10^13 instructions x 10^2 times = 10^15 instructions

Example: Weather Modeling … (Continues)

• On a serial supercomputer capable of performing one billion instructions
  per second, this weather modeling would take approximately 280 hours.
  That is:
• 1,000,000,000 instructions per second = 10^9 instructions per second
• 10^15 instructions / 10^9 instructions per second = 10^(15-9) = 10^6 seconds
• 10^6 seconds / 60 seconds per minute = 16,666.7 minutes
• 16,666.7 minutes / 60 minutes per hour = 277.8 ≈ 280 hours
• 280 hours / 24 hours per day ≈ 11.67 days

Example: Weather Modeling … (Continues)

• Taking 280 hours (11.67 days) to predict the weather for the next
  48 hours (2 days) is unreasonable.
• Parallel processing makes it possible to predict the weather not only
  faster but also more accurately.

Example: Weather Modeling … (Continues)

• If we have a parallel computer with 1000 workstation-class processors,
  we can partition the 10^11 segments of the domain among these processors.
• Each processor computes the parameters for 10^8 segments:
• 10^11 segments / 10^3 processors = 10^8 segments per processor
• Assuming that the computing power of each processor is 100 million
  instructions per second, the problem can be solved in less than 3 hours:
• 100,000,000 instructions per second = 10^8 instructions per second per processor
• 10^8 segments x 100 instructions per segment = 10^8 x 10^2 = 10^(8+2) = 10^10
  instructions per processor per update
• 10^10 instructions x 10^2 updates (two days) = 10^(10+2) = 10^12 instructions
  per processor
• 10^12 instructions / 10^8 instructions per second = 10^(12-8) = 10^4 seconds
• 10^4 seconds / 3600 seconds per hour ≈ 2.7 hours
• So the whole forecast takes about 2.7 hours because the processors work
  in parallel.
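The whole back-of-envelope estimate can be reproduced in a few lines of C. This is only a sketch that restates the slide's assumptions as constants: ~10^11 segments, 100 instructions per segment, ~100 updates, a 10^9 instructions-per-second serial machine, and 1000 processors of 10^8 instructions per second each.

#include <stdio.h>

int main(void) {
    double segments      = 1e11;   /* ~3000 x 3000 x 11 cubic miles in 0.1-mile cubes */
    double instr_per_seg = 100;    /* assumed instructions to update one segment      */
    double updates       = 100;    /* ~96 half-hour steps over two days               */
    double total_instr   = segments * instr_per_seg * updates;          /* ~10^15      */

    double serial_rate   = 1e9;    /* serial supercomputer: 10^9 instructions/second  */
    double serial_hours  = total_instr / serial_rate / 3600.0;          /* ~278 hours  */

    double nprocs        = 1000;   /* workstation-class processors                    */
    double proc_rate     = 1e8;    /* 10^8 instructions/second per processor          */
    double par_hours     = (total_instr / nprocs) / proc_rate / 3600.0; /* ~2.8 hours  */

    printf("serial:   %.0f hours (%.1f days)\n", serial_hours, serial_hours / 24.0);
    printf("parallel: %.1f hours on %.0f processors\n", par_hours, nprocs);
    return 0;
}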

Issues in Parallel Computing
1) Design of Parallel Computers:
 • Large number of processors.
 • Supporting fast communication.
 • Supporting data sharing.
2) Design of Efficient Algorithms:
 • The issues in designing parallel algorithms are very different from
   those in designing their sequential counterparts.
 • Partition: decompose the task into several parallel subtasks.
 • Load Balancing: distribute the load on the processors as evenly as
   possible (a simple block-distribution sketch follows this list).
 • Communication: how the processors can communicate efficiently to
   perform the whole parallel task.
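A common way to handle the partition and load-balancing points together is a block distribution that spreads any leftover items over the first few processors, so no processor gets more than one extra item. The sketch below is illustrative only (block_range is a hypothetical helper, not something named on the slides):

#include <stdio.h>

/* Hypothetical helper: give processor `rank` (0..nprocs-1) its slice of n items,
   spreading the remainder so the per-processor loads differ by at most one.     */
static void block_range(long n, int nprocs, int rank, long *first, long *count) {
    long base = n / nprocs;
    long rem  = n % nprocs;
    *count = base + (rank < rem ? 1 : 0);
    *first = rank * base + (rank < rem ? rank : rem);
}

int main(void) {
    long first, count;
    for (int r = 0; r < 4; r++) {                  /* e.g. 10 segments, 4 processors */
        block_range(10, 4, r, &first, &count);
        printf("proc %d: items [%ld, %ld)\n", r, first, first + count);
    }
    return 0;
}

With 10 items on 4 processors this prints the ranges [0, 3), [3, 6), [6, 8), [8, 10), i.e. loads of 3, 3, 2, 2.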

Issues in Parallel … (Continues)

3) Methods for Evaluating Parallel Algorithms:
 • Given a parallel computer and a parallel algorithm running on it, we
   need to evaluate the performance of the resulting system.
 • Performance analysis allows us to answer questions such as:
   a) How fast can a problem be solved using parallel processing?
   b) How efficiently are the processors used?
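These two questions are usually answered with the standard speedup and efficiency metrics, which the slide implies but does not name:

  Speedup:    S(p) = T_serial / T_parallel(p)
  Efficiency: E(p) = S(p) / p

For the weather example, if a single 10^8 instructions-per-second processor were used on its own it would need 10^15 / 10^8 = 10^7 seconds, while 1000 such processors need 10^4 seconds, so S(1000) = 10^7 / 10^4 = 1000 and E(1000) = 1000 / 1000 = 1. This is the ideal case, since the estimate ignores the cost of communication between neighboring segments.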

Issues in Parallel … (Continues)
4) Parallel algorithms are implemented on parallel computers using a
   programming language.
 • Examples: Pthreads, High Performance Fortran (HPF).
5) Parallel Programming Tools:
 • To facilitate the programming of parallel computers.
 • Examples: MPI and PVM (using Fortran, C/C++, C#, and Java).
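As a taste of what such tools look like, here is a minimal MPI program in C (standard MPI calls; the program itself is not from the slides) in which every process reports its rank out of the total number of processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime         */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id             */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes     */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the MPI runtime down     */
    return 0;
}

Compiled with mpicc and launched with, for example, mpirun -np 4 ./hello, it prints one line per process.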


Issues in Parallel … (Continues)

6) Portable Parallel Programs:
 • In the sense that the parallel program can be executed under different
   operating systems and on different architectures.
7) Automatic Programming of Parallel Computers:
 • Parallel compilers are expected to allow us to program a parallel
   computer as if it were a serial computer.

