Lecture 1 - Introduction to PDC

The document provides an introduction to parallel and distributed computing, outlining the differences between the two, with parallel computing involving simultaneous processing by multiple processors within a single system, while distributed computing involves multiple remote computers. It discusses the limitations of serial computing, the need for faster processing due to increasing data demands, and the significance of parallel computing in solving larger problems efficiently. Additionally, it highlights the fastest computer, IBM Summit, detailing its specifications and applications in various research fields.

Uploaded by

Aliza

Distributed and Parallel Computing

Lecture 1: Introduction
von Neumann Architecture
Parallel and Distributed Computing
• Parallel computing (processing):
• the use of two or more processors, usually within a single system,
working simultaneously to solve a single problem.
• Distributed computing (processing):
• any computing that involves multiple computers, remote from each
other, each playing a role in a computation or information-processing
task.
• Parallel programming:
• the human process of developing programs that express which
computations should be executed in parallel.
What is Parallel Computing?

Serial Computing:
• Traditionally, software has been written
for serial computation:
• A problem is broken into a discrete series of instructions
• Instructions are executed sequentially one after another
• Executed on a single processor
• Only one instruction may execute at any moment in time
CPU and Memory Speeds

• In 20 years, CPU speed (clock rate) has increased by a
factor of 1000
• DRAM speed has increased only by a factor of less
than 4
• How can data be fed fast enough to keep the CPU busy?
• CPU cycle time: 1-2 ns
• DRAM access time: 50-60 ns
• Cache access time: ~10 ns
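The gap above can be put in concrete terms with a back-of-the-envelope sketch in Python, using the slide's round numbers (the function name is ours, for illustration only):

```python
# Back-of-the-envelope from the slide's figures: how many CPU cycles
# pass while a single DRAM access completes.

def stall_cycles(dram_access_ns, cpu_cycle_ns):
    """Cycles the CPU could have executed during one memory access."""
    return dram_access_ns / cpu_cycle_ns

# Using the slide's ranges: CPU cycle 1-2 ns, DRAM access 50-60 ns.
print(stall_cycles(50, 2))   # -> 25.0 (best case)
print(stall_cycles(60, 1))   # -> 60.0 (worst case)
```

So even in the best case, tens of potential instruction slots are lost per uncached memory access, which is why caches (and parallelism to hide latency) matter.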
CPU, Memory, and Disk Speed
Technology Trends: Moore’s Law
• Transistor count: still rising
• Clock speed: flattening sharply
The limit of clock frequency

• Speed of light = 3 × 10^8 m/s
• One cycle at 4 GHz frequency = 1/(4 × 10^9) s = 0.25 × 10^-9 s
• The distance that light can travel in one cycle:
• speed × time = 3 × 10^8 m/s × 0.25 × 10^-9 s = 0.075 m = 7.5 cm

Intel chip dimensions = 1.47 in × 1.47 in
= 3.73 cm × 3.73 cm

Not much room left for increasing the
frequency!
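The arithmetic above can be checked with a few lines of Python (a minimal sketch; 3 × 10^8 m/s is the slide's rounded value for the speed of light):

```python
# How far light travels during one clock cycle, per the slide's arithmetic.
SPEED_OF_LIGHT_M_S = 3e8   # m/s, rounded as on the slide

def distance_per_cycle_cm(freq_hz):
    """Distance light covers in one clock period, in centimetres."""
    period_s = 1.0 / freq_hz
    return SPEED_OF_LIGHT_M_S * period_s * 100  # metres -> centimetres

print(distance_per_cycle_cm(4e9))   # -> 7.5 (cm per cycle at 4 GHz)
```

At 4 GHz a signal can cross a 3.73 cm chip only about twice per cycle, so pushing the clock much higher would leave signals unable to traverse the die in time.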
Possible Solutions

a. Distributed Data Communications
• Data may be collected and stored at different
locations
• It is expensive to bring the data to a central location
for processing
• Many computing tasks may be inherently
parallel
• Privacy issues arise in data mining and other large-scale
commercial database manipulations
Distributed Data Communications
Why Parallel Computing

• Save time – many processors work together
• Solve larger problems – larger than one processor's
CPU and memory can handle
• Provide concurrency – do multiple things at the same
time: online access to databases, search engines
• Google's 4,000-PC server cluster is one of the largest
clusters in the world
Why Parallel Computing? (contd..)

• To solve larger problems
– many applications need significantly more memory
than a regular PC can provide
• To solve problems faster
– despite many advances in computer hardware
technology, many applications are running slower and
slower
• e.g. databases having to handle more and more data
• e.g. large simulations working toward even more accurate
solutions
Serial Execution
Parallel Computing:

• In the simplest sense, parallel computing is the
simultaneous use of multiple compute resources to
solve a computational problem:
• A problem is broken into discrete parts that can
be solved concurrently
• Each part is further broken down into a series of
instructions
• Instructions from each part execute
simultaneously on different processors
• An overall control/coordination mechanism is
employed
Parallel Execution
The Inherent Need for Speed

• We want things done fast. If we can get it by the end of the
week, we actually want it tomorrow. If we can get it
tomorrow, we would really like it today. Let's face it, we're a
society that doesn't like to wait.
• Just think about the last time you stood in line at a fast food
restaurant and had to wait for more than a couple of
minutes for your order.
• This idea extends to other things like the weather. We
routinely check the hourly forecast to see what the weather
will be like on our commute to and from work. We expect
that there is a computer, behind the scenes, providing this
information.
• But did you know that a single computer is often not up to
the task?
• That is where the idea of parallel computing comes in.
Parallel Computing

• In simple terms, parallel computing is breaking up a task
into smaller pieces and executing those pieces at the same
time, each on its own processor or on a set of computers
that have been networked together. Let's look at a simple
example. Say we have the following equation:
• Y = (4 x 5) + (1 x 6) + (5 x 3)
• On a single processor, the steps needed to calculate a value
for Y might look like:
• Step 1: Y = 20 + (1 x 6) + (5 x 3)
• Step 2: Y = 20 + 6 + (5 x 3)
• Step 3: Y = 20 + 6 + 15
• Step 4: Y = 26 + 15
• Step 5: Y = 41
Parallel Computing (contd..)

• But in a parallel computing scenario, with three
processors or computers, the steps look something
like:
• Step 1: Y = 20 + 6 + 15
• Step 2: Y = 41
• Now, this is a simple example, but the idea is clear.
Break the task down into pieces and execute those
pieces simultaneously.
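The two-step parallel evaluation above can be sketched in Python. This is a minimal illustration using the standard library's ProcessPoolExecutor; the function and variable names are ours, not from the lecture:

```python
# Parallel evaluation of Y = (4 x 5) + (1 x 6) + (5 x 3):
# Step 1: each multiplication runs on its own worker process.
# Step 2: a single reduction adds the partial results.
from concurrent.futures import ProcessPoolExecutor
from operator import mul

def parallel_y(pairs):
    """Compute sum(a * b for a, b in pairs), products done in parallel."""
    with ProcessPoolExecutor(max_workers=len(pairs)) as pool:
        # zip(*pairs) splits [(4,5), (1,6), (5,3)] into (4,1,5) and (5,6,3)
        partials = list(pool.map(mul, *zip(*pairs)))  # [20, 6, 15]
    return sum(partials)                              # 41

if __name__ == "__main__":
    print(parallel_y([(4, 5), (1, 6), (5, 3)]))  # -> 41
```

Note the coordination mechanism: the main process scatters the work, then gathers and combines the partial results, mirroring the two steps on the slide.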
Parallel Vs Distributed Computing

• Parallel computing is a computation type in which multiple
processors execute multiple tasks simultaneously.
• Distributed computing is a computation type in which
networked computers communicate and coordinate the
work through message passing to achieve a common goal.
Number of Computers Required
• Parallel computing occurs on one system.
• Distributed computing occurs between multiple systems.
Processing Mechanism
• In parallel computing multiple processors perform processing.
• In distributed computing, computers rely on message passing.
Parallel Vs Distributed Computing (contd..)
Memory
• In parallel computing, computers can have shared memory or
distributed memory.
• In distributed computing, each computer has its own memory.
Usage
• Parallel computing is used to increase performance and for scientific
computing.
• Distributed computing is used to share resources and to increase
scalability.
Synchronization
• In parallel computing, all processors share a single master clock for
synchronization.
• In distributed computing, there is no global clock; synchronization
algorithms are used instead.
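The memory contrast above can be illustrated with a toy sum in Python. This is a hedged sketch, not from the lecture: threads stand in for shared-memory parallel processors, and a multiprocessing queue stands in for message passing over a network:

```python
# Two styles of summing [[1,2], [3,4], [5,6]] chunk-by-chunk.
import threading
import multiprocessing as mp

def shared_memory_sum(chunks):
    """'Parallel' style: workers write into one shared address space."""
    results = [0] * len(chunks)          # shared list, visible to all threads
    def work(i):
        results[i] = sum(chunks[i])
    threads = [threading.Thread(target=work, args=(i,))
               for i in range(len(chunks))]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results)

def _worker(chunk, queue):
    queue.put(sum(chunk))                # "send" the partial result as a message

def message_passing_sum(chunks):
    """'Distributed' style: workers share nothing and only exchange messages."""
    queue = mp.Queue()
    procs = [mp.Process(target=_worker, args=(c, queue)) for c in chunks]
    for p in procs: p.start()
    total = sum(queue.get() for _ in chunks)
    for p in procs: p.join()
    return total

if __name__ == "__main__":
    data = [[1, 2], [3, 4], [5, 6]]
    print(shared_memory_sum(data), message_passing_sum(data))  # -> 21 21
```

Both produce the same answer, but only the second would still work if the workers ran on separate machines, since it never assumes a shared memory.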
Task 1
The fastest computer in the world today

• What is its name?

• Where is it located?

• How many processors does it have?

• What kind of processors?

• How fast is it?


The fastest computer in the world today
• What is its name? IBM Summit (a supercomputer)
• Uses: The Summit supercomputer provides scientists and researchers the opportunity to
solve complex tasks in the fields of energy, artificial intelligence, human health and other
research areas. It has been used in Earthquake Simulation, Extreme Weather simulation
using AI, Material science, Genomics and in predicting the lifetime of Neutrinos in physics

• Where is it located? United States

• How many processors does it have (architecture)? 4,611,236 processor cores

• What kind of processors? 9216 POWER9 22-core CPUs; 27,648 Nvidia Tesla V100
GPUs

• How fast is it? 122.3 petaFLOPS

• (In computing, floating-point operations per second (FLOPS, flops, or
flop/s) is a measure of computer performance, useful in fields of scientific
computing that require floating-point calculations. For such workloads it is a
more meaningful measure than instructions per second.)
