10 Parallel Computing
4311 CS

Parallel computing involves using multiple processors simultaneously to solve computational problems. It works by breaking problems into discrete parts that can be solved concurrently on different CPUs, allowing problems to be solved faster than on a single CPU. Parallel computing is useful for applications involving large datasets or complex simulations that can benefit from distributed processing power. The limitations of serial computing, along with the increasing availability of affordable multi-core processors, mean that parallelism is becoming central to modern computing.
What is Parallel Computing? (1)
• Traditionally, software has been written for serial computation (illustrated in the sketch after this list):
  • To be run on a single computer having a single Central Processing Unit (CPU);
  • A problem is broken into a discrete series of instructions;
  • Instructions are executed one after another;
  • Only one instruction may execute at any moment in time.
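
As a concrete illustration (not part of the original slides), here is a minimal Python sketch of the serial model: one CPU, one instruction stream, each piece of work starting only after the previous one finishes. The function expensive_task is a hypothetical stand-in for real work.

# Serial computation: a single CPU works through the tasks
# one after another; only one task is in progress at any time.

def expensive_task(n):
    """Hypothetical stand-in for a unit of real work."""
    return sum(i * i for i in range(n))

def run_serial(tasks):
    results = []
    for n in tasks:                      # strictly one after another
        results.append(expensive_task(n))
    return results

if __name__ == "__main__":
    print(run_serial([10_000, 20_000, 30_000, 40_000]))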
What is Parallel Computing? (2)
• In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem (illustrated in the sketch after this list):
  • To be run using multiple CPUs;
  • A problem is broken into discrete parts that can be solved concurrently;
  • Each part is further broken down to a series of instructions;
  • Instructions from each part execute simultaneously on different CPUs.
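
And a matching sketch of the parallel model, using Python's standard multiprocessing module to execute the discrete parts simultaneously on different CPUs (expensive_task is the same hypothetical stand-in as in the serial sketch):

# Parallel computation: the problem is broken into discrete parts
# that execute simultaneously in separate worker processes.
from multiprocessing import Pool

def expensive_task(n):
    """Hypothetical stand-in for a unit of real work."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [10_000, 20_000, 30_000, 40_000]
    with Pool() as pool:                 # one worker per CPU by default
        results = pool.map(expensive_task, tasks)
    print(results)

Pool() defaults to one worker process per available CPU, so on a multi-core machine the four parts genuinely run at the same time.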
Parallel Computing: Resources
• The compute resources can include:
  • A single computer with multiple processors (see the snippet after this list);
  • A single computer with (multiple) processor(s) and some specialized compute resources (GPU, FPGA, ...);
  • An arbitrary number of computers connected by a network;
  • A combination of the above.
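
For example, Python's standard library can report how many CPUs a single computer exposes (specialized resources such as GPUs or FPGAs are managed through vendor-specific libraries and are not visible to this call):

import os

# Number of logical CPUs the operating system reports on this machine.
print("CPUs available:", os.cpu_count())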
Parallel Computing: The Computational Problem
• The computational problem usually demonstrates characteristics such as the ability to:
  • Be broken apart into discrete pieces of work that can be solved simultaneously;
  • Execute multiple program instructions at any moment in time;
  • Be solved in less time with multiple compute resources than with a single compute resource.
Parallel Computing: what for? (1)
• Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence.
• Some examples:
  • Planetary and galactic orbits
  • Weather and ocean patterns
  • Tectonic plate drift
  • Rush hour traffic in Paris
  • Automobile assembly lines
  • Daily operations within a business
  • Building a shopping mall
  • Ordering a hamburger at the drive-through.
Parallel Computing: what for? (2)
• Traditionally, parallel computing has been considered to be "the high end of computing" and has been motivated by numerical simulations of complex systems and "Grand Challenge Problems" such as:
  • weather and climate
  • chemical and nuclear reactions
  • biological, human genome
  • geological, seismic activity
  • mechanical devices - from prosthetics to spacecraft
  • electronic circuits
  • manufacturing processes
Parallel Computing: what for? (3)
• Today, commercial applications are providing an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. Example applications include:
  • parallel databases, data mining
  • oil exploration
  • web search engines, web-based business services
  • computer-aided diagnosis in medicine
  • management of national and multi-national corporations
  • advanced graphics and virtual reality, particularly in the entertainment industry
  • networked video and multimedia technologies
  • collaborative work environments
• Ultimately, parallel computing is an attempt to maximize the infinite but seemingly scarce commodity called time.
Why Parallel Computing? (1)
• This is a legitimate question! Parallel computing is complex in every aspect!
• The primary reasons for using parallel computing:
  • Save time - wall-clock time (see the timing sketch after this list)
  • Solve larger problems
  • Provide concurrency (do multiple things at the same time)
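
A rough sketch of how the wall-clock-time claim can be checked on a multi-core machine, timing the serial and parallel versions from the earlier sketches. The task size and measured speedup are illustrative only; process start-up overhead can swamp the gain for very short tasks.

import time
from multiprocessing import Pool

def expensive_task(n):
    return sum(i * i for i in range(n))

def timed(label, fn):
    start = time.perf_counter()          # wall-clock time
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f} s")

if __name__ == "__main__":
    tasks = [2_000_000] * 8
    timed("serial  ", lambda: [expensive_task(n) for n in tasks])
    with Pool() as pool:
        timed("parallel", lambda: pool.map(expensive_task, tasks))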
Why Parallel Computing? (2)
• Other reasons might include:
  • Taking advantage of non-local resources - using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
  • Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer.
  • Overcoming memory constraints - single computers have very finite memory resources; for large problems, using the memories of multiple computers may overcome this obstacle.
Limitations of Serial Computing
• Limits to serial computing - both physical and practical reasons pose significant constraints on simply building ever faster serial computers.
• Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements (see the worked example after this list).
• Limits to miniaturization - processor technology allows an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
• Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
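
As an illustrative back-of-the-envelope calculation (the clock rate is a hypothetical example; the 30 cm/ns figure is from the bullet above): at a 3 GHz clock, one cycle lasts about 0.33 ns, so even a signal travelling at the speed of light covers at most 30 cm/ns × 0.33 ns ≈ 10 cm per cycle. Processing elements that must exchange data every cycle therefore have to sit within roughly ten centimetres of one another - hence the proximity constraint.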
The future
• During the past 10 years, the trends indicated by ever faster networks, distributed systems, and multi-processor computer architectures (even at the desktop level) have clearly shown that parallelism is the future of computing.
• It will take many forms, mixing general-purpose solutions (your PC...) with very specialized solutions such as IBM Cell, ClearSpeed, and GPGPU from NVIDIA...
Who and What? (1)
• Top500.org provides statistics on parallel computing users; the charts in the original slides (not reproduced here) are just a sample. Some things to note:
  • Sectors may overlap - for example, research may also be classified research, but respondents have to choose between the two.
  • "Not Specified" is by far the largest application - this probably means multiple applications.
Von Neumann Architecture
• For over 40 years, virtually all computers have followed a common machine model known as the von Neumann computer, named after the Hungarian mathematician John von Neumann.
• A von Neumann computer uses the stored-program concept: the CPU executes a stored program that specifies a sequence of read and write operations on the memory.
Basic Design
• Memory is used to store both program instructions and data:
  • Program instructions are coded data which tell the computer to do something.
  • Data is simply information to be used by the program.
• A central processing unit (CPU) gets instructions and/or data from memory, decodes the instructions, and then performs them sequentially (see the sketch below).
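
To make the stored-program concept concrete, here is a toy sketch of a von Neumann-style fetch-decode-execute loop in Python. The instruction set is invented purely for illustration; real machines encode instructions as binary words, but the structure - program and data in one shared memory, executed sequentially by the CPU - is the same.

# Toy von Neumann machine: program and data live in the same memory,
# and the CPU fetches, decodes, and executes one instruction at a time.
# The instruction set below is invented purely for illustration.

memory = [
    ("LOAD", 6),    # 0: acc = memory[6]
    ("ADD", 7),     # 1: acc += memory[7]
    ("STORE", 8),   # 2: memory[8] = acc
    ("PRINT", 8),   # 3: print memory[8]
    ("HALT", 0),    # 4: stop
    0,              # 5: (unused)
    2,              # 6: data
    3,              # 7: data
    0,              # 8: result goes here
]

pc, acc = 0, 0                      # program counter and accumulator
while True:
    op, arg = memory[pc]            # fetch and decode
    pc += 1                         # advance to the next instruction
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "PRINT":
        print(memory[arg])
    elif op == "HALT":
        break                       # sequential execution ends

Running it prints 5: the CPU fetches each instruction from the same memory that holds the data, decodes it, and performs it, one instruction at a time.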
