Parallel Computers

Parallel computing involves solving computational problems simultaneously using multiple processors rather than a single processor. It can be implemented using shared-memory or message-passing architectures. Shared-memory systems use a common memory space that processors communicate through, while message-passing systems have separate memory for each node and use message passing for communication. Flynn's taxonomy classifies computer architectures based on their instruction and data streams as SISD, SIMD, MISD, or MIMD. Parallel computing can reduce computation time for large problems, enable concurrent processing, and provide cost savings over serial computing.


Presentation On
Parallel Computing

Presented By:
Dhananjay D. Patil (BE-

What is Parallel Computing?

Traditionally, software has been written for serial computation:
1. To be run on a single computer having a single Central Processing Unit (CPU).
2. A problem is broken into a discrete series of instructions.
3. Instructions are executed one after another.
4. Only one instruction may execute at any moment in time.

Serial Computing
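The four points above can be sketched in a few lines of Python (the deck itself contains no code; the function name and task are illustrative only): a single interpreter executes a discrete series of instructions one after another.

```python
# A minimal sketch of serial computation: one "CPU" (the Python
# interpreter) executes a discrete series of instructions one
# after another -- only one instruction runs at any moment.

def serial_sum_of_squares(numbers):
    total = 0
    for n in numbers:        # instructions execute one after another
        total += n * n       # only one instruction at a time
    return total

print(serial_sum_of_squares(range(5)))  # 0 + 1 + 4 + 9 + 16 = 30
```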

Limitations of Serial Computers

Transmission speeds - the speed of a serial computer depends directly on how fast data can move through the hardware.
Economic limitations - it is increasingly expensive to make a single processor faster.

Parallel Computer

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
To be run using multiple CPUs
A problem is broken into discrete parts that can be solved concurrently
Each part is further broken down to a series of instructions
Instructions from each part execute simultaneously on different CPUs

Parallel Computing
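The decomposition described above can be sketched with Python's standard `concurrent.futures` module (an illustrative choice, not something the deck prescribes): the problem is split into discrete parts, and a process pool runs the parts on different CPUs.

```python
# A minimal sketch of parallel computation, assuming a multi-core
# machine: the problem (summing squares) is broken into discrete
# parts, and each part's instructions run on a different CPU.

from concurrent.futures import ProcessPoolExecutor

def part_sum(chunk):
    # each part is itself a series of instructions, executed serially
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(numbers, parts=4):
    chunks = [numbers[i::parts] for i in range(parts)]   # split the problem
    with ProcessPoolExecutor(max_workers=parts) as pool:
        return sum(pool.map(part_sum, chunks))           # parts run concurrently

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(100))))     # 328350, same as serial
```

The answer is identical to the serial version; only the wall-clock time changes, and only when the parts are large enough to outweigh the cost of starting workers.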

Classes of Parallel Computer

1. Shared-Memory Multiprocessor
2. Message-Passing Multicomputer

Shared-Memory Multiprocessor

The processors in a multiprocessor system communicate with each other through shared variables in common memory.
It is a computer system composed of multiple independent processors that execute different instruction streams.
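The shared-memory model can be sketched with Python threads (an illustrative stand-in for processors, since all threads see one address space): the workers communicate through a shared variable, and a lock keeps concurrent updates from being lost.

```python
# A minimal sketch of the shared-memory model: several workers
# communicate through a shared variable in common memory. Threads
# stand in for processors; the lock serializes updates so that
# no increment is lost.

import threading

counter = 0                    # shared variable in common memory
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:             # guard access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every worker saw and updated the same memory
```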

Message-Passing Multicomputer

Each computer node in a multicomputer system has a local memory, unshared with other nodes.
Interprocessor communication is done through message passing among the nodes.
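The message-passing model can be sketched with threads and a queue standing in for nodes and the interconnect (illustrative names; the deck does not specify an implementation): each node works only on its own local data and shares results solely by sending messages.

```python
# A minimal sketch of the message-passing model: each "node" keeps
# its data in local, unshared state and communicates only by
# sending messages. A queue stands in for the interconnect.

import queue
import threading

inbox = queue.Queue()               # the "network" carrying messages

def node(node_id, local_data):
    # local_data is unshared: no other node touches it directly
    partial = sum(local_data)
    inbox.put((node_id, partial))   # communicate by message passing

data_per_node = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
workers = [threading.Thread(target=node, args=(i, d))
           for i, d in data_per_node.items()]
for w in workers:
    w.start()
for w in workers:
    w.join()

total = sum(part for _, part in (inbox.get() for _ in workers))
print(total)  # 21: results combined purely from received messages
```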

Flynn's Classification

Michael Flynn (1972) introduced a classification of computer architectures based on their instruction and data streams.

The matrix below defines the four possible classifications according to Flynn:

                       Single Data    Multiple Data
Single Instruction     SISD           SIMD
Multiple Instruction   MISD           MIMD

Single Instruction, Single Data (SISD)

A serial (non-parallel) computer
Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle
Single data: only one data stream is being used as input during any one clock cycle

Single Instruction, Multiple Data (SIMD)

A type of parallel computer
Single instruction: all processing units execute the same instruction at any given clock cycle
Multiple data: each processing unit can operate on a different data element
Best suited for specialized problems characterized by a high degree of regularity, such as image processing
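The SIMD programming model can be sketched conceptually in Python (real SIMD hardware applies the instruction in vector registers; this pure-Python version only mirrors the idea, and the pixel data is invented for illustration): one instruction is applied to every element of a data vector.

```python
# A conceptual sketch of SIMD -- not real hardware vectorization:
# a single instruction ("multiply by the factor") is applied to
# every element of a data vector, the way an image-processing
# kernel applies the same operation to every pixel.

def simd_scale(vector, factor):
    # same instruction, multiple data elements
    return [x * factor for x in vector]

pixels = [10, 20, 30, 40]       # one data element per processing unit
print(simd_scale(pixels, 2))    # [20, 40, 60, 80]
```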

Multiple Instruction, Single Data (MISD)

A single data stream is fed into multiple processing units.
Each processing unit operates on the data independently via independent instruction streams.

Multiple Instruction, Multiple Data (MIMD)

Currently, the most common type of parallel computer. Most modern computers fall into this category.
Multiple instruction: every processor may be executing a different instruction stream
Multiple data: every processor may be working with a different data stream
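The MIMD combination - different instruction streams on different data streams - can be sketched with Python threads (again an illustrative stand-in for processors; the functions and data are invented): each worker runs a different function on different input, concurrently.

```python
# A minimal sketch of MIMD: each worker executes a *different*
# instruction stream (a different function) on a *different*
# data stream, concurrently. Threads stand in for processors.

import threading

results = {}

def summer(data):
    results["sum"] = sum(data)   # one instruction stream

def maxer(data):
    results["max"] = max(data)   # a different instruction stream

threads = [
    threading.Thread(target=summer, args=([1, 2, 3],)),  # data stream 1
    threading.Thread(target=maxer,  args=([9, 4, 7],)),  # data stream 2
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # {'sum': 6, 'max': 9}
```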

Memory Architecture
Shared Memory
Distributed Memory
Hybrid Distributed-Shared Memory

Why Parallel Computing?

Save time - wall-clock time
Solve larger problems
Provide concurrency (do multiple things at the same time)
Cost savings

Parallel Computing: What For?

parallel databases, data mining
oil exploration
web search engines, web-based business services
computer-aided diagnosis in medicine
management of national and multi-national corporations
networked video and multimedia technologies
collaborative work environments

Thank You!
