Lecture 1: Distributed and Parallel Computing (CSE423)
CSE 423
Parallel and Distributed Systems
Parallel Systems
• DEFINITION: A system is said to be a parallel system when multiple processors have direct access to shared memory that forms a common address space.
• Usually, tightly coupled systems are referred to as parallel systems. In these systems there is a single, system-wide primary memory that is shared by all the processors.
• Parallel computing is the use of two or more
processors in combination to solve a single problem.
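A minimal sketch of this definition in C with POSIX threads (all names here are illustrative, not from the slides): several threads stand in for processors, each writing its own slice of one array that lives in a single, common address space.

/* Shared-memory parallel system sketch: threads act as processors,
 * all with direct access to one common address space. */
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4
#define N 1000

static double shared_mem[N];          /* single system-wide primary memory */

static void *worker(void *arg) {
    long id = (long)arg;              /* each "processor" gets an id */
    /* each thread fills its own interleaved slice of the shared array */
    for (long i = id; i < N; i += N_THREADS)
        shared_mem[i] = 2.0 * i;
    return NULL;
}

int main(void) {
    pthread_t t[N_THREADS];
    for (long i = 0; i < N_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < N_THREADS; i++)
        pthread_join(t[i], NULL);
    printf("shared_mem[%d] = %.1f\n", N - 1, shared_mem[N - 1]);  /* 1998.0 */
    return 0;
}

Compile with gcc -pthread; because each thread writes disjoint elements, no locking is needed here.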
Parallel System
Applications of Parallel System
• An example of parallel computing would be two servers that share the workload of routing mail, managing connections to an accounting system or database, etc.
• Supercomputers are usually built with a parallel system architecture.
• Terminals connected to a single server.
Advantages of Parallel System
• Provide concurrency (do multiple things at the same time)
• Take advantage of non-local resources
• Cost savings: save both time and money
• Overcome memory constraints
• A global address space provides a user-friendly programming perspective on memory
Disadvantages of Parallel System
• The primary disadvantage is the lack of scalability between memory and CPUs.
• The programmer is responsible for the synchronization constructs that ensure “correct” access to global memory (see the sketch after this list).
• It becomes increasingly difficult and expensive to design and produce shared-memory machines with an ever-increasing number of processors.
• Reliability and fault tolerance become harder to provide.
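A small sketch of that synchronization burden, again assuming POSIX threads: without the mutex, the two threads would race on the shared counter and the final value would be unpredictable.

/* Programmer-supplied synchronization ensuring "correct" access
 * to global memory. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* global shared memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* serialize the update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, add_many, NULL);
    pthread_create(&b, NULL, add_many, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);        /* always 200000 */
    return 0;
}

With the lock in place the program always prints 200000; removing the lock/unlock calls makes the result vary from run to run.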
Why Parallel Computing?
• Traditionally, software has been written for serial
computation:
– To be run on a single computer having a single CPU;
– A problem is broken into a discrete series of instructions.
– Instructions are executed one after another.
– Only one instruction may execute at any moment in time.
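For instance, a serial C version of this model: one CPU, one instruction stream, each step finishing before the next begins.

/* Serial computation: instructions execute one after another. */
#include <stdio.h>

int main(void) {
    double a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    for (int i = 0; i < 4; i++)   /* one element per step, in order */
        c[i] = a[i] + b[i];
    printf("c[3] = %.1f\n", c[3]);  /* 44.0 */
    return 0;
}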
Serial Problem…
Parallel Computing…
• Parallel computing is a form of computation in which many
calculations are carried out simultaneously.
• In the simplest sense, it is the simultaneous use of multiple
compute resources to solve a computational problem:
– To be run using multiple CPUs
– A problem is broken into discrete parts that can be solved
concurrently
– Each part is further broken down into a series of instructions
– Instructions from each part execute simultaneously on
different CPUs.
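A sketch of the same loop decomposed into discrete parts that run concurrently, here using an OpenMP parallel-for (compile with, e.g., gcc -fopenmp); the chunking strategy shown is just one possible choice.

/* Parallel computation: the iterations are split into parts and
 * executed simultaneously on different CPUs. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* each thread receives a chunk of the iterations and runs its
     * instructions at the same time as the others */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %.1f\n", c[N - 1]);
    return 0;
}

Without -fopenmp the pragma is simply ignored and the loop runs serially, which is a convenient property of this style.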
Parallel Problem
Why use Parallel Computing?
Single Instruction, Single Data (SISD):
Single instruction:
• Only one instruction stream is being acted on by the CPU during any one clock cycle.
Single data:
• Only one data stream is being used as input during any one clock cycle (e.g. A=B*2).
[Figure: the instruction stream executing one step at a time along a time axis: load B, C=A+B, store C]
Single Instruction, Multiple Data (SIMD):
A type of parallel computer
Single instruction:
• All processing units execute the same instruction at any given
clock cycle
Multiple data:
• Each processing unit can operate on a different data element.
• Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.
• Two varieties: processor arrays and vector pipelines.
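A rough SIMD illustration in C: the OpenMP simd pragma asks the compiler to emit vector instructions, so one instruction (A = B * 2) operates on several data elements per clock. This is a sketch under that assumption, not the only way to reach vector hardware.

/* SIMD: all vector lanes execute the same instruction on
 * different data elements.  Compile with gcc -fopenmp-simd. */
#include <stdio.h>

#define N 1024

int main(void) {
    float a[N], b[N];
    for (int i = 0; i < N; i++) b[i] = (float)i;

    #pragma omp simd              /* one instruction, many elements */
    for (int i = 0; i < N; i++)
        a[i] = b[i] * 2.0f;

    printf("a[%d] = %.1f\n", N - 1, a[N - 1]);   /* 2046.0 */
    return 0;
}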
Multiple Instruction, Single Data (MISD):
• A single data stream is fed into multiple processing units.
• Each processing unit operates on the data independently via
independent instruction streams.
• Some conceivable uses might be:
o Multiple frequency filters operating on a single signal stream
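True MISD machines are rare, but the filter idea above can be mimicked in software. In this hypothetical sketch, two threads run different instruction streams (a deliberately trivial low-pass and high-pass filter) over the same read-only signal.

/* MISD-style sketch: one data stream, two independent
 * instruction streams operating on it. */
#include <pthread.h>
#include <stdio.h>

#define N 8
static const double sig[N] = {1, 3, 2, 5, 4, 6, 5, 7};  /* single stream */
static double low[N], high[N];

static void *low_pass(void *arg) {     /* instruction stream 1 */
    (void)arg;
    low[0] = sig[0];
    for (int i = 1; i < N; i++)
        low[i] = 0.5 * (sig[i] + sig[i - 1]);
    return NULL;
}

static void *high_pass(void *arg) {    /* instruction stream 2 */
    (void)arg;
    high[0] = 0.0;
    for (int i = 1; i < N; i++)
        high[i] = sig[i] - sig[i - 1];
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, low_pass, NULL);
    pthread_create(&t2, NULL, high_pass, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("low[1]=%.1f high[1]=%.1f\n", low[1], high[1]);  /* 2.0 2.0 */
    return 0;
}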
Multiple Instruction, Multiple Data (MIMD):
• Multiple Instruction: Every processor may be executing a
different instruction stream.
• Multiple Data: Every processor may be working with a different
data stream.
• Execution can be synchronous or asynchronous, deterministic or non-deterministic.
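A small MIMD-style sketch, assuming POSIX threads (names are illustrative): each thread executes a different function on different data, and the two run asynchronously.

/* MIMD: different instruction streams, different data streams. */
#include <pthread.h>
#include <stdio.h>

static void *sum_task(void *arg) {        /* instruction stream 1 */
    int *v = arg;
    long s = 0;
    for (int i = 0; i < 4; i++) s += v[i];
    printf("sum = %ld\n", s);             /* 10 */
    return NULL;
}

static void *scale_task(void *arg) {      /* instruction stream 2 */
    double *v = arg;
    for (int i = 0; i < 4; i++) v[i] *= 2.0;
    printf("scaled v[0] = %.1f\n", v[0]); /* 1.0 */
    return NULL;
}

int main(void) {
    int ints[4]       = {1, 2, 3, 4};
    double doubles[4] = {0.5, 1.5, 2.5, 3.5};
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_task, ints);
    pthread_create(&t2, NULL, scale_task, doubles);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

The two print statements may appear in either order from run to run, which is exactly the non-determinism the slide mentions.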
THANK YOU
&
ANY QUERIES?