Parallel Computer Architecture
• Huge developments in the performance and
capability of computer systems
• VLSI allows a large number of components to be
accommodated on a single chip and clock rates to
increase
• As a result, more operations can be performed at a
time, in parallel.
What is parallel computing?
• simultaneous use of multiple compute resources to
solve a computational problem
• A problem is broken into discrete parts that can be
solved concurrently
• Each part is further broken down to a series of
instructions
• Instructions from each part execute simultaneously
on different processors
• An overall control/coordination mechanism is
employed
parallel computing
• The computational problem should be able to:
• Be broken apart into discrete pieces of work
that can be solved simultaneously;
• Execute multiple program instructions at any
moment in time;
• Be solved in less time with multiple compute
resources than with a single compute resource.
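The decomposition described above can be sketched with Python's standard `multiprocessing` module. This is an illustrative example, not part of the original slides; the names `parallel_sum` and `part_sum` are made up for the sketch:

```python
from multiprocessing import Pool

def part_sum(chunk):
    # Each discrete part executes the same series of
    # instructions on its own slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Break the problem into discrete pieces of work...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...solve the pieces simultaneously on multiple processes,
    # then combine the partial results (the coordination step).
    with Pool(workers) as pool:
        return sum(pool.map(part_sum, chunks))

if __name__ == "__main__":
    # Same answer as sum(range(1000)), but the work is
    # divided among several processes.
    print(parallel_sum(list(range(1000))))
```

Note that this problem satisfies all three conditions: the data splits into independent chunks, the chunks are summed at the same moment in time, and with enough data the wall-clock time drops below that of a single process.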
parallel computing
• The compute resources are typically:
• A single computer with multiple
processors/cores
• An arbitrary number of such computers
connected by a network
Parallel Computer Architecture
• the method of organizing all the resources to
maximize the performance and the programmability
within the limits given by technology and the cost
at any instance of time
• The execution of these operations in parallel
depends on where the data is located and how the
data is communicated among the different
components
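A minimal sketch of why data location and communication matter: when the parts run in separate processes, data in one component must be communicated explicitly to another. This hypothetical example uses `multiprocessing.Queue` for the communication; the names `worker` and `squares` are assumptions for illustration:

```python
from multiprocessing import Process, Queue

def worker(inq, outq):
    # Data located in the parent process is not directly
    # visible here; the queues carry it between components.
    for item in iter(inq.get, None):  # None is the stop sentinel
        outq.put(item * item)

def squares(values):
    inq, outq = Queue(), Queue()
    p = Process(target=worker, args=(inq, outq))
    p.start()
    for v in values:
        inq.put(v)          # communicate inputs to the worker
    inq.put(None)           # signal end of input
    results = [outq.get() for _ in values]  # collect outputs
    p.join()
    return results
```

The cost of moving each item through the queues is exactly the communication overhead the slide refers to; an architecture that keeps data close to the component that uses it pays less of it.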
Why Parallel Architecture?
• develops computer systems by using more and
more processors
• the performance achieved by utilizing a large
number of processors is higher than that of a
single processor at a given point in time
Main Reasons
• save time and/or money
• solve larger / more complex problems
• provide concurrency
• take advantage of non-local resources