
PARALLEL AND DISTRIBUTED COMPUTING
WHAT IS COMPUTING?

 Computing is the process of using computer technology to complete a given goal-oriented task.
 It includes the study and experimentation of algorithmic processes and the development of both hardware and software.
 Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering.
SERIAL COMPUTATION:

 Traditionally, software has been written for serial computation:
 To be run on a single computer having a single Central Processing Unit (CPU)
 A problem is broken into a discrete set of instructions
 Instructions are executed one after another
 Only one instruction can be executed at any moment in time
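The serial model can be sketched in a few lines of Python; the task (summing squares) is a hypothetical example chosen for illustration:

```python
# Serial computation: instructions execute one after another.
# The problem (summing squares) is broken into discrete steps,
# and only one step runs at any moment in time, on one CPU.

def sum_of_squares(numbers):
    total = 0
    for n in numbers:       # each iteration executes in sequence
        total += n * n      # a single instruction stream
    return total

result = sum_of_squares(range(10))
print(result)  # 285
```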
PARALLEL COMPUTING:

 In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
 To be run using multiple CPUs
 A problem is broken into discrete parts that can be solved concurrently
 Each part is further broken down into a series of instructions
 Instructions from each part execute simultaneously on different CPUs
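As a sketch of this decomposition, Python's multiprocessing module can run the parts of the same hypothetical sum-of-squares task in separate worker processes; the chunk sizes and function names are illustrative:

```python
# Parallel computation: the problem is broken into discrete parts,
# and each part runs in its own worker process (its own CPU).
from multiprocessing import Pool

def part_sum(chunk):
    # each part is itself a serial series of instructions
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    chunks = [range(0, 5), range(5, 10)]       # break the problem apart
    with Pool(processes=2) as pool:
        partials = pool.map(part_sum, chunks)  # parts execute concurrently
    print(sum(partials))  # 285
```

The partial results are combined at the end, which is the typical structure of an "embarrassingly parallel" decomposition.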
PARALLEL COMPUTERS:

 Virtually all stand-alone computers today are parallel from a hardware perspective:
 Multiple functional units (floating point, integer, GPU, etc.)
 Multiple execution units / cores
 Multiple hardware threads
PARALLEL COMPUTERS:

 Networks connect multiple stand-alone computers (nodes) to create larger parallel computer clusters.
 Each compute node is a multi-processor parallel computer in itself.
 Special-purpose nodes, also multi-processor, are used for other purposes.
DISTRIBUTED COMPUTING

 A distributed computer system consists of multiple software components that reside on multiple computers but run as a single system.
 The computers in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network.
 A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers, and so on.
 The goal of distributed computing is to make such a network work as a single computer.
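A minimal sketch of the idea: two "nodes" (simulated here as threads on one machine) cooperate purely by message passing over a TCP socket, yet behave as one system. The message format ("lo,hi") and the sub-problem are invented for illustration:

```python
# Distributed-style computing in miniature: a coordinator sends a
# sub-problem to a worker node over a socket and receives the result.
import socket
import threading

def worker(srv):
    # the worker node: receive a sub-problem, compute, reply
    conn, _ = srv.accept()
    with conn:
        lo, hi = map(int, conn.recv(1024).decode().split(","))
        conn.sendall(str(sum(n * n for n in range(lo, hi))).encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
t = threading.Thread(target=worker, args=(srv,))
t.start()

with socket.socket() as cli:
    cli.connect(srv.getsockname())
    cli.sendall(b"0,10")                   # message: the sub-problem
    result = int(cli.recv(1024).decode())  # message: the answer
t.join()
srv.close()
print(result)  # 285
```

In a real deployment the worker would run on a different machine, but the communication pattern (explicit messages over a network) is the same.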
 Parallel computing focuses on using multiple processors or cores within a single computer to solve a problem, while distributed computing focuses on using multiple computers connected by a network to solve a problem.
 The choice between the two approaches depends on the specific requirements of the problem being solved, including the amount of data being processed, the level of computational power required, and the nature of the computation itself.
 Both parallel and distributed computing have benefits and drawbacks, so the selection depends on the system's unique needs and limitations. Parallel computing can achieve high performance with low latency and high bandwidth, while distributed computing can achieve fault tolerance and scalability with geographic dispersion.
DIFFERENCE BETWEEN PARALLEL COMPUTING AND DISTRIBUTED COMPUTING:

S.NO | Parallel Computing | Distributed Computing
1. | Many operations are performed simultaneously | System components are located at different locations
2. | A single computer is required | Uses multiple computers
3. | Multiple processors perform multiple operations | Multiple computers perform multiple operations
4. | May have shared or distributed memory | Has only distributed memory
5. | Processors communicate with each other through a bus | Computers communicate with each other through message passing
6. | Improves system performance | Improves system scalability, fault tolerance, and resource-sharing capabilities
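Rows 4 and 5 of the table can be sketched with Python's multiprocessing module, which supports both styles: a shared-memory counter (communication through common state, as on a bus) and a queue (explicit message passing). The worker functions below are illustrative:

```python
# Two communication styles between workers:
#   shared memory (parallel-computing style) vs. message passing
#   (distributed-computing style).
from multiprocessing import Process, Queue, Value

def bump_shared(counter):
    with counter.get_lock():    # shared memory: update common state
        counter.value += 1

def send_message(q, payload):
    q.put(payload)              # message passing: send data explicitly

if __name__ == "__main__":
    counter = Value("i", 0)     # shared integer, initially 0
    q = Queue()
    workers = [Process(target=bump_shared, args=(counter,)),
               Process(target=send_message, args=(q, "done"))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value, q.get())  # 1 done
```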
WHY USE PARALLEL COMPUTING?

 Parallel computing is complex in every aspect!
 The primary reasons for using parallel computing:
 Save time - wall-clock time
 Solve larger problems
 Provide concurrency (do multiple things at the same time)
WHY USE PARALLEL COMPUTING?

 Other reasons might include:
 Taking advantage of non-local resources - using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
 Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer.
 Overcoming memory constraints - single computers have very finite memory resources. For large problems, using the memories of multiple computers may overcome this obstacle.
LIMITATIONS OF SERIAL COMPUTING

 Limits to serial computing:
 Both physical and practical reasons pose significant constraints to simply building ever faster serial computers.
 Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
 Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
 Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
