
CS621 Parallel and Distributed Computing

Dr. Muhammad Anwaar Saeed
Dr. Said Nabi
Ms. Hina Ishaq
What is Computing?

Objectives
• Introduction to Computing.
• History of Computing.
What is Computing

“Computing is the process of completing a given goal-oriented task by using computer technology.”

Computing may include the design and development of software and hardware systems for a broad range of purposes.
What is Computing Cont…

Computing is used for structuring, processing, and managing any kind of information, to aid in the pursuit of scientific studies and the making of intelligent systems.
History of Computing
• Batch Era
• Time Sharing Era
• Desktop Era
• Network Era
History of Computing Cont…

Batch Era: Execution of a series of programs on a computer without manual intervention.

Time Sharing Era: Sharing of a computing resource among many users by means of multiprogramming and multitasking.

Desktop Era: A personal computer provides computing power to one user.

Network Era: Systems with shared memory and distributed memory.
Serial vs. Parallel Computing
Objectives
• Serial Computing.
• Parallel Computing.
• Difference between serial and parallel computing.
Serial Computing

“Serial computing is a type of processing in which one task is completed at a time and all the tasks are executed by the processor in a sequence.”
Parallel Computing

“Parallel computing is a type of computing architecture in which several processors simultaneously execute multiple, smaller calculations broken down from an overall larger, complex problem.”
Serial vs. Parallel Computing Cont…
Difference between Serial and Parallel Computing
Serial Computing | Parallel Computing
Uniprocessor systems. | Multiprocessor systems.
Can execute one instruction at a time. | Can execute multiple instructions at a time.
Speed is limited by the single processor. | Speed scales with the number of processors.
Lower performance. | Higher performance.
Examples: Pentium 3 and 4. | Examples: Windows 7, 8, and 10.
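To make the contrast concrete, here is a minimal sketch (not from the slides) that computes the same results first serially, then in parallel with Python's standard multiprocessing module; busy_sum is a hypothetical stand-in for any CPU-bound task.

```python
# Serial vs. parallel execution of the same tasks (illustrative sketch).
from multiprocessing import Pool

def busy_sum(n):
    # Stand-in for any CPU-heavy computation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8

    # Serial: one task completes at a time, executed in sequence.
    serial_results = [busy_sum(n) for n in inputs]

    # Parallel: the same tasks are spread across several processes.
    with Pool(processes=4) as pool:
        parallel_results = pool.map(busy_sum, inputs)

    assert serial_results == parallel_results
```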
Introduction to Parallel Computing
Objectives
• Parallel Computing
• Multi-Processor
• Multi-Core
Introduction to Parallel Computing

“Parallel Computing is the simultaneous execution of the same task (split up and adapted) on multiple processors in order to obtain faster results.”
Introduction to Parallel Computing Cont…

It is a kind of computing architecture in which large problems are broken into independent, smaller, usually similar parts that can be processed in one go. This is done by multiple CPUs communicating via shared memory, with results combined upon completion. It helps in performing large computations by dividing the large problem between more than one processor.

Related terms: HPC (High Performance/Productivity Computing), Technical Computing, Cluster Computing.
Introduction to Parallel Computing Cont…

The term parallel computing architecture is sometimes used for a computer with more than one processor available for processing.

Recent multicore processors (chips with more than one processor core) are commercial examples that bring parallel computing to the desktop.
Introduction to Parallel Computing Cont…

Multi-processor: More than one CPU works together to carry out computer instructions or programs.

Multi-core: A microprocessor on a single integrated circuit with two or more separate processing units, called cores, each of which reads and executes program instructions.
Principles of Parallel Computing
Objectives
Principles of Parallel Computing.
Principles of Parallel Computing:
• Finding enough parallelism
• Scaling
• Locality
• Load balance
• Coordination and synchronization
• Performance modeling
Principles of Parallel Computing Cont…

Finding Enough Parallelism: Conventional architectures coarsely comprise a processor, a memory system, and a datapath. Each of these components presents significant performance bottlenecks, and parallelism addresses each of them in significant ways.

Scaling: Parallelism overhead includes the cost of starting a thread, accessing data, communicating shared data, synchronization, and extra computation. Algorithms need sufficiently large units of work to run fast in parallel.

Locality: Parallel processors collectively have large and fast caches. Memory addresses are distributed across the processors, so a processor may have faster access to memory locations mapped locally than to memory locations mapped to other processors.
Principles of Parallel Computing Cont…

Load Balance: With static load balancing the workload is determined and divided up evenly before starting; with dynamic load balancing the workload changes at run time, so the work must be rebalanced dynamically. A sketch of the distinction follows below.

Coordination and Synchronization: Several kinds of synchronization are needed by processes cooperating to perform a computation.

Performance Modeling: More efficient programming models and tools are formulated for massively parallel supercomputers.
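As an illustration of static versus dynamic load balancing (my example, not the course's): multiprocessing.Pool can hand out work in fixed chunks decided up front, or one task at a time as workers become free.

```python
# Static vs. dynamic load balancing with a process pool (illustrative sketch).
from multiprocessing import Pool
import time

def task(duration):
    time.sleep(duration)      # Simulate uneven amounts of work per task.
    return duration

if __name__ == "__main__":
    durations = [0.5, 0.1, 0.1, 0.1, 0.5, 0.1, 0.1, 0.1]

    with Pool(processes=2) as pool:
        # Static: the input is split into fixed chunks before starting,
        # so one worker may be stuck with most of the slow tasks.
        static = pool.map(task, durations, chunksize=4)

        # Dynamic: each worker grabs the next task as soon as it is free,
        # rebalancing the load at run time.
        dynamic = list(pool.imap_unordered(task, durations, chunksize=1))
```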
Why Use Parallel Computing?
Objectives
Identifying the aspects that make parallel computing more useful.
Why Use Parallel Computing?
• Computing power
• Provide concurrency
• Performance
• Solve large problems
• Scalability
Why Use Parallel Computing?

Computing power: Modern consumer-grade computing hardware comes equipped with multiple central processing units (CPUs) and/or graphics processing units (GPUs) that can process many sets of instructions simultaneously.

Performance: Theoretical performance has steadily increased, because performance is proportional to the product of the clock frequency and the number of cores.

Scalability: A problem can be scaled up to sizes that were out of reach with a serial application. The larger problem sizes are enabled by larger amounts of main memory, disk storage, bandwidth over networks and to disk, and CPUs.
Why Use Parallel Computing Cont…

Solve large problems: Large problems are broken down into smaller, independent, often similar parts that can be executed simultaneously by multiple processors communicating via shared memory. This makes it possible to solve problems such as web search engines processing millions of transactions per second.

Cost: The cost of computation is reduced when we deploy parallel computation rather than sequential computation.

Provide concurrency: Parallelism leads naturally to concurrency. For example, several processes may try to print a file on a single printer; a sketch of this appears below.
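The printer example can be sketched with a lock (my illustration, not part of the course material): only the process holding the lock may "print" at any moment, so concurrent jobs take turns.

```python
# Several processes contending for one shared printer (illustrative sketch).
from multiprocessing import Process, Lock

def print_job(printer_lock, job_id):
    with printer_lock:                  # Only one process prints at a time.
        print(f"printing job {job_id}")

if __name__ == "__main__":
    lock = Lock()
    jobs = [Process(target=print_job, args=(lock, i)) for i in range(5)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()
```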
Introduction to Distributed Computing
Objectives
Detailed explanation of distributed computing.
Introduction to Distributed Computing

“A distributed system is a collection of independent computers that appears to its users as a single coherent system.”
Introduction to Distributed Computing Cont…

In distributed computing:
• We have multiple autonomous computers which appear to the user as a single system.
• There is no shared memory; computers communicate with each other through message passing.
• A single task is divided among different computers.

Distributed computing uses or coordinates physically separate computing resources:
• Grid computing
• Cloud computing
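Here is a toy sketch of two independent processes cooperating with no shared memory at all, only message passing over a socket (an assumed example using Python's standard library; the port number 50007 is arbitrary).

```python
# Two independent processes communicating purely by message passing (sketch).
import socket
import time
from multiprocessing import Process

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", 50007))    # Arbitrary port for the demo.
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)      # A message arrives; no memory is shared.
            conn.sendall(b"result:" + data.upper())

def client():
    time.sleep(0.5)                     # Crude wait for the server to start.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("localhost", 50007))
        s.sendall(b"task")
        print(s.recv(1024))             # -> b'result:TASK'

if __name__ == "__main__":
    p = Process(target=server)
    p.start()
    client()
    p.join()
```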
Why Use Distributed Computing?
Objectives
Identifying the aspects that make distributed computing more useful.
Why Use Distributed Computing?
• Heterogeneity
• Distribution transparency
• Resources accessibility
• Scalability
• Openness
• Fault tolerance, error recovery
• Concurrency
• Resiliency
Why Use Distributed Computing

Heterogeneity: The Internet enables users to access services and run applications over a heterogeneous collection of computers and networks.

Distribution Transparency: A distributed system that is able to present itself to users and applications as if it were only a single computer system is said to be transparent.

Openness: An open distributed system is a system that offers services according to standard rules that describe the syntax and semantics of those services.

Resiliency: With multiple computers, redundancies are implemented to ensure that a single failure doesn't equate to system-wide failure.
Why Use Distributed Computing Cont…

Scalability: Scalability of a distributed system can be measured along at least three different dimensions: a system can be scalable with respect to its size, geographically scalable, and administratively scalable.

Resources accessibility: The main goal of a distributed system is to make it easy for users to access remote resources, and to share them in a controlled and efficient way.

Fault tolerance, Error recovery: Fault tolerance is an important aspect of distributed systems design. A system is fault tolerant if it can continue to operate in the presence of failures.

Concurrency: Concurrency between programs sharing data is generally kept under control through synchronization mechanisms for mutual exclusion and transactions.
Difference between Parallel and Distributed Computing

Objectives
Identifying the key differences between parallel and distributed computing.
Difference between Parallel and Distributed Computing

Distributed computing is often used in tandem with parallel computing. Parallel computing on a single computer uses multiple processors to process tasks in parallel, whereas distributed parallel computing uses multiple computing devices to process those tasks.
Difference between Parallel and Distributed Computing Cont…

Sr. | Parallel Computing | Distributed Computing
1. | Many operations are performed simultaneously. | System components are located at different locations.
2. | A single computer is required. | Multiple computers are used.
3. | Multiple processors perform multiple operations. | Multiple computers perform multiple operations.
4. | Processors communicate with each other through a bus. | Computers communicate with each other through message passing.
5. | Improves system performance. | Improves system scalability, fault tolerance, and resource-sharing capabilities.
Applications of Parallel and Distributed Computing
Objectives
Scope of parallel and distributed computing.
Applications of Parallel and Distributed Computing:
• Science
• Engineering
• Business
• Defense
• P2P Networks
• Google
Applications of Parallel and Distributed Computing

Science
• Global climate modeling
• Biology: genomics, protein folding, drug design
• Astrophysical modeling
• Computational chemistry
• Computational material sciences and nanosciences

Engineering
• Semiconductor design
• Earthquake and structural modeling
• Computational fluid dynamics (airplane design)
• Combustion (engine design)
• Crash simulation
Applications of Parallel and Distributed Computing Cont…

Business
• Financial and economic modeling
• Transaction processing, web services, and search engines

Defense
• Nuclear weapons: testing by simulation
• Cryptography
Applications of Parallel and Distributed Computing Cont…

P2P Networks
• eDonkey, BitTorrent, Skype, …

Google
• 1500+ Linux machines behind the Google search engine
Issues in Parallel and Distributed Computing
Objectives
Identifying the key challenges in parallel and distributed computing.
Issues in Parallel and Distributed Computing:
• Failure handling
• Communication
• Scalability
• Security
• Resource management
• Process synchronization
Issues in Parallel and Distributed Computing

Failure Handling: Failures, as in any program, are a major problem. With so many processes and users, the consequences of failures are exacerbated.

Scalability: As a distributed system is scaled, several factors need to be taken into account: size, geography, and administration, along with their associated problems such as overloading, control, and reliability.

Security: As connectivity and sharing increase, security becomes increasingly important. Increased connectivity can also lead to unwanted communication, such as electronic junk mail, often called spam.
Issues in Parallel and Distributed Computing Cont…

Process synchronization: One of the most important issues facing engineers of distributed systems is synchronizing computations consisting of thousands of components. Current methods of synchronization, such as semaphores, monitors, barriers, remote procedure call, object method invocation, and message passing, do not scale well. A small semaphore sketch follows below.

Resource management: In distributed systems, objects consisting of resources are located in different places. Routing is an issue both at the network layer of the distributed system and at the application layer. Resource management in a distributed system must cope with its heterogeneous nature.

Communication and Latency: Distributed systems have become more effective with the advent of the Internet, but there are strict requirements for performance, reliability, etc. Effective approaches to communication should be used.
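As a small illustration of one of the primitives listed above (my sketch, not the course's): a counting semaphore bounds how many components may enter a critical region at once, and every extra waiter adds coordination cost, which is why such mechanisms strain at thousands of components.

```python
# A counting semaphore limiting concurrent access (illustrative sketch).
from threading import Thread, Semaphore
import time

slots = Semaphore(2)                 # At most two components enter at once.

def component(i):
    with slots:                      # Others block here, waiting their turn.
        time.sleep(0.1)              # Simulated work in the critical region.
        print(f"component {i} done")

threads = [Thread(target=component, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```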
Parallel and Distributed Computing Efforts
Objectives
Identifying the prerequisites for parallel and distributed computing.
Parallel and Distributed Computing Programming Design Efforts

Before a program is written or a piece of software is developed, it must first go through a design process. For parallel and distributed programs, the design process will address three issues:
• Decomposition
• Communication
• Synchronization
Decomposition

Decomposition is the process of dividing up the problem and the solution into parts: logical areas and logical resources.

One of the primary issues of concurrent programming is identifying a natural WBS (work breakdown structure) for the software solution at hand.
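A tiny sketch of decomposition (my illustration): the problem of summing a large list is divided into smaller, independent parts that could later be assigned to separate workers.

```python
# Decomposing one large problem into smaller, independent parts (sketch).
def decompose(data, n_parts):
    """Split `data` into roughly equal, independent sublists."""
    step = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + step] for i in range(0, len(data), step)]

data = list(range(1_000_000))
parts = decompose(data, 4)                  # Logical areas of the problem.
partial_sums = [sum(p) for p in parts]      # Each part can run on its own worker.
assert sum(partial_sums) == sum(data)       # Combining partial results.
```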
Communication

Once the software solution is decomposed into several concurrently executing parts, those parts will usually do some amount of communicating. The following questions must be considered when designing parallel or distributed systems:
• How will this communication be performed if the parts are in different processes or on different computers?
• Do the different parts need to share any memory?
• How will one part of the software know when the other part is done?
• Which part starts first?
• How will one component know if another component has failed?
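One common way such questions get answered in practice is with explicit message queues. In this hedged sketch (names are illustrative, not from the slides), a worker process signals completion by sending its result through a Queue, so no memory needs to be shared.

```python
# Parts in different processes communicating through a queue (sketch).
from multiprocessing import Process, Queue

def worker(q, part):
    q.put(("done", sum(part)))       # Tell the other part we finished, and how.

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q, range(1000)))
    p.start()                        # The worker part starts first here.
    status, result = q.get()         # The main part learns the worker is done.
    p.join()
    print(status, result)            # -> done 499500
```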
Synchronization

The components' order of execution must be coordinated: do all the parts start at the same time, or do some work while others wait? The WBS designates who does what. When multiple components of software are working on the same problem, they must be coordinated:
• Which two or more components need access to the same resource, and who gets it first?
• If some of the parts finish their work long before the other parts, should they be assigned new work?
• Who assigns the new work in such cases?
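These coordination questions map onto standard primitives. As an assumed illustration (not from the slides), a barrier makes all parts begin a phase together, and a lock decides who gets a shared resource first.

```python
# Coordinating order of execution with a barrier and a lock (sketch).
from threading import Thread, Barrier, Lock

start_together = Barrier(3)          # All three parts start phase 2 at once.
resource_lock = Lock()               # Decides who uses the shared resource first.

def component(name):
    print(f"{name}: phase 1 work")
    start_together.wait()            # Wait until every component reaches this point.
    with resource_lock:              # One component at a time gets the resource.
        print(f"{name}: using shared resource")

threads = [Thread(target=component, args=(f"part-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```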
