CS621 Week 01
Objectives
• History of Computing
• What is Computing

History of Computing
• Batch Era
• Time Sharing Era
• Desktop Era
• Network Era

History of Computing Cont..
Batch Era: Execution of a series of programs on a computer without manual intervention.
Objectives
• Parallel Computing
• Multi-Processor
• Multi-Core
Introduction to Parallel Computing

Multi-processor: More than one CPU works together to carry out computer instructions or programs.

Multi-core: A microprocessor on a single integrated circuit with two or more separate processing units, called cores, each of which reads and executes program instructions.
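A minimal sketch of multi-core parallelism (Python; the function and numbers are hypothetical): the standard multiprocessing module starts one worker process per core, so independent pieces of work execute simultaneously on separate processing units.

import multiprocessing as mp

def square(n):
    # Each worker process runs this function independently on its own core.
    return n * n

if __name__ == "__main__":
    print("Available cores:", mp.cpu_count())
    # A pool of worker processes; the map is split across the cores.
    with mp.Pool() as pool:
        results = pool.map(square, range(10))
    print(results)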
Introduction to Parallel Computing Cont…
Principles of Parallel Computing
• Scaling
• Locality
• Load balance
• Coordination and synchronization
• Performance modeling
Principles of Parallel Computing Cont…

Finding Enough Parallelism
Conventional architectures coarsely comprise a processor, a memory system, and a data-path. Each of these components presents significant performance bottlenecks. Parallelism addresses each of these components in significant ways.

Scale
Parallelism overhead includes the cost of starting a thread, accessing data, communicating shared data, synchronization, and extra computation. Algorithms need sufficiently large units of work to run fast in parallel (see the sketch below).
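A minimal sketch of that granularity point (Python; the task sizes are hypothetical): the same total work is split once into many tiny units and once into a few large units. The coarse-grained version amortizes the per-task startup and communication overhead.

import multiprocessing as mp
import time

def work(chunk):
    # Stand-in computation for one unit of work.
    return sum(i * i for i in chunk)

def run(chunks):
    with mp.Pool() as pool:
        start = time.perf_counter()
        total = sum(pool.map(work, chunks))
        return total, time.perf_counter() - start

if __name__ == "__main__":
    data = list(range(1_000_000))
    fine = [data[i:i + 100] for i in range(0, len(data), 100)]            # many tiny tasks
    coarse = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]  # few large tasks
    for name, chunks in (("fine-grained", fine), ("coarse-grained", coarse)):
        total, elapsed = run(chunks)
        print(name, total, f"{elapsed:.3f}s")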
Locality
Parallel processors collectively have a large and fast cache. Memory addresses are distributed across the processors, so a processor may have faster access to memory locations mapped locally than to memory locations mapped to other processors.
Principles of Parallel Computing Cont.…

Load Balance
Determine the workload and divide it up evenly before starting in the case of static load balancing; in dynamic load balancing the workload changes dynamically and needs to be rebalanced dynamically (see the sketch below).

Coordination and Synchronization
Several kinds of synchronization are needed by processes cooperating to perform a computation.

Performance Modeling
More efficient programming models and tools are formulated for massively parallel supercomputers.
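A minimal sketch of dynamic load balancing (Python; the task costs are hypothetical): workers pull the next unit of work from a shared queue as they finish, so an uneven workload is rebalanced at run time instead of being divided up once before starting.

import queue
import threading
import time

tasks = queue.Queue()
for cost in [0.05, 0.2, 0.01, 0.1, 0.02, 0.15]:   # deliberately uneven workload
    tasks.put(cost)

def worker(name):
    while True:
        try:
            cost = tasks.get_nowait()   # grab whatever work is available next
        except queue.Empty:
            return
        time.sleep(cost)                # stand-in for real computation
        print(name, "finished a task of cost", cost)

threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()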
Why Use Parallel Computing?
• Computing power
• Performance
• Provide concurrency
• Solve large problems
• Scalability
• Cost
Why Use Parallel Computing?

Computing power
Modern consumer-grade computing hardware comes equipped with multiple central processing units (CPUs) and/or graphics processing units (GPUs) that can process many sets of instructions simultaneously.

Performance
Theoretical performance has steadily increased, because performance is proportional to the product of the clock frequency and the number of cores.

Scalability
A problem can be scaled up to sizes that were out of reach for a serial application. The larger problem sizes are enabled by larger amounts of main memory, disk storage, bandwidth over networks and to disk, and CPUs.
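As a rough illustration of that proportionality (a simplification that ignores memory and communication bottlenecks): theoretical peak performance ∝ clock frequency × number of cores, so a 3 GHz processor with 8 cores has twice the theoretical peak of a 3 GHz processor with 4 cores.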
Why Use Parallel Computing Cont.…

Solve large problems
Solve large problems by breaking them down into smaller, independent, often similar parts that can be executed simultaneously by multiple processors communicating via shared memory. This makes it possible to tackle problems like web search engines processing millions of transactions per second.

Cost
The cost of computation is reduced when we deploy parallel computation rather than sequential computation.

Provide concurrency
Parallelism leads naturally to concurrency. For example, several processes trying to print a file on a single printer.
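A minimal sketch of the printer example (Python; the job names are hypothetical): several concurrent threads share one printer, and a lock serializes access so the pages of different jobs are not interleaved.

import threading

printer_lock = threading.Lock()

def print_file(job, pages):
    with printer_lock:   # only one job may use the printer at a time
        for page in range(1, pages + 1):
            print(f"{job}: printing page {page}/{pages}")

jobs = [threading.Thread(target=print_file, args=(f"job-{i}", 3)) for i in range(3)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()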
Introduction to Distributed Computing

In distributed computing:
• We have multiple autonomous computers which appear to the user as a single system.
• There is no shared memory, and computers communicate with each other through message passing.
• A single task is divided among different computers.

Distributed computing uses or coordinates physically separate computing resources:
• Grid computing
• Cloud computing
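A minimal sketch of message passing between processes with no shared memory (Python; the sub-task is hypothetical): a real distributed system would send these messages over a network, e.g. via sockets or MPI; multiprocessing.Pipe stands in for that channel here.

import multiprocessing as mp

def worker(conn):
    numbers = conn.recv()      # receive a sub-task as a message
    conn.send(sum(numbers))    # send the partial result back as a message
    conn.close()

if __name__ == "__main__":
    parent, child = mp.Pipe()
    p = mp.Process(target=worker, args=(child,))
    p.start()
    parent.send(list(range(100)))   # divide the task and ship part of it away
    print("partial sum from worker:", parent.recv())
    p.join()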
Introduction to Distributed Computing Cont.…
Why Use Distributed Computing?
• Distribution transparency
• Resources accessibility
• Heterogeneity
• Scalability
• Openness
• Fault tolerance, recovery, concurrency
• Resiliency
Why Use Distributed Computing?

Heterogeneity
The Internet enables users to access services and run applications over a heterogeneous collection of computers and networks.

Distribution Transparency
A distributed system that is able to present itself to users and applications as if it were only a single computer system is said to be transparent.

Openness
An open distributed system is a system that offers services according to standard rules that describe the syntax and semantics of those services.

Resiliency
With multiple computers, redundancies are implemented to ensure that a single failure doesn't equate to system-wide failure.
Why Use Distributed Computing Cont.…

Scalability
Scalability of a distributed system can be measured along at least three different dimensions: a system can be scalable with respect to its size, it can be geographically scalable, and it can be administratively scalable.

Resources Accessibility
The main goal of a distributed system is to make it easy for users to access remote resources, and to share them in a controlled and efficient way.
Parallel Computing vs. Distributed Computing (cont.):
4. Parallel: Processors communicate with each other through a bus. Distributed: Computers communicate with each other through message passing.
5. Parallel: Improves the system performance. Distributed: Improves system scalability, fault tolerance, and resource sharing capabilities.
Applications of Parallel and Distributed Computing
• Google
• Engineering
• P2P Networks
• Business
• Defense
Applications of Parallel and Distributed Computing

Science
• Global climate modeling
• Biology: genomics, protein folding, drug design
• Astrophysical modeling
• Computational Chemistry
• Computational Material Sciences and Nanosciences

Engineering
• Semiconductor design
• Earthquake and structural modeling
• Computational fluid dynamics (airplane design)
• Combustion (engine design)
• Crash simulation
Applications of Parallel and Distributed Computing Cont.…

Business
• Financial and economic modeling
• Transaction processing, web services and search engines

Defense
• Nuclear weapons -- test by simulations
• Cryptography
Applications of Parallel and Distributed
Computing Cont.…
Co
m y
mu i li t
Issues in
n ic l ab
ati
on S ca
Parallel and
Distributed
Computing
ce Se
ur nt cu
e so eme rit
R ag y
n
ma
Process
synchronization
Issues in Parallel and Distributed Computing

Failure Handling
Failures, like in any program, are a major problem. With so many processes and users, the consequences of failures are exacerbated.

Scalability
As a distributed system is scaled, several factors need to be taken into account: size, geography, and administration, and their associated problems like overloading, control, and reliability.

Security
As connectivity and sharing increase, security is becoming increasingly important. Increased connectivity can also lead to unwanted communication, such as electronic junk mail, often called spam.
Issues in Parallel and Distributed Computing Cont.…

Process Synchronization
One of the most important issues that engineers of distributed systems face is synchronizing computations consisting of thousands of components. Current methods of synchronization like semaphores, monitors, barriers, remote procedure call, object method invocation, and message passing do not scale well (see the sketch below).

Resource Management
In distributed systems, objects consisting of resources are located in different places. Routing is an issue at the network layer of the distributed system and at the application layer. Resource management in a distributed system will interact with its heterogeneous nature.
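A minimal sketch of one of the synchronization methods named above, a barrier (Python; the workload is hypothetical): every thread must reach the barrier before any of them may continue, which is exactly the kind of coordination that becomes hard at the scale of thousands of components.

import random
import threading
import time

N = 4
barrier = threading.Barrier(N)

def phase_worker(i):
    time.sleep(random.random())            # each worker computes at its own pace
    print(f"worker {i}: finished phase 1")
    barrier.wait()                         # wait here until all N workers arrive
    print(f"worker {i}: starting phase 2")

threads = [threading.Thread(target=phase_worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()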
Important Questions:

Once the software solution is decomposed into several concurrently executing parts, those parts will usually do some amount of communicating. The following issues must be considered when designing parallel or distributed systems:
• How will this communication be performed if the parts are in different processes or different computers?
• Do the different parts need to share any memory?
• How will one part of the software know when the other part is done?
• Which part starts first?
• How will one component know if another component has failed?
Synchronization