
Why use parallel computing?

Save time and/or money


• In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings.
• Parallel computers can be built from cheap, commodity components.

Solve larger / more complex problems


• Many problems are so large and/or complex that it is impractical or impossible to solve
them using a serial program, especially given limited computer memory.
• Example: Web search engines/databases processing millions of transactions every second

Provide concurrency
• A single compute resource can only do one thing at a time.
Multiple compute resources can do many things
simultaneously.
• Example: Collaborative Networks provide a global venue
where people from around the world can meet and conduct
work "virtually".

Take advantage of non-local resources

• Using compute resources on a wide area network, or even the Internet, when local compute resources are scarce or insufficient.
Make better use of underlying parallel hardware

• Modern computers, even laptops, are parallel in architecture with multiple processors/cores.
• Parallel software is specifically intended for parallel hardware with multiple cores, threads, etc.
Why use Distributed Computing?
• One reason is historical: computing resources that used to
operate independently now need to work together.
• After a while, there were many workstations in the office
building, and the users recognized that it would be
desirable to share data and resources among the individual
computers.
• They accomplished this by connecting the workstations over
a network.
• In most cases, serial programs run on modern computers
"waste" potential computing power.
Why use Distributed Computing?
• A second reason is functional: if there is special-function hardware or software
available over the network, then that functionality does not have to be duplicated
on every computer system (or node) that needs to access the special-purpose
resource.

• A third reason is economical: it may be more cost-effective to have many small computers working together than one large computer of equivalent power.
• In addition, having many units connected to a network is the more flexible configuration; if more resources are needed, another unit can be added in place, rather than bringing the whole system down and replacing it with an upgraded one.
Why use Distributed Computing?
• Furthermore, a distributed system can be more
reliable and available than a centralized system.
• This is a result of the ability to replicate both data
and functionality.
• For example, when a given file is copied on two
different machines, then even if one machine is
unavailable, the file can still be accessed on the other
machine.
Why use Distributed Computing?
• Distributed computing inherently brings with it not only
potential advantages, but also new problems.
• Examples are keeping multiple copies of data consistent, and keeping the clocks on different machines in the system synchronized.
• A system that provides distributed computing support must address these new issues.
Why not to use Parallel Computing?
• First we have to recall why we use Parallel Computing:
1. Solve larger / more complex problems
2. Provide concurrency
3. Take advantage of non-local resources
4. Make better use of underlying parallel hardware

• Now we have to understand why we use Distributed Computing:
1. Historical: work together from different places connected via
network.
2. Functional: resource sharing (e.g. software or hardware).
3. Economical: separate collaborative working units
Speedup and Amdahl's Law
• Amdahl's law is a formula which gives the theoretical
speedup in latency of the execution of a task at fixed
workload that can be expected of a system whose resources
are improved (scalability).
• It is named after computer scientist Gene Amdahl, and was
presented at the AFIPS Spring Joint Computer Conference in
1967.
• Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors.
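The slide stops short of the formula itself, so here is a minimal sketch of it (the function name and the example values below are illustrative, not from the slides): if p is the fraction of a task that can be parallelized and N is the number of processors, Amdahl's law gives a theoretical speedup of 1 / ((1 - p) + p / N).

def amdahl_speedup(p, n):
    """Theoretical speedup of a fixed workload with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Example: 95% of the work is parallelizable, executed on 8 processors.
print(amdahl_speedup(0.95, 8))  # roughly 5.9x, well below the ideal 8x

Because the serial fraction (1 - p) is unaffected by adding processors, the speedup can never exceed 1 / (1 - p) no matter how large N becomes, which is why the law is often cited as a limit on scalability.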
Scalability
Scalability is the property of a system to handle a growing amount of work by adding resources to the system.

A system is described as scalable if it will remain effective when there is a significant increase in the number of resources and the number of users.

Scalability of a system can be measured along the following dimensions:
Scalability Types
1. Physical scalability / load scalability: a system can be scalable with respect to its size, meaning that we can easily add or remove users and resources.
2. Administrative scalability: The ability for an increasing number of organizations or users to access a system.
3. Functional scalability: The ability to enhance the system by adding new
functionality without disrupting existing activities.
4. Geographic scalability: The ability to maintain effectiveness during expansion
from a local area to a larger region.
5. Generation scalability: The ability of a system to scale by adopting new
generations of components.
6. Heterogeneous scalability: The ability to adopt components from different vendors.
Scalability: Scale out (Horizontal) & Scale up
(Vertical)
• Methods of adding more resources fall into two broad categories: horizontal and vertical.

• Scaling horizontally (out/in) means adding more nodes to (or removing nodes from) a system, such as adding a new computer to a distributed software application.

• An example might involve scaling out (to increase) from one web server to three. Exploiting this scalability requires software for efficient resource management and maintenance.
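As a rough illustration of the kind of software such management involves (the server names and the round-robin policy below are hypothetical, not from the slides), a simple dispatcher might spread requests over the scaled-out pool like this:

from itertools import cycle

# Hypothetical pool after scaling out from one web server to three.
servers = ["web1", "web2", "web3"]
rotation = cycle(servers)  # simple round-robin dispatch policy

def dispatch(request_id):
    """Assign a request to the next server in the rotation."""
    return next(rotation)

for request_id in range(6):
    print(request_id, "->", dispatch(request_id))  # load spreads evenly over web1..web3

A real deployment would also need health checks and shared or replicated state, which is part of the management and maintenance burden the slide mentions.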
Scalability: Scale out (Horizontal) & Scale up
(Vertical)
• Scaling vertically (up/down) means adding resources to (or removing resources from) a single node, typically involving the addition of CPUs, memory or storage to a single computer.

• Larger numbers of elements increase management complexity and require more sophisticated programming to allocate tasks among resources and to handle issues such as throughput and latency across nodes.
