
Parallel and Distributed Computing Review

8.1 Amdahl's Law and Gustafson's Law

Last Updated on July 30, 2024

Amdahl's Law and Gustafson's Law are crucial concepts in understanding the limits and potential of
parallel computing. They provide frameworks for analyzing how programs can benefit from additional
processors, helping developers make informed decisions about scalability and optimization strategies.

These laws highlight the importance of balancing parallelizable and sequential code sections. While
Amdahl's Law focuses on fixed-size problems, Gustafson's Law considers scenarios where problem size
can grow with available resources, offering a more optimistic view of parallel scaling potential.

Amdahl's Law Implications

Quantifying Maximum Speedup


Amdahl's Law quantifies maximum speedup achievable through parallelization considering parallel and
sequential components

Formula expresses speedup as S = 1 / ((1 − p) + p/n), where p represents the parallelizable fraction and n represents the number of processors (see the sketch after this list)

Overall speedup limited by fraction of program that cannot be parallelized (sequential portion)

As number of processors increases, speedup asymptotically approaches limit determined by sequential fraction

Even small sequential portions can severely limit potential speedup in highly parallel systems (1%
sequential portion limits maximum speedup to 100x)
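A minimal sketch of this relationship in Python, assuming the plain form of the law above; the function name amdahl_speedup and the sample values are illustrative only:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 99% parallelizable (1% sequential) on more and more processors:
for n in (10, 100, 1000, 10000):
    print(n, round(amdahl_speedup(0.99, n), 1))
# Output approaches, but never exceeds, the 1 / 0.01 = 100x limit:
# 10 9.2, 100 50.3, 1000 91.0, 10000 99.0
```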

Optimization Strategies

Emphasizes importance of optimizing sequential portion to achieve significant overall speedup

Focusing efforts on parallelizable sections yields diminishing returns as sequential bottlenecks dominate

Algorithmic improvements in sequential portion can have more significant impact than simply adding
processors

Identifies critical sections for optimization efforts (hotspots in profiling)

Encourages developers to minimize synchronization and communication overhead in parallel sections

Limitations and Assumptions

Assumes fixed problem size regardless of number of processors used

Does not account for scenarios where problem size scales with number of processors (addressed by
Gustafson's Law)
Neglects real-world factors like communication overhead, load balancing, and memory constraints

May not accurately predict performance for dynamic or irregular workloads

Assumes perfect parallelization of parallel portion, which is often unrealistic in practice

Limitations of Parallel Speedup

Speedup Bounds

Maximum achievable speedup inversely proportional to fraction of program executed sequentially

As number of processors approaches infinity, speedup bounded by 1 / (1 − p), where p represents the parallelizable fraction (see the sketch after this list)

For programs with 90% parallelizable code, maximum theoretical speedup limited to 10x regardless of
processors added

Adding more processors yields diminishing returns, especially when sequential portion significant

Real-world speedups often fall short of theoretical limits due to various overheads and inefficiencies
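A short numeric check of this bound, under the same assumptions as the sketch above and using the 90% parallelizable case from the list (values are illustrative):

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90                           # 90% of the program is parallelizable
print(1.0 / (1.0 - p))             # asymptotic limit as n -> infinity: 10.0
print(amdahl_speedup(p, 16))       # 16 processors: ~6.4x
print(amdahl_speedup(p, 1024))     # 1024 processors: still only ~9.9x
```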

Diminishing Returns

Law demonstrates adding more processors provides negligible improvements beyond certain point

Illustrates concept of "knee of the curve" where additional processors offer minimal benefit (typically occurs when number of processors approaches p / (1 − p); see the sketch after this list)
Emphasizes importance of cost-benefit analysis when scaling parallel systems

Encourages focus on algorithmic improvements rather than hardware scaling alone

Highlights need for balanced approach to parallel system design considering both hardware and software
optimizations
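One way to read the knee-of-the-curve rule of thumb: at roughly n = p / (1 − p) processors the program reaches half of its asymptotic speedup limit, and each additional processor buys noticeably less. A small sketch with illustrative values:

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.95, 0.99):
    knee = p / (1.0 - p)               # rough point of diminishing returns
    limit = 1.0 / (1.0 - p)            # asymptotic speedup limit
    at_knee = amdahl_speedup(p, round(knee))
    print(f"p={p}: knee ~{knee:.0f} processors, "
          f"speedup there ~{at_knee:.1f}x of a {limit:.0f}x limit")
```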

Practical Considerations

Does not consider communication overhead between processors (can become significant bottleneck)

Ignores load balancing issues that may arise in real-world parallel systems

Fails to account for memory constraints that may limit scalability (cache coherence, memory bandwidth)

Assumes perfect parallelization which is often unrealistic due to synchronization and data dependencies

May not accurately represent performance of dynamic or irregular workloads common in many
applications

Gustafson's Law Extension

Scaled Speedup Concept

Addresses limitations of Amdahl's Law by considering scenarios where problem size scales with number
of processors

Assumes as more processors become available, problem size increases proportionally, maintaining
constant execution time

Introduces concept of "scaled speedup" measuring how much larger problem can be solved in same time
with more processors

Formula expressed as S(P) = P − α(P − 1), where S represents speedup, P represents the number of processors, and α represents the non-parallelizable fraction (see the sketch after this list)

Demonstrates more optimistic potential speedup for problems that can scale with available resources
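A minimal sketch of the scaled-speedup formula in Python; the function name and the 5% non-parallelizable fraction are illustrative only:

```python
def gustafson_speedup(n_procs: int, alpha: float) -> float:
    """Gustafson's Law: S(P) = P - alpha * (P - 1)."""
    return n_procs - alpha * (n_procs - 1)

# With alpha = 0.05, scaled speedup grows almost linearly with processor count:
for procs in (8, 64, 512):
    print(procs, gustafson_speedup(procs, 0.05))
# 8 -> 7.65, 64 -> 60.85, 512 -> 486.45
```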

Comparison with Amdahl's Law

Gustafson's Law assumes problem size grows with number of processors while Amdahl's Law assumes
fixed problem size

Provides more optimistic outlook for parallel scaling especially for large-scale scientific and engineering
problems

Addresses limitation of Amdahl's Law in scenarios where increased computational power allows for
more detailed or larger-scale problems

Shifts focus from speeding up fixed-size problems to solving larger problems in same time

Highlights importance of considering both laws when analyzing parallel performance potential

Applications and Implications

Particularly relevant in scenarios such as scientific simulations, data analytics, and machine learning

Encourages development of scalable algorithms that can utilize additional resources effectively

Supports justification for large-scale parallel systems in fields where problem sizes can grow with available computing power

Influences design decisions in high-performance computing architectures (supercomputers, cloud computing platforms)

Emphasizes importance of developing parallel algorithms that can efficiently handle increasing problem
sizes

Analyzing Parallel Performance

Applying Amdahl's Law

Calculate theoretical maximum speedup for given program with known parallel fraction and number of
processors

Use formula S = 1 / ((1 − p) + p/n) to predict speedup for various scenarios (see the sketch after this list)

Analyze impact of varying parallel fraction on potential speedup (create graphs showing speedup vs.
number of processors for different p values)

Identify theoretical limits of parallelization for specific applications

Use results to determine cost-effectiveness of adding more processors for fixed-size problems
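A small sketch of the kind of analysis described above: tabulating Amdahl speedup against processor count for several parallel fractions (the specific values are illustrative; plotting these columns gives the speedup-vs-processors graphs mentioned in the list):

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

fractions = (0.50, 0.75, 0.90, 0.99)
print("n      " + "   ".join(f"p={p}" for p in fractions))
for n in (2, 4, 8, 16, 64, 256):
    row = "  ".join(f"{amdahl_speedup(p, n):6.2f}" for p in fractions)
    print(f"{n:<6} {row}")
```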

Utilizing Gustafson's Law

Estimate scaled speedup when problem size can increase with number of processors

Apply formula

S(P)=P−α(P−1) to predict performance for scalable problems

Analyze how problem size can grow while maintaining constant execution time as processors increase

Compare scaled speedup predictions with fixed-size speedup from Amdahl's Law (see the sketch after this list)

Use results to justify investment in larger parallel systems for scalable problems
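A hedged sketch of that comparison, assuming (for illustration only) that the same serial fraction is plugged into both formulas; in practice Amdahl's p and Gustafson's α are measured against different baselines:

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)       # fixed-size speedup

def gustafson_speedup(n: int, alpha: float) -> float:
    return n - alpha * (n - 1)             # scaled speedup

serial = 0.10                              # illustrative serial fraction
for n in (16, 128, 1024):
    fixed = amdahl_speedup(1.0 - serial, n)
    scaled = gustafson_speedup(n, serial)
    print(f"{n:>5} processors: Amdahl {fixed:6.2f}x   Gustafson {scaled:8.2f}x")
```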

Real-world Considerations

Recognize limitations of both laws in real-world scenarios (communication overhead, load balancing,
memory constraints)

Incorporate additional factors into performance models (network topology, data locality, synchronization
costs)

Use profiling tools to identify actual parallel and sequential portions in real applications (see the sketch after this list)

Compare theoretical predictions with measured performance to refine models and identify optimization
opportunities

Apply insights from both laws to optimize parallel algorithms by identifying and minimizing sequential
bottlenecks in code
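A small sketch of turning profiled timings into model inputs, as the list suggests; the region times, processor count, and measured speedup below are hypothetical, chosen only to show the workflow:

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical single-processor profile (e.g. obtained from a profiler):
t_sequential = 12.0     # seconds spent in code that cannot be parallelized
t_parallel = 88.0       # seconds spent in parallelizable regions
p = t_parallel / (t_sequential + t_parallel)   # estimated parallel fraction

n = 32
predicted = amdahl_speedup(p, n)               # ~6.8x
measured = 5.3                                 # hypothetical observed speedup
print(f"p ~ {p:.2f}, predicted {predicted:.1f}x, measured {measured:.1f}x")
# The gap between predicted and measured speedup points at overheads
# (communication, synchronization, load imbalance) the basic model ignores.
```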

I was trying to understand the working of Amdahl's law but got confused in the process. Consider
the following problem:

Suppose a program has a part at the beginning that is sequential in nature (must be executed by only one processor) and takes 3 ms. Also, there is a part at the end of the program that is sequential (must be executed by only one processor) and takes 4 ms. The rest of the code is divided into 5 equal parts that are executed in parallel on 5 processes, and each of these parts takes 16 ms. Calculate speedup using Amdahl's law.

Here is how I approached this problem. I first calculated the serial and parallel fractions, taking 0.3 as the serial fraction and 0.7 as the parallel fraction, based on the following logic:

Serial Part = 3 ms + 4 ms = 7 ms
Parallel Part = 16 ms (only counted once, since the code executes in parallel on 5 processors)

Total = 7 ms + 16 ms = 23 ms

Serial Fraction = 7 ms / 23 ms = 0.3 (approx)

Parallel Fraction = 16 ms / 23 ms = 0.7 (approx)

Now putting values in Amdahl's law:

Speedup = 1 / (S + P/N) (where N = Processors, S = Serial Fraction, P = Parallel Fraction)

Speedup = 1 / (0.3 + 0.7/5) = 2.27 (approx)
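A quick check of that arithmetic with the exact fractions (this only verifies the calculation as posed; whether the parallel part should be counted once as 16 ms or as 5 × 16 ms of total serial work is precisely the point of confusion):

```python
serial_ms = 3 + 4              # sequential parts at the start and end
parallel_ms = 16               # parallel section as counted in the question
total_ms = serial_ms + parallel_ms

s = serial_ms / total_ms       # ~0.304
p = parallel_ms / total_ms     # ~0.696
speedup = 1 / (s + p / 5)
print(round(speedup, 2))       # 2.25 (2.27 with the rounded 0.3 / 0.7 values)
```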
