
Computational Methods

Sources of Errors

© Manfred Huber 2011


Numerical Analysis /
Scientific Computing

• Many problems in science and engineering cannot be solved analytically and must be solved on a computer
  • Numeric solutions are often required
  • Numeric solutions provide only approximate solutions
  • Numeric solutions are not unique
    • Different numeric algorithms might yield different approximations

• Numerical Analysis deals with the design and analysis of numeric algorithms
  • Deals with continuous quantities
  • Considers / analyzes the effects of approximations
Computational Problems and
Numerical Algorithms

• Solving computational problems usually involves the following steps:
  • Mathematical modeling
    • Develop a mathematical description of the problem
  • Algorithm design
    • Build an algorithm to solve the mathematical problem formulation
    • Analyze the algorithm for its performance
  • Implementation and evaluation
    • Implement the algorithm
    • Evaluate its performance with real data
Computational Problems and
Numerical Algorithms

• A problem is well-posed if
  • A solution exists and is unique
  • The solution depends continuously on the data

• Ill-posed problems are often sensitive to the data, and algorithms for solving them are not stable
  • Some ill-posed problems can be approximated by similar well-posed problems

• Even solutions to well-posed problems can be sensitive to the data
  • The computational algorithm should not increase this sensitivity
Problem Solution Strategies

• Finding a solution (and subsequently an algorithm) for a computational problem often involves replacing a difficult problem with a simpler one that has an identical or closely related solution
  • Replace infinite with finite formulations
  • Replace differential equations with algebraic equations
  • Replace non-linear formulations with linear ones
  • Replace complicated functions with simpler ones
  • Replace higher-order systems with lower-order ones

• Solutions may only approximate the original ones
Sources of Approximation

• Problem formulation and input data
  • Simplifications in the original model of the problem
  • Errors in measurements used as input data
  • Approximations resulting from pre-computations

• Algorithm and implementation
  • Truncation and discretization as part of the algorithm design (usually resulting from simplifications of the original mathematical model)
  • Rounding as a result of the use of a finite-resolution digital computer


Approximation and Error

• The accuracy of the solution produced by a numerical algorithm depends on errors introduced by the modeling and pre-computation as well as by the computation in the algorithm
  • The former usually cannot be addressed but has to be considered when analyzing an algorithm

• The problem and the solution algorithm can have major effects on the accuracy of the approximation
  • The problem can amplify input error (sensitivity)
  • The algorithm can amplify computation errors (stability)
Approximation and Error

• The total error is generally a result of errors in the data and errors arising through the computation:

  f̂(x̃) − f(x) = [f̂(x̃) − f(x̃)] + [f(x̃) − f(x)]

  • Computational error: f̂(x̃) − f(x̃)
  • Propagated data error: f(x̃) − f(x)

• Computational errors arise from simplifications in the algorithm and from numeric limitations
  • Truncation error: caused by the algorithm
  • Rounding error: caused by limited numeric precision
  • Truncation and rounding often trade off
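
The decomposition can be made concrete with a short sketch. Here f(x) = eˣ is the exact problem, f_hat is a hypothetical algorithm that truncates the Taylor series after three terms, and x_t is a perturbed input; these choices are ours, purely for illustration of the two error terms.

```python
# A minimal sketch (f, f_hat, and the perturbation are illustrative choices,
# not from the slides) separating computational and propagated data error.
import math

f = math.exp                         # exact problem
def f_hat(x):                        # approximate algorithm:
    return 1.0 + x + x * x / 2.0     # Taylor series truncated after 3 terms

x = 0.1          # true input
x_t = 0.10001    # perturbed (measured) input x-tilde

computational = f_hat(x_t) - f(x_t)   # error due to the algorithm itself
propagated = f(x_t) - f(x)            # error due to the perturbed input
total = f_hat(x_t) - f(x)
print(computational, propagated, total)
print(math.isclose(total, computational + propagated))  # True: the terms add up
```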
Example: Finite Difference

• Compute the derivative using the finite difference approximation:

  f′(x) ≈ (f(x + Δx) − f(x)) / Δx

• The truncation error is bounded by M·Δx / 2 (with M a bound on |f″|)
• The rounding error is bounded by 2ε / Δx (with ε the machine precision)
• Optimal step size: Δx ≈ 2·√(ε / M)

[Figure: total, truncation, and rounding error vs. step size Δx for f(x) = sin(x)]
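
The trade-off is easy to observe numerically. The following sketch (written for this text, not taken from the slides) measures the forward-difference error for f(x) = sin(x) at x = 1, where M = 1 bounds |f″| and ε is the double-precision machine epsilon:

```python
# Forward-difference error for f(x) = sin(x) at x = 1; the exact
# derivative is cos(1). Truncation dominates for large dx, rounding
# for small dx, with the crossover near dx ~ 2*sqrt(eps/M).
import math
import sys

eps = sys.float_info.epsilon   # ~2.2e-16 for IEEE doubles
x = 1.0
exact = math.cos(x)

for k in range(1, 16):
    dx = 10.0 ** (-k)
    approx = (math.sin(x + dx) - math.sin(x)) / dx
    print(f"dx = 1e-{k:02d}   error = {abs(approx - exact):.3e}")

print("predicted optimal step:", 2.0 * math.sqrt(eps))  # ~3e-8 with M = 1
```

The error shrinks until roughly Δx ≈ 3·10⁻⁸ and then grows again as rounding error takes over.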


Absolute and Relative Error

• Absolute error: ŷ − y

• Relative error: (ŷ − y) / y

• The true value y is generally unknown
  • Relative error is often computed relative to the approximate value
  • The error has to be approximated or calculated as a bound
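
As a quick illustration (the numbers are our choice, not from the slides), take ŷ = 3.14 as an approximation of y = π:

```python
# Absolute and relative error of y_hat = 3.14 approximating y = pi.
import math
y, y_hat = math.pi, 3.14
print("absolute error:", abs(y_hat - y))           # ~1.59e-3
print("relative error:", abs(y_hat - y) / abs(y))  # ~5.07e-4
```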


Forward and Backward Error

• Error can be analyzed in the output space or in the input space of the algorithm

• Forward error: Δy = ŷ − y = f̂(x) − f(x)
  • The error in the output of the algorithm for the same input

• Backward error: Δx = x̂ − x, where ŷ = f(x̂)
  • The error in the input for which the exact problem would produce the computed output
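
A classic worked example (our choice of numbers): approximating y = √2 by ŷ = 1.4, i.e. f(x) = √x with x = 2.

```python
# Forward and backward error for y_hat = 1.4 approximating sqrt(2).
import math
x, y_hat = 2.0, 1.4
forward = y_hat - math.sqrt(x)   # ~ -0.0142: error in the output
x_hat = y_hat ** 2               # input for which 1.4 would be the exact answer
backward = x_hat - x             # -0.04: error in the input
print(forward, backward)
```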


Backward Error

• Backward error can be a useful analysis tool
  • Backward error captures sensitivity
    • It measures how much the original problem has to change to result in exactly the approximate solution
    • It indicates how much data error would explain the total error
  • An approximate solution is good if it has a small backward error
  • Backward error is often easier to estimate than forward error


Sensitivity and Conditioning

• A problem is sensitive (ill-conditioned) if a change in the input data can cause a much larger change in the output data

• The condition number captures sensitivity:

  cond = relative output error / relative data error
       = ((f(x̂) − f(x)) / f(x)) / ((x̂ − x) / x)
       = (Δy / y) / (Δx / x)

• A problem is sensitive if cond >> 1
  • The condition number represents an amplification factor between relative backward and relative forward error
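
The ratio can be estimated numerically. The sketch below (the function and evaluation point are illustrative choices, not from the slides) estimates the condition number of f(x) = tan(x) near π/2, where tan is badly conditioned:

```python
# Estimate cond = |relative output change| / |relative input change|
# for f(x) = tan(x) at x = 1.57, close to the pole at pi/2.
import math
x = 1.57
dx = 1e-8 * x                      # small relative perturbation
rel_out = (math.tan(x + dx) - math.tan(x)) / math.tan(x)
rel_in = dx / x
print("estimated cond:", abs(rel_out / rel_in))  # ~2e3: input error is amplified
```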


Stability

• An algorithm is stable if the result is relatively insensitive to perturbations caused by the computation
  • Similar to conditioning for problems
  • An algorithm is stable if it produces a small backward error (i.e. if its result is the exact solution to a similar problem)
  • If an algorithm is stable, the computational error is no worse than a small input error


Accuracy

• Accuracy measures the similarity of the true and the computed solution
  • Accuracy depends on the conditioning of the problem and the stability of the algorithm
  • Stability or good conditioning alone does not guarantee accuracy
    • A stable algorithm applied to an ill-conditioned problem can yield inaccurate results
    • An unstable algorithm applied to a well-conditioned problem can yield inaccurate results
  • To achieve accurate solutions, a stable algorithm has to be applied to a well-conditioned problem
Number Representation and
Rounding Errors

• Floating point numbers are used to represent continuous numbers
  • Real numbers cannot be represented exactly
  • Operations on floating point numbers are not exact

• Floating point numbers have:
  • Base β
  • Precision p
  • Exponent range [L, U]

  x = ±(d₀ + d₁/β + … + d_{p−1}/β^{p−1}) · β^E
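
In IEEE double precision, for instance, β = 2, p = 53, and the exponent range is roughly [−1022, 1023]. A small sketch using Python's standard library shows the normalized mantissa/exponent decomposition:

```python
# Decompose a double into mantissa and exponent (base 2). math.frexp
# returns m in [0.5, 1) and an integer e with value = m * 2**e.
import math
m, e = math.frexp(6.5)
print(m, e)          # 0.8125 3
print(m * 2 ** e)    # 6.5, reconstructed exactly
```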
Floating Point Numbers

• Multiple floating point standards exist

• Most floating point systems are normalized so that the first bit of the mantissa is 1
  • No digits are wasted on leading zeros (saves 1 bit)
Floating Point Numbers

• Underflow level: the smallest positive normalized number

  UFL = β^L

• Overflow level: the largest floating point number

  OFL = (1 − β^(−p)) · β^(U+1)

• Machine precision: the smallest number larger than 1, minus 1

  εmach = β^(1−p)

• Machine precision bounds the rounding error
  • With rounding to nearest: |fl(x) − x| / |x| ≤ ½ · εmach
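
For IEEE doubles these constants can be checked directly; the halving loop below computes the gap between 1 and the next larger representable number:

```python
# Find machine epsilon by halving until 1 + eps/2 rounds back to 1,
# then compare against the values Python reports for IEEE doubles.
import sys
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)                       # 2.220446049250313e-16 = 2**-52
print(sys.float_info.epsilon)    # the same value
print(sys.float_info.max)        # OFL ~ 1.798e308
print(sys.float_info.min)        # UFL ~ 2.225e-308 (smallest normalized)
```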
Floating Point Number
Example

• Representable numbers for a binary system with a 3-bit mantissa and a 2-bit exponent (β = 2, p = 3, exponent range [−1, 1])

[Figure: the representable numbers plotted on the real number line]

• OFL = 3.5
• UFL = 0.5
• εmach = 0.125 (here the rounding-to-nearest bound ½ · β^(1−p))
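
These values can be reproduced by brute force; the sketch below enumerates every positive normalized number of this toy system (the exponent range [−1, 1] is inferred from the given UFL and OFL):

```python
# Enumerate the positive normalized numbers of the toy system:
# beta = 2, p = 3 (digits d0.d1d2 with d0 = 1), exponents E in [-1, 1].
vals = sorted(
    (1 + d1 / 2 + d2 / 4) * 2.0 ** E
    for d1 in (0, 1)
    for d2 in (0, 1)
    for E in (-1, 0, 1)
)
print(vals)                   # 0.5, 0.625, ..., 3.0, 3.5
print(min(vals), max(vals))   # UFL = 0.5, OFL = 3.5
```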


Floating Point Arithmetic

• Floating point operations introduce rounding errors
  • Addition and subtraction
    • For addition and subtraction, the mantissa of one operand has to be shifted until the exponents of the two numbers are equal
    • Potential loss of significant bits in the smaller number
  • Multiplication
    • Mantissas have to be multiplied, theoretically yielding a new mantissa with 2p digits, which has to be rounded
  • Division
    • The quotient of the mantissas can theoretically have an infinite number of digits, which has to be rounded
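
The alignment step is where bits are lost. In the sketch below (values chosen for illustration), the 1.0 is shifted entirely out of the 53-bit mantissa when added to 10¹⁶:

```python
# Adding numbers of very different magnitude: the smaller operand's bits
# are shifted out during exponent alignment.
print((1e16 + 1.0) - 1e16)   # 0.0: the 1.0 vanished (spacing at 1e16 is 2.0)
print((1e15 + 1.0) - 1e15)   # 1.0: at 1e15 the spacing is 0.125, so it survives
```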


Floating Point Arithmetic

• Besides rounding errors, floating point operations can result in unrepresentable numbers
  • Overflow
    • The result of an overflow (a number too large to be represented) has no good approximation, and the consequences can be catastrophic
    • On most computer systems an overflow produces an error message
  • Underflow
    • The result of an underflow is usually approximated as 0
    • On many computer systems underflow is handled silently
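
Python's IEEE doubles show both behaviors; whether overflow raises an error or silently yields infinity depends on the operation:

```python
# Overflow and underflow with IEEE doubles.
import math
print(1e308 * 10.0)    # inf: arithmetic overflow silently becomes infinity
print(1e-308 / 1e10)   # ~1e-318: gradual underflow into subnormal numbers
print(1e-308 / 1e100)  # 0.0: complete underflow is silently flushed to zero
try:
    math.exp(1000.0)   # the math module reports overflow as an error instead
except OverflowError as err:
    print("OverflowError:", err)
```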


Floating Point Arithmetic

• Many general arithmetic laws no longer strictly hold in floating point arithmetic
  • Addition and multiplication are commutative but not associative
  • Underflow, overflow, and rounding can lead to incorrect results

• Example: the infinite sum ∑_{n=1}^∞ 1/n
  • While this sum diverges in exact arithmetic (and thus has no finite value), numeric calculation of it yields a finite sum
    • The partial sum no longer changes once 1/n is too small compared to the value of the partial sum
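
Both effects are easy to demonstrate. The sketch below shows a failure of associativity and then sums the harmonic series in single precision (numpy is assumed to be available) until the partial sum stops changing:

```python
# Associativity fails, and the harmonic series "converges" in float32.
import numpy as np

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

s = np.float32(0.0)
n = 0
while True:
    n += 1
    t = s + np.float32(1.0) / np.float32(n)
    if t == s:          # 1/n fell below the rounding threshold of s
        break
    s = t
print(n, s)             # stalls after about two million terms, near 15.4
```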
Relative Error and
Loss of Significance

• Errors produced by well-implemented arithmetic floating point operations can be modeled by

  fl(x op y) = (x op y)(1 + δ),  |δ| ≤ εmach

• Relative error bound:

  |fl(x op y) − (x op y)| / |x op y| ≤ εmach

• Error propagation can still lead to high relative error through loss of significance (or cancellation)
  • When leading digits cancel out during a subtraction, the result uses fewer than p digits and thus loses precision
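
A standard demonstration (the function choice is ours): computing eˣ − 1 for small x cancels almost all leading digits, while the library routine expm1 avoids the subtraction entirely:

```python
# Loss of significance: e^x - 1 for small x.
import math
x = 1e-12
print(math.exp(x) - 1.0)   # ~1.0001e-12: only about 4 digits are correct
print(math.expm1(x))       # ~1.0000000000005e-12: no cancellation
```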
Loss of Significance

• Rounding results in a loss of the least significant digits, while cancellation leads to a loss of the most significant digits
  • It is generally a bad idea to compute a small number by subtracting large numbers
  • The rounding error propagated through loss of significance might ultimately dominate the actual result


Loss of Significance

• To avoid cancellation errors, problems can sometimes be reformulated in a way that avoids the problem
  • Multiplication with conjugate expressions:

    √y − √x = (√y − √x)(√y + √x) / (√y + √x) = (y − x) / (√y + √x)

  • Application of identities to restructure the expression:

    (1 − cos x) / sin²x = 1 / (1 + cos x)

    x = (−b ± √(b² − 4ac)) / (2a) = 2c / (−b ∓ √(b² − 4ac))
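
The quadratic-formula identity is the basis of the usual stable root finder. Below is a minimal sketch (the helper name is ours, not from the slides) that picks whichever form avoids the cancelling subtraction:

```python
# Stable roots of a*x^2 + b*x + c = 0 (real roots assumed): compute the
# root where -b and the square root do not cancel, then use x1 * x2 = c/a.
import math

def quadratic_roots(a, b, c):
    d = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(d, b))   # sign chosen to avoid cancellation
    return q / a, c / q

print(quadratic_roots(1.0, -1e8, 1.0))
# (1e8, 1e-8); the naive formula loses nearly all accuracy in the small root
```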
Representation and Error

• Computations on digital computers produce approximations, and these yield errors
  • The approximations should be taken into account when designing and analyzing algorithms

• The approximations break into multiple types
  • Data errors
  • Computation errors
    • Truncation error due to the algorithm
    • Rounding error due to representation limitations

• Errors can propagate and be amplified

• Algorithm design should take errors into account
