Notes01 - Numerical Errors and Stability

This document discusses key concepts in applied mathematics, including: 1. Numerical approximation and errors, which can arise from initial measurements, rounding, and truncation; absolute and relative errors quantify the accuracy of approximations. 2. Condition numbers, which measure how sensitive a problem is to perturbations of its input; a well-conditioned problem shows only a small change in output for a small change in input. 3. Numerical stability, which refers to an algorithm producing reliable results over a range of inputs; an algorithm is unstable if errors are amplified at any step.


Applied mathematics and intelligent computation

Notes01 Numerical approximation and errors

Accuracy and precision


Although the words accuracy and precision are often used interchangeably in daily life, they
have different meanings.
Accuracy measures how close the results are to the true or known value. Precision, on the other hand,
measures how close the results are to one another.
A figure illustrating different combinations of accuracy/precision levels can be found at
https://www.antarcticglaciers.org/glacial-geology/dating-glacial-sediments-2/precision-and-accuracy-glacial-geology/

Significant figures
Significant figures (also called significant digits) are the digits in a value, often a
measurement, that contribute to the accuracy of the value. The number of significant figures is
counted starting at the first non-zero digit.
Example: Both 1.23 and 0.00123 have 3 significant digits. We also round values when reporting
them to a given number of significant digits. Thus, the value of π (3.14159265…) becomes 3.14 if we want 3 significant
digits, and it becomes 3.1416 if we want 5 significant digits.
Even if certain digits are not completely known, they are still significant if they are reliable, since they
indicate the actual value within an acceptable range of uncertainty. For instance, if someone’s
height is 187.65 cm or 1876.5 mm (measured by a ruler whose smallest interval between marks is 1
mm), the first four digits are certain and obviously significant. The last digit (5, contributing 0.5 mm)
is likewise considered significant despite its uncertainty. So all five digits are considered
significant.
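
As a minimal Python sketch (the helper round_sig below is our own illustration, not a library function), rounding to n significant digits can be done via the base-10 exponent of the value:

import math

def round_sig(x, n):
    # Round x to n significant digits using the base-10 exponent of x.
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

print(round_sig(math.pi, 3))     # 3.14
print(round_sig(math.pi, 5))     # 3.1416
print(round_sig(0.00123456, 3))  # 0.00123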

Approximated values and the corresponding errors


Approximation of numbers is unavoidable in real-life situations, because we usually obtain the
values from measurements (subject to errors), and a lot of numbers do not have a finite
representation in decimal form (e.g., π, e, the square root of 2, …).
There are essentially three sources of errors: 1. Initial errors; 2. Round-off errors; 3. Truncation
errors.
The initial error, also called the source error, is the error initially contained in the data. This
kind of error is usually due to the apparatus used and flaws in the data acquisition or storage procedures.
The round-off (or rounding) error is due to inexactness in the representation of real numbers
and the arithmetic operations done with them. Because computers represent numbers by a limited
number of binary bits (e.g., 32, 64, 128 bits), some decimal numbers can only be approximately
represented in the computer. For example, a 32-bit floating-point format can resolve real numbers that
differ from each other by about 3.0E-8 (3×10^-8), and finer differences will be missed. Thus, the
round-off error is the difference between the result produced by exact arithmetic and that obtained
by finite-precision, rounded arithmetic.
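
As a quick check of these limits (a minimal sketch, assuming NumPy is installed), we can inspect the resolution of 32-bit and 64-bit floating-point numbers directly:

import numpy as np

print(np.finfo(np.float32).eps)   # ~1.19e-07: spacing of 32-bit floats near 1
print(np.finfo(np.float64).eps)   # ~2.22e-16: spacing of 64-bit floats near 1
print(f"{np.float32(0.3):.10f}")  # 0.3000000119: 0.3 is stored only approximately
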
The truncation error is the difference between the actual value and its truncated quantity, where a truncated
quantity is represented by a numeral with the excess digits/terms chopped off. Hence, this error is due
to approximating a mathematical process by truncating (cutting off) some terms of the original
expression. Example: Euler’s number e = Σ(1/n!), for n from 0 to infinity. A truncated value of
e can be obtained by summing up to n = 5. In this case, the truncation error = e − (1/0! + 1/1! + 1/2! + 1/3! + 1/4! + 1/5!).
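
A minimal Python sketch of this truncation error:

import math

# Truncate the series e = sum over n of 1/n! after the n = 5 term.
partial_sum = sum(1 / math.factorial(n) for n in range(6))
truncation_error = math.e - partial_sum
print(partial_sum)       # 2.7166666666666663
print(truncation_error)  # ~1.6e-3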

Absolute error and relative error


Suppose x is the exact value of a variable, and xa is an approximated value of x. The difference,
x - xa, is the approximation error.
The absolute error of the approximation xa of x is defined as Δx = |x - xa|.
The absolute relative error of the approximation xa of x (x not zero) is defined as δx = |(x - xa)/x|.
Example: If xa = 0.05 is an approximation of x = 0.049, then the absolute error Δx = |x - xa| = 0.001, and the
absolute relative error δx = |(x - xa)/x| = 0.001/0.049 = 1/49 ≈ 0.0204.
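
A minimal Python sketch of this example:

x, xa = 0.049, 0.05
abs_err = abs(x - xa)        # absolute error
rel_err = abs((x - xa) / x)  # absolute relative error
print(abs_err)   # ~0.001 (with its own tiny floating-point round-off)
print(rel_err)   # ~0.0204, i.e., 1/49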

Numerical condition and stability


Basic idea: Solving a mathematical problem is similar to evaluating a function y=f(x). Here f is the
problem of interest, x is the input and y is the output (the solution to the problem). We want to know
the effect of perturbing x slightly on the value of y. Well-conditioned problem: a small perturbation in
x leads to a small change in y.
Ill-conditioned problem: a small perturbation in x leads to a large change in y. (And this kind of problem
tends to cause numerical trouble no matter which algorithm is used.)

Condition: The condition of a mathematical problem relates to its sensitivity to changes in its input
values. The relative condition number κ(x) is defined as the ratio of the relative change in f to the relative
change in x. That is,
κ(x) = (| (f(x+Δx) – f(x))/f(x) |) / ( |Δx/x| )
For a small Δx, we know that (f(x+Δx) – f(x))/Δx is approximately f’(x).
Thus, κ(x) = |x*f’(x)/f(x)|.
A large condition number (> 10 or > 100, depending on the situation) means that the problem is ill-
conditioned.
If x and/or f(x) has several components, then f’(x) is replaced by the Jacobian matrix J(x), whose entries are Jij = ∂fi/∂xj.

Stability: The ability of a numerical method/algorithm to produce reliable and consistent results over
a range of inputs and conditions.
It concerns how errors introduced during the execution of an algorithm affect the result, and is a
property of an algorithm rather than the problem being solved. A stable numerical algorithm will not
amplify errors or produce wildly fluctuating solutions. It is crucial for ensuring the reliability and
robustness of numerical simulations.
In solving differential equations, the stability of an integration scheme is its ability to keep the error
from growing as the integration proceeds forward in time. If the error does not grow, the scheme is stable; otherwise it is
unstable. Some integration schemes are stable for certain choices of the step size h and unstable for others; such
conditionally stable schemes are also sometimes referred to as unstable.
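
A minimal sketch of this step-size dependence, using the test equation y' = -15y and the explicit Euler scheme (our own illustrative choice, not taken from these notes); the scheme is stable here only for h < 2/15:

# Explicit Euler on y' = -15*y, y(0) = 1; each step multiplies y by (1 - 15*h).
def euler(h, steps):
    y = 1.0
    for _ in range(steps):
        y = y + h * (-15.0 * y)
    return y

print(euler(0.05, 20))  # |1 - 15h| = 0.25 < 1: decays, like the true solution
print(euler(0.2, 20))   # |1 - 15h| = 2 > 1: grows without bound (unstable h)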

To sum up, the condition concerns the problem itself, while stability concerns the numerical
algorithm used for solving the problem. In general, an algorithm may consist of many steps
(expressed as functions), and therefore each step has its own condition. Thus, to know whether an
algorithm is stable, we may need to consider the possibility of errors introduced in each step.
For example, evaluating y = sin(x/(1-x)) can be broken into 3 steps. First, let g=1-x. Next, let h=x/g.
And finally, let y=sin(h). An algorithm is stable if every step is well-conditioned, and is unstable if
any one step is ill-conditioned.
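
A minimal Python sketch of this decomposition; the per-step condition numbers in the comments are our own working from κ = |x*f’(x)/f(x)| applied to each step:

import math

x = 0.5
g = 1 - x        # step 1: kappa = |x/(1-x)|, large only when x is near 1
h = x / g        # step 2: as a function of g, kappa = |g*(-x/g**2)/(x/g)| = 1
y = math.sin(h)  # step 3: kappa = |h*cos(h)/sin(h)|
print(y)         # same result as math.sin(x/(1 - x))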

Examples of numerical errors, condition and instability


Example 1 (Round-off error). Add 0.3 ten times and compare the sum with the exact value (3). Since the
value 0.3 cannot be exactly represented in binary, we only get an approximate result.
#Here is the Python code
xa = 3  # exact value
xb = 0
for i in range(10):
    xb = xb + 0.3  # each addition uses the inexact binary form of 0.3

print(xa)  # 3
print(xb)  # e.g., 2.9999999999999996, not exactly 3

Example 2 (Condition of a problem). Suppose f(x) = x/(1-x). So, f’(x) = 1/(1-x) + x/(1-x)^2 = 1/(1-x)^2.
The condition number κ(x) = |x*f’(x)/f(x)| = |(x/(1-x)^2)/(x/(1-x))| = 1/|1-x|. We can see that, at
x = 0.99, κ = 100; at x = -0.99, κ = 0.5025.
Thus, the problem is ill-conditioned for x close to positive 1, and well-conditioned for x close to
negative 1.
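
A minimal Python sketch that checks the analytic condition number against a finite-difference estimate of f’(x):

def f(x):
    return x / (1 - x)

def kappa_numeric(x, dx=1e-8):
    deriv = (f(x + dx) - f(x)) / dx  # forward-difference estimate of f'(x)
    return abs(x * deriv / f(x))

for x in (0.99, -0.99):
    print(x, kappa_numeric(x), 1 / abs(1 - x))  # numeric estimate vs. 1/|1-x|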

Example 3 (from https://web.math.utk.edu/~ccollins/M577/Handouts/cond_stab.pdf). Suppose
solving a problem involves the algorithm of evaluating f(x) = sqrt(1+x) – 1 for x near 0. We can
break it into 3 steps. First, let t(x) = 1+x. Next, let u(t) = sqrt(t). Finally, let v(u) = u-1.
In step 1, since t’ = dt/dx = 1, we have κ(x) = |x*t’(x)/t(x)| = |x/(1+x)|. So, κ(x) is near 0 for x near 0.
In step 2, since du/dt = 1/(2*sqrt(t)), we have κ(t) = |t*(du/dt)/u| = |t/(2*sqrt(t)*sqrt(t))| = 1/2.
In step 3, note that dv/du = 1, and when x is near 0, t = 1+x is near 1, and u = sqrt(t) is near 1. We have
κ(u) = |u*(dv/du)/v| = |u/(u-1)|. Thus, κ(u) approaches infinity for x near 0 (where u is near 1).
From the above calculations, we know steps 1 and 2 are well-conditioned (with condition numbers
near 0 and 1/2, respectively), while the last step is ill-conditioned. Thus, this algorithm is unstable.
Note: If we directly evaluate the condition number for f(x), we get
κ(x) = (sqrt(1+x) + 1)/(2*sqrt(1+x)), which is close to 1 for x near 0 (and below 1 for x > 0). This small
condition number indicates that the problem itself is not ill-conditioned. This means that there exists a
different, stable algorithm for evaluating the function: multiply the original expression by
(sqrt(1+x)+1)/(sqrt(1+x)+1), which gives f(x) = x/(sqrt(1+x)+1).
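
A minimal Python sketch contrasting the unstable and stable evaluations for a tiny x:

import math

x = 1e-16
naive = math.sqrt(1 + x) - 1         # 1 + x rounds to exactly 1.0 in double precision
stable = x / (math.sqrt(1 + x) + 1)  # rewritten form avoids the cancellation
print(naive)   # 0.0 -- every significant digit is lost
print(stable)  # 5e-17, essentially the true value x/2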

Examples 4 & 5:
https://farside.ph.utexas.edu/teaching/329/lectures/node34.html
https://pythonnumericalmethods.berkeley.edu/notebooks/chapter22.04-Numerical-Error-and-Instability.html

For more details, consult the book by Higham (2002): Accuracy and Stability of Numerical
Algorithms.
