L3 Source of Error, Floating-Point
• Round-off Error
• Caused by the limited number of digits used to represent numbers in a computer, the way numbers are stored, and the way additions and subtractions are performed
• Truncation Error
• Caused by the approximations used in the mathematical formulation of the numerical scheme implemented on a computer
Roundoff Errors
• Roundoff errors arise because digital computers cannot
represent some quantities exactly
• There are two major aspects of roundoff errors involved in
numerical calculations:
1. Digital computers have magnitude and precision limits on
their ability to represent numbers.
2. Certain numerical manipulations are highly sensitive to roundoff errors. This can result both from mathematical considerations and from the way in which computers perform arithmetic operations.
Computer Number Representation
• Numerical roundoff errors are directly related to the manner
in which numbers are stored in a computer.
• A number system is merely a convention for representing
quantities.
• Because we have 10 fingers and 10 toes, the number system we are most familiar with is the decimal, or base-10, system
• Numbers on the computer are represented with a binary, or base-2, system
• (10101101)₂ = 2⁷ + 2⁵ + 2³ + 2² + 2⁰ = 128 + 32 + 8 + 4 + 1 = (173)₁₀
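A quick check of this conversion in C (a minimal sketch; the bit shifts simply stand in for the powers of 2 above):

#include <stdio.h>

int main(void) {
    /* (10101101)_2 expanded as powers of 2, matching the sum above */
    int value = (1 << 7) + (1 << 5) + (1 << 3) + (1 << 2) + (1 << 0);
    printf("%d\n", value);   /* prints 173 */
    return 0;
}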
Floating Point Number
• Fractional quantities are typically represented in computers using
floating-point format. In this approach, which is very much like
scientific notation, the number is expressed as
±s × bᵉ
• where s = the significand (or mantissa), b = the base of the number
system being used, and e = the exponent.
• For example, a value like 0.005678 could be represented in a wasteful manner as 0.005678 × 10⁰
• However, normalization would yield 5.678 × 10⁻³, which eliminates the useless leading zeros
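The same ±s × bᵉ decomposition can be inspected from C with the standard frexp() function, which works in base b = 2; a minimal sketch (the value 0.005678 is just the example above):

#include <stdio.h>
#include <math.h>

int main(void) {
    int e;
    /* frexp() splits x into a significand s in [0.5, 1) and an integer
       exponent e such that x = s * 2^e, i.e. the s * b^e form with b = 2 */
    double s = frexp(0.005678, &e);
    printf("0.005678 = %.17g * 2^%d\n", s, e);
    return 0;
}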
Implications of Floating-Point Representation
• Consider a base-10 computer with a 5-digit word size. Assume one digit is used for the sign, two for the exponent, and two for the mantissa.
• For simplicity, assume that one of the exponent digits is used for its sign, leaving a single digit for its magnitude.
• A general representation of the number following normalization would be
s₁s₀d₂d₁d₀ → s₁d₁.d₂ × 10^(s₀d₀)
where s₀ and s₁ = the signs, d₀ = the magnitude of the exponent, and d₁ and d₂ = the magnitude of the significand digits.
Implications of Floating-Point Representation
• With the largest possible digit in base 10 being 9:
• Largest value = ±9.9 × 10⁺⁹
• Smallest (nonzero) value = ±1.0 × 10⁻⁹
• We could not use it to represent a quantity like Planck's constant, 6.626 × 10⁻³⁴ J·s
• If we increase the mantissa by one digit, the maximum/minimum value increases only slightly, to ±9.99 × 10^±9
• In contrast, a one-digit increase in the exponent raises the maximum/minimum by 90 orders of magnitude, to ±9.9 × 10^±99
IEEE-754 Floating-Point Format
• A stored value has three fields: sign (S), exponent, and fraction
• Sign (S): 1 bit
• Exponent: 8 bits (single precision), 11 bits (double precision)
• Fraction: 23 bits (single precision), 52 bits (double precision)
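These limits can be checked from C with the constants in <float.h> (standard C, not specific to these slides); on a typical IEEE-754 machine this prints sizes of 4 and 8 bytes and machine epsilons near 1.2 × 10⁻⁷ and 2.2 × 10⁻¹⁶:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* storage size and machine epsilon for the two formats above */
    printf("float : %zu bytes, epsilon = %g\n", sizeof(float), (double)FLT_EPSILON);
    printf("double: %zu bytes, epsilon = %g\n", sizeof(double), DBL_EPSILON);
    return 0;
}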
Subtractive Cancellation
• Even more dramatic results are obtained when the two numbers are very close, as in
0.7642 × 10³ − 0.7641 × 10³ = 0.0001 × 10³
which would be normalized to 0.1000 × 10⁰ = 0.1000.
• Thus, for this case, three nonsignificant zeros are appended.
• The subtraction of two nearly equal numbers is called subtractive cancellation.
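The same effect shows up in binary floating point; a rough single-precision sketch using the two numbers from the example (the exact digits printed depend on the platform's rounding, but they will not be a clean 0.1000000):

#include <stdio.h>

int main(void) {
    /* two nearly equal values, as in the decimal example above */
    float a = 764.2f;
    float b = 764.1f;
    float d = a - b;               /* exact answer is 0.1 */
    printf("a - b = %.7f\n", d);   /* leading digits cancel; trailing digits are rounding noise */
    return 0;
}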
Floating-Point Arithmetic in Computers
• Large Computations: Certain methods require extremely large numbers of arithmetic manipulations to arrive at their final results.
• A very simple case involves summing a round base-10 number that is not round in base-2. Suppose that the following function is called as s_out = sum(0.0001, 10000):

float sum(float num, int times){
    float s = 0;
    for(int i = 0; i < times; i++){
        s = s + num;    /* 0.0001 has no exact binary representation */
    }
    return s;
}

int main(void){
    …;
    s_out = sum(0.0001, 10000);
    …;
}

• For the case of a 15-significant-digit representation, the result depends on whether the variables are declared float or double (the exact answer is 1):
• float:  s_out = 1.000053524971008
• double: s_out = 0.99999999999990619000
Minimizing Round-off Error
• Double Precision
• In double precision, 8 bytes (equivalently, 64 bits) are used to store one real number. In this format 1 bit is used for the sign, 11 bits for the exponent, and 52 bits for the mantissa.
• Grouping
• When many small numbers are combined, e.g. by addition or subtraction, grouping them helps to reduce round-off error. For example, to add 0.00001 to unity ten thousand times, the additions can be split into 100 groups of 100 small values each; each group is summed first, and the group totals are then added to unity (see the sketch below).
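A minimal C sketch of this grouping idea for the 1 + 10000 × 0.00001 example (the exact answer is 1.1; the function names here are just illustrative):

#include <stdio.h>

/* Naive approach: add 0.00001 to unity ten thousand times, one at a time. */
float naive_sum(void) {
    float s = 1.0f;
    for (int i = 0; i < 10000; i++) {
        s += 0.00001f;            /* each tiny addend is rounded against ~1.0 */
    }
    return s;
}

/* Grouped approach: 100 groups of 100 small values; each group is summed
   first, then the larger group totals are added to unity. */
float grouped_sum(void) {
    float s = 1.0f;
    for (int g = 0; g < 100; g++) {
        float partial = 0.0f;
        for (int i = 0; i < 100; i++) {
            partial += 0.00001f;  /* small values accumulate among themselves */
        }
        s += partial;
    }
    return s;
}

int main(void) {
    printf("naive   : %.7f\n", naive_sum());    /* drifts away from 1.1 */
    printf("grouped : %.7f\n", grouped_sum());  /* typically much closer to 1.1 */
    return 0;
}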
Minimizing Round-off Error
• Taylor Expansions
As ε approaches 0, the accuracy of a direct numerical evaluation of
f(ε) = sin(1 + ε) − sin(1)
becomes very poor because of round-off errors (the two sine values are nearly equal). By using a Taylor expansion, we can rewrite the equation so that the accuracy for small ε is improved:
f(ε) ≈ ε cos(1) − 0.5 ε² sin(1)
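A short C comparison of the two forms for one small ε (the choice ε = 10⁻¹⁰ is arbitrary); the direct form typically loses several digits to cancellation, while the Taylor form does not:

#include <stdio.h>
#include <math.h>

int main(void) {
    double eps = 1.0e-10;
    /* direct form: subtraction of two nearly equal sine values */
    double direct = sin(1.0 + eps) - sin(1.0);
    /* Taylor form from above: eps*cos(1) - 0.5*eps^2*sin(1) */
    double taylor = eps * cos(1.0) - 0.5 * eps * eps * sin(1.0);
    printf("direct: %.16e\n", direct);
    printf("taylor: %.16e\n", taylor);
    return 0;
}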
Minimizing Round-off Error
• Rewriting the equation to avoid subtraction
Consider the equation
f(x) = x(√(x + 1) − √x)
For increasing values of x, the calculation of this form suffers a loss-of-significance error. To avoid this error, one can reformulate it to get
f(x) = x / (√(x + 1) + √x)
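A rough single-precision C sketch of the two forms (for x = 10⁶ the true value is about 499.99987; the original form typically comes out visibly wrong, while the reformulated one stays close):

#include <stdio.h>
#include <math.h>

int main(void) {
    float x = 1.0e6f;    /* large x, so sqrt(x+1) and sqrt(x) nearly coincide */
    float original     = x * (sqrtf(x + 1.0f) - sqrtf(x));   /* subtractive cancellation */
    float reformulated = x / (sqrtf(x + 1.0f) + sqrtf(x));   /* no subtraction */
    printf("x(sqrt(x+1) - sqrt(x))  = %.6f\n", original);
    printf("x/(sqrt(x+1) + sqrt(x)) = %.6f\n", reformulated);
    return 0;
}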
Truncation Error
• Numerical solutions are mostly approximations of exact solutions
• Most numerical methods are based on approximating functions by polynomials
• How accurately does the polynomial approximate the true function?
• By comparing the polynomial to the exact solution, it becomes possible to evaluate the error, called the truncation error
Truncation Error
• Truncation errors are those that result from using an
approximation in place of an exact mathematical
procedure.
• Example 1: approximation to a derivative using a finite-
difference equation:
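One common finite-difference approximation is the forward difference f′(xᵢ) ≈ (f(xᵢ₊₁) − f(xᵢ)) / (xᵢ₊₁ − xᵢ); a brief C sketch with f(x) = sin(x) (an illustrative choice, so the exact derivative cos(x) is known) shows the truncation error shrinking roughly in proportion to the step size h:

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 1.0;   /* approximate d/dx sin(x) = cos(x) at x = 1 */
    for (int k = 1; k <= 4; k++) {
        double h = pow(10.0, -k);
        double approx = (sin(x + h) - sin(x)) / h;   /* forward difference */
        printf("h = %.0e  approx = %.10f  error = %.3e\n",
               h, approx, fabs(approx - cos(x)));
    }
    return 0;
}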
Truncation Error
Example 2: The Taylor Series
• The Taylor theorem states that any smooth function can be
approximated as a polynomial.
• The Taylor series provides a means to express this idea
mathematically.
Taylor Series
• Let x = xᵢ₊₁, x₀ = xᵢ, and (xᵢ₊₁ − xᵢ) = h; then the series can be written as
f(xᵢ₊₁) = f(xᵢ) + f′(xᵢ)h + (f″(xᵢ)/2!)h² + ⋯ + (f⁽ⁿ⁾(xᵢ)/n!)hⁿ + Rₙ
• In general, the nth-order Taylor series expansion will be exact for an nth-order polynomial.
• In other cases, the remainder term Rₙ is of the order of hⁿ⁺¹, meaning:
• The more terms are used, the smaller the error, and
• The smaller the spacing, the smaller the error for a given number of terms
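Both points can be checked numerically; a small C sketch (using exp(x) as an illustrative smooth function, not one from the slides) truncates its Taylor series about 0 after n terms and shows the error shrinking as n grows:

#include <stdio.h>
#include <math.h>

/* Truncated Taylor series of exp(x) about 0, keeping terms up to x^n / n! */
double exp_taylor(double x, int n) {
    double term = 1.0, sum = 1.0;
    for (int k = 1; k <= n; k++) {
        term *= x / k;   /* next term x^k / k! */
        sum += term;
    }
    return sum;
}

int main(void) {
    double x = 0.5;
    for (int n = 1; n <= 5; n++) {
        double approx = exp_taylor(x, n);
        printf("order %d  approx = %.10f  truncation error = %.3e\n",
               n, approx, fabs(approx - exp(x)));
    }
    return 0;
}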
Total Numerical Error
• The total numerical error is the
summation of the truncation and
roundoff errors.
• The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases; this leads to a point of diminishing returns for step-size reduction.
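This trade-off is easy to observe; a short C sketch (using a centered-difference estimate of d/dx sin(x) at x = 1 as an illustrative case) shows the total error first shrinking as h decreases, while truncation dominates, and then growing again once roundoff dominates:

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 1.0;   /* true derivative of sin(x) at x = 1 is cos(1) */
    for (int k = 1; k <= 12; k++) {
        double h = pow(10.0, -k);
        double approx = (sin(x + h) - sin(x - h)) / (2.0 * h);  /* centered difference */
        /* total error = truncation error (dominant for large h)
           + roundoff error (dominant for very small h) */
        printf("h = 1e-%02d  total error = %.3e\n", k, fabs(approx - cos(x)));
    }
    return 0;
}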