
LECTURE 4

Floating Point Error Analysis


1. Absolute and Relative Errors
2. Relative Error Analysis
We now state two theorems regarding the propagation of roundoff errors in sums and differences.
Theorem 4.1. Let x_0, x_1, \ldots, x_n be positive machine numbers in a computer whose unit roundoff error is \varepsilon. Then the relative roundoff error of the sum
    \sum_{k=0}^{n} x_k
is at most (1 + \varepsilon)^n - 1 \approx n\varepsilon.

2.1. Subtraction of Nearly Equal Quantities. As we saw in the preceding lecture, roundoff errors are an inevitable consequence of the way computers carry out calculations. Another, but often avoidable, means of introducing large relative errors is computing the difference between two nearly equal floating point numbers.
For example, let
    x = 0.3721478693
    y = 0.3720230572
    x - y = 0.0001248121
and suppose that the difference x - y is computed on a decimal computer allowing a 5-digit mantissa (i.e., 5 significant figures):
    fl(x) = 0.37215
    fl(y) = 0.37202
two numbers, each with 5 significant digits. When the machine computes the difference between fl(x) and fl(y) it obtains
    fl(x) - fl(y) = 0.00013 = 1.3 \times 10^{-4},
a number with only two significant digits. The relative error is thus fairly large, about 4% in magnitude:
    \frac{(x - y) - [fl(x) - fl(y)]}{x - y} = \frac{0.0001248121 - 0.00013}{0.0001248121} \approx -4\%.
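The effect is easy to reproduce. The sketch below (our own, not part of the lecture) mimics the 5-digit decimal machine by rounding each operand to 5 significant figures before subtracting:

    # Illustration (ours): catastrophic cancellation on a 5-digit decimal machine.
    def fl(v, digits=5):
        # round v to `digits` significant decimal figures via scientific notation
        return float(f"{v:.{digits - 1}e}")

    x = 0.3721478693
    y = 0.3720230572

    exact   = x - y           # 0.0001248121
    rounded = fl(x) - fl(y)   # 0.00013 (up to binary representation noise)

    print(fl(x), fl(y))       # 0.37215 0.37202
    print(f"relative error: {(exact - rounded) / exact:.1%}")   # about -4.2%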
The following theorem bounds the number of significant digits that can be lost in such subtractions.
Theorem 4.2. (Loss of Precision Theorem.) If x and y are positive normalized binary machine numbers such that x > y and
    2^{-q} \le 1 - \frac{y}{x} \le 2^{-p},
then at most q and at least p significant binary digits will be lost in the subtraction x - y.
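As a sanity check (our own code), we can apply the theorem to the decimal example above: compute p and q from 1 - y/x and compare them with the number of bits actually lost in the subtraction:

    import math

    x, y = 0.3721478693, 0.3720230572

    d = 1 - y / x                   # about 3.35e-4, between 2**-12 and 2**-11
    p = math.floor(-math.log2(d))   # largest p with d <= 2**-p  -> 11
    q = math.ceil(-math.log2(d))    # smallest q with 2**-q <= d -> 12
    print(f"theorem: between {p} and {q} bits lost")

    # The normalization shift equals the drop in binary exponent
    # (exact here, since x - y incurs no further rounding error).
    _, ex = math.frexp(x)
    _, ed = math.frexp(x - y)
    print(f"observed: {ex - ed} bits lost")     # 11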
Proof. Let
    x = r \times 2^n, \quad 1 \le r < 2,
    y = s \times 2^m, \quad 1 \le s < 2,
be the normalized binary floating point forms for x and y. Since x is larger than y, m \le n. In order to carry out the subtraction, a computer will first have to shift the radix point of y so that the machine representations of x and y carry the same exponent; effectively writing y as
    y = (s \times 2^{m-n}) \times 2^n.
We then have
    x - y = (r - s \times 2^{m-n}) \times 2^n.
The mantissa of this expression satisfies
    r - s \times 2^{m-n} = r \left( 1 - \frac{s}{r} \times \frac{2^m}{2^n} \right) = r \left( 1 - \frac{y}{x} \right) < 2 \times 2^{-p} = 2^{1-p},
since r < 2 and 1 - y/x \le 2^{-p}. To normalize this expression, a left shift of at least p digits is then required to bring the mantissa into the interval [1, 2). This means that at least p bits of precision have been lost. On the other hand, since r \ge 1 and 1 - y/x \ge 2^{-q}, the new mantissa also satisfies
    r - s \times 2^{m-n} = r \left( 1 - \frac{y}{x} \right) \ge 2^{-q},
so a shift of no more than q digits will be necessary to put the mantissa in standard form. Thus, at most q bits of precision have been lost. \Box
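To see the mechanics of the proof on a concrete pair (our own illustrative numbers): take
    x = (1.0000)_2 \times 2^0 = 1, \qquad y = (1.1110)_2 \times 2^{-1} = 0.9375.
Then 1 - y/x = 0.0625 = 2^{-4}, so the theorem applies with p = q = 4 and predicts that exactly 4 bits are lost. Indeed, aligning exponents gives x - y = (0.0001)_2 \times 2^0, and normalizing this mantissa requires a left shift of exactly 4 bits: x - y = (1.0000)_2 \times 2^{-4}.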
