
Approximations & Round-off Errors

Chapra: Chapter-3
• For many engineering problems, we cannot obtain analytical solutions.
• Numerical methods yield approximate solutions that are close to the analytical solution, but we cannot exactly compute the errors associated with numerical methods.
▫ Only rarely are the given data exact, since they originate from measurements. Therefore there is probably error in the input information.
▫ The algorithm itself usually introduces errors as well, e.g., unavoidable round-offs.
▫ The output information will then contain error from both of these sources.
• How confident are we in our approximate result?
• The question is: "how much error is present in our calculation, and is it tolerable?"
• Accuracy. How close is a computed or measured value to the true value?
• Precision (or reproducibility). How close is a computed or measured value to previously computed or measured values?
• Inaccuracy (or bias). A systematic deviation from the actual value.
• Imprecision (or uncertainty). The magnitude of scatter.
Significant Figures
• The number of significant figures indicates precision. The significant digits of a number are those that can be used with confidence, i.e., the number of certain digits plus one estimated digit.
• Suppose the speed reading of a motorcycle is between 48 and 49 km/h. One observer says it is 48.8 km/h, another says 48.9 km/h. The two digits 48 are certain, and one digit must be estimated.

53,800 — how many significant figures? Written in scientific notation, the count is unambiguous:

5.38 × 10^4    3
5.380 × 10^4   4
5.3800 × 10^4  5
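The scientific-notation table above can be reproduced with Python's string formatting, where the precision field makes the number of significant figures explicit (a quick illustration, not from the text):

```python
# Writing 53,800 in scientific notation with 3, 4, and 5 significant figures;
# the 'e' format keeps one digit before the point and (n - 1) digits after it.
for n in (3, 4, 5):
    print(f"{53800:.{n - 1}e}")
```

This prints 5.38e+04, 5.380e+04, and 5.3800e+04, mirroring the table.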
Error Definitions
Numerical errors arise from the use of approximations to
represent exact mathematical operations and quantities.
These include
1. truncation errors, which result when approximations are
used to represent exact mathematical procedures
2. round-off errors, which result when numbers having limited
significant figures are used to represent exact numbers.
Error Definitions
True value = Approximation + Error

True error: Et = True value − Approximation (+/−)

True fractional relative error = true error / true value

True percent relative error: εt = (true error / true value) × 100%
• For numerical methods, the true value will be known only
when we deal with functions that can be solved analytically
(simple systems). In real world applications, we will
obviously not know the true answer a priori. Then

 a Approximate error 100%


Approximation
• Iterative approach, example Newton’s method

 a Current approximation - Previous approximation 100%


(+ / -) Current approximation
• Use absolute value.
• Computations are repeated until the stopping criterion is satisfied:

|εa| < εs

where εs is a pre-specified percent tolerance based on the knowledge of your solution.

• If the following criterion is met:

εs = (0.5 × 10^(2 − n))%

you can be sure that the result is correct to at least n significant figures.
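The error definitions and the stopping tolerance above can be collected into a few helper functions (a minimal Python sketch; the function names are mine):

```python
def true_percent_error(true_value, approximation):
    # epsilon_t = |true error / true value| * 100%
    return abs((true_value - approximation) / true_value) * 100

def approx_percent_error(current, previous):
    # epsilon_a = |(current - previous) / current| * 100%
    return abs((current - previous) / current) * 100

def sig_fig_tolerance(n):
    # epsilon_s = (0.5 * 10**(2 - n))%  guarantees at least n significant figures
    return 0.5 * 10 ** (2 - n)

print(sig_fig_tolerance(3))  # 0.05 (percent)
```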
Example 3.2
• In mathematics, functions can often be represented by infinite series. For example, the exponential function can be computed using the Maclaurin series expansion:

e^x = 1 + x + x^2/2! + x^3/3! + … + x^n/n! + …

Thus, as more terms are added in sequence, the approximation becomes a better and better estimate.

Starting with the simplest version, e^x = 1, add terms one at a time to estimate e^0.5. After each new term is added, compute the true and approximate percent relative errors. Note that the true value of e^0.5 = 1.648721. Add terms until the absolute value of the approximate error estimate εa falls below a pre-specified error criterion εs conforming to three significant digits.
Example 3.2
• Solution: The error criterion that ensures a result is correct to at least 3 significant digits:

εs = (0.5 × 10^(2 − 3))% = 0.05%

The first estimate is simply e^0.5 ≈ 1. Adding the second term, for x = 0.5:

e^0.5 ≈ 1 + 0.5 = 1.5

True percent relative error:

εt = |(1.648721 − 1.5) / 1.648721| × 100% = 9.02%

Approximate estimate of the error:

εa = |(1.5 − 1) / 1.5| × 100% = 33.3%

Continue adding terms until εa < εs.


Round-off Errors
• Numbers such as π, e, or √7 cannot be expressed by a fixed number of significant figures.
• Computers use a base-2 representation, so they cannot precisely represent certain exact base-10 numbers.
• Fractional quantities are typically represented in computers using "floating point" form:

m · b^e

where m = mantissa, b = base of the number system used, e = integer exponent.

156.78 → 0.15678 × 10^3

1/34 = 0.029411765 → suppose only 4 decimal places can be stored: 0.0294 × 10^0

• Normalized to remove the leading zeroes: multiply the mantissa by 10 and lower the exponent by 1:

0.2941 × 10^−1

An additional significant figure is retained.

Because the mantissa is normalized, 1/b ≤ m < 1. Therefore:
for a base-10 system, 0.1 ≤ m < 1
for a base-2 system, 0.5 ≤ m < 1
• Floating point representation allows both fractions and very large numbers to be expressed on the computer. However:
▫ Floating point numbers take up more room.
▫ They take longer to process than integer numbers.
▫ Round-off errors are introduced because the mantissa holds only a finite number of significant figures.
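The finite mantissa is easy to observe in any language with binary floating point; a quick Python check (illustrative, not from the text):

```python
# 0.1 and 0.2 have no finite base-2 representation, so the stored values
# already carry round-off error and their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)      # False
print(f"{0.1 + 0.2:.20f}")   # reveals the tiny round-off error
print(f"{0.3:.20f}")         # 0.3 itself is also stored inexactly
```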
Chopping
Example:
π = 3.14159265358 is to be stored on a base-10 system carrying 7 significant digits.
Chopping: π = 3.141592, chopping error εt = 0.00000065
If rounded: π = 3.141593, εt = 0.00000035
• Some machines use chopping because rounding adds to the computational overhead. Since the number of significant figures is usually large enough, the resulting chopping error is negligible.
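The chopping-versus-rounding figures above can be reproduced with Python's decimal module (my choice of tool, not the text's):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

pi = Decimal("3.14159265358")
seven_digits = Decimal("0.000001")  # 1 integer digit + 6 decimals = 7 sig. figures

chopped = pi.quantize(seven_digits, rounding=ROUND_DOWN)      # 3.141592
rounded = pi.quantize(seven_digits, rounding=ROUND_HALF_UP)   # 3.141593

print(chopped, pi - chopped)   # chopping error ~ 0.00000065
print(rounded, rounded - pi)   # rounding error ~ 0.00000035
```

Note that the rounding error is about half the chopping error, which is why rounding is preferred when the overhead is acceptable.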
