Numerical Methods
Lecture 03: Errors
Md. Golam Moazzam, Dept. of CSE, JU
❑ Significant Digits
– All computers operate with a fixed word length, so every number is stored with a fixed number of digits.
– The concept of significant digits has been introduced primarily to indicate the
accuracy of a numerical value.
– Basic concept of significant digits:
All non-zero digits are significant.
All zeros occurring between non-zero digits are significant digits.
Trailing zeros following a decimal point are significant.
For example, 3.50, 65.0 and 0.230 have three significant digits each.
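These rules are mechanical enough to check in code. Below is a minimal Python sketch (the function significant_digits and the string-based approach are illustrative choices, not part of the lecture) that applies them to a numeral written as a string:

```python
def significant_digits(s: str) -> int:
    """Count significant digits using the rules above:
    non-zero digits count, zeros between non-zero digits count,
    and trailing zeros after a decimal point count."""
    s = s.lstrip("+-")
    digits = s.replace(".", "").lstrip("0")  # leading zeros are never significant
    if "." not in s:
        # Without a decimal point, trailing zeros are ambiguous;
        # this sketch treats them as not significant.
        digits = digits.rstrip("0")
    return len(digits)

for num in ["3.50", "65.0", "0.230"]:
    print(num, "->", significant_digits(num))   # each prints 3
```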
Example 4.1: Which of the following numbers has the greatest precision?
(a) 4.3201 (b) 4.32 (c) 4.320106
Answer: (a) 4.3201 has a precision of 10⁻⁴.
(b) 4.32 has a precision of 10⁻².
(c) 4.320106 has a precision of 10⁻⁶.
The last number has the greatest precision.
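The same reasoning can be sketched in Python (the helper precision is my own name; it treats a numeral as a string and returns one unit in its last decimal place):

```python
def precision(s: str) -> float:
    """Precision of a decimal numeral: one unit in its last decimal place."""
    _, _, frac = s.partition(".")
    return 10.0 ** (-len(frac))

for num in ["4.3201", "4.32", "4.320106"]:
    print(num, "->", precision(num))   # 0.0001, 0.01, 1e-06
```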
[Diagram] Total error arises from three sources: the measuring method, the computing machine, and the numerical method.
Round-off Errors:
Round-off errors occur when a fixed number of digits is used to represent exact numbers. Since numbers are stored with only finitely many digits at every stage of a computation, a round-off error is introduced at the end of every arithmetic operation. Consequently, even though an individual round-off error may be very small, the cumulative effect of a series of computations can be very significant.
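This accumulation is easy to demonstrate. In the Python sketch below (the value 0.1 and the count of 10,000 are arbitrary illustrative choices), 0.1 has no exact binary floating-point representation, so every addition introduces a tiny rounding error and the final sum drifts away from the exact value 1000:

```python
total = 0.0
for _ in range(10_000):
    total += 0.1   # each addition is rounded to the nearest binary float

print(total)                # slightly different from 1000.0
print(abs(total - 1000.0))  # the accumulated round-off error: small but nonzero
```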
Rounding a number can be done in two ways:
A) Chopping and B) Symmetric rounding.
In chopping, the digits beyond those retained are simply dropped; the retained digits are left unchanged.
Symmetric Round-off:
– In the symmetric round-off method, the last retained significant digit is “rounded up” by 1
if the first discarded digit is greater than or equal to 5; otherwise, the last retained digit is left
unchanged.
– For example, the number 42.7893 would become 42.79 and the number 76.5432 would
become 76.54.
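Both schemes can be sketched in Python as follows (chop and sym_round are illustrative helpers, not standard library functions; they round to d decimal places, matching the two-decimal examples above, and assume non-negative inputs):

```python
import math

def chop(x: float, d: int) -> float:
    """Chopping: discard all digits beyond the d-th decimal place."""
    factor = 10 ** d
    return math.trunc(x * factor) / factor

def sym_round(x: float, d: int) -> float:
    """Symmetric round-off: adding 5 to the first discarded place and then
    chopping bumps the last retained digit by 1 exactly when the first
    discarded digit is >= 5. Assumes x >= 0."""
    factor = 10 ** d
    return math.floor(x * factor + 0.5) / factor

# Note: x is itself a binary float, so results can occasionally differ
# from exact decimal chopping/rounding.
print(chop(42.7893, 2), sym_round(42.7893, 2))   # 42.78 42.79
print(chop(76.5432, 2), sym_round(76.5432, 2))   # 76.54 76.54
```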
Absolute Error
– Let us suppose that the true value of a data item is denoted by xt and its approximate
value is denoted by xa. Then, they are related as follows:
– True value xt = Approximate value xa + Error.
– The error is then given by
Error = xt - xa
– The error may be negative or positive depending on the values of xt and xa. In error
analysis, the magnitude of the error is important, not its sign; therefore, we
normally consider what is known as the absolute error, which is denoted by
ea = |xt − xa|
Relative Error
– The relative error is the absolute error divided by the true value:
er = absolute error / true value = |xt − xa| / |xt| = |1 − xa/xt|
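Both definitions translate directly into code. A minimal Python sketch (the names abs_error and rel_error are my own):

```python
def abs_error(xt: float, xa: float) -> float:
    """Absolute error: ea = |xt - xa|."""
    return abs(xt - xa)

def rel_error(xt: float, xa: float) -> float:
    """Relative error: er = |xt - xa| / |xt| (undefined for xt == 0)."""
    return abs(xt - xa) / abs(xt)

# Example: approximating pi by 3.14.
xt, xa = 3.141592653589793, 3.14
print(abs_error(xt, xa))   # about 0.00159
print(rel_error(xt, xa))   # about 0.000507, i.e. roughly 0.05%
```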