Numerical Methods: Chapters One and Two
December 5, 2022
Chapter 1
Many linear problems can be solved analytically.
Example 1 If ax + b = 0, a ≠ 0, then x = −b/a.
If ax² + bx + c = 0, a ≠ 0, then x = (−b ± √(b² − 4ac))/(2a).
However, many nonlinear equations have no such explicit solution. For these, numerical methods based on approximation can be developed which produce approximately correct results. For instance, if A is the exact value, a is an approximation to it and δa is a bound on the error, then the absolute error satisfies
δ = |a − A| ≤ δa, i.e. a − δa ≤ A ≤ a + δa.
An error of 0.01 cm in a measurement of 10 m is better than the same error in a measurement of 1 m. Thus the relative error of a is given by
ε = δ/|A| = |a − A|/|A| = |a/A − 1|
and it is bounded by
εa = δa/(a − δa) ≈ δa/a.
Here εR = 0.1 × (1/100) = 0.001. Then δR = R εR = (29.25)(0.001) ≈ 0.03. Therefore R − δR ≤ R ≤ R + δR, that is, 29.22 ≤ R ≤ 29.28.
2. Initial errors: these are due to numerical parameters such as physical constants (which are imperfectly known).
3. Experimental (or inherent) errors: these are errors in the given data; when a is determined by physical measurement, the error depends upon the measuring instrument.
4. Truncation (residual) errors: this type of error arises from the fact that the (finite or infinite) sequence of computational steps necessary to produce an exact result is truncated prematurely after a certain number of steps.
Example 4 The series e^x = ∑_{n=0}^{∞} xⁿ/n! is approximated by the partial sum e^x ≈ 1 + x + x²/2! + x³/3!
5. Rounding errors: this error is due to the limitation of the calculating aids; numbers have to be rounded off during computations. (A short sketch after this list illustrates truncation and rounding errors.)
6. Programming errors (blunders): these are due to human errors during programming (in using computers), i.e. bugs that have to be debugged. They can be avoided by being careful.
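As a rough Python illustration of truncation and rounding errors, the following sketch compares truncated partial sums of the exponential series (as in Example 4) with the library value of e^x; the number of retained terms and the per-term rounding are choices made here only for illustration.

import math

def exp_partial_sum(x, n_terms, round_to=None):
    """Partial sum 1 + x + x^2/2! + ... with n_terms terms. If round_to is given,
    each term is rounded to that many decimals, adding a rounding error on top
    of the truncation error."""
    total = 0.0
    for n in range(n_terms):
        term = x**n / math.factorial(n)
        if round_to is not None:
            term = round(term, round_to)
        total += term
    return total

x = 0.3333
for n_terms in (4, 5, 8):
    approx = exp_partial_sum(x, n_terms)
    print(n_terms, "terms: truncation error =", approx - math.exp(x))
# rounding each term to 4 decimals (as in Example 10 below) adds a further error
print("with rounded terms:", exp_partial_sum(x, 5, round_to=4))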
where the αi ∈ {0, 1, . . . , 9} are called the digits of a, with αm ≠ 0, m ∈ Z.
A significant digit of a number a is any given digit of a, except possibly for zeros to the left of the first nonzero digit, which serve only to fix the position of the decimal point. The significant digits are those which carry real information as to the size of the number, apart from its exponential position.
We say that the first n significant digits of an approximate number are correct if
the absolute error does not exceed one half unit in the nth place. That is if in
equation (1.1)
δ = |a − A| ≤ ½ · 10^(m−n+1)
Then the first n digits αm , αm−1 , · · · , αm−n+1 are correct (and we say the number
a is correct to n significant figures.)
Example 7
with
δ = |35.97 − 36.0| = 0.03 ≤ ½(10^(−n+1)) = ½(10^(−2+1)) = ½(10^(−1)) = 0.05
1. If the first of the discarded digits is less than 5 leave the remaining digits
unchanged (rounding down)
2. If the first of the discarded digits is greater than 5, add 1 to the nth digit (rounding up)
3. If the first of the discarded digits is 5 and only zeros follow it, round so that the nth digit is even: keep it if it is already even, add 1 if it is odd (the even-digit rule); if nonzero digits follow the 5, round up as in rule 2
Example 8 Round 6.125753 to 1, 2, 3 and 4 decimal places.
6.125753 ≈ 6.1 (rounded to 1 decimal place), δ ≤ ½ · 10^(−1) = 0.05.
Since we want 2 significant digits we go up to 6.1: the first of the digits being left out is 2 < 5, hence rounding down gives 6.1. Here the place value of 2 is 10^(−2), so n = 2, with
δ = |6.125753 − 6.1| = 0.025753 ≤ ½(10^(−2+1)) = ½(10^(−1)) = 0.05.
6.125753 ≈ 6.13 (rounded to 2 decimal places), δ ≤ ½ · 10^(−2) = 0.005
≈ 6.126 (rounded to 3 decimal places), δ ≤ ½ · 10^(−3) = 0.0005
≈ 6.1258 (rounded to 4 decimal places), δ ≤ ½ · 10^(−4) = 0.00005
Since we want 5 significant digits we go up to 6.1257: the first of the digits being left out is 5 and, since the digit 7 is odd, rounding up gives 6.1258. Here the place value of 5 is 10^(−5), so n = 5, with
δ = |6.125753 − 6.1258| = 0.000047 ≤ ½(10^(−5+1)) = ½(10^(−4)) = 0.00005.
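The roundings and error bounds above can be checked with a short sketch using Python's decimal module (whose round-half-to-even mode applies the even-digit rule only when the discarded part is exactly one half unit); the variable names are chosen here for illustration.

from decimal import Decimal, ROUND_HALF_EVEN

x = Decimal("6.125753")
for k in (1, 2, 3, 4):                      # number of decimal places kept
    step = Decimal(1).scaleb(-k)            # 10**(-k)
    r = x.quantize(step, rounding=ROUND_HALF_EVEN)
    err = abs(x - r)
    bound = Decimal("0.5").scaleb(-k)       # half a unit in the k-th place
    print(k, "dp:", r, " |error| =", err, " within bound:", err <= bound)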
Rounding errors are most dangerous when we have to perform many arithmetic operations.
Example 9 Approximate π = 3.14159265 using 22/7 and 355/113, correct to 2 decimal places and 4 decimal places respectively, and find the corresponding absolute and relative errors.
22/7 = 3.1428571,  355/113 = 3.1415929
δ1 = |22/7 − π| = 0.0012645,  δ2 = |355/113 − π| = 0.0000002668
ε1 = δ1/|π| = 4.025 × 10^(−4),  ε2 = δ2/|π| = 8.49 × 10^(−8)
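The absolute and relative errors of Example 9 can be recomputed directly from the definitions δ = |a − A| and ε = δ/|A|; a minimal Python check:

import math

for a in (22 / 7, 355 / 113):          # approximations to pi
    delta = abs(a - math.pi)           # absolute error
    eps = delta / abs(math.pi)         # relative error
    print(a, delta, eps)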
1.4 Propagation of errors
Suppose a′ is a computed approximation of a number a (a′ ∈ Q). The initial error is a′ − a, while the difference δ1 = f(a′) − f(a) is the corresponding approximation error. If f is replaced by a simpler function f1 (say a power series representation of f), then δ2 = f1(a′) − f(a′) is the truncation error. But in calculations we obtain, say, f2(a′) instead of f1(a′), which is a wrongly computed value of a wrong function of a wrong argument. The difference δ3 = f2(a′) − f1(a′) is termed the rounding error. The total, or propagated, error is then δ = f2(a′) − f(a) = δ1 + δ2 + δ3.
Example 10 Determine e^(1/3) to 4 decimal places.
We compute e^0.3333 instead of e^(1/3) = e^0.3̇, with initial error
δ1 = e^0.3333 − e^(1/3) = e^0.3333(1 − e^0.0000333...) = −0.0000465196
Next, we compute e^x from the partial sum e^x ≈ 1 + x + x²/2! + x³/3! + x⁴/4! for x = 0.3333, with truncation error
δ2 = (1 + 0.3333 + (0.3333)²/2! + (0.3333)³/3! + (0.3333)⁴/4!)
 − (1 + 0.3333 + (0.3333)²/2! + (0.3333)³/3! + (0.3333)⁴/4! + · · ·)
 = −((0.3333)⁵/5! + (0.3333)⁶/6! + · · ·) = −0.0000362750
Finally, the summation of the truncated series is done with rounded values, giving the result
1 + x + x²/2! + x³/3! + x⁴/4! = 1 + 0.3333 + 0.0555 + 0.0062 + 0.0005 = 1.3955,
whereas if we went up to 10 decimal places we would have 1.3955296304. Then the rounding error is δ3 = 1.3955 − 1.3955296304 ≈ −0.0000296, and the total error is δ = δ1 + δ2 + δ3 ≈ −0.0001124.
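A small sketch recomputing the three error components of Example 10, following the splitting δ1 (initial), δ2 (truncation), δ3 (rounding) defined above; the helper name series is chosen here for illustration.

import math

x_true, x_used = 1 / 3, 0.3333

def series(x, terms, nd=None):
    """Partial sum of the exponential series; optionally round each term to nd decimals."""
    s = 0.0
    for n in range(terms):
        t = x**n / math.factorial(n)
        s += round(t, nd) if nd is not None else t
    return s

f_a  = math.exp(x_true)         # f(a)  = e^(1/3)
f_ap = math.exp(x_used)         # f(a') = e^0.3333
f1   = series(x_used, 5)        # f1(a'): truncated series, exact arithmetic
f2   = series(x_used, 5, nd=4)  # f2(a'): truncated series with rounded terms

d1, d2, d3 = f_ap - f_a, f1 - f_ap, f2 - f1
print(d1, d2, d3, d1 + d2 + d3, f2 - f_a)   # the last two values agree: the total error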
Note:-
Investigations of error propagation are important in iterative processes and computations where each value depends on its predecessors.
1.5 Instability
In some cases one may consider a small error negligible and want to suppress it, yet after some steps the accumulated error may have a fatal effect on the solution. Small changes in initial data may produce large changes in the final results. This property is known as ill-conditioning: for an ill-conditioned problem of computing an output value y from an input value x by y = g(x), when x is slightly perturbed to x̂, the result ŷ = g(x̂) is far from y.
A well-conditioned problem has a stable algorithm for computing y = f(x): the output ŷ is the exact result ŷ = f(x̂) for a slightly perturbed input x̂ which is close to the input x. Thus, if the algorithm is stable and the problem is well conditioned, the computed result ŷ is close to the exact y.
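As a tiny illustration, take the problem y = g(x) with g(x) = 1/(1 − x) (a function chosen here only as an example): near x = 1 a change of about 0.01% in the input produces a change of about 11% in the output, so the problem is ill conditioned there.

def g(x):
    return 1.0 / (1.0 - x)      # ill conditioned near x = 1

x = 0.999
x_perturbed = 0.9991            # input changed by about 0.01%
print(g(x), g(x_perturbed))     # 1000.0 versus about 1111.1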
An algorithm is a systematic procedure that solves a problem. It is said to be stable if its output is the exact result of a slightly perturbed input.
Performance features that may be expected from a good numerical algorithm:
Accuracy:- This is related to errors: how accurate is the result going to be when a numerical algorithm is run with some particular input data?
Efficiency:- How fast can we solve a certain problem? This concerns the rate of convergence and the number of floating point operations required.
If an error stays at one point in an algorithm and does not grow further as the calculation continues, then the algorithm is considered numerically stable. This happens when the error causes only a very small variation in the formula result. If the opposite occurs and the error grows as the calculation continues, then the algorithm is considered numerically unstable.
Chapter 2
Non-linear Equation
Locating roots
Consider an equation of the form f(x) = 0, where f(x) is assumed to be a continuously differentiable function of sufficiently high order. We want to find solutions (roots) of the equation f(x) = 0; that is, we find numbers a such that f(a) = 0. Such an a is a point at which the graph of the function intersects the x-axis.
The function can be algebraic, such as a polynomial or a rational function, or transcendental: trigonometric, exponential, logarithmic, nth root, etc. A nonlinear equation may, for example, have infinitely many solutions, and it is difficult to find them exactly.
In most cases we have to use approximate solutions, i.e. find a such that |f(a)| < ε, where ε is a given tolerance; this gives an interval where the root is located rather than the exact root. Note that with this criterion the equations f(x) = 0 and M f(x) = 0 (where M is a constant) do not have the same approximate roots, even though their exact roots coincide.
2.1 Bisection method
This is a simple but slowly convergent method for determining the roots of a con-
tinuous function f (x). it is based on the intermediate value theorem for f (x) in
an interval [a0 , b0 ] for which if f has opposite signs at the end points say f (a0 ) < 0
and f (b0 ) > 0, then f has a root in [a0 , b0 ]
a0 +b0
compute the mid point x0 = 2
1. Let us take the first root, between [a0, b0] = [0, 1], with x1 = (0 + 1)/2 = 0.5
continuing the process we get
2. For the second root, between [a0, b0] = [1, 2], with x1 = (1 + 2)/2 = 1.5 and f(x1) = f(1.5) < 0.
Then [a1, b1] = [1.5, 2], and so on; similarly [a14, b14] = [1.5121, 1.5122]. Continuing the process we get
Example 13 Find the roots of f(x) = x² − 3 on the interval [1, 2] with absolute error δ = 0.01.
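The worked iterations are not shown here, but the bisection step described above can be sketched in Python and applied to Example 13; stopping when half the bracket length falls below δ is one common choice of convergence criterion.

def bisect(f, a, b, delta):
    """Bisection: f continuous on [a, b] with f(a) and f(b) of opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > delta:
        x = (a + b) / 2             # midpoint of the current bracket
        if f(a) * f(x) <= 0:        # sign change in [a, x]
            b = x
        else:                       # sign change in [x, b]
            a = x
    return (a + b) / 2

print(bisect(lambda x: x**2 - 3, 1, 2, 0.01))   # close to sqrt(3) = 1.7320...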
2.2 The method of false position
This method is always convergent for a continuous function f.
It requires two initial guesses a0, b0 with f(a0)f(b0) < 0; the end point of the new interval is calculated as a weighted average over the previous interval, i.e.
w1 = (f(b0)a0 − f(a0)b0)/(f(b0) − f(a0)),
provided f(b0) and f(a0) have opposite signs.
In general the formula is given as
wn = (f(bn−1)an−1 − f(an−1)bn−1)/(f(bn−1) − f(an−1)) for n = 1, 2, . . . until the convergence criterion is satisfied.
Then if f(an−1)f(wn) ≤ 0, set an = an−1 and bn = wn; otherwise set an = wn and bn = bn−1.
Algorithm
Given a function f(x) continuous on the interval [a, b] and satisfying the criterion f(a)f(b) < 0:
1. Set a0 = a, b0 = b.
2. For n = 1, 2, . . . until the convergence criterion is satisfied, compute
wn = (f(bn−1)an−1 − f(an−1)bn−1)/(f(bn−1) − f(an−1)).
If f(an−1)f(wn) ≤ 0, set an = an−1 and bn = wn; otherwise set an = wn and bn = bn−1.
Example 14 Solve 2x³ − (5/2)x − 5 = 0 on the interval [1, 2] using the false position method with absolute error ε < 10^(−3).
f(x) = 2x³ − (5/2)x − 5
f(1) = 2 − 5/2 − 5 = −11/2
f(2) = 16 − 5 − 5 = 6
f(1)f(2) = (−11/2)(6) = −33 < 0
Let a0 = 1, b0 = 2.
w1 = (f(b0)a0 − f(a0)b0)/(f(b0) − f(a0)) = (f(2)·1 − f(1)·2)/(f(2) − f(1)) = 1.47826
Then f(w1) = f(1.47826) = −2.23489761.
Next, f(1)f(w1) = (−11/2)(−2.23489761) > 0, so the new interval is [w1, 2] = [1.47826, 2].
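A sketch of the false position update above, run on Example 14; stopping when |f(wn)| < ε is one reasonable reading of the convergence criterion (the notes state the tolerance as an absolute error on the root).

def false_position(f, a, b, eps):
    """Regula falsi: keep a bracket [a, b] with f(a)*f(b) < 0 and replace
    one end point by the weighted average w at each step."""
    while True:
        w = (f(b) * a - f(a) * b) / (f(b) - f(a))
        if abs(f(w)) < eps:
            return w
        if f(a) * f(w) <= 0:        # root in [a, w]
            b = w
        else:                       # root in [w, b]
            a = w

print(false_position(lambda x: 2 * x**3 - 2.5 * x - 5, 1, 2, 1e-3))
# the first iterate is w1 = 1.47826..., as computed above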
Exercise 1 Approximate √2 using f(x) = x² − 2 = 0 on [0, 2] with error δ < 0.01.
Solution:-
[0, 2]: f(0) = −2 < 0 and f(2) = 2 > 0
x1 = (0(2) − 2(−2))/(2 − (−2)) = 1,  f(1) = −1 < 0
[1, 2]: x2 = (1(2) − 2(−1))/(2 − (−1)) = 4/3,  f(4/3) = −2/9 < 0
[4/3, 2]: x3 = ((4/3)(2) − 2(−2/9))/(2 − (−2/9)) = 28/20 = 1.4,  f(1.4) = −0.04 < 0
[1.4, 2]: x4 = (1.4(2) − 2(−0.04))/(2 − (−0.04)) = 1.412,  f(1.412) = −0.006256 < 0
set x0 = x1
set x1 = x2
until |f(x2)| < tolerance value
Solution:-
so that we approach the root a.
We write the equation f(x) = 0 in the form x = g(x), where g is defined in some interval I containing a and the range of g lies in I for x ∈ I.
Compute successively
x1 = g(x0)   (2.1)
x2 = g(x1)   (2.2)
x3 = g(x2)   (2.3)
⋮
xn+1 = g(xn)   (2.5)
which may converge to the actual root a (depending on the choice of g and x0), with limn→∞(xn − a) = 0, i.e. ∀ε > 0, ∃M such that |xn − a| < ε for n ≥ M; or the values xn may move away from the root a (diverge).
A solution of the equation x = g(x) is called a fixed point of g.
To obtain the second root a2 = 4, if we choose x0 = 5 then
x1 = (5² + 4)/5 = 5.8
x2 = ((5.8)² + 4)/5 = 7.528
x3 = 12.134
[Figure: graphs of y = (x² + 4)/5 and y = x; the fixed points lie at their intersections.]
Note 1 From the graph, convergence depends on the fact that in a neighborhood I of a solution, the slope of the curve y = g(x) is less than the slope of y = x (i.e. |g′(x)| < 1 for x ∈ I).
Exercise 3 Find solutions of f(x) = x² − 3x + 1 = 0 using x = g(x) = (x² + 1)/3 and x0 = 1, 2, 3.
since g(a) = a and g(xn ) = xn+1 for n = 0, 1, 2, . . . we have
Example 18 Find the roots of 4x − 4 sin x = 1
Write f(x) = 0 as x = sin x + 0.25 = g(x), with |g′(x)| = |cos x| < 1, say for 1 < x < 1.3. Choosing x0 = 1.2 we have
Thus a = 1.171 (correct to 3D), with an error ε ≤ ½ · 10^(−3) = 0.0005, i.e. a = 1.171 ± 0.0005.
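A sketch of the iteration xn+1 = g(xn) for Example 18, starting from x0 = 1.2 as above; the stopping test on successive iterates is a choice made here.

import math

g = lambda x: math.sin(x) + 0.25    # g from Example 18: x = sin x + 0.25

x = 1.2
for n in range(1, 100):
    x_new = g(x)
    if abs(x_new - x) < 1e-6:       # stop when successive iterates agree
        break
    x = x_new
print(n, round(x_new, 3))           # the fixed point is near 1.17123, i.e. a = 1.171 to 3D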
For e^x − 3x = 0: e^x = 3x ⇒ x = e^x/3 = g(x), with |g′(x)| = |e^x/3| < 1 ⇒ x < ln 3 ≈ 1.1.
To find a1 take x0 = 0; then
x1 = 0.3̇
x2 = 0.465
x3 = 0.53
⋮
x7 = 0.610
But a2 ∉ I. Suppose x0 = 2:
x1 = 2.46
x2 = 3.91
x3 = 16.7
which diverges.
So to obtain a2, take x = g(x) = 4x − e^x with g′(x) = 4 − e^x.
|g′(x)| = |4 − e^x| < 1 ⇔ −1 < 4 − e^x < 1 ⇔ −5 < −e^x < −3 ⇔ 3 < e^x < 5 ⇔ ln 3 < x < ln 5, i.e. x ∈ (1.1, 1.6).
Hence the iteration is convergent on this interval.
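A sketch of fixed-point iteration for e^x = 3x with the two rearrangements discussed above; the starting value x0 = 1.5 for the second rearrangement and the tolerance are choices made here (the notes do not list those iterates).

import math

def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Iterate xn+1 = g(xn) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

a1 = fixed_point(lambda x: math.exp(x) / 3, 0.0)      # g(x) = e^x/3 near the first root
a2 = fixed_point(lambda x: 4 * x - math.exp(x), 1.5)  # g(x) = 4x - e^x near the second root
print(round(a1, 3), round(a2, 3))                     # about 0.619 and 1.512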
Let x1 be the point of intersection of the tangent line with the x-axis. The
slope of the tangent to f at x0 is
tan β = f′(x0) = f(x0)/(x0 − x1)  ⟹  x1 = x0 − f(x0)/f′(x0)
If xn+1 is close to the actual root a, then f(xn+1) ≈ 0 and xn ≈ xn+1, so that (xn+1 − xn)² and all higher powers can be neglected, which gives
xn+1 = xn − f(xn)/f′(xn)
Example 20 Find the roots of f(x) = e^x − 3x = 0 starting from x0 = 0 and x0 = 2.
Solution:-
f′(x) = e^x − 3
xn+1 = xn − f(xn)/f′(xn) = xn − (e^(xn) − 3xn)/(e^(xn) − 3) = e^(xn)(xn − 1)/(e^(xn) − 3)
For x0 = 0:  x1 = e^0(0 − 1)/(e^0 − 3) = −1/(−2) = 0.5
x2 = e^0.5(0.5 − 1)/(e^0.5 − 3) = 0.61
x3 = 0.619
For x0 = 2:  x1 = e^2(2 − 1)/(e^2 − 3) = 1.6835
x2 = 1.5435
x3 = 1.5134
x4 = 1.512
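A sketch of Newton's iteration xn+1 = xn − f(xn)/f′(xn) for Example 20; the stopping rule (successive iterates agreeing to a small tolerance) is a choice made here.

import math

def newton(f, df, x0, tol=1e-6, max_iter=50):
    """Newton's method: xn+1 = xn - f(xn)/f'(xn)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: math.exp(x) - 3 * x
df = lambda x: math.exp(x) - 3
print(round(newton(f, df, 0.0), 3))   # about 0.619
print(round(newton(f, df, 2.0), 3))   # about 1.512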