
Manalebish D

Numerical Analysis Math-3221

December 5, 2022
Chapter 1

Basic concepts in error estimation
The study of numerical methods involves solving problems which produce numerical answers (outputs) from given numerical data (inputs).
Many applied mathematics problems can be solved only approximately; such problems arise whenever mathematics is used to model the real world. The design and development of numerical methods often depend on the computational aids available: computers, desk machines, calculators.
An algorithm is a systematic procedure that solves a problem.
Examples of numerical problems

1. The evaluation of functions at given arguments: e^{−0.5}, cos(0.315)

2. The determination of roots of equations

3. The evaluation of derivatives at a given argument, or the evaluation of the definite integral of a function

4. The determination of maxima and minima of a function

5. The solution of differential equations and of systems of linear equations

Many of the linear problems can be solved analytically.

Example 1 ax + b = 0, a ≠ 0, then x = −b/a
ax² + bx + c = 0, a ≠ 0, then x = (−b ± √(b² − 4ac))/(2a)

However, many nonlinear equations have no such explicit solution. For these, numerical methods based on approximation can be developed which produce roughly correct results. For instance

• Finding roots by computing successively better approximations

• Approximating a function which is not analytically integrable by one which is integrable, and integrating the approximation.

Any iteration must be terminated after a finite number of steps, giving rise to an approximation within a specified degree of tolerance (or degree of error).
An approximate value “a” is a value which differs from an exact value “A” (usually unknown) and is used in place of A in computations.
The error ∆a of an approximate number a is given by ∆a = a − A (or A − a), so that A = a − ∆a (or A = a + ∆a, respectively).
In general, we will not be able to compute the error in a result (for if we were able to do so, we would simply subtract that error from the computed result to get the exact value), but we shall compute bounds on the magnitude of the error.
The absolute error of a is given by δ = |∆a| = |a − A|. Since A is not usually known, we cannot evaluate the exact value of δ. Thus the limiting absolute error δa of a is any number ≥ δ (ideally the smallest such number):

δ = |a − A| ≤ δa, i.e. a − δa ≤ A ≤ a + δa

Example 2 For 2.71 ≤ e ≤ 2.72 we have δa = 0.01


For 2.717 ≤ e ≤ 2.720 we have δa = 0.003

An error of 0.01 cm in a measurement of 10 m is better than the same error in a measurement of 1 m. Thus
the relative error of a is given by

ε = δ/|A| = |(a − A)/A| = |a/A − 1|

for A ≠ 0 (usually given as a percentage).


This implies δ = |A|ε, and the limiting relative error εa of a is a number ≥ ε. That is,

εa ≥ ε and δ ≤ |A|εa

We may take δa = |A|εa; since a ≈ A we may write δa = |a|εa.
From a − δa ≤ A ≤ a + δa we get A = a(1 ± εa).
Assuming A > 0, a > 0 and δa < a, we have ε = δ/A ≤ δa/(a − δa). Thus take

εa = δa/(a − δa) ≈ δa/a

Example 3 A result of R = 29.25 was obtained in determining the gas constant for air. Knowing that the relative error of this value is 0.1%, find the limits within which R lies.

Here εR = 0.1 × (1/100) = 0.001. Then δR = R·εR = (29.25)(0.001) ≈ 0.03. Therefore 29.25 − δR ≤ R ≤ 29.25 + δR, i.e. 29.22 ≤ R ≤ 29.28.

1.1 Sources of errors


Analysis of errors is the central concern in the study of numerical analysis and
therefore we will investigate the sources and types of errors that may occur in a
given problem and the subsequent propagation of errors.

1. Errors of the problem (instability): errors made when setting up the mathematical model, i.e. when formulating the problem in mathematical terms.

2. Initial errors: these are due to numerical parameters such as physical constants (which are imperfectly known).

3. Experimental (or inherent) errors: these are errors of the given data; when a is determined by physical measurements, the error depends upon the measuring instrument.

4. Truncation (residual) errors: this type of error arises from the fact that the (finite or infinite) sequence of computational steps necessary to produce an exact result is truncated prematurely after a certain number of steps (a numerical illustration follows this list).

Example 4

eˣ = Σ_{n=0}^{∞} xⁿ/n! is approximated as eˣ ≈ 1 + x + x²/2! + x³/3!

5. Rounding errors: this error is due to the limitation of the calculating aids; numbers have to be rounded off during computations.

6. Programming errors (blunders): these are due to human errors during programming (in using computers); they can be avoided by careful testing and debugging.

7. Machine errors: these are due to machine malfunctions. The error in a final result is usually accumulated over a calculation involving several steps, each of which is subject to errors of one or more of the above kinds.

1.2 Approximation of errors

It is unusual to say that the distance between two cities is 94.815 km; it is better to say that the distance is 95 km. Let a be any rational number (a > 0), terminating or non-terminating. Then in base ten

a = α_m 10^m + α_{m−1} 10^{m−1} + · · · + α_{m−n+1} 10^{m−n+1}    (1.1)

where the digits α_i of a satisfy α_i ∈ {0, 1, · · · , 9}, with α_m ≠ 0 and m ∈ Z.

Example 5 215.3106 = 2 × 10² + 1 × 10¹ + 5 × 10⁰ + 3 × 10⁻¹ + 1 × 10⁻² + 6 × 10⁻⁴

In scientific notation (floating point) we write

37500 = 3.75 × 10⁴ = 3.75E04

0.00064 = 6.4 × 10⁻⁴ = 6.4E−04

A significant digit of a number a is any given digit of a, except possibly for zeros to the left of the first nonzero digit, which serve only to fix the position of the decimal point. The significant digits are those which carry real information as to the size of the number, apart from its exponential position.

Example 6 The numbers 2380, 023.80, 0.0002380 each have 4 significant digits

2,380,000 = 2.380 × 10⁶ with 4 significant digits and 3 decimal places
= 2.38 × 10⁶ with 3 significant digits and 2 decimal places
= 0.238 × 10⁷ with 3 significant digits and 3 decimal places

We say that the first n significant digits of an approximate number are correct if the absolute error does not exceed one half unit in the nth place. That is, if in equation (1.1)

δ = |a − A| ≤ ½ · 10^{m−n+1}

then the first n digits α_m, α_{m−1}, · · · , α_{m−n+1} are correct (and we say the number a is correct to n significant figures).

Example 7

1. A = 35.97 ≈ 36.0, correct to 3 significant digits

Since we want 3 significant digits we go up to 35.9; the first of the digits being left out is 7 > 5, hence rounding up gives 36.0. Here the place value of 7 is 10⁻², which implies n = 2, with

δ = |35.97 − 36.0| = 0.03 ≤ ½(10^{−n+1}) = ½(10^{−2+1}) = ½(10⁻¹) = 0.05

2. A = 5.18 ≈ 5.2, correct to 2 significant digits

Since we want 2 significant digits we go up to 5.1; the first of the digits being left out is 8 > 5, hence rounding up gives 5.2. Here the place value of 8 is 10⁻², which implies n = 2, with

δ = |5.18 − 5.2| = 0.02 ≤ ½(10^{−2+1}) = ½(10⁻¹) = 0.05

3. A = 1596 ≈ 1600, correct to 2 significant digits

Since we want 2 significant digits we go up to 15; the first of the digits being left out is 9 > 5, hence rounding up gives 16. Here the place value of 9 is 10¹, which implies m = 1, with

δ = |1596 − 1600| = 4 ≤ ½(10^{m+1}) = ½(10^{1+1}) = 50

4. A = 1796 ≈ 2000 (= 2 × 10³), correct to one significant digit

Since we want 1 significant digit we go up to 1; the first of the digits being left out is 7 > 5, hence rounding up gives 2. Here the place value of 7 is 10², which implies m = 2, with

δ = |1796 − 2000| = 204 ≤ ½(10^{m+1}) = ½(10^{2+1}) = 500

1.3 Rounding off errors

Round-off errors are errors due to computer imperfection.
The round-off rule is used to choose an approximate number that minimizes the round-off error. To round off to n significant digits, discard all digits to the right of the nth significant digit as follows:

1. If the first of the discarded digits is less than 5, leave the remaining digits unchanged (rounding down)

2. If the first of the discarded digits is greater than 5, add 1 to the nth digit (rounding up)

3. If it is 5, leave the last retained digit unchanged if it is even and increase it by 1 if it is odd (even digit rule)

Example 8

6.125753 ≈ 6.1, rounded to 1 decimal place, δ ≤ ½ · 10⁻¹ = 0.05

Since we want 2 significant digits we go up to 6.1; the first of the digits being left out is 2 < 5, hence rounding down gives 6.1. Here the place value of 2 is 10⁻², which implies n = 2, with

δ = |6.125753 − 6.1| = 0.025753 ≤ ½(10^{−2+1}) = ½(10⁻¹) = 0.05

6.125753 ≈ 6.12, rounded to 2 decimal places, δ ≤ ½ · 10⁻² = 0.005
6.125753 ≈ 6.126, rounded to 3 decimal places, δ ≤ ½ · 10⁻³ = 0.0005
6.125753 ≈ 6.1258, rounded to 4 decimal places, δ ≤ ½ · 10⁻⁴ = 0.00005

Since we want 5 significant digits we go up to 6.1257; the first of the digits being left out is 5 and, since 7 is odd, the even digit rule gives 6.1258. Here the place value of 5 is 10⁻⁵, which implies n = 5, with

δ = |6.125753 − 6.1258| = 0.000047 ≤ ½(10^{−5+1}) = ½(10⁻⁴) = 0.00005
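For reference, step 3 above is the round-half-to-even convention that Python's built-in round() also applies on exact ties, so rounding to n significant digits can be sketched as follows (the helper round_sig is ours; binary floating point can perturb apparent ties):

```python
import math

def round_sig(a, n):
    """Round a to n significant digits; Python's round() uses the
    round-half-to-even (even digit) rule on exact ties."""
    if a == 0:
        return 0.0
    m = math.floor(math.log10(abs(a)))   # exponent of the leading digit
    return round(a, -(m - n + 1))        # keep n significant digits

print(round_sig(6.125753, 2))   # 6.1    (first discarded digit 2 < 5)
print(round_sig(6.125753, 5))   # 6.1258 (discarded part 53 exceeds a half)
```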

Rounding errors are most dangerous when we have to perform many arithmetic operations.

Example 9 Approximate π = 3.14159265 using 22/7 and 355/113, correct to 2 and 4 decimal places respectively, and find the corresponding absolute and relative errors.

22/7 = 3.1428571    355/113 = 3.1415929

δ₁ = |22/7 − π| = 0.0012645    δ₂ = |355/113 − π| = 0.0000002668

ε₁ = δ₁/|π| = 4.025 × 10⁻⁴    ε₂ = δ₂/|π| = 8.49 × 10⁻⁸

1.4 Propagation of errors

Suppose a′ is a computed approximation of a number a (a′ ∈ Q). The initial error is a′ − a, while the difference δ₁ = f(a′) − f(a) is the corresponding approximation error. If f is replaced by a simpler function f₁ (say a power series representation of f), then δ₂ = f₁(a′) − f(a′) is the truncation error. But in calculations we obtain, say, f₂(a′) instead of f₁(a′), i.e. a wrongly computed value of a wrong function of a wrong argument. The difference δ₃ = f₂(a′) − f₁(a′) is termed the rounding error. The total error is then the propagated error δ = f₂(a′) − f(a) = δ₁ + δ₂ + δ₃.
Example 10 Determine e^{1/3} with 4 decimal places.
We compute e^{0.3333} instead of e^{0.3333...}, with initial error

δ₁ = e^{0.3333} − e^{1/3} = e^{0.3333}(1 − e^{0.0000333...}) = −0.0000465196

Next, we compute eˣ from eˣ ≈ 1 + x + x²/2! + x³/3! + x⁴/4! for x = 0.3333, with truncation error

δ₂ = (1 + 0.3333 + 0.3333²/2! + 0.3333³/3! + 0.3333⁴/4!) − (1 + 0.3333 + 0.3333²/2! + 0.3333³/3! + 0.3333⁴/4! + · · ·)
= −(0.3333⁵/5! + 0.3333⁶/6! + · · ·) = −0.0000362750

Finally, the summation of the truncated series is done with rounded values, giving the result

1 + x + x²/2! + x³/3! + x⁴/4! = 1 + 0.3333 + 0.0555 + 0.0062 + 0.0005 = 1.3955

If instead we went up to 10 decimal places we would have 1.3955296304. Then

δ₃ = 1.3955 − 1.3955296304 = −0.0000296304

The total error is δ = δ₁ + δ₂ + δ₃ = −0.000112425 and

1.3955 − e^{1/3} = −0.000112425
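The decomposition of Example 10 can be reproduced with a short Python sketch (a minimal version under the same choices: argument rounded to 0.3333, series truncated at degree 4, terms rounded to 4 decimals; the variable names are ours):

```python
import math

a = 1/3         # exact argument
a1 = 0.3333     # rounded argument

f_a = math.exp(a)                                                   # f(a)
f_a1 = math.exp(a1)                                                 # f(a')
f1_a1 = sum(a1**n / math.factorial(n) for n in range(5))            # f1(a')
f2_a1 = sum(round(a1**n / math.factorial(n), 4) for n in range(5))  # f2(a')

d1 = f_a1 - f_a      # initial error      ≈ -4.65e-5
d2 = f1_a1 - f_a1    # truncation error   ≈ -3.63e-5
d3 = f2_a1 - f1_a1   # rounding error     ≈ -2.96e-5
print(d1 + d2 + d3, f2_a1 - f_a)   # both ≈ -1.12e-4
```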

Note:-
Investigations of error propagation are important in iterative processes and in computations where each value depends on its predecessors.

1.5 Instability

In some cases one may consider a small error negligible and want to suppress it, yet after some steps the accumulated error may have a fatal effect on the solution. Small changes in initial data may produce large changes in the final results. This property is known as ill-conditioning: in an ill-conditioned problem of computing an output value y from an input value x by y = g(x), when x is slightly perturbed to x̃, the result ỹ = g(x̃) is far from y.
A well-conditioned problem admits a stable algorithm for computing y = f(x): the output ỹ is the exact result ỹ = f(x̃) for a slightly perturbed input x̃ which is close to the input x. Thus if the algorithm is stable and the problem is well conditioned, the computed result ỹ is close to the exact y.
An algorithm is a systematic procedure that solves a problem. It is said to be stable if its output is the exact result of a slightly perturbed input.
Performance features that may be expected from a good numerical algorithm:
Accuracy:- this is related to errors: how accurate is the result going to be when a numerical algorithm is run with some particular input data.
Efficiency:- how fast can we solve a certain problem: the rate of convergence and the number of floating point operations required.
If an error stays at one point in an algorithm and does not aggregate further as the calculation continues, then it is considered a numerically stable error. This happens when the error causes only a very small variation in the formula result. If the opposite occurs, and the error propagates and grows as the calculation continues, then it is considered numerically unstable.
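As a small illustration of such sensitivity (our example, not from the notes), subtracting nearly equal numbers turns a tiny relative perturbation of the input into a large relative change of the output:

```python
# Catastrophic cancellation: computing g(x) = x - 0.9999999 near x = 1
# amplifies a relative input perturbation of 1e-7 by a factor of ~1e7.
def g(x):
    return x - 0.9999999

y = g(1.0)                 # ≈ 1e-7
y_tilde = g(1.0000001)     # ≈ 2e-7
print(abs(y_tilde - y) / abs(y))   # relative output change ≈ 1 (i.e. 100%)
```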

Chapter 2

Non-linear Equations

Locating roots

Consider an equation of the form f(x) = 0, where f(x) is assumed to be a continuously differentiable function of sufficiently high order. We want to find the solutions (roots) of the equation f(x) = 0; that is, we find numbers a such that f(a) = 0. Such a number a is a point at which the graph of the function intersects the x-axis.
The equation can be algebraic, such as a polynomial or rational equation, or transcendental: trigonometric, exponential, logarithmic, nth root, etc.

Example 11 f(x) = eˣ − 3 cos x = 0

This nonlinear equation, for example, has infinitely many solutions, but it is difficult to find them exactly.
In most cases we have to use approximate solutions, i.e. find a such that |f(a)| < ε, where ε is a given tolerance; this gives an interval where the root is located but not the exact root. Note that under this criterion the equations f(x) = 0 and M·f(x) = 0 (where M is a constant) do not have the same approximate roots.

2.1 Bisection method

This is a simple but slowly convergent method for determining the roots of a continuous function f(x). It is based on the Intermediate Value Theorem: if f is continuous on an interval [a₀, b₀] and has opposite signs at the end points, say f(a₀) < 0 and f(b₀) > 0, then f has a root in [a₀, b₀].
Compute the midpoint x₀ = (a₀ + b₀)/2.

If f(x₀) = 0, then x₀ is a solution.
If f(x₀) < 0, then take a₁ = x₀ and b₁ = b₀, i.e. [a₁, b₁] = [x₀, b₀].
If f(x₀) > 0, then take b₁ = x₀ and a₁ = a₀, i.e. [a₁, b₁] = [a₀, x₀].

Continue the process until f(xₙ) = 0 or the interval is small enough: after n bisections the interval is reduced by a factor of 2ⁿ, i.e. ε < |b − a|/2ⁿ.
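A minimal Python sketch of this loop (the sign test f(a)·f(m) < 0 reproduces the case analysis above regardless of which endpoint is negative; here we stop on the interval-halving bound):

```python
def bisect(f, a, b, tol=1e-4, max_iter=100):
    """Bisection for f continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(max_iter):
        m = (a + b) / 2
        if f(m) == 0 or (b - a) / 2 < tol:
            return m
        if f(a) * f(m) < 0:   # the root lies in the left half
            b = m
        else:                 # the root lies in the right half
            a = m
    return (a + b) / 2

import math
print(bisect(lambda x: math.exp(x) - 3*x, 0, 1))   # ≈ 0.619 (Example 12)
```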

Advantage:- it requires little thought and calculation, and it guarantees convergence and accuracy (no error testing is necessary).
Disadvantage:- the need for two initial guesses, one on each side of the root, is a difficulty if we have no idea where the root lies, or if there are roots close to each other. Roots of even multiplicity cannot be bracketed, since f does not change sign there. Round-off error can cause problems as xₙ approaches the actual root a.

Example 12 Find the roots of eˣ − 3x = 0 with absolute error δ = 0.0001

f(0) = 1 > 0, f(1) = e − 3 < 0 and f(2) = e² − 6 ≈ 1.39 > 0

Then there are roots on [0, 1] and [1, 2].

1. Let us take the first root, between [a₀, b₀] = [0, 1], with x₁ = (0 + 1)/2 = 0.5

f(x₁) = f(0.5) = 0.14872 > 0

Take [a₁, b₁] = [0.5, 1] with x₂ = (0.5 + 1)/2 = 0.75

f(x₂) = f(0.75) = −0.132 < 0

Take [a₂, b₂] = [0.5, 0.75] with x₃ = (0.5 + 0.75)/2 = 0.625

f(x₃) = f(0.625) = −0.00675 < 0

Continuing the process we get

x₁₄ = 0.61902 and [a₁₄, b₁₄] = [0.61902, 0.61914]

Take a ≈ x₁₅ = 0.61908 with f(x₁₅) = −0.0000214

2. For the second root, between [a₀, b₀] = [1, 2], with x₁ = (1 + 2)/2 = 1.5, we get f(x₁) = f(1.5) < 0

Then [a₁, b₁] = [1.5, 2], and so on; similarly [a₁₄, b₁₄] = [1.5121, 1.5122]. Continuing the process we get

a ≈ x₁₅ = 1.51215 with f(x₁₅) = 0.0000237

Example 13 Find the root of f(x) = x² − 3 on the interval [1, 2] with absolute error δ = 0.01

f(1) = −2 < 0 and f(2) = 1 > 0

Let [a₀, b₀] = [1, 2] with x₁ = (1 + 2)/2 = 1.5; f(x₁) = f(1.5) = (1.5)² − 3 = −0.75 < 0
Let [a₁, b₁] = [1.5, 2] with x₂ = (1.5 + 2)/2 = 1.75; f(x₂) = f(1.75) = 3.0625 − 3 = 0.0625 > 0
Let [a₂, b₂] = [1.5, 1.75] with x₃ = (1.5 + 1.75)/2 = 1.625; f(x₃) = f(1.625) = −0.359 < 0
Let [a₃, b₃] = [1.625, 1.75] with x₄ = (1.625 + 1.75)/2 = 1.6875; f(x₄) = f(1.6875) = −0.1523 < 0
Let [a₄, b₄] = [1.6875, 1.75] with x₅ = (1.6875 + 1.75)/2 = 1.7188; f(x₅) = f(1.7188) = −0.0457 < 0
Let [a₅, b₅] = [1.7188, 1.75] with x₆ = (1.7188 + 1.75)/2 = 1.7344; f(x₆) = f(1.7344) = 0.0081 > 0
Let [a₆, b₆] = [1.7188, 1.7344] with x₇ = (1.7188 + 1.7344)/2 = 1.7266; f(x₇) = −0.0189 < 0
Here |f(x₇)| = |f(1.7266)| = |−0.0189| = 0.0189, while |f(x₆)| = |f(1.7344)| = |0.0081| < 0.01.
Thus the root is best approximated by a = 1.7344.

2.2 The method of false position

This method is always convergent for a continuous function f.
It requires two initial guesses a₀, b₀ with f(a₀)f(b₀) < 0. The end point of the new interval is calculated as a weighted average defined on the previous interval, i.e.

w₁ = (f(b₀)a₀ − f(a₀)b₀)/(f(b₀) − f(a₀)), provided f(b₀) and f(a₀) have opposite signs.

In general the formula is given as

wₙ = (f(bₙ₋₁)aₙ₋₁ − f(aₙ₋₁)bₙ₋₁)/(f(bₙ₋₁) − f(aₙ₋₁)) for n = 1, 2, . . . until the convergence criterion is satisfied.

Then if f(aₙ)f(wₙ) ≤ 0, set aₙ₊₁ = aₙ, bₙ₊₁ = wₙ; otherwise aₙ₊₁ = wₙ, bₙ₊₁ = bₙ.
Algorithm
Given a function f(x) continuous on the interval [a, b] and satisfying the criterion f(a)f(b) < 0:

1. Set a₀ = a, b₀ = b

2. For n = 1, 2, . . . until the convergence criterion is satisfied, compute

wₙ = (f(bₙ₋₁)aₙ₋₁ − f(aₙ₋₁)bₙ₋₁)/(f(bₙ₋₁) − f(aₙ₋₁))

If f(aₙ₋₁)f(wₙ) ≤ 0, then set aₙ = aₙ₋₁; bₙ = wₙ
Otherwise aₙ = wₙ; bₙ = bₙ₋₁

Example 14 Solve 2x³ − (5/2)x − 5 = 0 on the interval [1, 2] using the false position method with absolute error ε < 10⁻³

f(x) = 2x³ − (5/2)x − 5
f(1) = 2 − 5/2 − 5 = −11/2
f(2) = 16 − 5 − 5 = 6
f(1)f(2) = (−11/2)(6) = −33 < 0

Then we look for the root in [1, 2].

Let a₀ = 1, b₀ = 2.

w₁ = (f(b₀)a₀ − f(a₀)b₀)/(f(b₀) − f(a₀)) = (f(2)·1 − f(1)·2)/(f(2) − f(1)) = 1.47826

Then f(w₁) = f(1.47826) = −2.23489761
Next f(1)f(w₁) = (−11/2)(−2.23489761) > 0

Then set a₁ = w₁ and b₁ = b₀ = 2

w₂ = (f(2)(1.47826) − f(1.47826)·2)/(f(2) − f(1.47826)) = 1.6198574

Then f(w₂) = f(1.6198574) = −0.548832
f(w₁)f(w₂) = f(1.47826)f(1.6198574) = (−2.23489761)(−0.548832) > 0

Next set a₂ = w₂ and b₂ = b₁ = b₀ = 2

w₃ = (f(2)w₂ − f(w₂)·2)/(f(2) − f(w₂)) = 1.65171574

Then f(w₃) = f(1.65171574) = −0.116983
f(w₂)f(w₃) = (−0.548832)(−0.116983) > 0

Then set a₃ = w₃ and b₃ = 2

w₄ = (f(2)w₃ − f(w₃)·2)/(f(2) − f(w₃)) = 1.658376
f(w₄) = −0.0241659

f(w₃)f(w₄) > 0; continuing in this manner,

w₇ = 1.660085
f(w₇) = −0.0002089
|f(1.660085)| < 0.001

Hence one of the roots is approximately 1.660085.


Exercise 1 Approximate √2 using f(x) = x² − 2 = 0 on [0, 2] with error δ < 0.01

Solution:-

[0, 2]: f(0) = −2 < 0 and f(2) = 2 > 0
x₁ = (0·(2) − 2·(−2))/(2 − (−2)) = 1, f(1) = −1 < 0

[1, 2]: x₂ = (1·(2) − 2·(−1))/(2 − (−1)) = 4/3, f(4/3) = −2/9 < 0

[4/3, 2]: x₃ = ((4/3)(2) − 2(−2/9))/(2 − (−2/9)) = 28/20 = 1.4, f(1.4) = −0.04 < 0

[1.4, 2]: x₄ = (1.4(2) − 2(−0.04))/(2 − (−0.04)) = 1.412, f(1.412) = −0.006256 < 0

Example 15 Find the root of 5 sin²x − 8 cos⁵x = 0 on [1/2, 3/2] by the false position method with absolute error δ = 10⁻³

2.3 The secant method

This method is always convergent for a continuous function f.
It requires two initial guesses x₀ and x₁ with f(x₀)f(x₁) < 0.
Draw a chord from A(x₀, f(x₀)) to B(x₁, f(x₁)). This chord intersects the x-axis at

x₂ = x₁ − f(x₁) · (x₁ − x₀)/(f(x₁) − f(x₀))

In general,

xₙ = xₙ₋₁ − f(xₙ₋₁) · (xₙ₋₁ − xₙ₋₂)/(f(xₙ₋₁) − f(xₙ₋₂))

If f(xₙ) = 0, then xₙ is a root.
If f(xₙ) < 0, take the interval [aₙ₊₁, bₙ₊₁] = [xₙ, bₙ].
If f(xₙ) > 0, take the interval [aₙ₊₁, bₙ₊₁] = [aₙ, xₙ].
Continue the process until f(xₙ) = 0 or until |f(xₙ)| < δ.
Algorithm for the secant method
To determine a root of f(x) = 0, given two values x₀ and x₁ near the root:
If |f(x₀)| < |f(x₁)|, then swap x₀ with x₁.
Repeat
set x₂ = x₁ − f(x₁) · (x₁ − x₀)/(f(x₁) − f(x₀))
set x₀ = x₁
set x₁ = x₂
until |f(x₂)| < tolerance value
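A minimal Python sketch of this algorithm (the initial swap keeps the point with the smaller residual as x₁):

```python
def secant(f, x0, x1, tol=1e-3, max_iter=100):
    """Secant method starting from two guesses near the root."""
    if abs(f(x0)) < abs(f(x1)):
        x0, x1 = x1, x0        # keep the better guess in x1
    for _ in range(max_iter):
        # x-intercept of the secant line through (x0, f(x0)) and (x1, f(x1))
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x**3 + x - 1, 0.5, 1))   # ≈ 0.682 (Example 16)
```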

Example 16 Find an approximation of the root of x³ + x − 1 = 0 with δ < 0.001 on [0.5, 1], where f(0.5) = −0.375 < 0 and f(1) = 1 > 0

x₂ = x₁ − f(x₁)(x₁ − x₀)/(f(x₁) − f(x₀)) = 1 − 1·(1 − 0.5)/(1 − (−0.375)) = 0.64, f(0.64) ≈ −0.098

Next [x₁, x₂] = [0.64, 1]:
x₃ = 0.64 − f(0.64)(0.64 − 1)/(−0.098 − 1) = 0.672, f(0.672) ≈ −0.0245

Next [0.672, 1]:
x₄ = 0.672 − f(0.672)(0.672 − 1)/(−0.0245 − 1) = 0.68, f(0.68) ≈ −0.00557

Next [0.68, 1]:
x₅ = 0.68 − f(0.68)(0.68 − 1)/(−0.00557 − 1) = 0.682, f(0.682) ≈ −0.000785

and so on.

Exercise 2 Approximate √2 using f(x) = x² − 2 = 0 on [0, 2] with error δ < 0.01

Solution:-

[0, 2]: f(0) = −2 < 0 and f(2) = 2 > 0

x₂ = 2 − f(2)·(2 − 0)/(f(2) − f(0)) = 2 − 2·(2 − 0)/(2 − (−2)) = 1, f(1) = −1 < 0

[1, 2]: x₃ = 1 − (−1)·(1 − 2)/(−1 − 2) = 4/3, f(4/3) = −2/9 < 0

and so on.

2.4 Iteration method

We choose an arbitrary initial guess x₀ (usually by rough graphical methods) and try to improve the guess by repeating (iterating) a few steps again and again, so that we approach the root a.
We write the equation f(x) = 0 in the form x = g(x), where g is defined in some interval I containing a and the range of g lies in I for x ∈ I.
Compute successively

x₁ = g(x₀) (2.1)
x₂ = g(x₁) (2.2)
x₃ = g(x₂) (2.3)
⋮ (2.4)
xₙ₊₁ = g(xₙ) (2.5)

This sequence may converge to the actual root a (depending on the choice of g and x₀), with limₙ→∞(xₙ − a) = 0, i.e. ∀ε > 0 ∃M such that |xₙ − a| < ε for n ≥ M; or the values xₙ may move away from the root a (divergence).
A solution of the equation x = g(x) is called a fixed point of g.
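A minimal Python sketch of the iteration (the stopping rule on successive iterates is our choice; a divergent g or x₀ simply exhausts the iteration budget):

```python
def fixed_point(g, x0, tol=1e-3, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence: divergent g or poor x0?")

# Example 17 below: x^2 - 5x + 4 = 0 rewritten as x = (x^2 + 4)/5
print(fixed_point(lambda x: (x**2 + 4) / 5, 2))   # ≈ 1.00 (the root a1)
```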

Example 17 The equation x² − 5x + 4 = 0 has two roots, a₁ = 1 and a₂ = 4.

It can be written as x = (x² + 4)/5 or x = √(5x − 4) or x = 4/(5 − x).

Let's take x₀ = 2 and x = g(x) = (x² + 4)/5. Then

x₁ = g(x₀) = (2² + 4)/5 = 8/5 = 1.6
x₂ = g(1.6) = ((1.6)² + 4)/5 = 1.312
x₃ = g(1.312) = ((1.312)² + 4)/5 = 1.144
x₄ = g(1.144) = ((1.144)² + 4)/5 = 1.062
x₅ = 1.025, x₆ = 1.010, x₇ = 1.004

which converges to the root a₁ = 1.

To obtain the second root a₂ = 4, if we choose x₀ = 5 then

x₁ = (5² + 4)/5 = 5.8
x₂ = ((5.8)² + 4)/5 = 7.528
x₃ = 12.134

which diverges from 4.


[Figure: the curves y = (x² + 4)/5 and y = x; their intersections are the fixed points x = 1 and x = 4.]

Note 1 From the graph, convergence depends on the fact that in a neighborhood I of a solution the slope of the curve y = g(x) is less than the slope of y = x (i.e. |g′(x)| < 1 for x ∈ I).

Exercise 3 Find solutions of f(x) = x² − 3x + 1 = 0 using x = g(x) = (x² + 1)/3 and x₀ = 1, 2, 3

Theorem 2.4.1 (sufficient condition for convergence) Let x = a be a solution of x = g(x) and suppose that g has a continuous derivative in some interval I containing a. If |g′(x)| ≤ α < 1 in I, then the iteration process xₙ₊₁ = g(xₙ) converges for any x₀ ∈ I.

Proof By the Mean Value Theorem of differential calculus, there is a t between x and a such that

g(x) − g(a) = g′(t)(x − a) ∀x ∈ I

Since g(a) = a and g(xₙ) = xₙ₊₁ for n = 0, 1, 2, . . . we have

|xₙ − a| = |g(xₙ₋₁) − g(a)| = |g′(t)||xₙ₋₁ − a|
≤ α|xₙ₋₁ − a|
≤ α²|xₙ₋₂ − a|
⋮
≤ αⁿ|x₀ − a|

Since α < 1, αⁿ → 0 as n → ∞, and hence |xₙ − a| → 0 as n → ∞, i.e. xₙ → a.

Remark 2.4.2

1. If g′(x) (or α) is close to 0 in I, the iteration converges quickly; if g′(x) (or α) is close to 1 in I, it converges slowly.

2. The converse of the theorem is not true.

Counter example:- For f(x) = x² − 5x + 4 = 0, take x = g(x) = x² − 4x + 4 with x₀ = 0.
Then g′(x) = 2x − 4, g′(0) = −4, |g′(0)| = 4 > 1, yet x₁ = g(x₀) = g(0) = 4 converges to a₂ = 4.
On the other hand, for f(x) = x² − 5x + 4 = 0 with x = g(x) = (x² + 4)/5 we have g′(x) = 2x/5, and

|g′(x)| < 1 ⇔ −1 < 2x/5 < 1 ⇔ −5/2 < x < 5/2, i.e. I = (−5/2, 5/2)

Thus x₀ = 2 ∈ I with g′(2) = 4/5 close to 1, so xₙ → a₁ = 1 slowly, whereas x₀ = 5 ∉ I need not converge to a₂ = 4.

Example 18 Find the roots of 4x − 4 sin x = 1

Write f(x) = 0 as x = sin x + 0.25 = g(x) with |g′(x)| = |cos x| < 1, say for 1 < x < 1.3. Choosing x₀ = 1.2 we have

x₁ = sin(1.2) + 0.25 = 1.182
x₂ = sin(1.182) + 0.25 = 1.175
x₃ = sin(1.175) + 0.25 = 1.173
x₄ = sin(1.173) + 0.25 = 1.172

Thus a = 1.172 (correct to 3D) with an error ε ≤ ½ · 10⁻³ = 0.0005, i.e. a = 1.172 ± 0.0005.

Example 19 Find the roots of f(x) = eˣ − 3x = 0

f(x) = 0 ⇔ eˣ = 3x ⇒ x = eˣ/3 = g(x), with |g′(x)| = |eˣ/3| < 1 ⇒ x < ln 3 ≈ 1.1.
To find a₁ take x₀ = 0; then

x₁ = 0.3̇
x₂ = 0.465
x₃ = 0.53
⋮
x₇ = 0.610

But a₂ ∉ I; suppose x₀ = 2:

x₁ = 2.46
x₂ = 3.91
x₃ = 16.7

which diverges.
So to obtain a₂, take x = g(x) = 4x − eˣ with g′(x) = 4 − eˣ.

|g′(x)| = |4 − eˣ| < 1 ⇔ −1 < 4 − eˣ < 1 ⇔ 3 < eˣ < 5 ⇔ ln 3 < x < ln 5, i.e. x ∈ (1.1, 1.6)

Then take x₀ = 1.5:

x₁ = 4(1.5) − e^{1.5} = 1.52
x₂ = 4(1.52) − e^{1.52} = 1.51
x₃ = 4(1.51) − e^{1.51} = 1.513
x₄ = 4(1.513) − e^{1.513} = 1.512

Hence the iteration converges, to a₂ ≈ 1.512.

Exercise 4 Find solutions to the following by the iteration method:

1. f(x) = eˣ − 3x = 0 with x = g(x) = eˣ − 2x, x₀ = 1, 2, . . .

2. f(x) = x³ + x − 1 = 0 with x = g₁(x) = 1/(1 + x²) and x = g₂(x) = 1 − x³, x₀ = 1

2.5 The Newton Raphson method

• This method is simple and converges quickly.

• It solves an equation f(x) = 0 where f has a continuous derivative f′.

• We approximate the graph of f by appropriate tangents.

Let x₀ be an initial guess, and draw the line tangent to the graph of f at the point (x₀, f(x₀)).
Let x₁ be the point of intersection of the tangent line with the x-axis. The slope of the tangent to f at x₀ is

tan β = f′(x₀) = f(x₀)/(x₀ − x₁)  ⟹  x₁ = x₀ − f(x₀)/f′(x₀)

Then we continue the process by computing

x₂ = x₁ − f(x₁)/f′(x₁), and in general xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ), until say |f(xₙ)| < ε for a given tolerance ε.
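A minimal Python sketch of this update (with the derivative supplied explicitly, as in the examples that follow):

```python
import math

def newton(f, df, x0, tol=1e-4, max_iter=50):
    """Newton-Raphson: iterate x - f(x)/df(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        x = x - f(x) / df(x)   # x-intercept of the tangent at (x, f(x))
    return x

f = lambda x: math.exp(x) - 3*x
df = lambda x: math.exp(x) - 3
print(newton(f, df, 0))   # ≈ 0.619 (Example 20)
print(newton(f, df, 2))   # ≈ 1.512
```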

Remark 2.5.1 If we expand f(x) in a Taylor series about xₙ and evaluate at xₙ₊₁, we get

f(xₙ₊₁) = f(xₙ) + f′(xₙ)(xₙ₊₁ − xₙ) + f″(xₙ)(xₙ₊₁ − xₙ)²/2! + · · ·

If xₙ₊₁ is close to the actual root a, then f(xₙ₊₁) ≈ 0 and xₙ ≈ xₙ₊₁, so that (xₙ₊₁ − xₙ)² and all higher powers can be neglected, which gives

0 = f(xₙ) + f′(xₙ)(xₙ₊₁ − xₙ)

⟹ xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)
Example 20 Find the roots of f(x) = eˣ − 3x = 0 starting from x₀ = 0 and x₀ = 2

Solution:-

f′(x) = eˣ − 3

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (e^{xₙ} − 3xₙ)/(e^{xₙ} − 3) = e^{xₙ}(xₙ − 1)/(e^{xₙ} − 3)

For x₀ = 0: x₁ = e⁰(0 − 1)/(e⁰ − 3) = −1/(−2) = 0.5
x₂ = e^{0.5}(0.5 − 1)/(e^{0.5} − 3) = 0.61
x₃ = 0.619

For x₀ = 2: x₁ = e²(2 − 1)/(e² − 3) = 1.6835
x₂ = 1.5435
x₃ = 1.5134
x₄ = 1.512

Example 21 Find the root of f(x) = x − 2 sin x = 0 near x₀ = 2

Solution:-

f′(x) = 1 − 2 cos x (2.6)

xₙ₊₁ = xₙ − (xₙ − 2 sin xₙ)/(1 − 2 cos xₙ) = 2(sin xₙ − xₙ cos xₙ)/(1 − 2 cos xₙ) (2.7)

x₁ = 2(sin 2 − 2 cos 2)/(1 − 2 cos 2) = 1.901 (2.8)

x₂ = 1.896 (2.9)

Exercise 5

1. Approximate √2 using the Newton Raphson method

2. Find a root of f(x) = x³ + x − 1 = 0 from x₀ = 1
