
Ch. 2 - Locating Roots of Equations

2.1 Introduction

One of the important problems in scientific work is to find the roots of equations of
the form f(x) = 0.
Let f be a real- or complex-valued function of a real or complex variable. A number
r, real or complex, for which f(r) = 0 is called a root of that equation, or a zero of f.
For example, the function
f(x) = 3x² − 8x + 4
has, by the quadratic formula, the zeros 2/3 and 2.
For another example, the function
f(x) = cos(3x) − cos(7x)
has infinitely many roots: every integer multiple of π/5 and of π/2 (including x = 0).
For general nonlinear equations we cannot find the exact (true) root, so we look for
an approximate root, that is, a value x* for which the function is very near zero,
i.e. f(x*) ≈ 0.

Why is locating roots important?

Consider the following engineering problem whose solution is the root of an
equation. In a certain electrical circuit, the voltage V and current I are related by
two equations of the form
I = a(e^(bV) − 1),
c = dI + V
in which a, b, c, and d are constants. Combining the two equations by eliminating I
between them, we obtain
c = ad(e^(bV) − 1) + V
For example, suppose the constants are such that the equation becomes
14.3(e^(2V) − 1) + V − 12 = 0
and its solution is required. (It turns out that V ≈ 0.299 in this case.)
Hundreds of methods are available for locating zeros of functions. In this chapter we
will introduce four basic methods: the bisection method, Newton's method, the
secant method, and the fixed-point (functional) iteration method.

2.2 Bisection Method

Assume that f(x) is continuous on a given interval [a, b] and that it also satisfies
f(a)f(b) < 0, with f(a) ≠ 0 and f(b) ≠ 0. By the intermediate value theorem, the
function f(x) has at least one root in [a, b]. We assume that there is only one root
of the equation f(x) = 0 in the interval [a, b].
The Bisection Algorithm:

Step 1: Input a, b, f(x), ε, and the maximum number of iterations imax.
Step 2: For i = 1 to imax do the following:
- Compute ri = (a + b)/2.
- If f(ri) = 0, then ri is the root; stop.
- Otherwise, if |f(ri)| ≤ ε, then ri is the approximate root; stop.
- Otherwise, if f(a) * f(ri) < 0, then r ∈ [a, ri]; set b = ri and continue looping.
  Else r ∈ [ri, b]; set a = ri and continue looping.
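The algorithm above can be sketched in Python as follows (a minimal illustration; the function name `bisection` and its signature are my own choices, not from the text):

```python
import math

def bisection(f, a, b, eps=1e-5, imax=100):
    """Bisection method: assumes f is continuous and f(a)*f(b) < 0 on [a, b]."""
    if f(a) * f(b) >= 0:
        raise ValueError("f must change sign on [a, b]")
    for i in range(imax):
        r = (a + b) / 2.0          # midpoint of the current bracket
        fr = f(r)
        if fr == 0 or abs(fr) <= eps:
            return r               # exact root, or |f(r)| within tolerance
        if f(a) * fr < 0:
            b = r                  # root lies in [a, r]
        else:
            a = r                  # root lies in [r, b]
    return (a + b) / 2.0

# Example 1 below: f(x) = 2 - e^x on [0, 1]; the true root is ln 2.
root = bisection(lambda x: 2 - math.exp(x), 0.0, 1.0, eps=1e-5)
```

Each iteration halves the bracketing interval, which is why the method is slow but extremely reliable.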

EXAMPLE 1
Let f(x) = 2 − e^x, take the original interval to be [a, b] = [0, 1], and ε = 10^−5.
Then the first several steps of the computation are as follows:
- f(0) = 1, f(1) = −0.7183 → r ∈ [0, 1] and r1 = (0 + 1)/2 = 0.5;
  |f(0.5)| = 0.3513 > 10^−5 with f(0.5) > 0, so r ∈ [0.5, 1].

- r2 = (0.5 + 1)/2 = 0.75, |f(0.75)| = 0.1170 > 10^−5 → r ∈ [0.5, 0.75].

- r3 = (0.5 + 0.75)/2 = 0.625, |f(0.625)| = 0.131754 > 10^−5 → r ∈ [0.625, 0.75].

We continue the process until we have the root localized to within an interval of
length as small as we want, or until |f(ri)| ≤ 10^−5.
EXAMPLE 2

Suppose that we have the function f(x) = x³ − 2 sin(x) on [0.5, 2] and we seek a zero
in the specified interval.
The computed results for the iterative steps of the bisection method for f(x) are:

i     ri           f(ri)             error = |b − a|/2
1     1.25          5.52 × 10^−2     0.75
2     0.875        −0.865            0.375
3     1.0625       −0.548            0.188
4     1.15625      −0.285            9.38 × 10^−2
5     1.203125     −0.125            4.69 × 10^−2
...
20    1.2361827    −4.88 × 10^−6     1.43 × 10^−6
21    1.2361834    −2.15 × 10^−6     7.15 × 10^−7
Convergence Analysis

Now let us investigate the accuracy with which the bisection method determines a
root of a function. Suppose that f is a continuous function that takes values of
opposite sign at the ends of an interval [a0, b0]. Then there is a root r in [a0, b0], and
if we use the midpoint
r0 = (a0 + b0)/2 as our estimate of r, we have |r − r0| ≤ (b0 − a0)/2

If the bisection algorithm is now applied and the computed midpoints are
r0, r1, r2, . . . , then

|r − rn| ≤ (bn − an)/2,  n ≥ 0

Since the width of the interval is divided by 2 at each step, we conclude that

|r − rn| ≤ (b0 − a0)/2^(n+1)


BISECTION METHOD THEOREM
If the bisection algorithm is applied to a continuous function f on an interval
[a, b], where f(a) f(b) < 0, then, after n steps, an approximate root will have been
computed with error at most (b − a)/2^(n+1).
If an error tolerance has been prescribed in advance, it is possible to determine the
number of steps required in the bisection method. Suppose that we want
|r − rn| < ε. Then (b − a)/2^(n+1) < ε. Taking natural logarithms, we obtain

n > [ln(b − a) − ln(2ε)]/ln(2)

EXAMPLE 3: How many steps of the bisection algorithm are needed to compute a
root of f(x) = 0 in Example 2 with ε = 10^−10?
Solution: n > [ln(b − a) − ln(2ε)]/ln(2) → n > [ln(2 − 0.5) − ln(2 × 10^−10)]/ln(2)

n > 32.8, so we take n = 33 (any n ≥ 33 works).
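The step-count bound is easy to evaluate mechanically; the short Python sketch below (the helper name `bisection_steps` is my own) computes the smallest admissible n for Example 3:

```python
import math

def bisection_steps(a, b, eps):
    """Smallest integer n with (b - a)/2**(n + 1) < eps,
    i.e. n > [ln(b - a) - ln(2*eps)] / ln 2."""
    bound = (math.log(b - a) - math.log(2 * eps)) / math.log(2)
    return math.floor(bound) + 1   # strict inequality, so floor + 1

n = bisection_steps(0.5, 2.0, 1e-10)   # Example 3: bound is about 32.8
```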


2.3 Newton’s Method

The procedure known as Newton’s method is also called the Newton-Raphson


iteration. This method is one of the more important procedures in numerical
analysis, and its applicability extends to differential equations and integral
equations. Here it is being applied to a single equation of the form f(x) = 0.

The process of this method is as follows:

Given f(x) = 0, where f(x) is differentiable, start with any initial point x0 and
generate the sequence {xn}, n = 0, 1, 2, . . . , using Newton’s formula

xn+1 = xn − f(xn)/f'(xn),   f'(xn) ≠ 0

If the sequence converges, then it converges quadratically to the exact root r,
that is,

lim (n→∞) xn = r

Converging quadratically means en+1 = C en² for some constant C, where en = r − xn.

EXAMPLE 4: Consider the equation sin(x) + x² − 1 = 0. Start with x0 = 1.

f'(x) = cos(x) + 2x

xn+1 = xn − [sin(xn) + xn² − 1] / [cos(xn) + 2xn]

n = 0 →

x1 = x0 − [sin(x0) + x0² − 1] / [cos(x0) + 2x0] = 1 − [sin(1) + 1 − 1] / [cos(1) + 2] = 0.668752
Then the iterations of Newton’s method give

n    xn         |f(xn)|
1    0.668752   6.7236 × 10^−2
2    0.637068   6.967696 × 10^−4
3    0.636733   7.254549 × 10^−7

Recall that the exact solution is r ≈ 0.636733. Obviously, Newton’s method is
much faster than the bisection method.

Now we derive Newton’s method analytically. The Taylor polynomial of
degree n = 1 with remainder is given by

f(x) = f(xn) + f'(xn)(x − xn) + f''(τ)(x − xn)²/2!

where τ lies somewhere between xn and x. Substituting x = r into the above equation,
we get

0 = f(r) = f(xn) + f'(xn)(r − xn) + f''(τ)(r − xn)²/2!

When xn is very close to r, the term f''(τ)(r − xn)²/2! is small compared to the
other two terms on the RHS and can therefore be neglected. Then

f(xn) + f'(xn)(r − xn) ≈ 0

Solving for r and using the notation xn+1 for this new approximate solution, we get

xn+1 = xn − f(xn)/f'(xn)

with an error of order two.

EXAMPLE 5: If f(x) = x³ − x + 1 and x0 = 1, what are x1, x2, and x3 in the Newton
iteration?
Solution: From the basic formula, x1 = x0 − f(x0)/f'(x0). Now f'(x) = 3x² − 1, and so
f'(1) = 2. Also, we find f(1) = 1. Hence, we have

x1 = 1 − 1/2 = 0.5,  |f(0.5)| = 0.625

Similarly, f(0.5) = 0.625 and f'(0.5) = −0.25, so

x2 = x1 − f(x1)/f'(x1) = 0.5 − (0.625/−0.25) = 3,  |f(3)| = 25

x3 = x2 − f(x2)/f'(x2) = 3 − 25/26 = 2.038462,  |f(2.038462)| = 7.432015

Newton’s Method Algorithm

- Input f(x), f'(x), x0, nmax, ε.
- For n = 0 to nmax do the following:
  - Compute xn+1 = xn − f(xn)/f'(xn), assuming f'(xn) ≠ 0.
  - If |f(xn+1)| < ε (or if |xn+1 − xn| / |xn+1| < ε), then output “convergence”,
    print xn+1 as the approximate root, and STOP.
  - Otherwise, update xn = xn+1 and continue looping.
- End for.
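A minimal Python sketch of this algorithm (the names `newton` and `fp` are my own choices, not from the text):

```python
def newton(f, fp, x0, eps=1e-10, nmax=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(nmax):
        d = fp(x)
        if d == 0:
            raise ZeroDivisionError("f'(x_n) = 0; Newton step undefined")
        x_new = x - f(x) / d
        if abs(f(x_new)) < eps:
            return x_new           # |f(x_{n+1})| within tolerance
        x = x_new
    return x

# Example 6 below: f(x) = x^3 - 2x^2 + x - 3, starting with x0 = 3.
f = lambda x: x**3 - 2*x**2 + x - 3
fp = lambda x: 3*x**2 - 4*x + 1
r = newton(f, fp, 3.0)
```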

EXAMPLE 6: Now we illustrate Newton’s method by locating a root of
x³ + x = 2x² + 3. We apply the method to the function f(x) = x³ − 2x² + x − 3,
starting with x0 = 3. Of course, f'(x) = 3x² − 4x + 1.

To see the rapid convergence of Newton’s method, we use arithmetic with


double the normal precision in the program and obtain the following results:
n    xn                    |f(xn)|
0    3.0                   9.0
1    2.4375                2.04
2    2.2130327224731445    0.256
3    2.1755549386143684    0.646 × 10^−2
4    2.1745601006550714    0.448 × 10^−5
5    2.1745594102932841    0.197 × 10^−11
Notice the doubling of the accuracy in f (x) (and also in x) until the maximum
precision of the computer is encountered.

To see why the accuracy roughly doubles, recall that en+1 ≈ c en² and suppose, for
simplicity, that c = 1. Suppose also that xn is an estimate of the root r that differs
from it by at most one unit in the kth decimal place. This means that
|r − xn| ≤ 10^−k
which implies that |r − xn+1| ≤ 10^−2k

In other words, xn+1 differs from r by at most one unit in the (2k)th decimal place.
So, xn+1 has approximately twice as many correct digits as xn!

The analysis of Newton’s method shows that the method converges quadratically (with
order two) provided f'(r) ≠ 0. If f'(r) = 0, then r is a root of both f and f'. Such
a root is called a multiple root of f; in this case it is at least a double root.
For a multiple root, Newton’s method converges only linearly. If we know in advance
that f has a root of multiplicity m, then we modify Newton’s formula to

xn+1 = xn − m f(xn)/f'(xn)

where m is the multiplicity of the root; this restores quadratic convergence.
For example, if f(x) = (x − 2)³, then m = 3, so the modified formula is
xn+1 = xn − 3 f(xn)/f'(xn).

EXAMPLE 7: Let f(x) = (x − 2)⁴.
(a) Use Newton’s method to compute x1, x2, x3, and x4, starting with x0 = 2.1.
(b) Is the sequence in part (a) converging quadratically or linearly?
Explain.
SOLUTION:
With f(x) = (x − 2)⁴ and f'(x) = 4(x − 2)³, Newton’s iteration reduces to

xn+1 = xn − f(xn)/f'(xn) = xn − (xn − 2)⁴ / [4(xn − 2)³] = xn − (xn − 2)/4

Starting with x0 = 2.1:

x1 = x0 − (x0 − 2)/4 = 2.1 − 0.1/4 = 2.075

x2 = x1 − (x1 − 2)/4 = 2.075 − 0.075/4 = 2.05625

x3 = x2 − (x2 − 2)/4 = 2.05625 − 0.05625/4 = 2.0421875

x4 = x3 − (x3 − 2)/4 = 2.0421875 − 0.0421875/4 = 2.031640625

(a) From observing the sequence, it is converging to α = 2:

|α − x1| = |2 − 2.075| = 0.075

|α − x2| = |2 − 2.05625| = 0.05625

|α − x3| = |2 − 2.0421875| = 0.0421875

|α − x4| = |2 − 2.031640625| = 0.031640625

Hence

|α − x2|/|α − x1| = 0.05625/0.075 = 0.75,
|α − x3|/|α − x2| = 0.0421875/0.05625 = 0.75,
|α − x4|/|α − x3| = 0.031640625/0.0421875 = 0.75.

(b) Since the error is reduced by the constant factor 0.75 at each step, the
convergence is linear.
Try the modified Newton formula xn+1 = xn − 4 f(xn)/f'(xn), which uses the
multiplicity m = 4.
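The contrast can be seen in a short Python sketch (my own illustration): the ordinary Newton step shrinks the error by the factor 0.75 per iteration, while the multiplicity-corrected step x − 4 f(x)/f'(x) lands on the root (up to rounding) in a single step for this f:

```python
f = lambda x: (x - 2)**4
fp = lambda x: 4 * (x - 2)**3

# Ordinary Newton: four steps from x0 = 2.1 (Example 7).
x = 2.1
for _ in range(4):
    x = x - f(x) / fp(x)       # each step is equivalent to x - (x - 2)/4
x_plain = x                     # approaches 2 only linearly

# Modified Newton with multiplicity m = 4: one step suffices here,
# since y - 4 f(y)/f'(y) = y - (y - 2) = 2 up to rounding.
y = 2.1
y_mod = y - 4 * f(y) / fp(y)
```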
𝒇′ (𝒙𝒏 )
2.4 Secant Method

We now consider a general-purpose procedure that converges almost as fast as


Newton’s method. This method mimics Newton’s method but avoids the calculation
of derivatives.
Recall that Newton’s iteration defines xn+1 in terms of xn via the formula

xn+1 = xn − f(xn)/f'(xn)    (1)

Since

f'(xn) = lim_(xn−1 → xn) [f(xn) − f(xn−1)] / (xn − xn−1),

we have

f'(xn) ≈ [f(xn) − f(xn−1)] / (xn − xn−1)    (2)

Replacing f'(xn) in formula (1) by this approximation, we obtain

xn+1 = xn − f(xn)(xn − xn−1) / [f(xn) − f(xn−1)]

Simplifying this formula, we obtain

xn+1 = [xn−1 f(xn) − xn f(xn−1)] / [f(xn) − f(xn−1)],   n = 1, 2, 3, . . .

This is called the secant formula.

The name of the method is taken from the fact that the right member of equation (2)
is the slope of a secant line to the graph of f (see the Figure below).

Notice that in the secant formula, xn+1 depends on two previous elements of the
sequence.
Now we summarize the method:
Given f(x) = 0, start with any two initial points x0 and x1 and generate the sequence
{xn}, n = 0, 1, 2, . . . , using the secant formula

xn+1 = [xn−1 f(xn) − xn f(xn−1)] / [f(xn) − f(xn−1)],   n = 1, 2, 3, . . .

If the sequence converges, then the convergence of the method to the exact root r is
superlinear, that is,

lim (n→∞) xn = r

Superlinear convergence here means en+1 = C en^((1+√5)/2), with (1 + √5)/2 ≈ 1.62.

EXAMPLE 8: Use the secant method to find an approximate root of the polynomial
p(x) = x⁵ + x³ + 3 with x0 = −1 and x1 = 1. What is x8?

Solution: The output from the computer program to the secant method is as follows.
n xn | p(xn)|
0 −1.0 1.0
1 1.0 5.0
2 −1.5 7.97
3 −1.05575 0.512
4 −1.11416 9.991 × 10−2
5 −1.10462 7.593 × 10−3
6 −1.10529 1.011 × 10−4
7 −1.10530 2.990 × 10−7
8 −1.10530 2.990 × 10−7

EXAMPLE 9: Consider the equation sin(x) + x² − 1 = 0. Let x0 = 0, x1 = 1. Then
the iterations of the secant method are given by

n    xn         |f(xn)|
2    0.543044   0.1882519
3    0.626623   0.0209309
4    0.637072   7.0508159 × 10^−4
5    0.636732   1.3520529 × 10^−6

Recall that the exact solution is r ≈ 0.636733. The secant method converges
somewhat more slowly than Newton’s method.
Convergence Analysis

One advantage of the secant method is that, after the first step, only one function
evaluation is needed per step, whereas Newton’s method needs two evaluations
(f and f'); and it is almost as rapidly convergent as Newton’s method.

Let the error at step n be en = r − xn, so en+1 = r − xn+1 and en−1 = r − xn−1.

Using a Taylor expansion, it can be shown that the secant method satisfies

en+1 = −(1/2) [f''(τn)/f'(γn)] en en−1 ≈ −(1/2) [f''(r)/f'(r)] en en−1

where τn and γn lie in the smallest interval that contains r, xn, and xn+1.
From this one concludes that

en+1 = C en^((1+√5)/2), with (1 + √5)/2 ≈ 1.62.

Hence, the order of convergence is superlinear.

2.5 Fixed-Point (Functional) Iteration

The idea of this method is to rewrite the equation f(x) = 0 in the form

x = g(x)

Then we seek a point where the curve of g(x) intersects the diagonal line y = x. A
value of x such that x = g(x) is a fixed point of g, because x is unchanged when g is
applied to it. Any fixed point of g(x) is a solution of f(x) = 0.

EXAMPLE 10: The equation x² − x − 2 = 0 can be written as

1. x = x² − 2
2. x = √(x + 2)
3. x = 1 + 2/x

and so on.
The fixed-point iteration method is to set up an iterative process of the form
xn+1 = g(xn),   n = 0, 1, 2, . . .

Theorem: Suppose g(x) is continuous on [a, b] and g(x) ∈ [a, b] for all x ∈ [a, b];
then g has at least one fixed point in [a, b]. In addition, if g'(x) exists on (a, b)
and |g'(x)| < 1 for all x ∈ (a, b), then there is a unique fixed point in [a, b].
(For the proof of this theorem, see any textbook on Numerical Analysis.)

Note that for a given nonlinear equation the iteration function is not unique.
Therefore, the crucial point in this method is to choose a good iteration function
g(x).

EXAMPLE 11: The equation x³ − 2x − 1 = 0 has a root in the interval
[1.5, 2]. Let us consider the following possibilities:

xn+1 = g1(xn) = (xn³ − 1)/2,
xn+1 = g2(xn) = 1/(xn² − 2),
xn+1 = g3(xn) = √((2xn + 1)/xn)

Starting with the initial solution x0 = 1.5 and applying the above formulas, the
numerical results are summarized in the following table:

n    xn+1 = g1(xn)   xn+1 = g2(xn)   xn+1 = g3(xn)
0    1.5             1.5             1.5
1    1.1875          4.0             1.632993
2    0.337280        0.071429        1.616284
3    −0.480816       −0.501279       -
4    −0.555579       −0.571847       -
5    −0.585745       −0.597731       -

We note that the first two formulas fail to converge to the root in [1.5, 2], while
the third one converges to it. In addition,

g1(x) = (x³ − 1)/2 → g1'(x) = (3/2)x², so |g1'(x)| > 1 on [1.5, 2]: diverges.

g2(x) = 1/(x² − 2) → g2'(x) = −2x/(x² − 2)², so |g2'(x)| > 1 on [1.5, 2]: diverges.

g3(x) = √((2x + 1)/x) → g3'(x) = −x^(−3/2) / (2√(2x + 1)), so |g3'(x)| < 1 on
[1.5, 2]: converges.
Therefore, if the sequence converges, then

lim (n→∞) xn+1 = lim (n→∞) g(xn) = g(lim (n→∞) xn), assuming g(x) is continuous.

Then r = g(r).
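Iterating the convergent choice g3 from Example 11 can be sketched in Python (a minimal illustration; the names are my own). The root in [1.5, 2] is (1 + √5)/2, since x³ − 2x − 1 = (x + 1)(x² − x − 1):

```python
import math

def fixed_point(g, x0, n):
    """Apply x_{k+1} = g(x_k) for n iterations and return the last iterate."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

g3 = lambda x: math.sqrt((2 * x + 1) / x)
x = fixed_point(g3, 1.5, 30)      # converges since |g3'| < 1 on [1.5, 2]
```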

EXAMPLE 12: Consider the equation sin(x) + x² − 1 = 0 on [0, 1]. Rewrite the
equation in the form x = g(x) = √(1 − sin(x)). Start with x0 = 0.8 and use the
fixed-point iteration
xn+1 = g(xn),   n = 0, 1, 2, . . .
x1 = g(x0) = g(0.8) = √(1 − sin(0.8)) = 0.5316426
|f(0.5316426)| = 0.2104062
x2 = g(x1), etc. The table below shows the results after 5 iterations.

n xn |𝑓(𝑥𝑛 )|
1 0.5316426 0.2104062
2 0.7021753 0.1389300
3 0.5950799 0.0853050
4 0.6628914 0.0548235
5 0.6201625 0.0342311
It is obvious that the functional iteration converges to the exact root
r ≈ 0.636733.
In addition, |g'(x)| = (1/2)√(1 + sin(x)) ≤ 1/√2 < 1 for all x ∈ [0, 1].

Order of Convergence

Given f(x) = 0, set up an iterative process of the form

xn+1 = g(xn),   n = 0, 1, 2, . . .

assuming g(x) has a fixed point and |g'(x)| < 1.
With the error en = r − xn, we have en+1 = r − xn+1, so

en+1 = r − xn+1 = r − g(xn) = r − g(r − en)    (1)

Using the Taylor expansion,

g(r − en) = g(r) − g'(r)en + g''(r)en²/2! − g'''(r)en³/3! + g⁗(r)en⁴/4! − ...

But g(r) = r, so

g(r − en) = r − g'(r)en + g''(r)en²/2! − g'''(r)en³/3! + g⁗(r)en⁴/4! − ...

Substituting in (1), we obtain

en+1 = r − [r − g'(r)en + g''(r)en²/2! − g'''(r)en³/3! + g⁗(r)en⁴/4! − ...]

Hence,

en+1 = g'(r)en − g''(r)en²/2! + g'''(r)en³/3! − g⁗(r)en⁴/4! + ...

Now,
 If g'(r) ≠ 0, then the fixed-point iteration converges linearly (order one).
 If g'(r) = 0 and g''(r) ≠ 0, then the fixed-point iteration converges
quadratically (order two).
 If g'(r) = 0, g''(r) = 0, and g'''(r) ≠ 0, then the fixed-point iteration
converges cubically (order three).
 etc.

EXAMPLE 13: The iteration formula xn+1 = 2 − (3/2)xn + (1/2)xn³ will converge to
r = 1 (provided that x0 is chosen sufficiently close to r). Will the convergence be
quadratic?
Solution: g(x) = 2 − (3/2)x + (1/2)x³, g'(x) = −3/2 + (3/2)x², so
g'(1) = −3/2 + 3/2 = 0
g''(x) = 3x, g''(1) = 3 ≠ 0, so the convergence is quadratic.
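This quadratic behavior is easy to observe numerically; the short Python sketch below (my own illustration) iterates the formula from x0 = 1.1 and checks that the error shrinks roughly like en+1 ≈ (g''(1)/2!) en² = 1.5 en²:

```python
g = lambda x: 2 - 1.5 * x + 0.5 * x**3

x = 1.1
errors = []
for _ in range(4):
    x = g(x)
    errors.append(abs(1 - x))   # e_n = |r - x_n| with r = 1
# After four iterations the error is already near machine precision,
# and errors[n+1] / errors[n]**2 is close to 1.5.
```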
