Roots of Equations: Open Methods

The document introduces root finding methods, including bracketing methods like the bisection method and regula falsi, as well as open methods. Open methods iteratively find roots by constructing a formula g(x) to predict the root. While open methods may converge quickly, they can also diverge. The Newton-Raphson method uses the slope of the function to predict where the tangent line intersects the x-axis. This results in quadratic convergence, making it faster than fixed point iteration methods, which only achieve linear convergence.


Roots of Equations

Open Methods

Second Term

The following root finding methods will be introduced:

A. Bracketing Methods
A.1. Bisection Method
A.2. Regula Falsi

B. Open Methods
B.1. Fixed Point Iteration
B.2. Newton-Raphson Method
B.3. Secant Method

B. Open Methods

To find the root of f(x) = 0, we construct a magic formula

xi+1 = g(xi)

to predict the root iteratively until x converges to a root. However, x may diverge!

(Figures: (a) bisection method; (b) open method, diverging; (c) open method, converging)

What you should know about Open Methods

How to construct the magic formula g(x)?
How can we ensure convergence?
What makes a method converge quickly or diverge?
How fast does a method converge?

B.1. Fixed Point Iteration

Also known as one-point iteration or successive substitution.
To find the root of f(x) = 0, we rearrange f(x) = 0 so that there is an x on one side of the equation:

f(x) = 0  =>  g(x) = x

If we can solve g(x) = x, we solve f(x) = 0.
We solve g(x) = x by computing

xi+1 = g(xi),  with x0 given,

until xi+1 converges to xi.

Fixed Point Iteration Example

f(x) = x² - 2x - 3 = 0
x² - 2x - 3 = 0  =>  2x = x² - 3  =>  x = (x² - 3)/2

xi+1 = g(xi) = (xi² - 3)/2

Reason: when x converges, i.e. xi+1 ≈ xi,

xi = (xi² - 3)/2  =>  xi² - 2xi - 3 = 0

Example
Find the root of f(x) = e^(-x) - x = 0. (Answer: α = 0.56714329)

We put xi+1 = e^(-xi), starting with x0 = 0.

i     xi         εa (%)    εt (%)
0     0.000000      -      100.0
1     1.000000    100.0     76.3
2     0.367879    171.8     35.1
3     0.692201     46.9     22.1
4     0.500473     38.3     11.8
5     0.606244     17.4      6.89
6     0.545396     11.2      3.83
7     0.579612      5.90     2.20
8     0.560115      3.48     1.24
9     0.571143      1.93     0.705
10    0.564879      1.11     0.399

Two Curve Graphical Method

The point x where the two curves f1(x) = x and f2(x) = g(x) intersect is the solution to f(x) = 0.

Fixed Point Iteration

There are infinitely many ways to construct g(x) from f(x). For example,

f(x) = x² - 2x - 3 = 0  (ans: x = 3 or -1)

Case a:
x² - 2x - 3 = 0
x² = 2x + 3
x = √(2x + 3)
g(x) = √(2x + 3)

Case b:
x² - 2x - 3 = 0
x(x - 2) - 3 = 0
x = 3/(x - 2)
g(x) = 3/(x - 2)

Case c:
x² - 2x - 3 = 0
2x = x² - 3
x = (x² - 3)/2
g(x) = (x² - 3)/2

So which one is better?

Case a: xi+1 = √(2xi + 3)
1. x0 = 4
2. x1 = 3.31662
3. x2 = 3.10375
4. x3 = 3.03439
5. x4 = 3.01144
6. x5 = 3.00381
Converge!

Case b: xi+1 = 3/(xi - 2)
1. x0 = 4
2. x1 = 1.5
3. x2 = -6
4. x3 = -0.375
5. x4 = -1.263158
6. x5 = -0.919355
7. x6 = -1.02762
8. x7 = -0.990876
9. x8 = -1.00305
Converge, but slower (toward -1).

Case c: xi+1 = (xi² - 3)/2
1. x0 = 4
2. x1 = 6.5
3. x2 = 19.625
4. x3 = 191.070
Diverge!

How to choose g(x)?

Can we know which function g(x) would converge to the solution before we do the computation?

Convergence of Fixed Point Iteration

By definition (α denotes the true root):

εi = α - xi          (1)
εi+1 = α - xi+1      (2)

Fixed point iteration:

α = g(α)             (3)
xi+1 = g(xi)         (4)

(3) - (4):       α - xi+1 = g(α) - g(xi)   (5)
Sub (2) in (5):  εi+1 = g(α) - g(xi)       (6)

Convergence of Fixed Point Iteration

According to the derivative mean-value theorem, if g(x) and g'(x) are continuous over an interval xi ≤ x ≤ α, there exists a value x = ξ within the interval such that

g'(ξ) = (g(α) - g(xi)) / (α - xi)   (7)

From (1) and (6), we have α - xi = εi and εi+1 = g(α) - g(xi).
Thus (7) becomes εi+1/εi = g'(ξ), i.e.

εi+1 = g'(ξ) εi

Therefore, if |g'(x)| < 1, the error decreases with each iteration. If |g'(x)| > 1, the error increases.
If the derivative is positive, the iterative solution will be monotonic.
If the derivative is negative, the errors will oscillate.

(a) |g'(x)| < 1, g'(x) is +ve: converge, monotonic
(b) |g'(x)| < 1, g'(x) is -ve: converge, oscillate
(c) |g'(x)| > 1, g'(x) is +ve: diverge, monotonic
(d) |g'(x)| > 1, g'(x) is -ve: diverge, oscillate

Fixed Point Iteration Impl. (as C function)

// x0: Initial guess of the root
// es: Acceptable relative percentage error
// iter_max: Maximum number of iterations allowed
double FixedPt(double x0, double es, int iter_max) {
    double xr = x0;     // Estimated root
    double xr_old;      // Keep xr from previous iteration
    double ea = 100.0;  // Approximate relative error (%)
    int iter = 0;       // Keep track of # of iterations
    do {
        xr_old = xr;
        xr = g(xr_old); // g(x) has to be supplied
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;
        iter++;
    } while (ea > es && iter < iter_max);
    return xr;
}

The following root finding methods will be introduced:

A. Bracketing Methods
A.1. Bisection Method
A.2. Regula Falsi

B. Open Methods
B.1. Fixed Point Iteration
B.2. Newton-Raphson Method
B.3. Secant Method

B.2. Newton-Raphson Method

Use the slope of f(x) to predict the location of the root. xi+1 is the point where the tangent at xi intersects the x-axis.

f'(xi) = (f(xi) - 0) / (xi - xi+1)

xi+1 = xi - f(xi)/f'(xi)

Newton-Raphson Method

xi+1 = xi - f(xi)/f'(xi)

What would happen when f'(α) = 0?
For example, f(x) = (x - 1)² = 0.

Error Analysis of Newton-Raphson Method

By definition:

εi = α - xi          (1)
εi+1 = α - xi+1      (2)

Newton-Raphson method:

xi+1 = xi - f(xi)/f'(xi)
f(xi) = f'(xi)(xi - xi+1)
f(xi) = f'(xi)(xi) - f'(xi)(xi+1)   (3)

Error Analysis of Newton-Raphson Method

Suppose α is the true value (i.e., f(α) = 0). Using Taylor's series:

f(α) = f(xi) + f'(xi)(α - xi) + f''(ξ)/2 (α - xi)²
0 = f(xi) + f'(xi)(α - xi) + f''(ξ)/2 (α - xi)²
0 = f'(xi)(α - xi+1) + f''(ξ)/2 (α - xi)²      (from (3))
0 = f'(xi) εi+1 + f''(ξ)/2 εi²                 (from (1) and (2))

εi+1 = -f''(ξ)/(2 f'(xi)) εi² ≈ -f''(α)/(2 f'(α)) εi²

(When xi and α are very close to each other, ξ is between xi and α.)

The iterative process is said to be of second order.

The Order of Iterative Process (Definition)

Using an iterative process we get xk+1 from xk and other info.
We have x0, x1, x2, ..., xk+1 as the estimations for the root α.
Let εk = α - xk.
Then we may observe

|εk+1| = O(|εk|^p)

The process in such a case is said to be of p-th order.

It is called Superlinear if p > 1.
It is called Linear if p = 1.
It is called Sublinear if p < 1.

Error of the Newton-Raphson Method

Each error is approximately proportional to the square of the previous error. This means that the number of correct decimal places roughly doubles with each approximation.

Example: Find the root of f(x) = e^(-x) - x = 0  (Ans: α = 0.56714329)

xi+1 = xi - (e^(-xi) - xi) / (-e^(-xi) - 1)

Error Analysis:

f'(α) = -e^(-α) - 1 = -1.56714329
f''(α) = e^(-α) = 0.56714329

Error Analysis

εi+1 ≈ -f''(α)/(2 f'(α)) εi²
     = -0.56714329/(2 × (-1.56714329)) εi²
     = 0.18095 εi²

xi            εt (%)      |εi|          estimated |εi+1|
0             100         0.56714329    0.0582
0.500000000   11.8        0.06714329    0.0008158
0.566311003   0.147       0.0008323     0.000000125
0.567143165   0.0000220   0.000000125   2.83x10^-15
0.567143290   < 10^-8

Newton-Raphson vs. Fixed Point Iteration

Find the root of f(x) = e^(-x) - x = 0. (Answer: α = 0.56714329)

Newton-Raphson:

i   xi            εt (%)      |εi|
0   0             100         0.56714329
1   0.500000000   11.8        0.06714329
2   0.566311003   0.147       0.0008323
3   0.567143165   0.0000220   0.000000125
4   0.567143290   < 10^-8

Fixed Point Iteration with xi+1 = e^(-xi):

i     xi         εa (%)    εt (%)
0     0.000000      -      100.0
1     1.000000    100.0     76.3
2     0.367879    171.8     35.1
3     0.692201     46.9     22.1
4     0.500473     38.3     11.8
5     0.606244     17.4      6.89
6     0.545396     11.2      3.83
7     0.579612      5.90     2.20
8     0.560115      3.48     1.24
9     0.571143      1.93     0.705
10    0.564879      1.11     0.399

Pitfalls of the Newton-Raphson Method

Sometimes slow. For example, f(x) = x^10 - 1:

iteration   xi
0           0.5
1           51.65
2           46.485
3           41.8365
4           37.65285
5           33.887565
...         ...
∞           1.0000000

Pitfalls of the Newton-Raphson Method

Figure (a): An inflection point (f''(x) = 0) in the vicinity of a root causes divergence.
Figure (b): A local maximum or minimum causes oscillations.

Pitfalls of the Newton-Raphson Method

Figure (c): It may jump from one location close to one root to a location that is several roots away.
Figure (d): A zero slope causes division by zero.

Overcoming the Pitfalls?

There are no general convergence criteria for the Newton-Raphson method.
Convergence depends on the nature of the function and the accuracy of the initial guess.
A guess that is close to the true root is always a better choice.
Good knowledge of the function or graphical analysis can help you make good guesses.
Good software should recognize slow convergence or divergence.
At the end of the computation, the final root estimate should always be substituted into the original function to verify the solution.

The following root finding methods will be introduced:

A. Bracketing Methods
A.1. Bisection Method
A.2. Regula Falsi

B. Open Methods
B.1. Fixed Point Iteration
B.2. Newton-Raphson Method
B.3. Secant Method

B.3. Secant Method

The Newton-Raphson method needs to compute the derivative.
The secant method approximates the derivative by a finite divided difference:

f'(xi) ≈ (f(xi-1) - f(xi)) / (xi-1 - xi)

From the Newton-Raphson method:

xi+1 = xi - f(xi)/f'(xi)

xi+1 = xi - f(xi)(xi-1 - xi) / (f(xi-1) - f(xi))

Secant Method
(figure)

Secant Method Example

Find the root of f(x) = e^(-x) - x = 0 with initial estimates x-1 = 0 and x0 = 1.0. (Answer: α = 0.56714329)

xi+1 = xi - f(xi)(xi-1 - xi) / (f(xi-1) - f(xi))

i   xi-1      xi        f(xi-1)    f(xi)      xi+1      εt (%)
1   0.00000   1.00000   1.00000    -0.63212   0.61270   8.0
2   1.00000   0.61270   -0.63212   -0.07081   0.56384   0.58
3   0.61270   0.56384   -0.07081   0.00518    0.56717   0.0048

Again, compare these results with those obtained by the Newton-Raphson method and the simple fixed point iteration method.

Comparison of the Secant and False-position Method

Both methods use the same expression to compute xr:

Secant:         xi+1 = xi - f(xi)(xi-1 - xi) / (f(xi-1) - f(xi))
False position: xr = xu - f(xu)(xl - xu) / (f(xl) - f(xu))

They have different methods for the replacement of the initial values by the new estimate. (see next page)

Comparison of the Secant and False-position Method

(figures: both methods applied to f(x) = e^(-x) - x)

Modified Secant Method

Needs only one initial guess point instead of two.
Replace xi-1 - xi by δxi and approximate f'(x) as

f'(xi) ≈ (f(xi + δxi) - f(xi)) / (δxi)

From the Newton-Raphson method,

xi+1 = xi - f(xi)/f'(xi)

xi+1 = xi - δxi f(xi) / (f(xi + δxi) - f(xi))

Modified Secant Method

Find the root of f(x) = e^(-x) - x = 0 with initial estimate x0 = 1.0 and δ = 0.01. (Answer: α = 0.56714329)

i   xi         xi + δxi   f(xi)      f(xi + δxi)   xi+1
0   1.000000   1.010000   -0.63212   -0.64578      0.537263
1   0.537263   0.542635   0.047083   0.038579      0.56701
2   0.567010   0.572680   0.000209   -0.00867      0.567143

Compared with the Secant method:

i   xi-1      xi        f(xi-1)    f(xi)      xi+1      εt (%)
1   0.00000   1.00000   1.00000    -0.63212   0.61270   8.0
2   1.00000   0.61270   -0.63212   -0.07081   0.56384   0.58
3   0.61270   0.56384   -0.07081   0.00518    0.56717   0.0048

About the Modified Secant Method

If δ is too small, the method can be swamped by round-off error caused by subtractive cancellation in the denominator of

xi+1 = xi - δxi f(xi) / (f(xi + δxi) - f(xi))

If δ is too big, this technique can become inefficient and even divergent.
If δ is selected properly, this method provides a good alternative for cases when developing two initial guesses is inconvenient.

The following root finding methods will be introduced:

A. Bracketing Methods
A.1. Bisection Method
A.2. Regula Falsi

B. Open Methods
B.1. Fixed Point Iteration
B.2. Newton-Raphson Method
B.3. Secant Method

Can they handle multiple roots?

Multiple Roots

A multiple root corresponds to a point where a function is tangent to the x axis.

For example, this function has a double root:
f(x) = (x - 3)(x - 1)(x - 1) = x³ - 5x² + 7x - 3

For example, this function has a triple root:
f(x) = (x - 3)(x - 1)(x - 1)(x - 1) = x⁴ - 6x³ + 12x² - 10x + 3

Multiple Roots

Odd multiple roots cross the axis. (Figure (b))
Even multiple roots do not cross the axis. (Figures (a) and (c))

Difficulties when we have multiple roots

Bracketing methods do not work for even multiple roots.
f(α) = f'(α) = 0, so both f(xi) and f'(xi) approach zero near the root. This could result in division by zero. A zero check for f(x) should be incorporated so that the computation stops before f'(x) reaches zero.
For multiple roots, the Newton-Raphson and Secant methods converge linearly rather than quadratically.

Modified Newton-Raphson Methods for Multiple Roots

Suggested Solution 1:

Define f~ = f^(1/m), where m is the multiplicity of the root.
Then f~(α) = 0 and α is a single root of f~.

xi+1 = xi - f~(xi)/f~'(xi)
     = xi - f^(1/m)(xi) / ((1/m) f^(1/m - 1)(xi) f'(xi))
     = xi - m f(xi)/f'(xi)

Disadvantage: works only when m is known.

Modified Newton-Raphson Methods for Multiple Roots

Suggested Solution 2:

Define f~(x) = f(x)/f'(x)   (1)

f~(x) has roots at all the same locations as f(x).

xi+1 = xi - f~(xi)/f~'(xi)   (2)

Differentiate (1):

f~'(x) = ([f'(x)]² - f(x) f''(x)) / [f'(x)]²   (3)

Sub (1) and (3) into (2):

xi+1 = xi - f(xi) f'(xi) / ([f'(xi)]² - f(xi) f''(xi))

Example of the Modified Newton-Raphson Method for Multiple Roots

Original Newton-Raphson method:

f(x) = (x - 3)(x - 1)(x - 1) = x³ - 5x² + 7x - 3

xi+1 = xi - f(xi)/f'(xi)
     = xi - (xi³ - 5xi² + 7xi - 3) / (3xi² - 10xi + 7)

i   xi          εt (%)
0   0           100
1   0.4285714   57
2   0.6857143   31
3   0.8328654   17
4   0.9133290   8.7
5   0.9557833   4.4
6   0.9776551   2.2

The method is linearly convergent toward the true value of 1.0.

Example of the Modified Newton-Raphson Method for Multiple Roots

For the modified algorithm:

f(x) = (x - 3)(x - 1)(x - 1) = x³ - 5x² + 7x - 3

xi+1 = xi - f(xi) f'(xi) / ([f'(xi)]² - f(xi) f''(xi))
     = xi - (xi³ - 5xi² + 7xi - 3)(3xi² - 10xi + 7) / [(3xi² - 10xi + 7)² - (xi³ - 5xi² + 7xi - 3)(6xi - 10)]

i   xi         εt (%)
0   0          100
1   1.105263   11
2   1.003082   0.31
3   1.000002   0.00024

Example of the Modified Newton-Raphson Method for Multiple Roots

How about their performance on finding the single root (x = 3)?

i   Standard   εt (%)    Modified   εt (%)
0   4          33        4          33
1   3.4        13        2.636364   12
2   3.1        3.3       2.820225   6.0
3   3.008696   0.29      2.961728   1.3
4   3.000075   0.0025    2.998479   0.05
5   3.000000   2x10^-7   2.999998   7.7x10^-5

Modified Newton-Raphson Methods for Multiple Roots

What is the disadvantage of the modified Newton-Raphson methods for multiple roots compared with the original Newton-Raphson method?

Note that the Secant method can also be modified in a similar fashion for multiple roots.

Summary of Open Methods

Unlike bracketing methods, open methods do not always converge.
Open methods, if they converge, usually converge more quickly than bracketing methods.
Open methods can locate even multiple roots, whereas bracketing methods cannot. (Why?)

Study Objectives
Understand the graphical interpretation of a root
Understand the differences between bracketing
methods and open methods for root location
Understand the concept of convergence and
divergence
Know why bracketing methods always converge,
whereas open methods may sometimes diverge
Realize that convergence of open methods is
more likely if the initial guess is close to the true
root.

Study Objectives
Understand what conditions make a method
converge quickly or diverge
Understand the concepts of linear and quadratic
convergence and their implications for the
efficiencies of the fixed-point-iteration and
Newton-Raphson methods
Know the fundamental difference between the
false-position and secant methods and how it
relates to convergence
Understand the problems posed by multiple roots
and the modifications available to mitigate them

Analysis of Convergent Rate

Suppose g(x) converges to the solution α; then the Taylor series of g(α) about xi can be expressed as

g(α) = g(xi) + g'(xi)(α - xi) + g''(xi)/2! (α - xi)² + ...   (1)

By definition:

α = g(α),  xi+1 = g(xi),  εi+1 = α - xi+1,  εi = α - xi

Thus (1) becomes

α - xi+1 = g'(xi) εi + g''(xi)/2! εi² + ...
εi+1 = g'(xi) εi + g''(xi)/2! εi² + ...   (2)

Analysis of Convergent Rate

When xi is very close to the solution, we can rewrite (2) as

εi+1 = g'(α) εi + g''(α)/2! εi² + g^(3)(α)/3! εi³ + ...

Suppose g^(n) exists and the nth term is the first non-zero term; then

εi+1 ≈ g^(n)(α)/n! εi^n

Thus to analyze the convergent rate, we can find the smallest n such that g^(n)(α) ≠ 0.
