
Numerical Methods

Root Finding
Root Finding Topics

• Bisection Method
• Newton's Method
• Secant Method
• Fixed-point formulas, Basins of Attraction, and Fractals

2
Objectives
• Understanding what root problems are and where they occur in engineering and science
• Knowing how to determine a root graphically
• Knowing how to solve a root problem with a bracketing method
• Understanding open methods and recognizing the difference between open and bracketing methods
• Knowing how to use MATLAB built-in functions
Content

• Introduction
• Graphical method
• Bracketing methods (bisection and false position)
• Open methods (Newton-Raphson and secant)
• MATLAB built-in functions
• Conclusion
Introduction
Many engineering and science problems predict dependent variable(s) as a function of independent variable(s), for example:

Heat balance: temperature as a function of time and space
Mass balance: mass concentration as a function of time and space
Kirchhoff's laws: current/voltage as a function of time
Introduction (cont’d)
If we are solving a simple nonlinear equation, it's a piece of cake:

ax² + bx + c = 0  ⇒  x = (−b ± √(b² − 4ac)) / (2a)

But what about these?

ax⁵ + bx⁴ + cx³ + dx² + ex + f = 0  ⇒  x = ?

sin x + x = 0  ⇒  x = ?

Yes, it's time for numerical methods!


Nonlinear Equation Solvers

• Graphical
• Bracketing: Bisection, False Position (Regula Falsi)
• Open methods: Newton-Raphson, Secant

All are iterative.
7
Motivation

• Many problems can be rewritten into a form such as:
  • f(x, y, z, …) = 0
  • f(x, y, z, …) = g(s, q, …)

8
Motivation

• A root, r, of function f occurs when f(r) = 0.
• For example, f(x) = x² − 2x − 3 has two roots, at r = −1 and r = 3:
  • f(−1) = 1 + 2 − 3 = 0
  • f(3) = 9 − 6 − 3 = 0
• We can also look at f in its factored form:
  f(x) = x² − 2x − 3 = (x + 1)(x − 3)

9
Factored Form of Functions
• The factored form is not limited to polynomials.
• Consider f(x) = x sin x − sin x. A root exists at x = 1, since
  f(x) = (x − 1) sin x
• Or consider f(x) = sin πx, which behaves like x(x − 1)(x − 2)⋯ and has a root at every integer.

10
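The roots claimed above can be confirmed with a quick numeric check; a minimal Python sketch:

```python
def f(x):
    return x**2 - 2*x - 3    # = (x + 1)(x - 3), from the example above

print(f(-1), f(3))           # both evaluate to 0, confirming the roots
```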
Graphical…
The graphical method means that we plot a graph of the target nonlinear equation and observe where it crosses zero.

Very simple, but imprecise and tedious.

Here we try it with the parachutist problem.


Graphical… (cont’d)
Start with the distance equation

1000 = (gm/c) [ t1km − (m/c)(1 − e^(−(c/m) t1km)) ]

Find t1km such that

f(t1km) = (gm/c) [ t1km − (m/c)(1 − e^(−(c/m) t1km)) ] − 1000 = 0

is satisfied. Plot f(t1km) for t1km = {0, T}, making sure that the zero of f(t1km) falls within this interval.
Graphical… (cont’d)
Try MATLAB to plot the function from t = 0 to 50; utilize the command "inline".

[Figure: plot of f(t) versus time (s) for t = 0 to 50. The zero is around t ≈ 12.7; a more accurate result can be attained by zooming in on the figure.]
Bracketing Methods
(Or, two point methods for finding roots)

• Two initial guesses for the root are required. These guesses must "bracket," or be on either side of, the root.

• If one root of a real and continuous function f(x) = 0 is bounded by the values x = xl and x = xu, then f(xl) · f(xu) < 0. (The function changes sign on opposite sides of the root.)

14
[Figures: possible bracket configurations]
• No answer (no root)
• Nice case (one root)
• Oops!! (two roots!!)
• Three roots (might work for a while!!)
• Two roots (might work for a while!!)
• Discontinuous function: needs a special method
• MANY-MANY roots, e.g. f(x) = sin 10x + cos 3x. What do we do?

17
Root Finding Algorithms

• Closed or Bracketed techniques


• Bi-section
• Regula-Falsi
• Open techniques
• Newton fixed-point iteration
• Secant method
• Multidimensional non-linear problems
• The Jacobian matrix
• Fixed-point iterations
• Convergence and Fractal Basins of Attraction

18
Bisection Method

• Based on the fact that the function changes sign as it passes through the root: f(a) · f(b) < 0.
• Once we have a root bracketed, we simply evaluate the midpoint and halve the interval.

19
The Bisection Method

For an arbitrary equation of one variable, f(x) = 0:

1. Pick xl and xu such that they bound the root of interest; check that f(xl) · f(xu) < 0.

2. Estimate the root by evaluating f[(xl + xu)/2].

3. Find the new bracket:

• If f(xl) · f[(xl + xu)/2] < 0, the root lies in the lower interval; set xu = (xl + xu)/2 and go to step 2.

• If f(xl) · f[(xl + xu)/2] > 0, the root lies in the upper interval; set xl = (xl + xu)/2 and go to step 2.

• If f(xl) · f[(xl + xu)/2] = 0, the root is (xl + xu)/2; terminate.

4. Compare εs with εa, where

εa = | (xu − xl)/2 | / | (xl + xu)/2 | × 100%

5. If εa < εs, stop. Otherwise repeat the process.
21
Evaluation of Method

Pros:
• Easy
• Always finds a root
• The number of iterations required to attain a given absolute error can be computed a priori

Cons:
• Slow
• Must know a and b that bound the root
• Trouble with multiple roots
• No account is taken of the magnitudes of f(xl) and f(xu); if f(xl) is closer to zero, it is likely that the root is closer to xl

23
Bisection Method

• c = (a + b)/2

[Figure: bracket with f(a) > 0 at a, f(c) > 0 at the midpoint c, and f(b) < 0 at b.]
24
Bisection Method

• Guaranteed to converge to a root if one exists within the bracket.

[Figure: the midpoint c replaces the endpoint whose function value has the same sign, and the bracket halves.]
25
Bisection Method

• Converges slowly to the root.

[Figure: the new midpoint c again replaces the endpoint with the matching sign (b = c), and the interval halves once more.]
26
Bisection Method
• Simple algorithm:
Given: a and b, such that f(a)*f(b)<0
Given: error tolerance, err

c = (a+b)/2.0;            // find the midpoint

while( |f(c)| > err ) {
    if( f(a)*f(c) < 0 )   // root in the left half
        b = c;
    else                  // root in the right half
        a = c;
    c = (a+b)/2.0;        // find the new midpoint
}
return c;

27
Step 1
Choose xl and xu as two guesses for the root such that f(xl) · f(xu) < 0; in other words, f(x) changes sign between xl and xu. This was demonstrated in Figure 1.

[Figure 1: graph of f(x) changing sign between xl and xu.]

28
Step 2

Estimate the root, xm, of the equation f(x) = 0 as the midpoint between xl and xu:

xm = (xl + xu) / 2

[Figure 5: estimate of xm on the graph of f(x).]
29
Step 3

Now check the following:

a) If f(xl) · f(xm) < 0, then the root lies between xl and xm; set xl = xl and xu = xm.

b) If f(xl) · f(xm) > 0, then the root lies between xm and xu; set xl = xm and xu = xu.

c) If f(xl) · f(xm) = 0, then the root is xm. Stop the algorithm if this is true.

30
Step 4

Find the new estimate of the root:

xm = (xl + xu) / 2

Find the absolute relative approximate error:

|εa| = | (xm_new − xm_old) / xm_new | × 100

where
xm_old = previous estimate of the root
xm_new = current estimate of the root
31
Step 5

Compare the absolute relative approximate error |εa| with the pre-specified error tolerance εs:

• If |εa| > εs, go to Step 2 using the new upper and lower guesses.
• Otherwise, stop the algorithm.

Note: one should also check whether the number of iterations exceeds the maximum allowed. If so, terminate the algorithm and notify the user.
32
Example 1
You are working for ‘DOWN THE TOILET COMPANY’ that makes
floats for ABC commodes. The floating ball has a specific
gravity of 0.6 and has a radius of 5.5 cm. You are asked to find
the depth to which the ball is submerged when floating in
water.

Figure 6 Diagram of the floating ball


33
Example 1 Cont.

The equation that gives the depth x to which the ball is submerged under water is given by

x³ − 0.165x² + 3.993×10⁻⁴ = 0

a) Use the bisection method of finding roots of equations to find the depth x to which the ball is submerged under water. Conduct three iterations to estimate the root of the above equation.

b) Find the absolute relative approximate error at the end of each iteration, and the number of significant digits at least correct at the end of each iteration.
34
Example 1 Cont.

From the physics of the problem, the ball would be submerged between x = 0 and x = 2R, where R = radius of the ball; that is,

0 ≤ x ≤ 2R
0 ≤ x ≤ 2(0.055)
0 ≤ x ≤ 0.11

Figure 6 Diagram of the floating ball
35
Example 1 Cont.

Solution

To aid in understanding how this method works to find the root of an equation, the graph of f(x) is shown to the right, where

f(x) = x³ − 0.165x² + 3.993×10⁻⁴

Figure 7 Graph of the function f(x)
36
Example 1 Cont.

Let us assume

xl = 0.00, xu = 0.11

Check if the function changes sign between xl and xu:

f(xl) = f(0) = (0)³ − 0.165(0)² + 3.993×10⁻⁴ = 3.993×10⁻⁴
f(xu) = f(0.11) = (0.11)³ − 0.165(0.11)² + 3.993×10⁻⁴ = −2.662×10⁻⁴

Hence

f(xl) · f(xu) = f(0) · f(0.11) = (3.993×10⁻⁴)(−2.662×10⁻⁴) < 0

So there is at least one root between xl and xu, that is, between 0 and 0.11.
37
Example 1 Cont.

Figure 8 Graph demonstrating sign change between initial limits


38
Example 1 Cont.

Iteration 1
The estimate of the root is

xm = (xl + xu)/2 = (0 + 0.11)/2 = 0.055

f(xm) = f(0.055) = (0.055)³ − 0.165(0.055)² + 3.993×10⁻⁴ = 6.655×10⁻⁵

f(xl) · f(xm) = f(0) · f(0.055) = (3.993×10⁻⁴)(6.655×10⁻⁵) > 0

Hence the root is bracketed between xm and xu, that is, between 0.055 and 0.11. So the lower and upper limits of the new bracket are

xl = 0.055, xu = 0.11

At this point, the absolute relative approximate error |εa| cannot be calculated, as we do not have a previous approximation.
39
Example 1 Cont.

Figure 9 Estimate of the root for Iteration 1


40
Example 1 Cont.

Iteration 2
The estimate of the root is

xm = (xl + xu)/2 = (0.055 + 0.11)/2 = 0.0825

f(xm) = f(0.0825) = (0.0825)³ − 0.165(0.0825)² + 3.993×10⁻⁴ = −1.622×10⁻⁴

f(xl) · f(xm) = f(0.055) · f(0.0825) = (6.655×10⁻⁵)(−1.622×10⁻⁴) < 0

Hence the root is bracketed between xl and xm, that is, between 0.055 and 0.0825. So the lower and upper limits of the new bracket are

xl = 0.055, xu = 0.0825
41
Example 1 Cont.

Figure 10 Estimate of the root for Iteration 2


42
Example 1 Cont.

The absolute relative approximate error |εa| at the end of Iteration 2 is

|εa| = | (xm_new − xm_old) / xm_new | × 100
     = | (0.0825 − 0.055) / 0.0825 | × 100
     = 33.33%

None of the significant digits are at least correct in the estimated root of xm = 0.0825, because the absolute relative approximate error is greater than 5%.
43
Example 1 Cont.

Iteration 3
The estimate of the root is

xm = (xl + xu)/2 = (0.055 + 0.0825)/2 = 0.06875

f(xm) = f(0.06875) = (0.06875)³ − 0.165(0.06875)² + 3.993×10⁻⁴ = −5.563×10⁻⁵

f(xl) · f(xm) = f(0.055) · f(0.06875) = (6.655×10⁻⁵)(−5.563×10⁻⁵) < 0

Hence the root is bracketed between xl and xm, that is, between 0.055 and 0.06875. So the lower and upper limits of the new bracket are

xl = 0.055, xu = 0.06875
44
Example 1 Cont.

Figure 11 Estimate of the root for Iteration 3


45
Example 1 Cont.

The absolute relative approximate error |εa| at the end of Iteration 3 is

|εa| = | (xm_new − xm_old) / xm_new | × 100
     = | (0.06875 − 0.0825) / 0.06875 | × 100
     = 20%

Still none of the significant digits are at least correct in the estimated root of the equation, as the absolute relative approximate error is greater than 5%.

Seven more iterations were conducted; these iterations are shown in Table 1.
46
Table 1 Cont.

Table 1 Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration    xl        xu        xm        |εa| %      f(xm)

1 0.00000 0.11 0.055 ---------- 6.655×10−5


2 0.055 0.11 0.0825 33.33 −1.622×10−4
3 0.055 0.0825 0.06875 20.00 −5.563×10−5
4 0.055 0.06875 0.06188 11.11 4.484×10−6
5 0.06188 0.06875 0.06531 5.263 −2.593×10−5
6 0.06188 0.06531 0.06359 2.702 −1.0804×10−5
7 0.06188 0.06359 0.06273 1.370 −3.176×10−6
8 0.06188 0.06273 0.0623 0.6897 6.497×10−7
9 0.0623 0.06273 0.06252 0.3436 −1.265×10−6
10 0.0623 0.06252 0.06241 0.1721 −3.0768×10−7

47
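Table 1 can be reproduced with a short script; this is a minimal Python sketch of the loop in Steps 1-5, applied to the floating-ball equation (the iteration count is my own choice):

```python
def f(x):
    return x**3 - 0.165 * x**2 + 3.993e-4

xl, xu = 0.0, 0.11         # bracket from the physics of the problem
xm_old = None
for i in range(1, 11):
    xm = (xl + xu) / 2     # Step 2: midpoint estimate of the root
    if xm_old is None:
        print(f"iter {i:2d}: xm = {xm:.5f}")
    else:
        ea = abs((xm - xm_old) / xm) * 100   # Step 4: |ea| in percent
        print(f"iter {i:2d}: xm = {xm:.5f}, |ea| = {ea:.4g}%")
    if f(xl) * f(xm) > 0:  # Step 3: root is in the upper half
        xl = xm
    else:                  # root is in the lower half (or xm is the root)
        xu = xm
    xm_old = xm
```

The last line printed is xm = 0.06241, matching row 10 of Table 1.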
Bracketing Methods

• Bracketing methods are robust.
• Convergence is typically slower than for open methods.
• Use them to find the approximate location of roots, then "polish" with open methods.
• They rely on initially identifying two points a, b such that f(a) · f(b) < 0.
• Guaranteed to converge.

48
Root Finding Algorithms

• Closed or Bracketed techniques


• Bi-section
• Regula-Falsi
• Open techniques
• Newton fixed-point iteration
• Secant method
• Multidimensional non-linear problems
• The Jacobian matrix
• Fixed-point iterations
• Convergence and Fractal Basins of Attraction

49
Open methods
• Open methods are based on formulas that require only a single starting value of x, or two starting values that do not necessarily bracket the root.
Simple Fixed-point Iteration

• Rearrange the function so that x is on the left side of the equation:

f(x) = 0  ⇒  g(x) = x
xk = g(xk−1),  x0 given, k = 1, 2, …

• Bracketing methods are "convergent."
• Fixed-point methods may sometimes "diverge," depending on the starting point (the initial guess) and how the function behaves.

Example: f(x) = x² − x − 2 = 0 can be rearranged as

g(x) = x² − 2,  or
g(x) = √(x + 2),  or
g(x) = 1 + 2/x

Convergence

• x = g(x) can be expressed as a pair of equations:
y1 = x
y2 = g(x)   (component equations)
• Plot them separately; the root lies where the two curves intersect.
Conclusion

• Fixed-point iteration converges if

|g′(x)| < 1   (the slope of g(x) is less than the slope of the line f(x) = x)

• When the method converges, the error is roughly proportional to, or less than, the error of the previous step; therefore it is called "linearly convergent."
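To see both behaviors, take the example above, f(x) = x² − x − 2, whose roots are x = 2 and x = −1: the rearrangement g(x) = √(x + 2) has |g′| < 1 near x = 2 and converges, while g(x) = x² − 2 has |g′| > 1 there and diverges. A minimal sketch (the starting guesses and iteration counts are my own choices):

```python
import math

x = 0.0                      # hypothetical starting guess
for _ in range(25):
    x = math.sqrt(x + 2)     # g(x) = sqrt(x + 2); |g'(2)| = 1/4 < 1
print(x)                     # converges toward the root x = 2

y = 2.1                      # start very close to the root...
for _ in range(5):
    y = y * y - 2            # g(x) = x^2 - 2; |g'(2)| = 4 > 1
print(y)                     # ...yet the iterates run away from 2
```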
Newton-Raphson Method

• Most widely used method.
• Based on the Taylor series expansion:

f(xi+1) = f(xi) + f′(xi) Δx + f″(xi) Δx²/2! + O(Δx³)

The root is the value of xi+1 when f(xi+1) = 0. Dropping the higher-order terms and rearranging,

0 = f(xi) + f′(xi)(xi+1 − xi)

Solving for xi+1:

xi+1 = xi − f(xi)/f′(xi)    (Newton-Raphson formula)

• A convenient method for functions whose derivatives can be evaluated analytically. It may not be convenient for functions whose derivatives cannot be evaluated analytically.
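As a sketch, the formula can be applied to the floating-ball equation f(x) = x³ − 0.165x² + 3.993×10⁻⁴ from the bisection example, whose derivative f′(x) = 3x² − 0.33x is available analytically (the starting guess is my own choice):

```python
def f(x):
    return x**3 - 0.165 * x**2 + 3.993e-4

def fprime(x):
    return 3 * x**2 - 0.33 * x    # analytic derivative

x = 0.05                          # hypothetical initial guess in [0, 0.11]
for _ in range(6):
    x = x - f(x) / fprime(x)      # Newton-Raphson update
print(x)                          # ~0.06238, the same root bisection finds
```

Note how few iterations are needed compared with the ten bisection steps of Table 1.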
Advantages and disadvantages of Newton's method:
1. The error decreases rapidly with each iteration.
2. Newton's method is very fast. (Compare with the bisection method!)
3. Unfortunately, for bad choices of x0 (the initial guess) the method can fail to converge! Therefore the choice of x0 is VERY IMPORTANT!
4. Each iteration of Newton's method requires two function evaluations (f and f′), while the bisection method requires only one.
Newton’s Method

• Open solution that requires only one current guess.
• The root does not need to be bracketed.
• Consider some point x0. If we approximate f(x) as a line about x0, then we can again solve for the root of the line:

l(x) = f′(x0)(x − x0) + f(x0)
59
Newton’s Method

• Setting l(x) = 0 and solving leads to the following iteration:

x1 = x0 − f(x0)/f′(x0)

xi+1 = xi − f(xi)/f′(xi)
60
Newton’s Method
• This can also be seen from Taylor's series.
• Assume we have a guess x0 close to the actual root. Expand f(x) about this point, with x = xi + Δx:

f(xi + Δx) = f(xi) + Δx f′(xi) + Δx²/2! f″(xi) + ⋯ = 0

• If Δx is small, then the Δxⁿ terms quickly go to zero, leaving

Δx = xi+1 − xi = −f(xi)/f′(xi)
61
Newton’s Method

• Graphically, follow the tangent line down to its x-axis intersection.

[Figure: tangent at xi meeting the x-axis at xi+1.]
62
Newton’s Method

• Problems: for some functions and starting points the iteration diverges.

[Figure: an example where the iteration diverges from the starting guess x0.]
63
Newton’s Method

• Need the initial guess to be close, or the function to behave nearly linearly within the range.

64
Finding a square-root

• Ever wonder why they call this a square-root?


• Consider the roots of the equation:
• f(x) = x2-a
• This of course works for any power:

a  x  a  0, p  R
p p

65
Finding a square-root
• Example: 2 = 1.4142135623730950488016887242097
• Let x0 be one and apply Newton’s method.
f ( x)  2 x
xi  2 1  2
2
xi 1  xi    xi  
2 xi 2 xi 
x0  1
1 2 3
x1  1     1.5000000000
2 1 2
1  3 4  17
x2       1.4166666667
2  2 3  12
66
Finding a square-root
• Example: 2 = 1.4142135623730950488016887242097
• Note the rapid convergence
1  17 24  577
x3       1.414215686
2  12 17  408
x4  1.4142135623746
x5  1.4142135623730950488016896
x6  1.4142135623730950488016887242097

• Note, this was done with the standard Microsoft calculator


to maximum precision.

67
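The hand computation above is easy to reproduce; a minimal sketch:

```python
x = 1.0                        # x0 = 1, as in the example
for i in range(6):
    x = 0.5 * (x + 2.0 / x)    # Newton update for f(x) = x^2 - 2
    print(i + 1, x)            # x1 = 1.5, x2 = 1.41666..., ...
# by x5-x6 the iterate agrees with sqrt(2) to full double precision
```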
Finding a square-root
• Can we come up with a better initial guess?
• Sure: just divide the floating-point exponent by 2 (remember the bias offset).
• Use bit masks to extract the exponent to an integer, modify it, and set the initial guess.
• For √2, this leads to x0 = 1 (rounding down).

68
Newton’s Algorithm

• Requires the derivative function to be evaluated,


hence more function evaluations per iteration.
• A robust solution would check to see if the iteration
is stepping too far and limit the step.
• Most uses of Newton’s method assume the
approximation is pretty close and apply one to
three iterations blindly.

69
Division by Multiplication
• Newton's method has many uses in computing basic numbers.
• For example, consider the equation

1/x − a = 0

whose root is x = 1/a.
• With f(x) = 1/x − a and f′(x) = −1/x², Newton's method gives the iteration:

xk+1 = xk − (1/xk − a)/(−1/xk²) = xk + xk − a·xk² = xk(2 − a·xk)

Note that the update uses only multiplication and subtraction; no division is needed.
70
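A minimal sketch of this division-free iteration, computing 1/7 (the value a and the starting guess are my own choices; the iteration converges for 0 < x0 < 2/a):

```python
a = 7.0
x = 0.1                      # hypothetical starting guess in (0, 2/a)
for _ in range(8):
    x = x * (2.0 - a * x)    # only multiplies and subtracts; no division
print(x)                     # ~0.142857, i.e. 1/7
```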
Reciprocal Square Root
• Another useful operation is the reciprocal square root, 1/√a.
• Needed to normalize vectors.
• Can also be used to calculate the square root, since

√a = a · (1/√a)
71
Reciprocal Square Root
Let f(x) = 1/x² − a = 0, whose positive root is x = 1/√a. Then

f′(x) = −2/x³

• Newton's iteration yields:

xk+1 = xk − (1/xk² − a)/(−2/xk³) = xk + xk/2 − a·xk³/2 = (xk/2)(3 − a·xk²)
72
1/Sqrt(2)
• Let's look at the convergence for the reciprocal square root of 2, starting from x0 = 1:

x1 = 0.5 · 1 · (3 − 2 · 1²) = 0.5
x2 = 0.5 · 0.5 · (3 − 2 · 0.5²) = 0.625
x3 = 0.693359375
x4 = 0.706708468496799468994140625
x5 = 0.707106444695907075511730676593228
x6 = 0.707106781186307335925435931237738
x7 = 0.70710678118654752440084423972481

(If we could only start closer to the answer, far fewer steps would be needed!)

73
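The sequence above can be reproduced with the iteration just derived; a minimal sketch:

```python
a = 2.0
x = 1.0                              # x0 = 1, as in the slide
seq = []
for _ in range(10):
    x = 0.5 * x * (3.0 - a * x * x)  # reciprocal-square-root update
    seq.append(x)
print(seq[0], seq[1], seq[2])        # 0.5 0.625 0.693359375
print(seq[-1])                       # ~0.70710678, i.e. 1/sqrt(2)
```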
Root Finding Algorithms

• Closed or Bracketed techniques


• Bi-section
• Regula-Falsi
• Open techniques
• Newton fixed-point iteration
• Secant method
• Multidimensional non-linear problems
• The Jacobian matrix
• Fixed-point iterations
• Convergence and Fractal Basins of Attraction

74
Secant Method

• What if we do not know the derivative of f(x)?

[Figure: the secant line through xi−1 and xi, compared with the tangent line at xi.]
75
Secant Method

• As we converge on the root, the secant line approaches the tangent.


• Hence, we can use the secant line as an estimate and look at where it
intersects the x-axis (its root).

76
Secant Method – Derivation

Newton's method:

xi+1 = xi − f(xi)/f′(xi)    (1)

Approximate the derivative with a backward difference:

f′(xi) ≈ (f(xi) − f(xi−1)) / (xi − xi−1)    (2)

Substituting Equation (2) into Equation (1) gives the secant method:

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))

Figure 1 Geometrical illustration of the Newton-Raphson method.
77
Secant Method – Derivation

The secant method can also be derived from geometry. In Figure 2, the similar triangles give

AB/AE = DC/DE

which can be written as

f(xi) / (xi − xi+1) = f(xi−1) / (xi−1 − xi+1)

On rearranging, the secant method is given as

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))

Figure 2 Geometrical representation of the secant method.
78
Secant Method
• This also follows from the definition of the derivative:

f′(x) = lim_{h→0} [f(x + h) − f(x)] / h

f′(xk) ≈ (f(xk) − f(xk−1)) / (xk − xk−1)

• Therefore, Newton's method gives:

xk+1 = xk − [ (xk − xk−1) / (f(xk) − f(xk−1)) ] f(xk)

• Which is the secant method.

May 6, 2021 79
Step 1

Calculate the next estimate of the root from two initial guesses:

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))

Find the absolute relative approximate error:

|εa| = | (xi+1 − xi) / xi+1 | × 100
80
Step 2

Find if the absolute relative approximate error is greater than


the prespecified relative error tolerance.

If so, go back to step 1, else stop the algorithm.

Also check if the number of iterations has exceeded the


maximum number of iterations.

81
Example 1

You are working for ‘DOWN THE TOILET COMPANY’ that makes
floats for ABC commodes. The floating ball has a specific
gravity of 0.6 and has a radius of 5.5 cm. You are asked to find
the depth to which the ball is submerged when floating in
water.

Figure 3 Floating Ball Problem.


82
Example 1 Cont.
The equation that gives the depth x to which the ball is submerged under water is given by

f(x) = x³ − 0.165x² + 3.993×10⁻⁴

Use the secant method of finding roots of equations to find the depth x to which the ball is submerged under water.
• Conduct three iterations to estimate the root of the above equation.
• Find the absolute relative approximate error and the number of significant digits at least correct at the end of each iteration.
83
Example 1 Cont.

Solution
To aid in understanding how this method works to find the root of an equation, the graph of f(x) is shown to the right, where

f(x) = x³ − 0.165x² + 3.993×10⁻⁴

Figure 4 Graph of the function f(x).


84
Example 1 Cont.

Let us assume the initial guesses of the root of f(x) = 0 as x−1 = 0.02 and x0 = 0.05.

Iteration 1
The estimate of the root is

x1 = x0 − f(x0)(x0 − x−1) / (f(x0) − f(x−1))
   = 0.05 − f(0.05)(0.05 − 0.02) / (f(0.05) − f(0.02))
   = 0.06461
85
Example 1 Cont.

The absolute relative approximate error |εa| at the end of Iteration 1 is

|εa| = | (x1 − x0) / x1 | × 100
     = | (0.06461 − 0.05) / 0.06461 | × 100
     = 22.62%

The number of significant digits at least correct is 0, as you need an absolute relative approximate error of 5% or less for at least one significant digit to be correct in your result.
86
Example 1 Cont.

Figure 5 Graph of results of Iteration 1.


87
Example 1 Cont.

Iteration 2
The estimate of the root is

x2 = x1 − f(x1)(x1 − x0) / (f(x1) − f(x0))
   = 0.06461 − f(0.06461)(0.06461 − 0.05) / (f(0.06461) − f(0.05))
   = 0.06241
88
Example 1 Cont.

The absolute relative approximate error |εa| at the end of Iteration 2 is

|εa| = | (x2 − x1) / x2 | × 100
     = | (0.06241 − 0.06461) / 0.06241 | × 100
     = 3.525%

The number of significant digits at least correct is 1, as you need an absolute relative approximate error of 5% or less.
89
Example 1 Cont.

Figure 6 Graph of results of Iteration 2.


90
Example 1 Cont.

Iteration 3
The estimate of the root is

x3 = x2 − f(x2)(x2 − x1) / (f(x2) − f(x1))
   = 0.06241 − f(0.06241)(0.06241 − 0.06461) / (f(0.06241) − f(0.06461))
   = 0.06238
91
Example 1 Cont.

The absolute relative approximate error |εa| at the end of Iteration 3 is

|εa| = | (x3 − x2) / x3 | × 100
     = | (0.06238 − 0.06241) / 0.06238 | × 100
     = 0.0595%

The number of significant digits at least correct is 2, as you need an absolute relative approximate error of 0.5% or less for two significant digits to be correct.
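The three iterations above can be checked with a short script; a minimal sketch of the secant update (variable names are my own):

```python
def f(x):
    return x**3 - 0.165 * x**2 + 3.993e-4

x_prev, x_curr = 0.02, 0.05      # initial guesses x_{-1} and x_0
for i in range(3):
    x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    ea = abs((x_next - x_curr) / x_next) * 100
    print(f"iteration {i + 1}: x = {x_next:.5f}, |ea| = {ea:.4g}%")
    x_prev, x_curr = x_curr, x_next
# prints x = 0.06461, 0.06241, 0.06238, matching Iterations 1-3 above
```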
92
Iteration #3

Figure 7 Graph of results of Iteration 3.


93
