Lecture 11


Bracketing

• A bracket is usually sufficient, although a
  singularity is still possible: e.g. f(x) = (x − c)⁻¹

[Figure: plot of f(x) = (x − c)⁻¹ with f(a) < 0 to the left and f(b) > 0 to the right of the singularity at x = c. Fortunately, no danger of mistaking this for a root!]
Bracketing
• The absence of a bracket can imply any
  number of roots

[Figure: plot of f(x) with f(a) > 0 and f(b) > 0 — no bracket]
Bisection Method

Basis of Bisection Method
Theorem An equation f(x) = 0, where f(x) is a real continuous function,
has at least one root between xl and xu if f(xl) f(xu) < 0.

Figure 1 At least one root exists between the two points if the function is
real, continuous, and changes sign.
Basis of Bisection Method

Figure 2 If function f(x) does not change sign between two
points, roots of the equation f(x) = 0 may still exist between the two
points.
Basis of Bisection Method

Figure 3 If the function f(x) does not change sign between two
points, there may not be any roots for the equation f(x) = 0 between
the two points.
Basis of Bisection Method

Figure 4 If the function f(x) changes sign between two points,
more than one root for the equation f(x) = 0 may exist between the two
points.
Algorithm for Bisection Method

Step 1
Choose xl and xu as two guesses for the root such that
f(xl) f(xu) < 0, or in other words, f(x) changes sign
between xl and xu. This was demonstrated in Figure 1.
Step 2
Estimate the root, xm, of the equation f(x) = 0 as the
midpoint between xl and xu:

    xm = (xl + xu) / 2

Figure 5 Estimate of xm
Step 3
Now check the following:

a) If f(xl) f(xm) < 0, then the root lies between xl and
   xm; then xl = xl; xu = xm.

b) If f(xl) f(xm) > 0, then the root lies between xm and
   xu; then xl = xm; xu = xu.

c) If f(xl) f(xm) = 0, then the root is xm. Stop the
   algorithm if this is true.
Step 4
Find the new estimate of the root

    xm = (xl + xu) / 2

Find the absolute relative approximate error

    |∈a| = |(xm_new − xm_old) / xm_new| × 100

where
    xm_old = previous estimate of root
    xm_new = current estimate of root
Step 5
Compare the absolute relative approximate error ∈a with
the pre-specified error tolerance ∈s.

    Is ∈a > ∈s?
        Yes → Go to Step 2 using the new upper and lower guesses.
        No  → Stop the algorithm.

Note: one should also check whether the number of
iterations is more than the maximum number of iterations
allowed. If so, one needs to terminate the algorithm and
notify the user about it.
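The five steps above can be collected into a short Python routine. This is a minimal sketch: the function name `bisect` and the default tolerance (0.05%) and iteration limit are illustrative choices, not part of the notes.

```python
def bisect(f, xl, xu, es=0.05, max_iter=50):
    """Return (root estimate, absolute relative approx. error in %, iterations).

    es is the pre-specified tolerance in percent; es and max_iter are
    illustrative defaults, not values from the lecture notes.
    """
    if f(xl) * f(xu) >= 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs (Step 1)")
    xm_old, ea = None, float("inf")
    for i in range(1, max_iter + 1):
        xm = (xl + xu) / 2.0            # Step 2 / Step 4: midpoint estimate
        if xm_old is not None:          # Step 4: absolute relative approx. error
            ea = abs((xm - xm_old) / xm) * 100.0
        if f(xl) * f(xm) < 0:           # Step 3a: root lies in [xl, xm]
            xu = xm
        elif f(xl) * f(xm) > 0:         # Step 3b: root lies in [xm, xu]
            xl = xm
        else:                           # Step 3c: xm is an exact root
            return xm, 0.0, i
        if ea <= es:                    # Step 5: tolerance met -> stop
            return xm, ea, i
        xm_old = xm
    return xm, ea, max_iter
```

The bracket check at the top enforces Step 1 before any halving begins.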
Example 1
You are working for 'DOWN THE TOILET COMPANY' that
makes floats for ABC commodes. The floating ball has a
specific gravity of 0.6 and has a radius of 5.5 cm. You are
asked to find the depth to which the ball is submerged when
floating in water.

Figure 6 Diagram of the floating ball


Example 1 Cont.
The equation that gives the depth x to which the ball is
submerged under water is given by

    x³ − 0.165x² + 3.993×10⁻⁴ = 0

a) Use the bisection method of finding roots of equations to


find the depth x to which the ball is submerged under
water. Conduct three iterations to estimate the root of
the above equation.
b) Find the absolute relative approximate error at the end
of each iteration, and the number of significant digits at
least correct at the end of each iteration.
Example 1 Cont.
From the physics of the problem, the ball would be
submerged between x = 0 and x = 2R,
where R = radius of the ball,
that is
    0 ≤ x ≤ 2R
    0 ≤ x ≤ 2(0.055)
    0 ≤ x ≤ 0.11

Figure 6 Diagram of the floating ball
Example 1 Cont.
Solution
To aid in the understanding of how this method works to
find the root of an equation, the graph of f(x) is shown to
the right, where

    f(x) = x³ − 0.165x² + 3.993×10⁻⁴

Figure 7 Graph of the function f(x)
Example 1 Cont.
Let us assume
    xl = 0.00
    xu = 0.11
Check if the function changes sign between xl and xu.

    f(xl) = f(0) = (0)³ − 0.165(0)² + 3.993×10⁻⁴ = 3.993×10⁻⁴
    f(xu) = f(0.11) = (0.11)³ − 0.165(0.11)² + 3.993×10⁻⁴ = −2.662×10⁻⁴

Hence

    f(xl) f(xu) = f(0) f(0.11) = (3.993×10⁻⁴)(−2.662×10⁻⁴) < 0

So there is at least one root between xl and xu, that is between 0 and 0.11.
Example 1 Cont.

Figure 8 Graph demonstrating sign change between initial limits


Example 1 Cont.
Iteration 1
The estimate of the root is

    xm = (xl + xu) / 2 = (0 + 0.11) / 2 = 0.055

    f(xm) = f(0.055) = (0.055)³ − 0.165(0.055)² + 3.993×10⁻⁴ = 6.655×10⁻⁵

    f(xl) f(xm) = f(0) f(0.055) = (3.993×10⁻⁴)(6.655×10⁻⁵) > 0

Hence the root is bracketed between xm and xu, that is, between 0.055
and 0.11. So, the lower and upper limits of the new bracket are
    xl = 0.055, xu = 0.11
At this point, the absolute relative approximate error ∈a cannot be
calculated as we do not have a previous approximation.
Example 1 Cont.

Figure 9 Estimate of the root for Iteration 1


Example 1 Cont.
Iteration 2 x! + xu 0.055 + 0.11
The estimate of the root is xm = = = 0.0825
2 2
f ( xm ) = f (0.0825) = (0.0825) - 0.165(0.0825) + 3.993 ´ 10 -4 = -1.622 ´ 10 -4
3 2

f ( xl ) f ( xm ) = f (0.055) f (0.0825) = (- 1.622 ´ 10 -4 )(6.655 ´ 10 -5 ) < 0

Hence the root is bracketed between x!"and xm, that is, between 0.055
and 0.0825. So, the lower and upper limits of the new bracket are
xl = 0.055, xu = 0.0825
Example 1 Cont.

Figure 10 Estimate of the root for Iteration 2


Example 1 Cont.
The absolute relative approximate error ∈a at the end of Iteration 2 is

    |∈a| = |(xm_new − xm_old) / xm_new| × 100
         = |(0.0825 − 0.055) / 0.0825| × 100
         = 33.33%

None of the significant digits are at least correct in the estimated root of
xm = 0.0825 because the absolute relative approximate error is greater
than 5%.
Example 1 Cont.
Iteration 3
The estimate of the root is

    xm = (xl + xu) / 2 = (0.055 + 0.0825) / 2 = 0.06875

    f(xm) = f(0.06875) = (0.06875)³ − 0.165(0.06875)² + 3.993×10⁻⁴ = −5.563×10⁻⁵

    f(xl) f(xm) = f(0.055) f(0.06875) = (6.655×10⁻⁵)(−5.563×10⁻⁵) < 0

Hence the root is bracketed between xl and xm, that is, between 0.055
and 0.06875. So, the lower and upper limits of the new bracket are
    xl = 0.055, xu = 0.06875
Example 1 Cont.

Figure 11 Estimate of the root for Iteration 3


Example 1 Cont.
The absolute relative approximate error ∈a at the end of Iteration 3 is

    |∈a| = |(xm_new − xm_old) / xm_new| × 100
         = |(0.06875 − 0.0825) / 0.06875| × 100
         = 20%

Still none of the significant digits are at least correct in the estimated
root of the equation as the absolute relative approximate error is
greater than 5%.
Seven more iterations were conducted and these iterations are shown in
Table 1.
Table 1 Root of f(x) = 0 as function of number of iterations for
bisection method.

Iteration    xl         xu         xm         ∈a %       f(xm)
    1      0.00000    0.11       0.055      -------    6.655×10⁻⁵
    2      0.055      0.11       0.0825     33.33     −1.622×10⁻⁴
    3      0.055      0.0825     0.06875    20.00     −5.563×10⁻⁵
    4      0.055      0.06875    0.06188    11.11      4.484×10⁻⁶
    5      0.06188    0.06875    0.06531     5.263    −2.593×10⁻⁵
    6      0.06188    0.06531    0.06359     2.702    −1.0804×10⁻⁵
    7      0.06188    0.06359    0.06273     1.370    −3.176×10⁻⁶
    8      0.06188    0.06273    0.0623      0.6897    6.497×10⁻⁷
    9      0.0623     0.06273    0.06252     0.3436   −1.265×10⁻⁶
   10      0.0623     0.06252    0.06241     0.1721   −3.0768×10⁻⁷
Table 1 Cont.
Hence the number of significant digits at least correct is given by the
largest value of m for which

    |∈a| ≤ 0.5×10^(2−m)
    0.1721 ≤ 0.5×10^(2−m)
    0.3442 ≤ 10^(2−m)
    log(0.3442) ≤ 2 − m
    m ≤ 2 − log(0.3442) = 2.463

So
    m = 2

The number of significant digits at least correct in the estimated root
of 0.06241 at the end of the 10th iteration is 2.
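The rows of Table 1 and the significant-digit bound can be reproduced with a short script. This is an illustrative sketch: the variable names are assumptions, and only the bracket-update logic and the formula m ≤ 2 − log₁₀(∈a / 0.5) come from the notes.

```python
import math

# Bisection on f(x) = x^3 - 0.165 x^2 + 3.993e-4 over [0, 0.11] for the
# 10 iterations tabulated in Table 1.
f = lambda x: x**3 - 0.165 * x**2 + 3.993e-4

xl, xu = 0.0, 0.11
xm_old, ea = None, None
for i in range(1, 11):
    xm = (xl + xu) / 2.0
    if xm_old is not None:
        ea = abs((xm - xm_old) / xm) * 100.0   # |∈a| in percent
    if f(xl) * f(xm) < 0:                      # root in [xl, xm]
        xu = xm
    else:                                      # root in [xm, xu]
        xl = xm
    xm_old = xm

# Number of significant digits at least correct after iteration 10:
# largest m with ea <= 0.5 * 10**(2 - m), i.e. m = floor(2 - log10(ea/0.5)).
m = math.floor(2 - math.log10(ea / 0.5))
```

Running this gives xm ≈ 0.06241 and ∈a ≈ 0.1721% after the tenth iteration, matching the last row of Table 1, with m = 2.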
Advantages
• Always convergent
• The root bracket gets halved with each
  iteration - guaranteed.

Drawbacks
• Slow convergence
• If one of the initial guesses is close to
  the root, the convergence is slower
Drawbacks (continued)
• If a function f(x) is such that it just
  touches the x-axis it will be unable to find
  the lower and upper guesses.

  e.g. f(x) = x²
Drawbacks (continued)
• Function changes sign but root does not
  exist

  e.g. f(x) = 1/x
Newton-Raphson Method

    xi+1 = xi − f(xi) / f′(xi)

Figure 1 Geometrical illustration of the Newton-Raphson method.
Derivation

    tan(α) = AB / AC

    f′(xi) = f(xi) / (xi − xi+1)

    xi+1 = xi − f(xi) / f′(xi)

Figure 2 Derivation of the Newton-Raphson method.

Algorithm for Newton-Raphson Method

Step 1
Evaluate f′(x) symbolically.

Step 2
Use an initial guess of the root, xi, to estimate the new
value of the root, xi+1, as

    xi+1 = xi − f(xi) / f′(xi)
Step 3
Find the absolute relative approximate error ∈a as

    |∈a| = |(xi+1 − xi) / xi+1| × 100
Step 4
Compare the absolute relative approximate error
with the pre-specified relative error tolerance ∈s.

    Is ∈a > ∈s?
        Yes → Go to Step 2 using the new estimate of the root.
        No  → Stop the algorithm.

Also, check if the number of iterations has exceeded
the maximum number of iterations allowed. If so,
one needs to terminate the algorithm and notify the
user.
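Steps 1 to 4 can be sketched in Python. This is a minimal sketch: the function name and the default tolerance (0.05%) and iteration limit are illustrative assumptions.

```python
def newton_raphson(f, fprime, x0, es=0.05, max_iter=50):
    """Return (root estimate, absolute relative approx. error in %, iterations).

    es is the tolerance in percent; es and max_iter are illustrative defaults,
    not values from the lecture notes.
    """
    xi = x0
    ea = float("inf")
    for i in range(1, max_iter + 1):
        xi_new = xi - f(xi) / fprime(xi)            # Step 2: Newton update
        ea = abs((xi_new - xi) / xi_new) * 100.0    # Step 3: error estimate
        xi = xi_new
        if ea <= es:                                # Step 4: tolerance met
            return xi, ea, i
    return xi, ea, max_iter
```

On the floating-ball equation, with f′(x) = 3x² − 0.33x and x0 = 0.05, this converges to about 0.06238 in a handful of iterations, in contrast with the ten iterations bisection needed.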
Example 1 Cont.

Figure 5 Estimate of the root for the first iteration.


Example 1 Cont.
The absolute relative approximate error ∈a at the end of Iteration 1 is

    |∈a| = |(x1 − x0) / x1| × 100
         = |(0.06242 − 0.05) / 0.06242| × 100
         = 19.90%

The number of significant digits at least correct is 0, as you need an
absolute relative approximate error of 5% or less for at least one
significant digit to be correct in your result.
Example 1 Cont.
Iteration 2
The estimate of the root is

    x2 = x1 − f(x1) / f′(x1)
       = 0.06242 − [(0.06242)³ − 0.165(0.06242)² + 3.993×10⁻⁴] / [3(0.06242)² − 0.33(0.06242)]
       = 0.06242 − (−3.97781×10⁻⁷) / (−8.90973×10⁻³)
       = 0.06242 − 4.4646×10⁻⁵
       = 0.06238
Example 1 Cont.

Figure 6 Estimate of the root for the Iteration 2.


Example 1 Cont.
The absolute relative approximate error ∈a at the end of Iteration 2 is

    |∈a| = |(x2 − x1) / x2| × 100
         = |(0.06238 − 0.06242) / 0.06238| × 100
         = 0.0716%

The maximum value of m for which |∈a| ≤ 0.5×10^(2−m) is 2.844.
Hence, the number of significant digits at least correct in the
answer is 2.
Example 1 Cont.
Iteration 3
The estimate of the root is

    x3 = x2 − f(x2) / f′(x2)
       = 0.06238 − [(0.06238)³ − 0.165(0.06238)² + 3.993×10⁻⁴] / [3(0.06238)² − 0.33(0.06238)]
       = 0.06238 − (4.44×10⁻¹¹) / (−8.91171×10⁻³)
       = 0.06238 − (−4.9822×10⁻⁹)
       = 0.06238
Example 1 Cont.

Figure 7 Estimate of the root for the Iteration 3.


Example 1 Cont.
The absolute relative approximate error ∈a at the end of Iteration 3 is

    |∈a| = |(x3 − x2) / x3| × 100
         = |(0.06238 − 0.06238) / 0.06238| × 100
         = 0%

The number of significant digits at least correct is 4, as only 4
significant digits are carried through all the calculations.
Advantages and Drawbacks
of Newton Raphson Method

Advantages
• Converges fast (quadratic convergence), if
  it converges.
• Requires only one guess
Drawbacks
1. Divergence at inflection points
Selection of the initial guess or an iteration value of the root that
is close to the inflection point of the function f(x) may start
diverging away from the root in the Newton-Raphson method.

For example, to find the root of the equation f(x) = (x − 1)³ + 0.512 = 0,
the Newton-Raphson method reduces to

    xi+1 = xi − [(xi − 1)³ + 0.512] / [3(xi − 1)²]

Table 1 shows the iterated values of the root of the equation.
The root starts to diverge at Iteration 6 because the previous estimate
of 0.92589 is close to the inflection point of x = 1.
Eventually after 12 more iterations the root converges to the exact
value of x = 0.2.
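The divergence and eventual recovery can be reproduced with a short sketch; the 25-iteration count is an illustrative choice, long enough for the iterates to settle.

```python
# Newton-Raphson on f(x) = (x - 1)^3 + 0.512 started at x0 = 5.0,
# as in Table 1. history[i] holds the estimate at iteration i.
f = lambda x: (x - 1)**3 + 0.512
fp = lambda x: 3 * (x - 1)**2

x = 5.0
history = [x]
for _ in range(25):
    x = x - f(x) / fp(x)
    history.append(x)
```

The iterates track Table 1: 3.6560 at Iteration 1, then a jump past −30 at Iteration 6 as the estimate 0.92589 nears the inflection point x = 1, before settling on the exact root x = 0.2.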
Drawbacks – Inflection Points
Table 1 Divergence near inflection point.

Iteration Number      xi
        0           5.0000
        1           3.6560
        2           2.7465
        3           2.1084
        4           1.6000
        5           0.92589
        6         −30.119
        7         −19.746
       18           0.2000

Figure 8 Divergence at inflection point for f(x) = (x − 1)³ + 0.512 = 0
Drawbacks – Division by Zero
2. Division by zero
For the equation

    f(x) = x³ − 0.03x² + 2.4×10⁻⁶ = 0

the Newton-Raphson method reduces to

    xi+1 = xi − (xi³ − 0.03xi² + 2.4×10⁻⁶) / (3xi² − 0.06xi)

For x0 = 0 or x0 = 0.02, the denominator will equal zero.

Figure 9 Pitfall of division by zero or near a zero number
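A quick illustrative check confirms that the derivative vanishes at both guesses, so the Newton update cannot be formed there:

```python
# f'(x) = 3x^2 - 0.06x is zero at x = 0 and x = 0.02, the two guesses
# named in the slide; attempting the update at x0 = 0 divides by zero.
f = lambda x: x**3 - 0.03 * x**2 + 2.4e-6
fp = lambda x: 3 * x**2 - 0.06 * x

try:
    x1 = 0.0 - f(0.0) / fp(0.0)   # denominator is exactly zero here
except ZeroDivisionError:
    x1 = None                     # the update could not be formed
```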
Drawbacks – Oscillations near local
maximum and minimum
3. Oscillations near local maximum and minimum

Results obtained from the Newton-Raphson method may
oscillate about the local maximum or minimum without
converging on a root but converging on the local maximum or
minimum.
Eventually, it may lead to division by a number close to zero
and may diverge.

For example, for f(x) = x² + 2 = 0 the equation has no real
roots.
Drawbacks – Oscillations near local
maximum and minimum
Table 3 Oscillations near local maxima
and minima in Newton-Raphson method.

Iteration Number      xi         f(xi)      ∈a %
        0          −1.0000      3.00       ----
        1           0.5         2.25      300.00
        2          −1.75        5.063     128.571
        3          −0.30357     2.092     476.47
        4           3.1423     11.874     109.66
        5           1.2529      3.570     150.80
        6          −0.17166     2.029     829.88
        7           5.7395     34.942     102.99
        8           2.6955      9.266     112.93
        9           0.97678     2.954     175.96

Figure 10 Oscillations around local minima for f(x) = x² + 2.
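The oscillating iterates of Table 3 follow directly from the update x − f(x)/f′(x); this sketch reproduces the first few rows (variable names are illustrative):

```python
# Newton-Raphson on f(x) = x^2 + 2, which has no real roots, with x0 = -1.0.
# The iterates bounce back and forth around the minimum at x = 0.
f = lambda x: x**2 + 2
fp = lambda x: 2 * x

x = -1.0
iterates = []
for _ in range(4):
    x = x - f(x) / fp(x)
    iterates.append(x)
```

The first four iterates are 0.5, −1.75, −0.30357 and 3.1423, matching Table 3: the estimates oscillate about the local minimum rather than converging.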
Drawbacks – Root Jumping
4. Root Jumping
In some cases where the function f(x) is oscillating and has a number
of roots, one may choose an initial guess close to a root. However, the
guesses may jump and converge to some other root.

For example
    f(x) = sin x = 0
Choose
    x0 = 2.4π = 7.539822
It will converge to x = 0
instead of x = 2π = 6.2831853.

Figure 11 Root jumping from intended location of root for f(x) = sin x = 0.
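The jump can be reproduced in a few lines; the 20-iteration count is an illustrative choice, far more than needed for convergence.

```python
import math

# Newton-Raphson on f(x) = sin(x): the update is x_{i+1} = x_i - tan(x_i).
# Started at x0 = 2.4*pi, close to the root 2*pi, the iterates jump
# (to about 4.46, 0.55, -0.063, ...) and converge to x = 0 instead.
x = 2.4 * math.pi
for _ in range(20):
    x = x - math.sin(x) / math.cos(x)
```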
