
Bisection Method of Solving a Nonlinear Equation

After reading this chapter, you should be able to:


1. Follow the algorithm of the bisection method of solving a nonlinear equation,
2. Use the bisection method to solve examples of finding roots of a nonlinear equation, and
3. Enumerate the advantages and disadvantages of the bisection method.
What is the bisection method and what is it based on?
One of the first numerical methods developed to find the root of a nonlinear equation f(x) = 0 was the bisection method (also called the binary-search method). The method is based on the following theorem.
Theorem
An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between x_ℓ and x_u if f(x_ℓ) f(x_u) < 0 (see Figure 1).
Note that if f(x_ℓ) f(x_u) > 0, there may or may not be a root between x_ℓ and x_u (Figures 2 and 3). If f(x_ℓ) f(x_u) < 0, there may be more than one root between x_ℓ and x_u (Figure 4), so the theorem guarantees only that at least one root lies between x_ℓ and x_u.

Bisection method
Since the method is based on finding the root between two points, the method falls under the category of bracketing methods.
Since the root is bracketed between two points, x_ℓ and x_u, one can find the midpoint x_m between x_ℓ and x_u. This gives us two new intervals
1. x_ℓ and x_m, and
2. x_m and x_u.
Figure 1 At least one root exists between the two points if the function is real, continuous, and changes sign.
Figure 2 If the function f(x) does not change sign between the two points, roots of the equation f(x) = 0 may still exist between the two points.

Figure 3 If the function f(x) does not change sign between two points, there may not be any roots for the equation f(x) = 0 between the two points.

Figure 4 If the function f(x) changes sign between the two points, more than one root for the equation f(x) = 0 may exist between the two points.

Is the root now between x_ℓ and x_m or between x_m and x_u? Well, one can find the sign of f(x_ℓ) f(x_m), and if f(x_ℓ) f(x_m) < 0 then the new bracket is between x_ℓ and x_m; otherwise, it is between x_m and x_u. So you can see that you are literally halving the interval. As one repeats this process, the width of the interval [x_ℓ, x_u] becomes smaller and smaller, and you can zero in on the root of the equation f(x) = 0. The algorithm for the bisection method is given as follows.

Algorithm for the Bisection Method

The steps to apply the bisection method to find the root of the equation f(x) = 0 are:
1. Choose x_ℓ and x_u as two guesses for the root such that f(x_ℓ) f(x_u) < 0, or in other words, f(x) changes sign between x_ℓ and x_u.
2. Estimate the root, x_m, of the equation f(x) = 0 as the midpoint between x_ℓ and x_u:
   x_m = (x_ℓ + x_u) / 2
3. Now check the following:
   a) If f(x_ℓ) f(x_m) < 0, then the root lies between x_ℓ and x_m; set x_u = x_m.
   b) If f(x_ℓ) f(x_m) > 0, then the root lies between x_m and x_u; set x_ℓ = x_m.
   c) If f(x_ℓ) f(x_m) = 0, then the root is x_m. Stop the algorithm if this is true.
4. Find the new estimate of the root
   x_m = (x_ℓ + x_u) / 2
   and the absolute relative approximate error as
   |ε_a| = |(x_m^new - x_m^old) / x_m^new| × 100
   where
   x_m^new = estimated root from the present iteration
   x_m^old = estimated root from the previous iteration
5. Compare the absolute relative approximate error |ε_a| with the pre-specified relative error tolerance ε_s. If |ε_a| > ε_s, go to Step 3; else stop the algorithm. Note that one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
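The algorithm above translates almost line for line into code. The following is a minimal Python sketch of the bisection method; the function name bisection and its parameter names are illustrative choices, not part of the original chapter.

```python
def bisection(f, xl, xu, es=0.5, max_iter=50):
    """Bisection method for f(x) = 0 on a bracket [xl, xu].

    es is the pre-specified relative error tolerance in percent (Step 5).
    Returns the root estimate and the number of iterations used.
    """
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs (Step 1)")
    xm_old = None
    for i in range(1, max_iter + 1):
        xm = (xl + xu) / 2.0                      # Step 2/4: midpoint estimate
        if xm_old is not None:
            ea = abs((xm - xm_old) / xm) * 100.0  # absolute relative approx. error
            if ea <= es:
                return xm, i
        test = f(xl) * f(xm)
        if test < 0:        # Step 3a: root lies between xl and xm
            xu = xm
        elif test > 0:      # Step 3b: root lies between xm and xu
            xl = xm
        else:               # Step 3c: xm is exactly a root
            return xm, i
        xm_old = xm
    return xm, max_iter     # iteration cap reached (Step 5's safeguard)
```

For the floating-ball example that follows, bisection(lambda x: x**3 - 0.165*x**2 + 3.993e-4, 0, 0.11) walks through the same bracket sequence as the hand calculation.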

Example 1
You are working for ‘DOWN THE TOILET COMPANY’ that makes floats for ABC
commodes. The floating ball has a specific gravity of 0.6 and has a radius of 5.5 cm. You
are asked to find the depth to which the ball is submerged when floating in water.
The equation that gives the depth x to which the ball is submerged under water is given by
x^3 - 0.165x^2 + 3.993 × 10^-4 = 0
Use the bisection method of finding roots of equations to find the depth x to which the ball
is submerged under water. Conduct three iterations to estimate the root of the above
equation. Find the absolute relative approximate error at the end of each iteration, and the
number of significant digits at least correct at the end of each iteration.
Solution
From the physics of the problem, the ball would be submerged between x = 0 and x = 2R, where
R = radius of the ball,
that is
0 ≤ x ≤ 2R
0 ≤ x ≤ 2(0.055)
0 ≤ x ≤ 0.11

Figure 5 Floating ball problem.



Let us assume
x_ℓ = 0, x_u = 0.11
Check if the function changes sign between x_ℓ and x_u.
f(x_ℓ) = f(0) = (0)^3 - 0.165(0)^2 + 3.993 × 10^-4 = 3.993 × 10^-4
f(x_u) = f(0.11) = (0.11)^3 - 0.165(0.11)^2 + 3.993 × 10^-4 = -2.662 × 10^-4
Hence
f(x_ℓ) f(x_u) = f(0) f(0.11) = (3.993 × 10^-4)(-2.662 × 10^-4) < 0
So there is at least one root between x_ℓ and x_u, that is, between 0 and 0.11.
Iteration 1
The estimate of the root is
x_m = (x_ℓ + x_u) / 2 = (0 + 0.11) / 2 = 0.055
f(x_m) = f(0.055) = (0.055)^3 - 0.165(0.055)^2 + 3.993 × 10^-4 = 6.655 × 10^-5
f(x_ℓ) f(x_m) = f(0) f(0.055) = (3.993 × 10^-4)(6.655 × 10^-5) > 0
Hence the root is bracketed between x_m and x_u, that is, between 0.055 and 0.11. So the lower and upper limits of the new bracket are
x_ℓ = 0.055, x_u = 0.11
At this point, the absolute relative approximate error |ε_a| cannot be calculated, as we do not have a previous approximation.
Iteration 2
The estimate of the root is
x_m = (x_ℓ + x_u) / 2 = (0.055 + 0.11) / 2 = 0.0825
f(x_m) = f(0.0825) = (0.0825)^3 - 0.165(0.0825)^2 + 3.993 × 10^-4 = -1.622 × 10^-4
f(x_ℓ) f(x_m) = f(0.055) f(0.0825) = (6.655 × 10^-5)(-1.622 × 10^-4) < 0
Hence, the root is bracketed between x_ℓ and x_m, that is, between 0.055 and 0.0825. So the lower and upper limits of the new bracket are
x_ℓ = 0.055, x_u = 0.0825
The absolute relative approximate error |ε_a| at the end of Iteration 2 is
|ε_a| = |(x_m^new - x_m^old) / x_m^new| × 100 = |(0.0825 - 0.055) / 0.0825| × 100 = 33.33%
None of the significant digits are at least correct in the estimated root x_m = 0.0825, because the absolute relative approximate error is greater than 5%.
Iteration 3
The estimate of the root is
x_m = (x_ℓ + x_u) / 2 = (0.055 + 0.0825) / 2 = 0.06875
f(x_m) = f(0.06875) = (0.06875)^3 - 0.165(0.06875)^2 + 3.993 × 10^-4 = -5.563 × 10^-5
f(x_ℓ) f(x_m) = f(0.055) f(0.06875) = (6.655 × 10^-5)(-5.563 × 10^-5) < 0
Hence, the root is bracketed between x_ℓ and x_m, that is, between 0.055 and 0.06875. So the lower and upper limits of the new bracket are
x_ℓ = 0.055, x_u = 0.06875
The absolute relative approximate error |ε_a| at the end of Iteration 3 is
|ε_a| = |(0.06875 - 0.0825) / 0.06875| × 100 = 20%
Still none of the significant digits are at least correct in the estimated root of the equation, as the absolute relative approximate error is greater than 5%.
Seven more iterations were conducted; these iterations are shown in Table 1.

Table 1 Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration   x_ℓ       x_u       x_m       |ε_a| %    f(x_m)
1           0.00000   0.11      0.055     ----       6.655 × 10^-5
2           0.055     0.11      0.0825    33.33      -1.622 × 10^-4
3           0.055     0.0825    0.06875   20.00      -5.563 × 10^-5
4           0.055     0.06875   0.06188   11.11      4.484 × 10^-6
5           0.06188   0.06875   0.06531   5.263      -2.593 × 10^-5
6           0.06188   0.06531   0.06359   2.702      -1.0804 × 10^-5
7           0.06188   0.06359   0.06273   1.370      -3.176 × 10^-6
8           0.06188   0.06273   0.0623    0.6897     6.497 × 10^-7
9           0.0623    0.06273   0.06252   0.3436     -1.265 × 10^-6
10          0.0623    0.06252   0.06241   0.1721     -3.0768 × 10^-7

At the end of the 10th iteration,
|ε_a| = 0.1721%
Hence the number of significant digits at least correct is given by the largest value of m for which
|ε_a| ≤ 0.5 × 10^(2-m)
0.1721 ≤ 0.5 × 10^(2-m)
0.3442 ≤ 10^(2-m)
log(0.3442) ≤ 2 - m
m ≤ 2 - log(0.3442) = 2.463
So
m = 2
The number of significant digits at least correct in the estimated root of 0.06241 at the end of the 10th iteration is 2.
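The inequality above is easy to mechanize. Below is a small Python helper, with a name of my own choosing, that returns the number of significant digits at least correct for a given |ε_a| in percent.

```python
import math

def sig_digits_at_least_correct(ea_percent):
    """Largest integer m with ea_percent <= 0.5 * 10**(2 - m)."""
    return math.floor(2 - math.log10(ea_percent / 0.5))

print(sig_digits_at_least_correct(0.1721))  # -> 2, as computed above
```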

Advantages of bisection method


a) The bisection method is always convergent. Since the method brackets the root,
the method is guaranteed to converge.
b) As iterations are conducted, the interval gets halved, so one can guarantee a bound on the error in the solution of the equation.

Drawbacks of bisection method


a) The convergence of the bisection method is slow as it is simply based on halving
the interval.
b) Even if one of the initial guesses is close to the root, it still takes a large number of iterations to reach the root, because the interval is halved regardless of where the root lies within it.
c) If a function f(x) is such that it just touches the x-axis (Figure 6), such as
f(x) = x^2 = 0
it will be unable to find a lower guess, x_ℓ, and an upper guess, x_u, such that
f(x_ℓ) f(x_u) < 0
d) For functions f(x) where there is a singularity¹ and it reverses sign at the singularity, the bisection method may converge on the singularity (Figure 7). An example is
f(x) = 1/x
where x_ℓ = -2 and x_u = 3 are valid initial guesses which satisfy
f(x_ℓ) f(x_u) < 0
However, the function is not continuous, and the theorem that a root exists is not applicable.
Figure 6 The equation f(x) = x^2 = 0 has a single root at x = 0 that cannot be bracketed.

¹ A singularity in a function is defined as a point where the function becomes infinite. For example, for a function such as 1/x, the point of singularity is x = 0, as the function becomes infinite there.

Figure 7 The equation f(x) = 1/x = 0 has no root but changes sign.

Newton-Raphson Method of Solving a Nonlinear Equation


After reading this chapter, you should be able to:

1. derive the Newton-Raphson method formula,


2. develop the algorithm of the Newton-Raphson method,
3. use the Newton-Raphson method to solve a nonlinear equation, and
4. discuss the drawbacks of the Newton-Raphson method.

Introduction
Methods such as the bisection method and the false position method of finding roots of a
nonlinear equation f(x) = 0 require bracketing of the root by two guesses. Such methods
are called bracketing methods. These methods are always convergent since they are based on
reducing the interval between the two guesses so as to zero in on the root of the equation.
In the Newton-Raphson method, the root is not bracketed. In fact, only one initial
guess of the root is needed to get the iterative process started to find the root of an equation.
The method hence falls in the category of open methods. Convergence in open methods is
not guaranteed but if the method does converge, it does so much faster than the bracketing
methods.

Derivation
The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at x_i, then if one draws the tangent to the curve at f(x_i), the point x_{i+1} where the tangent crosses the x-axis is an improved estimate of the root (Figure 1).
Using the definition of the slope of a function at x = x_i,
f′(x_i) = tan θ = (f(x_i) - 0) / (x_i - x_{i+1}),
which gives
x_{i+1} = x_i - f(x_i) / f′(x_i)    (1)
Equation (1) is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0. So starting with an initial guess, x_i, one can find the next guess, x_{i+1}, by using Equation (1). One can repeat this process until one finds the root within a desirable tolerance.

Algorithm
The steps of the Newton-Raphson method to find the root of an equation f(x) = 0 are:
1. Evaluate f′(x) symbolically.
2. Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as
   x_{i+1} = x_i - f(x_i) / f′(x_i)
3. Find the absolute relative approximate error |ε_a| as
   |ε_a| = |(x_{i+1} - x_i) / x_{i+1}| × 100
4. Compare the absolute relative approximate error with the pre-specified relative error tolerance, ε_s. If |ε_a| > ε_s, then go to Step 2; else stop the algorithm. Also, check if the number of iterations has exceeded the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user.
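As with the bisection method, these steps map directly onto a short program. The following Python sketch is illustrative only; in particular, passing the derivative in as a separate argument fprime is an assumption of this example, not something prescribed by the chapter.

```python
def newton_raphson(f, fprime, x0, es=0.5, max_iter=50):
    """Newton-Raphson iteration for f(x) = 0 starting from x0.

    es is the pre-specified relative error tolerance in percent.
    Returns the root estimate and the number of iterations used.
    """
    xi = x0
    for i in range(1, max_iter + 1):
        xi_new = xi - f(xi) / fprime(xi)          # Equation (1)
        ea = abs((xi_new - xi) / xi_new) * 100.0  # absolute relative approx. error
        xi = xi_new
        if ea <= es:
            return xi, i
    raise RuntimeError("maximum number of iterations exceeded")
```

For the floating-ball example below, newton_raphson(lambda x: x**3 - 0.165*x**2 + 3.993e-4, lambda x: 3*x**2 - 0.33*x, 0.05) follows the same iterates as the hand calculation.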

Figure 1 Geometrical illustration of the Newton-Raphson method.

Example 1
You are working for ‘DOWN THE TOILET COMPANY’ that makes floats for ABC
commodes. The floating ball has a specific gravity of 0.6 and has a radius of 5.5 cm. You
are asked to find the depth to which the ball is submerged when floating in water.

Figure 2 Floating ball problem.

The equation that gives the depth x in meters to which the ball is submerged under water is given by
x^3 - 0.165x^2 + 3.993 × 10^-4 = 0
Use the Newton-Raphson method of finding roots of equations to find
a) the depth x to which the ball is submerged under water. Conduct three iterations
to estimate the root of the above equation.
b) the absolute relative approximate error at the end of each iteration, and
c) the number of significant digits at least correct at the end of each iteration.
Solution
f  x   x 3  0.165 x 2  3.993  10 4
f  x   3 x 2  0.33 x
Let us assume the initial guess of the root of f x   0 is x0  0.05 m. This is a reasonable
guess (discuss why x  0 and x  0.11 m are not good choices) as the extreme values of the
depth x would be 0 and the diameter (0.11 m) of the ball.
Iteration 1
The estimate of the root is
x_1 = x_0 - f(x_0) / f′(x_0)
    = 0.05 - [(0.05)^3 - 0.165(0.05)^2 + 3.993 × 10^-4] / [3(0.05)^2 - 0.33(0.05)]
    = 0.05 - (1.118 × 10^-4) / (-9 × 10^-3)
    = 0.05 - (-0.01242)
    = 0.06242
The absolute relative approximate error |ε_a| at the end of Iteration 1 is
|ε_a| = |(x_1 - x_0) / x_1| × 100 = |(0.06242 - 0.05) / 0.06242| × 100 = 19.90%

The number of significant digits at least correct is 0, as you need an absolute relative approximate error of 5% or less for at least one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
x_2 = x_1 - f(x_1) / f′(x_1)
    = 0.06242 - [(0.06242)^3 - 0.165(0.06242)^2 + 3.993 × 10^-4] / [3(0.06242)^2 - 0.33(0.06242)]
    = 0.06242 - (-3.97781 × 10^-7) / (-8.90973 × 10^-3)
    = 0.06242 - 4.4646 × 10^-5
    = 0.06238
The absolute relative approximate error |ε_a| at the end of Iteration 2 is
|ε_a| = |(x_2 - x_1) / x_2| × 100 = |(0.06238 - 0.06242) / 0.06238| × 100 = 0.0716%
The maximum value of m for which |ε_a| ≤ 0.5 × 10^(2-m) is 2.844. Hence, the number of significant digits at least correct in the answer is 2.
Iteration 3
The estimate of the root is
x_3 = x_2 - f(x_2) / f′(x_2)
    = 0.06238 - [(0.06238)^3 - 0.165(0.06238)^2 + 3.993 × 10^-4] / [3(0.06238)^2 - 0.33(0.06238)]
    = 0.06238 - (4.44 × 10^-11) / (-8.91171 × 10^-3)
    = 0.06238 + 4.9822 × 10^-9
    = 0.06238
The absolute relative approximate error |ε_a| at the end of Iteration 3 is
|ε_a| = |(0.06238 - 0.06238) / 0.06238| × 100 = 0%

The number of significant digits at least correct is 4, as only 4 significant digits are carried
through in all the calculations.

Drawbacks of the Newton-Raphson Method


1. Divergence at inflection points
If the selection of the initial guess or an iterated value of the root turns out to be close to the inflection point (see the definition in the appendix of this chapter) of the function f(x) in the equation f(x) = 0, the Newton-Raphson method may start diverging away from the root. It may then start converging back to the root. For example, to find the root of the equation
f(x) = (x - 1)^3 + 0.512 = 0
the Newton-Raphson method reduces to
x_{i+1} = x_i - [(x_i - 1)^3 + 0.512] / [3(x_i - 1)^2]
Starting with an initial guess of x_0 = 5.0, Table 1 shows the iterated values of the root of the equation. As you can observe, the root starts to diverge at Iteration 6 because the previous estimate of 0.92589 is close to the inflection point at x = 1 (where the value of f′(x) is zero). Eventually, after 12 more iterations, the root converges to the exact value of x = 0.2.
Table 1 Divergence near inflection point.

Iteration Number   x_i
0                  5.0000
1                  3.6560
2                  2.7465
3                  2.1084
4                  1.6000
5                  0.92589
6                  -30.119
7                  -19.746
8                  -12.831
9                  -8.2217
10                 -5.1498
11                 -3.1044
12                 -1.7464
13                 -0.85356
14                 -0.28538
15                 0.039784
16                 0.17475
17                 0.19924
18                 0.2
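The iterates in Table 1 can be reproduced by applying the reduced formula directly, as in the short Python sketch below; swapping in a different f(x) and f′(x) reproduces the failure patterns of the following tables in the same way. This is an illustrative sketch, not part of the original chapter.

```python
# Newton-Raphson on f(x) = (x - 1)**3 + 0.512, starting from x0 = 5.0;
# watch the iterates diverge around iteration 6 (near the inflection
# point at x = 1) and then slowly return toward the root x = 0.2.
xi = 5.0
for i in range(19):
    print(i, xi)
    xi = xi - ((xi - 1)**3 + 0.512) / (3 * (xi - 1)**2)
```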
Figure 3 Divergence at inflection point for f(x) = (x - 1)^3 + 0.512 = 0.
2. Division by zero
For the equation
f(x) = x^3 - 0.03x^2 + 2.4 × 10^-6 = 0
the Newton-Raphson method reduces to
x_{i+1} = x_i - [x_i^3 - 0.03x_i^2 + 2.4 × 10^-6] / [3x_i^2 - 0.06x_i]
For x_0 = 0 or x_0 = 0.02, division by zero occurs (Figure 4). For an initial guess close to 0.02 such as x_0 = 0.01999, one may avoid division by zero, but the denominator in the formula is then a small number. For this case, as given in Table 2, even after 9 iterations, the Newton-Raphson method does not converge.

Table 2 Division by near zero in the Newton-Raphson method.

Iteration Number   x_i         f(x_i)             |ε_a| %
0                  0.019990    -1.60000 × 10^-6   ----
1                  -2.6480     -18.778            100.75
2                  -1.7620     -5.5638            50.282
3                  -1.1714     -1.6485            50.422
4                  -0.77765    -0.48842           50.632
5                  -0.51518    -0.14470           50.946
6                  -0.34025    -0.042862          51.413
7                  -0.22369    -0.012692          52.107
8                  -0.14608    -0.0037553         53.127
9                  -0.094490   -0.0011091         54.602

Figure 4 Pitfall of division by zero or a near-zero number in the Newton-Raphson method.
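One can see where the division by zero comes from by factoring the derivative: f′(x) = 3x^2 - 0.06x = 3x(x - 0.02), which vanishes at x = 0 and x = 0.02. A quick check, as a sketch (with floating-point caveats noted in the comments):

```python
fprime = lambda x: 3 * x**2 - 0.06 * x
print(fprime(0.0))      # exactly 0 -> division by zero in the formula
print(fprime(0.02))     # zero up to rounding -> division by (near) zero
print(fprime(0.01999))  # about -6.0e-07, the tiny divisor behind Table 2
```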

3. Oscillations near local maximum and minimum
Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root; the iterates keep hovering around the local extremum instead. Eventually, this may lead to division by a number close to zero, and the method may diverge. For example, the equation
f(x) = x^2 + 2 = 0
has no real roots (Figure 5 and Table 3).
Figure 5 Oscillations around the local minimum for f(x) = x^2 + 2.


Table 3 Oscillations near local maxima and minima in the Newton-Raphson method.

Iteration Number   x_i         f(x_i)    |ε_a| %
0                  -1.0000     3.00      ----
1                  0.5         2.25      300.00
2                  -1.75       5.063     128.571
3                  -0.30357    2.092     476.47
4                  3.1423      11.874    109.66
5                  1.2529      3.570     150.80
6                  -0.17166    2.029     829.88
7                  5.7395      34.942    102.99
8                  2.6955      9.266     112.93
9                  0.97678     2.954     175.96

4. Root jumping
In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root. For example, in solving the equation sin x = 0, if you choose x_0 = 2.4π ≈ 7.539822 as an initial guess, it converges to the root x = 0, as shown in Table 4 and Figure 6. However, one may have chosen this initial guess expecting convergence to x = 2π ≈ 6.2831853.

Table 4 Root jumping in the Newton-Raphson method.

Iteration Number   x_i                f(x_i)             |ε_a| %
0                  7.539822           0.951              ----
1                  4.462              -0.969             68.973
2                  0.5499             0.5226             711.44
3                  -0.06307           -0.06303           971.91
4                  8.376 × 10^-5      8.375 × 10^-5      7.54 × 10^4
5                  -1.95861 × 10^-13  -1.95861 × 10^-13  4.28 × 10^10

Appendix A. What is an inflection point?

For a function f(x), the point where the concavity changes from up-to-down or down-to-up is called its inflection point. For example, for the function f(x) = (x - 1)^3, the concavity changes at x = 1 (see Figure 3), and hence (1, 0) is an inflection point. An inflection point MAY exist at a point where f″(x) = 0 or where f″(x) does not exist.

Figure 6 Root jumping from intended location of root for f(x) = sin x = 0.

The reason we say that it MAY exist is that f″(x) = 0 only makes the point a possible inflection point. For example, for f(x) = x^4 - 16, f″(0) = 0, but the concavity does not change at x = 0. Hence the point (0, -16) is not an inflection point of f(x) = x^4 - 16.
For f(x) = (x - 1)^3, f″(x) changes sign at x = 1 (f″(x) < 0 for x < 1, and f″(x) > 0 for x > 1), and this brings up the Inflection Point Theorem for a function f(x), which states the following:
"If f′(c) exists and f″(x) changes sign at x = c, then the point (c, f(c)) is an inflection point of the graph of f."

Appendix B. Derivation of Newton-Raphson method from Taylor series

The Newton-Raphson method can also be derived from the Taylor series. For a general function f(x), the Taylor series is
f(x_{i+1}) = f(x_i) + f′(x_i)(x_{i+1} - x_i) + (f″(x_i)/2!)(x_{i+1} - x_i)^2 + ...
As an approximation, taking only the first two terms of the right-hand side,
f(x_{i+1}) ≈ f(x_i) + f′(x_i)(x_{i+1} - x_i)
We are seeking a point where f(x) = 0; that is, if we assume
f(x_{i+1}) = 0,
0 = f(x_i) + f′(x_i)(x_{i+1} - x_i)
which gives
x_{i+1} = x_i - f(x_i) / f′(x_i)
This is the same Newton-Raphson formula as derived previously using the geometric method.
Secant Method of Solving Nonlinear Equations
After reading this chapter, you should be able to:

1. derive the secant method to solve for the roots of a nonlinear equation,
2. use the secant method to numerically solve a nonlinear equation.

What is the secant method and why would I want to use it instead of the Newton-Raphson method?
The Newton-Raphson method of solving a nonlinear equation f(x) = 0 is given by the iterative formula
x_{i+1} = x_i - f(x_i) / f′(x_i)    (1)
One of the drawbacks of the Newton-Raphson method is that you have to evaluate the derivative of the function. With the availability of symbolic manipulators such as Maple, MathCAD, MATHEMATICA, and MATLAB, this process has become more convenient. However, it can still be a laborious process, and even intractable if the function is derived as part of a numerical scheme. To overcome these drawbacks, the derivative of the function, f′(x), is approximated as
f′(x_i) ≈ (f(x_i) - f(x_{i-1})) / (x_i - x_{i-1})    (2)
Substituting Equation (2) into Equation (1) gives
x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))    (3)

The above equation is called the secant method. This method now requires two initial guesses, but unlike the bisection method, the two initial guesses do not need to bracket the root of the equation. The secant method is an open method and may or may not converge. When the secant method converges, however, it typically does so faster than the bisection method. On the other hand, since the derivative is approximated as given by Equation (2), it typically converges more slowly than the Newton-Raphson method.
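Since Equation (3) needs only function values, an implementation just carries the two most recent iterates. Below is a minimal Python sketch with illustrative names, in the same style as the earlier sketches.

```python
def secant(f, x_prev, x_curr, es=0.5, max_iter=50):
    """Secant method for f(x) = 0 from two initial guesses.

    The guesses need not bracket the root. es is the relative error
    tolerance in percent. Returns the root estimate and iteration count.
    """
    for i in range(1, max_iter + 1):
        # Equation (3): replace f'(x_i) by the secant-slope approximation
        x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        ea = abs((x_next - x_curr) / x_next) * 100.0
        x_prev, x_curr = x_curr, x_next
        if ea <= es:
            return x_curr, i
    raise RuntimeError("maximum number of iterations exceeded")
```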

The secant method can also be derived from geometry, as shown in Figure 1. Taking two initial guesses, x_{i-1} and x_i, one draws a straight line between f(x_i) and f(x_{i-1}) passing through the x-axis at x_{i+1}. ABE and DCE are similar triangles.
Hence
AB/AE = DC/DE
f(x_i) / (x_i - x_{i+1}) = f(x_{i-1}) / (x_{i-1} - x_{i+1})
On rearranging, the secant method is given as
x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))

Figure 1 Geometrical representation of the secant method.

Example 1
You are working for ‘DOWN THE TOILET COMPANY’ that makes floats (Figure 2) for
ABC commodes. The floating ball has a specific gravity of 0.6 and a radius of 5.5 cm. You
are asked to find the depth to which the ball is submerged when floating in water.
The equation that gives the depth x to which the ball is submerged under water is given by
x^3 - 0.165x^2 + 3.993 × 10^-4 = 0
Use the secant method of finding roots of equations to find the depth x to which the ball is
submerged under water. Conduct three iterations to estimate the root of the above equation.
Find the absolute relative approximate error and the number of significant digits at least
correct at the end of each iteration.

Solution
f  x   x 3  0.165 x 2  3.993  10 4
Let us assume the initial guesses of the root of f x   0 as x 1  0.02 and x0  0.05 .

Figure 2 Floating ball problem.


Iteration 1
The estimate of the root is
x_1 = x_0 - f(x_0)(x_0 - x_{-1}) / (f(x_0) - f(x_{-1}))
    = x_0 - [x_0^3 - 0.165x_0^2 + 3.993 × 10^-4](x_0 - x_{-1}) / ([x_0^3 - 0.165x_0^2 + 3.993 × 10^-4] - [x_{-1}^3 - 0.165x_{-1}^2 + 3.993 × 10^-4])
    = 0.05 - [(0.05)^3 - 0.165(0.05)^2 + 3.993 × 10^-4](0.05 - 0.02) / ([(0.05)^3 - 0.165(0.05)^2 + 3.993 × 10^-4] - [(0.02)^3 - 0.165(0.02)^2 + 3.993 × 10^-4])
    = 0.06461

The absolute relative approximate error |ε_a| at the end of Iteration 1 is
|ε_a| = |(x_1 - x_0) / x_1| × 100 = |(0.06461 - 0.05) / 0.06461| × 100 = 22.62%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of 5% or less for one significant digit to be correct in your result.

Iteration 2
The estimate of the root is
x_2 = x_1 - f(x_1)(x_1 - x_0) / (f(x_1) - f(x_0))
    = x_1 - [x_1^3 - 0.165x_1^2 + 3.993 × 10^-4](x_1 - x_0) / ([x_1^3 - 0.165x_1^2 + 3.993 × 10^-4] - [x_0^3 - 0.165x_0^2 + 3.993 × 10^-4])
    = 0.06461 - [(0.06461)^3 - 0.165(0.06461)^2 + 3.993 × 10^-4](0.06461 - 0.05) / ([(0.06461)^3 - 0.165(0.06461)^2 + 3.993 × 10^-4] - [(0.05)^3 - 0.165(0.05)^2 + 3.993 × 10^-4])
    = 0.06241
The absolute relative approximate error |ε_a| at the end of Iteration 2 is
|ε_a| = |(x_2 - x_1) / x_2| × 100 = |(0.06241 - 0.06461) / 0.06241| × 100 = 3.525%
The number of significant digits at least correct is 1, as you need an absolute relative approximate error of 5% or less.

Iteration 3
The estimate of the root is
x_3 = x_2 - f(x_2)(x_2 - x_1) / (f(x_2) - f(x_1))
    = x_2 - [x_2^3 - 0.165x_2^2 + 3.993 × 10^-4](x_2 - x_1) / ([x_2^3 - 0.165x_2^2 + 3.993 × 10^-4] - [x_1^3 - 0.165x_1^2 + 3.993 × 10^-4])
    = 0.06241 - [(0.06241)^3 - 0.165(0.06241)^2 + 3.993 × 10^-4](0.06241 - 0.06461) / ([(0.06241)^3 - 0.165(0.06241)^2 + 3.993 × 10^-4] - [(0.06461)^3 - 0.165(0.06461)^2 + 3.993 × 10^-4])
    = 0.06238
The absolute relative approximate error |ε_a| at the end of Iteration 3 is
|ε_a| = |(x_3 - x_2) / x_3| × 100 = |(0.06238 - 0.06241) / 0.06238| × 100 = 0.0595%
The number of significant digits at least correct is 2, as you need an absolute relative approximate error of 0.5% or less. Table 1 shows the secant method calculations for the results from the above problem.

Table 1 Secant method results as a function of iterations.

Iteration Number, i   x_{i-1}   x_i       x_{i+1}   |ε_a| %        f(x_{i+1})
1                     0.02      0.05      0.06461   22.62          -1.9812 × 10^-5
2                     0.05      0.06461   0.06241   3.525          -3.2852 × 10^-7
3                     0.06461   0.06241   0.06238   0.0595         -2.0252 × 10^-9
4                     0.06241   0.06238   0.06238   3.64 × 10^-4   -1.8576 × 10^-13
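To close, the sketches given in these chapters can be run side by side on the floating-ball equation. This usage example assumes the illustrative bisection, newton_raphson, and secant functions defined earlier in this document.

```python
f = lambda x: x**3 - 0.165 * x**2 + 3.993e-4
fprime = lambda x: 3 * x**2 - 0.33 * x

print("bisection:     ", bisection(f, 0, 0.11))           # most iterations
print("newton-raphson:", newton_raphson(f, fprime, 0.05)) # fewest iterations
print("secant:        ", secant(f, 0.02, 0.05))           # in between
# All three converge to x = 0.0624 m (to three significant digits),
# consistent with the iteration counts discussed in the chapters.
```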
