CH 2
Figure 1 At least one root exists between the two points if the function is real, continuous, and changes sign.

Figure 2 If the function f(x) does not change sign between the two points, roots of the equation f(x) = 0 may still exist between the two points.

Figure 3 If the function f(x) does not change sign between two points, there may not be any roots for the equation f(x) = 0 between the two points.
Bisection method
Since the method is based on finding the root between two points, the method falls under the
category of bracketing methods.
Since the root is bracketed between two points, xℓ and xu, one can find the mid-point, xm, between xℓ and xu. This gives us two new intervals
1. xℓ and xm, and
2. xm and xu.
Is the root now between xℓ and xm or between xm and xu? Well, one can find the sign of f(xℓ)f(xm), and if f(xℓ)f(xm) < 0 then the new bracket is between xℓ and xm; otherwise, it is between xm and xu. So, you can see that you are literally halving the interval. As one repeats this process, the width of the interval [xℓ, xu] becomes smaller and smaller, and you can zero in on the root of the equation f(x) = 0.
The algorithm for the bisection method is given as follows.
Algorithm (procedure) for the bisection method
The steps to apply the bisection method to find the root of the equation f(x) = 0 are
1. Choose xℓ and xu as two guesses for the root such that f(xℓ)f(xu) < 0, or in other words, f(x) changes sign between xℓ and xu.
2. Estimate the root, xm, of the equation f(x) = 0 as the mid-point between xℓ and xu as
   xm = (xℓ + xu)/2
3. Now check the following
   a) If f(xℓ)f(xm) < 0, then the root lies between xℓ and xm; then xℓ = xℓ and xu = xm.
   b) If f(xℓ)f(xm) > 0, then the root lies between xm and xu; then xℓ = xm and xu = xu.
   c) If f(xℓ)f(xm) = 0, then the root is xm. Stop the algorithm if this is true.
4. Find the new estimate of the root
   xm = (xℓ + xu)/2
   Find the absolute relative approximate error as
   |εa| = |(xm_new − xm_old)/xm_new| × 100
   where
   xm_new = estimated root from present iteration
   xm_old = estimated root from previous iteration
5. Compare the absolute relative approximate error |εa| with the pre-specified relative error tolerance εs. If |εa| > εs, then go to Step 3, else stop the algorithm. Note one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
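The steps above can be sketched in code. The following is a minimal Python sketch of the bisection algorithm; the test function f(x) = x^2 − 3, the tolerance value, and the iteration cap are illustrative assumptions, not part of the original notes.

```python
def bisection(f, xl, xu, es=1e-4, max_iter=100):
    """Bisection method for f(x) = 0 on a bracketing interval [xl, xu].

    es is the pre-specified relative error tolerance |ea| in percent
    (an illustrative default).
    """
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xm_old = None
    for _ in range(max_iter):
        xm = 0.5 * (xl + xu)              # step 2: mid-point estimate
        test = f(xl) * f(xm)
        if test < 0:                      # step 3a: root lies in [xl, xm]
            xu = xm
        elif test > 0:                    # step 3b: root lies in [xm, xu]
            xl = xm
        else:                             # step 3c: xm is the root
            return xm
        if xm_old is not None:
            ea = abs((xm - xm_old) / xm) * 100   # step 4: |ea| in percent
            if ea <= es:                  # step 5: stop when within tolerance
                return xm
        xm_old = xm
    return xm

# Illustrative use: f(x) = x**2 - 3 has a root at sqrt(3)
root = bisection(lambda x: x * x - 3, 1.0, 2.0)
```

Each pass halves the bracket, so the error bound after the loop is the last change in the mid-point, which is why the stopping test in step 5 is safe.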
Example
You are working for a start-up computer assembly company and have been asked to determine
the minimum number of computers that the shop will have to sell to make a profit.
The equation that gives the minimum number of computers n to be sold after considering the total costs and the total sales is
f(n) = 40n^1.5 − 875n + 35000 = 0
Use the bisection method of finding roots of equations to find the minimum number of computers that need to be sold to make a profit. Conduct three iterations to estimate the root of the above equation. Find the absolute relative approximate error at the end of each iteration and the number of significant digits at least correct at the end of each iteration.
Solution
Let us assume
nℓ = 50, nu = 100
Check if the function changes sign between nℓ and nu.
f(nℓ) = f(50) = 40(50)^1.5 − 875(50) + 35000 = 5392.1
f(nu) = f(100) = 40(100)^1.5 − 875(100) + 35000 = −12500
Hence
f(nℓ)f(nu) = f(50)f(100) = (5392.1)(−12500) < 0
So there is at least one root between nℓ and nu, that is, between 50 and 100.
Iteration 1
The estimate of the root is
nm = (nℓ + nu)/2 = (50 + 100)/2 = 75
f(nm) = f(75) = 40(75)^1.5 − 875(75) + 35000 = −4.6442 × 10^3
f(nℓ)f(nm) = f(50)f(75) = (5392.1)(−4.6442 × 10^3) < 0
Hence the root is bracketed between nℓ and nm, that is, between 50 and 75. So, the lower and upper limits of the new bracket are
nℓ = 50, nu = 75
At this point, the absolute relative approximate error |εa| cannot be calculated, as we do not have a previous approximation.

Iteration 2
The estimate of the root is
nm = (nℓ + nu)/2 = (50 + 75)/2 = 62.5
The absolute relative approximate error |εa| at the end of Iteration 2 is
|εa| = |(nm_new − nm_old)/nm_new| × 100 = |(62.5 − 75)/62.5| × 100 = 20%
None of the significant digits are at least correct in the estimated root nm = 62.5, as the absolute relative approximate error is greater than 5%.
Since f(nm) = f(62.5) = 76.735 and f(nℓ)f(nm) = (5392.1)(76.735) > 0, the root is bracketed between nm and nu, that is, between 62.5 and 75.

Iteration 3
The estimate of the root is
nm = (nℓ + nu)/2 = (62.5 + 75)/2 = 68.75
Table 1 Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration   nℓ        nu        nm        |εa| %      f(nm)
1           50        100       75        ----        −4.6442 × 10^3
2           50        75        62.5      20          76.735
3           62.5      75        68.75     9.0909      −2.3545 × 10^3
4           62.5      68.75     65.625    4.7619      −1.1569 × 10^3
5           62.5      65.625    64.063    2.4390      −544.68
6           62.5      64.063    63.281    1.2346      −235.12
7           62.5      63.281    62.891    0.62112     −79.483
8           62.5      62.891    62.695    0.31153     −1.4459
9           62.5      62.695    62.598    0.15601     37.627
10          62.598    62.695    62.646    0.077942    18.086
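The rows of Table 1 can be reproduced with a short loop. This is a sketch in plain Python, an assumption about tooling rather than part of the original notes; it applies the bisection steps to the example function f(n).

```python
def f(n):
    # Profit function from the example: f(n) = 40 n^1.5 - 875 n + 35000
    return 40.0 * n ** 1.5 - 875.0 * n + 35000.0

rows = []
nl, nu = 50.0, 100.0          # initial bracket from the example
nm_old = None
for it in range(1, 11):
    nm = 0.5 * (nl + nu)      # mid-point estimate for this iteration
    # |ea| is undefined on the first iteration (no previous estimate)
    ea = abs((nm - nm_old) / nm) * 100 if nm_old is not None else None
    rows.append((it, nl, nu, nm, ea, f(nm)))
    if f(nl) * f(nm) < 0:     # root lies in [nl, nm]
        nu = nm
    else:                     # root lies in [nm, nu]
        nl = nm
    nm_old = nm

for row in rows:
    print(row)
```

The printed rows match the bracket endpoints, mid-points, and error percentages in Table 1 to the shown precision.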
False Position Method
The equation of the straight line that connects the point (b, f(b)) to the point (a, f(a)) is given by
y = [(f(b) − f(a))/(b − a)](x − b) + f(b)
The point xs where the line intersects the x-axis is determined by substituting y = 0 in the equation above and solving for x. Hence
xs = [a f(b) − b f(a)]/[f(b) − f(a)]
The procedure (or algorithm) for finding a solution with the method of False Position is given below:
Algorithm for the method of False Position
1. Define the first interval (a, b) such that a solution exists between a and b. Check that f(a)f(b) < 0.
2. Compute the first estimate of the numerical solution xs1 as xs1 = [a f(b) − b f(a)]/[f(b) − f(a)].
3. Find out whether the actual solution is between a and xs1 or between xs1 and b. This is accomplished by checking the sign of the product f(a)f(xs1):
   i. If f(a)f(xs1) < 0, the solution is between a and xs1.
   ii. If f(a)f(xs1) > 0, the solution is between xs1 and b.
4. Select the subinterval that contains the solution (a to xs1, or xs1 to b) as the new interval (a, b) and go back to step 2. Steps 2 through 4 are repeated until a specified tolerance or error bound is attained.
The method of False Position always converges to an answer, provided a root is initially bracketed in the
interval (a, b).
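The algorithm above can be written as a short Python sketch. The sample function f(x) = x^2 − 3 and the stopping tolerance are illustrative assumptions; the notes only say to stop at "a specified tolerance or error bound".

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Method of False Position for f(x) = 0 on a bracketing interval (a, b).

    tol is a stopping tolerance on the relative change between successive
    estimates (an illustrative choice).
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    xs_old = None
    for _ in range(max_iter):
        xs = (a * fb - b * fa) / (fb - fa)   # step 2: x-intercept of the line
        fs = f(xs)
        if fa * fs < 0:                      # step 3i: solution in (a, xs)
            b, fb = xs, fs
        else:                                # step 3ii: solution in (xs, b)
            a, fa = xs, fs
        if xs_old is not None and abs((xs - xs_old) / xs) < tol:
            return xs
        xs_old = xs
    return xs

# Illustrative use: f(x) = x**2 - 3, bracketed on (1, 2)
root = false_position(lambda x: x * x - 3, 1.0, 2.0)
```

Unlike bisection, the new estimate is weighted toward the endpoint with the smaller function value, which usually shortens the bracket faster on smooth functions.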
Newton-Raphson Method
Introduction
Methods such as the bisection method and the false position method of finding roots of a nonlinear equation f(x) = 0 require bracketing of the root by two guesses. Such methods are called bracketing methods. These methods are always convergent since they are based on reducing the interval between the two guesses so as to zero in on the root of the equation.
In the Newton-Raphson method, the root is not bracketed. In fact, only one initial guess of the root is
needed to get the iterative process started to find the root of an equation. The method hence falls in the category of
open methods. Convergence in open methods is not guaranteed but if the method does converge, it does so much
faster than the bracketing methods.
Derivation
The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at xi, then if one draws the tangent to the curve at f(xi), the point xi+1 where the tangent crosses the x-axis is an improved estimate of the root (Figure 1).
Using the definition of the slope of a function, at x = xi
f′(xi) = tan θ = [f(xi) − 0]/(xi − xi+1)
This gives
xi+1 = xi − f(xi)/f′(xi)     (1)
Equation (1) is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0. So starting with an initial guess, xi, one can find the next guess, xi+1, by using Equation (1). One can repeat this process until one finds the root within a desirable tolerance.
Algorithm
The steps of the Newton-Raphson method to find the root of an equation f(x) = 0 are
1. Evaluate f′(x) symbolically.
2. Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as
   xi+1 = xi − f(xi)/f′(xi)
3. Find the absolute relative approximate error |εa| as
   |εa| = |(xi+1 − xi)/xi+1| × 100
4. Compare the absolute relative approximate error with the pre-specified relative error tolerance, εs. If |εa| > εs, then go to Step 2, else stop the algorithm. Also, check if the number of iterations has exceeded the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user.
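These steps translate directly into a short Python sketch. The test function f(x) = x^2 − 3, its derivative, and the tolerance are illustrative assumptions, not part of the original notes.

```python
def newton_raphson(f, fprime, x0, es=1e-6, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i).

    es is the relative error tolerance in percent (an illustrative choice).
    """
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0:
            # The tangent is horizontal and never crosses the x-axis
            raise ZeroDivisionError("f'(x) is zero at x = %g" % x)
        x_new = x - f(x) / fp                 # step 2: Newton-Raphson formula
        ea = abs((x_new - x) / x_new) * 100   # step 3: |ea| in percent
        x = x_new
        if ea <= es:                          # step 4: stop within tolerance
            break
    return x

# Illustrative use: f(x) = x**2 - 3 with f'(x) = 2x, starting from x0 = 2
root = newton_raphson(lambda x: x * x - 3, lambda x: 2 * x, 2.0)
```

Note the explicit guard against a zero derivative; the pitfalls discussed next show why it matters.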
Figure 1 Geometrical illustration of the Newton-Raphson method: the tangent at (xi, f(xi)) crosses the x-axis at xi+1.
For x0 = 0 or x0 = 0.02, division by zero occurs (Figure 3). For an initial guess close to 0.02, such as x0 = 0.01999, one may avoid division by zero, but the denominator in the formula is then a small number. For this case, as given in Table 2, even after 9 iterations the Newton-Raphson method does not converge.
Table 2 Division by near zero in the Newton-Raphson method.

Iteration   xi            f(xi)            |εa| %
0           0.019990      1.6000 × 10^−6   ----
1           −2.6480
2           −0.34025
3           −0.22369
4           −0.14608
5           −0.094490
6           −0.042862                      51.413
7           −0.012692                      52.107
8           −0.0037553                     53.127
9           −0.0011091                     54.602

Figure 3 Pitfall of division by zero or a near zero number.
III. Oscillations near local maximum and minimum
Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, instead closing in on the local maximum or minimum. Eventually, this may lead to division by a number close to zero, and the iterates may diverge. For example, the equation
f(x) = x^2 + 2 = 0
has no real roots (Figure 4 and Table 3).
Table 3 Oscillations near local maxima and minima in the Newton-Raphson method.

Iteration   xi          f(xi)    |εa| %
0           −1.0000     3.00     ----
1           0.5         2.25     300.00
2           −1.75       5.063    128.571
3           −0.30357    2.092    476.47
4           3.1423      11.874   109.66
5           1.2529      3.570    150.80
6           −0.17166    2.029    829.88
7           5.7395      34.942   102.99
8           2.6955      9.266    112.93
9           0.97678     2.954    175.96

Figure 4 Oscillations around the local minimum for f(x) = x^2 + 2.
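The oscillation in Table 3 is easy to reproduce. This Python sketch (the tooling choice is an assumption) applies the Newton-Raphson step to f(x) = x^2 + 2 starting from x0 = −1 and records the first few iterates.

```python
def newton_step(x):
    # One Newton-Raphson step for f(x) = x**2 + 2 with f'(x) = 2*x
    return x - (x * x + 2) / (2 * x)

# Starting from x0 = -1.0, the iterates jump back and forth around the
# minimum at x = 0 instead of settling down, as in the first rows of Table 3.
x = -1.0
iterates = []
for _ in range(4):
    x = newton_step(x)
    iterates.append(x)
print(iterates)
```

The iterates 0.5, −1.75, −0.30357, 3.1423 match Table 3: since f(x) has no real root, the method has nothing to converge to and keeps bouncing near the minimum.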
Page 11 of 16
Example 1
You are working for a start-up computer assembly company and have been asked to determine the
minimum number of computers that the shop will have to sell to make a profit. The equation that gives
the minimum number of computers n to be sold after considering the total costs and the total sales is
f(n) = 40n^1.5 − 875n + 35000 = 0
Use the Newton-Raphson method of finding roots of equations to find the minimum number of computers
that need to be sold to make a profit. Conduct three iterations to estimate the root of the above equation.
Find the absolute relative approximate error at the end of each iteration and the number of significant
digits at least correct at the end of each iteration.
Solution
f(n) = 40n^1.5 − 875n + 35000 = 0
f′(n) = 60n^0.5 − 875
Let us take the initial guess of the root of f(n) = 0 as n0 = 50.
Iteration 1
The estimate of the root is
n1 = n0 − f(n0)/f′(n0)
   = 50 − [40(50)^1.5 − 875(50) + 35000]/[60(50)^0.5 − 875]
   = 50 − 5392.1/(−450.74)
   = 50 − (−11.963)
   = 61.963
The absolute relative approximate error |εa| at the end of Iteration 1 is
|εa| = |(n1 − n0)/n1| × 100
     = |(61.963 − 50)/61.963| × 100
     = 19.307%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of less than 5% for one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
n2 = n1 − f(n1)/f′(n1)
   = 61.963 − [40(61.963)^1.5 − 875(61.963) + 35000]/[60(61.963)^0.5 − 875]
   = 61.963 − 292.45/(−402.70)
   = 61.963 − (−0.72623)
   = 62.689
The absolute relative approximate error |εa| at the end of Iteration 2 is
|εa| = |(n2 − n1)/n2| × 100
     = |(62.689 − 61.963)/62.689| × 100
     = 1.1585%
The number of significant digits at least correct is 1, because the absolute relative approximate error is less than 5%.
Iteration 3
The estimate of the root is
n3 = n2 − f(n2)/f′(n2)
   = 62.689 − [40(62.689)^1.5 − 875(62.689) + 35000]/[60(62.689)^0.5 − 875]
   = 62.689 − 1.0031/(−399.94)
   = 62.689 − (−2.5080 × 10^−3)
   = 62.692
The absolute relative approximate error |εa| at the end of Iteration 3 is
|εa| = |(n3 − n2)/n3| × 100
     = |(62.692 − 62.689)/62.692| × 100
     = 4.0006 × 10^−3 %
Hence the number of significant digits at least correct is given by the largest value of m for which
|εa| ≤ 0.5 × 10^(2−m)
4.0006 × 10^−3 ≤ 0.5 × 10^(2−m)
8.0011 × 10^−3 ≤ 10^(2−m)
log(8.0011 × 10^−3) ≤ 2 − m
m ≤ 2 − log(8.0011 × 10^−3) = 4.0968
So
m = 4
The number of significant digits at least correct in the estimated root 62.692 is 4.
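The three iterations above can be checked numerically with a few lines of Python; the tooling choice is an assumption, the function and initial guess are from the example.

```python
def f(n):
    # f(n) = 40 n^1.5 - 875 n + 35000 from the example
    return 40.0 * n ** 1.5 - 875.0 * n + 35000.0

def fprime(n):
    # f'(n) = 60 n^0.5 - 875
    return 60.0 * n ** 0.5 - 875.0

n = 50.0                      # initial guess n0 = 50
estimates = []
for _ in range(3):
    n = n - f(n) / fprime(n)  # Newton-Raphson step
    estimates.append(n)
print(estimates)
```

The three estimates agree with the hand computation (61.963, 62.689, 62.692) to the displayed precision.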
SECANT METHOD
What is the secant method and why would we want to use it instead of the Newton-Raphson
method?
The Newton-Raphson method of solving a nonlinear equation f(x) = 0 is given by the iterative formula
xi+1 = xi − f(xi)/f′(xi)     (1)
One of the drawbacks of the Newton-Raphson method is that you have to evaluate the derivative of the function. With the availability of symbolic manipulators such as Maple, MathCAD, MATHEMATICA and MATLAB, this process has become more convenient. However, it still can be a laborious process, and even intractable if the function is derived as part of a numerical scheme. To overcome these drawbacks, the derivative of the function, f′(x), is approximated as
f′(xi) ≈ [f(xi) − f(xi−1)]/(xi − xi−1)     (2)
Substituting Equation (2) in Equation (1) gives
xi+1 = xi − f(xi)(xi − xi−1)/[f(xi) − f(xi−1)]     (3)
The above equation is called the secant method. This method now requires two initial guesses, but unlike the bisection method, the two initial guesses do not need to bracket the root of the equation. The secant method is an open method and may or may not converge. However, when the secant method converges, it will typically converge faster than the bisection method. On the other hand, since the derivative is approximated as given by Equation (2), it typically converges slower than the Newton-Raphson method.
The secant method can also be derived from geometry, as shown in Figure 1. Taking two initial guesses, xi−1 and xi, one draws the straight line through the points (xi, f(xi)) and (xi−1, f(xi−1)); it crosses the x-axis at xi+1. Triangles ABE and DCE are similar. Hence
AB/AE = DC/DE
f(xi)/(xi − xi+1) = f(xi−1)/(xi−1 − xi+1)
On rearranging, the secant method is given as
xi+1 = xi − f(xi)(xi − xi−1)/[f(xi) − f(xi−1)]

Figure 1 Geometrical representation of the secant method.
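Equation (3) can be sketched in Python as follows; the test function f(x) = x^2 − 3, the guesses, and the tolerance are illustrative assumptions, not part of the original notes.

```python
def secant(f, x_prev, x_curr, es=1e-6, max_iter=50):
    """Secant method of Equation (3): two initial guesses, no derivative.

    es is a relative error tolerance in percent (an illustrative choice);
    the two guesses do not have to bracket the root.
    """
    for _ in range(max_iter):
        f_prev, f_curr = f(x_prev), f(x_curr)
        if f_curr == f_prev:
            # The secant line is horizontal and never crosses the x-axis
            raise ZeroDivisionError("f(x_i) equals f(x_i-1)")
        x_next = x_curr - f_curr * (x_curr - x_prev) / (f_curr - f_prev)
        ea = abs((x_next - x_curr) / x_next) * 100
        x_prev, x_curr = x_curr, x_next
        if ea <= es:
            break
    return x_curr

# Illustrative use: f(x) = x**2 - 3 with guesses 1 and 2
root = secant(lambda x: x * x - 3, 1.0, 2.0)
```

Compared with the Newton-Raphson sketch earlier, only the derivative call is replaced by the finite-difference quotient of Equation (2).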
Example 1
You are working for a start-up computer assembly company and have been asked to determine the
minimum number of computers that the shop will have to sell to make a profit.
The equation that gives the minimum number of computers n to be sold after considering the total costs
and the total sales is
f(n) = 40n^1.5 − 875n + 35000 = 0
Use the secant method of finding roots of equations to find the minimum number of computers that need
to be sold to make a profit. Conduct three iterations to estimate the root of the above equation. Find the
absolute relative approximate error at the end of each iteration and the number of significant digits at least
correct at the end of each iteration.
Solution
Let us take the initial guesses of the root of f(n) = 0 as n−1 = 25 and n0 = 50.
Iteration 1
The estimate of the root is
n1 = n0 − f(n0)(n0 − n−1)/[f(n0) − f(n−1)]
   = 50 − [40(50)^1.5 − 875(50) + 35000](50 − 25)/{[40(50)^1.5 − 875(50) + 35000] − [40(25)^1.5 − 875(25) + 35000]}
   = 60.587
The absolute relative approximate error |εa| at the end of Iteration 1 is
|εa| = |(n1 − n0)/n1| × 100
     = |(60.587 − 50)/60.587| × 100
     = 17.474%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of less than 5% for one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
n2 = n1 − f(n1)(n1 − n0)/[f(n1) − f(n0)]
   = 60.587 − [40(60.587)^1.5 − 875(60.587) + 35000](60.587 − 50)/{[40(60.587)^1.5 − 875(60.587) + 35000] − [40(50)^1.5 − 875(50) + 35000]}
   = 62.569
The absolute relative approximate error |εa| at the end of Iteration 2 is
|εa| = |(n2 − n1)/n2| × 100
     = |(62.569 − 60.587)/62.569| × 100
     = 3.1672%
The number of significant digits at least correct is 1, because the absolute relative approximate error is less than 5%.
Iteration 3
The estimate of the root is
n3 = n2 − f(n2)(n2 − n1)/[f(n2) − f(n1)]
   = 62.569 − [40(62.569)^1.5 − 875(62.569) + 35000](62.569 − 60.587)/{[40(62.569)^1.5 − 875(62.569) + 35000] − [40(60.587)^1.5 − 875(60.587) + 35000]}
   = 62.690
The absolute relative approximate error |εa| at the end of Iteration 3 is
|εa| = |(n3 − n2)/n3| × 100
     = |(62.690 − 62.569)/62.690| × 100
     = 0.19425%
The number of significant digits at least correct is 2, because the absolute relative approximate error is less than 0.5%.
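The three secant iterations above can also be verified in a few lines of Python; the tooling choice is an assumption, the function and both initial guesses are from the example.

```python
def f(n):
    # f(n) = 40 n^1.5 - 875 n + 35000 from the example
    return 40.0 * n ** 1.5 - 875.0 * n + 35000.0

n_prev, n_curr = 25.0, 50.0   # initial guesses n_{-1} = 25 and n_0 = 50
estimates = []
for _ in range(3):
    # Secant step of Equation (3)
    n_next = n_curr - f(n_curr) * (n_curr - n_prev) / (f(n_curr) - f(n_prev))
    n_prev, n_curr = n_curr, n_next
    estimates.append(n_curr)
print(estimates)
```

The three estimates agree with the hand computation (60.587, 62.569, 62.690) to the displayed precision, and they approach the same root (about 62.69) found by the bisection and Newton-Raphson examples.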