
CH-2

SOLUTION OF NON-LINEAR EQUATIONS


One of the most common problems encountered in engineering analysis is the following: given a function f(x), find the values of x for which f(x) = 0. The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeroes of the function f(x).
The roots of equations may be real or complex. In general, an equation may have any number of (real) roots, or no roots at all. For example, sin x − x = 0 has a single root, namely x = 0, whereas tan x − x = 0 has an infinite number of roots (x = 0, ±4.493, ±7.725, ...). There are two types of methods available to find the roots of algebraic and transcendental equations of the form f(x) = 0.
1. Direct Methods: Direct methods give the exact values of the roots in a finite number of steps, assuming there are no round-off errors. Direct methods determine all the roots at the same time.
2. Indirect or Iterative Methods: Indirect or iterative methods are based on the concept of successive approximations. The general procedure is to start with one or more initial approximations to the root and obtain a sequence of iterates (x_k) which in the limit converges to the actual or true root. Indirect or iterative methods determine one or two roots at a time.
The indirect or iterative methods are further divided into two categories: bracketing and open methods.
The bracketing methods require the limits between which the root lies, whereas the open methods require only an initial estimate of the solution. The bisection and false position methods are two well-known examples of bracketing methods. Among the open methods, the Newton-Raphson method and the method of successive approximation are most commonly used. The most popular method for solving a non-linear equation is the Newton-Raphson method, which has a high rate of convergence to a solution.
In this chapter, we present the following indirect or iterative methods with illustrative examples:
1. Bisection Method
2. Method of False Position (Regula Falsi Method)
3. Newton-Raphson Method (Newton’s method)
4. Successive Approximation Method.
Bisection Method of Solving a Nonlinear Equation
After reading this topic, you should be able to:
1. follow the algorithm of the bisection method of solving a nonlinear equation,
2. use the bisection method to solve examples of finding roots of a nonlinear equation, and
3. enumerate the advantages and disadvantages of the bisection method.
What is the bisection method and what is it based on?
One of the first numerical methods developed to find the root of a nonlinear equation f(x) = 0 was the bisection method (also called the binary-search method). The method is based on the following theorem.
Theorem
An equation f(x) = 0, where f(x) is a real continuous function:
• has at least one root between x_l and x_u if f(x_l) f(x_u) < 0 (see Figure 1);
• if f(x_l) f(x_u) > 0, there may or may not be any root between x_l and x_u (Figures 2 and 3);
• if f(x_l) f(x_u) < 0, there may be more than one root between x_l and x_u (Figure 4).
So the theorem only guarantees one root between x_l and x_u.

Figure 1 At least one root exists between the two points if the function is real, continuous, and changes sign.
Figure 2 If the function f(x) does not change sign between the two points, roots of the equation f(x) = 0 may still exist between the two points.
Figure 3 If the function f(x) does not change sign between two points, there may not be any roots for the equation f(x) = 0 between the two points.

Figure 4 If the function f(x) changes sign between the two points, more than one root for the equation f(x) = 0 may exist between the two points.
Bisection method
Since the method is based on finding the root between two points, the method falls under the category of bracketing methods.
Since the root is bracketed between two points, x_l and x_u, one can find the mid-point, x_m, between x_l and x_u. This gives us two new intervals:
1. x_l and x_m, and
2. x_m and x_u.
Is the root now between x_l and x_m or between x_m and x_u? Well, one can find the sign of f(x_l) f(x_m): if f(x_l) f(x_m) < 0, the new bracket is between x_l and x_m; otherwise, it is between x_m and x_u. So, you can see that you are literally halving the interval. As one repeats this process, the width of the interval [x_l, x_u] becomes smaller and smaller, and you can zero in on the root of the equation f(x) = 0.
The algorithm for the bisection method is given as follows.
Algorithm (procedure) for the bisection method
The steps to apply the bisection method to find the root of the equation f(x) = 0 are:
1. Choose x_l and x_u as two guesses for the root such that f(x_l) f(x_u) < 0, or in other words, f(x) changes sign between x_l and x_u.
2. Estimate the root, x_m, of the equation f(x) = 0 as the mid-point between x_l and x_u:
   x_m = (x_l + x_u) / 2
3. Now check the following:
   a) If f(x_l) f(x_m) < 0, then the root lies between x_l and x_m; set x_l = x_l and x_u = x_m.
   b) If f(x_l) f(x_m) > 0, then the root lies between x_m and x_u; set x_l = x_m and x_u = x_u.
   c) If f(x_l) f(x_m) = 0, then the root is x_m. Stop the algorithm if this is true.
4. Find the new estimate of the root
   x_m = (x_l + x_u) / 2
   and find the absolute relative approximate error as
   |ε_a| = |(x_m^new − x_m^old) / x_m^new| × 100
   where
   x_m^new = estimated root from the present iteration
   x_m^old = estimated root from the previous iteration
5. Compare the absolute relative approximate error |ε_a| with the pre-specified relative error tolerance ε_s. If |ε_a| > ε_s, then go to Step 3; else stop the algorithm. Note that one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
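The steps above translate directly into a short program. The following Python sketch is our own illustration of the algorithm; the name `bisect`, the percent tolerance argument `es`, and the iteration cap `max_iter` are our choices, not part of the algorithm statement:

```python
def bisect(f, xl, xu, es=0.05, max_iter=100):
    """Minimal bisection sketch: root of f(x) = 0 bracketed by [xl, xu].

    es is the pre-specified relative error tolerance, in percent.
    """
    if f(xl) * f(xu) > 0:                          # Step 1: check the bracket
        raise ValueError("f(xl) and f(xu) must differ in sign")
    xm_old = None
    for _ in range(max_iter):                      # guard against endless loops
        xm = (xl + xu) / 2.0                       # Steps 2 and 4: mid-point
        if xm_old is not None:
            ea = abs((xm - xm_old) / xm) * 100.0   # absolute relative approx. error, %
            if ea <= es:                           # Step 5: tolerance met
                return xm
        prod = f(xl) * f(xm)                       # Step 3: locate the new bracket
        if prod < 0:
            xu = xm                                # root in [xl, xm]
        elif prod > 0:
            xl = xm                                # root in [xm, xu]
        else:
            return xm                              # xm is the root exactly
        xm_old = xm
    return xm
```

For the example that follows, `bisect(lambda n: 40*n**1.5 - 875*n + 35000, 50, 100)` would return an estimate near 62.7.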
Example
You are working for a start-up computer assembly company and have been asked to determine
the minimum number of computers that the shop will have to sell to make a profit.
The equation that gives the minimum number of computers n to be sold after considering the
total costs and the total sales is
f (n)  40n1.5  875n  35000  0
Use the bisection method of finding roots of equations to find the minimum number of
computers that need to be sold to make a profit. Conduct three iterations to estimate the root of
the above equation. Find the absolute relative approximate error at the end of each iteration and
the number of significant digits at least correct at the end of each iteration.
Solution
Let us assume
n  50, nu  100
Check if the function changes sign between n and nu .
f (n )  f 50  40(50)1.5  875(50)  35000  5392.1
f(nu )  f (100)  40(100)1.5  875(100)  35000  12500
Hence
f n  f nu   f 50 f 100  5392.1 12500  0
So there is at least one root between n and nu , that is, between 50 and 100.

Iteration 1
The estimate of the root is
n_m = (n_l + n_u)/2 = (50 + 100)/2 = 75
f(n_m) = f(75) = 40(75)^1.5 − 875(75) + 35000 = −4.6442 × 10^3
f(n_l) f(n_m) = f(50) f(75) = (5392.1)(−4.6442 × 10^3) < 0
Hence the root is bracketed between n_l and n_m, that is, between 50 and 75. So the lower and upper limits of the new bracket are
n_l = 50, n_u = 75
At this point, the absolute relative approximate error |ε_a| cannot be calculated, as we do not have a previous approximation.

Iteration 2
The estimate of the root is
n_m = (n_l + n_u)/2 = (50 + 75)/2 = 62.5
f(n_m) = f(62.5) = 40(62.5)^1.5 − 875(62.5) + 35000 = 76.735
f(n_l) f(n_m) = f(50) f(62.5) = (5392.1)(76.735) > 0
Hence the root is bracketed between n_m and n_u, that is, between 62.5 and 75. So the lower and upper limits of the new bracket are
n_l = 62.5, n_u = 75
The absolute relative approximate error |ε_a| at the end of Iteration 2 is
|ε_a| = |(n_m^new − n_m^old) / n_m^new| × 100 = |(62.5 − 75)/62.5| × 100 = 20%
None of the significant digits are at least correct in the estimated root, as the absolute relative approximate error is greater than 5%.

Iteration 3
The estimate of the root is
n_m = (n_l + n_u)/2 = (62.5 + 75)/2 = 68.75
f(n_m) = f(68.75) = 40(68.75)^1.5 − 875(68.75) + 35000 = −2.3545 × 10^3
f(n_l) f(n_m) = f(62.5) f(68.75) = (76.735)(−2.3545 × 10^3) < 0
Hence the root is bracketed between n_l and n_m, that is, between 62.5 and 68.75. So the lower and upper limits of the new bracket are
n_l = 62.5, n_u = 68.75
The absolute relative approximate error |ε_a| at the end of Iteration 3 is
|ε_a| = |(68.75 − 62.5)/68.75| × 100 = 9.0909%
Still none of the significant digits are at least correct in the estimated root of the equation, as the absolute relative approximate error is greater than 5%. The estimated minimum number of computers that need to be sold to break even at the end of the third iteration is 69. Seven more iterations were conducted, and these iterations are shown in Table 1.

Table 1 Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration   n_l       n_u       n_m       |ε_a| %     f(n_m)
1           50        100       75        ----        −4.6442 × 10^3
2           50        75        62.5      20          76.735
3           62.5      75        68.75     9.0909      −2.3545 × 10^3
4           62.5      68.75     65.625    4.7619      −1.1569 × 10^3
5           62.5      65.625    64.063    2.4390      −544.68
6           62.5      64.063    63.281    1.2346      −235.12
7           62.5      63.281    62.891    0.62112     −79.483
8           62.5      62.891    62.695    0.31153     −1.4459
9           62.5      62.695    62.598    0.15601     37.627
10          62.598    62.695    62.646    0.077942    18.086
At the end of the 10th iteration,
|ε_a| = 0.077942%
Hence the number of significant digits at least correct is given by the largest value of m for which
|ε_a| ≤ 0.5 × 10^(2−m)
0.077942 ≤ 0.5 × 10^(2−m)
0.15588 ≤ 10^(2−m)
log(0.15588) ≤ 2 − m
m ≤ 2 − log(0.15588) = 2.8072
So
m = 2
The number of significant digits at least correct in the estimated root 62.646 is 2.
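The worked example above can be reproduced with a short driver; the code below is our own illustrative sketch, not part of the original text, and regenerates the rows of Table 1 and the significant-digit count at the tenth iteration:

```python
import math

f = lambda n: 40 * n**1.5 - 875 * n + 35000       # f(n) from the example

nl, nu = 50.0, 100.0
nm_old = None
for k in range(1, 11):
    nm = (nl + nu) / 2.0                           # mid-point of the bracket
    ea = abs((nm - nm_old) / nm) * 100.0 if nm_old is not None else float("nan")
    print(f"{k:2d}  {nl:8.3f}  {nu:8.3f}  {nm:8.3f}  {ea:10.6f}  {f(nm):12.4f}")
    if f(nl) * f(nm) < 0:                          # root in [nl, nm]
        nu = nm
    else:                                          # root in [nm, nu]
        nl = nm
    nm_old = nm

# Largest m with |ea| <= 0.5 * 10^(2 - m); gives m = 2 at the tenth iteration.
m = math.floor(2 - math.log10(0.077942 / 0.5))
print("significant digits at least correct:", m)
```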

METHOD OF FALSE POSITION


The method of False Position (also called the Regula Falsi method, or the linear interpolation method) is another well-known bracketing method. It is very similar to the bisection method, except that it uses a different strategy to arrive at its new root estimate: rather than bisecting the interval (a, b), it locates the root by joining f(a1) and f(b1) with a straight line. The intersection of this line with the x-axis represents an improved estimate of the root.
Here again, we assume that within a given interval (a, b), f(x) is continuous and the equation has a solution. As shown in the figure, the method starts by finding an initial interval (a1, b1) that brackets the solution. f(a1) and f(b1) are the values of the function at the end points a1 and b1. These end points are connected by a straight line, and the first estimate of the numerical solution, xs1, is the point where the straight line crosses the x-axis. For the second iteration, a new interval (a2, b2) is defined. The new interval is either (a1, xs1), where a1 is assigned to a2 and xs1 to b2, or (xs1, b1), where xs1 is assigned to a2 and b1 to b2. The end points of the second interval are connected with a straight line, and the point where this new line crosses the x-axis is the second estimate of the solution, xs2. A new subinterval (a3, b3) is selected for the third iteration, and the iterations are continued until the numerical solution is accurate enough.

The equation of the straight line that connects the point (b, f(b)) to the point (a, f(a)) is given by
y = [(f(b) − f(a)) / (b − a)] (x − b) + f(b)
The point x_s where the line intersects the x-axis is determined by substituting y = 0 in the equation above and solving for x. Hence,
x_s = [a f(b) − b f(a)] / [f(b) − f(a)]
The procedure (or algorithm) for finding a solution with the method of False Position is given below:
Algorithm for the method of False Position
1. Define the first interval (a, b) such that a solution exists between them. Check that f(a) f(b) < 0.
2. Compute the first estimate of the numerical solution, xs1, using x_s = [a f(b) − b f(a)] / [f(b) − f(a)].
3. Find out whether the actual solution is between a and xs1 or between xs1 and b. This is accomplished by checking the sign of the product f(a) f(xs1):
   i. If f(a) f(xs1) < 0, the solution is between a and xs1.
   ii. If f(a) f(xs1) > 0, the solution is between xs1 and b.
4. Select the subinterval that contains the solution (a to xs1, or xs1 to b) as the new interval (a, b) and go back to Step 2. Steps 2 through 4 are repeated until a specified tolerance or error bound is attained.
The method of False Position always converges to an answer, provided a root is initially bracketed in the
interval (a, b).
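As a concrete illustration, here is a minimal Python sketch of the procedure; the name `false_position` and the stopping rule based on successive estimates are our own additions (the text only says to iterate until a specified tolerance or error bound is attained):

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Sketch of the false-position method: root of f(x) = 0 with f(a) f(b) < 0."""
    if f(a) * f(b) >= 0:                              # Step 1: check the bracket
        raise ValueError("the root is not bracketed by (a, b)")
    xs = a
    for _ in range(max_iter):
        xs_old = xs
        xs = (a * f(b) - b * f(a)) / (f(b) - f(a))    # Step 2: x-intercept of the chord
        if abs(xs - xs_old) <= tol:                   # stop when estimates settle
            return xs
        if f(a) * f(xs) < 0:                          # Step 3i: solution in (a, xs)
            b = xs
        else:                                         # Step 3ii: solution in (xs, b)
            a = xs
    return xs
```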

Newton-Raphson Method
Introduction
Methods such as the bisection method and the false position method of finding roots of a nonlinear equation f(x) = 0 require bracketing of the root by two guesses. Such methods are called bracketing methods. These methods are always convergent since they are based on reducing the interval between the two guesses so as to zero in on the root of the equation.

In the Newton-Raphson method, the root is not bracketed. In fact, only one initial guess of the root is
needed to get the iterative process started to find the root of an equation. The method hence falls in the category of
open methods. Convergence in open methods is not guaranteed, but when the method does converge, it does so much faster than the bracketing methods.

Derivation
The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at x_i, then if one draws the tangent to the curve at f(x_i), the point x_{i+1} where the tangent crosses the x-axis is an improved estimate of the root (Figure 1).
Using the definition of the slope of a function, at x = x_i:
f′(x_i) = tan θ = [f(x_i) − 0] / (x_i − x_{i+1})
This gives
x_{i+1} = x_i − f(x_i) / f′(x_i)        (1)
Equation (1) is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0. So starting with an initial guess, x_i, one can find the next guess, x_{i+1}, by using Equation (1). One can repeat this process until one finds the root within a desirable tolerance.

Algorithm
The steps of the Newton-Raphson method to find the root of an equation f(x) = 0 are:
1. Evaluate f′(x) symbolically.
2. Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as
   x_{i+1} = x_i − f(x_i) / f′(x_i)
3. Find the absolute relative approximate error |ε_a| as
   |ε_a| = |(x_{i+1} − x_i) / x_{i+1}| × 100
4. Compare the absolute relative approximate error with the pre-specified relative error tolerance ε_s. If |ε_a| > ε_s, go to Step 2; else stop the algorithm. Also, check if the number of iterations has exceeded the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user.

Figure 1 Geometrical illustration of the Newton-Raphson method.
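The algorithm is equally short in code. The sketch below is our own illustration; `newton` and the derivative argument `df` are our names, and the tolerance `es` is in percent, as in the text:

```python
def newton(f, df, x0, es=0.0001, max_iter=50):
    """Sketch of Newton-Raphson: iterate Equation (1) until |ea| <= es (percent)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)                  # Newton-Raphson formula (1)
        ea = abs((x_new - x) / x_new) * 100.0     # absolute relative approx. error, %
        x = x_new
        if ea <= es:                              # Step 4: tolerance met
            return x
    raise RuntimeError("maximum number of iterations exceeded")
```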

Drawbacks of the Newton-Raphson Method


I. Divergence at inflection points
If the selection of the initial guess or an iterated value of the root turns out to be close to an inflection point (see the definition in the appendix of this chapter) of the function f(x) in the equation f(x) = 0, the Newton-Raphson method may start diverging away from the root. It may then start converging back to the root. For example, to find the root of the equation
f(x) = (x − 1)^3 + 0.512 = 0
the Newton-Raphson method reduces to
x_{i+1} = x_i − [(x_i − 1)^3 + 0.512] / [3(x_i − 1)^2]
Starting with an initial guess of x_0 = 5.0, Table 1 shows the iterated values of the root of the equation. As you can observe, the root starts to diverge at Iteration 6 because the previous estimate of 0.92589 is close to the inflection point at x = 1 (the value of f′(x) is zero at the inflection point). Eventually, after 12 more iterations, the root converges to the exact value of x = 0.2.
Table 1 Divergence near inflection point.

Iteration Number   x_i
0                  5.0000
1                  3.6560
2                  2.7465
3                  2.1084
4                  1.6000
5                  0.92589
6                  −30.119
7                  −19.746
8                  −12.831
9                  −8.2217
10                 −5.1498
11                 −3.1044
12                 −1.7464
13                 −0.85356
14                 −0.28538
15                 0.039784
16                 0.17475
17                 0.19924
18                 0.2

Figure 2 Divergence at inflection point for f(x) = (x − 1)^3 + 0.512 = 0.
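This behaviour is easy to reproduce. The helper below is our own sketch (the name `newton_steps` is hypothetical); it records the iterates instead of stopping at a tolerance, and regenerates the entries of Table 1:

```python
def newton_steps(f, df, x0, n):
    """Return the first n Newton-Raphson iterates starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

f = lambda x: (x - 1) ** 3 + 0.512
df = lambda x: 3 * (x - 1) ** 2
for i, x in enumerate(newton_steps(f, df, 5.0, 18)):
    print(i, x)    # swings out to about -30 near Iteration 6, then settles at 0.2
```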

II. Division by zero


For the equation
f(x) = x^3 − 0.03x^2 + 2.4 × 10^−6 = 0
the Newton-Raphson method reduces to
x_{i+1} = x_i − (x_i^3 − 0.03x_i^2 + 2.4 × 10^−6) / (3x_i^2 − 0.06x_i)
For x_0 = 0 or x_0 = 0.02, division by zero occurs (Figure 3). For an initial guess close to 0.02, such as x_0 = 0.01999, one may avoid division by zero, but the denominator in the formula is then a small number. For this case, as given in Table 2, even after 9 iterations the Newton-Raphson method does not converge.
Table 2 Division by near zero in the Newton-Raphson method.

Iteration Number   x_i          f(x_i)             |ε_a| %
0                  0.019990     −1.6000 × 10^−6    ----
1                  −2.6480      −18.778            100.75
2                  −1.7620      −5.5638            50.282
3                  −1.1714      −1.6485            50.422
4                  −0.77765     −0.48842           50.632
5                  −0.51518     −0.14470           50.946
6                  −0.34025     −0.042862          51.413
7                  −0.22369     −0.012692          52.107
8                  −0.14608     −0.0037553         53.127
9                  −0.094490    −0.0011091         54.602

Figure 3 Pitfall of division by zero or a near zero number.
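In code, the usual defence is to test the magnitude of the derivative before dividing. The guarded variant below is our own suggestion, not something prescribed by the text:

```python
def newton_guarded(f, df, x0, es=0.0001, max_iter=50, dmin=1e-12):
    """Newton-Raphson sketch that refuses to divide by a (near) zero derivative."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if abs(d) < dmin:                         # derivative effectively zero
            raise ZeroDivisionError(f"f'(x) is about 0 at x = {x}; choose another guess")
        x_new = x - f(x) / d
        ea = abs((x_new - x) / x_new) * 100.0
        x = x_new
        if ea <= es:
            return x
    raise RuntimeError("did not converge within max_iter iterations")
```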
III. Oscillations near local maximum and minimum
Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, hovering around the local maximum or minimum instead. Eventually, this may lead to division by a number close to zero, and the method may diverge. For example, the equation
f(x) = x^2 + 2 = 0
has no real roots (Figure 4 and Table 3).

Table 3 Oscillations near local maxima and minima in the Newton-Raphson method.

Iteration Number   x_i         f(x_i)    |ε_a| %
0                  −1.0000     3.00      ----
1                  0.5         2.25      300.00
2                  −1.75       5.063     128.571
3                  −0.30357    2.092     476.47
4                  3.1423      11.874    109.66
5                  1.2529      3.570     150.80
6                  −0.17166    2.029     829.88
7                  5.7395      34.942    102.99
8                  2.6955      9.266     112.93
9                  0.97678     2.954     175.96

Figure 4 Oscillations around the local minimum for f(x) = x^2 + 2.

IV. Root jumping


In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root. For example, for solving the equation sin x = 0, if you choose x_0 = 2.4π = 7.539822 as an initial guess, it converges to the root x = 0, as shown in Table 4 and Figure 5. However, one may have chosen this initial guess hoping to converge to the nearby root x = 2π = 6.2831853.
Table 4 Root jumping in the Newton-Raphson method.

Iteration Number   x_i                 f(x_i)              |ε_a| %
0                  7.539822            0.951               ----
1                  4.462               −0.969              68.973
2                  0.5499              0.5226              711.44
3                  −0.06307            −0.06303            971.91
4                  8.376 × 10^−5       8.375 × 10^−5       7.54 × 10^4
5                  −1.95861 × 10^−13   −1.95861 × 10^−13   4.28 × 10^10
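Both this failure mode and the oscillation in the previous subsection can be reproduced with the `newton_steps` helper sketched earlier (again our own illustrative code, repeated here so the snippet runs on its own):

```python
import math

def newton_steps(f, df, x0, n):    # same helper as in the divergence demo
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

# Oscillation (Table 3): f(x) = x^2 + 2 has no real root, so the iterates
# bounce around the local minimum at x = 0 instead of converging.
print(newton_steps(lambda x: x * x + 2, lambda x: 2 * x, -1.0, 9))

# Root jumping (Table 4): starting at x0 = 2.4*pi, the iterates converge to
# the root x = 0 rather than to the nearby root x = 2*pi of sin x = 0.
print(newton_steps(math.sin, math.cos, 2.4 * math.pi, 5))
```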

Figure 5 Root jumping from the intended location of the root for f(x) = sin x = 0.

What is an inflection point?
For a function f(x), the point where the concavity changes from up-to-down or down-to-up is called its inflection point. For example, for the function f(x) = (x − 1)^3, the concavity changes at x = 1, and hence (1, 0) is an inflection point.
An inflection point MAY exist at a point where f″(x) = 0 or where f″(x) does not exist. The reason we say that it MAY exist is that if f″(x) = 0, this only makes it a possible inflection point. For example, for f(x) = x^4 − 16, f″(0) = 0, but the concavity does not change at x = 0. Hence the point (0, −16) is not an inflection point of f(x) = x^4 − 16.
For f(x) = (x − 1)^3, f″(x) changes sign at x = 1 (f″(x) > 0 for x > 1, and f″(x) < 0 for x < 1), which brings up the Inflection Point Theorem for a function f(x), which states the following:
“If f′(c) exists and f″(c) changes sign at x = c, then the point (c, f(c)) is an inflection point of the graph of f.”
Example 1
You are working for a start-up computer assembly company and have been asked to determine the minimum number of computers that the shop will have to sell to make a profit. The equation that gives the minimum number of computers n to be sold after considering the total costs and the total sales is
f(n) = 40n^1.5 − 875n + 35000 = 0
Use the Newton-Raphson method of finding roots of equations to find the minimum number of computers that need to be sold to make a profit. Conduct three iterations to estimate the root of the above equation. Find the absolute relative approximate error at the end of each iteration and the number of significant digits at least correct at the end of each iteration.

Solution
f n  40n1.5  875n  35000  0
f n  60n 0.5  875
Let us take the initial guess of the root of f n  0 as n0  50 .
Iteration 1
The estimate of the root is
f n0 
n1  n0 
f n0 
4050  87550  35000
1.5
 50 
6050  875
0.5

5392.1
 50 
 450.74
 50   11.963
 61.963
The absolute relative approximate error a at the end of Iteration 1 is

n1  n0
a   100
n1
61.963  50
  100
61.963
 19.307%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of less than
5% for one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
f n1 
n2  n1 
f n1 

Page 12 of 16
4061.963  87561.963  35000
1.5
 61.963 
6061.963  875
0.5

292.45
 61.963 
 402.70
 61.963   0.72623
 62.689
The absolute relative approximate error a at the end of Iteration 2 is

n2  n1
a   100
n2
62.689  61.963
  100
62.689
 1.1585%
The number of significant digits at least correct is 1, because the absolute relative approximate error is less than 5%
Iteration 3
The estimate of the root is
f n2 
n3  n 2 
f n2 
4062.689  87562.689  35000
1.5
 62.689 
6062.689  875
0.5

1.0031
 62.689 
 399.94
 62.689   2.5080  10 3 
 62.692
The absolute relative approximate error a at the end of Iteration 3 is
n3  n 2
a   100
n3
62.692  62.689
a   100
62.692
 4.0006  10 3 %
Hence the number of significant digits at least correct is given by the largest value of m for which
a  0.5  10 2m
4.0006  10 3  0.5  10 2m
8.0011 10 3  10 2m
log8.0011 10 3   2  m
m  2  log8.0011 10 3   4.0968
So
m4
The number of significant digits at least correct in the estimated root 62.692 is 4.
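The three iterations above can be checked with a short script; this is our own sketch, written out step by step rather than through a helper function so each update is visible:

```python
f  = lambda n: 40 * n**1.5 - 875 * n + 35000   # f(n)
df = lambda n: 60 * n**0.5 - 875               # f'(n)

n = 50.0                                        # initial guess n0
for k in range(1, 4):
    n_new = n - f(n) / df(n)                    # Newton-Raphson formula (1)
    ea = abs((n_new - n) / n_new) * 100.0
    print(f"Iteration {k}: n = {n_new:.3f}, |ea| = {ea:.5g}%")
    n = n_new
# Prints approximately 61.963, 62.689, 62.692 with errors 19.307%, 1.1585%, 0.0040006%.
```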

SECANT METHOD
What is the secant method and why would we want to use it instead of the Newton-Raphson
method?
The Newton-Raphson method of solving a nonlinear equation f(x) = 0 is given by the iterative formula
x_{i+1} = x_i − f(x_i)/f′(x_i)        (1)
One of the drawbacks of the Newton-Raphson method is that you have to evaluate the derivative of the function. With the availability of symbolic manipulators such as Maple, MathCAD, MATHEMATICA and MATLAB, this process has become more convenient. However, it can still be a laborious process, and even intractable if the function is derived as part of a numerical scheme. To overcome these drawbacks, the derivative of the function, f′(x), is approximated as
f′(x_i) ≈ [f(x_i) − f(x_{i−1})] / (x_i − x_{i−1})        (2)
Substituting Equation (2) in Equation (1) gives
x_{i+1} = x_i − f(x_i)(x_i − x_{i−1}) / [f(x_i) − f(x_{i−1})]        (3)
The above equation is called the secant method. This method now requires two initial guesses, but unlike the bisection method, the two initial guesses do not need to bracket the root of the equation. The secant method is an open method and may or may not converge. However, when the secant method converges, it typically converges faster than the bisection method; since the derivative is approximated as in Equation (2), it typically converges more slowly than the Newton-Raphson method.
The secant method can also be derived from geometry, as shown in Figure 1. Taking two initial guesses, x_{i−1} and x_i, one draws a straight line between the points (x_i, f(x_i)) and (x_{i−1}, f(x_{i−1})); the line passes through the x-axis at x_{i+1}. ABE and DCE are similar triangles. Hence
AB/AE = DC/DE
f(x_i)/(x_i − x_{i+1}) = f(x_{i−1})/(x_{i−1} − x_{i+1})
On rearranging, the secant method is given as
x_{i+1} = x_i − f(x_i)(x_i − x_{i−1}) / [f(x_i) − f(x_{i−1})]

Figure 1 Geometrical representation of the secant method.
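A minimal Python sketch of Equation (3) follows; the function name `secant` and the percent-based stopping rule are our own choices:

```python
def secant(f, x_prev, x, es=0.0001, max_iter=50):
    """Sketch of the secant method: two initial guesses, no bracketing needed."""
    for _ in range(max_iter):
        x_new = x - f(x) * (x - x_prev) / (f(x) - f(x_prev))   # Equation (3)
        ea = abs((x_new - x) / x_new) * 100.0                  # error in percent
        x_prev, x = x, x_new
        if ea <= es:
            return x
    raise RuntimeError("secant method did not converge")
```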

Example 1
You are working for a start-up computer assembly company and have been asked to determine the
minimum number of computers that the shop will have to sell to make a profit.
The equation that gives the minimum number of computers n to be sold after considering the total costs
and the total sales is
f (n)  40n1.5  875n  35000  0
Use the secant method of finding roots of equations to find the minimum number of computers that need
to be sold to make a profit. Conduct three iterations to estimate the root of the above equation. Find the
absolute relative approximate error at the end of each iteration and the number of significant digits at least
correct at the end of each iteration.

Solution
Let us take the initial guesses of the root of f(n) = 0 as n_{−1} = 25 and n_0 = 50.
Iteration 1
The estimate of the root is
n_1 = n_0 − f(n_0)(n_0 − n_{−1}) / [f(n_0) − f(n_{−1})]
    = 50 − [40(50)^1.5 − 875(50) + 35000](50 − 25) / {[40(50)^1.5 − 875(50) + 35000] − [40(25)^1.5 − 875(25) + 35000]}
    = 60.587
The absolute relative approximate error |ε_a| at the end of Iteration 1 is
|ε_a| = |(n_1 − n_0)/n_1| × 100 = |(60.587 − 50)/60.587| × 100 = 17.474%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of less than 5% for one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
n_2 = n_1 − f(n_1)(n_1 − n_0) / [f(n_1) − f(n_0)]
    = 60.587 − [40(60.587)^1.5 − 875(60.587) + 35000](60.587 − 50) / {[40(60.587)^1.5 − 875(60.587) + 35000] − [40(50)^1.5 − 875(50) + 35000]}
    = 62.569
The absolute relative approximate error |ε_a| at the end of Iteration 2 is
|ε_a| = |(n_2 − n_1)/n_2| × 100 = |(62.569 − 60.587)/62.569| × 100 = 3.1672%
The number of significant digits at least correct is 1, because the absolute relative approximate error is less than 5%.
Iteration 3
The estimate of the root is
n_3 = n_2 − f(n_2)(n_2 − n_1) / [f(n_2) − f(n_1)]
    = 62.569 − [40(62.569)^1.5 − 875(62.569) + 35000](62.569 − 60.587) / {[40(62.569)^1.5 − 875(62.569) + 35000] − [40(60.587)^1.5 − 875(60.587) + 35000]}
    = 62.690
The absolute relative approximate error |ε_a| at the end of Iteration 3 is
|ε_a| = |(n_3 − n_2)/n_3| × 100 = |(62.690 − 62.569)/62.690| × 100 = 0.19425%
The number of significant digits at least correct is 2, because the absolute relative approximate error is less than 0.5%.
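As with the Newton-Raphson example, these three iterations can be verified with a few lines of our own illustrative code:

```python
f = lambda n: 40 * n**1.5 - 875 * n + 35000

n_prev, n = 25.0, 50.0                  # initial guesses n_(-1) and n_0
for k in range(1, 4):
    n_new = n - f(n) * (n - n_prev) / (f(n) - f(n_prev))   # Equation (3)
    ea = abs((n_new - n) / n_new) * 100.0
    print(f"Iteration {k}: n = {n_new:.3f}, |ea| = {ea:.5g}%")
    n_prev, n = n, n_new
# Prints approximately 60.587, 62.569, 62.690 with errors 17.474%, 3.1672%, 0.19425%.
```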
