CVE 154 Lesson 5 Open Methods For Roots of Equations Part 1
LESSON 5:
ROOTS OF EQUATIONS -
OPEN METHODS Part 1
OPEN METHODS
Open methods are based on formulas that require only a single starting value
of 𝑥 or two starting values that do not necessarily bracket the root.
As such, they sometimes diverge or move away from the true root as the
computation progresses. However, when the open methods converge, they
usually do so much more quickly than the bracketing methods.
A new root estimate x_{i+1} can therefore be computed from the current estimate x_i using an iterative formula of the form:

x_{i+1} = g(x_i)
FIXED-POINT ITERATION
APPROXIMATE ERROR:
We use an error estimate that is not contingent on us knowing the true value of
the root. The approximate percent relative error is calculated as:

\varepsilon_a = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100\%
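In code form this error estimate is a one-liner; a minimal sketch (the function name is illustrative, not from the slides):

```python
def approx_rel_error(x_new, x_old):
    """Approximate percent relative error |(x_new - x_old) / x_new| * 100%."""
    return abs((x_new - x_old) / x_new) * 100.0

# Example: two consecutive estimates from a fixed-point iteration
print(approx_rel_error(0.571143, 0.560115))  # ~1.93 %
```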
SAMPLE PROBLEM
EXAMPLE 1:
Use simple fixed-point iteration to locate the root of

f(x) = e^{-x} - x

Note that the true value of the root is x_r = 0.56714329.

SOLUTION:
We will use the function we derived in the previous example. We rearrange the function into a convenient formula for simple fixed-point iteration:

x = e^{-x}, so x_{i+1} = e^{-x_i}

We start with an initial guess of x_0 = 0. MS Excel will be used to speed up the calculations.

i     x_i        |ε_a| (%)   |ε_t| (%)
0     0.000000   -           100.000
1     1.000000   100.000     76.322
2     0.367879   171.828     35.135
3     0.692201   46.854      22.050
4     0.500474   38.309      11.755
5     0.606244   17.447      6.894
6     0.545396   11.157      3.835
7     0.579612   5.903       2.199
8     0.560115   3.481       1.239
9     0.571143   1.931       0.705
10    0.564879   1.109       0.399
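As a rough sketch, the same iteration can be reproduced in a few lines of Python instead of a spreadsheet (the function and argument names are illustrative, not part of the original example):

```python
import math

def fixed_point(g, x0, true_root=None, max_iter=10):
    """Simple fixed-point iteration x_{i+1} = g(x_i), printing error estimates."""
    x = x0
    for i in range(1, max_iter + 1):
        x_new = g(x)
        ea = abs((x_new - x) / x_new) * 100.0            # approximate percent relative error
        line = f"{i:2d}  x = {x_new:.6f}  |ea| = {ea:8.3f} %"
        if true_root is not None:
            et = abs((true_root - x_new) / true_root) * 100.0   # true percent relative error
            line += f"  |et| = {et:7.3f} %"
        print(line)
        x = x_new
    return x

# Example 1: g(x) = e^{-x}, starting from x0 = 0
fixed_point(lambda x: math.exp(-x), x0=0.0, true_root=0.56714329)
```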
FIXED-POINT ITERATION
CONVERGENCE:
Fixed-point iteration is characterized by linear convergence: the true percent relative error in each iteration is roughly proportional to the error from the previous iteration.
However, it is also important to check whether a given fixed-point formula will actually converge to the root. Note that the root of a function can be expressed graphically as the intersection of two functions. For fixed-point iteration, these are the function on the left side and the function on the right side of the equation:

y_1 = f_1(x) = x and y_2 = f_2(x) = g(x)

Therefore, the root is the value of x where

y_1 = y_2, that is, x = g(x)
FIXED-POINT ITERATION
The Two-Curve Graphical Method for locating the root is illustrated using the function of the previous example:

f(x) = e^{-x} - x

This can be separated into two functions:

f_1(x) = x and f_2(x) = e^{-x}

The two functions are evaluated at trial values of x and plotted; the x value at which the two curves intersect is the root.
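The same check can be done numerically; a minimal sketch (the trial points are chosen arbitrarily for illustration):

```python
import math

# Evaluate f1(x) = x and f2(x) = exp(-x) at trial x values; the root of
# f(x) = exp(-x) - x lies where the two curves cross (f1 ≈ f2).
for x in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    f1, f2 = x, math.exp(-x)
    print(f"x = {x:.1f}   f1 = {f1:.4f}   f2 = {f2:.4f}   f2 - f1 = {f2 - f1:+.4f}")
```

The sign of f2 - f1 changes between x = 0.4 and x = 0.6, consistent with the root at x_r = 0.56714329.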
SAMPLE PROBLEM
Note that the true values of the roots are:

x_r = +\sqrt{5} and x_r = -\sqrt{5}

SOLUTION:
We will now demonstrate two formulas, both rearrangements of the same equation x^2 = 5, and see whether fixed-point iteration converges to the roots with each of them. MS Excel is used to evaluate the values.

FIRST FORMULA:

x_{i+1} = \frac{x_i + 5}{x_i + 1}

DERIVATIVE:

g'(x) = -\frac{4}{(x + 1)^2}

i    x_i        |ε_a| (%)   |ε_t| (%)
0    5.000000   -           123.607
1    1.666667   200.000     25.464
2    2.500000   33.333      11.803
3    2.142857   16.667      4.169
4    2.272727   5.714       1.639
5    2.222222   2.273       0.619

Note that the absolute value of the slope g'(x_r) at x_r = +\sqrt{5} is less than 1; therefore the formula converges to that root. At x_r = -\sqrt{5}, the absolute value of the slope is greater than 1; therefore the formula does not converge to that root.

SECOND FORMULA:

x_{i+1} = \frac{3}{2} x_i - \frac{5}{2 x_i}

DERIVATIVE:

g'(x) = \frac{3}{2} + \frac{5}{2 x^2}

i     x_i           |ε_a| (%)   |ε_t| (%)
0     2.000000      -           10.557
1     1.750000      14.286      21.738
2     1.196429      46.269      46.494
3     -0.294909     505.694     113.189
4     8.034816      103.670     259.328
5     11.741078     31.567      425.077
6     17.398690     32.517      678.093
7     25.954346     32.964      1060.714
8     38.835196     33.168      1636.763
9     58.188420     33.260      2502.265
10    87.239666     33.301      3801.476

For the second formula, the magnitude of the slope exceeds 1 at both roots, so the iteration diverges from both:

x_r             |g'(x_r)|
2.236067977     2.00
-2.236067977    2.00
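The convergence criterion |g'(x_r)| < 1 can be verified directly; a minimal sketch (variable names are illustrative):

```python
import math

root = math.sqrt(5)  # true roots are +sqrt(5) and -sqrt(5)

# Derivatives of the two rearrangements g(x)
g1_prime = lambda x: -4.0 / (x + 1) ** 2          # first formula:  g(x) = (x + 5)/(x + 1)
g2_prime = lambda x: 1.5 + 5.0 / (2.0 * x ** 2)   # second formula: g(x) = 1.5*x - 5/(2*x)

for xr in (root, -root):
    print(f"x_r = {xr:+.9f}   |g1'(x_r)| = {abs(g1_prime(xr)):.3f}   |g2'(x_r)| = {abs(g2_prime(xr)):.3f}")
# |g'(x_r)| < 1 -> the iteration converges to that root; > 1 -> it moves away from it.
```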
NEWTON-RAPHSON METHOD
The Newton-Raphson method uses the tangent line to f(x) at the current estimate x_i; the point where that tangent crosses the x-axis becomes the new estimate:

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

The approximate percent relative error is computed as before:

\varepsilon_a = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100\%

The true error at each iteration is related to the previous true error by:

E_{t,i+1} = \frac{-f''(x_r)}{2 f'(x_r)} E_{t,i}^2
Thus the error should be roughly proportional to the square of the previous
error. This behaviour is known as quadratic convergence. In other words, the
number of significant figures of accuracy approximately doubles with each
iteration.
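As a minimal sketch of the formula above (the helper name is illustrative), applied to f(x) = e^{-x} - x from Example 1:

```python
import math

def newton_raphson(f, fprime, x0, n_iter=4):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for i in range(1, n_iter + 1):
        x = x - f(x) / fprime(x)
        print(f"iteration {i}: x = {x:.8f}")
    return x

# f(x) = exp(-x) - x, f'(x) = -exp(-x) - 1, starting from x0 = 0
newton_raphson(lambda x: math.exp(-x) - x,
               lambda x: -math.exp(-x) - 1.0,
               x0=0.0)
```

Starting from x_0 = 0, this reproduces the estimates 0.5, 0.566311, 0.56714317, 0.56714329 tabulated in the example below.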
SAMPLE PROBLEM
EXAMPLE 4:
The Newton-Raphson method is quadratically convergent. That is, the error is roughly proportional to the square of the previous error:

E_{t,i+1} = \frac{-f''(x_r)}{2 f'(x_r)} E_{t,i}^2

Examine this formula and see if it applies to the results of the previous example.

SOLUTION:
The first derivative of the function is:

f'(x) = -e^{-x} - 1

The second derivative of the function is:

f''(x) = e^{-x}

We evaluate these first and second derivatives at the value of the true root, x_r = 0.56714329:

\frac{-f''(x_r)}{2 f'(x_r)} = 0.18095

so the error model becomes

E_{t,i+1} = 0.18095 \, E_{t,i}^2

x_r = 0.56714329

i    x_i           -f''(x_r)/(2f'(x_r))   E_{t,i} (actual)   0.18095 E_{t,i-1}^2 (predicted)
0    0.00000000    0.18095                0.5671432900       -
1    0.50000000    0.18095                0.0671432900       0.0582022390
2    0.56631100    0.18095                0.0008322868       0.0008157542
3    0.56714317    0.18095                0.0000001250       0.0000001253
4    0.56714329    0.18095                -0.0000000004      0.0000000000
Thus, this example illustrates that the error of the Newton-Raphson method for this case
is, in fact, roughly proportional (by a factor of 0.18095) to the square of the error of the
previous iteration.
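A quick numerical check of this proportionality, using the true errors from the table above (an illustrative sketch, not part of the original solution):

```python
# Actual true errors E_t from the Newton-Raphson iterations above
errors = [0.5671432900, 0.0671432900, 0.0008322868, 0.0000001250]
C = 0.18095  # -f''(x_r) / (2 f'(x_r)) at x_r = 0.56714329

for i in range(len(errors) - 1):
    predicted = C * errors[i] ** 2
    print(f"E_t,{i+1}: actual = {errors[i+1]:.10f}   predicted = {predicted:.10f}")
```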
NEWTON-RAPHSON METHOD
PROBLEMS WITH NEWTON-RAPHSON METHOD:
1. Multiple roots.
Just like bracketing methods, the Newton-Raphson method is designed to locate only one distinct root at a time, based on the initial guess. The table below also shows a case where convergence toward the true root is very slow:

i    x_i             |ε_a| (%)       |ε_t| (%)
0    0.50000000      -               50.00000000
1    51.65000000     99.03194579     5065.00000000
2    46.48500000     11.11111111     4548.50000000
3    41.83650000     11.11111111     4083.65000000
4    37.65285000     11.11111111     3665.28500000
5    33.88756500     11.11111111     3288.75650000

After five iterations, the estimate is still very far from the true root x_r = 1.
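The slides do not state the function behind this table, but the values are consistent with f(x) = x^{10} - 1 and x_0 = 0.5 (an inference, not given in the original); a brief sketch that reproduces them:

```python
# Assumed function (inferred from the table values, not stated on the slide):
# f(x) = x**10 - 1, true root x_r = 1, starting guess x0 = 0.5
f = lambda x: x ** 10 - 1.0
fprime = lambda x: 10.0 * x ** 9

x = 0.5
for i in range(1, 6):
    x_new = x - f(x) / fprime(x)
    ea = abs((x_new - x) / x_new) * 100.0   # approximate percent relative error
    et = abs(1.0 - x_new) * 100.0           # true percent relative error (x_r = 1)
    print(f"{i}: x = {x_new:.8f}   |ea| = {ea:.8f} %   |et| = {et:.8f} %")
    x = x_new
```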
NEWTON-RAPHSON METHOD
3. Cases where an inflection point, f''(x) = 0, occurs in the vicinity of the root.
6. Cases where there is a zero slope. The tangent line is horizontal and never intersects the x-axis. The Newton-Raphson formula would also result in a denominator of zero, which is undefined.
NEWTON-RAPHSON METHOD
IMPROVEMENTS TO NEWTON-RAPHSON ALGORITHMS:
1. A plotting routine should be included in the program.
2. At the end of the computation, the final root estimate should always be substituted into the original function to check whether the result is close to zero. This check partially guards against cases where slow or oscillating convergence may lead to a small value of ε_a while the solution is still far from a root.
3. The program should always include an upper limit on the number of iterations
to guard against oscillating, slowly convergent, or divergent solutions that
could persist interminably.
4. The program should alert the user and take account of the possibility that f'(x) might equal zero at any time during the computation, as in the sketch below.
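A minimal sketch combining safeguards 2 to 4 above (the tolerances and names are illustrative choices, not prescribed by the slides):

```python
import math

def safe_newton(f, fprime, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson with an iteration cap, a zero-derivative guard,
    and a final residual check on f(x)."""
    x = x0
    for _ in range(max_iter):                       # 3. upper limit on iterations
        d = fprime(x)
        if d == 0.0:                                # 4. guard against f'(x) = 0
            raise ZeroDivisionError("f'(x) is zero; Newton-Raphson step undefined")
        x_new = x - f(x) / d
        if x_new != 0 and abs((x_new - x) / x_new) * 100.0 < tol:
            x = x_new
            break
        x = x_new
    if abs(f(x)) > 1e-6:                            # 2. substitute back into f(x)
        print("warning: final estimate may not be near a root, f(x) =", f(x))
    return x

# Example: reuse f(x) = exp(-x) - x from Example 1
print(safe_newton(lambda x: math.exp(-x) - x, lambda x: -math.exp(-x) - 1.0, x0=0.0))
```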
REFERENCE
Chapra, S. C., & Canale, R. P. (2010). Numerical Methods for
Engineers (6th Edition). McGraw-Hill.
THE END