CVE 154 Lesson 5 Open Methods For Roots of Equations Part 1

This document discusses open methods for finding the roots of equations, specifically the fixed-point iteration method. It explains that fixed-point iteration uses a formula to iteratively calculate new estimates of the root without necessarily bracketing it. The document provides examples of how to derive the fixed-point iteration formula from different functions and use it to calculate successive estimates that converge on the root. It also discusses factors that influence whether the method will converge, such as the slope of the function being less than one, and presents graphical representations of convergence and divergence cases.


CVE 154 – Statics of Rigid Bodies

LESSON 5:
ROOTS OF EQUATIONS -
OPEN METHODS Part 1
OPEN METHODS
 Open methods are based on formulas that require only a single starting value of x, or two starting values that do not necessarily bracket the root.
 As such, they sometimes diverge, or move away from the true root, as the computation progresses. However, when the open methods converge, they usually do so much more quickly than the bracketing methods.

[Figures: a diverging open method vs. a converging open method]


FIXED-POINT ITERATION
 Simple fixed-point iteration, also called one-point iteration or successive substitution, uses a formula to solve for the root. The formula can be obtained by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation:
x = g(x)
 This transformation can be accomplished either by algebraic manipulation or by simply adding x to both sides of the original equation.
Ex.
x^2 - 2x + 3 = 0  ->  x = (x^2 + 3)/2
sin x = 0  ->  x = sin x + x
 Therefore, a new root estimate x_{i+1} can be computed from a given initial guess x_i using the iterative formula:
x_{i+1} = g(x_i)
FIXED-POINT ITERATION
APPROXIMATE ERROR:
 We use an error estimate that is not contingent on knowing the true value of the root. The approximate percent relative error is calculated as:
ε_a = |(x_{i+1} - x_i)/x_{i+1}| × 100%
SAMPLE PROBLEM
EXAMPLE 1:
Use simple fixed-point iteration to locate the root of
f(x) = e^(-x) - x
Note that the true value of the root is x_r = 0.56714329.

SOLUTION:
We will use the function derived in the previous example. We rearrange the function to convert it into a convenient formula for simple fixed-point iteration:
x = e^(-x)
x_{i+1} = e^(-x_i)
We start with an initial guess of x_0 = 0. MS Excel will be used to speed up the calculations.

i    x_i        |ε_a| (%)   |ε_t| (%)
0    0.000000   -           100.000
1    1.000000   100.000     76.322
2    0.367879   171.828     35.135
3    0.692201   46.854      22.050
4    0.500474   38.309      11.755
5    0.606244   17.447      6.894
6    0.545396   11.157      3.835
7    0.579612   5.903       2.199
8    0.560115   3.481       1.239
9    0.571143   1.931       0.705
10   0.564879   1.109       0.399
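The iteration above can also be run in a few lines of code instead of a spreadsheet. The sketch below is a minimal illustration; the helper name `fixed_point` is ours, not from the lesson, which used MS Excel. It reproduces the x_i and |ε_a| columns of the table above.

```python
import math

def fixed_point(g, x0, max_iter=10):
    """Simple fixed-point iteration x_{i+1} = g(x_i).

    Returns a list of (iteration, estimate, approximate % relative error);
    the error is None for the initial guess, where it is undefined.
    """
    history = [(0, x0, None)]
    x = x0
    for i in range(1, max_iter + 1):
        x_new = g(x)
        ea = abs((x_new - x) / x_new) * 100  # approximate % relative error
        history.append((i, x_new, ea))
        x = x_new
    return history

# Example 1: f(x) = e^(-x) - x rearranged to x = e^(-x), starting at x0 = 0
for i, xi, ea in fixed_point(lambda x: math.exp(-x), 0.0):
    print(f"{i:2d}  {xi:.6f}  {'' if ea is None else f'{ea:7.3f}'}")
```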
FIXED-POINT ITERATION
CONVERGENCE:
 Fixed-point iteration is characterized by linear convergence: the true percent relative error of each iteration is roughly proportional to the error of the previous iteration.
 However, it is also important to check whether a given fixed-point formula will actually converge on the root. Note that the root of a function can be expressed graphically as the intersection of two functions. For fixed-point iteration, these are the left-hand side and the right-hand side of the equation:
y_1 = f_1(x) = x  and  y_2 = f_2(x) = g(x)
 Therefore, the root is the value of x where
y_1 = y_2
x = g(x)
FIXED-POINT ITERATION
The two-curve graphical method for locating the root is shown here using the function of the previous example:
f(x) = e^(-x) - x
This can be separated into two functions:
f_1(x) = x    f_2(x) = e^(-x)
The two functions are evaluated at trial x values and plotted in the figure to the right.
The intersection of the two curves, in the bottom figure, indicates a root estimate that corresponds to the point where the single curve, in the top figure, crosses the x axis.
FIXED-POINT ITERATION
The two-curve method can now be used to illustrate the convergence and divergence of fixed-point iteration:
y_1 = x    y_2 = g(x)
The functions are plotted with y_1 = x and four different shapes for y_2 = g(x).
In the figures, (a) and (b) show convergence, while (c) and (d) indicate divergence of simple fixed-point iteration. Additionally, figures (a) and (c) are called monotone patterns, whereas (b) and (d) are called oscillating or spiral patterns.
Note that convergence occurs when:
|g'(a)| < 1
where a is the true value of the root.
FIXED-POINT ITERATION
 We can get some insight into this criterion by looking at the Taylor series. Let a be a root of the equation
x = g(x)
 Now, by Taylor's theorem,
g(x) = g(a) + g'(a)(x - a) + ...
 But g(a) = a, so we have
g(x) - a ≈ g'(a)(x - a)
 We are iterating g(x), that is, evaluating it repeatedly. If our ith estimate is x_i, then
x_{i+1} = g(x_i)
 So from the above, we have
x_{i+1} - a ≈ g'(a)(x_i - a)
 In other words, the distance between our estimate and the root gets multiplied by approximately g'(a) with each iteration. So the iteration converges if |g'(a)| < 1 and diverges if |g'(a)| > 1. The borderline case |g'(a)| = 1 can correspond either to very slow convergence or to very slow divergence.
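This error-multiplication argument can be checked numerically for Example 1, where g(x) = e^(-x): at the root a ≈ 0.56714329 we have g'(a) = -e^(-a) = -a, so successive error magnitudes should shrink by a factor approaching |g'(a)| ≈ 0.567. The starting guess 0.3 below is arbitrary; this is a sketch, not part of the lesson.

```python
import math

a = 0.56714329           # true root of x = e^(-x)
x = 0.3                  # arbitrary starting guess
ratios = []
prev_err = abs(x - a)
for _ in range(8):
    x = math.exp(-x)     # one fixed-point step
    err = abs(x - a)
    ratios.append(err / prev_err)
    prev_err = err
print(ratios[-1])        # approaches |g'(a)| = e^(-a), roughly 0.567
```

The ratios oscillate slightly because g'(a) is negative (the iterates spiral around the root, pattern (b) above), but their magnitude settles near 0.567.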
SAMPLE PROBLEM
EXAMPLE 2:
Use simple fixed-point iteration to locate the roots of
x^2 = 5
Note that the true values of the roots are x_r = +√5 and x_r = -√5.

SOLUTION:
We will now demonstrate two formulas and see whether fixed-point iteration converges to the roots.

FIRST FORMULA:
x_{i+1} = (x_i + 5)/(x_i + 1)
DERIVATIVE:
g'(x) = -4/(x + 1)^2

We use MS Excel to evaluate the values.

x_r            |g'(x_r)|
+2.236067977   0.38
-2.236067977   2.62

Note that the absolute value of the slope for x_r = +√5 is less than 1; therefore the formula converges to that root. For x_r = -√5, the absolute value of the slope is greater than 1; therefore the formula does not converge to that root.

i   x_i        |ε_a| (%)   |ε_t| (%)
0   5.000000   -           123.607
1   1.666667   200.000     25.464
2   2.500000   33.333      11.803
3   2.142857   16.667      4.169
4   2.272727   5.714       1.639
5   2.222222   2.273       0.619
SAMPLE PROBLEM
SECOND FORMULA:
x_{i+1} = (3/2)x_i - 5/(2x_i)
DERIVATIVE:
g'(x) = 3/2 + 5/(2x^2)

We use MS Excel to evaluate the values.

x_r            |g'(x_r)|
+2.236067977   2.00
-2.236067977   2.00

Note that the absolute value of the slope for x_r = +√5 is greater than 1; therefore the formula does not converge to that root. For x_r = -√5, the absolute value of the slope is also greater than 1; therefore the formula does not converge to that root either. The formula diverges from both roots.

i    x_i          |ε_a| (%)   |ε_t| (%)
0    2.000000     -           10.557
1    1.750000     14.286      21.738
2    1.196429     46.269      46.494
3    -0.294909    505.694     113.189
4    8.034816     103.670     259.328
5    11.741078    31.567      425.077
6    17.398690    32.517      678.093
7    25.954346    32.964      1060.714
8    38.835196    33.168      1636.763
9    58.188420    33.260      2502.265
10   87.239666    33.301      3801.476
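The contrast between the two rearrangements can be reproduced in a few lines. The sketch below is ours (the names `iterate`, `g1`, and `g2` are hypothetical); it simply runs ten iterations of each formula and reports the distance from +√5, matching the two tables above.

```python
import math

root = math.sqrt(5)

def iterate(g, x0, n=10):
    """Apply x <- g(x) for n steps and return the final estimate."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

g1 = lambda x: (x + 5) / (x + 1)       # |g1'(root)| ~ 0.38 < 1: converges
g2 = lambda x: 1.5 * x - 5 / (2 * x)   # |g2'(root)| = 2 > 1: diverges

print(abs(iterate(g1, 5.0) - root))    # small: moving toward sqrt(5)
print(abs(iterate(g2, 2.0) - root))    # large: moving away from sqrt(5)
```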
NEWTON-RAPHSON METHOD
 The Newton-Raphson method is the most widely used root-locating numerical method. If the initial guess of the root is x_i, a tangent can be extended from the point (x_i, f(x_i)). The point where this tangent crosses the x axis usually represents an improved estimate of the root.
 The Newton-Raphson method can be derived on the basis of this geometrical interpretation, as shown in the figure. The first derivative at x_i is equivalent to the slope:
f'(x_i) = (f(x_i) - 0)/(x_i - x_{i+1})
which can be rearranged to yield
x_{i+1} = x_i - f(x_i)/f'(x_i)
which is called the Newton-Raphson formula.
SAMPLE PROBLEM
EXAMPLE 3:
Use the Newton-Raphson method to estimate the root of
f(x) = e^(-x) - x
employing an initial guess of x_0 = 0. Note that the true value of the root is x_r = 0.56714329.

SOLUTION:
The first derivative of the function can be evaluated as
f'(x) = -e^(-x) - 1
which can be substituted along with the original function into the Newton-Raphson formula:
x_{i+1} = x_i - f(x_i)/f'(x_i)
x_{i+1} = x_i - (e^(-x_i) - x_i)/(-e^(-x_i) - 1)

We use MS Excel to evaluate the values.

i   x_i          |ε_a| (%)      |ε_t| (%)
0   0.00000000   -              100.00000000
1   0.50000000   100.00000000   11.83885822
2   0.56631100   11.70929098    0.14675071
3   0.56714317   0.14672871     0.00002203
4   0.56714329   0.00002211     0.00000007

Thus, the approach rapidly converges on the true root. Notice that the true percent relative error at each iteration decreases much faster than it does in simple fixed-point iteration.
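The Newton-Raphson formula translates directly into code. The routine below is a minimal sketch (the name `newton_raphson` is ours, and the stopping test uses the approximate relative error from earlier in the lesson); applied to Example 3 it reaches the root in a handful of iterations.

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i).

    Stops when the approximate relative error |(x_new - x)/x_new|
    drops below tol, or after max_iter iterations.
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if x_new != 0 and abs((x_new - x) / x_new) < tol:
            return x_new
        x = x_new
    return x

# Example 3: f(x) = e^(-x) - x with x0 = 0
root = newton_raphson(lambda x: math.exp(-x) - x,
                      lambda x: -math.exp(-x) - 1,
                      0.0)
print(root)   # ~0.56714329
```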
NEWTON-RAPHSON METHOD
TERMINATION CRITERIA AND ERROR ESTIMATES:
 As with other root-location methods, the approximate percent relative error can be used as a termination criterion:
ε_a = |(x_{i+1} - x_i)/x_{i+1}| × 100%
 In addition, the Taylor series provides theoretical insight into the rate of convergence, expressed as E_{i+1} = O(E_i^2):
E_{t,i+1} ≈ [-f''(x_r)/(2f'(x_r))] E_{t,i}^2
 Thus the error should be roughly proportional to the square of the previous error. This behaviour is known as quadratic convergence. In other words, the number of significant figures of accuracy approximately doubles with each iteration.
SAMPLE PROBLEM
EXAMPLE 4:
The Newton-Raphson method is quadratically convergent; that is, the error is roughly proportional to the square of the previous error:
E_{t,i+1} ≈ [-f''(x_r)/(2f'(x_r))] E_{t,i}^2
Examine this formula and see if it applies to the results of the previous example.

SOLUTION:
The first derivative of the function is:
f'(x) = -e^(-x) - 1
The second derivative of the function is:
f''(x) = e^(-x)
We evaluate the first and second derivatives at the true root x_r = 0.56714329:
-f''(x_r)/(2f'(x_r)) = 0.18095
E_{t,i+1} ≈ 0.18095 E_{t,i}^2
We use MS Excel to evaluate the values for each iteration.
SAMPLE PROBLEM
NEWTON-RAPHSON METHOD:
f(x) = e^(-x) - x    f'(x) = -e^(-x) - 1    f''(x) = e^(-x)
E_{t,i+1} ≈ [-f''(x_r)/(2f'(x_r))] E_{t,i}^2    x_r = 0.56714329

i   x_i          -f''(x_r)/(2f'(x_r))   E_t (actual)    0.18095 E_{t,prev}^2 (predicted)
0   0.00000000   0.18095                0.5671432900
1   0.50000000   0.18095                0.0671432900    0.0582022390
2   0.56631100   0.18095                0.0008322868    0.0008157542
3   0.56714317   0.18095                0.0000001250    0.0000001253
4   0.56714329   0.18095                -0.0000000004   0.0000000000

Thus, this example illustrates that the error of the Newton-Raphson method for this case is, in fact, roughly proportional (by a factor of 0.18095) to the square of the error of the previous iteration.
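The factor 0.18095 and the predicted error column can be checked directly from the derivatives at the root. A minimal sketch, using only values stated in the example:

```python
import math

xr = 0.56714329
f1 = -math.exp(-xr) - 1      # f'(xr)
f2 = math.exp(-xr)           # f''(xr)
C = -f2 / (2 * f1)           # error-propagation constant
print(round(C, 5))           # 0.18095

# Predicted error of iteration 1 from the known error of iteration 0
E0 = xr - 0.0                # since x0 = 0, E_t,0 = xr
print(C * E0**2)             # ~0.0582, close to the actual E_t,1 = 0.0671
```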
NEWTON-RAPHSON METHOD
PROBLEMS WITH THE NEWTON-RAPHSON METHOD:
1. Multiple roots.
 Just like the bracketing methods, the Newton-Raphson method is designed to locate only one distinct root, based on an initial guess.

2. Slow convergence due to the nature of the function.
 Ex. f(x) = x^10 - 1,  f'(x) = 10x^9

i   x_i           |ε_a| (%)     |ε_t| (%)
0   0.50000000    -             50.00000000
1   51.65000000   99.03194579   5065.00000000
2   46.48500000   11.11111111   4548.50000000
3   41.83650000   11.11111111   4083.65000000
4   37.65285000   11.11111111   3665.28500000
5   33.88756500   11.11111111   3288.75650000

 After five iterations, the estimate is still very far from the true root x_r = 1.
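The slow crawl back toward x_r = 1 follows directly from the formula: for large x, x_{i+1} ≈ x_i - x_i^10/(10 x_i^9) = 0.9 x_i, a fixed 10% reduction per step, which matches the constant 11.1% approximate error in the table. A quick sketch (the helper name `newton_step` is ours):

```python
def newton_step(x):
    # One Newton-Raphson step for f(x) = x**10 - 1
    return x - (x**10 - 1) / (10 * x**9)

x = 0.5
for i in range(5):
    x = newton_step(x)
    print(i + 1, x)   # 51.65, 46.485, 41.8365, ... shrinking ~10% per step
```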
NEWTON-RAPHSON METHOD
3. Cases where an inflection point, f''(x) = 0, occurs in the vicinity of the root.

4. The Newton-Raphson method has a tendency to oscillate around a local maximum or minimum.
 Such oscillations may persist, or, when a near-zero slope is reached, the solution may be sent far from the area of interest.
NEWTON-RAPHSON METHOD
5. Cases where an initial guess close to one root jumps to another.

6. Cases where there is a zero slope. The tangent line is horizontal and never crosses the x axis, and the Newton-Raphson formula would result in a denominator of zero, which is undefined.
NEWTON-RAPHSON METHOD
IMPROVEMENTS TO NEWTON-RAPHSON ALGORITHMS:
1. A plotting routine should be included in the program.
2. At the end of the computation, the final root estimate should always be substituted into the original function to check whether the result is close to zero. This check partially guards against cases where slow or oscillating convergence leads to a small value of ε_a while the solution is still far from a root.
3. The program should always include an upper limit on the number of iterations to guard against oscillating, slowly convergent, or divergent solutions that could persist interminably.
4. The program should alert the user to, and account for, the possibility that f'(x) might equal zero at any time during the computation.
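Safeguards 2 through 4 can be folded into a single routine. The sketch below is ours (the name `safe_newton` and its tolerances are hypothetical, and the plotting routine of item 1 is omitted):

```python
import math

def safe_newton(f, df, x0, tol=1e-10, max_iter=50, residual_tol=1e-6):
    """Newton-Raphson with the safeguards listed above:
    an iteration cap, a zero-derivative check, and a final
    residual check by substituting back into f."""
    x = x0
    for _ in range(max_iter):              # 3. upper limit on iterations
        slope = df(x)
        if slope == 0:                     # 4. guard against f'(x) = 0
            raise ZeroDivisionError("zero derivative encountered")
        x_new = x - f(x) / slope
        if x_new != 0 and abs((x_new - x) / x_new) < tol:
            x = x_new
            break
        x = x_new
    if abs(f(x)) > residual_tol:           # 2. substitute into f(x)
        raise RuntimeError("did not converge to a root")
    return x

print(safe_newton(lambda x: math.exp(-x) - x,
                  lambda x: -math.exp(-x) - 1, 0.0))   # ~0.56714329
```

Starting the same routine at a stationary point, e.g. x_0 = 0 for f(x) = x^2 - 1, trips the zero-derivative guard instead of dividing by zero.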
REFERENCE
Chapra, S. C., & Canale, R. P. (2010). Numerical Methods for
Engineers (6th Edition). McGraw-Hill.
THE END

You might also like