Open Method Examples

Chapter 6

Roots: Open Methods


Chapter Objectives
Recognizing the difference between bracketing and open methods for root location.
Understanding the fixed-point iteration method and how you can evaluate its convergence characteristics.
Knowing how to solve a roots problem with the Newton-Raphson method and appreciating the concept of quadratic convergence.
Knowing how to implement both the secant and the modified secant methods.
Knowing how to use MATLAB's fzero function to estimate roots.
Open Methods
Open methods differ from bracketing methods in that they require only a single starting value, or two starting values that do not necessarily bracket a root.
Open methods may diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods.
Graphical Comparison of Methods

a) Bracketing method
b) Diverging open method
c) Converging open method
Simple Fixed-Point Iteration
Rearrange the function f(x) = 0 so that x is on the left-hand side of the equation: x = g(x)
Use the new function g to predict a new value of x; that is, x_{i+1} = g(x_i)
The approximate relative error is given by:

    ε_a = | (x_{i+1} − x_i) / x_{i+1} | × 100%
Example
Solve f(x) = e^(−x) − x = 0
Rewrite as x = g(x) by isolating x (here, x = e^(−x))
Start with an initial guess (here, x_0 = 0)

 i    x_i      |ε_a| %    |ε_t| %    |ε_t|_i / |ε_t|_{i−1}
 0   0.0000       —       100.000          —
 1   1.0000    100.000     76.322        0.763
 2   0.3679    171.828     35.135        0.460
 3   0.6922     46.854     22.050        0.628
 4   0.5005     38.309     11.755        0.533

Continue until some tolerance is reached
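The procedure above (rearrange, iterate, stop when ε_a falls below a tolerance) can be sketched in Python. The chapter itself uses MATLAB; this Python sketch, with the function name `fixed_point` chosen here for illustration, reproduces the ε_a column of the table for x = e^(−x) with x_0 = 0.

```python
import math

def fixed_point(g, x0, tol=1e-4, max_it=50):
    """Simple fixed-point iteration x_{i+1} = g(x_i).

    Returns the final estimate and the approximate relative
    error (in percent) recorded at each iteration.
    """
    x = x0
    errors = []
    for _ in range(max_it):
        x_new = g(x)
        # approximate relative error, epsilon_a = |(x_new - x)/x_new| * 100%
        ea = abs((x_new - x) / x_new) * 100 if x_new != 0 else float("inf")
        errors.append(ea)
        x = x_new
        if ea < tol:
            break
    return x, errors

# The chapter's example: f(x) = e^-x - x rearranged as x = e^-x, x0 = 0
x, errors = fixed_point(lambda x: math.exp(-x), 0.0)
print(x)          # close to the true root 0.56714...
print(errors[1])  # matches the table's iteration-2 error (about 171.8%)
```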


Convergence
Convergence of the simple fixed-point iteration method:
a) Convergent
b) Convergent
c) Divergent
d) Divergent
The iteration converges when |g′(x)| < 1 in the neighborhood of the root; when |g′(x)| > 1 there, the iterates move away from the root and the method diverges.
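The convergence criterion can be seen numerically with this chapter's own example, f(x) = e^(−x) − x. The rearrangement x = e^(−x) has |g′(x)| = e^(−x) < 1 near the root, while the alternative rearrangement x = −ln(x) has |g′(x)| = 1/x ≈ 1.76 > 1 there. A Python sketch (the root value 0.5671433 is taken from the example):

```python
import math

root = 0.5671433  # root of e^-x = x, from the worked example

def err(x):
    return abs(x - root)

# Convergent rearrangement: g(x) = e^-x, |g'(x)| < 1 near the root
x = 0.5
conv = [err(x)]
for _ in range(4):
    x = math.exp(-x)
    conv.append(err(x))

# Divergent rearrangement: g(x) = -ln(x), |g'(x)| = 1/x > 1 at the root
x = 0.5
div = [err(x)]
for _ in range(4):
    x = -math.log(x)
    div.append(err(x))

print(conv)  # true errors shrink each iteration
print(div)   # true errors grow each iteration
```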
Newton-Raphson Method
Based on forming the tangent line to the f(x)
curve at some guess x, then following the
tangent line to where it crosses the x-axis.


The tangent at x_i has slope f'(x_i), so:

    f'(x_i) = (f(x_i) − 0) / (x_i − x_{i+1})

which can be solved for the improved guess:

    x_{i+1} = x_i − f(x_i) / f'(x_i)
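The update formula translates directly into a short Python sketch (the helper name `newton_raphson` is chosen here for illustration), applied to the same example f(x) = e^(−x) − x with derivative f'(x) = −e^(−x) − 1:

```python
import math

def newton_raphson(f, fprime, x0, tol=1e-8, max_it=20):
    """Newton-Raphson: follow the tangent at x_i to the x-axis."""
    x = x0
    for _ in range(max_it):
        # x_{i+1} = x_i - f(x_i) / f'(x_i)
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = e^-x - x, f'(x) = -e^-x - 1, starting from x0 = 0
root = newton_raphson(lambda x: math.exp(-x) - x,
                      lambda x: -math.exp(-x) - 1,
                      0.0)
print(root)  # converges to the root near 0.5671 in a handful of iterations
```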
Pros and Cons
Pro: The error of the (i+1)th iteration is roughly proportional to the square of the error of the ith iteration; this is called quadratic convergence.
Con: Some functions show slow or poor convergence, or even divergence!
Secant Methods, 1
A potential problem in implementing the
Newton-Raphson method is the evaluation of
the derivative - there are certain functions
whose derivatives may be difficult or
inconvenient to evaluate.
For these cases, the derivative can be
approximated by a backward finite divided
difference:
    f'(x_i) ≅ (f(x_{i−1}) − f(x_i)) / (x_{i−1} − x_i)
Secant Methods, 2
Substituting this approximation for the derivative into the Newton-Raphson equation gives the secant method:

    x_{i+1} = x_i − f(x_i)(x_{i−1} − x_i) / (f(x_{i−1}) − f(x_i))

Note: this method requires two initial estimates of x but does not require an analytical expression of the derivative.
The modified secant method instead perturbs the current estimate by a small fraction δ, approximating the derivative as f'(x_i) ≅ (f(x_i + δx_i) − f(x_i)) / (δx_i), so that only one initial estimate is needed.
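Both variants can be sketched in Python on the running example f(x) = e^(−x) − x; the function names and the perturbation value δ = 10⁻⁶ are illustrative choices, not part of the chapter.

```python
import math

def secant(f, x0, x1, tol=1e-8, max_it=50):
    """Secant method: two initial estimates, no analytical derivative."""
    for _ in range(max_it):
        # x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

def modified_secant(f, x0, delta=1e-6, tol=1e-8, max_it=50):
    """Modified secant: one estimate plus a small fractional perturbation."""
    x = x0
    for _ in range(max_it):
        fx = f(x)
        # derivative approximated by (f(x + delta*x) - f(x)) / (delta*x)
        x_new = x - delta * x * fx / (f(x + delta * x) - fx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: math.exp(-x) - x
r1 = secant(f, 0.0, 1.0)      # two initial estimates
r2 = modified_secant(f, 1.0)  # one initial estimate (nonzero, so delta*x != 0)
```

Both calls converge to the same root near 0.5671 found earlier by fixed-point iteration.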