Chapter 6
Roots: Open Methods
Figure: (a) bracketing method; (b) diverging open method; (c) converging open method (note the speed!)
Simple Fixed-Point Iteration
• Rearrange the function f(x) = 0 so that x is on the left-hand side of the equation: x = g(x)
• Use the new function g to predict a new value of x; that is, $x_{i+1} = g(x_i)$
• The approximate error is given by:
$$\varepsilon_a = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100\%$$
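A minimal Python sketch of this iteration (the function name fixed_point, the tolerance, and the iteration cap are illustrative choices, not from the slides):

```python
def fixed_point(g, x0, tol=1e-4, max_iter=50):
    """Simple fixed-point iteration: repeatedly apply x = g(x).

    tol is the stopping threshold on the approximate percent
    relative error |(x_{i+1} - x_i) / x_{i+1}| * 100.
    """
    x = x0
    for i in range(1, max_iter + 1):
        x_new = g(x)
        if x_new != 0 and abs((x_new - x) / x_new) * 100 < tol:
            return x_new, i
        x = x_new
    return x, max_iter
```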
Example
• Solve $f(x) = e^{-x} - x$
• Rewrite as x = g(x) by isolating x (for example, $x = e^{-x}$)
• Start with an initial guess (here, $x_0 = 0$)
i    x_i       |ε_a| (%)    |ε_t| (%)    |ε_t|_i / |ε_t|_{i-1}
0    0.0000       -          100.000          -
1    1.0000    100.000        76.322        0.763
2    0.3679    171.828        35.135        0.460
3    0.6922     46.854        22.050        0.628
4    0.5005     38.309        11.755        0.533
• Continue until some tolerance is reached.
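A quick illustrative check of the table values (this loop is a sketch, not part of the original slides):

```python
import math

x = 0.0
for i in range(1, 5):
    x_new = math.exp(-x)                      # x_{i+1} = g(x_i) = e^{-x_i}
    ea = abs((x_new - x) / x_new) * 100       # approximate percent error
    print(f"{i}  x = {x_new:.4f}  |ea| = {ea:.3f}%")
    x = x_new
```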
Convergence
• Convergence of the simple fixed-point iteration method requires that the derivative of g(x) near the root have a magnitude less than 1.
a) Convergent (monotone), 0 ≤ g′ < 1
b) Convergent (oscillating), −1 < g′ ≤ 0
c) Divergent (monotone), g′ > 1
d) Divergent (oscillating), g′ < −1
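For the example above, $g(x) = e^{-x}$ gives $g'(x) = -e^{-x}$. At the root $x^* \approx 0.5671$, $|g'(x^*)| \approx 0.567 < 1$ with $g' < 0$, so the iteration converges with oscillating iterates (case b), consistent with the error ratios of roughly 0.5 to 0.6 in the table.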
Newton-Raphson Method
• Based on forming the tangent line to the f(x) curve at some guess $x_i$, then following the tangent line to where it crosses the x-axis.
$$f'(x_i) = \frac{f(x_i) - 0}{x_i - x_{i+1}}$$
which can be rearranged to
$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$$
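A minimal sketch of this update in Python (the name newton_raphson and the stopping rule are illustrative; it assumes the derivative fprime is available as a function):

```python
def newton_raphson(f, fprime, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson: follow the tangent at x_i to the x-axis."""
    x = x0
    for i in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)   # x_{i+1} = x_i - f(x_i)/f'(x_i)
        if abs(x_new - x) < tol:
            return x_new, i
        x = x_new
    return x, max_iter
```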
Pros and Cons
• Pro: the error of the (i+1)th iteration is roughly proportional to the square of the error of the ith iteration; this is called quadratic convergence.
• Con: some functions show slow or poor convergence.
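A quick illustration of quadratic convergence on the earlier example $f(x) = e^{-x} - x$ (an illustrative run; the hard-coded root value is used only to measure the error):

```python
import math

f = lambda x: math.exp(-x) - x
fp = lambda x: -math.exp(-x) - 1
root = 0.5671432904097838          # known root of e^{-x} = x

x = 0.0
for i in range(4):
    x = x - f(x) / fp(x)
    print(i + 1, x, abs(x - root))  # error roughly squares each step
```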
Secant Methods
• A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative: there are certain functions whose derivatives may be difficult or inconvenient to evaluate.
• For these cases, the derivative can be approximated by a backward finite divided difference:
$$f'(x_i) \approx \frac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}$$
Secant Methods (cont)
• Substituting this approximation for the derivative into the Newton-Raphson equation gives:
$$x_{i+1} = x_i - \frac{f(x_i)\,(x_{i-1} - x_i)}{f(x_{i-1}) - f(x_i)}$$
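A minimal secant-method sketch in Python (the name secant and the stopping rule are illustrative; note that the method needs two starting guesses):

```python
def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant method: Newton-Raphson with the derivative replaced by a
    backward finite divided difference through the last two iterates."""
    x_prev, x = x0, x1
    for i in range(1, max_iter + 1):
        # x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
        x_new = x - f(x) * (x_prev - x) / (f(x_prev) - f(x))
        if abs(x_new - x) < tol:
            return x_new, i
        x_prev, x = x, x_new
    return x, max_iter
```

For example, secant(lambda x: math.exp(-x) - x, 0.0, 1.0) converges to x ≈ 0.5671 in a handful of iterations, matching the root found by the other methods above.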