
CS 323 — Numerical analysis and computing, Spring 2025

Root finding: Secant method


Peng Zhang
2025-02-07
Today’s plan
• Root-finding: secant method

• Reference: [Atkinson-Han] Chapter 3.3


Root finding
• Given a function f : ℝ → ℝ, we want to find x* ∈ ℝ such that f(x*) = 0.

• x* is called a root or zero of f.

• Easy to solve analytically:

• f(x) = 3x − 2, f(x) = x² + 2x − 3

• Hard to solve analytically:

• f(x) = x² + e^cos(x) − 3

• We have covered: the bisection method, Newton’s method


Recap: Newton’s method
• Consider f : ℝ → ℝ with a root x*.
• We have an estimate of x*, denoted by x0, which is assumed to be near x*.
• We approximate the graph y = f(x) with a tangent line, and approximate the root x* with the root of the straight line (a short Python sketch follows below).
• Can we use a different straight-line approximation?
• This leads to the secant method.
[Figure: the graph of f near its root x*, our estimate of x*, and the root of the tangent line.]
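To make the recap concrete, here is a minimal Newton-iteration sketch in Python. The helper name newton, the step count, and the starting guess are illustrative assumptions; the test function f(x) = x² + 2x − 3 is the one from the earlier slide.

```python
# Minimal sketch of Newton's method: repeatedly replace the current guess
# by the root of the tangent line at (x, f(x)).
def newton(f, df, x0, num_steps=8):
    x = x0
    for _ in range(num_steps):
        x = x - f(x) / df(x)  # root of the tangent line through (x, f(x))
    return x

# Example from the earlier slide: f(x) = x^2 + 2x - 3 has roots 1 and -3.
print(newton(lambda x: x**2 + 2*x - 3, lambda x: 2*x + 2, x0=3.0))  # ≈ 1.0
```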
Secant method
• Assume two initial guesses for x*: x0, x1.

• Two points (x0, f(x0)), (x1, f(x1)) determine a straight line, called the secant line.

• The secant line approximates the graph y = f(x), and its root x2 approximates x*.
[Figures: two configurations of the secant line through (x0, f(x0)) and (x1, f(x1)); in each, x2 is the root of the secant line and approximates x*.]
Formula for x2
• The secant line through (x0, f(x0)) and (x1, f(x1)) has slope (f(x1) − f(x0)) / (x1 − x0).
• Define p(x) = f(x1) + (x − x1) ⋅ (f(x1) − f(x0)) / (x1 − x0); this is the equation of the secant line.
• Formula for its root x2: solving p(x2) = 0 gives
  x2 − x1 = − f(x1) ⋅ (x1 − x0) / (f(x1) − f(x0)),  i.e.  x2 = x1 − f(x1) ⋅ (x1 − x0) / (f(x1) − f(x0)).
[Figure: the secant line, its root x2, and the root x* of f.]
Secant method
• Drop x0, use x1, x2 as a new set of approximate values for x*. This leads to an improved value x3. Repeat this process.
• General iteration formula (a Python sketch follows below):
  xn+1 = xn − f(xn) ⋅ (xn − xn−1) / (f(xn) − f(xn−1)),  n ≥ 1.
• This is called the secant method.
• It is called a two-point method, since two approximate values are needed to obtain an improved value. Another example: the bisection method.
[Figure: successive secant lines whose roots approach x*.]
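A minimal sketch of this iteration in Python, using a fixed number of steps for simplicity; the function name secant and its defaults are illustrative assumptions, and the tolerance-based stopping test discussed later in the deck is omitted here.

```python
def secant(f, x0, x1, num_steps=10):
    """Secant method sketch:
    x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(num_steps):
        if f1 == f0:                          # secant slope unavailable; stop early
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # root of the current secant line
        x0, f0 = x1, f1                       # drop the oldest point
        x1, f1 = x2, f(x2)
    return x1

# Example from an earlier slide: f(x) = 3x - 2 has root 2/3.
print(secant(lambda x: 3 * x - 2, 0.0, 1.0))  # ≈ 0.6667
```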
Secant vs Newton iteration
• Secant method:
  xn+1 = xn − f(xn) ⋅ (xn − xn−1) / (f(xn) − f(xn−1)) = xn − f(xn) / [(f(xn) − f(xn−1)) / (xn − xn−1)]
• Newton’s method:
  xn+1 = xn − f(xn) / f′(xn)
• For xn ≈ xn−1, f′(xn) ≈ (f(xn) − f(xn−1)) / (xn − xn−1)
• The secant update replaces the slope of the tangent line, f′(xn), with the slope of the secant line, (f(xn) − f(xn−1)) / (xn − xn−1).
Example 3.3.1 in [Atkinson-Han]
• Find a root of f(x) = x⁶ − x − 1 using the secant method.
• Choose x0, x1.
• For n ≥ 1, apply the iteration (a runnable sketch follows below):
  xn+1 = xn − f(xn) ⋅ (xn − xn−1) / (f(xn) − f(xn−1)).
• The secant method converges faster than the bisection method, but slower than Newton’s method. The first few iterations don’t converge fast, but as xn gets closer to x*, the convergence rate increases.
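A self-contained run of this example in Python. The initial guesses x0 = 2 and x1 = 1 are assumptions for illustration and may differ from the values used in [Atkinson-Han]; the iteration is exactly the formula above.

```python
# Secant iterations for f(x) = x**6 - x - 1 (Example 3.3.1).
# The starting guesses are assumed for illustration.
f = lambda x: x**6 - x - 1

x0, x1 = 2.0, 1.0
f0, f1 = f(x0), f(x1)
for n in range(1, 11):
    if f1 == f0:  # iterates (and residuals) have stopped changing
        break
    x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
    x0, f0, x1, f1 = x1, f1, x2, f(x2)
    print(f"n = {n:2d}   x = {x1:.12f}   f(x) = {f1: .2e}")
```

With these starting values the iterates settle near x* ≈ 1.1347; the first few steps improve slowly and the later ones much faster, as noted above.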
Convergence
• For the secant method to converge, we require the same assumption on f as
Newton’s method:

• f has two continuous derivatives for all x in some interval around x* (root)

• f′(x*) ≠ 0 (i.e., the graph of y = f(x) is not tangent to the x-axis where it intersects it at x = x*; this implies that x* is a simple root)

• Since f′ is continuous, f′(x) ≠ 0 for all x near x*

• The case f′(x*) = 0 is treated in [AH] Chapter 3.5 (won’t cover in class)
Convergence
• Claim 1: Under the above assumption, for any n ≥ 1,
  • x* − xn+1 = (x* − xn)(x* − xn−1) ⋅ (− f′′(ξn) / (2 f′(ζn)))
  • where ζn is between xn−1 and xn, and ξn is between the largest and the smallest of xn−1, xn, x*.
• Similar to the error in Newton’s method:
  • x* − xn+1 = (x* − xn)² ⋅ (− f′′(cn) / (2 f′(xn))), where cn is between xn and x*.
Convergence
• Claim 1 (approximate form): Under the above assumption, for any n ≥ 1, if xn, xn−1 are sufficiently close to x*, then
  • x* − xn+1 ≈ (x* − xn)(x* − xn−1) ⋅ (− f′′(x*) / (2 f′(x*)))
  • i.e. en+1 ≈ en en−1 M, writing M = − f′′(x*) / (2 f′(x*)) and en = x* − xn.
• Similar to the error in Newton’s method:
  • x* − xn+1 ≈ (x* − xn)² ⋅ (− f′′(x*) / (2 f′(x*))), i.e. en+1 ≈ en² M.
Convergence
• Let M = − f′′(x*) / (2 f′(x*)) and en = x* − xn for n ≥ 0.
• Secant method: en+1 ≈ en en−1 M, i.e. M en+1 ≈ (M en)(M en−1).
• Similar to the error in Newton’s method: en+1 ≈ en² M, i.e. M en+1 ≈ (M en)².
• Let δn = |M en| for n ≥ 0. Then δn+1 ≈ δn δn−1 for the secant method, and δn+1 ≈ δn² for Newton’s method.
Convergence
• Claim 2: Let δn = | (− f′′(x*) / (2 f′(x*))) ⋅ (x* − xn) |. Then, as xn → x*,
  • δn+1 ≈ δn^r, where r = (1 + √5)/2 ≈ 1.62 (the Golden ratio).
• Sketch: we know δn+1 ≈ δn δn−1. Assume δn+1 ≈ δn^r; applying the same relation to δn ≈ δn−1^r gives r² = r + 1, whose positive root is r = (1 + √5)/2 (a worked version follows below).
• Example: if δn = 10^(−5), then for the secant method δn+1 ≈ (10^(−5))^1.62 ≈ 10^(−8.1).
• Recall Newton’s method: δn+1 ≈ δn², so δn+1 ≈ (10^(−5))² = 10^(−10).
• Newton’s method converges faster than the secant method.
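A worked version of the derivation of r sketched above, starting from the relation δn+1 ≈ δn δn−1 and the ansatz δn+1 ≈ δn^r:

```latex
% Order of convergence of the secant method (sketch).
% Assume \delta_{n+1} \approx \delta_n^{\,r} for some fixed r > 0 as n \to \infty.
\begin{align*}
\delta_{n+1} &\approx \delta_n\,\delta_{n-1}
  && \text{(error relation for the secant method)} \\
\delta_{n-1}^{\,r^{2}} \approx \delta_n^{\,r} \approx \delta_{n+1}
  &\approx \delta_n\,\delta_{n-1}
  \approx \delta_{n-1}^{\,r}\,\delta_{n-1} = \delta_{n-1}^{\,r+1}
  && \text{(apply the ansatz to $\delta_{n+1}$ and to $\delta_n$)} \\
\Rightarrow\quad r^{2} &= r + 1
  \quad\Rightarrow\quad r = \frac{1+\sqrt{5}}{2} \approx 1.62 .
\end{align*}
```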


Order of convergence
• Let En := |x* − xn| (or let En be a bound on |x* − xn|).
• Secant method: En+1 ≈ M^(r−1) ⋅ En^r, with r = (1 + √5)/2 ≈ 1.62.
  • Superlinear convergence (a numerical check follows below).
• Newton’s method: En+1 ≈ M ⋅ En².
  • Quadratic convergence.
• Bisection method: En+1 ≤ (1/2) ⋅ En.
  • Linear convergence.
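As a quick numerical check of these rates, the sketch below estimates the observed order of the secant method on f(x) = x⁶ − x − 1 via rₙ ≈ log(En+1/En) / log(En/En−1). The starting guesses and the use of the final iterate as a reference root are assumptions for illustration.

```python
import math

# Empirical order of convergence of the secant method on f(x) = x**6 - x - 1.
f = lambda x: x**6 - x - 1

def secant_iterates(f, x0, x1, num_steps):
    xs = [x0, x1]
    for _ in range(num_steps):
        f0, f1 = f(xs[-2]), f(xs[-1])
        if f1 == f0:  # iteration has stalled (typically at machine precision)
            break
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

xs = secant_iterates(f, 2.0, 1.0, 20)
x_ref = xs[-1]                              # reference root: the final iterate
errs = [abs(x - x_ref) for x in xs[:-1]]    # E_n for the earlier iterates
for n in range(1, len(errs) - 1):
    if min(errs[n - 1], errs[n], errs[n + 1]) > 0 and errs[n] != errs[n - 1]:
        r = math.log(errs[n + 1] / errs[n]) / math.log(errs[n] / errs[n - 1])
        print(f"n = {n:2d}   estimated order ≈ {r:.2f}")
```

After the first few steps the estimates settle near r ≈ 1.6, consistent with the superlinear rate above.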
Stopping criteria
• Secant iteration: xn+1 = xn − f(xn) ⋅ (xn − xn−1) / (f(xn) − f(xn−1)).
• Stop the iterations when |xn+1 − xn| < tol (a pre-specified tolerance).
• By the mean-value theorem:
  • f(xn) − f(x*) = f′(c)(xn − x*) for some c between x* and xn.
  • Since f(x*) = 0, we have x* − xn = − f(xn) / f′(c) ≈ − f(xn) / f′(xn) ≈ xn+1 − xn.
• So |xn+1 − xn| is an estimate of the error |x* − xn| (a sketch with this stopping test follows below).
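A sketch of the secant loop with this stopping test; the function name, the default tolerance, and the iteration cap are illustrative assumptions.

```python
def secant_with_tol(f, x0, x1, tol=1e-10, max_steps=100):
    """Secant method that stops when |x_{n+1} - x_n| < tol (or after max_steps)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_steps):
        if f1 == f0:                # secant slope unavailable; give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:      # |x_{n+1} - x_n| < tol: accept x2
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Example 3.3.1 again: root of x^6 - x - 1 (starting guesses assumed).
print(secant_with_tol(lambda x: x**6 - x - 1, 2.0, 1.0))
```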
Comparison of methods
• Bisection — assumption on f: continuous; initial guess: an interval (a, b) such that f(a) f(b) < 0; each iteration: needs an evaluation of f(x); convergence: always converges, linear rate.
• Newton — assumption on f: continuous and differentiable; initial guess: an x0 near x*; each iteration: needs evaluations of f(x) and f′(x); convergence: converges if f′, f′′ exist and are continuous and f′(x*) is nonzero, quadratic rate.
• Secant — assumption on f: continuous; initial guess: two points x0, x1 near x*; each iteration: needs an evaluation of f(x); convergence: converges if f′, f′′ exist and are continuous and f′(x*) is nonzero, superlinear (but not quadratic) rate.
Proof sketch of Claim 1
• Secant iteration: xn+1 = xn − f(xn) ⋅ (xn − xn−1) / (f(xn) − f(xn−1)).
• Define en = x* − xn for n ≥ 0, so xn = x* − en.
• Substituting into the iteration,
  en+1 = en − f(x* − en) ⋅ (en − en−1) / (f(x* − en) − f(x* − en−1)).
• By Taylor’s theorem with remainder: f(x* + e) = f(x*) + e f′(x*) + (e²/2) f′′(ξ), where ξ is between x* and x* + e.
• Apply this with e = −en and e = −en−1; since f(x*) = 0,
  f(x* − en) = −en f′(x*) + (en²/2) f′′(ξn), and similarly for f(x* − en−1).
• Approximate f′′(ξ) ≈ f′′(x*). Substituting these expansions into the expression for en+1, the leading terms cancel and
  en+1 ≈ en en−1 ⋅ (− f′′(x*) / (2 f′(x*))).
