
MAT360 Section Summary: 2.4
Error Analysis for Iterative Methods
Summary
Is there a good way of getting a handle on the number of iterates in Newton’s method? That’s essentially the subject of this section.
We learned a bit about this previously: in section 2.2 we obtained useful bounds for fixed-point methods, e.g.

$$|p_n - p| \le \frac{k^n}{1-k}\,|p_1 - p_0| \qquad (1)$$

where g(x) ∈ [a, b] ∀x ∈ [a, b], and |g′(x)| ≤ k < 1 on [a, b], an interval which brackets the fixed point p. You can use this for Newton’s method, but perhaps we can do better, since the convergence is better (quadratic, rather than linear).
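As an aside (a minimal sketch of my own, not from the text): bound (1) can be solved for the number of iterations needed to reach a given tolerance, since kⁿ|p₁ − p₀|/(1 − k) ≤ tol gives n ≥ log(tol·(1 − k)/|p₁ − p₀|)/log k. In Python:

```python
import math

def iterations_needed(k, p0, p1, tol):
    """Smallest n with k**n/(1-k) * |p1 - p0| <= tol, per bound (1).

    Assumes 0 < k < 1, where k bounds |g'(x)| on [a, b].
    """
    n = math.log(tol * (1 - k) / abs(p1 - p0)) / math.log(k)
    return math.ceil(n)

# With k ~ 0.27 and |p1 - p0| ~ 1/9, as in the worked example later in
# these notes, a 1e-5 tolerance gives 8 -- matching the estimate quoted
# there (assuming that was the tolerance intended).
print(iterations_needed(0.27, -3.0, -3.0 + 1/9, 1e-5))  # -> 8
```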

Theorem 2.5 (from section 2.3): Let f ∈ C²[a, b]. If p ∈ [a, b] is such that f(p) = 0 and f′(p) ≠ 0, then ∃δ > 0 such that Newton’s method generates a sequence {pₙ} converging to p for any initial approximation p₀ ∈ [p − δ, p + δ].
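For concreteness, here is a minimal sketch of the iteration the theorem is about (my own illustration; the names and the successive-iterate stopping rule are choices, not from the text):

```python
def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n)."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / fprime(p)
        if abs(p_next - p) < tol:  # successive iterates agree
            return p_next
        p = p_next
    raise RuntimeError("Newton's method did not converge")
```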

This result is “obvious” (I claimed, in 2.2), since

$$|p_{n+1} - p| \approx \frac{1}{2}|g''(p)|\,|p_n - p|^2$$

when pₙ gets into close proximity (i.e. a δ-neighborhood) of p. We can be assured of “contracting” as long as the magnitude of g″(x) is bounded (e.g. |g″(x)| < M) in that neighborhood, so long as

$$\frac{1}{2}M|p_n - p| < 1$$

This is obviously true when pₙ = p, and we simply choose |pₙ − p| < 2/M to be assured that we’ll converge by the Fixed-Point Theorem (2.3).

Definition 2.6: Suppose that {pₙ} is a sequence that converges to p, with pₙ ≠ p for all n. If positive constants λ and α exist with

$$\lim_{n\to\infty} \frac{|p_{n+1} - p|}{|p_n - p|^{\alpha}} = \lambda$$

then the sequence converges to p of order α, with asymptotic error constant λ.

1. If α = 1, the sequence is linearly convergent (e.g. a standard convergent fixed-point iteration, with g′(p) ≠ 0), whereas

2. if α = 2, the sequence is quadratically convergent (e.g. Newton’s method, for which g′(p) = 0 at a simple root; cf. Theorem 2.8 below).

A quick numerical way of estimating α from computed iterates is sketched below.
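One way to see the definition in action (my own illustrative sketch): estimate α from successive errors eₙ = |pₙ − p|, since eₙ₊₁ ≈ λ eₙ^α gives α ≈ log(eₙ₊₁/eₙ)/log(eₙ/eₙ₋₁) once the λ’s roughly cancel.

```python
import math

def estimate_order(errors):
    """Estimate alpha from errors e_n = |p_n - p| via
    alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    return [math.log(errors[i + 1] / errors[i]) /
            math.log(errors[i] / errors[i - 1])
            for i in range(1, len(errors) - 1)]

# A sequence with e_{n+1} = e_n**2 (pure quadratic convergence):
print(estimate_order([1e-1, 1e-2, 1e-4, 1e-8]))  # -> [2.0, 2.0]
```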

Q: What does asymptotic mean?


Q: Is bisection linearly convergent?¹ Contrast this with Exercise #11, for your homework.

Theorem 2.7: Let g ∈ C[a, b] be such that g(x) ∈ [a, b], ∀x ∈ [a, b]. Suppose, in addition, that g′ is continuous on (a, b) and a positive constant k < 1 exists with |g′(x)| ≤ k ∀x ∈ (a, b). If g′(p) ≠ 0, then for any number p₀ in [a, b], the sequence of iterates pₙ = g(pₙ₋₁) for n ≥ 1 converges only linearly to the unique fixed point p ∈ [a, b].

Proof (by the MVT)
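A quick numerical illustration of the theorem (my own example; g(x) = cos x is not from the text). The error ratios settle near |g′(p)| ≈ 0.6736, the asymptotic error constant with α = 1:

```python
import math

p_star = 0.7390851332151607  # the fixed point of cos(x)
p, err_prev = 0.5, abs(0.5 - 0.7390851332151607)
for n in range(1, 9):
    p = math.cos(p)              # fixed-point step p_n = g(p_{n-1})
    err = abs(p - p_star)
    print(n, p, err / err_prev)  # ratio tends to |g'(p)| = sin(p_star) ~ 0.6736
    err_prev = err
```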


¹ Sui-Sun Cheng and Tzon-Tzer Lu, “The Bisection Algorithm is Not Linearly Convergent,” College Mathematics Journal 16(1) (1985), pp. 56–57.

Theorem 2.8: Let p be a solution of the equation x = g(x). Suppose that g′(p) = 0 and g″ is continuous and strictly bounded by M on an open interval I containing p. Then ∃δ > 0 such that, for p₀ ∈ [p − δ, p + δ], the sequence {pₙ = g(pₙ₋₁)} converges at least quadratically to p. Moreover, for sufficiently large values of n,

$$|p_{n+1} - p| < \frac{M}{2}|p_n - p|^2$$

(Hence, Newton’s method is quadratic.)

Proof (by Taylor series, and Fixed-Point theorem)

Example: Here’s where we can make use of the quadratic convergence to address our opening question about the number of iterates of Newton’s method. For problem #5b, for example, with

f(x) = x³ + 3x² − 1,

p₀ = −3, and a solution p₃ = −2.87939, we use

$$g(x) = x - \frac{x^3 + 3x^2 - 1}{3x^2 + 6x}$$

and then compute the first and second derivatives of g. We note that by Theorem 2.2 there is a unique fixed point in the interval [−3, −2.74]; also we see that g″ has a maximum magnitude of |g″(x)| < 2.5 on the interval [−3, −2.74]. |g′| is bounded by about 0.27 on that interval, so we could use Equation (1) above, with k = 0.27, to make our estimate (it gives 8 iterations).
We can do better, of course!
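Indeed, running the Newton sketch from earlier on this f (a check of my own, not in the original notes) shows the third iterate already agreeing with p₃ = −2.87939:

```python
f = lambda x: x**3 + 3*x**2 - 1
fprime = lambda x: 3*x**2 + 6*x

p = -3.0
for n in range(1, 4):
    p = p - f(p) / fprime(p)  # Newton step
    print(n, p)
# prints approximately: -2.8888889, -2.8794515, -2.8793853
```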

Theorem: the secant method is of order the golden mean, α = (1 + √5)/2 ≈ 1.618.

Motivation: #14
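For reference, a minimal secant-method sketch (my own; not from the text). Feeding its errors to estimate_order above shows the ratios approaching (1 + √5)/2 ≈ 1.618:

```python
def secant(f, p0, p1, tol=1e-12, max_iter=50):
    """Secant method: Newton's method with f' replaced by a
    difference quotient (assumes f(p1) != f(p0) at each step)."""
    for _ in range(max_iter):
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    raise RuntimeError("secant method did not converge")
```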

It’s possible to create methods that are of higher order than Newton’s, but one does so at the expense of more constraints on f (e.g. f ∈ C³[a, b]) and greater computational complexity:

Example: #13, p. 83
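One classical example of such a scheme is Halley’s method, which requires f ∈ C³ near the root and converges cubically; this sketch is my own illustration, and Exercise #13 may have a different method in mind:

```python
def halley(f, fp, fpp, p0, tol=1e-12, max_iter=50):
    """Halley's method: cubic convergence, at the cost of needing f''."""
    p = p0
    for _ in range(max_iter):
        fx, fpx, fppx = f(p), fp(p), fpp(p)
        p_next = p - 2 * fx * fpx / (2 * fpx**2 - fx * fppx)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("Halley's method did not converge")
```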

Definition 2.9: A solution p of f(x) = 0 is a zero of multiplicity m of f if, for x ≠ p, we can write f(x) = (x − p)ᵐ q(x), where lim_{x→p} q(x) ≠ 0.

Theorem 2.10: f ∈ C¹[a, b] has a simple zero at p ∈ (a, b) ⟺ f(p) = 0, but f′(p) ≠ 0.

Theorem 2.11: f ∈ Cᵐ[a, b] has a zero of multiplicity m at p ∈ (a, b) ⟺ 0 = f(p) = f′(p) = ⋯ = f⁽ᵐ⁻¹⁾(p), but f⁽ᵐ⁾(p) ≠ 0.

To handle roots p of f of multiplicity m > 1, we use a trick called “deflation”. Consider

$$\mu(x) = \frac{f(x)}{f'(x)}$$

Claim: μ has a simple zero at p (and hence we can use straightforward Newton’s method on μ to find the root p). Indeed, writing f(x) = (x − p)ᵐ q(x) gives μ(x) = (x − p) q(x)/(m q(x) + (x − p) q′(x)), whose denominator tends to m q(p) ≠ 0 as x → p.
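A sketch of this trick (my own illustration). Since μ′ = (f′² − f f″)/f′², Newton’s update applied to μ simplifies to p − f f′/(f′² − f f″):

```python
def modified_newton(f, fp, fpp, p0, tol=1e-10, max_iter=50):
    """Newton's method applied to mu(x) = f(x)/f'(x); the update
    simplifies to p - f*f' / (f'**2 - f*f''), restoring quadratic
    convergence at a multiple root."""
    p = p0
    for _ in range(max_iter):
        fx, fpx, fppx = f(p), fp(p), fpp(p)
        denom = fpx**2 - fx * fppx
        if abs(fx) < tol or denom == 0:  # at (or essentially at) a root
            return p
        p = p - fx * fpx / denom
    raise RuntimeError("modified Newton did not converge")

# f(x) = (x - 1)**2 has a double root at 1; plain Newton is only linear
# there, but the mu-trick recovers it immediately from p0 = 2.
print(modified_newton(lambda x: (x - 1)**2,
                      lambda x: 2 * (x - 1),
                      lambda x: 2.0,
                      p0=2.0))  # -> 1.0
```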
