Error Analysis for Iterative Methods
Summary
Is there a good way of getting a handle on the number of iterations required by Newton's method? That is essentially the subject of this section.
We obtained a useful bound of this kind for fixed-point methods in section 2.2, namely

    |p_n − p| ≤ (k^n / (1 − k)) |p_1 − p_0|        (1)

where g(x) ∈ [a, b] ∀x ∈ [a, b] and |g′(x)| ≤ k < 1 on [a, b], so that [a, b] brackets the fixed point p.
the fixed point p. You can use this for Newton’s method, but perhaps we can
do better, since the convergence is better (quadratic, rather than linear).
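As a quick illustration of how the bound in Equation (1) turns into an a-priori iteration count, here is a minimal Python sketch; the test function g(x) = cos(x), the starting point, and the tolerance are illustrative choices, not taken from the text:

    import math

    def fixed_point(g, p0, tol=1e-8, max_iter=200):
        """Fixed-point iteration p_n = g(p_{n-1}); returns (approximation, iterations used)."""
        p = p0
        for n in range(1, max_iter + 1):
            p_new = g(p)
            if abs(p_new - p) < tol:
                return p_new, n
            p = p_new
        return p, max_iter

    def iterations_from_bound(k, p0, p1, tol):
        """Smallest n with k**n / (1 - k) * |p1 - p0| <= tol, i.e. the estimate from (1)."""
        return math.ceil(math.log(tol * (1 - k) / abs(p1 - p0)) / math.log(k))

    # Illustrative example: g(x) = cos(x) maps [0, 1] into itself and |g'(x)| <= sin(1) < 1 there
    g = math.cos
    p0 = 0.5
    k = math.sin(1.0)
    print("bound (1) predicts:", iterations_from_bound(k, p0, g(p0), 1e-8), "iterations")
    print("actually needed:   ", fixed_point(g, p0, tol=1e-8)[1], "iterations")

The prediction from (1) is typically pessimistic, since k is a worst-case bound on |g′| over the whole interval.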
Theorem 2.5 (from section 2.3): Let f ∈ C^2[a, b]. If p ∈ [a, b] is such that f(p) = 0 and f′(p) ≠ 0, then ∃δ > 0 such that Newton's method generates a sequence {p_n}_{n=1}^∞ converging to p for any initial approximation p_0 ∈ [p − δ, p + δ].
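For reference, here is a minimal Python sketch of Newton's method; the starting point p0 = −3.0 and the stopping test are illustrative choices (the f used is the one from problem #5b, treated in the example below):

    def newton(f, fprime, p0, tol=1e-10, max_iter=50):
        """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
        p = p0
        for n in range(1, max_iter + 1):
            p_new = p - f(p) / fprime(p)
            if abs(p_new - p) < tol:
                return p_new, n
            p = p_new
        raise RuntimeError("Newton's method did not converge in max_iter steps")

    # f(x) = x^3 + 3x^2 - 1, f'(x) = 3x^2 + 6x; root near -2.88
    root, steps = newton(lambda x: x**3 + 3*x**2 - 1,
                         lambda x: 3*x**2 + 6*x,
                         p0=-3.0)
    print(root, steps)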
Definition (order of convergence): Suppose {p_n} is a sequence that converges to p, with p_n ≠ p for all n. If positive constants λ and α exist with

    lim_{n→∞} |p_{n+1} − p| / |p_n − p|^α = λ,

then the sequence converges to p of order α, with asymptotic error constant λ.
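In practice one can estimate α numerically from three successive errors: if |p_{n+1} − p| ≈ λ|p_n − p|^α, then α ≈ log(e_{n+1}/e_n) / log(e_n/e_{n−1}), where e_n = |p_n − p|. A minimal Python sketch of this check (the test problem, f(x) = x^2 − 2 solved by Newton's method, is an illustrative choice):

    import math

    def estimated_orders(errors):
        """Estimates of alpha from successive errors e_n = |p_n - p|,
        using alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
        return [math.log(errors[i + 1] / errors[i]) / math.log(errors[i] / errors[i - 1])
                for i in range(1, len(errors) - 1)]

    # Newton's method for f(x) = x^2 - 2, whose root is p = sqrt(2)
    p_true = math.sqrt(2.0)
    p, errors = 1.0, []
    for _ in range(4):
        p = p - (p * p - 2.0) / (2.0 * p)   # Newton step
        errors.append(abs(p - p_true))

    print(estimated_orders(errors))   # estimates approach 2, i.e. quadratic convergence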
Theorem 2.7: Let g ∈ C[a, b] be such that g(x) ∈ [a, b], ∀x ∈ [a, b]. Suppose, in addition, that g′ is continuous on (a, b) and a positive constant k < 1 exists with

    |g′(x)| ≤ k,    ∀x ∈ (a, b).

If g′(p) ≠ 0, then for any number p_0 in [a, b], the sequence of iterates

    p_n = g(p_{n−1}),    n ≥ 1,

converges only linearly to the unique fixed point p ∈ [a, b].
Theorem 2.8: Let p be a solution of the equation x = g(x). Suppose that g′(p) = 0 and g″ is continuous and strictly bounded by M on an open interval I containing p. Then ∃δ > 0 such that, for p_0 ∈ [p − δ, p + δ], the sequence {p_n = g(p_{n−1})}_{n=1}^∞ converges at least quadratically to p. Moreover, for sufficiently large values of n,

    |p_{n+1} − p| < (M/2) |p_n − p|^2.
(Hence, Newton's method converges quadratically; see the quick check below.)
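A quick check that Theorem 2.8 applies to Newton's method (assuming f ∈ C^2[a, b] with f(p) = 0 and f′(p) ≠ 0, as in Theorem 2.5): Newton's method is fixed-point iteration with g(x) = x − f(x)/f′(x), and differentiating gives

    g′(x) = 1 − [f′(x)^2 − f(x) f″(x)] / f′(x)^2 = f(x) f″(x) / f′(x)^2,

so g′(p) = 0 because f(p) = 0 and f′(p) ≠ 0.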
Example: Here's where we can make use of the quadratic convergence to address our opening question about the number of iterates of Newton's method. For problem #5b, for example, we have

    f(x) = x^3 + 3x^2 − 1

so Newton's method is fixed-point iteration with

    g(x) = x − (x^3 + 3x^2 − 1) / (3x^2 + 6x),

and we then compute the first and second derivatives of g. By Theorem 2.2 there is a unique fixed point in the interval [−3, −2.74]; also, |g″(x)| < 2.5 on [−3, −2.74] and |g′(x)| < 0.27 there, so we could use Equation (1) above with k = 0.27 to make our estimate (it gives 8 iterations).
We can do better, of course!
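To see concretely how much better the quadratic bound does, here is a minimal Python sketch comparing the estimate from Equation (1) with one based on the Theorem 2.8 bound for this f; the tolerance and the use of the interval length 0.26 as the initial error are assumptions, so the exact counts depend on those choices:

    import math

    # Problem #5b: f(x) = x^3 + 3x^2 - 1 on [-3, -2.74]
    k = 0.27        # bound on |g'(x)| on the interval
    M = 2.5         # bound on |g''(x)| on the interval
    e0 = 0.26       # length of [-3, -2.74], used as a crude bound on the initial error
    tol = 1e-4      # assumed accuracy target

    # Estimate from Equation (1): smallest n with k^n / (1 - k) * e0 <= tol
    n_linear = math.ceil(math.log(tol * (1 - k) / e0) / math.log(k))

    # Estimate from Theorem 2.8: iterate the bound e_{n+1} <= (M/2) e_n^2
    e, n_quad = e0, 0
    while e > tol:
        e = 0.5 * M * e * e
        n_quad += 1

    print("Equation (1) estimate: ", n_linear, "iterations")
    print("Theorem 2.8 estimate:  ", n_quad, "iterations")

With these (assumed) numbers the quadratic bound needs only a handful of iterations, far fewer than the estimate from (1).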
Motivation: #14
It's possible to create methods that are of higher order than Newton's, but one does so at the expense of more constraints on f (e.g. f ∈ C^3[a, b]) and greater computational complexity:
Example: #13, p. 83
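For instance, one well-known third-order scheme (offered here only as an illustration; it is not necessarily the method of problem #13) is Halley's method, which also uses f″ and converges cubically to simple roots. A minimal Python sketch, reusing the f from the example above:

    def halley(f, fp, fpp, p0, tol=1e-12, max_iter=30):
        """Halley's method: cubically convergent for simple roots, but needs f, f', and f''."""
        p = p0
        for n in range(1, max_iter + 1):
            fx, fpx, fppx = f(p), fp(p), fpp(p)
            p_new = p - 2.0 * fx * fpx / (2.0 * fpx**2 - fx * fppx)
            if abs(p_new - p) < tol:
                return p_new, n
            p = p_new
        raise RuntimeError("Halley's method did not converge in max_iter steps")

    # f(x) = x^3 + 3x^2 - 1, as in the example above; root near -2.88
    root, steps = halley(lambda x: x**3 + 3*x**2 - 1,
                         lambda x: 3*x**2 + 6*x,
                         lambda x: 6*x + 6,
                         p0=-3.0)
    print(root, steps)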