
MAT 461/561: 10.3 Fixed-Point Iteration

James V. Lambers

March 4, 2020

Announcements
Get some homework done dangit!

Homework Questions
3.2.4: In showing L^{-1} is lower triangular, one can show that the solution of the system

    L x_j = e_j,

where X = L^{-1} has columns x_j and e_j is the jth column of I, has elements x_{1j}, x_{2j}, . . . , x_{j−1,j} = 0. This means that x_{ij} = 0 for i < j, so X is lower triangular.
No need to repeat the whole proof for the upper triangular case after doing the lower triangular one: use the fact that if A is nonsingular, then (A^{-1})^T = (A^T)^{-1} = A^{-T}.
3.4.3: use loglog to plot the vector of n-values against the vector of condition numbers, and use cond(A) to compute the 2-norm condition number of each A. Submit the plot and the code used to generate it.
For example, if using n = 10^p: use a loop over p to create each matrix and get its condition number, and store the results in vectors: letting conds be the vector of condition numbers, do conds(i) = cond(A).
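The same loop-and-store pattern looks like this in Python/NumPy rather than MATLAB (a sketch only: the Hilbert matrix below is a stand-in for the assignment's matrix family, which is not specified in these notes):

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H[i, j] = 1/(i + j + 1), a classic ill-conditioned family."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

ns = [2, 4, 8, 16]                                  # matrix sizes to test
conds = [np.linalg.cond(h) for h in map(hilbert, ns)]  # 2-norm condition numbers

# The log-log plot asked for in the assignment would then be:
#   import matplotlib.pyplot as plt
#   plt.loglog(ns, conds, 'o-'); plt.xlabel('n'); plt.ylabel('cond(A)')
for n, c in zip(ns, conds):
    print(f"n = {n:3d}   cond = {c:.3e}")
```

The condition numbers grow rapidly with n, which is exactly the kind of trend the log-log plot makes visible.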

Fixed-Point Iteration
To solve a nonlinear equation f (x) = 0, can rewrite (in many ways) in the form

x = g(x)

Example: g(x) = x + λf(x), where λ ≠ 0. This leads to the algorithm fixed-point iteration.

Solution by Successive Substitution

The algorithm:

Choose initial guess x^{(0)}
for k = 0, 1, 2, . . .
    x^{(k+1)} = g(x^{(k)})
    Use x^{(k)} and x^{(k+1)} to check for convergence
end
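A minimal Python sketch of this loop (the function and variable names are mine, not from the notes):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to within tol."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:    # convergence check using x^(k) and x^(k+1)
            return x_new, k + 1
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# First example below: g(x) = cos x, which converges (slowly) to x* ≈ 0.739
root, iters = fixed_point(math.cos, 1.0)
print(root, iters)
```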

Examples:
• f (x) = x − cos x leading to x = cos x. Converged, but slooooooowly
• f(x) = x^2 − x + 3/16, or x = x^2 + 3/16. Solutions: x* = 1/4, 3/4. Converges to 1/4 unless x^{(0)} > 3/4, in which case it diverges!
• f (x) = x+ln x, or x = − ln x. By IVT, solution x∗ ∈ (0.5, 0.6). However, fixed-point iteration
with initial guess 0.55 diverges. Instead: from −x = ln x, exponentiate: e−x = x. This time
it converges.
• f(x) = x − cos x − 2, or x = cos x + 2. Initial guess x^{(0)} = 2. Iterates bounce back and forth, and convergence is very slow. Instead try

      x = (x + g(x))/2

  This iteration converges rapidly. Why?
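A quick numerical check of the last example (a Python sketch; the iterate-n-times helper is mine):

```python
import math

def iterate(g, x, n):
    """Apply x <- g(x) n times and return the result."""
    for _ in range(n):
        x = g(x)
    return x

g = lambda x: math.cos(x) + 2        # original form: iterates bounce, slow
h = lambda x: (x + g(x)) / 2         # averaged form: converges fast

x_h = iterate(h, 2.0, 20)
print("averaged iterate:", x_h, "residual:", abs(x_h - g(x_h)))
```

After only 20 steps the averaged iteration already satisfies x ≈ cos x + 2 to near machine precision; the "why" is answered by the derivative analysis in the next section.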

Convergence Analysis
Existence of a fixed point (solution) x*: Brouwer's Fixed-Point Theorem states that g(x) has a fixed point in [a, b] if g is continuous and maps [a, b] into [a, b]. Proof (in one dimension): apply the IVT to g(x) − x.
Uniqueness: if g has a fixed point in [a, b] and satisfies a Lipschitz condition on [a, b]:

    |g(x) − g(y)| ≤ L|x − y|,   x, y ∈ [a, b],

where L < 1 is a Lipschitz constant, then g is a contraction, the fixed point x* is unique, and fixed-point iteration converges for any x^{(0)} ∈ [a, b] (Contraction Mapping Theorem).
Fixed-Point Theorem: By the Mean Value Theorem,

    (g(x) − g(y))/(x − y) = g'(c),   c between x and y,

so |g(x) − g(y)| = |g'(c)| |x − y|. If |g'(x)| ≤ L < 1 on [a, b], then g is a contraction and L is a Lipschitz constant.
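For instance, for g(x) = cos x on [0, 1], L = sin(1) ≈ 0.841 works as a Lipschitz constant, since max |g'(x)| = sin(1) there. A numerical spot-check of the bound (a sketch, using randomly chosen pairs):

```python
import math
import random

random.seed(0)
L = math.sin(1.0)    # max of |g'(x)| = |sin x| on [0, 1]

# Check |cos x - cos y| <= L |x - y| on random pairs in [0, 1]
ok = True
for _ in range(1000):
    x, y = random.random(), random.random()
    if abs(math.cos(x) - math.cos(y)) > L * abs(x - y) + 1e-15:
        ok = False
print("Lipschitz bound holds on all samples:", ok)
```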


What about the examples:
• x = cos x: g(x) = cos x, g'(x) = − sin x, so g is a contraction on the interval [0, 1]. But |g'(x*)| is not very small, so convergence is slow.
• x = x^2 + 3/16: g'(x) = 2x, so g'(1/4) = 1/2 and g'(3/4) = 3/2. Because g'(3/4) > 1, that fixed point is unstable, so fixed-point iteration will not converge to it.
• x = − ln x: g'(x) = −1/x, so |g'(x)| > 1 on [0.5, 0.6]. But for g(x) = e^{−x}, g'(x) = −e^{−x}, and |g'(x)| < 1 there. Here we used the inverse function rule:

      d/dx [f^{-1}(x)] = 1 / f'(f^{-1}(x)).

  Note that x = g(x) is equivalent to x = g^{-1}(x).
• x = cos x + 2: g'(x) = − sin x, and |g'(x*)| ≈ 1, thus convergence is slow. For

      h(x) = (x + g(x))/2,

  h'(x) = (1 − sin x)/2, so h'(x*) ≈ 0.
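The derivative bounds in the third bullet can be checked numerically (a short Python sketch):

```python
import math

xs = (0.5, 0.55, 0.6)

# g(x) = -ln x: |g'(x)| = 1/x > 1 everywhere on [0.5, 0.6], so no contraction
gp1 = [1.0 / x for x in xs]

# g(x) = e^{-x}: |g'(x)| = e^{-x} < 1 on [0.5, 0.6], so this rewriting contracts
gp2 = [math.exp(-x) for x in xs]

print("|g'| for -ln x:", gp1)    # all > 1
print("|g'| for e^-x:", gp2)     # all < 1
```

Passing to the inverse function replaced a derivative of magnitude 1/x > 1 by its reciprocal-like counterpart of magnitude e^{−x} < 1, which is exactly why the second form converges.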

Taylor expansion of g to analyze the error:

    x^{(k+1)} − x* = g(x^{(k)}) − g(x*) = g'(x*)(x^{(k)} − x*) + (1/2) g''(x*)(x^{(k)} − x*)^2 + · · ·

Normally, convergence is linear, with asymptotic error constant |g'(x*)|. But if g'(x*) = 0, convergence is quadratic with asymptotic error constant |g''(x*)/2|.
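To see the difference, compare plain iteration with g(x) = cos x (linear, since |g'(x*)| = sin x* ≈ 0.67) against an iteration built so that g'(x*) = 0. The Newton-style choice g(x) = x − f(x)/f'(x) below is my example of such a g, not one from the notes:

```python
import math

def count_iters(g, x, tol=1e-12, max_iter=200):
    """Count iterations of x <- g(x) until successive iterates agree within tol."""
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return k + 1
        x = x_new
    return max_iter

# Linear convergence: g'(x*) = -sin(x*) ≈ -0.67 at the fixed point of cos
linear = count_iters(math.cos, 1.0)

# Quadratic convergence: for f(x) = x - cos x, the iteration
# g(x) = x - f(x)/f'(x) has g'(x*) = 0
newton = count_iters(lambda x: x - (x - math.cos(x)) / (1 + math.sin(x)), 1.0)

print("linear iterations:  ", linear)
print("quadratic iterations:", newton)
```

The quadratic iteration reaches full precision in a handful of steps, while the linear one needs dozens, consistent with the error expansion above.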

Relaxation
