MAT 461/561: 5.1 Stationary Iterative Methods
James V. Lambers
March 2, 2020
Announcements
• What if we don’t really need THAT much accuracy? Elimination has no way of “quitting
early” in exchange for less accuracy
• What if A is sparse (mostly zero entries)? Elimination (and pivoting) can lead to “fill-in”
• What if we don’t even have A? Example: instead we have a function A(x) that returns Ax
A stationary iterative method computes a sequence of iterates
x^{(k+1)} = g(x^{(k)}),
where x^{(0)} is an initial guess. This is based on fixed-point iteration for solving x = g(x).
Let A = M − N be a splitting of A. Then, from Ax = (M − N)x = b, we have
Mx = Nx + b,
so x = M^{-1}(Nx + b), and the iteration function is
g(x) = M^{-1}(Nx + b).
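As a concrete illustration, here is a minimal sketch (in Python with NumPy; the function name splitting_iteration is hypothetical) of the resulting iteration x^{(k+1)} = M^{-1}(Nx^{(k)} + b), assuming the splitting A = M − N is given and systems with M are cheap to solve:

    import numpy as np

    def splitting_iteration(M, N, b, x0, num_iter):
        """Sketch of the generic stationary iteration x^(k+1) = M^{-1}(N x^(k) + b).

        Assumes A = M - N and that linear systems with M are inexpensive to solve.
        """
        x = x0.astype(float).copy()
        for _ in range(num_iter):
            x = np.linalg.solve(M, N @ x + b)   # apply g(x) = M^{-1}(N x + b)
        return x

In practice one never forms M^{-1} explicitly; each step only requires solving a system with M.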
Convergence Analysis
How do we know whether this iteration will converge? Subtracting Mx = Nx + b from Mx^{(k+1)} = Nx^{(k)} + b gives
x^{(k+1)} − x = M^{-1}N(x^{(k)} − x).
Applying this over k iterations,
x^{(k)} − x = (M^{-1}N)^k (x^{(0)} − x).
Then T = M^{-1}N is the iteration matrix for this method. Taking norms of both sides,
\|x^{(k)} − x\| \le \|T\|^k \|x^{(0)} − x\|.
To ensure that the error → 0 as k → ∞, we can require \|T\| < 1 in some norm. This is sufficient but not
necessary.
A condition that is both necessary and sufficient is that ρ(T), the spectral radius of T, is less than 1.
The spectral radius is
ρ(T) = \max_{\lambda \in \lambda(T)} |\lambda|,
where λ(T) denotes the set of eigenvalues of T.
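As a quick numerical check of this criterion, one can form T = M^{-1}N explicitly and compute its spectral radius (a sketch only, practical for small problems; the helper name spectral_radius is hypothetical):

    import numpy as np

    def spectral_radius(M, N):
        """Return rho(T) for the iteration matrix T = M^{-1} N."""
        T = np.linalg.solve(M, N)               # T = M^{-1} N, computed column by column
        return max(abs(np.linalg.eigvals(T)))   # rho(T) = max |lambda| over the eigenvalues

The iteration converges for every initial guess x^{(0)} exactly when the returned value is less than 1.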
The Jacobi Method
We write
A = L + D + U,
where D is the diagonal part, L is the strictly lower triangular part, and U is the strictly upper
triangular part of A.
The Jacobi Method comes from the choice M = D, N = −(L + U). Then we have
x^{(k+1)} = D^{-1}\left[b − (L + U)x^{(k)}\right].
Each component:
x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i − \sum_{j=1, j \ne i}^{n} a_{ij} x_j^{(k)}\right), \quad i = 1, 2, \ldots, n.
Implementation: given x^{(0)} (one can use x^{(0)} = 0), use an outer loop that computes each new x^{(k+1)}
from x^{(k)}, k = 0, 1, 2, \ldots, until some stopping criterion is met (for example, \|x^{(k+1)} − x^{(k)}\| < \epsilon
for a given tolerance \epsilon), but one should also set a maximum number of iterations.
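A minimal sketch of such an implementation, assuming NumPy and a nonzero diagonal (the name jacobi and its parameters are illustrative, not a prescribed interface):

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-8, max_iter=1000):
        """Jacobi iteration for Ax = b with M = D, N = -(L + U)."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        d = np.diag(A)                # diagonal entries a_ii (assumed nonzero)
        R = A - np.diagflat(d)        # off-diagonal part L + U
        for k in range(max_iter):
            x_new = (b - R @ x) / d   # componentwise Jacobi update
            if np.linalg.norm(x_new - x) < tol:
                return x_new, k + 1   # converged: also report the iteration count
            x = x_new
        return x, max_iter            # reached the maximum number of iterations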
The Gauss-Seidel Method
If we examine Jacobi more closely:
x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i − \sum_{j<i} a_{ij} x_j^{(k)} − \sum_{j>i} a_{ij} x_j^{(k)}\right), \quad i = 1, 2, \ldots, n.
This formula uses old information x_j^{(k)} for j < i, even though the updated values x_j^{(k+1)} are already
available. If we use the updated values instead:
x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i − \sum_{j<i} a_{ij} x_j^{(k+1)} − \sum_{j>i} a_{ij} x_j^{(k)}\right), \quad i = 1, 2, \ldots, n,
or, in matrix form,
x^{(k+1)} = (D + L)^{-1}\left[b − U x^{(k)}\right].
That is, M = D + L and N = −U. In some cases, it can be proven that Gauss-Seidel converges
more rapidly than Jacobi. Disadvantage: Jacobi can be parallelized, but Gauss-Seidel cannot, because each
component update depends on the components already updated in the same sweep.
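A corresponding sketch of Gauss-Seidel, again assuming NumPy and a nonzero diagonal (the name gauss_seidel is illustrative); note that each component of x is overwritten as soon as it is computed:

    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=1000):
        """Gauss-Seidel iteration for Ax = b with M = D + L, N = -U."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s_new = A[i, :i] @ x[:i]            # j < i: already-updated components
                s_old = A[i, i+1:] @ x_old[i+1:]    # j > i: components from the previous iterate
                x[i] = (b[i] - s_new - s_old) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                return x, k + 1
        return x, max_iter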
Successive Overrelaxation
We rewrite Gauss-Seidel as follows:
x_i^{(k+1)} = x_i^{(k)} + \frac{1}{a_{ii}}\left(b_i − \sum_{j<i} a_{ij} x_j^{(k+1)} − \sum_{j>i} a_{ij} x_j^{(k)} − a_{ii} x_i^{(k)}\right), \quad i = 1, 2, \ldots, n.
To take a step of a different length in the direction of x_{GS}^{(k+1)} − x^{(k)}, we introduce a relaxation
parameter ω:
x_i^{(k+1)} = x_i^{(k)} + \frac{\omega}{a_{ii}}\left(b_i − \sum_{j<i} a_{ij} x_j^{(k+1)} − \sum_{j>i} a_{ij} x_j^{(k)} − a_{ii} x_i^{(k)}\right), \quad i = 1, 2, \ldots, n,
or, in matrix form,
(D + ωL)x^{(k+1)} = [(1 − ω)D − ωU] x^{(k)} + ωb.
This is called Successive Overrelaxation (SOR); ω = 1 recovers Gauss-Seidel. Convergence requires 0 < ω < 2. Why?
The iteration matrix is
T_\omega = (D + ωL)^{-1} [(1 − ω)D − ωU].
But both of these factors are triangular, so their determinants are the products of their diagonal entries:
\det(T_\omega) = \prod_{i=1}^{n} a_{ii}^{-1} \cdot \prod_{i=1}^{n} (1 − ω) a_{ii} = (1 − ω)^n.
We also have
\det(T_\omega) = \prod_{i=1}^{n} \lambda_i,
where λ_1, \ldots, λ_n are the eigenvalues of T_\omega,
and therefore at least one eigenvalue must satisfy |λ_i| ≥ |1 − ω|, so
ρ(T_\omega) ≥ |1 − ω|.
For convergence we need ρ(T_\omega) < 1, which forces |1 − ω| < 1, i.e., 0 < ω < 2.
SOR was introduced by David Young in his PhD dissertation, 1950.
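For completeness, a sketch of SOR in the same style as the earlier examples (NumPy assumed; the name sor and its parameters are illustrative). Each component first computes the Gauss-Seidel value and then moves a fraction ω of the way from the old value toward it, so ω = 1 reproduces Gauss-Seidel:

    import numpy as np

    def sor(A, b, omega, x0=None, tol=1e-8, max_iter=1000):
        """SOR iteration for Ax = b; convergence requires 0 < omega < 2."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s_new = A[i, :i] @ x[:i]            # already-updated components (j < i)
                s_old = A[i, i+1:] @ x_old[i+1:]    # components from the previous iterate (j > i)
                x_gs = (b[i] - s_new - s_old) / A[i, i]     # Gauss-Seidel value for component i
                x[i] = x_old[i] + omega * (x_gs - x_old[i]) # relaxed step toward it
            if np.linalg.norm(x - x_old) < tol:
                return x, k + 1
        return x, max_iter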