Solving Singular Linear Equations
COS 323
Last time:
Linear system: Ax = b
Singular and ill-conditioned systems
Gaussian Elimination: A general purpose method
Naïve Gauss (no pivoting)
Gauss with partial and full pivoting
Asymptotic analysis: O(n^3)
Triangular systems and LU decomposition
Special matrices and algorithms:
Symmetric positive definite: Cholesky decomposition
Tridiagonal matrices
Singularity detection and condition numbers
Today:
Methods for large and sparse systems
Rank-one updating with Sherman-Morrison
Iterative refinement
Fixed-point and stationary methods
Introduction
Iterative refinement as a stationary method
Gauss-Seidel and Jacobi methods
Successive over-relaxation (SOR)
Solving a system as an optimization problem
Representing sparse systems
Problems with large systems
A* = A + u v^T = A (I + A^{-1} u v^T)

(A*)^{-1} = (I + A^{-1} u v^T)^{-1} A^{-1}
Sherman-Morrison Formula
x = (A*)^{-1} b = A^{-1} b - (A^{-1} u)(v^T A^{-1} b) / (1 + v^T A^{-1} u)
So, to solve (A*) x = b:
solve A y = b and A z = u, then

x = y - z (v^T y) / (1 + v^T z)
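A minimal numpy sketch of this three-solve recipe (the function name, test matrix, and vectors are mine, not from the slides; in practice a single LU factorization of A would be reused for both solves):

import numpy as np

def sherman_morrison_solve(A, u, v, b):
    # Solve (A + u v^T) x = b via two solves with A.
    y = np.linalg.solve(A, b)   # A y = b
    z = np.linalg.solve(A, u)   # A z = u
    return y - z * (v @ y) / (1.0 + v @ z)

# Quick check against a direct solve on a random well-conditioned system:
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
u, v, b = rng.standard_normal((3, 5))
x = sherman_morrison_solve(A, u, v, b)
print(np.allclose((A + np.outer(u, v)) @ x, b))   # True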
Applying Sherman-Morrison
Let's consider cyclic tridiagonal again:

Take

    A = [ a11-1   a12     0       0       0       0          ]
        [ a21     a22     a23     0       0       0          ]
        [ 0       a32     a33     a34     0       0          ]
        [ 0       0       a43     a44     a45     0          ]
        [ 0       0       0       a54     a55     a56        ]
        [ 0       0       0       0       a65     a66-a61a16 ]

    u = [1, 0, 0, 0, 0, a61]^T,   v = [1, 0, 0, 0, 0, a16]^T

Then A* = A + u v^T is the original cyclic tridiagonal matrix: u v^T supplies the corner entries a16 and a61 and restores the two modified diagonal entries.
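A sketch of how the pieces fit together, assuming the sherman_morrison_solve function from the previous sketch and the corner construction above:

import numpy as np

def solve_cyclic_tridiagonal(A_cyclic, b):
    # Split A_cyclic = A + u v^T using the corner construction above;
    # A is then purely tridiagonal, so both inner solves could use an
    # O(n) tridiagonal solver instead of the dense solve sketched here.
    n = A_cyclic.shape[0]
    a16 = A_cyclic[0, -1]    # upper-right corner
    a61 = A_cyclic[-1, 0]    # lower-left corner
    u = np.zeros(n); u[0] = 1.0; u[-1] = a61
    v = np.zeros(n); v[0] = 1.0; v[-1] = a16
    A = A_cyclic - np.outer(u, v)   # removes corners, shifts two diagonal entries
    return sherman_morrison_solve(A, u, v, b)   # from the sketch above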
Review: fixed-point iteration
Compute π using
f(x) = sin(x)
g(x) = sin(x) + x
Notes on fixed-point root-finding
Sensitive to starting x0
|g'(x)| < 1 near the fixed point is sufficient for convergence
Converges linearly (when it converges)
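A tiny sketch of the π example above (the starting guess x0 = 3.0 is my choice; note that here g'(π) = cos(π) + 1 = 0, so this particular iteration converges faster than the generic linear rate):

import math

x = 3.0                      # initial guess, inside the region |g'(x)| < 1
for i in range(5):
    x = math.sin(x) + x      # x <- g(x)
    print(i, x)              # approaches 3.141592653589793 within a few steps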
Extending fixed-point iteration to systems of
multiple equations
General form:
Step 0. Formulate set of fixed-point equations
x1 = g1(x1, ..., xn), x2 = g2(x1, ..., xn), ..., xn = gn(x1, ..., xn)
Step 1. Choose initial guesses x1^0, x2^0, ..., xn^0
Step 2. Iterate:
x1^(i+1) = g1(x1^i, ..., xn^i), x2^(i+1) = g2(x1^i, ..., xn^i), ...
Example:
Fixed point method for 2 equations
f1(x1, x2) = x1^2 + x1 x2 - 10
f2(x1, x2) = x2 + 3 x1 x2^2 - 57
Iteration steps:
x1^(i+1) = sqrt(10 - x1^i x2^i)
x2^(i+1) = sqrt((57 - x2^i) / (3 x1^i))
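A sketch of this iteration (the starting guess (1.5, 3.5) is my choice; the true root is (2, 3); the new x1 is used immediately when updating x2, Gauss-Seidel style):

import math

x1, x2 = 1.5, 3.5
for i in range(20):
    x1 = math.sqrt(10 - x1 * x2)           # from f1 = 0
    x2 = math.sqrt((57 - x2) / (3 * x1))   # from f2 = 0, using the new x1
print(x1, x2)                              # converges to (2.0, 3.0)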
Stationary Iterative Methods for Linear
Systems
Can we formulate g(x) such that x*=g(x*)
when Ax* - b = 0?
Yes: let A = M - N (for any M, N satisfying this, with M invertible)
and let g(x) = Gx + c = M^{-1} N x + M^{-1} b
Check: if x* = g(x*) = M^{-1} N x* + M^{-1} b, then
Ax* = (M - N)(M^{-1} N x* + M^{-1} b)
= N x* + b - N (M^{-1} N x* + M^{-1} b)
= N x* + b - N x*
= b
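A concrete sketch of one such splitting: taking M = diag(A), the Jacobi choice from the outline, on a small diagonally dominant system of my own choosing:

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))   # Jacobi splitting: A = M - N
N = M - A
x = np.zeros(3)
for _ in range(50):
    x = np.linalg.solve(M, N @ x + b)   # x <- g(x) = M^{-1} N x + M^{-1} b
print(np.allclose(A @ x, b))            # True: the fixed point solves Ax = b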
So what?
x^(k+1) = x^k + e
= x^k + B^{-1} r, where r = b - A x^k and B^{-1} is an estimate of A^{-1}
This is equivalent to choosing
g(x) = Gx + c = M^{-1} N x + M^{-1} b
where G = (I - B^{-1} A) and c = B^{-1} b
(i.e., M = B and N = B - A, with B^{-1} our most recent estimate of A^{-1})
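A sketch of iterative refinement in this form (here B is A rounded to single precision, standing in for an approximate or low-precision factorization; the matrix and vectors are my own):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
b = rng.standard_normal(4)

B = A.astype(np.float32).astype(np.float64)   # cheap approximation of A
x = np.linalg.solve(B, b)                     # initial, inexact solve
for _ in range(5):
    r = b - A @ x                   # residual in full precision
    x = x + np.linalg.solve(B, r)   # correction step x <- x + B^{-1} r
print(np.linalg.norm(b - A @ x))    # residual shrinks toward roundoff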
So what?
G.S.:  x_i^(k+1) = ( b_i - Σ_{j<i} a_ij x_j^(k+1) - Σ_{j>i} a_ij x_j^(k) ) / a_ii
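A direct transcription of this update into numpy (a sketch; assumes a nonzero diagonal and overwrites x in place so that entries with j < i are already updated):

import numpy as np

def gauss_seidel(A, b, iters=100):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]   # j < i uses new values
            x[i] = (b[i] - s) / A[i, i]
    return x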
Notes on Gauss-Seidel and SOR
If ω > 2: divergence (SOR converges only for 0 < ω < 2; ω = 1 recovers Gauss-Seidel)
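SOR changes only the update line, blending the old value with the Gauss-Seidel value via ω (a sketch; omega = 1 reduces to the gauss_seidel sketch above):

import numpy as np

def sor(A, b, omega=1.5, iters=100):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x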
Questions?
One more method:
Conjugate Gradients
Transform problem to a function minimization!
Solve Ax=b
Minimize f(x) = x^T A x - 2 b^T x (for symmetric positive definite A; the gradient 2(Ax - b) vanishes exactly when Ax = b)
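A textbook conjugate-gradient sketch in numpy for symmetric positive definite A (the standard algorithm, not specific to these slides; each step does an exact line search along a direction A-conjugate to the previous ones):

import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros(len(b))
    r = b - A @ x                      # residual (negative gradient direction)
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # next A-conjugate direction
        rs = rs_new
    return x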
Compressed Sparse Row Format
Three arrays:
Values: actual numbers in the matrix
Cols: column of corresponding entry in values
Rows: index of first entry in each row
Example: (zero-based! C/C++/Java, not Matlab!)
values 3 2 3 2 5 1 2 3
cols 1 2 3 0 3 1 2 3
rows 0 3 5 5 8
Compressed Sparse Row Format
values 3 2 3 2 5 1 2 3
cols 1 2 3 0 3 1 2 3
rows 0 3 5 5 8
Multiplying Ax:
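The product can be formed row by row directly from the three arrays; a plain-Python sketch using the example above (not from the slides):

values = [3, 2, 3, 2, 5, 1, 2, 3]
cols   = [1, 2, 3, 0, 3, 1, 2, 3]
rows   = [0, 3, 5, 5, 8]

def csr_matvec(values, cols, rows, x):
    n = len(rows) - 1
    y = [0.0] * n
    for i in range(n):                        # one output entry per row
        for k in range(rows[i], rows[i + 1]): # stored entries of row i
            y[i] += values[k] * x[cols[k]]
    return y

print(csr_matvec(values, cols, rows, [1.0, 1.0, 1.0, 1.0]))
# -> [8.0, 7.0, 0.0, 6.0]   (row 2 stores no entries)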