Lec 12
Sherman-Morrison-Woodbury
The Sherman-Morrison formula describes the solution of systems with $A + uv^T$ when
we already have a factorization of $A$. An easy way to derive the formula is through
block Gaussian elimination. In order to compute the product $(A + uv^T)x$, we
would usually first compute $\xi = v^T x$ and then compute $(A + uv^T)x = Ax + u\xi$.
So we can write $(A + uv^T)x = b$ in terms of an extended linear system
\[
\begin{bmatrix} A & u \\ v^T & -1 \end{bmatrix}
\begin{bmatrix} x \\ \xi \end{bmatrix}
=
\begin{bmatrix} b \\ 0 \end{bmatrix}.
\]
We can factor the matrix in this extended system as
\[
\begin{bmatrix} A & u \\ v^T & -1 \end{bmatrix}
=
\begin{bmatrix} I & 0 \\ v^T A^{-1} & 1 \end{bmatrix}
\begin{bmatrix} A & u \\ 0 & -1 - v^T A^{-1} u \end{bmatrix},
\]
apply forward substitution with the block lower triangular factor,
\[
\begin{aligned}
y &= b, \\
\eta &= -v^T A^{-1} y,
\end{aligned}
\]
and apply backward substitution with the block upper triangular factor,
\[
\begin{aligned}
\xi &= -(1 + v^T A^{-1} u)^{-1} \eta, \\
x &= A^{-1}(y - u\xi).
\end{aligned}
\]
The same block elimination applied to a rank-$k$ update $A + UV^T$ gives the
Sherman-Morrison-Woodbury formula
\[
(A + UV^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}.
\]
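As a quick numerical sanity check (my own NumPy sketch; the setup and variable names are not from the notes), we can compare the Woodbury formula against a directly computed inverse for a small random rank-2 update:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + 10 * np.eye(n)  # shifted to keep A well-conditioned
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

# Sherman-Morrison-Woodbury:
# (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
Ainv = np.linalg.inv(A)
C = np.eye(k) + V.T @ Ainv @ U                    # small k-by-k system
smw_inv = Ainv - Ainv @ U @ np.linalg.solve(C, V.T @ Ainv)

direct_inv = np.linalg.inv(A + U @ V.T)
print(np.allclose(smw_inv, direct_inv))           # agree up to roundoff
```

Note that only a $k \times k$ system involving $I + V^T A^{-1} U$ is solved; this is the point of the formula when a factorization of $A$ is already in hand.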
Bindel, Fall 2009 Matrix Computations (CS 6210)
Recall the backward error result for the solution computed via Gaussian elimination:
\[
(1) \qquad (A + \delta A)\hat{x} = b,
\]
where $|\delta A| \lesssim 3 n \epsilon_{\mathrm{mach}} |\hat{L}||\hat{U}|$, assuming $\hat{L}$ and $\hat{U}$ are the computed $L$ and $U$
factors.
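This bound can be illustrated numerically. The following is my own NumPy experiment (not from the notes): run unpivoted Gaussian elimination on a diagonally dominant matrix, then check the componentwise residual against $3 n \epsilon_{\mathrm{mach}} |\hat{L}||\hat{U}|$.

```python
import numpy as np

def lu_nopivot(A):
    """Unpivoted LU on a copy; returns unit lower L and upper U with A ~= L @ U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(n - 1):
        A[j+1:, j] /= A[j, j]                              # multipliers
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])  # Schur complement update
    return np.tril(A, -1) + np.eye(n), np.triu(A)

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)  # dominant diagonal: safe pivots

L, U = lu_nopivot(A)
eps = np.finfo(float).eps
residual = np.abs(A - L @ U)
bound = 3 * n * eps * (np.abs(L) @ np.abs(U))
print(np.all(residual <= bound))                     # the componentwise bound holds
```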
Though I didn't do this in class, here I will briefly sketch a part of the
error analysis following Demmel's treatment (§2.4.2). Mostly, this is because
I find the treatment in §3.3.1 of our book less clear than I would like; but
also, the bound in Demmel's book is marginally tighter. Here is the idea.
Suppose $\hat{L}$ and $\hat{U}$ are the computed $L$ and $U$ factors. We obtain $\hat{u}_{jk}$ by
repeatedly subtracting $\hat{l}_{ji} \hat{u}_{ik}$ from the original $a_{jk}$, i.e.
\[
\hat{u}_{jk} = \mathrm{fl}\!\left( a_{jk} - \sum_{i=1}^{j-1} \hat{l}_{ji} \hat{u}_{ik} \right).
\]
Regardless of the order of the sum, we get an error that looks like
\[
\hat{u}_{jk} = a_{jk}(1 + \delta_0) - \sum_{i=1}^{j-1} \hat{l}_{ji} \hat{u}_{ik} (1 + \delta_i) + O(\epsilon_{\mathrm{mach}}^2),
\]
with each $|\delta_i| \lesssim (j-1)\epsilon_{\mathrm{mach}}$.
Rearranging, and using the convention $\hat{l}_{jj} = 1$, we find
$a_{jk} = \sum_{i=1}^{j} \hat{l}_{ji} \hat{u}_{ik} + E_{jk}$, where
\[
E_{jk} = -\hat{l}_{jj} \hat{u}_{jk} \delta_0 + \sum_{i=1}^{j-1} \hat{l}_{ji} \hat{u}_{ik} (\delta_i - \delta_0) + O(\epsilon_{\mathrm{mach}}^2).
\]
Pivoting
The backward error analysis in the previous section is not completely satisfactory,
since $|\hat{L}||\hat{U}|$ may be much larger than $|A|$, yielding a large backward
error overall. For example, consider the matrix
\[
A = \begin{bmatrix} \delta & 1 \\ 1 & 1 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 \\ \delta^{-1} & 1 \end{bmatrix}
    \begin{bmatrix} \delta & 1 \\ 0 & 1 - \delta^{-1} \end{bmatrix}.
\]
For small $\delta$, the entries $\delta^{-1}$ in the triangular factors are enormous
even though the entries of $A$ are modest, so $|\hat{L}||\hat{U}|$ is much bigger than $|A|$.
If we instead factor the matrix with its rows swapped, we get
\[
\hat{A} = \begin{bmatrix} 1 & 1 \\ \delta & 1 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 \\ \delta & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 1 \\ 0 & 1 - \delta \end{bmatrix}.
\]
Now the triangular factors for the re-ordered system matrix $\hat{A}$ have very
modest norms, and so we are happy. If we think of the re-ordering as the
effect of a permutation matrix $P$, we can write
\[
A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
    \begin{bmatrix} 1 & 0 \\ \delta & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 1 \\ 0 & 1 - \delta \end{bmatrix}
  = P^T L U.
\]
¹It's obvious that $E_{jk}$ is bounded in magnitude by $2(j-1)\epsilon_{\mathrm{mach}} (|\hat{L}||\hat{U}|)_{jk} + O(\epsilon_{\mathrm{mach}}^2)$.
We cut a factor of two if we go down to the level of looking at the individual rounding
errors during the dot product, because some of those errors cancel.
for j = 1:n-1
  [~,p] = max(abs(A(j:n,j))); A([j p+j-1],:) = A([p+j-1 j],:);  % pivot: swap rows
  A(j+1:n,j) = A(j+1:n,j)/A(j,j);                               % multipliers
  A(j+1:n,j+1:n) = A(j+1:n,j+1:n) - A(j+1:n,j)*A(j,j+1:n);      % Schur update
end
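To see numerically why pivoting matters, here is a small NumPy experiment of my own (not from the notes) on the 2-by-2 example above: the unpivoted factors commit an $O(1)$ backward error, while the pivoted factors reproduce $A$ exactly.

```python
import numpy as np

delta = 1e-17                              # smaller than machine epsilon
A = np.array([[delta, 1.0], [1.0, 1.0]])

# Unpivoted LU: the multiplier 1/delta = 1e17 swamps the 1 in position (2,2)
L1 = np.array([[1.0, 0.0], [1.0 / delta, 1.0]])
U1 = np.array([[delta, 1.0], [0.0, 1.0 - 1.0 / delta]])

# Pivoted LU: swap the rows first, so P A = L2 U2 with modest entries
P = np.array([[0.0, 1.0], [1.0, 0.0]])
L2 = np.array([[1.0, 0.0], [delta, 1.0]])
U2 = np.array([[1.0, 1.0], [0.0, 1.0 - delta]])

print(np.abs(A - L1 @ U1).max())           # O(1) backward error
print(np.abs(A - P.T @ L2 @ U2).max())     # exactly zero
```

The unpivoted residual is $O(1)$ because $1 - \delta^{-1}$ rounds to $-\delta^{-1}$, so the contribution of $a_{22} = 1$ is lost entirely.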
Iterative refinement

Suppose $\hat{A} = A + E$ is an approximation to $A$ for which we have a
factorization (for example, the product of the computed $L$ and $U$ factors).
The true solution satisfies
\[
(2) \qquad Ax = \hat{A}x - Ex = b,
\]
and we can try to solve this system by the iteration
\[
(3) \qquad \hat{A} x_{k+1} - (\hat{A} - A) x_k = b,
\]
or
\[
x_{k+1} = x_k + \hat{A}^{-1} (b - A x_k).
\]
Note that this latter form is the same as an inexact Newton iteration on the
equation $Ax - b = 0$ with the approximate Jacobian $\hat{A}$.
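As an illustration (my own NumPy sketch; the perturbation size and iteration count are arbitrary choices, not from the notes), the iteration recovers an accurate solution even when every solve uses a deliberately perturbed matrix $\hat{A} = A + E$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true

E = 1e-4 * rng.standard_normal((n, n))   # perturbation: Ahat = A + E
Ahat = A + E

# Iterative refinement: x_{k+1} = x_k + Ahat^{-1} (b - A x_k)
x = np.linalg.solve(Ahat, b)             # initial solve with the wrong matrix
for _ in range(20):
    x = x + np.linalg.solve(Ahat, b - A @ x)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # tiny relative error
```

Each step contracts the error by roughly $\|\hat{A}^{-1} E\|$, matching the error recurrence derived below, so a modest number of cheap re-solves with the already-factored $\hat{A}$ drives the error down to roundoff level.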
If we subtract (2) from (3), we see
\[
\hat{A}(x_{k+1} - x) - E(x_k - x) = 0,
\]
or
\[
x_{k+1} - x = \hat{A}^{-1} E (x_k - x).
\]
Taking norms, we have