
Numerical Analysis I

Direct Methods for Solving Linear Systems

Instructor: Wei-Cheng Wang 1

Department of Mathematics
National TsingHua University

Fall 2010

¹ These slides are based on Prof. Tsung-Ming Huang (NTNU)'s original slides.
Wei-Cheng Wang (NTHU) Direct methods for LS Fall 2010 1 / 81
Outline

1 Linear systems of equations

2 Pivoting Strategies

3 Matrix factorization

4 Special types of matrices



Linear systems of equations

Three operations to simplify the linear system:


1 (λEi) → (Ei): Equation Ei can be multiplied by λ ≠ 0, with the resulting equation used in place of Ei.
2 (Ei + λEj) → (Ei): Equation Ej can be multiplied by λ ≠ 0 and added to equation Ei, with the resulting equation used in place of Ei.
3 (Ei) ↔ (Ej): Equations Ei and Ej can be transposed in order.

Example

E1 : x1 + x2 + 3x4 = 4,
E2 : 2x1 + x2 − x3 + x4 = 1,
E3 : 3x1 − x2 − x3 + 2x4 = −3,
E4 : −x1 + 2x2 + 3x3 − x4 = 4.



Solution:
(E2 − 2E1 ) → (E2 ), (E3 − 3E1 ) → (E3 ) and (E4 + E1 ) → (E4 ):

E1 : x1 + x2 + 3x4 = 4,
E2 : − x2 − x3 − 5x4 = −7,
E3 : − 4x2 − x3 − 7x4 = −15,
E4 : 3x2 + 3x3 + 2x4 = 8.

(E3 − 4E2 ) → (E3 ) and (E4 + 3E2 ) → (E4 ):

E1 : x1 + x2 + 3x4 = 4,
E2 : − x2 − x3 − 5x4 = −7,
E3 : 3x3 + 13x4 = 13,
E4 : − 13x4 = −13.



Backward-substitution process:
1 E4 ⇒ x4 = 1.
2 Solve E3 for x3:

    x3 = (1/3)(13 − 13x4) = (1/3)(13 − 13) = 0.

3 E2 gives

    x2 = −(−7 + 5x4 + x3) = −(−7 + 5 + 0) = 2.

4 E1 gives

    x1 = 4 − 3x4 − x2 = 4 − 3 − 2 = −1.



Solve linear systems of equations

    a11 x1 + a12 x2 + · · · + a1n xn = b1
    a21 x1 + a22 x2 + · · · + a2n xn = b2
                  ⋮
    an1 x1 + an2 x2 + · · · + ann xn = bn

Rewrite in the matrix form

    Ax = b,    (1)

where

        [ a11 a12 ··· a1n ]        [ b1 ]        [ x1 ]
    A = [ a21 a22 ··· a2n ],   b = [ b2 ],   x = [ x2 ]
        [  ⋮   ⋮   ⋱   ⋮  ]        [  ⋮ ]        [  ⋮ ]
        [ an1 an2 ··· ann ]        [ bn ]        [ xn ]

and [A, b] is called the augmented matrix.



Gaussian elimination with backward substitution
The augmented matrix in the previous example is

    [  1  1  0  3 |  4 ]
    [  2  1 −1  1 |  1 ]
    [  3 −1 −1  2 | −3 ]
    [ −1  2  3 −1 |  4 ]

(E2 − 2E1) → (E2), (E3 − 3E1) → (E3) and (E4 + E1) → (E4):

    [ 1  1  0  3 |  4  ]
    [ 0 −1 −1 −5 |  −7 ]
    [ 0 −4 −1 −7 | −15 ]
    [ 0  3  3  2 |   8 ]

(E3 − 4E2) → (E3) and (E4 + 3E2) → (E4):

    [ 1  1  0   3 |   4 ]
    [ 0 −1 −1  −5 |  −7 ]
    [ 0  0  3  13 |  13 ]
    [ 0  0  0 −13 | −13 ]
The general Gaussian elimination procedure
Provided a11 ≠ 0, for each i = 2, 3, . . . , n, perform

    (Ei − (ai1/a11) E1) → (Ei).

This transforms all the entries in the first column below the diagonal to zero. Denote the new entry in the ith row and jth column by aij.
For i = 2, 3, . . . , n − 1, provided aii ≠ 0, perform

    (Ej − (aji/aii) Ei) → (Ej),  ∀ j = i + 1, i + 2, . . . , n.

This transforms all the entries in the ith column below the diagonal to zero.
The result is an upper triangular system:

    [ a11 a12 ··· a1n | b1 ]
    [  0  a22 ··· a2n | b2 ]
    [  ⋮   ⋱   ⋱   ⋮  |  ⋮ ]
    [  0  ···  0  ann | bn ]
The process of Gaussian elimination results in a sequence of matrices

    A = A^(1) → A^(2) → · · · → A^(n) = upper triangular matrix.

The matrix A^(k) has the following form:

            [ a_{11}^(1) ···  a_{1,k−1}^(1)      a_{1k}^(1)      ···  a_{1j}^(1)      ···  a_{1n}^(1)      ]
            [     ⋮       ⋱        ⋮                 ⋮                    ⋮                    ⋮           ]
            [     0      ···  a_{k−1,k−1}^(k−1)  a_{k−1,k}^(k−1) ···  a_{k−1,j}^(k−1) ···  a_{k−1,n}^(k−1) ]
    A^(k) = [     0      ···        0            a_{kk}^(k)      ···  a_{kj}^(k)      ···  a_{kn}^(k)      ]
            [     ⋮                 ⋮                 ⋮                    ⋮                    ⋮           ]
            [     0      ···        0            a_{ik}^(k)      ···  a_{ij}^(k)      ···  a_{in}^(k)      ]
            [     ⋮                 ⋮                 ⋮                    ⋮                    ⋮           ]
            [     0      ···        0            a_{nk}^(k)      ···  a_{nj}^(k)      ···  a_{nn}^(k)      ]


The entries of A^(k) are produced by the formula

    a_{ij}^(k) = a_{ij}^(k−1),                                                        for i = 1, . . . , k − 1, j = 1, . . . , n;
    a_{ij}^(k) = 0,                                                                   for i = k, . . . , n, j = 1, . . . , k − 1;
    a_{ij}^(k) = a_{ij}^(k−1) − (a_{i,k−1}^(k−1) / a_{k−1,k−1}^(k−1)) × a_{k−1,j}^(k−1),  for i = k, . . . , n, j = k, . . . , n.

The procedure will fail if one of the elements a_{11}^(1), a_{22}^(2), . . . , a_{nn}^(n) is zero.
a_{ii}^(i) is called the pivot element.



Backward substitution
The new linear system is triangular:

    a11 x1 + a12 x2 + · · · + a1n xn = b1,
             a22 x2 + · · · + a2n xn = b2,
                                  ⋮
                            ann xn = bn.

Solving the nth equation for xn gives

    xn = bn / ann.

Solving the (n − 1)th equation for xn−1 and using the value for xn yields

    xn−1 = (bn−1 − a_{n−1,n} xn) / a_{n−1,n−1}.

In general,

    xi = (bi − Σ_{j=i+1}^{n} aij xj) / aii,  ∀ i = n − 1, n − 2, . . . , 1.
Algorithm (Backward Substitution)
Suppose that U ∈ Rn×n is nonsingular upper triangular and b ∈ Rn . This
algorithm computes the solution of U x = b.

For i = n, . . . , 1
    tmp = 0
    For j = i + 1, . . . , n
        tmp = tmp + U (i, j) ∗ x(j)
    End for
    x(i) = (b(i) − tmp)/U (i, i)
End for
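This algorithm translates directly into code. A minimal Python sketch (the function name is ours, not from the slides), applied to the triangular system obtained in the earlier example:

```python
def backward_substitution(U, b):
    """Solve U x = b for a nonsingular upper triangular U (list of rows)."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # i = n, ..., 1 in the slides' indexing
        tmp = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - tmp) / U[i][i]
    return x

# Triangular system from the worked example; its solution is (-1, 2, 0, 1)
U = [[1, 1, 0, 3],
     [0, -1, -1, -5],
     [0, 0, 3, 13],
     [0, 0, 0, -13]]
b = [4, -7, 13, -13]
x = backward_substitution(U, b)
```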



Example
Solve the system of linear equations

    [  6  −2  2   4 ] [x1]   [  12 ]
    [ 12  −8  6  10 ] [x2] = [  34 ]
    [  3 −13  9   3 ] [x3]   [  27 ]
    [ −6   4  1 −18 ] [x4]   [ −38 ]

Solution:
1st step: Use 6 as the pivot element and the first row as the pivot row; the multipliers 2, 1/2, −1 are produced to reduce the system to

    [ 6  −2  2   4 ] [x1]   [  12 ]
    [ 0  −4  2   2 ] [x2] = [  10 ]
    [ 0 −12  8   1 ] [x3]   [  21 ]
    [ 0   2  3 −14 ] [x4]   [ −26 ]


2nd step: Use −4 as the pivot element and the second row as the pivot row; the multipliers 3, −1/2 are computed to reduce the system to

    [ 6 −2 2   4 ] [x1]   [  12 ]
    [ 0 −4 2   2 ] [x2] = [  10 ]
    [ 0  0 2  −5 ] [x3]   [  −9 ]
    [ 0  0 4 −13 ] [x4]   [ −21 ]

3rd step: Use 2 as the pivot element and the third row as the pivot row; the multiplier 2 is found to reduce the system to

    [ 6 −2 2  4 ] [x1]   [ 12 ]
    [ 0 −4 2  2 ] [x2] = [ 10 ]
    [ 0  0 2 −5 ] [x3]   [ −9 ]
    [ 0  0 0 −3 ] [x4]   [ −3 ]


4th step: The backward substitution is applied:

    x4 = −3/−3 = 1,
    x3 = (−9 + 5x4)/2 = (−9 + 5)/2 = −2,
    x2 = (10 − 2x4 − 2x3)/(−4) = (10 − 2 + 4)/(−4) = −3,
    x1 = (12 − 4x4 − 2x3 + 2x2)/6 = (12 − 4 + 4 − 6)/6 = 1.

This example is done since a_{kk}^(k) ≠ 0 for all k = 1, 2, 3, 4.
What to do if a_{kk}^(k) = 0 for some k?



Example
Solve the system of linear equations

    [ 1 −1 2 −1 ] [x1]   [  −8 ]
    [ 2 −2 3 −3 ] [x2] = [ −20 ]
    [ 1  1 1  0 ] [x3]   [  −2 ]
    [ 1 −1 4  3 ] [x4]   [   4 ]

Solution:
1st step: Use 1 as the pivot element and the first row as the pivot row; the multipliers 2, 1, 1 are produced to reduce the system to

    [ 1 −1  2 −1 ] [x1]   [ −8 ]
    [ 0  0 −1 −1 ] [x2] = [ −4 ]
    [ 0  2 −1  1 ] [x3]   [  6 ]
    [ 0  0  2  4 ] [x4]   [ 12 ]


2nd step: Since a_{22}^(2) = 0 and a_{32}^(2) ≠ 0, the operation (E2) ↔ (E3) is performed to obtain the new system

    [ 1 −1  2 −1 ] [x1]   [ −8 ]
    [ 0  2 −1  1 ] [x2] = [  6 ]
    [ 0  0 −1 −1 ] [x3]   [ −4 ]
    [ 0  0  2  4 ] [x4]   [ 12 ]

3rd step: Use −1 as the pivot element and the third row as the pivot row; the multiplier −2 is found to reduce the system to

    [ 1 −1  2 −1 ] [x1]   [ −8 ]
    [ 0  2 −1  1 ] [x2] = [  6 ]
    [ 0  0 −1 −1 ] [x3]   [ −4 ]
    [ 0  0  0  2 ] [x4]   [  4 ]


4th step: The backward substitution is applied:

    x4 = 4/2 = 2,
    x3 = (−4 + x4)/(−1) = 2,
    x2 = (6 − x4 + x3)/2 = 3,
    x1 = (−8 + x4 − 2x3 + x2)/1 = −7.

This example illustrates what is done if a_{kk}^(k) = 0 for some k.
If a_{pk}^(k) ≠ 0 for some p with k + 1 ≤ p ≤ n, then the operation (Ek) ↔ (Ep) is performed to obtain a new matrix.
If a_{pk}^(k) = 0 for each p, then the linear system does not have a unique solution and the procedure stops.



Algorithm (Gaussian elimination)
Given A ∈ Rn×n and b ∈ Rn , this algorithm implements the Gaussian
elimination procedure to reduce A to upper triangular and modify the
entries of b accordingly.

For k = 1, . . . , n − 1
    Let p be the smallest integer with k ≤ p ≤ n and apk ≠ 0.
    If no such p exists, then stop.
    If p ≠ k, then perform (Ep) ↔ (Ek).
    For i = k + 1, . . . , n
        t = A(i, k)/A(k, k)
        A(i, k) = 0
        b(i) = b(i) − t × b(k)
        For j = k + 1, . . . , n
            A(i, j) = A(i, j) − t × A(k, j)
        End for
    End for
End for
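The whole procedure, including the row interchange, can be sketched in Python (names are ours). Here it is applied to the earlier 4 × 4 example whose solution is (−7, 3, 2, 2):

```python
def gaussian_elimination(A, b):
    """Reduce [A, b] to upper triangular form with row interchanges,
    then apply backward substitution. A and b are modified in place."""
    n = len(A)
    for k in range(n - 1):
        # find the smallest p >= k with A[p][k] != 0
        p = next((i for i in range(k, n) if A[i][k] != 0), None)
        if p is None:
            raise ValueError("no unique solution")
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            t = A[i][k] / A[k][k]
            A[i][k] = 0.0
            b[i] -= t * b[k]
            for j in range(k + 1, n):
                A[i][j] -= t * A[k][j]
    # backward substitution on the triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1, -1, 2, -1], [2, -2, 3, -3], [1, 1, 1, 0], [1, -1, 4, 3]]
b = [-8, -20, -2, 4]
x = gaussian_elimination(A, b)
```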
Number of floating-point arithmetic operations

Eliminate kth column


For i = k + 1, . . . , n
    t = A(i, k)/A(k, k); b(i) = b(i) − t × b(k)
    For j = k + 1, . . . , n
        A(i, j) = A(i, j) − t × A(k, j)
    End for
End for

Multiplications/divisions

(n − k) + (n − k) + (n − k)(n − k) = (n − k)(n − k + 2)

Additions/subtractions

(n − k) + (n − k)(n − k) = (n − k)(n − k + 1)



Total number of operations for multiplications/divisions:

    Σ_{k=1}^{n−1} (n − k)(n − k + 2)
      = Σ_{k=1}^{n−1} (n² − 2nk + k² + 2n − 2k)
      = (n² + 2n) Σ_{k=1}^{n−1} 1 − 2(n + 1) Σ_{k=1}^{n−1} k + Σ_{k=1}^{n−1} k²
      = (n² + 2n)(n − 1) − 2(n + 1) (n − 1)n/2 + (n − 1)n(2n − 1)/6
      = (2n³ + 3n² − 5n)/6.

Total number of operations for additions/subtractions:

    Σ_{k=1}^{n−1} (n − k)(n − k + 1)
      = Σ_{k=1}^{n−1} (n² − 2nk + k² + n − k)
      = (n² + n) Σ_{k=1}^{n−1} 1 − (2n + 1) Σ_{k=1}^{n−1} k + Σ_{k=1}^{n−1} k²
      = (n³ − n)/3.


Backward substitution

    x(n) = b(n)/U (n, n)
    For i = n − 1, . . . , 1
        tmp = U (i, i + 1) × x(i + 1)
        For j = i + 2, . . . , n
            tmp = tmp + U (i, j) × x(j)
        End for
        x(i) = (b(i) − tmp)/U (i, i)
    End for

Multiplications/divisions:

    1 + Σ_{i=1}^{n−1} [(n − i) + 1] = (n² + n)/2.

Additions/subtractions:

    Σ_{i=1}^{n−1} [(n − i − 1) + 1] = (n² − n)/2.
The total number of arithmetic operations in Gaussian elimination with backward substitution is:
Multiplications/divisions:

    (2n³ + 3n² − 5n)/6 + (n² + n)/2 = n³/3 + n² − n/3 ≈ n³/3.

Additions/subtractions:

    (n³ − n)/3 + (n² − n)/2 = n³/3 + n²/2 − 5n/6 ≈ n³/3.
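These closed forms can be checked empirically by counting the operations performed in the loops. A small sketch (all names are ours), following the loop bounds above:

```python
def elimination_counts(n):
    """Count mults/divs and adds/subs in the elimination phase."""
    mul = add = 0
    for k in range(1, n):              # k = 1, ..., n-1
        rows = n - k                   # rows below the pivot
        mul += rows * (n - k + 2)      # 1 div + 1 mult for b + (n-k) mults per row
        add += rows * (n - k + 1)      # 1 sub for b + (n-k) subs per row
    return mul, add

def substitution_counts(n):
    """Count operations in backward substitution."""
    mul = 1 + sum((n - i) + 1 for i in range(1, n))
    add = sum((n - i - 1) + 1 for i in range(1, n))
    return mul, add

n = 10
mul_e, add_e = elimination_counts(n)
mul_s, add_s = substitution_counts(n)
```

For n = 10 the counts agree with (2n³ + 3n² − 5n)/6 = 375, (n³ − n)/3 = 330, (n² + n)/2 = 55 and (n² − n)/2 = 45.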



Pivoting Strategies
If a_{kk}^(k) is small in magnitude compared to a_{jk}^(k), then the multiplier satisfies

    |m_{jk}| = |a_{jk}^(k) / a_{kk}^(k)| > 1.

Round-off error introduced in the computation of

    a_{jℓ}^(k+1) = a_{jℓ}^(k) − m_{jk} a_{kℓ}^(k),  for ℓ = k + 1, . . . , n,

is then amplified. The error can be increased further when performing the backward substitution for

    xk = (bk − Σ_{j=k+1}^{n} a_{kj}^(k) xj) / a_{kk}^(k)

with a small value of a_{kk}^(k).



Example
The linear system

    E1: 0.003000x1 + 59.14x2 = 59.17,
    E2: 5.291x1 − 6.130x2 = 46.78,

has the exact solution x1 = 10.00 and x2 = 1.000. Suppose Gaussian elimination is performed on this system using four-digit arithmetic with rounding.

a11 = 0.0030 is small and

    m21 = 5.291/0.0030 = 1763.66̄ ≈ 1764.

Perform (E2 − m21 E1) → (E2):

    0.0030x1 + 59.14x2 = 59.17
           − 104309.376̄x2 = −104309.376̄.



Rounding with four-digit arithmetic:
Coefficient of x2:

    −6.130 − 1764 × 59.14 = −6.130 − 104322.96
                          ≈ −6.130 − 104300 = −104306.13
                          ≈ −104300.

Right-hand side:

    46.78 − 1764 × 59.17 = 46.78 − 104375.88
                         ≈ 46.78 − 104400 = −104353.22
                         ≈ −104400.

New linear system:

    0.0030x1 + 59.14x2 = 59.17
           − 104300x2 ≈ −104400.



Approximated solution:

    x2 = 104400/104300 ≈ 1.001,
    x1 = (59.17 − 59.14 × 1.001)/0.0030 = (59.17 − 59.19914)/0.0030
       ≈ (59.17 − 59.20)/0.0030 = −10.00.

This ruins the approximation to the actual value x1 = 10.00.



Partial pivoting
To avoid a pivot element that is small relative to the other entries, pivoting is performed by selecting an element a_{pq}^(k) of larger magnitude as the pivot.
Specifically, select the pivot a_{pk}^(k) with

    |a_{pk}^(k)| = max_{k≤i≤n} |a_{ik}^(k)|

and perform (Ek) ↔ (Ep).
This row interchange strategy is called partial pivoting.



Example
Reconsider the linear system

    E1: 0.003000x1 + 59.14x2 = 59.17,
    E2: 5.291x1 − 6.130x2 = 46.78.

Find the pivot with

    max{|a11|, |a21|} = 5.291 = |a21|.

Perform (E2) ↔ (E1):

    E1: 5.291x1 − 6.130x2 = 46.78,
    E2: 0.003000x1 + 59.14x2 = 59.17.

The multiplier for the new system is

    m21 = a21/a11 = 0.0005670.



The operation (E2 − m21 E1) → (E2) reduces the system to

    5.291x1 − 6.130x2 = 46.78,
              59.14x2 ≈ 59.14.

The four-digit answers resulting from the backward substitution are the correct values x1 = 10.00 and x2 = 1.000.
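The effect of the small pivot can be reproduced by simulating four-significant-digit rounding after every arithmetic operation. The helper fl below is our own construction, not part of the slides:

```python
def fl(x):
    """Round x to four significant decimal digits."""
    return float(f"{x:.4g}")

# Without pivoting: eliminate with the tiny pivot 0.003000
m21 = fl(5.291 / 0.003000)                        # 1764
a22 = fl(-6.130 - fl(m21 * 59.14))                # -104300
b2 = fl(46.78 - fl(m21 * 59.17))                  # -104400
x2 = fl(b2 / a22)                                 # 1.001
x1 = fl(fl(59.17 - fl(59.14 * x2)) / 0.003000)    # -10.00, far from the true 10.00

# With partial pivoting: rows interchanged, pivot 5.291
m = fl(0.003000 / 5.291)
a22p = fl(59.14 - fl(m * (-6.130)))
b2p = fl(59.17 - fl(m * 46.78))
y2 = fl(b2p / a22p)                               # 1.000
y1 = fl(fl(46.78 - fl(-6.130 * y2)) / 5.291)      # 10.00
```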



Example
The linear system

    E1: 30.00x1 + 591400x2 = 591700,
    E2: 5.291x1 − 6.130x2 = 46.78,

is the same as that in the previous example except that all the entries in the first equation have been multiplied by 10⁴.

The pivot is a11 = 30.00 and the multiplier

    m21 = 5.291/30.00 = 0.1764

leads to the system

    30.00x1 + 591400x2 = 591700
           − 104300x2 ≈ −104400,

which has the inaccurate solution x2 ≈ 1.001 and x1 ≈ −10.00.

Scaled partial pivoting
Define a scale factor si as

    si = max_{1≤j≤n} |aij|,  for i = 1, . . . , n.

If si = 0 for some i, then the system has no unique solution.
In the ith column, choose the least integer p ≥ i with

    |a_{pi}|/s_p = max_{i≤k≤n} |a_{ki}|/s_k

and perform (Ei) ↔ (Ep) if p ≠ i.
The scale factors s1, . . . , sn are computed only once and must also be interchanged when row interchanges are performed.



Example
Apply scaled partial pivoting to the linear system

    E1: 30.00x1 + 591400x2 = 591700,
    E2: 5.291x1 − 6.130x2 = 46.78.

The scale factors s1 and s2 are

    s1 = max{|30.00|, |591400|} = 591400
and
    s2 = max{|5.291|, |−6.130|} = 6.130.

Consequently,

    |a11|/s1 = 30.00/591400 = 0.5073 × 10⁻⁴,
    |a21|/s2 = 5.291/6.130 = 0.8631,

and the interchange (E1) ↔ (E2) is made.
Applying Gaussian elimination to the new system

    5.291x1 − 6.130x2 = 46.78,
    30.00x1 + 591400x2 = 591700

produces the correct results: x1 = 10.00 and x2 = 1.000.
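The pivot-row selection for this example can be sketched in a few lines of Python (the helper name is ours), using the scale factors computed above:

```python
def scaled_pivot_row(A, i, s):
    """Return the least p >= i maximizing |A[p][i]| / s[p]."""
    best, p = -1.0, i
    for k in range(i, len(A)):
        r = abs(A[k][i]) / s[k]
        if r > best:          # strict > keeps the least index on ties
            best, p = r, k
    return p

A = [[30.00, 591400.0],
     [5.291, -6.130]]
s = [max(abs(a) for a in row) for row in A]   # scale factors 591400 and 6.130
p = scaled_pivot_row(A, 0, s)                 # row of E2 is chosen as pivot row
```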



Matrix factorization

The linear system Ax = b has a unique solution x = A⁻¹b when the coefficient matrix A is nonsingular.
Use Gaussian elimination to factor the coefficient matrix into a
product of matrices. The factorization is called LU -factorization and
has the form A = LU , where L is unit lower triangular and U is
upper triangular.
The solution to the original problem Ax = LU x = b is then found by
a two-step triangular solve process:

Ly = b, U x = y.

LU factorization requires O(n³) arithmetic operations. Forward substitution for solving a lower-triangular system Ly = b requires O(n²) operations, and backward substitution for solving an upper-triangular system U x = y also requires O(n²) arithmetic operations.



For a given vector v ∈ Rⁿ with vk ≠ 0 for some 1 ≤ k ≤ n, let

    ℓ_{ik} = vi / vk,  i = k + 1, . . . , n,
    ℓk = [0  · · ·  0  ℓ_{k+1,k}  · · ·  ℓ_{n,k}]ᵀ,

and

                        [ 1  ···     0        0  ···  0 ]
                        [ ⋮   ⋱      ⋮        ⋮       ⋮ ]
                        [ 0  ···     1        0  ···  0 ]
    Mk = I − ℓk ekᵀ =   [ 0  ···  −ℓ_{k+1,k}  1  ···  0 ]
                        [ ⋮          ⋮        ⋮   ⋱   ⋮ ]
                        [ 0  ···  −ℓ_{n,k}    0  ···  1 ]



Then one can verify that

    Mk v = [v1  · · ·  vk  0  · · ·  0]ᵀ.

Mk is called a Gaussian transformation, and the vector ℓk a Gauss vector.
Furthermore, one can verify that

                                            [ 1  ···     0       0  ···  0 ]
                                            [ ⋮   ⋱      ⋮       ⋮       ⋮ ]
                                            [ 0  ···     1       0  ···  0 ]
    Mk⁻¹ = (I − ℓk ekᵀ)⁻¹ = I + ℓk ekᵀ =    [ 0  ···  ℓ_{k+1,k}  1  ···  0 ]
                                            [ ⋮          ⋮       ⋮   ⋱   ⋮ ]
                                            [ 0  ···  ℓ_{n,k}    0  ···  1 ]
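Both identities can be checked numerically. A small pure-Python sketch (helper names are ours) builds Mk for a sample vector and verifies that it zeroes the entries below position k:

```python
def gauss_transform(v, k):
    """Build M_k = I - l_k e_k^T for vector v (0-based column index k)."""
    n = len(v)
    M = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(k + 1, n):
        M[i][k] = -v[i] / v[k]   # negated multipliers below the diagonal
    return M

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

v = [2.0, 4.0, 8.0]
M1 = gauss_transform(v, 0)   # should zero the entries below v[0]
w = matvec(M1, v)            # expected [2.0, 0.0, 0.0]
```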



Given a nonsingular matrix A ∈ Rⁿˣⁿ, denote A^(1) ≡ [a_{ij}^(1)] = A. If a_{11}^(1) ≠ 0, then

    M1 = I − ℓ1 e1ᵀ,

where

    ℓ1 = [0  ℓ21  · · ·  ℓn1]ᵀ,   ℓ_{i1} = a_{i1}^(1) / a_{11}^(1),  i = 2, . . . , n,

can be formed such that

                        [ a_{11}^(1)  a_{12}^(1)  ···  a_{1n}^(1) ]
    A^(2) = M1 A^(1) =  [     0       a_{22}^(2)  ···  a_{2n}^(2) ]
                        [     ⋮           ⋮        ⋱       ⋮      ]
                        [     0       a_{n2}^(2)  ···  a_{nn}^(2) ]

where

    a_{ij}^(2) = a_{ij}^(1) − ℓ_{i1} × a_{1j}^(1),  for i = 2, . . . , n and j = 2, . . . , n.



In general, at the k-th step, we are confronted with a matrix

    A^(k) = M_{k−1} · · · M2 M1 A^(1)

          [ a_{11}^(1)  a_{12}^(1)  ···  a_{1,k−1}^(1)      a_{1k}^(1)       ···  a_{1n}^(1)      ]
          [     0       a_{22}^(2)  ···  a_{2,k−1}^(2)      a_{2k}^(2)       ···  a_{2n}^(2)      ]
          [     ⋮           ⋮        ⋱         ⋮                 ⋮                    ⋮           ]
        = [     0           0       ···  a_{k−1,k−1}^(k−1)  a_{k−1,k}^(k−1)  ···  a_{k−1,n}^(k−1) ]
          [     0           0       ···        0            a_{kk}^(k)       ···  a_{kn}^(k)      ]
          [     ⋮           ⋮                  ⋮                 ⋮                    ⋮           ]
          [     0           0       ···        0            a_{nk}^(k)       ···  a_{nn}^(k)      ]

If the pivot a_{kk}^(k) ≠ 0, then the multipliers

    ℓ_{ik} = a_{ik}^(k) / a_{kk}^(k),  i = k + 1, . . . , n,



can be computed and the Gaussian transformation

    Mk = I − ℓk ekᵀ,   where  ℓk = [0  · · ·  0  ℓ_{k+1,k}  · · ·  ℓ_{nk}]ᵀ,

can be applied to the left of A^(k) to obtain

    A^(k+1) = Mk A^(k)

          [ a_{11}^(1)  ···  a_{1,k−1}^(1)      a_{1k}^(1)       a_{1,k+1}^(1)      ···  a_{1n}^(1)      ]
          [     ⋮        ⋱         ⋮                 ⋮                ⋮                      ⋮           ]
          [     0       ···  a_{k−1,k−1}^(k−1)  a_{k−1,k}^(k−1)  a_{k−1,k+1}^(k−1)  ···  a_{k−1,n}^(k−1) ]
        = [     0       ···        0            a_{kk}^(k)       a_{k,k+1}^(k)      ···  a_{kn}^(k)      ]
          [     ⋮                  ⋮                 0           a_{k+1,k+1}^(k+1)  ···  a_{k+1,n}^(k+1) ]
          [     ⋮                  ⋮                 ⋮                ⋮                      ⋮           ]
          [     0       ···        0                 0           a_{n,k+1}^(k+1)    ···  a_{nn}^(k+1)    ]



in which

    a_{ij}^(k+1) = a_{ij}^(k) − ℓ_{ik} a_{kj}^(k),    (2)

for i = k + 1, . . . , n and j = k + 1, . . . , n. Upon completion,

    U ≡ A^(n) = M_{n−1} · · · M2 M1 A

is upper triangular. Hence

    A = M1⁻¹ M2⁻¹ · · · M_{n−1}⁻¹ U ≡ LU,



where

    L ≡ M1⁻¹ M2⁻¹ · · · M_{n−1}⁻¹
      = (I − ℓ1 e1ᵀ)⁻¹ (I − ℓ2 e2ᵀ)⁻¹ · · · (I − ℓ_{n−1} e_{n−1}ᵀ)⁻¹
      = (I + ℓ1 e1ᵀ)(I + ℓ2 e2ᵀ) · · · (I + ℓ_{n−1} e_{n−1}ᵀ)
      = I + ℓ1 e1ᵀ + ℓ2 e2ᵀ + · · · + ℓ_{n−1} e_{n−1}ᵀ

        [ 1                       ]
        [ ℓ21   1                 ]
      = [ ℓ31   ℓ32   1           ]
        [  ⋮     ⋮     ⋮   ⋱      ]
        [ ℓn1   ℓn2   ℓn3  ···  1 ]

is unit lower triangular. This matrix factorization is called the LU-factorization of A.



Algorithm (LU Factorization)
Given A ∈ Rn×n nonsingular, this algorithm computes a unit lower
triangular matrix L and an upper triangular matrix U such that A = LU .
For i, j = 1, . . . , n
    L(i, j) = 0, U (i, j) = 0
End for
For i = 1, . . . , n
    L(i, i) = 1
End for
For k = 1, . . . , n − 1
    For j = k, . . . , n
        U (k, j) = A(k, j)   (compute the kth row of U)
    End for
    For i = k + 1, . . . , n   (compute the kth column of L)
        L(i, k) = A(i, k)/A(k, k)
        For j = k + 1, . . . , n
            A(i, j) = A(i, j) − L(i, k) × U (k, j)   (update the lower-right submatrix, row by row)
        End for
    End for
End for
U (n, n) = A(n, n)
Algorithm (LU Factorization)
Memory saving version of LU factorization. The matrix A is overwritten
by L and U .

For k = 1, . . . , n − 1
    For i = k + 1, . . . , n
        A(i, k) = A(i, k)/A(k, k)
        For j = k + 1, . . . , n
            A(i, j) = A(i, j) − A(i, k) × A(k, j)
        End for
    End for
End for
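The memory-saving version is short enough to sketch directly in Python (names are ours). Applied to the 4 × 4 matrix of the earlier worked example, the strict lower part of A ends up holding the multipliers and the upper part holds U:

```python
def lu_inplace(A):
    """Overwrite A with its LU factors (no pivoting): the strict lower
    part holds the multipliers l_ik, the upper part holds U."""
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]               # store the multiplier in place
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A

A = [[1.0, 1.0, 0.0, 3.0],
     [2.0, 1.0, -1.0, 1.0],
     [3.0, -1.0, -1.0, 2.0],
     [-1.0, 2.0, 3.0, -1.0]]
lu_inplace(A)
# unpack the two factors from the overwritten storage
L = [[A[i][j] if j < i else (1.0 if i == j else 0.0) for j in range(4)] for i in range(4)]
U = [[A[i][j] if j >= i else 0.0 for j in range(4)] for i in range(4)]
```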



Forward Substitution
When a linear system Lx = b is lower triangular of the form

    [ ℓ11   0  ···   0  ] [x1]   [ b1 ]
    [ ℓ21  ℓ22 ···   0  ] [x2] = [ b2 ]
    [  ⋮    ⋮   ⋱    ⋮  ] [ ⋮]   [  ⋮ ]
    [ ℓn1  ℓn2 ···  ℓnn ] [xn]   [ bn ]

where all diagonals ℓii ≠ 0, xi can be obtained by the following procedure:

    x1 = b1/ℓ11,
    x2 = (b2 − ℓ21 x1)/ℓ22,
    x3 = (b3 − ℓ31 x1 − ℓ32 x2)/ℓ33,
     ⋮
    xn = (bn − ℓn1 x1 − ℓn2 x2 − · · · − ℓ_{n,n−1} x_{n−1})/ℓnn.



The general formula for computing xi is

    xi = (bi − Σ_{j=1}^{i−1} ℓij xj) / ℓii,  i = 1, 2, . . . , n.

Algorithm (Forward Substitution)
Suppose that L ∈ Rⁿˣⁿ is nonsingular lower triangular and b ∈ Rⁿ. This algorithm computes the solution of Lx = b.

    For i = 1, . . . , n
        tmp = 0
        For j = 1, . . . , i − 1
            tmp = tmp + L(i, j) ∗ x(j)
        End for
        x(i) = (b(i) − tmp)/L(i, i)
    End for
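A minimal Python sketch of this algorithm (the function name is ours), applied to the unit lower triangular factor and right-hand side used in the Ly = b example below:

```python
def forward_substitution(L, b):
    """Solve L x = b for a nonsingular lower triangular L (list of rows)."""
    n = len(L)
    x = [0.0] * n
    for i in range(n):
        tmp = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - tmp) / L[i][i]
    return x

L = [[1, 0, 0, 0],
     [2, 1, 0, 0],
     [3, 4, 1, 0],
     [-1, -3, 0, 1]]
b = [8, 7, 14, -7]
y = forward_substitution(L, b)
```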



Example

E1 : x1 + x2 + 3x4 = 4,
E2 : 2x1 + x2 − x3 + x4 = 1,
E3 : 3x1 − x2 − x3 + 2x4 = −3,
E4 : −x1 + 2x2 + 3x3 − x4 = 4.

Solution:
The sequence {(E2 − 2E1 ) → (E2 ), (E3 − 3E1 ) → (E3 ),
(E4 − (−1)E1 ) → (E4 ), (E3 − 4E2 ) → (E3 ),
(E4 − (−3)E2 ) → (E4 )} converts the system to the triangular system

x1 + x2 + 3x4 = 4,
− x2 − x3 − 5x4 = −7,
3x3 + 13x4 = 13,
− 13x4 = −13.



LU factorization of A:

        [  1  1  0  3 ]   [  1  0 0 0 ] [ 1  1  0   3 ]
    A = [  2  1 −1  1 ] = [  2  1 0 0 ] [ 0 −1 −1  −5 ] = LU.
        [  3 −1 −1  2 ]   [  3  4 1 0 ] [ 0  0  3  13 ]
        [ −1  2  3 −1 ]   [ −1 −3 0 1 ] [ 0  0  0 −13 ]



Solve Ly = b:

    [  1  0 0 0 ] [y1]   [  8 ]
    [  2  1 0 0 ] [y2] = [  7 ]
    [  3  4 1 0 ] [y3]   [ 14 ]
    [ −1 −3 0 1 ] [y4]   [ −7 ]

which implies that

    y1 = 8,
    y2 = 7 − 2y1 = −9,
    y3 = 14 − 3y1 − 4y2 = 26,
    y4 = −7 + y1 + 3y2 = −26.



Solve U x = y:

    [ 1  1  0   3 ] [x1]   [   8 ]
    [ 0 −1 −1  −5 ] [x2] = [  −9 ]
    [ 0  0  3  13 ] [x3]   [  26 ]
    [ 0  0  0 −13 ] [x4]   [ −26 ]

which implies that

    x4 = 2,
    x3 = (26 − 13x4)/3 = 0,
    x2 = (−9 + 5x4 + x3)/(−1) = −1,
    x1 = 8 − 3x4 − x2 = 3.



Partial pivoting
At the k-th step, select the pivot a_{pk}^(k) with

    |a_{pk}^(k)| = max_{k≤i≤n} |a_{ik}^(k)|

and perform (Ek) ↔ (Ep). That is, choose a permutation matrix

         [ I_{k−1}  0     0         0     0      ]
         [    0     0     0         1     0      ]
    Pk = [    0     0   I_{p−k−1}   0     0      ]
         [    0     1     0         0     0      ]
         [    0     0     0         0   I_{n−p}  ]

so that

    |(Pk A^(k))_{kk}| = max_{k≤i≤n} |(A^(k))_{ik}|

and set

    A^(k+1) = Mk Pk A^(k).


Let P1, . . . , P_{k−1} be the permutations chosen and M1, . . . , M_{k−1} denote the Gaussian transformations performed in the first k − 1 steps. At the k-th step, a permutation matrix Pk is chosen so that

    |(Pk M_{k−1} · · · M1 P1 A)_{kk}| = max_{k≤i≤n} |(M_{k−1} · · · M1 P1 A)_{ik}|.

As a consequence, |ℓij| ≤ 1 for i = 1, . . . , n, j = 1, . . . , i. Upon completion, we obtain an upper triangular matrix

    U ≡ M_{n−1} P_{n−1} · · · M1 P1 A.    (3)

Since any Pk is symmetric and Pkᵀ Pk = Pk² = I, we have

    M_{n−1} P_{n−1} · · · M2 P2 M1 P2 · · · P_{n−1} P_{n−1} · · · P2 P1 A = U,

and therefore

    P_{n−1} · · · P1 A = (M_{n−1} P_{n−1} · · · M2 P2 M1 P2 · · · P_{n−1})⁻¹ U.



In summary, Gaussian elimination with partial pivoting leads to the LU factorization

    P A = LU,    (4)

where

    P = P_{n−1} · · · P1

is a permutation matrix, and

    L ≡ (M_{n−1} P_{n−1} · · · M2 P2 M1 P2 · · · P_{n−1})⁻¹
      = P_{n−1} · · · P2 M1⁻¹ P2 M2⁻¹ · · · P_{n−1} M_{n−1}⁻¹.

Since

         [ I_{j−1}  0     0         0     0      ]               [ 0          ]
         [    0     0     0         1     0      ]               [ ⋮          ]
    Pj = [    0     0   I_{p−j−1}   0     0      ]   and   ℓj =  [ 0          ]
         [    0     1     0         0     0      ]               [ ℓ_{j+1,j}  ]
         [    0     0     0         0   I_{n−p}  ]               [ ⋮          ]
                                                                 [ ℓ_{nj}     ]
it implies that for i < j,

    eiᵀ Pj = eiᵀ,   eiᵀ ℓj = 0,
    Pj ℓi = [0  · · ·  0  ℓ̃_{i+1,i}  · · ·  ℓ̃_{n,i}]ᵀ ≡ ℓ̃i,

so that

    P2 M1⁻¹ P2 = P2 (I + ℓ1 e1ᵀ) P2 = I + ℓ̃1 e1ᵀ,
    P2 M1⁻¹ P2 M2⁻¹ = (I + ℓ̃1 e1ᵀ)(I + ℓ2 e2ᵀ) = I + ℓ̃1 e1ᵀ + ℓ2 e2ᵀ,
    P3 P2 M1⁻¹ P2 M2⁻¹ P3 = I + ℓ̂1 e1ᵀ + ℓ̃2 e2ᵀ,
    ⇒ · · ·

Therefore, L is unit lower triangular.
Algorithm (LU -factorization with Partial Pivoting)
Given a nonsingular A ∈ Rn×n , this algorithm finds a permutation P , and
computes a unit lower triangular L and an upper triangular U such that
P A = LU . A is overwritten by L and U , and P is not formed. An integer
array p is instead used for storing the row/column indices.
p(1 : n) = 1 : n
For k = 1, . . . , n − 1
    m = k
    For i = k + 1, . . . , n
        If |A(p(m), k)| < |A(p(i), k)|, then m = i
    End For
    ℓ = p(k); p(k) = p(m); p(m) = ℓ
    For i = k + 1, . . . , n
        A(p(i), k) = A(p(i), k)/A(p(k), k)
        For j = k + 1, . . . , n
            A(p(i), j) = A(p(i), j) − A(p(i), k) A(p(k), j)
        End For
    End For
End For
End For
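A Python sketch of this algorithm (names are ours), applied to a 4 × 4 matrix with a zero in the (1,1) position; the final p records the row order, and PA = LU can be read off from the overwritten storage:

```python
def plu(A):
    """LU with partial pivoting; A is overwritten in place and the
    index array p records the row order (P is never formed)."""
    n = len(A)
    p = list(range(n))
    for k in range(n - 1):
        # pivot: first index attaining the maximal |A[p[i]][k]|
        m = max(range(k, n), key=lambda i: abs(A[p[i]][k]))
        p[k], p[m] = p[m], p[k]
        for i in range(k + 1, n):
            A[p[i]][k] /= A[p[k]][k]          # store the multiplier
            for j in range(k + 1, n):
                A[p[i]][j] -= A[p[i]][k] * A[p[k]][j]
    return p

A = [[0.0, 1.0, -1.0, 1.0],
     [1.0, 1.0, -1.0, 2.0],
     [-1.0, -1.0, 1.0, 0.0],
     [1.0, 2.0, 0.0, 2.0]]
p = plu(A)
U = [[A[p[i]][j] if j >= i else 0.0 for j in range(4)] for i in range(4)]
```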
Since Gaussian elimination with partial pivoting produces the factorization (4), the linear system is transformed accordingly:

    Ax = b  =⇒  P Ax = P b  =⇒  LU x = P b.

Example
Find an LU factorization of

        [  0  1 −1  1 ]
    A = [  1  1 −1  2 ]
        [ −1 −1  1  0 ]
        [  1  2  0  2 ]

(E1) ↔ (E2), (E3 + E1) → (E3) and (E4 − E1) → (E4):

            [ 1  1 −1  2 ]        [ 0 1 0 0 ]        [  1 0 0 0 ]
    A^(2) = [ 0  1 −1  1 ],  P1 = [ 1 0 0 0 ],  M1 = [  0 1 0 0 ]
            [ 0  0  0  2 ]        [ 0 0 1 0 ]        [  1 0 1 0 ]
            [ 0  1  1  0 ]        [ 0 0 0 1 ]        [ −1 0 0 1 ]


(E3) ↔ (E4) and (E3 − E2) → (E3):

            [ 1 1 −1  2 ]        [ 1 0 0 0 ]        [ 1  0 0 0 ]
    A^(3) = [ 0 1 −1  1 ],  P2 = [ 0 1 0 0 ],  M2 = [ 0  1 0 0 ]
            [ 0 0  2 −1 ]        [ 0 0 0 1 ]        [ 0 −1 1 0 ]
            [ 0 0  0  2 ]        [ 0 0 1 0 ]        [ 0  0 0 1 ]

Permutation matrix P:

                [ 0 1 0 0 ]
    P = P2 P1 = [ 1 0 0 0 ]
                [ 0 0 0 1 ]
                [ 0 0 1 0 ]

Unit lower triangular matrix L:

                              [  1 0 0 0 ]
    L = P2 M1⁻¹ P2 M2⁻¹ =     [  0 1 0 0 ]
                              [  1 1 1 0 ]
                              [ −1 0 0 1 ]


The LU factorization of P A:

          [  1 0 0 0 ] [ 1 1 −1  2 ]
    P A = [  0 1 0 0 ] [ 0 1 −1  1 ] = LU.
          [  1 1 1 0 ] [ 0 0  2 −1 ]
          [ −1 0 0 1 ] [ 0 0  0  2 ]

So

                               [  0 1 0 0 ] [ 1 1 −1  2 ]
    A = P⁻¹ LU = (Pᵀ L)U =     [  1 0 0 0 ] [ 0 1 −1  1 ]
                               [ −1 0 0 1 ] [ 0 0  2 −1 ]
                               [  1 1 1 0 ] [ 0 0  0  2 ]



Special types of matrices

Definition
A matrix A ∈ Rⁿˣⁿ is said to be strictly diagonally dominant if

    |aii| > Σ_{j=1, j≠i}^{n} |aij|,  for each i = 1, . . . , n.

Lemma
If A ∈ Rⁿˣⁿ is strictly diagonally dominant, then A is nonsingular.

Proof: Suppose A is singular. Then there exists x ∈ Rⁿ, x ≠ 0, such that Ax = 0. Let k be the index such that

    |xk| = max_{1≤i≤n} |xi|  =⇒  |xi|/|xk| ≤ 1 for all i.



Since Ax = 0, for this fixed k we have

    Σ_{j=1}^{n} akj xj = 0  ⇒  akk xk = − Σ_{j=1,j≠k}^{n} akj xj  ⇒  |akk||xk| ≤ Σ_{j=1,j≠k}^{n} |akj||xj|,

which implies

    |akk| ≤ Σ_{j=1,j≠k}^{n} |akj| (|xj|/|xk|) ≤ Σ_{j=1,j≠k}^{n} |akj|.

But this contradicts the assumption that A is strictly diagonally dominant. Therefore A must be nonsingular.
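The definition is easy to check programmatically; a small sketch (the function name and the two test matrices are ours):

```python
def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| over j != i, for every row i."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

A = [[7, 2, 0], [3, 5, -1], [0, 5, -6]]   # off-diagonal row sums 2, 4, 5 are all smaller
B = [[6, 4, -3], [4, -2, 0], [3, 0, 1]]   # row 2 fails: |-2| < 4
```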



Theorem
Gaussian elimination without pivoting preserves the diagonal dominance of a matrix.

Proof: Let A ∈ Rⁿˣⁿ be a diagonally dominant matrix and let A^(2) = [a_{ij}^(2)] be the result of applying one step of Gaussian elimination to A^(1) = A without any pivoting strategy.
After one step of Gaussian elimination, a_{i1}^(2) = 0 for i = 2, . . . , n, and the first row is unchanged. Therefore, the property

    |a_{11}^(2)| > Σ_{j=2}^{n} |a_{1j}^(2)|

is preserved, and all we need to show is that

    |a_{ii}^(2)| > Σ_{j=2, j≠i}^{n} |a_{ij}^(2)|,  for i = 2, . . . , n.



Using the Gaussian elimination formula (2), we have

    |a_{ii}^(2)| = |aii − (ai1/a11) a1i|
                 ≥ |aii| − (|ai1|/|a11|) |a1i|
                 = |aii| − |ai1| + |ai1| − (|ai1|/|a11|) |a1i|
                 = |aii| − |ai1| + (|ai1|/|a11|) (|a11| − |a1i|)
                 > Σ_{j=2,j≠i}^{n} |aij| + (|ai1|/|a11|) Σ_{j=2,j≠i}^{n} |a1j|
                 = Σ_{j=2,j≠i}^{n} ( |aij| + (|ai1|/|a11|) |a1j| )
                 ≥ Σ_{j=2,j≠i}^{n} | aij − (ai1/a11) a1j | = Σ_{j=2,j≠i}^{n} |a_{ij}^(2)|.
Thus A(2) is still diagonally dominant. Since the subsequent steps of
Gaussian elimination mimic the first, except for being applied to
submatrices of smaller size, it suffices to conclude that Gaussian
elimination without pivoting preserves the diagonal dominance of a
matrix.
Theorem
Let A be strictly diagonally dominant. Then Gaussian elimination can be
performed on Ax = b to obtain its unique solution without row or column
interchanges.

Definition
A matrix A is positive definite if it is symmetric and xT Ax > 0 ∀ x 6= 0.



Theorem
If A is an n × n positive definite matrix, then
(a) A has an inverse;
(b) aii > 0, ∀ i = 1, . . . , n;
(c) max1≤k,j≤n |akj | ≤ max1≤i≤n |aii |;
(d) (aij)² < aii ajj, ∀ i ≠ j.

Proof:
(a) If x satisfies Ax = 0, then xT Ax = 0. Since A is positive
definite, this implies x = 0. Consequently, Ax = 0 has only
the zero solution, and A is nonsingular.
(b) Since A is positive definite,

aii = eTi Aei > 0,

where ei is the i-th column of the n × n identity matrix.



(c) For k ≠ j, define x = [xi] by

        xi = 0 if i ≠ j and i ≠ k,   xi = 1 if i = j,   xi = −1 if i = k.

    Since x ≠ 0,

        0 < xᵀAx = ajj + akk − ajk − akj.

    But Aᵀ = A, so

        2akj < ajj + akk.    (5)

    Now define z = [zi] by

        zi = 0 if i ≠ j and i ≠ k,   zi = 1 if i = j or i = k.


    Then zᵀAz > 0, so

        −2akj < ajj + akk.    (6)

    Equations (5) and (6) imply that for each k ≠ j,

        |akj| < (akk + ajj)/2 ≤ max_{1≤i≤n} |aii|,

    so

        max_{1≤k,j≤n} |akj| ≤ max_{1≤i≤n} |aii|.

(d) For i ≠ j, define x = [xk] by

        xk = 0 if k ≠ j and k ≠ i,   xk = α if k = i,   xk = 1 if k = j,

    where α represents an arbitrary real number.
    Since x ≠ 0,

        0 < xᵀAx = aii α² + 2aij α + ajj ≡ P(α),  ∀ α ∈ R.

    That is, the quadratic polynomial P(α) has no real roots, which implies that

        4a²_{ij} − 4aii ajj < 0,  i.e.,  a²_{ij} < aii ajj.

Definition (Leading principal minor)
Let A be an n × n matrix. The upper left k × k submatrix

         [ a11 a12 ··· a1k ]
    Ak = [ a21 a22 ··· a2k ]
         [  ⋮   ⋮   ⋱   ⋮  ]
         [ ak1 ak2 ··· akk ]

is called the leading k × k principal submatrix, and the determinant of Ak, det(Ak), is called the leading principal minor.
Theorem
A symmetric matrix A is positive definite if and only if each of its leading
principal submatrices has a positive determinant.

Theorem
The symmetric matrix A is positive definite if and only if Gaussian
elimination without row interchanges can be performed on Ax = b with all
pivot elements positive.

Corollary
The matrix A is positive definite if and only if A can be factored in the
form LDLT , where L is lower triangular with 1’s on its diagonal and D is
a diagonal matrix with positive diagonal entries.
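The LDLᵀ factorization in this corollary can be computed directly. A minimal sketch (the function and the sample matrix are ours); for positive definite A the pivots d(j) all come out positive:

```python
def ldlt(A):
    """Compute A = L D L^T for a symmetric matrix (no pivoting).
    Returns (L, d) with L unit lower triangular and d the diagonal of D."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d

A = [[4.0, 2.0], [2.0, 3.0]]   # symmetric positive definite
L, d = ldlt(A)                 # both diagonal entries of D are positive
```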

Wei-Cheng Wang (NTHU) Direct methods for LS Fall 2010 68 / 81


Theorem
If all leading principal submatrices of A ∈ Rn×n are nonsingular, then A
has an LU -factorization.
Proof: By mathematical induction.
1 For n = 1, A1 = [a11] is nonsingular, so a11 ≠ 0. Let L1 = [1] and U1 = [a11]; then A1 = L1 U1 and the theorem holds.
2 Assume that the leading principal submatrices A1, . . . , Ak are nonsingular and that Ak has an LU-factorization Ak = Lk Uk, where Lk is unit lower triangular and Uk is upper triangular.
3 Show that there exist a unit lower triangular matrix Lk+1 and an upper triangular matrix Uk+1 such that Ak+1 = Lk+1 Uk+1.



Write

    Ak+1 = [ Ak    vk          ]
           [ wkᵀ   a_{k+1,k+1} ]

where

    vk = [a_{1,k+1}, a_{2,k+1}, . . . , a_{k,k+1}]ᵀ   and   wk = [a_{k+1,1}, a_{k+1,2}, . . . , a_{k+1,k}]ᵀ.

Since Ak is nonsingular, both Lk and Uk are nonsingular. Therefore, Lk yk = vk has a unique solution yk ∈ Rᵏ, and zkᵀ Uk = wkᵀ has a unique solution zk ∈ Rᵏ. Let

    Lk+1 = [ Lk   0 ]   and   Uk+1 = [ Uk   yk                     ]
           [ zkᵀ  1 ]                 [ 0    a_{k+1,k+1} − zkᵀ yk   ]



Then Lk+1 is unit lower triangular, Uk+1 is upper triangular, and

    Lk+1 Uk+1 = [ Lk Uk     Lk yk                         ]
                [ zk^T Uk   zk^T yk + ak+1,k+1 − zk^T yk  ]

              = [ Ak     vk       ]
                [ wk^T   ak+1,k+1 ]  =  Ak+1.

This proves the theorem.
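The inductive construction in this proof is itself an algorithm: grow the factors one border at a time, solving Lk yk = vk by forward substitution and zk^T Uk = wk^T column by column. A runnable sketch; the function name and the test data are assumptions, not from the slides:

```python
def lu_bordered(A):
    """LU factorization built as in the induction proof (a sketch):
    grow L_k, U_k one row/column at a time by solving L_k y_k = v_k
    and z_k^T U_k = w_k^T.  Assumes every leading principal submatrix
    of A is nonsingular (no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    L[0][0] = 1.0
    U[0][0] = A[0][0]
    for k in range(1, n):
        # y solves L_k y = v_k, where v_k is the new column of A.
        y = [0.0] * k
        for i in range(k):
            y[i] = A[i][k] - sum(L[i][j] * y[j] for j in range(i))
        # z solves z^T U_k = w_k^T, where w_k^T is the new row of A.
        z = [0.0] * k
        for j in range(k):
            z[j] = (A[k][j] - sum(z[i] * U[i][j] for i in range(j))) / U[j][j]
        for i in range(k):
            U[i][k] = y[i]        # new column of U
            L[k][i] = z[i]        # new row of L
        L[k][k] = 1.0
        U[k][k] = A[k][k] - sum(z[i] * y[i] for i in range(k))
    return L, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = lu_bordered(A)   # L = [[1,0,0],[2,1,0],[4,3,1]], U = [[2,1,1],[0,1,1],[0,0,2]]
```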



Theorem
If A is nonsingular and the LU factorization exists, then the LU
factorization is unique.

Proof: Suppose both

    A = L1 U1 and A = L2 U2

are LU factorizations. Since A is nonsingular, L1, U1, L2, U2 are all
nonsingular, and

    A = L1 U1 = L2 U2 =⇒ L2⁻¹L1 = U2 U1⁻¹.

Since L1 and L2 are unit lower triangular, L2⁻¹L1 is also unit lower
triangular. On the other hand, since U1 and U2 are upper triangular,
U2 U1⁻¹ is also upper triangular. A matrix that is both unit lower
triangular and upper triangular must be the identity, so

    L2⁻¹L1 = I = U2 U1⁻¹,

which implies that L1 = L2 and U1 = U2.


Lemma
If A ∈ Rn×n is positive definite, then all leading principal submatrices of A
are nonsingular.

Proof: For 1 ≤ k ≤ n, let

    zk = [x1, . . . , xk]^T ∈ R^k and x = [x1, . . . , xk, 0, . . . , 0]^T ∈ R^n,

where x1, . . . , xk ∈ R are not all zero. Since A is positive definite,

    zk^T Ak zk = x^T A x > 0,

where Ak is the k × k leading principal submatrix of A. This shows that
each Ak is also positive definite, and hence nonsingular.



Corollary
The matrix A is positive definite if and only if

    A = GG^T,    (7)

where G is lower triangular with positive diagonal entries.

Proof: “⇒”
A is positive definite
⇒ all leading principal submatrices of A are nonsingular
⇒ A has the LU factorization A = LU, where L is unit lower triangular
and U is upper triangular.
Since A is symmetric,

    LU = A = A^T = U^T L^T =⇒ U(L^T)⁻¹ = L⁻¹U^T.

U(L^T)⁻¹ is upper triangular and L⁻¹U^T is lower triangular,
⇒ U(L^T)⁻¹ must be a diagonal matrix, say U(L^T)⁻¹ = D,
⇒ U = DL^T. Hence

    A = LDL^T.
Wei-Cheng Wang (NTHU) Direct methods for LS Fall 2010 74 / 81
Since A is positive definite,

    x^T A x > 0 =⇒ x^T LDL^T x = (L^T x)^T D (L^T x) > 0.

This means D is also positive definite, and hence dii > 0. Thus D^{1/2} is
well-defined and we have

    A = LDL^T = LD^{1/2} D^{1/2} L^T ≡ GG^T,

where G ≡ LD^{1/2}. Since the LU factorization is unique, G is unique.


“⇐”
Since G is lower triangular with positive diagonal entries, G is nonsingular.
It implies that

    G^T x ≠ 0, ∀ x ≠ 0.

Hence

    x^T A x = x^T GG^T x = ‖G^T x‖₂² > 0, ∀ x ≠ 0,

which implies that A is positive definite.


The factorization (7) is referred to as the Cholesky factorization.
To derive an algorithm for computing it, let

    A ≡ [aij]   and   G = [ g11   0     · · ·   0   ]
                          [ g21   g22    ⋱      ⋮  ]
                          [  ⋮     ⋮     ⋱      0   ]
                          [ gn1   gn2   · · ·  gnn  ]


Assume that the first k − 1 columns of G have been determined after k − 1
steps. By componentwise comparison with

    [aij] = [ g11   0     · · ·   0   ] [ g11   g21   · · ·   gn1 ]
            [ g21   g22    ⋱      ⋮  ] [  0    g22   · · ·   gn2 ]
            [  ⋮     ⋮     ⋱      0   ] [  ⋮     ⋱     ⋱      ⋮  ]
            [ gn1   gn2   · · ·  gnn  ] [  0    · · ·   0    gnn  ],

one has

    akk = Σ_{j=1}^{k} gkj²,
which gives

    gkk² = akk − Σ_{j=1}^{k−1} gkj².

Moreover,

    aik = Σ_{j=1}^{k} gij gkj,   i = k + 1, . . . , n,

hence the k-th column of G can be computed by

    gik = ( aik − Σ_{j=1}^{k−1} gij gkj ) / gkk,   i = k + 1, . . . , n.


Algorithm (Cholesky Factorization)
Given an n × n symmetric positive definite matrix A, this algorithm
computes the Cholesky factorization A = GG^T.

    Initialize G = 0
    For k = 1, . . . , n
        G(k, k) = sqrt( A(k, k) − Σ_{j=1}^{k−1} G(k, j)G(k, j) )
        For i = k + 1, . . . , n
            G(i, k) = ( A(i, k) − Σ_{j=1}^{k−1} G(i, j)G(k, j) ) / G(k, k)
        End For
    End For
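The algorithm above translates line-for-line into code; a minimal sketch, where only the example matrix is an assumption:

```python
import math

def cholesky(A):
    """Cholesky factorization A = G G^T, following the algorithm above.

    A must be symmetric positive definite; G is lower triangular with
    positive diagonal entries.
    """
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for k in range(n):
        G[k][k] = math.sqrt(A[k][k] - sum(G[k][j] ** 2 for j in range(k)))
        for i in range(k + 1, n):
            G[i][k] = (A[i][k] - sum(G[i][j] * G[k][j] for j in range(k))) / G[k][k]
    return G

A = [[4.0, 2.0],
     [2.0, 3.0]]
G = cholesky(A)   # G = [[2, 0], [1, sqrt(2)]]
```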

In addition to n square root operations, the algorithm requires approximately

    Σ_{k=1}^{n} [2k − 2 + (2k − 1)(n − k)] = (1/3)n³ + (1/2)n² − (5/6)n

floating-point operations.
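The closed form of this sum can be checked directly:

```python
# Verify the operation-count identity for a range of n.
for n in range(1, 50):
    lhs = sum(2 * k - 2 + (2 * k - 1) * (n - k) for k in range(1, n + 1))
    rhs = n ** 3 / 3 + n ** 2 / 2 - 5 * n / 6
    assert abs(lhs - rhs) < 1e-6
```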


Band matrix
Definition
An n × n matrix A is called a band matrix if there exist p and q with
1 < p, q < n such that

    aij = 0 whenever p ≤ j − i or q ≤ i − j.

The bandwidth of a band matrix is defined as w = p + q − 1. That is,

    A = [ a11   · · ·  a1p      0       · · ·     0     ]
        [  ⋮     ⋱      ⋱       ⋱                ⋮     ]
        [ aq1           ⋱       ⋱        ⋱       0     ]
        [  0     ⋱               ⋱       ⋱   an−p+1,n  ]
        [  ⋮     ⋱      ⋱       ⋱       ⋱       ⋮     ]
        [  0    · · ·   0    an,n−q+1  · · ·    ann    ]
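A small sketch of the definition (the helper `is_band` and the example matrix are illustrative assumptions): a tridiagonal matrix, introduced next, is the band matrix with p = q = 2, so w = 3.

```python
def is_band(A, p, q):
    """Check the definition: aij = 0 whenever p <= j - i or q <= i - j."""
    n = len(A)
    return all(A[i][j] == 0
               for i in range(n) for j in range(n)
               if j - i >= p or i - j >= q)

# A tridiagonal matrix satisfies the band condition with p = q = 2:
T = [[2, -1, 0, 0],
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [0, 0, -1, 2]]
print(is_band(T, 2, 2))   # True
```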



Definition
A square matrix A = [aij] is said to be tridiagonal if

    A = [ a11    a12                    0    ]
        [ a21    a22     ⋱                  ]
        [         ⋱      ⋱      an−1,n      ]
        [  0          an,n−1     ann        ]

If Gaussian elimination can be applied safely without pivoting, then the L
and U factors have the form

    L = [ 1                    ]        U = [ u11   u12               ]
        [ ℓ21    1             ]            [       u22    ⋱          ]
        [        ⋱     ⋱       ]   and      [             ⋱   un−1,n  ]
        [           ℓn,n−1   1 ]            [                  unn    ],

and the entries are computed by the simple algorithm below, which costs
only about 3n flops.
Algorithm (Tridiagonal LU Factorization)
This algorithm computes the LU factorization of a tridiagonal matrix
without using a pivoting strategy.

    U(1, 1) = A(1, 1)
    For i = 2, . . . , n
        U(i − 1, i) = A(i − 1, i)
        L(i, i − 1) = A(i, i − 1)/U(i − 1, i − 1)
        U(i, i) = A(i, i) − L(i, i − 1) U(i − 1, i)
    End For
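In practice only the three diagonals need to be stored. A runnable sketch of the algorithm plus the forward/backward substitutions that solve LUx = b; the function names and the example system are assumptions, not from the slides:

```python
def tridiag_lu(a_sub, a_diag, a_sup):
    """LU factorization of a tridiagonal matrix without pivoting, following
    the algorithm above.  The matrix is given by its three diagonals:
    a_sub (length n-1), a_diag (length n), a_sup (length n-1).
    Returns (l, u, u_sup): L is unit lower bidiagonal with subdiagonal l,
    U is upper bidiagonal with diagonal u and superdiagonal u_sup."""
    n = len(a_diag)
    l = [0.0] * (n - 1)
    u = [0.0] * n
    u[0] = a_diag[0]
    for i in range(1, n):
        l[i - 1] = a_sub[i - 1] / u[i - 1]
        u[i] = a_diag[i] - l[i - 1] * a_sup[i - 1]
    return l, u, list(a_sup)   # superdiagonal of U equals that of A

def tridiag_solve(l, u, u_sup, b):
    """Solve LUx = b: forward substitution (Ly = b), then back substitution."""
    n = len(u)
    y = [0.0] * n
    y[0] = b[0]
    for i in range(1, n):
        y[i] = b[i] - l[i - 1] * y[i - 1]
    x = [0.0] * n
    x[n - 1] = y[n - 1] / u[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - u_sup[i] * x[i + 1]) / u[i]
    return x

# Assumed example: diagonal 2, off-diagonals -1; with b = (1, 0, 1) the
# exact solution of A x = b is x = (1, 1, 1).
l, u, usup = tridiag_lu([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0])
x = tridiag_solve(l, u, usup, [1.0, 0.0, 1.0])   # x is approximately [1, 1, 1]
```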

Tridiagonal linear systems arise in many applications, such as finite
difference discretizations of second-order linear boundary-value problems
and cubic spline approximations.
