
NUMERICAL ANALYSIS

R. Owusu
February 21, 2023

1 LU Decomposition
In solving the system Ax = b, the matrix A can be decomposed into L and U, where

L = lower triangular matrix,
U = upper triangular matrix,

which leads to the LU decomposition method.

1.1 Solving Systems with LU


Given Ax = b, if A can be decomposed into LU, then

\[ LU x = b. \]

Pre-multiplying by $L^{-1}$,

\[ L^{-1} L U x = L^{-1} b, \]

which simplifies to

\[ U x = L^{-1} b, \]

since $L^{-1}L = I$ and $IU = U$.

Now let $L^{-1} b = z$; then $Lz = b$, which we solve for z by forward substitution. Then $Ux = z$ is solved for the solution vector x by backward substitution.
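The two triangular solves can be written in a few lines. Below is a minimal sketch (not part of the original notes), assuming NumPy is available and that the diagonal entries of L and U are nonzero.

```python
import numpy as np

def forward_substitution(L, b):
    # Solve L z = b row by row from the top (forward substitution).
    n = len(b)
    z = np.zeros(n)
    for i in range(n):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def backward_substitution(U, z):
    # Solve U x = z row by row from the bottom (backward substitution).
    n = len(z)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```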

1.2 LU Decomposition Algorithm
\[
L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix}
\quad\text{and}\quad
U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & u_{n-1,n} \\ 0 & \cdots & 0 & u_{nn} \end{pmatrix},
\]

written compactly as $L = (l_{ij})$ with $l_{ij} = 0$ for $i < j$, and $U = (u_{ij})$ with $u_{ij} = 0$ for $i > j$.

Suppose A = LU such that L is a unit lower triangular matrix, thus

\[
L = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ l_{n1} & l_{n2} & \cdots & 1 \end{pmatrix};
\]

the resulting LU decomposition method is called Doolittle's method.

Suppose A = LU such that U is a unit upper triangular matrix, thus

\[
U = \begin{pmatrix} 1 & u_{12} & \cdots & u_{1n} \\ 0 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & u_{n-1,n} \\ 0 & \cdots & 0 & 1 \end{pmatrix};
\]

the resulting LU decomposition method is called Crout's method.

If A can be decomposed as $A = LL^{T}$, where L is a lower triangular matrix, thus

\[
L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix},
\]

the resulting method is called Cholesky's method.

1.3 Computing the Elements of L and U Directly
\[
A = LU = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & u_{n-1,n} \\ 0 & \cdots & 0 & u_{nn} \end{pmatrix}
\]

implies

\[
a_{11} = l_{11}u_{11}, \quad a_{12} = l_{11}u_{12}, \quad \ldots, \quad a_{1n} = l_{11}u_{1n},
\]
\[
a_{21} = l_{21}u_{11}, \quad a_{22} = l_{21}u_{12} + l_{22}u_{22}, \quad a_{23} = l_{21}u_{13} + l_{22}u_{23}, \quad \ldots
\]

From the matrix multiplication rule, we can write

\[
a_{ij} = \sum_{k=1}^{\min(i,j)} l_{ik} u_{kj}, \qquad i, j = 1, \ldots, n.
\]

To compute the factorization A = LU directly, follow the steps below.

• STEP 1:
For k = 1 to n compute
\[
l_{kk} u_{kk} = a_{kk} - \sum_{m=1}^{k-1} l_{km} u_{mk}.
\]

• STEP 2:
Compute the $k$-th column of L:
\[
l_{ik} = \frac{1}{u_{kk}} \left( a_{ik} - \sum_{m=1}^{k-1} l_{im} u_{mk} \right) \quad \text{for } k < i \le n.
\]

• STEP 3:
Compute the $k$-th row of U:
\[
u_{kj} = \frac{1}{l_{kk}} \left( a_{kj} - \sum_{m=1}^{k-1} l_{km} u_{mj} \right) \quad \text{for } k < j \le n.
\]

Note: this process is called Doolittle's method when $l_{ii} = 1$ $(1 \le i \le n)$ and Crout's method when $u_{jj} = 1$ $(1 \le j \le n)$, respectively.
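As an illustration of Steps 1-3, the following sketch (mine, assuming NumPy and that no pivoting is required, i.e. every $u_{kk} \ne 0$) implements the Doolittle variant, where $l_{kk} = 1$ so Step 1 delivers $u_{kk}$ directly.

```python
import numpy as np

def lu_doolittle(A):
    # Doolittle factorization: L is unit lower triangular, U is upper triangular.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)            # l_kk = 1 (Doolittle choice)
    U = np.zeros((n, n))
    for k in range(n):
        # Steps 1 and 3: k-th row of U (j = k gives u_kk since l_kk = 1).
        for j in range(k, n):
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        # Step 2: k-th column of L below the diagonal.
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    return L, U
```

The Crout variant is obtained analogously by fixing $u_{kk} = 1$ and computing $l_{kk}$ from Step 1 instead.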

1.3.1 Computing Elements of L and U by Naive Gaussian Elimination
The forward elimination phase of naive Gaussian elimination can also be used to achieve this, as

\[
A = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ l_{n1} & l_{n2} & \cdots & 1 \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & u_{n-1,n} \\ 0 & \cdots & 0 & u_{nn} \end{pmatrix} = LU.
\]

This can be done by solving Ax = b in pure matrix notation.

• In this notation, U is obtained by constructing a specific sequence of elimination matrices (and, when pivoting is used, permutation matrices).

• The elimination process is achieved by pre-multiplying A by the sequence of matrices $M_k$ (which carry the multipliers), so that

\[ M_{n-1} M_{n-2} \cdots M_1 A = U, \]

where, for example,

\[
M_1 = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ m_{21} & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ m_{n1} & 0 & \cdots & 1 \end{pmatrix}, \qquad
M_2 = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & m_{32} & \ddots & 0 \\ 0 & m_{n2} & \cdots & 1 \end{pmatrix}, \qquad
\text{with } m_{ij} = -\frac{a_{ij}^{(j-1)}}{a_{jj}^{(j-1)}}.
\]

• For naive Gaussian elimination we get $MA = U$. In this case $Ax = b$ is the same as $Ux = Mb = b'$, where $M = M_{n-1} M_{n-2} \cdots M_1$. The elementary matrices $M_k$ are the $k$-th Gaussian transformation matrices (see the sketch after this list).

• For partial pivoting we have M as

\[ M = M_{n-1} P_{n-1} M_{n-2} P_{n-2} \cdots M_2 P_2 M_1 P_1. \]
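To make the pure matrix notation concrete, here is a small illustration of my own for the naive (no pivoting) case, assuming NumPy and borrowing the matrix from the worked example in Section 1.5: each Gaussian transformation $M_k$ is built from the multipliers and accumulated so that $MA = U$.

```python
import numpy as np

A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
n = A.shape[0]
M = np.eye(n)                                   # accumulates M = M_{n-1} ... M_1
Ak = A.copy()
for k in range(n - 1):
    Mk = np.eye(n)
    Mk[k + 1:, k] = -Ak[k + 1:, k] / Ak[k, k]   # multipliers below the k-th pivot
    Ak = Mk @ Ak                                # zero out column k below the pivot
    M = Mk @ M
U = Ak                                          # upper triangular; M @ A equals U
```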

1.4 Finding Inverse Using LU
A matrix B is the inverse of A if AB = I = BA

• STEP 1:
Let the first column of B, the inverse of A, be $[b_{11} \; b_{21} \; \cdots \; b_{n1}]^T$. Then, from matrix multiplication,
\[
A \begin{pmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.
\]

• STEP 2:
Do the same for the second column of B:
\[
A \begin{pmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}.
\]

• STEP 3:
Proceed in the same way up to the last column of B, solving for the corresponding column vector at each step (each system reuses the same L and U factors, as sketched below).
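A compact sketch of this column-by-column procedure (assuming NumPy/SciPy, and given triangular factors L and U with nonzero diagonals):

```python
import numpy as np
from scipy.linalg import solve_triangular

def inverse_via_lu(L, U):
    # Each column of B = A^{-1} solves A b = e_j; one LU factorization serves all n columns.
    n = L.shape[0]
    B = np.zeros((n, n))
    I = np.eye(n)
    for j in range(n):
        z = solve_triangular(L, I[:, j], lower=True)    # L z = e_j (forward substitution)
        B[:, j] = solve_triangular(U, z, lower=False)   # U b = z   (backward substitution)
    return B
```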

1.5 Solved Example 1

Find the LU decomposition of the matrix
\[
A = \begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix},
\]
\[
A = LU = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}.
\]

• Forward Elimination of Unknowns

STEP 1
Divide Row 1 by 25, multiply by 64, and subtract the result from Row 2:
\[
\text{Row 2} - \frac{64}{25}(\text{Row 1}): \qquad \begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 144 & 12 & 1 \end{pmatrix}.
\]
Hence the multiplier is $m_{21} = -64/25 = -2.56$.

Divide Row 1 by 25, multiply by 144, and subtract the result from Row 3:
\[
\text{Row 3} - \frac{144}{25}(\text{Row 1}): \qquad \begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{pmatrix}.
\]
Hence the multiplier is $m_{31} = -144/25 = -5.76$, and the first Gaussian transformation matrix is given by
\[
M_1 = \begin{pmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -2.56 & 1 & 0 \\ -5.76 & 0 & 1 \end{pmatrix},
\]
\[
A^{(1)} = M_1 A = \begin{pmatrix} 1 & 0 & 0 \\ -2.56 & 1 & 0 \\ -5.76 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix}
= \begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{pmatrix}.
\]

• STEP 2
Now divide Row 2 by -4.8, multiply by -16.8, and subtract the result from Row 3:
\[
\text{Row 3} - \frac{-16.8}{-4.8}(\text{Row 2}): \qquad \begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix},
\]
which produces U.

Hence the multiplier is $m_{32} = -(-16.8/-4.8) = -3.5$, and the second Gaussian transformation matrix is given by
\[
M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & m_{32} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3.5 & 1 \end{pmatrix},
\]
\[
A^{(2)} = M_2 A^{(1)} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3.5 & 1 \end{pmatrix}
\begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{pmatrix}
= \begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix} = U.
\]
\[
l_{21} = -m_{21} = \frac{64}{25} = 2.56, \qquad l_{31} = -m_{31} = \frac{144}{25} = 5.76, \qquad l_{32} = -m_{32} = \frac{-16.8}{-4.8} = 3.5,
\]
\[
L = \begin{pmatrix} 1 & 0 & 0 \\ -m_{21} & 1 & 0 \\ -m_{31} & -m_{32} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{pmatrix}.
\]
Confirm LU = A:
\[
LU = \begin{pmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{pmatrix}
\begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix}
= \begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix} = A.
\]
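As a quick numerical check (assuming NumPy is available), the factors found above reproduce A:

```python
import numpy as np

L = np.array([[1.0, 0.0, 0.0], [2.56, 1.0, 0.0], [5.76, 3.5, 1.0]])
U = np.array([[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]])
A = np.array([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]])
print(np.allclose(L @ U, A))   # True
```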

1.6 Worked Example 2

Now we use the LU decomposition to solve the system
\[
\begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 106.8 \\ 177.2 \\ 279.2 \end{pmatrix}.
\]

From A = LU, solve Lz = b first, then Ux = z for the solution vector x.
\[
A = LU = \begin{pmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{pmatrix}
\begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix}.
\]
First solve Lz = b:
\[
\begin{pmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 106.8 \\ 177.2 \\ 279.2 \end{pmatrix},
\]
which gives
\[
z_1 = 106.8, \qquad 2.56 z_1 + z_2 = 177.2, \qquad 5.76 z_1 + 3.5 z_2 + z_3 = 279.2.
\]
Forward substitution starting from the first equation gives
\[
z_1 = 106.8,
\]
\[
z_2 = 177.2 - 2.56(106.8) = -96.208,
\]
\[
z_3 = 279.2 - 5.76(106.8) - 3.5(-96.208) = 0.76.
\]
Hence
\[
z = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 106.8 \\ -96.208 \\ 0.76 \end{pmatrix}.
\]
Now solve Ux = z:
\[
\begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 106.8 \\ -96.208 \\ 0.76 \end{pmatrix},
\]
that is,
\[
25 x_1 + 5 x_2 + x_3 = 106.8, \qquad -4.8 x_2 - 1.56 x_3 = -96.208, \qquad 0.7 x_3 = 0.76.
\]
From the third equation,
\[
0.7 x_3 = 0.76 \implies x_3 = \frac{0.76}{0.7} = 1.086.
\]
Substitute the value of $x_3$ into the second equation:
\[
-4.8 x_2 - 1.56 x_3 = -96.208 \implies x_2 = \frac{-96.208 + 1.56(1.086)}{-4.8} = 19.69.
\]
Substitute the values of $x_2$ and $x_3$ into the first equation:
\[
25 x_1 + 5 x_2 + x_3 = 106.8 \implies x_1 = \frac{106.8 - 5(19.69) - 1.086}{25} = 0.2905.
\]
The solution vector is
\[
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0.2905 \\ 19.69 \\ 1.086 \end{pmatrix}.
\]
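As a cross-check (assuming NumPy), solving the original system directly agrees with these values up to round-off:

```python
import numpy as np

A = np.array([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]])
b = np.array([106.8, 177.2, 279.2])
print(np.linalg.solve(A, b))   # approximately [0.2905, 19.69, 1.086]
```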

1.7 LU Decomposition with Partial Pivoting

Given
\[
Ax = b,
\]
we can write Gaussian elimination with partial pivoting (shown here for a 3 × 3 system) as
\[
M_2 P_2 M_1 P_1 A x = M_2 P_2 M_1 P_1 b,
\]
where $M_k$ is the $k$-th Gaussian transformation matrix (carrying the multipliers) and $P_k$ is the pivoting (permutation) matrix.

1.8 Worked Example

Solve Ax = b with
\[
A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 1 & 1 \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} 2 \\ 6 \\ 3 \end{pmatrix}
\]
using partial pivoting and complete pivoting.

1.8.1 (a) Partial Pivoting

We compute U as follows.

• Step 1:
\[
P_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
P_1 A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad
M_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix},
\]
\[
A^{(1)} = M_1 P_1 A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & -1 & -2 \end{pmatrix}.
\]

• Step 2:
\[
P_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
P_2 A^{(1)} = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & -1 & -2 \end{pmatrix}, \quad
M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix},
\]
\[
U = A^{(2)} = M_2 P_2 A^{(1)} = M_2 P_2 M_1 P_1 A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & -1 \end{pmatrix}.
\]

Note: Defining $P = P_2 P_1$ and $L = P (M_2 P_2 M_1 P_1)^{-1}$, we have $PA = LU$.

NEXT: We compute $b'$ as follows.

• Step 1:
\[
P_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
P_1 b = \begin{pmatrix} 6 \\ 2 \\ 3 \end{pmatrix}, \quad
M_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \quad
M_1 P_1 b = \begin{pmatrix} 6 \\ 2 \\ -3 \end{pmatrix}.
\]

• Step 2:
\[
P_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
P_2 M_1 P_1 b = \begin{pmatrix} 6 \\ 2 \\ -3 \end{pmatrix}, \quad
M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}, \quad
b' = M_2 P_2 M_1 P_1 b = \begin{pmatrix} 6 \\ 2 \\ -1 \end{pmatrix}.
\]

The solution of the system is then obtained from
\[
U x = b' \implies \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 6 \\ 2 \\ -1 \end{pmatrix},
\]
and back substitution gives $x_1 = x_2 = x_3 = 1$.
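The same elimination with partial pivoting can be sketched in general form (my own illustration, assuming NumPy); applied to the example above it reproduces U and $b'$.

```python
import numpy as np

def gauss_partial_pivot(A, b):
    # Forward elimination with partial pivoting: returns U and b' with U x = b'.
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # P_k: bring the largest pivot to row k
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        m = A[k + 1:, k] / A[k, k]            # multipliers (M_k stores their negatives)
        A[k + 1:] -= np.outer(m, A[k])        # eliminate below the pivot
        b[k + 1:] -= m * b[k]
    return A, b

U, bp = gauss_partial_pivot([[0, 1, 1], [1, 2, 3], [1, 1, 1]], [2, 6, 3])
# U = [[1,2,3],[0,1,1],[0,0,-1]], bp = [6, 2, -1]; back substitution gives x = (1, 1, 1).
```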

1.9 Worked Example

Use LU to find the inverse of
\[
A = \begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix}.
\]

Solution:
We have that
\[
A = LU = \begin{pmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{pmatrix}
\begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix}.
\]

Solve for the first column of $B = A^{-1}$ by solving
\[
\begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix} \begin{pmatrix} b_{11} \\ b_{21} \\ b_{31} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.
\]
First solve Lz = c, that is,
\[
\begin{pmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\]
which gives

\[
z_1 = 1, \qquad 2.56 z_1 + z_2 = 0, \qquad 5.76 z_1 + 3.5 z_2 + z_3 = 0.
\]
Forward substitution starting from the first equation gives
\[
z_1 = 1, \qquad z_2 = 0 - 2.56(1) = -2.56, \qquad z_3 = 0 - 5.76(1) - 3.5(-2.56) = 3.2.
\]
Hence
\[
z = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 1 \\ -2.56 \\ 3.2 \end{pmatrix}.
\]
Now solve Ux = z, that is,
\[
\begin{pmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{pmatrix} \begin{pmatrix} b_{11} \\ b_{21} \\ b_{31} \end{pmatrix} = \begin{pmatrix} 1 \\ -2.56 \\ 3.2 \end{pmatrix},
\]
so that
\[
25 b_{11} + 5 b_{21} + b_{31} = 1, \qquad -4.8 b_{21} - 1.56 b_{31} = -2.56, \qquad 0.7 b_{31} = 3.2.
\]
Back substitution starting from the third equation gives
\[
b_{31} = \frac{3.2}{0.7} = 4.571,
\]
\[
b_{21} = \frac{-2.56 + 1.56(4.571)}{-4.8} = -0.9524,
\]
\[
b_{11} = \frac{1 - 5(-0.9524) - 4.571}{25} = 0.04762.
\]
Hence the first column of the inverse of A is
\[
\begin{pmatrix} b_{11} \\ b_{21} \\ b_{31} \end{pmatrix} = \begin{pmatrix} 0.04762 \\ -0.9524 \\ 4.571 \end{pmatrix}.
\]
Similarly, solving
\[
\begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix} \begin{pmatrix} b_{12} \\ b_{22} \\ b_{32} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
\implies \begin{pmatrix} b_{12} \\ b_{22} \\ b_{32} \end{pmatrix} = \begin{pmatrix} -0.08333 \\ 1.417 \\ -5.000 \end{pmatrix},
\]
and solving
\[
\begin{pmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{pmatrix} \begin{pmatrix} b_{13} \\ b_{23} \\ b_{33} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
\implies \begin{pmatrix} b_{13} \\ b_{23} \\ b_{33} \end{pmatrix} = \begin{pmatrix} 0.03571 \\ -0.4643 \\ 1.429 \end{pmatrix}.
\]
Hence
\[
A^{-1} = \begin{pmatrix} 0.04762 & -0.08333 & 0.03571 \\ -0.9524 & 1.417 & -0.4643 \\ 4.571 & -5.000 & 1.429 \end{pmatrix}.
\]

2 Special Systems

Cholesky Factorization (Decomposition)

For any symmetric positive definite matrix A there exists a unique factorization
\[
A = H H^{T},
\]
where H is a lower triangular matrix.

Symmetric matrix: $A = A^{T}$.
Positive definite: positive eigenvalues and positive leading principal minors, or $x^{T} A x > 0$ for any non-zero vector x.

From $A = HH^{T}$,
\[
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
= \begin{pmatrix} h_{11} & 0 & \cdots & 0 \\ h_{21} & h_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ h_{n1} & h_{n2} & \cdots & h_{nn} \end{pmatrix}
\begin{pmatrix} h_{11} & h_{21} & \cdots & h_{n1} \\ 0 & h_{22} & \ddots & h_{n2} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & h_{nn} \end{pmatrix},
\]
we have
\[
h_{11} = \sqrt{a_{11}}, \qquad h_{i1} = \frac{a_{i1}}{h_{11}}, \quad i = 2, \ldots, n,
\]
\[
\sum_{k=1}^{i} h_{ik}^{2} = a_{ii}, \qquad a_{ij} = \sum_{k=1}^{j} h_{ik} h_{jk}, \quad j < i,
\]
which leads to the Cholesky algorithm.
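A direct transcription of these relations into code might look like the following sketch (assuming NumPy and a symmetric positive definite input, so every square root is real and every $h_{jj}$ is positive):

```python
import numpy as np

def cholesky(A):
    # Compute H with A = H H^T, H lower triangular (Cholesky factorization).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = H[i, :j] @ H[j, :j]              # sum_{k<j} h_ik h_jk
            if i == j:
                H[i, j] = np.sqrt(A[i, i] - s)   # diagonal: a_ii = sum_k h_ik^2
            else:
                H[i, j] = (A[i, j] - s) / H[j, j]
    return H
```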

2.1 Solving Ax = b using Cholesky Factorization

• STEP 1:
Find the Cholesky factorization $A = HH^{T}$.

• STEP 2:
Solve the lower triangular system $Hy = b$ for y.

• STEP 3:
Solve the upper triangular system $H^{T}x = y$ for x.
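A short usage sketch of these three steps (using the cholesky function sketched above, SciPy's triangular solver, and a small symmetric positive definite example of my own):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4.0, 2.0], [2.0, 3.0]])        # a small SPD matrix (illustrative only)
b = np.array([6.0, 5.0])
H = cholesky(A)                                # Step 1: A = H H^T (np.linalg.cholesky works too)
y = solve_triangular(H, b, lower=True)         # Step 2: solve H y = b
x = solve_triangular(H.T, y, lower=False)      # Step 3: solve H^T x = y
print(x)                                       # approximately [1.0, 1.0]
```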

2.2 Special Systems: Tridiagonal Systems

A tridiagonal system has a coefficient matrix of the form
\[
T = \begin{pmatrix} a_1 & b_1 & & \\ c_2 & a_2 & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & c_n & a_n \end{pmatrix}
\]
and can be decomposed into L and U as
\[
L = \begin{pmatrix} 1 & & & \\ l_2 & 1 & & \\ & \ddots & \ddots & \\ & & l_n & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} u_1 & b_1 & & \\ & u_2 & \ddots & \\ & & \ddots & b_{n-1} \\ & & & u_n \end{pmatrix}.
\]
Equating the elements of the matrices on both sides, we have
\[
a_1 = u_1,
\]
\[
c_i = l_i u_{i-1}, \quad i = 2, \ldots, n,
\]
\[
a_i = u_i + l_i b_{i-1}, \quad i = 2, \ldots, n,
\]
from which we compute $l_i$ and $u_i$.

2.3 Computing the LU Factorization of a Tridiagonal Matrix

Let
\[
T = \begin{pmatrix} a_1 & b_1 & & \\ c_2 & a_2 & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & c_n & a_n \end{pmatrix}.
\]
Set $u_1 = a_1$.

For $i = 2, \ldots, n$ do
\[
l_i = \frac{c_i}{u_{i-1}}, \qquad u_i = a_i - l_i b_{i-1}.
\]

2.4 Solving a Tridiagonal System

The solution of the system Tx = b is found by solving the two special bidiagonal systems

\[
Ly = b \quad\text{and}\quad Ux = y.
\]
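Putting the factorization of Section 2.3 and the two bidiagonal solves together gives the following sketch (assuming NumPy; the right-hand side is called d to avoid clashing with the superdiagonal b, and c[0] is unused):

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    # a: main diagonal, b: superdiagonal, c: subdiagonal (c[0] unused), d: right-hand side.
    n = len(a)
    u = np.zeros(n); l = np.zeros(n); y = np.zeros(n); x = np.zeros(n)
    u[0] = a[0]
    for i in range(1, n):                  # factorization: l_i = c_i/u_{i-1}, u_i = a_i - l_i b_{i-1}
        l[i] = c[i] / u[i - 1]
        u[i] = a[i] - l[i] * b[i - 1]
    y[0] = d[0]
    for i in range(1, n):                  # forward solve L y = d
        y[i] = d[i] - l[i] * y[i - 1]
    x[n - 1] = y[n - 1] / u[n - 1]
    for i in range(n - 2, -1, -1):         # backward solve U x = y
        x[i] = (y[i] - b[i] * x[i + 1]) / u[i]
    return x
```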

2.5 Worked Example

Triangularize
\[
A = \begin{pmatrix} 0.9 & 0.1 & 0 \\ 0.8 & 0.5 & 0.1 \\ 0 & 0.1 & 0.5 \end{pmatrix}
\]
using (a) the formula A = LU and (b) Gaussian elimination.

Solution
(a) From A = LU:
\[
u_1 = 0.9.
\]
i = 2:
\[
l_2 = \frac{c_2}{u_1} = \frac{0.8}{0.9} = 0.8889, \qquad u_2 = a_2 - l_2 b_1 = 0.5 - 0.8889(0.1) = 0.4111.
\]
i = 3:
\[
l_3 = \frac{c_3}{u_2} = \frac{0.1}{0.4111} = 0.2432, \qquad u_3 = a_3 - l_3 b_2 = 0.5 - 0.2432(0.1) = 0.4757.
\]
\[
L = \begin{pmatrix} 1 & 0 & 0 \\ 0.8889 & 1 & 0 \\ 0 & 0.2432 & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} 0.9 & 0.1 & 0 \\ 0 & 0.4111 & 0.1 \\ 0 & 0 & 0.4757 \end{pmatrix}.
\]

(b) Using Gaussian elimination with partial pivoting (no row interchanges are needed here, since each pivot is already the largest entry in its column), we have:

• Step 1: Multiplier
\[
m_{21} = -\frac{0.8}{0.9} = -0.8889,
\]
\[
A^{(1)} = \begin{pmatrix} 0.9 & 0.1 & 0 \\ 0 & 0.4111 & 0.1 \\ 0 & 0.1 & 0.5 \end{pmatrix}.
\]

• Step 2: Multiplier
\[
m_{32} = -\frac{0.1}{0.4111} = -0.2432,
\]
and finally we have
\[
A^{(2)} = \begin{pmatrix} 0.9 & 0.1 & 0 \\ 0 & 0.4111 & 0.1 \\ 0 & 0 & 0.4757 \end{pmatrix} = U,
\]
\[
L = \begin{pmatrix} 1 & 0 & 0 \\ -m_{21} & 1 & 0 \\ 0 & -m_{32} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0.8889 & 1 & 0 \\ 0 & 0.2432 & 1 \end{pmatrix}.
\]
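As a quick check (assuming NumPy), the recurrence of Section 2.3 reproduces the factors found in part (a):

```python
import numpy as np

a = np.array([0.9, 0.5, 0.5])      # main diagonal of A
b = np.array([0.1, 0.1])           # superdiagonal
c = np.array([0.0, 0.8, 0.1])      # subdiagonal (c[0] unused)
u = np.zeros(3); l = np.zeros(3)
u[0] = a[0]
for i in range(1, 3):
    l[i] = c[i] / u[i - 1]
    u[i] = a[i] - l[i] * b[i - 1]
print(l[1:], u)   # approximately [0.8889 0.2432] and [0.9 0.4111 0.4757]
```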

