
MTH2212 – Computational Methods and Statistics

Solution of Linear Systems of Equations

Lecture 2:
Factorization Methods
Objectives

• Introduction
• LU Decomposition
• Computational complexity
• The Matrix Inverse
• Extending the Gaussian Elimination Process

Introduction

• Provides an efficient way to compute the matrix inverse by separating
  the time-consuming elimination of the matrix [A] from the
  manipulations of the right-hand side {B}.
• Gauss elimination, in which the forward elimination comprises the bulk
  of the computational effort, can be implemented as an LU decomposition.

LU Decomposition

• The matrix [A] for the linear system [A]{X}={B} is factorized into the
  product of two matrices [L] and [U], where [L] is a lower triangular
  matrix and [U] is an upper triangular matrix:

      [L][U] = [A]
      [L][U]{X} = {B}

• Similar to the first phase of Gauss elimination, consider

      [U]{X} = {D}
      [L]{D} = {B}

• The solution can be obtained in two steps (sketched below):
  1. First solve [L]{D}={B} to generate an intermediate vector {D} by
     forward substitution.
  2. Then solve [U]{X}={D} to get {X} by back substitution.
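The two triangular solves are straightforward to program. Below is a minimal Python/NumPy sketch (not from the lecture; the function name lu_solve is illustrative), assuming [L] has a unit diagonal and [U] has nonzero diagonal entries:

    import numpy as np

    def lu_solve(L, U, b):
        """Solve [L][U]{x} = {b}: forward substitution, then back substitution."""
        n = len(b)
        # 1. Forward substitution: [L]{d} = {b}, unit diagonal assumed in L
        d = np.zeros(n)
        for i in range(n):
            d[i] = b[i] - L[i, :i] @ d[:i]
        # 2. Back substitution: [U]{x} = {d}
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x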

LU Decomposition

• In matrix form, this is written as

      [ a11  a12  a13 ] [ x1 ]   [ b1 ]
      [ a21  a22  a23 ] [ x2 ] = [ b2 ]
      [ a31  a32  a33 ] [ x3 ]   [ b3 ]

• How to obtain the triangular factorization? Start from

            [ 1  0  0 ] [ a11  a12  a13 ]
      [A] = [ 0  1  0 ] [ a21  a22  a23 ]
            [ 0  0  1 ] [ a31  a32  a33 ]

• Use Gauss elimination and store the multipliers mij as the
  subdiagonal entries in [L].

LU Decomposition

• The multipliers are

      m21 = a21/a11      m31 = a31/a11      m32 = a'32/a'22

• The triangular factorization of matrix [A] is [A] = [L][U]:

            [ 1    0    0 ] [ a11  a12   a13  ]
      [A] = [ m21  1    0 ] [ 0    a'22  a'23 ]
            [ m31  m32  1 ] [ 0    0     a"33 ]
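As an illustration of this procedure, here is a minimal Python/NumPy sketch (the function name lu_decompose is illustrative, not part of the lecture) that performs the elimination and stores the multipliers below the diagonal of [L]; it assumes no pivoting is needed, i.e. the pivots a11, a'22, ... are all nonzero:

    import numpy as np

    def lu_decompose(A):
        """Factor A into L (unit lower triangular) and U (upper triangular)."""
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(n)
        for k in range(n - 1):                # eliminate below pivot U[k, k]
            for i in range(k + 1, n):
                m = U[i, k] / U[k, k]         # multiplier m_ik
                L[i, k] = m                   # store it as a subdiagonal entry of L
                U[i, k:] -= m * U[k, k:]      # row i <- row i - m * row k
        return L, U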

Example 1

• Use LU decomposition to solve:

3x1 – 0.1x2 – 0.2x3 = 7.85

0.1x1 + 7x2 – 0.3x3 = -19.3

0.3x1 – 0.2x2 + 10x3 = 71.4


• Use 6 significant figures in your computation.

Example 1 - Solution

• In matrix form

      [ 3     -0.1   -0.2 ] [ x1 ]   [  7.85 ]
      [ 0.1    7     -0.3 ] [ x2 ] = [ -19.3 ]
      [ 0.3   -0.2   10   ] [ x3 ]   [  71.4 ]

• The multipliers are

      m21 = 0.1/3 = 0.0333333        m31 = 0.3/3 = 0.100000
      m32 = -0.19/7.00333 = -0.0271300

Example 1 - Solution

• The LU decomposition is

            [ 1          0           0 ] [ 3   -0.1      -0.2      ]
      [A] = [ 0.0333333  1           0 ] [ 0    7.00333  -0.293333 ]
            [ 0.100000  -0.0271300   1 ] [ 0    0        10.0120   ]

• The solution is obtained as follows:

  1. First solve [L]{D}={B} for {D} by forward substitution:

      [ 1          0           0 ] [ d1 ]   [  7.85 ]
      [ 0.0333333  1           0 ] [ d2 ] = [ -19.3 ]
      [ 0.100000  -0.0271300   1 ] [ d3 ]   [  71.4 ]

Example 1 - Solution

      d1 = 7.85
      d2 = -19.3 - 0.0333333(7.85) = -19.5617
      d3 = 71.4 - 0.1(7.85) + 0.0271300(-19.5617) = 70.0843

  2. Then solve [U]{X}={D} for {X} by back substitution:

      [ 3   -0.1      -0.2      ] [ x1 ]   [   7.85   ]
      [ 0    7.00333  -0.293333 ] [ x2 ] = [ -19.5617 ]
      [ 0    0        10.0120   ] [ x3 ]   [  70.0843 ]

      x3 = 70.0843/10.0120 = 7.00003
      x2 = (-19.5617 + 0.293333(7.00003))/7.00333 = -2.5
      x1 = (7.85 + 0.1(-2.5) + 0.2(7.00003))/3 = 3
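As a quick cross-check of this hand calculation, one could run the hypothetical lu_decompose/lu_solve sketches from the earlier slides (or NumPy's own solver) on the same data:

    import numpy as np

    A = np.array([[3.0, -0.1, -0.2],
                  [0.1,  7.0, -0.3],
                  [0.3, -0.2, 10.0]])
    b = np.array([7.85, -19.3, 71.4])

    L, U = lu_decompose(A)        # sketch from the earlier slide
    x = lu_solve(L, U, b)         # forward then back substitution
    print(x)                      # approximately [ 3.  -2.5   7. ]
    print(np.linalg.solve(A, b))  # NumPy reference solution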

Computational Complexity

• The triangular factorization portion [A]=[L][U] requires
   - (N³ - N)/3 multiplications and divisions
   - (2N³ - 3N² + N)/6 subtractions
• Finding the solution to [L][U]{X}={B} requires
   - N² multiplications and divisions
   - N² - N subtractions
• The bulk of the calculation lies in the triangularization portion.
• LU decomposition is usually chosen over Gauss elimination when the
  linear system is to be solved many times with the same [A] but with
  different {B}.
• It saves computing time by separating the time-consuming elimination
  step from the manipulations of the right-hand side.
• It provides an efficient means to compute the matrix inverse, which in
  turn provides a means to test whether a system is ill-conditioned.
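For a sense of scale, a small illustrative sketch that evaluates these counts: for N = 100 the factorization takes about 333,300 multiplications/divisions, while each additional right-hand side costs only N² = 10,000.

    def lu_costs(N):
        """Operation counts quoted on this slide."""
        factor_muldiv = (N**3 - N) // 3          # factorization: mult/div
        factor_sub = (2*N**3 - 3*N**2 + N) // 6  # factorization: subtractions
        solve_muldiv = N**2                      # per right-hand side: mult/div
        solve_sub = N**2 - N                     # per right-hand side: subtractions
        return factor_muldiv, factor_sub, solve_muldiv, solve_sub

    print(lu_costs(100))   # (333300, 328350, 10000, 9900)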

The Matrix Inverse

• Find the matrix [A]-1, the inverse of [A], for which

      [A][A]-1 = [A]-1[A] = [I]

• The inverse can be computed in a column-by-column fashion by
  generating solutions with unit vectors as the right-hand-side
  constants {B} (sketched below).
• The solution of [L][U]{X}={B} with {B} = {1, 0, 0}T will be the first
  column of [A]-1.
• The solution of [L][U]{X}={B} with {B} = {0, 1, 0}T will be the second
  column of [A]-1.
• The solution of [L][U]{X}={B} with {B} = {0, 0, 1}T will be the third
  column of [A]-1.
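A minimal sketch of this column-by-column construction, reusing the hypothetical lu_decompose and lu_solve helpers sketched earlier (one factorization, then N triangular solves):

    import numpy as np

    def lu_inverse(A):
        """Build [A]^-1 one column at a time from a single LU factorization."""
        n = A.shape[0]
        L, U = lu_decompose(A)               # factor once
        A_inv = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = 1.0                       # unit vector for column j
            A_inv[:, j] = lu_solve(L, U, e)  # j-th column of the inverse
        return A_inv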

Example 2

• Use LU decomposition to determine the matrix inverse for the
  following system, and use it to find the solution:

3x1 – 0.1x2 – 0.2x3 = 7.85

0.1x1 + 7x2 – 0.3x3 = -19.3

0.3x1 – 0.2x2 + 10x3 = 71.4


• Use 6 significant figures in your computation.

Example 2- Solution

• In matrix form

            [ 3     -0.1   -0.2 ]
      [A] = [ 0.1    7     -0.3 ]
            [ 0.3   -0.2   10   ]

• The triangular factorization of [A] is

            [ 1          0           0 ]         [ 3   -0.1      -0.2      ]
      [L] = [ 0.0333333  1           0 ]   [U] = [ 0    7.00333  -0.293333 ]
            [ 0.100000  -0.0271300   1 ]         [ 0    0        10.0120   ]

Example 2- Solution

• The first column of [A]-1

      [ 1          0           0 ] [ d1 ]   [ 1 ]            [  1       ]
      [ 0.0333333  1           0 ] [ d2 ] = [ 0 ]  ⇒  {D} =  [ -0.03333 ]
      [ 0.100000  -0.0271300   1 ] [ d3 ]   [ 0 ]            [ -0.1009  ]

      [ 3   -0.1      -0.2      ] [ x1 ]   [  1       ]            [  0.33249 ]
      [ 0    7.00333  -0.293333 ] [ x2 ] = [ -0.03333 ]  ⇒  {X} =  [ -0.00518 ]
      [ 0    0        10.0120   ] [ x3 ]   [ -0.1009  ]            [ -0.01008 ]

Example 2- Solution

• The second column of [A]-1

      [ 1          0           0 ] [ d1 ]   [ 0 ]            [ 0       ]
      [ 0.0333333  1           0 ] [ d2 ] = [ 1 ]  ⇒  {D} =  [ 1       ]
      [ 0.100000  -0.0271300   1 ] [ d3 ]   [ 0 ]            [ 0.02713 ]

      [ 3   -0.1      -0.2      ] [ x1 ]   [ 0       ]            [ 0.004944 ]
      [ 0    7.00333  -0.293333 ] [ x2 ] = [ 1       ]  ⇒  {X} =  [ 0.142903 ]
      [ 0    0        10.0120   ] [ x3 ]   [ 0.02713 ]            [ 0.00271  ]

Example 2- Solution

• The third column of [A]-1

      [ 1          0           0 ] [ d1 ]   [ 0 ]            [ 0 ]
      [ 0.0333333  1           0 ] [ d2 ] = [ 0 ]  ⇒  {D} =  [ 0 ]
      [ 0.100000  -0.0271300   1 ] [ d3 ]   [ 1 ]            [ 1 ]

      [ 3   -0.1      -0.2      ] [ x1 ]   [ 0 ]            [ 0.006798 ]
      [ 0    7.00333  -0.293333 ] [ x2 ] = [ 0 ]  ⇒  {X} =  [ 0.004183 ]
      [ 0    0        10.0120   ] [ x3 ]   [ 1 ]            [ 0.09988  ]

Example 2- Solution

• The matrix inverse [A]-1 is

               [  0.33249   0.004944  0.006798 ]
      [A]-1 =  [ -0.00518   0.142903  0.004183 ]
               [ -0.01008   0.00271   0.09988  ]

• Check your result by verifying that [A][A]-1 = [I].
• The final solution is

                       [  0.33249   0.004944  0.006798 ] [  7.85 ]   [  3       ]
      {X} = [A]-1{B} = [ -0.00518   0.142903  0.004183 ] [ -19.3 ] = [ -2.50002 ]
                       [ -0.01008   0.00271   0.09988  ] [  71.4 ]   [  7       ]

Extending the Gaussian Elimination Process

• If pivoting is required to solve [A]{X}={B}, then there exists a
  permutation matrix [P] such that

      [P][A] = [L][U]

• The solution {X} is found in four steps (sketched below):
  1. Construct the matrices [L], [U] and [P].
  2. Compute the column vector [P]{B}.
  3. Solve [L]{D}=[P]{B} for {D} using forward substitution.
  4. Solve [U]{X}={D} for {X} using back substitution.
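In practice a library routine returns this factorization directly. A sketch using SciPy (assuming SciPy is available; the function name plu_solve is illustrative):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    def plu_solve(A, b):
        """Solve [A]{x} = {b} following the four steps above."""
        # Step 1: scipy.linalg.lu returns P, L, U with A = P @ L @ U,
        # so [P] in the convention [P][A] = [L][U] is P.T here.
        P, L, U = lu(A)
        Pb = P.T @ b                             # step 2: [P]{b}
        d = solve_triangular(L, Pb, lower=True)  # step 3: forward substitution
        x = solve_triangular(U, d, lower=False)  # step 4: back substitution
        return x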

Example 3

• Use LU decomposition with permutation to solve the following system
  of equations:

      0.0003 x1 + 3.0000 x2 = 2.0001
      1.0000 x1 + 1.0000 x2 = 1.0000

Example 3 - Solution

• In matrix form [A]{X}={B}

      [ 0.0003  3 ] [ x1 ]   [ 2.0001 ]
      [ 1       1 ] [ x2 ] = [ 1      ]

• We saw previously that pivoting is required to solve this system of
  equations, hence [P][A]=[L][U].
• The solution {X} is found in four steps:

  1. Construct the matrices [L], [U] and [P]:

      [P] = [ 0  1 ]    [L] = [ 1       0 ]    [U] = [ 1  1      ]
            [ 1  0 ]          [ 0.0003  1 ]          [ 0  2.9997 ]

Example 3 - Solution

  2. Compute the column vector [P]{B}:

      [ 0  1 ] [ 2.0001 ]   [ 1      ]
      [ 1  0 ] [ 1      ] = [ 2.0001 ]

  3. Solve [L]{D}=[P]{B} for {D} using forward substitution:

      [ 1       0 ] [ d1 ]   [ 1      ]            [ 1      ]
      [ 0.0003  1 ] [ d2 ] = [ 2.0001 ]  ⇒  {D} =  [ 1.9998 ]

  4. Solve [U]{X}={D} for {X} using back substitution:

      [ 1  1      ] [ x1 ]   [ 1      ]            [ 0.33333 ]
      [ 0  2.9997 ] [ x2 ] = [ 1.9998 ]  ⇒  {X} =  [ 0.66667 ]
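A quick check of this example with SciPy's built-in factorization, which applies partial pivoting internally (a sketch, assuming SciPy is available; note these are SciPy's lu_factor/lu_solve, not the hand-written lu_solve sketch from earlier):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[0.0003, 3.0],
                  [1.0,    1.0]])
    b = np.array([2.0001, 1.0])

    x = lu_solve(lu_factor(A), b)  # LU with partial pivoting, then two triangular solves
    print(x)                       # approximately [0.33333, 0.66667]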
