Numerical Analysis - Matrix
7. Linear Algebraic Equations
7.2 Gauss Elimination
There are two steps: Forward Elimination and Back Substitution
In the first step, a pivot equation is selected, with the diagonal element $a_{kk}$ as the pivot coefficient, and the remaining coefficients are updated by:

$a_{ij} \leftarrow a_{ij} - \dfrac{a_{ik}}{a_{kk}}\, a_{kj}$  (see the pseudocode for details)
Back substitution then gives:

$x_n = \dfrac{b_n}{a_{nn}}$

$x_i = \dfrac{b_i - \sum_{j=i+1}^{n} a_{ij}\, x_j}{a_{ii}}$  for $i = n-1, n-2, \ldots, 1$
7.2.1 Pseudocode:
DO k = 1, n-1
  DO i = k+1, n
    factor = a(i,k) / a(k,k)
    DO j = k+1, n
      a(i,j) = a(i,j) - factor * a(k,j)
    ENDDO
    b(i) = b(i) - factor * b(k)
  ENDDO
ENDDO

x(n) = b(n) / a(n,n)
DO i = n-1, 1, -1
  sum = 0
  DO j = i+1, n
    sum = sum + a(i,j) * x(j)
  ENDDO
  x(i) = (b(i) - sum) / a(i,i)
ENDDO

Forward elimination transforms the augmented system

$\begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{bmatrix} \;\rightarrow\; \begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a'_{22} & a'_{23} & b'_2 \\ 0 & 0 & a''_{33} & b''_3 \end{bmatrix}$

Back substitution then yields

$x_3 = b''_3 / a''_{33}$,  $x_2 = (b'_2 - a'_{23} x_3) / a'_{22}$,  $x_1 = (b_1 - a_{12} x_2 - a_{13} x_3) / a_{11}$
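As a concrete illustration, here is a minimal Python translation of the pseudocode above (naive Gauss elimination, no pivoting); the function name gauss_eliminate is only an illustrative choice:

```python
def gauss_eliminate(a, b):
    """Naive Gauss elimination (no pivoting), mirroring the pseudocode above.

    a is the n x n coefficient matrix (list of lists), b the right-hand-side
    vector; both are modified in place. Returns the solution vector x.
    """
    n = len(b)
    # forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k + 1, n):
                a[i][j] -= factor * a[k][j]
            b[i] -= factor * b[k]
    # back substitution
    x = [0.0] * n
    x[n - 1] = b[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```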
*Study § 9.2, Fig. 9.4 and Example 9.5 for the development
and application of the algorithm
7.2.2 Pitfalls of Gauss Elimination:
Division by zero (a pivot coefficient equal to zero)
Round-off error
Ill-conditioning (Fig. 9.2), due to a matrix determinant close to 0
Matrix determinant = 0: the matrix is singular and there is no unique solution (none, or infinitely many)
7.2.3 Improvements
Use more significant figures, to reduce the round-off error problem
Pivoting strategy, to avoid division by zero (Example 9.9):
The equation with the largest (absolute) coefficient in the pivot column becomes the pivot equation (see the sketch after this list)
Scaling, to reduce the round-off error problem (Example 9.10):
Modify the order of magnitude of one or more equations so that all equations have roughly the same order of magnitude
Ill-conditioned problems are much more difficult and require advanced treatment. However, most real engineering problems are normally well-conditioned, so if you encounter an ill-conditioned problem, check and recheck the physical formulation
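A minimal sketch of the pivoting idea in Python, meant to be used with the gauss_eliminate sketch above; partial_pivot is a hypothetical helper that would be called at the start of each elimination pass k:

```python
def partial_pivot(a, b, k):
    """Swap row k with the row at or below it whose coefficient in column k
    is largest in absolute value, so the pivot is never (near) zero.
    """
    p = max(range(k, len(a)), key=lambda i: abs(a[i][k]))
    if p != k:
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
```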
7.3 Gauss-Jordan
Modification of Gauss elimination
The elimination step is modified so that each unknown is eliminated from all the other equations, not just from the subsequent ones. This reduces the coefficient matrix to an identity matrix
Hence, there is no need for back substitution
But the elimination step requires even more work, resulting in more total computation than the Gauss method.
Algorithm for Gauss-Jordan (see the sketch after this list):
Normalise the first row by the coefficient $a_{11}$
Using the normalised first row, eliminate the first-column entries ($a_{i1}$) of the remaining rows
Normalise $a'_{22}$
Eliminate the other second-column entries ($a'_{12}$ and $a'_{32}$)
And so on, until the last column and row

The elimination process transforms

$\begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{bmatrix} \;\rightarrow\; \begin{bmatrix} 1 & 0 & 0 & b_1^{(n)} \\ 0 & 1 & 0 & b_2^{(n)} \\ 0 & 0 & 1 & b_3^{(n)} \end{bmatrix}$

where the final right-hand-side values $b_i^{(n)}$ are the solution.
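A minimal Python sketch of the procedure above (no pivoting); the name gauss_jordan is illustrative, not code from the textbook:

```python
def gauss_jordan(a, b):
    """Gauss-Jordan elimination without pivoting.

    Each row is normalised by its pivot and the corresponding unknown is
    eliminated from *all* other rows, so a ends up as the identity matrix
    and b ends up holding the solution -- no back substitution needed.
    """
    n = len(b)
    for k in range(n):
        pivot = a[k][k]
        for j in range(k, n):      # normalise the pivot row
            a[k][j] /= pivot
        b[k] /= pivot
        for i in range(n):         # eliminate column k from every other row
            if i == k:
                continue
            factor = a[i][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            b[i] -= factor * b[k]
    return b
```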
Ex. 9.12: normalise the second row by the coefficient $a_{22}$
Comparison with Gauss elimination:
Conceptually simpler (no separate "back substitution" step)
All the pitfalls of Gauss elimination still apply
But relatively more computation (approx. 50% more)
Still used in some numerical methods.
Textbook Further Study
Understand and work through Examples 9.5, 9.7, 9.8, 9.12
Build a computer program based on the algorithm in Fig. 9.6
Try the textbook problems:
9.9, 9.12, 9.13, 9.18, 9.19
7.4 LU Decomposition
$[U] = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$ (upper triangular matrix)      $[L] = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}$ (lower triangular matrix)
Description of Process:

$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ is decomposed into $[U]$ and $[L]$:

$[U] = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & 0 & a''_{33} \end{bmatrix}$

$[L] = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ f_{21} & 1 & 0 \\ f_{31} & f_{32} & 1 \end{bmatrix}$

Check: $[L][U] = [A]$
Programming Strategy:
The decomposition forms a single matrix [A'] that holds the elements of [U] on and above the diagonal and the factors of [L] below the diagonal, so the computer stores only one matrix and reads it as [U] or [L] according to the procedure, as sketched below:
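A minimal Python sketch of this storage strategy, assuming a Doolittle-style decomposition and omitting the pivoting and scaling described below; lu_decompose is a hypothetical name:

```python
def lu_decompose(a):
    """LU decomposition done in place in a single matrix.

    After the call, a[i][j] for j >= i holds [U] and a[i][j] for j < i holds
    the factor l_ij of [L] (the unit diagonal of [L] is implied), so only one
    matrix is stored. No pivoting or scaling in this sketch.
    """
    n = len(a)
    for k in range(n - 1):              # pivot row k
        for i in range(k + 1, n):       # rows below the pivot
            factor = a[i][k] / a[k][k]
            for j in range(k + 1, n):
                a[i][j] -= factor * a[k][j]
            a[i][k] = factor            # store l_ik where the zero would be
    return a
```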
Basic Algorithm:
During the elimination phase, the factors that are calculated are stored as $l_{ij}$
A partial-pivoting strategy is implemented, without physically interchanging whole rows
Although the equations are not scaled, scaled values of the elements are used to decide whether pivoting should be applied
The diagonal terms are monitored for near-zero values in order to raise a singularity warning.
The substitution process obtains {D} and finally {X} using the following relationships:

Forward substitution:

$d_1 = b_1$,  $d_i = b_i - \sum_{j=1}^{i-1} a_{ij}\, d_j$  for $i = 2, 3, \ldots, n$

Back substitution:

$x_n = \dfrac{d_n}{a_{nn}}$,  $x_i = \dfrac{d_i - \sum_{j=i+1}^{n} a_{ij}\, x_j}{a_{ii}}$  for $i = n-1, n-2, \ldots, 1$

(Here the $a_{ij}$ below the diagonal are the stored factors $l_{ij}$, and those on and above the diagonal are the elements of [U].)
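A matching Python sketch of the substitution phase, assuming the compact matrix produced by the hypothetical lu_decompose above:

```python
def lu_solve(a, b):
    """Solve [A]{x} = {b} given the compact matrix returned by lu_decompose.

    Forward substitution produces {d}; back substitution then gives {x},
    following the relationships above.
    """
    n = len(a)
    d = b[:]                             # d1 = b1
    for i in range(1, n):                # forward substitution: [L]{d} = {b}
        d[i] = b[i] - sum(a[i][j] * d[j] for j in range(i))
    x = [0.0] * n                        # back substitution: [U]{x} = {d}
    x[n - 1] = d[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x
```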
Example 10.1 – 10.2 (Case I)

$[A] = \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0.1 & 7 & -0.3 \\ 0.3 & -0.2 & 10 \end{bmatrix}$

Forward elimination:

$\begin{bmatrix} 3 & -0.1 & -0.2 \\ 0.1 & 7 & -0.3 \\ 0.3 & -0.2 & 10 \end{bmatrix} \rightarrow \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & -0.19 & 10.02 \end{bmatrix} \rightarrow \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix} = [U]$

The elimination factors:

$f_{21} = \dfrac{0.1}{3} = 0.03333333$,  $f_{31} = \dfrac{0.3}{3} = 0.10000000$,  $f_{32} = \dfrac{-0.19}{7.00333} = -0.0271300$

$[L] = \begin{bmatrix} 1 & 0 & 0 \\ 0.03333333 & 1 & 0 \\ 0.10000000 & -0.0271300 & 1 \end{bmatrix}$

$[L]\{D\} = \{B\}$:  $\begin{bmatrix} 1 & 0 & 0 \\ 0.03333333 & 1 & 0 \\ 0.10000000 & -0.0271300 & 1 \end{bmatrix} \begin{Bmatrix} d_1 \\ d_2 \\ d_3 \end{Bmatrix} = \begin{Bmatrix} 7.85 \\ -19.3 \\ 71.4 \end{Bmatrix}$

$\{D\}^T = \begin{Bmatrix} 7.85 & -19.5617 & 70.0843 \end{Bmatrix}$

$[U]\{X\} = \{D\}$:  $\begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} 7.85 \\ -19.5617 \\ 70.0843 \end{Bmatrix}$

$\{X\}^T = \begin{Bmatrix} 3 & -2.5 & 7.00003 \end{Bmatrix}$
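The Case I result above can be reproduced with the hypothetical lu_decompose / lu_solve sketches from earlier:

```python
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]

lu = lu_decompose([row[:] for row in A])   # work on a copy, keep A intact
print(lu_solve(lu, b))                     # ~[3.0, -2.5, 7.0], matching {X} above
```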
Case II

$\begin{bmatrix} 1 & 0 & 0 \\ 0.03333333 & 1 & 0 \\ 0.10000000 & -0.0271300 & 1 \end{bmatrix} \Rightarrow \begin{Bmatrix} ?? & ?? & ?? \end{Bmatrix}$

$\begin{bmatrix} 3 & -0.1 & -0.2 & ?? \\ 0 & 7.00333 & -0.293333 & ?? \\ 0 & 0 & 10.0120 & ?? \end{bmatrix} \Rightarrow \begin{Bmatrix} ?? & ?? & ?? \end{Bmatrix}$
Case III

$\begin{bmatrix} 1 & 0 & 0 \\ 0.03333333 & 1 & 0 \\ 0.10000000 & -0.0271300 & 1 \end{bmatrix} \Rightarrow \begin{Bmatrix} ?? & ?? & ?? \end{Bmatrix}$

$\begin{bmatrix} 3 & -0.1 & -0.2 & ?? \\ 0 & 7.00333 & -0.293333 & ?? \\ 0 & 0 & 10.0120 & ?? \end{bmatrix} \Rightarrow \begin{Bmatrix} ?? & ?? & ?? \end{Bmatrix}$
7.6 Matrix Inverse
$[A][A]^{-1} = [A]^{-1}[A] = [I]$

$[A] = [L][L]^T$ (the Cholesky decomposition, for a symmetric matrix)
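One common way to build the inverse, consistent with the LU decomposition above, is to solve $[A]\{x\} = \{e_j\}$ for each unit vector $e_j$ and take the solutions as the columns of $[A]^{-1}$; a minimal sketch reusing the hypothetical lu_decompose / lu_solve helpers:

```python
def invert(A):
    """Build [A]^-1 column by column: decompose once, then solve
    [A]{x} = {e_j} for every unit vector e_j.
    """
    n = len(A)
    lu = lu_decompose([row[:] for row in A])
    cols = [lu_solve(lu, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    # cols[j] is column j of the inverse; rearrange into row-major form
    return [[cols[j][i] for j in range(n)] for i in range(n)]
```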
7.7 Error Analysis & System Condition
Application of the inverse matrix to detect an ill-conditioned matrix:
Scale the matrix [A] so that its largest element in each row is 1, then invert it; if elements of $[A]^{-1}$ are orders of magnitude larger than those of the scaled matrix, the matrix is likely ill-conditioned
$[A][A]^{-1} \approx [I]$?  If not, then the matrix is ill-conditioned
$([A]^{-1})^{-1} \approx [A]$?  If not, then the matrix is ill-conditioned
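A minimal sketch of these two checks using NumPy (NumPy is not part of the slides, and the tolerance is only an illustrative choice):

```python
import numpy as np

def seems_ill_conditioned(A, tol=1e-6):
    """Apply the two inverse-based checks above: is [A][A]^-1 close to [I],
    and does inverting [A]^-1 recover [A]? True if either check fails.
    """
    A = np.asarray(A, dtype=float)
    A_inv = np.linalg.inv(A)
    identity_ok = np.allclose(A @ A_inv, np.eye(len(A)), atol=tol)
    roundtrip_ok = np.allclose(np.linalg.inv(A_inv), A, atol=tol)
    return not (identity_ok and roundtrip_ok)
```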
Matrix Norm:
Euclidean norm, for vectors
Frobenius norm, for general matrices
Uniform vector norm (largest absolute element)
Row-sum norm, for matrices
Matrix Condition Number:

$\mathrm{Cond}[A] = \|A\| \cdot \|A^{-1}\|$

$\dfrac{\|\Delta X\|}{\|X\|} \le \mathrm{Cond}[A]\, \dfrac{\|\Delta A\|}{\|A\|}$
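A minimal NumPy sketch (NumPy assumed, not part of the slides) evaluating Cond[A] with the row-sum norm for the matrix of Example 10.1:

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])

# Cond[A] = ||A|| * ||A^-1||, here using the row-sum (infinity) norm
cond = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
print(cond)                           # a small value -> well-conditioned system
print(np.linalg.cond(A, np.inf))      # library equivalent, for comparison
```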
Euclidean norms:

$\|X\|_e = \sqrt{\sum_{i=1}^{n} x_i^2}$  (for a vector)      $\|A\|_e = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{i,j}^2}$  (Frobenius norm, for a matrix)

Alternatives:

$p$-norm:  $\|X\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$

Sum of absolute values:  $\|X\|_1 = \sum_{i=1}^{n} |x_i|$
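A small Python sketch of these definitions; the function names are illustrative only:

```python
import math

def euclidean_norm(x):
    """||X||_e = sqrt(sum of x_i^2), for a vector."""
    return math.sqrt(sum(v * v for v in x))

def frobenius_norm(a):
    """||A||_e = sqrt(sum of a_ij^2 over all elements), for a matrix."""
    return math.sqrt(sum(v * v for row in a for v in row))

def p_norm(x, p):
    """||X||_p = (sum of |x_i|^p)^(1/p); p = 1 gives the sum of absolute values."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)
```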
Textbook Further Study
Understand, and if necessary memorise, the procedure in Fig. 10.1
Work through Examples 10.1, 10.2, 10.3
Build a computer program based on the algorithm in Fig. 10.3
Try the textbook problems:
10.3, 10.6, 10.10