Chapter 2 Direct Methods For Solving Linear Systems
24/6/19 2
§2 Gaussian Elimination Method
2.1 The Idea of GEM
The 1st step: multiply the first equation by -2 and add it to the second equation; then multiply the first equation by -1 and add it to the third equation. This gives

$$\begin{cases} x_1 + 2x_2 + 3x_3 = 1 \\ \phantom{x_1 + {}} 3x_2 - x_3 = 4 \\ \phantom{x_1 + {}} 2x_2 + 6x_3 = -4 \end{cases}$$
The 2nd step: multiply the second equation by -2/3 and add it to the third equation; we have

$$\begin{cases} x_1 + 2x_2 + 3x_3 = 1 \\ \phantom{x_1 + {}} 3x_2 - x_3 = 4 \\ \phantom{x_1 + {}} \tfrac{20}{3}x_3 = -\tfrac{20}{3} \end{cases}$$

The 3rd step: solve the triangular system by backward substitution; we obtain x* = (2, 1, -1)^T.
The above process can be described in matrix form as

$$[A : b] = \begin{bmatrix} 1 & 2 & 3 & 1 \\ 2 & 7 & 5 & 6 \\ 1 & 4 & 9 & -3 \end{bmatrix} \xrightarrow[r_3 \to r_3 - r_1]{r_2 \to r_2 - 2r_1} \begin{bmatrix} 1 & 2 & 3 & 1 \\ 0 & 3 & -1 & 4 \\ 0 & 2 & 6 & -4 \end{bmatrix} \xrightarrow{r_3 \to r_3 - \frac{2}{3}r_2} \begin{bmatrix} 1 & 2 & 3 & 1 \\ 0 & 3 & -1 & 4 \\ 0 & 0 & \frac{20}{3} & -\frac{20}{3} \end{bmatrix}$$
2.2 GEM
$$[A^{(1)} : b^{(1)}] = \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)} \\ a_{21}^{(1)} & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} & b_2^{(1)} \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1}^{(1)} & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)} & b_n^{(1)} \end{bmatrix} \xrightarrow{r_i \to r_i - l_{i1} r_1 \; (i = 2, 3, \ldots, n)} \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)} \\ 0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} & b_2^{(2)} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & a_{n2}^{(2)} & \cdots & a_{nn}^{(2)} & b_n^{(2)} \end{bmatrix} = [A^{(2)} : b^{(2)}]$$
where

$$l_{i1} = \frac{a_{i1}^{(1)}}{a_{11}^{(1)}}, \quad i = 2, 3, \ldots, n$$

$$a_{ij}^{(2)} = a_{ij}^{(1)} - l_{i1} a_{1j}^{(1)}, \quad i, j = 2, 3, \ldots, n$$

$$b_i^{(2)} = b_i^{(1)} - l_{i1} b_1^{(1)}, \quad i = 2, 3, \ldots, n$$
After the (k-1)-th elimination (suppose $a_{kk}^{(k)} \neq 0$), we have

$$[A^{(k)} : b^{(k)}] = \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1k}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)} \\ & a_{22}^{(2)} & \cdots & a_{2k}^{(2)} & \cdots & a_{2n}^{(2)} & b_2^{(2)} \\ & & \ddots & \vdots & & \vdots & \vdots \\ & & & a_{kk}^{(k)} & \cdots & a_{kn}^{(k)} & b_k^{(k)} \\ & & & \vdots & & \vdots & \vdots \\ & & & a_{nk}^{(k)} & \cdots & a_{nn}^{(k)} & b_n^{(k)} \end{bmatrix}$$
Like the first elimination step, the goal of the k-th step is to eliminate the entries below $a_{kk}^{(k)}$. The operations in detail are as follows:

$$l_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}, \quad i = k+1, \ldots, n \quad (\text{multipliers})$$

$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - l_{ik} a_{kj}^{(k)}, \quad i, j = k+1, k+2, \ldots, n$$

$$b_i^{(k+1)} = b_i^{(k)} - l_{ik} b_k^{(k)}, \quad i = k+1, k+2, \ldots, n$$
If $a_{kk}^{(k)} \neq 0$ (k = 1, 2, ..., n-1), the process can continue. Finally, we obtain the following matrix:

$$[A^{(n)} : b^{(n)}] = \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)} \\ & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} & b_2^{(2)} \\ & & \ddots & \vdots & \vdots \\ & & & a_{nn}^{(n)} & b_n^{(n)} \end{bmatrix}$$
Because the matrix A^(n) is upper triangular, the linear system has been transformed into a triangular system of equations. If $a_{nn}^{(n)} \neq 0$, we obtain the solution

$$x_n = \frac{b_n^{(n)}}{a_{nn}^{(n)}}$$

$$x_i = \frac{b_i^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)} x_j}{a_{ii}^{(i)}}, \quad i = n-1, n-2, \ldots, 1$$
In a word, we can state the algorithm of GEM:

Step 1: elimination process
① set $a_{ij}^{(1)} = a_{ij}$, $b_i^{(1)} = b_i$ (i, j = 1, 2, ..., n)
② for k = 1 to n-1, if $a_{kk}^{(k)} \neq 0$, do
$$l_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}} \quad (i = k+1, k+2, \ldots, n)$$
$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - l_{ik} a_{kj}^{(k)} \quad (i, j = k+1, k+2, \ldots, n)$$
$$b_i^{(k+1)} = b_i^{(k)} - l_{ik} b_k^{(k)} \quad (i = k+1, k+2, \ldots, n)$$

Step 2: backward substitution ($a_{ii}^{(i)} \neq 0$)
$$x_n = \frac{b_n^{(n)}}{a_{nn}^{(n)}}$$
$$x_i = \frac{b_i^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)} x_j}{a_{ii}^{(i)}} \quad (i = n-1, n-2, \ldots, 1)$$
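The two steps above can be sketched in Python (a minimal illustration with names of our choosing, not a production routine):

```python
def gauss_elimination(A, b):
    """Solve Ax = b by Gaussian elimination without pivoting.

    A: list of n rows (lists of floats); b: list of n floats.
    Raises ZeroDivisionError if a zero pivot a_kk^(k) is encountered.
    """
    n = len(A)
    # Work on copies so the caller's data is untouched.
    A = [row[:] for row in A]
    b = b[:]
    # Step 1: for k = 1..n-1, zero the entries below a_kk.
    for k in range(n - 1):
        for i in range(k + 1, n):
            l = A[i][k] / A[k][k]          # multiplier l_ik
            for j in range(k, n):
                A[i][j] -= l * A[k][j]
            b[i] -= l * b[k]
    # Step 2: backward substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Running it on the 3x3 example of Section 2.1 reproduces x* = (2, 1, -1)^T.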
2.3 A Condition on GEM
From the previous algorithm, we need to assume that the entry $a_{kk}^{(k)} \neq 0$ in the k-th elimination step. In order to carry the GEM through, the condition $a_{kk}^{(k)} \neq 0$ must hold for all k. But we would like to decide from A alone whether the GEM can be completed or not. So we have the following theorem:
§3 Gaussian Elimination with Pivoting
Method II: GEM with Column Pivoting

$$\begin{bmatrix} 0.50 & 1.1 & 3.1 & 6.0 \\ 2.0 & 4.5 & 0.36 & 0.020 \\ 5.0 & 0.96 & 6.5 & 0.96 \end{bmatrix} \xrightarrow{r_1 \leftrightarrow r_3} \begin{bmatrix} 5.0 & 0.96 & 6.5 & 0.96 \\ 2.0 & 4.5 & 0.36 & 0.020 \\ 0.50 & 1.1 & 3.1 & 6.0 \end{bmatrix}$$

$$\to \begin{bmatrix} 5.00 & 0.960 & 6.50 & 0.960 \\ 0 & 4.12 & -2.24 & -0.364 \\ 0 & 1.00 & 2.45 & 5.90 \end{bmatrix} \to \begin{bmatrix} 5.00 & 0.960 & 6.50 & 0.960 \\ 0 & 4.12 & -2.24 & -0.364 \\ 0 & 0 & 2.99 & 5.99 \end{bmatrix}$$
From the example, we can see that a simple row exchange can improve the precision of the algorithm. At the start of the k-th step we have

$$[A^{(k)} : b^{(k)}] = \begin{bmatrix} \ddots & & & & \\ & a_{kk}^{(k)} & \cdots & a_{kn}^{(k)} & b_k^{(k)} \\ & \vdots & & \vdots & \vdots \\ & a_{nk}^{(k)} & \cdots & a_{nn}^{(k)} & b_n^{(k)} \end{bmatrix}$$
Before the k-th elimination, steps ① and ② need to be done:

① determine $i_k$ such that $|a_{i_k,k}^{(k)}| = \max_{k \le i \le n} |a_{ik}^{(k)}|$.

If $a_{i_k,k}^{(k)} = 0$, we have $\det(A) = \det(A^{(k)}) = 0$; namely, the system Ax = b does not satisfy Cramer's rule.

② if $i_k \neq k$, exchange rows $i_k$ and k.
Algorithm of GEM with column pivoting

1. Find the pivot element: select $i_k$ to satisfy $|a_{i_k,k}| = \max_{k \le i \le n} |a_{ik}|$.

2. If $i_k = k$, go to step 3. Else exchange $a_{kj} \leftrightarrow a_{i_k,j}$ $(k \le j \le n)$ and $b_k \leftrightarrow b_{i_k}$.

3. Elimination process:
$$a_{ik} \leftarrow \frac{a_{ik}}{a_{kk}} \quad (i = k+1, k+2, \ldots, n)$$
$$a_{ij} \leftarrow a_{ij} - a_{ik} a_{kj}, \qquad b_i \leftarrow b_i - a_{ik} b_k \quad (i = k+1, \ldots, n; \; j = k+1, \ldots, n)$$

4. If $a_{nn} = 0$, break.

5. Backward substitution (the solution overwrites b):
$$b_n \leftarrow \frac{b_n}{a_{nn}}, \qquad b_i \leftarrow \frac{b_i - \sum_{j=i+1}^{n} a_{ij} b_j}{a_{ii}} \quad (i = n-1, n-2, \ldots, 2, 1)$$

6. Output $(b_1, b_2, \cdots, b_n)^T$.
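The six steps above can be sketched as follows (a minimal illustration; as in the algorithm, the solution overwrites the right-hand side):

```python
def gauss_elimination_pivot(A, b):
    """Gaussian elimination with column (partial) pivoting, a sketch of
    the algorithm above; works on copies of A and b."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Step 1: pick the pivot row i_k with the largest |a_ik|, k <= i < n.
        ik = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[ik][k] == 0.0:
            raise ValueError("matrix is singular")
        # Step 2: swap rows k and i_k if needed.
        if ik != k:
            A[k], A[ik] = A[ik], A[k]
            b[k], b[ik] = b[ik], b[k]
        # Step 3: eliminate below the pivot.
        for i in range(k + 1, n):
            l = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= l * A[k][j]
            b[i] -= l * b[k]
    # Step 4: last pivot must be nonzero.
    if A[n - 1][n - 1] == 0.0:
        raise ValueError("matrix is singular")
    # Step 5: backward substitution, result stored in b.
    for i in range(n - 1, -1, -1):
        b[i] = (b[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b  # step 6: output
```

Applied to the 3x3 example of Method II, the residual Ax - b is at machine-precision level.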
§4 Gauss-Jordan Elimination method
$$[A^{(k)} : b^{(k)}] = \begin{bmatrix} 1 & & & a_{1k} & \cdots & a_{1n} & b_1 \\ & \ddots & & \vdots & & \vdots & \vdots \\ & & 1 & a_{k-1,k} & \cdots & a_{k-1,n} & b_{k-1} \\ & & & a_{kk} & \cdots & a_{kn} & b_k \\ & & & \vdots & & \vdots & \vdots \\ & & & a_{nk} & \cdots & a_{nn} & b_n \end{bmatrix}$$

The k-th step eliminates column k both above and below the pivot, with multipliers $l_{ik} = a_{ik}/a_{kk}$:
$$a_{ij} = a_{ij} - l_{ik} a_{kj} \quad (i = 1, 2, \ldots, n \text{ and } i \neq k; \; j = k+1, \ldots, n)$$
$$b_i = b_i - l_{ik} b_k \quad (i = 1, 2, \ldots, n \text{ and } i \neq k)$$

⑤ do
$$a_{kj} = a_{kj} / a_{kk} \quad (j = k, k+1, \ldots, n), \qquad b_k = b_k / a_{kk}$$
When k = n,

$$[A : b] \to [A^{(n)} : b^{(n)}] = \begin{bmatrix} 1 & & & b_1 \\ & 1 & & b_2 \\ & & \ddots & \vdots \\ & & & 1 \;\; b_n \end{bmatrix}$$

so the solution can be read off directly from the last column.
2.5.1 Computing the Inverse of a Matrix

Eg. 2.4 Suppose $A = \begin{bmatrix} -1 & 1 & 0 \\ 2 & 2 & -3 \\ -1 & 2 & -1 \end{bmatrix}$; find $A^{-1}$.
Solution.

$$[A \mid I] = \begin{bmatrix} -1 & 1 & 0 & 1 & 0 & 0 \\ 2 & 2 & -3 & 0 & 1 & 0 \\ -1 & 2 & -1 & 0 & 0 & 1 \end{bmatrix} \xrightarrow{r_1 \leftrightarrow r_2} \begin{bmatrix} 2 & 2 & -3 & 0 & 1 & 0 \\ -1 & 1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 0 & 1 \end{bmatrix}$$

Normalizing $r_1$ and clearing column 1:

$$\to \begin{bmatrix} 1 & 1 & -\frac{3}{2} & 0 & \frac{1}{2} & 0 \\ 0 & 2 & -\frac{3}{2} & 1 & \frac{1}{2} & 0 \\ 0 & 3 & -\frac{5}{2} & 0 & \frac{1}{2} & 1 \end{bmatrix}$$

Pivoting on column 2 ($r_2 \leftrightarrow r_3$), normalizing and clearing column 2:

$$\to \begin{bmatrix} 1 & 0 & -\frac{2}{3} & 0 & \frac{1}{3} & -\frac{1}{3} \\ 0 & 1 & -\frac{5}{6} & 0 & \frac{1}{6} & \frac{1}{3} \\ 0 & 0 & \frac{1}{6} & 1 & \frac{1}{6} & -\frac{2}{3} \end{bmatrix}$$

Normalizing $r_3$ and clearing column 3:

$$\to \begin{bmatrix} 1 & 0 & 0 & 4 & 1 & -3 \\ 0 & 1 & 0 & 5 & 1 & -3 \\ 0 & 0 & 1 & 6 & 1 & -4 \end{bmatrix}$$

As a consequence, $A^{-1} = \begin{bmatrix} 4 & 1 & -3 \\ 5 & 1 & -3 \\ 6 & 1 & -4 \end{bmatrix}$.
4. If $|a_{i_k,k}| = \max_{k \le i \le n} |a_{ik}| = 0$, then A is a singular matrix; break.

5. If $i_k \neq k$, exchange $a_{kj} \leftrightarrow a_{i_k,j}$ $(j = k, k+1, \ldots, n)$.

6. Elimination process:
$$a_{ik} \leftarrow \frac{a_{ik}}{a_{kk}} \quad (i = 1, 2, \ldots, n, \; i \neq k)$$
$$a_{kk} \leftarrow \frac{1}{a_{kk}}$$
$$a_{ij} \leftarrow a_{ij} - a_{ik} a_{kj} \quad (i = 1, 2, \ldots, n, \; i \neq k; \; j = k+1, k+2, \ldots, n)$$

7. Output $A^{-1}$.
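The inversion procedure can be sketched by working on the augmented matrix [A | I] rather than in place (a simpler but equivalent route; the function name is ours):

```python
def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination with column pivoting on the
    augmented matrix [A | I] -- a sketch of the procedure above."""
    n = len(A)
    # Build [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # Pivot: largest |entry| in column k among rows k..n-1.
        ik = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[ik][k] == 0.0:
            raise ValueError("matrix is singular")
        M[k], M[ik] = M[ik], M[k]
        # Normalize the pivot row, then clear column k in every other row.
        p = M[k][k]
        M[k] = [v / p for v in M[k]]
        for i in range(n):
            if i != k and M[i][k] != 0.0:
                l = M[i][k]
                M[i] = [vi - l * vk for vi, vk in zip(M[i], M[k])]
    # The right half is now A^{-1}.
    return [row[n:] for row in M]
```

On the matrix of Eg. 2.4 this reproduces the inverse computed above.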
§5 LU Factorization of Matrix
Suppose A is an n-th order square matrix. If there exists a lower triangular matrix L and an upper triangular matrix U such that A = LU, we call A = LU a triangular factorization of A.
Similarly, $A^{(k+1)} = L_k A^{(k)}$, $b^{(k+1)} = L_k b^{(k)}$, where

$$L_k = \begin{bmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -l_{k+1,k} & \ddots & \\ & & \vdots & & \\ & & -l_{nk} & & 1 \end{bmatrix}$$

Collecting all the multipliers gives the unit lower triangular factor

$$L = \begin{bmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & \vdots & \ddots & \\ l_{n1} & l_{n2} & \cdots \; l_{n,n-1} & 1 \end{bmatrix}$$
After the LU factorization, it is an easy job to solve the system Ax = b: it is equivalent to the two triangular systems

$$Ly = b, \qquad Ux = y.$$
According to the multiplication rule of matrices, we have

① because $a_{1j} = u_{1j}$, we get $u_{1j} = a_{1j}$, $j = 1, 2, \ldots, n$;

② because $a_{i1} = l_{i1} u_{11}$, we get $l_{i1} = \dfrac{a_{i1}}{u_{11}}$, $i = 2, 3, \ldots, n$.

Assume that we have obtained the first k-1 rows of U and the first k-1 columns of L (1 ≤ k ≤ n). For the k-th step,

③ because $a_{kj} = \sum_{r=1}^{n} l_{kr} u_{rj}$, where $l_{kr} = 0$ for r > k and $l_{kk} = 1$,
$$u_{kj} = a_{kj} - \sum_{r=1}^{k-1} l_{kr} u_{rj}, \quad j = k, k+1, \ldots, n$$
④ according to $a_{ik} = \sum_{r=1}^{n} l_{ir} u_{rk}$, where $u_{rk} = 0$ for r > k and $u_{kk}$ is now a known value,
$$l_{ik} = \frac{a_{ik} - \sum_{r=1}^{k-1} l_{ir} u_{rk}}{u_{kk}}, \quad i = k+1, k+2, \ldots, n$$
⑤ solve Ly = b:
$$y_1 = b_1, \qquad y_k = b_k - \sum_{r=1}^{k-1} l_{kr} y_r, \quad k = 2, 3, \ldots, n$$
⑥ solve Ux = y:
$$x_n = \frac{y_n}{u_{nn}}, \qquad x_k = \frac{y_k - \sum_{r=k+1}^{n} u_{kr} x_r}{u_{kk}}, \quad k = n-1, \ldots, 2, 1$$

Doolittle's factorization method has the same efficiency as GEM.
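Formulas ①-⑥ can be sketched directly in Python (a minimal illustration; function names are ours):

```python
def doolittle(A):
    """Doolittle factorization A = LU with L unit lower triangular and
    U upper triangular, following formulas (1)-(4) above."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = 1.0
        # Row k of U: u_kj = a_kj - sum_{r<k} l_kr u_rj
        for j in range(k, n):
            U[k][j] = A[k][j] - sum(L[k][r] * U[r][j] for r in range(k))
        # Column k of L: l_ik = (a_ik - sum_{r<k} l_ir u_rk) / u_kk
        for i in range(k + 1, n):
            L[i][k] = (A[i][k] - sum(L[i][r] * U[r][k] for r in range(k))) / U[k][k]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b via Ly = b (forward) then Ux = y (backward),
    i.e. formulas (5) and (6)."""
    n = len(b)
    y = [0.0] * n
    for k in range(n):
        y[k] = b[k] - sum(L[k][r] * y[r] for r in range(k))
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (y[k] - sum(U[k][r] * x[r] for r in range(k + 1, n))) / U[k][k]
    return x
```

On the matrix of Eg. 2.5 below, this reproduces L, U, and the solution x = (1, 0, 1, 0)^T exactly.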
2.5.2 Crout Factorization

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & \cdots & u_{1n} \\ 0 & 1 & \cdots & u_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

According to the matrix multiplication,

① because $a_{i1} = l_{i1}$, we have $l_{i1} = a_{i1}$, $i = 1, 2, \ldots, n$.
⑤ solve Ly = b:
$$y_1 = \frac{b_1}{l_{11}}, \qquad y_k = \frac{b_k - \sum_{r=1}^{k-1} l_{kr} y_r}{l_{kk}}, \quad k = 2, 3, \ldots, n$$

⑥ solve Ux = y:
$$x_n = y_n, \qquad x_k = y_k - \sum_{r=k+1}^{n} u_{kr} x_r, \quad k = n-1, \ldots, 2, 1$$
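The Crout scheme mirrors Doolittle with the unit diagonal moved from L to U; a minimal sketch (function name is ours):

```python
def crout(A):
    """Crout factorization A = LU with L lower triangular and U unit
    upper triangular (u_kk = 1), mirroring the formulas above."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        U[k][k] = 1.0
        # Column k of L: l_ik = a_ik - sum_{r<k} l_ir u_rk
        for i in range(k, n):
            L[i][k] = A[i][k] - sum(L[i][r] * U[r][k] for r in range(k))
        # Row k of U: u_kj = (a_kj - sum_{r<k} l_kr u_rj) / l_kk
        for j in range(k + 1, n):
            U[k][j] = (A[k][j] - sum(L[k][r] * U[r][j] for r in range(k))) / L[k][k]
    return L, U
```

Multiplying the factors back together recovers A, which is a quick way to check the scheme on any test matrix.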
Finding the [U] Matrix

Using the forward elimination procedure of Gauss elimination on

$$A = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

Step 1: $\frac{64}{25} = 2.56$; Row2 ← Row2 - 2.56·Row1; and $\frac{144}{25} = 5.76$; Row3 ← Row3 - 5.76·Row1:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{bmatrix}$$

Step 2: $\frac{-16.8}{-4.8} = 3.5$; Row3 ← Row3 - 3.5·Row2:

$$U = \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

http://numericalmethods.eng.usf.edu
Finding the [L] Matrix

The multipliers recorded during elimination fill the unit lower triangular matrix:

$$L = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}$$
Does [L][U] = [A]?

$$LU = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} = A$$
Using LU Decomposition to Solve SLEs

Solve the following set of linear equations using LU decomposition:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$
Example

Complete the forward substitution to solve for [Z] from LZ = B:

$$z_1 = 106.8$$
$$z_2 = 177.2 - 2.56 z_1 = 177.2 - 2.56 \times 106.8 = -96.21$$
$$z_3 = 279.2 - 5.76 z_1 - 3.5 z_2 = 279.2 - 5.76 \times 106.8 + 3.5 \times 96.21 = 0.76$$

$$Z = \begin{bmatrix} 106.8 \\ -96.21 \\ 0.76 \end{bmatrix}$$
Example

Set [U][X] = [Z]:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.21 \\ 0.76 \end{bmatrix}$$
Example

From the 3rd equation, $0.7 x_3 = 0.76$, so $x_3 = 1.0857$. Substituting $x_3$ into the second equation gives $x_2 = \frac{-96.21 + 1.56 x_3}{-4.8} = 19.690$. Substituting $x_3$ and $x_2$ into the first equation gives $x_1 = \frac{106.8 - 5 x_2 - x_3}{25} = 0.29048$.

Hence the solution vector is

$$X = \begin{bmatrix} 0.29048 \\ 19.690 \\ 1.0857 \end{bmatrix}$$
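The whole worked example can be checked numerically; this sketch recomputes the multipliers and runs both triangular solves (variable names are ours):

```python
A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
b = [106.8, 177.2, 279.2]
n = 3
L = [[float(i == j) for j in range(n)] for i in range(n)]
U = [row[:] for row in A]
# Forward elimination, storing the multipliers in L (Doolittle form).
for k in range(n - 1):
    for i in range(k + 1, n):
        L[i][k] = U[i][k] / U[k][k]
        for j in range(k, n):
            U[i][j] -= L[i][k] * U[k][j]
print([round(L[i][k], 2) for i, k in [(1, 0), (2, 0), (2, 1)]])  # [2.56, 5.76, 3.5]
# Forward substitution L z = b, then backward substitution U x = z.
z = [0.0] * n
for i in range(n):
    z[i] = b[i] - sum(L[i][r] * z[r] for r in range(i))
x = [0.0] * n
for i in range(n - 1, -1, -1):
    x[i] = (z[i] - sum(U[i][r] * x[r] for r in range(i + 1, n))) / U[i][i]
```

The computed x agrees with the hand calculation, x ≈ (0.29048, 19.690, 1.0857)^T.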
Eg. 2.5 Solve the following system Ax = b using Doolittle's method, where

$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 9 & 16 \\ 1 & 8 & 27 & 64 \\ 1 & 16 & 81 & 256 \end{bmatrix}, \qquad b = \begin{bmatrix} 4 \\ 10 \\ 28 \\ 82 \end{bmatrix}$$
Solution. Performing the compact (Doolittle) scheme, where L and U overwrite A step by step:

$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 9 & 16 \\ 1 & 8 & 27 & 64 \\ 1 & 16 & 81 & 256 \end{bmatrix} \xrightarrow{(1)} \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 9 & 16 \\ 1 & 8 & 27 & 64 \\ 1 & 16 & 81 & 256 \end{bmatrix} \xrightarrow{(2)} \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 6 & 12 \\ 1 & 3 & 27 & 64 \\ 1 & 7 & 81 & 256 \end{bmatrix}$$

$$\xrightarrow{(3)} \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 6 & 12 \\ 1 & 3 & 6 & 24 \\ 1 & 7 & 6 & 256 \end{bmatrix} \xrightarrow{(4)} \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 6 & 12 \\ 1 & 3 & 6 & 24 \\ 1 & 7 & 6 & 24 \end{bmatrix}$$

That is,

$$L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 3 & 1 & 0 \\ 1 & 7 & 6 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 2 & 6 & 12 \\ 0 & 0 & 6 & 24 \\ 0 & 0 & 0 & 24 \end{bmatrix}$$
To solve Ly = b:

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 3 & 1 & 0 \\ 1 & 7 & 6 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} 4 \\ 10 \\ 28 \\ 82 \end{bmatrix}, \quad \text{we have} \quad y = \begin{bmatrix} 4 \\ 6 \\ 6 \\ 0 \end{bmatrix}$$

Then solving Ux = y:

$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 2 & 6 & 12 \\ 0 & 0 & 6 & 24 \\ 0 & 0 & 0 & 24 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 4 \\ 6 \\ 6 \\ 0 \end{bmatrix}, \quad \text{we obtain} \quad x = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}$$
§6 Cholesky's Method
If matrix A is symmetric positive definite, we can simplify the LU factorization further and get a more efficient method.
2.6.1 LDU Factorization
Theorem. If all leading principal minors of the matrix are nonzero, then the matrix can be factorized uniquely as
A = LDU
where L is a unit lower triangular matrix, U is a unit upper triangular matrix, and D is a diagonal matrix.
Proof. According to Doolittle's factorization, A = LU*, where $u_{ii}^* \neq 0$ $(i = 1, 2, \ldots, n)$. Factoring the diagonal of U* out into D, we get A = LU* = LDU.
So we have
$$Ax = b \iff \begin{cases} Lz = b \\ Dy = z \\ Ux = y \end{cases}$$
2.6.2 Cholesky's Factorization

$$A = L_1 D L_1^T = \left(L_1 D^{\frac{1}{2}}\right)\left(L_1 D^{\frac{1}{2}}\right)^T = L L^T, \qquad L = L_1 D^{\frac{1}{2}}$$
2.6.3 Cholesky's Factorization Method
Suppose

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} \begin{bmatrix} l_{11} & l_{21} & \cdots & l_{n1} \\ 0 & l_{22} & \cdots & l_{n2} \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & l_{nn} \end{bmatrix}$$

The triangular solves proceed as before; since here $U = L^T$, the back substitution reads

$$x_k = \frac{y_k - \sum_{r=k+1}^{n} l_{rk} x_r}{l_{kk}}, \quad k = n-1, \ldots, 2, 1$$
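The factorization itself can be sketched in a few lines; this is a minimal illustration (the function name is ours), using the standard column-by-column formulas for $L$ in $A = LL^T$:

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive
    definite matrix A; returns the lower triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Diagonal entry: l_kk = sqrt(a_kk - sum_{r<k} l_kr^2)
        s = A[k][k] - sum(L[k][r] ** 2 for r in range(k))
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[k][k] = math.sqrt(s)
        # Below-diagonal entries of column k.
        for i in range(k + 1, n):
            L[i][k] = (A[i][k] - sum(L[i][r] * L[k][r] for r in range(k))) / L[k][k]
    return L
```

For example, A = [[4, 2], [2, 3]] factors into L = [[2, 0], [1, √2]], and indeed LL^T recovers A.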
2.7.1 Norm
1. Norm of a Vector
Definition 1. If for any vector x in R^n there exists a corresponding real number ‖x‖ satisfying
(1) $\|x\| \ge 0$ for all $x \in R^n$, and $\|x\| = 0 \iff x = 0$;
(2) $\|kx\| = |k|\,\|x\|$ for all $x \in R^n$, $k \in R$;
(3) $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in R^n$,
then ‖x‖ is called a norm of x. It is easy to verify that all three common norms ‖·‖₁, ‖·‖₂, ‖·‖∞ satisfy the three conditions of the norm definition.
Eg. 1 Let x = (1, 0.5, 0, -0.3)^T; find ‖x‖₁, ‖x‖₂, ‖x‖∞.

Solution.
$$\|x\|_1 = 1 + 0.5 + 0 + 0.3 = 1.8$$
$$\|x\|_2 = \sqrt{1^2 + 0.5^2 + 0.3^2} \approx 1.1576$$
$$\|x\|_\infty = 1$$
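The three norms of Eg. 1 are straightforward to compute; a minimal sketch (function names are ours):

```python
import math

def norm1(x):
    """||x||_1: sum of absolute values."""
    return sum(abs(v) for v in x)

def norm2(x):
    """||x||_2: Euclidean norm."""
    return math.sqrt(sum(v * v for v in x))

def norm_inf(x):
    """||x||_inf: largest absolute component."""
    return max(abs(v) for v in x)

x = [1.0, 0.5, 0.0, -0.3]
print(norm1(x), round(norm2(x), 4), norm_inf(x))  # 1.8 1.1576 1.0
```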
Definition 2. If for any two vectors x and y in R^n there exists a corresponding real number (x, y) satisfying:
(1) $(x, x) \ge 0$ for all $x \in R^n$, and $(x, x) = 0 \iff x = 0$;
(2) $(x, y) = (y, x)$ for all $x, y \in R^n$;
(3) $(kx, y) = k(x, y)$ for all $x, y \in R^n$, $k \in R$;
(4) $(x, y + z) = (x, y) + (x, z)$ for all $x, y, z \in R^n$,
then the real number (x, y) is called the inner product of the vectors x and y. For example,
$$(x, y) = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^{n} x_i y_i$$
2.7.2 Norm of Matrices
Definition 3. If for any n-th order matrix A there exists a corresponding real number ‖A‖ satisfying:
(1) $\|A\| \ge 0$ for all $A \in R^{n \times n}$, and $\|A\| = 0 \iff A = 0$;
(2) $\|kA\| = |k|\,\|A\|$ for all $A \in R^{n \times n}$, $k \in R$;
(3) $\|A + B\| \le \|A\| + \|B\|$ and $\|AB\| \le \|A\|\,\|B\|$ for all $A, B \in R^{n \times n}$;
(4) $\|Ax\| \le \|A\|\,\|x\|$ (compatibility with a vector norm),
then ‖A‖ is called a norm of A.
Definition 5.

$$\|A\| = \max_{x \neq 0, \; x \in R^n} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1, \; x \in R^n} \|Ax\|$$
The following norms are often used:

$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}| \quad (\text{column sum norm})$$

We say $\{A^{(k)}\}$ converges to $A = (a_{ij})_{n \times n}$, denoted $\lim_{k \to \infty} A^{(k)} = A$.
Theorem 3. $\lim_{k \to \infty} A^{(k)} = A \iff \lim_{k \to \infty} \|A^{(k)} - A\| = 0$

Eg. 2 Let $A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$; find ‖A‖₁, ‖A‖₂, ‖A‖∞, and ρ(A).

Solution. Obviously, ‖A‖₁ = 4 and ‖A‖∞ = 4.

$$A^T A = \begin{bmatrix} 5 & -4 & 1 \\ -4 & 6 & -4 \\ 1 & -4 & 5 \end{bmatrix}$$
$$|\lambda I - A^T A| = \begin{vmatrix} \lambda - 5 & 4 & -1 \\ 4 & \lambda - 6 & 4 \\ -1 & 4 & \lambda - 5 \end{vmatrix} = (\lambda - 4)(\lambda^2 - 12\lambda + 4) = 0$$

$$\lambda_1 = 4, \quad \lambda_{2,3} = 6 \pm 4\sqrt{2}, \quad \|A\|_2 = \sqrt{6 + 4\sqrt{2}} = 3.4142$$

$$|\lambda I - A| = \begin{vmatrix} \lambda - 2 & 1 & 0 \\ 1 & \lambda - 2 & 1 \\ 0 & 1 & \lambda - 2 \end{vmatrix} = (\lambda - 2)(\lambda^2 - 4\lambda + 2) = 0$$

$$\lambda_1 = 2, \quad \lambda_{2,3} = 2 \pm \sqrt{2}, \quad \rho(A) = |\lambda_3| = 2 + \sqrt{2} = 3.4142$$
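The value ρ(A) = 2 + √2 can be checked numerically by power iteration, rather than by the characteristic polynomial used above (a sketch that assumes the dominant eigenvalue is simple, which holds here):

```python
def spectral_radius(A, iters=200):
    """Estimate rho(A) by power iteration: repeatedly apply A and
    renormalize by the infinity norm; max|y_i| converges to |lambda_max|."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
print(round(spectral_radius(A), 4))  # 3.4142, i.e. 2 + sqrt(2)
```

Since this A is symmetric, ‖A‖₂ = ρ(A), which matches the result above.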
1. Perturbation δb
Suppose that there is a perturbation δb of b while the matrix A is exact; we analyze the resulting error δx:

$$A(x + \delta x) = b + \delta b \implies Ax + A\,\delta x = b + \delta b$$

Because Ax = b, we get $A\,\delta x = \delta b$, and further $\delta x = A^{-1} \delta b$.
Taking norms, $\|\delta x\| \le \|A^{-1}\|\,\|\delta b\|$. Because $\|b\| = \|Ax\| \le \|A\|\,\|x\|$, we have $\|x\| \ge \dfrac{\|b\|}{\|A\|}$, i.e. $\dfrac{1}{\|x\|} \le \dfrac{\|A\|}{\|b\|}$. And then

$$\frac{\|\delta x\|}{\|x\|} \le \|A\|\,\|A^{-1}\| \frac{\|\delta b\|}{\|b\|} = \operatorname{Cond}(A)\, \frac{\|\delta b\|}{\|b\|}$$

2. Perturbation δA
$$(A + \delta A)(x + \delta x) = b$$
We have

$$\frac{\|\delta x\|}{\|x\|} \le \frac{\operatorname{Cond}(A)\, \frac{\|\delta A\|}{\|A\|}}{1 - \operatorname{Cond}(A)\, \frac{\|\delta A\|}{\|A\|}}$$

where $\|\delta A\|\,\|A^{-1}\| < 1$.
3. Perturbations of δA and δb
$$(A + \delta A)(x + \delta x) = b + \delta b$$

We have

$$\frac{\|\delta x\|}{\|x\|} \le \frac{\operatorname{Cond}(A)}{1 - \operatorname{Cond}(A)\, \frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|} \right)$$

where $\|\delta A\|\,\|A^{-1}\| < 1$.
$$\operatorname{Cond}(A) = \|A\|\,\|A^{-1}\| \ge \|A A^{-1}\| = \|I\| = 1$$

When Cond(A) is close to 1, we say the system is well-conditioned. When Cond(A) ≫ 1, we say the system is ill-conditioned. An ill-conditioned system can produce a large error in the solution.
Eg. 3 The n-th order Hilbert matrix

$$H_n = \begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \cdots & \frac{1}{n+1} \\ \vdots & \vdots & \vdots & & \vdots \\ \frac{1}{n-1} & \frac{1}{n} & \frac{1}{n+1} & \cdots & \frac{1}{2n-2} \\ \frac{1}{n} & \frac{1}{n+1} & \frac{1}{n+2} & \cdots & \frac{1}{2n-1} \end{bmatrix}$$

For n = 2,

$$H_2^{-1} = \begin{bmatrix} 4 & -6 \\ -6 & 12 \end{bmatrix}, \qquad |\lambda I - H_2^{-1}| = \begin{vmatrix} \lambda - 4 & 6 \\ 6 & \lambda - 12 \end{vmatrix} = \lambda^2 - 16\lambda + 12 = 0, \quad \lambda = 8 \pm 2\sqrt{13}$$

Since $\|H_2\|_1 = \frac{3}{2}$ and $\|H_2^{-1}\|_1 = 18$, we have $\operatorname{cond}_1(H_2) = \frac{3}{2} \times 18 = 27$.
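The cond₁(H₂) = 27 computation can be reproduced exactly with rational arithmetic; this sketch hard-codes the 2x2 inverse formula (variable names are ours):

```python
from fractions import Fraction as F

def norm1(M):
    """Column sum norm ||M||_1 = max_j sum_i |m_ij|."""
    n = len(M)
    return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))

H2 = [[F(1), F(1, 2)], [F(1, 2), F(1, 3)]]
det = H2[0][0] * H2[1][1] - H2[0][1] * H2[1][0]            # det = 1/12
H2inv = [[ H2[1][1] / det, -H2[0][1] / det],
         [-H2[1][0] / det,  H2[0][0] / det]]               # [[4, -6], [-6, 12]]
cond1 = norm1(H2) * norm1(H2inv)                            # (3/2) * 18
print(cond1)  # 27
```

Repeating the experiment for larger n shows cond₁(H_n) growing explosively, which is why Hilbert matrices are the standard example of ill-conditioning.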