
Chapter 3: Solving Systems of Linear Equations using Gaussian Elimination


Math 251: Numerical Computing
Dr. Sophie Moufawad

We consider the problem of computing the solution of a system of $n$ linear equations in $n$ unknowns. The scalar and matrix forms of the system $Ax = b$ are given by

$$
\begin{cases}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n = b_1\\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,n}x_n = b_2\\
\qquad\vdots\\
a_{n,1}x_1 + a_{n,2}x_2 + \cdots + a_{n,n}x_n = b_n
\end{cases}
\quad\text{and}\quad
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\
a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix}
$$

Theorem 1. The following statements are equivalent:

• The system $Ax = b$ has a unique solution.

• The matrix $A$ is invertible (there exists a matrix $A^{-1}$ such that $A^{-1}A = I$).

• The determinant of $A$ is nonzero ($\det(A) \neq 0$).

Assuming that the system $Ax = b$ has a unique solution, we would like to find it. There are several methods for solving such a system. We will consider Gaussian elimination, where the system is reduced to an upper triangular system $Ux = c$:

$$
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\
a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix}
\iff
\begin{bmatrix}
u_{1,1} & u_{1,2} & \cdots & u_{1,n}\\
0 & u_{2,2} & \cdots & u_{2,n}\\
\vdots & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & u_{n,n}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} c_1\\ c_2\\ \vdots\\ c_n \end{bmatrix}
$$

Note that the upper triangular system $Ux = c$ is invertible if and only if

$$\det(U) = u_{1,1}\,u_{2,2}\cdots u_{n,n} \neq 0,$$

i.e. all the diagonal entries of $U$ are nonzero. Then, after applying Gaussian elimination on $Ax = b$ and transforming it into $Ux = c$, we can find $x$ by applying backward substitution.

1 Backward Substitution

Backward substitution consists of solving an invertible upper triangular system by computing the entries of the vector $x$ in decreasing order ($x_n, \ldots, x_1$).

Although matrices are two-dimensional, computer storage is usually one-dimensional. As a result, matrices have to be mapped to an array. There are many ways of mapping a matrix into an array. For dense matrices, the elements are stored either column-wise (as in MATLAB) or row-wise (as in C and C++). Thus, we will consider row-wise and column-wise versions of backward substitution.

Example 1. Solve $Ux = c$:

$$
\begin{bmatrix} 1 & 2 & 3 & 4\\ 0 & 2 & 5 & 6\\ 0 & 0 & 7 & 8\\ 0 & 0 & 0 & 9 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 17\\ 19\\ 15\\ 9 \end{bmatrix}
\iff
\begin{cases}
1x_1 + 2x_2 + 3x_3 + 4x_4 = 17 = c_1\\
0x_1 + 2x_2 + 5x_3 + 6x_4 = 19 = c_2\\
0x_1 + 0x_2 + 7x_3 + 8x_4 = 15 = c_3\\
0x_1 + 0x_2 + 0x_3 + 9x_4 = 9 = c_4
\end{cases}
$$

1. Row-wise version: access the matrix $U$ row-wise.

• $x_4 = \dfrac{c_4}{U_{4,4}} = 9/9 = 1$

• $x_3 = \dfrac{c_3 - U_{3,4}x_4}{U_{3,3}} = \dfrac{15 - 8\cdot 1}{7} = 1$

• $x_2 = \dfrac{c_2 - U_{2,4}x_4 - U_{2,3}x_3}{U_{2,2}} = \dfrac{19 - 6 - 5}{2} = 4$

• $x_1 = \dfrac{c_1 - U_{1,4}x_4 - U_{1,3}x_3 - U_{1,2}x_2}{U_{1,1}} = \dfrac{17 - 4 - 3 - 8}{1} = 2$

2. Column-wise version: access the matrix $U$ column-wise. After each $x_j$ is computed, the entries $c_1, \ldots, c_{j-1}$ of the right-hand side are updated and the system shrinks by one:

• $x_4 = \dfrac{c_4}{U_{4,4}} = 9/9 = 1$, then $c_1 = 17 - 4x_4 = 13$, $c_2 = 19 - 6x_4 = 13$, $c_3 = 15 - 8x_4 = 7$, leaving

$$\begin{bmatrix} 1 & 2 & 3\\ 0 & 2 & 5\\ 0 & 0 & 7 \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix} = \begin{bmatrix} 13\\ 13\\ 7 \end{bmatrix}$$

• $x_3 = \dfrac{c_3}{U_{3,3}} = 7/7 = 1$, then $c_1 = 13 - 3x_3 = 10$, $c_2 = 13 - 5x_3 = 8$, leaving

$$\begin{bmatrix} 1 & 2\\ 0 & 2 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 10\\ 8 \end{bmatrix}$$

• $x_2 = \dfrac{c_2}{U_{2,2}} = 8/2 = 4$, then $c_1 = 10 - 2x_2 = 2$, leaving $\begin{bmatrix} 1 \end{bmatrix}\begin{bmatrix} x_1 \end{bmatrix} = \begin{bmatrix} 2 \end{bmatrix}$

• $x_1 = \dfrac{c_1}{U_{1,1}} = 2/1 = 2$
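Only the column-wise versions are listed as algorithms below; a minimal MATLAB sketch of the row-wise version (the name RowBackSub is hypothetical) could look as follows:

function x = RowBackSub(U, c)
% Row-wise backward substitution: row i uses the already-computed
% entries x(i+1:n), then divides by the diagonal entry U(i,i).
n = length(c);
x = zeros(n,1);
for i = n:-1:1
    x(i) = (c(i) - U(i,i+1:n)*x(i+1:n))/U(i,i);   % empty product is 0 when i = n
end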
The column-wise backward substitution is given in Algorithm 1 (or, equivalently, Algorithm 2). As for the flop count, it can easily be shown that at iteration $j$, $1 + 2(j-1)$ flops are performed (one division, plus $j-1$ multiplications and $j-1$ subtractions for the right-hand-side update). Thus the total number of flops is

$$\sum_{j=1}^{n}(2j - 1) = 2\sum_{j=1}^{n} j - \sum_{j=1}^{n} 1 = 2\,\frac{n(n+1)}{2} - n = n^2.$$

Algorithm 1 Column-wise Backward Substitution

function x = ColBackSub(U, c)
n = length(U);
for j = n:-1:1
    x(j) = c(j)/U(j,j);
    for i = 1:j-1
        c(i) = c(i) - U(i,j)*x(j);
    end
end

Algorithm 2 Column-wise Backward Substitution

function x = ColBackSub2(U, c)
n = length(U);
for j = n:-1:1
    x(j) = c(j)/U(j,j);
    c(1:j-1) = c(1:j-1) - U(1:j-1,j)*x(j);
end
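As a quick check, either version applied to the system of Example 1 reproduces the hand computation (a sketch, assuming the functions above are saved on the MATLAB path):

U = [1 2 3 4; 0 2 5 6; 0 0 7 8; 0 0 0 9];
c = [17; 19; 15; 9];
x = ColBackSub(U, c)   % expected: 2, 4, 1, 1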

2 Naive Gauss Reduction

As discussed earlier, the aim of Gaussian elimination is to transform the system into an upper triangular one so that it can be solved using backward substitution. We start by discussing the naive Gaussian reduction, where the rows of $A$ are not permuted throughout the reduction, i.e. at the $i$th reduction the pivot row is the $i$th row, and the aim is to zero out all the entries below the diagonal.

Example 2. Apply naive Gaussian reduction to the system (in scalar form, matrix form, and as an augmented matrix):

$$
\begin{cases}
x_1 - x_2 + 2x_3 + x_4 = 1\\
3x_1 + 2x_2 + x_3 + 4x_4 = 1\\
5x_1 + 8x_2 + 6x_3 + 3x_4 = 1\\
4x_1 + 2x_2 + 5x_3 + 3x_4 = -1
\end{cases}
\;\text{i.e.}\;
\begin{bmatrix} 1 & -1 & 2 & 1\\ 3 & 2 & 1 & 4\\ 5 & 8 & 6 & 3\\ 4 & 2 & 5 & 3 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 1\\ 1\\ 1\\ -1 \end{bmatrix},
\quad
\left[\begin{array}{cccc|c} 1 & -1 & 2 & 1 & 1\\ 3 & 2 & 1 & 4 & 1\\ 5 & 8 & 6 & 3 & 1\\ 4 & 2 & 5 & 3 & -1 \end{array}\right]
$$

At reduction $j$: pivot $= A(j,j)$, $\operatorname{mult}(i) = \dfrac{A(i,j)}{\text{pivot}}$ for $i > j$, and $r_i = r_i - \operatorname{mult}(i)\, r_j$ for $i > j$. The first, second, and third reductions, with the multipliers stored in place of the zeroed entries (shown in italics):

$$
\left[\begin{array}{cccc|c} 1 & -1 & 2 & 1 & 1\\ \mathit{3} & 5 & -5 & 1 & -2\\ \mathit{5} & 13 & -4 & -2 & -4\\ \mathit{4} & 6 & -3 & -1 & -5 \end{array}\right],\quad
\left[\begin{array}{cccc|c} 1 & -1 & 2 & 1 & 1\\ \mathit{3} & 5 & -5 & 1 & -2\\ \mathit{5} & \mathit{13/5} & 9 & -23/5 & 6/5\\ \mathit{4} & \mathit{6/5} & 3 & -11/5 & -13/5 \end{array}\right],\quad
\left[\begin{array}{cccc|c} 1 & -1 & 2 & 1 & 1\\ \mathit{3} & 5 & -5 & 1 & -2\\ \mathit{5} & \mathit{13/5} & 9 & -23/5 & 6/5\\ \mathit{4} & \mathit{6/5} & \mathit{1/3} & -2/3 & -3 \end{array}\right]
$$

Naive Gaussian reduction of the system $Ax = b$ transforms it into the upper triangular system $Ux = c$:

$$
\begin{bmatrix} 1 & -1 & 2 & 1\\ 0 & 5 & -5 & 1\\ 0 & 0 & 9 & -23/5\\ 0 & 0 & 0 & -2/3 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 1\\ -2\\ 6/5\\ -3 \end{bmatrix}
$$

Then, using backward substitution, we solve the system $Ux = c$ and obtain

$$x_4 = \frac{-3}{-2/3} = \frac{9}{2}, \qquad x_3 = \frac{\frac{6}{5} + \frac{23}{5}\cdot\frac{9}{2}}{9} = \frac{73}{30}, \qquad x_2 = \frac{-2 - \frac{9}{2} + 5\cdot\frac{73}{30}}{5} = \frac{17}{15}, \qquad x_1 = 1 - \frac{9}{2} - 2\cdot\frac{73}{30} + \frac{17}{15} = -\frac{217}{30}.$$

Algorithm 3 Column-wise Naive Gauss

function [U, c] = ColNaiveGauss(A, b)
n = length(b);
for k = 1:n-1
    piv = A(k,k);
    for i = k+1:n
        A(i,k) = A(i,k)/piv;
    end
    for j = k+1:n
        for i = k+1:n
            A(i,j) = A(i,j) - A(i,k)*A(k,j);
        end
    end
    b(k+1:n) = b(k+1:n) - A(k+1:n,k)*b(k);
end
U = triu(A); c = b;

At the $k$th reduction we have a total of

$$\sum_{i=k+1}^{n} 1 \;+\; \sum_{j=k+1}^{n}\sum_{i=k+1}^{n} 2 \;+\; \sum_{j=k+1}^{n} 2 \;=\; 3(n-k) + 2(n-k)^2$$

flops ($n-k$ divisions for the multipliers, $2(n-k)^2$ flops for updating $A$, and $2(n-k)$ flops for updating $b$). Then

$$\text{Total flops} = \sum_{k=1}^{n-1}\left[3(n-k) + 2(n-k)^2\right] = \sum_{j=1}^{n-1}\left(3j + 2j^2\right) = 3\,\frac{(n-1)n}{2} + 2\,\frac{(n-1)n(2n-1)}{6},$$

which is $O\!\left(\tfrac{2}{3}n^3\right)$.
Note that instead of storing the zeros below the diagonal of the matrix $A$ throughout the reduction, we store the multipliers: first to save memory, and second because the multipliers define a unit lower triangular matrix $L$ that satisfies the LU decomposition of $A$:

$$
\underbrace{\begin{bmatrix} 1 & -1 & 2 & 1\\ 3 & 2 & 1 & 4\\ 5 & 8 & 6 & 3\\ 4 & 2 & 5 & 3 \end{bmatrix}}_{A}
=
\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 3 & 1 & 0 & 0\\ 5 & 13/5 & 1 & 0\\ 4 & 6/5 & 1/3 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 1 & -1 & 2 & 1\\ 0 & 5 & -5 & 1\\ 0 & 0 & 9 & -23/5\\ 0 & 0 & 0 & -2/3 \end{bmatrix}}_{U}
$$

Note that it is possible to write versions of column-wise Gaussian elimination that take different inputs and outputs, for example an LU version that takes as input $A$ and outputs $L$ and $U$. In MATLAB, L = tril(A,-1) + eye(n).
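A minimal sketch of such an LU version (the name ColNaiveGaussLU is hypothetical; it follows Algorithm 3 but carries no right-hand side) could be:

function [L, U] = ColNaiveGaussLU(A)
% Naive Gaussian reduction storing the multipliers below the diagonal,
% then splitting A into its unit lower and upper triangular parts.
n = length(A);
for k = 1:n-1
    A(k+1:n,k) = A(k+1:n,k)/A(k,k);                           % multipliers
    A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k)*A(k,k+1:n);  % rank-1 update
end
L = tril(A,-1) + eye(n);
U = triu(A);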
Uses of the LU decomposition:

1. $\det(A) = \det(LU) = \det(L)\det(U) = 1 \cdot U_{1,1}\,U_{2,2}\,U_{3,3}\cdots U_{n,n}$.

Example: $\det(A) = \det(L)\det(U) = 1 \cdot 5 \cdot 9 \cdot (-2/3) = -30$.

2. Solving a system with multiple right-hand sides, such as finding the inverse of the matrix $A$ ($AM = I$), which is equivalent to solving

$$A[M_1\; M_2\; M_3\; \cdots\; M_n] = [e_1\; e_2\; e_3\; \cdots\; e_n], \quad\text{i.e. } AM_i = e_i \text{ for } i = 1, \ldots, n,$$

where each $AM_i = e_i \iff LUM_i = e_i$ is solved by the two triangular systems

$$\begin{cases} Ly = e_i & \text{(forward substitution)}\\ UM_i = y & \text{(backward substitution)} \end{cases}$$

Example 3. Find the third column of $A^{-1}$ for the matrix $A$ of Example 2, i.e. find $M_3$ satisfying $AM_3 = e_3$, where $A = LU$.

$$AM_3 = e_3 \iff LUM_3 = e_3 \iff \begin{cases} Ly = e_3\\ UM_3 = y \end{cases}$$

Using forward substitution on

$$\begin{bmatrix} 1 & 0 & 0 & 0\\ 3 & 1 & 0 & 0\\ 5 & 13/5 & 1 & 0\\ 4 & 6/5 & 1/3 & 1 \end{bmatrix}\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix}$$

we get $y_1 = 0$, $y_2 = 0$, $y_3 = 1$, $y_4 = -1/3$. Using backward substitution on

$$\begin{bmatrix} 1 & -1 & 2 & 1\\ 0 & 5 & -5 & 1\\ 0 & 0 & 9 & -23/5\\ 0 & 0 & 0 & -2/3 \end{bmatrix}\begin{bmatrix} m_1\\ m_2\\ m_3\\ m_4 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1\\ -1/3 \end{bmatrix}$$

we get $m_4 = \dfrac{-1/3}{-2/3} = \dfrac{1}{2}$, $m_3 = \dfrac{1 + (23/5)(1/2)}{9} = \dfrac{11}{30}$, $m_2 = \dfrac{-\frac{1}{2} + 5\cdot\frac{11}{30}}{5} = \dfrac{4}{15}$, $m_1 = -\dfrac{1}{2} - 2\cdot\dfrac{11}{30} + \dfrac{4}{15} = -\dfrac{29}{30}$.

$$\Longrightarrow M_3 = \left[-\frac{29}{30};\; \frac{4}{15};\; \frac{11}{30};\; \frac{1}{2}\right]. \quad \text{Check that } AM_3 = e_3.$$
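Forward substitution on a unit lower triangular $L$ is used above but is not listed as a general algorithm; a column-wise sketch in the spirit of Algorithm 2 (the name ColForwardSub is hypothetical) could be:

function y = ColForwardSub(L, b)
% Column-wise forward substitution for a unit lower triangular L:
% the entries are computed in increasing order y(1), ..., y(n).
n = length(b);
y = zeros(n,1);
for j = 1:n
    y(j) = b(j);                            % L(j,j) = 1, so no division
    b(j+1:n) = b(j+1:n) - L(j+1:n,j)*y(j);  % update the remaining rhs
end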

2.1 Principal Minor Property

Naive Gaussian elimination cannot be applied to every matrix whose determinant is nonzero. For example, after the first reduction of the matrix

$$A = \begin{bmatrix} 4 & 2 & 3\\ 2 & 1 & 2\\ 1 & 2 & 4 \end{bmatrix}, \quad \det(A) = -3 \neq 0 \text{ (invertible)},$$

we get

$$\begin{bmatrix} 4 & 2 & 3\\ \mathit{1/2} & 0 & 1/2\\ \mathit{1/4} & 3/2 & 13/4 \end{bmatrix}.$$

At the second reduction, the pivot is zero, so we cannot proceed. The reason is that $A(1{:}2, 1{:}2)$ is not invertible. Hence, to be able to apply naive Gaussian elimination, in addition to having a nonzero determinant, the matrix must satisfy an additional property called the principal minor property.

Definition 1. A square matrix $A$ has the principal minor property if all its principal submatrices $A_i$, $i = 1, \ldots, n$, are invertible, where $A_i$ is the $i \times i$ submatrix of $A$ starting with $A(1,1) = a_{1,1}$.

If the matrix has the principal minor property, then naive Gaussian elimination can be applied. But checking that each of the submatrices is invertible is expensive. Thus, we will introduce another version of Gaussian elimination that does not depend on the principal minor property. Before that, we will look at special types of matrices that have the principal minor property.

2.2 Strictly Diagonally Dominant Matrices

Definition 2. A square matrix $A$ of size $n \times n$ is strictly diagonally dominant if, in every row, the magnitude of the diagonal entry is larger than the sum of the magnitudes of all the other (non-diagonal) entries in that row, i.e.

$$|A(i,i)| > \sum_{j=1,\, j\neq i}^{n} |A(i,j)|, \quad \forall i = 1, \ldots, n.$$

If $|A(i,i)| \geq \sum_{j=1,\, j\neq i}^{n} |A(i,j)|$, then $A$ is diagonally dominant.

It can be shown that strictly diagonally dominant matrices satisfy the following:

• Strictly diagonally dominant matrices are invertible.

• All the principal submatrices of a strictly diagonally dominant matrix are strictly diagonally dominant.
⟹ strictly diagonally dominant matrices have the principal minor property.
⟹ naive Gaussian elimination can be used.

Note that certain kinds of (non-strictly) diagonally dominant matrices are also invertible ⟹ Gaussian elimination can be used for them as well.
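Definition 2 translates directly into a short MATLAB check (a sketch; the helper name isSDD is hypothetical):

function tf = isSDD(A)
% True if every diagonal entry strictly dominates the sum of the
% magnitudes of the off-diagonal entries in its row.
d = abs(diag(A));
offdiag = sum(abs(A), 2) - d;   % row sums without the diagonal
tf = all(d > offdiag);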

2.2.1 Tridiagonal Matrices

We consider tridiagonal matrices that are strictly diagonally dominant, and adapt the naive Gauss, backward, and forward substitution algorithms accordingly. A tridiagonal matrix has the form

$$
\begin{bmatrix}
d_1 & u_1 & 0 & \cdots & \cdots & 0\\
l_1 & d_2 & u_2 & \ddots & & \vdots\\
0 & l_2 & d_3 & u_3 & \ddots & \vdots\\
\vdots & \ddots & \ddots & \ddots & \ddots & 0\\
\vdots & & \ddots & l_{n-2} & d_{n-1} & u_{n-1}\\
0 & \cdots & \cdots & 0 & l_{n-1} & d_n
\end{bmatrix}
$$

Example 4. For the given $4 \times 4$ matrix $A$,

1. Show that the matrix has the principal minor property.

$$A = \begin{bmatrix} 2 & 1 & 0 & 0\\ 1 & 3 & 1 & 0\\ 0 & 1 & 4 & 2\\ 0 & 0 & 1 & 2 \end{bmatrix}$$

$|A(1,1)| = 2 > 1 + 0 + 0$, $|A(2,2)| = 3 > 1 + 1 + 0$, $|A(3,3)| = 4 > 1 + 2 + 0$, $|A(4,4)| = 2 > 1 + 0 + 0$. So $A$ is strictly diagonally dominant ⟹ $A$ has the principal minor property.

2. Apply naive Gauss elimination on $A$, extract $L$ and $U$, and find $\det(A)$.

The first, second, and third reductions (multipliers in italics):

$$
\begin{bmatrix} 2 & 1 & 0 & 0\\ \mathit{1/2} & 5/2 & 1 & 0\\ 0 & 1 & 4 & 2\\ 0 & 0 & 1 & 2 \end{bmatrix},\quad
\begin{bmatrix} 2 & 1 & 0 & 0\\ \mathit{1/2} & 5/2 & 1 & 0\\ 0 & \mathit{2/5} & 18/5 & 2\\ 0 & 0 & 1 & 2 \end{bmatrix},\quad
\begin{bmatrix} 2 & 1 & 0 & 0\\ \mathit{1/2} & 5/2 & 1 & 0\\ 0 & \mathit{2/5} & 18/5 & 2\\ 0 & 0 & \mathit{5/18} & 13/9 \end{bmatrix}
$$

$$L = \begin{bmatrix} 1 & 0 & 0 & 0\\ 1/2 & 1 & 0 & 0\\ 0 & 2/5 & 1 & 0\\ 0 & 0 & 5/18 & 1 \end{bmatrix},\quad
U = \begin{bmatrix} 2 & 1 & 0 & 0\\ 0 & 5/2 & 1 & 0\\ 0 & 0 & 18/5 & 2\\ 0 & 0 & 0 & 13/9 \end{bmatrix},\quad
\det(A) = \det(L)\det(U) = 1\cdot\left(2 \cdot \frac{5}{2} \cdot \frac{18}{5} \cdot \frac{13}{9}\right) = 26.$$

3. Adapt the naive Gauss algorithm to tridiagonal matrices using the least number of flops; the algorithm should output the $L$ and $U$ matrices. Each of the $n-1$ reductions needs one division, one multiplication, and one subtraction, so Total flops $= 3(n-1) = 3n - 3$.

Algorithm 4 TridNaiveGauss

function [L, U] = TridNaiveGauss(A)
n = length(A);
for k = 1:n-1
    piv = A(k,k);
    A(k+1,k) = A(k+1,k)/piv;
    A(k+1,k+1) = A(k+1,k+1) - A(k+1,k)*A(k,k+1);
end
L = tril(A,-1) + eye(n);
U = triu(A);

4. Find the third column of $A^{-1}$.

$$AM_3 = e_3 \iff LUM_3 = e_3 \iff \begin{cases} Ly = e_3\\ UM_3 = y \end{cases}$$

Using forward substitution on

$$\begin{bmatrix} 1 & 0 & 0 & 0\\ 1/2 & 1 & 0 & 0\\ 0 & 2/5 & 1 & 0\\ 0 & 0 & 5/18 & 1 \end{bmatrix}\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix}$$

we get $y_1 = 0$, $y_2 = 0$, $y_3 = 1$, $y_4 = -5/18$. Using backward substitution on

$$\begin{bmatrix} 2 & 1 & 0 & 0\\ 0 & 5/2 & 1 & 0\\ 0 & 0 & 18/5 & 2\\ 0 & 0 & 0 & 13/9 \end{bmatrix}\begin{bmatrix} m_1\\ m_2\\ m_3\\ m_4 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1\\ -5/18 \end{bmatrix}$$

we get $m_4 = \dfrac{-5/18}{13/9} = -\dfrac{5}{26}$, $m_3 = \dfrac{1 - 2(-5/26)}{18/5} = \dfrac{5}{13}$, $m_2 = \dfrac{0 - \frac{5}{13}}{5/2} = -\dfrac{2}{13}$, $m_1 = \dfrac{0 - \left(-\frac{2}{13}\right)}{2} = \dfrac{1}{13}$.

$$\Longrightarrow M_3 = \left[\frac{1}{13};\; -\frac{2}{13};\; \frac{5}{13};\; -\frac{5}{26}\right]$$

5. Adapt backward substitution and forward substitution to tridiagonal matrices using the least number of flops.

Algorithm 5 Tridiagonal Forward Substitution

function [y] = TridForwardSub(L, b)
n = length(b); y(1) = b(1);
for k = 2:n
    y(k) = b(k) - L(k,k-1)*y(k-1);
end

Algorithm 6 Tridiagonal Backward Substitution

function [x] = TridBackwardSub(U, y)
n = length(y); x(n) = y(n)/U(n,n);
for k = n-1:-1:1
    x(k) = (y(k) - U(k,k+1)*x(k+1))/U(k,k);
end
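Putting the three tridiagonal routines together reproduces part 4 of Example 4 (a sketch, assuming the functions above are saved on the MATLAB path):

A = [2 1 0 0; 1 3 1 0; 0 1 4 2; 0 0 1 2];
e3 = [0; 0; 1; 0];
[L, U] = TridNaiveGauss(A);
y = TridForwardSub(L, e3);
M3 = TridBackwardSub(U, y)   % expected: 1/13, -2/13, 5/13, -5/26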

2.2.2 Lower Quadridiagonal Matrices

We consider lower quadridiagonal matrices that are strictly diagonally dominant, and adapt the naive Gauss, backward, and forward substitution algorithms accordingly. A lower quadridiagonal matrix has a diagonal $d$, one superdiagonal $u$, a subdiagonal $l$, and a second subdiagonal $ll$:

$$
\begin{bmatrix}
d_1 & u_1 & 0 & \cdots & \cdots & 0\\
l_1 & d_2 & u_2 & \ddots & & \vdots\\
ll_1 & l_2 & d_3 & u_3 & \ddots & \vdots\\
0 & \ddots & \ddots & \ddots & \ddots & 0\\
\vdots & \ddots & ll_{n-3} & l_{n-2} & d_{n-1} & u_{n-1}\\
0 & \cdots & 0 & ll_{n-2} & l_{n-1} & d_n
\end{bmatrix}
$$
Example 5. For the given $4 \times 4$ matrix $A$,

1. Show that the matrix has the principal minor property.

$$A = \begin{bmatrix} 4 & 1 & 0 & 0\\ 1 & 4 & 1 & 0\\ 1 & 1 & 4 & 1\\ 0 & 1 & 1 & 4 \end{bmatrix}$$

$|A(1,1)| = 4 > 1 + 0 + 0$, $|A(2,2)| = 4 > 1 + 1 + 0$, $|A(3,3)| = 4 > 1 + 1 + 1$, $|A(4,4)| = 4 > 1 + 1 + 0$. So $A$ is strictly diagonally dominant ⟹ $A$ has the principal minor property.

2. Apply naive Gauss elimination on $A$, extract $L$ and $U$, and find $\det(A)$.

The first, second, and third reductions (multipliers in italics):

$$
\begin{bmatrix} 4 & 1 & 0 & 0\\ \mathit{1/4} & 15/4 & 1 & 0\\ \mathit{1/4} & 3/4 & 4 & 1\\ 0 & 1 & 1 & 4 \end{bmatrix},\quad
\begin{bmatrix} 4 & 1 & 0 & 0\\ \mathit{1/4} & 15/4 & 1 & 0\\ \mathit{1/4} & \mathit{1/5} & 19/5 & 1\\ 0 & \mathit{4/15} & 11/15 & 4 \end{bmatrix},\quad
\begin{bmatrix} 4 & 1 & 0 & 0\\ \mathit{1/4} & 15/4 & 1 & 0\\ \mathit{1/4} & \mathit{1/5} & 19/5 & 1\\ 0 & \mathit{4/15} & \mathit{11/57} & 217/57 \end{bmatrix}
$$

$$L = \begin{bmatrix} 1 & 0 & 0 & 0\\ 1/4 & 1 & 0 & 0\\ 1/4 & 1/5 & 1 & 0\\ 0 & 4/15 & 11/57 & 1 \end{bmatrix},\quad
U = \begin{bmatrix} 4 & 1 & 0 & 0\\ 0 & 15/4 & 1 & 0\\ 0 & 0 & 19/5 & 1\\ 0 & 0 & 0 & 217/57 \end{bmatrix},\quad
\det(A) = \det(U) = 4 \cdot \frac{15}{4} \cdot \frac{19}{5} \cdot \frac{217}{57} = 217.$$

3. Adapt the naive Gauss algorithm to lower quadridiagonal matrices using the least number of flops.

Algorithm 7 QuadriNaiveGauss

function [L, U] = QuadriNaiveGauss(A)
n = length(A);
for k = 1:n-2
    piv = A(k,k);
    A(k+1,k) = A(k+1,k)/piv;
    A(k+2,k) = A(k+2,k)/piv;
    A(k+1,k+1) = A(k+1,k+1) - A(k+1,k)*A(k,k+1);
    A(k+2,k+1) = A(k+2,k+1) - A(k+2,k)*A(k,k+1);
end
A(n,n-1) = A(n,n-1)/A(n-1,n-1);
A(n,n) = A(n,n) - A(n,n-1)*A(n-1,n);
L = tril(A,-1) + eye(n); U = triu(A);

Total flops $= 6(n-2) + 3$: each of the first $n-2$ reductions costs 2 divisions, 2 multiplications, and 2 subtractions, and the final reduction costs 3 flops.
4. Solve $Ly = [5; 6; 7; 6]$.

$$\begin{bmatrix} 1 & 0 & 0 & 0\\ 1/4 & 1 & 0 & 0\\ 1/4 & 1/5 & 1 & 0\\ 0 & 4/15 & 11/57 & 1 \end{bmatrix}\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix} = \begin{bmatrix} 5\\ 6\\ 7\\ 6 \end{bmatrix},$$

where $y_1 = 5$, $y_2 = 6 - \frac{1}{4}\cdot 5 = \frac{19}{4}$, $y_3 = 7 - \frac{1}{4}\cdot 5 - \frac{1}{5}\cdot\frac{19}{4} = \frac{24}{5}$, $y_4 = 6 - \frac{4}{15}\cdot\frac{19}{4} - \frac{11}{57}\cdot\frac{24}{5} = \frac{217}{57}$.

$$\Longrightarrow y = \left[5;\; \frac{19}{4};\; \frac{24}{5};\; \frac{217}{57}\right]$$

5. Adapt the backward and forward substitution algorithms to quadridiagonal matrices using the least number of flops.
Algorithm 8 QuadriBackwardSubstitution

function [x] = QuadriBackwardSubstitution(U, y)
% U is upper bidiagonal (diagonal + one superdiagonal) after the reduction
n = length(y); x(n) = y(n)/U(n,n);
for k = n-1:-1:1
    x(k) = (y(k) - U(k,k+1)*x(k+1))/U(k,k);
end

Algorithm 9 QuadriForwardSubstitution

function [y] = QuadriForwardSubstitution(L, c)
n = length(c);
y(1) = c(1);
y(2) = c(2) - L(2,1)*y(1);
for k = 3:n
    y(k) = c(k) - L(k,k-2)*y(k-2) - L(k,k-1)*y(k-1);
end

3 Partial Pivoting

If the matrix has the special property of being strictly diagonally dominant, then naive Gaussian elimination can be used. Otherwise, we have to check that the $n$ principal submatrices are invertible, i.e. we have to compute the determinants of these matrices. To avoid this tedious work, and to avoid obtaining a zero pivot during naive Gaussian elimination, the partial pivoting strategy is used: at each reduction, the best pivot is chosen based on some criterion.

Moreover, even if the matrix satisfies the principal minor property, naive Gaussian elimination might fail due to numerical errors in floating point systems.

Counter example: Solve

$$\begin{bmatrix} \epsilon & 1\\ 3 & 1 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 2\\ 1 \end{bmatrix},$$

where $\epsilon$ is a very small number and $\det(A) = \epsilon - 3 \neq 0$. The exact solution satisfies $x_2 = 2 - \epsilon x_1$, so $x_1 = \dfrac{1}{\epsilon - 3} \approx -\dfrac{1}{3}$ and $x_2 \approx 2$.

After applying naive Gaussian elimination, the augmented matrix becomes

$$\left[\begin{array}{cc|c} \epsilon & 1 & 2\\ \mathit{3/\epsilon} & 1 - 3/\epsilon & 1 - 6/\epsilon \end{array}\right],$$

and the solution by backward substitution in floating point arithmetic is $x_2 = \dfrac{1 - 6/\epsilon}{1 - 3/\epsilon} \approx 2$ and $x_1 = \dfrac{2 - x_2}{\epsilon} \approx 0$. To avoid this, we permute the rows and then apply Gaussian elimination:

$$\left[\begin{array}{cc|c} 3 & 1 & 1\\ \mathit{\epsilon/3} & 1 - \epsilon/3 & 2 - \epsilon/3 \end{array}\right],$$

and we obtain the exact solution, $x_2 = \dfrac{2 - \epsilon/3}{1 - \epsilon/3} \approx 2$ and $x_1 = \dfrac{1 - x_2}{3} \approx -\dfrac{1}{3}$.

Even though a matrix may satisfy the principal minor property, so that naive Gaussian elimination is applicable, we might still get numerical errors in a floating point system due to absorption and to dividing by small numbers when computing the multipliers. There are different types of pivoting to avoid such problems. We will only consider partial pivoting, where at each reduction the pivot is picked from the corresponding column based on some criterion, i.e. we perform row permutations only. We will consider both unscaled and scaled partial pivoting.

3.1 Unscaled Partial Pivoting

At the $k$th reduction of the unscaled partial pivoting Gaussian elimination of an $n \times n$ matrix, the pivot is chosen to be the entry in the $k$th column and rows $k{:}n$ with the largest magnitude. Then, the row containing the pivot is permuted with the $k$th row.

However, permuting two rows of a matrix in MATLAB requires a lot of data movement, given that the matrix is stored column-wise. To avoid this expensive data movement, we introduce an index vector $IV$. At first, $IV$ is set to $[1\; 2\; \ldots\; n]$. Then at each reduction, instead of permuting the rows of the matrix $A$, the corresponding permutation is performed only on $IV$. At the end of the elimination, $IV$ acts as a vector of pointers to the memory locations of the pivot rows in the correct order. Thus $IV(k)$ refers to the chosen pivot row of matrix $A$ at the $k$th reduction.
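A minimal MATLAB sketch of this strategy with an index vector (the name UnscaledPivotGauss is hypothetical; it returns the reduced rows already gathered in pivot order) could be:

function [U, c, IV] = UnscaledPivotGauss(A, b)
% Unscaled partial pivoting Gaussian elimination; the index vector IV
% is permuted instead of physically permuting the rows of A.
n = length(b);
IV = (1:n)';
for k = 1:n-1
    [~, p] = max(abs(A(IV(k:n), k)));  % row with largest |entry| in column k
    p = p + k - 1;
    IV([k p]) = IV([p k]);             % permute the index vector only
    for i = k+1:n
        m = A(IV(i),k)/A(IV(k),k);     % multiplier
        A(IV(i),k:n) = A(IV(i),k:n) - m*A(IV(k),k:n);
        b(IV(i)) = b(IV(i)) - m*b(IV(k));
    end
end
U = triu(A(IV,:));
c = b(IV);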

Example 6. Given the following system of linear equations:

$$\begin{cases}
3x_1 - 13x_2 + 9x_3 + 3x_4 = -19\\
-6x_1 + 4x_2 + x_3 - 18x_4 = -34\\
6x_1 - 2x_2 + 2x_3 + 4x_4 = 16\\
12x_1 - 8x_2 + 6x_3 + 10x_4 = 26
\end{cases}$$

1. Solve the above system $Ax = b$ using unscaled partial pivoting Gaussian reduction.

$$[A|b] = \left[\begin{array}{cccc|c} 3 & -13 & 9 & 3 & -19\\ -6 & 4 & 1 & -18 & -34\\ 6 & -2 & 2 & 4 & 16\\ 12 & -8 & 6 & 10 & 26 \end{array}\right], \quad IV = [1; 2; 3; 4]$$

First reduction: $\max\{|3|, |-6|, |6|, |12|\} = 12 \Rightarrow$ pivot row $= 4$, pivot $= 12$, $IV = [4; 2; 3; 1]$. The multipliers are $3/12 = 1/4$, $-6/12 = -1/2$, and $6/12 = 1/2$ (stored in italics):

$$\left[\begin{array}{cccc|c} \mathit{1/4} & -11 & 15/2 & 1/2 & -51/2\\ \mathit{-1/2} & 0 & 4 & -13 & -21\\ \mathit{1/2} & 2 & -1 & -1 & 3\\ 12 & -8 & 6 & 10 & 26 \end{array}\right]$$

Second reduction: $\max\{|-11|, |0|, |2|\} = 11 \Rightarrow$ pivot row $= 1$, pivot $= -11$, $IV = [4; 1; 3; 2]$. The multipliers are $0/(-11) = 0$ and $2/(-11) = -2/11$:

$$\left[\begin{array}{cccc|c} \mathit{1/4} & -11 & 15/2 & 1/2 & -51/2\\ \mathit{-1/2} & \mathit{0} & 4 & -13 & -21\\ \mathit{1/2} & \mathit{-2/11} & 4/11 & -10/11 & -18/11\\ 12 & -8 & 6 & 10 & 26 \end{array}\right]$$

Third reduction: $\max\{|4|, |4/11|\} = 4 \Rightarrow$ pivot row $= 2$, pivot $= 4$, $IV = [4; 1; 2; 3]$. The multiplier is $(4/11)/4 = 1/11$:

$$\left[\begin{array}{cccc|c} \mathit{1/4} & -11 & 15/2 & 1/2 & -51/2\\ \mathit{-1/2} & \mathit{0} & 4 & -13 & -21\\ \mathit{1/2} & \mathit{-2/11} & \mathit{1/11} & 3/11 & 3/11\\ 12 & -8 & 6 & 10 & 26 \end{array}\right]$$
Recall that $Ax = b$ is transformed into $Ux = c$ after applying Gaussian elimination on the augmented matrix $[A|b]$. Let $M = A(IV,:) = [A(IV(1),:);\; A(IV(2),:);\; A(IV(3),:);\; A(IV(4),:)]$; then $U$ is the upper triangular part of $M$ and $c = b(IV)$, i.e.

$$[U|c] = \left[\begin{array}{cccc|c} 12 & -8 & 6 & 10 & 26\\ 0 & -11 & 15/2 & 1/2 & -51/2\\ 0 & 0 & 4 & -13 & -21\\ 0 & 0 & 0 & 3/11 & 3/11 \end{array}\right]$$

Backward substitution then gives

$$x_4 = 1, \quad x_3 = \frac{-21 + 13}{4} = -2, \quad x_2 = \frac{-51/2 - (15/2)(-2) - 1/2}{-11} = 1, \quad x_1 = \frac{26 - (-8)(1) - 6(-2) - 10(1)}{12} = 3.$$
2. Extract the $L$ matrix and check if $A = LU$.

$$L = \begin{bmatrix} 1 & 0 & 0 & 0\\ 1/4 & 1 & 0 & 0\\ -1/2 & 0 & 1 & 0\\ 1/2 & -2/11 & 1/11 & 1 \end{bmatrix},\quad
LU = \begin{bmatrix} 12 & -8 & 6 & 10\\ 3 & -13 & 9 & 3\\ -6 & 4 & 1 & -18\\ 6 & -2 & 2 & 4 \end{bmatrix},\quad
I = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix},\quad
P = \begin{bmatrix} 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}.$$

Note that $LU$ is not equal to $A$ but to the permuted matrix $PA$, where $P = I(IV,:)$. This is called the PLU decomposition of $A$, i.e. $PA = LU$.

3. Find the determinant of $A$, $\det(A)$.

Note that $PA = LU$, thus $\det(PA) = \det(LU) = \det(L)\det(U) = \det(U) = \det(P)\det(A)$, and $\det(A) = \det(U)/\det(P)$. Since $P$ is a permuted identity matrix, it can be shown that $\det(P) = (-1)^s$, where $s$ is the number of permutations performed in the Gaussian reduction. In the above reduction the $IV$ vector was permuted 3 times $\Rightarrow s = 3$. Thus

$$\det(A) = \det(U)/\det(P) = \frac{12 \cdot (-11) \cdot 4 \cdot (3/11)}{(-1)^3} = 144.$$

4. Find the second column of $A^{-1}$.

Let $M = A^{-1}$. Then

$$AM = I \;\Rightarrow\; PAM = PI = P \;\Rightarrow\; LUM = P \;\Rightarrow\; LUM_2 = P_2 = \begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix} \iff \begin{cases} Ly = P_2\\ UM_2 = y \end{cases}$$

By forward substitution, $y = [0;\; 0;\; 1;\; -1/11]$. By backward substitution, we solve

$$\begin{bmatrix} 12 & -8 & 6 & 10\\ 0 & -11 & 15/2 & 1/2\\ 0 & 0 & 4 & -13\\ 0 & 0 & 0 & 3/11 \end{bmatrix}\begin{bmatrix} m_1\\ m_2\\ m_3\\ m_4 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1\\ -1/11 \end{bmatrix}$$

to get $m_4 = -\dfrac{1}{3}$, $m_3 = \dfrac{1 - 13/3}{4} = -\dfrac{5}{6}$, $m_2 = \dfrac{-(15/2)(-5/6) - (1/2)(-1/3)}{-11} = -\dfrac{7}{12}$, $m_1 = -\dfrac{-8(-7/12) + 6(-5/6) + 10(-1/3)}{12} = \dfrac{11}{36}$.

$$\Longrightarrow M_2 = \left[\frac{11}{36};\; -\frac{7}{12};\; -\frac{5}{6};\; -\frac{1}{3}\right]$$

Counter example: Solve

$$\begin{bmatrix} 3 & C\\ 1 & 1 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} C\\ 3 \end{bmatrix},$$

where $C$ is a very large number and $\det(A) = 3 - C \neq 0$. The exact solution satisfies $x_1 = 3 - x_2$, so $3(3 - x_2) + Cx_2 = C$, hence $x_2 = \dfrac{C - 9}{C - 3} \approx 1$ and $x_1 = \dfrac{2C}{C - 3} \approx 2$.

After applying unscaled partial pivoting Gaussian elimination ($\max\{|3|, |1|\} = 3$, so no permutation occurs), the augmented matrix becomes

$$\left[\begin{array}{cc|c} 3 & C & C\\ \mathit{1/3} & 1 - C/3 & 3 - C/3 \end{array}\right],$$

and the solution by backward substitution in floating point arithmetic is $x_2 = \dfrac{3 - C/3}{1 - C/3} \approx 1$ and $x_1 = \dfrac{C - Cx_2}{3} \approx 0$. If instead we pick the pivot by scaling the entries of the first column by the largest magnitudes in the corresponding rows of $A$, i.e. $\max\{|3|/|C|,\, |1|/|1|\} = 1$, we permute the rows and then apply Gaussian elimination:

$$\left[\begin{array}{cc|c} 1 & 1 & 3\\ \mathit{3} & C - 3 & C - 9 \end{array}\right],$$

and we obtain the exact solution, $x_2 = \dfrac{C - 9}{C - 3} \approx 1$ and $x_1 = 3 - x_2 \approx 2$.

3.2 Scaled Partial Pivoting

At the $k$th reduction of the scaled partial pivoting Gaussian elimination of an $n \times n$ matrix $A$, the pivot is chosen to be the scaled entry in the $k$th column and rows $k{:}n$ with the largest magnitude. Before starting the elimination, we define the vector of scales $S = [s_1; s_2; s_3; \ldots; s_n]$, which contains in its $i$th entry $s_i$ the value of largest magnitude in the $i$th row of $A$, i.e.

$$s_i = \max_{1 \leq j \leq n} |A(i,j)|.$$

Once the pivot is chosen, we proceed exactly as in unscaled partial pivoting Gaussian elimination: we permute the corresponding entries of the index vector $IV$ rather than permuting the rows of $A$.
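Relative to the unscaled version, only the pivot-selection step changes; a sketch of that step in isolation (the helper name ScaledPivotSelect is hypothetical, with the scales S = max(abs(A), [], 2) computed once before the elimination starts):

function IV = ScaledPivotSelect(A, IV, S, k)
% At reduction k, among rows IV(k:n), pick the row whose column-k entry
% has the largest magnitude relative to its row scale, then swap it
% into position k of the index vector.
n = length(IV);
[~, p] = max(abs(A(IV(k:n), k)) ./ S(IV(k:n)));
p = p + k - 1;
IV([k p]) = IV([p k]);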

Example 7. 1. Solve the system $Ax = b$ of Example 6 using scaled partial pivoting Gaussian reduction.

$$[A|b] = \left[\begin{array}{cccc|c} 3 & -13 & 9 & 3 & -19\\ -6 & 4 & 1 & -18 & -34\\ 6 & -2 & 2 & 4 & 16\\ 12 & -8 & 6 & 10 & 26 \end{array}\right], \quad S = [13; 18; 6; 12], \quad IV = [1; 2; 3; 4]$$

First reduction: $\max\{3/13,\, 6/18,\, 6/6,\, 12/12\} = 1 \Rightarrow$ pivot row $= 3$, pivot $= 6$, $IV = [3; 2; 1; 4]$. The multipliers are $1/2$, $-1$, and $2$ (stored in italics):

$$\left[\begin{array}{cccc|c} \mathit{1/2} & -12 & 8 & 1 & -27\\ \mathit{-1} & 2 & 3 & -14 & -18\\ 6 & -2 & 2 & 4 & 16\\ \mathit{2} & -4 & 2 & 2 & -6 \end{array}\right]$$

Second reduction: $\max\{|-12|/13,\, |2|/18,\, |-4|/12\} = 12/13 \Rightarrow$ pivot row $= 1$, pivot $= -12$, $IV = [3; 1; 2; 4]$. The multipliers are $2/(-12) = -1/6$ and $-4/(-12) = 1/3$:

$$\left[\begin{array}{cccc|c} \mathit{1/2} & -12 & 8 & 1 & -27\\ \mathit{-1} & \mathit{-1/6} & 13/3 & -83/6 & -45/2\\ 6 & -2 & 2 & 4 & 16\\ \mathit{2} & \mathit{1/3} & -2/3 & 5/3 & 3 \end{array}\right]$$
Third reduction: $\max\{(13/3)/18,\, (2/3)/12\} = 13/54 \Rightarrow$ pivot row $= 2$, pivot $= 13/3$, $IV = [3; 1; 2; 4]$ (unchanged, since the pivot row is already in position). The multiplier is $(-2/3)/(13/3) = -2/13$:

$$\left[\begin{array}{cccc|c} \mathit{1/2} & -12 & 8 & 1 & -27\\ \mathit{-1} & \mathit{-1/6} & 13/3 & -83/6 & -45/2\\ 6 & -2 & 2 & 4 & 16\\ \mathit{2} & \mathit{1/3} & \mathit{-2/13} & -6/13 & -6/13 \end{array}\right]$$

Recall that $Ax = b$ is transformed into $Ux = c$ after applying Gaussian elimination on the augmented matrix $[A|b]$. Let $M = A(IV,:)$; then $U$ is the upper triangular part of $M$ and $c = b(IV)$, i.e.

$$[U|c] = \left[\begin{array}{cccc|c} 6 & -2 & 2 & 4 & 16\\ 0 & -12 & 8 & 1 & -27\\ 0 & 0 & 13/3 & -83/6 & -45/2\\ 0 & 0 & 0 & -6/13 & -6/13 \end{array}\right]$$

Backward substitution then gives

$$x_4 = 1, \quad x_3 = \frac{-45/2 + 83/6}{13/3} = -2, \quad x_2 = \frac{-27 - 1 - 8(-2)}{-12} = 1, \quad x_1 = \frac{16 - (-2)(1) - 2(-2) - 4(1)}{6} = 3.$$
2. Extract the $P$, $L$, $U$ matrices and find $\det(A)$.

$$L = \begin{bmatrix} 1 & 0 & 0 & 0\\ 1/2 & 1 & 0 & 0\\ -1 & -1/6 & 1 & 0\\ 2 & 1/3 & -2/13 & 1 \end{bmatrix},\quad
U = \begin{bmatrix} 6 & -2 & 2 & 4\\ 0 & -12 & 8 & 1\\ 0 & 0 & 13/3 & -83/6\\ 0 & 0 & 0 & -6/13 \end{bmatrix},\quad
P = \begin{bmatrix} 0 & 0 & 1 & 0\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Note that $PA = LU$, thus $\det(PA) = \det(LU) = \det(L)\det(U) = \det(U) = \det(P)\det(A)$, and $\det(A) = \det(U)/\det(P) = \det(U)/(-1)^s$, where $s$ is the number of permutations performed in the Gaussian reduction. In the above reduction the $IV$ vector was permuted 2 times $\Rightarrow s = 2$. Thus

$$\det(A) = \det(U)/\det(P) = \frac{6 \cdot (-12) \cdot (13/3) \cdot (-6/13)}{(-1)^2} = 144.$$

3. Solve the system $Ax = b$ of part 1 using the PLU decomposition $PA = LU$:

$$Ax = b \iff PAx = Pb \iff LUx = Pb \iff \begin{cases} Ly = Pb\\ Ux = y \end{cases}$$

With $Pb = b(IV) = [16; -19; -34; 26]$, forward substitution on

$$\begin{bmatrix} 1 & 0 & 0 & 0\\ 1/2 & 1 & 0 & 0\\ -1 & -1/6 & 1 & 0\\ 2 & 1/3 & -2/13 & 1 \end{bmatrix}\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix} = \begin{bmatrix} 16\\ -19\\ -34\\ 26 \end{bmatrix}$$

gives $y_1 = 16$, $y_2 = -19 - \frac{1}{2}(16) = -27$, $y_3 = -34 + 16 - \frac{27}{6} = -\frac{45}{2}$, $y_4 = 26 - 2(16) + \frac{27}{3} - \frac{2}{13}\cdot\frac{45}{2} = -\frac{6}{13}$.

$$\Longrightarrow y = \left[16;\; -27;\; -\frac{45}{2};\; -\frac{6}{13}\right]$$

Note that $y$ is exactly the vector $c$ extracted in part 1, so backward substitution on $Ux = y$ yields the same solution $x = [3; 1; -2; 1]$.
Summary: Gaussian elimination can be applied to:

1. the system $Ax = b$, i.e. the augmented matrix $[A|b]$, to obtain the upper triangular system $Ux = c$;

2. the matrix $A$ alone, to obtain its PLU decomposition $PA = LU$, where $L$ is a unit lower triangular matrix, $U$ is an upper triangular matrix, and

• $P = I$ in the case of naive Gaussian elimination;

• $P = I(IV,:)$ in the case of scaled and unscaled partial pivoting Gaussian elimination.
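As a closing illustration, both use cases can be exercised in a few MATLAB lines on the system of Example 6 (a sketch; UnscaledPivotGauss is the hypothetical routine sketched in Section 3.1, and ColBackSub is Algorithm 1):

A = [3 -13 9 3; -6 4 1 -18; 6 -2 2 4; 12 -8 6 10];
b = [-19; -34; 16; 26];
[U, c, IV] = UnscaledPivotGauss(A, b);
x = ColBackSub(U, c)   % expected: 3, 1, -2, 1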
