Chapter 1
Systems of Linear Algebraic Equations
1.1 Introduction
When engineering systems are modeled, the mathematical description is frequently developed in
terms of a set of simultaneous algebraic equations. Sometimes these equations are non-linear and
sometimes linear. In this chapter we discuss systems of simultaneous linear equations and describe
numerical methods for the approximate solution of such systems. The solution of a system
of simultaneous linear algebraic equations is probably one of the most important topics in
engineering computation. Problems involving simultaneous linear equations arise in the areas
of elasticity, electric-circuit analysis, heat transfer, vibrations and so on. Also, the numerical
integration of some types of ordinary and partial differential equations may be reduced to the
solution of such a system of equations. It has been estimated, for example, that about 75% of all
scientific problems require the solution of a system of linear equations at one stage or another. It
is therefore important to be able to solve linear problems efficiently and accurately.
Definition 1.1 (Linear Equation)
It is an equation in which the highest exponent of a variable term is no more than one. The graph
of such an equation is a straight line. •
A linear equation in two variables x1 and x2 is an equation that can be written in the form
a1 x1 + a2 x2 = b,
where a1 , a2 and b are real numbers. Note that this is the equation of a straight line in the plane.
For example, the equations
5x1 + 2x2 = 2,    (4/5)x1 + 2x2 = 1,    2x1 − 4x2 = π,
are all linear equations in two variables.
A linear equation in n variables x1 , x2 , . . . , xn is an equation that can be written as
a1 x1 + a2 x2 + · · · + an xn = b,
where a1 , a2 , . . . , an are real numbers called the coefficients of the unknown variables x1 , x2 , . . . , xn ,
and the real number b, the right-hand side of the equation, is called the constant term of the equation.
Definition 1.2 (System of Linear Equations)
A system of linear equations (or linear system) is simply a finite set of linear equations.
For example,
4x1 − 2x2 = 5
3x1 + 2x2 = 4
is a system of two equations in the two variables x1 and x2 , while
2x1 + x2 − 5x3 + 2x4 = 9
4x1 + 3x2 + 2x3 + 4x4 = 3
x1 + 2x2 + 3x3 + 2x4 = 11
is a system of three equations in the four variables x1 , x2 , x3 and x4 .
In order to write a general system of m linear equations in the n variables x1 , . . . , xn , we have
a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
   ⋮        ⋮              ⋮       ⋮                    (1.1)
am1 x1 + am2 x2 + · · · + amn xn = bm
or, in compact form, the system (1.1) can be written as

  n
  Σ  aij xj = bi ,       i = 1, 2, . . . , m.           (1.2)
 j=1
For such a system we seek all possible ordered sets of numbers c1 , . . . , cn which satisfy all m equations
when they are substituted for the variables x1 , x2 , . . . , xn . Any such set {c1 , c2 , . . . , cn } is called a
solution of the system of linear equations (1.1) or (1.2).
1. If there are more equations than unknown variables (m > n), then the system is usually
called overdetermined. Typically, an overdetermined system has no solution.
2. If there are more unknown variables than equations (n > m), then the system is usually
called underdetermined. Typically, an underdetermined system has an infinite number of
solutions. For example, the following system has infinitely many solutions.
x1 + 5x2 = 45
3x2 + 4x3 = 21
3. If there are the same number of equations as unknown variables (m = n), then the system is
usually called a simultaneous system. It has a unique solution if the system satisfies certain
conditions (which we will discuss below). For example, the equations of the system
2x1 + x2 − x3 = 1
x1 − 2x2 + 3x3 = 4
x1 + x2 = 1
are linearly independent and therefore the system has the unique solution x1 = 1, x2 = 0 and x3 = 1. However,
the system
5x1 + x2 + x3 = 4
3x1 − x2 + x3 = −2
x1 + x2 = 3
does not have a unique solution since the equations are not linearly independent; the first
equation is equal to the second equation plus twice the third equation.
Every system of linear equations has either no solution, exactly one solution, or infinitely many
solutions. •
For example, in the case of a system of two equations in two variables, we can have these three
possibilities for the solutions of the linear system. Firstly, the two lines (since the graph of linear
equation is straight line) may be parallel and distinct, in this case there is no solution to the system
because the two lines do not intersect each other at any point. For example, consider the following
system
x1 + x2 = 1
2x1 + 2x2 = 3
The graphs of the two equations (see Figure 1.1(a)) show that the lines are parallel, so
the given system has no solution. This can be shown algebraically simply by multiplying the first
equation of the system by 2, to get a system of the form
2x1 + 2x2 = 2
2x1 + 2x2 = 3
[Figure 1.1: Graphs of the three systems: (a) two parallel lines, no solution; (b) two intersecting
lines, exactly one solution; (c) two coincident lines, infinitely many solutions.]

Secondly, the two lines may intersect at exactly one point. For example, consider the system
x1 − x2 = −1
3x1 − x2 = 3
From the graphs of these two equations (see Figure 1.1(b)), we can see that the lines intersect in
exactly one point, namely (2, 3), and so the system has exactly one solution, x1 = 2, x2 = 3.
To show this algebraically, we substitute x2 = x1 + 1 into the second equation to get 3x1 − x1 − 1 = 3,
or x1 = 2, and using this value of x1 in x2 = x1 + 1 gives x2 = 3.
Finally, the two lines may actually be the same line, and so in this case, every point on the lines
gives a solution to the system, and therefore, there are infinitely many solutions. For example,
consider the following system
x1 + x2 = 1
2x1 + 2x2 = 2
Here, both equations have the same line for their graph, see Figure 1.1(c). So this system has infinitely
many solutions because any point on this line gives a solution to the system, since any solution
of the first equation is also a solution of the second equation. For example, setting x2 = 1 − x1 ,
we may choose x1 = 0, x2 = 1, or x1 = 1, x2 = 0, and so on. •
Note that a system of equations with no solution is said to be inconsistent, and if it has at
least one solution, it is said to be consistent.
The system of linear equations (1.3) can be written as the single matrix equation
a11 a12 · · · a1n     x1     b1
a21 a22 · · · a2n     x2     b2
 ⋮   ⋮        ⋮        ⋮   =  ⋮  .                      (1.4)
an1 an2 · · · ann     xn     bn
If we compute the product of the two matrices on the left-hand side of (1.4), we have
a11 x1 + a12 x2 + · · · + a1n xn     b1
a21 x1 + a22 x2 + · · · + a2n xn     b2
               ⋮                  =  ⋮  .               (1.5)
an1 x1 + an2 x2 + · · · + ann xn     bn
But two matrices are equal if and only if their corresponding elements are equal. Hence the single
matrix equation (1.4) is equivalent to the linear system (1.3). If we define
    a11 a12 · · · a1n          x1          b1
A = a21 a22 · · · a2n ,   x =  x2 ,   b =  b2 ,
     ⋮   ⋮        ⋮             ⋮           ⋮
    an1 an2 · · · ann          xn          bn
the coefficient matrix, the column matrix of unknowns, and the column matrix of constants, re-
spectively, then the system (1.3) can be written very compactly as
Ax = b, (1.6)
which is called the matrix form of the system of linear equations (1.3). The column matrices x
and b are called vectors. If the right-hand side vector b of (1.6) is not zero, then the linear
system (1.6) is called a nonhomogeneous system, and we will find that all the equations must be
independent to obtain a unique solution.
If we append the vector b to the coefficient matrix A as an extra column, then the matrix [A|b] is
called the augmented matrix of the system (1.6). In many instances, it may be found convenient
to operate on the augmented matrix instead of manipulating the equations. It is customary to put
a bar between the last two columns of the augmented matrix to remind us where the last column
comes from; however, the bar is not absolutely necessary. The augmented matrix of a linear
system will play a key role in our methods of solving linear systems.
Let’s take a look at an example. Here is the system of equations
x1 + 2x2 + 3x3 = 10
4x1 + 5x2 + 6x3 = 11
7x1 + 8x2 + 9x3 = 12
The augmented matrix of this system is

[ 1 2 3 | 10 ]
[ 4 5 6 | 11 ]
[ 7 8 9 | 12 ]

The first row consists of all the constants from the first equation, with the coefficient of x1 in
the first column, the coefficient of x2 in the second column, the coefficient of x3 in the third
column and the constant in the final column. The second row is the constants from the second
equation with the same placement, and likewise for the third row. The bar represents where
the equal sign was in the original system of equations and is not always included.
Using MATLAB commands, we can define the augmented matrix as follows:
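The MATLAB commands themselves are not reproduced in this excerpt; as a sketch of the same construction, here is an assumed NumPy equivalent of MATLAB's `Ab = [A b]`:

```python
import numpy as np

# Coefficient matrix A and right-hand side b of the example system above.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
b = np.array([[10.0], [11.0], [12.0]])

# The augmented matrix [A|b] appends b to A as an extra column.
Ab = np.hstack((A, b))
print(Ab)
```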
If all of the constant terms b1 , b2 , . . . , bn on the right-hand sides of the equal signs of the linear
system (1.6) are zero, then the system (1.6) is called a homogeneous system, and it can be written
in more compact form as
Ax = 0. (1.8)
It can be seen by inspection of the homogeneous system (1.8) that one of its solutions is x = 0;
such a solution, in which all of the unknowns are zero, is called the trivial solution or zero solution. For the
general nonhomogeneous linear system there are three possibilities: no solution, one solution, or
infinitely many solutions. For the general homogeneous system, there are only two possibilities:
either the zero solution is the only solution or there are infinitely many solutions (called non-
trivial solutions). Of course, it is usually the non-trivial solutions that are of interest in physical
problems. A non-trivial solution to the homogeneous system can occur only under certain conditions on
the coefficient matrix A.
Two matrices A = (aij ) and B = (bij ) are equal if they are the same size and the corresponding
elements in A and B are equal, that is
A=B if and only if aij = bij ,
for i = 1, 2, . . . , m and j = 1, 2, . . . , n. For example, the following matrices

    1 −1 2           1 −1 z
A = 1  3 2   and B = 1  3 2 ,
    2  4 3           x  y w

are equal if and only if x = 2, y = 4, z = 2 and w = 3. •
Let A = (aij ) and B = (bij ) be m × n matrices; then the sum A + B of two matrices of the same
size is a new matrix C = (cij ) each of whose elements is the sum of the two corresponding elements
in the original matrices, that is,
cij = aij + bij , for i = 1, 2, . . . , m and j = 1, 2, . . . , n.
For example, let

    1 2           4 1
A = 3 4   and B = 5 2 ,

then

1 2     4 1     5 3
3 4  +  5 2  =  8 6  = C.
•
Using MATLAB commands, adding two matrices A and B of the same size results in the answer C,
another matrix of the same size:
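The MATLAB snippet is not shown in this excerpt; a minimal NumPy sketch of the same elementwise sum (MATLAB: `C = A + B`) is:

```python
import numpy as np

# The two matrices from the example above.
A = np.array([[1, 2], [3, 4]])
B = np.array([[4, 1], [5, 2]])

# Elementwise sum: c_ij = a_ij + b_ij.
C = A + B
print(C)  # rows: 5 3 / 8 6
```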
Let A and B be m × n matrices. We write A + (−1)B as A − B and call it the difference of the
two matrices of the same size; it is a new matrix C each of whose elements is the difference of the two
corresponding elements in the original matrices. For example, let

    1 2           4 1
A = 3 4   and B = 5 2 .

Then

1 2     4 1     −3 1
3 4  −  5 2  =  −2 2  = C.

Note that (−1)B = −B is obtained by multiplying each entry of the matrix B by (−1), called the
scalar multiple of the matrix B by −1. The matrix −B is called the negative of the matrix B. •
The multiplication of two matrices is defined only when the number of columns in the first matrix
is equal to the number of rows in the second. If an m × n matrix A is multiplied by an n × p matrix
B, then the product matrix C is an m × p matrix where each term is defined by

       n
cij =  Σ  aik bkj .
      k=1

For example, with the matrices A and B above,

1 2   4 1     4 + 10   1 + 4     14  5
3 4   5 2  =  12 + 20  3 + 8  =  32 11  = C.
Note that even if AB is defined, the product BA may not be defined. Moreover, a simple multiplication
of two square matrices of the same size will show that even if BA is defined, it need not equal AB,
that is, matrices do not commute in general. For example, if

     1 2           2 1
A = −1 3   and B = 0 1 ,

then

      2 3                1 7
AB = −2 2   while BA =  −1 3 .

Thus AB ≠ BA. •
Using MATLAB commands, matrix multiplication has the standard meaning as well. Multiplying
two matrices A and B of sizes m × p and p × n respectively results in the answer C, another matrix,
of size m × n:
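The original MATLAB commands were lost in this excerpt; a NumPy sketch of the same product (MATLAB: `C = A*B`) using the matrices from the example above is:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])   # 2 x 2
B = np.array([[4, 1], [5, 2]])   # 2 x 2

# Matrix product: c_ij = sum over k of a_ik * b_kj.
C = A @ B
print(C)  # rows: 14 5 / 32 11
```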
A matrix A which has the same number of rows and columns, that is, m = n, is called a square
matrix of order n. For example, the matrices A and B above are square matrices because both have
the same number of rows and columns. •
It is also called the zero matrix. It may be either rectangular or square. For example, the following
matrices

    0 0           0 0 0
A = 0 0   and B = 0 0 0 ,
                  0 0 0

are zero matrices. •
It is a square matrix in which the main diagonal elements are equal to 1 and all other elements
are zero, and is defined as follows:

I = (aij ),  where  aij = 1 if i = j,  and  aij = 0 if i ≠ j.
The identity matrix (also called the unit matrix) serves somewhat the same purpose in matrix algebra as
does the number one (unity) in scalar algebra. It is called the identity matrix because multiplication
of a matrix by it results in the same matrix. For a square matrix A of order n, it can be seen that
In A = AIn = A,
and for an m × n matrix B,
Im B = BIn = B.
In MATLAB, identity matrices are created with the eye function, which can take either one or two
input arguments.
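As an assumed NumPy equivalent of MATLAB's `eye`, the following sketch checks the identities I A = A I = A stated above:

```python
import numpy as np

# np.eye(n) plays the role of MATLAB's eye(n).
I3 = np.eye(3)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Multiplying by the identity leaves A unchanged.
assert np.allclose(I3 @ A, A)
assert np.allclose(A @ I3, A)
```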
The transpose of a matrix A is a new matrix formed by interchanging the rows and columns
of the original matrix. If the original matrix A = (aij ) is of order m × n, then the transpose, denoted
AT , is of order n × m, that is,
AT = (aji ), for i = 1, 2, . . . , n and j = 1, 2, . . . , m.
In MATLAB the transpose is obtained with the prime operator:
>> A = [1 2 3; 4 5 6; 7 8 9]; B = A'
Let A be a square matrix of order n. If there exists a matrix B of the same order such that
AB = BA = In ,
then the matrix B is called the inverse of A and is denoted by A−1 . For example, let

    2 3                −1  3/2
A = 2 2   and B = A−1 = 1  −1 .

Then we have
AB = BA = I2 ,
which means that B is an inverse of A. An invertible matrix is also called a nonsingular matrix. •
To find the inverse of the square matrix A we use the author-defined function INVMAT and the
following MATLAB commands:
The MATLAB built-in function inv(A) can also be used to calculate the inverse of a square matrix A if
A is invertible. If the matrix A is not invertible, then the matrix A is called singular. There are
some well-known properties of invertible matrices, which are given as follows.
1. The inverse of an invertible matrix is itself invertible, and (A−1 )−1 = A.
2. A nonzero scalar multiple of an invertible matrix is invertible, and (kA)−1 = (1/k)A−1 for k ≠ 0.
3. Its product with another invertible matrix is invertible, and the inverse of the product is the
product of the inverses in the reverse order. If A and B are invertible matrices of the same
size, then AB is invertible and (AB)−1 = B −1 A−1 .
It is a square matrix having all elements equal to zero except those on the main diagonal, that is,

A = (aij ),  where  aij = 0 if i ≠ j,  and  aij ≠ 0 if i = j.

Note that a diagonal matrix is invertible if and only if all its diagonal entries are nonzero. •
The MATLAB diag function is used either to create a diagonal matrix from a vector or to extract
the diagonal entries of a matrix. If the input argument of the diag function is a vector, MATLAB
uses the vector to create a diagonal matrix:
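NumPy's `np.diag` behaves like MATLAB's `diag` in both directions; a sketch:

```python
import numpy as np

# A vector argument builds a diagonal matrix...
D = np.diag([2, 2, 2])                    # a scalar matrix with 2 on the diagonal
# ...and a matrix argument extracts the diagonal entries.
d = np.diag(np.array([[1, 2], [3, 4]]))   # array([1, 4])
print(D)
print(d)
```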
A diagonal matrix whose diagonal elements are all equal to the same scalar, for example 2, is called
a scalar matrix. Multiplication of a square matrix by a scalar matrix is commutative, and the
product of a scalar matrix and a diagonal matrix is also a diagonal matrix.
Definition 1.13 (Upper-Triangular Matrix)
It is a square matrix which has zero elements below and to the left of the main diagonal. The
diagonal as well as the above diagonal elements can take on any value, that is
U = (uij ), where uij = 0, if i > j.
An example of such a matrix is
1 2 3
U = 0 4 5 .
0 0 6
The upper-triangular matrix is called upper-unit-triangular matrix if the diagonal elements are equal
to one. This type of matrix is used in solving linear algebraic equations by LU decomposition with
Crout’s method. Also, if the main diagonal elements of the upper-triangular matrix are zero, then
0 a12 a13
A= 0 0 a23 ,
0 0 0
is called the strictly upper-triangular matrix. This type of matrix will be used in solving linear
systems by iterative methods. •
Using the MATLAB command triu(A) we can create an upper-triangular matrix from a matrix A.
We can also create a strictly upper-triangular matrix, that is, an upper-triangular matrix with zero
diagonal, from a given matrix A by using the MATLAB built-in function triu(A,1) as follows:
It is a square matrix which has zero elements above and to the right of the main diagonal and the
rest of the elements can take on any value, that is,
L = (lij ), where lij = 0, if i < j.
In a similar way to upper-triangular matrices, we can create a lower-triangular matrix and a strictly
lower-triangular matrix from a given matrix A by using the MATLAB built-in functions tril(A) and
tril(A,1) respectively.
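NumPy provides `np.triu` and `np.tril` with the same meaning as the MATLAB functions just described; a sketch:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

U  = np.triu(A)     # upper-triangular part, like MATLAB triu(A)
Us = np.triu(A, 1)  # strictly upper-triangular, like triu(A,1)
L  = np.tril(A)     # lower-triangular part, like tril(A)
print(U); print(Us); print(L)
```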
Note that all the triangular matrices (upper or lower) with nonzero diagonal entries are invertible.
A symmetric matrix is one in which the element aij of a matrix A in the ith row and jth column is
equal to the element aji in the jth row and ith column, which means that
aij = aji for all i and j, that is, AT = A.
Note that any diagonal matrix, including the identity, is symmetric. A lower- or upper-triangular
matrix is symmetric if and only if it is, in fact, a diagonal matrix.
One way to generate a symmetric matrix is to multiply a matrix by its transpose, since AT A is
symmetric for any A. To generate a symmetric matrix using MATLAB commands we do as follows:
>> A = [1 : 4; 5 : 8; 9 : 12]; B = A' * A; C = A * A'
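A NumPy sketch of the same construction, checking that both products are indeed symmetric:

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])   # 3 x 4, as in the MATLAB line above

B = A.T @ A   # 4 x 4 and symmetric
C = A @ A.T   # 3 x 3 and symmetric

assert (B == B.T).all()
assert (C == C.T).all()
```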
Example 1.1 Find all the values of a, b and c for which the following matrix is symmetric:

         4         a+b+c     0
A =     −1           3      b−c .
    −a + 2b − 2c     1     b − 2c

Solution. The matrix A is symmetric if and only if aij = aji for all i and j, which requires
0 = −a + 2b − 2c,  −1 = a + b + c,  1 = b − c.
Solving the above system, we get a = 2, b = −1, c = −2, and using these values we have the given
matrix in the form

     4 −1 0
A = −1  3 1 .
     0  1 3
•
Theorem 1.3 If A and B are symmetric matrices of the same size, and if k is any scalar, then
1. AT is also symmetric.
2. A + B and A − B are also symmetric.
3. kA is also symmetric.
Note that the product of symmetric matrices is not symmetric in general; the product is symmetric
if and only if the matrices commute. Also, note that if A is a square matrix, then the matrices
A, AAT and AT A are either all nonsingular or all singular. •
If for a matrix A we have aij = −aji for i ≠ j and the main diagonal elements are not all zero, then
the matrix A is called a skew matrix. If all the elements on the main diagonal of a skew matrix are
zero, then the matrix is called skew symmetric, that is,
aij = −aji for all i and j, that is, AT = −A.
Any square matrix may be split into the sum of a symmetric and a skew symmetric matrix. Thus
A = (1/2)(A + AT ) + (1/2)(A − AT ),
where (1/2)(A + AT ) is a symmetric matrix and (1/2)(A − AT ) is a skew symmetric matrix. The
following matrices

1 2 3      1  2  3      0  2 3
2 4 5 ,   −2  4 −5 ,   −2  0 5 ,
3 5 6     −3  5  6     −3 −5 0

are examples of symmetric, skew and skew symmetric matrices respectively. •
An n × n square matrix A is called a band matrix if there exist positive integers p and q, with 1 < p
and q < n, such that
aij = 0 for p ≤ j − i or q ≤ i − j.
The number p describes the number of diagonals above, and including, the main diagonal on which
nonzero entries may lie. The number q describes the number of diagonals below, and including, the
main diagonal on which nonzero entries may lie. The number p + q − 1 is called the bandwidth of
the matrix A, which tells us how many of the diagonals can contain nonzero entries. For example,
the following matrix

    1 2 3 0
A = 2 3 4 5 ,
    0 5 6 7
    0 0 7 8
is banded with p = 3 and q = 2, and so the bandwidth is equal to 4. An important special case of
a band matrix is the tridiagonal matrix, in which p = q = 2, that is, all nonzero elements
lie either on or directly above or below the main diagonal. For such a matrix, Gaussian
elimination is particularly simple. In general, the nonzero elements of a tridiagonal matrix lie in
three bands: the superdiagonal, diagonal and subdiagonal. For example, the following matrix
    1 2 0 0 0 0 0
    2 3 1 0 0 0 0
    0 3 2 1 0 0 0
A = 0 0 2 4 3 0 0 ,
    0 0 0 1 2 3 0
    0 0 0 0 1 6 4
    0 0 0 0 0 3 4
is a tridiagonal matrix. A matrix which is predominantly zero is called a sparse matrix. A band
matrix or a tridiagonal matrix is a sparse matrix, but the nonzero elements of a sparse matrix are
not necessarily near the diagonal. •
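A tridiagonal matrix like the 7 × 7 example above can be assembled from its three bands; a NumPy sketch (the band entries are read off the example):

```python
import numpy as np

# Sub-, main and superdiagonal entries of the 7 x 7 example above.
sub  = [2, 3, 2, 1, 1, 3]
main = [1, 3, 2, 4, 2, 6, 4]
sup  = [2, 1, 1, 3, 3, 4]

# np.diag(v, k) places vector v on the k-th diagonal.
A = np.diag(main) + np.diag(sub, -1) + np.diag(sup, 1)
print(A)

# Every entry farther than one diagonal from the main diagonal is zero.
n = len(main)
assert all(A[i, j] == 0 for i in range(n) for j in range(n) if abs(i - j) > 1)
```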
The determinant of a square matrix A of order n can be defined recursively:
1. det(A) = a11 , if n = 1.
2. For n > 1, det(A) is given by the cofactor expansion described below.

[Figure 1.2: The diagonal rule for evaluating 2 × 2 and 3 × 3 determinants: the products along the
left-to-right diagonals are added and the products along the right-to-left diagonals are subtracted.]
For example, if

     4 2           6 3
A = −3 7   and B = 2 5 ,

then
det(A) = (4)(7) − (−3)(2) = 34 and det(B) = (6)(5) − (3)(2) = 24.
Notice that the determinant of a 2 × 2 matrix is given by the difference of the products of the two
diagonals of a matrix. The determinant of a 3 × 3 matrix is defined in terms of determinants of
2 × 2 matrices and the determinant of a 4 × 4 matrix is defined in terms of determinants of 3 × 3
matrices and so on.
The MATLAB function det(A) calculates the determinant of the square matrix A as:
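The MATLAB call itself is not shown in this excerpt; the assumed NumPy equivalent `np.linalg.det` applied to the two example matrices gives the same values:

```python
import numpy as np

A = np.array([[4.0, 2.0], [-3.0, 7.0]])
B = np.array([[6.0, 3.0], [2.0, 5.0]])

# Floating-point determinants; 34 and 24 up to rounding error.
print(np.linalg.det(A))
print(np.linalg.det(B))
```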
Alternatively, the determinants of 2 × 2 and 3 × 3 matrices can be found easily and
quickly using diagonals (or direct evaluation). For a 2 × 2 matrix, the determinant can be obtained
by forming the product of the entries on the line from left to right and subtracting from this number
the product of the entries on the line from right to left. For a matrix of size 3 × 3, the diagonals of
an array consisting of the matrix with the first two columns added to the right are used. Then the
determinant can be obtained by forming the sum of the products of the entries on the lines from
left to right, and subtracting from this number the products of the entries on the lines from right to
left, as shown in Figure 1.2.
Thus for a 2 × 2 matrix
|A| = a11 a22 − a12 a21 ,
and for a 3 × 3 matrix
|A| = (a11 a22 a33 + a12 a23 a31 + a13 a21 a32 ) − (a13 a22 a31 + a11 a23 a32 + a12 a21 a33 ),
where the first parenthesis contains the diagonal products from left to right and the second the
diagonal products from right to left.
The minor Mij of an element aij of a matrix A of order n × n is defined as the determinant of the
sub-matrix of order (n − 1) × (n − 1) obtained from A by deleting the ith row and jth column (it is
also called the ijth minor of A). For example, let
    2  3 −1
A = 5  3  2 ,
    4 −2  4

then the minor M11 is obtained by deleting the first row and the first column of the given
matrix A, that is,

       | 3 2 |
M11 =  |−2 4 | = (3)(4) − (−2)(2) = 12 + 4 = 16.
Similarly, we can find the other possible minors of the given matrix as follows:
M12 = 12, M13 = −22, M21 = 10, M22 = 12,
and
M23 = −16, M31 = 9, M32 = 9, M33 = −9,
which are the required minors of the given matrix. •
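The minors above can be checked mechanically by deleting a row and a column and taking the determinant of what is left; a NumPy sketch:

```python
import numpy as np

A = np.array([[2.0, 3.0, -1.0],
              [5.0, 3.0, 2.0],
              [4.0, -2.0, 4.0]])

def minor(A, i, j):
    """Determinant of the submatrix of A obtained by deleting
    row i and column j (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

# All nine minors, rounded to integers.
M = [[round(minor(A, i, j)) for j in range(3)] for i in range(3)]
print(M)  # [[16, 12, -22], [10, 12, -16], [9, 9, -9]]
```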
Let A be a square matrix; then the determinant of A can be defined as the sum of the products of
the elements of any row (or column) and their cofactors, that is,

det(A) = Σ aij Aij ,

where the summation is on i for any fixed value of the jth column (1 ≤ j ≤ n), or on j for any fixed
value of the ith row (1 ≤ i ≤ n), and Aij is the cofactor of the element aij . •
Example 1.2 Find the minors and cofactors of the matrix A and use it to evaluate the determinant
of the matrix
3 1 −4
A= 2 5 6 .
1 4 8
Solution. The minors of A needed for expansion along the first row are calculated as follows:
M11 = (5)(8) − (4)(6) = 16,  M12 = (2)(8) − (1)(6) = 10,  M13 = (2)(4) − (1)(5) = 3,
and from these values of the minors, we can calculate the cofactors of the elements of the first row
of the given matrix as follows:
A11 = M11 = 16, A12 = −M12 = −10, A13 = M13 = 3.
Now by using the cofactor expansion along the first row, we can find the determinant of the matrix
as follows
det(A) = a11 A11 + a12 A12 + a13 A13 = (3)(16) + (1)(−10) + (−4)(3) = 26. •
Note that in the above Example 1.2 we computed the determinant of the matrix by using the
cofactor expansion along the first row, but it can also be found along the first column of the matrix.
To get the results of Example 1.2, we use the author-defined function Cofexp and the following
MATLAB commands:
The determinant of an n × n matrix A can be written as

                                                n
det(A) = ai1 Ai1 + ai2 Ai2 + · · · + ain Ain =  Σ  aij Aij ,
                                               j=1

which is called the cofactor expansion along the ith row, and also as

                                                n
det(A) = a1j A1j + a2j A2j + · · · + anj Anj =  Σ  aij Aij ,
                                               i=1

which is called the cofactor expansion along the jth column. This result is known as the Laplace
Expansion Theorem. •
Note that the cofactor and minor of an element aij differ only in sign, that is, Aij = ±Mij . A
quick way of determining whether to use the + or − is to use the fact that the sign relating Aij
and Mij is in the ith row and jth column of the checkerboard array

+ − + − + · · ·
− + − + − · · ·
+ − + − + · · ·
− + − + − · · ·
⋮ ⋮ ⋮ ⋮ ⋮

For example, A11 = M11 , A21 = −M21 , A12 = −M12 , A22 = M22 , and so on.
If A is any n × n matrix and Aij is the cofactor of aij , then the matrix

          A11 A12 · · · A1n
Cof(A) =  A21 A22 · · · A2n
           ⋮   ⋮         ⋮
          An1 An2 · · · Ann

is called the matrix of cofactors from A. For example, the cofactor matrix of the matrix

    3  2 −1
A = 1  6  3 ,
    2 −4  0

is

          12   6 −16
Cof(A) =   4   2  16 .
          12 −10  16
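The cofactor matrix can be computed programmatically from the definition Aij = (−1)^(i+j) Mij; a NumPy sketch applied to the matrix above:

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors A_ij = (-1)**(i+j) * M_ij, where M_ij is the
    determinant of A with row i and column j deleted (0-based)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

A = np.array([[3.0, 2.0, -1.0],
              [1.0, 6.0, 3.0],
              [2.0, -4.0, 0.0]])
C = np.round(cofactor_matrix(A))
print(C)  # rows: 12 6 -16 / 4 2 16 / 12 -10 16
```

Transposing this matrix gives the adjoint, which is used below to compute inverses.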
Example 1.3 Find the determinant of the following matrix using cofactor expansion and show
that det(A) = 0 when x = 4
x+2 x 2
A= 1 x − 1 3 .
4 x+1 x
Solution. Using the cofactor expansion along the first row, we compute the determinant of the
given matrix as follows:
|A| = a11 A11 + a12 A12 + a13 A13 ,
where
A11 = M11 = x2 − 4x − 3, A12 = −M12 = −x + 12, A13 = M13 = −3x + 5.
Thus
|A| = (x + 2)[x2 − 4x − 3] + x[−x + 12] + 2[−3x + 5] = x3 − 3x2 − 5x + 4.
Now taking x = 4, we get
|A| = (4)3 − 3(4)2 − 5(4) + 4 = 64 − 48 − 20 + 4 = 0. •
The following are special properties which will be helpful in reducing the amount of work involved
in evaluating determinants.
Let A be an n × n matrix:
1. The determinant of a matrix A is zero if any row or column is zero or equal to a linear
combination of other rows or columns.
For example, if
3 1 0
A= 2 1 0 ,
4 3 0
then det(A) = 0.
2. The determinant of a matrix A changes sign if two rows or two columns are interchanged.
For example, if

    3 2
A = 4 5 ,

then det(A) = 7, but for the matrix

    4 5
B = 3 2 ,

obtained from the matrix A by interchanging its rows, we have det(B) = −7.
3. The determinant of a matrix A is equal to the determinant of its transpose. For example, if

    5 3
A = 4 4 ,

then det(A) = 8, and for B = AT we have
det(B) = 8 = det(A).
5. If the matrix B is obtained from the matrix A by multiplying every element in one row or in
one column by k, then the determinant of the matrix B is equal to k times the determinant of A.
For example, if

    6 5
A = 3 4 ,

then det(A) = 9, but for the matrix

    12 10
B =  3  4 ,

obtained from the matrix A by multiplying its first row by k = 2, we have det(B) = 18 = 2 det(A).
6. If the matrix B is obtained from the matrix A by adding to a row (or a column) a multiple
of another row (or another column) of A, then the determinant of the matrix B is equal to the
determinant of A. For example, if

    4 3
A = 5 4 ,

then det(A) = 1, and for the matrix

     4  3
B = 13 10 ,

obtained from the matrix A by adding to its second row 2 times the first row, we have
det(B) = 1 = det(A).
7. If two rows or two columns of a matrix A are identical, then the determinant is zero. For
example, if

    2 3
A = 2 3 ,   then det(A) = 0.
8. The determinant of a product of matrices is the product of the determinants of the matrices.
For example, if

    3 4 5           1 2 3
A = 3 2 1   and B = 4 2 3 ,
    2 1 6           1 3 5

then det(A) = −36 and det(B) = −3. Also,

     24 29 46
AB = 12 13 20 ,
     12 24 39

and det(AB) = 108 = (−36)(−3) = det(A) det(B).
10. The determinant of a scalar multiple kA of an n × n matrix A is equal to k^n times the
determinant of the matrix A, that is, det(kA) = k^n det(A). For example, if

    3 4 5
A = 2 3 6 ,
    1 0 5

then det(A) = 14 and det(2A) = 2^3 det(A) = 8(14) = 112.
11. The determinant of the kth power of a matrix A is equal to the kth power of the determinant
of the matrix A, that is, det(A^k) = (det(A))^k. For example, if

    2 −2  0
A = 2  3 −1 ,
    1  0  1

then det(A) = 12 and det(A^2) = (det(A))^2 = 144.
12. The determinant of a 1 × 1 matrix is equal to its single element. For example, if
A = (8), then det(A) = 8.
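Several of these properties can be verified numerically; a NumPy sketch using the matrices of property 8:

```python
import numpy as np

A = np.array([[3.0, 4.0, 5.0],
              [3.0, 2.0, 1.0],
              [2.0, 1.0, 6.0]])
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 2.0, 3.0],
              [1.0, 3.0, 5.0]])
det = np.linalg.det

assert np.isclose(det(A), det(A.T))              # property 3
assert np.isclose(det(A @ B), det(A) * det(B))   # property 8: (-36)(-3) = 108
assert np.isclose(det(2 * A), 2 ** 3 * det(A))   # property 10 for n = 3
assert np.isclose(det(A @ A), det(A) ** 2)       # property 11 with k = 2
```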
Example 1.4 Find all the values of α for which det(A) = 0, where

    α−3   1   0
A =  0   α−1  1 .
     0    2   α

Solution. We find the determinant of the given matrix by using the cofactor expansion along the
first row, so we compute
|A| = (α − 3)[(α − 1)α − 2] = (α − 3)(α^2 − α − 2) = (α − 3)(α − 2)(α + 1),
so det(A) = 0 for α = 3, α = 2 and α = −1. •
Example 1.5 For what values of α the following matrix has an inverse:
1 0 α
A= 2 2 1 .
0 2α 1
Solution. We find the determinant of the given matrix by using the cofactor expansion along the
first row as follows:
|A| = a11 A11 + a12 A12 + a13 A13 ,
which is equal to
|A| = (1)(2 − 2α) − 0 + α(4α) = 4α^2 − 2α + 2.
From Theorem 1.6 we know that the matrix has an inverse if det(A) ≠ 0. Since the discriminant of
4α^2 − 2α + 2 is (−2)^2 − 4(4)(2) = −28 < 0, the determinant is never zero, and so the given matrix
has an inverse for every real value of α. •
Example 1.6 Use the adjoint method to compute the inverse of the following matrix:

    1  2 −1
A = 2 −1  1 .
    1  2  2

Solution. First we compute the determinant of the matrix,
det(A) = |A| = a11 A11 + a12 A12 + a13 A13 = (1)(−4) − (2)(3) + (−1)(5) = −15,
where the cofactors are
A11 = −4, A12 = −3, A13 = 5, A21 = −6, A22 = 3, A23 = 0, A31 = 1, A32 = −3, A33 = −5.
Thus we have the cofactor matrix and the adjoint matrix as follows:

          −4 −3  5                −4 −3  5  T    −4 −6  1
Cof(A) =  −6  3  0 ,   adj(A) =   −6  3  0    =  −3  3 −3 .
           1 −3 −5                 1 −3 −5        5  0 −5
To get the adjoint of the matrix of Example 1.6, we use the author-defined function Adjoint
and the following MATLAB commands:
Then by using Theorem 1.6 we can find the inverse of the matrix as A−1 = adj(A)/det(A), that is,

             −4 −6  1     4/15  2/5 −1/15
A−1 = −1/15  −3  3 −3  =  1/5  −1/5  1/5 .
              5  0 −5    −1/3    0   1/3
Using Theorem 1.6 we can also compute the inverse of the adjoint matrix as follows:

                  A           −1/15 −2/15  1/15
(adj(A))−1 = ──────────  =    −2/15  1/15 −1/15 ,
               det(A)         −1/15 −2/15 −2/15
Example 1.8 Use the matrix inversion method to find the unique solution of the linear system
Ax = b, where

     1 2 0         1
A = −2 1 2 ,  b =  1 .
    −1 1 1         1

Solution. First we compute the inverse of the given matrix, which has the form

      1  2 −4
A−1 = 0 −1  2 ,
      1  3 −5

and then the solution is

             1  2 −4   1     −1
x = A−1 b =  0 −1  2   1  =   1 ,
             1  3 −5   1     −1

which is the solution of the given system by the matrix inversion method. •
Thus, when the inverse A−1 of the coefficient matrix A has been computed, the solution vector x
of the linear system is simply the product of the inverse matrix A−1 and the right-hand side vector b.
Using MATLAB commands, the linear system of equations defined by the coefficient matrix A and
the right-hand side vector b is solved with the matrix inversion method as follows:
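The MATLAB commands are not reproduced in this excerpt; a NumPy sketch of the same computation for Example 1.8 (MATLAB: `x = inv(A)*b`) is:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [-2.0, 1.0, 2.0],
              [-1.0, 1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

# x = A^(-1) b, as in the matrix inversion method ...
x = np.linalg.inv(A) @ b
# ... although in practice np.linalg.solve(A, b) (MATLAB: x = A\b) is
# preferred, since it avoids forming the inverse explicitly.
assert np.allclose(x, np.linalg.solve(A, b))
print(x)  # approximately [-1, 1, -1]
```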
Not all matrices have inverses. Singular matrices do not have inverses, and thus the corresponding
system of equations does not have a unique solution. The inverse of a matrix can also be computed
by using the following numerical methods for linear systems, namely the Gauss elimination method,
the Gauss-Jordan method and the LU-decomposition method, but the best and simplest method for
finding the inverse of a matrix is to perform the Gauss-Jordan method on the matrix augmented with
the identity matrix of the same size.
Theorem 1.8 A homogeneous system of n equations in n unknowns has a solution other than the
trivial solution if and only if the determinant of the coefficient matrix A vanishes, that is, the matrix
A is singular. •
A nonhomogeneous system of n equations in n unknowns has a unique solution if and only if the
determinant of the coefficient matrix A does not vanish, that is, A is nonsingular. •
Before we discuss numerical methods for solving linear systems, we introduce the most important
numerical quantity associated with a matrix.
The rank of a matrix A is the number of pivots. An m × n matrix will, in general, have rank
r, where r is an integer and r ≤ min{m, n}. If r = min{m, n}, then the matrix is said to be of full
rank. If r < min{m, n}, then the matrix is said to be rank deficient. •
In principle, the rank of a matrix can be determined by using the Gaussian elimination process, in
which the coefficient matrix A is reduced to upper-triangular form U . After reducing the matrix
to triangular form, the rank is the number of columns with nonzero values on the
diagonal of U . In practice, especially for large matrices, round-off errors during the row operations
may cause a loss of accuracy in this method of rank computation.
Theorem 1.10 For a system of n equations in n unknowns written in the form Ax = b, the
solution x of the system exists and is unique for any b if and only if rank(A) = n. •
Conversely, if rank(A) < n for an n × n matrix A, then the system of equations Ax = b may or
may not be consistent; such a system may have no solution, or the solution, if it exists, will not
be unique. For example, the rank of the following matrix is 3:
1 2 4
A = 1 1 5 .
1 1 6
In MATLAB, the built-in rank function can be used to estimate the rank of a matrix:
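As an assumed NumPy equivalent of MATLAB's `rank`, a quick check of the example matrix above:

```python
import numpy as np

A = np.array([[1, 2, 4],
              [1, 1, 5],
              [1, 1, 6]])

# np.linalg.matrix_rank plays the role of MATLAB's rank function.
r = np.linalg.matrix_rank(A)
print(r)  # 3, so A is of full rank
```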
Note that:
rank(AB) ≤ min(rank(A), rank(B))
rank(A + B) ≤ rank(A) + rank(B)
rank(AAT ) = rank(A) = rank(AT A)
Although the rank of a matrix is very useful to categorize the behaviour of matrices and systems
of equations, the rank of a matrix is usually not computed. •
Consider the general system of two equations in two unknowns,
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
with the condition that a11 a22 − a12 a21 ≠ 0, that is, the determinant of the coefficient matrix must
not be equal to zero, or the matrix must be nonsingular. Solving the above system by systematic
elimination, multiplying the first equation of the system by a22 and the second equation by a12
and subtracting, gives
(a11 a22 − a12 a21 )x1 = a22 b1 − a12 b2 ,
and now solving for x1 , gives
a22 b1 − a12 b2
x1 = ,
a11 a22 − a12 a21
and putting the value of x1 in any equation of the given system, we have x2 , as follows
a22 b2 − a12 b1
x2 = .
a11 a22 − a12 a21
Chapter Three Systems of Linear Algebraic Equations 29
Here |A| = −60 ≠ 0, which shows that the given matrix A is nonsingular. Then the matrices A1 , A2 and A3 can be
computed as

    A1 = [ 15 1 3 ]        A2 = [ 4 15 3 ]        A3 = [ 4 1 15 ]
         [ 25 2 6 ],            [ 3 25 6 ],            [ 3 2 25 ].
         [ 20 5 3 ]             [ 1 20 3 ]             [ 1 5 20 ]
The determinants of the matrices A1 , A2 and A3 can be computed as
|A1 | = 15(6 − 30) − 1(75 − 120) + 3(125 − 40) = −360 + 45 + 255 = −60,
|A2 | = 4(75 − 120) − 15(9 − 6) + 3(60 − 25) = −180 − 45 + 105 = −120,
|A3 | = 4(40 − 125) − 1(60 − 25) + 15(15 − 2) = −340 − 35 + 195 = −180.
Now applying Cramer's rule, we get

    x1 = |A1|/|A| = −60/−60 = 1,    x2 = |A2|/|A| = −120/−60 = 2,    x3 = |A3|/|A| = −180/−60 = 3,
which is the required solution of the given system. •
Thus Cramer's rule is useful for hand calculation only if the determinants can be evaluated
easily, that is, for small n (say n ≤ 3). The solution of a system of n linear equations by Cramer's rule
requires about N = (n³/3)(n + 1) multiplications. Therefore, this rule is much less efficient for large values
of n and is almost never used for computational purposes. When the number of equations is large
(n > 3), other methods of solution are more desirable.
The solution of the above linear system by Cramer's rule can also be found by using MATLAB commands.
2. Compute the determinant of A. If det A = 0, then the system does not have a unique solution
and Cramer's rule cannot be used; otherwise, go to the next step.
3. Compute the determinant of the new matrix Ai obtained by replacing the ith column of A with the vector b.
To generate the solution of Example 1.9, we use the author-defined function CRule and the
corresponding MATLAB commands.
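The steps above can be sketched in code. The following Python function is an assumed equivalent of the author-defined CRule, not the original; it is applied to the system of Example 1.9, whose coefficient matrix A and right-hand side b can be read off from the matrices A1, A2 and A3 shown earlier.

```python
def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices used with Cramer's rule)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i)/det(A),
    where A_i is A with its ith column replaced by b."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0: no unique solution, Cramer's rule fails")
    x = []
    for i in range(len(A)):
        Ai = [row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(A)]
        x.append(det(Ai) / d)
    return x

# system of Example 1.9: A x = [15, 25, 20]^T
A = [[4, 1, 3], [3, 2, 6], [1, 5, 3]]
print(cramer(A, [15, 25, 20]))   # -> [1.0, 2.0, 3.0]
```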
then backward substitution is used to solve the triangular system easily, and one can find the
unknown variables involved in the system.
Now we shall describe the method in detail for a system of n linear equations. Consider the
following system of n linear equations:
Forward Elimination
We take the first equation of the system as the first pivotal equation, with first pivot element a11.
Then the first equation times the multiples mi1 = ai1/a11, i = 2, 3, . . . , n, is subtracted from the
ith equation to eliminate the first variable x1, producing an equivalent system. Next, the second
equation of the new system is taken as the second pivotal equation, with second pivot element
a22^(1). Then the second equation times the multiples mi2 = ai2^(1)/a22^(1), i = 3, . . . , n, is
subtracted from the ith equation to eliminate the second variable x2, producing an equivalent
system. Then the third equation is taken as the third pivotal equation, with third pivot element
a33^(2), and the third equation times the multiples mi3 = ai3^(2)/a33^(2), i = 4, . . . , n, is
subtracted from the ith equation to eliminate the third variable x3. Similarly, after the (n − 1)th
step, we have the nth pivotal equation, which has only one unknown variable xn, with nth pivot
element ann^(n−1). After obtaining the upper-triangular system, which is equivalent to the
original system, the forward elimination is complete.
Backward Substitution
After the triangular set of equations has been obtained, the last equation of the system (1.19) yields
the value of xn directly. This value is then substituted into the next-to-last equation of the
system (1.19) to obtain a value of xn−1, which is, in turn, used along with the value of xn in the
second-to-last equation to obtain a value of xn−2, and so on. A mathematical formula can be
obtained for the backward substitution:

    xn = bn^(n−1) / ann^(n−1),
    xn−1 = ( bn−1^(n−2) − an−1,n^(n−2) xn ) / an−1,n−1^(n−2),
    . . .                                                            (1.20)
    x1 = ( b1 − Σ_{j=2}^{n} a1j xj ) / a11.
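Formula (1.20) translates directly into a short routine; the sketch below is an illustration, not the book's function, and is applied to a small upper-triangular system.

```python
def back_substitution(U, b):
    """Solve the upper-triangular system Ux = b using (1.20):
    x_n = b_n/u_nn, then x_i = (b_i - sum_j u_ij x_j)/u_ii upward."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# upper-triangular system: x1 + 2x2 + x3 = 2, x2 + x3 = -3, 2x3 = 6
U = [[1, 2, 1], [0, 1, 1], [0, 0, 2]]
print(back_substitution(U, [2, -3, 6]))   # -> [11.0, -6.0, 3.0]
```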
The Gaussian elimination can be carried out by writing only the coefficients and the right-hand
side terms in matrix form, that is, in augmented matrix form. Indeed, this is exactly what
a computer program for the Gaussian elimination does. Even for hand calculation, the augmented
matrix form is more convenient than writing the whole set of equations. The augmented matrix is
formed as follows:

    [ a11 a12 a13 · · · a1n | b1 ]
    [ a21 a22 a23 · · · a2n | b2 ]
    [ a31 a32 a33 · · · a3n | b3 ]        (1.21)
    [ . . .                      ]
    [ an1 an2 an3 · · · ann | bn ]
The operations used in the Gaussian elimination method can now be applied to the augmented
matrix. Consequently system (1.19) is now written directly as follows:
    [ a11 a12     a13     · · · a1n       | b1       ]
    [ 0   a22^(1) a23^(1) · · · a2n^(1)   | b2^(1)   ]
    [ 0   0       a33^(2) · · · a3n^(2)   | b3^(2)   ]        (1.22)
    [ . . .                                          ]
    [ 0   0       0       · · · ann^(n−1) | bn^(n−1) ],
from which the unknowns are determined as before by using backward substitution. The number of
multiplications and divisions for the Gaussian elimination method for one b vector is approximately
    N = n³/3 + n² − n/3.        (1.23)
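The forward elimination and backward substitution just described can be combined into one routine. The sketch below is an assumed implementation of the simple (no pivoting) method, so it presumes every pivot it meets is nonzero.

```python
def gauss_solve(A, b):
    """Solve Ax = b by simple Gaussian elimination (no pivoting):
    forward elimination on the augmented matrix to upper-triangular
    form, then backward substitution. Fails on a zero pivot."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    for k in range(n - 1):                  # forward elimination
        if M[k][k] == 0.0:
            raise ZeroDivisionError("zero pivot: reorder the equations")
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]           # multiplier m_ik
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n                           # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1.10
print(gauss_solve([[1, 2, 1], [2, 5, 3], [1, 3, 4]], [2, 1, 5]))
# -> [11.0, -6.0, 3.0]
```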
Example 1.10 Solve the following linear system using the simple Gaussian elimination method
x1 + 2x2 + x3 = 2
2x1 + 5x2 + 3x3 = 1
x1 + 3x2 + 4x3 = 5
Since a11 = 1 ≠ 0, we wish to eliminate the elements a21 and a31 by subtracting from the second
and third rows the appropriate multiples of the first row. In this case the multiples are
m21 = 2/1 = 2 and m31 = 1/1 = 1. Hence

    [ 1 2 1 |  2 ]
    [ 0 1 1 | −3 ]
    [ 0 1 3 |  3 ].
As a22^(1) = 1 ≠ 0, we wish to eliminate the entry in the a32 position by subtracting the multiple
m32 = 1/1 = 1 of the second row from the third row, to get

    [ 1 2 1 |  2 ]
    [ 0 1 1 | −3 ]
    [ 0 0 2 |  6 ].
Obviously, the original set of equations has been transformed to an upper-triangular form. Since all
the diagonal elements of the resulting upper-triangular matrix are nonzero, the
coefficient matrix of the given system is nonsingular, and therefore the given system has a unique
solution. Now expressing the set in algebraic form yields
x1 + 2x2 + x3 = 2
x2 + x3 = −3
2x3 = 6
Now using backward substitution, we get the solution [x1 , x2 , x3 ]T = [11, −6, 3]T . •
This result can be obtained by using the author-defined function WPivoting and the
corresponding MATLAB commands.
In the simple description of Gaussian elimination without pivoting just given, we used the kth
equation to eliminate the variable xk from equations k + 1, . . . , n during the kth step of the procedure.
This is possible only if, at the beginning of the kth step, the coefficient akk^(k−1) of xk in equation k is
not zero, since these coefficients are used as denominators both in the multipliers mij and in the
backward substitution equations. A zero pivot does not necessarily mean that the linear system is not
solvable, but it does mean that the procedure of solution must be altered.
Example 1.11 Solve the following linear system using the simple Gaussian elimination method.
x2 + x3 = 1
x1 + 2x2 + 2x3 = 1
2x1 + x2 + 2x3 = 3
To solve this system, the simple Gaussian elimination method will fail immediately because the
element in the first row on the leading diagonal, the pivot, is zero. Thus it is impossible to divide
that row by the pivot value. Clearly, this difficulty can be overcome by rearranging the order of the
rows; for example, making the first row the second gives

    [ 1 2 2 | 1 ]
    [ 0 1 1 | 1 ]
    [ 2 1 2 | 3 ].
Now we use the usual elimination process. The first elimination step is to eliminate the element
a31 = 2 from the third row by subtracting the multiple m31 = 2/1 = 2 of row 1 from row 3, which gives

    [ 1  2  2 | 1 ]
    [ 0  1  1 | 1 ]
    [ 0 −3 −2 | 1 ].
We are finished with the first elimination step, since the element a21 is already zero in the second
row. The second elimination step is to eliminate the element a32^(1) = −3 from the third row by
subtracting the multiple m32 = −3/1 = −3 of row 2 from row 3, which gives

    [ 1 2 2 | 1 ]
    [ 0 1 1 | 1 ]
    [ 0 0 1 | 4 ].
Obviously, the original set of equations has been transformed to an upper-triangular form. Now
expressing the set in algebraic form yields
x1 + 2x2 + 2x3 = 1
x2 + x3 = 1
x3 = 4
Using backward substitution, we get x1 = −1, x2 = −3, x3 = 4, the solution of the system. •
Example 1.12 Solve the linear system using the simple Gaussian elimination method.
x1 + x2 + x3 = 3
2x1 + 2x2 + 3x3 = 7
x1 + 2x2 + 3x3 = 6
The first elimination step is to eliminate the elements a21 = 2 and a31 = 1 from the second and
third rows by subtracting the multiples m21 = 2/1 = 2 and m31 = 1/1 = 1 of row 1 from row 2 and
row 3 respectively, which gives

    [ 1 1 1 | 3 ]
    [ 0 0 1 | 1 ]
    [ 0 1 2 | 3 ].
We are finished with the first elimination step. To start the second elimination step, we note that
the element a22^(1) = 0, the second pivot element, so the simple Gaussian elimination cannot
continue in its present form. Therefore, we interchange rows 2 and 3, to get

    [ 1 1 1 | 3 ]
    [ 0 1 2 | 3 ]
    [ 0 0 1 | 1 ].

We are finished with the second elimination step, since the element a32^(1) is already zero in the
third row. Obviously, the original set of equations has been transformed to an upper-triangular form.
Now expressing the set in algebraic form yields
x1 + x2 + x3 = 3
x2 + 2x3 = 3
x3 = 1
Using backward substitution, we get x1 = 1, x2 = 1, x3 = 1, the solution of the system. •
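The row interchanges used in Examples 1.11 and 1.12 can be folded into the elimination loop: whenever a zero pivot is met, a row below it with a nonzero entry in that column is swapped into the pivot position. A sketch (not the book's code):

```python
def gauss_solve_swap(A, b):
    """Gaussian elimination that interchanges rows whenever a zero
    pivot is met (as in Examples 1.11 and 1.12), then back-substitutes."""
    n = len(A)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    for k in range(n - 1):
        if M[k][k] == 0.0:
            # find a row below with a nonzero entry in column k and swap
            p = next(i for i in range(k + 1, n) if M[i][k] != 0.0)
            M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1.12: a zero pivot appears at the second step
print(gauss_solve_swap([[1, 1, 1], [2, 2, 3], [1, 2, 3]], [3, 7, 6]))
# -> [1.0, 1.0, 1.0]
```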
Example 1.13 Using the simple Gaussian elimination method, find all values of a and b for which
the following linear system is consistent or inconsistent.
2x1 − x2 + 3x3 = 1
4x1 + 2x2 + 2x3 = 2a
2x1 + x2 + x3 = b
Solution. Using m21 = 2, m31 = 1, and m32 = 1/2, we get

    [A|b] = [ 2 −1 3 | 1  ]   [ 2 −1  3 | 1      ]   [ 2 −1  3 | 1      ]
            [ 4  2 2 | 2a ] ≡ [ 0  4 −4 | 2a − 2 ] ≡ [ 0  4 −4 | 2a − 2 ].
            [ 2  1 1 | b  ]   [ 0  2 −2 | b − 1  ]   [ 0  0  0 | b − a  ]
We are finished with the second column, so the third row of the equivalent upper-triangular system is

    0x1 + 0x2 + 0x3 = b − a.        (1.24)

Firstly, if b − a = 0, then (1.24) places no constraint on the unknowns x1 , x2 , and x3 , and the
upper-triangular system represents only two non-trivial equations, namely

    2x1 − x2 + 3x3 = 1
    4x2 − 4x3 = 2a − 2
in three unknowns. As a result, one of the unknowns can be chosen arbitrarily, say x3 = x3∗; then
x2∗ and x1∗ can be obtained by backward substitution:

    x2∗ = a/2 − 1/2 + x3∗,    x1∗ = (1/2 + a/2 − 2x3∗)/2.

Hence

    x∗ = [ (1/2 + a/2 − 2x3∗)/2,  a/2 − 1/2 + x3∗,  x3∗ ]T

is a solution of the given system for any value of x3∗ and any real value of a, so the
given linear system is consistent (it has infinitely many solutions).
Secondly, when b − a ≠ 0, (1.24) puts a restriction on the unknowns x1 , x2 and x3 that is
impossible to satisfy. So the system cannot have any solution, and it is therefore inconsistent. •
Example 1.14 Using the simple Gaussian elimination method, find all values of a and b for which
the following linear system is consistent or inconsistent. Find the solutions when the system is
consistent.
x1 − 2x2 + 3x3 = 4
2x1 − 3x2 + ax3 = 5
3x1 − 4x2 + 5x3 = b
Solution. Convert the given system into an upper-triangular system:

    [A|b] = [ 1 −2 3 | 4 ]   [ 1 −2 3     | 4      ]   [ 1 −2 3       | 4     ]
            [ 2 −3 a | 5 ] ≡ [ 0  1 a − 6 | −3     ] ≡ [ 0  1 a − 6   | −3    ].
            [ 3 −4 5 | b ]   [ 0  2 −4    | b − 12 ]   [ 0  0 −2a + 8 | b − 6 ]
Example 1.15 Find the value(s) of α for which the following linear system has a unique solution,
infinitely many solutions, or no solution. Then find the unique solution for the smallest admissible
positive integer value of α.
x1 + 3x2 + αx3 = 4
2x1 − x2 + 2αx3 = 1
αx1 + 5x2 + x3 = 6
Solution. Using the multiples m21 = 2, m31 = α, and m32 = (5 − 3α)/(−7), gives the matrix form

    [A|b] = [ 1  3 α  | 4 ]   [ 1 3      α      | 4      ]   [ 1  3 α      | 4     ]
            [ 2 −1 2α | 1 ] ≡ [ 0 −7     0      | −7     ] ≡ [ 0 −7 0      | −7    ].
            [ α  5 1  | 6 ]   [ 0 5 − 3α 1 − α² | 6 − 4α ]   [ 0  0 1 − α² | 1 − α ]
So if 1 − α² ≠ 0, then we have a unique solution of the given system, while for α = ±1 we have no
unique solution. If α = 1, then we have infinitely many solutions, because the third row of the above
matrix gives

    0x1 + 0x2 + 0x3 = 0,

and when α = −1, we have

    0x1 + 0x2 + 0x3 = 2,

which is not possible, so there is no solution.
Since we cannot take α = 1 for a unique solution, we take the next positive integer α = 2, which
gives the upper-triangular system

    x1 + 3x2 + 2x3 = 4
    −7x2 = −7
    −3x3 = −1

Solving this system using backward substitution, we get x1 = 1/3, x2 = 1, x3 = 1/3, the required
unique solution of the given system for the smallest admissible positive integer value of α. •
Example 1.16 Show that there are two values of α, say α1 and α2 , for which the following linear
system has a solution. Find α1 and α2 and compute the solution in each case.

    x1 + x2 + x3 = α²
    4x1 − x2 + x3 = α
    x1 + x2 + 2x3 = 1
    x1 + 6x2 + 5x3 = 1
Solution. Using the multiples m21 = 4, m31 = 1, and m41 = 1 gives the matrix form

    [A|b] = [ 1  1 1 | α² ]   [ 1  1  1 | α²      ]
            [ 4 −1 1 | α  ] ≡ [ 0 −5 −3 | α − 4α² ]
            [ 1  1 2 | 1  ]   [ 0  0  1 | 1 − α²  ]
            [ 1  6 5 | 1  ]   [ 0  5  4 | 1 − α²  ]

and then, adding the second row to the fourth row,

          [ 1  1  1 | α²          ]
        ≡ [ 0 −5 −3 | α − 4α²     ].
          [ 0  0  1 | 1 − α²      ]
          [ 0  0  1 | 1 − 5α² + α ]
The last two equations give 1 − α² = 1 − 5α² + α, so that 4α² − α = α(4α − 1) = 0. Thus α = 0
or α = 1/4.
Now let α = 0; then we have

    [ 1  1  1 | 0 ]
    [ 0 −5 −3 | 0 ].
    [ 0  0  1 | 1 ]

Using backward substitution we get the solution [x1 , x2 , x3 ]T = [−2/5, −3/5, 1]T . When α = 1/4,
then we have

    [ 1  1  1 | 1/16  ]
    [ 0 −5 −3 | 0     ].
    [ 0  0  1 | 15/16 ]

Again using backward substitution we obtain the solution [x1 , x2 , x3 ]T = [−5/16, −9/16, 15/16]T . •
Theorem 1.11 An upper-triangular matrix A is nonsingular if and only if all its diagonal elements
are different from zero. •
Example 1.17 Use the simple Gaussian elimination method to find all the values of α which make
the following matrix singular.
    A = [ 1 −1  α   ]
        [ 2  2  1   ].
        [ 0  α −1.5 ]
Then use the smallest positive integer value of α to find the unique solution of the linear system
Ax = [1, 6, −4]T by simple Gaussian elimination method.
Solution. Using m21 = 2 and m32 = α/4, we get

    [ 1 −1  α   ]   [ 1 −1 α      ]   [ 1 −1 α                  ]
    [ 2  2  1   ] ≡ [ 0  4 1 − 2α ] ≡ [ 0  4 1 − 2α             ].
    [ 0  α −1.5 ]   [ 0  α −1.5   ]   [ 0  0 −1.5 − α(1 − 2α)/4 ]
To show that the given matrix is singular, we set the third diagonal element equal to zero
(by Theorem 1.11), that is,

    −1.5 − α(1 − 2α)/4 = 0,    or    2α² − α − 6 = 0.

Solving this quadratic equation, we get α = −3/2 and α = 2, the possible values of α which
make the given matrix singular.
To find the unique solution we take the smallest positive integer value α = 1; using m21 = 2
and m32 = 1/4 gives

    [ 1 −1  1   |  1 ]   [ 1 −1  1   |  1 ]   [ 1 −1  1   |  1 ]
    [ 2  2  1   |  6 ] ≡ [ 0  4 −1   |  4 ] ≡ [ 0  4 −1   |  4 ]
    [ 0  1 −1.5 | −4 ]   [ 0  1 −1.5 | −4 ]   [ 0  0 −5/4 | −5 ]

or, in equation form,

    x1 − x2 + x3 = 1
    4x2 − x3 = 4
    −(5/4)x3 = −5

Using backward substitution, we get x3 = 4, x2 = 2, and x1 = −1, the required unique solution. •
The inverse of a nonsingular matrix A can be easily determined by using the simple Gaussian
elimination method. Here, we consider the augmented matrix formed from the given
matrix A and the identity matrix I (of the same size as A). To find the inverse matrix B = A−1 we
must solve n linear systems, in which the jth column of the matrix B is the solution of the linear
system whose right-hand side is the jth column of the matrix I.
Example 1.18 Use the simple Gaussian elimination method to find the inverse of the following
matrix
    A = [ 2 −1 3 ]
        [ 4 −1 6 ].
        [ 2 −3 4 ]
Then use A−1 to find the unique solution of the system Ax = [1, 2, 2]T .
Solution. Suppose that the inverse A−1 = B of the given matrix exists and let
    AB = [ 2 −1 3 ] [ b11 b12 b13 ]   [ 1 0 0 ]
         [ 4 −1 6 ] [ b21 b22 b23 ] = [ 0 1 0 ] = I.
         [ 2 −3 4 ] [ b31 b32 b33 ]   [ 0 0 1 ]
Now to find the elements of the matrix B, we apply the simple Gaussian elimination to the
augmented matrix

    [A|I] = [ 2 −1 3 | 1 0 0 ]
            [ 4 −1 6 | 0 1 0 ].
            [ 2 −3 4 | 0 0 1 ]
Using m21 = 4/2 = 2, m31 = 2/2 = 1, and m32 = −2/1 = −2, gives

    [ 2 −1 3 | 1 0 0 ]   [ 2 −1 3 |  1 0 0 ]   [ 2 −1 3 |  1 0 0 ]
    [ 4 −1 6 | 0 1 0 ] ≡ [ 0  1 0 | −2 1 0 ] ≡ [ 0  1 0 | −2 1 0 ].
    [ 2 −3 4 | 0 0 1 ]   [ 0 −2 1 | −1 0 1 ]   [ 0  0 1 | −5 2 1 ]
We solve the first system

    [ 2 −1 3 ] [ b11 ]   [  1 ]
    [ 0  1 0 ] [ b21 ] = [ −2 ]
    [ 0  0 1 ] [ b31 ]   [ −5 ]

by using backward substitution; we get

    2b11 − b21 + 3b31 = 1
    b21 = −2
    b31 = −5
which gives b11 = 7, b21 = −2, b31 = −5. Similarly, the solution of the second linear system
    [ 2 −1 3 ] [ b12 ]   [ 0 ]
    [ 0  1 0 ] [ b22 ] = [ 1 ],
    [ 0  0 1 ] [ b32 ]   [ 2 ]
can be obtained as follows:
2b12 − b22 + 3b32 = 0
b22 = 1
b32 = 2
which gives b12 = −5/2, b22 = 1, b32 = 2. Finally, the solution of the third linear system
    [ 2 −1 3 ] [ b13 ]   [ 0 ]
    [ 0  1 0 ] [ b23 ] = [ 0 ],
    [ 0  0 1 ] [ b33 ]   [ 1 ]
can be obtained as follows:
2b13 − b23 + 3b33 = 0
b23 = 0
b33 = 1
and it gives b13 = −3/2, b23 = 0, b33 = 1. Hence the elements of the inverse matrix B are
    B = A−1 = [  7 −5/2 −3/2 ]
              [ −2  1    0   ],
              [ −5  2    1   ]
which is the required inverse of the given matrix A. Now to find the solution of the system, we do
as
    x = A−1 b = [  7 −5/2 −3/2 ] [ 1 ]   [ −1 ]
                [ −2  1    0   ] [ 2 ] = [  0 ],
                [ −5  2    1   ] [ 2 ]   [  1 ]
the required solution. •
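The column-by-column computation of B = A−1 in Example 1.18 is easy to mechanize: reduce [A|I] once by forward elimination, then back-substitute against each transformed column of I. A sketch that assumes no pivoting is needed:

```python
def inverse(A):
    """Invert A by Gaussian elimination on [A | I]: forward-eliminate
    once, then solve for each column of A^(-1) by back substitution.
    Assumes the pivots met along the way are nonzero."""
    n = len(A)
    # augmented matrix [A | I]
    M = [list(map(float, A[i])) + [float(i == j) for j in range(n)]
         for i in range(n)]
    for k in range(n - 1):                    # forward elimination
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, 2 * n):
                M[i][j] -= m * M[k][j]
    B = [[0.0] * n for _ in range(n)]
    for c in range(n):                        # one back substitution per column
        for i in range(n - 1, -1, -1):
            s = sum(M[i][j] * B[j][c] for j in range(i + 1, n))
            B[i][c] = (M[i][n + c] - s) / M[i][i]
    return B

# Example 1.18
B = inverse([[2, -1, 3], [4, -1, 6], [2, -3, 4]])
print(B)   # -> [[7.0, -2.5, -1.5], [-2.0, 1.0, 0.0], [-5.0, 2.0, 1.0]]
```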
2. Check the first pivot element: if a11 ≠ 0, move to the next step; otherwise, interchange rows so
that a11 ≠ 0.
3. Multiply row one by the multiplier mi1 = ai1/a11 and subtract the result from the ith row, for
i = 2, 3, . . . , n.
4. Repeat steps 2 and 3 for the remaining pivot elements until the coefficient matrix A becomes
an upper-triangular matrix U .
5. Use backward substitution to solve for xn from the nth equation, xn = bn^(n−1)/ann^(n−1),
and then solve for the other (n − 1) unknowns by using (1.20).
The use of non-zero pivots is sufficient for the theoretical correctness of simple Gaussian
elimination, but more care must be taken if one is to obtain reliable results.
Example 1.19 Consider a linear system
0.000100x1 + x2 = 1
x1 + x2 = 2
which has the exact solution x = [1.00010, 0.99990]T . Now we solve this system by simple Gaussian
elimination, carrying a limited number of significant digits in the arithmetic. The first elimination
step is to eliminate the first variable x1 from the second equation by subtracting the multiple
m21 = 10000 of the first equation from the second equation, which (after rounding) gives

    0.000100x1 + x2 = 1
    −10000x2 = −10000

Using backward substitution we get the solution x∗ = [0, 1]T . Thus a computational disaster has
occurred. But if we interchange the equations, we obtain

    x1 + x2 = 2
    0.000100x1 + x2 = 1

Applying the Gaussian elimination again, we get the solution x∗ = [1, 1]T . This solution is as
good as one could hope for. So we conclude from this example that it is not enough just to avoid a
zero pivot; one must also avoid relatively small ones. •
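The disaster of Example 1.19 can be reproduced by simulating three-significant-digit arithmetic; the fl helper below is an assumption used to mimic short-precision rounding (in full double precision the naive ordering would actually survive this particular 2 × 2 system):

```python
def fl(x):
    """Round x to three significant digits, mimicking short-precision
    floating-point arithmetic."""
    return float(f"{x:.2e}")

def solve2(a11, a12, b1, a21, a22, b2):
    """Eliminate x1 from the second equation and back-substitute,
    rounding after every arithmetic operation."""
    m = fl(a21 / a11)                       # multiplier
    a22p = fl(a22 - fl(m * a12))            # a22 after elimination
    b2p = fl(b2 - fl(m * b1))
    x2 = fl(b2p / a22p)
    x1 = fl(fl(b1 - fl(a12 * x2)) / a11)
    return x1, x2

# naive ordering: the tiny pivot 0.0001 destroys x1
print(solve2(0.0001, 1, 1, 1, 1, 2))   # -> (0.0, 1.0)
# interchanged equations: the solution is fine
print(solve2(1, 1, 2, 0.0001, 1, 1))   # -> (1.0, 1.0)
```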
Here we need some pivoting strategies to help us overcome the difficulties faced during the process
of simple Gaussian elimination.
Pivoting is used to change the sequential order of the equations for two purposes: first, to prevent
diagonal coefficients from becoming zero, and second, to make each diagonal coefficient larger
in magnitude than any other coefficient below it, that is, to decrease the round-off errors. The
equations are not mathematically affected by changes of the sequential order, but changing the order
makes the pivot coefficients non-zero. Even when all diagonal coefficients are non-zero, the change
of order increases the accuracy of the computations. The standard pivoting strategy which handles
these difficulties easily is explained below.
Example 1.20 Solve the following linear system using the Gaussian elimination method with
partial pivoting.
x1 + x2 + x3 = 1
2x1 + 3x2 + 4x3 = 3
4x1 + 9x2 + 16x3 = 11
Solution. For the first elimination step, since 4 is the largest coefficient of the first variable x1 in
absolute value, the first row and the third row are interchanged, giving us

    4x1 + 9x2 + 16x3 = 11
    2x1 + 3x2 + 4x3 = 3
    x1 + x2 + x3 = 1

Eliminate the first variable x1 from the second and third rows by subtracting the multiples
m21 = 2/4 and m31 = 1/4 of row 1 from row 2 and row 3 respectively, which gives

    4x1 + 9x2 + 16x3 = 11
    −3/2x2 − 4x3 = −5/2
    −5/4x2 − 3x3 = −7/4

For the second elimination step, −3/2 is the largest coefficient of the second variable x2 in absolute
value, so eliminate x2 from the third row by subtracting the multiple m32 = 5/6 of row 2 from
row 3, which gives
4x1 + 9x2 + 16x3 = 11
− 3/2x2 − 4x3 = −5/2
1/3x3 = 1/3
Obviously, the original set of equations has been transformed to an equivalent upper-triangular form.
Now using backward substitution gives x1 = 1, x2 = −1, x3 = 1, the required solution. •
The author-defined function PPivoting and the corresponding MATLAB commands give the same
results as we obtained in the preceding Example 1.20 of the Gaussian elimination method with
partial pivoting:
1. Suppose we are about to work on the ith column of the matrix. Then we search that portion
of the ith column below and including the diagonal, and find the element that has the largest
absolute value. Let p denote the index of the row that contains this element.
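Step 1 above leads directly to the following sketch of Gaussian elimination with partial pivoting; it is an assumed Python analogue of the author-defined PPivoting, not the original MATLAB code.

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting: at step k, the row
    with the largest |entry| in column k (on or below the diagonal)
    is swapped into the pivot position before eliminating."""
    n = len(A)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    for k in range(n - 1):
        # index p of the largest pivot candidate in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1.20
print(gauss_partial_pivot([[1, 1, 1], [2, 3, 4], [4, 9, 16]], [1, 3, 11]))
# approximately [1.0, -1.0, 1.0]
```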
Example 1.21 Find the inverse B = A−1 of the following matrix A by solving the system AB = I,
using Gaussian elimination with partial pivoting. Then use it to solve the system Ax = [1, 1, 2]T .
Solution. Suppose that the inverse A−1 = B of the given matrix exists and let
    AB = [ 2 1 2 ] [ b11 b12 b13 ]   [ 1 0 0 ]
         [ 1 2 3 ] [ b21 b22 b23 ] = [ 0 1 0 ] = I.
         [ 4 1 2 ] [ b31 b32 b33 ]   [ 0 0 1 ]
Now to find the elements of the matrix B, we apply the Gaussian elimination using partial pivoting
on the augmented matrix
    [A|I] = [ 2 1 2 | 1 0 0 ]
            [ 1 2 3 | 0 1 0 ].
            [ 4 1 2 | 0 0 1 ]
For the first elimination step, since 4 is the largest coefficient of the first variable x1 in absolute
value, the first row and the third row are interchanged, giving us

    [ 4 1 2 | 0 0 1 ]
    [ 1 2 3 | 0 1 0 ].
    [ 2 1 2 | 1 0 0 ]

Using the three multiples m21 = 1/4, m31 = 1/2, and m32 = 2/7, gives

    [ 4 1   2   | 0 0  1   ]   [ 4 1   2   | 0  0    1   ]
    [ 0 7/4 5/2 | 0 1 −1/4 ] ≡ [ 0 7/4 5/2 | 0  1   −1/4 ].
    [ 0 1/2 1   | 1 0 −1/2 ]   [ 0 0   2/7 | 1 −2/7 −3/7 ]
Obviously, the original set of equations has been transformed to an equivalent upper-triangular form.
We solve the first upper-triangular linear system

    [ 4 1   2   ] [ b11 ]   [ 0 ]
    [ 0 7/4 5/2 ] [ b21 ] = [ 0 ]
    [ 0 0   2/7 ] [ b31 ]   [ 1 ]

by using backward substitution; we get

    4b11 + b21 + 2b31 = 0
    (7/4)b21 + (5/2)b31 = 0
    (2/7)b31 = 1

which gives

    [ b11 ]   [ −0.5 ]
    [ b21 ] = [ −5.0 ].
    [ b31 ]   [  3.5 ]
Similarly, the solution of the second upper-triangular linear system

    [ 4 1   2   ] [ b12 ]   [  0   ]
    [ 0 7/4 5/2 ] [ b22 ] = [  1   ]
    [ 0 0   2/7 ] [ b32 ]   [ −2/7 ]

can be obtained by using backward substitution, as

    4b12 + b22 + 2b32 = 0
    (7/4)b22 + (5/2)b32 = 1
    (2/7)b32 = −2/7

which gives [b12 , b22 , b32 ]T = [0.0, 2.0, −1.0]T . Finally, the solution of the third upper-triangular
linear system

    [ 4 1   2   ] [ b13 ]   [  1   ]
    [ 0 7/4 5/2 ] [ b23 ] = [ −1/4 ]
    [ 0 0   2/7 ] [ b33 ]   [ −3/7 ]
can be obtained in the same way, which gives [b13 , b23 , b33 ]T = [0.5, 2.0, −1.5]T . Hence the
inverse matrix is

    B = A−1 = [ −0.5  0.0  0.5 ]
              [ −5.0  2.0  2.0 ].
              [  3.5 −1.0 −1.5 ]

Now the solution of the system Ax = [1, 1, 2]T is x = A−1 b = [0.5, 1.0, −0.5]T .
There are times when the partial pivoting procedure is inadequate. When some rows have
coefficients that are very large in comparison to those in other rows, partial pivoting may not give a
correct solution.
Therefore, when in doubt, use total pivoting. No amount of pivoting will remove inherent ill-
conditioning (discussed later in the chapter) from a set of equations, but it helps to ensure that
no further ill-conditioning is introduced in the course of the computation.
Example 1.22 Solve the following linear system using the Gaussian elimination with total pivoting
x1 + x2 + x3 = 1
2x1 + 3x2 + 4x3 = 3
4x1 + 9x2 + 16x3 = 11
Solution. For the first elimination step, since 16 is the largest coefficient in absolute value in the
whole system, the first and third rows are interchanged, as well as the first and third columns (so
that the unknowns are ordered x3 , x2 , x1 ), giving

    16x3 + 9x2 + 4x1 = 11
    4x3 + 3x2 + 2x1 = 3
    x3 + x2 + x1 = 1

Then eliminate the third variable x3 from the second and third rows by subtracting the multiples
m21 = 4/16 and m31 = 1/16 of row 1 from rows 2 and 3 respectively, which gives

    16x3 + 9x2 + 4x1 = 11
    3/4x2 + x1 = 1/4
    7/16x2 + 3/4x1 = 5/16
For the second elimination step, 1 is the largest remaining coefficient in absolute value (the
coefficient of the first variable x1 in the second row), so the second and third columns are
interchanged, giving us

    16x3 + 4x1 + 9x2 = 11
    x1 + 3/4x2 = 1/4
    3/4x1 + 7/16x2 = 5/16

Eliminate the first variable x1 from the third row by subtracting the multiple m32 = 3/4 of row 2
from row 3, which gives

    16x3 + 4x1 + 9x2 = 11
    x1 + 3/4x2 = 1/4
    −1/8x2 = 1/8
The original set of equations has been transformed to an equivalent upper-triangular form. Now
using backward substitution gives x1 = 1, x2 = −1, x3 = 1, the required solution of the given
linear system. •
The author-defined function TPivoting and the corresponding MATLAB commands can be used to
get the same results as we obtained in the preceding Example 1.22 of the Gaussian elimination
method with total pivoting.
Total pivoting offers little advantage over partial pivoting and is significantly slower, requiring
N = n(n + 1)(2n + 1)/6 elements to be examined in total. It is rarely used in practice, because
interchanging columns changes the order of the x's and, consequently, adds significant and usually
unjustified complexity to the computer program. For good results, partial pivoting has been shown
to be a very reliable procedure.
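For completeness, a sketch of total pivoting (again an assumed analogue of TPivoting, not the original): the largest entry of the whole remaining submatrix is moved into the pivot position, and the column interchanges are recorded so the unknowns can be restored to their original order at the end.

```python
def gauss_total_pivot(A, b):
    """Gaussian elimination with total pivoting: at step k the largest
    |entry| of the remaining submatrix becomes the pivot; column swaps
    are recorded in `perm` and undone after back substitution."""
    n = len(A)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    perm = list(range(n))                  # tracks the order of the unknowns
    for k in range(n - 1):
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda t: abs(M[t[0]][t[1]]))
        M[k], M[p] = M[p], M[k]            # row interchange
        for row in M:                      # column interchange (rhs untouched)
            row[k], row[q] = row[q], row[k]
        perm[k], perm[q] = perm[q], perm[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    z = [0.0] * n                          # solution in permuted order
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * z[j] for j in range(i + 1, n))
        z[i] = (M[i][n] - s) / M[i][i]
    x = [0.0] * n
    for k in range(n):
        x[perm[k]] = z[k]                  # undo the column swaps
    return x

# Example 1.22
print(gauss_total_pivot([[1, 1, 1], [2, 3, 4], [4, 9, 16]], [1, 3, 11]))
# -> [1.0, -1.0, 1.0]
```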
The idea of this method is to convert the given matrix into a diagonal form. The forward elimination
of the Gauss-Jordan method is identical to that of the Gaussian elimination method. However,
Gauss-Jordan elimination uses backward elimination rather than backward substitution. In the
Gauss-Jordan method the forward elimination and backward elimination need not be separated.
This is possible because a pivot element can be used to eliminate the coefficients not only below but
also above it at the same time. If this approach is taken, the coefficient matrix becomes diagonal
when the elimination with the last pivot is completed. The Gauss-Jordan method simply yields a
transformation of the augmented matrix of the form

    [A|b] → [I|c],

where I is the identity matrix and c is the column matrix which represents the solution
of the given linear system.
The Gauss-Jordan method is particularly well suited to computing the inverse of a matrix through
the transformation

    [A|I] → [I|A−1].
Note that if the inverse of the matrix can be found, then the solution of the linear system can be
computed easily as the product of the matrix A−1 and the column matrix b, that is,

    x = A−1 b.        (1.25)
Note that one can easily obtain the solution of the linear system Ax = b and the inverse of the
coefficient matrix A together by the Gauss-Jordan method, using an augmented matrix of the form

    [A|b|I] → [I|x|A−1].
Example 1.23 Apply the Gauss-Jordan method to find the inverse of the coefficient matrix and
also the solution of the linear system Ax = b, where
    A = [ 1 2 ]    and    b = [ 1 ]
        [ 1 3 ]               [ 2 ].

Then we have

    [A|b|I] = [ 1 2 | 1 | 1 0 ] ≡ [ 1 2 | 1 |  1 0 ] ≡ [ 1 0 | −1 |  3 −2 ],
              [ 1 3 | 2 | 0 1 ]   [ 0 1 | 1 | −1 1 ]   [ 0 1 |  1 | −1  1 ]

so the solution is x = [−1, 1]T and the inverse is

    A−1 = [  3 −2 ]
          [ −1  1 ].
The above results can be obtained by using the author-defined function GaussJ and the
corresponding MATLAB commands:
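The combined transformation [A|b|I] → [I|x|A−1] can be sketched as follows (an assumed analogue of the author-defined GaussJ); it reproduces the 2 × 2 computation of Example 1.23.

```python
def gauss_jordan(A, b):
    """Gauss-Jordan elimination on [A | b | I]: each pivot clears its
    column both below and above, and each pivot row is scaled to 1,
    leaving [I | x | A^(-1)]."""
    n = len(A)
    M = [list(map(float, A[i])) + [float(b[i])] +
         [float(i == j) for j in range(n)] for i in range(n)]
    w = 2 * n + 1                               # width of the augmented matrix
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]          # scale pivot row to 1
        for i in range(n):
            if i != k and M[i][k] != 0.0:
                m = M[i][k]
                M[i] = [M[i][j] - m * M[k][j] for j in range(w)]
    x = [M[i][n] for i in range(n)]
    Ainv = [M[i][n + 1:] for i in range(n)]
    return x, Ainv

# Example 1.23
x, Ainv = gauss_jordan([[1, 2], [1, 3]], [1, 2])
print(x)      # -> [-1.0, 1.0]
print(Ainv)   # -> [[3.0, -2.0], [-1.0, 1.0]]
```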
where L is a lower-triangular matrix and U is an upper-triangular matrix, both of the same size
as the coefficient matrix A. To solve a number of linear systems in which the coefficient
matrices are all identical but the right-hand sides are different, the LU decomposition is more
efficient than the elimination method. Specifying the diagonal elements of either L or U makes the
factoring unique. The procedure based on unit elements on the diagonal of the matrix L is called
Doolittle's method (or Gauss factorization), while the procedure based on unit elements on the
diagonal of the matrix U is called Crout's method.
The general forms of L and U are written as

    L = [ l11 0   · · · 0   ]          U = [ u11 u12 · · · u1n ]
        [ l21 l22 · · · 0   ]   and       [ 0   u22 · · · u2n ]        (1.27)
        [ . . .             ]             [ . . .             ]
        [ ln1 ln2 · · · lnn ]             [ 0   0   · · · unn ],
and let A be factored into the product of L and U , as shown by (1.27). Then the linear system
(1.28) becomes

    LU x = b.

If we write U x = y, the system becomes Ly = b, which is solved for y; then U x = y is solved for x.
The unknown elements of the matrix L and the matrix U are computed by equating corresponding
elements in the matrices A and LU in a systematic way. Once the matrices L and U have been
constructed, the solution of the system (1.28) can be computed in the following two steps:
By using forward substitution, we find the components of the unknown vector y from

    y1 = b1,
    yi = bi − Σ_{j=1}^{i−1} lij yj,    i = 2, 3, . . . , n.        (1.29)

By using backward substitution, we find the components of the unknown vector x from

    xn = yn / unn,
    xi = (1/uii) ( yi − Σ_{j=i+1}^{n} uij xj ),    i = n − 1, n − 2, . . . , 1.        (1.30)
Thus the relationship of the matrices L and U to the original matrix A is given by the following
theorem.
Theorem 1.12 If the Gaussian elimination can be performed on the linear system Ax = b without
row interchanges, then the matrix A can be factored into the product of a lower-triangular matrix
L and an upper-triangular matrix U , that is,

    A = LU.

If A has rank n (that is, all pivots are non-zero), then L and U are uniquely determined by A. •
Now we discuss the two possible variations of the LU decomposition to find the solution of the
nonsingular linear system in the following.
Example 1.24 Construct the LU decomposition of the following matrix A by using the Gauss
factorization (that is, the LU decomposition by Doolittle's method).
Solution. Applying the forward elimination of simple Gaussian elimination to the given matrix

    A = [ 1 2 1 ]
        [ 2 5 3 ],
        [ 1 3 4 ]

using the multiples m21 = 2 = l21 , m31 = 1 = l31 , and m32 = 1 = l32 , we obtain

    [ 1 2 1 ]   [ 1 2 1 ]
    [ 0 1 1 ] ≡ [ 0 1 1 ] = U.
    [ 0 1 3 ]   [ 0 0 2 ]

Hence we obtain the LU decomposition of the given matrix as follows:

    [ 1 2 1 ]   [ 1 0 0 ] [ 1 2 1 ]
    [ 2 5 3 ] = [ 2 1 0 ] [ 0 1 1 ],
    [ 1 3 4 ]   [ 1 1 1 ] [ 0 0 2 ]

where the unknown elements of the matrix L are the multiples used, and the matrix U is the same
as obtained in the forward elimination process. •
Example 1.25 Construct the LU decomposition of the following matrix by the Gauss
factorization (that is, the LU decomposition by Doolittle's method), and find the value(s) of α
for which the matrix

    A = [  1 −1  α ]
        [ −1  2 −α ]
        [  α  1  1 ]

is singular. Also, find the unique solution of the linear system Ax = [1, 1, 2]T using the smallest
positive integer value of α.
Proceeding as before, we obtain

    A = LU = [  1    0   0 ] [ 1 −1 α      ]
             [ −1    1   0 ] [ 0  1 0      ],
             [  α  1 + α 1 ] [ 0  0 1 − α² ]

which is the required decomposition of A. The matrix is singular if the third diagonal element
1 − α² of the upper-triangular matrix U is equal to zero (Theorem 1.11), which gives α = ±1.
To find the unique solution of the given system we take α = 2, which gives

    [  1 −1  2 ]   [  1 0 0 ] [ 1 −1  2 ]
    [ −1  2 −2 ] = [ −1 1 0 ] [ 0  1  0 ].
    [  2  1  1 ]   [  2 3 1 ] [ 0  0 −3 ]
We use the author-defined function LU-Gauss and the corresponding MATLAB commands to factor
a nonsingular matrix A into a unit lower-triangular matrix L and an upper-triangular matrix U .
The following MATLAB function generates the solution of the lower-triangular system Ly = b by
forward substitution:

    function y = ForwardSubs(L,b)
    [n,n] = size(L); y = zeros(n,1);
    for k = 1:n
        y(k) = (b(k) - L(k,1:k-1)*y(1:k-1))/L(k,k);
    end

and the following MATLAB function generates the solution of the upper-triangular system U x = y
by backward substitution:

    function x = BackwardSubs(U,y)
    [n,n] = size(U); x = zeros(n,1); x(n) = y(n)/U(n,n);
    for k = n-1:-1:1
        x(k) = (y(k) - U(k,k+1:n)*x(k+1:n))/U(k,k);
    end
There is another way to find the values of the unknown elements of the matrices L and U , which
we describe in the following example.
Example 1.26 Construct the LU decomposition of the following matrix using Doolittle's method

    A = [ 1 2 4 ]
        [ 1 3 3 ].
        [ 2 2 2 ]
Solution. Since

    A = LU = [ 1   0   0 ] [ u11 u12 u13 ]
             [ l21 1   0 ] [ 0   u22 u23 ],
             [ l31 l32 1 ] [ 0   0   u33 ]

performing the multiplication on the right-hand side gives

    [ 1 2 4 ]   [ u11      u12                u13                       ]
    [ 1 3 3 ] = [ l21 u11  l21 u12 + u22      l21 u13 + u23             ].
    [ 2 2 2 ]   [ l31 u11  l31 u12 + l32 u22  l31 u13 + l32 u23 + u33   ]
Then equating elements of the first column, we obtain u11 = 1, l21 = 1, l31 = 2. Now equating
elements of the second column, we obtain

    u12 = 2,    u22 = 1,    l32 = −2.

Finally, equating elements of the third column, we obtain

    u13 = 4,    u23 = −1,    u33 = −8.

Thus we obtain
    [ 1 2 4 ]   [ 1  0 0 ] [ 1 2  4 ]
    [ 1 3 3 ] = [ 1  1 0 ] [ 0 1 −1 ],
    [ 2 2 2 ]   [ 2 −2 1 ] [ 0 0 −8 ]

the factorization of the given matrix. •
The general formulas for getting the elements of L and U corresponding to the coefficient matrix A
of a set of n linear equations can be written as

    u1j = a1j,                                           i = 1,
    li1 = ai1/u11 = ai1/a11,                             j = 1,
    uij = aij − Σ_{k=1}^{i−1} lik ukj,                   2 ≤ i ≤ j,        (1.31)
    lij = (1/ujj) ( aij − Σ_{k=1}^{j−1} lik ukj ),       i > j ≥ 2.
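Formulas (1.31) can be implemented directly; the sketch below is an illustration rather than the book's code, and it reproduces the factorization of Example 1.26.

```python
def doolittle(A):
    """Doolittle LU decomposition via (1.31): unit diagonal on L,
    computing a row of U, then a column of L, at each stage."""
    n = len(A)
    A = [list(map(float, row)) for row in A]
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for r in range(i + 1, n):    # column i of L
            L[r][i] = (A[r][i] - sum(L[r][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# Example 1.26
L, U = doolittle([[1, 2, 4], [1, 3, 3], [2, 2, 2]])
print(L)   # -> [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, -2.0, 1.0]]
print(U)   # -> [[1.0, 2.0, 4.0], [0.0, 1.0, -1.0], [0.0, 0.0, -8.0]]
```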
Example 1.27 Solve the following linear system by LU decomposition using Doolittle's method:

    A = [ 1 2 4 ]              [ −2 ]
        [ 1 3 3 ]    and   b = [  3 ].
        [ 2 2 2 ]              [ −6 ]
Solution. The factorization of the coefficient matrix A has already been constructed in
Example 1.26 as

    [ 1 2 4 ]   [ 1  0 0 ] [ 1 2  4 ]
    [ 1 3 3 ] = [ 1  1 0 ] [ 0 1 −1 ].
    [ 2 2 2 ]   [ 2 −2 1 ] [ 0 0 −8 ]

Then solving the first system Ly = b for the unknown vector y, that is,

    [ 1  0 0 ] [ y1 ]   [ −2 ]
    [ 1  1 0 ] [ y2 ] = [  3 ],
    [ 2 −2 1 ] [ y3 ]   [ −6 ]

and performing forward substitution yields [y1 , y2 , y3 ]T = [−2, 5, 8]T . Then solving the second
system U x = y for the unknown vector x, that is,

    [ 1 2  4 ] [ x1 ]   [ −2 ]
    [ 0 1 −1 ] [ x2 ] = [  5 ],
    [ 0 0 −8 ] [ x3 ]   [  8 ]

and using backward substitution gives the solution [x1 , x2 , x3 ]T = [−6, 4, −1]T .
Performing backward substitution yields the solution [x1, x2, x3]^T = [−6, 4, −1]^T. The author-defined MATLAB function Doolittle can be used to obtain the same solution of the linear system by LU decomposition using Doolittle's method.
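The two-stage solve (forward substitution on Ly = b, then backward substitution on Ux = y) can be sketched in Python as follows; the helper names are illustrative, and the factors are those computed in Example 1.26.

```python
def forward_sub(L, b):
    """Solve L y = b for y, where L is lower-triangular."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    return y

def backward_sub(U, y):
    """Solve U x = y for x, where U is upper-triangular."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Factors of A from Example 1.26 and the right-hand side of Example 1.27:
L = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, -2.0, 1.0]]
U = [[1.0, 2.0, 4.0], [0.0, 1.0, -1.0], [0.0, 0.0, -8.0]]
y = forward_sub(L, [-2.0, 3.0, -6.0])   # y = [-2, 5, 8]
x = backward_sub(U, y)                   # x = [-6, 4, -1]
```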
Example 1.28 Use the LU-factorization method with Doolittle's method (lii = 1) to find the values of α for which the following linear system has a unique solution and infinitely many solutions. Write down the solution for both cases.
\[
\begin{aligned}
x_1 + 0.5x_2 + \alpha x_3 &= 0.5\\
2x_1 - 3x_2 + x_3 &= -1\\
-x_1 - 1.5x_2 + 2.5x_3 &= -1
\end{aligned}
\]
Solution. We use the simple Gauss-elimination method to reduce the coefficient matrix of the given system, using the multiples m21 = 2, m31 = −1 and m32 = 1/4:
\[
A = \begin{bmatrix} 1 & 0.5 & \alpha \\ 2 & -3 & 1 \\ -1 & -1.5 & 2.5 \end{bmatrix} \equiv
\begin{bmatrix} 1 & 0.5 & \alpha \\ 0 & -4 & 1-2\alpha \\ 0 & 0 & 1.5\alpha + 2.25 \end{bmatrix} = U.
\]
Solving the lower-triangular system of the form Ly = [0.5, −1, −1]^T, we obtain the solution y = [0.5, −2, 0]^T. Now solving the upper-triangular system Ux = y of the form
\[
\begin{bmatrix} 1 & 0.5 & \alpha \\ 0 & -4 & 1-2\alpha \\ 0 & 0 & 1.5\alpha+2.25 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 0.5 \\ -2 \\ 0 \end{bmatrix}.
\]
The system has a unique solution whenever 1.5α + 2.25 ≠ 0, that is, α ≠ −3/2; backward substitution then gives x3 = 0, x2 = 1/2, x1 = 1/4. When α = −3/2 the last equation reduces to 0 = 0, so x3 = t is free and the system has infinitely many solutions x = [1/4 + t, 1/2 + t, t]^T. •
Example 1.29 Use the LU-factorization method with Doolittle's method (lii = 1) to find the constant α such that the following homogeneous linear system has non-trivial solutions. Find these solutions.
\[
\begin{aligned}
x_1 + x_2 &= 0\\
3x_1 + \alpha x_2 + 5x_3 &= 0\\
7x_2 + 3x_3 &= 0
\end{aligned}
\]
Solution. Doolittle's method gives the factorization
\[
A = \begin{bmatrix} 1 & 1 & 0 \\ 3 & \alpha & 5 \\ 0 & 7 & 3 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & \dfrac{7}{\alpha-3} & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 0 \\ 0 & \alpha-3 & 5 \\ 0 & 0 & 3 - \dfrac{35}{\alpha-3} \end{bmatrix} = LU,
\]
where det(L) = 1 because L is a lower-triangular matrix and all its diagonal elements are unity. Thus the determinant of the given matrix A is
\[
|A| = \det(L)\det(U) = (1)(\alpha-3)\Bigl(3 - \frac{35}{\alpha-3}\Bigr) = 3\alpha - 44.
\]
So |A| = 0 gives α = 44/3, and for this value of α we have non-trivial solutions. By solving the lower-triangular system of the form Ly = [0, 0, 0]^T, we obtain the solution y = [0, 0, 0]^T. Now solving the upper-triangular system Ux = y of the form
\[
\begin{bmatrix} 1 & 1 & 0 \\ 0 & 35/3 & 5 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
\]
we find that x3 = t is free, x2 = −3t/7 and x1 = 3t/7, so the non-trivial solutions are x = s[3, −3, 7]^T for any s ≠ 0. •
Since A = LU, the inverse satisfies A^{-1} = U^{-1}L^{-1}, and therefore
\[
U A^{-1} = L^{-1}, \quad \text{where} \quad LL^{-1} = I.
\]
A practical way of calculating the determinant is to use the forward-elimination process of Gaussian elimination or, alternatively, the LU decomposition. If no pivoting is used, calculating the determinant from the LU decomposition is very easy: for Doolittle's method det(L) = 1, so det(A) = det(U) = u11 u22 · · · unn, the product of the pivots.
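As an illustrative Python sketch (assuming no pivoting is required), the determinant can be read off as the product of the pivots left on the diagonal after forward elimination:

```python
def det_via_lu(A):
    """Determinant via LU decomposition without pivoting: reduce A to
    upper-triangular form U by Gaussian elimination; then
    det(A) = u_11 u_22 ... u_nn, the product of the pivots."""
    n = len(A)
    U = [row[:] for row in A]          # work on a copy of A
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]      # elimination multiplier m_ij
            for k in range(j, n):
                U[i][k] -= m * U[j][k]
    det = 1.0
    for i in range(n):
        det *= U[i][i]
    return det

# Matrix of Example 1.26: pivots 1, 1, -8, so the determinant is -8.
d = det_via_lu([[1.0, 2.0, 4.0], [1.0, 3.0, 3.0], [2.0, 2.0, 2.0]])
```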
Example 1.30 Find the determinant and inverse of the following matrix using LU decomposition by Doolittle's method.
\[
A = \begin{bmatrix} 1 & -2 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & 2 \end{bmatrix}.
\]
Solution. Since we know that
\[
A = \begin{bmatrix} 1 & -2 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & 2 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & m_{32} & 1 \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} = LU.
\]
Equating elements gives the multipliers m21 = 1, m31 = 1, m32 = 3 and the upper-triangular factor U with rows [1, −2, 1], [0, 1, 0] and [0, 0, 1], so det(A) = u11 u22 u33 = 1. To find A^{-1} we solve LUx^{(i)} = e_i for each column x^{(i)} of the inverse. For the first column, forward substitution on Ly = [1, 0, 0]^T gives y = [1, −1, 2]^T, and solving
\[
\begin{bmatrix} 1 & -2 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} a'_{11} \\ a'_{21} \\ a'_{31} \end{bmatrix} =
\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}
\]
by backward substitution, we get a'11 = −3, a'21 = −1, a'31 = 2. Similarly, the solution of the
second linear system
\[
\begin{bmatrix} 1 & -2 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} a'_{12} \\ a'_{22} \\ a'_{32} \end{bmatrix} =
\begin{bmatrix} 0 \\ 1 \\ -3 \end{bmatrix},
\]
can be obtained as a'12 = 5, a'22 = 1, a'32 = −3. Finally, the solution of the third linear system
\[
\begin{bmatrix} 1 & -2 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} a'_{13} \\ a'_{23} \\ a'_{33} \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},
\]
can be obtained as a'13 = −1, a'23 = 0, a'33 = 1. Hence the elements of the inverse matrix A^{-1} are
\[
A^{-1} = \begin{bmatrix} -3 & 5 & -1 \\ -1 & 1 & 0 \\ 2 & -3 & 1 \end{bmatrix}.
\]
Let D denote the diagonal matrix having the same diagonal elements as the upper-triangular matrix U; in other words, D contains the pivots on its diagonal and zeros everywhere else. Let V be the upper-triangular matrix obtained from the original upper-triangular matrix U by dividing each row by its pivot, so that V has all 1's on the diagonal. It is easily seen that U = DV, which allows any LU decomposition to be written as
\[
A = LDV,
\]
where L and V are lower- and upper-triangular matrices with 1's on both of their diagonals. This is called the LDV factorization of A.
Thus
\[
A = LDV = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & 3 \end{bmatrix}
\begin{bmatrix} 1 & 2 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix},
\]
is the LDV factorization of the given matrix A. •
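The splitting U = DV described above amounts to reading the pivots off U's diagonal and dividing each row of U by its pivot. A minimal Python sketch (the helper name `split_dv` is an illustrative choice; the sample matrix is the product DV of the factorization shown above):

```python
def split_dv(U):
    """Split an upper-triangular matrix U into U = D V, where D holds
    the pivots on its diagonal and V is unit upper-triangular (each
    row of U divided by its pivot)."""
    n = len(U)
    D = [[U[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    V = [[U[i][j] / U[i][i] for j in range(n)] for i in range(n)]
    return D, V

# U = DV from the LDV factorization above (D = diag(1, -4, 3)):
D, V = split_dv([[1.0, 2.0, -1.0], [0.0, -4.0, -16.0], [0.0, 0.0, 3.0]])
```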
If the given matrix A is symmetric, then there is a connection between the lower-triangular matrix L and the upper-triangular matrix U in the LU decomposition. In the first elimination step, the elements of L's first column are obtained by dividing the entries of U's first row by the diagonal element. Similarly, during the second elimination step, l32 = u23/u22. In general, when a symmetric matrix is decomposed without pivoting, lij is related to uji through the identity
\[
l_{ij} = \frac{u_{ji}}{u_{jj}}.
\]
In other words, each column of the matrix L equals the corresponding row of the matrix U divided by its diagonal element. It follows that the LDV decomposition of a symmetric matrix has the form LDL^T, and this form is uniquely determined.
Since A = LDV, taking the transpose we get
\[
A^T = (LDV)^T = V^T D^T L^T = V^T D L^T
\]
(the diagonal matrix D is symmetric). If A is symmetric, then A = A^T, and the uniqueness of the LDV decomposition implies that
\[
L = V^T \quad \text{and} \quad V = L^T.
\]
Note that not every symmetric matrix has an LDL^T factorization. However, if A = LDL^T, then A must be symmetric because
\[
A^T = (LDL^T)^T = L D^T L^T = LDL^T = A.
\]
Example 1.32 Find the LDL^T factorization of the following symmetric matrix
\[
A = \begin{bmatrix} 1 & 3 & 2 \\ 3 & 4 & 1 \\ 2 & 1 & 2 \end{bmatrix}.
\]
Solution. By using Doolittle's method the LU decomposition of A can be obtained as
\[
A = \begin{bmatrix} 1 & 3 & 2 \\ 3 & 4 & 1 \\ 2 & 1 & 2 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 2 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 3 & 2 \\ 0 & -5 & -5 \\ 0 & 0 & 3 \end{bmatrix} = LU.
\]
Then the matrix D and the matrix V can be obtained as follows:
\[
U = \begin{bmatrix} 1 & 3 & 2 \\ 0 & -5 & -5 \\ 0 & 0 & 3 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & 3 \end{bmatrix}
\begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = DV.
\]
Note that
\[
V = \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = L^T.
\]
Thus we obtain
\[
A = LDL^T = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 2 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & 3 \end{bmatrix}
\begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix},
\]
the LDL^T factorization of the given matrix A. •
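A minimal Python sketch of the LDL^T construction, assuming no pivoting is needed: compute the Doolittle factors and read the pivots off U's diagonal; by the symmetry discussion above, V = L^T need not be formed explicitly. The function name `ldlt` is an illustrative choice.

```python
def ldlt(A):
    """LDL^T factorization of a symmetric matrix (no pivoting):
    compute the Doolittle factors by elimination, then take the
    pivots on U's diagonal as D; since A is symmetric, V = L^T."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for j in range(n - 1):                 # Doolittle elimination
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]
            for k in range(j, n):
                U[i][k] -= L[i][j] * U[j][k]
    d = [U[i][i] for i in range(n)]        # pivots: diagonal of D
    return L, d

# Symmetric matrix of Example 1.32: expect pivots d = [1, -5, 3].
L, d = ldlt([[1.0, 3.0, 2.0], [3.0, 4.0, 1.0], [2.0, 1.0, 2.0]])
```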
Example 1.33 Construct the LU decomposition of the following matrix using Crout's method
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 6 & 5 & 4 \\ 2 & 5 & 6 \end{bmatrix}.
\]
Solution. Since
\[
A = LU = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{bmatrix}.
\]
Performing the multiplication on the right-hand side gives
\[
\begin{bmatrix} 1 & 2 & 3 \\ 6 & 5 & 4 \\ 2 & 5 & 6 \end{bmatrix} =
\begin{bmatrix} l_{11} & l_{11}u_{12} & l_{11}u_{13} \\ l_{21} & l_{21}u_{12}+l_{22} & l_{21}u_{13}+l_{22}u_{23} \\ l_{31} & l_{31}u_{12}+l_{32} & l_{31}u_{13}+l_{32}u_{23}+l_{33} \end{bmatrix}.
\]
Equating elements of the first column gives l11 = 1, l21 = 6, l31 = 2. Equating elements of the second column gives
\[
u_{12} = 2, \quad l_{22} = -7, \quad l_{32} = 1.
\]
Finally, equating elements of the third column gives u13 = 3, u23 = 2 and l33 = −2. Thus we get
\[
\begin{bmatrix} 1 & 2 & 3 \\ 6 & 5 & 4 \\ 2 & 5 & 6 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 6 & -7 & 0 \\ 2 & 1 & -2 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix},
\]
the factorization of the given matrix. •
The general formulas for the elements of L and U corresponding to the coefficient matrix A of a set of n linear equations can be written as
\[
\begin{aligned}
l_{ij} &= a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj}, && i \ge j,\ j = 2, 3, \ldots, n,\\[2pt]
u_{ij} &= \frac{1}{l_{ii}}\Bigl[a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj}\Bigr], && i < j,\ i = 2, 3, \ldots, n,\\[2pt]
l_{i1} &= a_{i1}, && j = 1,\\[2pt]
u_{1j} &= \frac{a_{1j}}{a_{11}}, && i = 1.
\end{aligned}
\tag{1.33}
\]
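Formula (1.33) can be sketched in Python in direct analogy with the Doolittle sketch earlier (function name `crout` and the list representation are illustrative choices; no pivoting is performed):

```python
def crout(A):
    """Crout LU decomposition (no pivoting): A = L U, where L is
    lower-triangular and U is unit upper-triangular. Implements
    formula (1.33) one pivot column/row at a time."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):              # l_ij for i >= j
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):          # u_ji for entries right of the pivot
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

# The matrix of Example 1.33:
L, U = crout([[1.0, 2.0, 3.0], [6.0, 5.0, 4.0], [2.0, 5.0, 6.0]])
```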
Example 1.34 Solve the following linear system by LU decomposition using Crout's method
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 6 & 5 & 4 \\ 2 & 5 & 6 \end{bmatrix}
\quad \text{and} \quad
b = \begin{bmatrix} 1 \\ -1 \\ 5 \end{bmatrix}.
\]
Solution. The factorization of the coefficient matrix A has already been constructed in Example 1.33 as
\[
\begin{bmatrix} 1 & 2 & 3 \\ 6 & 5 & 4 \\ 2 & 5 & 6 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 6 & -7 & 0 \\ 2 & 1 & -2 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}.
\]
Solving the first system Ly = b by forward substitution yields [y1, y2, y3]^T = [1, 1, −1]^T. Then solving the second system Ux = y for the unknown vector x, that is,
\[
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix},
\]
backward substitution gives the solution x = [−2, 3, −1]^T. •
Note that we can also find the LU decomposition of a matrix A by using the simple Gauss-elimination method. We start with the product IA and convert it to the equivalent form LU; that is, we convert the right matrix A to a unit upper-triangular matrix U. We describe the procedure in the following example.
Example 1.35 Solve the following system using LU decomposition by Crout's method
\[
\begin{aligned}
x_1 + 2x_2 &= 3\\
-x_1 - 2x_3 &= 1\\
-3x_1 - 5x_2 + x_3 &= 1
\end{aligned}
\]
Solution. Crout's method makes the LU factorization a byproduct of Gaussian elimination. To illustrate, the coefficient matrix of the given system is
\[
\begin{bmatrix} 1 & 2 & 0 \\ -1 & 0 & -2 \\ -3 & -5 & 1 \end{bmatrix}.
\]
The process begins with the product form
\[
IA = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 0 \\ -1 & 0 & -2 \\ -3 & -5 & 1 \end{bmatrix}.
\]
In each of the steps below, we arrange matters so that the product of the two matrices always equals the original matrix A. The first step of Gaussian elimination on the right factor is to divide the first row by its pivot element; Crout's rule copies that pivot into the matching element of the left factor at the same time. The next step of Gaussian elimination is to eliminate all the elements below the pivot, by subtracting the appropriate multiple of the first row from each of the remaining n − 1 rows; Crout's rule copies each of those multipliers into the matching element of the left factor. We repeat the same procedure for the remaining pivot elements. After the first pivot we obtain
\[
A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -3 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 0 \\ 0 & 2 & -2 \\ 0 & 1 & 1 \end{bmatrix}.
\]
Continuing with the second pivot (copy 2 into l22, divide the second row of the right factor by 2, then eliminate the entry below it, copying the multiplier 1 into l32) and the third pivot (copy 2 into l33 and divide the third row by 2) gives
\[
A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ -3 & 1 & 2 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} = LU.
\]
Performing forward substitution on Ly = [3, 1, 1]^T yields [y1, y2, y3]^T = [3, 2, 4]^T. Then solving the second system Ux = y for the unknown vector x, that is,
\[
\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 3 \\ 2 \\ 4 \end{bmatrix},
\]
backward substitution gives the solution x = [−9, 6, 4]^T. •
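The pivot-copy procedure described in Example 1.35 can be sketched in Python as follows (the function name `crout_by_elimination` is an illustrative choice, and the sketch assumes no pivot is zero):

```python
def crout_by_elimination(A):
    """Crout factorization as a byproduct of Gaussian elimination:
    at each step copy the pivot into L, divide the pivot row of the
    right factor by it, then copy the elimination multipliers into L
    while zeroing the entries below the pivot."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [row[:] for row in A]              # right factor, reduced in place
    for j in range(n):
        L[j][j] = U[j][j]                  # copy the pivot into L
        for k in range(j, n):              # divide the pivot row by the pivot
            U[j][k] /= L[j][j]
        for i in range(j + 1, n):
            L[i][j] = U[i][j]              # copy the multiplier into L
            for k in range(j, n):          # eliminate below the pivot
                U[i][k] -= L[i][j] * U[j][k]
    return L, U

# Coefficient matrix of Example 1.35:
L, U = crout_by_elimination([[1.0, 2.0, 0.0], [-1.0, 0.0, -2.0], [-3.0, -5.0, 1.0]])
```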
Example 1.36 Use LU decomposition by Crout's method to find the value(s) of α for which the following matrix
\[
A = \begin{bmatrix} 1 & 1 & \alpha \\ 1 & \alpha & 1 \\ \alpha & 1 & 1 \end{bmatrix},
\]
is singular. Compute the unique solution of the linear system Ax = [1, 1, −2]^T by using the smallest positive integer value of α.
Solution. Using the procedure of Example 1.35, we obtain the factorization of the matrix
\[
A = IA = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & \alpha \\ 1 & \alpha & 1 \\ \alpha & 1 & 1 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 1 & \alpha-1 & 0 \\ \alpha & 1-\alpha & 2-\alpha-\alpha^2 \end{bmatrix}
\begin{bmatrix} 1 & 1 & \alpha \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} = LU.
\]
The matrix is singular when a diagonal element of L vanishes, that is, when α − 1 = 0 or 2 − α − α² = −(α − 1)(α + 2) = 0, giving α = 1 and α = −2. The smallest positive integer value for which A is nonsingular is therefore α = 2. Solving Ly = [1, 1, −2]^T by forward substitution gives y = [1, 0, 1]^T, and backward substitution on Ux = y then gives x = [−2, 1, 1]^T, the solution of the given system. One can check that for α = 1 the system has no solution and for α = −2 it has infinitely many solutions. •
The inverses of the triangular factors satisfy
\[
U^{-1}U = I \quad \text{and} \quad L^{-1}L = I.
\]
Example 1.37 Use Crout's method to find L^{-1}, U^{-1}, and the determinant of the inverse of the following matrix.
\[
A = \begin{bmatrix} 2 & -2 & 1 \\ 5 & 1 & -3 \\ 3 & 4 & 1 \end{bmatrix}.
\]
Solution. First we compute the LU decomposition of A, which is
\[
A = \begin{bmatrix} 2 & -2 & 1 \\ 5 & 1 & -3 \\ 3 & 4 & 1 \end{bmatrix} =
\begin{bmatrix} 2 & 0 & 0 \\ 5 & 6 & 0 \\ 3 & 7 & 71/12 \end{bmatrix}
\begin{bmatrix} 1 & -1 & 1/2 \\ 0 & 1 & -11/12 \\ 0 & 0 & 1 \end{bmatrix} = LU.
\]
To find the inverse of the matrix A, first we compute the inverse of the lower-triangular matrix L from
\[
LL^{-1} = \begin{bmatrix} 2 & 0 & 0 \\ 5 & 6 & 0 \\ 3 & 7 & 71/12 \end{bmatrix}
\begin{bmatrix} l'_{11} & 0 & 0 \\ l'_{21} & l'_{22} & 0 \\ l'_{31} & l'_{32} & l'_{33} \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I.
\]
Equating elements column by column gives
\[
L^{-1} = \begin{bmatrix} 1/2 & 0 & 0 \\ -5/12 & 1/6 & 0 \\ 17/71 & -14/71 & 12/71 \end{bmatrix},
\]
which is the required inverse of the lower-triangular matrix L. To find the inverse U^{-1}, let
\[
U^{-1} = \begin{bmatrix} 1 & a'_{12} & a'_{13} \\ 0 & 1 & a'_{23} \\ 0 & 0 & 1 \end{bmatrix},
\quad \text{then} \quad
U^{-1}U = \begin{bmatrix} 1 & -1 + a'_{12} & \tfrac{1}{2} - \tfrac{11}{12}a'_{12} + a'_{13} \\ 0 & 1 & -\tfrac{11}{12} + a'_{23} \\ 0 & 0 & 1 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I,
\]
which gives a'12 = 1, a'23 = 11/12 and a'13 = 5/12, that is,
\[
U^{-1} = \begin{bmatrix} 1 & 1 & 5/12 \\ 0 & 1 & 11/12 \\ 0 & 0 & 1 \end{bmatrix}.
\]
Finally, since A^{-1} = U^{-1}L^{-1} and det(U^{-1}) = 1,
\[
\det(A^{-1}) = \det(U^{-1})\det(L^{-1}) = \det(L^{-1}) = \Bigl(\frac{1}{2}\Bigr)\Bigl(\frac{1}{6}\Bigr)\Bigl(\frac{12}{71}\Bigr) = \frac{1}{71}.
\]
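The column-by-column inversion of a triangular factor used in Example 1.37 can be sketched in Python with exact rational arithmetic from the standard fractions module, which keeps entries such as 17/71 exact; the helper name `lower_inverse` is an illustrative choice.

```python
from fractions import Fraction

def lower_inverse(L):
    """Invert a lower-triangular matrix column by column: the j-th
    column of L^{-1} solves L x = e_j by forward substitution."""
    n = len(L)
    inv = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):              # entries above the diagonal stay 0
            e = Fraction(1) if i == j else Fraction(0)
            s = sum(L[i][k] * inv[k][j] for k in range(j, i))
            inv[i][j] = (e - s) / L[i][i]
    return inv

# Lower-triangular factor of Example 1.37:
L = [[Fraction(2), 0, 0],
     [Fraction(5), Fraction(6), 0],
     [Fraction(3), Fraction(7), Fraction(71, 12)]]
Linv = lower_inverse(L)
# det(L^{-1}) is the product of the diagonal entries: (1/2)(1/6)(12/71) = 1/71
```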