Linear System of Equations (Additional)
A linear equation in the two variables x1 and x2 is an equation of the form
a1 x1 + a2 x2 = b,
where a1 , a2 and b are real numbers. Note that this is the equation of a straight
line in the plane. For example, the equations
5x1 + 2x2 = 2,   (4/5)x1 + 2x2 = 1,   2x1 − 4x2 = π,
are all linear. More generally, a linear equation in the n variables x1, x2, . . . , xn has the form
a1 x1 + a2 x2 + · · · + an xn = b,
For such a system we seek all possible ordered sets of numbers c1, . . . , cn which
satisfy all m equations when they are substituted for the variables x1, x2, . . . , xn.
Any such set {c1, c2, . . . , cn} is called a solution of the system of linear equations
(1) or (2).
Theorem 3
(Solution of a Linear System)
Every system of linear equations has either no solution, exactly one solution, or
infinitely many solutions. •
Linear System in Matrix Notation
The system of linear equations (3) can be written as a single matrix equation:
if A, x, and b denote the coefficient matrix, the column matrix of unknowns, and
the column matrix of constants, respectively, then the system (3) can be written
very compactly as
Ax = b, (5)
which is called the matrix form of the system of linear equations (3). The column
matrices x and b are called vectors.
Solutions of Linear Systems of Equations
Now we shall discuss numerical methods for solving systems of linear equations.
We shall discuss both direct and indirect (iterative) methods for the solution of
a given linear system. As a direct method we shall discuss the familiar technique
called the method of elimination. This method starts with the augmented matrix of
the given linear system and obtains a matrix of a certain form; this new matrix
represents a linear system that has exactly the same solutions as the given
original system. As indirect methods we shall discuss the Jacobi and Gauss-Seidel
methods.
Gaussian Elimination Method
Simple Gaussian Elimination Method
The Gaussian elimination procedure starts with forward elimination, in which the
first equation in the linear system is used to eliminate the first variable from the
remaining (n − 1) equations. Then the new second equation is used to eliminate the
second variable from the remaining (n − 2) equations, and so on. If (n − 1) such
eliminations are performed, the resulting system will be in triangular form.
Once this forward elimination is completed, we can determine whether the system
is overdetermined or underdetermined or has a unique solution. If it has a unique
solution, then backward substitution is used to solve the triangular system
easily, yielding the unknown variables involved in the system.
Now we shall describe the method in detail for a system of n linear equations.
Consider the following system of n linear equations:
Forward Elimination
Consider the first equation of the given system (6)
as the first pivotal equation, with first pivot element a11. Then the first equation
times the multiples mi1 = (ai1/a11), i = 2, 3, . . . , n, is subtracted from the ith
equation to eliminate the first variable x1, producing an equivalent system. Next,
take the second equation of this new system
as the second pivotal equation, with second pivot element a22(1). Then the second
equation times the multiples mi2 = (ai2(1)/a22(1)), i = 3, . . . , n, is subtracted from
the ith equation to eliminate the second variable x2, producing an equivalent system.
Then take the third equation
as the third pivotal equation, with third pivot element a33(2), and subtract the third
equation times the multiples mi3 = (ai3(2)/a33(2)), i = 4, . . . , n, from the ith
equation to eliminate the third variable x3. Similarly, after the (n − 1)th step we
have the nth pivotal equation, which contains only the one unknown variable xn,
with nth pivot element ann(n−1). Having obtained this upper-triangular system, which
is equivalent to the original system, the forward elimination is complete.
After the triangular set of equations has been obtained, the last equation of the
system (12) yields the value of xn directly. This value is then substituted into the
next-to-last equation of the system (12) to obtain a value of xn−1, which is, in
turn, used along with the value of xn in the equation above that to obtain a value
of xn−2, and so on.
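The two phases just described can be sketched in MATLAB as follows (a minimal illustration in the spirit of the text's programs; the function name GaussSimple is ours, and no pivoting or zero-pivot check is included):

    function x = GaussSimple(A, b)
    % Simple Gaussian elimination: forward elimination to triangular
    % form, then backward substitution. Assumes all pivots are nonzero.
    n = length(b);
    for k = 1:n-1                          % kth pivotal equation
        for i = k+1:n
            m = A(i,k) / A(k,k);           % multiple m_ik = a_ik/a_kk
            A(i,k:n) = A(i,k:n) - m * A(k,k:n);
            b(i) = b(i) - m * b(k);
        end
    end
    x = zeros(n,1);
    for i = n:-1:1                         % backward substitution
        x(i) = (b(i) - A(i,i+1:n) * x(i+1:n)) / A(i,i);
    end

For instance, GaussSimple([1 2 1; 2 5 3; 1 3 4], [2; 1; 5]) reproduces the solution [11; −6; 3] of Example 0.1, which follows.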
Example 0.1
Solve the following linear system using the simple Gaussian elimination method
x1 + 2x2 + x3 = 2
2x1 + 5x2 + 3x3 = 1
x1 + 3x2 + 4x3 = 5
Solution. Applying forward elimination (subtracting the multiple m21 = 2 of row 1
from row 2, m31 = 1 of row 1 from row 3, and then m32 = 1 of the new row 2 from the
new row 3) gives the upper-triangular system
x1 + 2x2 + x3 = 2
x2 + x3 = −3
2x3 = 6
Backward substitution now yields:
2x3 = 6, gives x3 = 3,
x2 = −x3 − 3 = −(3) − 3 = −6, gives x2 = −6,
x1 = 2 − 2x2 − x3 = 2 − 2(−6) − 3 = 11, gives x1 = 11,
the solution of the given system. •
Example 0.2
Consider the following linear system:
x2 + x3 = 1
x1 + 2x2 + 2x3 = 1
2x1 + x2 + 2x3 = 3
To solve this system, the simple Gaussian elimination method will fail
immediately because the element in the first row on the leading diagonal, the
pivot, is zero. Thus it is impossible to divide that row by the pivot value. Clearly,
this difficulty can be overcome by rearranging the order of the rows; for example,
making the first row the second gives the augmented matrix
1  2  2 | 1
0  1  1 | 1
2  1  2 | 3
Now we use the usual elimination process. The first elimination step is to
eliminate the element a31 = 2 from the third row by subtracting the multiple
m31 = 2/1 = 2 of row 1 from row 3, which gives
1  2  2 | 1
0  1  1 | 1
0 −3 −2 | 1
We are finished with the first elimination step, since the element a21 is already
zero in the second row. The second elimination step is to eliminate the element
a32(1) = −3 from the third row by subtracting the multiple m32 = −3/1 = −3 of row
2 from row 3, which gives
1  2  2 | 1
0  1  1 | 1
0  0  1 | 4
Obviously, the original set of equations has been transformed to an
upper-triangular form. Now expressing the set in algebraic form yields
x1 + 2x2 + 2x3 = 1
x2 + x3 = 1
x3 = 4
Using backward substitution, we get x3 = 4, x2 = −3, and x1 = −1, the solution of
the system. •
Example 0.3
For what values of α does the following linear system have (i) a unique solution,
(ii) no solution, (iii) infinitely many solutions, using the simple Gaussian
elimination method? Use the smallest positive integer value of α that yields a
unique solution of the system.
x1 + 3x2 + αx3 = 4
2x1 − x2 + 2αx3 = 1
αx1 + 5x2 + x3 = 6
Solution. Forward elimination reduces the given system to the upper-triangular form
x1 + 3x2 + 2x3 = 4
− 7x2 = −7
− 3x3 = −1
Setting the third pivot equal to zero, the system is singular when
−1.5 − α(1 − 2α)/4 = 0, or 2α² − α − 6 = 0.
Solving the above quadratic equation, we get α = −3/2 and α = 2, the possible
values of α which make the given matrix singular. •
Partial Pivoting
In using Gaussian elimination with partial pivoting (or row pivoting), the basic
approach is to use the largest (in absolute value) element on or below the diagonal
in the column of current interest as the pivotal element for elimination in the rest
of that column.
One immediate effect of this is to force all the multiples used to be no greater
than 1 in absolute value. This inhibits the growth of error in the rest of the
elimination phase and in the subsequent backward substitution.
At stage k of forward elimination, it is necessary, therefore, to be able to identify
the largest element among |akk|, |ak+1,k|, . . . , |ank|, where these aik's are the
elements in the current partially triangularized coefficient matrix. If this
maximum occurs in row p, then the pth and kth rows of the augmented matrix are
interchanged and the elimination proceeds as usual. In solving n linear equations, a
total of N = n(n + 1)/2 coefficients must be examined.
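The row interchange at stage k can be sketched as follows (a hypothetical helper, not one of the text's programs; A and b are the current partially triangularized matrix and right-hand side):

    function [A, b] = pivotRows(A, b, k)
    % Partial pivoting at stage k: locate the largest of
    % |a_kk|, |a_k+1,k|, ..., |a_nk| and swap its row into the pivot row.
    n = size(A, 1);
    [~, p] = max(abs(A(k:n, k)));   % position of the largest magnitude
    p = p + k - 1;                  % convert to an absolute row index
    if p ~= k
        A([k p], :) = A([p k], :);  % interchange pth and kth rows
        b([k p])    = b([p k]);
    end

Calling this at the start of each stage k of the elimination loop guarantees that every multiple m_ik has absolute value at most 1.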
Example 0.5
Solve the following linear system using the Gaussian elimination with partial
pivoting
x1 + x2 + x3 = 1
2x1 + 3x2 + 4x3 = 3
4x1 + 9x2 + 16x3 = 11
Solution. For the first elimination step, since 4 is the largest absolute coefficient
of the first variable x1, the first row and the third row are interchanged,
giving us
4x1 + 9x2 + 16x3 = 11
2x1 + 3x2 + 4x3 = 3
x1 + x2 + x3 = 1
Eliminate the first variable x1 from the second and third rows by subtracting the
multiples m21 = 2/4 = 1/2 and m31 = 1/4 of row 1 from row 2 and row 3,
respectively, which gives
4x1 + 9x2 + 16x3 = 11
− (3/2)x2 − 4x3 = −5/2
− (5/4)x2 − 3x3 = −7/4
For the second elimination step, −3/2 is the largest absolute coefficient of the
second variable x2, so no interchange is needed; eliminate x2 from the third row
by subtracting the multiple m32 = (−5/4)/(−3/2) = 5/6 of row 2 from row 3, which gives
4x1 + 9x2 + 16x3 = 11
− (3/2)x2 − 4x3 = −5/2
(1/3)x3 = 1/3
Backward substitution now yields x3 = 1, x2 = −1, and x1 = 1. •
Definition 5
A square matrix is said to be strictly diagonally dominant (SDD) if the absolute
value of each element on the main diagonal is greater than the sum of the absolute
values of all the other elements in that row. Thus, strictly diagonally
dominant matrix is defined as
|aii| > Σj≠i |aij| ,   for i = 1, 2, . . . , n. (14)
Example 0.6
The matrix
7 3 1
A= 1 6 3 ,
−2 4 8
is strictly diagonally dominant, since |7| > |3| + |1| = 4, |6| > |1| + |3| = 4,
and |8| > |−2| + |4| = 6. •
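Strict diagonal dominance is easy to test mechanically; a small MATLAB sketch (the function name isSDD is ours) implementing Definition 5:

    function ok = isSDD(A)
    % True if |a_ii| > sum over j ~= i of |a_ij| for every row i.
    d  = abs(diag(A));           % diagonal magnitudes |a_ii|
    s  = sum(abs(A), 2) - d;     % off-diagonal absolute row sums
    ok = all(d > s);

For the matrix of Example 0.6, isSDD([7 3 1; 1 6 3; -2 4 8]) returns true.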
Example 0.7
Solve the following linear system using the simple Gaussian elimination method.
5x1 + x2 + x3 = 7
2x1 + 6x2 + x3 = 9
x1 + 2x2 + 9x3 = 12
Solution. Note that the coefficient matrix of the system is strictly diagonally
dominant, so no pivoting is needed. Forward elimination gives the upper-triangular system
5x1 + x2 + x3 = 7
(28/5)x2 + (3/5)x3 = 31/5
(241/28)x3 = 241/28
and backward substitution yields x3 = 1, x2 = 1, x1 = 1. •
For solving linear systems, we discuss a method for quantitatively measuring the
distance between vectors in Rn, the set of all column vectors with real
components, so as to determine whether the sequence of vectors produced by an
iterative method converges to a solution of the system. To define a distance in
Rn, we use the notion of the norm of a vector.
Vector Norms
It is sometimes useful to have a scalar measure of the magnitude of a vector. Such
a measure is called a vector norm and for a vector x is written as kxk.
A vector norm on Rn is a function from Rn to R satisfying:
1. kxk ≥ 0 for all x ∈ Rn .
2. kxk = 0 if and only if x = 0.
3. kαxk = |α|kxk, for all α ∈ R, x ∈ Rn .
4. kx + yk ≤ kxk + kyk, for all x, y ∈ Rn .
There are three norms in Rn that are most commonly used in applications, called
the l1 -norm, l2 -norm, and l∞ -norm. For a given vector x = [x1 , x2 , . . . , xn ]T ,
they are defined as
kxk1 = |x1| + |x2| + · · · + |xn| ,   kxk2 = (x1² + x2² + · · · + xn²)^(1/2) ,   kxk∞ = max1≤i≤n |xi| .
The l1 -norm is called the absolute norm, the l2 -norm is frequently called the
Euclidean norm as it is just the formula for distance in ordinary three-dimensional
Euclidean space extended to dimension n. Finally, the l∞ -norm is called the
maximum norm or occasionally the uniform norm. All these three norms are also
called the natural norms.
Here we will consider the vector l∞ -norm only.
Example 0.8
Compute lp -norms (p = 1, 2, ∞) of the vector x = [−5, 3, −2]T in R3 .
Solution. These lp -norms (p = 1, 2, ∞) of the given vector are:
kxk1 = |−5| + |3| + |−2| = 10,
kxk2 = √((−5)² + 3² + (−2)²) = √38 ≈ 6.1644,
kxk∞ = max{|−5|, |3|, |−2|} = 5. •
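These values can be checked with MATLAB's built-in norm function (a minimal sketch):

    x = [-5; 3; -2];
    n1   = norm(x, 1);     % l1-norm : 10
    n2   = norm(x, 2);     % l2-norm : sqrt(38) = 6.1644...
    ninf = norm(x, inf);   % l-infinity norm: 5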
A matrix norm is a measure of how well one matrix approximates another, or,
more accurately, of how well their difference approximates the zero matrix. An
iterative procedure for inverting a matrix produces a sequence of approximate
inverses. Since in practice such a process must be terminated, it is desirable to
have some measure of the error of the approximate inverse.
So a matrix norm on the set of all n × n matrices is a real-valued function, k·k,
defined on this set, satisfying for all n × n matrices A and B and all real
numbers α:
1. kAk > 0 if A ≠ 0.
2. kAk = 0 if and only if A = 0.
3. kIk = 1, where I is the identity matrix.
4. kαAk = |α|kAk, for every scalar α ∈ R.
5. kA + Bk ≤ kAk + kBk.
6. kABk ≤ kAkkBk.
7. kA − Bk ≥ | kAk − kBk |.
Several norms for matrices have been defined; we shall use the following three
natural norms l1 , l2 , and l∞ for a square matrix of order n:
kAk1 = max1≤j≤n Σi |aij| = maximum absolute column sum,
kAk2 = (maximum eigenvalue of AT A)^(1/2),
kAk∞ = max1≤i≤n Σj |aij| = maximum absolute row sum.
For an m × n matrix, we can also use the Frobenius norm (or Euclidean norm),
which is not a natural norm and is defined as
kAkF = ( Σi Σj |aij|² )^(1/2) = [tr(AT A)]^(1/2),
where tr(AT A) is the trace of the matrix AT A, that is, the sum of the diagonal
entries of AT A. The Frobenius norm of a matrix is a good measure of the
magnitude of a matrix. It is to be noted that kAkF ≠ kAk2 in general. For a
diagonal matrix, all the natural norms have the same value.
Also, here we will consider the matrix l∞ -norm only.
Example 0.9
Compute lp -norms (p = ∞, F ) of the following matrix
4 2 −1
A= 3 5 −2 .
1 −2 7
Solution. The l∞ -norm is the maximum absolute row sum. Here
|a11| + |a12| + |a13| = |4| + |2| + |−1| = 7,
|a21| + |a22| + |a23| = |3| + |5| + |−2| = 10,
|a31| + |a32| + |a33| = |1| + |−2| + |7| = 10,
so
kAk∞ = max{7, 10, 10} = 10.
In addition, we have the Frobenius norm of the matrix:
kAkF = (4² + 2² + (−1)² + 3² + 5² + (−2)² + 1² + (−2)² + 7²)^(1/2) = √113 ≈ 10.6301. •
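Again, MATLAB's norm function can be used to check these matrix norms (a minimal sketch using the matrix of Example 0.9):

    A = [4 2 -1; 3 5 -2; 1 -2 7];
    n1   = norm(A, 1);      % maximum absolute column sum: 10
    ninf = norm(A, inf);    % maximum absolute row sum: 10
    nF   = norm(A, 'fro');  % Frobenius norm: sqrt(113) = 10.6301...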
The methods discussed in the previous section for the solution of the system of
linear equations have been direct, which required a finite number of arithmetic
operations. The elimination methods of solving such systems usually yield
sufficiently accurate solutions for approximately 20 to 25 simultaneous equations,
where most of the unknowns are present in all of the equations. When the
coefficient matrix is sparse (has many zeros), a considerably larger number of
equations can be handled by the elimination methods. But these methods are
generally impractical when many hundreds or thousands of equations must be
solved simultaneously.
There are, however, several methods which can be used to solve large numbers of
simultaneous equations. These methods, called iterative methods, produce an
approximation to the solution of a system of linear equations.
Here we consider just two of these iterative methods. These two form the basis
of a family of methods which are designed either to accelerate the convergence or
to suit some particular computer architecture.
Jacobi Iterative Method
This is one of the easiest iterative methods for finding the approximate solution
of the system of linear equations
Ax = b, (15)
To explain its procedure, consider a system of three linear equations as follows:
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
The solution process starts by solving for the first variable x1 from the first
equation, the second variable x2 from the second equation, and the third variable
x3 from the third equation, giving
a11 x1 = b1 − a12 x2 − a13 x3
a22 x2 = b2 − a21 x1 − a23 x3
a33 x3 = b3 − a31 x1 − a32 x2
Divide both sides of the above three equations by their diagonal elements a11, a22,
and a33, respectively, to get
x1 = (1/a11)[ b1 − a12 x2 − a13 x3 ]
x2 = (1/a22)[ b2 − a21 x1 − a23 x3 ]
x3 = (1/a33)[ b3 − a31 x1 − a32 x2 ]
Let x(k) = [x1(k), x2(k), x3(k)]T denote the kth approximation to the exact solution
x of the linear system (22); then the Jacobi method defines the iterative sequence
xi(k+1) = (1/aii)[ bi − Σj≠i aij xj(k) ],   i = 1, 2, . . . , n, k = 0, 1, 2, . . . , (17)
provided that the diagonal elements aii ≠ 0 for each i = 1, 2, . . . , n. If any
diagonal element is zero, then reordering of the equations can be performed so that
no element in a diagonal position is zero. As usual with iterative methods, an
initial approximation x(0) must be supplied. If we have no knowledge of the exact
solution, it is conventional to start with xi(0) = 0 for all i. The iterations
defined by (17) are stopped when
kx(k+1) − x(k)k / kx(k+1)k < ε, (19)
where ε is a preassigned small positive number. For this purpose, any convenient
norm can be used, the most usual being the l∞ -norm.
Example 0.10
Solve the following system of equations using the Jacobi iterative method, with
ε = 10−6 in the l∞ -norm.
5x1 − x2 + x3 = 10
2x1 + 8x2 − x3 = 11
−x1 + x2 + 4x3 = 3
Now consider the same system with the first and second equations interchanged:
2x1 + 8x2 − x3 = 11
5x1 − x2 + x3 = 10
−x1 + x2 + 4x3 = 3
For this ordering the Jacobi method diverges rapidly, even though the linear system
is the same as that of Example 0.10 except that the first and second equations are
interchanged. From this example we conclude that the Jacobi iterative method is
not always convergent.
Program 3.10
MATLAB m-file for the Jacobi Iterative Method for Linear System
function x = JacobiM(Ab, x, acc)
% Jacobi method for the augmented matrix Ab = [A b]; row k of d stores iterate k-1.
[n, t] = size(Ab); b = Ab(1:n, t); R = 1; k = 1; d(1, 1:n+1) = [0 x];
while R > acc
    for i = 1:n, s = 0;
        for j = 1:n, if j ~= i, s = s + Ab(i,j)*d(k,j+1); end, end
        x(1,i) = (1/Ab(i,i))*(b(i,1) - s);  % new x(i) from old iterate only
    end
    k = k+1; d(k,1:n+1) = [k-1 x]; R = max(abs(d(k,2:n+1) - d(k-1,2:n+1)));
    if k > 10 && R > 100, disp('Jacobi method diverges'), break, end
end
x = d;
Gauss-Seidel Iterative Method
This is one of the most popular and widely used iterative methods for finding the
approximate solution of a system of linear equations. This iterative method is a
modification of the Jacobi iterative method and gives good accuracy by using
the most recently calculated values.
In the Jacobi iterative formula (17), the new estimates of the solution x are
computed from the old estimates, and only when all the new estimates have been
determined are they used in the right-hand side of the equation to perform the
next iteration. The Gauss-Seidel method instead makes use of the new estimates in
the right-hand side as soon as they become available. For example, the Gauss-Seidel
formula for the system of three equations defines the iterative sequence
x1(k+1) = (1/a11)[ b1 − a12 x2(k) − a13 x3(k) ]
x2(k+1) = (1/a22)[ b2 − a21 x1(k+1) − a23 x3(k) ]
x3(k+1) = (1/a33)[ b3 − a31 x1(k+1) − a32 x2(k+1) ]
and, in general,
xi(k+1) = (1/aii)[ bi − Σj<i aij xj(k+1) − Σj>i aij xj(k) ],   i = 1, 2, . . . , n, k = 0, 1, 2, . . .
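A minimal MATLAB sketch of this update rule (the function name GaussSeidelM and the maxit safeguard are our own choices, patterned loosely after Program 3.10):

    function x = GaussSeidelM(A, b, x, acc, maxit)
    % Gauss-Seidel iteration: each updated component x(i) is used
    % immediately in the remaining updates of the same sweep.
    n = length(b);
    for k = 1:maxit
        xold = x;
        for i = 1:n
            j = [1:i-1, i+1:n];                       % all indices except i
            x(i) = (b(i) - A(i,j) * x(j)) / A(i,i);   % newest values in x
        end
        if norm(x - xold, inf) / norm(x, inf) < acc   % stopping test (19)
            break;
        end
    end

For example, GaussSeidelM(A, b, zeros(size(b)), 1e-6, 100) starts from the zero vector, the conventional initial approximation.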
Iterative Methods in Matrix Form
An iterative method for solving the linear system
Ax = b, (22)
starts with an initial approximation x(0) ∈ Rn to the solution x of the linear system
(22), and generates a sequence of vectors {x(k)}∞k=0 that converges to x. Most of
these iterative methods involve a process that converts the system (22) into an
equivalent system of the form
x = T x + c, (23)
for some square matrix T and vector c. After the initial vector x(0) is selected, the
sequence of approximate solution vectors is generated by computing
x(k+1) = T x(k) + c, k = 0, 1, 2, . . .
To derive such a form for the Jacobi and Gauss-Seidel methods, we decompose the
coefficient matrix as
A = L + D + U, (26)
where L is the strictly lower-triangular part of A, U is its strictly
upper-triangular part, and D is its diagonal part,

       a11   0    0   ···   0
        0   a22   0   ···   0
D =     0    0   a33  ···   0  .
        ..   ..   ..   ..   ..
        0    0    0   ···  ann

Then the linear system (22) can be written as
(L + D + U )x = b. (27)
Now we find the forms of the matrix T and the vector c for each method. For the
Jacobi method,
TJ = −D−1 (L + U ) and cJ = D−1 b, (28)
are called Jacobi iteration matrix and Jacobi constant column matrix,
respectively, and their elements are defined by
tij = −aij /aii , i, j = 1, 2, . . . , n, i ≠ j,
tij = 0, i = j,
ci = bi /aii , i = 1, 2, . . . , n.
Note that the diagonal elements of Jacobi iteration matrix TJ are always zero.
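In MATLAB, TJ and cJ can be formed directly from this splitting (a minimal sketch, using the system of the example that follows):

    A = [6 2 0; 1 7 -2; 3 -2 9]; b = [1; 2; -1];
    D  = diag(diag(A));    % diagonal part D of A
    TJ = -D \ (A - D);     % TJ = -D^(-1)(L + U), since A - D = L + U
    cJ =  D \ b;           % cJ = D^(-1) b
    % Note: diag(TJ) is the zero vector, as observed above.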
For example, consider the linear system
6x1 + 2x2 = 1
x1 + 7x2 − 2x3 = 2
3x1 − 2x2 + 9x3 = −1
and so

               0  0  0     0  2  0     6  0  0
A = L + U + D = 1  0  0  +  0  0 −2  +  0  7  0 .
               3 −2  0     0  0  0     0  0  9
Jacobi Iterative Method
The matrix form of the Jacobi iterative method is
x(k+1) = TJ x(k) + cJ , k = 0, 1, 2, . . . ,
where
TJ = −D−1 (L + U ) and cJ = D−1 b.
One can easily compute the Jacobi iteration matrix TJ and the vector cJ as
follows:
TJ = −D−1 (L + U ) =
    0    −1/3    0
  −1/7     0    2/7
  −1/3    2/9    0
and
cJ = D−1 b = [1/6, 2/7, −1/9]T .
Thus the matrix form of the Jacobi iterative method is

             0    −1/3    0              1/6
x(k+1) =   −1/7     0    2/7   x(k) +    2/7  ,   k = 0, 1, 2, . . .
           −1/3    2/9    0             −1/9
Gauss-Seidel Iterative Method
Now using the Gauss-Seidel method, we first compute the Gauss-Seidel iteration
matrix TG and the vector cG as follows:

TG = −(D + L)−1 U =
   0   −1/3     0
   0    1/21   2/7
   0   23/189  4/63

and

cG = (D + L)−1 b = [1/6, 11/42, −41/378]T .

Thus the matrix form of the Gauss-Seidel iterative method is

             0   −1/3     0               1/6
x(k+1) =     0    1/21   2/7    x(k) +   11/42  ,   k = 0, 1, 2, . . .
             0   23/189  4/63           −41/378
Theorem 7
(Second Sufficient Condition for Convergence)
For any initial approximation x(0) ∈ Rn, the sequence {x(k)}∞k=0 of
approximations defined by
x(k+1) = T x(k) + c, k = 0, 1, 2, . . . ,
converges to the unique solution of x = T x + c provided that kT k < 1, and the
following error bound holds:
kx − x(k)k ≤ (kT k^k / (1 − kT k)) kx(1) − x(0)k. •
Note that the smaller the value of kT k, the faster the convergence of the
iterative method.
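This bound also tells us in advance how many iterations guarantee a prescribed accuracy; a small MATLAB sketch (the values are the Jacobi quantities of Example 0.16 below):

    nT  = 2/3;     % ||T||, assumed < 1
    e1  = 1.2;     % ||x(1) - x(0)||
    tol = 1e-4;    % required accuracy
    % Solve ||T||^k/(1 - ||T||)*e1 <= tol for k (log flips the inequality):
    k = ceil(log(tol * (1 - nT) / e1) / log(nT))   % gives k = 26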
Example 0.15
Consider the following nonhomogeneous linear system Ax = b, where
5 0 −1 1
A = −1 3 0 and b = 2 .
0 −1 4 4
Find the matrix form of iterative (Jacobi and Gauss-Seidel) methods and show
that Gauss-Seidel iterative method converges faster than Jacobi iterative method
for the given system.
Solution. Here we will show that the l∞ -norm of the Gauss-Seidel iteration
matrix TG is less than the l∞ -norm of the Jacobi iteration matrix TJ , that is,
kTG k∞ < kTJ k∞ .
The Jacobi iteration matrix TJ can be obtained from the given matrix A as follows:

TJ = −D−1 (L + U ) =
    0    0   1/5
   1/3   0    0
    0   1/4   0
Thus the matrix form of the Jacobi iterative method is

             0    0   1/5            1/5
x(k+1) =    1/3   0    0    x(k) +   2/3  ,   k ≥ 0.
             0   1/4   0              1
Similarly, the Gauss-Seidel iteration matrix TG is defined as
TG = −(D + L)−1 U,
and since

              1/5    0    0                  0  0  −1
(D + L)−1 =   1/15  1/3   0     and   U =    0  0   0  ,
              1/60  1/12 1/4                 0  0   0

it gives

        0  0  1/5
TG =    0  0  1/15 .
        0  0  1/60
So the matrix form of the Gauss-Seidel iterative method is

             0  0  1/5              1/5
x(k+1) =     0  0  1/15   x(k) +   11/15 ,   k ≥ 0.
             0  0  1/60            71/60
Since the l∞ -norm of the matrix TJ is
kTJ k∞ = max{1/5, 1/3, 1/4} = 1/3 ≈ 0.3333 < 1,
and the l∞ -norm of the matrix TG is
kTG k∞ = max{1/5, 1/15, 1/60} = 1/5 = 0.2 < 1,
we have kTG k∞ < kTJ k∞ , which shows that the Gauss-Seidel method will converge
faster than the Jacobi method for the given linear system. •
Example 0.16
Consider the following linear system of equations
4x1 − x2 + x3 = 12
−x1 + 3x2 + x3 = 1
x1 + x2 + 5x3 = −14
(a) Show that both iterative methods (Jacobi and Gauss-Seidel) will converge by
using kT k∞ < 1.
(b) Find second approximation x(2) when the initial solution is x(0) = [4, 3, −3]T .
(c) Compute the error bounds for your approximations.
(d) How many iterations are needed to obtain an accuracy within 10−4 ?
Solution. From (26), we have

      4 −1  1      0  0  0      0 −1  1      4  0  0
A =  −1  3  1  =  −1  0  0  +   0  0  1  +   0  3  0   = L + U + D.
      1  1  5      1  1  0      0  0  0      0  0  5
Jacobi Method:
(a) Since the Jacobi iteration matrix is defined as
TJ = −D−1 (L + U ) =
    0    1/4  −1/4
   1/3    0   −1/3
  −1/5  −1/5    0
we have kTJ k∞ = max{1/2, 2/3, 2/5} = 2/3 < 1. Thus the Jacobi method will converge
for the given linear system.
(b) The Jacobi iteration for the given system is
x1(k+1) = (1/4)[12 + x2(k) − x3(k)]
x2(k+1) = (1/3)[1 + x1(k) − x3(k)]
x3(k+1) = (1/5)[−14 − x1(k) − x2(k)]
and starting with x(0) = [4, 3, −3]T it gives
x(1) = [4.5, 2.6667, −4.2]T and x(2) = [4.7167, 3.2333, −4.2333]T .
(c) Since kx(1) − x(0)k∞ = 1.2, the error bound of Theorem 7 for x(2) is
kx − x(2)k ≤ ((2/3)²/(1 − 2/3)) (1.2) = 1.6.
(d) For an accuracy within 10−4 , the number of iterations k must satisfy
kx − x(k)k ≤ (kTJ k^k/(1 − kTJ k)) kx(1) − x(0)k ≤ 10−4 .
It gives
(2/3)^k/(1/3) (1.2) ≤ 10−4 , or (2/3)^k ≤ 10−4 /3.6.
Taking ln on both sides, we obtain
k ln(2/3) ≤ ln(10−4 /3.6), which gives k ≥ 25.8789, or k = 26,
the number of iterations required by the Jacobi method.
Gauss-Seidel Method:
(a) Since the Gauss-Seidel iteration matrix is defined as
TG = −(D + L)−1 U =
   0   1/4   −1/4
   0   1/12  −5/12
   0  −1/15   2/15
we have kTG k∞ = max{1/2, 1/2, 1/5} = 1/2 < 1. Thus the Gauss-Seidel method will
converge for the given linear system.
(b) The Gauss-Seidel iteration for the given system is
x1(k+1) = (1/4)[12 + x2(k) − x3(k)]
x2(k+1) = (1/3)[1 + x1(k+1) − x3(k)]
x3(k+1) = (1/5)[−14 − x1(k+1) − x2(k+1)]
and starting with x(0) = [4, 3, −3]T it gives
x(1) = [4.5, 2.8333, −4.2667]T and x(2) = [4.775, 3.3472, −4.4244]T .
(c) Since kx(1) − x(0)k∞ = 1.2667, the error bound for x(2) is
kx − x(2)k ≤ ((1/2)²/(1 − 1/2)) (1.2667) = 0.6334.
(d) For an accuracy within 10−4 ,
(1/2)^k/(1/2) (1.2667) ≤ 10−4 , or (1/2)^k ≤ 10−4 /2.5334.
Taking ln on both sides, we obtain
k ln(1/2) ≤ ln(10−4 /2.5334), which gives k ≥ 14.6084, or k = 15,
the number of iterations required by the Gauss-Seidel method. •
Any computed solution of a linear system must, because of round-off and other
errors, be considered an approximate solution. Here we shall consider the most
natural method for determining the accuracy of a solution of the linear system.
One obvious way of estimating the accuracy of the computed solution x∗ is to
compute Ax∗ and to see how close Ax∗ comes to b. Thus if x∗ is an approximate
solution of the given system Ax = b, we compute a vector
r = b − Ax∗ , (31)
which is called the residual vector and can be easily calculated. The quantity
krk/kbk = kb − Ax∗ k/kbk
is called the relative residual.
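In MATLAB, the residual and relative residual are one-line computations (a minimal sketch using the data of the closing example below):

    A = [1 1 -1; 1 2 -2; -2 1 1]; b = [1; 0; -1];
    xstar  = [2.01; 1.01; 1.98];            % an approximate solution
    r      = b - A * xstar;                 % residual (31): [-0.04; -0.07; 0.03]
    relres = norm(r, inf) / norm(b, inf);   % relative residual: 0.07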
For example, for a test system whose exact solution is x = [1, 1, 1, 1]T , the
approximate solution produced by Gaussian elimination without pivoting and the one
produced with partial pivoting can be compared through the sizes of their residuals.
In solving a linear system numerically we must consider the conditioning of the
problem, the stability of the algorithm, and the cost. Above we discussed efficient
elimination schemes for solving a linear system, and these schemes are stable when
pivoting is employed. But there are some ill-conditioned systems which are
difficult to solve by any method.
Definition 8
(Condition Number of a Matrix)
The number kAkkA−1 k is called the condition number of a nonsingular matrix A
and is denoted by K(A), that is, K(A) = kAkkA−1 k.
and
kA−1 k∞ = max{ |8/13| + |−2/13| + |−1/13| , |3/13| + |−4/13| + |−2/13| , |4/13| + |−1/13| + |6/13| }
        = max{11/13, 9/13, 11/13},
which gives
kA−1 k∞ = 11/13.
Therefore,
K(A) = kAk∞ kA−1 k∞ = (7)(11/13) ≈ 5.9231.
Depending on the application, we might consider this number to be reasonably
small and conclude that the given matrix A is reasonably well-conditioned. •
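The condition number is likewise easy to compute in MATLAB (a minimal sketch; the second form avoids computing the inverse by hand):

    A = [10.2 2.4 4.5; -2.3 7.7 11.1; -5.5 -3.2 0.9];  % matrix of Example 0.18 below
    K = norm(A, inf) * norm(inv(A), inf);  % K(A) = ||A|| * ||A^(-1)||
    K = cond(A, inf);                      % equivalent built-in form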
Example 0.18
If the condition number of following matrix A is 8.8671, then find the l∞ -norm of
its inverse matrix, that is, kA−1 k∞
10.2 2.4 4.5
A = −2.3 7.7 11.1 .
−5.5 −3.2 0.9
Solution. First we calculate the l∞ -norm of the given matrix A, which is the
maximum of the absolute row sums:
kAk∞ = max{10.2 + 2.4 + 4.5, 2.3 + 7.7 + 11.1, 5.5 + 3.2 + 0.9} = max{17.1, 21.1, 9.6} = 21.1.
Then
8.8671 = K(A) = kAk∞ kA−1 k∞ = (21.1000)kA−1 k∞ ,
which gives kA−1 k∞ = 8.8671/21.1 ≈ 0.4202. •
More generally, if x∗ is an approximate solution of Ax = b, the error can be
bounded in terms of the residual and the inverse matrix:
kx − x∗ k ≤ krkkA−1 k, (33)
so that, for instance, for a system with K(A) = 8, krk = 0.005, and kbk = 1,
kx − x∗ k/kxk ≤ K(A) krk/kbk = (8)(0.005/1) = 0.0400,
that is, the relative error is at most 4%.
Example 0.19
Consider the following linear system, for which the approximate solution
x∗ = [2.01, 1.01, 1.98]T has been computed:
x1 + x2 − x3 = 1
x1 + 2x2 − 2x3 = 0
−2x1 + x2 + x3 = −1
(a) Compute the inverse of the coefficient matrix A.
(b) Compute the residual vector r = b − Ax∗ .
(c) Estimate the relative error in x∗ using the bound
kx − x∗ k/kxk ≤ K(A) krk/kbk , provided that x ≠ 0, b ≠ 0. (35)
(d) Use the simple Gaussian elimination method to find the approximate error by
solving Ae = r for e = x − x∗ .
Solution. (a) Given the matrix
      1  1 −1
A =   1  2 −2 ,
     −2  1  1
and whose inverse can be computed as
        2    −1    0
A−1 =  1.5  −0.5  0.5 .
       2.5  −1.5  0.5
(b) The residual vector can be calculated as
               1      1  1 −1   2.01    −0.04
r = b − Ax∗ =  0  −   1  2 −2   1.01  = −0.07 ,
              −1     −2  1  1   1.98     0.03
and it gives
krk∞ = 0.07.
(c) Since kAk∞ = 5 and kA−1 k∞ = 4.5, we have K(A) = (5)(4.5) = 22.5, so the
bound (35) gives
kx − x∗ k/kxk ≤ (22.5)(0.07/1) = 1.575.
(d) After applying the forward elimination step of the simple Gaussian elimination
method to the augmented system Ae = r, we obtain
1  1 −1 | −0.04
0  1 −1 | −0.03
0  0  2 |  0.04
Now by using backward substitution, we obtain the approximate error
e = x − x∗ = [−0.01, −0.01, 0.02]T ,
and indeed x∗ + e = [2, 1, 2]T is the exact solution of the system. •