4 - Filled-2
4.1 Introduction
A system of linear equations arises in problems that include several (possibly many)
variables that are dependent on each other. A system of m linear equations in n unknowns
can be written as,
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm
We can write the equations in:
(1). Matrix form, Ax = b, where A is the m-by-n matrix of coefficients, x is the vector of unknowns, and b is the vector of constants.
(2). If m > n, i.e. the number of unknowns is less than the number of equations: an overdetermined system (e.g., least-squares problems).
(3). If m < n, i.e. the number of unknowns is greater than the number of equations: an underdetermined system (e.g., optimization problems).
The numerical methods for solving systems of linear equations can be classified into two types:
1). Direct methods: the solution is calculated by performing arithmetic operations with the equations.
2). Iterative methods: an initial approximate solution is assumed and then refined in an iterative process to obtain a more accurate solution.
Three systems of equations that can be solved easily by a direct method are those in upper triangular, lower triangular, and diagonal form.
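For example, an upper triangular system is solved directly by back substitution, starting from the last equation and working upward. A minimal pure-Python sketch (the function name is chosen here for illustration):

```python
def back_substitution(U, b):
    """Solve U x = b where U is square upper triangular with
    nonzero diagonal entries. Works from the last row upward."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known unknowns, then divide by the diagonal
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x
```

A lower triangular system is solved the same way by forward substitution, starting from the first equation instead.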
However, most systems of linear equations are in a general form other than the above forms.
Gauss elimination method: the most basic systematic scheme for solving a system of linear equations of general form. It manipulates the equations into upper triangular form and then solves the system by back substitution. The procedure consists of two steps:
1) Forward elimination of the linear equation matrix using row operation to obtain an upper
triangular matrix.
2) Backward substitution to solve for the unknowns.
To use the Gauss elimination method, the first equation is left unchanged, and the terms that include x1 in all the other equations are eliminated. The first equation is called the pivot equation (or pivot row), and the coefficient of x1 is called the pivot coefficient (a11 here). Then, beginning with the new second equation (the new pivot equation, whose coefficient of x2 is now the pivot coefficient), the terms that include x2 in all the other new equations are eliminated. This procedure is repeated until the equations are in upper triangular form.
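The two steps can be sketched in pure Python (a naive version that assumes every pivot is nonzero; names are chosen here for illustration):

```python
def gauss_solve(A, b):
    """Naive Gauss elimination: forward elimination to upper
    triangular form, then back substitution. No pivoting, so
    every pivot A[k][k] is assumed to be nonzero."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # step 1: forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # step 2: back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```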
Example 4.1 Use Gauss Elimination to find the roots for linear equations
3x1 + 2x2 + x3 = 1
6x1 + 6x2 + 7x3 = 7
3x1 + 4x2 + 4x3 = 6
Backward substitution:
If a pivot element is 0, or becomes 0 after a row operation, a problem arises during the execution of the Gauss elimination procedure. This problem can easily be removed by changing the order of the rows, i.e. the pivot row whose pivot element is 0 is exchanged with another row that has a nonzero pivot element. This procedure is called pivoting. After pivoting, the Gauss elimination procedure can be applied as before. This modified method is called Gauss elimination with pivoting.
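As a pure-Python sketch (names chosen here for illustration), the elimination loop can be extended with partial pivoting, i.e. at every step the row with the largest available pivot candidate is swapped into the pivot position; this also improves numerical accuracy:

```python
def gauss_pivot_solve(A, b):
    """Gauss elimination with partial pivoting: before eliminating
    column k, swap in the remaining row whose entry in column k has
    the largest absolute value, then eliminate and back-substitute."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # pivoting: pick the row (k..n-1) with the largest |A[i][k]|
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Applied to the 4-by-4 system of Example 4.2 below (where the second pivot becomes 0 after the first elimination step), this returns x ≈ (1, 2, 3, 4).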
Example 4.2 Use Gauss Elimination to find the roots for linear equations
2x1 + 4x2 - 2x3 - 2x4 = -4
x1 + 2x2 + 4x3 - 3x4 = 5
-3x1 - 3x2 + 8x3 - 2x4 = 7
-x1 + x2 + 6x3 - 3x4 = 7
Pivoting:
Backward substitution:
Note: We can also manipulate the equations into diagonal form and solve them directly; this method is called the Gauss-Jordan elimination method. It will not be covered in this course.
With this decomposition, the equations can be written as LUx = b. If we define Ux = y, the equations become Ly = b, from which y can be solved by forward substitution because L is in lower triangular form. Then y is substituted into Ux = y, and x is solved by back substitution since U is in upper triangular form.
In order to find L and U, you need to record the steps used in Gauss elimination.
LU=
4.2.2.1 Solving Linear Equations with LU Decomposition
Ax = b → LUx = b
Gauss elimination can be used to decompose A into L and U as illustrated for a three-
equation system,
[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]
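The whole process can be sketched in pure Python (names chosen here for illustration). The decomposition records the Gauss elimination multipliers as the below-diagonal entries of L, with ones on the diagonal of L (the Doolittle form), and the upper triangular result of the elimination becomes U:

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L U, where L
    has ones on its diagonal and stores the elimination multipliers,
    and U is the upper triangular result of Gauss elimination."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                     # record the multiplier
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve L U x = b: forward substitution for L y = b (L has a
    unit diagonal), then back substitution for U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Once L and U are known, solving the system again for a different b only requires the two cheap substitution passes, which is the main advantage of the LU approach.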
Example 6.3 Use LU decomposition to find the roots for linear equations
[ 3  2  1 ] [ x1 ]   [ 1 ]
[ 6  6  7 ] [ x2 ] = [ 7 ]
[ 3  4  4 ] [ x3 ]   [ 6 ]
4.3 Iterative Methods
The idea of iterative methods is the same as that of the fixed-point iteration method for solving a single nonlinear equation. In an iterative process for solving a system of linear equations, the equations are rewritten in an explicit form in which each unknown is written in terms of the other unknowns.
Extended to an n-equation system, the explicit equations for the unknowns xi are:
The solution process starts by assuming initial values for all the unknowns. These values are substituted into the right-hand sides of the explicit equations to obtain new values of the unknowns, and the new values are then substituted back in, repeating the procedure until a solution of the desired accuracy is obtained.
The iteration is guaranteed to converge toward the solution when A is diagonally dominant, i.e. in every row the absolute value of the diagonal element is larger than the sum of the absolute values of the off-diagonal elements.
The iterative methods for solving systems of linear equations include Jacobi iterative
method and Gauss-Seidel iterative method.
Jacobi iterative method: the estimated values of the unknowns used on the right-hand side of the explicit equations are all updated at once, at the end of each iteration. The (k+1)th estimate of the solution is calculated from the kth estimate by
The solution is accepted when the absolute value of the estimated relative error of all the unknowns is smaller than some predetermined value:
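As a pure-Python sketch (function name and default tolerance chosen here for illustration), one Jacobi iteration for Ax = b computes every new component from the previous iterate only:

```python
def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: each new estimate is computed entirely from
    the previous iterate, so all unknowns are updated at once."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # stop when the estimated relative error of every unknown is small
        if all(abs(x_new[i] - x[i]) <= tol * abs(x_new[i]) for i in range(n)):
            return x_new
        x = x_new
    return x
```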
Gauss-Seidel iterative method: the value of each unknown is updated as soon as its new estimate is calculated, and the new value is used immediately in the calculation of the new estimates of the remaining unknowns within the same iteration.
The values of the unknowns in the (k+1)th iteration, xi(k+1), are calculated by using the values xj(k+1) already obtained in the (k+1)th iteration for j < i, and the values xj(k) for j > i.
The criterion for stopping the iteration is the same as in the Jacobi method.
Note: The Gauss-Seidel method converges faster than the Jacobi method and requires less
computer memory when programmed.
Example 4.4 Use Gauss-Seidel iterative method to find the roots for linear equations
9x1 - 2x2 + 3x3 + 2x4 = 54.5
2x1 + 8x2 - 2x3 + 3x4 = -14
-3x1 + 2x2 + 11x3 - 4x4 = 12.5
-2x1 + 3x2 + 2x3 + 10x4 = -21
with an initial guess of 0 for all unknowns.
First, rewrite equations into explicit form,
k x1 x2 x3 x4
1 0.00000 0.00000 0.00000 0.00000
2 6.05556 -3.26389 3.38131 -0.58598
3 4.33336 -1.76827 2.42661 -1.18817
4 5.11778 -1.97723 2.45956 -0.97519
5 5.01303 -2.02267 2.51670 -0.99393
6 4.98805 -1.99511 2.49806 -1.00347
7 5.00250 -1.99981 2.49939 -0.99943
8 5.00012 -2.00040 2.50031 -0.99992
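A pure-Python sketch of this computation (coefficient signs as assumed here; with them, the first sweep from the zero guess gives 6.05556, -3.26389, 3.38131, -0.58598, matching the table):

```python
def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: each new component is used immediately
    in the updates of the remaining unknowns within the same sweep."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        # stop when every unknown's estimated relative error is small
        if all(abs(x[i] - x_old[i]) <= tol * abs(x[i]) for i in range(n)):
            return x
    return x

# Example 4.4 (signed coefficients as assumed here)
A = [[9.0, -2.0, 3.0, 2.0],
     [2.0, 8.0, -2.0, 3.0],
     [-3.0, 2.0, 11.0, -4.0],
     [-2.0, 3.0, 2.0, 10.0]]
b = [54.5, -14.0, 12.5, -21.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0, 0.0])  # converges toward (5, -2, 2.5, -1)
```

Note how the iterates approach the solution x = (5, -2, 2.5, -1) faster than Jacobi would, because each sweep already uses the freshest estimates.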
Left division \: for a system of linear equations written in the form Ax = b (x and b are column vectors), the solution by MATLAB is
x = A\b.
Right division /: for a system of linear equations written in the form xA = b (x and b are row vectors), the solution by MATLAB is
x = b/A.
For a system of linear equations written in the form Ax = b, multiplying both sides from the left by A^-1 (the inverse matrix of A) gives
A^-1 A x = A^-1 b
i.e. the solution is
x = A^-1 b
In MATLAB, the inverse of a square matrix A is calculated by raising the matrix to the power of -1 or by using the inv(A) function.
Example 4.5 Use MATLAB left division, right division and inverse matrix operation to
find the roots for linear equations in Example 4.4.
x=
5.0000
-2.0000
2.5000
-1.0000
x=
x=
5.0000
-2.0000
2.5000
-1.0000
Example 6.6 Use the MATLAB function lu() to find the roots for the linear equations in Example 6.3.