
Chapter 3

Solution of Linear System


3.1 Introduction
Systems of linear equations occur in a variety of applications in fields such as elasticity, electrical
engineering and statistical analysis. The techniques and methods for solving systems of linear
equations belong to two categories: direct and iterative methods.
Some of the direct methods are the Gauss elimination method, the matrix inverse method, LU
factorization and the Cholesky method. The elimination approach reduces the given system of
equations to a form from which the solution can be obtained by simple substitution. Since calculators
and computers carry only a limited number of digits, round-off errors may arise and produce poor
results; handling large systems by direct methods can also be time consuming. It will be assumed
that readers are familiar with some of the direct methods suitable for small systems.
Iterative methods provide an alternative to the direct methods for solving systems of linear equations.
These methods start from assumed initial values for the unknowns, which are then refined repeatedly
until they reach some accepted range of accuracy.
In this chapter we shall consider the Gauss elimination method and iterative methods suitable for
numerical calculations.

3.2 Linear System of Equations


Consider a system of n linear equations in the n unknowns x1, x2, ..., xn:

    a11 x1 + a12 x2 + ... + a1n xn = b1    (E1)
    a21 x1 + a22 x2 + ... + a2n xn = b2    (E2)
    . . . . . . . . . . . . . . . . . . . . . . .               (1)
    an1 x1 + an2 x2 + ... + ann xn = bn    (En)

where the coefficients aij and the constants bi are real numbers.
Exactly one of the following three cases must occur:
(a) The system has a unique solution.
(b) The system has no solution.
(c) The system has an infinite number of solutions.

For a unique solution:

In matrix notation, we can write the system as

    A X = B                                                     (2)

where

    A = | a11  a12  . . .  a1n |
        | a21  a22  . . .  a2n |
        | .    .    . . .  .   |  = ( aij ),
        | an1  an2  . . .  ann |

    X = ( x1  x2  . . .  xn )^T,    B = ( b1  b2  . . .  bn )^T.

The solution of the system exists and is unique if |A| ≠ 0. The solution of (2) may then be written as

    X = A^(-1) B

where A^(-1) is the inverse of A.
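For readers who wish to experiment, the condition |A| ≠ 0 and the solution X = A^(-1) B can be
checked numerically. The following is a minimal sketch using Python with NumPy (our choice of tool,
not part of the original notes); note that numpy.linalg.solve is numerically preferable to forming
A^(-1) explicitly.

    import numpy as np

    # A small illustrative system A X = B (any nonsingular A will do).
    A = np.array([[2.0, 1.0, 1.0],
                  [0.0, 4.0, 2.0],
                  [6.0, 3.0, 1.0]])
    B = np.array([1.0, 2.0, 3.0])

    if abs(np.linalg.det(A)) > 1e-12:      # unique solution iff |A| != 0
        X = np.linalg.solve(A, B)          # safer than inv(A) @ B
        print("X =", X)
    else:
        print("|A| = 0: no unique solution")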

3.3 Method of Elimination


Equivalent Systems: Two systems of equations are called equivalent if and only if they have the
same solution set.
Elementary Transformations: A system of equations is transformed into an equivalent system if
the following elementary operations are applied on the system:
(1) two equations are interchanged
(2) an equation is multiplied by a non-zero constant
(3) an equation is replaced by the sum of that equation and a multiple of any
other equation.

Gaussian Elimination
The process which eliminates an unknown from succeeding equations using elementary operations
is known as Gaussian elimination.
The equation which is used to eliminate an unknown from the succeeding equations is known as the
pivotal equation. The coefficient of the eliminated variable in a pivotal equation is known as the
pivot. If the pivot is zero, it cannot be used to eliminate the variable from the other equations.
However, we can continue the elimination process by interchanging that equation with one whose
pivot is nonzero.
Solution of a Linear System
A systematic procedure for solving a linear system is to reduce it to an equivalent system that is
easier to solve. One such form is the echelon form. Back substitution is then used to solve the
equations in reverse order.
A system is in echelon form or upper triangular form if
(i) all equations containing nonzero terms are above any equation with zeros only, and
(ii) the first nonzero term in every equation occurs to the right of the first nonzero term
in the equation above it.
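The reduction to echelon form followed by back substitution can be summarized in a short program.
The sketch below (Python with NumPy, an illustration rather than part of the original notes)
implements the naive elimination just described, signalling an error where a zero pivot would
require an interchange.

    import numpy as np

    def gauss_eliminate(A, b):
        # Reduce [A | b] to upper triangular (echelon) form, then back substitute.
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):                 # equation k is the pivotal equation
            if A[k, k] == 0.0:
                raise ZeroDivisionError("zero pivot: interchange equations first")
            for i in range(k + 1, n):          # eliminate x_k from later equations
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):         # back substitution in reverse order
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x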

3.4 Pivotal Elimination Method


Computers and calculators use a fixed number of digits in their calculations, so numbers may need
to be rounded. This introduces errors into the calculations. Also, when two nearly equal numbers
are subtracted, accuracy is lost. To reduce the propagation of such errors, a pivoting strategy
is used.

3.4.1 Partial Pivoting (Partial Column Pivoting)


In partial pivoting, at each stage we use as the pivot the coefficient of largest magnitude of the
variable being eliminated. The process is repeated for the resulting subsystems. The pivotal
equation is divided through by the pivot, to avoid building up large coefficients when solving the
system. The method is illustrated with an example.

Example: Solve the following linear system by the Gaussian elimination with partial pivoting,
giving your answers to 3 decimal places.
    5x + 12y + 9z = 5,    8x + 11y + 20z = 35,    16x + 5y + 7z = 29.

    Eq. #   Operation        x       y        z        R. H. S.   Check Sum
    Eq1                      5       12       9         5          31
    Eq2                      8       11       20        35         74
    Eq3                      16      5        7         29         57
    Eq4*    Eq3/16           1       0.3125   0.4375    1.8125     3.5625
    Eq5     Eq1/5            1       2.4000   1.8000    1.0000     6.2000
    Eq6     Eq2/8            1       1.3750   2.5000    4.3750     9.2500
    Eq7     Eq5 - Eq4                2.0875   1.3625   -0.8125     2.6375
    Eq8     Eq6 - Eq4                1.0625   2.0625    2.5625     5.6875
    Eq9*    Eq7/2.0875               1        0.6527   -0.3892     1.2635
    Eq10    Eq8/1.0625               1        1.9412    2.4118     5.3530
    Eq11*   Eq10 - Eq9                        1.2885    2.8010     4.0895

Solution of the system is obtained by back substitution as follows:

    z = 2.8010 / 1.2885 = 2.1738
    y = -0.3892 - 0.6527(2.1738) = -1.8080
    x = 1.8125 - 0.3125(-1.8080) - 0.4375(2.1738) = 1.4264

To check the calculations, an extra column headed Check Sum is included; each entry is the sum of
the numbers in its row, and it is carried through exactly the same operations as the other numbers
in the line.
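The partial-pivoting strategy only adds a row interchange to the elimination sketched in Section
3.3: at each stage the equation whose coefficient of the variable being eliminated has the largest
magnitude is moved into the pivotal position. A minimal Python/NumPy sketch (the function name
gauss_partial_pivot is ours, not the chapter's):

    import numpy as np

    def gauss_partial_pivot(A, b):
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            # pivot = largest |coefficient| in column k among rows k..n-1
            p = k + int(np.argmax(np.abs(A[k:, k])))
            if p != k:
                A[[k, p]] = A[[p, k]]          # interchange the equations
                b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):         # back substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # For the example system this returns approximately (1.4265, -1.8081, 2.1739):
    # gauss_partial_pivot(np.array([[5., 12., 9.], [8., 11., 20.], [16., 5., 7.]]),
    #                     np.array([5., 35., 29.]))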

3.4.2 Total Pivoting


Partial pivoting is adequate for most of the simultaneous equations which arise in practice, but we
may encounter sets of equations for which it produces inaccurate solutions. To improve the
calculation in such cases, total pivoting is used. In total pivoting, the coefficient of largest
magnitude in the whole remaining subsystem is used as the pivot at each stage.
Example: Solve the above system using total pivoting.
    Eq. #   Operation        x        y        z       R. H. S.   Check Sum
    Eq1                      5        12       9        5          31
    Eq2                      8        11       20       35         74
    Eq3                      16       5        7        29         57
    Eq4*    Eq2/20           0.4000   0.5500   1        1.7500     3.7000
    Eq5     Eq3/7            2.2857   0.7143   1        4.1429     8.1429
    Eq6     Eq1/9            0.5556   1.3333   1        0.5556     3.4445
    Eq7     Eq5 - Eq4        1.8857   0.1643            2.3929     4.4429
    Eq8     Eq6 - Eq4        0.1556   0.7833           -1.1944    -0.2555
    Eq9*    Eq7/1.8857       1        0.0871            1.2690     2.3561
    Eq10    Eq8/0.1556       1        5.0341           -7.6761    -1.6420
    Eq11*   Eq10 - Eq9                4.9470           -8.9451    -3.9981

Solution of the system is obtained by back substitution:

    y = -8.9451 / 4.9470 = -1.8082
    x = 1.2690 - 0.0871(-1.8082) = 1.4265
    z = 1.7500 - 0.4000(1.4265) - 0.5500(-1.8082) = 2.1739
Solutions of the system are summarized below for comparison.

            Using          With               With
            Mathematica    Partial Pivoting   Total Pivoting
    x        1.4265         1.4264             1.4265
    y       -1.8081        -1.8080            -1.8082
    z        2.1739         2.1738             2.1739
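Total pivoting additionally interchanges columns, which reorders the unknowns; the reordering must
be recorded and undone after back substitution. A hedged Python/NumPy sketch (names are ours):

    import numpy as np

    def gauss_total_pivot(A, b):
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        order = list(range(n))                  # variable attached to each column
        for k in range(n - 1):
            sub = np.abs(A[k:, k:])
            i, j = np.unravel_index(int(np.argmax(sub)), sub.shape)
            i += k
            j += k
            A[[k, i]] = A[[i, k]]               # row interchange
            b[[k, i]] = b[[i, k]]
            A[:, [k, j]] = A[:, [j, k]]         # column interchange
            order[k], order[j] = order[j], order[k]
            for r in range(k + 1, n):
                m = A[r, k] / A[k, k]
                A[r, k:] -= m * A[k, k:]
                b[r] -= m * b[k]
        y = np.zeros(n)
        for r in range(n - 1, -1, -1):          # back substitution
            y[r] = (b[r] - A[r, r + 1:] @ y[r + 1:]) / A[r, r]
        x = np.zeros(n)
        x[np.array(order)] = y                  # undo the variable reordering
        return x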

3.5 Matrix Factorization Method


In this method the coefficient matrix A is decomposed into the product of a lower triangular matrix
L and an upper triangular matrix U.
A=LU
The system of equations becomes
LUX=B
We rewrite this system as
UX=Z
LZ=B
We first solve for Z using forward substitution and then find X using the back substitution.
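Once L and U are known, the two triangular solves are cheap (of order n^2 operations each). A
minimal Python/NumPy sketch of the forward and back substitution just described (the function name
solve_lu is ours):

    import numpy as np

    def solve_lu(L, U, b):
        n = len(b)
        z = np.zeros(n)
        for i in range(n):                      # forward substitution: L Z = B
            z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):          # back substitution: U X = Z
            x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x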
Any matrix whose leading principal minors are all non-zero can be written as a product of a lower
triangular and an upper triangular matrix, in an infinite number of ways.
For example,

    | 2  1  1 |   | 2  0  0 | | 1  1/2   1/2 |   | 1  0  0 | | 2  1  1 |   | 1  0  0 | | 2  1  1 |
    | 0  4  2 | = | 0  4  0 | | 0   1    1/2 | = | 0  1  0 | | 0  4  2 | = | 0  2  0 | | 0  2  1 |
    | 6  3  1 |   | 6  0  4 | | 0   0  -1/2  |   | 3  0  1 | | 0  0 -2 |   | 3  0  1 | | 0  0 -2 |

and so on.

Example: Show that the matrix

    | 1  2  3 |
    | 2  4  1 |
    | 1  0  2 |

is non-singular but cannot be expressed in the LU form.

3.5.1 Crout's Factorization


In practice, we write A = L U with u_ii = 1, which simplifies the calculations.
Crout's factorization is explained by the following example.

Example:

Given that

    A = | 1   2    3 |         B = |  5 |         X = | x |
        | 3   4   11 |,            | 21 |,             | y |
        | 5  14   12 |             | 15 |              | z |

(i) Determine a lower triangular matrix L and an upper triangular matrix U such that L U = A.
(ii) Use the above factorization to solve the equation A X = B.

Solution:

(i) Let

    A = | 1   2    3 |         | a  0  0 | | 1  l  m |   | a    al       am           |
        | 3   4   11 | = L U = | b  c  0 | | 0  1  n | = | b    bl + c   bm + cn      |
        | 5  14   12 |         | d  e  f | | 0  0  1 |   | d    dl + e   dm + en + f  |

Equating the corresponding elements of the two matrices, we have

    a = 1,    b = 3,    d = 5,
    al = 2            or    l = 2,
    am = 3            or    m = 3,
    bl + c = 4        or    c = 4 - 3(2) = -2,
    dl + e = 14       or    e = 14 - 5(2) = 4,
    bm + cn = 11      or    n = [11 - 3(3)] / (-2) = -1,
    dm + en + f = 12  or    f = 12 - 5(3) - 4(-1) = 1.

Thus

    L = | 1   0  0 |         U = | 1  2   3 |
        | 3  -2  0 |,            | 0  1  -1 |
        | 5   4  1 |             | 0  0   1 |

(ii) The equation can be written as

    A X = L U X = L Y = B,    where    U X = Y    and    Y = ( y1  y2  y3 )^T.

Consider the solution of L Y = B, i.e.

    | 1   0  0 | | y1 |   |  5 |
    | 3  -2  0 | | y2 | = | 21 |
    | 5   4  1 | | y3 |   | 15 |

Using forward substitution, we have

    y1 = 5,    y2 = -3,    y3 = 2.

Now consider the solution of U X = Y, i.e.

    | 1  2   3 | | x |   |  5 |
    | 0  1  -1 | | y | = | -3 |
    | 0  0   1 | | z |   |  2 |

Using back substitution, we have

    x = 1,    y = -1,    z = 2

and hence

    X = |  1 |
        | -1 |
        |  2 |
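The hand computation above generalizes directly to any order. Below is a sketch of Crout's
factorization (u_ii = 1) in Python/NumPy, assuming no zero divisor L[j, j] is encountered; combined
with a triangular solver such as the solve_lu sketch given earlier in this section, it reproduces
X = (1, -1, 2)^T for the example.

    import numpy as np

    def crout(A):
        # A = L U with U having unit diagonal (u_ii = 1), as in Crout's method.
        n = A.shape[0]
        L = np.zeros((n, n))
        U = np.eye(n)
        for j in range(n):
            for i in range(j, n):               # column j of L
                L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
            for k in range(j + 1, n):           # row j of U
                U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
        return L, U

    # crout(np.array([[1., 2., 3.], [3., 4., 11.], [5., 14., 12.]]))
    # gives the L and U obtained above.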

3.5.2 Cholesky Method (Square Root Method)

Positive Definite
A symmetric matrix A is positive definite if X* A X > 0 for every X ≠ 0. Here X* = (X̄)^T, where X̄
is the complex conjugate of X.

Example: Show that the matrix

    A = | 2  1 |
        | 1  2 |

is positive definite but

    B = | 1  2 |
        | 2  1 |

is not.

If the coefficient matrix A is symmetric and positive definite, it can be decomposed as

    A = L L^T.

In this case the inverse A^(-1) can be obtained as follows:

    A^(-1) = (L L^T)^(-1) = (L^T)^(-1) L^(-1) = (L^(-1))^T L^(-1).

Recall that the inverse of a lower triangular matrix is also a lower triangular matrix. This
property may be used to find L^(-1).
For a third order lower triangular matrix L, we may write the relation L L^(-1) = I as

    | a1    0    0  | | b1    0    0  |   | 1  0  0 |
    | a21   a2   0  | | b21   b2   0  | = | 0  1  0 |
    | a31   a32  a3 | | b31   b32  b3 |   | 0  0  1 |

Comparing the two sides, we may then find the unknowns bij.
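The factorization A = L L^T itself follows the same element-by-element comparison. A minimal
Python/NumPy sketch (the function name is ours); the square root fails exactly when A is not
positive definite, which is why the method is also called the square root method.

    import numpy as np

    def cholesky_factor(A):
        # A = L L^T for a symmetric positive definite A.
        n = A.shape[0]
        L = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1):
                s = A[i, j] - L[i, :j] @ L[j, :j]
                if i == j:
                    if s <= 0:                  # A is not positive definite
                        raise ValueError("matrix is not positive definite")
                    L[i, i] = np.sqrt(s)
                else:
                    L[i, j] = s / L[j, j]
        return L

    # cholesky_factor(np.array([[2., 1.], [1., 2.]])) succeeds, while the
    # matrix B above raises ValueError.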

3.6 Solution of Linear System by Iterative Method


The iterative method for a linear system is similar to the method of fixed-point iteration for an
equation in one variable. To solve a linear system by iteration, we solve each equation for one of
the variables, in turn, in terms of the other variables. Starting from an initial approximation to
the solution, we derive a sequence of new approximations and, if the process is convergent, repeat
the calculations till the required accuracy is obtained.
An iterative method converges, for any choice of the first approximation, if in every equation the
magnitude of the coefficient of the variable being solved for is greater than the sum of the
absolute values of the coefficients of the other variables. A system satisfying this condition is
called diagonally dominant. A linear system can often be rearranged into diagonally dominant form
by elementary operations.

For example, in the following system we have

    12x - 2y + 5z = 20    (E1)        |12| > |-2| + |5|
    4x + 5y + 11z = 8     (E2)        |5|  < |4| + |11|
    7x + 12y + 10z = 27   (E3)        |10| < |7| + |12|

and the system is not diagonally dominant, since the second and third equations violate the
condition. Rearranging as (E1), (E3) - (E2), (E2), we have

    12x - 2y + 5z = 20                |12| > |-2| + |5|
    3x + 7y - z   = 19                |7|  > |3| + |-1|
    4x + 5y + 11z = 8                 |11| > |4| + |5|

The system reduces to diagonally dominant form.
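The dominance test is easy to mechanize. A small sketch (assuming Python/NumPy; the function name
is ours):

    import numpy as np

    def is_diagonally_dominant(A):
        A = np.abs(np.asarray(A, dtype=float))
        diag = A.diagonal()
        off = A.sum(axis=1) - diag              # sum of the other |coefficients|
        return bool(np.all(diag > off))

    # is_diagonally_dominant([[12, -2, 5], [4, 5, 11], [7, 12, 10]])  -> False
    # is_diagonally_dominant([[12, -2, 5], [3, 7, -1], [4, 5, 11]])   -> True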


Two commonly used iterative processes are discussed below:

3.6.1 Jacobi Iterative Method:


In this method, a fixed set of values is used to calculate all the variables, and the values
obtained are then used for the next iteration. The iterative formulas for the above system are

    x_(n+1) = (1/12) [20 + 2y_n - 5z_n]
    y_(n+1) = (1/7)  [19 - 3x_n + z_n]
    z_(n+1) = (1/11) [8 - 4x_n - 5y_n]

Starting with the initial values

    x_0 = 0,    y_0 = 0,    z_0 = 0,

we get

    x_1 = (1/12) [20 + 0 - 0] = 1.67
    y_1 = (1/7)  [19 - 0 + 0] = 2.71
    z_1 = (1/11) [8 - 0 - 0]  = 0.73

The second approximation is

    x_2 = (1/12) [20 + 2(2.71) - 5(0.73)] = 1.81
    y_2 = (1/7)  [19 - 3(1.67) + 0.73]    = 2.10
    z_2 = (1/11) [8 - 4(1.67) - 5(2.71)]  = -1.11

and so on. Results are summarized below.

Successive iterates of solution (Jacobi Method)

    n       0      1       2       3       4     . . .     9      10      11
    x_n     0     1.67    1.81    2.48    2.33   . . .    2.29    2.29    2.29
    y_n     0     2.71    2.10    1.78    1.52   . . .    1.62    1.61    1.61
    z_n     0     0.73   -1.11   -0.89   -0.98   . . .   -0.84   -0.84   -0.84
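The iteration above can be expressed compactly by splitting A into its diagonal and off-diagonal
parts. A minimal Python/NumPy sketch (names and tolerance are our choices):

    import numpy as np

    def jacobi(A, b, x0, tol=1e-2, max_iter=50):
        # All variables are updated from the values of the previous iteration.
        D = np.diag(A)                          # diagonal coefficients
        R = A - np.diagflat(D)                  # off-diagonal part
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D
            if np.max(np.abs(x_new - x)) < tol:
                return x_new
            x = x_new
        return x

    # jacobi(np.array([[12., -2., 5.], [3., 7., -1.], [4., 5., 11.]]),
    #        np.array([20., 19., 8.]), np.zeros(3))
    # converges to about (2.29, 1.61, -0.84), as in the table above.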

3.6.2 Gauss-Seidel Iterative Method:


In this method, the value of each variable is calculated using the most recent approximations to
the values of the other variables. The iterative formulas for the above system are

    x_(n+1) = (1/12) [20 + 2y_n - 5z_n]
    y_(n+1) = (1/7)  [19 - 3x_(n+1) + z_n]
    z_(n+1) = (1/11) [8 - 4x_(n+1) - 5y_(n+1)]

Starting with the initial values

    x_0 = 0,    y_0 = 0,    z_0 = 0,

we get the solutions as follows:

First approximation:

    x_1 = (1/12) [20 + 0 - 0]              = 1.67
    y_1 = (1/7)  [19 - 3(1.67) + 0]        = 2.00
    z_1 = (1/11) [8 - 4(1.67) - 5(2.00)]   = -0.79

Second approximation:

    x_2 = (1/12) [20 + 2(2.00) - 5(-0.79)] = 2.33
    y_2 = (1/7)  [19 - 3(2.33) + (-0.79)]  = 1.60
    z_2 = (1/11) [8 - 4(2.33) - 5(1.60)]   = -0.85

Third approximation:

    x_3 = (1/12) [20 + 2(1.60) - 5(-0.85)] = 2.29
    y_3 = (1/7)  [19 - 3(2.29) + (-0.85)]  = 1.61
    z_3 = (1/11) [8 - 4(2.29) - 5(1.61)]   = -0.84

Fourth approximation:

    x_4 = (1/12) [20 + 2(1.61) - 5(-0.84)] = 2.29
    y_4 = (1/7)  [19 - 3(2.29) + (-0.84)]  = 1.61
    z_4 = (1/11) [8 - 4(2.29) - 5(1.61)]   = -0.84

which gives the results correct to 2 d.p.


It can be observed that the Gauss-Seidel method converges about twice as fast as the Jacobi method.
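The only change from the Jacobi sketch is that updated values are used within the same sweep. A
minimal Python/NumPy sketch (names and tolerance are our choices):

    import numpy as np

    def gauss_seidel(A, b, x0, tol=1e-2, max_iter=50):
        # Each variable is updated with the most recent values of the others.
        n = len(b)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                break
        return x

    # On the diagonally dominant system above this reaches (2.29, 1.61, -0.84)
    # in about four iterations, versus about ten for the Jacobi method.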

