
Numerical Solutions of Algebraic Equations: Direct Methods

Lesson: Numerical Solutions of Algebraic Equations: Direct Methods

Lesson Developer: Brijendra Yadav & Chaman Singh

College/Department: Assistant Professor, Department of Mathematics, A.N.D. College, University of Delhi

Institute of Lifelong Learning, University of Delhi pg.1



Table of Contents

Chapter: Numerical Solutions of Algebraic Equations: Direct Methods


 1: Learning Outcomes
 2: Introduction
 3: Direct Methods to Solve the System of Linear Equations
o 3.1: Method of Solution using Inverse of the Matrix
o 3.2: Cramer's Rule
 4: Method of Factorization (Triangularization Method)
o 4.1: Doolittle's method
o 4.2: Crout's method
o 4.3: Cholesky Method
 5: Positive Definite Matrix
o 5.1: Properties of Positive Definite Matrices
 6: Pivoting
o 6.1: Partial Pivoting
o 6.2: Complete Pivoting
 7: Gauss Elimination Method
 8: Gauss-Jordan Elimination Method
 9. Error Analysis for Direct Methods
o 9.1: Operational Count for Gauss Elimination

 Exercises
 Summary
 Reference

1. Learning outcomes:

After studying this chapter, you should be able to understand:

 Direct methods to solve the system of linear equations


 Inverse of the matrix method
 Cramer's Rule
 Method of Factorization (Triangularization Method)
 Doolittle's method
 Crout's method
 Cholesky Method
 Positive Definite Matrix
 Gauss Elimination Method
 Pivoting
 Gauss-Jordan Elimination Method
 Error Analysis for Direct Methods
 Operational Count for Gauss Elimination


2. Introduction:

Consider a system of n linear algebraic equations in n unknowns x1, x2, . . ., xn:

a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2        (1)
. . .
an1 x1 + an2 x2 + . . . + ann xn = bn

where aij (i = 1, 2, . . ., n and j = 1, 2, . . ., n) are the known coefficients, bi (i = 1, 2, . . ., n) are the known values and xi (i = 1, 2, . . ., n) are the unknowns to be determined.

The above system of linear equations may be represented as the matrix equation

AX = b

where

A = [ a11 a12 . . . a1n ; a21 a22 . . . a2n ; . . . ; an1 an2 . . . ann ] (rows separated by semicolons),  X = [ x1, x2, . . ., xn ]^T  and  b = [ b1, b2, . . ., bn ]^T

The system of equations given above is said to be homogeneous if all the bi (i = 1, 2, . . ., n) vanish; otherwise it is called a non-homogeneous system of equations.

By finding a solution of a system of equations we mean obtaining values of x1, x2, . . ., xn that satisfy the given equations. A solution vector of system (1) is a vector X whose components constitute a solution of (1).

There are two types of numerical methods to solve the above system of equations:

(I) Direct Methods: such as the Gauss elimination method, in which the amount of computation required to obtain a solution can be specified in advance.


(II) Indirect or Iterative Methods: such as the Gauss-Seidel method, in which we start from a (possibly crude) approximation and improve it stepwise by repeatedly performing the same cycle of computation with changing data.

3. Direct Methods to Solve the System of Linear Equations:

The necessary and sufficient condition for the existence of a solution of the system of equations

AX = b

where

A = [ a11 a12 . . . a1n ; a21 a22 . . . a2n ; . . . ; an1 an2 . . . ann ],  X = [ x1, x2, . . ., xn ]^T  and  b = [ b1, b2, . . ., bn ]^T

is that

Rank [A] = Rank [A : b]

or we can say that the rank of the coefficient matrix is the same as the rank of the augmented matrix.

Value Addition: Existence of a Solution of the Equation AX = b

(I) If bi = 0 and det A = 0, then there exist an infinite number of non-trivial solutions besides the trivial solution X = 0.
(II) If bi = 0 and det A ≠ 0, then the system has only the unique trivial solution X = 0. In this case Rank (A) = n (number of variables).
(III) If bi ≠ 0 and det A ≠ 0, then the system has a unique solution, and in this case Rank (A) = n (number of variables).
(IV) If bi ≠ 0 and det A = 0, then there exist an infinite number of solutions provided the equations are consistent. In this case Rank (A) < n.

3.1. Method of Solution using Inverse of the Matrix:

Consider the system of linear equations

AX = b (1)

We know that A is an invertible matrix iff det A ≠ 0. Now if A is an invertible matrix, then from equation (1), we have

A^{-1} A X = A^{-1} b

⇒ X = A^{-1} b

where

A^{-1} = (1 / det A) Adj A

⇒ X = (Adj A / det A) b

Hence solution is determined.
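The inverse method above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original lesson: the helper names `det` and `solve_by_inverse` are ours, and exact rational arithmetic (`Fraction`) is used so no round-off intrudes.

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def solve_by_inverse(A, b):
    """Solve AX = b as X = (Adj A / det A) b; valid only when det A != 0."""
    n = len(A)
    d = det(A)
    if d == 0:
        raise ValueError("det A = 0: the inverse method does not apply")
    # Adj A is the transpose of the cofactor matrix: (Adj A)[i][j] = C_ji,
    # the cofactor of the (j, i) entry of A.
    adj = [[(-1) ** (i + j) * det([row[:i] + row[i+1:] for row in A[:j] + A[j+1:]])
            for j in range(n)] for i in range(n)]
    return [sum(Fraction(adj[i][k], d) * b[k] for k in range(n)) for i in range(n)]

# A small 3x3 system: 3x + y + 2z = 3, 2x - 3y - z = -3, x + 2y + z = 4
print(solve_by_inverse([[3, 1, 2], [2, -3, -1], [1, 2, 1]], [3, -3, 4]))
# -> [Fraction(1, 1), Fraction(2, 1), Fraction(-1, 1)]
```

Cofactor expansion costs O(n!) operations, which is exactly why this route is impractical for large n; it is shown here only to mirror the formula X = (Adj A / det A) b.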

3.2. Cramer's Rule:

Consider the system of linear equations

AX = b

where

A = [ a11 a12 . . . a1n ; a21 a22 . . . a2n ; . . . ; an1 an2 . . . ann ],  X = [ x1, x2, . . ., xn ]^T  and  b = [ b1, b2, . . ., bn ]^T

Then the jth component of the solution vector X is determined by

xj = det Aj / det A

where det Aj is the determinant obtained by replacing the jth column of det A by b, i.e.

Aj = [ a11 . . . a1(j-1) b1 a1(j+1) . . . a1n ; a21 . . . a2(j-1) b2 a2(j+1) . . . a2n ; . . . ; an1 . . . an(j-1) bn an(j+1) . . . ann ]

Value Addition
Cramer's Rule is feasible only when n = 2, 3 or 4, because the cost of evaluating the determinants grows very quickly with n.
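The rule xj = det Aj / det A translates directly into code. A minimal sketch (the function names are ours, and exact rational arithmetic is used):

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """Cramer's rule: x_j = det(A_j) / det(A), where A_j is A with
    its j-th column replaced by b."""
    d = det(A)
    if d == 0:
        raise ValueError("det A = 0: Cramer's rule does not apply")
    x = []
    for j in range(len(A)):
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        x.append(Fraction(det(Aj), d))
    return x

print(cramer([[2, 1], [1, 3]], [3, 5]))  # -> [Fraction(4, 5), Fraction(7, 5)]
```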

Example 1: Solve the system of linear equations

3x + y + 2z = 3
2x - 3y - z = -3
x + 2y + z = 4

using the inverse of the matrix method.

Solution: Given system of equations is

AX = b

where

A = [ 3 1 2 ; 2 -3 -1 ; 1 2 1 ],  X = [ x, y, z ]^T  and  b = [ 3, -3, 4 ]^T

⇒ det A = |A| = 8

adj A = [ -1 3 5 ; -3 1 7 ; 7 -5 -11 ]

⇒ A^{-1} = (1/|A|) adj A = (1/8) [ -1 3 5 ; -3 1 7 ; 7 -5 -11 ]

⇒ X = A^{-1} b = (1/8) [ -1 3 5 ; -3 1 7 ; 7 -5 -11 ] [ 3, -3, 4 ]^T = [ 1, 2, -1 ]^T

Thus,

x = 1, y = 2 and z = -1.

Example 2: Solve the system of linear equations

x + 2y - z = 2
3x + 6y + z = 1
3x + 3y + 2z = 3

using Cramer's Rule.

Solution: Given system of equations is

AX = b (1)

where

A = [ 1 2 -1 ; 3 6 1 ; 3 3 2 ],  X = [ x, y, z ]^T  and  b = [ 2, 1, 3 ]^T

⇒ det A = |A| = 12

Also A1 = [ 2 2 -1 ; 1 6 1 ; 3 3 2 ],  A2 = [ 1 2 -1 ; 3 1 1 ; 3 3 2 ]  and  A3 = [ 1 2 2 ; 3 6 1 ; 3 3 3 ]

⇒ |A1| = 35, |A2| = -13 and |A3| = -15

Using Cramer's rule we have

x = |A1| / |A| = 35/12,  y = |A2| / |A| = -13/12  and  z = |A3| / |A| = -15/12

Thus,

x = 35/12, y = -13/12 and z = -5/4.

4. Method of Factorization (Triangularization Method):

This method is also known as the decomposition method. It is based on the fact that a square matrix A can be factored into the product of a lower triangular matrix L and an upper triangular matrix U if all the leading principal minors of A are non-zero, i.e. if

a11 ≠ 0,  | a11 a12 ; a21 a22 | ≠ 0,  | a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 | ≠ 0, etc.

Thus, the matrix A can be expressed as

A = LU (1)

where

L = [ l11 0 0 . . . 0 ; l21 l22 0 . . . 0 ; l31 l32 l33 . . . 0 ; . . . ; ln1 ln2 ln3 . . . lnn ]  and  U = [ u11 u12 u13 . . . u1n ; 0 u22 u23 . . . u2n ; 0 0 u33 . . . u3n ; . . . ; 0 0 0 . . . unn ]

Using the matrix multiplication rule to multiply the matrices L and U and comparing the elements of the resulting matrix with those of A, we obtain

li1 u1j + li2 u2j + . . . + lin unj = aij  (i = 1, 2, . . ., n and j = 1, 2, . . ., n)

where

lij = 0 if i < j and uij = 0 if i > j

This system of equations involves n^2 + n unknowns. Thus there is an n-parameter family of solutions. To produce a unique solution it is convenient to choose either

uii = 1 or lii = 1, i = 1, 2, . . ., n

Now,

4.1. Doolittle's method:

If we take lii = 1 in the factorization method, then the factorization method is called Doolittle's method.

Now if lii = 1, then we have

L = [ 1 0 0 . . . 0 ; l21 1 0 . . . 0 ; l31 l32 1 . . . 0 ; . . . ; ln1 ln2 ln3 . . . 1 ]  and  U = [ u11 u12 u13 . . . u1n ; 0 u22 u23 . . . u2n ; 0 0 u33 . . . u3n ; . . . ; 0 0 0 . . . unn ]

Thus, for the system of equations

AX = b (2)

we have

LUX = b (3)

Putting UX = y in equation (3), we have

Ly = b (4)

On solving equation (4) by forward substitution, we find the vector y; now solving the system of equations

UX = y

by backward substitution we get the values

x1, x2, . . ., xn.

We have

UX = y and Ly = b

⇒ y = L^{-1} b and X = U^{-1} y

Thus the inverse of A can also be determined as

A^{-1} = U^{-1} L^{-1}.

Example 3: Solve the system of equations

2x + 3y + z = 9
x + 2y + 3z = 6
3x + y + 2z = 8

using the factorization method (Doolittle's method).

Solution (Doolittle's method): We have the system of equations

AX = b (1)

where

A = [ 2 3 1 ; 1 2 3 ; 3 1 2 ],  X = [ x, y, z ]^T  and  b = [ 9, 6, 8 ]^T

Now let

A = LU (2)

where

L = [ 1 0 0 ; l21 1 0 ; l31 l32 1 ]  and  U = [ u11 u12 u13 ; 0 u22 u23 ; 0 0 u33 ]

Thus from equation (2) we have

[ 1 0 0 ; l21 1 0 ; l31 l32 1 ] [ u11 u12 u13 ; 0 u22 u23 ; 0 0 u33 ] = [ 2 3 1 ; 1 2 3 ; 3 1 2 ]

⇒ u11 = 2, u12 = 3, u13 = 1

l21 u11 = 1 ⇒ l21 = 1/2 and l31 u11 = 3 ⇒ l31 = 3/2

l21 u12 + u22 = 2 ⇒ u22 = 1/2 and l21 u13 + u23 = 3 ⇒ u23 = 5/2

l31 u12 + l32 u22 = 1 ⇒ l32 = -7 and l31 u13 + l32 u23 + u33 = 2 ⇒ u33 = 18

Thus, we have

L = [ 1 0 0 ; 1/2 1 0 ; 3/2 -7 1 ]  and  U = [ 2 3 1 ; 0 1/2 5/2 ; 0 0 18 ]

Now using equations (1) and (2) we have

LUX = b

Let

UX = Y (3)

where Y = [ y1, y2, y3 ]^T

⇒ LY = b

[ 1 0 0 ; 1/2 1 0 ; 3/2 -7 1 ] [ y1, y2, y3 ]^T = [ 9, 6, 8 ]^T

⇒ y1 = 9
(1/2) y1 + y2 = 6
(3/2) y1 - 7 y2 + y3 = 8

On solving using forward substitution we have

y1 = 9, y2 = 3/2 and y3 = 5

Now using equation (3) we have

[ 2 3 1 ; 0 1/2 5/2 ; 0 0 18 ] [ x, y, z ]^T = [ 9, 3/2, 5 ]^T

⇒ 2x + 3y + z = 9
(1/2) y + (5/2) z = 3/2
18 z = 5

On solving using backward substitution we have

x = 35/18, y = 29/18 and z = 5/18

Thus, the solution of the given system of equations is

x = 35/18, y = 29/18 and z = 5/18.

4.2. Crout's method:

If we take uii = 1 in the factorization method, then the factorization method is called Crout's method.

For the matrix A, where

A = LU (1)

if uii = 1, then we have


L = [ l11 0 0 . . . 0 ; l21 l22 0 . . . 0 ; l31 l32 l33 . . . 0 ; . . . ; ln1 ln2 ln3 . . . lnn ]  and  U = [ 1 u12 u13 . . . u1n ; 0 1 u23 . . . u2n ; 0 0 1 . . . u3n ; . . . ; 0 0 0 . . . 1 ]

Thus, for the system of equations

AX = b (2)

we have

LUX = b (3)

Putting UX = y in equation (3), we have

Ly = b (4)

On solving equation (4) by forward substitution, we find the vector y; now solving the system of equations

UX = y

by backward substitution we get the values

x1, x2, . . ., xn.

We have

UX = y and Ly = b

⇒ y = L^{-1} b and X = U^{-1} y

Thus the inverse of A can also be determined as

A^{-1} = U^{-1} L^{-1}.

Example 4: Solve the system of equations

x + y + z = 1
4x + 3y - z = 6
3x + 5y + 3z = 4

using the factorization method (Crout's method).

Solution (Crout's method): We have the system of equations

AX = b (1)

where

A = [ 1 1 1 ; 4 3 -1 ; 3 5 3 ],  X = [ x, y, z ]^T  and  b = [ 1, 6, 4 ]^T

Now let

A = LU (2)

where

L = [ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ]  and  U = [ 1 u12 u13 ; 0 1 u23 ; 0 0 1 ]

Thus from equation (2) we have

[ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ] [ 1 u12 u13 ; 0 1 u23 ; 0 0 1 ] = [ 1 1 1 ; 4 3 -1 ; 3 5 3 ]

⇒ l11 = 1, l21 = 4, l31 = 3

l11 u12 = 1 ⇒ u12 = 1 and l11 u13 = 1 ⇒ u13 = 1

l21 u12 + l22 = 3 ⇒ l22 = -1 and l21 u13 + l22 u23 = -1 ⇒ u23 = 5

l31 u12 + l32 = 5 ⇒ l32 = 2 and l31 u13 + l32 u23 + l33 = 3 ⇒ l33 = -10

Thus, we have

L = [ 1 0 0 ; 4 -1 0 ; 3 2 -10 ]  and  U = [ 1 1 1 ; 0 1 5 ; 0 0 1 ]

Now using equations (1) and (2) we have

LUX = b

Let

UX = Y (3)

where Y = [ y1, y2, y3 ]^T

⇒ LY = b

[ 1 0 0 ; 4 -1 0 ; 3 2 -10 ] [ y1, y2, y3 ]^T = [ 1, 6, 4 ]^T

⇒ y1 = 1
4 y1 - y2 = 6
3 y1 + 2 y2 - 10 y3 = 4

On solving using forward substitution we have

y1 = 1, y2 = -2 and y3 = -1/2

Now using equation (3) we have

[ 1 1 1 ; 0 1 5 ; 0 0 1 ] [ x, y, z ]^T = [ 1, -2, -1/2 ]^T

⇒ x + y + z = 1
y + 5z = -2
z = -1/2

On solving using backward substitution we have

x = 1, y = 1/2 and z = -1/2

Thus, the solution of the given system of equations is

x = 1, y = 1/2 and z = -1/2.

4.3. Cholesky Method:

This method is also known as the square root method. If the coefficient matrix A is symmetric and positive definite, then the matrix A can be decomposed as

A = L L^T

where L = (lij), lij = 0 if j > i.

Thus, the system of equations

AX = b (1)

becomes

L L^T X = b (2)

Let L^T X = Y; then (2) becomes

LY = b (3)

Now the system of equations in (3) can be solved by forward substitution, and the solution vector X is determined from

L^T X = Y (4)

by backward substitution.

The inverse of the coefficient matrix A can also be obtained from

A^{-1} = (L L^T)^{-1} = (L^T)^{-1} L^{-1} = (L^{-1})^T L^{-1}.

Value Addition: Note

1. The matrix A can also be decomposed as

A = U U^T

where U = (uij), uij = 0 if i > j.

2. If the coefficient matrix A is symmetric but not positive definite, then the Cholesky method could still be applied, but it then leads to a complex matrix L, so that it becomes impractical.
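The square-root recurrences can be sketched as follows (a minimal Python sketch, names ours; it raises an error rather than producing the complex L mentioned in the note above):

```python
from math import sqrt

def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: l_jj = sqrt(a_jj - sum of squares already placed in row j)
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d <= 0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = sqrt(d)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

# x + 2y + 3z = 5, 2x + 8y + 22z = 6, 3x + 22y + 82z = -10
print(cholesky([[1, 2, 3], [2, 8, 22], [3, 22, 82]]))
# -> [[1.0, 0.0, 0.0], [2.0, 2.0, 0.0], [3.0, 8.0, 3.0]]
```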
Example 5: Solve the system of equations

x + 2y + 3z = 5
2x + 8y + 22z = 6
3x + 22y + 82z = -10

using Cholesky method.

Solution: Given system of equations is

AX = b (1)

where

A = [ 1 2 3 ; 2 8 22 ; 3 22 82 ],  X = [ x, y, z ]^T  and  b = [ 5, 6, -10 ]^T

Now let

A = L L^T (2)

where

L = [ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ]

Thus from equation (2) we have

[ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ] [ l11 l21 l31 ; 0 l22 l32 ; 0 0 l33 ] = [ 1 2 3 ; 2 8 22 ; 3 22 82 ]

⇒ l11^2 = 1 ⇒ l11 = 1,
l11 l21 = 2 ⇒ l21 = 2,
l11 l31 = 3 ⇒ l31 = 3,
l21^2 + l22^2 = 8 ⇒ l22 = 2,
l31 l21 + l32 l22 = 22 ⇒ l32 = 8,
l31^2 + l32^2 + l33^2 = 82 ⇒ l33 = 3.

Thus, we have

L = [ 1 0 0 ; 2 2 0 ; 3 8 3 ]

Now using equations (1) and (2) we have

L L^T X = b

Let

L^T X = Y (3)

where Y = [ y1, y2, y3 ]^T

⇒ LY = b

[ 1 0 0 ; 2 2 0 ; 3 8 3 ] [ y1, y2, y3 ]^T = [ 5, 6, -10 ]^T

⇒ y1 = 5
2 y1 + 2 y2 = 6
3 y1 + 8 y2 + 3 y3 = -10

On solving using forward substitution we have

y1 = 5, y2 = -2 and y3 = -3

Now using equation (3) we have

[ 1 2 3 ; 0 2 8 ; 0 0 3 ] [ x, y, z ]^T = [ 5, -2, -3 ]^T

⇒ x + 2y + 3z = 5
2y + 8z = -2
3z = -3

On solving using backward substitution we have

x = 2, y = 3 and z = -1

Thus, the solution of the given system of equations is

x = 2, y = 3 and z = -1.

Example 6: Solve the system of equations

4x + 2y + 14z = 14
2x + 17y - 5z = -101
14x - 5y + 83z = 155

using Cholesky method.

Solution: Given system of equations is

AX = b (1)

where

A = [ 4 2 14 ; 2 17 -5 ; 14 -5 83 ],  X = [ x, y, z ]^T  and  b = [ 14, -101, 155 ]^T

Now let

A = L L^T (2)

where

L = [ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ]

Thus from equation (2) we have

[ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ] [ l11 l21 l31 ; 0 l22 l32 ; 0 0 l33 ] = [ 4 2 14 ; 2 17 -5 ; 14 -5 83 ]

⇒ l11^2 = 4 ⇒ l11 = 2,
l11 l21 = 2 ⇒ l21 = 1,
l11 l31 = 14 ⇒ l31 = 7,
l21^2 + l22^2 = 17 ⇒ l22 = 4,
l31 l21 + l32 l22 = -5 ⇒ l32 = -3,
l31^2 + l32^2 + l33^2 = 83 ⇒ l33 = 5.

Thus, we have

L = [ 2 0 0 ; 1 4 0 ; 7 -3 5 ]

Now using equations (1) and (2) we have

L L^T X = b

Let

L^T X = Y (3)

where Y = [ y1, y2, y3 ]^T

⇒ LY = b

[ 2 0 0 ; 1 4 0 ; 7 -3 5 ] [ y1, y2, y3 ]^T = [ 14, -101, 155 ]^T

⇒ 2 y1 = 14
y1 + 4 y2 = -101
7 y1 - 3 y2 + 5 y3 = 155

On solving using forward substitution we have

y1 = 7, y2 = -27 and y3 = 5

Now using equation (3) we have

[ 2 1 7 ; 0 4 -3 ; 0 0 5 ] [ x, y, z ]^T = [ 7, -27, 5 ]^T

⇒ 2x + y + 7z = 7
4y - 3z = -27
5z = 5

On solving using backward substitution we have

x = 3, y = -6 and z = 1

Thus, the solution of the given system of equations is

x = 3, y = -6 and z = 1.

Theorem 1 (Stability of the Cholesky Factorization): The Cholesky LL^T-factorization is numerically stable.

Proof: For the Cholesky method, we know that

A = L L^T (1)

where L = (lij), lij = 0 if j > i.

Thus,

ajj = lj1^2 + lj2^2 + . . . + ljj^2  [using equation (1)] (2)

Hence for every ljk^2 (with ljk = 0 for k > j) we obtain

ljk^2 ≤ lj1^2 + lj2^2 + . . . + ljj^2 = ajj

Thus ljk^2 is bounded by an entry of A, which means stability against round-off.


5. Positive Definite Matrix:

A matrix A is said to be positive definite if X* A X > 0 for every vector X ≠ 0, where X* = (X̄)^T denotes the conjugate transpose of X. Further, X* A X = 0 if and only if X = 0.

Value Addition: Note

If a matrix A is Hermitian and strictly diagonally dominant with positive real diagonal entries, then A is positive definite.

5.1. Properties of Positive Definite Matrices:

A positive definite matrix has the following properties:

(I) If A is non-singular, then B = A* A is Hermitian and positive definite.

(II) The eigenvalues of a positive definite matrix are all positive.

(III) All the leading principal minors of A are positive.
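For a real symmetric matrix, property (III) gives a direct computational test (Sylvester's criterion). A minimal sketch, with helper names of our own choosing:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def is_positive_definite(A):
    """For a real symmetric A: positive definite iff every leading
    principal minor is positive (property III)."""
    n = len(A)
    return all(det([row[:k] for row in A[:k]]) > 0 for k in range(1, n + 1))

print(is_positive_definite([[4, 2, 14], [2, 17, -5], [14, -5, 83]]))  # True
print(is_positive_definite([[1, 2], [2, 1]]))                         # False
```

The first matrix is the coefficient matrix used with the Cholesky method above; the second fails because its determinant is -3.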

Value Addition: Note

1. A real matrix B is said to have 'property A' iff there exists a permutation matrix P such that P B P^T = [ A11 A12 ; A21 A22 ], where A11 and A22 are diagonal matrices.
2. The inverse of a symmetric matrix is a symmetric matrix.
3. The inverse of an upper triangular matrix is an upper triangular matrix.
4. The inverse of a lower triangular matrix is a lower triangular matrix.

6. Pivoting:

In the Gauss elimination process, it may sometimes happen that one of the pivot elements a11, a'22, a'33, . . ., a'nn vanishes or becomes very small compared to the other elements in that column. We then rearrange the remaining rows so as to obtain a non-vanishing pivot, or to avoid multiplication by a large number. This process is called pivoting.

There are two types of pivoting.

6.1. Partial Pivoting: Partial pivoting is done in the following steps:

Step 1: In the first stage of elimination, we search the first column for the element largest in magnitude and bring it into the pivot position by interchanging the first equation with the equation containing that element.

Institute of Lifelong Learning, University of Delhi pg.20


Numerical Solutions of Algebraic Equations: Direct Methods

Step 2: In the second stage we search the second column, leaving the first element, for the largest element in magnitude among the remaining (n-1) elements, and bring this element into the pivot position by interchanging the second equation with the equation containing it.

This procedure is continued until we obtain an upper triangular matrix. In partial pivoting, the pivot is found by the following algorithm: choose j, the smallest integer for which

|ajk^(k)| = max |aik^(k)|, k ≤ i ≤ n

and interchange rows k and j.
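The row-selection rule above can be sketched in a few lines (a minimal Python sketch; the function name is ours, and the example system is hypothetical):

```python
def partial_pivot(A, b, k):
    """Step k of partial pivoting: find the row j >= k whose entry in
    column k is largest in magnitude, and swap it into the pivot row."""
    j = max(range(k, len(A)), key=lambda i: abs(A[i][k]))
    if j != k:
        A[k], A[j] = A[j], A[k]
        b[k], b[j] = b[j], b[k]

A = [[0, 8, -2], [3, -5, 2], [6, -2, 8]]
b = [7, 8, 26]
partial_pivot(A, b, 0)   # the zero pivot in row 1 forces an interchange
print(A[0], b[0])        # [6, -2, 8] 26
```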

6.2. Complete Pivoting:

In complete pivoting we search the matrix A for the element largest in magnitude and bring it into the first pivot position. Complete pivoting requires not only an interchange of equations but also an interchange of the positions of the variables.

In complete pivoting the following algorithm is used to find the pivot: choose l and m as the smallest integers for which

|alm^(k)| = max |aij^(k)|, k ≤ i, j ≤ n

and interchange rows k and l and columns k and m.

Value Addition: Note


If the matrix A is diagonally dominant or real, symmetric and positive definite
then no pivoting is necessary.

7. Gauss Elimination Method:

From the previous methods, we have learnt that any system of linear algebraic equations can be solved by the use of determinants. But solving a system of linear equations by determinants is not very practical, even with efficient methods for evaluating the determinants, because when the order of the determinant is large the evaluation becomes tedious. To avoid these unnecessary computations, mathematicians have developed simpler and less time-consuming procedures, and various methods for solving systems of linear equations have been suggested. The Gauss elimination method is one of the most important methods for solving a system of linear equations.

The Gauss elimination method for solving linear systems is a systematic process of elimination that reduces the system of linear equations to triangular form. In the Gauss elimination method, we proceed with the following steps.


Step 1: Elimination of x1 from the second, third, . . ., nth equations

In the first step of the Gauss elimination method we eliminate x1 from the second, third, . . ., nth equations by subtracting suitable multiples of the first equation from the second, third, . . ., nth equations.

The first equation is called the pivot equation, and the coefficient of x1 in the first equation, i.e., a11 ≠ 0, is called the pivot. Thus the first step gives the new system:

a11 x1 + a12 x2 + . . . + a1n xn = b1
a'22 x2 + . . . + a'2n xn = b'2
. . .
a'n2 x2 + . . . + a'nn xn = b'n

Step 2: Elimination of x2 from the third, . . ., nth equations

In the second step of the Gauss elimination method, we take the new second equation (which no longer contains x1) as the pivot equation and use it to eliminate x2 from the third, fourth, . . ., nth equations.

In the third step we eliminate x3, in the fourth step we eliminate x4, and so on. After (n-1) steps, when the elimination is complete, this process gives an upper triangular system of the form

c11 x1 + c12 x2 + . . . + c1n xn = d1
c22 x2 + . . . + c2n xn = d2
. . .
cnn xn = dn

Thus, the new system of equations is of upper triangular form and can be solved by back substitution.

Value Addition: Note

In the Gauss elimination method the pivot equation remains unchanged; also, we may make the pivot 1 before elimination at each step.
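The two steps above, combined with partial pivoting and back substitution, can be sketched as follows (a minimal Python sketch; the function name is ours):

```python
def gauss_eliminate(A, b):
    """Gauss elimination with partial pivoting, followed by back substitution."""
    A = [list(map(float, row)) for row in A]
    b = list(map(float, b))
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry of column k into the pivot row.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):          # eliminate the k-th unknown below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                          # back substitution on the triangular system
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# 8x2 - 2x3 = 7, 3x1 - 5x2 + 2x3 = 8, 6x1 - 2x2 + 8x3 = 26
print(gauss_eliminate([[0, 8, -2], [3, -5, 2], [6, -2, 8]], [7, 8, 26]))
# -> [4.0, 1.0, 0.5]
```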

Example 7: Solve the system of equations

8 x2 - 2 x3 = 7
3 x1 - 5 x2 + 2 x3 = 8
6 x1 - 2 x2 + 8 x3 = 26

using Gauss elimination method.


Solution: Given system of equations is

8 x2 - 2 x3 = 7 (1)

3 x1 - 5 x2 + 2 x3 = 8 (2)

6 x1 - 2 x2 + 8 x3 = 26 (3)

Since the coefficient of x1 in the first equation is zero, we must rearrange the equations by interchanging the first and third equations, i.e.,

6 x1 - 2 x2 + 8 x3 = 26 (4)

3 x1 - 5 x2 + 2 x3 = 8 (5)

8 x2 - 2 x3 = 7 (6)

Step 1: Elimination of x1:

On subtracting 1/2 times equation (4) from equation (5) we have

6 x1 - 2 x2 + 8 x3 = 26 (7)

-4 x2 - 2 x3 = -5 (8)

8 x2 - 2 x3 = 7 (9)

Step 2: Elimination of x2:

On adding 2 times equation (8) to equation (9) we have

6 x1 - 2 x2 + 8 x3 = 26 (10)

-4 x2 - 2 x3 = -5 (11)

-6 x3 = -3 (12)

On solving equations (10), (11) and (12) by back substitution we have

x3 = 1/2, x2 = 1 and x1 = 4.

Thus, the required solution is

x1 = 4, x2 = 1 and x3 = 1/2.


Example 8: Solve the system of equations

x1 + (1/2) x2 + (1/3) x3 = 1
(1/2) x1 + (1/3) x2 + (1/4) x3 = 0
(1/3) x1 + (1/4) x2 + (1/5) x3 = 0

using Gauss elimination method.

Solution: Given system of equations is

x1 + (1/2) x2 + (1/3) x3 = 1 (1)

(1/2) x1 + (1/3) x2 + (1/4) x3 = 0 (2)

(1/3) x1 + (1/4) x2 + (1/5) x3 = 0 (3)

Step 1: Elimination of x1:

On subtracting 1/2 times equation (1) from equation (2) and 1/3 times equation (1) from equation (3) we have

x1 + (1/2) x2 + (1/3) x3 = 1 (4)

(1/12) x2 + (1/12) x3 = -1/2 (5)

(1/12) x2 + (4/45) x3 = -1/3 (6)

Step 2: Elimination of x2:

On subtracting equation (5) from equation (6) we have

x1 + (1/2) x2 + (1/3) x3 = 1 (7)

(1/12) x2 + (1/12) x3 = -1/2 (8)

(1/180) x3 = 1/6 (9)

On solving equations (7), (8) and (9) by back substitution we have

x3 = 30, x2 = -36 and x1 = 9.

Thus, the required solution is

x1 = 9, x2 = -36 and x3 = 30.

Example 9: Solve the system of equations

10 x1 - 7 x2 + 3 x3 + 5 x4 = 6
-6 x1 + 8 x2 - x3 - 4 x4 = 5
3 x1 + x2 + 4 x3 + 11 x4 = 2
5 x1 - 9 x2 - 2 x3 + 4 x4 = 7

using Gauss elimination method.

Solution: Dividing the first equation by 10, the given system can be written as

x1 - 0.7 x2 + 0.3 x3 + 0.5 x4 = 0.6 (1)

-6 x1 + 8 x2 - x3 - 4 x4 = 5 (2)

3 x1 + x2 + 4 x3 + 11 x4 = 2 (3)

5 x1 - 9 x2 - 2 x3 + 4 x4 = 7 (4)

Step 1: Elimination of x1:

On subtracting (-6) times equation (1) from equation (2), 3 times equation (1) from equation (3) and 5 times equation (1) from equation (4) we have

x1 - 0.7 x2 + 0.3 x3 + 0.5 x4 = 0.6 (5)

3.8 x2 + 0.8 x3 - x4 = 8.6 (6)

3.1 x2 + 3.1 x3 + 9.5 x4 = 0.2 (7)

-5.5 x2 - 3.5 x3 + 1.5 x4 = 4 (8)

Step 2: Elimination of x2:

Among equations (6), (7) and (8), the coefficient of x2 is largest in magnitude in equation (8); therefore we interchange equations (6) and (8), divide the new second equation by -5.5, and eliminate x2 from the remaining two equations:

x1 - 0.7 x2 + 0.3 x3 + 0.5 x4 = 0.6 (9)

x2 + 0.63636 x3 - 0.27273 x4 = -0.72727 (10)

-1.61818 x3 + 0.03636 x4 = 11.36364 (11)

1.12727 x3 + 10.34545 x4 = 2.45455 (12)

Step 3: Elimination of x3:

The coefficient of x3 is largest in magnitude in equation (11); on dividing it by -1.61818 and then eliminating x3 from equation (12) we have

x1 - 0.7 x2 + 0.3 x3 + 0.5 x4 = 0.6 (13)

x2 + 0.63636 x3 - 0.27273 x4 = -0.72727 (14)

x3 - 0.02247 x4 = -7.02247 (15)

10.37079 x4 = 10.37079 (16)

On solving equations (13), (14), (15) and (16) by back substitution we have

x4 = 1, x3 = -7, x2 = 4 and x1 = 5.

Thus, the required solution is

x1 = 5, x2 = 4, x3 = -7 and x4 = 1.

Example 10: Solve the system of equations

10 x1 - x2 + 2 x3 = 4
x1 + 10 x2 - x3 = 3
2 x1 + 3 x2 + 20 x3 = 7

using Gauss elimination method.

Solution: Since the given system is diagonally dominant, no pivoting is necessary. Thus we have

10 x1 - x2 + 2 x3 = 4 (1)

x1 + 10 x2 - x3 = 3 (2)

2 x1 + 3 x2 + 20 x3 = 7 (3)

Step 1: Elimination of x1:

On eliminating x1 from equations (2) and (3) we have

10 x1 - x2 + 2 x3 = 4 (4)

(101/10) x2 - (12/10) x3 = 26/10 (5)

(32/10) x2 + (196/10) x3 = 62/10 (6)

Step 2: Elimination of x2:

On eliminating x2 from equation (6) we have

10 x1 - x2 + 2 x3 = 4 (7)

(101/10) x2 - (12/10) x3 = 26/10 (8)

(20180/1010) x3 = 5430/1010 (9)

On solving equations (7), (8) and (9) by back substitution we have

x3 = 0.269, x2 = 0.289 and x1 = 0.375.

Thus, the required solution is

x1 = 0.375, x2 = 0.289 and x3 = 0.269.

8. Gauss-Jordan Elimination Method:

M. Jordan in 1920 introduced another variant of the Gauss elimination method. In the Gauss-Jordan method the coefficient matrix is reduced to a diagonal form rather than the triangular form of Gauss elimination, and we obtain the solution without further computation. Generally, this method is not used for the solution of a system of equations, because the reduction from the Gauss triangular form to the diagonal form requires more operations than back substitution does; the method is therefore disadvantageous for solving systems of equations. However, it gives a simple method for finding the inverse of a given matrix, by operating on the unit matrix I in the same way as the Gauss-Jordan method reduces A to I.
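Reducing the augmented matrix [A | I] to [I | A^{-1}] as just described can be sketched as follows (a minimal Python sketch; the function name is ours):

```python
def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on [A | I], with partial pivoting."""
    n = len(A)
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]                     # make the pivot 1
        for i in range(n):                                 # clear column k elsewhere
            if i != k and M[i][k] != 0:
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]                          # right half is A^{-1}

inv = gauss_jordan_inverse([[1, 1, 1], [4, 3, -1], [3, 5, 3]])
```

Here `inv` approximates the exact inverse [ 7/5 1/5 -2/5 ; -3/2 0 1/2 ; 11/10 -1/5 -1/10 ] up to floating-point round-off.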

Example 11: Solve the system of equations

x1 + 2 x2 + x3 = 8
2 x1 + 3 x2 + 4 x3 = 20
4 x1 + 3 x2 + 2 x3 = 16

using Gauss-Jordan elimination method.

Solution: Given system of equations is

x1  2 x2  x3  8 (1)

Institute of Lifelong Learning, University of Delhi pg.27


Numerical Solutions of Algebraic Equations: Direct Methods

2 x1  3x2  4 x3  20 (2)

4 x1  3x2  2 x3  16 (3)

Step 1: Elimination of x1:

On eliminating x1 from equations (2) and (3) we have

x1  2 x2  x3  8 (4)

 x2  2 x3  4 (5)

 5x2  2 x3  16 (6)

Step 2: Elimination of x2:

On eliminating x2 from equations (4) and (6) with the help of equation (5), we have

x1 + 0·x2 + 5x3 = 16                         (7)

−x2 + 2x3 = 4                                (8)

−12x3 = −36                                  (9)

Step 3: Elimination of x3:

On eliminating x3 from equations (7) and (8) with the help of equation (9), we have

x1 = 1                                       (10)

−x2 = −2                                     (11)

−12x3 = −36                                  (12)

This gives

x1 = 1, x2 = 2 and x3 = 3.

Example 12: Find the inverse of the coefficient matrix of the given system of
equations

x1 + x2 + x3 = 1
4x1 + 3x2 − x3 = 6
3x1 + 5x2 + 3x3 = 4

using the Gauss-Jordan method with partial pivoting, and hence solve the system
of equations.


Solution: The given system of equations can be written as

AX = b                                       (1)

where

        [ 1  1  1 ]         [ x1 ]             [ 1 ]
    A = [ 4  3 −1 ] ,   X = [ x2 ]   and   b = [ 6 ]
        [ 3  5  3 ]         [ x3 ]             [ 4 ]

Using the augmented matrix [A | I], we have

            [ 1  1  1 | 1  0  0 ]
  [A | I] = [ 4  3 −1 | 0  1  0 ]
            [ 3  5  3 | 0  0  1 ]

            [ 4  3 −1 | 0  1  0 ]
          ~ [ 1  1  1 | 1  0  0 ]                    R1 ↔ R2
            [ 3  5  3 | 0  0  1 ]

            [ 1  3/4  −1/4 | 0  1/4  0 ]
          ~ [ 1   1     1  | 1   0   0 ]             R1 → (1/4)R1
            [ 3   5     3  | 0   0   1 ]

            [ 1   3/4   −1/4 | 0   1/4  0 ]
          ~ [ 0   1/4    5/4 | 1  −1/4  0 ]          R2 → R2 − R1
            [ 0  11/4   15/4 | 0  −3/4  1 ]          R3 → R3 − 3R1

            [ 1   3/4   −1/4 | 0   1/4  0 ]
          ~ [ 0  11/4   15/4 | 0  −3/4  1 ]          R2 ↔ R3
            [ 0   1/4    5/4 | 1  −1/4  0 ]

            [ 1  3/4   −1/4  | 0   1/4    0   ]
          ~ [ 0   1    15/11 | 0  −3/11  4/11 ]      R2 → (4/11)R2
            [ 0  1/4    5/4  | 1  −1/4    0   ]

            [ 1  0  −14/11 | 0   5/11  −3/11 ]       R1 → R1 − (3/4)R2
          ~ [ 0  1   15/11 | 0  −3/11   4/11 ]
            [ 0  0   10/11 | 1  −2/11  −1/11 ]       R3 → R3 − (1/4)R2

            [ 1  0  −14/11 |   0     5/11  −3/11 ]
          ~ [ 0  1   15/11 |   0    −3/11   4/11 ]
            [ 0  0     1   | 11/10  −1/5   −1/10 ]   R3 → (11/10)R3

            [ 1  0  0 |  7/5    1/5   −2/5  ]        R1 → R1 + (14/11)R3
          ~ [ 0  1  0 | −3/2     0     1/2  ]        R2 → R2 − (15/11)R3
            [ 0  0  1 | 11/10  −1/5   −1/10 ]

Thus the inverse of the coefficient matrix A is

          [  7/5    1/5   −2/5  ]
   A⁻¹ =  [ −3/2     0     1/2  ]
          [ 11/10  −1/5   −1/10 ]

Therefore the solution of the system of equations (1) is

   X = A⁻¹ b

       [  7/5    1/5   −2/5  ] [ 1 ]   [  1   ]
     = [ −3/2     0     1/2  ] [ 6 ] = [  1/2 ]
       [ 11/10  −1/5   −1/10 ] [ 4 ]   [ −1/2 ]

Thus,

x1 = 1, x2 = 1/2 and x3 = −1/2.

9. Error Analysis for Direct Methods:

The quality of a numerical method is judged in terms of:

 Amount of storage

 Amount of time (proportional to the number of operations)

 Effect of round-off error.

9.1. Operational Count for Gauss Elimination:

The number of divisions and multiplications involved in solving the system
of equations is usually called the operational count for that method. For Gauss
elimination, the operational count for a system of n equations is as follows:

Elimination of x1: For eliminating x1 from an equation, the corresponding factor
(a21/a11 for the second equation) is computed once, requiring one division. There
are (n − 1) multiplications for the (n − 1) terms on the left side and 1
multiplication on the right side.

Hence the number of multiplications/divisions required for eliminating x1 from
one equation is

1 + (n − 1) + 1 = n + 1.

Since x1 is eliminated from (n − 1) equations, the total number of
multiplications/divisions required to eliminate x1 from (n − 1) equations is

(n − 1)(n + 1) = (n − 1)(n + 2 − 1).

Elimination of x2: For eliminating x2, the total number of
multiplications/divisions required to eliminate x2 from (n − 2) equations is

(n − 2)n = (n − 2)(n + 2 − 2).

Elimination of x3: For eliminating x3, the total number of
multiplications/divisions required to eliminate x3 from (n − 3) equations is

(n − 3)(n − 1) = (n − 3)(n + 2 − 3).

Elimination of xk: For eliminating xk, the total number of
multiplications/divisions required to eliminate xk from (n − k) equations is

(n − k)(n + 2 − k).

Elimination of x(n−1): For eliminating x(n−1), the total number of
multiplications/divisions required to eliminate x(n−1) from the single remaining
equation is

[n − (n − 1)][n + 2 − (n − 1)] = 1 · 3 = 3.

Thus, the total number of operations required to eliminate x1, x2, x3, . . ., x(n−1)
is

Σ_{k=1}^{n−1} (n − k)(n + 2 − k)
   = Σ_{k=1}^{n−1} [(n − k)² + 2(n − k)]
   = Σ_{k=1}^{n−1} [n² + k² − 2nk + 2n − 2k]
   = n²(n − 1) + (n − 1)n(2n − 1)/6 − 2n · n(n − 1)/2 + 2n(n − 1) − 2 · n(n − 1)/2
   = n(n − 1)(2n + 5)/6
   ≈ n³/3.

Thus, the total number of multiplications and divisions required in the Gauss
elimination method is approximately n³/3.
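The count just derived can be checked by instrumenting the elimination loops. A short sketch (not from the lesson) that compares the loop count with the exact closed form n(n − 1)(2n + 5)/6 of the sum above, which grows like n³/3:

```python
def elimination_op_count(n):
    """Multiplications/divisions used by the forward-elimination phase of
    Gauss elimination on an n x n system (back substitution excluded)."""
    count = 0
    for k in range(1, n):            # step k eliminates x_k ...
        for _ in range(n - k):       # ... from each of the n - k rows below
            count += 1               # one division for the factor a_ik/a_kk
            count += (n - k) + 1     # row update plus the right-hand side
    return count

for n in (5, 10, 50):
    exact = n * (n - 1) * (2 * n + 5) // 6   # closed form of the sum
    print(n, elimination_op_count(n), exact, round(n**3 / 3))
# e.g. the n = 10 line prints: 10 375 375 333
```

For n = 3 the count is 11, and the relative gap between the exact count and n³/3 shrinks as n grows.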

Note 1: Similarly, it can be shown that the Gauss-Jordan method requires about
n³/2 arithmetic operations. Hence, the Gauss elimination method is preferred to
the Gauss-Jordan method for solving large systems of equations.

Note 2: In the LU-decomposition method the total operation count is n³/3, the
same as in the Gauss elimination method.

Note 3: In the Cholesky method the total operation count is about n³/6.

Exercises:

1. Solve the following systems of equations using the LU-factorization method:


        4x + 5y = 7
(I)
        12x + 14y = 18

        5x + 9y + 2z = 24
(II)    9x + 4y + z = 25
        2x + y + z = 11

        4x + 6y + 8z = 0
(III)   6x + 34y + 52z = 160
        8x + 52y + 129z = 452

2. Solve the following systems of equations using the Cholesky method:

        4x − y = 1
(I)     −x + 4y − z = 0
        −y + 4z = 0

        4x1 − x2 = 1
        −x1 + 4x2 − x3 = 0
(II)
        −x2 + 4x3 − x4 = 0
        −x3 + 4x4 = 0
3. Solve the following systems of equations using the Gauss elimination method:

        2x + 2y + 3z = 1
(I)     4x + 2y + 3z = 2
        x + y + z = 3

        2x1 + x2 + x3 + 2x4 = 2
        4x1 + 2x3 + x4 = 3
(II)
        3x1 + 2x2 + 2x3 = 1
        x1 + 3x2 + 2x3 = 4

        4x1 + x2 + x3 = 4
(III)   x1 + 4x2 − 2x3 = 4
        3x1 + 2x2 − 4x3 = 6

        x1 + x2 + x3 = 2
(IV)    2x1 + 3x2 + 5x3 = 3
        3x1 + 2x2 + 3x3 = 6


4. Solve the following systems of equations using the Gauss-Jordan method:

        2x + y + 3z = 1
(I)     4x + y + 5z = 7
        3x + 2y + 4z = 3

        2x1 + x2 + 4x3 + x4 = 4
        4x1 + 3x2 + 5x3 + 2x4 = 1
(II)
        x1 + x2 + x3 + x4 = 1
        x1 + 3x2 + 3x3 + 2x4 = 1

        4x1 + 2x2 + 4x3 = 10
        2x1 + 2x2 + 3x3 + 2x4 = 18
(III)
        4x1 + 2x2 + 6x3 + 3x4 = 30
        2x2 + 3x3 + 9x4 = 61

        10x1 + 2x2 + x3 = 59
(IV)    x1 + 8x2 + 2x3 = 4
        7x1 + x2 + 20x3 = 5

Summary:

In this lesson we have emphasized the following:

 Direct methods to solve the system of linear equations


 Inverse of the matrix method
 Cramer's Rule
 Method of Factorization (Triangularization Method)
 Doolittle's method
 Crout's method
 Cholesky Method
 Positive Definite Matrix
 Gauss Elimination Method
 Pivoting
 Gauss-Jordan Elimination Method
 Error Analysis for Direct Methods
 Operational Count for Gauss Elimination


References:

1. Brian Bradie, A Friendly Introduction to Numerical Analysis, Pearson
   Education, India, 2007.

2. M. K. Jain, S. R. K. Iyengar and R. K. Jain, Numerical Methods for Scientific
   and Engineering Computation, New Age International Publishers, India, 6th
   edition, 2007.
