
NA Lecture 14

The document discusses methods for solving linear systems of equations and matrix inversion, focusing on techniques such as Gaussian elimination, Gauss-Jordan elimination, and the relaxation method. It explains the iterative relaxation method for improving solutions by reducing residuals and provides examples for clarity. Additionally, it outlines the process for finding the inverse of a matrix using Gaussian elimination and Gauss-Jordan methods.


Numerical Analysis
Lecture 14

Chapter 3
Solution of Linear System of Equations and Matrix Inversion

Introduction
Gaussian Elimination
Gauss-Jordan Elimination
Crout's Reduction
Jacobi's Iteration
Gauss-Seidel Iteration
Relaxation
Matrix Inversion
Relaxation Method

This is also an iterative method and is due to Southwell. To explain the details, consider again the system of equations
    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...................................
    an1 x1 + an2 x2 + ... + ann xn = bn
Let

    X^(p) = ( x1^(p), x2^(p), ..., xn^(p) )^T

be the solution vector obtained iteratively after the p-th iteration. If Ri^(p) denotes the residual of the i-th equation of the system given above, that is, of

    ai1 x1 + ai2 x2 + ... + ain xn = bi,

defined by

    Ri^(p) = bi - ai1 x1^(p) - ai2 x2^(p) - ... - ain xn^(p),

then we can improve the solution vector successively by reducing the largest residual to zero at that iteration. This is the basic idea of the relaxation method.
To achieve fast convergence of the procedure, we take all terms to one side and then reorder the equations so that the largest negative coefficients in the equations appear on the diagonal.
Now, if at any iteration Ri is the largest residual in magnitude, then we give an increment to xi, aii being the coefficient of xi:

    dxi = Ri / aii

In other words, we change xi to (xi + dxi) to relax Ri, that is, to reduce Ri to zero.
Example
Solve the system of equations

    6x1 - 3x2 + x3 = 11
    2x1 + x2 - 8x3 = -15
    x1 - 7x2 + x3  = 10

by the relaxation method, starting with the vector (0, 0, 0).
Solution
At first, we transfer all the terms to the right-hand side and reorder the equations so that the largest coefficients in the equations appear on the diagonal. Thus, we get

    0 = 11 - 6x1 + 3x2 - x3
    0 = 10 - x1 + 7x2 - x3
    0 = -15 - 2x1 - x2 + 8x3

after interchanging the 2nd and 3rd equations.
Starting with the initial solution vector (0, 0, 0), that is, taking x1 = x2 = x3 = 0, we find the residuals

    R1 = 11,  R2 = 10,  R3 = -15,

of which the largest in magnitude is R3; i.e. the 3rd equation has the largest error and needs immediate attention for improvement.
Thus, we introduce a change dx3 in x3, which is obtained from the formula

    dx3 = R3 / a33 = (-15) / (-8) = 1.875
Similarly, we find the new residual of largest magnitude and relax it to zero, and so on. We continue this process until all the residuals are zero or very small.
Iteration   Residuals                      Maximum   Difference   Variables
number      R1       R2       R3           Ri        dxi          x1       x2   x3

0           11       10       -15          -15       1.875        0        0    0
1           9.125    8.125    0            9.125     1.5288       0        0    1.875
2           0.0478   6.5962   -3.0576      6.5962    -0.9423      1.5288   0    1.875
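The procedure traced in the table above can be sketched in code. This is a minimal illustration, not the lecture's own program: the function name `relax`, the stopping tolerance `tol`, and the iteration cap are all assumptions, since the slides only say to continue "until all the residuals are zero or very small".

```python
# A minimal sketch of the relaxation (Southwell) method described above.
# The tolerance and iteration cap are assumptions, not from the slides.

def relax(A, b, tol=1e-4, max_steps=200):
    n = len(b)
    x = [0.0] * n                        # start from the zero vector
    for _ in range(max_steps):
        # residual of the i-th equation: R_i = b_i - sum_j a_ij x_j
        R = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        i = max(range(n), key=lambda k: abs(R[k]))   # largest residual in magnitude
        if abs(R[i]) < tol:
            break
        x[i] += R[i] / A[i][i]           # dx_i = R_i / a_ii relaxes R_i to zero
    return x

# The example system of the slides (2nd and 3rd equations interchanged so that
# the numerically largest coefficients sit on the diagonal):
A = [[6.0, -3.0, 1.0],
     [1.0, -7.0, 1.0],
     [2.0, 1.0, -8.0]]
b = [11.0, 10.0, -15.0]
x = relax(A, b)   # first step relaxes R3 = -15 with dx3 = (-15)/(-8) = 1.875
```

For this diagonally dominant system the iterates settle near (1, -1, 2), which satisfies all three equations exactly.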


Matrix Inversion

Consider a system of equations in the form

    [A](X) = (B)

One way of writing its solution is in the form

    (X) = [A]^(-1) (B)

Thus, the solution to the system can also be obtained if the inverse of the coefficient matrix [A] is known. That is, if the product of two square matrices is an identity matrix,

    [A][B] = [I],

then [B] = [A]^(-1) and [A] = [B]^(-1). Every square non-singular matrix will have an inverse.
1
then, [ B ] [ A]
and [ A] [ B ]1

Every square non-singular


matrix will have an inverse.
Gauss elimination and
Gauss-Jordan methods are
popular among many
methods available for finding
the inverse of a matrix.
Gaussian Elimination Method

In this method, if A is the given matrix whose inverse we have to find, we first place an identity matrix, whose order is the same as that of A, adjacent to A; the result is called the augmented matrix. Then the inverse of A is computed in two stages. In the first stage, A is converted into upper triangular form using the Gaussian elimination method.
In the second stage, the
above upper triangular
matrix is reduced to an
identity matrix by row
transformations. All these
operations are also
performed on the adjacently
placed identity matrix.
Finally, when A is transformed
into an identity matrix, the
adjacent matrix gives the
inverse of A.
In order to increase the
accuracy of the result, it is
essential to employ partial
pivoting.
Example
Use the Gaussian elimination method to find the inverse of the matrix

        [ 1   1   1 ]
    A = [ 4   3  -1 ]
        [ 3   5   3 ]
Solution
At first, we place an identity matrix of the same order adjacent to the given matrix. Thus, the augmented matrix can be written as

    [ 1   1   1 |  1   0   0 ]
    [ 4   3  -1 |  0   1   0 ]
    [ 3   5   3 |  0   0   1 ]
Stage I (Reduction to upper triangular form): Let R1, R2 and R3 denote the 1st, 2nd and 3rd rows of a matrix. In the 1st column, 4 is the largest element; thus, interchanging R1 and R2 to bring the pivot element 4 to the place of a11, we have the augmented matrix in the form

    [ 4   3  -1 |  0   1   0 ]
    [ 1   1   1 |  1   0   0 ]
    [ 3   5   3 |  0   0   1 ]
Divide R1 by 4 to get

    [ 1   3/4  -1/4 |  0   1/4   0 ]
    [ 1    1     1  |  1    0    0 ]
    [ 3    5     3  |  0    0    1 ]
Perform R2 - R1 → R2, which gives

    [ 1   3/4  -1/4 |  0    1/4   0 ]
    [ 0   1/4   5/4 |  1   -1/4   0 ]
    [ 3    5     3  |  0     0    1 ]
Perform R3 - 3R1 → R3, which yields

    [ 1    3/4   -1/4 |  0    1/4   0 ]
    [ 0    1/4    5/4 |  1   -1/4   0 ]
    [ 0   11/4   15/4 |  0   -3/4   1 ]
Now, looking at the second column for the pivot, max(1/4, 11/4) = 11/4. Therefore, we interchange R2 and R3 in the last equation and get

    [ 1    3/4   -1/4 |  0    1/4   0 ]
    [ 0   11/4   15/4 |  0   -3/4   1 ]
    [ 0    1/4    5/4 |  1   -1/4   0 ]
Now, divide R2 by the pivot a22 = 11/4 to obtain

    [ 1   3/4   -1/4  |  0    1/4     0   ]
    [ 0    1    15/11 |  0   -3/11   4/11 ]
    [ 0   1/4    5/4  |  1   -1/4     0   ]
Performing R3 - (1/4)R2 → R3 yields

    [ 1   3/4   -1/4  |  0    1/4     0   ]
    [ 0    1    15/11 |  0   -3/11   4/11 ]
    [ 0    0    10/11 |  1   -2/11  -1/11 ]
Finally, we divide R3 by 10/11, thus getting the upper triangular form

    [ 1   3/4   -1/4  |    0      1/4     0   ]
    [ 0    1    15/11 |    0     -3/11   4/11 ]
    [ 0    0      1   |  11/10   -1/5   -1/10 ]
Stage II (Reduction to an identity matrix): Performing (1/4)R3 + R1 → R1 and (-15/11)R3 + R2 → R2 gives

    [ 1   3/4   0 |  11/40   1/5   -1/40 ]
    [ 0    1    0 |  -3/2     0     1/2  ]
    [ 0    0    1 |  11/10  -1/5   -1/10 ]
Finally, performing R1 - (3/4)R2 → R1, we obtain

    [ 1   0   0 |   7/5    1/5   -2/5  ]
    [ 0   1   0 |  -3/2     0     1/2  ]
    [ 0   0   1 |  11/10  -1/5   -1/10 ]
Thus, we have

             [   7/5    1/5   -2/5  ]
    A^(-1) = [  -3/2     0     1/2  ]
             [ 11/10   -1/5   -1/10 ]
Gauss-Jordan Method

This method is similar to the Gaussian elimination method, with the essential difference that stage I of reducing the given matrix to an upper triangular form is not needed. Instead, the given matrix can be directly reduced to an identity matrix using elementary row operations.
Example
Find the inverse of the given matrix by the Gauss-Jordan method:

        [ 1   1   1 ]
    A = [ 4   3  -1 ]
        [ 3   5   3 ]
Solution
Let R1, R2 and R3 denote the 1st, 2nd and 3rd rows of a matrix. We place the identity matrix adjacent to the given matrix, so the augmented matrix is given by

    [ 1   1   1 |  1   0   0 ]
    [ 4   3  -1 |  0   1   0 ]
    [ 3   5   3 |  0   0   1 ]
Performing R2 - 4R1 → R2, we get

    [ 1    1    1 |  1   0   0 ]
    [ 0   -1   -5 | -4   1   0 ]
    [ 3    5    3 |  0   0   1 ]
Now, performing R3 - 3R1 → R3, we obtain

    [ 1    1    1 |  1   0   0 ]
    [ 0   -1   -5 | -4   1   0 ]
    [ 0    2    0 | -3   0   1 ]
Carrying out the further operations R1 + R2 → R1 and R3 + 2R2 → R3, we arrive at

    [ 1    0    -4 |  -3   1   0 ]
    [ 0   -1    -5 |  -4   1   0 ]
    [ 0    0   -10 | -11   2   1 ]
Now, dividing the third row by -10, we get

    [ 1    0   -4 |    -3      1      0   ]
    [ 0   -1   -5 |    -4      1      0   ]
    [ 0    0    1 |  11/10   -1/5   -1/10 ]
Further, we perform R1 + 4R3 → R1 and R2 + 5R3 → R2 to get

    [ 1    0   0 |   7/5    1/5   -2/5  ]
    [ 0   -1   0 |   3/2     0    -1/2  ]
    [ 0    0   1 |  11/10  -1/5   -1/10 ]
Finally, multiplying R2 by -1, we obtain

    [ 1   0   0 |   7/5    1/5   -2/5  ]
    [ 0   1   0 |  -3/2     0     1/2  ]
    [ 0   0   1 |  11/10  -1/5   -1/10 ]
Hence, we have

             [   7/5    1/5   -2/5  ]
    A^(-1) = [  -3/2     0     1/2  ]
             [ 11/10   -1/5   -1/10 ]