
Chapter III

Solving Systems of Linear Equations


Consider the system of n equations in n unknowns $x_1, x_2, \ldots, x_n$:

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \qquad (1)$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n$$

where $a_{11}, \ldots, a_{nn}$ are given numbers called the coefficients of the system, and $b_1, b_2, \ldots, b_n$ are also given numbers. (1) is a system of linear equations because the variables $x_j$ ($j = 1, \ldots, n$) appear in the first power only. A solution of (1) is a set of numbers $x_j$ ($j = 1, \ldots, n$) that satisfies all n equations.

In matrix form the system is

$$Ax = b \qquad (2)$$

where

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \qquad (3)$$

(2) is called the matrix form of the linear system (1), and (2) has a unique solution if $\det(A) \neq 0$.

Direct (Exact) Methods of Solving Systems of Linear Equations

These methods give the solution after a fixed, finite amount of computation.

Cramer's rule
Gauss elimination
Gauss-Jordan elimination
Triangular factorization (LU decomposition)
Matrix inversion

The above are direct methods for solving systems of linear equations; their details are known from your linear algebra course.

Indirect (Iterative) Methods of Solving Systems of Linear Equations

An iterative technique to solve the $n \times n$ linear system $Ax = b$ starts with an initial approximation

$$x^{(0)} = \begin{pmatrix} x_1^{(0)} \\ x_2^{(0)} \\ \vdots \\ x_n^{(0)} \end{pmatrix}$$

to the solution $x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$ and generates a sequence of vectors $x^{(1)}, x^{(2)}, \ldots$ that converges to $x$.

In this section we discuss the Jacobi and Gauss-Seidel iterative methods.

The iterative technique here involves a process that converts the system $Ax = b$ into an equivalent system of the form

$$x = Tx + c \qquad (4)$$

for some fixed matrix $T$ and vector $c$, as follows. Solving the $i$th equation in $Ax = b$ for $x_i$ (provided $a_{ii} \neq 0$) gives

$$x_1 = \frac{1}{a_{11}}(-a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n) + \frac{b_1}{a_{11}}$$
$$x_2 = \frac{1}{a_{22}}(-a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n) + \frac{b_2}{a_{22}} \qquad (5)$$
$$\vdots$$
$$x_n = \frac{1}{a_{nn}}(-a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n(n-1)}x_{n-1}) + \frac{b_n}{a_{nn}}$$

The matrix representation of (5) is

$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} =
\begin{pmatrix}
0 & -\frac{a_{12}}{a_{11}} & -\frac{a_{13}}{a_{11}} & \cdots & -\frac{a_{1n}}{a_{11}} \\
-\frac{a_{21}}{a_{22}} & 0 & -\frac{a_{23}}{a_{22}} & \cdots & -\frac{a_{2n}}{a_{22}} \\
\vdots & & & & \vdots \\
-\frac{a_{n1}}{a_{nn}} & -\frac{a_{n2}}{a_{nn}} & \cdots & -\frac{a_{n(n-1)}}{a_{nn}} & 0
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} +
\begin{pmatrix} \frac{b_1}{a_{11}} \\ \frac{b_2}{a_{22}} \\ \vdots \\ \frac{b_n}{a_{nn}} \end{pmatrix}$$

and this is the same as $x = Tx + c$, where

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad
T = \begin{pmatrix}
0 & -\frac{a_{12}}{a_{11}} & -\frac{a_{13}}{a_{11}} & \cdots & -\frac{a_{1n}}{a_{11}} \\
-\frac{a_{21}}{a_{22}} & 0 & -\frac{a_{23}}{a_{22}} & \cdots & -\frac{a_{2n}}{a_{22}} \\
\vdots & & & & \vdots \\
-\frac{a_{n1}}{a_{nn}} & -\frac{a_{n2}}{a_{nn}} & \cdots & -\frac{a_{n(n-1)}}{a_{nn}} & 0
\end{pmatrix} \quad \text{and} \quad
c = \begin{pmatrix} \frac{b_1}{a_{11}} \\ \frac{b_2}{a_{22}} \\ \vdots \\ \frac{b_n}{a_{nn}} \end{pmatrix}$$
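In code, the matrix $T$ and vector $c$ can be built directly from $A$ and $b$. The following Python sketch is illustrative only (the helper name jacobi_T_c and the use of NumPy are assumptions, not part of the notes):

```python
import numpy as np

def jacobi_T_c(A, b):
    # Split A into its diagonal D and off-diagonal part R,
    # then form T = -D^{-1} R and c = D^{-1} b as in (5).
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)               # the diagonal entries a_ii
    R = A - np.diagflat(D)       # A with its diagonal zeroed
    T = -R / D[:, None]          # row i of R divided by a_ii, negated
    c = b / D
    return T, c
```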

After the initial vector $x^{(0)}$ is selected, the sequence of approximate solution vectors is generated by computing

$$x^{(k)} = Tx^{(k-1)} + c \quad \text{for each } k = 1, 2, 3, \ldots \qquad (6)$$

(6) is called the Jacobi iterative method. It consists of solving the $i$th equation in $Ax = b$ for $x_i$ to obtain (provided $a_{ii} \neq 0$)

$$x_i = \sum_{\substack{j=1 \\ j \neq i}}^{n} \left( -\frac{a_{ij}x_j}{a_{ii}} \right) + \frac{b_i}{a_{ii}} \quad \text{for } i = 1, 2, \ldots, n \qquad (7)$$

and generating each $x_i^{(k)}$ from the components of $x^{(k-1)}$, for $k \geq 1$, by

$$x_i^{(k)} = \frac{\displaystyle\sum_{\substack{j=1 \\ j \neq i}}^{n} \left( -a_{ij}x_j^{(k-1)} \right) + b_i}{a_{ii}} \quad \text{for } i = 1, 2, \ldots, n \qquad (8)$$
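As a computational illustration, here is a minimal Python sketch of the Jacobi update (8). The function name, the stopping rule based on the $l_2$ norm of successive differences, and the iteration cap are illustrative choices, not part of the notes:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-2, max_iter=100):
    # Jacobi iteration for Ax = b (equation (8)), assuming all a_ii != 0.
    # Stops when the l2 norm of the change between successive iterates
    # falls below tol (an illustrative stopping rule).
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    D = np.diag(A)                 # diagonal entries a_ii
    R = A - np.diagflat(D)         # off-diagonal part of A
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D    # componentwise form of (8)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```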

Note: - The absolute or relative error of the approximate solution can be measured using either the $l_2$ norm or the $l_\infty$ norm.

Definition: - The $l_2$ and $l_\infty$ norms of the vector $x = (x_1, x_2, \ldots, x_n)^t$ are defined by

$$\|x\|_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2} \quad \text{and} \quad \|x\|_\infty = \max_{1 \leq i \leq n} |x_i|$$
Example: - Solve the following linear system using the Jacobi method, accurate to within $10^{-2}$ in the $l_2$ norm.

$$20x_1 + x_2 - 2x_3 = 17$$
$$3x_1 + 20x_2 - x_3 = -18$$
$$2x_1 - 3x_2 + 20x_3 = 25$$

Solution: - Solving the $i$th equation for $x_i$ ($i = 1, 2, 3$) we get

$$x_1 = \frac{1}{20}(17 - x_2 + 2x_3)$$
$$x_2 = \frac{1}{20}(-18 - 3x_1 + x_3)$$
$$x_3 = \frac{1}{20}(25 - 2x_1 + 3x_2)$$

and from the Jacobi method we have

$$x_1^{(k)} = \frac{1}{20}\left(17 - x_2^{(k-1)} + 2x_3^{(k-1)}\right)$$
$$x_2^{(k)} = \frac{1}{20}\left(-18 - 3x_1^{(k-1)} + x_3^{(k-1)}\right)$$
$$x_3^{(k)} = \frac{1}{20}\left(25 - 2x_1^{(k-1)} + 3x_2^{(k-1)}\right)$$

Taking $x^{(0)} = (x_1^{(0)}, x_2^{(0)}, x_3^{(0)})^t = (0, 0, 0)^t$,

$$x_1^{(1)} = \frac{1}{20}\left(17 - x_2^{(0)} + 2x_3^{(0)}\right) = 0.85$$
$$x_2^{(1)} = \frac{1}{20}\left(-18 - 3x_1^{(0)} + x_3^{(0)}\right) = -0.9$$
$$x_3^{(1)} = \frac{1}{20}\left(25 - 2x_1^{(0)} + 3x_2^{(0)}\right) = 1.25$$

Additional iterates $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, x_3^{(k)})^t$ are generated in a similar manner and are presented in the table below.

-------------------------------------------------------------
Iteration (k)    x1^(k)       x2^(k)       x3^(k)
-------------------------------------------------------------
     1           0.85000     -0.90000      1.25000
     2           1.02000     -0.96500      1.03000
     3           1.00125     -1.00150      1.00325
     4           1.00040     -1.00003      0.99965
     5           0.99997     -1.00008      0.99996
     6           1.00000     -1.00000      0.99999
     7           1.00000     -1.00000      1.00000
-------------------------------------------------------------

Since $\|x^{(4)} - x^{(3)}\|_2 \approx 0.0040 < 10^{-2}$, the approximation accurate to within $10^{-2}$ is $x^{(3)} = (1.00125, -1.00150, 1.00325)^t$.
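Assuming the `jacobi` sketch shown earlier, this example could be reproduced as follows; the iterates approach the exact solution $(1, -1, 1)^t$, in agreement with the table.

```python
A = [[20, 1, -2],
     [3, 20, -1],
     [2, -3, 20]]
b = [17, -18, 25]

x, k = jacobi(A, b, x0=[0, 0, 0], tol=1e-2)
print(k, x)   # x is close to (1, -1, 1) after a few iterations
```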

A possible improvement over Jacobi's method can be seen by reconsidering equation (8). There, the components of $x^{(k-1)}$ are used to compute $x_i^{(k)}$. But for $i > 1$, the components $x_1^{(k)}, \ldots, x_{i-1}^{(k)}$ have already been computed, and they are probably better approximations to the actual solution values $x_1, x_2, \ldots, x_{i-1}$ than $x_1^{(k-1)}, x_2^{(k-1)}, \ldots, x_{i-1}^{(k-1)}$, so it seems more reasonable to compute $x_i^{(k)}$ using these most recently calculated values. That is, as soon as a new approximation for an unknown is found, it is immediately used in the remaining computations of the same step. This modification is called the Gauss-Seidel iterative method and is illustrated by the following example.

Example: - Solve the following linear system using the Gauss-Seidel method, accurate to within $10^{-2}$ in the $l_2$ norm.

$$20x_1 + x_2 - 2x_3 = 17$$
$$3x_1 + 20x_2 - x_3 = -18$$
$$2x_1 - 3x_2 + 20x_3 = 25$$

Solution: - Solving the $i$th equation for $x_i$ ($i = 1, 2, 3$) we get

$$x_1 = \frac{1}{20}(17 - x_2 + 2x_3)$$
$$x_2 = \frac{1}{20}(-18 - 3x_1 + x_3)$$
$$x_3 = \frac{1}{20}(25 - 2x_1 + 3x_2)$$

Letting $x^{(0)} = (x_1^{(0)}, x_2^{(0)}, x_3^{(0)})^t = (0, 0, 0)^t$,

$$x_1^{(1)} = \frac{1}{20}\left(17 - x_2^{(0)} + 2x_3^{(0)}\right) = 0.85$$
$$x_2^{(1)} = \frac{1}{20}\left(-18 - 3x_1^{(1)} + x_3^{(0)}\right) = -1.0275$$
$$x_3^{(1)} = \frac{1}{20}\left(25 - 2x_1^{(1)} + 3x_2^{(1)}\right) = 1.01087$$

Additional iterates $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, x_3^{(k)})^t$ are generated in a similar manner and are presented in the table below.

-------------------------------------------------------------
Iteration (k)    x1^(k)       x2^(k)       x3^(k)
-------------------------------------------------------------
     1           0.85000     -1.02750      1.01087
     2           1.00246     -0.99983      0.99978
     3           0.99997     -1.00001      1.00000
     4           1.00000     -1.00000      1.00000
-------------------------------------------------------------

Since $\|x^{(3)} - x^{(2)}\|_2 \approx 0.0025 < 10^{-2}$, the approximation accurate to within $10^{-2}$ is $x^{(2)} = (1.00246, -0.99983, 0.99978)^t$.
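For comparison with the Jacobi sketch above, here is a minimal Python sketch of the Gauss-Seidel iteration; again the function name and stopping rule are illustrative assumptions rather than part of the notes. The only change from Jacobi is that each component is overwritten as soon as it is computed, so the newest values enter the remaining updates of the same sweep.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-2, max_iter=100):
    # Gauss-Seidel iteration for Ax = b, assuming all a_ii != 0.
    # x_i is updated in place, so the most recent values are used
    # immediately in the remaining updates of the current sweep.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, k
    return x, max_iter
```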

Remarks: -
1. Since the most recent approximations of the unknowns are used while proceeding to the next step, convergence of the Gauss-Seidel method is usually faster than that of Jacobi's method.
2. If A is strictly diagonally dominant (i.e. $|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|$ for every $i$), then for any choice of initial approximation, both the Jacobi and Gauss-Seidel methods give sequences of approximations $\{x^{(k)}\}_{k \geq 1}$ that converge to the unique solution of $Ax = b$. (A quick computational check of this condition is sketched below.)

Example: - Solve the following linear system accurate to within $10^{-2}$ in the $l_2$ norm.

$$3x_1 - x_2 + x_3 = 1$$
$$3x_1 + 3x_2 + 7x_3 = 4$$
$$3x_1 + 6x_2 + 2x_3 = 0$$

a. Using the Jacobi method
b. Using the Gauss-Seidel method

Solution: -
First we rearrange the given system so that it is diagonally dominant:

$$3x_1 - x_2 + x_3 = 1$$
$$3x_1 + 6x_2 + 2x_3 = 0$$
$$3x_1 + 3x_2 + 7x_3 = 4$$

Solving the $i$th equation for $x_i$ ($i = 1, 2, 3$) we get

$$x_1 = \frac{1}{3}(1 + x_2 - x_3)$$
$$x_2 = \frac{1}{6}(-3x_1 - 2x_3)$$
$$x_3 = \frac{1}{7}(4 - 3x_1 - 3x_2)$$

Letting $x^{(0)} = (x_1^{(0)}, x_2^{(0)}, x_3^{(0)})^t = (0, 0, 0)^t$:

a. Using the Jacobi method

$$x_1^{(1)} = \frac{1}{3}\left(1 + x_2^{(0)} - x_3^{(0)}\right) = 0.33333$$
$$x_2^{(1)} = \frac{1}{6}\left(-3x_1^{(0)} - 2x_3^{(0)}\right) = 0$$
$$x_3^{(1)} = \frac{1}{7}\left(4 - 3x_1^{(0)} - 3x_2^{(0)}\right) = 0.57143$$

Additional iterates $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, x_3^{(k)})^t$ are generated in a similar manner and are presented in the table below.

-------------------------------------------------------------
Iteration (k)    x1^(k)       x2^(k)       x3^(k)
-------------------------------------------------------------
     1           0.33333      0.00000      0.57143
     2           0.14286     -0.35714      0.42857
     3           0.07143     -0.21429      0.66327
     4           0.04082     -0.25680      0.63265
     5           0.03685     -0.23129      0.66399
     6           0.03490     -0.23976      0.65476
     7           0.03516     -0.23571      0.65922
     8           0.03502     -0.23732      0.65738
-------------------------------------------------------------

Since $\|x^{(7)} - x^{(6)}\|_2 \approx 0.0060 < 10^{-2}$, the approximation accurate to within $10^{-2}$ is $x^{(6)} = (0.03490, -0.23976, 0.65476)^t$.

b. Using the Gauss-Seidel method

$$x_1^{(1)} = \frac{1}{3}\left(1 + x_2^{(0)} - x_3^{(0)}\right) = 0.33333$$
$$x_2^{(1)} = \frac{1}{6}\left(-3x_1^{(1)} - 2x_3^{(0)}\right) = -0.16667$$
$$x_3^{(1)} = \frac{1}{7}\left(4 - 3x_1^{(1)} - 3x_2^{(1)}\right) = 0.50000$$

Additional iterates $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, x_3^{(k)})^t$ are generated in a similar manner and are presented in the table below.

-------------------------------------------------------------
Iteration (k)    x1^(k)       x2^(k)       x3^(k)
-------------------------------------------------------------
     1           0.33333     -0.16667      0.50000
     2           0.11111     -0.22222      0.61905
     3           0.05291     -0.23280      0.64853
     4           0.03956     -0.23595      0.65560
     5           0.03615     -0.23661      0.65734
     6           0.03535     -0.23679      0.65776
-------------------------------------------------------------

Since $\|x^{(5)} - x^{(4)}\|_2 \approx 0.0039 < 10^{-2}$, the approximation accurate to within $10^{-2}$ is $x^{(4)} = (0.03956, -0.23595, 0.65560)^t$.
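Assuming the `jacobi` and `gauss_seidel` sketches given earlier, the two parts of this example can be run side by side; consistent with the tables above, Gauss-Seidel reaches the tolerance in fewer sweeps on the rearranged (diagonally dominant) system.

```python
A = [[3, -1, 1],
     [3, 6, 2],
     [3, 3, 7]]
b = [1, 0, 4]

xj, kj = jacobi(A, b, x0=[0, 0, 0], tol=1e-2)
xg, kg = gauss_seidel(A, b, x0=[0, 0, 0], tol=1e-2)
print(kj, xj)   # Jacobi: more sweeps, x near (0.035, -0.237, 0.657)
print(kg, xg)   # Gauss-Seidel: fewer sweeps, same limit
```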

Exercise: -

1. Solve the following linear systems using the Jacobi method with tolerance $\epsilon = 0.01$.
2 x1  2 x3  5 x3  1
a) 4 x1  x2  x3  5
 x1  3 x2  x3  4

x2  2 x3  0
1
b) x1  2 x2  x3  4
2
1
 2 x1  x2  x3  4
2

 4 x2  8 x3  x4  11
10x1  5 x2  6
c)
 x3  5 x4  11
5 x1  10x2  4 x3  25

2. Repeat exercise 1 using the Gauss-Seidel method.

Solving Systems of Nonlinear Equations Using Newton's Method

Theorem: - Taylor's series for a function of two variables.

$$f(x + \Delta x, y + \Delta y) = f(x, y) + \frac{\partial f}{\partial x}\Delta x + \frac{\partial f}{\partial y}\Delta y + \frac{1}{2!}\left( \frac{\partial^2 f}{\partial x^2}(\Delta x)^2 + 2\frac{\partial^2 f}{\partial x \partial y}\Delta x\,\Delta y + \frac{\partial^2 f}{\partial y^2}(\Delta y)^2 \right) + \cdots$$
Consider the equations

$$f(x, y) = 0$$
$$g(x, y) = 0 \qquad (1)$$

If an initial approximation $(x_0, y_0)$ to the solution has been found by a graphical method or otherwise, then a better approximation $(x_1, y_1)$ can be obtained as follows.

Let, for small $h$ and $k$, $(x_1, y_1) = (x_0 + h, y_0 + k)$ be the correct root, so that

$$f(x_0 + h, y_0 + k) = 0$$
$$g(x_0 + h, y_0 + k) = 0 \qquad (2)$$

Expanding each of the functions in (2) by Taylor's series for a function of two variables up to the first-degree terms, we get, approximately,

$$f(x_0, y_0) + h f_x(x_0, y_0) + k f_y(x_0, y_0) = 0$$
$$g(x_0, y_0) + h g_x(x_0, y_0) + k g_y(x_0, y_0) = 0 \qquad (3)$$

Solving equations (3) for $h$ and $k$, we get a new approximation to the root,

$$(x_1, y_1) = (x_0 + h, y_0 + k)$$

In the next step (i.e. to find $(x_2, y_2)$) we substitute $(x_1, y_1)$ in place of $(x_0, y_0)$ in (3), solve again for $h$ and $k$, and the second, better approximation to the solution becomes

$$(x_2, y_2) = (x_1 + h, y_1 + k)$$

This process is repeated until we obtain the values to the desired accuracy.

Note: -
1. This method will not converge unless the starting values of the root are chosen close to the actual root.
2. The method can be extended to 3 equations in 3 unknowns, but it is very cumbersome to obtain a meaningful solution unless the entire information about the equations and their physical context is available.
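A minimal Python sketch of this procedure for two equations in two unknowns is given below. The function name, the convergence test, and the requirement that the partial derivatives be supplied explicitly are illustrative assumptions; at each step the linearized equations (3) are solved for the corrections $h$ and $k$.

```python
import numpy as np

def newton_2x2(f, g, fx, fy, gx, gy, x0, y0, n_iter=10, tol=1e-6):
    # Newton's method for f(x, y) = 0, g(x, y) = 0.
    # At each step, solve the linearized equations (3) for the
    # corrections h and k, then update (x, y) <- (x + h, y + k).
    x, y = float(x0), float(y0)
    for _ in range(n_iter):
        J = np.array([[fx(x, y), fy(x, y)],
                      [gx(x, y), gy(x, y)]])
        rhs = -np.array([f(x, y), g(x, y)])
        h, k = np.linalg.solve(J, rhs)
        x, y = x + h, y + k
        if max(abs(h), abs(k)) < tol:
            break
    return x, y
```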

Example: - Use Newton's method to compute $(x_2, y_2)$ for the following nonlinear system.

$$x^2 + y = 11$$
$$x + y^2 = 7$$
Solution: -

An initial approximation to the solution is obtained using the graphing facility of MATLAB, as shown below.

[Figure: MATLAB plot of the curves $x^2 + y = 11$ and $x + y^2 = 7$; their intersection points are labelled A, B, C and D.]
To approximate the solution at A, choose a point near A. Let $(x_0, y_0) = (3.5, -1.8)$.

$$f(x, y) = x^2 + y - 11$$
$$g(x, y) = y^2 + x - 7$$

$$f_x(x, y) = 2x, \qquad f_y(x, y) = 1$$
$$g_x(x, y) = 1, \qquad g_y(x, y) = 2y$$

Newton's equations (3) become

$$7h + k = 0.55$$
$$h - 3.6k = 0.26$$

Solving these we get $h = 0.0855$, $k = -0.0485$. Therefore, the better approximation to the root is

$$(x_1, y_1) = (x_0 + h, y_0 + k) = (3.5855, -1.8485)$$

Repeating the above process, that is, replacing $(x_0, y_0)$ by $(x_1, y_1)$ in Newton's equations (3), we obtain $(x_2, y_2) = (3.5844, -1.8482)$.

To approximate the solution at B, choose a point near B. Let $(x_0, y_0) = (2.5, 3)$, and check that the first three approximations to the solution are

$$(x_1, y_1) = (3.017241, 2.163793)$$
$$(x_2, y_2) = (2.998983, 2.006434)$$
$$(x_3, y_3) = (2.999998, 2.000011)$$

Check also that at the point B the exact solution is (3, 2). Also approximate the solution at the points C and D.
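Assuming the `newton_2x2` sketch above, the computation at the point B can be reproduced as follows, starting from the same point (2.5, 3) used in the example; the iterates approach the exact solution (3, 2).

```python
x, y = newton_2x2(
    f=lambda x, y: x**2 + y - 11,
    g=lambda x, y: y**2 + x - 7,
    fx=lambda x, y: 2 * x, fy=lambda x, y: 1,
    gx=lambda x, y: 1,     gy=lambda x, y: 2 * y,
    x0=2.5, y0=3.0, n_iter=3,
)
print(x, y)   # close to the exact solution (3, 2)
```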

Exercise: - Use Newton's method for systems of nonlinear equations to solve the following. Perform only two iterations in each case. Try to interpret each pair of equations graphically to obtain an initial approximation.

x2  y  5
a)
y2  x  3

x  2( y  1)
b)
y 2  3xy  7

xy  x  9
c)
y2  x2  y2

x2  y2  4
d)
x 2  y 2  16

