Chapter 11 discusses iterative methods for solving linear and nonlinear equations, focusing on techniques like the Jacobi and Gauss-Seidel methods. It emphasizes the importance of initial guesses, convergence criteria, and the use of norms to measure error. Additionally, the chapter introduces relaxation techniques to improve convergence and fixed-point iteration methods for nonlinear systems.


Chapter 11

Iterative Methods for Linear and Nonlinear Equations

"These notes are only to be used in class presentations"

Copyright © 2006 The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Iterative Methods

• Iterative methods give approximate results for linear algebraic systems

  AX = B

  and provide an alternative to the elimination methods.
• They are efficient for large systems with a high percentage of zero entries (A is sparse), in terms of both computer storage and computation.
• As with any other iterative method, they need initial guesses, and they may not converge, or may converge slowly.
Jacobi Iterative Method

To solve n linear equations:

1. Choose an initial approximation x^(0).
   A simple way to obtain initial guesses is to assume that they are zero.
2. Convert the system AX = B into an equivalent system of the form

   X = TX + C

   This consists of solving the ith equation of AX = B for xi to obtain (aii ≠ 0)

   $$x_i = \frac{1}{a_{ii}}\Big(b_i - \sum_{\substack{j=1 \\ j \ne i}}^{n} a_{ij}x_j\Big), \qquad i = 1, 2, \dots, n$$
3. Generate a sequence of approximate solution vectors by computing

   X^(k+1) = TX^(k) + C        (k: iteration number)

   $$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{\substack{j=1 \\ j \ne i}}^{n} a_{ij}x_j^{(k)}\Big), \qquad i = 1, 2, \dots, n$$
4. Terminate the procedure when

   εa ≤ εs

   or the maximum number of iterations is reached.

How to define εa?
We have vectors, so we'll use norms to define εa:

$$\varepsilon_a = \frac{\lVert X^{(k+1)} - X^{(k)} \rVert}{\lVert X^{(k+1)} \rVert}$$

A norm is a real-valued function that provides a measure of the size or length of multi-component mathematical entities such as vectors or matrices. The norm of a vector gives a measure of the distance between an arbitrary vector and the zero vector.
x  x1 x2  xn T

Maximum magnitude norm

x 
 max xi
1i  n
p norm
1/ p
 n p

x p   xi 
i 1 
n
p=1 x 1   xi
i 1
1/ 2
 2
n
p=2 x 2   xi  Euclidean norm
i 1  6
Copyright © 2006 The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
The distance between two vectors is defined as the norm of the difference of the vectors. For y = [y1 y2 … yn]^T:

$$\lVert x - y \rVert_\infty = \max_{1 \le i \le n} |x_i - y_i|$$

$$\lVert x - y \rVert_2 = \Big(\sum_{i=1}^{n} (x_i - y_i)^2\Big)^{1/2}$$
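As a concrete illustration, the two distance formulas above take only a few lines of Python. The vectors x and y here are made-up examples, not data from the chapter:

```python
def max_norm(v):
    """Maximum-magnitude (infinity) norm: max |v_i|."""
    return max(abs(vi) for vi in v)

def euclidean_norm(v):
    """Euclidean (2-) norm: sqrt of the sum of squares."""
    return sum(vi * vi for vi in v) ** 0.5

def distance(x, y, norm):
    """Distance between two vectors = norm of their difference."""
    return norm([xi - yi for xi, yi in zip(x, y)])

x = [1.0, -4.0, 3.0]
y = [1.0, -1.0, 3.0]
print(max_norm(x))                      # 4.0
print(distance(x, y, max_norm))         # 3.0
print(distance(x, y, euclidean_norm))   # 3.0
```

Passing the norm as a function argument makes it easy to switch between the maximum-magnitude and Euclidean measures when computing εa.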
Example 11.1
Solve the linear system with the Jacobi iterative method:

4x1 − x2 + x3 = 7
4x1 − 8x2 + x3 = −21        x^(0) = [1 2 2]^T
−2x1 + x2 + 5x3 = 15

Carry out four iterations.

iter. no    x1       x2       x3
   0        1        2        2
   1        1.75     3.375    3
   2        1.8438   3.875    3.025
   3        1.9625   3.925    2.963
   4        1.9906   3.9766   3
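The four Jacobi iterations of Example 11.1 can be reproduced with a short Python sketch. The helper `jacobi` below is an illustrative implementation, not code from the text:

```python
def jacobi(A, b, x0, iterations):
    """One Jacobi sweep per iteration: every x_i is computed
    from the values of the previous iterate only."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# System and starting vector of Example 11.1.
A = [[4.0, -1.0, 1.0],
     [4.0, -8.0, 1.0],
     [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = jacobi(A, b, [1.0, 2.0, 2.0], 4)
print([round(v, 4) for v in x])   # [1.9906, 3.9766, 3.0], matching the table
```

Because the list comprehension builds the new vector before replacing `x`, no component of the current sweep leaks into the same sweep, which is exactly what distinguishes Jacobi from Gauss-Seidel.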
Gauss-Seidel Method

The Gauss-Seidel method is a commonly used iterative method. It is the same as the Jacobi technique except that it uses the latest values of the x's:

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k)}\Big), \qquad i = 1, 2, \dots, n$$
Example 11.2

Solve the linear system with the Gauss-Seidel iterative method:

4x1 − x2 + x3 = 7
4x1 − 8x2 + x3 = −21        x^(0) = [1 2 2]^T
−2x1 + x2 + 5x3 = 15

Iterate until εa ≤ 0.001. Use the maximum magnitude norm to calculate εa.

iter. no    x1       x2       x3       εa
   0        1        2        2
   1        1.75     3.75     2.95
   2        1.95     3.9688   2.9863   0.0551
   3        1.9956   3.9961   2.9990   0.0114
   4        1.9993   3.9995   2.9998   0.0009
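A Gauss-Seidel sketch with the εa stopping test reproduces Example 11.2. The function name `gauss_seidel` and the parameter `eps_s` are ours, chosen for illustration:

```python
def gauss_seidel(A, b, x0, eps_s=0.001, max_iter=50):
    """Gauss-Seidel: each x_i is updated in place, so later equations
    in the same sweep already use the newest values.  eps_a is measured
    with the maximum-magnitude norm, as in the example."""
    n = len(b)
    x = list(x0)
    for k in range(max_iter):
        x_old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        eps_a = max(abs(x[i] - x_old[i]) for i in range(n)) / max(abs(v) for v in x)
        if eps_a <= eps_s:
            return x, k + 1
    return x, max_iter

# System and starting vector of Example 11.2.
A = [[4.0, -1.0, 1.0],
     [4.0, -8.0, 1.0],
     [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x, iters = gauss_seidel(A, b, [1.0, 2.0, 2.0])
print(iters)                      # 4 iterations, as in the table
print([round(v, 4) for v in x])   # close to the exact solution (2, 4, 3)
```

Compare with the Jacobi run on the same system: Gauss-Seidel reaches εa ≤ 0.001 in four iterations because each sweep immediately exploits the freshly updated components.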
Convergence criterion

If the coefficient matrix A is diagonally dominant, the Jacobi and Gauss-Seidel iterations are guaranteed to converge.

Diagonally dominant: for each equation i,

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|$$

For each row, the absolute value of the diagonal element is greater than the sum of the absolute values of the rest of the elements.
Note that this is not a necessary condition, i.e. the system may still converge even if A is not diagonally dominant.
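The row-by-row dominance test is easy to automate. A minimal sketch (the function name is ours):

```python
def is_diagonally_dominant(A):
    """True if, in every row, |a_ii| exceeds the sum of the other |a_ij|."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

# Coefficient matrix of Example 11.1: dominant, so convergence is guaranteed.
print(is_diagonally_dominant([[4, -1, 1], [4, -8, 1], [-2, 1, 5]]))   # True
# Swapping the first two rows destroys the dominance.
print(is_diagonally_dominant([[4, -8, 1], [4, -1, 1], [-2, 1, 5]]))   # False
```

The second call shows why equation ordering matters: the same physical system can pass or fail the test depending on how its rows are arranged.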
Example 11.3
Solve the linear system with the Gauss-Seidel iterative method:

−3x1 + 9x2 + x3 = 2
5x1 − 2x2 + 3x3 = −1        x^(0) = [0 0 0]^T
−2x1 + x2 + 7x3 = −3

Iterate until εa ≤ 0.02. Use the maximum magnitude norm to calculate εa. Ensure convergence before starting to iterate (reorder the equations so that A is diagonally dominant).

iter. no    x1        x2       x3        εa
   0         0        0         0
   1        −0.2000   0.1556   −0.5079
   2         0.1670   0.3343   −0.4286   0.8562
   3         0.1909   0.3335   −0.4217   0.0567
   4         0.1864   0.3312   −0.4226   0.0107
Improvement of Convergence Using Relaxation

Relaxation represents a slight modification of the Gauss-Seidel method and is designed to enhance convergence.
After each new value of x is computed, that value is modified by a weighted average of the results of the previous and the present iterations:

$$x_i^{new} = (1 - w)\,x_i^{old} + w\,x_i^{new}, \qquad 0 < w < 2$$
• For choices of w with 0 < w < 1, the procedures are called under-relaxation methods.
• For choices of w with w > 1, the procedures are called over-relaxation methods. These methods are abbreviated SOR, for Successive Over-Relaxation.

Substitute w into the Gauss-Seidel iterative equation:

$$x_i^{(k+1)} = (1 - w)\,x_i^{(k)} + w\,x_i^{(k+1)}, \qquad i = 1, 2, \dots, n$$

$$x_i^{(k+1)} = (1 - w)\,x_i^{(k)} + \frac{w}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k)}\Big), \qquad i = 1, 2, \dots, n$$
Example 11.4
Use the SOR method with w = 1.25 to solve the linear system:

4x1 + 3x2 = 24
3x1 + 4x2 − x3 = 30        x^(0) = [1 1 1]^T
−x2 + 4x3 = −24

Compute three iterations.
The result is given for 7 iterations and compared with the Gauss-Seidel method.
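A minimal SOR sketch, assuming the same sweep order as Gauss-Seidel; the helper name `sor` is ours. Run on the system of Example 11.4 with w = 1.25, it approaches the exact solution (3, 4, −5):

```python
def sor(A, b, x0, w, iterations):
    """Gauss-Seidel sweep with relaxation: the new value is a weighted
    average of the old value and the freshly computed one."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x[i] + w * (b[i] - s) / A[i][i]
    return x

# System and starting vector of Example 11.4.
A = [[4.0, 3.0, 0.0],
     [3.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x = sor(A, b, [1.0, 1.0, 1.0], w=1.25, iterations=7)
print([round(v, 4) for v in x])   # close to the exact solution (3, 4, -5)
```

Setting w = 1 recovers plain Gauss-Seidel, which makes the routine convenient for side-by-side comparisons of the two methods.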
Solution of Nonlinear Systems of Equations

• Fixed Point Iteration:

f1(x1, x2, x3, …, xn) = 0  →  x1 = g1(x1, x2, x3, …, xn)
f2(x1, x2, x3, …, xn) = 0  →  x2 = g2(x1, x2, x3, …, xn)
⋮
fn(x1, x2, x3, …, xn) = 0  →  xn = gn(x1, x2, x3, …, xn)

Scheme that resembles Jacobi iteration (all right-hand sides use the previous iterate):

x1^(k+1) = g1(x1^(k), x2^(k), x3^(k), …, xn^(k))
x2^(k+1) = g2(x1^(k), x2^(k), x3^(k), …, xn^(k))
⋮
xn^(k+1) = gn(x1^(k), x2^(k), x3^(k), …, xn^(k))

Scheme that resembles Gauss-Seidel (each gi uses the newest available values):

x1^(k+1) = g1(x1^(k), x2^(k), x3^(k), …, x_{n-1}^(k), xn^(k))
x2^(k+1) = g2(x1^(k+1), x2^(k), x3^(k), …, x_{n-1}^(k), xn^(k))
⋮
xn^(k+1) = gn(x1^(k+1), x2^(k+1), x3^(k+1), …, x_{n-1}^(k+1), xn^(k))
Convergence Criterion:

Let D = {(x1, x2, …, xn) : ai ≤ xi ≤ bi for each i = 1, 2, …, n}. Suppose G is a continuous function with G(x) ∈ D whenever x ∈ D; then G has a fixed point in D.
In addition, suppose all component functions of G have continuous partial derivatives and a constant K < 1 exists with

$$\left|\frac{\partial g_i(\mathbf{x})}{\partial x_j}\right| \le \frac{K}{n} \qquad \text{for each } j = 1, \dots, n \text{ and all } \mathbf{x} \in D;$$

then the fixed-point iteration method converges.
Example 11.5:
Use the fixed-point iteration method to determine the roots of

x1 − x2 = 0.25
8x1² + 16x2 − 8x1x2 = 5

with initial guesses of x1^(0) = x2^(0) = 0. Compute 3 iterations with the Gauss-Seidel scheme.

iter. no    x1       x2
   0        0        0
   1        0.2500
   2        0.4955   0.2522
   3        0.5000   0.2500
Example 11.6:
Use the fixed-point iteration method to determine the roots of

x1² − 10x1 + x2² + 8 = 0
x1x2² + x1 − 10x2 + 8 = 0

with D = {(x1, x2) : 0 ≤ x1, x2 ≤ 1.5}. Compute 3 iterations with the Gauss-Seidel scheme and calculate εa in each iteration by using the maximum magnitude norm.

iter. no    x1       x2       εa
   0        0        0
   1        0.8      0.88
   2        0.9414   0.967    0.1462
   3        0.9821   0.99     0.0411
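A fixed-point sketch for Example 11.6 with the Gauss-Seidel scheme. The rearrangements g1 and g2 below are one possible choice, obtained by solving each equation for its 10·xi term; they reproduce the iterates in the table above:

```python
def g1(x1, x2):
    # Rearranged from x1^2 - 10*x1 + x2^2 + 8 = 0.
    return (x1 * x1 + x2 * x2 + 8.0) / 10.0

def g2(x1, x2):
    # Rearranged from x1*x2^2 + x1 - 10*x2 + 8 = 0.
    return (x1 * x2 * x2 + x1 + 8.0) / 10.0

x1, x2 = 0.0, 0.0
for k in range(3):
    x1_old, x2_old = x1, x2
    x1 = g1(x1, x2)      # Gauss-Seidel scheme: use the newest x1 right away
    x2 = g2(x1, x2)
    eps_a = max(abs(x1 - x1_old), abs(x2 - x2_old)) / max(abs(x1), abs(x2))
    print(round(x1, 4), round(x2, 4), round(eps_a, 4))
```

Note how quickly the iterates approach the root near (1, 1); on D the partial derivatives of g1 and g2 stay small, which is what the convergence criterion above requires.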
• Newton-Raphson method

f1(x1, x2, x3, …, xn) = 0
f2(x1, x2, x3, …, xn) = 0
⋮
fn(x1, x2, x3, …, xn) = 0

Newton-Raphson equation (single equation):

$$x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f'(x^{(k)})}$$

Newton-Raphson equation in matrix form:

$$X^{(k+1)} = X^{(k)} - \left[J^{(k)}\right]^{-1} F^{(k)}$$

$$J^{(k)} X^{(k+1)} = J^{(k)} X^{(k)} - F^{(k)}$$

where J is the Jacobian matrix:

$$J = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}$$
General solution of the Newton-Raphson method

$$J^{(k)} X^{(k+1)} = J^{(k)} X^{(k)} - F^{(k)}$$

written out in full,

$$\begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}^{(k)} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}^{(k+1)} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}^{(k)} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}^{(k)} - \begin{bmatrix} f_1(x_1, x_2, \dots, x_n) \\ \vdots \\ f_n(x_1, x_2, \dots, x_n) \end{bmatrix}^{(k)}$$
Solve this set of linear equations at each iteration:

$$J^{(k)} X^{(k+1)} = J^{(k)} X^{(k)} - F^{(k)}$$
$$J^{(k)} \{ X^{(k+1)} - X^{(k)} \} = -F^{(k)}$$

Rearranged:

$$J^{(k)} \Delta X^{(k)} = -F^{(k)}$$

$$\begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}^{(k)} \begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_n \end{bmatrix}^{(k)} = -\begin{bmatrix} f_1(x_1, x_2, \dots, x_n) \\ \vdots \\ f_n(x_1, x_2, \dots, x_n) \end{bmatrix}^{(k)}$$

$$X^{(k+1)} = X^{(k)} + \Delta X^{(k)}$$
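For a 2x2 system, the linear solve J ΔX = −F fits in a few lines using Cramer's rule, so no linear-algebra library is needed. The helper `newton_step` is an illustrative name; the demo applies it to the system of Example 11.7, which follows:

```python
def newton_step(x, F, J):
    """One Newton-Raphson step for a 2x2 system: solve J*dX = -F
    by Cramer's rule (assumes the Jacobian is nonsingular)."""
    f1, f2 = F(x)
    (a, b), (c, d) = J(x)
    det = a * d - b * c
    dx1 = (-f1 * d - (-f2) * b) / det
    dx2 = (a * (-f2) - c * (-f1)) / det
    return [x[0] + dx1, x[1] + dx2]

# System of Example 11.7: 2*x1^2 + x2^2 = 4.32 and x1^2 - x2^2 = 0.
F = lambda x: (2.0 * x[0]**2 + x[1]**2 - 4.32, x[0]**2 - x[1]**2)
J = lambda x: ((4.0 * x[0], 2.0 * x[1]), (2.0 * x[0], -2.0 * x[1]))

x = [1.0, 1.0]
for _ in range(2):
    x = newton_step(x, F, J)
print([round(v, 4) for v in x])   # [1.2002, 1.2002], as in Example 11.7
```

For larger systems the same step would replace Cramer's rule with a general linear solver, since only the solve of J ΔX = −F changes with n.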
Example 11.7:
Use the Newton-Raphson method to determine the roots of

2x1² + x2² = 4.32
x1² − x2² = 0

with initial guesses of x1^(0) = x2^(0) = 1. Compute two iterations.

iter. no    x1       x2
   0        1        1
   1        1.2200   1.2200
   2        1.2002   1.2002
Solution:

f1(x1, x2) = 2x1² + x2² − 4.32 = 0
f2(x1, x2) = x1² − x2² = 0

$$J = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 4x_1 & 2x_2 \\ 2x_1 & -2x_2 \end{bmatrix}$$

1st iteration, with x1^(0) = x2^(0) = 1:

$$\begin{bmatrix} 4x_1^{(0)} & 2x_2^{(0)} \\ 2x_1^{(0)} & -2x_2^{(0)} \end{bmatrix} \begin{bmatrix} \Delta x_1^{(0)} \\ \Delta x_2^{(0)} \end{bmatrix} = -\begin{bmatrix} f_1(x_1^{(0)}, x_2^{(0)}) \\ f_2(x_1^{(0)}, x_2^{(0)}) \end{bmatrix}$$

$$\begin{bmatrix} 4 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} \Delta x_1^{(0)} \\ \Delta x_2^{(0)} \end{bmatrix} = \begin{bmatrix} 1.32 \\ 0 \end{bmatrix} \quad\Rightarrow\quad \Delta x_1^{(0)} = \Delta x_2^{(0)} = 0.22$$

x1^(1) = x1^(0) + Δx1^(0) = 1.22
x2^(1) = x2^(0) + Δx2^(0) = 1.22
2nd iteration, with x1^(1) = x2^(1) = 1.22:

$$\begin{bmatrix} 4x_1^{(1)} & 2x_2^{(1)} \\ 2x_1^{(1)} & -2x_2^{(1)} \end{bmatrix} \begin{bmatrix} \Delta x_1^{(1)} \\ \Delta x_2^{(1)} \end{bmatrix} = -\begin{bmatrix} f_1(x_1^{(1)}, x_2^{(1)}) \\ f_2(x_1^{(1)}, x_2^{(1)}) \end{bmatrix}$$

$$\begin{bmatrix} 4.88 & 2.44 \\ 2.44 & -2.44 \end{bmatrix} \begin{bmatrix} \Delta x_1^{(1)} \\ \Delta x_2^{(1)} \end{bmatrix} = \begin{bmatrix} -0.1452 \\ 0 \end{bmatrix} \quad\Rightarrow\quad \Delta x_1^{(1)} = \Delta x_2^{(1)} = -0.0198$$

x1^(2) = x1^(1) + Δx1^(1) = 1.2002
x2^(2) = x2^(1) + Δx2^(1) = 1.2002

The true solution is x1 = x2 = 1.2.
Example 11.8:
Use the Newton-Raphson method to determine the roots of

x1² + x1x2 = 10
x2 + 3x1x2² = 57

with initial guesses of x1^(0) = 1.5, x2^(0) = 3.5.
Iterate until εa ≤ 0.001. Use the maximum magnitude norm to calculate εa.

iter. no    x1       x2            εa
   0        1.5      3.5
   1        2.036    2.8439
   2        1.9987   3.0023        0.0528
   3        2.0000   2.999999 ≈ 3  0.0008
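Example 11.8 can be checked with a self-contained sketch that iterates until εa ≤ 0.001 in the maximum-magnitude norm; Cramer's rule again stands in for a linear-algebra library, and the variable names are ours:

```python
def F(x1, x2):
    # f1 = x1^2 + x1*x2 - 10, f2 = x2 + 3*x1*x2^2 - 57
    return (x1**2 + x1 * x2 - 10.0,
            x2 + 3.0 * x1 * x2**2 - 57.0)

def J(x1, x2):
    # Partial derivatives of f1 and f2 with respect to x1 and x2.
    return ((2.0 * x1 + x2, x1),
            (3.0 * x2**2, 1.0 + 6.0 * x1 * x2))

x1, x2 = 1.5, 3.5
eps_a, iters = 1.0, 0
while eps_a > 0.001:
    f1, f2 = F(x1, x2)
    (a, b), (c, d) = J(x1, x2)
    det = a * d - b * c            # Cramer's rule for J * dX = -F
    dx1 = (-f1 * d + f2 * b) / det
    dx2 = (-a * f2 + c * f1) / det
    x1, x2 = x1 + dx1, x2 + dx2
    eps_a = max(abs(dx1), abs(dx2)) / max(abs(x1), abs(x2))
    iters += 1
print(iters, round(x1, 4), round(x2, 4))   # converges to x1 = 2, x2 = 3
```

Since ΔX = X^(k+1) − X^(k), the numerator of εa comes for free from the Newton step itself, with no extra vector bookkeeping.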
