
COMPUTATIONAL METHODS FOR ENGINEERS
Linear Algebraic Equations (cont.)

Lecturer: Tran Quoc Viet ([email protected])

Fall 2023
Topics

Ch.1. Approximations, Round-Off errors, Truncation errors
Ch.2. Roots of equations: Bracketing Methods, Open Methods
Ch.3. Linear Algebraic Equations: Gauss Elimination, LU Decomposition, Matrix Inversion, Iterative methods
Ch.4. Optimization: One-Dimensional Unconstrained Optimization, Multidimensional Unconstrained Optimization, Multidimensional Constrained Optimization
Ch.5. Curve fitting: Least-squares regression, Interpolation
Ch.6. Numerical Integration and Differentiation
Ch.7. Integration of differential equations
Overview

[Organization chart of Part 3, "Linear Algebraic Equations": PT 3.1 Motivation, PT 3.2 Mathematical background, PT 3.3 Orientation; Chapter 9 Gauss Elimination (9.1 Small systems, 9.2 Naive Gauss elimination, 9.3 Pitfalls, 9.4 Remedies, 9.5 Complex systems, 9.6 Nonlinear systems, 9.7 Gauss-Jordan); Chapter 10 LU Decomposition and Matrix Inversion (10.1 LU decomposition, 10.2 Matrix inverse, 10.3 System condition); Chapter 11 Special Matrices and Gauss-Seidel (11.1 Special matrices, 11.2 Gauss-Seidel, 11.3 Software); Chapter 12 Engineering Case Studies (12.1 Chemical, 12.2 Civil, 12.3 Electrical, 12.4 Mechanical engineering); Epilogue: PT 3.4 Trade-offs, PT 3.5 Important formulas, PT 3.6 Advanced methods.]
Contents

1. Matrix Inversion (cont.)
2. Matrix Condition Number
3. Fixed-point iteration methods
   3.1. Jacobi method
   3.2. Gauss-Seidel method
   3.3. Improvement of convergence using relaxation
4. Codes and exercises
1. Matrix Inversion (cont.) (1/2)
If A is invertible, i.e. A^{-1} exists, and A B = I_N, then B = A^{-1}.
For N = 3, we seek A^{-1} from A B = I_3, column by column:

$$
\begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix}
\underbrace{\begin{bmatrix} b_{1,1} & b_{1,2} & b_{1,3}\\ b_{2,1} & b_{2,2} & b_{2,3}\\ b_{3,1} & b_{3,2} & b_{3,3} \end{bmatrix}}_{?}
=
\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix},
$$

so each column of B solves a linear system with the same coefficient matrix A:

$$
A \underbrace{\begin{bmatrix} b_{1,1}\\ b_{2,1}\\ b_{3,1} \end{bmatrix}}_{?} = \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix},\qquad
A \underbrace{\begin{bmatrix} b_{1,2}\\ b_{2,2}\\ b_{3,2} \end{bmatrix}}_{?} = \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix},\qquad
A \underbrace{\begin{bmatrix} b_{1,3}\\ b_{2,3}\\ b_{3,3} \end{bmatrix}}_{?} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}.
$$
1. Matrix Inversion (cont.) (2/2)

We can use the Gauss elimination method (carried through to Gauss-Jordan form) on the augmented matrix as follows:

$$
[h_{i,j}]_{3\times 6} =
\left[\begin{array}{ccc|ccc}
a_{1,1} & a_{1,2} & a_{1,3} & 1 & 0 & 0\\
a_{2,1} & a_{2,2} & a_{2,3} & 0 & 1 & 0\\
a_{3,1} & a_{3,2} & a_{3,3} & 0 & 0 & 1
\end{array}\right]
\xrightarrow{\text{elementary row operations}}
\left[\begin{array}{ccc|ccc}
1 & 0 & 0 & b_{1,1} & b_{1,2} & b_{1,3}\\
0 & 1 & 0 & b_{2,1} & b_{2,2} & b_{2,3}\\
0 & 0 & 1 & b_{3,1} & b_{3,2} & b_{3,3}
\end{array}\right],
$$

and the right half of the reduced matrix is B = A^{-1}.
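For concreteness, here is a minimal Python sketch of this procedure (our illustration, not the course's code; the helper name invert_gauss_jordan is ours, and A is assumed invertible):

import numpy as np

def invert_gauss_jordan(A):
    """Invert A via elementary row operations on the augmented matrix [A | I]."""
    n = A.shape[0]
    H = np.hstack([A.astype(float), np.eye(n)])   # [A | I], shape (n, 2n)
    for k in range(n):
        p = k + np.argmax(np.abs(H[k:, k]))       # partial pivoting for stability
        H[[k, p]] = H[[p, k]]                     # swap rows k and p
        H[k] /= H[k, k]                           # scale pivot row so H[k, k] = 1
        for i in range(n):
            if i != k:
                H[i] -= H[i, k] * H[k]            # zero out column k in row i
    return H[:, n:]                               # right half is A^{-1}

A = np.array([[1., 2., 3.], [2.01, 4., 6.], [-2., 3., 5.]])
print(invert_gauss_jordan(A) @ A)                 # should be close to I_3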
2. Matrix Condition Number (1/4)

In practice, instead of a matrix A we often only have an approximation of it, say Ã, where the error A − Ã may come from
▶ rounding errors,
▶ errors from an earlier computation,
▶ measurement errors, etc.
So, instead of solving

A x = b

to obtain the exact solution x, we are dealing with

Ã x̃ = b

to get the numerical solution x̃ that approximates x.

We put Ã = A + ∆A and x̃ = x + ∆x, where ∆A and ∆x stand for the (absolute) errors.
2. Matrix Condition Number (2/4)

Substituting Ã = A + ∆A and x̃ = x + ∆x into Ã x̃ = b:

(A + ∆A)(x + ∆x) = b
A^{-1}(A + ∆A)(x + ∆x) = A^{-1} b
(A^{-1}A + A^{-1}∆A)(x + ∆x) = A^{-1} b = x
(I_N + A^{-1}∆A)(x + ∆x) = x
x + A^{-1}∆A x + ∆x + A^{-1}∆A ∆x = x
A^{-1}∆A x + ∆x + A^{-1}∆A ∆x = 0
A^{-1}∆A (x + ∆x) = −∆x
A^{-1}∆A x̃ = −∆x.
2. Matrix Condition Number (3/4)

Thus we have

∥∆x∥ = ∥−∆x∥ = ∥A^{-1} ∆A x̃∥ ≤ ∥A^{-1}∥ ∥∆A∥ ∥x̃∥

due to the basic property of matrix norms, i.e. ∥A B∥ ≤ ∥A∥ ∥B∥, where ∥·∥ is some matrix norm, e.g. the Frobenius norm ∥·∥_F (read cmfe-b04-c03 s01.pdf).

Assume that x̃ is not the trivial solution, i.e. x̃ ≠ 0. We obtain

∥∆x∥ / ∥x̃∥ ≤ ∥A^{-1}∥ ∥A∥ · ∥∆A∥ / ∥A∥.   (1)

Here the left-hand side is the relative (Rel.) error of the solution, the factor ∥A^{-1}∥ ∥A∥ is the condition number of A, and ∥∆A∥ / ∥A∥ is the relative error of A.
2. Matrix Condition Number (4/4)

The condition number of a matrix A is defined by

Cond(A) = ∥A∥ ∥A^{-1}∥.

Thus, (1) becomes

∥∆x∥ / ∥x̃∥ ≤ Cond(A) · ∥∆A∥ / ∥A∥.   (2)

Notes:
▶ Cond(A) ≥ 1. Indeed, since A A^{-1} = I_N and ∥A A^{-1}∥ ≤ ∥A∥ ∥A^{-1}∥, we have Cond(A) = ∥A∥ ∥A^{-1}∥ ≥ ∥A A^{-1}∥ = ∥I_N∥ = 1.
▶ If A^{-1} does not exist, one defines Cond(A) = ∞ and calls A a singular matrix. When Cond(A) is very large, A is called an ill-conditioned matrix.
▶ To compute Cond(A), use numpy.linalg.cond (see Example 1).
Example 1: Evaluating an upper bound on the relative error of the solution of A x = b

import numpy as np
import scipy.linalg as la

A = np.array([[1, 2, 3], [2.01, 4, 6], [-2, 3, 5]], dtype=float)

# Pick a vector as the exact solution:
x0 = np.array([1., 2., 3.])

# Generate the RHS of Ax = b using the exact solution:
b = A @ x0

# Inverse of A:
B = la.inv(A)

# Obtain the numerical solution: x = A^(-1) b
x = B @ b

# Calculate the condition number of A (Frobenius norm):
cond = np.linalg.cond(A, 'fro')

# Get the machine epsilon (reps plays the role of |Delta A|/|A|):
reps = np.finfo(float).eps

# Estimate the upper bound of the relative error of the solution, Eq. (2):
rerr = cond * reps

# Report:
print("A =\n", A)
print("b =\n", b)
print("B.A =\n", np.dot(B, A))
print("*** %30s %12.5e " % ("Condition number Cond(A) =", cond))
print("*** %30s %12.5e " % ("E = |x - x0|/|x| =", la.norm(x - x0) / la.norm(x)))
print("*** %30s %12.5e " % ("Upper-bound of Rel. error E =", rerr))

Output:

A =
 [[ 1.    2.    3.  ]
  [ 2.01  4.    6.  ]
  [-2.    3.    5.  ]]
b =
 [14.   28.01 19.  ]
B.A =
 [[ 1.00000000e+00  1.47659662e-14  5.66213743e-15]
  [-1.54543045e-13  1.00000000e+00  1.29674049e-12]
  [ 1.19459997e-13 -6.57252031e-14  1.00000000e+00]]
***     Condition number Cond(A) =  3.04472e+04
***            E = |x - x0|/|x| =  1.67701e-12
***  Upper-bound of Rel. error E =  6.76064e-12
3. Fixed-point iteration methods (1/2)
General idea: To use the fixed-point iteration method to solve

A x = b   (3)

numerically,
1. We derive from (3) an equivalent equation of the form

x = B x + c   (4)

where B is some special matrix and c is a vector.

2. Fixed-point iteration: starting from some initial guess x^0, we repeat the calculation

x^l = B x^{l-1} + c,  l = 1, 2, ...   (5)

until a convergence criterion is met, e.g.

∥x^{l_0} − x^{l_0−1}∥ ≤ ε ∥x^{l_0}∥,  where ε is some given tolerance.
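The scheme (5) is easy to code generically. Below is a minimal Python sketch of it (our illustration, not part of the lecture code; B and c are assumed to be given numpy arrays):

import numpy as np

def fixed_point_solve(B, c, eps=1.0e-10, maxit=5000):
    """Iterate x^l = B x^{l-1} + c until ||x^l - x^{l-1}|| <= eps*||x^l||."""
    x = np.zeros_like(c, dtype=float)        # initial guess x^0 = 0
    for l in range(1, maxit + 1):
        x_new = B @ x + c                    # one fixed-point step, Eq. (5)
        if np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x_new):
            return x_new, l                  # converged after l iterations
        x = x_new
    return x, maxit                          # no convergence within maxit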


3. Fixed-point iteration methods (2/2)

For

$$
A = \begin{bmatrix}
a_{1,1} & a_{1,2} & a_{1,3} & \cdots & a_{1,N}\\
a_{2,1} & a_{2,2} & a_{2,3} & \cdots & a_{2,N}\\
a_{3,1} & a_{3,2} & a_{3,3} & \cdots & a_{3,N}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{N,1} & a_{N,2} & a_{N,3} & \cdots & a_{N,N}
\end{bmatrix},\qquad
b = \begin{bmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_N \end{bmatrix},
$$

in this section let us decompose

A = D + L + U

where D is the diagonal part, L the strictly lower triangular part, and U the strictly upper triangular part:

$$
D = \begin{bmatrix}
a_{1,1} & 0 & 0 & \cdots & 0\\
0 & a_{2,2} & 0 & \cdots & 0\\
0 & 0 & a_{3,3} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & a_{N,N}
\end{bmatrix},\quad
L = \begin{bmatrix}
0 & 0 & 0 & \cdots & 0\\
a_{2,1} & 0 & 0 & \cdots & 0\\
a_{3,1} & a_{3,2} & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{N,1} & a_{N,2} & a_{N,3} & \cdots & 0
\end{bmatrix},\quad
U = \begin{bmatrix}
0 & a_{1,2} & a_{1,3} & \cdots & a_{1,N}\\
0 & 0 & a_{2,3} & \cdots & a_{2,N}\\
0 & 0 & 0 & \cdots & a_{3,N}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}.
$$
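In numpy this splitting is a one-liner each; a minimal sketch (our illustration, matching the decomposition step requested in Exercise 1 below):

import numpy as np

A = np.array([[4., 1., 2.], [1., 5., 1.], [2., 1., 6.]])  # any square matrix
D = np.diag(np.diag(A))    # diagonal part
L = np.tril(A, k=-1)       # strictly lower triangular part
U = np.triu(A, k=1)        # strictly upper triangular part
assert np.allclose(A, D + L + U)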
3.1. Jacobi method (1/4)

Jacobi iteration: substituting A = D + L + U into A x = b,

(D + L + U) x = b
D x + (L + U) x = b
D x = −(L + U) x + b
x = D^{-1} (−(L + U) x + b)
x = −D^{-1}(L + U) x + D^{-1} b   (6)

which has the form (4) with B = −D^{-1}(L + U) and c = D^{-1} b, where

$$
D^{-1} = \begin{bmatrix}
\frac{1}{a_{1,1}} & 0 & 0 & \cdots & 0\\
0 & \frac{1}{a_{2,2}} & 0 & \cdots & 0\\
0 & 0 & \frac{1}{a_{3,3}} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & \frac{1}{a_{N,N}}
\end{bmatrix},\qquad
L + U = \begin{bmatrix}
0 & a_{1,2} & a_{1,3} & \cdots & a_{1,N}\\
a_{2,1} & 0 & a_{2,3} & \cdots & a_{2,N}\\
a_{3,1} & a_{3,2} & 0 & \cdots & a_{3,N}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{N,1} & a_{N,2} & a_{N,3} & \cdots & 0
\end{bmatrix}.
$$
3.1. Jacobi method (2/4)
Thus we can see that

$$
D^{-1}(L+U) = \begin{bmatrix}
0 & \frac{a_{1,2}}{a_{1,1}} & \frac{a_{1,3}}{a_{1,1}} & \cdots & \frac{a_{1,N}}{a_{1,1}}\\
\frac{a_{2,1}}{a_{2,2}} & 0 & \frac{a_{2,3}}{a_{2,2}} & \cdots & \frac{a_{2,N}}{a_{2,2}}\\
\frac{a_{3,1}}{a_{3,3}} & \frac{a_{3,2}}{a_{3,3}} & 0 & \cdots & \frac{a_{3,N}}{a_{3,3}}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\frac{a_{N,1}}{a_{N,N}} & \frac{a_{N,2}}{a_{N,N}} & \frac{a_{N,3}}{a_{N,N}} & \cdots & 0
\end{bmatrix},\qquad
D^{-1} b = \begin{bmatrix}
\frac{b_1}{a_{1,1}}\\ \frac{b_2}{a_{2,2}}\\ \frac{b_3}{a_{3,3}}\\ \vdots\\ \frac{b_N}{a_{N,N}}
\end{bmatrix}.
$$

In view of the iteration (5), Eq. (6) becomes

x^l = −D^{-1}(L + U) x^{l−1} + D^{-1} b,  l = 1, 2, ...   (7)

or, written out componentwise,

$$
\begin{bmatrix} x_1^l\\ x_2^l\\ x_3^l\\ \vdots\\ x_N^l \end{bmatrix}
= -\begin{bmatrix}
0 & \frac{a_{1,2}}{a_{1,1}} & \frac{a_{1,3}}{a_{1,1}} & \cdots & \frac{a_{1,N}}{a_{1,1}}\\
\frac{a_{2,1}}{a_{2,2}} & 0 & \frac{a_{2,3}}{a_{2,2}} & \cdots & \frac{a_{2,N}}{a_{2,2}}\\
\frac{a_{3,1}}{a_{3,3}} & \frac{a_{3,2}}{a_{3,3}} & 0 & \cdots & \frac{a_{3,N}}{a_{3,3}}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\frac{a_{N,1}}{a_{N,N}} & \frac{a_{N,2}}{a_{N,N}} & \frac{a_{N,3}}{a_{N,N}} & \cdots & 0
\end{bmatrix}
\begin{bmatrix} x_1^{l-1}\\ x_2^{l-1}\\ x_3^{l-1}\\ \vdots\\ x_N^{l-1} \end{bmatrix}
+ \begin{bmatrix} \frac{b_1}{a_{1,1}}\\ \frac{b_2}{a_{2,2}}\\ \frac{b_3}{a_{3,3}}\\ \vdots\\ \frac{b_N}{a_{N,N}} \end{bmatrix}.
$$
3.1. Jacobi method (3/4)

Thus we obtain

$$
\begin{aligned}
x_1^l &= \frac{-\left(a_{1,2}x_2^{l-1} + a_{1,3}x_3^{l-1} + \dots + a_{1,N}x_N^{l-1}\right) + b_1}{a_{1,1}},\\
x_2^l &= \frac{-\left(a_{2,1}x_1^{l-1} + a_{2,3}x_3^{l-1} + \dots + a_{2,N}x_N^{l-1}\right) + b_2}{a_{2,2}},\\
x_3^l &= \frac{-\left(a_{3,1}x_1^{l-1} + a_{3,2}x_2^{l-1} + \dots + a_{3,N}x_N^{l-1}\right) + b_3}{a_{3,3}},\\
&\;\;\vdots\\
x_N^l &= \frac{-\left(a_{N,1}x_1^{l-1} + a_{N,2}x_2^{l-1} + \dots + a_{N,N-1}x_{N-1}^{l-1}\right) + b_N}{a_{N,N}}.
\end{aligned}
$$

Or,

$$
x_i^l = \frac{-\sum_{j=1,\, j\neq i}^{N} a_{i,j}\, x_j^{l-1} + b_i}{a_{i,i}},\qquad i = 1,\dots,N,\; l = 1, 2, \dots   (8)
$$
3.1. Jacobi method (4/4)

Starting with some guess, e.g. x^0 = [0, 0, ..., 0], the iteration (7) (or (8)) is repeated for l = 1, 2, .... It should be terminated at some l = l_0 ∈ ℕ such that

∥x^{l_0} − x^{l_0−1}∥ ≤ ε ∥x^{l_0}∥

for some given tolerance ε, where ∥·∥ is some norm, e.g. the Euclidean norm.

A sufficient condition for convergence (strict diagonal dominance):

$$
|a_{i,i}| > \sum_{j=1,\, j\neq i}^{N} |a_{i,j}|,\qquad i = 1, \dots, N.
$$
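Before the element-wise solver of Section 4, here is a minimal vectorized sketch of the matrix form (7) (our illustration; the function name jacobi_matrix_form is ours):

import numpy as np

def jacobi_matrix_form(A, b, eps=1.0e-10, maxit=5000):
    """Jacobi iteration x^l = -D^{-1}(L+U) x^{l-1} + D^{-1} b, Eq. (7)."""
    d = np.diag(A)                        # diagonal entries of A
    LU = A - np.diag(d)                   # L + U (off-diagonal part)
    x = np.zeros_like(b, dtype=float)     # initial guess x^0 = 0
    for l in range(1, maxit + 1):
        x_new = (b - LU @ x) / d          # componentwise this is Eq. (8)
        if np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x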
3.2. Gauss-Seidel method (1/5)
Gauss-Seidel iteration: from A = D + L + U, we have

(D + L + U) x = b
(D + L) x + U x = b
(D + L) x = −U x + b   (9)
x = (D + L)^{-1} (−U x + b)
x = −(D + L)^{-1} U x + (D + L)^{-1} b

which has the form (4) with B = −(D + L)^{-1} U and c = (D + L)^{-1} b. In view of the iteration (5), Eq. (9) becomes

(D + L) x^l = −U x^{l−1} + b,  l = 1, 2, ...   (10)

where

$$
D + L = \begin{bmatrix}
a_{1,1} & 0 & 0 & \cdots & 0\\
a_{2,1} & a_{2,2} & 0 & \cdots & 0\\
a_{3,1} & a_{3,2} & a_{3,3} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{N,1} & a_{N,2} & a_{N,3} & \cdots & a_{N,N}
\end{bmatrix},\qquad
U = \begin{bmatrix}
0 & a_{1,2} & a_{1,3} & \cdots & a_{1,N}\\
0 & 0 & a_{2,3} & \cdots & a_{2,N}\\
0 & 0 & 0 & \cdots & a_{3,N}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}.
$$
3.2. Gauss-Seidel method (2/5)

Some examples of the inverse of a lower triangular matrix:

$$
P_2 = \begin{bmatrix} a_{1,1} & 0\\ a_{2,1} & a_{2,2} \end{bmatrix},\qquad
P_2^{-1} = \begin{bmatrix} \frac{1}{a_{1,1}} & 0\\[2pt] -\frac{a_{2,1}}{a_{1,1}a_{2,2}} & \frac{1}{a_{2,2}} \end{bmatrix}.
$$

$$
P_3 = \begin{bmatrix} a_{1,1} & 0 & 0\\ a_{2,1} & a_{2,2} & 0\\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix},\qquad
P_3^{-1} = \begin{bmatrix}
\frac{1}{a_{1,1}} & 0 & 0\\[2pt]
-\frac{a_{2,1}}{a_{1,1}a_{2,2}} & \frac{1}{a_{2,2}} & 0\\[2pt]
\frac{a_{2,1}a_{3,2}-a_{2,2}a_{3,1}}{a_{1,1}a_{2,2}a_{3,3}} & -\frac{a_{3,2}}{a_{2,2}a_{3,3}} & \frac{1}{a_{3,3}}
\end{bmatrix}.
$$

For

$$
P_4 = \begin{bmatrix}
a_{1,1} & 0 & 0 & 0\\
a_{2,1} & a_{2,2} & 0 & 0\\
a_{3,1} & a_{3,2} & a_{3,3} & 0\\
a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4}
\end{bmatrix}
$$

the entries of P_4^{-1} are already unwieldy; e.g.

$$
\left(P_4^{-1}\right)_{4,1} = \frac{-a_{4,1}a_{2,2}a_{3,3} + a_{4,2}a_{2,1}a_{3,3} - a_{4,3}\left(a_{2,1}a_{3,2}-a_{2,2}a_{3,1}\right)}{a_{1,1}a_{2,2}a_{3,3}a_{4,4}}.
$$

It is so complicated! So, rather than forming (D + L)^{-1} explicitly, let us use (10).
3.2. Gauss-Seidel method (3/5)

Writing out both sides of (10):

$$
(D+L)\,x^l = \begin{bmatrix}
a_{1,1} & 0 & 0 & \cdots & 0\\
a_{2,1} & a_{2,2} & 0 & \cdots & 0\\
a_{3,1} & a_{3,2} & a_{3,3} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{N,1} & a_{N,2} & a_{N,3} & \cdots & a_{N,N}
\end{bmatrix}
\begin{bmatrix} x_1^l\\ x_2^l\\ x_3^l\\ \vdots\\ x_N^l \end{bmatrix}
= \begin{bmatrix}
a_{1,1}x_1^l\\
a_{2,1}x_1^l + a_{2,2}x_2^l\\
a_{3,1}x_1^l + a_{3,2}x_2^l + a_{3,3}x_3^l\\
\vdots\\
a_{N,1}x_1^l + a_{N,2}x_2^l + a_{N,3}x_3^l + \dots + a_{N,N}x_N^l
\end{bmatrix},
$$

$$
-U x^{l-1} + b = -\begin{bmatrix}
0 & a_{1,2} & a_{1,3} & \cdots & a_{1,N}\\
0 & 0 & a_{2,3} & \cdots & a_{2,N}\\
0 & 0 & 0 & \cdots & a_{3,N}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}
\begin{bmatrix} x_1^{l-1}\\ x_2^{l-1}\\ x_3^{l-1}\\ \vdots\\ x_N^{l-1} \end{bmatrix}
+ \begin{bmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_N \end{bmatrix}
= \begin{bmatrix}
-\left(a_{1,2}x_2^{l-1} + a_{1,3}x_3^{l-1} + \dots + a_{1,N}x_N^{l-1}\right) + b_1\\
-\left(a_{2,3}x_3^{l-1} + \dots + a_{2,N}x_N^{l-1}\right) + b_2\\
-\left(a_{3,4}x_4^{l-1} + \dots + a_{3,N}x_N^{l-1}\right) + b_3\\
\vdots\\
0 + b_N
\end{bmatrix}.
$$
3.2. Gauss-Seidel method (4/5)

Eq. (10) gives

$$
\begin{bmatrix}
a_{1,1}x_1^l\\
a_{2,1}x_1^l + a_{2,2}x_2^l\\
a_{3,1}x_1^l + a_{3,2}x_2^l + a_{3,3}x_3^l\\
\vdots\\
a_{N,1}x_1^l + a_{N,2}x_2^l + a_{N,3}x_3^l + \dots + a_{N,N}x_N^l
\end{bmatrix}
= \begin{bmatrix}
-\left(a_{1,2}x_2^{l-1} + a_{1,3}x_3^{l-1} + \dots + a_{1,N}x_N^{l-1}\right) + b_1\\
-\left(a_{2,3}x_3^{l-1} + \dots + a_{2,N}x_N^{l-1}\right) + b_2\\
-\left(a_{3,4}x_4^{l-1} + \dots + a_{3,N}x_N^{l-1}\right) + b_3\\
\vdots\\
0 + b_N
\end{bmatrix}.
$$

Or, solving row by row from the top, we obtain

$$
\begin{aligned}
x_1^l &= \frac{-\left(a_{1,2}x_2^{l-1} + \dots + a_{1,N}x_N^{l-1}\right) + b_1}{a_{1,1}},\\
x_2^l &= \frac{-\left(a_{2,3}x_3^{l-1} + \dots + a_{2,N}x_N^{l-1}\right) + b_2 - a_{2,1}x_1^l}{a_{2,2}},\\
x_3^l &= \frac{-\left(a_{3,4}x_4^{l-1} + \dots + a_{3,N}x_N^{l-1}\right) + b_3 - a_{3,1}x_1^l - a_{3,2}x_2^l}{a_{3,3}},\\
&\;\;\vdots\\
x_i^l &= \frac{-\sum_{j=i+1}^{N} a_{i,j}\,x_j^{l-1} + b_i - \sum_{k=1}^{i-1} a_{i,k}\,x_k^l}{a_{i,i}},\qquad i = 1,\dots,N,\; l = 1, 2, \dots
\end{aligned}
$$
3.2. Gauss-Seidel method (5/5)
Finally we obtain

$$
x_i^l = \frac{-\sum_{j=i+1}^{N} a_{i,j}\,x_j^{l-1} + b_i - \sum_{k=1}^{i-1} a_{i,k}\,x_k^l}{a_{i,i}},\qquad i = 1,\dots,N,\; l \geq 1.   (11)
$$

Starting at some initial guess x^0, the iteration (10) (or (11)) proceeds for l = 1, 2, ... until l = l_0 ∈ ℕ such that

∥x^{l_0} − x^{l_0−1}∥ ≤ ε ∥x^{l_0}∥

for some given tolerance ε.

A sufficient condition for convergence (strict diagonal dominance):

$$
|a_{i,i}| > \sum_{j=1,\, j\neq i}^{N} |a_{i,j}|,\qquad i = 1, \dots, N.
$$
3.3. Improvement of conv. using relaxation (1/2)
Fixed-point iteration methods:

A x = b  →  E x^l = F x^{l−1} + b,  l = 1, 2, ...   (12)

Relaxation: every x^l obtained from (12) is modified by

x^l ← λ x^l + (1 − λ) x^{l−1},   (13)

i.e., combining (12) and (13):

E x^l = λ(F x^{l−1} + b) + (1 − λ) E x^{l−1}
E x^l = (λF + (1 − λ)E) x^{l−1} + λ b,   (14)

where λ ∈ (0, 2) is a weighting factor.

▶ λ = 1 : (13) does not modify x^l.
▶ λ ∈ (0, 1) : (13) is called underrelaxation.
▶ λ ∈ (1, 2) : (13) is called overrelaxation, or successive overrelaxation (SOR).

The choice of a proper value for λ is highly problem-specific.
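As a sketch, relaxation adds a single line to the Jacobi iteration of Section 3.1 (our illustration; the helper name jacobi_relaxed is ours, and this is exactly what the lamb argument of the Section 4 solvers does):

import numpy as np

def jacobi_relaxed(A, b, lamb=1.0, eps=1.0e-10, maxit=5000):
    """Jacobi sweep, Eq. (8), followed by the relaxation step, Eq. (13)."""
    d = np.diag(A)                                # diagonal of A
    LU = A - np.diag(d)                           # L + U
    x = np.zeros_like(b, dtype=float)
    for l in range(1, maxit + 1):
        x_raw = (b - LU @ x) / d                  # plain Jacobi update
        x_new = lamb * x_raw + (1.0 - lamb) * x   # relaxation, Eq. (13)
        if np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x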
3.3. Improvement of conv. using relaxation (2/2)

Further remarks:
1. Gauss-Seidel method (for column-major order).
   Instead of (10), one can write the iteration (5) in the form

   (D + U) x^l = −L x^{l−1} + b,  l = 1, 2, ...   (15)

   Check if it works!
2. When should we use iterative methods for A x = b?
   ▶ A is large and sparse.
   ▶ Solving A x = b is an intermediate step of some nonlinear solver, for example in Computational Fluid Dynamics (CFD), ...
3. An excellent textbook on this topic: Yousef Saad, "Iterative Methods for Sparse Linear Systems", Society for Industrial and Applied Mathematics, 2003.
EXERCISE 1: Try Jacobi and Gauss-Seidel methods

import numpy as np
import scipy.linalg as la

N = 5
A = np.random.rand(N, N)

# Make A diagonally dominant:
for i in range(0, N):
    tmp = 0
    for j in range(0, N):
        tmp = tmp + abs(A[i, j])
    A[i, i] = tmp

# Pick a random vector as the exact solution:
x0 = np.random.rand(N)

# Generate the RHS of Ax = b using the exact solution:
b = A @ x0

# Decompose A = D + L + U
# ...
# Check the Jacobi method for Ax = b using Eq. (7)
# ...
# Check the Gauss-Seidel method for Ax = b using Eq. (10)
# ...
# Check the Gauss-Seidel solver for Ax = b using Eq. (15)
# ...
4. Codes and exercises (1/2)

Jacobi solver:

import numpy as np
from scipy.linalg import norm

def solver_Jacobi(a, b, lamb=1.0, tole=1.0e-10, maxit=5000, report=True):
    m = np.size(b)
    xl = np.zeros(m)
    for l in range(0, maxit):
        xlm1 = np.copy(xl)
        for i in range(0, m):
            # Sum the off-diagonal terms a[i,j]*x^{l-1}[j], j != i:
            tmp = 0.0
            for j in range(0, i):
                tmp = tmp + a[i, j] * xlm1[j]
            for j in range(i + 1, m):
                tmp = tmp + a[i, j] * xlm1[j]
            xl[i] = (-tmp + b[i]) / a[i, i]      # Jacobi update, Eq. (8)
        xl = lamb * xl + (1 - lamb) * xlm1       # relaxation, Eq. (13)
        norm1 = norm(xl)
        norm2 = norm(xlm1 - xl)
        reeps = tole * norm1
        if report:
            print("%5d : |xl - xlm1| = %11.4e , TOL*|xl| = %11.4e" % (l, norm2, reeps))
        if norm2 <= reeps:
            if report:
                print("Jacobi solver is terminated after %d iterations!\n" % (l + 1))
            break
    return xl
4. Codes and exercises (2/2)

Gauss-Seidel solver:

import numpy as np
from scipy.linalg import norm

def solver_GaussSeidel(a, b, lamb=1.0, tole=1.0E-10, maxit=5000, report=True):
    m = np.size(b)
    xl = np.zeros(m)
    for l in range(0, maxit):
        xlm1 = np.copy(xl)
        for i in range(0, m):
            # Upper part uses old values x^{l-1}, lower part the new values x^l:
            tmp1 = 0.0
            for j in range(i + 1, m):
                tmp1 = tmp1 + a[i, j] * xlm1[j]
            tmp2 = 0.0
            for k in range(0, i):
                tmp2 = tmp2 + a[i, k] * xl[k]
            xl[i] = (-tmp1 - tmp2 + b[i]) / a[i, i]   # Gauss-Seidel update, Eq. (11)
        xl = lamb * xl + (1 - lamb) * xlm1            # relaxation, Eq. (13)
        norm1 = norm(xl)
        norm2 = norm(xlm1 - xl)
        reeps = tole * norm1
        if report:
            print("%5d : |xl - xlm1| = %11.4e , TOL*|xl| = %11.4e" % (l, norm2, reeps))
        if norm2 <= reeps:
            if report:
                print("Gauss-Seidel solver is terminated after %d iterations!\n" % (l + 1))
            break
    return xl
EXERCISE 2: Jacobi and Gauss-Seidel solvers

import numpy as np
import scipy.linalg as la

N = 5
A = np.random.rand(N, N)

# Make sure that A is strictly diagonally dominant:
for i in range(0, N):
    tmp = 0
    for j in range(0, N):
        tmp = tmp + abs(A[i, j])
    A[i, i] = tmp + 1

# Pick a random vector as the exact solution:
x0 = np.random.rand(N)

# Generate the RHS of Ax = b using the exact solution:
b = A @ x0

eps = 1.0e-7

xa = solver_Jacobi(A, b, tole=eps)
print("%30s %12.5e \n" % ("|x - x0|/|x0| =", la.norm(xa - x0) / la.norm(x0)))

xb = solver_GaussSeidel(A, b, tole=eps)
print("%30s %12.5e \n" % ("|x - x0|/|x0| =", la.norm(xb - x0) / la.norm(x0)))
EXERCISE 3: Jacobi and Gauss-Seidel solvers

import numpy as np
import scipy.linalg as la

N = 5
A = np.random.rand(N, N)

# A is NOT strictly diagonally dominant (|a_ii| equals the off-diagonal row sum):
for i in range(0, N):
    tmp = 0
    for j in range(0, i):
        tmp = tmp + abs(A[i, j])
    for j in range(i + 1, N):
        tmp = tmp + abs(A[i, j])
    A[i, i] = tmp

# Pick a random vector as the exact solution:
x0 = np.random.rand(N)

# Generate the RHS of Ax = b using the exact solution:
b = A @ x0

eps = 1.0e-7

xa = solver_Jacobi(A, b, tole=eps)
print("%30s %12.5e \n" % ("|x - x0|/|x0| =", la.norm(xa - x0) / la.norm(x0)))

xb = solver_GaussSeidel(A, b, tole=eps)
print("%30s %12.5e \n" % ("|x - x0|/|x0| =", la.norm(xb - x0) / la.norm(x0)))
EXERCISE 4: Jacobi and Gauss-Seidel solvers with relaxation

import numpy as np
import scipy.linalg as la

N = 5
A = np.random.rand(N, N)

# A is NOT strictly diagonally dominant (as in Exercise 3):
# ...

# Pick a vector as the exact solution:
x0 = np.random.rand(N)

# Generate the RHS of Ax = b using the exact solution:
b = A @ x0

eps = 1.0e-7

xa = solver_Jacobi(A, b, lamb=0.5, tole=eps)
print("%30s %12.5e \n" % ("|x - x0|/|x0| =", la.norm(xa - x0) / la.norm(x0)))

xb = solver_GaussSeidel(A, b, lamb=0.5, tole=eps)
print("%30s %12.5e \n" % ("|x - x0|/|x0| =", la.norm(xb - x0) / la.norm(x0)))

Make a CONCLUSION about the methods with/without relaxation!
Questions?
