
Systems of Linear Equations
!"A* | B* #$
(v+1) (v) (v+1)
x = Br x + B x +c
Methods
• Direct Methods
• Cramer’s Rule, Inverse Method, L-U Factorization
• Gaussian Elimination
• Gauss-Jordan Reduction Method

• Indirect Methods
• Jacobi Iteration
• Gauss-Seidel Iteration
System of Linear Equations
A system of linear equations is a set of m equations in n unknowns. Mathematically, we have

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m
\end{aligned}
$$

where $a_{ij}$ are constant coefficients of the unknowns $x_j$.


System of Linear Equations
If $m = n$, we can write the equations in matrix form as

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{bmatrix}
$$

Or in a more compact form

Ax = b
Example
$$
\begin{aligned}
x + 2y + 3z &= 9 \\
2x - y + z &= 8 \\
3x - z &= 3
\end{aligned}
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} 1 & 2 & 3 \\ 2 & -1 & 1 \\ 3 & 0 & -1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} 9 \\ 8 \\ 3 \end{bmatrix}
$$

Ax = b
Direct Methods
• Cramer’s rule: $x_i = \dfrac{\det A_i}{\det A}$, where $A_i$ is $A$ with its $i$-th column replaced by $b$.

$$
x = \frac{1}{20}\begin{vmatrix} 9 & 2 & 3 \\ 8 & -1 & 1 \\ 3 & 0 & -1 \end{vmatrix} = 2
\qquad
y = \frac{1}{20}\begin{vmatrix} 1 & 9 & 3 \\ 2 & 8 & 1 \\ 3 & 3 & -1 \end{vmatrix} = -1
\qquad
z = \frac{1}{20}\begin{vmatrix} 1 & 2 & 9 \\ 2 & -1 & 8 \\ 3 & 0 & 3 \end{vmatrix} = 3
$$

• Inverse method: $x = A^{-1}b$

$$
x = \begin{bmatrix} 1 & 2 & 3 \\ 2 & -1 & 1 \\ 3 & 0 & -1 \end{bmatrix}^{-1}
\begin{bmatrix} 9 \\ 8 \\ 3 \end{bmatrix}
= \frac{1}{20}\begin{bmatrix} 1 & 2 & 5 \\ 5 & -10 & 5 \\ 3 & 6 & -5 \end{bmatrix}
\begin{bmatrix} 9 \\ 8 \\ 3 \end{bmatrix}
= \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}
$$

(Both methods are checked numerically in the sketch that follows the L-U example below.)

• L-U factorization: write $A = LU$, so that $Ax = b$ becomes $L(Ux) = b$.
1. Solve $Ly = b$ for $y$ using forward substitution.
2. Solve $Ux = y$ for $x$ using back substitution.

$$
A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & -1 & 1 \\ 3 & 0 & -1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 2 & -5 & 0 \\ 3 & -6 & -4 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = LU
$$

1. $\begin{bmatrix} 1 & 0 & 0 \\ 2 & -5 & 0 \\ 3 & -6 & -4 \end{bmatrix} y = \begin{bmatrix} 9 \\ 8 \\ 3 \end{bmatrix} \;\Rightarrow\; y = \begin{bmatrix} 9 \\ 2 \\ 3 \end{bmatrix}$

2. $\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} x = \begin{bmatrix} 9 \\ 2 \\ 3 \end{bmatrix} \;\Rightarrow\; x = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}$
Direct Method

Gaussian Elimination
Transform the augmented matrix $[A \mid b]$ to the matrix $[A^* \mid b^*]$ in row echelon form by applying a series of elementary row transformations.

$$
[A \mid b] \;\xrightarrow{\text{row operations}}\; [A^* \mid b^*]
$$

Solving the system $[A^* \mid b^*]$ by back substitution also gives the solution of the original system $[A \mid b]$.

$$
[A^* \mid b^*] = \begin{bmatrix} 1 & 2 & 3 & \big| & 9 \\ 0 & 1 & 1 & \big| & 2 \\ 0 & 0 & 1 & \big| & 3 \end{bmatrix}
\qquad
\begin{aligned}
z &= 3 \\
y + z &= 2 \;\Rightarrow\; y = 2 - 3 = -1 \\
x + 2y + 3z &= 9 \;\Rightarrow\; x = 9 - 2(-1) - 3(3) = 2
\end{aligned}
$$
Direct Method

Gauss-Jordan Method
Transform the augmented matrix $[A \mid b]$ to the matrix $[A^* \mid b^*]$ in reduced row echelon form by applying a series of elementary row transformations.

$$
[A \mid b] \;\xrightarrow{\text{row operations}}\; [A^* \mid b^*]
$$

Example:

$$
[A^* \mid b^*] = \begin{bmatrix} 1 & 0 & 0 & \big| & 2 \\ 0 & 1 & 0 & \big| & -1 \\ 0 & 0 & 1 & \big| & 3 \end{bmatrix}
\qquad
\begin{aligned} x &= 2 \\ y &= -1 \\ z &= 3 \end{aligned}
$$
Example
Solve using
(1) Gaussian Elimination
(2) Gauss-Jordan Reduction Method

" ( ,
2 −2 −1 1 %* w * ( 0 ,
$ '* * ** **
$ −2 3 0 4 ' x 2
) -=) -
$ 0 −1 −1 −1 '* y * * −2 *
$ 5 2 1 3 '&*+ z *. *+ 1 *.
#
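For checking the hand calculation, a one-line NumPy solve of the same 4×4 system (this uses a library LU-based solver rather than the elimination steps asked for above):

```python
import numpy as np

A = np.array([[ 2., -2., -1.,  1.],
              [-2.,  3.,  0.,  4.],
              [ 0., -1., -1., -1.],
              [ 5.,  2.,  1.,  3.]])
b = np.array([0., 2., -2., 1.])

print(np.linalg.solve(A, b))   # numerical values of (w, x, y, z)
```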
Indirect (Iterative) Methods
• Iterative methods start from an approximation to the true solution
and, if successful, obtain better approximations from a
computational cycle repeated as often as may be necessary for
achieving a required accuracy.
• In certain cases these methods are preferred over the direct methods: when the coefficient matrix is sparse (has many zeros) or diagonally dominant, they may be faster.

$$
\begin{aligned}
6x_1 - 2x_2 + x_3 &= 11 \\
-2x_1 + 7x_2 + 2x_3 &= 5 \\
x_1 + 2x_2 - 5x_3 &= -1
\end{aligned}
\qquad
\begin{bmatrix} 6 & -2 & 1 \\ -2 & 7 & 2 \\ 1 & 2 & -5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 11 \\ 5 \\ -1 \end{bmatrix}
$$

Ø Jacobi Iteration
Ø Gauss-Seidel Iteration
Indirect Method

Jacobi Iteration
To approximate the solution vector $x$ of a linear system $Ax = b$ with $\det(A) \neq 0$, one constructs a sequence $x^{(v)}$, $v = 1, 2, 3, \ldots$, for which $\lim_{v \to \infty} x^{(v)} = x$:

$$
x_i^{(v+1)} = \frac{b_i}{a_{ii}} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{a_{ij}}{a_{ii}}\, x_j^{(v)},
\qquad
c_i = \frac{b_i}{a_{ii}},
\qquad
b_{ij} = \begin{cases} -\,a_{ij}/a_{ii} & \text{for } i \neq j \\ 0 & \text{for } i = j \end{cases}
$$

for $i = 1, \ldots, n$. In compact form:

$$
x^{(v+1)} = g\!\left(x^{(v)}\right) = B\,x^{(v)} + c
$$

where $B = [b_{ij}]$ is the Jacobi iteration matrix.


Indirect Method

Jacobi Iteration
Solve using Jacobi iteration

" 6 −2 1 % " x1 % " 11 %


$ '$ ' $ '
$ −2 7 2 ' $ x2 ' = $ 5 '
$ 1 2 −5 ' $ x ' $ −1 '
# & $# 3 '& # &

Solution
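A minimal Jacobi sketch for this system (NumPy assumed): it builds the iteration matrix $B$ and vector $c$ defined on the previous slide and iterates from the zero vector until successive iterates agree to a tolerance; the tolerance and starting guess are illustrative choices.

```python
import numpy as np

A = np.array([[6., -2., 1.], [-2., 7., 2.], [1., 2., -5.]])
b = np.array([11., 5., -1.])

# Jacobi iteration matrix B = -D^{-1}(A - D) and constant vector c = D^{-1} b,
# where D is the diagonal part of A.
D = np.diag(np.diag(A))
B = -np.linalg.inv(D) @ (A - D)
c = np.linalg.inv(D) @ b

x = np.zeros(3)                    # initial guess x^(0)
for v in range(100):
    x_new = B @ x + c              # x^(v+1) = B x^(v) + c
    if np.linalg.norm(x_new - x, np.inf) < 1e-8:
        break
    x = x_new

print(x_new)                       # approaches the exact solution [2, 1, 1]
```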
Indirect Method

Gauss-Seidel Iteration
Similar to Jacobi iteration, except that the already computed $(v{+}1)$st approximations of the leading components $x_1, x_2, \ldots, x_{i-1}$ are used when calculating the approximation of $x_i$. The iteration rule becomes

$$
x^{(v+1)} = B_r\,x^{(v)} + B\,x^{(v+1)} + c
$$

where $c_i = \dfrac{b_i}{a_{ii}}$, $\quad b_{ij} = \begin{cases} -\,a_{ij}/a_{ii} & \text{for } i \neq j \\ 0 & \text{for } i = j \end{cases}$, and

$$
B = \begin{bmatrix}
0 & 0 & 0 & \cdots & 0 \\
b_{21} & 0 & 0 & & \vdots \\
\vdots & & \ddots & & \vdots \\
b_{n1} & b_{n2} & \cdots & b_{n,n-1} & 0
\end{bmatrix},
\qquad
B_r = \begin{bmatrix}
0 & b_{12} & b_{13} & \cdots & b_{1n} \\
0 & 0 & b_{23} & & \vdots \\
\vdots & & & \ddots & b_{n-1,n} \\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}
$$

Componentwise,

$$
x_i^{(v+1)} = c_i + \sum_{k=i+1}^{n} b_{ik}\, x_k^{(v)} + \sum_{k=1}^{i-1} b_{ik}\, x_k^{(v+1)}
$$
Indirect Method

Gauss-Seidel Iteration
Solve using Gauss-Seidel iteration

$$
\begin{bmatrix} 6 & -2 & 1 \\ -2 & 7 & 2 \\ 1 & 2 & -5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 11 \\ 5 \\ -1 \end{bmatrix}
$$

Solution
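A minimal Gauss-Seidel sketch (NumPy assumed): unlike Jacobi, each updated component $x_i^{(v+1)}$ is used immediately within the same sweep; the tolerance and starting guess are again illustrative.

```python
import numpy as np

A = np.array([[6., -2., 1.], [-2., 7., 2.], [1., 2., -5.]])
b = np.array([11., 5., -1.])

n = len(b)
x = np.zeros(n)                          # initial guess x^(0)
for v in range(100):
    x_old = x.copy()
    for i in range(n):
        # Use already updated components x[:i] and previous-sweep components x_old[i+1:].
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
        x[i] = (b[i] - s) / A[i, i]
    if np.linalg.norm(x - x_old, np.inf) < 1e-8:
        break

print(x)                                 # converges to [2, 1, 1]
```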
General Schema
If we rewrite the linear system $Ax = b$ as

$$
Nx = Px + b
$$

such that $A = N - P$, the iteration rule becomes

$$
N x^{(v+1)} = P x^{(v)} + b
$$

Example:

$$
\begin{bmatrix} 6 & -2 & 1 \\ -2 & 7 & 2 \\ 1 & 2 & -5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 11 \\ 5 \\ -1 \end{bmatrix}
$$

Jacobi:
$$
N = \begin{bmatrix} 6 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & -5 \end{bmatrix},
\qquad
P = \begin{bmatrix} 0 & 2 & -1 \\ 2 & 0 & -2 \\ -1 & -2 & 0 \end{bmatrix}
$$

Gauss-Seidel:
$$
N = \begin{bmatrix} 6 & 0 & 0 \\ -2 & 7 & 0 \\ 1 & 2 & -5 \end{bmatrix},
\qquad
P = \begin{bmatrix} 0 & 2 & -1 \\ 0 & 0 & -2 \\ 0 & 0 & 0 \end{bmatrix}
$$
Convergence
Let the error of the $k$th iteration be $e^{(k)} = x - x^{(k)}$. Subtracting the iteration rule $N x^{(k+1)} = P x^{(k)} + b$ from $Nx = Px + b$ yields $N e^{(k+1)} = P e^{(k)}$, or

$$
e^{(k+1)} = N^{-1} P\, e^{(k)} = M\, e^{(k)}
$$

Using the vector and matrix norms,

$$
\left\| e^{(k+1)} \right\| \le \left\| M \right\| \left\| e^{(k)} \right\|
$$

By induction on $k$, this implies $\left\| e^{(k)} \right\| \le \left\| M \right\|^{k} \left\| e^{(0)} \right\|$; thus the errors converge to zero (for any initial guess) if

$$
\| M \| < 1
$$
Convergence
For Jacobi iteration, we have

$$
\| M \| = \max_{1 \le i \le n} \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{|a_{ij}|}{|a_{ii}|}
$$

so the condition $\| M \| < 1$ is equivalent to

$$
\sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| < |a_{ii}|, \qquad i = 1, \ldots, n,
$$

i.e., to $A$ being a diagonally dominant matrix.

For Gauss-Seidel iteration, the matrix $M$ cannot be evaluated as easily, but the method has been shown to converge at least as fast as the Jacobi method for any initial guess.
Example
" 6 −2 1 % " x1 % " 11 %
$ '$ ' $ '
$ −2 7 2 ' $ x 2 ' = $ 5 '
$ 1 2 −5 ' x$ ' $ '
# & $# 3 '& # −1 &

Jacobi Gauss-Seidel
" 6 0 0 % " 0 2 −1 % " 6 0 0 % " 0 2 −1 %
$ ' $ ' $ ' $ '
N =$ 0 7 0 ' P = $ 2 0 −2 ' N = $ −2 7 0 ' P = $ 0 0 −2 '
$ 0 0 −5 ' $ −1 −2 0 ' $ 1 2 −5 ' $ 0 0 0 '
# & # & # & # &

" 0 0.333 −0.1667 % " 0 0.333 −0.167 %


$ ' $ '
M = N −1P = $ 0.286 0 −0.286 ' M = N −1P = $ 0 0.095 −0.333 '
$ 0.2 0.4 0 ' $ 0 0.105 −0.167 '
# & # &

M = 0.6 < 1 M = 0.5 < 1


