
MAT 202 E

NUMERICAL METHODS

Linear Algebraic Equations

“These notes are only to be used in class presentations”

Textbook: Numerical Methods for Engineers, S.C. Chapra, R.P. Canale, 7th edition, 2015
Linear Algebraic Equations

• An equation of the form ax + by + c = 0 is called a linear equation in
  the variables x and y.
• ax + by + cz + d = 0 is a linear equation in three variables, x, y,
  and z.
• Thus, a linear equation in n variables is
  a1x1 + a2x2 + … + anxn = b

• If more than one linear equation must hold at the same time, a system
  of linear equations must be solved simultaneously.
• A linear algebraic system is a system of equations in which all the
  functions are linear.

In matrix form:

[A]{X} = {B}

where A is an n x n matrix, and X and B are n x 1 vectors.
Requirements for a Solution of a System of Equations

Ax = b  (matrix equation)

A is m x n and describes a system of m equations in n unknowns.
We need to solve for X:

A⁻¹AX = A⁻¹B
X = A⁻¹B

How do we get A-1?
– It is non-trivial
– Not very efficient

• Usually use other methods to solve for X:
  – Elimination methods
  – Iterative methods

Solutions for second- and third-order systems:
  – Graphical methods
  – Determinants, Cramer's rule
  – Elimination of unknowns

• Nowadays, easy access to computers makes the solution of very large
  sets of linear algebraic equations possible.
Graphical solutions

– Plot the functions; the solution is the intersection point of the
  functions.
– For second-order linear systems, each equation is a line.
– For third-order linear systems, each equation is a plane.

Example: The graphical method is used to solve a set of two simultaneous
linear algebraic equations.

3x1 + 2x2 = 18
-x1 + 2x2 = 2

x2 = -(3/2)x1 + 9
x2 = (1/2)x1 + 1

The solution is the intersection of the two lines at x1 = 4 and x2 = 3.
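The intersection can be double-checked numerically. A minimal pure-Python sketch (the `intersect` helper is an illustrative name, not part of the notes):

```python
# Each equation is rewritten in slope-intercept form x2 = m*x1 + c, as on
# the slides; the intersection of the two lines is then solved directly.

def intersect(m1, c1, m2, c2):
    """Intersection of x2 = m1*x1 + c1 with x2 = m2*x1 + c2."""
    x1 = (c2 - c1) / (m1 - m2)
    return x1, m1 * x1 + c1

# the two lines from the example: x2 = -(3/2)x1 + 9 and x2 = (1/2)x1 + 1
x1, x2 = intersect(-3/2, 9, 1/2, 1)   # x1 = 4.0, x2 = 3.0
```

This fails exactly when m1 = m2, i.e. for the parallel or coincident (singular) cases discussed next.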
The graphical method can also be used to visualize properties of the
solutions.

Singular systems:
• a) The two equations represent parallel lines. There is no solution.
• b) The two lines are coincident. There is an infinite number of
  solutions.

Ill-conditioned system:
• c) It is difficult to determine the exact point at which the lines
  intersect.
Determinants, Cramer's Rule

Given a second-order matrix A, the determinant D is defined as follows:

    | a11  a12 |
D = | a21  a22 | = a11 a22 - a12 a21

Given a third-order matrix A, the determinant D is defined as follows:

    | a11  a12  a13 |
D = | a21  a22  a23 |
    | a31  a32  a33 |

        | a22  a23 |       | a21  a23 |       | a21  a22 |
D = a11 | a32  a33 | - a12 | a31  a33 | + a13 | a31  a32 |

For more than three equations, Cramer's rule becomes impractical: the
determinants are time consuming to evaluate by hand or by computer.
Using determinants to solve a linear system

• Cramer's rule
  – To find xi, replace column i of the coefficient matrix [A] with the
    {B} vector and divide the determinant of the result by D.

Example: Cramer's Rule

0.3x1 + 0.52x2 +     x3 = -0.01          | 0.3  0.52  1   |
0.5x1 +     x2 + 1.9 x3 =  0.67     D =  | 0.5  1     1.9 |
0.1x1 + 0.3 x2 + 0.5 x3 = -0.44          | 0.1  0.3   0.5 |

The minors are

     | 1    1.9 |
A1 = | 0.3  0.5 | = 1(0.5) - 1.9(0.3) = -0.07

     | 0.5  1.9 |
A2 = | 0.1  0.5 | = 0.5(0.5) - 1.9(0.1) = 0.06

     | 0.5  1   |
A3 = | 0.1  0.3 | = 0.5(0.3) - 1(0.1) = 0.05
The determinant is expanded along the first row:

D = a11 A1 - a12 A2 + a13 A3

with the minors A1 = -0.07, A2 = 0.06, and A3 = 0.05 computed above.
Example: Cramer's Rule (cont.)

D = 0.3(-0.07) - 0.52(0.06) + 1(0.05) = -0.0022

To find x1, replace column 1 of [A] with {B}:

     | -0.01  0.52  1   |
     |  0.67  1     1.9 |
     | -0.44  0.3   0.5 |     0.03278
x1 = --------------------- = --------- = -14.9
           -0.0022           -0.0022

     | 0.3  -0.01  1   |
     | 0.5   0.67  1.9 |
     | 0.1  -0.44  0.5 |      0.0649
x2 = --------------------- = --------- = -29.5
           -0.0022           -0.0022

     | 0.3  0.52  -0.01 |
     | 0.5  1      0.67 |
     | 0.1  0.3   -0.44 |     -0.04356
x3 = --------------------- = --------- = 19.8
           -0.0022           -0.0022
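The whole procedure can be sketched in a few lines of Python. `det3` and `cramer3` are illustrative names (not from the textbook); the determinants are evaluated by cofactor expansion as in the slides:

```python
# Cramer's rule for a 3x3 system: compute D from the coefficient matrix,
# then each x_i from the determinant of A with column i replaced by b.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b_, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b_ * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    D = det3(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]          # copy A
        for r in range(3):
            Ai[r][col] = b[r]               # replace column `col` with b
        xs.append(det3(Ai) / D)
    return xs

A = [[0.3, 0.52, 1.0], [0.5, 1.0, 1.9], [0.1, 0.3, 0.5]]
b = [-0.01, 0.67, -0.44]
x = cramer3(A, b)   # approximately [-14.9, -29.5, 19.8]
```

For larger n this is exactly the impracticality the slides warn about: the cost of evaluating the determinants grows explosively with n.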
Elimination Methods

• The basic strategy is to successively solve one of the equations of the
  set for one of the unknowns and to eliminate that variable from the
  remaining equations by substitution.

• The elimination of unknowns can be extended to systems with more than
  two or three equations.

Gauss Elimination

• This method can be applied to large sets of equations.

• The method has two steps:
  1. forward elimination of unknowns,
  2. back substitution.
Gauss Elimination

1. Forward elimination reduces the set of equations to an upper
   triangular system (an upper triangular matrix).

• Back substitution: the result for xn can be back-substituted into the
  (n-1)th equation to solve for xn-1.

• The process is repeated to evaluate the remaining x's.
Gauss Elimination

• The system will have been transformed to an upper triangular system.

Naive Gauss Elimination

• Formulas to evaluate xn and the remaining x's:

  xn = bn' / ann'

  xi = (bi' - Σj=i+1..n aij' xj) / aii'      i = n-1, ..., 1

  where the primes indicate coefficients modified during forward
  elimination.
Example: Gauss Elimination

3x1 - 0.1x2 - 0.2x3 = 7.85     (1)
0.1x1 + 7x2 - 0.3x3 = -19.3    (2)
0.3x1 - 0.2x2 + 10x3 = 71.4    (3)

Carry six significant figures during the computation.

Forward elimination:
Multiply equation (1) by 0.1/3 and subtract the result from (2) to give

7.00333x2 - 0.293333x3 = -19.5617     (5)
Example: Gauss Elimination

Multiply equation (1) by 0.3/3 and subtract the result from (3) to
eliminate x1:

-0.190000x2 + 10.0200x3 = 70.6150     (6)

After the first elimination step the system is

3x1 - 0.1x2 - 0.2x3 = 7.85                (4)
      7.00333x2 - 0.293333x3 = -19.5617   (5)
     -0.190000x2 + 10.0200x3 = 70.6150    (6)

• To eliminate the -0.190000x2 term from (6), multiply (5) by
  -0.190000 / 7.00333 and subtract the result from (6).
Example: Gauss Elimination

The result is the system in upper triangular form:

3x1 - 0.1x2 - 0.2x3 = 7.85                (7)
      7.00333x2 - 0.293333x3 = -19.5617   (8)
                  10.0120x3 = 70.0843     (9)

Back substitution:
Obtain the value of x3 from (9):
x3 = 70.0843 / 10.0120
x3 = 7.00003     (10)
Example: Gauss Elimination

Substitute the value of x3 into (8):

7.00333x2 - 0.293333(7.00003) = -19.5617
x2 = (-19.5617 + 0.293333(7.00003)) / 7.00333
x2 = -2.50000     (11)

Substitute (10) and (11) into (7):

3x1 - 0.1(-2.5) - 0.2(7.00003) = 7.85
x1 = (7.85 - 0.1(2.5) + 0.2(7.00003)) / 3
x1 = 3.00000     (12)
Example: Gauss Elimination

Although there is a slight round-off error, the results are very close to
the exact solution of x1 = 3, x2 = -2.5, and x3 = 7.

Verify the results:

3(3) - 0.1(-2.5) - 0.2(7.00003) = 7.84999 ≈ 7.85
0.1(3) + 7(-2.5) - 0.3(7.00003) = -19.3000 = -19.3
0.3(3) - 0.2(-2.5) + 10(7.00003) = 71.4003 ≈ 71.4

Pitfalls of Gauss Elimination Method

• Division by zero: this can appear during the elimination and
  back-substitution phases.

• Round-off error: a consequence of the finite number of significant
  figures; it becomes important when large numbers of equations are to be
  solved.

Pitfalls of Elimination Methods

• Ill-conditioned systems: those where small changes in coefficients
  result in large changes in the solution (a wide range of answers can
  approximately satisfy the system).

• Use of more significant figures is the simplest remedy for
  ill-conditioned systems, but it requires more memory and processing
  time on a computer.
Techniques for Improving Solutions

• The first equation of the system is called the pivot equation and the
  first coefficient (a11) is called the pivot coefficient or element.

• When a pivot element is zero, division by zero occurs; when it is close
  to zero, round-off errors can severely contaminate the solution.

Pivoting

1. Before each row is normalized, it is recommended to apply partial
   pivoting.

2. Partial pivoting:
   • Determine the largest available coefficient (in magnitude) in the
     column below the pivot element.
   • Switch the rows so that the largest element is the pivot.
Partial Pivoting

• Use Gauss elimination to solve

0.0003x1 + 3.0000x2 = 2.0001
1.0000x1 + 1.0000x2 = 1.0000

• Pivoting
  1. The first pivot element a11 = 0.0003 is very close to zero.
  2. The equations are therefore solved in reverse order:

1.0000x1 + 1.0000x2 = 1.0000
0.0003x1 + 3.0000x2 = 2.0001
Partial Pivoting

3. Elimination and substitution yield x2 = 2/3.

4. For different numbers of significant figures, x1 can be computed from
   the first (pivot) equation as

   x1 = (1 - (2/3)) / 1

This case is much less sensitive to the number of significant figures in
the computation.
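Partial pivoting adds one row-swap step to the elimination loop. A sketch under the same assumptions as the earlier elimination code (`gauss_pivot` is an illustrative name):

```python
# Gauss elimination with partial pivoting: before eliminating in column
# k, swap in the row with the largest magnitude coefficient in that
# column at or below the pivot position.

def gauss_pivot(A, b):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: find the largest |A[i][k]| for i >= k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The example system: the tiny pivot 0.0003 would amplify round-off
# without pivoting; with pivoting the rows are swapped first.
x = gauss_pivot([[0.0003, 3.0], [1.0, 1.0]], [2.0001, 1.0])
# x close to [1/3, 2/3]
```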
Scaling

• Scaling is used to minimize round-off errors for those cases where some
  of the equations in a system have much larger coefficients than others.

• Solve the following set of equations using Gauss elimination with a
  scaling transformation:

2x1 + 100,000x2 = 100,000
 x1 +        x2 = 2

• Scaling transforms the original equations to

0.00002x1 + x2 = 1
        x1 + x2 = 2

• Pivot the rows to put the greatest value on the diagonal:

        x1 + x2 = 2
0.00002x1 + x2 = 1

Solution: x1 = x2 = 1
Gauss-Jordan Method

• It is a variation of Gauss elimination. The major differences are:

  – When an unknown is eliminated, it is eliminated from all other
    equations rather than just the subsequent ones.

  – All rows are normalized by dividing them by their pivot elements.

  – The elimination step results in an identity matrix rather than a
    triangular matrix.

  – Back substitution is not necessary.
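The differences above can be sketched in pure Python (`gauss_jordan` is an illustrative name; no pivoting is included, as in the basic description):

```python
# Gauss-Jordan: normalize each pivot row, then eliminate the unknown
# from ALL other rows, leaving the identity matrix on the left and the
# solution on the right -- no back substitution needed.

def gauss_jordan(A, b):
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]  # augmented matrix [A | b]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]        # normalize the pivot row
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [M[i][j] - f * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]        # last column = solution

x = gauss_jordan([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]],
                 [7.85, -19.3, 71.4])   # close to [3, -2.5, 7]
```

Augmenting with an identity matrix instead of a single {b} column turns the same loop into the matrix-inversion procedure of the following slides.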


Gauss-Jordan Method with Matrix Inversion

Obtain an identity matrix on the left side of the augmented matrix.

Gauss-Jordan Method

• This method involves approximately 50 percent more operations than
  Gauss elimination.

• However, it is still used in engineering as well as in some numerical
  algorithms.
Example of Gauss-Jordan Method

• Determine the matrix inverse of the matrix [A] by means of
  Gauss-Jordan, and then obtain the solution of the system.

• Augment the coefficient matrix with an identity matrix.

• Using a11 as the pivot element, normalize row 1 and use it to eliminate
  x1 from the second and third rows.
Example of Gauss-Jordan Method

• Next, a22 is used as the pivot element and x2 is eliminated from the
  first and third rows.

• Finally, a33 is used as the pivot element and x3 is eliminated from the
  first and second rows.

• The inverse [A]-1 can be multiplied by the vector {b} to determine the
  solution.
Example of Gauss-Jordan Method

• X1 = 7.85(0.332489) - 19.3(0.00492297) + 71.4(0.00679813)
• X1 = 3.00041181

• X2 = 7.85(-0.0051644) - 19.3(0.142293) + 71.4(0.00418346)
• X2 = -2.48809640

• X3 = 7.85(-0.0100779) - 19.3(0.00269816) + 71.4(0.0998801)
• X3 = 7.00025314

• Solution:
  X1 = 3.00041181
  X2 = -2.48809640
  X3 = 7.00025314
Gauss-Jordan Method

• The Gauss-Jordan method makes it possible to obtain the solution of a
  system with the same matrix [A] and different right-hand-side vectors.

Summary

• Gauss elimination is the most fundamental method for solving
  simultaneous linear algebraic equations.

• Gauss-Jordan is a variation of Gauss elimination.

• Both methods are used in engineering.
Use the Gauss-Jordan method to solve:

4x1 + 2x2 + 5x3 = 6
2x1 -  x2 +  x3 = 5
3x1 + 2x2 +  x3 = 4

Check your answers by substituting them into the original equations.

The solution is given in class.

Answer: x1 = 2, x2 = -1, x3 = 0
LU Decomposition and Matrix Inverse

Gaussian elimination works well for solving linear systems of the form

AX = B

What if you have to solve the linear system several times, with changing
B vectors?
• Then, Gaussian elimination becomes inefficient.
• Instead, use a method that separates the transformations on A from the
  transformations on B.

LU Decomposition

If Gaussian elimination can be performed on the linear system without row
interchanges, then A can be factorized into the product of a lower
triangular matrix L and an upper triangular matrix U.

Substitute the factorization A = LU into the linear system.

We have transformed the problem into two steps:
– Factorize A into L and U
– Solve the two subproblems
Solving linear equations with LU decomposition:

1. Construct L and U

• U is the coefficient matrix obtained at the end of the forward
  elimination process of Gauss elimination.

• To eliminate xk from the (k+1)th through nth equations (rows):

  Ei ← Ei - (aik(k) / akk(k)) Ek      k = 1, ..., n-1;  i = k+1, ..., n

• Keep track of the multiplier factors and populate L with the factors
  used during forward elimination:

  lii = 1
  lik = aik(k) / akk(k)
2. Solve for D using forward substitution:

   d1 = b1 / l11

   di = (1/lii) (bi - Σj=1..i-1 lij dj)      i = 2, ..., n

3. Solve for X using back substitution:

   xn = dn / unn

   xi = (1/uii) (di - Σj=i+1..n uij xj)      i = n-1, ..., 1

This factorization is called Doolittle's method.
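The three steps above can be sketched in pure Python (illustrative names `lu_doolittle` and `lu_solve`; no pivoting, matching the "no row interchanges" assumption):

```python
# Doolittle's method: U is the result of forward elimination, L stores
# the multiplier factors with 1's on its diagonal; then L*d = b is
# solved by forward substitution and U*x = d by back substitution.

def lu_doolittle(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f                      # keep the multiplier factor
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = [0.0] * n
    for i in range(n):                        # forward substitution
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x

L, U = lu_doolittle([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
x = lu_solve(L, U, [7.85, -19.3, 71.4])   # close to [3, -2.5, 7]
```

Once `L` and `U` are computed, `lu_solve` can be called again with any new right-hand side at much lower cost; this is the point of the factorization.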


Overview of LU Factorization or Decomposition

• LU decomposition requires pivoting to avoid division by zero.

• Suppose a set of three simultaneous equations that can be represented
  as

  [A]{X} - {B} = 0
Overview of LU Factorization or Decomposition

• Represent the system of equations as an upper triangular system. This
  representation is similar to the first step of Gauss elimination.

• This representation can be written in matrix notation and rearranged to
  give

  [U]{X} - {Y} = 0
Overview of LU Factorization or Decomposition

• Assume that there is a lower triangular matrix [L] with 1's on the
  diagonal, for which

  [L]{Y} - {B} = 0

• If the system is [A]{X} - {B} = 0 and [U]{X} - {Y} is multiplied by
  [L], the result is

  [L]{[U]{X} - {Y}} = [A]{X} - {B}

• Notice that [L][U] = [A] and [L]{Y} = {B}.
LU Factorization or Decomposition

1. LU factorization or decomposition step:
   [A] is factored or "decomposed" into lower [L] and upper [U]
   triangular matrices.

2. LU substitution step:
   [L] and [U] are used to determine a solution {X} for the
   right-hand side {B}.

• [L]{Y} = {B} is used to generate an intermediate vector {Y} by forward
  substitution.

• The result is substituted into [U]{X} - {Y} = 0, which is solved by
  back substitution for {X}.
LU Factorization or Decomposition

• Gauss elimination can be used to decompose [A] into [L] and [U].

• [U] is a direct product of the forward elimination.

• The original matrix [A] is reduced to the upper triangular format of
  [U].
LU Factorization or Decomposition

• The first step in Gauss elimination is to multiply row 1 of [A] by the
  factor f21 = a21 / a11 and subtract the result from the second row to
  eliminate a21.

• Second step: row 1 is multiplied by the factor f31 = a31 / a11 and the
  result is subtracted from the third row to eliminate a31.

• The final step is to multiply the modified second row by
  f32 = a'32 / a'22 and subtract the result from the third row to
  eliminate a'32.
LU Factorization or Decomposition

• We could save the f's and manipulate {B} later.

• These values can be stored in the respective aij positions that take
  zero value after the elimination step.

• After elimination, the [A] matrix can therefore be written with the
  factors stored below the diagonal.

• This matrix represents an efficient storage of the LU decomposition of
  [A]:

  [A] → [L][U]
Example: LU Factorization or Decomposition

• Derive an LU decomposition based on the Gauss elimination of

3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4

• After forward elimination, the following upper triangular matrix is
  obtained:

        | 3    -0.1       -0.2      |
  [U] = | 0     7.00333   -0.293333 |
        | 0     0         10.0120   |
Example: LU Factorization or Decomposition

• The factors employed to obtain the upper triangular matrix can be
  assembled into a lower triangular matrix.

• The elements a21 and a31 were eliminated by using the factors

  f21 = 0.1 / 3 = 0.0333333
  f31 = 0.3 / 3 = 0.1000000

• The element a'32 was eliminated by using the factor

  f32 = -0.19 / 7.00333 = -0.0271300

• The lower triangular matrix is

        | 1           0           0 |
  [L] = | 0.0333333   1           0 |
        | 0.1000000  -0.0271300   1 |
Example: LU Factorization or Decomposition

• The LU factorization or decomposition is [A] = [L][U].

• The result can be verified by performing the multiplication [L][U],
  where the minor discrepancies from [A] are due to round-off.
Example: The Substitution Steps

• Implement the forward-substitution phase by applying [L]{Y} = {B},

• or, multiplying out the left-hand side:

  y1 = 7.85
  0.0333333y1 + y2 = -19.3
  0.1y1 - 0.0271300y2 + y3 = 71.4
Example: The Substitution Steps

• The first equation gives y1 = 7.85.

• This value is substituted into the second equation to solve for y2:

  y2 = -19.3 - 0.0333333y1 = -19.3 - 0.0333333(7.85)
  y2 = -19.5617

• Both y1 and y2 are substituted into the third equation:

  y3 = 71.4 - 0.1(7.85) + 0.0271300(-19.5617)
  y3 = 70.0843
Example: The Substitution Steps

• Apply the back-substitution phase of conventional Gauss elimination to
  [U]{X} = {Y} to obtain

  x1 = 3     x2 = -2.5     x3 = 7
Example:
Solve the linear system by the LU decomposition method:

 x1 +  x2       + 3x4 = 4
2x1 +  x2 -  x3 +  x4 = 1
3x1 -  x2 -  x3 + 2x4 = -3
-x1 + 2x2 + 3x3 -  x4 = 4

The solution is given in class.

Answer: x1 = -1, x2 = 2, x3 = 0, x4 = 1
Factorization with MATLAB

Solve AX = b with LU factorization:

• Factor A into L and U.
• Solve LY = b for Y (use forward substitution).
• Solve UX = Y for X (use backward substitution).
There are other decomposition methods:

• Crout's method: U has 1's on the diagonal.
• Cholesky's method: lii = uii

det(A) = det(L) det(U)
inv(A) = inv(U) inv(L)
Matrix Inverse

If A is nonsingular,

AA⁻¹ = I

LU decomposition provides an efficient means to compute the matrix
inverse from AX = B.

To find A⁻¹:
• Set b to each of the unit vectors in the identity matrix.

1 0  0 1 0  0 
0 1  0 0  1 0 
I  b (1)    b ( 2)     b ( n )   
      
       
0 0  1 nxn 0   
0 1

• Solve x(j) with LU decomposition for each b(j)


The corresponding x (j) solution represents the jth column in
the inverse matrix.
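This column-by-column procedure can be sketched in pure Python. The LU helpers below repeat the Doolittle scheme so the example stands alone; all function names are illustrative:

```python
# Matrix inverse via LU: factor A once, then solve L*U*x = e_j for each
# unit vector e_j; the solutions x(j) are the columns of A^-1.

def lu_doolittle(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = [0.0] * n
    for i in range(n):                        # forward substitution
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x

def inverse(A):
    n = len(A)
    L, U = lu_doolittle(A)                    # factor A only once
    cols = []
    for j in range(n):
        e = [1.0 if i == j else 0.0 for i in range(n)]  # unit vector b(j)
        cols.append(lu_solve(L, U, e))
    # the solutions are the COLUMNS of A^-1, so transpose
    return [[cols[j][i] for j in range(n)] for i in range(n)]

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
Ainv = inverse(A)   # first column close to (0.33249, -0.00518, -0.01008)
```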
The Matrix Inverse

• If a matrix [A] is square, there is another matrix [A]-1, called the
  inverse of [A], for which

  [A][A]-1 = [A]-1[A] = [I]

  where [I] is an identity matrix.

• The matrix inverse can be calculated by means of the LU decomposition
  algorithm. This method permits the evaluation of multiple
  right-hand-side vectors.
Example: Matrix Inversion

• Employ LU factorization or decomposition to determine the matrix
  inverse for the following system.

• The decomposition resulted in the following lower and upper triangular
  matrices:
Example: Matrix Inversion

• Solution: [L]{[U]{X} - {Y}} = [A]{X} - {B}

  [L][U] = [A]
  [L]{Y} = {B},

  and {B} is a unit vector with 1 in the first row.
Example: Matrix Inversion

• The vector {Y}T = [1  -0.03333  -0.1009] is used as the right-hand
  side of [U]{X} = {Y}.

• {X}T = [0.33249  -0.00518  -0.01008] is obtained by back substitution.

• This vector is the first column of the matrix [A]-1.
Example: Matrix Inversion

• To determine the second column, the equations are formulated with the
  unit vector that has 1 in the second row.

• A new vector {Y} is obtained.

• The results are used in [U]{X} = {Y} to find
  {X}T = [0.004944  0.142903  0.00271], which is the second column of the
  matrix [A]-1.
Example: Matrix Inversion

• The forward- and back-substitution procedures are implemented with
  {B}T = [0 0 1].

• {X}T = [0.006798  0.004183  0.09988], which is the final column of the
  matrix, is found.
Find A⁻¹ with LU decomposition:

    | 1   1   0 |
A = | 2   1   1 |
    | 3  -1  -1 |

The solution is given in class.

Answer:
      |  0    0.2   0.2 |
A⁻¹ = |  1   -0.2  -0.2 |
      | -1    0.8  -0.2 |
Iterative Methods for Linear Algebraic Equations

Iterative Methods

• Iterative methods give approximate results for linear algebraic systems

  AX = B

  and provide an alternative to the elimination methods.

• They are efficient for large systems with a high percentage of zero
  entries (A is sparse), in terms of both computer storage and
  computation.

• As with any other iterative method, they may not converge, or may
  converge slowly.
Jacobi Iterative Method

To solve n linear equations:

1. Choose an initial approximation x(0).
   A simple way to obtain initial guesses is to assume that they are
   zero.

2. Convert the system AX = B into an equivalent system of the form

   X = TX + C

   This consists of solving the ith equation of AX = B for xi to obtain

   xi = bi/aii - Σj=1..n, j≠i (aij/aii) xj      i = 1, 2, ..., n

3. Generate a sequence of approximate solution vectors by computing

   X(k) = TX(k-1) + C      (k: iteration number)

   xi(k) = (bi - Σj=1..n, j≠i aij xj(k-1)) / aii      i = 1, 2, ..., n

4. Terminate the procedure when

   εa ≤ εs

   or the maximum number of iterations is reached.
How to define εa

We have vectors, so we use norms to define εa:

εa = ||x(k) - x(k-1)|| / ||x(k)||

A norm is a real-valued function that provides a measure of the size or
length of multi-component mathematical entities such as vectors or
matrices. The norm of a vector gives a measure of the distance between an
arbitrary vector and the zero vector.
x  x1 x2  xn  T

x 
 max xi Maximum magnitude norm
1i  n

n
 n
p
1/ p
x 1   xi
x p
  xi  i 1
i 1 
1/ 2
 2
n
x 2   xi  Euclidean norm
i 1 
The distance between two vectors is defined as the norm of the difference
of the vectors.

y = [y1 y2 ... yn]T

||x - y||∞ = max 1≤i≤n |xi - yi|

||x - y||2 = (Σi=1..n (xi - yi)²)^(1/2)
Solve the linear system with the Jacobi iterative method:

4x1 -  x2 +  x3 = 7
4x1 - 8x2 +  x3 = -21
-2x1 +  x2 + 5x3 = 15

x(0) = [1 2 2]T

Carry out four iterations.
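A pure-Python sketch of the Jacobi iteration, assuming the exercise system reads 4x1 - x2 + x3 = 7, 4x1 - 8x2 + x3 = -21, -2x1 + x2 + 5x3 = 15 (a reconstruction of the garbled slide; its exact solution is (2, 4, 3)):

```python
# Jacobi iteration: every component of the new vector is computed from
# the OLD vector only, so the updates within one sweep are independent.

def jacobi(A, b, x0, iterations):
    n = len(b)
    x = x0[:]
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new                     # replace the whole vector at once
    return x

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = jacobi(A, b, [1.0, 2.0, 2.0], 4)   # approaching (2, 4, 3)
```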
Gauss-Seidel Method

The Gauss-Seidel method is a commonly used iterative method.

It is the same as the Jacobi technique except that it uses the latest
available values of the x's:

xi(k) = (bi - Σj=1..i-1 aij xj(k) - Σj=i+1..n aij xj(k-1)) / aii
        i = 1, 2, ..., n
Solve the linear system with the Gauss-Seidel iterative method:

4x1 -  x2 +  x3 = 7
4x1 - 8x2 +  x3 = -21
-2x1 +  x2 + 5x3 = 15

x(0) = [1 2 2]T

Iterate until εa ≤ 0.001. Use the maximum magnitude norm to calculate εa.
Convergence criterion

If the coefficient matrix A is diagonally dominant, Jacobi iteration and
Gauss-Seidel iteration are guaranteed to converge.

Diagonally dominant: for each equation i,

|aii| > Σj=1..n, j≠i |aij|

Note that this is not a necessary condition, i.e. the system may still
converge even if A is not diagonally dominant.
(a) convergent (b) divergent
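The sufficient condition above is a one-line check in Python (strict inequality, per the definition; the function name is mine):

```python
# Diagonal dominance check: |a_ii| must strictly exceed the sum of the
# magnitudes of the other coefficients in row i, for every row.

def is_diagonally_dominant(A):
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

ok = is_diagonally_dominant([[4.0, -1.0, 1.0],
                             [4.0, -8.0, 1.0],
                             [-2.0, 1.0, 5.0]])   # True for this system
bad = is_diagonally_dominant([[1.0, 2.0],
                              [3.0, 4.0]])        # False
```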
Improvement of Convergence Using Relaxation

Relaxation represents a slight modification of the Gauss-Seidel method
and is designed to enhance convergence. After each new value of x is
computed, that value is modified by a weighted average of the results of
the previous and the present iterations:

xi(new) = w xi(new) + (1 - w) xi(old)      0 < w < 2

• For choices of w with 0 < w < 1, the procedures are called
  under-relaxation methods.
• For choices of w with 1 < w, the procedures are called over-relaxation
  methods. These methods are abbreviated SOR, for Successive
  Over-Relaxation.
Substituting w into the Gauss-Seidel iterative equation gives

xi(k) = (1 - w) xi(k-1)
        + (w/aii) (bi - Σj=1..i-1 aij xj(k) - Σj=i+1..n aij xj(k-1))
        i = 1, 2, ..., n
Use the SOR method with w = 1.25 to solve the linear system:

4x1 + 3x2       = 24
3x1 + 4x2 -  x3 = 30
     -  x2 + 4x3 = -24

x(0) = [1 1 1]T

Compute three iterations.

The result is given for 7 iterations and compared with the Gauss-Seidel
method.
