Iterative Method
In certain cases, such as when a system of equations is large, iterative methods of solving
equations are more advantageous. Elimination methods, such as Gaussian elimination, are
prone to large round-off errors for a large set of equations. Iterative methods, such as the
Gauss-Seidel method, give the user control of the round-off error. Also, if the physics of the
problem are well known, initial guesses needed in iterative methods can be made more
judiciously, leading to faster convergence.
What is the algorithm for the Gauss-Seidel method? Given a general set of $n$ equations and
$n$ unknowns, we have
$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = c_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = c_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n = c_n$$
If the diagonal elements are non-zero, each equation is rewritten for the corresponding
unknown, that is, the first equation is rewritten with $x_1$ on the left hand side, the second
equation is rewritten with $x_2$ on the left hand side, and so on as follows
$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n}{a_{11}}$$
$$x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n}{a_{22}}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - \displaystyle\sum_{\substack{j=1 \\ j \neq n-1}}^{n} a_{n-1,j}\,x_j}{a_{n-1,n-1}}$$
$$x_n = \frac{c_n - \displaystyle\sum_{\substack{j=1 \\ j \neq n}}^{n} a_{nj}\,x_j}{a_{nn}}$$
Hence, for any row $i$,
$$x_i = \frac{c_i - \displaystyle\sum_{\substack{j=1 \\ j \neq i}}^{n} a_{ij}\,x_j}{a_{ii}}, \quad i = 1, 2, \dots, n.$$
Now to find the $x_i$'s, one assumes an initial guess for the $x_i$'s and then uses the rewritten
equations to calculate the new estimates. Remember, one always uses the most recent
estimates to calculate the next estimates, $x_i$. At the end of each iteration, one calculates the
absolute relative approximate error for each $x_i$ as
$$\left|\epsilon_a\right|_i = \left|\frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}}\right| \times 100$$
where $x_i^{\text{new}}$ is the recently obtained value of $x_i$, and $x_i^{\text{old}}$ is the previous value of $x_i$.
When the absolute relative approximate error for each $x_i$ is less than the pre-specified
tolerance, the iterations are stopped.
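For readers who want to experiment, the update formula and the stopping test above translate directly into a short program. The following Python sketch is not part of the original method description; the function name, argument names, and defaults are illustrative assumptions. It assumes a square coefficient matrix with non-zero diagonal entries.

```python
import numpy as np

def gauss_seidel(A, c, x0, tol_percent=0.0001, max_iter=100):
    """Iterate x_i = (c_i - sum_{j != i} a_ij x_j) / a_ii, always using
    the most recent estimates, until every absolute relative approximate
    error (in percent) drops below tol_percent or max_iter is reached."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(c)
    for iteration in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # x already holds the newest values of x[0..i-1], so this sum
            # uses the most recent estimates, as the method requires.
            sigma = A[i, :] @ x - A[i, i] * x[i]
            x[i] = (c[i] - sigma) / A[i, i]
        eps_a = np.abs((x - x_old) / x) * 100.0  # |eps_a|_i in percent
        if np.all(eps_a < tol_percent):
            break
    return x, iteration, eps_a
```

Overwriting x in place inside the inner loop is what makes this Gauss-Seidel rather than the Jacobi method, which would keep using x_old on the right-hand side for the whole sweep.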
Example 1
The upward velocity of a rocket is given at three different times in the following table.

Time, t           5        8        12
Velocity, v(t)    106.8    177.2    279.2
Find the values of $a_1$, $a_2$, and $a_3$ using the Gauss-Seidel method. Assume an initial guess of
the solution as
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}$$
and conduct two iterations.
Solution
The polynomial is going through three data points $(t_1, v_1)$, $(t_2, v_2)$, and $(t_3, v_3)$ where, from
the above table,
$$t_1 = 5,\quad v_1 = 106.8$$
$$t_2 = 8,\quad v_2 = 177.2$$
$$t_3 = 12,\quad v_3 = 279.2$$
Requiring that $v(t) = a_1 t^2 + a_2 t + a_3$ passes through the three data points gives
$$v(t_1) = v_1 = a_1 t_1^2 + a_2 t_1 + a_3$$
$$v(t_2) = v_2 = a_1 t_2^2 + a_2 t_2 + a_3$$
$$v(t_3) = v_3 = a_1 t_3^2 + a_2 t_3 + a_3$$
Substituting the data $(t_1, v_1)$, $(t_2, v_2)$, and $(t_3, v_3)$ gives
$$a_1(5^2) + a_2(5) + a_3 = 106.8$$
$$a_1(8^2) + a_2(8) + a_3 = 177.2$$
$$a_1(12^2) + a_2(12) + a_3 = 279.2$$
or
$$25a_1 + 5a_2 + a_3 = 106.8$$
$$64a_1 + 8a_2 + a_3 = 177.2$$
$$144a_1 + 12a_2 + a_3 = 279.2$$
The coefficients $a_1$, $a_2$, and $a_3$ for the above expression are given by
$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$
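As a quick sanity check (not part of the Gauss-Seidel procedure), this system is small enough to solve directly; the sketch below uses NumPy's built-in solver and should return values close to the exact solution quoted later in this example, approximately $a_1 = 0.29048$, $a_2 = 19.690$, $a_3 = 1.0857$.

```python
import numpy as np

A = np.array([[ 25.0,  5.0, 1.0],
              [ 64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])
c = np.array([106.8, 177.2, 279.2])

# Direct solution, for comparison with the iterative estimates below.
print(np.linalg.solve(A, c))   # approximately [0.29048, 19.690, 1.0857]
```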
Rewriting the equations gives
$$a_1 = \frac{106.8 - 5a_2 - a_3}{25}$$
$$a_2 = \frac{177.2 - 64a_1 - a_3}{8}$$
$$a_3 = \frac{279.2 - 144a_1 - 12a_2}{1}$$
Iteration #1
Given the initial guess of the solution vector as
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}$$
we get
$$a_1 = \frac{106.8 - 5(2) - (5)}{25} = 3.6720$$
$$a_2 = \frac{177.2 - 64(3.6720) - (5)}{8} = -7.8510$$
$$a_3 = \frac{279.2 - 144(3.6720) - 12(-7.8510)}{1} = -155.36$$
The absolute relative approximate error for each $x_i$ then is
$$\left|\epsilon_a\right|_1 = \left|\frac{3.6720 - 1}{3.6720}\right| \times 100 = 72.76\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-7.8510 - 2}{-7.8510}\right| \times 100 = 125.47\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-155.36 - 5}{-155.36}\right| \times 100 = 103.22\%$$
At the end of the first iteration, the estimate of the solution vector is
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$
and the maximum absolute relative approximate error is 125.47%.
Iteration #2
The estimate of the solution vector at the end of Iteration #1 is
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$
Now we get
$$a_1 = \frac{106.8 - 5(-7.8510) - (-155.36)}{25} = 12.056$$
$$a_2 = \frac{177.2 - 64(12.056) - (-155.36)}{8} = -54.882$$
$$a_3 = \frac{279.2 - 144(12.056) - 12(-54.882)}{1} = -798.34$$
The absolute relative approximate error for each $x_i$ then is
$$\left|\epsilon_a\right|_1 = \left|\frac{12.056 - 3.6720}{12.056}\right| \times 100 = 69.543\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-54.882 - (-7.8510)}{-54.882}\right| \times 100 = 85.695\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-798.34 - (-155.36)}{-798.34}\right| \times 100 = 80.540\%$$
At the end of the second iteration the estimate of the solution vector is
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 12.056 \\ -54.882 \\ -798.34 \end{bmatrix}$$
and the maximum absolute relative approximate error is 85.695%.
Conducting more iterations gives the following values for the solution vector and the
corresponding absolute relative approximate errors.
Iteration    a1        |εa|1 %    a2         |εa|2 %    a3         |εa|3 %
1            3.6720    72.76      -7.8510    125.47     -155.36    103.22
2            12.056    69.543     -54.882    85.695     -798.34    80.540
⋮            ⋮         ⋮          ⋮          ⋮          ⋮          ⋮
As seen in the above table, the solution estimates are not converging to the true solution of
$$a_1 = 0.29048, \quad a_2 = 19.690, \quad a_3 = 1.0857$$
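The diverging behaviour is easy to reproduce. The following self-contained Python sketch simply replays the three rewritten equations in a loop (the count of six iterations is an arbitrary choice for illustration):

```python
# Initial guess from the example
a1, a2, a3 = 1.0, 2.0, 5.0

for k in range(1, 7):
    # Each update uses the most recently computed values, as in the
    # hand calculations above.
    a1 = (106.8 - 5.0 * a2 - a3) / 25.0
    a2 = (177.2 - 64.0 * a1 - a3) / 8.0
    a3 = (279.2 - 144.0 * a1 - 12.0 * a2) / 1.0
    print(k, a1, a2, a3)
```

Iterations 1 and 2 match the hand calculations above to the digits shown (3.6720, -7.8510, -155.36 and 12.056, -54.882, -798.34), and the later iterates keep growing in magnitude instead of approaching the true solution.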
The above system of equations does not seem to converge. Why?
Well, a pitfall of most iterative methods is that they may or may not converge. However, the
solution to a certain class of systems of simultaneous equations does always converge
using the Gauss-Seidel method. This class of systems is the one where the coefficient
matrix $[A]$ in $[A][X] = [C]$ is diagonally dominant, that is,
$$|a_{ii}| \geq \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| \quad \text{for all } i$$
and
$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| \quad \text{for at least one } i.$$
If a system of equations has a coefficient matrix that is not diagonally dominant, it may or
may not converge. Fortunately, many physical systems that result in simultaneous linear
equations have a diagonally dominant coefficient matrix, which then assures convergence for
iterative methods such as the Gauss-Seidel method of solving simultaneous linear equations.
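The diagonal dominance test is straightforward to automate. The sketch below (the function name is my own) returns True when every row satisfies the weak inequality and at least one row satisfies it strictly, matching the definition above:

```python
import numpy as np

def is_diagonally_dominant(A):
    """|a_ii| >= sum of |a_ij| over j != i for every row, with strict
    inequality holding in at least one row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sum = A.sum(axis=1) - diag
    return bool(np.all(diag >= off_diag_sum) and np.any(diag > off_diag_sum))
```

For the coefficient matrix of Example 1 this returns False, which is consistent with the divergence seen there; for the matrix of Example 2 below it returns True.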
Example 2
Find the solution to the following system of equations using the Gauss-Seidel method.
$$12x_1 + 3x_2 - 5x_3 = 1$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$3x_1 + 7x_2 + 13x_3 = 76$$
Use
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
as the initial guess and conduct two iterations.
Solution
The coefficient matrix
$$[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}$$
is diagonally dominant as
$$|a_{11}| = |12| = 12 \geq |a_{12}| + |a_{13}| = |3| + |-5| = 8$$
$$|a_{22}| = |5| = 5 \geq |a_{21}| + |a_{23}| = |1| + |3| = 4$$
$$|a_{33}| = |13| = 13 \geq |a_{31}| + |a_{32}| = |3| + |7| = 10$$
and the inequality is strictly greater than for at least one row. Hence, the solution should
converge using the Gauss-Seidel method.
Rewriting the equations, we get
$$x_1 = \frac{1 - 3x_2 + 5x_3}{12}$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5}$$
$$x_3 = \frac{76 - 3x_1 - 7x_2}{13}$$
Assuming an initial guess of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
Iteration #1
$$x_1 = \frac{1 - 3(0) + 5(1)}{12} = 0.50000$$
$$x_2 = \frac{28 - (0.50000) - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$
The absolute relative approximate error at the end of the first iteration is
$$\left|\epsilon_a\right|_1 = \left|\frac{0.50000 - 1}{0.50000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{4.9000 - 0}{4.9000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.0923 - 1}{3.0923}\right| \times 100 = 67.662\%$$
The maximum absolute relative approximate error is 100.00%.
Iteration #2
$$x_1 = \frac{1 - 3(4.9000) + 5(3.0923)}{12} = 0.14679$$
$$x_2 = \frac{28 - (0.14679) - 3(3.0923)}{5} = 3.7153$$
$$x_3 = \frac{76 - 3(0.14679) - 7(3.7153)}{13} = 3.8118$$
At the end of the second iteration, the absolute relative approximate error is
$$\left|\epsilon_a\right|_1 = \left|\frac{0.14679 - 0.50000}{0.14679}\right| \times 100 = 240.61\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{3.7153 - 4.9000}{3.7153}\right| \times 100 = 31.889\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.8118 - 3.0923}{3.8118}\right| \times 100 = 18.874\%$$
The maximum absolute relative approximate error is 240.61%. This is greater than the value
of 100.00% we obtained in the first iteration. Is the solution diverging? No, as you conduct
more iterations, the solution converges as follows.
Iteration    x1         |εa|1 %    x2        |εa|2 %     x3        |εa|3 %
1            0.50000    100.00     4.9000    100.00      3.0923    67.662
2            0.14679    240.61     3.7153    31.889      3.8118    18.874
3            0.74275    80.236     3.1644    17.408      3.9708    4.0064
4            0.94675    21.546     3.0281    4.4996      3.9971    0.65772
5            0.99177    4.5391     3.0034    0.82499     4.0001    0.074383
6            0.99919    0.74307    3.0001    0.10856     4.0001    0.00101
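The values in this table can be reproduced with a short, self-contained loop over the rewritten equations of this example (six iterations chosen to match the table):

```python
# Initial guess from the example
x1, x2, x3 = 1.0, 0.0, 1.0

for k in range(1, 7):
    x1 = (1.0 - 3.0 * x2 + 5.0 * x3) / 12.0
    x2 = (28.0 - x1 - 3.0 * x3) / 5.0
    x3 = (76.0 - 3.0 * x1 - 7.0 * x2) / 13.0
    print(k, x1, x2, x3)
# The printed estimates track the table above and approach (1, 3, 4).
```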
Example 3
Given the system of equations
$$3x_1 + 7x_2 + 13x_3 = 76$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$12x_1 + 3x_2 - 5x_3 = 1$$
find the solution using the Gauss-Seidel method. Use
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
as the initial guess.
Solution
Rewriting the equations, we get
$$x_1 = \frac{76 - 7x_2 - 13x_3}{3}$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5}$$
$$x_3 = \frac{1 - 12x_1 - 3x_2}{-5}$$
Assuming an initial guess of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
the next six iterative values are given in the table below.
Iteration    x1             |εa|1 %    x2             |εa|2 %    x3             |εa|3 %
1            21.000         95.238     0.80000        100.00     50.680         98.027
2            -196.15        110.71     14.421         94.453     -462.30        110.96
3            1995.0         109.83     -116.02        112.43     4718.1         109.80
4            -20149         109.90     1204.6         109.63     -47636         109.90
5            2.0364×10^5    109.89     -12140         109.92     4.8144×10^5    109.89
6            -2.0579×10^6   109.89     1.2272×10^5    109.89     -4.8653×10^6   109.89
You can see that this solution is not converging and the coefficient matrix is not diagonally
dominant. The coefficient matrix
$$[A] = \begin{bmatrix} 3 & 7 & 13 \\ 1 & 5 & 3 \\ 12 & 3 & -5 \end{bmatrix}$$
is not diagonally dominant as
$$|a_{11}| = |3| = 3 \leq |a_{12}| + |a_{13}| = |7| + |13| = 20.$$
Hence, the Gauss-Seidel method may or may not converge.
However, this is the same set of equations as in the previous example, and that converged. The
only difference is that we exchanged the first and the third equations with each other, and that
made the coefficient matrix not diagonally dominant.
Therefore, it is sometimes possible to make a system of equations diagonally dominant by
exchanging the equations with each other. However, this is not possible in all cases. For
example, the following set of equations
$$x_1 + x_2 + x_3 = 3$$
$$2x_1 + 3x_2 + 4x_3 = 9$$
$$x_1 + 7x_2 + x_3 = 9$$
cannot be rewritten to make the coefficient matrix diagonally dominant.
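This claim can be verified by brute force: try every ordering of the three equations and test each for diagonal dominance. The sketch below repeats the dominance test so that it is self-contained; none of the six row orderings passes.

```python
from itertools import permutations

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 3.0, 4.0],
              [1.0, 7.0, 1.0]])

def is_diagonally_dominant(M):
    M = np.abs(M)
    diag = np.diag(M)
    off_diag_sum = M.sum(axis=1) - diag
    return bool(np.all(diag >= off_diag_sum) and np.any(diag > off_diag_sum))

# Check every possible ordering of the three equations.
print(any(is_diagonally_dominant(A[list(p), :])
          for p in permutations(range(3))))   # prints False
```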
Key Terms:
Gauss-Seidel method
Convergence of Gauss-Seidel method
Diagonally dominant matrix