
3. SOLVING A SYSTEM OF LINEAR EQUATIONS

Introduction
Systems of linear equations that have to be solved simultaneously arise in problems that include several
(possibly many) variables that are dependent on each other. Such problems occur not only in engineering and
science, which are the focus of this manuscript, but in virtually any discipline (business, statistics, economics, etc.).
A system of two (or three) equations with two (or three) unknowns can be solved manually by substitution or other
mathematical methods. Solving a system in this way is practically impossible as the number of equations (and
unknowns) increases beyond three. An example of a problem in electrical engineering that requires a solution of a
system of equations is shown in the figure. Using Kirchhoff's law, the currents i1, i2, i3, and i4 can be determined
by solving the following system of four equations:

9i1 − 4i2 − 2i3 = 24
−4i1 + 17i2 − 6i3 − 3i4 = −16
−2i1 − 6i2 + 14i3 − 6i4 = 0
−3i2 − 6i3 + 11i4 = 18

Obviously, more complicated circuits may require the solution of a system with a larger number of
equations. Another example that requires a solution of a system of equations is calculating the force in members
of a truss. The forces in the eight members of the truss shown in the figure are determined from the solution of the
following system of eight equations (equilibrium equations of pins A, B, C, and D):
0.9231 FAC = 1690
FAB-0.7809 FBC = 0
FCD+0.8575 FDE = 0
-FAB-0.3846 FAC = 3625
0.6247 FBC-FBD = 0
FBD-0.5145 FDE-FDF = 0
0.3846 FCE -0.3846 FAC-0.7809 FBC -FCD = 0
0.9231 FAC+0.6247 FBC-0.9231 FCE = 0

Motivation
Imagine yourself as a master puzzle solver, where each piece represents a variable in a complex
equation. Solving systems of linear equations is like cracking the code to unlock the secrets of our dynamic world.
It's not just about numbers and formulas; it's about understanding how different forces interact and influence each
other. Whether you're optimizing business strategies, predicting weather patterns, or designing innovative
technologies, this skill gives you the power to analyze and solve problems that shape our reality. Every equation
solved is like solving a piece of a larger puzzle—each one brings you closer to revealing the full picture of how
things work together in harmony.

Specific Learning Outcomes
At the end of the topic, the students should be able to:
Apply various methods (e.g., Gauss elimination) to solve systems of linear equations and understand their limitations.

Concepts
There are applications, for example, in finite element and finite difference analysis, where the system of
equations that has to be solved contains thousands (or even millions) of simultaneous equations.

The general form of a system of n linear algebraic equations is:


a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
⋮
an1x1 + an2x2 + … + annxn = bn

The matrix form of the equations is shown in the figure. Two types of numerical methods, direct and
iterative, are used for solving systems of linear algebraic equations. In direct methods, the solution is calculated
by performing arithmetic operations with the equations. In iterative methods, an initial approximate solution is
assumed and then used in an iterative process for obtaining successively more accurate solutions.

The Four Solution Possibilities are:
1. A unique solution – a consistent set of equations
x+y=3
x−y=1

2. No solution - an inconsistent set of equations


x+y=4
x+y=5

3. An infinite number of solutions – a redundant set of equations


x+y=2
2x+2y=4

4. The trivial solution, xj = 0 (j = 1, 2, …, n), for a homogeneous set of equations
x−y+z=0
−x+y−z=0
x−z=0

A trivial solution in mathematics refers to a solution of an equation or system that is straightforward and
simple, often involving zero or a constant value. It is typically the most obvious solution and can be identified
without extensive calculation. For example, in the context of differential equations, y=0 is considered a trivial
solution to many equations because it satisfies the equation without requiring complex calculations. Similarly, in
systems of linear equations, if all variables are set to zero (e.g., x1=x2=…=xn=0), this constitutes a trivial solution.
A homogeneous set of equations, particularly in linear algebra, refers to a system where all constant
terms are zero. This means each equation has the form:
a1x1+a2x2+…+anxn=0

Direct Methods: Gauss Elimination Method, Gauss Elimination with Row Pivoting, Gauss – Jordan Elimination
Method

1. Gauss Elimination Method


The Gauss Elimination Method is a systematic approach to solving systems of linear equations. It involves
transforming the system's augmented matrix into row echelon form, which is often referred to as upper triangular form, using
elementary row operations. In essence, row echelon form means that all entries below the leading entry in each row are zeros,

while upper triangular specifically refers to matrices where all entries below the diagonal are zeros. Once the matrix is in this
form, it can be solved using back-substitution—a process where variables are solved from bottom up by substituting known
values into equations above them.
To clarify further, while both terms refer to similar forms of matrices used for solving systems of equations, upper
triangular typically implies a more specific structure with zeros below the diagonal. The term row echelon encompasses this
but allows for some flexibility in how rows are arranged as long as each row has its leading entry (pivot) positioned after any
pivots above it and all entries below these pivots are zero.
In practice, after achieving row echelon or upper triangular form through Gauss Elimination, one might optionally
simplify further into reduced row echelon form, where each leading entry is 1 and all other entries in that column are 0. This
makes back-substitution even more straightforward but isn't strictly necessary for finding solutions.

Key Concepts
1. Augmented Matrix: A matrix that represents the coefficients and constants of a system of linear equations.
2. Elementary Row Operations:
Multiply a row by a non-zero scalar.
Add or subtract a multiple of one row to another.
3. Back-Substitution: Solving for variables starting from the last equation and moving upwards.

Steps to Solve Using Gauss Elimination


1. Write the Augmented Matrix: Represent the system of equations in matrix form [A | B], where A is the coefficient matrix and
B is the constant matrix.

2. Perform Row Operations: Use row operations to transform the matrix into upper triangular form:
• Make the first element of the first row (pivot) 1 (if necessary).
• Eliminate the first variable from all rows below the first row.
• Repeat for the second row, third row, etc.

[ a   b   c   d   e ]
[ 0   f   g   h   i ]
[ 0   0   j   k   l ]
[ 0   0   0   m   n ]

(At each step of the elimination, the current row is the pivot row and its diagonal entry is the pivot element.)

Note: The objective is to make an upper triangular form - a matrix where all entries below the main diagonal are zero.
3. Back-Substitution: Solve for the variables starting from the last row and moving upwards.

Sample Problems
1. Using Gauss Elimination, solve the system of equations:
2x + 3y - z = 5
4x + 4y - 3z = 3
-2x + 3y - z = 1
Solution:
a. Write the augmented matrix

[  2   3   −1   5 ]
[  4   4   −3   3 ]
[ −2   3   −1   1 ]

b. Perform row operations

R2 = R2 − 2R1
R3 = R3 + R1

[ 2    3   −1    5 ]
[ 0   −2   −1   −7 ]
[ 0    6   −2    6 ]

R3 = R3 + 3R2

Result:

[ 2    3   −1     5 ]
[ 0   −2   −1    −7 ]
[ 0    0   −5   −15 ]
c. Back-substitution
From R3: -5z = -15
z=3
From R2: -2y - z = -7
-2y - 3 = -7
y=2
From R1: 2x + 3y - z = 5
2x + 3(2) – (3) = 5
x=1

Solution: x = 1, y = 2, z = 3

2. Using Gauss Elimination, solve the system of equations:


x + 2y + 3z = 6
2x + 5y + 2z = 4
6x - 3y + z = 2
Solution:
a. Write the augmented matrix

[ 1    2   3   6 ]
[ 2    5   2   4 ]
[ 6   −3   1   2 ]

b. Perform row operations

R2 = R2 − 2R1
R3 = R3 − 6R1

[ 1     2     3     6 ]
[ 0     1    −4    −8 ]
[ 0   −15   −17   −34 ]

R3 = R3 + 15R2

Result:

[ 1   2    3      6 ]
[ 0   1   −4     −8 ]
[ 0   0  −77   −154 ]
c. Back-substitution
From R3: -77z = -154
z=2
From R2: y - 4z = -8
y – 4(2) = -8
y=0
From R1: x + 2y + 3z = 6
x + 2(0) + 3(2) = 6
x=0

Solution: x = 0, y = 0, z = 2

3. MATLAB user-defined function for solving a system of equations using Gauss Elimination
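A minimal sketch of what such a user-defined function might look like is given below for reference. The function name GaussElim, its signature, and the absence of pivoting are illustrative assumptions, not the exact listing in the original figure.

function x = GaussElim(a, b)
% GaussElim solves a*x = b by naive Gauss elimination (no pivoting)
% followed by back-substitution.
% a is the (n x n) coefficient matrix, b the (n x 1) right-hand-side vector.
ab = [a, b];                          % augmented matrix [a | b]
[R, C] = size(ab);
for j = 1:R-1                         % forward elimination
    for i = j+1:R
        m = ab(i,j) / ab(j,j);        % elimination multiplier
        ab(i,j:C) = ab(i,j:C) - m*ab(j,j:C);
    end
end
x = zeros(R,1);                       % back-substitution, starting from the last row
x(R) = ab(R,C) / ab(R,R);
for i = R-1:-1:1
    x(i) = (ab(i,C) - ab(i,i+1:R)*x(i+1:R)) / ab(i,i);
end
end

Called as x = GaussElim([2 3 -1; 4 4 -3; -2 3 -1], [5; 3; 1]), it reproduces the solution x = 1, y = 2, z = 3 of Sample Problem 1.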

Potential Difficulties When Applying the Gauss Elimination Method
a. The pivot element is zero
Since the pivot row is divided by the pivot element, a problem will arise during the execution of the Gauss elimination
procedure if the value of the pivot element is equal to zero. As shown in the next section, this situation can be corrected by
changing the order of the rows. In a procedure called pivoting, the pivot row that has the zero-pivot element is exchanged
with another row that has a nonzero pivot element.

b. The pivot element is small relative to the other terms in the pivot row
Significant errors due to rounding can occur when the pivot element is small relative to other elements in the pivot
row. This is illustrated by the following example. Consider the following system of simultaneous equations for the unknowns
x1 and x2:
0.0004x1 + 13.45x2 = 13.454
0.6543x1 + x2 = 6.432

The exact solution of the system using Excel is x1 = 8.302 and x2 = 1.


The error due to rounding is illustrated by solving the system using Gaussian elimination on a machine with limited precision, so that only four significant figures are retained with rounding. When the first equation is entered, the constant on the right-hand side is rounded to 13.45. The solution starts by using the first equation as the pivot equation and a11 = 0.0004 as the pivot coefficient. In the first step, the pivot equation is multiplied by m21 = 0.6543/0.0004 = 1636. With four significant figures and rounding, this operation gives:

[ 0.0004   13.45    13.45 ]
[ 0.6543    1.00    6.432 ]

[  0.0004    13.4500    13.4500 ]
[ −0.0001   −22003     −21998   ]     R2 >> R2 − 1636 R1

Note that the a21 element is not zero but a very small number.
Then, the value of x2 is calculated from the second equation:

−0.0001 x1 − 22003 x2 = −21998

x2 = 0.9998

0.0004 x1 + 13.45 x2 = 13.45
0.0004 x1 + 13.45(0.9998) = 13.45
0.0004 x1 = 0.00269
x1 = 6.725

The solution that is obtained for x1 is obviously incorrect. The incorrect value is obtained because the magnitude of a11 (0.0004) is small when compared to the magnitude of a12 (13.45). Consequently, a relatively small error (due to round-off arising from the finite precision of a computing machine) in the value of x2 can lead to a large error in the value of x1.
The problem can be easily remedied by exchanging the order of the two equations.

[ 0.6543    1       6.432  ]
[ 0.0004   13.45   13.454  ]

[ 0.6543       1        6.432 ]
[ 2.641E-08   13.45    13.45  ]     R2 >> R2 − 0.0006113 R1
  (almost negligible)   (rounded to 4 significant figures)

13.4500 x2 = 13.4500
x2 = 1.0000

0.6543 x1 + 1.00 x2 = 6.432
0.6543 x1 + 1.00(1.0000) = 6.432
0.6543 x1 = 5.432
x1 = 8.3020

The solution that is obtained now is the exact solution.

Another Solution without pivoting

[ 0.0004   13.45    13.454 ]
[ 0.6543    1.00     6.432 ]

[ 0.0004     13.4500        13.4540     ]
[ 0         −21999.8375    −22000.9485  ]     R2 >> R2 − 1635.75 R1

−21999.84 x2 = −22000.95
x2 = 1.0001

0.0004 x1 + 13.45 x2 = 13.454
0.0004 x1 + 13.45(1.0001) = 13.454
0.0004 x1 = 0.00332077
x1 = 8.3019

2. Gauss Elimination with Row Pivoting


In the Gauss elimination procedure, the pivot equation is divided by the pivot coefficient. This, however, cannot be
done if the pivot coefficient is zero. For example, for the following system of three equations:
0x1 + 2x2 + 3x3 = 6
2x1 + 5x2 + 2x3 = 4
6x1 − 3x2 + x3 = 2

The process begins by selecting the first equation as the pivot equation and its coefficient of x1, which is 0, as the pivot coefficient. In order to eliminate the term 2x1 from the second equation, the pivot equation should be multiplied by 2/0 and then subtracted from the second equation. However, this is not possible when the pivot coefficient is zero. To avoid the division by zero, the order of the equations can be adjusted so that the first equation has a nonzero coefficient for x1. For instance, in the system above, this can be achieved by swapping the first two equations. In the general Gauss elimination method, an equation (or row) can only serve as the pivot equation (pivot row) if its pivot coefficient (pivot element) is nonzero. If the pivot element is zero, the row is swapped with another row below it that has a nonzero pivot coefficient. This row exchange is referred to as pivoting. Row pivoting is used to improve numerical stability: it involves swapping rows to ensure that the largest possible pivot element is used at each step.
If, during the Gauss elimination procedure, a pivot equation has a pivot element that is equal to zero, then, provided the system of equations being solved has a solution, an equation with a nonzero element in the pivot position can always be found.
The numerical calculations are less prone to error and will have fewer round-off errors if the pivot element has a larger absolute value compared to the other elements in the same row. Consequently, among all the equations that can be exchanged to be the pivot equation, it is better to select the equation whose pivot element has the largest absolute value.

Steps to Solve Using Gauss Elimination with Row Pivoting


1. Write the Augmented Matrix
2. Perform Row Pivoting. At each step, identify the row with the largest absolute value in the current column (pivot column)
and swap it with the current row.
3. Perform Row Operations. Use row operations to eliminate variables below the pivot.
4. Back-Substitution. Solve for the variables starting from the last row.

Sample Problem. Solve the equations using Gaussian elimination method with pivoting.
0x1 + 2x2 + 3x3 = 6
2x1 + 5x2 + 2x3 = 4
6x1 − 3x2 + x3 = 2
Solution:
a. Write the augmented matrix

[ 0    2   3   6 ]
[ 2    5   2   4 ]
[ 6   −3   1   2 ]

b. Perform row pivoting (exchange R1 and R2 so that the pivot element is nonzero)

[ 2    5   2   4 ]
[ 0    2   3   6 ]
[ 6   −3   1   2 ]

c. Perform row operations

R3 = R3 − 3R1

[ 2     5    2     4 ]
[ 0     2    3     6 ]
[ 0   −18   −5   −10 ]

R3 = R3 + 9R2

[ 2   5    2    4 ]
[ 0   2    3    6 ]
[ 0   0   22   44 ]
d. Back-substitution
From R3: 22x3 = 44
x3 = 2
From R2: 2x2 + 3x3 = 6
2x2 + 3(2) = 6
x2 = 0
From R1: 2x1 + 5x2 + 2x3 = 4

2x1 + 5(0) + 2(2) = 4
x1 = 0

Solution: x1 = 0, x2 = 0, x3 = 2
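A minimal MATLAB sketch of Gauss elimination with partial (row) pivoting is given below; the function name GaussPivot is an illustrative assumption. Note that strict partial pivoting selects the row with the largest absolute pivot candidate (the third row in this example) rather than simply the first row with a nonzero coefficient, but the final solution is the same.

function x = GaussPivot(a, b)
% GaussPivot solves a*x = b by Gauss elimination with partial (row)
% pivoting: in each column, the row with the largest absolute pivot
% candidate is swapped up before eliminating below it.
ab = [a, b];
[R, C] = size(ab);
for j = 1:R-1
    [~, p] = max(abs(ab(j:R, j)));      % largest pivot candidate in column j
    p = p + j - 1;
    if p ~= j
        ab([j p], :) = ab([p j], :);    % row exchange (pivoting)
    end
    for i = j+1:R
        m = ab(i,j) / ab(j,j);          % elimination multiplier
        ab(i,j:C) = ab(i,j:C) - m*ab(j,j:C);
    end
end
x = zeros(R,1);                         % back-substitution
x(R) = ab(R,C) / ab(R,R);
for i = R-1:-1:1
    x(i) = (ab(i,C) - ab(i,i+1:R)*x(i+1:R)) / ab(i,i);
end
end

Called as x = GaussPivot([0 2 3; 2 5 2; 6 -3 1], [6; 4; 2]), it returns x1 = 0, x2 = 0, x3 = 2, in agreement with the hand solution.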

3. Gauss-Jordan Elimination Method


The Gauss-Jordan elimination method is a procedure for solving a system of linear equations, [a][x] = [b]. In this procedure, a system of equations that is given in general form is manipulated into an equivalent system of equations in diagonal form with normalized elements along the diagonal. This means that when the matrix of the coefficients, [a], is reduced to the identity matrix, the new vector [b'] is the solution. The starting point of the procedure is a system of equations given in general form:

a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
⋮
an1x1 + an2x2 + … + annxn = bn

In the Gauss-Jordan elimination method, the system of equations is manipulated to have the following diagonal form:

1·x1 + 0·x2 + … + 0·xn = b'1
0·x1 + 1·x2 + … + 0·xn = b'2
⋮
0·x1 + 0·x2 + … + 1·xn = b'n

The terms on the right-hand side of the equations (column [b']) are the solution. In matrix form, the matrix of the coefficients is transformed into an identity matrix.

Gauss-Jordan Elimination Procedure


The Gauss-Jordan elimination procedure for transforming the system of equations from the general form into the diagonal (identity matrix) form is the same as the Gauss elimination procedure, except for the following two differences:
a) The pivot equation is normalized by dividing all the terms in the equation by the pivot coefficient. This makes the
pivot coefficient equal to 1.
b) The pivot equation is used to eliminate the off-diagonal terms in ALL the other equations. This means that the
elimination process is applied to the equations (rows) that are above and below the pivot equation. In the Gaussian elimination
method, only elements that are below the pivot element are eliminated.
When the Gauss-Jordan procedure is programmed, it is convenient and more efficient to create a single matrix that includes the matrix of coefficients [a] and the vector [b]. This is done by appending the vector [b] to the matrix [a] to form the augmented matrix, which is the starting point of the procedure. At the end of the procedure, the elements of [a] are replaced by an identity matrix, and the column [b'] contains the solution.

Sample Problem: Solve the following set of four equations using the Gauss-Jordan elimination method.
4x1 - 2x2 - x3 + x4 = 4
-2x1 + 2x2 + x3 - x4 = -6
x1 + 2x2 + 2x3 + 2x4 = 3
-2x1 + 2x2 + x3 -2x4 = 3

Solution: Using Excel
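As a cross-check of the Excel computation, a minimal MATLAB sketch of the Gauss-Jordan procedure might look like the following; the function name GaussJordan is an illustrative assumption and no pivoting is included.

function x = GaussJordan(a, b)
% GaussJordan reduces [a | b] to [I | x]: each pivot row is normalized,
% then used to eliminate the pivot column in ALL other rows (above and
% below), so no back-substitution is needed.
ab = [a, b];
[R, C] = size(ab);
for j = 1:R
    ab(j,:) = ab(j,:) / ab(j,j);              % normalize the pivot row
    for i = 1:R
        if i ~= j
            ab(i,:) = ab(i,:) - ab(i,j)*ab(j,:);   % eliminate column j in row i
        end
    end
end
x = ab(:, C);                                  % the last column is the solution
end

For the four-equation system above, x = GaussJordan([4 -2 -1 1; -2 2 1 -1; 1 2 2 2; -2 2 1 -2], [4; -6; 3; 3]) gives x1 = −1, x2 = −28, x3 = 39, x4 = −9, which can be checked by substituting back into the four equations.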

Sample Problem: Solve the following set of three equations using the Gauss-Jordan elimination method.
3x + 2y – 4z = -5
2x – 3y + 5z = 11
x + y – 2z = – 3

Solution:

Iterative Methods: Jacobi Iterative Method and Gauss-Seidel Iterative Method
In the context of numerical methods, an iteration refers to one complete cycle of updating the values of
the unknowns in an iterative process. The process begins with an initial guess for the solution, and in each iteration,
the values of the unknowns are updated based on the results of the previous iteration. This is done repeatedly until
the solution approaches the true value or until a stopping criterion (such as a specific tolerance level) is met.
For example, in solving a system of equations using an iterative method, during the first iteration, you use
initial guesses to compute updated values for the unknowns. Then, in the second iteration, these updated values
are used to compute new values, and so on, refining the solution with each cycle.
To converge means that as the iterations progress, the values of the unknowns are approaching a
specific limit, which is the actual solution to the problem. In other words, the solution gets closer and closer to the
true value with each iteration. Convergence is reached when the difference between successive iterations
becomes sufficiently small or meets a predetermined criterion, indicating that further iterations will not significantly
improve the solution.
In numerical methods, convergence is important because it tells us that the iterative process is effectively
and reliably moving towards the correct solution. If the iterations do not converge, this may indicate an issue with
the method or the problem itself, such as an inconsistent system or poorly chosen initial guesses.

A system of linear equations can also be solved using an iterative approach. This method follows a similar
principle to the fixed-point iteration technique, which is commonly used to solve a single nonlinear equation. In the
context of solving a system of linear equations iteratively, the equations are restructured into an explicit form where
each unknown is expressed as a function of the other variables. This explicit formulation allows for successive
approximations, refining the solution with each iteration. An example of such a formulation for a system of four
equations is provided below.
The explicit form of an equation refers to a mathematical expression where each variable (or unknown)
is isolated on one side of the equation, typically in terms of other variables or constants. In the context of solving
systems of equations iteratively, it means rewriting each equation so that each unknown is expressed as a function
of the other variables.
For example, the linear equation a1x + b1y + c1z = d1 can be written in explicit form for x as:
x = [ d1 − b1y − c1z ] / a1

The system of four equations can be represented in both standard (a) and explicit (b) forms. The solution
process begins by assuming initial values for the unknowns, which serve as the first estimated solution. During the
first iteration, these initial values are substituted into the equations, and the resulting values of the unknowns
become the second estimated solution. In the second iteration, the updated solution is substituted back into the
equations to calculate new values for the unknowns, yielding the third estimated solution. This iterative process
continues in the same manner. When the method converges, the solutions obtained through successive iterations
will approach the actual solution. For a system with n equations, the explicit forms of the equations for the
unknowns [xi] are as follows:
𝐣=𝐧
𝟏
𝐱𝐢 = [ 𝐛𝐢 − ∑ 𝐚𝐢𝐣 𝐱𝐣 ] where 𝐢 = 𝟏, 𝟐, … , 𝐧
𝐚𝐢𝐢
𝐣=𝟏,𝐣≠𝐢

Condition for convergence


For a system of n equations [a][x] = [b], a sufficient condition for convergence is that in each row of the matrix of coefficients [a] the absolute value of the diagonal element is greater than the sum of the absolute values of the off-diagonal elements:

|aii| > Σ |aij| ,   with the sum taken over j = 1, 2, …, n, j ≠ i

This condition is sufficient but not necessary for convergence when an iterative method is used. When the condition is satisfied, the matrix [a] is classified as diagonally dominant, and the iteration process converges toward the solution. The iterations, however, might converge even when the condition is not satisfied.
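As a small illustration, the sufficient condition can be checked row by row in MATLAB; the function name isDiagDominant (saved, for example, as isDiagDominant.m) is an illustrative assumption.

function tf = isDiagDominant(a)
% Returns true if every row of a satisfies |a_ii| > sum of |a_ij|, j ~= i.
d  = abs(diag(a));                % absolute values of the diagonal elements
s  = sum(abs(a), 2) - d;          % row sums of the off-diagonal absolute values
tf = all(d > s);
end

For example, isDiagDominant([15 2 -1; 2 12 1; 1 2 8]) returns true for the Jacobi sample problem given later in this topic.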
Two specific iterative methods for solving the system, the Jacobi and Gauss-Seidel methods, are outlined
below. The key distinction between the two methods lies in how the newly calculated values of the unknowns are
applied. In the Jacobi method, the updated values of the unknowns are not incorporated until the end of each
iteration, meaning that the same set of estimated values is used throughout the entire iteration. In contrast, the
Gauss-Seidel method updates the value of each unknown immediately as it is calculated, and these updated
values are then used in the computation of the remaining unknowns within the same iteration.

1. Jacobi Iterative Method


The Jacobi Iterative Method is a straightforward iterative technique for solving a system of linear
equations, particularly useful when the system can be written in the form Ax = b. It is an improvement over direct
methods like Gaussian elimination in certain cases, especially for large systems.
The Jacobi method works by solving for each unknown in terms of the others, but in each iteration, it
updates all unknowns simultaneously using values from the previous iteration. The method assumes that the
system of equations can be written in a form where each equation is solved for one variable explicitly.

Step-by-Step Process
For a system of equations Ax = b, where A is a square matrix, x is the vector of unknowns, and b is the
vector of constants, the system can be expressed as:
a11x1 + a12x2 + ⋯ + a1nxn = b1
a21x1 + a22x2 + ⋯ + a2nxn = b2
⋮
an1x1 + an2x2 + ⋯ + annxn = bn

In the Jacobi method, each equation is rearranged so that each unknown is expressed as a function of
the other unknowns. For example, for a system of two equations, it would look like:

x1(k+1) = [ b1 − a12 x2(k) ] / a11

x2(k+1) = [ b2 − a21 x1(k) ] / a22

Here:
• x1(k) and x2(k) are the values of the unknowns at the k-th iteration.
• x1(k+1) and x2(k+1) represent the updated values after the next iteration.
Iterative Process:
1. Start with an initial guess for the values of the unknowns, say x1(0),x2(0),…,xn(0).
2. For each iteration k, compute the new values of the unknowns using the formulae derived from the system,
based on the previous iteration’s values.
3. Continue the iterations until the changes between successive approximations are small enough, i.e., until
the method converges.

Convergence Criteria:
The method is guaranteed to converge under certain conditions:
• If the matrix A is diagonally dominant, meaning that for each row i, |aii| > ∑j≠i |aij|, the Jacobi method will converge to the correct solution.
• In the context of the Gauss-Jacobi method, a diagonally dominant matrix is a matrix where the absolute value of the diagonal entry in each row is greater than the sum of the absolute values of the other (non-diagonal) entries in that row; refer to the sample problem below, where 15 > (2 + 1), 12 > (2 + 1), and 8 > (1 + 2).
• For other types of matrices, convergence may not be guaranteed, and further analysis is required to determine if the method will work.

Sample Problem:
Using the Gauss-Jacobi iteration method, solve the following system of linear algebraic equations.
15x + 2y – z = -200
2x + 12y + z = -250
x + 2y + 8z = 30

Solution:

x = [-200 - (2y - z) ] / 15
y = [-250 - (2x + z) ] / 12
z = [30 - (x + 2y) ] / 8

The solution converges at N = 10, thus x = -10, y = -20, z = 10.
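A minimal MATLAB sketch of the Jacobi iteration for this system is given below; the variable names and the fixed count of 10 iterations are illustrative assumptions.

% Jacobi iteration for the sample system (sketch).
A = [15 2 -1; 2 12 1; 1 2 8];
b = [-200; -250; 30];
x = zeros(3,1);                          % initial guess: all zeros
for k = 1:10
    xold = x;                            % Jacobi uses only the previous iteration's values
    for i = 1:3
        j = [1:i-1, i+1:3];              % indices of the off-diagonal terms
        x(i) = (b(i) - A(i,j)*xold(j)) / A(i,i);
    end
end
disp(x')                                 % approaches x = -10, y = -20, z = 10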

The result can also be verified in Matrixcalc.org using the Gauss elimination method.

2. Gauss-Seidel Method
In the Gauss-Seidel method, initial (first) values are assumed for the unknowns x2, x3, …, xn (all of the unknowns except x1). If no information is available regarding the approximate values of the unknowns, the initial value of all the unknowns can be assumed to be zero. The first assumed values of the unknowns are substituted into the explicit equation with i = 1 to calculate the value of x1. Next, i = 2 is used for calculating a new value for x2. This is followed by using i = 3 for calculating a new value for x3. The process continues until i = n, which is the end of the first iteration. In the Gauss-Seidel method, the current values of the unknowns are used for calculating the new value of the next unknown. In other words, as a new value of an unknown is calculated, it is immediately used in the next calculation. In the Jacobi method, the values of the unknowns obtained in one iteration are used as a complete set for calculating the new values of the unknowns in the next iteration. The values of the unknowns are not updated in the middle of the iteration.

The Gauss-Seidel method gives the iteration formula:

x1(k) = [ b1 − ( a12 x2(k−1) + a13 x3(k−1) + … + a1n xn(k−1) ) ] / a11
x2(k) = [ b2 − ( a21 x1(k) + a23 x3(k−1) + … + a2n xn(k−1) ) ] / a22
⋮
xn(k) = [ bn − ( an1 x1(k) + an2 x2(k) + … + an,n−1 xn−1(k) ) ] / ann

These equations can be written in a summation form. Hence, for any row i:

xi(k) = [ bi − Σ aij xj(k) (for j = 1, …, i − 1) − Σ aij xj(k−1) (for j = i + 1, …, n) ] / aii ,   i = 1, 2, …, n
Now to find xi’s, one assumes an initial guess for the xi’s and then uses the rewritten equations to calculate the
new estimates. Remember, one always uses the most recent estimates to calculate the next estimates, xi. At the
end of each iteration, one calculates the absolute relative approximate error for each xi as:

Error = | (xi,new − xi,old) / xi,new | × 100

where xi,new is the recently obtained value of xi, and xi,old is the previous value of xi.
When the absolute relative approximate error for each xi is less than the pre-specified tolerance, the iterations are
stopped.

The criterion for stopping the iterations is the same as in the Jacobi method. The Gauss-Seidel method generally converges faster than the Jacobi method and requires less computer memory when programmed.
Note: The Gauss-Seidel method can still be performed even if the matrix is not diagonally dominant. However, convergence is not guaranteed in such cases.

Sample Problem:
Use the Gauss-Seidel method to solve the following linear equations with the initial guess x(0) = [ 0 0 0 ]T. Perform 3 iterations.
45x1 + 2x2 + 3x3 = 58
-3x1 + 22x2 + 2x3 = 47
5x1 + x2 + 20x3 = 67

Solution:
Check if it is diagonally dominant.
45 > 2 + 3
22 > 3 + 2
20 > 5 + 1

Initial guess: x(0) = [ x1(0) x2(0) x3(0) ]T = [ 0 0 0 ]T

First Iteration, k = 1 (so k − 1 = 0)

x1(1) = [ 58 − ( 2x2(0) + 3x3(0) ) ] / 45 = [ 58 − ( 2(0) + 3(0) ) ] / 45 = 1.2889
x2(1) = [ 47 − ( −3x1(1) + 2x3(0) ) ] / 22 = [ 47 − ( −3(1.2889) + 2(0) ) ] / 22 = 2.3121
x3(1) = [ 67 − ( 5x1(1) + x2(1) ) ] / 20 = [ 67 − ( 5(1.2889) + 2.3121 ) ] / 20 = 2.9122

x(1) = [ 1.2889   2.3121   2.9122 ]T

Second Iteration, k = 2 (so k − 1 = 1)

x1(2) = [ 58 − ( 2x2(1) + 3x3(1) ) ] / 45 = [ 58 − ( 2(2.3121) + 3(2.9122) ) ] / 45 = 0.9920
x2(2) = [ 47 − ( −3x1(2) + 2x3(1) ) ] / 22 = [ 47 − ( −3(0.9920) + 2(2.9122) ) ] / 22 = 2.0069
x3(2) = [ 67 − ( 5x1(2) + x2(2) ) ] / 20 = [ 67 − ( 5(0.9920) + 2.0069 ) ] / 20 = 3.0017

x(2) = [ 0.9920   2.0069   3.0017 ]T

Third Iteration, k = 3 (so k − 1 = 2)

x1(3) = [ 58 − ( 2x2(2) + 3x3(2) ) ] / 45 = [ 58 − ( 2(2.0069) + 3(3.0017) ) ] / 45 = 0.9996
x2(3) = [ 47 − ( −3x1(3) + 2x3(2) ) ] / 22 = [ 47 − ( −3(0.9996) + 2(3.0017) ) ] / 22 = 1.9998
x3(3) = [ 67 − ( 5x1(3) + x2(3) ) ] / 20 = [ 67 − ( 5(0.9996) + 1.9998 ) ] / 20 = 3.0001

x(3) = [ 0.9996   1.9998   3.0001 ]T
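A minimal MATLAB sketch of these three Gauss-Seidel iterations is given below; the variable names are illustrative assumptions. The only change from the Jacobi sketch is that each newly computed value is used immediately within the same iteration.

% Gauss-Seidel iteration for the sample system (sketch).
A = [45 2 3; -3 22 2; 5 1 20];
b = [58; 47; 67];
x = zeros(3,1);                          % initial guess x(0) = [0 0 0]'
for k = 1:3
    xold = x;
    for i = 1:3
        j = [1:i-1, i+1:3];
        x(i) = (b(i) - A(i,j)*x(j)) / A(i,i);   % x already holds the newest values
    end
    err = abs((x - xold) ./ x) * 100;    % absolute relative approximate error, in percent
    fprintf('k = %d:  %.4f  %.4f  %.4f\n', k, x);
end

After k = 3 the printed values are approximately 0.9996, 1.9998, and 3.0001, matching the hand computation above; the err vector can be compared against a pre-specified tolerance to decide when to stop iterating.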

ACTIVITY SHEET NO. 3 Topic: ________________________________________________

SUBJECT: CE 323 - Numerical Solutions to CE Problems DATE SUBMITTED: _____________________

Name: ______________________________________________ Program/ Year/ Section _________________

Instructor’s Name: __________________________________________________ Score: _________________

Solve the following problems and write your solutions on a separate sheet of paper. Each problem is worth five (5)
points.

1. Solve the following system of four equations using Gauss elimination method.
4x1 - 2x2 - 3x3 + 6x4 = 12
-6x1 + 7x2 + 6.5x3 - 6x4 = -6.5
x1 + 7.5x2 + 6.25x3 + 5.5x4 = 16
-12x1 + 22x2 + 15.5x3 - x4 = 17

2. Use the Gauss Elimination Method to solve the system of equations:


x + 2y + z = 3
2x + 3y + 3z = 7
3x + 5y + 4z = 10

3. Perform Gauss Elimination to solve the system of equations:


x+y+z=6
2x + 3y + z = 11
3x + 2y + 2z = 13

4. Solve the system of equations:


0x - 2y + 3z = 6
4x + 2y + z = 5
2x - y + 3z = 4
a) Gauss Elimination with Row Pivoting
b) Gauss-Jordan Elimination

5. Use Gauss Elimination with pivoting to solve the system of equations:


2x + 3y - z = 5
4x + 4y - 3z = 3
-2x + 3y - z = 1

6. Prove using Gauss-Jordan Elimination that these linear equations have no solution:
4x1 - 2x2 - x3 + x4 = 4
-2x1 + 2x2 + x3 - x4 = -6
x1 + 2x2 + 2x3 + 2x4 = 3
-2x1 + 2x2 + x3 - x4 = 3

7. Solve the following set of four equations using Gauss-Jordan Elimination method.
4x1 - 2x2 - 3x3 + 6x4 = 12
-6x1 + 7x2 + 6.5x3 - 6x4 = -6.5
x1 + 7.5x2 + 6.25x3 + 5.5x4 = 16
-12x1 + 22x2 + 15.5x3 - x4 = 17

8. Solve the following system of linear algebraic equations using:
a. Gauss Jacobi Iteration method
b. Gauss-Seidel Iteration method - perform in 4 iterations

Linear Equations
19x − 13y + 4z = 111
4x + 22y − 13z = −128
8x + 8y + 17z = 10

ANSWER KEY

1. x1 = 2, x2 = 4, x3 = -3, x4 = 0.5

2. x = 5-3z, y = z-1, where z is a parameter

3. x = 1, y = 2, z = 3

4. x1 = -25/4, x2 = 21/2, x3 = 9

5. x1 = 1, x2 = 2, x3 = 3

6. No solution

7. x1 = 2, x2 = 4, x3 = -3, x4 = 0.5

8. a) x1 = 2, x2 = -5, x3 = 2
b) x1 = 2.118, x2 = -5.1311, x3 = 2.0062
