
NUMERICAL METHODS

MA 311
MODULE 2: LINEAR ALGEBRAIC EQUATIONS
Musango Lungu, D Eng

School of Mines and Mineral Sciences


Chemical Engineering Department
Copyright © 2023 M.L.
LINEAR ALGEBRAIC EQUATIONS
• Several physical operations (most notably, staged operations) can be
represented in terms of a set of N coupled linear algebraic equations, in N
unknowns, having the following form:

a11 x1 + a12 x2 + ... + a1N xN = b1
a21 x1 + a22 x2 + ... + a2N xN = b2
⋮
aN1 x1 + aN2 x2 + ... + aNN xN = bN    (1)
• A stage is defined as any device or combination of devices in which two
phases are brought into intimate contact, where mass transfer occurs
between the phases tending to bring them into equilibrium and where
the phases are then mechanically separated.
• A process carried out this way is called a single stage process whilst a
group of stages interconnected so that the various streams flow from one
to the other is called a cascade.
• Typical staged operations include leaching, distillation, gas absorption,
etc.
• Such equations are also obtained when numerical techniques such as
finite difference, orthogonal collocation, finite elements etc are applied
to sets of ODEs or PDEs.
• It is important to develop efficient techniques for solving these equations.
• Equation 1 can be written in vector-matrix notation as

A x = b    (2)

where

x = [x1, x2, ..., xN]^T,  b = [b1, b2, ..., bN]^T,  A = [aij]    (3)

• T represents the transpose, and the (i,j) term (ith row and jth column) in
A is aij.
• The solution to equation 2 can be written using Cramer's rule as

xj = |Aj| / |A|,  j = 1, 2, ..., N    (4)

• where |A| represents the determinant of matrix A.
• Matrix Aj is obtained by replacing the jth column of A by the column
vector, b.
• Equation 2 has a unique solution ONLY IF |A| ≠ 0.
• If b = 0 (homogeneous equations), then all |Aj| = 0 and all xj = 0 (trivial
solution).
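Cramer's rule maps directly into code. The following Python sketch is an illustration, not part of the original slides: it uses a cofactor-expansion determinant, and the sample diagonally dominant 3×3 system in the test is assumed here for illustration.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (fine for small N)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_j = |A_j| / |A|."""
    d = det(A)
    if d == 0:
        raise ValueError("A is singular; no unique solution")
    x = []
    for j in range(len(b)):
        # A_j: replace the jth column of A by the column vector b
        Aj = [row[:j] + [bi] + row[j + 1:] for row, bi in zip(A, b)]
        x.append(det(Aj) / d)
    return x

# Sample diagonally dominant system (coefficients assumed for illustration)
A = [[10.0, 3.0, 1.0],
     [3.0, 10.0, 2.0],
     [1.0, 2.0, 10.0]]
b = [19.0, 29.0, 35.0]
print(cramer(A, b))  # → [1.0, 2.0, 3.0]
```

The cofactor expansion is only practical for small N; its cost grows factorially, which is exactly the motivation for Gauss elimination later in this module.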
• Example 1
Solve the following set of linear equations using Cramer's rule:

10 x1 + 3 x2 + x3 = 19
3 x1 + 10 x2 + 2 x3 = 29
x1 + 2 x2 + 10 x3 = 35

• Solution
Step 1: Calculate the determinant of A

|A| = 10(100 - 4) - 3(30 - 2) + 1(6 - 10) = 872

• Step 2: Calculate the determinants of the Ai's,
where Ai is the matrix formed by replacing the ith column of A by the
column vector b.

|A1| = 872,  |A2| = 1744,  |A3| = 2616

• Step 3: Compute the unknown xi's

x1 = |A1|/|A| = 872/872 = 1
x2 = |A2|/|A| = 1744/872 = 2
x3 = |A3|/|A| = 2616/872 = 3
• Two possibilities arise if A is singular, as illustrated by the next two
examples.
Example 2
Solve (a) (b)

Solution
• A is singular since |A| = 0.
• (a) In this case, it is observed that the two equations are in fact identical.
• A situation with more variables than equations leads to infinitely many
solutions.
• Uniqueness is no longer present.
• (b) In this case the two equations are incompatible:
no solution exists.
• For the general case of M linear algebraic equations in N unknowns:

A x = b    (5)

• where x = [x1, ..., xN]^T, b = [b1, ..., bM]^T and A is an M × N matrix.
An augmented matrix, aug A, is formed as given below:
aug A = [A | b]    (6)

i.e., the matrix formed by appending the column vector b to A as an extra
column.

• The ranks of A and aug A are first determined.
• The necessary and sufficient condition for a solution of Eq 5 to exist is that
the rank of A be the same as the rank of aug A.
• If the necessary and sufficient condition is satisfied and r is the rank of
both A and aug A, then the following deductions can be made:
• if r = N, a unique solution exists;
• if r < N, an infinite number of solutions exists, with N - r of the
unknowns assignable arbitrarily.
• If b = 0 (homogeneous equations), then nontrivial solutions are obtained
if and only if the rank of A is less than N.
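The rank test above can be sketched in Python (an illustration, not the course's own code). The rank is computed by Gaussian elimination with partial pivoting; the tolerance `tol` and the helper names are assumptions.

```python
def rank(M, tol=1e-12):
    """Rank of a matrix via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    nrows, ncols = len(A), len(A[0])
    r = 0  # current pivot row
    for c in range(ncols):
        # pick the largest pivot candidate in column c
        p = max(range(r, nrows), key=lambda i: abs(A[i][c]))
        if abs(A[p][c]) < tol:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, nrows):
            f = A[i][c] / A[r][c]
            for j in range(c, ncols):
                A[i][j] -= f * A[r][j]
        r += 1
        if r == nrows:
            break
    return r

def solvable(A, b):
    """Eq 5 has a solution iff rank A == rank(aug A)."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(aug)

print(solvable([[1.0, 2.0], [2.0, 4.0]], [3.0, 6.0]))  # consistent → True
print(solvable([[1.0, 2.0], [2.0, 4.0]], [3.0, 7.0]))  # incompatible → False
```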
GAUSS ELIMINATION and LU DECOMPOSITION
• Suppose we are required to solve N linear simultaneous algebraic
equations in N unknowns.
• Unfortunately, Cramer's rule requires excessive calculations for such a
system.
• It is estimated that a number of multiplications and divisions growing
factorially with N (of the order of (N + 1)!) is required to obtain the
solution.
• Several efficient techniques have been proposed to tackle systems with large
sets of equations.
• These are classified into direct (Gauss elimination, Gauss-Jordan
elimination, Cholesky's or Crout's method, etc.) and indirect or iterative
(i.e. Jacobi, Gauss-Seidel and relaxation) methods.
• Gauss elimination is a relatively simple technique for solving a system of N
linear algebraic equations in N unknowns:

a11^(1) x1 + a12^(1) x2 + ... + a1N^(1) xN = b1^(1)
a21^(1) x1 + a22^(1) x2 + ... + a2N^(1) xN = b2^(1)
⋮
aN1^(1) x1 + aN2^(1) x2 + ... + aNN^(1) xN = bN^(1)    (7)
• Superscripts (1) indicate starting (1st iterate) values.
• A set of row operations (multiplication of an equation by a constant,
adding two equations, etc., which do not alter the solutions) is performed
on the N equations.
• The 1st row of A^(1) and b^(1) (corresponding to the 1st equation of the set) is
multiplied by (-a21^(1)/a11^(1)) and added to the second row, to eliminate the
(2,1) term.
• Similarly, the first row is multiplied by (-a31^(1)/a11^(1)) and added to the 3rd
row to eliminate the (3,1) term, etc. This leads to:

| a11^(1)  a12^(1)  ...  a1N^(1) |       | b1^(1) |
| 0        a22^(2)  ...  a2N^(2) |  x  = | b2^(2) |    (8)
| ⋮        ⋮             ⋮       |       | ⋮      |
| 0        aN2^(2)  ...  aNN^(2) |       | bN^(2) |
• It can easily be verified that these operations are described by the following
equations:

a_ij^(k) = a_ij^(k-1) - (a_i,k-1^(k-1) / a_k-1,k-1^(k-1)) a_k-1,j^(k-1)    (9a)

b_i^(k) = b_i^(k-1) - (a_i,k-1^(k-1) / a_k-1,k-1^(k-1)) b_k-1^(k-1)    (9b)

• i = 2, 3, ..., N
• j = 2, ..., N
• with k = 2.
• The unnecessary operations for j = 1 need not be performed, since the end
result is zero. Thus:

a_i1^(2) = 0,  i = 2, 3, ..., N    (10)

• The next stage attempts to eliminate the (3,2), (4,2), ..., (N,2) terms
from A^(2).
• This is done by successively multiplying the second row by
(-a32^(2)/a22^(2)), (-a42^(2)/a22^(2)), ..., (-aN2^(2)/a22^(2)) and adding it to the 3rd, 4th, ...,
Nth rows. This leads to:

| a11^(1)  a12^(1)  a13^(1)  ...  a1N^(1) |       | b1^(1) |
| 0        a22^(2)  a23^(2)  ...  a2N^(2) |       | b2^(2) |
| 0        0        a33^(3)  ...  a3N^(3) |  x  = | b3^(3) |    (11)
| ⋮        ⋮        ⋮             ⋮       |       | ⋮      |
| 0        0        aN3^(3)  ...  aNN^(3) |       | bN^(3) |
• It can again be verified that these operations are described by Eq 9 with
i = 3, 4, ..., N
j = 3, 4, ..., N    (12)
k = 3
• This sequence is continued until all the lower triangular elements are
eliminated, to give finally:
U x = b^(N)    (13)

• where U is the upper triangular matrix A^(N):

    | a11^(1)  a12^(1)  ...  a1N^(1) |
U = | 0        a22^(2)  ...  a2N^(2) |
    | ⋮                 ⋱    ⋮       |
    | 0        0        ...  aNN^(N) |
• The solution x can now easily be obtained sequentially in the reverse
direction (backward sweep), since the last equation involves only xN, the one
before it xN and xN-1, etc. Thus we have

xN = bN^(N) / aNN^(N)

and

x_i = ( b_i^(i) - Σ_{j=i+1}^{N} a_ij^(i) x_j ) / a_ii^(i),  i = N-1, N-2, ..., 1    (14)
Example 3 : Solve using Gauss elimination with backward sweep
Example 4
• Solve Example 1 using the Gauss elimination with backward sweep
method

Solution
Step 1: Form augmented matrix :
LINEAR ALGEBRAIC EQUATIONS

Step 2: Row 1 operation with a11 (10) as pivot element


• Multiply row 1 by -3/10 and add to the 2nd row, to eliminate the (2,1)
term.
• Multiply row 1 by -1/10 and add to the 3rd row, to eliminate the (3,1) term.
• The resulting matrix after this stage is:

| 10  3      1      | 19     |
| 0   91/10  17/10  | 233/10 |
| 0   17/10  99/10  | 331/10 |
Step 3: Row 2 operation with a22 (91/10) as pivot element

• Multiply row 2 by -17/91 and add to the 3rd row, to eliminate the (3,2)
term.
• The resulting matrix is now:

| 10  3      1       | 19      |
| 0   91/10  17/10   | 233/10  |
| 0   0      872/91  | 2616/91 |
• The elimination STOPS here since we have eliminated all the terms below
the matrix diagonal.
Step 4: Backward sweep (backward substitution)

x3 = (2616/91) / (872/91) = 3
x2 = (233/10 - (17/10)(3)) / (91/10) = 2
x1 = (19 - 3(2) - 1(3)) / 10 = 1
MATLAB code for the Gauss elimination method
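The MATLAB listing itself did not survive the conversion of these slides. The following Python sketch (an illustration, not the original code) carries out the same forward elimination (Eqs 9-13) and backward sweep (Eq 14), without pivoting; the test system's coefficients are assumed for illustration.

```python
def gauss_eliminate(A, b):
    """Solve A x = b by naive Gauss elimination (no pivoting) plus backward sweep."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # forward elimination: zero out the sub-diagonal, column by column
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]          # multiplying factor
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # backward sweep (back substitution)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Sample system (coefficients assumed for illustration)
print(gauss_eliminate([[10.0, 3.0, 1.0],
                       [3.0, 10.0, 2.0],
                       [1.0, 2.0, 10.0]],
                      [19.0, 29.0, 35.0]))
```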
• The Gauss elimination technique cannot work if the diagonal or pivot
element, i.e. a_kk^(k), becomes zero or close to zero at any stage.
• This can happen even when A is nonsingular.
• A workaround for this challenge is to exchange rows (partial pivoting) and
then continue with the procedure.
Example 5 : Solve by Gauss elimination method and backward sweep

• The 2nd iterate is :


• The elimination cannot proceed since the pivot element a22^(2) = 0.


• Interchanging the 2nd and 3rd equations now gives :

• The 3rd iteration is not needed since the (3,2) element is already zero.


• Solution is
• In many situations we need to solve equations of the form A x = b for
several different b vectors with the same A.
• In such circumstances it is efficient to perform the Gauss elimination on A
once only and store the elements.
• A common technique to realize this is the LU decomposition.
• U is the upper triangular matrix corresponding to the Gauss elimination
of A.
• L is the lower triangular matrix comprising the multiplying factors used
in Gauss elimination, defined as:
    | 1                  0                  0  ...  0 |
    | a21^(1)/a11^(1)    1                  0  ...  0 |
L = | a31^(1)/a11^(1)    a32^(2)/a22^(2)    1  ...  0 |    (15)
    | ⋮                  ⋮                     ⋱    ⋮ |
    | aN1^(1)/a11^(1)    aN2^(2)/a22^(2)  ...  ...  1 |

• Note that the negatives of the multiplying factors are used in L and the
diagonal elements are unity.
• It can easily be confirmed that L U = A.
• A new vector y is defined such that:
L y = b    (16a)
U x = y    (16b)

• Note that the original b vector is used and that A x = L U x = L y = b.
• A forward sweep gives y, and this is used in turn to compute x from the
backward sweep.
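The decomposition and the two sweeps can be sketched as follows (a Python illustration, not the course's MATLAB code; the sample coefficients are assumptions). The point of the factorization is that `lu_decompose` runs once, while `lu_solve` can be reused for each new b.

```python
def lu_decompose(A):
    """LU factorization as in Eq 15: L has a unit diagonal and stores the
    multiplying factors; U is the Gauss-elimination upper triangle."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f                    # positive multiplying factor
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward sweep L y = b (Eq 16a), then backward sweep U x = y (Eq 16b)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Factor once, then reuse for several right-hand sides (system assumed for illustration)
L, U = lu_decompose([[10.0, 3.0, 1.0],
                     [3.0, 10.0, 2.0],
                     [1.0, 2.0, 10.0]])
print(lu_solve(L, U, [19.0, 29.0, 35.0]))
```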
Example 6
• Solve Example 1 using the LU decomposition method
Solution
From the Gauss elimination method in Example 4:

    | 1     0      0 |        | 10  3      1      |
L = | 3/10  1      0 |    U = | 0   91/10  17/10  |
    | 1/10  17/91  1 |        | 0   0      872/91 |

• The forward sweep L y = b gives

y1 = 19
y2 = 29 - (3/10)(19) = 233/10
y3 = 35 - (1/10)(19) - (17/91)(233/10) = 2616/91

• The backward sweep U x = y gives

x3 = (2616/91)/(872/91) = 3
x2 = (233/10 - (17/10)(3))/(91/10) = 2
x1 = (19 - 3(2) - 1(3))/10 = 1
Example 7: Solve Example 1 using the Gauss-Jordan method

Solution
Step 1: Form the augmented matrix:

| 10  3   1  | 19 |
| 3   10  2  | 29 |
| 1   2   10 | 35 |
Step 2
(i) Normalize the pivot element, a11, in row 1, R1, by dividing throughout by
a11 (10).
(ii) Eliminate a21 and a31 by multiplying R1 by -3 and -1 and adding it
to R2 and R3 respectively.
The resulting matrix is:

| 1  3/10   1/10  | 19/10  |
| 0  91/10  17/10 | 233/10 |
| 0  17/10  99/10 | 331/10 |
Step 3
(i) Normalize the pivot element, a22, in row 2, R2, by dividing throughout by
a22 (91/10).
(ii) Eliminate a12 and a32 by multiplying R2 by -3/10 and -17/10 and
adding it to R1 and R3 respectively.
The resulting matrix is now:

| 1  0  4/91   | 103/91  |
| 0  1  17/91  | 233/91  |
| 0  0  872/91 | 2616/91 |
Step 4
(i) Normalize the pivot element, a33, in row 3, R3, by dividing throughout by
a33 (872/91).
(ii) Eliminate a13 and a23 by multiplying R3 by -4/91 and -17/91 and
adding it to R1 and R2 respectively.
The resulting matrix is now:

| 1  0  0 | 1 |
| 0  1  0 | 2 |
| 0  0  1 | 3 |
• The (i,4); i = 1, 2, 3 elements give the solution vector.
• In this case it is 1, 2 and 3.
• It can also be shown that applying the same Gauss-Jordan operations to an
appended identity matrix yields the inverse, A^-1.
• MATLAB code for the Gauss-Jordan elimination method
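As with the other listings, the MATLAB code did not survive extraction. A Python sketch of Gauss-Jordan elimination (an illustration, not the original code): each pivot row is normalized and the pivot column is eliminated both above and below the diagonal, as in Steps 2-4; the sample system's coefficients are assumed.

```python
def gauss_jordan(A, b):
    """Solve A x = b by Gauss-Jordan elimination on the augmented matrix."""
    n = len(b)
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = aug[k][k]                      # pivot (assumed nonzero)
        aug[k] = [v / p for v in aug[k]]   # normalize the pivot row
        for i in range(n):
            if i != k:                     # eliminate above AND below the pivot
                f = aug[i][k]
                aug[i] = [v - f * w for v, w in zip(aug[i], aug[k])]
    return [row[-1] for row in aug]        # last column is the solution vector

# Sample system (coefficients assumed for illustration)
print(gauss_jordan([[10.0, 3.0, 1.0],
                    [3.0, 10.0, 2.0],
                    [1.0, 2.0, 10.0]],
                   [19.0, 29.0, 35.0]))
```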
GAUSS-SEIDEL TECHNIQUE
• It is an approximate iterative technique.
• It is suitable for solving large problems, typically N > 25.
• In this technique, values x1^(0), x2^(0), ..., xN^(0) for all of the N variables are
first assumed (the physics of the problem is usually employed).
• The first equation in Ax = b is rearranged as:

x1^(1) = (b1 - a12 x2^(0) - a13 x3^(0) - ... - a1N xN^(0)) / a11    (17)

to obtain an updated value for x1.
• The second equation uses the most recent values of x in:
x2^(1) = (b2 - a21 x1^(1) - a23 x3^(0) - ... - a2N xN^(0)) / a22    (18)

to obtain an updated value for x2.
• This continues until all N equations have been used, to compute x1^(1), ..., xN^(1).
• The algorithm is thus given by:

x_i^(k+1) = [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k+1) - Σ_{j=i+1}^{N} a_ij x_j^(k) ] / a_ii    (19)
• The computations are continued until some prescribed error criterion
is met, i.e.:

| (x_i^(k+1) - x_i^(k)) / x_i^(k+1) | ≤ ε  for all i    (20)
Example 8: Solve Example 1 using the Gauss-Seidel iterative method


Solution:
Step 1: Rearrange the equations:

x1 = (19 - 3 x2 - x3) / 10
x2 = (29 - 3 x1 - 2 x3) / 10
x3 = (35 - x1 - 2 x2) / 10

Initial guess vector: x^T = [0 0 0]

Iteration    x1        x2        x3
1            1.9000    2.3300    2.8440
2            0.9166    2.0562    2.9971
3            0.9834    2.0056    3.0005
4            0.9983    2.0004    3.0001
5            0.9999    2.0000    3.0000
6            1.0000    2.0000    3.0000

• At the 6th iteration the solution converges, since the relative error is
0.0001 for x1 and 0 for x2 and x3 respectively.
GAUSS-SEIDEL MATLAB CODE
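The MATLAB listing is not reproduced here; the following Python sketch implements the algorithm of Eq 19 with the stopping test of Eq 20 (an illustration, not the original code; the sample system's coefficients are assumed).

```python
def gauss_seidel(A, b, x0=None, eps=1e-4, max_iter=100):
    """Gauss-Seidel iteration (Eq 19) with the relative-error stop test of Eq 20."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            # x is updated in place, so entries j < i are already the newest values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if all(abs((x[i] - x_old[i]) / x[i]) <= eps for i in range(n)):
            return x
    return x

# Sample diagonally dominant system (coefficients assumed for illustration)
print(gauss_seidel([[10.0, 3.0, 1.0],
                    [3.0, 10.0, 2.0],
                    [1.0, 2.0, 10.0]],
                   [19.0, 29.0, 35.0]))
```

Note that the stop test divides by x_i^(k+1), so it assumes the iterates are nonzero; a practical code would guard against that.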
• An adaptation of the Gauss-Seidel technique uses the updated values, x_j^(k+1),
only in the next iteration, i.e.:

x_i^(k+1) = [ b_i - Σ_{j=1, j≠i}^{N} a_ij x_j^(k) ] / a_ii    (21)

• This adaptation is called Jacobi's algorithm.
• The Gauss-Seidel technique is about twice as fast as the Jacobi
technique.
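Eq 21 differs from Eq 19 only in that the right-hand side uses exclusively the previous iterate. A minimal Python sketch (an illustration; the sample coefficients are assumptions):

```python
def jacobi(A, b, x0=None, eps=1e-4, max_iter=100):
    """Jacobi iteration (Eq 21): all right-hand-side values come from the previous iterate."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if all(abs((x_new[i] - x[i]) / x_new[i]) <= eps for i in range(n)):
            return x_new
        x = x_new
    return x

# Sample diagonally dominant system (coefficients assumed for illustration)
print(jacobi([[10.0, 3.0, 1.0],
              [3.0, 10.0, 2.0],
              [1.0, 2.0, 10.0]],
             [19.0, 29.0, 35.0]))
```

Because each sweep uses only old values, the updates are independent of one another, which is why Jacobi parallelizes more naturally than Gauss-Seidel despite converging more slowly.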
• Convergence of the Gauss-Seidel method can be speeded up by using
successive (or systematic) over-relaxation (SOR).
• Using a weight w, the entire vector x^(k+1),GS is first estimated from the
Gauss-Seidel method, and then x_i^(k+1) is calculated using the weighted
average of x_i^(k) and x_i^(k+1),GS:

x_i^(k+1) = w x_i^(k+1),GS + (1 - w) x_i^(k)

• For w = 1, one goes from x_i^(k) to x_i^(k+1),GS in 1 iteration (the method
reduces to Gauss-Seidel).
• For w > 1 (over-relaxation), one overshoots and extrapolates a little beyond
x_i^(k+1),GS to speed up the computation.
• w < 1 (under-relaxation) is used for systems that do not converge with
the Gauss-Seidel method.
• Usually 1 < w < 2 is used, but the optimal choice depends on the specific
problem.
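The SOR update can be sketched as follows (a Python illustration; the choice w = 1.1 and the sample coefficients are assumptions, not values from the slides):

```python
def sor(A, b, w=1.2, x0=None, eps=1e-4, max_iter=100):
    """SOR: blend the Gauss-Seidel update with the previous iterate using weight w."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            gs = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            x[i] = w * gs + (1 - w) * x[i]   # weighted average of GS value and old value
        if all(abs((x[i] - x_old[i]) / x[i]) <= eps for i in range(n)):
            return x
    return x

# Sample diagonally dominant system (coefficients and w assumed for illustration)
print(sor([[10.0, 3.0, 1.0],
           [3.0, 10.0, 2.0],
           [1.0, 2.0, 10.0]],
          [19.0, 29.0, 35.0], w=1.1))
```

With w = 1 this reduces exactly to the Gauss-Seidel sweep, which is an easy way to sanity-check an implementation.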
Exercise
(a) Repeat Example 1 using (i) Jacobi iterations (ii) SOR method with 6
iterations and compare solution with Gauss-Seidel method.
(b) Solve the following problems from the text Numerical Methods for
Engineers by Santosh Gupta: 1.4, 1.12 , 1.15 and 1.19 on Pages 21-24.

END OF MODULE 2
