CHE 411 Lesson 11 Note

The document discusses numerical methods for solving equations, including the Jacobi and Gauss-Seidel iterative methods. The Jacobi method involves iteratively solving equations to obtain better approximations to the solution. The Gauss-Seidel method is similar but uses the most recent approximations in each new iteration. Both methods will converge to the correct solution when applied to matrices that are strictly diagonally dominant. Examples are provided to demonstrate solving systems of equations using Gauss-Seidel.


OTHER METHODS OF SOLUTION
• Other analytical methods
• Numerical methods

CONTENTS
• Preliminary
• Some Other Analytical Methods
• Numerical Methods: the Checklists
• Summary

PRELIMINARY
• In this section we consider other methods of solution to our modelled problems.
• We highlight what remains to be taught and look at the areas of concentration.
• The most important analytical methods highlighted in the course outline have been considered; the remaining ones are to be reviewed by the students.
• Some numerical methods that have not been taught in the previous courses are presented here.
• Students are advised to review those that have been covered in the pre-requisite courses.

OTHER ANALYTICAL METHODS – THE CHECKLIST
• MATRIX METHODS
  This method has been covered extensively in the previous course (CHE 305).
• SERIES SOLUTIONS
  Bessel and Legendre equations with variable coefficients; Sturm-Liouville boundary value problems.
• USE OF OPERATORS IN THE SOLUTION OF PDEs AND LINEAR INTEGRAL EQUATIONS
  These methods were adequately covered in CHE 306. Students should review them.

NUMERICAL METHODS – THE CHECKLIST
• Fixed-point iteration – covered in CHE 306
• Bisection method – covered in CHE 306
• Gauss-Seidel – to be reviewed here
• Newton-Raphson – covered in CHE 306
• Difference operators – covered in CHE 306
• Forward, central and backward differences – covered in CHE 306

THE JACOBI AND GAUSS-SEIDEL ITERATIVE METHODS

Focus
After this section, students should be able to:
• solve a set of equations using the Jacobi and Gauss-Seidel methods,
• recognize the advantages and pitfalls of the Jacobi and Gauss-Seidel methods, and
• determine under what conditions the methods always converge.

JACOBI AND GAUSS-SEIDEL METHODS

Motivation: why do we need another method to solve a set of simultaneous linear equations?
• In certain cases, such as when a system of equations is large, iterative methods of solving equations are more advantageous.
• Elimination methods, such as Gaussian elimination, are prone to large round-off errors for a large set of equations.
• Iterative methods, such as the Jacobi and Gauss-Seidel methods, give the user control of the round-off error.
• Also, if the physics of the problem is well known, the initial guesses needed in iterative methods can be made more judiciously, leading to faster convergence.

THE JACOBI METHOD

Two assumptions are made on the Jacobi method:
• The system given by

  a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
  a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
  \vdots
  a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n        (252)

  has a unique solution.
• The coefficient matrix A has no zeros on its main diagonal, namely, a_{11}, a_{22}, ..., a_{nn} are nonzero.

THE JACOBI METHOD: MAIN IDEA

• To begin, solve the 1st equation for x_1, the 2nd equation for x_2, and so on, to obtain the rewritten equations:

  x_1 = \frac{1}{a_{11}} (b_1 - a_{12} x_2 - a_{13} x_3 - \cdots - a_{1n} x_n)
  x_2 = \frac{1}{a_{22}} (b_2 - a_{21} x_1 - a_{23} x_3 - \cdots - a_{2n} x_n)
  \vdots
  x_n = \frac{1}{a_{nn}} (b_n - a_{n1} x_1 - a_{n2} x_2 - \cdots - a_{n,n-1} x_{n-1})        (253)

• Then make an initial guess of the solution, x^{(0)} = (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)}).
• Substitute these values into the right-hand side of the rewritten equations to obtain the first approximation, x^{(1)} = (x_1^{(1)}, x_2^{(1)}, ..., x_n^{(1)}). This accomplishes one iteration.

THE JACOBI METHOD: MAIN IDEA

• In the same way, the second approximation x^{(2)} is computed by substituting the first approximation's x-values into the right-hand side of the rewritten equations (253).
• By repeated iterations, we form a sequence of approximations x^{(0)}, x^{(1)}, x^{(2)}, ..., x^{(k)}, ...

THE JACOBI METHOD: MAIN IDEA

The Algorithm
• For each k ≥ 1, generate the components x_i^{(k)} of x^{(k)} from x^{(k-1)} by

  x_i^{(k)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1, j \ne i}^{n} a_{ij} x_j^{(k-1)} \Big),   i = 1, 2, ..., n        (254)

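The following is a minimal Python sketch of the component-wise Jacobi update (254). The function name, the infinity-norm stopping test, and the tolerance/iteration defaults are illustrative choices, not part of the original notes.

import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every x_i^(k) is built only from the previous iterate x^(k-1)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x_old = np.asarray(x0, dtype=float)
    n = len(b)
    for k in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            # sum of a_ij * x_j^(k-1) over j != i, as in Eq. (254)
            s = A[i, :i] @ x_old[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_new[i] = (b[i] - s) / A[i, i]
        # stop when two successive approximations agree to within tol
        if np.linalg.norm(x_new - x_old, np.inf) < tol:
            return x_new, k
        x_old = x_new
    return x_old, max_iter
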
THE JACOBI METHOD: MAIN IDEA

Example
• Apply the Jacobi method to solve the system (255). Continue iterations until two successive approximations are identical when rounded to three significant digits.

Solution
• To begin, rewrite the system in the form (253), solving each equation for its diagonal unknown.
• Choose the initial guess x_1 = 0, x_2 = 0, x_3 = 0.
• Compute the first approximation from the rewritten equations.
• Continuing the iterations, we obtain the values tabulated below.
[Iteration table: components x_n for k = 0, 1, 2, 3, ..., 6; values not reproduced]

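Since the system (255) and its iteration table are not reproduced above, the short usage sketch below drives the jacobi routine from the previous sketch on a hypothetical, strictly diagonally dominant 3 x 3 system; the numbers are illustrative only and are not the data of this example.

import numpy as np

# Hypothetical diagonally dominant system (not the system (255) of the notes)
A = [[10.0, -1.0,  2.0],
     [-1.0, 11.0, -1.0],
     [ 2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]

x, iters = jacobi(A, b, x0=[0.0, 0.0, 0.0], tol=5e-4)
print(f"Converged after {iters} iterations: x = {np.round(x, 3)}")
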
THE JACOBI METHOD: IN MATRIX FORM

• Consider solving an n x n system of linear equations Ax = b with A = [a_{ij}], x = (x_1, ..., x_n)^T and b = (b_1, ..., b_n)^T.
• We split A into A = D - L - U, where D is the diagonal part of A, -L is its strictly lower-triangular part and -U is its strictly upper-triangular part.
• Then Ax = b is transformed into (D - L - U)x = b, that is, Dx = (L + U)x + b.

THE JACOBI METHOD: IN MATRIX FORM

• The matrix form of the Jacobi iterative method is

  x^{(k)} = D^{-1}(L + U) x^{(k-1)} + D^{-1} b,   k = 1, 2, ...

• The Jacobi iteration method can also be written as x^{(k)} = T_J x^{(k-1)} + c_J, with T_J = D^{-1}(L + U) and c_J = D^{-1} b.

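A minimal sketch of the same iteration in matrix form, assuming the D - L - U splitting above; the helper name and stopping test are again illustrative choices.

import numpy as np

def jacobi_matrix_form(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi in matrix form: x^(k) = D^{-1}(L + U) x^(k-1) + D^{-1} b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D_diag = np.diag(A)                  # diagonal entries of A
    L = -np.tril(A, -1)                  # chosen so that A = D - L - U
    U = -np.triu(A, 1)
    T = (L + U) / D_diag[:, None]        # iteration matrix T_J = D^{-1}(L + U)
    c = b / D_diag                       # constant vector c_J = D^{-1} b
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = T @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
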
THE JACOBI METHOD: NUMERICAL ALGORITHM

Input: A = [a_{ij}], b, X0 = x^{(0)}, tolerance TOL, maximum number of iterations N.
Step 1: Set k = 1.
Step 2: While k ≤ N, do Steps 3-6.
Step 3:   For i = 1, ..., n, set x_i = (b_i - sum over j ≠ i of a_{ij} X0_j) / a_{ii}.
Step 4:   If ||x - X0|| < TOL, output x and stop.
Step 5:   Set k = k + 1.
Step 6:   Set X0 = x.
Step 7: Output a message that the maximum number of iterations was exceeded, and stop.

THE GAUSS-SEIDEL METHOD: MAIN IDEA

• With the Jacobi method, the values of x_i^{(k)} obtained in the k-th iteration remain unchanged until the entire (k+1)-th iteration has been calculated.
• With the Gauss-Seidel method, we use the new values x_i^{(k+1)} as soon as they are known. For example, once we have computed x_1^{(k+1)} from the first equation, its value is then used in the second equation to obtain the new x_2^{(k+1)}, and so on.

THE GAUSS-SEIDEL METHOD: MAIN IDEA

Example
• Derive the iteration equations for the Jacobi method and the Gauss-Seidel method to solve the given system. [The example system and its iteration equations are not reproduced in this copy of the notes.]

THE GAUSS-SEIDEL METHOD: MAIN IDEA

• The Gauss-Seidel method: for each k ≥ 1, generate the components x_i^{(k)} of x^{(k)} from x^{(k-1)} by

  x_i^{(k)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big),   i = 1, 2, ..., n

THE GAUSS-SEIDEL METHOD: NUMERICAL ALGORITHM

• In matrix form, the Gauss-Seidel method can be written as

  x^{(k)} = (D - L)^{-1} U x^{(k-1)} + (D - L)^{-1} b,   k = 1, 2, ...

• The numerical algorithm of the Gauss-Seidel method follows the same steps as the Jacobi algorithm above, except that Step 3 uses the Gauss-Seidel update, so the newest components x_j (j < i) are used as soon as they are available within the current sweep.

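A matching Python sketch of the Gauss-Seidel sweep, which overwrites each component as soon as it is updated; as before, the function name and the stopping test are illustrative choices rather than part of the original notes.

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel: updated components x_j^(k), j < i, are used immediately."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        x_prev = x.copy()
        for i in range(n):
            s_new = A[i, :i] @ x[:i]            # already-updated values of this sweep
            s_old = A[i, i + 1:] @ x[i + 1:]    # values still from the previous sweep
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            return x, k
    return x, max_iter
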
THE JACOBI AND GAUSS-SEIDEL METHODS: CONVERGENCE THEOREMS OF THE ITERATION METHODS

• Let the iteration method be written as x^{(k)} = T x^{(k-1)} + c, k = 1, 2, ...
• If A is strictly diagonally dominant, then for any choice of x^{(0)}, both the Jacobi and Gauss-Seidel methods give sequences {x^{(k)}} that converge to the unique solution of Ax = b.

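A small helper for checking the hypothesis of this theorem before iterating, assuming the usual definition of strict diagonal dominance (|a_ii| greater than the sum of |a_ij| over j ≠ i, for every row i); the function name is illustrative.

import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum over j != i of |a_ij| for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

If this check returns True, the theorem guarantees convergence of both methods for any starting vector; if it returns False, the iterations may still converge, but convergence is not guaranteed.
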
THE GAUSS-SEIDEL METHOD: EXAMPLES

• The upward velocity of a rocket is given at three different times in Table 1 (velocity vs. time data; the values are not reproduced in this copy of the notes).
• The velocity data are approximated by a second-order polynomial of the form v(t) = a_1 t^2 + a_2 t + a_3.
• Find the values of a_1, a_2 and a_3 using the Gauss-Seidel method. Assume the given initial guess of the solution.

THE GAUSS-SEIDEL METHOD: EXAMPLES

Solution
• The polynomial passes through the three data points (t_1, v_1), (t_2, v_2) and (t_3, v_3) from the table above.
• Requiring v(t) = a_1 t^2 + a_2 t + a_3 to pass through the three data points gives

  a_1 t_1^2 + a_2 t_1 + a_3 = v_1
  a_1 t_2^2 + a_2 t_2 + a_3 = v_2
  a_1 t_3^2 + a_2 t_3 + a_3 = v_3

• Substituting the data (t_1, v_1), (t_2, v_2) and (t_3, v_3) gives a 3 x 3 system of linear equations in the coefficients a_1, a_2 and a_3.
• Rewriting each equation for the unknown on its diagonal gives the Gauss-Seidel iteration equations.

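Because the numerical values of Table 1 and the initial guess are not reproduced above, the sketch below uses hypothetical placeholder data purely to show how this 3 x 3 system would be assembled from v(t) = a_1 t^2 + a_2 t + a_3 and handed to the gauss_seidel routine sketched earlier; none of the numbers are the example's actual data.

import numpy as np

# Hypothetical placeholder data points (t_i, v_i); not the Table 1 values
t = np.array([1.0, 2.0, 3.0])
v = np.array([10.0, 20.0, 35.0])

# Each condition v(t_i) = a1*t_i^2 + a2*t_i + a3 = v_i supplies one row of the system
A = np.column_stack([t**2, t, np.ones_like(t)])
b = v

# Hypothetical initial guess; run only two sweeps, mirroring Iterations #1 and #2 below
a, iters = gauss_seidel(A, b, x0=[1.0, 2.0, 5.0], tol=1e-12, max_iter=2)
print(f"Estimate of (a1, a2, a3) after {iters} iterations: {np.round(a, 4)}")
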
THE GAUSS-SEIDEL METHOD: EXAMPLES

Iteration #1
• Given the initial guess of the solution vector, we substitute it into the rewritten equations to obtain the first estimates of a_1, a_2 and a_3.
• The absolute relative approximate error for each x_i is then computed.
• At the end of the first iteration, the estimate of the solution vector has been updated, and the maximum absolute relative approximate error is 125.47%.

THE GAUSS-SEIDEL METHOD: EXAMPLES

Iteration #2
• Starting from the estimate of the solution vector at the end of Iteration #1, we repeat the substitution to obtain new values of a_1, a_2 and a_3.
• The absolute relative approximate error for each x_i is again computed.
• At the end of the second iteration, the estimate of the solution vector has been updated, and the maximum absolute relative approximate error is 85.695%.

THE GAUSS-SEIDEL METHOD: EXAMPLES

• Conducting more iterations gives further values of the solution vector and the corresponding absolute relative approximate errors.
• As seen from those values, the solution estimates are not converging to the true solution.
• Why does the above system of equations not converge? A pitfall of most iterative methods is that they may or may not converge.
• However, the solution to a certain class of systems of simultaneous equations does always converge using the Gauss-Seidel method: the class where the coefficient matrix [A] in [A][X] = [C] is diagonally dominant, that is, the magnitude of each diagonal element is greater than or equal to the sum of the magnitudes of the other elements in its row, and strictly greater for at least one row.

THE GAUSS-SEIDEL METHOD: EXAMPLES

• If a system of equations has a coefficient matrix that is not diagonally dominant, it may or may not converge.
• Fortunately, many physical systems that result in simultaneous linear equations have a diagonally dominant coefficient matrix, which then assures convergence for iterative methods such as the Gauss-Seidel method.

THE GAUSS-SEIDEL METHOD: EXAMPLES

Class Work 7
• Find the solution to the given system of equations using the Gauss-Seidel method.
• Use the given vector as the initial guess and conduct two iterations.