
Introduction to Numerical Methods

Prof. P. K. Jha
NUMERICAL METHODS

Numerical methods are techniques by which mathematical problems are formulated so that they can be solved with arithmetic operations.
Non-computer Methods

• Solutions derived for some problems with analytical/exact methods (limited in the case of nonlinearities)
• Graphical solutions used to characterize the behavior of systems
• Calculators and slide rules used for hand computation


Phases of Engineering Problems

Formulation
Solution
Interpretation
Reasons to choose Numerical Methods (with the evolution of PCs)

• Powerful problem-solving tool
• Commercially available software/codes (which you can even modify)
• Well suited to illustrate the power and limitations of computers
• A vehicle to reinforce your understanding of computers
Types of Mathematical Subject Areas

• Roots of equations

• System of Linear Algebraic Equations

• Optimization

• Curve Fitting

• Integration

• Ordinary Differential Equations


• Partial Differential Equations
• Requires understanding of engineering systems
– By observation and experiment
– By theoretical analysis and generalization

• Computers are great tools; however, without a fundamental understanding of engineering problems, they are useless.
The engineering problem-solving process
Mathematical Model

D = f(I, P, F)

D: dependent variable
I: independent variables
P: parameters
F: forcing (external) functions

Example (free-falling parachutist): a = F/m
▪ A mathematical model is represented as a functional relationship of the form

Dependent variable = f(independent variables, parameters, forcing functions)

▪ Dependent variable: characteristic that usually reflects the state of the system
▪ Independent variables: dimensions, such as time and space, along which the system's behavior is being determined
▪ Parameters: reflect the system's properties or composition
▪ Forcing functions: external influences acting upon the system
Newton’s 2nd Law of Motion
▪ "The time rate of change of momentum of a body is equal to the resultant force acting on it."
Formulated as F = m·a

▪ Some complex models may require more sophisticated mathematical techniques than simple algebra.
Example: modeling a falling parachutist

FU = force due to air resistance = −cv (c = drag coefficient)
FD = force due to gravity = mg
Terminal velocity: the final steady velocity of the parachutist

dv/dt = F/m,   F = FD + FU,   FD = mg,   FU = −cv

dv/dt = (mg − cv)/m = g − (c/m)v

v(t) = (gm/c)(1 − e^(−(c/m)t))   (analytical solution)
▪ This is a differential equation written in terms of the differential rate of change, dv/dt, of the variable we are interested in predicting.
▪ If the parachutist is initially at rest (v = 0 at t = 0), calculus gives

v(t) = (gm/c)(1 − e^(−(c/m)t))

Here v is the dependent variable, t is the independent variable, c and m are parameters, and g is the forcing function.
Analytical Solution

v(t) = (gm/c)(1 − e^(−(c/m)t))

If v(t) could not be solved analytically, we would need a numerical method to solve the differential equation.

g = 9.8 m/s², c = 12.5 kg/s, m = 68.1 kg

t (s)    v (m/s)
0        0
2        16.40
4        27.77
8        41.10
10       44.87
12       47.49
∞        53.39
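The table above can be reproduced directly from the closed-form solution. A minimal Python sketch (not part of the original slides; the function name is ours):

```python
import math

def v_analytical(t, g=9.8, c=12.5, m=68.1):
    """Closed-form velocity v(t) = (g*m/c) * (1 - exp(-(c/m)*t))."""
    return (g * m / c) * (1.0 - math.exp(-(c / m) * t))

for t in [0, 2, 4, 8, 10, 12]:
    print(f"t = {t:2d} s  v = {v_analytical(t):6.2f} m/s")

# terminal velocity: the limit as t -> infinity is g*m/c
print(f"terminal   v = {9.8 * 68.1 / 12.5:6.2f} m/s")
```

The printed values match the table to two decimals, and the terminal velocity gm/c evaluates to 53.39 m/s.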
Numerical Solution for Terminal Velocity

dv/dt = (mg − cv)/m = g − (c/m)v

Approximate the derivative by a finite difference:

dv/dt ≈ Δv/Δt = [v(ti+1) − v(ti)] / (ti+1 − ti)

so that

[v(ti+1) − v(ti)] / (ti+1 − ti) = g − (c/m) v(ti)

v(ti+1) = v(ti) + [g − (c/m) v(ti)] (ti+1 − ti)

With Δt = 2 s:

t (s)    v (m/s)
0        0
2        19.60
4        32.00
8        44.82
10       47.97
12       49.96
∞        53.39

To minimize the error, use a smaller step size Δt. No problem, if you use a computer!
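The recurrence above is Euler's method. A small Python sketch of it (our own illustration, not from the slides):

```python
def euler_parachutist(dt, t_end=12.0, g=9.8, c=12.5, m=68.1):
    """Euler's method for dv/dt = g - (c/m)*v, starting from v(0) = 0.
    Returns a dict mapping time -> velocity."""
    v, t, out = 0.0, 0.0, {0.0: 0.0}
    while t < t_end - 1e-12:
        v += (g - (c / m) * v) * dt   # v(t+dt) = v(t) + [g - (c/m) v(t)] dt
        t += dt
        out[round(t, 10)] = v
    return out

vals = euler_parachutist(dt=2.0)
for t in [2.0, 4.0, 8.0, 10.0, 12.0]:
    print(f"t = {t:4.1f} s  v = {vals[t]:6.2f} m/s")
```

Running this with Δt = 2 reproduces the numerical column of the table (19.60, 32.00, ..., 49.96).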
Comparison of numerical and analytical solutions

Larger step size → less accurate result
Smaller step size → more steps (longer computing time)
Analytical vs. Numerical Solution
m = 68.1 kg, c = 12.5 kg/s, g = 9.8 m/s²

t (s)    Analytical   Δt = 2 s    Δt = 0.5 s   Δt = 0.01 s
         v (m/s)      v (m/s)     v (m/s)      v (m/s)
0        0            0           0            0
2        16.40        19.60       17.06        16.41
4        27.77        32.00       28.67        27.83
8        41.10        44.82       41.95        41.13
10       44.87        47.97       45.60        44.90
12       47.49        49.96       48.09        47.51
∞        53.39        53.39       53.39        53.39

v(t) = (gm/c)(1 − e^(−(c/m)t))        v(ti+1) = v(ti) + [g − (c/m)v(ti)]Δt

CONCLUSION: if you want to minimize the error, use a smaller step size Δt.
Conservation Laws and Engineering

▪ Conservation laws are the most important and fundamental laws used in engineering:

Change = increases − decreases   (1.13)

▪ "Change" implies change with time (transient). If the change is nonexistent (steady state), Eq. 1.13 becomes

Increases = Decreases
Conservation laws: fluid flow in pipes

▪ For steady-state incompressible fluid flow in pipes:

Flow in = Flow out

e.g., 100 + 80 = 120 + Flow4, so Flow4 = 60
Approximations and Round-Off Errors

▪ For many engineering problems, we cannot obtain analytical solutions.
▪ Numerical methods yield approximate results that are close to the exact analytical solution, and we cannot exactly compute the errors associated with them.
▪ Only rarely are given data exact, since they originate from measurements. Therefore there is probably error in the input information.
▪ The algorithm itself usually introduces errors as well, e.g., unavoidable round-offs.
▪ The output information will then contain error from both of these sources.
▪ How confident can we be in our approximate result? The question is: "How much error is present in our calculation, and is it tolerable?"
▪ Accuracy: how close a computed or measured value is to the true value.
▪ Precision (or reproducibility): how close a computed or measured value is to previously computed or measured values.
▪ Inaccuracy (or bias): a systematic deviation from the actual value.
▪ Imprecision (or uncertainty): the magnitude of scatter.


Error Definitions

True value = approximation + error

Et = true value − approximation   (true error; may be + or −)

True fractional relative error = true error / true value

True percent relative error:  εt = (true error / true value) × 100%

▪ For numerical methods, the true value is known only when we deal with functions that can be solved analytically (simple systems). In real-world applications, we usually do not know the answer a priori. Then

εa = (approximate error / approximation) × 100%
▪ Iterative approach (for example, Newton's method):

εa = [(current approximation − previous approximation) / current approximation] × 100%   (+/−)

▪ Use the absolute value.
▪ Computations are repeated until the stopping criterion is satisfied:

|εa| < εs   (εs: a pre-specified percent tolerance based on knowledge of your solution)

▪ If the criterion is met with

εs = (0.5 × 10^(2−n))%

you can be sure that the result is correct to at least n significant figures.
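As a sketch of how this stopping criterion is used in practice (our own example, not from the slides): estimating e^0.5 from its Maclaurin series with n = 3 significant figures, so εs = 0.05%:

```python
import math

def maclaurin_exp(x, n_sig=3):
    """Add Maclaurin-series terms of e**x until the approximate relative
    error |eps_a| drops below eps_s = 0.5 * 10**(2 - n_sig) percent."""
    eps_s = 0.5 * 10 ** (2 - n_sig)          # pre-specified tolerance, in %
    total, term, k = 1.0, 1.0, 0             # first term: x**0 / 0! = 1
    while True:
        k += 1
        term *= x / k                        # next term, x**k / k!
        prev, total = total, total + term
        eps_a = abs((total - prev) / total) * 100.0
        if eps_a < eps_s:
            return total, k + 1, eps_a       # estimate, #terms, final error

est, terms, ea = maclaurin_exp(0.5)
print(f"e^0.5 ~ {est:.6f} after {terms} terms (eps_a = {ea:.4f}%)")
print(f"true value {math.exp(0.5):.6f}")
```

The loop stops after 6 terms, and the estimate does indeed agree with the true value to at least 3 significant figures.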
Round-off Errors

▪ Numbers such as π, e, or √7 cannot be expressed by a fixed number of significant figures.
▪ Because computers use a base-2 representation, they cannot precisely represent certain exact base-10 numbers.
▪ Fractional quantities are typically represented in computers in "floating point" form:

m · b^e

where m is the mantissa, b is the base of the number system used, and e is the exponent.
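A quick way to see this base-2 limitation, assuming a standard Python interpreter with IEEE-754 doubles:

```python
from decimal import Decimal

# 0.1 has no finite base-2 expansion, so the stored double is only close:
print(Decimal(0.1))            # shows the exact binary value actually stored
print(0.1 + 0.2 == 0.3)        # False: round-off in base-2 arithmetic
print(abs((0.1 + 0.2) - 0.3))  # tiny but nonzero residual
```

The residual is on the order of machine epsilon; individually negligible, but such errors can accumulate over many operations.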
Taylor Series

▪ The Taylor series is an expansion of a function f(x) over some finite distance dx to f(x ± dx):

f(x ± dx) = f(x) ± dx f'(x) + (dx^2/2!) f''(x) ± (dx^3/3!) f'''(x) + (dx^4/4!) f''''(x) ± ...

For getting f'(x) from the Taylor series:

1. Forward difference:

f'(x) ≈ [f(x + dx) − f(x)] / dx + O(dx)

* The error of the first derivative using the forward formulation is of order dx.
! dx must remain FINITE !
2. Backward difference:

f'(x) ≈ [f(x) − f(x − dx)] / dx + O(dx)

* The error of the first derivative using the backward formulation is of order dx.

3. Central difference:

f'(x) ≈ [f(x + dx) − f(x − dx)] / (2 dx) + O(dx^2)

* The error of the first derivative using the central formulation is of order dx^2.
For the second-order derivative:

1. Forward difference:

f''(x) ≈ [f(x + 2dx) − 2 f(x + dx) + f(x)] / dx^2 + O(dx)

2. Backward difference:

f''(x) ≈ [f(x) − 2 f(x − dx) + f(x − 2dx)] / dx^2 + O(dx)

3. Central difference:

f''(x) ≈ [f(x + dx) − 2 f(x) + f(x − dx)] / dx^2 + O(dx^2)
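The formulas above can be checked numerically. A small sketch (our choice of test function, f = sin, not from the slides):

```python
import math

f = math.sin               # test function; f'(x) = cos(x), f''(x) = -sin(x)
x, dx = 1.0, 1e-3

fwd1 = (f(x + dx) - f(x)) / dx                       # O(dx)
bwd1 = (f(x) - f(x - dx)) / dx                       # O(dx)
cen1 = (f(x + dx) - f(x - dx)) / (2 * dx)            # O(dx^2)
cen2 = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2    # O(dx^2)

print("f'(x) errors:",
      abs(fwd1 - math.cos(x)),   # ~ 4e-4 (first order)
      abs(bwd1 - math.cos(x)),
      abs(cen1 - math.cos(x)))   # ~ 1e-7 (second order)
print("f''(x) error:", abs(cen2 + math.sin(x)))
```

The central difference is markedly more accurate for the same dx, as its O(dx^2) error term predicts.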
Truncation Error

• The Taylor series can be expanded as:

v(ti+1) = v(ti) + v'(ti)(ti+1 − ti) + (v''(ti)/2!)(ti+1 − ti)^2 + ...

• Truncating the series after the first-derivative term:

v(ti+1) = v(ti) + v'(ti)(ti+1 − ti) + R1

• Rearranging the above equation gives:

v'(ti) = [v(ti+1) − v(ti)] / (ti+1 − ti)  −  R1 / (ti+1 − ti)

where the second term,

R1 / (ti+1 − ti) = O(ti+1 − ti),

is the truncation error. Thus, the estimate of the derivative v'(ti) has a truncation error of order ti+1 − ti. In other words, the error of our derivative approximation should be proportional to the step size. Consequently, if we halve the step size, we would expect to halve the error of the derivative.
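The "halve the step, halve the error" claim can be checked against the parachutist's analytical solution. A sketch (our own, using the parameter values from the earlier slides):

```python
import math

def deriv_error(dt, t=2.0, g=9.8, c=12.5, m=68.1):
    """Error of the forward-difference estimate of v'(t) for the
    parachutist's analytical v(t) = (g*m/c)(1 - exp(-(c/m)t))."""
    v = lambda s: (g * m / c) * (1.0 - math.exp(-(c / m) * s))
    exact = g - (c / m) * v(t)                 # v'(t) from the ODE itself
    approx = (v(t + dt) - v(t)) / dt           # forward difference
    return abs(approx - exact)

e1, e2 = deriv_error(0.2), deriv_error(0.1)
print(e1, e2, e1 / e2)   # ratio close to 2: halving dt roughly halves the error
```

The observed ratio is close to (but not exactly) 2 because higher-order terms of the Taylor series also contribute a little.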
Numerical Errors

1. Round-off error: due to the inability to handle a large number of significant digits, e.g.,

5.37527 → 5.3753
5.37524 → 5.3752

2. Truncation error: due to the replacement of an exact mathematical expression by a numerical approximation, e.g.,

f(x ± dx) = f(x) ± dx f'(x) + (dx^2/2!) f''(x)   ...neglecting the other higher-order terms
Total Numerical Error

▪ The total numerical error is the sum of the truncation and round-off errors.
▪ The only way to minimize round-off errors is to increase the number of significant figures. Round-off error also increases as the number of computations in an analysis increases.
▪ On the other hand, truncation error can be reduced by decreasing the step size, but a smaller step size means more computations.
▪ Thus, truncation errors are decreased as round-off errors are increased, and vice versa.
▪ The challenge is to identify the point of diminishing returns, where round-off error begins to negate the benefits of step-size reduction.
Linear Algebraic Equations

▪ An equation of the form ax + by + c = 0, or equivalently ax + by = −c, is called a linear equation in the variables x and y.
▪ ax + by + cz = d is a linear equation in three variables x, y, and z.
▪ Thus, a linear equation in n variables is

a1 x1 + a2 x2 + ... + an xn = b

▪ A solution of such an equation consists of real numbers c1, c2, c3, ..., cn. If you need to work with more than one linear equation, a system of linear equations must be solved simultaneously.
Noncomputer Methods for Solving Systems of Equations

▪ For a small number of equations (n ≤ 3), linear equations can be solved readily by simple techniques such as the "method of elimination."
▪ Linear algebra provides the tools to solve such systems of linear equations.
▪ Nowadays, easy access to computers makes the solution of large sets of linear algebraic equations possible and practical.
Gauss Elimination

Solving Small Numbers of Equations
▪ There are many ways to solve a system of linear equations:
▪ Graphical method
▪ Cramer’s rule
▪ Method of elimination   (these three are practical only for n ≤ 3)
▪ Computer methods
Graphical Method

▪ For two equations:

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

▪ Solve both equations for x2:

x2 = −(a11/a12) x1 + b1/a12,   i.e.,   x2 = (slope) x1 + intercept
x2 = −(a21/a22) x1 + b2/a22

▪ Plot x2 vs. x1 on rectilinear paper; the intersection of the lines represents the solution.
Graphical Method (cont.)

▪ Or equate the two expressions and solve for x1:

−(a11/a12) x1 + b1/a12 = −(a21/a22) x1 + b2/a22

(a21/a22 − a11/a12) x1 + b1/a12 − b2/a22 = 0

x1 = (b2/a22 − b1/a12) / (a21/a22 − a11/a12)

Special cases: no solution (parallel lines), infinite solutions (coincident lines), and ill-conditioned systems (slopes are too close).
Determinants and Cramer’s Rule

▪ The determinant can be illustrated for a set of three equations written in matrix form:

Ax = b,   [A]: coefficient matrix

[ a11 a12 a13 ] [ x1 ]   [ b1 ]
[ a21 a22 a23 ] [ x2 ] = [ b2 ]
[ a31 a32 a33 ] [ x3 ]   [ b3 ]

▪ Assuming all matrices are square, there is a number associated with each square matrix A called the determinant, D = det(A). If [A] is of order 1, then [A] has one element: A = [a11] and D = a11.
▪ For a square matrix of order 2,

A = [ a11 a12 ]      D = a11 a22 − a21 a12
    [ a21 a22 ]

▪ For a square matrix of order 3, the minor of an element aij is the determinant of the order-2 matrix obtained by deleting row i and column j of A:

D11 = | a22 a23 | = a22 a33 − a32 a23
      | a32 a33 |

D12 = | a21 a23 | = a21 a33 − a31 a23
      | a31 a33 |

D13 = | a21 a22 | = a21 a32 − a31 a22
      | a31 a32 |

D = a11 D11 − a12 D12 + a13 D13

• Cramer’s rule expresses the solution of a system of linear equations in terms of ratios of determinants of the array of coefficients of the equations. Each xi equals the determinant of A with its i-th column replaced by {b}, divided by D. For example:

x1 = | b1 a12 a13 | / D      x2 = | a11 b1 a13 | / D      x3 = | a11 a12 b1 | / D
     | b2 a22 a23 |               | a21 b2 a23 |               | a21 a22 b2 |
     | b3 a32 a33 |               | a31 b3 a33 |               | a31 a32 b3 |
Method of Elimination

▪ The basic strategy is to successively solve one of the equations of the set for one of the unknowns and to eliminate that variable from the remaining equations by substitution.
▪ The elimination of unknowns can be extended to systems with more than two or three equations; however, the method becomes extremely tedious to solve by hand.

The elimination of unknowns by combining equations is an algebraic approach that can be illustrated for a set of two equations:

a11 x1 + a12 x2 = b1   → (× a21):   a11 a21 x1 + a12 a21 x2 = b1 a21
a21 x1 + a22 x2 = b2   → (× a11):   a21 a11 x1 + a22 a11 x2 = b2 a11

Subtracting one scaled equation from the other eliminates x1, leaving a single equation in x2.
Naive Gauss Elimination

▪ Extension of the method of elimination to large sets of equations, using a systematic scheme (algorithm) to eliminate unknowns and to back-substitute.
▪ As in the case of the solution of two equations, the technique for n equations consists of two phases:
▪ Forward elimination of unknowns
▪ Back substitution

▪ To solve Ax = b, forward elimination reduces the augmented matrix to an upper triangular system Tx = b':

[ a11 a12 a13 | b1 ]        [ a11 a12  a13  | b1  ]
[ a21 a22 a23 | b2 ]   →    [ 0   a'22 a'23 | b'2 ]
[ a31 a32 a33 | b3 ]        [ 0   0    a''33| b''3]

▪ Back substitution can then solve Tx = b' for x:

x3 = b''3 / a''33
x2 = (b'2 − a'23 x3) / a'22
x1 = (b1 − a12 x2 − a13 x3) / a11
Pitfalls of Elimination Methods

▪ Division by zero. It is possible that during both the elimination and back-substitution phases a division by zero can occur.
▪ Round-off errors.
▪ Because computers carry only a limited number of significant figures, round-off errors occur and propagate from one operation to the next.
▪ This problem is especially important when large numbers of equations (100 or more) are to be solved.
▪ Always use double-precision arithmetic. It is slower, but needed for correctness!
▪ It is also a good idea to substitute your results back into the original equations and check whether a substantial error has occurred.
▪ Ill-conditioned systems. Systems where small changes in the coefficients result in large changes in the solution. Alternatively, this happens when two or more equations are nearly identical, so that a wide range of answers approximately satisfies the equations. Since round-off errors can induce small changes in the coefficients, these changes can lead to large solution errors.

Since round-off errors can induce small changes in the coefficients, these changes can lead to large solution errors in ill-conditioned systems.

Example:

x1 + 2 x2 = 10
1.1 x1 + 2 x2 = 10.4

x1 = | b1 a12 | / D = | 10   2 | / D = [2(10) − 2(10.4)] / [1(2) − 2(1.1)] = −0.8 / −0.2 = 4,   x2 = 3
     | b2 a22 |       | 10.4 2 |

Changing the coefficient 1.1 slightly to 1.05:

x1 + 2 x2 = 10
1.05 x1 + 2 x2 = 10.4

x1 = [2(10) − 2(10.4)] / [1(2) − 2(1.05)] = −0.8 / −0.1 = 8,   x2 = 1

▪ Surprisingly, substituting the erroneous values x1 = 8 and x2 = 1 into the original equations does not clearly reveal that they are incorrect:

x1 + 2 x2 = 10  →  8 + 2(1) = 10 (the same!)
1.1 x1 + 2 x2 = 10.4  →  1.1(8) + 2(1) = 10.8 (close!)
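The effect is easy to reproduce; a sketch of the 2x2 example above:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system by Cramer's rule."""
    D = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - a12 * b2) / D
    x2 = (a11 * b2 - b1 * a21) / D
    return x1, x2

# original system and a ~5% perturbation of one coefficient
print(solve2(1.0, 2.0, 1.1, 2.0, 10.0, 10.4))    # ~ (4, 3)
print(solve2(1.0, 2.0, 1.05, 2.0, 10.0, 10.4))   # ~ (8, 1)

# the residual check barely exposes the wrong answer
x1, x2 = 8.0, 1.0
print(x1 + 2 * x2, 1.1 * x1 + 2 * x2)            # 10.0 and 10.8 (vs 10.4)
```

A small change in one coefficient doubled x1; the determinant D shrinking from −0.2 to −0.1 is the warning sign.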

IMPORTANT OBSERVATION:
An ill-conditioned system is one with a determinant close to zero.

▪ If the determinant D = 0, then there are infinitely many solutions → singular system.
▪ Scaling (multiplying the coefficients by the same value) does not change the equations but changes the value of the determinant in a significant way. However, it does not change the ill-conditioned state of the equations!
DANGER! It may hide the fact that the system is ill-conditioned!!

How can we find out whether a system is ill-conditioned or not? Not easy! Luckily, most engineering systems yield well-conditioned results.

▪ One way to find out: change the coefficients slightly, then recompute and compare.
▪ Singular systems. When two equations are identical, we lose one degree of freedom and face the impossible case of n − 1 equations in n unknowns. For large sets of equations this may not be obvious. The fact that the determinant of a singular system is zero can be tested by the computer algorithm after the elimination stage: if a zero diagonal element is created, the calculation is terminated.
Techniques for Improving Solutions

1. Use of more significant figures: double-precision arithmetic.
2. Pivoting. If a pivot element is zero, the normalization step leads to division by zero. The same problem may arise when the pivot element is close to zero. The problem can be avoided by:
▪ Partial pivoting: switching the rows so that the largest element is the pivot element.
▪ Complete pivoting: searching for the largest element in all rows and columns, then switching.
3. Scaling: used to reduce round-off errors and improve accuracy.
4. A computer algorithm for Gauss elimination.
Gauss-Jordan
▪ It is a variation of Gauss elimination. The major differences are:
▪ When an unknown is eliminated, it is eliminated from all other equations rather than just the subsequent ones.
▪ All rows are normalized by dividing them by their pivot elements.
▪ The elimination step results in an identity matrix.
▪ Consequently, it is not necessary to employ back substitution to obtain the solution.
Gauss-Jordan Method

Gauss-Jordan elimination reduces the coefficient matrix to diagonal form, so each equation contains a single unknown and the solution is read off directly:

[ a11  0    0   ...  0   | b1 ]
[ 0    a22  0   ...  0   | b2 ]
[ ...                    | ...]
[ 0    0    0   ...  ann | bn ]
Gauss-Jordan Elimination: Example

[ 1   1  2 ] [ x1 ]   [ 8  ]                        [ 1   1  2 |  8 ]
[ −1 −2  3 ] [ x2 ] = [ 1  ]    Augmented matrix:   [ −1 −2  3 |  1 ]
[ 3   7  4 ] [ x3 ]   [ 10 ]                        [ 3   7  4 | 10 ]

R2 ← R2 − (−1)R1, R3 ← R3 − (3)R1:
[ 1  1  2 |  8  ]
[ 0 −1  5 |  9  ]
[ 0  4 −2 | −14 ]

Scaling R2: R2 ← R2/(−1):
[ 1  1  2 |  8  ]
[ 0  1 −5 | −9  ]
[ 0  4 −2 | −14 ]

R1 ← R1 − (1)R2, R3 ← R3 − (4)R2:
[ 1  0  7 | 17 ]
[ 0  1 −5 | −9 ]
[ 0  0 18 | 22 ]

Scaling R3: R3 ← R3/18:
[ 1  0  7 | 17   ]
[ 0  1 −5 | −9   ]
[ 0  0  1 | 11/9 ]

R1 ← R1 − (7)R3, R2 ← R2 − (−5)R3:
[ 1  0  0 |  8.444 ]
[ 0  1  0 | −2.889 ]
[ 0  0  1 |  1.222 ]

RESULT: x1 = 8.44, x2 = −2.89, x3 = 1.22

Time complexity: O(n^3)
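The steps above can be sketched as a short routine (our own implementation, without pivoting; it reproduces the example's result):

```python
def gauss_jordan(A, b):
    """Gauss-Jordan: normalize each pivot row, then eliminate the pivot
    column from ALL other rows, leaving the identity on the left."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]             # scale pivot row
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[-1] for row in M]                  # no back substitution needed

A = [[1.0, 1.0, 2.0], [-1.0, -2.0, 3.0], [3.0, 7.0, 4.0]]
b = [8.0, 1.0, 10.0]
print(gauss_jordan(A, b))   # ~ [8.444, -2.889, 1.222]
```

Eliminating above the pivot as well as below is what removes the back-substitution phase, at the cost of somewhat more arithmetic than plain Gauss elimination.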


LU Decomposition and Matrix Inversion

▪ Provides an efficient way to compute the matrix inverse by separating the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {B}.
▪ Gauss elimination, in which the forward elimination comprises the bulk of the computational effort, can be implemented as an LU decomposition.

If L is a lower triangular matrix and U is an upper triangular matrix, then [A]{X} = {B} can be decomposed into two matrices [L] and [U] such that

[L][U] = [A]
[L][U]{X} = {B}

Similar to the first phase of Gauss elimination, consider

[U]{X} = {D}
[L]{D} = {B}

▪ [L]{D} = {B} is used to generate an intermediate vector {D} by forward substitution.
▪ Then [U]{X} = {D} is used to get {X} by back substitution.
Decompose A = L·U

L: lower triangular matrix
U: upper triangular matrix

To solve [A]{x} = {b}:

[L][U] = [A]  →  [L][U]{x} = {b}

Consider [U]{x} = {d} and [L]{d} = {b}.

1. Solve [L]{d} = {b} using forward substitution to get {d}.
2. Use back substitution to solve [U]{x} = {d} to get {x}.

Both phases, (1) and (2), take O(n^2) steps.


Ax = b LU x = b
 a11 a12 a13  1 0 0 u11 u12 u13 
  l
A = a21 a22 a23  
 21 1 0  0 u22 u23 
a31 a32 a33  l31 l32 1  0 0 u33 

[L] [U]
Ax = b LU x = b
 a11 a12 a13   x1  b1 
a     
 21 a22 a23   x2  = b2 
a31 a32 a33   x3  b3 
a11 a12 a13   x1  b1 
Gauss Elimination ➔  0 '    '

'
a 22 a 23   x 2  = b2 

 0 0 ''
a33   x3  b3'' 
 
1 0 0

L = l 21 1 
0 [U]
 l31 l32 1 Coefficients used during the elimination
step
 a11 a12 a13   1 0 0 a11 a12 a13 

A = a21 a22  
a23  = l 21 1  
0  0 '
a22 ' 
a23 
a31 a32 a33  l31 l32 1  0 0 ''
a33 

a21
l21 =
a11
[ L.U ]
a31
l31 =
a11

l32 = ?
Example: A = L·U

[ −1  2.5   5 ]   [ 1   0  0 ] [ −1 2.5 5 ]
[ −2   9   11 ] = [ 2   1  0 ] [  0  4  1 ]
[  4 −22  −20 ]   [ −4 −3  1 ] [  0  0  3 ]

Gauss elimination:

Coefficients l21 = −2/−1 = 2, l31 = 4/−1 = −4:
[ −1  2.5   5 ]      [ −1  2.5  5 ]
[ −2   9   11 ]  →   [  0   4   1 ]
[  4 −22  −20 ]      [  0 −12   0 ]

Coefficient l32 = −12/4 = −3:
[ −1  2.5  5 ]      [ −1  2.5  5 ]
[  0   4   1 ]  →   [  0   4   1 ]   = [U]
[  0 −12   0 ]      [  0   0   3 ]

    [ 1   0  0 ]
L = [ 2   1  0 ]
    [ −4 −3  1 ]
Example: A = L·U, Gauss elimination with pivoting

[ −1  2.5   5 ]                   [  4 −22 −20 ]
[ −2   9   11 ]   → pivoting →    [ −2   9  11 ]
[  4 −22  −20 ]                   [ −1  2.5  5 ]

Coefficients l21 = −2/4 = −0.5, l31 = −1/4 = −0.25:
[  4 −22 −20 ]      [ 4 −22 −20 ]                  [ 4 −22 −20 ]
[ −2   9  11 ]  →   [ 0  −2   1 ]  → pivoting →    [ 0  −3   0 ]
[ −1 2.5   5 ]      [ 0  −3   0 ]                  [ 0  −2   1 ]

Coefficient l32 = −2/−3 = 0.66:
[ 4 −22 −20 ]      [ 4 −22 −20 ]
[ 0  −3   0 ]  →   [ 0  −3   0 ]   = [U]
[ 0  −2   1 ]      [ 0   0   1 ]

    [ 1     0    0 ]
L = [ −0.25 1    0 ]      (the l factors are swapped along with the rows)
    [ −0.5  0.66 1 ]
LU decomposition: remarks

▪ Gauss elimination can be used to decompose [A] into [L] and [U]. Therefore, it requires the same total FLOPs as Gauss elimination: on the order of (proportional to) N^3, where N is the number of unknowns.

▪ The lij values (the factors generated during the elimination step) can be stored in the lower part of the matrix to save storage. This can be done because those entries are converted to zeros anyway and are unnecessary for future operations.

▪ Saves computing time by separating the time-consuming elimination step from the manipulations of the right-hand side.

▪ Provides an efficient means to compute the matrix inverse.

MATRIX INVERSE

A · A^(-1) = I

[ a11 a12 a13 ] [ x11 x12 x13 ]   [ 1 0 0 ]
[ a21 a22 a23 ] [ x21 x22 x23 ] = [ 0 1 0 ]
[ a31 a32 a33 ] [ x31 x32 x33 ]   [ 0 0 1 ]

Solve in n = 3 major phases, one per column of the identity:

(1)  A [x11 x21 x31]^T = [1 0 0]^T
(2)  A [x12 x22 x32]^T = [0 1 0]^T
(3)  A [x13 x23 x33]^T = [0 0 1]^T

Solve each one using the A = L·U method (e.g., L·U [x11 x21 x31]^T = [1 0 0]^T). Each solution takes O(n^2) steps; therefore, with the O(n^3) decomposition done once, the total time is O(n^3).
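The column-by-column scheme can be sketched as follows (our own illustration; for brevity it re-eliminates for each column with a naive Gauss solver, whereas a real implementation would reuse the LU factors, and the 2x2 test matrix is our own):

```python
def gauss_solve(A, b):
    """Naive Gauss elimination solver (helper for the column solves)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def inverse(A):
    """Solve A x = e_i for each unit vector e_i; the solutions are the
    columns of A^-1."""
    n = len(A)
    cols = [gauss_solve(A, [1.0 if r == i else 0.0 for r in range(n)])
            for i in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]   # transpose

A = [[2.0, 1.0], [1.0, 3.0]]
print(inverse(A))   # ~ [[0.6, -0.2], [-0.2, 0.4]]
```

For [[2, 1], [1, 3]] the determinant is 5, so the exact inverse is (1/5)[[3, −1], [−1, 2]], matching the printed result.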
Special Matrices

▪ Certain matrices have particular structures that can be exploited to develop efficient solution schemes (e.g., banded, symmetric).

▪ A banded matrix is a square matrix in which all elements are zero, with the exception of a band centered on the main diagonal.

▪ Standard Gauss elimination is inefficient for banded equations because unnecessary space and time would be expended on the storage and manipulation of zeros.

▪ There is no need to store or process the zeros off the band.
Solving Tridiagonal Systems (Thomas Algorithm)

A tridiagonal system has a bandwidth of 3:

[ f1 g1       ] [ x1 ]   [ r1 ]
[ e2 f2 g2    ] [ x2 ] = [ r2 ]
[    e3 f3 g3 ] [ x3 ]   [ r3 ]
[       e4 f4 ] [ x4 ]   [ r4 ]

DECOMPOSITION
DO k = 2, n
  ek = ek / fk-1
  fk = fk - ek * gk-1
END DO

            [ 1            ] [ f1  g1         ]
A = L U  =  [ e'2 1        ] [     f'2 g2     ]
            [     e'3 1    ] [         f'3 g3 ]
            [         e'4 1] [             f'4]

Time complexity: O(n) vs. O(n^3)
Tridiagonal Systems (cont.)

L {d} = {r} by forward substitution, then U {x} = {d} by back substitution:

[ 1            ] [ d1 ]   [ r1 ]        [ f1  g1         ] [ x1 ]   [ d1 ]
[ e'2 1        ] [ d2 ] = [ r2 ]        [     f'2 g2     ] [ x2 ] = [ d2 ]
[     e'3 1    ] [ d3 ]   [ r3 ]        [         f'3 g3 ] [ x3 ]   [ d3 ]
[         e'4 1] [ d4 ]   [ r4 ]        [             f'4] [ x4 ]   [ d4 ]

Forward substitution              Back substitution

d1 = r1                           xn = dn / fn
DO k = 2, n                       DO k = n-1, 1, -1
  dk = rk - ek * dk-1               xk = (dk - gk * xk+1) / fk
END DO                            END DO
Gauss-Seidel

▪ The Gauss-Seidel method is a commonly used iterative method.
▪ It is the same as the Jacobi technique except for one important difference: a newly computed x value (say xk) is substituted into the subsequent equations (equations k+1, k+2, ..., n) within the same iteration.

Example: consider the 3x3 system below:

x1_new = (b1 − a12 x2_old − a13 x3_old) / a11
x2_new = (b2 − a21 x1_new − a23 x3_old) / a22
x3_new = (b3 − a31 x1_new − a32 x2_new) / a33

• First, choose initial guesses for the x's. A simple way to obtain initial guesses is to assume that they are all zero.
• Compute the new x1 using the previous iteration's values.
• The new x1 is substituted into the remaining equations to calculate x2 and x3.
• Then {X}_old ← {X}_new and the process repeats.
Convergence Criterion for Gauss-Seidel Method

▪ Iterations are repeated until the convergence criterion is satisfied:

|εa,i| = |(xi^j − xi^(j−1)) / xi^j| × 100% < εs   for all i,

where j and j−1 are the current and previous iterations.

▪ As with any other iterative method, the Gauss-Seidel method can have problems:
▪ It may not converge, or it may converge very slowly.

▪ If the coefficient matrix A is diagonally dominant, Gauss-Seidel is guaranteed to converge.

Diagonally dominant: for each equation i,

|aii| > Σ (j = 1 to n, j ≠ i) |aij|

▪ Note that this is not a necessary condition, i.e., the system may still converge even if A is not diagonally dominant.

Time complexity: each iteration takes O(n^2).
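The method can be sketched as follows (the test system is our own diagonally dominant example, so convergence is guaranteed; the default εs and iteration cap are assumed values):

```python
def gauss_seidel(A, b, es=1e-5, max_it=100):
    """Gauss-Seidel iteration: each newly computed x_i is used immediately
    within the same sweep. Stops when every |eps_a,i| < es (es is in %)."""
    n = len(b)
    x = [0.0] * n                       # initial guesses: all zero
    for _ in range(max_it):
        worst = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
            if x[i] != 0.0:
                worst = max(worst, abs((x[i] - old) / x[i]) * 100.0)
        if worst < es:
            break
    return x

# diagonally dominant: |3| > 0.3, |7| > 0.4, |10| > 0.5
A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_seidel(A, b))   # ~ [3, -2.5, 7]
```

Unlike elimination, the matrix is never modified, which is why iterative methods are attractive for large sparse systems.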
▪ In the case of two simultaneous equations, the Gauss-Seidel algorithm can be expressed as

u(x1, x2) = b1/a11 − (a12/a11) x2
v(x1, x2) = b2/a22 − (a21/a22) x1

with partial derivatives

∂u/∂x1 = 0,   ∂u/∂x2 = −a12/a11
∂v/∂x1 = −a21/a22,   ∂v/∂x2 = 0

▪ Substitution into the convergence criterion for two linear equations yields:

|a12/a11| < 1,   |a21/a22| < 1

• In other words, the absolute values of the slopes must be less than unity for convergence:

|a11| > |a12|
|a22| > |a21|

For n equations:  |aii| > Σ (j = 1 to n, j ≠ i) |aij|
