Introduction to Numerical Methods
Prof. P. K. Jha
NUMERICAL METHODS
Formulation → Solution → Interpretation
Reasons to choose Numerical Methods (with the evolution of PCs):
• Roots of equations
• Optimization
• Curve Fitting
• Integration
D = f(I, P, F)
    D: dependent variable
    I: independent variables
    P: parameters
    F: forcing (external) functions
Newton's second law:  a = F/m
Example: free-falling parachutist
▪ A mathematical model is represented as a functional relationship of the form

    dv/dt = F/m

▪ The net force is the downward gravitational pull plus the upward drag:

    F = F_D + F_U,    F_D = mg,    F_U = −cv

▪ Substituting gives

    dv/dt = (mg − cv)/m

▪ with the analytical solution

    v(t) = (mg/c)(1 − e^{−(c/m)t})
▪ Rearranged, the model is

    dv/dt = g − (c/m)v

▪ This is a differential equation written in terms of the differential rate of change dv/dt of the variable that we are interested in predicting.
▪ If the parachutist is initially at rest (v = 0 at t = 0), the equation can be solved using calculus:
    v(t) = (gm/c)(1 − e^{−(c/m)t})        (Analytical Solution)

    Independent variable: t
    Dependent variable: v
    Parameters: m, c
    Forcing function: g
If v(t) = (gm/c)(1 − e^{−(c/m)t}) could not be obtained analytically, we would need a numerical method to solve the differential equation.
Analytical solution:

    t (sec.)   v (m/s)
    0           0
    2          16.40
    4          27.77
    8          41.10
    10         44.87
    12         47.49
    ∞          53.39
Numerical Solution for Terminal Velocity

    dv/dt = (mg − cv)/m = g − (c/m)v

Approximating the derivative by a finite difference:

    [v(t_{i+1}) − v(t_i)] / (t_{i+1} − t_i) = g − (c/m)v(t_i)

Solving for the new velocity (Euler's method):

    v(t_{i+1}) = v(t_i) + [g − (c/m)v(t_i)](t_{i+1} − t_i)
With dv/dt ≈ Δv/Δt = [v(t_{i+1}) − v(t_i)] / (t_{i+1} − t_i) and step size Δt = 2 sec:

    t (sec.)   v (m/s)
    0           0
    2          19.60
    4          32.00
    8          44.82
    10         47.97
    12         49.96
    ∞          53.39

To minimize the error, use a smaller step size, Δt. No problem, if you use a computer!
Comparison of numerical and analytical solutions

[Figure: v versus t for the analytical solution v(t) = (gm/c)(1 − e^{−(c/m)t}) and the numerical solution v(t_{i+1}) = v(t_i) + [g − (c/m)v(t_i)]Δt]

CONCLUSION: If you want to minimize the error, use a smaller step size, Δt.
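A minimal Python sketch of this Euler scheme, assuming the parameter values g = 9.8 m/s², m = 68.1 kg, c = 12.5 kg/s (these are assumptions chosen because they reproduce the tables above):

import math

g, m, c = 9.8, 68.1, 12.5   # gravity (m/s^2), mass (kg), drag coefficient (kg/s): assumed values
dt, t_end = 2.0, 12.0       # step size and end time (s)

def v_exact(t):
    """Analytical solution v(t) = (gm/c)(1 - e^{-(c/m)t})."""
    return g * m / c * (1.0 - math.exp(-(c / m) * t))

# Euler's method: v(t_{i+1}) = v(t_i) + [g - (c/m) v(t_i)] * dt
t, v = 0.0, 0.0
print(f"{'t':>4} {'Euler':>8} {'Exact':>8}")
while t <= t_end:
    print(f"{t:4.0f} {v:8.2f} {v_exact(t):8.2f}")
    v += (g - (c / m) * v) * dt
    t += dt

Halving dt and rerunning shows the Euler column moving toward the exact column, which is the conclusion stated above.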
Conservation Laws and Engineering
True error

    True fractional relative error = true error / true value

    True percent relative error:  ε_t = (true error / true value) × 100%
▪ For numerical methods, the true value is known only when we deal with functions that can be solved analytically (simple systems). In real-world applications we usually do not know the answer a priori. Then we use the approximate percent relative error:

    ε_a = (approximate error / approximation) × 100%
▪ In an iterative approach (e.g., Newton's method), iteration continues until |ε_a| falls below a prespecified tolerance

    ε_s = (0.5 × 10^{2−n})%

If this criterion is met, you can be sure that the result is correct to at least n significant figures.
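A small Python sketch of this stopping criterion; summing the Maclaurin series of e^x is an illustrative example chosen here, not taken from the slides:

import math

def exp_series(x, n_sig=4):
    """Add Maclaurin-series terms of e^x until the approximate percent
    relative error falls below eps_s = (0.5 * 10^(2-n))%."""
    eps_s = 0.5 * 10 ** (2 - n_sig)          # stopping criterion, in percent
    total, term, k = 1.0, 1.0, 0             # first term of the series is 1
    while True:
        k += 1
        term *= x / k                        # next term x^k / k!
        old, total = total, total + term
        eps_a = abs((total - old) / total) * 100.0
        if eps_a < eps_s:
            return total, k

approx, terms = exp_series(0.5)
print(approx, terms, math.exp(0.5))   # approximation, number of terms used, true value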
Round-off Errors
▪ Numbers such as π, e, or √7 cannot be expressed by a fixed number of significant figures.
▪ Because computers use a base-2 representation, they cannot precisely represent certain exact base-10 numbers.
▪ Fractional quantities are typically represented in computers using "floating point" form:

    m · b^e

where m is the mantissa, b is the base of the number system used, and e is the exponent.
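A quick Python demonstration of base-2 round-off (the specific numbers are illustrative):

# Base-10 fractions like 0.1 have no exact base-2 representation,
# so repeated arithmetic accumulates round-off error.
s = sum(0.1 for _ in range(10))
print(s, s == 1.0)          # 0.9999999999999999 False
print(f"{0.1:.20f}")        # shows the stored binary approximation of 0.1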
Taylor Series
▪ The Taylor series is an expansion of a function f(x) over some finite distance dx to f(x ± dx):

    f(x ± dx) = f(x) ± dx·f'(x) + (dx²/2!)·f''(x) ± (dx³/3!)·f'''(x) + (dx⁴/4!)·f''''(x) ± ...
Obtaining f'(x) from the Taylor series:

1. Forward difference:

    f'(x) = [f(x + dx) − f(x)] / dx + O(dx)

* The error of the first derivative using the forward formulation is of order dx.
! dx must remain FINITE !
2. Backward difference:

    f'(x) = [f(x) − f(x − dx)] / dx + O(dx)

* The error of the first derivative using the backward formulation is of order dx.
3. Central difference:

    f'(x) = [f(x + dx) − f(x − dx)] / (2dx) + O(dx²)

* The error of the first derivative using the central formulation is of order dx².
For the second-order derivative:

1. Forward difference:

    f''(x) = [f(x + 2dx) − 2f(x + dx) + f(x)] / dx² + O(dx)

2. Backward difference:

    f''(x) = [f(x) − 2f(x − dx) + f(x − 2dx)] / dx² + O(dx)

3. Central difference:

    f''(x) = [f(x + dx) − 2f(x) + f(x − dx)] / dx² + O(dx²)
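A short Python check of these error orders, using f(x) = sin(x) as an assumed test function (f'(x) = cos(x)):

import math

f = math.sin
x = 1.0
exact = math.cos(x)

for dx in (0.1, 0.01, 0.001):
    fwd = (f(x + dx) - f(x)) / dx                 # forward difference, error O(dx)
    ctr = (f(x + dx) - f(x - dx)) / (2 * dx)      # central difference, error O(dx^2)
    print(f"dx={dx:6}: forward err={abs(fwd - exact):.2e}, central err={abs(ctr - exact):.2e}")

Shrinking dx by 10 shrinks the forward error by about 10 and the central error by about 100, matching the stated orders.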
Truncation Error
• The Taylor series can be expanded as:

    f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} − x_i) + (f''(x_i)/2!)(x_{i+1} − x_i)² + ...

• Now let us truncate the series after the first-derivative term:

    f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} − x_i)

The neglected remainder is the truncation error, of order (x_{i+1} − x_i)².
▪ For a square matrix of order 2, the determinant is

    D = | a11  a12 | = a11·a22 − a12·a21
        | a21  a22 |

▪ For a square matrix of order 3, the minor of an element a_ij is the determinant of the order-2 matrix obtained by deleting row i and column j of A.

    D = | a11  a12  a13 |
        | a21  a22  a23 |
        | a31  a32  a33 |

    D11 = | a22  a23 | = a22·a33 − a32·a23
          | a32  a33 |

    D12 = | a21  a23 | = a21·a33 − a31·a23
          | a31  a33 |

    D13 = | a21  a22 | = a21·a32 − a31·a22
          | a31  a32 |

▪ The determinant follows by cofactor expansion along the first row:

    D = a11·D11 − a12·D12 + a13·D13
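A minimal Python sketch of this cofactor expansion; the 3×3 test matrix is an arbitrary illustration (its determinant is −3):

def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1:
    D = a11*D11 - a12*D12 + a13*D13, with Dij the 2x2 minors above."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    D11 = a22 * a33 - a32 * a23
    D12 = a21 * a33 - a31 * a23
    D13 = a21 * a32 - a31 * a22
    return a11 * D11 - a12 * D12 + a13 * D13

print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3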
▪ Round-off errors.
▪ Because computers carry only a limited number of significant
figures, round-off errors will occur and they will propagate from
one iteration to the next.
▪ This problem is especially important when large numbers of
equations (100 or more) are to be solved.
▪ Always use double-precision numbers/arithmetic. It is slow but
needed for correctness!
▪ It is also a good idea to substitute your results back into the
original equations and check whether a substantial error has
occurred.
▪ Ill-conditioned systems. Systems where small changes in the coefficients result in large changes in the solution. Alternatively, this happens when two or more equations are nearly identical, so that a wide range of answers approximately satisfies the equations. Since round-off errors can induce small changes in the coefficients, these changes can lead to large solution errors in ill-conditioned systems.
Example:

System 1:
    x1 + 2x2 = 10
    1.1x1 + 2x2 = 10.4

By Cramer's rule:

    x1 = | b1  a12 | / D = | 10    2 | / D = [2(10) − 2(10.4)] / [1(2) − 2(1.1)] = −0.8 / −0.2 = 4,    x2 = 3
         | b2  a22 |       | 10.4  2 |

System 2 (coefficient 1.1 changed slightly to 1.05):
    x1 + 2x2 = 10
    1.05x1 + 2x2 = 10.4

    x1 = [2(10) − 2(10.4)] / [1(2) − 2(1.05)] = −0.8 / −0.1 = 8,    x2 = 1
▪ Surprisingly, substituting the erroneous values x1 = 8 and x2 = 1 into the original equations does not clearly reveal that they are wrong:
    x1 + 2x2 = 10      →  8 + 2(1) = 10          (the same!)
    1.1x1 + 2x2 = 10.4 →  1.1(8) + 2(1) = 10.8   (close!)
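The same experiment in a short NumPy sketch (assuming NumPy is available):

import numpy as np

# Same right-hand side; coefficient a21 perturbed from 1.1 to 1.05:
A1 = np.array([[1.0, 2.0], [1.10, 2.0]])
A2 = np.array([[1.0, 2.0], [1.05, 2.0]])
b = np.array([10.0, 10.4])

print(np.linalg.solve(A1, b))                  # [4. 3.]
print(np.linalg.solve(A2, b))                  # [8. 1.]
print(np.linalg.det(A1), np.linalg.det(A2))    # determinants near zero: -0.2, -0.1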
IMPORTANT OBSERVATION:
An ill-conditioned system is one with a determinant close to zero
▪ If determinant D=0 then there are infinitely many solutions ➔ singular system
▪ Scaling (multiplying the coefficients with the same value) does not change the
equations but changes the value of the determinant in a significant way.
However, it does not change the ill-conditioned state of the equations!
DANGER! It may hide the fact that the system is ill-conditioned!!
▪ One way to find out: change the coefficients slightly, then recompute and compare the solutions.
▪ Singular systems. When two equations are identical, we would lose one degree of freedom and be dealing with the impossible case of n − 1 equations in n unknowns. For large sets of equations, however, this may not be obvious. The fact that the determinant of a singular system is zero can be tested by the computer algorithm after the elimination stage: if a zero diagonal element is created, the calculation is terminated.
Techniques for Improving Solutions

1. Use of more significant figures: double-precision arithmetic.
2. Pivoting. If a pivot element is zero, the normalization step leads to division by zero. The same problem may arise when the pivot element is close to zero. The problem can be avoided by:
   ▪ Partial pivoting. Switching the rows so that the largest element is the pivot element.
   ▪ Complete pivoting. Searching for the largest element in all rows and columns, then switching.
3. Scaling: used to reduce round-off errors and improve accuracy.
4. A computer algorithm for Gauss elimination (a sketch follows below).
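A minimal sketch of item 4, assuming NumPy; the 3×3 test system is an arbitrary illustration with known solution (2, 3, −1):

import numpy as np

def gauss_solve(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))       # partial pivoting: largest pivot in column k
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            l = A[i, k] / A[k, k]                 # elimination factor l_ik
            A[i, k:] -= l * A[k, k:]
            b[i] -= l * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = np.array([8, -11, -3])
print(gauss_solve(A, b))    # [ 2.  3. -1.]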
Gauss-Jordan
▪ It is a variation of Gauss elimination. The major differences are:
▪ When an unknown is eliminated, it is eliminated from all other
equations rather than just the subsequent ones.
▪ All rows are normalized by dividing them by their pivot elements.
▪ Elimination step results in an identity matrix.
▪ Consequently, it is not necessary to employ back substitution to obtain the solution.
Gauss-Jordan Elimination
▪ After the elimination step, the augmented matrix is reduced to diagonal form, e.g. for a 9×9 system:

    a11   0    0   ...   0  | b1
     0   a22   0   ...   0  | b2
     0    0   a33  ...   0  | b3
    ...                     | ...
     0    0    0   ...  a99 | b9
Gauss-Jordan Elimination: Example

    [ 1   1  2 ] [x1]   [ 8]                      [ 1   1  2 |  8]
    [−1  −2  3 ] [x2] = [ 1]   Augmented matrix:  [−1  −2  3 |  1]
    [ 3   7  4 ] [x3]   [10]                      [ 3   7  4 | 10]

R2 ← R2 − (−1)R1, R3 ← R3 − (3)R1:

    [ 1   1   2 |   8]                        [ 1   1   2 |   8]
    [ 0  −1   5 |   9]   Scale R2 ← R2/(−1):  [ 0   1  −5 |  −9]
    [ 0   4  −2 | −14]                        [ 0   4  −2 | −14]

R1 ← R1 − (1)R2, R3 ← R3 − (4)R2:

    [ 1   0   7 |  17]                        [ 1   0   7 |   17]
    [ 0   1  −5 |  −9]   Scale R3 ← R3/18:    [ 0   1  −5 |   −9]
    [ 0   0  18 |  22]                        [ 0   0   1 | 11/9]

R1 ← R1 − (7)R3, R2 ← R2 − (−5)R3:

    [ 1   0   0 |  8.444]
    [ 0   1   0 | −2.888]    RESULT: x1 ≈ 8.44, x2 ≈ −2.89, x3 ≈ 1.22
    [ 0   0   1 |  1.222]
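A compact Python sketch of Gauss-Jordan (assuming NumPy, no pivoting), verified against the example above:

import numpy as np

def gauss_jordan(A, b):
    """Gauss-Jordan: normalize each pivot row, then eliminate the unknown
    from ALL other rows; no back substitution is needed."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])  # augmented matrix
    n = len(b)
    for k in range(n):
        M[k] = M[k] / M[k, k]                 # scale pivot row so the pivot = 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # eliminate column k from every other row
    return M[:, -1]                           # the solution sits in the last column

A = np.array([[1, 1, 2], [-1, -2, 3], [3, 7, 4]])
b = np.array([8, 1, 10])
print(gauss_jordan(A, b))    # [ 8.444... -2.888...  1.222...]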
LU Decomposition
[L][U] = [A] ➔ [L][U]{x} = {b}
Define [U]{x} = {d}, so that [L]{d} = {b}.
1. Solve [L]{d} = {b} by forward substitution to get {d}.
2. Solve [U]{x} = {d} by back substitution to get {x}.
    [ a11  a12  a13 ] [x1]   [b1]
    [ a21  a22  a23 ] [x2] = [b2]
    [ a31  a32  a33 ] [x3]   [b3]

Gauss elimination ➔ upper triangular system:

    [ a11  a12   a13  ] [x1]   [b1  ]
    [  0   a'22  a'23 ] [x2] = [b'2 ]
    [  0    0    a''33] [x3]   [b''3]

The coefficients used during the elimination step form the unit lower triangular matrix

    L = [ 1    0    0 ]
        [ l21  1    0 ]
        [ l31  l32  1 ]

and the reduced matrix is [U], so that

    [ a11  a12  a13 ]   [ 1    0    0 ] [ a11  a12   a13  ]
    [ a21  a22  a23 ] = [ l21  1    0 ] [  0   a'22  a'23 ]  =  [L][U]
    [ a31  a32  a33 ]   [ l31  l32  1 ] [  0    0    a''33]

with  l21 = a21/a11,  l31 = a31/a11,  l32 = a'32/a'22.
Example: A = L·U

    [ −1   2.5    5 ]   [  1   0  0 ] [ −1  2.5  5 ]
    [ −2    9    11 ] = [  2   1  0 ] [  0   4   1 ]
    [  4  −22   −20 ]   [ −4  −3  1 ] [  0   0   3 ]

Gauss elimination, with the coefficients recorded:

    l21 = −2/−1 = 2      [ −1   2.5    5 ]      [ −1  2.5    5 ]
    l31 =  4/−1 = −4     [ −2    9    11 ]  →   [  0   4     1 ]
                         [  4  −22   −20 ]      [  0  −12    0 ]

    l32 = −12/4 = −3     →   [ −1  2.5  5 ]
                             [  0   4   1 ]  = [U]
                             [  0   0   3 ]

    L = [  1   0  0 ]
        [  2   1  0 ]
        [ −4  −3  1 ]
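A minimal Doolittle-style LU sketch (assuming NumPy, no pivoting), checked against this example:

import numpy as np

def lu_decompose(A):
    """LU decomposition via Gauss elimination: the elimination factors
    l_ij are stored in L, the reduced rows in U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # factor l_ik = a_ik / a_kk
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[-1, 2.5, 5], [-2, 9, 11], [4, -22, -20]])
L, U = lu_decompose(A)
print(L)                      # [[1,0,0],[2,1,0],[-4,-3,1]]
print(U)                      # [[-1,2.5,5],[0,4,1],[0,0,3]]
print(np.allclose(L @ U, A))  # True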
Example: A = L·U with Gauss elimination and pivoting

    [ −1   2.5    5 ]              [  4  −22  −20 ]
    [ −2    9    11 ]  pivoting →  [ −2    9   11 ]
    [  4  −22   −20 ]              [ −1   2.5   5 ]

    l21 = −2/4 = −0.5      [  4  −22  −20 ]              [ 4  −22  −20 ]
    l31 = −1/4 = −0.25  →  [  0   −2    1 ]  pivoting →  [ 0   −3    0 ]
                           [  0   −3    0 ]              [ 0   −2    1 ]

    l32 = −2/−3 = 0.667 →  [ 4  −22  −20 ]
                           [ 0   −3    0 ]  = [U]
                           [ 0    0    1 ]

When rows are swapped, the corresponding l coefficients are swapped with them:

    L = [   1     0     0 ]
        [ −0.25   1     0 ]
        [ −0.5   0.667  1 ]
LU decomposition
▪ Gauss elimination can be used to decompose [A] into [L] and [U]. It therefore requires the same total FLOPs as Gauss elimination: on the order of (proportional to) N³, where N is the number of unknowns.
▪ The l_ij values (the factors generated during the elimination step) can be stored in the lower part of the matrix to save storage. This works because those entries are converted to zeros anyway and are unnecessary for future operations.
Matrix inverse via LU decomposition
▪ The inverse can be computed column by column by solving [A]{x}_j = {e}_j for each unit vector:

    [ a11  a12  a13 ] [x11]   [1]    [ a11  a12  a13 ] [x12]   [0]    [ a11  a12  a13 ] [x13]   [0]
    [ a21  a22  a23 ] [x21] = [0]    [ a21  a22  a23 ] [x22] = [1]    [ a21  a22  a23 ] [x23] = [0]
    [ a31  a32  a33 ] [x31]   [0]    [ a31  a32  a33 ] [x32]   [0]    [ a31  a32  a33 ] [x33]   [1]
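A short sketch of this idea using SciPy's lu_factor/lu_solve (assumed available); the matrix is the earlier LU example. The point is that the O(n³) factorization is done once, and each unit-vector solve then costs only O(n²):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[-1.0, 2.5, 5], [-2, 9, 11], [4, -22, -20]])
lu, piv = lu_factor(A)               # factor once

# Solve A x_j = e_j for each unit vector, reusing the factorization:
n = A.shape[0]
A_inv = np.column_stack([lu_solve((lu, piv), np.eye(n)[:, j]) for j in range(n)])
print(np.allclose(A @ A_inv, np.eye(n)))   # True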
Tridiagonal Systems

    [ f1  g1          ] [x1]   [r1]        DECOMPOSITION
    [ e2  f2  g2      ] [x2] = [r2]
    [     e3  f3  g3  ] [x3]   [r3]        DO k = 2, n
    [         e4  f4  ] [x4]   [r4]          ek = ek / fk-1
                                              fk = fk - ek gk-1
                                            END DO

    A = [L][U] = [ 1               ] [ f1  g1           ]
                 [ e'2  1          ] [     f'2  g2      ]
                 [      e'3  1     ] [          f'3  g3 ]
                 [           e'4  1] [              f'4 ]

Time complexity of the decomposition: O(n), vs. O(n³) for full Gauss elimination.
Tridiagonal Systems (cont.)

    [ 1               ] [ f1  g1           ] [x1]   [r1]
    [ e'2  1          ] [     f'2  g2      ] [x2] = [r2]
    [      e'3  1     ] [          f'3  g3 ] [x3]   [r3]
    [           e'4  1] [              f'4 ] [x4]   [r4]

Forward substitution, [L]{d} = {r}:         Back substitution, [U]{x} = {d}:

    [ 1               ] [d1]   [r1]             [ f1  g1           ] [x1]   [d1]
    [ e'2  1          ] [d2] = [r2]             [     f'2  g2      ] [x2] = [d2]
    [      e'3  1     ] [d3]   [r3]             [          f'3  g3 ] [x3]   [d3]
    [           e'4  1] [d4]   [r4]             [              f'4 ] [x4]   [d4]
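A minimal Python sketch of this Thomas algorithm; the 4×4 test system is an arbitrary diagonally dominant illustration (e[0] and g[-1] are unused placeholders):

import numpy as np

def thomas(e, f, g, r):
    """Tridiagonal solver: e = sub-diagonal (e[0] unused), f = diagonal,
    g = super-diagonal (g[-1] unused). All three stages are O(n)."""
    n = len(f)
    e, f, r = e.astype(float).copy(), f.astype(float).copy(), r.astype(float).copy()
    for k in range(1, n):                 # decomposition
        e[k] = e[k] / f[k - 1]
        f[k] = f[k] - e[k] * g[k - 1]
    for k in range(1, n):                 # forward substitution: [L]{d} = {r}
        r[k] = r[k] - e[k] * r[k - 1]
    x = np.zeros(n)                       # back substitution: [U]{x} = {d}
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

e = np.array([0.0, -1, -1, -1])
f = np.array([2.04, 2.04, 2.04, 2.04])
g = np.array([-1.0, -1, -1, 0])
r = np.array([40.8, 0.8, 0.8, 200.8])
print(thomas(e, f, g, r))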
Gauss-Seidel Method

    x1_new = (b1 − a12·x2_old − a13·x3_old) / a11
    x2_new = (b2 − a21·x1_new − a23·x3_old) / a22
    x3_new = (b3 − a31·x1_new − a32·x2_new) / a33

• First, choose initial guesses for the x's. A simple way to obtain initial guesses is to assume that they are all zero.
• Compute the new x1 using the previous iteration values.
• The new x1 is immediately substituted into the equations to calculate x2 and x3.
Convergence Criterion for Gauss-Seidel Method
▪ A sufficient condition for convergence is that the coefficient matrix [A] be diagonally dominant (formalized below).
▪ Note that this is not a necessary condition, i.e. the system may still have a chance to converge even if [A] is not diagonally dominant.

Time complexity: each iteration takes O(n²).
▪ In the case of two simultaneous equations, the Gauss-Seidel algorithm can be expressed as

    u(x1, x2) = b1/a11 − (a12/a11)·x2
    v(x1, x2) = b2/a22 − (a21/a22)·x1

with partial derivatives

    ∂u/∂x1 = 0,            ∂u/∂x2 = −a12/a11
    ∂v/∂x1 = −a21/a22,     ∂v/∂x2 = 0

▪ Substitution into the convergence criterion of two linear equations yields:

    |a12/a11| < 1,    |a21/a22| < 1

• In other words, the absolute values of the slopes must be less than unity for convergence:

    |a11| > |a12|,    |a22| > |a21|

For n equations (diagonal dominance):

    |aii| > Σ_{j=1, j≠i}^{n} |aij|
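A minimal Gauss-Seidel sketch in Python (assuming NumPy); the diagonally dominant 3×3 test system is an illustration chosen here, with solution approximately (3, −2.5, 7):

import numpy as np

def gauss_seidel(A, b, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: each new x_i immediately uses the most
    recently updated values. Each sweep is O(n^2)."""
    n = len(b)
    x = np.zeros(n)                         # initial guess: all zeros
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Diagonally dominant system, so convergence is guaranteed:
A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_seidel(A, b))    # approx [ 3.  -2.5  7. ]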