


Numerical Methods
(Ordinary Differential Equations-3&4)

Kannan Iyer
[email protected]

Department of Mechanical Engineering
Indian Institute of Technology Jammu

Set of ODEs

• A set of ODEs is solved very similarly to a single ODE:

   $\frac{dy}{dx} = f_1(x, y, z)$ with $y = y_0$ at $x = x_0$

   $\frac{dz}{dx} = f_2(x, y, z)$ with $z = z_0$ at $x = x_0$

• Modified Euler's method:

   $k_{11} = f_1(x_n, y_n, z_n)$,   $k_{12} = f_2(x_n, y_n, z_n)$

   Predictor: $y^*_{n+1} = y_n + h k_{11}$,   $z^*_{n+1} = z_n + h k_{12}$

   $k_{21} = f_1(x_n + h,\ y^*_{n+1},\ z^*_{n+1})$,   $k_{22} = f_2(x_n + h,\ y^*_{n+1},\ z^*_{n+1})$

   Corrector: $y_{n+1} = y_n + \frac{h}{2}(k_{11} + k_{21})$,   $z_{n+1} = z_n + \frac{h}{2}(k_{12} + k_{22})$



Runge-Kutta Fourth Order Method

   $k_{11} = f_1(x_n, y_n, z_n)$
   $k_{12} = f_2(x_n, y_n, z_n)$
   $k_{21} = f_1\big(x_n + 0.5h,\ y_n + h(0.5k_{11}),\ z_n + h(0.5k_{12})\big)$
   $k_{22} = f_2\big(x_n + 0.5h,\ y_n + h(0.5k_{11}),\ z_n + h(0.5k_{12})\big)$
   $k_{31} = f_1\big(x_n + 0.5h,\ y_n + h(0.5k_{21}),\ z_n + h(0.5k_{22})\big)$
   $k_{32} = f_2\big(x_n + 0.5h,\ y_n + h(0.5k_{21}),\ z_n + h(0.5k_{22})\big)$
   $k_{41} = f_1\big(x_n + h,\ y_n + hk_{31},\ z_n + hk_{32}\big)$
   $k_{42} = f_2\big(x_n + h,\ y_n + hk_{31},\ z_n + hk_{32}\big)$

   $y_{n+1} = y_n + \frac{h}{6}\big(k_{11} + 2k_{21} + 2k_{31} + k_{41}\big)$
   $z_{n+1} = z_n + \frac{h}{6}\big(k_{12} + 2k_{22} + 2k_{32} + k_{42}\big)$

Higher Order equations

   $\frac{d^2y}{dx^2} + 3.1\frac{dy}{dx} + 0.3y = 0$  with  $y(x=0) = 2$,  $\frac{dy}{dx}(x=0) = -3.1$

• This equation is a stiff equation with solution $y = e^{-3x} + e^{-0.1x}$
• We can split the above equation as

   $\frac{dy}{dx} = z$  with  $y(x=0) = 2$

   $\frac{dz}{dx} = -3.1z - 0.3y$  with  $z(x=0) = -3.1$
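A short Python sketch (not from the slides) applying the RK4 step above to the split system dy/dx = z, dz/dx = -3.1 z - 0.3 y, so the result can be compared against the stated exact solution.

import math

# Sketch: classical RK4 for dy/dx = z, dz/dx = -3.1 z - 0.3 y, the first-order
# form of y'' + 3.1 y' + 0.3 y = 0 with y(0) = 2, y'(0) = -3.1.

def f1(x, y, z):
    return z

def f2(x, y, z):
    return -3.1 * z - 0.3 * y

def rk4_step(x, y, z, h):
    k11, k12 = f1(x, y, z), f2(x, y, z)
    k21 = f1(x + 0.5*h, y + 0.5*h*k11, z + 0.5*h*k12)
    k22 = f2(x + 0.5*h, y + 0.5*h*k11, z + 0.5*h*k12)
    k31 = f1(x + 0.5*h, y + 0.5*h*k21, z + 0.5*h*k22)
    k32 = f2(x + 0.5*h, y + 0.5*h*k21, z + 0.5*h*k22)
    k41 = f1(x + h, y + h*k31, z + h*k32)
    k42 = f2(x + h, y + h*k31, z + h*k32)
    y_new = y + h/6.0 * (k11 + 2*k21 + 2*k31 + k41)
    z_new = z + h/6.0 * (k12 + 2*k22 + 2*k32 + k42)
    return y_new, z_new

x, y, z, h = 0.0, 2.0, -3.1, 0.05
while x < 6.0 - 1e-12:
    y, z = rk4_step(x, y, z, h)
    x += h

exact = math.exp(-3*x) + math.exp(-0.1*x)
print(f"x = {x:.2f}, RK4 y = {y:.6f}, exact y = {exact:.6f}")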


Sample Problem

• Analytical Solution for the given equation: $y = e^{-0.1x} + e^{-3x}$
• A step size of dx = 0.05 is required for an accurate solution.

[Plot: y versus x for 0 ≤ x ≤ 6]

Adaptive method

• A step size of dx = 0.05 would require 1000 steps up to x = 50.
• An adaptive method with a tolerance of $10^{-5}$ for the relative error needs only 75 steps.

[Plot: y versus x for 0 ≤ x ≤ 60]
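The slides do not say which adaptive scheme produced the 75-step result; the sketch below shows one common possibility, step-doubling with RK4. It reuses rk4_step and the stiff system from the previous sketch and adopts the 10^-5 relative tolerance quoted above.

# Sketch of an adaptive strategy by step doubling (the lecture does not specify
# the adaptive scheme actually used). Meant to be appended to the RK4 sketch
# above: it reuses rk4_step and the same f1, f2.

def adaptive_march(x, y, z, h, x_end, rel_tol=1e-5):
    steps = 0
    while x < x_end - 1e-12:
        h = min(h, x_end - x)
        y1, z1 = rk4_step(x, y, z, h)              # one full step
        ym, zm = rk4_step(x, y, z, 0.5 * h)        # two half steps
        y2, z2 = rk4_step(x + 0.5 * h, ym, zm, 0.5 * h)
        err = abs(y2 - y1) / max(abs(y2), 1e-30)   # relative difference as error estimate
        if err <= rel_tol:
            x, y, z = x + h, y2, z2                # accept the two-half-step result
            steps += 1
            if err < 0.1 * rel_tol:
                h *= 2.0                           # error comfortably small: grow the step
        else:
            h *= 0.5                               # reject and retry with a smaller step
    return x, y, z, steps

print(adaptive_march(0.0, 2.0, -3.1, 0.05, 50.0))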


Boundary Value Problem

   $f f'' + 2 f''' = 0\,;\quad f(0) = 0,\ f'(0) = 0,\ f'(\infty) = 1$

• This equation is the classical Blasius Equation
• It does not have an analytical solution
• The numerical solution obtained suggests that x = 10 can be considered infinite
• The solution can be found by the IVP approach iteratively
• For this, f''(0) is first assumed and adjusted till the numerically obtained f'(10) matches the far-field condition f'(∞) = 1
• This approach is called the shooting method

Shooting method

The equation is split into a system of three first order equations:

   $\frac{df}{dx} = f'$,  with  $f(0) = 0$

   $\frac{df'}{dx} = f''$,  with  $f'(0) = 0$

   $\frac{df''}{dx} = -\frac{f f''}{2}$,  with  $f''(0)$ assumed
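A minimal Python sketch of the shooting procedure for this system, using RK4 for the inner initial value problem and bisection on f''(0); the step size and the bisection bracket are illustrative choices, not from the slides.

# Sketch: shooting method for the Blasius equation 2 f''' + f f'' = 0,
# f(0) = 0, f'(0) = 0, f'(inf) = 1, treating x = 10 as "infinity" (per the slides).
# RK4 drives the inner IVP; the bisection bracket below is an assumed guess.

def rhs(state):
    f, fp, fpp = state
    return (fp, fpp, -0.5 * f * fpp)

def rk4_march(fpp0, x_end=10.0, h=0.01):
    """Integrate the IVP with an assumed f''(0) and return f'(x_end)."""
    state = (0.0, 0.0, fpp0)
    x = 0.0
    while x < x_end - 1e-12:
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5*h*k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5*h*k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + h*k for s, k in zip(state, k3)))
        state = tuple(s + h/6.0*(a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        x += h
    return state[1]                  # f'(10)

# Adjust f''(0) by bisection until f'(10) = 1
lo, hi = 0.1, 1.0                    # assumed bracket for f''(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rk4_march(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print("f''(0) ≈", 0.5 * (lo + hi))   # should come out near the known value 0.332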


Comments on Shooting Method

• Shooting methods need iterative solutions
• This may create convergence problems, but usually it can be circumvented by judicious under-relaxation
• The advantage is that we can easily get 4th order solutions
• Non-linearity does not require any special treatment

Direct Solutions of BVP

• Finite difference methods can be used to obtain solutions that will satisfy the boundary conditions automatically
• For a non-linear system the equations have to be linearized, as otherwise the solutions become messy
• Step size sensitivity studies have to be performed before accepting the solutions as satisfactory


Finite difference principles

• In this method, the derivatives are replaced by finite differences
• The domain is discretised into a finite number of regions (say N)
• A system of linear equations is formed for the N unknown values of the functions
• Several approaches with varying accuracy are possible
• Popular approaches restrict the order of the method up to second order

[Grid sketch: nodes 1, …, i−1, i, i+1, …, N+1]

Finite differences-I

• The finite differences for derivatives can be obtained very easily from the Newton interpolating polynomials derived earlier
• The same can also be obtained by Taylor series
• Since the Taylor series derivation is easy for the first and second derivatives up to second order, it is illustrated first.


Finite differences-II

   $f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + O(h^4)$

   $f(x-h) = f(x) - h f'(x) + \frac{h^2}{2!} f''(x) - \frac{h^3}{3!} f'''(x) + O(h^4)$

   $\Rightarrow\ f(x+h) - f(x-h) = 2h f'(x) + \frac{2h^3}{3!} f'''(x) + O(h^4)$

   $\Rightarrow\ f(x+h) + f(x-h) = 2 f(x) + h^2 f''(x) + O(h^4)$

   $\Rightarrow\ f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2)$

   $\Rightarrow\ f''(x) = \frac{f(x+h) + f(x-h) - 2 f(x)}{h^2} + O(h^2)$

The above two relations are called the centered approximations.

Finite differences-III

• To get consistent accuracies near boundaries, we often need forward and backward differences
• These are easily obtained by using Newton's forward interpolating polynomial
• Consider four points in the neighbourhood that are a distance h from each other

[Sketch: equally spaced points x0, x1, x2, x3 with spacing h; a general location is written as x0 + sh]
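A quick numerical check (not from the slides) of the two centered approximations, using f(x) = sin x as an arbitrary test function; halving h should cut both errors by roughly a factor of four, consistent with O(h²).

import math

# Check of the centered approximations
#   f'(x)  ≈ (f(x+h) - f(x-h)) / (2h)
#   f''(x) ≈ (f(x+h) - 2 f(x) + f(x-h)) / h^2
# with f(x) = sin(x) as an arbitrary test function.

f = math.sin
x = 1.0
for h in (0.1, 0.05, 0.025):
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    err1 = abs(d1 - math.cos(x))      # exact first derivative is cos(x)
    err2 = abs(d2 + math.sin(x))      # exact second derivative is -sin(x)
    print(f"h = {h:.3f}: error in f' = {err1:.2e}, error in f'' = {err2:.2e}")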


Finite differences-IV

• With four points we can fit a polynomial of third order, which will be fourth order accurate
• When the first derivative is taken, this approximation will drop to third order accuracy
• The same will become second order accurate when the second derivative is expressed
• First we shall derive second order accurate formulas by dropping one of the terms and compare the results with the previously obtained ones.

Finite differences-V

• Third Order Polynomial

   $P_3(x_0 + sh) = f(0) + s\,\Delta f(0) + \frac{s(s-1)}{2}\Delta^2 f(0) + \frac{s(s-1)(s-2)}{6}\Delta^3 f(0) + O(h^4)$

   $= f_0 + s(f_1 - f_0) + \frac{s^2 - s}{2}(f_2 - 2f_1 + f_0) + \frac{s^3 - 3s^2 + 2s}{6}(f_3 - 3f_2 + 3f_1 - f_0) + O(h^4)$


Finite differences-VI

• The first derivative

   $P_3'(x_0 + sh) = \left[(f_1 - f_0) + \frac{2s-1}{2}(f_2 - 2f_1 + f_0) + \frac{3s^2 - 6s + 2}{6}(f_3 - 3f_2 + 3f_1 - f_0) + O(h^4)\right]\frac{1}{h}$

• The second derivative

   $P_3''(x_0 + sh) = \left[(f_2 - 2f_1 + f_0) + \frac{6s - 6}{6}(f_3 - 3f_2 + 3f_1 - f_0) + O(h^4)\right]\frac{1}{h^2}$

Finite differences-VII

• To get derivatives at x0 the value of s will be 0, and to get the same at x1, x2 and x3, the values of s will be 1, 2 and 3 respectively
• Thus, we can get backward, forward and centered differences from a single expression just by changing the value of s.
• First, let us get relations for first derivatives that are second order accurate at these points
• The first derivative (one sided difference at x0) can be expressed as

   $P_2'(x_0 + sh) = \left[(f_1 - f_0) + \frac{2s-1}{2}(f_2 - 2f_1 + f_0)\right]\frac{1}{h} + O(h^2)$



Finite differences-VIII

• Putting s = 0, we get

   $P_2'(x_0) = \left[(f_1 - f_0) - \frac{1}{2}(f_2 - 2f_1 + f_0)\right]\frac{1}{h} + O(h^2) = \frac{-f_2 + 4f_1 - 3f_0}{2h} + O(h^2)$   (Forward Difference)

• Putting s = 1, we get

   $P_2'(x_1) = \left[(f_1 - f_0) + \frac{1}{2}(f_2 - 2f_1 + f_0)\right]\frac{1}{h} + O(h^2) = \frac{f_2 - f_0}{2h} + O(h^2)$   (It has become the centered Difference)

Finite differences-IX

• Putting s = 2, we get

   $P_2'(x_2) = \left[(f_1 - f_0) + \frac{3}{2}(f_2 - 2f_1 + f_0)\right]\frac{1}{h} + O(h^2) = \frac{3f_2 - 4f_1 + f_0}{2h} + O(h^2)$   (It has become the Backward Difference)

• We can get third order accurate one sided differences by using 3 terms and putting s = 0 and 3

   $P_3'(x_0 + sh) = \left[(f_1 - f_0) + \frac{2s-1}{2}(f_2 - 2f_1 + f_0) + \frac{3s^2 - 6s + 2}{6}(f_3 - 3f_2 + 3f_1 - f_0) + O(h^4)\right]\frac{1}{h}$
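A small check (not from the slides) of the second order one-sided formulas just derived, with f(x) = eˣ as an arbitrary test function.

import math

# Check of the second order one-sided formulas derived above, using f(x) = exp(x)
# as an arbitrary test function. f0, f1, f2 are the values at x0, x0 + h, x0 + 2h.

x0 = 0.5
for h in (0.1, 0.05):
    f0, f1, f2 = (math.exp(x0 + i * h) for i in range(3))
    forward  = (-f2 + 4*f1 - 3*f0) / (2*h)          # P2'(x0): forward difference at x0
    backward = (3*f2 - 4*f1 + f0) / (2*h)           # P2'(x2): backward difference at x2
    err_f = abs(forward - math.exp(x0))             # exact derivative at x0
    err_b = abs(backward - math.exp(x0 + 2*h))      # exact derivative at x2
    print(f"h = {h:.2f}: forward error = {err_f:.2e}, backward error = {err_b:.2e}")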

Finite differences-X

   $P_3'(x_0) = \left[(f_1 - f_0) - \frac{1}{2}(f_2 - 2f_1 + f_0) + \frac{2}{6}(f_3 - 3f_2 + 3f_1 - f_0)\right]\frac{1}{h} + O(h^3)$

   $= \frac{2f_3 - 9f_2 + 18f_1 - 11f_0}{6h} + O(h^3)$   (Forward Difference)

   $P_3'(x_3) = \left[(f_1 - f_0) + \frac{5}{2}(f_2 - 2f_1 + f_0) + \frac{11}{6}(f_3 - 3f_2 + 3f_1 - f_0)\right]\frac{1}{h} + O(h^3)$

   $= \frac{11f_3 - 18f_2 + 9f_1 - 2f_0}{6h} + O(h^3)$   (Backward Difference)

Simple Application-I

   $\frac{d^2y}{dx^2} + a\frac{dy}{dx} + by = 0$

   with $y(x=0) = y_0$,  $y(x=L) = y_L$

[Grid sketch: nodes 1, …, i−1, i, i+1, …, n]


Simple Application-II

   $\left.\frac{d^2y}{dx^2}\right|_i = \frac{y(i+1) - 2y(i) + y(i-1)}{h^2}$

   $\left.\frac{dy}{dx}\right|_i = \frac{y(i+1) - y(i-1)}{2h}$

The finite difference equation for node i is

   $\frac{y(i+1) - 2y(i) + y(i-1)}{h^2} + a\,\frac{y(i+1) - y(i-1)}{2h} + b\,y(i) = 0$

Simple Application-III

Multiplying by $h^2$ and collecting coefficients we get

   $y(i+1)(1 + 0.5ah) + y(i)(bh^2 - 2) + y(i-1)(1 - 0.5ah) = 0$

   $\begin{bmatrix} 1 & 0 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{Bmatrix} y_1 \\ y_2 \\ y_{n-1} \\ y_n \end{Bmatrix} = \begin{Bmatrix} y_0 \\ b_2 \\ b_{n-1} \\ y_L \end{Bmatrix}$

We can solve for the y's using TDMA.

Simple Application-IV

• Treatment of Neumann Boundary Condition

   $\frac{d^2y}{dx^2} + a\frac{dy}{dx} + by = 0$

   with $y(x=0) = y_0$,  $\frac{dy}{dx}(x=L) = y'_L$

• METHOD-1: Extended Domain Method

[Grid sketch: nodes i−3, i−2, i−1, i, i+1 with x = L at node i]

Simple Application-V

• Writing the FDE at point i

   $\frac{y(i+1) - 2y(i) + y(i-1)}{h^2} + a\,\frac{y(i+1) - y(i-1)}{2h} + b\,y(i) = 0$   (1)

• Boundary Condition at point i

   $\frac{dy}{dx} = \frac{y(i+1) - y(i-1)}{2h} + O(h^2) = y'_L$

   $\Rightarrow\ y(i+1) = y(i-1) + 2h\,y'_L + O(h^3)$   (2)

• Substituting Eq. (2) in Eq. (1), we get

   $\frac{y(i-1) + 2h\,y'_L + O(h^3) - 2y(i) + y(i-1)}{h^2} + a\,\frac{y(i-1) + 2h\,y'_L + O(h^3) - y(i-1)}{2h} + b\,y(i) = 0$

• There is degeneration of accuracy
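A sketch of how Method-1 changes only the last row of the tridiagonal system from the earlier TDMA sketch (whose variables it reuses). yL_prime, the prescribed derivative at x = L, is an illustrative input, and the modified coefficients follow from substituting Eq. (2) into Eq. (1) and multiplying by h².

# Sketch: Method-1 (extended domain / ghost node) applied to the last row of the
# tridiagonal system from the earlier TDMA sketch. Node i sits at x = L, and the
# ghost value y(i+1) is eliminated with y(i+1) = y(i-1) + 2 h yL_prime.
# Substituting into the FDE and multiplying by h^2 gives
#   2 y(i-1) + (b h^2 - 2) y(i) = -(2 h + a h^2) * yL_prime
# yL_prime is an assumed Neumann value for illustration.

yL_prime = 0.5                       # assumed dy/dx at x = L

# ... assemble lower/diag/upper/rhs for rows 0 .. n-2 exactly as before, then:
lower[-1] = 2.0
diag[-1]  = b * h * h - 2.0
upper[-1] = 0.0                      # no node beyond the boundary remains
rhs[-1]   = -(2.0 * h + a * h * h) * yL_prime

y = tdma(lower, diag, upper, rhs)    # the system is still tridiagonal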



Simple Application-V

• However, the solution can be obtained as for the Dirichlet Boundary Condition, as the matrix is tri-diagonal
• The loss of accuracy near the boundary may not be acceptable
• This can be overcome by using a higher order formulation at the boundary

• METHOD-2: Higher Order Boundary Method

We have shown that a third order accurate derivative can be expressed at the boundary as

   $y'_L = \frac{11y_i - 18y_{i-1} + 9y_{i-2} - 2y_{i-3}}{6h} + O(h^3)$

[Grid sketch: nodes i−3, i−2, i−1, i with x = L at node i]

Simple Application-VI

The above can be rearranged as

   $6h\,y'_L = 11y_i - 18y_{i-1} + 9y_{i-2} - 2y_{i-3}$

This formulation will break the tri-diagonal structure:

   $\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 0 & 0 \\ 0 & a_{32} & a_{33} & a_{34} & 0 & 0 \\ 0 & 0 & a_{43} & a_{44} & a_{(i-2)(i-1)} & 0 \\ 0 & 0 & 0 & a_{54} & a_{(i-1)(i-1)} & a_{(i-1)i} \\ 0 & 0 & a_{(i-3)i} & a_{(i-2)i} & a_{(i-1)i} & a_{ii} \end{bmatrix}\begin{Bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_{i-1} \\ y_i \end{Bmatrix} = \begin{Bmatrix} y_0 \\ b_2 \\ b_3 \\ b_4 \\ b_{i-1} \\ 6h\,y'_L \end{Bmatrix}$
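A sketch of Method-2, again written as a continuation of the TDMA sketch: the third order boundary relation forms the last row, and the two Gauss row operations described on the next slide restore the tridiagonal structure before calling tdma().

# Sketch: Method-2 boundary row, reusing lower/diag/upper/rhs, n, h and yL_prime
# from the earlier sketches. The boundary row at node i (last row, x = L) is
#   -2 y(i-3) + 9 y(i-2) - 18 y(i-1) + 11 y(i) = 6 h yL',
# which breaks the tridiagonal structure; two Gauss row operations restore it.

m = n - 1                                   # index of the boundary node i
c3, c2, c1, c0 = -2.0, 9.0, -18.0, 11.0     # coefficients of y(i-3), y(i-2), y(i-1), y(i)
r  = 6.0 * h * yL_prime

# Gauss operation between rows i-2 and i: eliminate the y(i-3) entry
fac = c3 / lower[m - 2]
c2 -= fac * diag[m - 2]
c1 -= fac * upper[m - 2]
r  -= fac * rhs[m - 2]

# Gauss operation between rows i-1 and i: eliminate the y(i-2) entry
fac = c2 / lower[m - 1]
c1 -= fac * diag[m - 1]
c0 -= fac * upper[m - 1]
r  -= fac * rhs[m - 1]

lower[m], diag[m], upper[m], rhs[m] = c1, c0, 0.0, r
y = tdma(lower, diag, upper, rhs)           # tridiagonal structure restored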

Simple Application-VII

• A tri-diagonal matrix will be obtained by performing two Gauss operations
• First, by performing a Gauss operation between the i−2 and i rows, a(i−3),i can be reduced to 0
• Then, by performing a Gauss operation between the i−1 and i rows, we can reduce a(i−2),i to 0
• Thus, the tri-diagonal structure is restored and the system can be solved by TDMA

Treatment of Non-Linearity-I

• Consider a non-linear equation

   $y'' + 2y^2 y' = 0$

• When a finite difference equation is written for a node, it will lead to a non-linear equation due to the presence of higher order powers
• In such cases, to get a linear form of the equation, we need to resort to iterations
• The procedure is to assume a y distribution
• Linearise and solve for y
• Iterate until convergence is reached
• The underlying principles used in linearisation are discussed in the next slide

Treatment of Non-Linearity-II

• The term $y^2 y'$ is linearised as

   $\left(y^2\right)^k \left(y'\right)^{k+1}$

• Thus, while solving, the $y^2$ value is always known and becomes a coefficient in the matrix
• Frequently, the methods tend to diverge
• To facilitate convergence, under-relaxation is employed

   $y^{k+1} = \alpha\, y^{k+1} + (1 - \alpha)\, y^k$   (the $y^{k+1}$ on the right is the freshly computed value)

• α is assumed to have a value between 0 and 1
• The above suppresses the wild variations of y introduced by the iteration method
• Severe non-linearity may force the value of α close to zero
• The convergence criterion is similar to what we normally do, monitoring the normalized change in the values between iterations
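A Python sketch of this iteration for y'' + 2y²y' = 0, freezing (y²)ᵏ as a coefficient and under-relaxing the update. The boundary values y(0) = 1, y(1) = 0, the grid size and α = 0.5 are illustrative assumptions (the slides give only the differential equation), and tdma() is reused from the earlier sketch.

# Sketch: Picard iteration with under-relaxation for  y'' + 2 y^2 y' = 0,
# linearising 2 y^2 y' as 2 (y^2)^k (y')^{k+1}. Boundary values, grid size and
# alpha are illustrative; tdma() comes from the earlier TDMA sketch.

n, alpha, tol = 21, 0.5, 1e-8
h = 1.0 / (n - 1)
y = [1.0 - j * h for j in range(n)]           # assumed initial y distribution

for it in range(200):
    lower = [0.0] * n
    diag  = [1.0] * n
    upper = [0.0] * n
    rhs   = [0.0] * n
    rhs[0], rhs[-1] = 1.0, 0.0                # Dirichlet rows
    for j in range(1, n - 1):
        c = y[j] ** 2                         # frozen (known) coefficient (y^2)^k
        lower[j] = 1.0 - h * c
        diag[j]  = -2.0
        upper[j] = 1.0 + h * c
    y_new = tdma(lower, diag, upper, rhs)
    # Under-relaxation: y^(k+1) = alpha * y_new + (1 - alpha) * y^k
    y_next = [alpha * a_ + (1.0 - alpha) * b_ for a_, b_ in zip(y_new, y)]
    change = max(abs(a_ - b_) for a_, b_ in zip(y_next, y)) / max(max(abs(v) for v in y_next), 1e-30)
    y = y_next
    if change < tol:
        break

print(f"converged in {it + 1} iterations")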
