MEC 236 - Computer Applications 3
MEC 236 - Computer Applications 3
Prof. M. G. Mousa
2021
The Method of Modelling and Mathematical Modeling The
process of modeling of physical systems in the real world should
generally follow the path illustrated schematically in the chart
below:
The mathematical formulation of the problem is the reduction
of the physical problem to a set of either algebraic or differential
equations subject to certain assumptions.
INTRODUCTION
Write the Application of Navier-Stokes Equations •
The equations are nonlinear partial differential equations
• No full analytical solution exists
• The equations can be solved for several simple flow conditions
• Numerical solutions to Navier-Stokes equations are increasingly being used
to describe complex flows
Boundary layer
example
example
At y=0.0 u/U=0
At y= u/U=1
At y= du/dy=0
Initial Value Problems Of
Ordinary Differential
Equations
9.1 INTRODUCTION
Problems of solving ordinary differential equations (ODE) are classified into initial
value problems and boundary value problems. Many initial value problems are time-
dependent, in which all the conditions for the solution are specified at the initial time.
The numerical methods for initial value problems are significantly different from
those for boundary value problems. Therefore, the present chapter discussed the
numerical solution methods for the former type only. And chapter 10 describes the
numerical methods for the latter.
The initial value problem of a first-order ODE may be written in the form
Where ƒ(y, t) is a function of y and t, and the second equation is initial condition. In
Eq. (9.1.1), the first derivative of y is given by a known function of y and t, and we
desire to compute the unknown function y by numerically integrating ƒ(y, t). If ƒ
grations discussed in chapter 4. However, the fact that ƒ is a function of the unknown
function y makes the integration different.
The initial condition is always a part of the problem definition because the
solution of an initial value problem can be uniquely determined only if an initial
condition is given.
1
(c) y ' (t ) = − y ( 0) = 1
1 + y 2'
Euler other
Name of methods relevant formula local global featuresª
Nonstiff equation:
Euler methods
Forward forward difference 0(h²) 0(h) SS, EC
Modified trapezoidal rule 0(h³) 0(h²) SS, EC, NL
Backward back ward difference 0(h²) 0(h) SS, EC, NL
…………………………………………………………………………………………..
Rang-Kutta
Second-order trapezoidal rule 0(h³) 0(h²) SS, EC, NL
4
³) SS, EC, N ) 0(h 0(h Third-order Simpson's 1/3
4 5
) SS, EC, NL ) 0(h 0(h Forth-order Simpson's 1/3 or 3/8
…………………………………………………………………………………………
Predictor-corrector
Second-order (identical with second-order Range-Kutta) SS, EC
4
³) NS, DC ) 0(h 0(h Third-order Newton backward
) NS, DC 4 ) 0(h 5 0(h Forth-order Newton backward
…………………………………………………………………………………………..
Stiff Equations:
Exponential exponential
Transformation transformation SS
y' ' (0) = y' '0 , y' ' ' (0) = y' ' '0
And where yo, y'o, y''o, and y'''o are prescribed values. By defining u, v, and w as
W' + aw + bv + cu + ey = g (9.1.3)
So, Eq. (9.1.2) is equivalent to the set of four first-order ordinary differential
equations:
y' = u , y (0) = y 0
The
numerical methods for first-order ordinary differential equations are then applicable to the
foregoing set.
The numerical methods may be3 applied to integro-differential equations, too.
For example, consider the equation given by
t
u = y' , v= 0
y ( s ) ds
y' = u , y (0) = y0
The foregoing set of first-order ordinary differential equations can be solved by a numerical
method.
We set our study with the Euler methods, which are suitable for a quick programming
because of their great simplicity. It should be points out that, as the system of equations
becomes more complicated, the Euler methods are more often used. Indeed, a large fraction
of numerical methods for parabolic and hyperbolic partial differential equations, which are
far more complicated than ordinary differential equations, arte based on Euler methods
rather than the Range-kutta or predictor-corrector, methods.
Euler methods consist of three versions: (a) forward Euler, (b) modified Euler, and
(c) backward Euler methods.
9.2.1 Forward Euler method
The forward Euler method for y' = f (y, t) is obtained by rewriting the forward difference
approximation.
y n+ 1 − y n
y' n
h
(9.2.2)
To
y n +1 = y n + h f ( yn , t n ) (9.2.3)
Where y'n = f (yn, tn) is used. Using Eq. (9.2.3), yn is recursively calculated as
y1 = y0 + h y' 0 = y0 + h f ( y0 , t 0 )
y 2 = y1 + h f ( y1 , t 1 )
y3 = y2 + h f ( y2 , t 2 )
y n = yn − 1 + h f ( yn − 1 , t n − 1 )
Example 9.1
(a) Solve y' = -20y + 7exp (-0.5t), y (0) = 5, using the forward Euler method with h =
0.001 for 0 < t < 0.02. Do this part by hand calculation
.
(b) Repeat the same for h = 0.01, 0.001, and 0.0001 on a computer for analytical
solution given by
y = 5 e − 20 t + ( 7 / 19.5 ) ( e − 0.5 t
−e − 20 t
)
(Solution)
(a) The first few steps of calculations with h = 0.1 are shown next:
t 0 = 0, y 0 = y ( 0) = 5
The computational results for selected values of t with three values of times interval (grid
spacing) are shown in table 9.2.
ª (error) × 100
Comments: Accuracy of the forward Euler method increases with a decreases in time
interval h. in effect, magnitudes of errors are approximately proportional to h. however,
further reduction of h without using double precision is not advantageous because it
increases numerical error caused by round-off (see chapter 1).
Although the forward Euler method is simple, it has to be used carefully for two kinds
of errors. The first is the truncation errors that we have already seen in example 9.1. The
second is a potential of instability, which occurs when the time constant of the equation is
negative (solution approaches zero if there is no source term), unless time interval h is
sufficiently small. A typical equation for a diminishing solution is y' = - α y, with y (0) =
y0 > 0, where α > 0. The exact solution is y = y oe (- αt). The forward Euler method for
this problem becomes
y n +1 = (1 − h ) y n
If αh < 1, the numerical solution is diminishing and positive, but if αh > 1, the sign of the
solution alternates. Furthermore, if αh > 2, the magnitude of the solution increases after
each step, and the solution oscillates. This is the instability.
The forward Euler method is applicable to a set of first-order ODEs. Consider a set
of first-order ODEs given by
y ' = f ( y, z , t ) , y ( 0 ) = y 0
(9.2.4)
z ' = g ( y, z , t ) , z ( 0 ) = z 0
y n +1 = y n + h y' n = y n + h f ( yn , z n , t n )
(9.2.5)
z n +1 = z n + h z' n = z n + h f ( yn , z n , t n )
A higher-order ordinary differential equation may be broken into a set of coupled first-
order differential equations as mentioned earlier.
Example 9.2
Using the forward Euler method with h = 0.5, find the values of y (1) and y (1)
for
y'' (t) – 0.5 y' (t) + 0.15 y (t) = 0, y' (0) = 0, y (0) = 1
(Solution)
t = 0.5:
y' 0 = z 0 = 0
t = 1:
y ' 0 = z 0 = − 0.075
Therefore
y (1 ) = y 2 = 0.96250
Example 9.3
Solve the following set of first-order ODEs by the forward Euler method with h =
0.005Π:
(Solution)
The calculations for the first few steps few steps with h = 0.0005Π are shown next.
t0 = 0: y0 = 1
z 0 = 0
In table 9.3 the results of the present calculations for selected values of t are compared to
the exact solution, y = cos (t) and z = - sin (t).
Table 9.3
IT IS OBSERVED IN THE RESULTS SHOWN IN TABLE 9.3 THAT THE ERROR INCREASES WITH
INCREASE IN T, AND PROPORATION TO H. (SEE THE Y VALUES FOR T = Π, 2Π, 3Π, 6Π AND 8Π:
Z VALUES FOR THESE T VALUES DO NOT FOLLOE THE SAME TREND BECAUSE, WHEN Z IS
CLOSED TO ZERO, THE ERRORS OF Z ARE SIGNIFICANTLY AFFECTED BY PHASE SHIFT).
The motivation for the modified Euler method is twofold. First, the modified Euler method
is more accurate than the forward Euler method. Second, it is more stable then the forward
Euler method.
The modified Euler method is derived by applying the trapezoidal rule to
integrating y' = f (y, x):
h
y n +1 = yn + [ f (y n +1 ,t n +1 ) + f ( y n , t n )]
2 (9.2.6)
If ƒ is linear in y, both Eqs. (9.2.6) may easily solve for yn+1. For example, if the ODE is
given by
h
y n +1 = y n + [a y n +1 + cos ( t n +1 )+ ay n + cos ( t n )]
2
1 + ah / 2 h/2
y n +1 = y + [ cos ( t n +1 ) + cos ( t n )] (9.2.7)
1 − ah / 2 1 − ah / 2
n
(k ) h ( k − 1)
y − yn = f(y , t n +1 ) + f ( y n ,t n )]
n +1 2 n +1 (9.2.8)
(k ) (k )
Is an initial guess y n +1
is the kth iterative approximation for yn+1, and y n +1
Where
(k ) ( k −1)
Becomes less than y − y for yn+1. The above iteration is terminated when
n +1 n +1
A prescribed tolerance. The initial guess is set to yn. Then. The first iteration step becomes
identical with the forward Euler method. If only one more iteration step is used, the scheme
becomes the second-corrector method. But, in the modified Euler method, iteration is
continued until the tolerance of convergence is satisfied.
Example 9.4 shows an application of the modified Euler method to a nonlinear
first-order ODE.
Example 9.4
(Solution)
h
y n +1 = y n + [ − ( y n +1 ) 1.5 − ( y n ) 1.5 + 2 ] (A)
2
For n = 0
h
y1 = y 0 + [ − ( y 1 ) 1.5 − ( y 0 ) 1.5 + 2 ]
2
The best estimate for y1 on the right side is yo. By introducing y1≈yo to the right side, the
equation becomes
0.1
y 1 10 + [ − (10) 1.5 − (10 ) 1.5 + 2 ] = 6.93772
2
0.1
y 1 10 + [ − ( 6.93772) 1.5 − (10 ) 1.5 + 2 ] = 7.60517
2
0.1
y 1 10 + [ − ( 7.60517) 1.5 − (10 ) 1.5 + 2 ] = 7.47020
2
0.1
y 1 10 + [ − ( 7.47020) 1.5 − (10 ) 1.5 + 2 ] = 7.49799
2
0.1
y 1 10 + [ − ( 7.49799) 1.5 − (10 ) 1.5 + 2 ] = 7.49229
2
0.1
y 1 10 + [ − ( 7.49326) 1.5 − (10 ) 1.5 + 2 ] = 7.49326
2
t y
0.0 10.0
0.1 7.4932
0.2 5.8586
0.3 4.7345
0.4 3.9298
0.5 3.3357
0.6 2.8859
0.7 2.5386
0.8 2.2658
0.9 2.0487
1.0 1.8738
Why is the accuracy of the modified Euler method higher than that of the forward
Euler method? To explain the reason analytically. Let us consider a test equation, y' = xy.
Equation (9.2.6) for this problem may then be written as
h 9.2.9))
y n +1 = y n + ( y n +1 + y n )
2
h h −1
y n + 1 = 1 + 1 − yn
2 2
1 1
y n +1 = 1 + h + ( h) 2 + ( h ) 3 + ...... y n
2 4
Comparing this expansion to the Taylor expansion of the exact solution y (tn+1) = exp (αh)
yn, it is found that Eq. (9.2.9) is accurate to the second-order term. Thus, the modified
Euler method is a second-order (accurate) method. On the other hand, a similar analysis
fort the forward Euler method indicates that the forward Euler method is first-order
accurate.
The local error (error generated in each step) of the forward Euler method is
proportional to h² and its global error is proportional to h, whereas the local error of the
modified Euler method is proportional to h³ and its global error is proportional to h². The
order of errors of the backward Euler method is the same as in the forward Euler method.
If the modified Euler method is applied to asset of ODEs, the whole equations must
be solved simultaneously or "implicitly." However, the advantage of the implicit solution
is that the method is more stable than the forward Euler method and thus allows a larger
time step.
The backward Euler method is based on the backward difference approximation and is
written as
y n + 1 = y n + h f ( yn +1 , t n + 1 ) (9.2.10)
The accuracy of this method is the same as that of the forward Euler method. Besides if ƒ
is a nonlinear function of y, an iterative scheme has to be used in each step just as in the
modified Euler method. However, the advantages are (a) the method is stable for stiff
problems, and (b) positively of solution is guaranteed when the exact solution is positive.
See applications of the backward Euler method in section 9.5 and chapter 12.
(a) The forward Euler method is based on the forward difference approximation. Its error
in one interval is proportional to h² and its global error to h. the forward Euler method may
become unstable if the ODE has a negative time constant unless a small h is used.
(b) The modified Euler method is based on the trapezoidal rule. If the ODE is not linear,
an iterative method is necessary for each interval. Its error in one interval is proportional
to h³ and its global error to h².
(c) The backward Euler method is based on the backward difference approximation. Its
error is comparable to those of the forward Euler method. The method is stable so it is used
to solve stiff problems that are difficult to solve by other methods.
To calculate yn+1 at tn+1 h with a known value of yn, we integrate Eq. (9.3.1) over the
interval [tn, tn+1] as
t n +1
y n +1 = y n + f ( y , t ) dt (9.3.2)
tn
1
f ( y, t ) dt h [ f ( y n , t n ) + f ( y n +1 , t
2
n +1 )] (9.3.3)
Here we examine an application of the trapezoidal rule to the right side of Eq. (9.3.2):
In Eq. (9.3.3) yn+1 is not known, so the second term is approximated by ƒ (y¯ n+1, tn+1),
where y¯n+1 is the first estimate for yn+1 calculated by the forward Euler method. The scheme
derived here is called the second-order Runge-Kutta method and summarized as
y n + 1 = y n + h f (y n , t n )
h
y n +1 = y n + [ f ( y n , t n ) + f ( y n +1 , t n +1 )]
2
Or in a more standard form as
k1 = h f (y n ,t n )
k 2 = h f ( y n + k 1 , t n +1 )
(9.3.4)
1
y n + 1 = y n + [k1 + k 2 ]
2
Example 9.5
d
L /(t ) + R /(t ) = E , /(0) = 0 ( A)
dt
‹Solution›
We first rewrite Eq. (a) as
d R E
I =− I + f ( I , t)
dt L L
Then, the second-order .Runge-kutta method becomes
R E
k 1 = h − I +
L
n
L
R E
k 2 = h − (I n + k 1 ) +
L L
1
I n +1 =I n (k 1 +k 2 )
2
The calculations for the first two steps are shown next:
N = 0 (t=0.1): k1 =0.1[(-0.4) (0) +0.2] = 0.02
1 1
I1 = I 0 (k1 + k 2 ) = 0 + (0.02 + 0.0192) = 0.0196
2 2
1
I 2 = I1 ( k1 + k 2 )
2
1
= 0.0196 + ( 0.019216+ 0.018447 ) = 0.038431
2
t (sec) t (amp)
0 0
1 0.1648
2 0.2752
3 0.3493
4 0.3990
5 0.4332
6 0.4546
7 0.4695
8 0.4796
9 0.4863
10 0.4908
(∞) (0.5000)
h2
y n +1 = yn +h f + [ f t + fy f ]
2
h3
+ [f u +2 f ty f + fyy f 2
+ f t f y +f 2
y f ] + 0 (h 4 ) (9.3.6)
6
Where all the derivatives of y are expressed in terms of f and the partial derivatives of f at
tn.
h2 h3
y n +1 = yn +h f + [ f t + fy f ] + [f u +2 f ty f + fyy f 2
] + 0 (h 4 ) (9.3.6)
2 4
By comparing Eq. (9.3.6) to Eq. (9.3.5), Eq. (9.3.4) is found to be accurate to the order of
h² and the discrepancy (error generated in one step) is proportional to h³. Notice that the
second-order Runge-Kutta method is identical to the modified Euler method given by Eq.
(9.3.8) with two iteration steps. However, the order of accuracy of the former is identical
to that of the latter, which requires iterative convergence. This indicates that the iteration
in the modified Euler method increases accuracy only a little. (Indeed, using the second-
order Runge Kutta method with a smaller h is far more effective in improving accuracy
than using the modified Euler method with strict iterative convergence.) y' = αy, but this
approach is left as a student's exercise.
Application of the second-order Runge Kutta method to a higher-order ordinary
differential equation is easy. For illustration, we consider the second-order differential
equation:
Where a and b are coefficients and q(t) is a known function, and two initial conditions are
given. By defining
The second-order Runge Kutta method for the foregoing equations is written as
k 1 = h f ( y n , z n ,t n ) = hz n
l 1 = h g ( y n , zn , tn ) = h ( − a z n − b yn + qn )
k 2 = h f ( y n + k1 , z n + l1 , t n + 1 ) = h ( z n + l1 )
l2 = h g ( y n + k1 , z n + l1 , t n + 1 ) = h ( − a( z n + l1 ) − b ( y n + k1 ) + q n + 1 ) (9.3.10)
1
yn + 1 = yn + ( k1 + k1 )
2
1
z n +1 = z n + (l1 + l 2 )
2
Example 9.6
A cubic material of mass M = 0.5 kg is fixed to the lower end of a missies spring.
The upper end of the spring is fixed to a structure at rest. The cube receives resistance R =
- B dy/dt from the air, where B is a damping constant.
d2 d
M 2 y +B y + k y = 0, y ( 0 ) = 1, y ' ( 0 ) = 0 (A)
dt dt
Where y is the displacement from the static position, k is the spring constant equal to 100
kg/s², and B = 10 kg/s.
(a) Calculate y (t) for 0< t < 0.05 using the second-order Runge Kutta method with h =
0.025 by hand calculations.
(b) Calculate y (t) for 0 < t < 10 sec using the second-order Runge Kutta method with h =
0.001.
(Solution)
k1 = h f ( y 0 , z 0 , t 0 ) = h z 0 = 0.025(0) = 0
k 2 = h f ( y 0 + k1 , z 0 + l1 , t 0 ) = h ( z 0 + l1 ) = 0.025(0 − 5 ) = − 0.125
l 2 = h g ( y 0 + k1 , z 0 + l1 , t 0 ) = h ( − 20 ( z 0 + l1 − 200 ( y 0 + k1 ) )
1
y1 = y0 + ( 0 − 1.25) = 0.9375
2
1
z1 = z0 + ( − 5 − 2.5 ) = − 3.75
2
For n = 2: t = 0.05
k1 = h f ( y1 , z1 , t1 ) = h z1 = 0.025(−3.75) = − 0.09375
l1 = h g ( y1 , z1 , t1 ) = h ( − 20 z1 − 200 y1 )
l 2 = h g ( y1 + k1 , z1 + l1 , t1 ) = h ( − 20 ( z1 + l1 ) − 200 ( y1 + k1 ) )
= − 0.9375
1
y 2 = y1 + ( − 0.09375 − 0.1640625) = 0.80859
2
1
z 2 = z1 + ( − 2.8125 − 0.9375) = − 5.625
2
(b) And (c) this part of the computations was performed by using PROGRAM 9 – 1. The
computational results after every 50 steps up to 0.75 sec are shown bellow:
(b) (c)
t (sec) y (meter) y (meter)
(B = 10) (B = 0)
0 1.000 1.000
0.05 0.823 0.760
0.1 0.508 0.155
0.15 0.238 -0.523
0.2 0.066 -0.951
0.25 -0.016 -0.923
0.3 -0.042 -0.45
0.35 -0.038 0.235
0.4 -0.025 0.810
0.45 -0.013 0.996
0.5 -0.004 0.705
0.55 0.000 0.075
0.6 0.001 -0.590
0.65 0.001 -0.973
0.7 0.001 -0.889
0.75 0.000 -0.378
A Runge Kutta that is more accurate than the second-order Runge Kutta method may be
derived by using a higher-order numerical integration scheme for the second term of Eq.
(9.3.2). Using the Simpson's 1/3 rule, Eq. (9.3.2) becomes
h
yn + 1 = yn + [ f ( yn , t n ) + 4 f ( y n + 1 , t 1 ) + f ( y n +1 , t n +1 ) ] (9.3.11)
6 2 n+
2
h
y n + 1 = yn + f ( yn , tn ) (9.3.12)
2 2
The estimate y n + 1 may be obtained by
y n + 1 = yn + h f ( yn , tn )
or
y n + 1 = yn + h f ( y n +
1 t 1 )
2 n+
2
y n + 1 = y n h [ f ( y n , t n ) + (1 − ) f ( y n + 1 , t 1 )] (9.3.13)
2 n+
2
k1 = h f ( y n , t n )
1 h
k 2 = h f ( yn + k1 , t n + )
2 2
(9.3.14)
k 3 = h f ( y n + k1 + (1 − ) k 2 , t n + h )
1
yn + 1 = yn + ( k1 + 4 k 2 + k 3 )
6
1 2 1
k2 = h f + h ( f1 + f y f ) + h 3 ( f u + 2 f t y f + f y y f 2 ) ( 9.3.15 b)
2 8
1 3
k3 = h f + h 2 ( f t + f y f ) + h [ fu + 2 ft y f
2
+ fyy f 2
+ (1 − ) ( f t + f y f ) f y ] ( 9.3.15 c)
Where ƒ and its derivatives are evaluated at tn. By introducing Eq. (9.3.15) into Eq. (9.3.14)
and comparing it to Eq. (9.3.5), we find that θ = - 1 is the optimum because Eq. (9.3.14)
then agrees with Eq. (9.3.5) to the third-order term.
The forgoing derivative may be more easily understood if it is applied to the test
equation y' = αy.
In summary, the third-order-accurate Runge Kutta method is written as
k1 = h f ( yn , t n )
1 h
k 2 = h f yn + k1 , t n +
2 2
(9.3.16)
k3 = h f ( yn − k1 + 2 k 2 , t n + h )
1
yn + 1 = yn + ( k1 + 4 k 2 + k3 )
6
Example 9. 8
Solve
y' = t y + 1 , y (0) 0
Using the fourth-order Runge-Kutta method, Eq. (9.3.17), with h = 0.2, 0.1. And 0.05,
respectively, and evaluate the error for each h at t = 1, 2, 3, 4, and 5
(Solution)
Computations for this example were performed by using PROGRAM 9 – 2. The results
are as shown below:
ª p.e...Percentage error
A comparison of these results to those of Euler methods reveals that the error of the fourth-
order Runge-Kutta method with h = 0.1 is comparable to that of the modified Euler method
with h = 0.001 also, the fourth-order Runge-Kutta method with h = 0.2 is comparable to
the forward Euler method with h = 0.001.
The fourth-order Runge-Kutta method for the set of two equations becomes
k1 = h f ( y n , z n , t n )
l1 = h g ( y n , z n , t n )
k l h
k2 = h f yn + 1 , zn + 1 , tn +
2 2 2
k l h
l2 = h g y n + 1 , z n + 1 , t n +
2 2 2
(9.3.20)
k l h
k3 = h f yn + 2 , z n + 2 , t n +
2 2 2
k l h
l3 = h g y n + 2 , z n + 2 , t n +
2 2 2
k4 = h f ( y n + k 3 , z n + l3 , t n + h )
l4 = h g ( y n + k 3 , z n + l3 , t n + h )
1
yn + 1 = yn + [ k1 + 2 k 2 + 2 k 3 + k 4 ]
6 (9.3.21)
1
zn + 1 = zn + [ l1 + 2 l 2 + 2 l3 + l 4 ] (9.3.22)
6
Even when the number of equations in a set is greater than two, the derivation of
the fourth-order Runge-Kutta method is essentially the same. A program to solve a set of
equations using the fourth-order Runge-Kutta method is given as PROGRAM 9 – 3.
Example 9.9
Repeat the problem in example 9.3 by using the fourth-order Runge-Kutta method
with h = 0.2Π, and h = 0.5Π.
(Solution)
Comparing these values to the results of the forward Euler solution in example 9.3, the
accuracy of the fourth-order Runge-Kutta method even with h = 0.2Π is significantly better
than the forward Euler method with h = 0.01Π.
9.3.5 Error, stability, and Grid Interval optimization
The Range-Kutta methods are subject to two kinds of error—truncation error and
instability. As discussed earlier, the truncation error is due to the discrepancy between the
Taylor expansion of the numerical method and the Taylor expansion of the exact solution.
The amount of error decreases as the order of the method becomes higher. On the other
hand, instability is an accumulated effect of the local error such that the error of the solution
grows unboundedly as the time steps are advanced.
To analyze the instability of a Runge-Kutta method, let us consider the test equation
y' = αy (9.3.23)
Where α < 0. For a given value of yn the exact value for yn+1 is analytically given as
k1 = h y n
k 1
k 2 = h yn + 1 = h 1 + h yn
2 2
k 1 1 (9.3.25)
k3 = h yn + 2 = h 1 + h 1 + h yn
2 2 2
k 4 = h ( yn + k3 ) = h 1 + h 1 + 1
h 1+
1
h y n
2 2
1 1 1
yn +1 = 1 + h + ( h ) 2 + ( h ) 3 + ( h) 4 yn (9.3.26)
2 6 24
Equation (9.3.26) equals the first five terms of the Taylor expansion for the right side of
Eq. (9.3.24) about tn. The factor
1 1 1
=1 +h + ( h) 2 + ( h ) 3 + ( h) 4 (9.3.27)
2 6 24
In Eq. (9.3.26) is approximating exp (αh) of Eq. (9.3.24), so the truncation error and
instability of Eq. (9.3.26) both originate in this approximation.
Equation (9.3.27) and exp (αh) are plotted together in figure 9.1 for comparison. The figure
indicates that if α < 0 and the modulus (absolute value) of αh increase, the deviation of γ
from exp (αh) increases, so that the error of the Runge-Kutta method increases. Particularly,
if αh ≤ -2.785, the method becomes unstable because the modulus of the numerical solution
grows in each step whereas the modulus of the true solution decreases by a factor, exp (αh),
in each step.
Eh = B h4 (9.3.28)
Where B a constant that depends on the given problem. If we apply the same Runge-Kutta
method in two steps with h/2 as the time interval, the error becomes . Times 2Where
4
the factor 2 is due to the accumulation (h / 2) preoperational to
Two steps thus, it becomes of error in
4
h 1
2 E h/2 = 2 B = B h4 (9.3.29)
2 8
The left side of the forgoing equation may be evaluated by a numerical experiment—
that is, by running the scheme twice starting from the same initial value. In the first run,
only one time step is advanced using a trial value for h as the time interval. We denote the
result of this calculation as [y1] h. in the second run; [y2] h/2 is calculated in two time steps
using h/2 as the time interval. Using the results of those two calculations, the left side of
Eq. (9.3.30.) is evaluated as
Eh − 2 E h / 2 = y1 h − y2 h / 2 (9.3.31)
Introducing Eq. (9.3.31) into Eq. (9.3.30) and solving for B yields
( y1 h − y2 h / 2 ) / h 4
8
B = (9.3.32)
7
Once B is determined, the maximum (or optimum) h that satisfies the criterion Eh ≤ξ may
be found by introducing Eh = ξ into Eq. (9.3.28) and solving for h. as follows:
0.25
h =
B
The theory we just described is reminiscent of the Romberg integration explained in section
3.2.
Example 9.10
y
y' = − , y ( 0) = 1
1 +t2
Find an optimal step interval satisfying Eh ≤ 0.00001.
(Solution)
Eh = B h 5 (A)
The approach is very similar to Eqs (9.3.28) through (9.3.33) except that the order of
5
Error is five. The error accumulated in two steps using h/2 is 2Eh/2 = 2B (h/2)
The deference between the errors of one-step and two-step calculations, namely Eh – Eh/2,
is numerically evaluated by
IN Eq. (B), [y1] h is result of the fourth-order rung-kutta method for only one step
With h, and [y2]h/2 is the result of the same for two steps with h/2. Introducing Eq. (A) into
Eq. (B) and solving for B, we have
16
B = ( [ y1 ]h − [ y1 ]h / 2 ) / h 5 (C)
15
Now we actually run the fourth order Runge-Kutta method for only one step with h
= 1 starting with the given initial condition. Then we run it for two steps with h/2 = 1/2.
The results are
16
B = ( 0.4566667 − 0.4559973) / (1) 5 = 6.310− 4 (D)
15
By introducing this into Eq. (a), the local error for any h is expressed by
Eh = 6.3 10 − 4 h 5
(a) The Runge-Kutta methods are derived by integrating the first order ODE with
numerical integration methods. The second-order Runge-Kutta method is identical to the
modified Euler method with two iteration cycles as well as to the second order predictor-
corrector method.
(c) Each Runge-Kutta method becomes unstable if α is negative and |αh| exceeds a certain
criterion.
(d) The local error of a Runge-Kutta method can be found by running the same method
twice: the first time for one interval with a value of h, and the second time for two intervals
with h/2.
to explain the methods, let us consider an equispaced time interval and assume that
the solution has been calculated up to time point n so that the values of y and y' on the
pervious time points may be used for the calculation of yn+1.
Both predictor and corrector formulas are derived by introducing appropriate
polynomial approximation for y'(t) into Eq. (9.3.2). The most primitive member of the
predictor-corrector methods is the second-order predictor-corrector method, which is
identical to the second-order Runge-Kutta method.
Let us derive a third-order predictor by approximating y' = ƒ (y, tr) with a
quadratic interpolation polynomial fitted of f'n, y'n-1 and y'n-2:
1
y' ( z ) = [ ( z + h ) ( z + 2 h ) y ' n − 2 z ( z + 2 h ) y ' n − 1 + z ( z + h ) y ' n − 2 ] + E ( Z ) (9.4.1)
2 h2
z = t - tn
And E (z) is the error (see section 2.3). Equation (9.4.1) is the Lagrange interpolation fitted
to the values y'n, y'n-1 and y'n-2. The error of the polynomial is
z ( z + h) ( z + 2 h ) y ( i v ) ( ) , t n − 2 t n + 1
1
E ( z)= (9.4.2)
3!
Here, the derivative in the error term is of the fourth order because a quadratic polynomial
is fitted to y'.
Equation (9.3.2) can be rewritten in terms of the local coordinate z = t – tn as
h
y n +1 = y n + 0
y' ( z ) d z
(9.4.3)
h
y n +1 = y n + ( 23 y 'n − 16 y 'n −1 + 5 y 'n − 2 ) + 0 ( h 4 ) (9.4.4)
12
Equation (9.4.4) is called the third-order Adams-Bash forth predictor formula. The error of
Eq. (9.4.4) is attributable to Eq. (9.4.2) and is evaluated by integrating Eq. (9.4.2) in [0, h],
as allows:
h y ( ) , tn − 2 tn + 1
3 4 (iv)
0( h4 ) =
8
The deriving Eq. (9.4.4), notice that Eq. (9.4.1) has been used as an extrapolation. As
pointed out in section 2.9, extrapolation is lees accurate than interpolation (see section 2.9
and Appendix A). Therefore, Eq. (9.4.4) is used only as a predictor and is written as
h
y n +1 = y n + ( 23 y 'n − 16 y 'n −1 + 5 y 'n − 2 ) + 0 ( h 4 )
12
y' ( z ) =
1
2 h2
z ( z + h ) y ' n + 1 − 2 ( z − h) ( z + h) y ' n + z ( z − h) y ' n − 1 + E ( z ) (9.4.6)
Where z is the local coordinate defined after Eq. (9.4.1). The error of this equation is
z ( z + h) ( z + 2 h ) y ( i v ) ( ) , t n − 1 t n + 1
1
E ( z)=
3!
h
y n +1 = y n + ( 5 y ' n + 1 + 8 y ' n − y ' n −1 ) + 0 ( h 4 )
12 (9.4.7)
The error is
h y ( ) , t n −1 t n + 1
1 4 (iv)
0( h4 ) = −
24
Equation (9.4.7) is named the Adams-Moulton corrector formula of order 3. The set of Eqs.
(9.4.5) and (9.4.7) is called the third-order Adams predictor-corrector method.
As seen from the preceding derivation, numerous formulas can be derived by
changing the choice of the extrapolating and interpolating polynomials.
In discussing the predictor-corrector methods, we have assumed that the
solutions for previous points are available. The third-order predictor-corrector method
needs three previous values of y as explained earlier. therefore, to start up the method, the
solutions for n = 0, n = 1, and n = 2 are necessary, the first of which is given by an initial
condition, but the second and third should be provided by some other means than the
predictor-corrector method, such as a Runge-Kutta method.
The interpolation polynomial fitted to y' at points n, n-1, n-2, …, n-m may be written in the
Newton backward formula [see Eq. (2.4.14)] as
m
s + k − 1 k
gm (t ) = ( − 1 ) k y' n −k
(9.4.8)
k =0 k
Where
t − tn
s=
h
By introducing Eq. (9.4.8) into Eq. (9.3.2), we obtain the Adams-Bashforth predictor
formula of order m + 1:
y n + 1 = yn + h b0 y 'n + b1 y 'n −1 + ......... + bm m y 'n − m (9.4.9)
Where
1 s + k − 1 (9.4.10)
bk = 0
k
d s
b0 = 1
1
b1 =
12
5
b2 =
12
3
b3 =
8
251
b4 =
720
if we set m = 2 in Eq. (9.4.9) for example, we obtain the third-order predictor given by Eq.
(9.4.4). By following the same procedure for m = 3, the fourth-order predictor formula is
derived as
9h
y n +1 = y n + ( 55 y 'n − 59 y 'n −1 + 37 y 'n − 2 − 9 y 'n − 3 ) + 0 (h 5 ) (9.4.11)
24
Where
h y ( ) , tn −3 tn + 1
251 5 ( v )
0 ( h5 ) =
720
The corrector formulas may be derived by using the polynomial fitted to y' at grid points,
n+1, n, n-1, …., n –m +1. The Newton backward interpolation formula fitted to y' at these
points (see section 2.5) is
m
s + k − 2 k
gm (t ) y ' n + 1 −k
(9.4.12)
k =0 k
Introducing this equation into Eq. (9.3.2) yields the Adams-Moulton corrector formula
y n + 1 = y n + h c0 y ' n + 1 + c1 y' n + ......... + cm m y' n − m
Where
1 s + k − 2
ck = 0
k
d s
h y ( ) , tn − 2 tn + 1
19 5 ( v )
0 ( h5 ) = −
720
The set of Eqs. (9.4.11) and (9.4.14) is called the fourth-order Adams predictor-corrector
method.
(a) The method cannot be started by itself because of the use of previous points, until the
solutions for enough points are determined, another method such as Runge-Kutta method
must be used.
(b) Because previous points are used, changing the interval size in the middle of solution
is not easy. Although there predictor-corrector formulas may be derived on nonuniformly
spaced points, the coefficients of the formulas change for each interval, so programming
becomes very cumbersome.
(c) The predictor-corrector method cannot be used if y' becomes discontinuous. This can
happen when one of the coefficients of the differential equation changes discontinuously
in the middle of the domain.
However, the last two difficulties can be overcome as follows: because the predictor-
corrector program must contain a self-starting method such as a Runge-Kutta method
anyway, the computation can be restarted whenever the step interval has to be changed or
when y' becomes discontinuous.
One advantage of the predictor-corrector methods is that local error may be evaluated
easily by observing the difference between the predictor and the corrector in each step. For
illustration of analysis, we consider the third-order Adams predictor-corrector method.
Equations (9.4.4) and (9.4.7) indicate that, assume that yn, yn-1, yn-2… are exact, the predictor
and corrector values become
3 4 (i v )
y n + 1 = yn + 1 , exact − h y ( )
8 (9.4.15)
1 4 (iv)
y n + 1 = y n + 1 , exact + h y ( ) (9.4.16)
24
If we assume further that the values of the fourth derivative in Eqs. (9.4.15) and (9.4.16)
take the same value, then subtracting Eq. (9.4.16) from Eq. (9.4.15) yields
10 4 ( i v ) (9.4.17)
y n + 1 − yn + 1 = − h y ( )
24
y n + 1 , exact − yn + 1 =
1
10
(y n +1 − yn + 1 ) (9.4.18)
The right side of Eq. (9.4.18) is the local error of the corrector. Because it is expressed in
terms of the difference between the predictor and the corrector, the calculation is simple.
By using this algorithm at every step interval, the local error of the method can be
automatically monitored in a program.
y' = αy (9.4.19)
h
y n +1 = y n + ( 23 y 'n − 16 y 'n −1 + 5 y 'n − 2 )
12
h
y n +1 = y n + ( 5 y 'n + 1 + 8 y 'n − y 'n − 1 )
12
In the foregoing two equations and reorganizing the terms lead to y n +1 Eliminating
yn + 1 = − a2 yn − a1 yn − 1 − a0 yn − 2 (9.4.20)
Where
a 2 = − (1 + 13 b + 115 b 2 )
a1 = b + 80 b 2
a 0 = − 25 b 2
h
b =
12
Equation (9.4.20) may be considered as the initial value problem of a difference
equation, for which the analytical solution may be obtained in a similar way as for a third-
order linear ordinary differential equation. Indeed, the analytical solution of Eq. (9.4.20)
may be found in the form.
yn = c n (9.4.21)
Where γ is a characteristic value and c is a constant. By introducing Eq. (9.4.21) into Eq.
(9.4.20), we get the characteristic equation:
3 + a2 2 + a1 + a0 = 0 (9.4.22)
Equation (9.4.22) is a third-order polynomial equation, so it has three roots although two
of them can be complex values. We denoted the three roots by
1 , 2 , and 3
Because each of y1, y2, and y3 satisfies Eq. (9.4.20), a linear combination of all the solution
is also a solution of Eq. (9.4.20). The general solution of Eq. (9.4.20) may now be written
as
yn = c1 ( 1 ) n + c2 ( 2 )n + c3 ( 3 )n (9.4.23)
Where c1, c2 and c3 are determined when the initial values of yo, y1 and y2 are given
(remember that the third-order predictor-corrector method needs three starting values).
The exact solution to the original problem, Eq. (9.4.19_), is given by
yn = y ( 0 ) exp ( n h ) (9.4.24)
Where y (0) is the initial value of y (t). The question is each of the three terms in Eq. (9.4.23)
is related to Eq. (9.4.24). The answer is that one term in Eq. (9.4.23) is an approximation
for Eq. (9.4.24), but the other two are irrelevant to the true solution and constitute a part of
the error of the scheme. we assumed that the first term is the approximation and that the
second and the third are the errors. Instability of the method is then related to the second
and third are the error terms vanish as n increases, there is no instability behavior of the
numerical solution occurs. This is the instability, and it happens if
2 1 or 3 1 or both
When the predictor-corrector method is applied to Eq. (9.4.19), both α and h affect the
instability. However, because α and h always appear as a product [see Eq. (9.4.20), we can
consider αh as one parameter. the roots of Eq. (9.4.22) for various values of αh are shown
in table 9.2.
Table 9.2 Characteristic values of the third-order predictor-corrector method applied to y'(t) = αy (t)
Percentage
αh exp (αh) γ1 Error γ 2, γ 3 | γ 2|, | γ 3|
0.1 1.1051 1.1051 0 0.006±0.039j 0.040
0.2 1.2214 1.2214 0 0.014±0.074j 0.075
0.5 1.6487 1.6477 0.06 0.047±0.155j 0.162
1.0 2.7183 2.6668 1.90 0.108±0.231j 0.255
1.5 4.4816 4.1105 8.3 0.155±0.266j 0.308
2.0 7.3891 5.9811 23 0.190±0.283j 0.341
2.5 12.1825 8.2705 32 0.215±0.292j 0.362
-0.1 0.9048 0.9048 0 -0.003±0.043j 0.043
-0.2 0.8187 0.8189 0.02 -0.002±0.092j 0.092
-0.3 0.7408 0.7416 0.1 -0.003±0.145j 0.145
-0.4 0.6703 0.6732 0.43 -0.011±0.203j 0.203
-0.5 0.6065 0.6147 1.35 0.022±0.265j 0.266
-1.0 0.3678 0.4824 31.2 0.116±0.588j 0.600
-1.5 0.2231 0.4944 121. 0.338±0.821j 0.889
-2.0 0.1353 0.5650 419. 0.731±0.833j 1.109
j = −1
(b) The predictors of the Adams predictor-corrector methods are named Adams-Bashforth
predictors. They are derived by integrating a polynomial extrapolation of y' for the previous
points.
(c) The correctors of the Adams predictor-corrector methods are named Adams-Moulton
predictors. And they are derived by integrating a polynomial interpolation of ( the
predicted value for the new point). y ' y' for the previous points plus
(d) The second-order predictor-corrector method is identical with the second-order Runge-
Kutta method.
(e) The third-and fourth-order predictor-corrector methods cannot be self-started. However,
once started, their computational efficiency is higher than that of the Runge-kutta method.
Error check in each interval is easier than for the Runge-Kutta method.
In this section, five applications of the numerical methods for initial value problems are
shown. Although the fourth-order Runge-Kutta method is used throughout this section, it
replaced by any other method for ordinary differential equations described in this chapter.
Example 9. 12
A metal piece of 0.1 kg mass and 200º C (or 473ºk) is suddenly placed in a room
of temperature 25º C, where it is subject to both natural convection cooling and radiation
heat transfer. Assuming that the temperature distribution in the metal is uniform, the
equation for the temperature may be written as
dT A
= [ ( 2974 − T 4 ) + hc (297 − T ) ] , T ( 0 ) = 473 ( A)
dt pcv
Where T is the temperature is degrees Kelvin, and we assume that the constants are given
by
p = 300 k g / m 3 ( density of the matel )
(Solution)
t (sec) T(ºK)
0 473
10 418.0
20 381.7
30 356.9
60 318.8
120 300.0
180 297.4
Example 9.13
The electric current of the circuit shown in figure E9.13A satisfies the integro-
differential equation
di 1 t 1
L
dt
+ Ri +
C 0
i ( t' ) d t' +
C
q (0) = E (t ), t 0
(A)
Where the switch is closed at t = 0; I = i(t) is the current (amp); R is resistance (ohm); L,
C and E are given by
L = 200 Henry
C = 0.001 farad
E (t) = 1 volt for t > 0
Initial conditions are q (0) = 0 (capacitor's initial charge) and I (0) = 0. Calculate the current
for 0 ≤ t ≤ 5 sec after closing the switch (t = 0) for the following four values of R:
(a) R = 0 ohm
(b) R = 50 ohm
(c) R = 100 ohm
(d) R = 300 ohm
(Solution)
We first define
t
q (t ) = 0
i ( t' ) d t '
(B)
Differentiating Eq. (B) yields
d (C)
q (t ) = i (t ), q (0) = 0
dt
d R 1 1 E (t )
i ( t ) = − i ( t) − q (t ) + q(0) + , i ( 0) = 0 (D)
dt L LC LC L
thus, Eq. (A) is transformed to a set of two first-order ODEs Eqs (C) and (D) PROGRAM
9 N- 4 was modified for the problem in two respects (see note below). The result of the
computation is shown in a graphic form in figure E9.13b.
note: (a) to perform the calculations for all four cases in one run, four pairs of coupled
first-order ODEs are incorporated, the first pair corresponding to the first case, the section
pair to the second case, and so on. This is possible because not all equations in the program
have to be mathematically coupled.
(b) A graphic plotting routine is added, so all four cases are plotted on one graphic output.
Example 9.14
The three-mass system is shown in the figure below. The displacements of the three
masses satisfy the equations given by
M 1 y' '1 + B1 y'1 + k1 y1 − B1 y' 2 − k 2 y2 = F1 ( t )
M 1 = M 2 = M 3 = 1 (Mass, kg )
F1 ( t ) = 1, F3 (t ) = 0 (Force, Newton)
y1 (0) = y '1 (0) = y 2 = y ' 2 (0) = y3 (0) = y '3 (0) = 0 (Initial conditions)
Solve the foregoing equations by using the fourth-order Runge-Kutta method for 0 ≤ t ≤
30 sec with h = 0.1
(Solution)
By defining
y4 = y '1 , y5 = y'2 , and y6 = y '3 (B)
y'1 = y 4 (C 1)
y' 2 = y5
(C 2)
y'3 = y 6
(C 3)
1
y'4 = [ − B1 y 4 − k1 y1 + B1 y 5 + k 2 y 2 + F1 ] (C 4)
M2
1
y '5 = [ B1 y 4 + k1 y1 − B1 y 5 − ( k1 + k 2 ) y 2 + k 2 y 3 ] (C 5)
M2
1
y'6 = [ k 2 y 2 − B2 y 6 − ( k 2 k 3 ) y 3 + F3 ] (C 6)
M3
These equations are solved by modifying PROGRAM 9 – 3. The computational results are
shown in figure E9.14b.
Example 9.15
A rod 1.0 m long placed in a vacuum is heated by an electric current through the
rod. The temperature at both ends is fixed at 273º k. The heat is dissipated from the surface
by radiation heat transfer to the environment whose temperature is 273º k. Using the
following constants, determine the temperature distribution in the axial direction:
k = 60 W/ mk (thermal conductivity)
(Solution)
d2
− Ak 2
T + p ( T 4 − 273 4 ) = Q 0 x 1.0 (A)
dx
y1 ( x ) = T ( x )
y 2 ( x) = T ( x)
Eq. (A) may be rewritten as a set of two first-order ODEs as
y '1 = y2 (B)
p Q
y '2 = ( y 4 − 273 4 ) −
Ak kA
Only one initial condition, y1 (0) = 273, is known from the boundary conditions (but y2 (0)
is not known). So we solve Eq. (A) with trail values for y2 (0) until the boundary condition
for the right end, namely y1 (1) = 273, is satisfied. This approach is called the shooting
method [Rieder/ Busby].
For the present example, PROGRAM 9 – 3 is used with some modifications. The
results are directly plotted on a printer and shown in figure E9. 15. It is seen that y2 (0) =
1160 is too small as an initial guess, whereas y2 (0) = 1300 is too large. Some y2 (0) in
between these values should give the best result. After a few more trials, y2 (0) = 1200 is
found to satisfy almost exactly the right boundary condition.
Example 9.16
the temperature of a perfectly insulated iron bar 55 cm long is initially at 200 º C. the
temperature of the left edge is suddenly reduced and fixed to 0º C at t = 0 sec. calculate the
temperature distribution at even 100 sec is reached. the property constants are
p = 7870 k g / m³ (density)
We first divide the rod into eleven control volumes as shown in figure E 9.16a. Denoting
the average temperature of control volume I by Ti (t). The heat balance equation for control
volume i is written as
p c x A ( d T / dt ) = ( qi − 1 − qi ) A
In Eq. (A), q, is the heat flux (rate of condition of the heat transfer per unit cross-sectional
area) at the boundary of the control volumes I and I + 1, and written
300 0 67 122 160 182 193 197 199 200 200 200
400 0 58 108 146 172 186 194 198 199 200 200
By
k
qi = − ( Ti + 1 − Ti ) for i = 0, 1, 2, .......... , 9
x (B)
And
q10 = 0 (C)
introducing Eq. (B) into Eq. (A) and rearranging yield
d Ti k
= (Ti − 1 − 2 Ti + Ti + 1 ) (D – 1)
dt p c x2
d T10 k
= (T9 − T10 ) (D-
dt p c x2 2)
Equation (D) may be considered as a set of first-order ODEs and solved by using
one of the Runge-Kutta methods. The set of equations is solved by PROGRAM 9 – 3 with
some modifications. The computed results are shown in figure E9. 16b.
Notes:
(a) Equation (D) may be viewed as a semi difference approximation for the heat conduction
equation (parabolic partial differential equation)
T (x ,t )
k T (x , t) = p c
x x t
With the initial condition, T (x, 0) 200º C, and the boundary conditions, T (0, t) = T' (55, t)
= 0.
(b) The present solution technique for the partial differential equation using a numerical
method for ODEs is called the method of lines.
(d) The author's study indicates that the computations using h = 50 sec agree well with that
of h = 1 sec, but the solution scheme becomes unstable with h = 100 sec.
Stiffness refers to a very short time constant of an ODE. Consider, for example,
y' = − y + s (t ) , y ( 0 ) = y0 (9.6.1)
y (t ) = y0 e − t (9.6.2a)
The response of the system to the initial condition as well as to the change of s (t) is
characterized by 1 / |α| that is called the time constant.
(9.6.3) y' = - y + z + 3
z = −10 7 z + y
The second equation has a significantly shorter time constant than the first.
A number of numerical methods that allow a large time step have been proposed
including the implicit Runge-Kutta method and the rational Runge-Kutta method. Two
such methods are introduced in the remainder of this section.
d
y = f ( y , z, t )
dt
(9.6.4)
d
z = g ( y , z , t)
dt
Using the backward difference approximation to the left side, we can write an implicit
scheme as
yn + 1 − yn = h f ( yn + 1 , zn + 1 , t n + 1 ) h fn + 1
(9.6.5)
zn + 1 − zn = h g ( yn + 1 , zn + 1 , tn + 1 ) h gn + 1
Where the ƒ and g terms on the right side have unknowns yn+1 and z n+1,
If ƒ and g are nonlinear functions, Eq. (9.6.5) cannot be solved in a closed form.
However, the iterative solution explained in Subsection 9.2.3 can be applied very easily
[Hall/Watt]. Indeed, the reader is encouraged to try it. Unfortunately, it is not
computationally efficient for a large system of ODEs. A more efficient approach is to
linearize the equations by Taylor expansions [Kubicek; Constantinites]. The Taylor
expansion of fk, n+1 about tn becomes
f n + 1 = f n + f y y + f z z + ft h
(9.6.6)
g n + 1 = g n + g y y + g z z + gt h
Where
y = yn + 1 − yn , z = zn + 1 − zn (9.6.7)
Introducing Eq. (9.6.6) into Eq (9.6.5) and using Eq. (9.6.7) yields
.
1 − h f y − h fz y h fn h2 ft
−h g 1 − h g z z = (9.6.8a)
y h g n h 2 gt
or more compactly
( I − hJ ) y = RH S (ال8ز6ز9)
Where
fy fz
J =
g y g z
I = identity matrix
y = col ( y , z )
Exponential transformation and exponential fitting have been proposed and used by various
researchers to stiff ODEs. The subsection gives only a brief introduction of the basic ideas
in exponential methods.
To explain the principle, consider a single first0order ODE;
Where, for simplicity of the discussions, we assume f does not include t explicitly.
y ' + c y = f ( y, t ) + c y (9.6.10)
−ct
as an integrating factor, Eq. (9.6.10) is integrated e Where c is a constant. Using
As t n , t n + 1 in the interval
h
[ f ( y ( t n + ) , t n + ) + c y ( t n + ) ] e c ( − h ) d (9.6.11)
−ch
y (t n +1 ) = yn e +
0
y (t ) = yn + y (t ) (9.6.12)
y' = f ( y n + y)
= f n + ( f y ) n y + 0( y 2 ) (9.6.13)
By ignoring the second-order error term, Eq. (9.6.13) can be equivalently written as
y' − ( f y ) n y = f n − ( f y ) n y n
(9.6.14)
Which is a linearized approximation for Eq.(9.6.9) about t=tn. if c in Eq. (9.6.10)is set to
c = − ( fy )n (9.6.15)
c = − ( f ' / f )n (9.6.17)
An explicit numerical scheme is obtained by setting the terms in the brackets of Eq.
(9.6.11) by
[ f ( y , tn + ) + c y ( tn + ) ] f n + c yn (9.6.18)
1 e −c h
= yn + h f n (9.6.19)
ch
Which is known as exponentially fitted method [Bui; Oran; Hetric; Fergason/hasen]. Not
only this method is unconditionally stable but also positivity of the solution is guaranteed
whenever the exact solution is expected to be positive.
The errors of Eq. (9.6.19) come from the approximation of Eq. (9.6.18). A more
accurate method using an iterative procedure is developed in the remainder of this
subsection. Based on Eq.(9.6.19), a predictor for y(t) for tn < t < tn+1 can be set
1 − e − c
y ( t ) = yn + fn , = t − t n (9.6.20)
c
1 − e −ch
y n +1 = yn + fn
c
h
y n + 1 = yn + =0
[ f ( y ( t n + ) , t n + ) − f n + c y ( t n + ) − cy n ] e c ( − h ) d (9.6.21)
The second term of Eq. (9.6.21) is a correction of Eq. (9.6.20), and can be evaluated by
any one of the following:
(a) Analytical integration if it is possible.
(b) Approximating the terms in the brackets by a linear interpolation.
(c) Integrating by the trapezoidal rule.
Approach (a) is not easy unless ƒ is a simple function, so we do not consider it any
further. To pursue (b), the linear interpolation of the bracketed part is written as
[ f ( y , t n + ) − f n + c y ( t n + ) − c yn ] B (9.6.22)
Where
f n +1 − f n + c( y n +1 − y n )
B=
h
Bh 2 1 − e − ch (9.6.23)
yn +1 = y n +1 +
ch ch − 1
Bh 2
yn +1 = yn +1 + (9.6.24)
2
(b) To alleviate the difficulty of the stiff ODEs, two methods, including an implicit
method and exponential method, are introduced.
11.1 Basic concepts and definitions
these variables and the partial derivatives ux, uy,......,uxx, uxy, uyy ........of the function.
confine our discussion for n=2. However extension of properties discussed here to
occurring in the equation. The power of the highest order derivative in a differential
u u
Example 11.1 (a) x +y =0 is a first-order equation in two variables with
x y
variable coefficients.
u u
(b) a +b =c; where x,y are independent variables, a and b are constants;
x y
u u
(c) + -(x+y) u=0 is a partial differential equation of first-order.
x y
2u 2u 2u u u
(d) a(x) +2b(x) +c(x) =x+y+u+ + is a partial differential
x 2
xy y 2
x y
equation of second-order.
2u 2u 2u u u
(e) a(x) +2b(x) +c(x) = f(x,y,u, , )
x 2
xy y 2
x y
where a(x), b(x) and c(x) are functions of x and f(..,..,.,.,.) is a function of x,y,u,
u u
and , is a partial differential equation of second order.
x y
2u u
(f) u + =y is a partial differential equation of second-order.
xy x
2u 2u 2u
(g) + 2y + 3x = 4 sin x is a partial differential equation of second-
x 2 xy y 2
2u 2u
(h) = is a partial differential of second-order.
x 2 y 2
2
u
2
u
(i) + = 1 is a partial differential equation of first-order and second
x y
degree.
put values of quantities on the left hand side we get right hand side.
Example 11.2.
(i) Show that sin n(x+y), cosn(x+y) and ex+y are solutions of the partial
differential equation
u u
- =0
x y
(ii) Show that u(x,y)=(x+y)3 and u(x,y)=sin (x-y) are solutions of the partial
differential equation
2u 2u
- =0
x 2 y 2
u
Solution (i) = n cos n (x+y) if u(x,y) = sin n(x+y)
x
u
= n cos n (x+y) if u(x,y) = sin n(x+y)
y
u u
L.H.S. of the equation is - = ncos n(x+y) – n cos n(x+y) =0=R.H.S.
x y
u
= -nsin n(x+y) if u(x,y) = cos n(x+y)
x
u
= -nsin n(x+y) if u(x,y) = cos n(x+y)
y
L.H.S. = [-nsin n(x+y)]-[-n sin n(x+y)]=0=R.H.S.
u
= ex+y if u(x,y) ex+y
x
u
= ex+y if u(x,y) = ex+y
y
u 2u
(ii) For u(x,y) = (x+y)3, =3(x+y)2, =6 (x+y)
x x 2
u 2u
For u(x,y)=(x+y)3, =3(x+y)2, = 6(x+y)
y y 2
This implies that L.H.S. of the given partial differential is
2u 2u
- = 6(x+y)-6(x+y)=0=R.H.S.
x 2 y 2
u 2u
For u(x,y)=sin (x-y), =cos (x-y), = - sin (x-y)
x x 2
u 2u
= - cos (x-y), = - sin (x-y)
y y 2
L.H.S. of the partial differential equation is
2u 2 y
- = - cos (x-y) + cos (x-y) = 0 = R.H.S.
x 2 y 2
Therefore (x+y)3 and sin (x-y) are solutions of
2u 2u
- = 0.
x 2 y 2
u(.,.) and all its partial derivatives appear in an algebraically linear form, 'that is, of
A uxx+2Buxy+Cuyy+Dux+Euy+Fu = f (11.3)
where the coefficients A,B,C,D.E and F and the function f are functions of x and y,
Left hand side of (11.3) can be abbreviated by Lu, where u has continuous
operator, that is, L carries u to the sum of scalar multiplications of its partial
(u+v)= Lu+v where and are scalars and u and v are any functions with
called homogeneous if Lu=0, that is, f on the right hand side of a partial differential
equation is zero, say f=0 in 11.3. The partial differential equation is called non-
homogeneous if f0.
equation of first-order.
first-order.
second-order.
equation of second-order.
one that does not contain arbitrary functions or constants. Homogeneous linear
partial differential equation has an interesting property that if u is its solution then
a scalar multiple of u, that is, cu, where c is a constant, is also its solution. Any
equation of the type F(x,y,u,c1,c2)=0, where c1 and c2 are arbitrary constants, which
or general integral of that equation. It is clear that in some sense general solution
provides a much broader set of solutions than a complete solution. However a
u u 2u
Very often ux = , uy = ,uxx = 2
x y x
2u 2u
uxy = and uyy = are respectively denoted by p, q,r, s and t.
xy y 2
is
F(x,y,u,p,q)=0 (11.4)
F(x,y,u,p,q,r,s,t)=0 (11.5)
u
in the principal part, namely the terms involving first derivatives: thus, for A +
x
u
B = C, these equations are defined to be such that the left hand side, which
y
contains all derivatives is linear in u in that A,B depend on x and y alone; however
2u 2u 2u u u
A + 2B +C = f(x,y,u, , ) (11.7)
x 2
xy y 2
x y
where A,B,C are functions of x and y.
Section 11.1. In this section we mainly focus on the classification of second order
equations into elliptic, hyperbolic and parabolic types. Notion of Cauchy data (initial
and boundary conditions) and characteristic for partial differential equations are
introduced.
problem. The initial conditions, also known as Cauchy conditions, are the values
of the unknown function u(.,.) and an appropriate number of its derivatives at the
initial point.
u(.,.) in the independent variables x and y, and suppose that this equation can be
solved explicitly for uyy, and hence can be represented in the form
For some value y=y0, we prescribe the initial values of the unknown function
u(x,y0)=f(x) (11.9)
uy(x,y0)=g(x) (11.10)
The problem of determining the solution of (11.8) satisfying initial conditions
refer to the data assigned at y=y0. If initial values are prescribed along some curve
in the (x,y) plane, that is, finding solution of equation (11.8) subject to prescribed
value of y on some curve is called the Cauchy problem. These conditions are
u(x,0)= cos x 0 x l
is an initial-value problem.
(b) Suppose that is a curve in the (x,y) plane; we define Cauchy data to be the
parametric form
u u
A(x,y,u) +B (x,y,u) =C (11.10)
x y
Let (x0,y0) denote points on a smooth curve in the (x,y) plane. Also let the
where is a parameter.
We suppose that two functions f() and g() are prescribed along the curve
. The Cauchy problem is now one of determining the solution u(x,y) of Equation
u
u=f(), =g()
n
on the curve . n is the direction of the normal to which lies to the left of in the
counter clockwise direction of increasing arc length. The functions f() and g(()
in the (x,y,u) space passing through a curve having as its projection in the (x,y)
u
plane and satisfying =g() which represents a tangent plane to the integral
n
surface along .
The boundary conditions on partial differential equation (11.6) fall into the
first kind), when the values of the unknown function u are prescribed at each
second kind), when the values of the normal derivatives of the unknown
(iii) Robin boundary conditions (also known as boundary conditions of the third
u 2u
Example 11.4 (i) = k 2 , 0<x<l,t>0
t x
u(x,o)=f(x)
u
(x,o)=g(x), 0<x<l
t
u(0,t) =T1(t)
u(l,t)=T2(t), t>0
u 2u
(ii) =k , 0<x< l, t>o
t x 2
u
u(x,o)=f(x), (x,o)=g(x), 0<x< l
t
u u
(0,t) =T3(t), (l,t)=T4(t), t>0
n n
u 2u
(iii) = k 2 , 0<x< l, t>0
t x
u
u(x,o)=f(x), (x,o)=g(x), 0<x< l,
t
u
u(0, t) + (0, t) = 0,
n
t 0 .
u
u(l, t) + (l, t ) = 0
x
devoted to initial and boundary value problems. Solutions of few important initial
homogeneous equation
we replace ux by , uy by , uxx by 2, uxy by , and uyy by 2. The left hand side
the quantity
The equation
Example 11.5 Examine whether the following partial differential equations are
2u 2u
(ii) + y =0
x 2 y 2
2 2u 2 u
(iii) y - =0
x 2 y 2
Solution (i) A = 1, C = x, B = 0
if x = 0.
In this case the equation is hyperbolic B2-AC=o if x=y. For this the equation
coefficients.
The most general form of linear partial differential equations of first order
Aux+Buy+Ku=f(x,y) (11.15)
du=uxdx+uydy (11.16)
dx dy du
= = (11.17)
A B f ( x, y ) − Ku
Bx − c
The solution of the left pair is Bx-Ay=c or y= , where c is an arbitrary
A
constant of integration
dx dy
A = B or Bdx-Ady=0 or Bx-Ay=c by integrating both sides of the
previous equation .
The other pair
dx du
=
A f ( x, y ) - Ku
du f ( x, y ) - Ku
=
dx A
Bx - c
f ( x, )
du Ku f ( x, y ) A
or + = =
dx A A A
kx
Avx+Bvy = f(x,y)e A =g(x,y)
Aux+Buy=f(x,y) (11.18)
dx dy du
= = (11.19)
A B f ( x, y )
dx dy
The solution of = is
A B
Ay + c
x=
B
Substituting this value in
dy du
= we get
B f ( x, y )
dy du
=
B Ay + c
f( , y)
B
Ay + c
f( , y)
or du=F(y,c) dy where F(y,c)= B
B
equations contain two independent equations, with two solutions of the form
in the (x,y)-plane are called the base characteristics. The general solution
represents a family of surfaces, and these surfaces are called integral surfaces.
of any one of these planes with an integral surface is a curve whose projection in
the (x,y)-plane is again given by Bx-Ay=c, but this time this equation represents a
straight line and is the base characteristic. Therefore, the solution u on a base
same as above.
Example 11.6 Find the general solution of the first-order linear partial differential
4ux+uy=x2y
dx dy du
= = 2
4 1 x y
From here we get
dx dy
= or dx-4dy=0. Integrating both sides
4 1
dx du
we get x-4y=c. Also = 2 or x2y dx=4du
4 x y
x-c
or x2 ( ) dx =4du or
4
1
(x3 – cx2) dx = du
16
3 x 4 - 4cx 3
u=c1+
192
3 x 4 - 4cx 3
= f(c)+
192
3 x 4 - 4( x - 4 y )x 3
u=f(x-4y)+
192
x4 x3y
=f(x-4y)- +
192 12
variable coefficients is
P(x,y)ux+Q(x,y)uy+f(x,y)u=R(x,y) (11.21)
where P,Q,R in (11.22) are not the same as in (11.21). The following theorem
Theorem 11.1 The general solution of the linear partial differential equation of first
order
Pp+Qq=R; (11.23)
u u
where p= , q = , P, Q and R are functions of x y and u
x y
is F(, ) = 0 (11.24)
where F is an arbitrary function and (x,y,u) =c1 and (x,y,u)=c2 form a solution
dx dy du
= = (11.25)
P Q R
xdx+y dy +udu=0
and
dx dy du
= =
P Q R
Px+Qy+Ru=0
F F
+ p + + p = 0
x u x u
F F
+ q + + q = 0
y u y u
F F
and if we now eliminate and from these equations, we obtain the
( , ) (, ) ( , )
equation p +q = (11.27)
( y, u) (u, x ) ( x, y )
=g() or =h(),
Example 11.7 Find the general solution of the partial differential equation y2up +
x2uq = y2x
dx dy du
2
= 2 = 2 (11.28)
y u x u xy
Taking the first two members we have x2dx = y2dy which on integration
given x3-y3 = c1. Again taking the first and third members,
we have x dx = u du
First-Order
Let
u u
where p=ux= , q = uy=
x y
If we can find another relation between x,y,u,p,q such that
f(x,y,u,p,q)=0 (11.31)
then we can solve (11.28) and (11.30) for p and q and substitute them in equation
Differentiating (11.30) and (11.31) partially with respect to x and y (remembering that p and q depend on x and y) gives
Fx + p Fu + Fp (∂p/∂x) + Fq (∂q/∂x) = 0   (11.32)
fx + p fu + fp (∂p/∂x) + fq (∂q/∂x) = 0   (11.33)
Fy + q Fu + Fp (∂p/∂y) + Fq (∂q/∂y) = 0   (11.34)
fy + q fu + fp (∂p/∂y) + fq (∂q/∂y) = 0   (11.35)
where subscripts denote partial derivatives (Fp = ∂F/∂p, fx = ∂f/∂x, and so on).
Eliminating ∂p/∂x from equations (11.32) and (11.33), and ∂q/∂y from equations (11.34) and (11.35), we obtain
(Fx fp − fx Fp) + (Fu fp − fu Fp) p + (Fq fp − fq Fp) ∂q/∂x = 0
(Fy fq − fy Fq) + (Fu fq − fu Fq) q + (Fp fq − fp Fq) ∂p/∂y = 0
Since
∂q/∂x = ∂²u/∂x∂y = ∂p/∂y
the last terms cancel when the two equations are added, and the sum can be rearranged as a linear first-order equation for f:
−Fp fx − Fq fy − (p Fp + q Fq) fu + (Fx + p Fu) fp + (Fy + q Fu) fq = 0   (11.36)
By Theorem 11.1 this equation can be solved from the auxiliary system of equations
dx/(−Fp) = dy/(−Fq) = du/(−(p Fp + q Fq)) = dp/(Fx + p Fu) = dq/(Fy + q Fu) = df/0   (11.37)
Any integral of (11.37) that contains p or q may be taken as the required equation (11.31). The values of p and q determined from (11.30) and (11.31) will make du = p dx + q dy integrable, and integration then gives a complete solution of (11.30).
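Because (11.37) is produced purely by differentiating F, the system can be generated mechanically. Below is a minimal sketch in Python with sympy (an assumed tool, not part of the original notes) that returns the denominators of dx, dy, du, dp, dq in (11.37) for any given F(x, y, u, p, q).

import sympy as sp

x, y, u, p, q = sp.symbols('x y u p q')

def charpit_denominators(F):
    # Denominators of dx, dy, du, dp, dq in the auxiliary system (11.37).
    Fx, Fy, Fu = sp.diff(F, x), sp.diff(F, y), sp.diff(F, u)
    Fp, Fq = sp.diff(F, p), sp.diff(F, q)
    return [-Fp, -Fq, -(p*Fp + q*Fq), Fx + p*Fu, Fy + q*Fu]

# For Example 11.8 below, F = p**2*x + q**2*y - u:
print(charpit_denominators(p**2*x + q**2*y - u))
# [-2*p*x, -2*q*y, -2*p**2*x - 2*q**2*y, p**2 - p, q**2 - q]
# Multiplying every entry by -1 gives the sign convention used in (11.39).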
Example 11.8 Find the general solution of the partial differential equation
x (∂u/∂x)² + y (∂u/∂y)² − u = 0   (11.38)
Solution: Let p = ∂u/∂x and q = ∂u/∂y, so F = p²x + q²y − u. Then
∂F/∂p = 2px, ∂F/∂q = 2qy, ∂F/∂x = p², ∂F/∂u = −1, ∂F/∂y = q²
and, after multiplying by −1 throughout, the auxiliary system (11.37) becomes
dx/(2px) = dy/(2qy) = du/(2(p²x + q²y)) = dp/(p − p²) = dq/(q − q²)   (11.39)
From the first and fourth members of (11.39), using the multipliers p² and 2px, we get
dx/(2px) = (p² dx + 2px dp)/(2p²x). From the second and fifth members, using the multipliers q² and 2qy,
dy/(2qy) = (q² dy + 2qy dq)/(2q²y).
Since dx/(2px) = dy/(2qy), it follows that
(p² dx + 2px dp)/(p²x) = (q² dy + 2qy dq)/(q²y)
or
dx/x + 2dp/p = dy/y + 2dq/q.
Integrating, ln|x| + 2 ln|p| = ln|y| + 2 ln|q| + ln c, that is, xp² = c yq².
Substituting this in (11.38) gives (c + 1) q²y = u, so that
q = √(u/((c + 1)y))
and
p = √(cu/((c + 1)x)).
Then
du = p dx + q dy = √(cu/((c + 1)x)) dx + √(u/((c + 1)y)) dy
or
√((1 + c)/u) du = √(c/x) dx + √(1/y) dy.
By integrating this equation we obtain the complete solution
√((1 + c)u) = √(cx) + √y + c1.
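The complete solution can be checked symbolically (a minimal sketch in Python with sympy, assumed tooling): solve the last relation for u and substitute into (11.38).

import sympy as sp

x, y, c, c1 = sp.symbols('x y c c1', positive=True)

# Complete solution of Example 11.8, solved explicitly for u:
# sqrt((1 + c)*u) = sqrt(c*x) + sqrt(y) + c1
u = (sp.sqrt(c*x) + sp.sqrt(y) + c1)**2 / (1 + c)

p = sp.diff(u, x)
q = sp.diff(u, y)
residual = x*p**2 + y*q**2 - u     # left-hand side of (11.38)
print(sp.simplify(residual))       # prints 0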
Equations involving p and q only. For an equation of the form
F(p, q) = 0   (11.41)
the auxiliary system (11.37) reduces to
dx/Fp = dy/Fq = du/(p Fp + q Fq) = dp/0 = dq/0
Using dp = 0 we get p = c, and substituting this in (11.41) we have
F(c, q) = 0   (11.42)
Solving (11.42) for q, say q = G(c), gives
du = c dx + G(c) dy
which integrates to the complete solution u = cx + G(c) y + c1.
Example: Find solutions of the equation p² + q² = 1. Here the auxiliary system is
dx/(−2p) = dy/(−2q) = du/(−2p² − 2q²) = dp/0 = dq/0
or
dx/p = dy/q = du/(p² + q²) = dp/0 = dq/0
Using dp = 0, we get p = c and q = √(1 − c²), and these two combined with du = p dx + q dy yield the complete solution
u = cx + √(1 − c²) y + c1.
(Along a characteristic, du = dx/p with p = c, so u = x/c + c1; also du = dy/q with q = √(1 − c²), so u = y/√(1 − c²) + c2.)
Further solutions of the form
u² = (x − α)² + (y − β)²
are obtained, for example as the envelope of the planes of the complete solution passing through the fixed point (α, β, 0); they too can be checked to satisfy p² + q² = 1.
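Both families can be verified directly (a minimal sketch in Python with sympy, assumed tooling):

import sympy as sp

x, y, c, c1, a, b = sp.symbols('x y c c1 alpha beta', real=True)

# Complete solution (planes) of p**2 + q**2 = 1
u_plane = c*x + sp.sqrt(1 - c**2)*y + c1
res1 = sp.diff(u_plane, x)**2 + sp.diff(u_plane, y)**2 - 1
print(sp.simplify(res1))    # prints 0

# Cone solutions u**2 = (x - alpha)**2 + (y - beta)**2
u_cone = sp.sqrt((x - a)**2 + (y - b)**2)
res2 = sp.diff(u_cone, x)**2 + sp.diff(u_cone, y)**2 - 1
print(sp.simplify(res2))    # prints 0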
Clairaut's equation. An equation of the form
u = px + qy + f(p, q)
is called a Clairaut equation. Writing it as
F = px + qy + f(p, q) − u = 0   (11.43)
the auxiliary system of equations for the Clairaut equation takes the form
dx/(x + fp) = dy/(y + fq) = du/(px + qy + p fp + q fq) = dp/0 = dq/0
The last two members give p = c1 and q = c2, and substituting these back gives
u = c1 x + c2 y + f(c1, c2)
which is a complete solution of (11.43).
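For a concrete illustration (a minimal sketch in Python with sympy, assumed tooling; the choice f(p, q) = pq is only an example and is not taken from the notes):

import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2', real=True)

# Clairaut equation u = p*x + q*y + f(p, q) with the illustrative choice f(p, q) = p*q
u = c1*x + c2*y + c1*c2            # complete solution u = c1*x + c2*y + f(c1, c2)

p = sp.diff(u, x)                  # = c1
q = sp.diff(u, y)                  # = c2
residual = p*x + q*y + p*q - u     # should vanish identically
print(sp.simplify(residual))       # prints 0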
Equations involving u, p and q only. For an equation of the form
F(u, p, q) = 0   (11.44)
we have Fx = Fy = 0, and the last two members of (11.37) yield
dp/p = dq/q
whose integral may be written p = a²q, with a an arbitrary constant. This relation together with (11.44) can be solved for p and q, and du = p dx + q dy can then be integrated.
For example, consider the equation pq = 4 − u². Its auxiliary system is
dx/q = dy/p = du/(2pq) = dp/(−2up) = dq/(−2uq)
The last two members give p = a²q, and together with pq = 4 − u² this yields (taking the positive root)
q = (1/a) √(4 − u²)  and  p = a √(4 − u²)
Then
du = p dx + q dy = √(4 − u²) (a dx + (1/a) dy)
or
du/√(4 − u²) = a dx + (1/a) dy
Integrating we get
sin⁻¹(u/2) = ax + y/a + c
or u = 2 sin(ax + y/a + c).
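A quick symbolic check of this solution (a minimal sketch in Python with sympy, assumed tooling):

import sympy as sp

x, y, a, c = sp.symbols('x y a c', positive=True)

u = 2*sp.sin(a*x + y/a + c)        # solution obtained above
p = sp.diff(u, x)                  # 2*a*cos(a*x + y/a + c)
q = sp.diff(u, y)                  # (2/a)*cos(a*x + y/a + c)

residual = p*q - (4 - u**2)        # should vanish if u solves p*q = 4 - u**2
print(sp.simplify(residual))       # prints 0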
Separable equations of the form f(x, p) = g(y, q). Setting each side equal to a constant,
f(x, p) = g(y, q) = a
we solve for p and q and use du = p dx + q dy to obtain the solution.
For example, if the separation gives p √(1 − x²) = q √(4 − y²) = a, then
p = a/√(1 − x²) and q = a/√(4 − y²)
so that
du = a dx/√(1 − x²) + a dy/√(4 − y²)
Integration gives u = a sin⁻¹ x + a sin⁻¹(y/2) + c.
Geometrically, for a partial differential equation of the first order that is nonlinear in p and q, the values of p = ∂u/∂x and q = ∂u/∂y are not unique at a point (x, y, u). If an integral surface is written as u = u(x, y), then p and q are the slopes of the curves of intersection of the surface with the planes y = constant and x = constant, respectively. Moreover, (p, q, −1) represent the direction ratios of the normal to the surface at the point. Since p and q are not fixed by the equation, there are infinitely many possible normals and consequently infinitely many integral surfaces passing through any fixed point. So, unlike the case of ordinary differential equations, a solution is not singled out by requiring it to pass through a point; a particular solution is selected by making it pass through a continuous twisted space curve, also known as an initial curve. At a fixed point the admissible normal directions generate a cone known as the normal cone. The corresponding tangent planes to the integral surfaces envelope a cone known as the Monge cone. In the case of a linear or a quasi-linear equation, the normal cone degenerates into a plane, since each normal is perpendicular to a fixed line. Consider the equation ap + bq = c, where a, b and c are functions of x, y and u. Then the direction (p, q, −1) is perpendicular to the direction ratios (a, b, c). This direction is fixed at a fixed point. The Monge cone then degenerates into a coaxial set of planes known as the Monge pencil. The common axis of the planes is the line through the fixed point with direction ratios (a, b, c); it is called the Monge axis.
Homogeneous Second-Order Equations with Constant Coefficients
Consider the equation
∂²u/∂x² + k1 ∂²u/∂x∂y + k2 ∂²u/∂y² = 0   (11.45)
which in operator notation, with Dx = ∂/∂x and Dy = ∂/∂y, reads
(Dx² + k1 Dx Dy + k2 Dy²) u = 0
The operator factorizes according to the roots of
Dx² + k1 Dx Dy + k2 Dy² = 0
Let the roots of this equation be m1 and m2, that is, Dx = m1 Dy and Dx = m2 Dy. Each factor gives a first-order equation p − m q = 0, and this implies
dx/1 = dy/(−m2) = du/0
or y + m2 x = c, and likewise
dx/1 = dy/(−m1) = du/0
or y + m1 x = c. Hence
u = φ(y + m1 x) + ψ(y + m2 x)
with φ and ψ arbitrary functions, is the general solution of (11.45).
If the roots are equal (m1 = m2), then equation (11.45) is equivalent to
(Dx − m1 Dy)² u = 0
Putting z = (Dx − m1 Dy) u, we first get z = φ(y + m1 x), so that
(Dx − m1 Dy) u = φ(y + m1 x)
or p − m1 q = φ(y + m1 x)
with auxiliary system
dx/1 = dy/(−m1) = du/φ(y + m1 x)
Along y + m1 x = c this gives du = φ(c) dx, and hence
u = x φ(y + m1 x) + ψ(y + m1 x).
Example: Find the general solution of
∂²u/∂x² − ∂²u/∂y² = 0
Solution: In the terminology introduced above this equation can be written as
(Dx² − Dy²) u = 0
or (Dx − Dy)(Dx + Dy) u = 0.
The factors give Dx − Dy = 0 or Dx + Dy = 0, that is,
p − q = 0 or p + q = 0
with characteristic systems
dx/1 = dy/(−1) = du/0 and dx/1 = dy/1 = du/0
so that y + x = c and y − x = c respectively. Hence the general solution is
u = φ(y + x) + ψ(y − x).
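The result is easy to verify with arbitrary functions (a minimal sketch in Python with sympy, assumed tooling):

import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Function('phi')
psi = sp.Function('psi')

# General solution of u_xx - u_yy = 0 found above
u = phi(y + x) + psi(y - x)

residual = sp.diff(u, x, 2) - sp.diff(u, y, 2)
print(sp.simplify(residual))   # prints 0 for arbitrary phi and psi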
Equations of the form
∂²u/∂x² + k1 ∂²u/∂x∂y + k2 ∂²u/∂y² = f(x, y)   (11.48)
are called non-homogeneous partial differential equations of the second order with constant coefficients. Their general solution is the sum of a particular solution of (11.48) and the general solution (the complementary function) of the corresponding homogeneous equation
∂²u/∂x² + k1 ∂²u/∂x∂y + k2 ∂²u/∂y² = 0   (11.49)
Let f(Dx, Dy) be a linear partial differential operator with constant coefficients. The inverse operator 1/f(Dx, Dy) is defined by the requirement that applying it to φ(x, y) produces a particular solution of f(Dx, Dy) u = φ(x, y); that is,
f(Dx, Dy) [1/f(Dx, Dy)] φ(x, y) = φ(x, y)   (11.50)
The inverse operator obeys the following rules:
[1/(f1(Dx, Dy) f2(Dx, Dy))] φ(x, y) = [1/f1(Dx, Dy)] [1/f2(Dx, Dy)] φ(x, y)   (11.51)
 = [1/f2(Dx, Dy)] [1/f1(Dx, Dy)] φ(x, y)   (11.52)
[1/f(Dx, Dy)] [φ1(x, y) + φ2(x, y)] = [1/f(Dx, Dy)] φ1(x, y) + [1/f(Dx, Dy)] φ2(x, y)   (11.53)
[1/f(Dx, Dy)] e^(ax+by) = [1/f(a, b)] e^(ax+by),  f(a, b) ≠ 0   (11.54)
[1/f(Dx, Dy)] [φ(x, y) e^(ax+by)] = e^(ax+by) [1/f(Dx + a, Dy + b)] φ(x, y)   (11.55)
 = e^(ax) [1/f(Dx + a, Dy)] [e^(by) φ(x, y)] = e^(by) [1/f(Dx, Dy + b)] [e^(ax) φ(x, y)]   (11.56)
[1/f(Dx², Dy²)] cos(ax + by) = [1/f(−a², −b²)] cos(ax + by)   (11.57)
[1/f(Dx², Dy²)] sin(ax + by) = [1/f(−a², −b²)] sin(ax + by)   (11.58)
When φ(x, y) is any function of x and y, we resolve 1/f(Dx, Dy) into partial fractions, treating f(Dx, Dy) as a function of Dx alone, and operate each partial fraction on φ(x, y) using
[1/(Dx − m Dy)] φ(x, y) = ∫ φ(x, c − mx) dx
where c is replaced by y + mx after the integration.
Example 11.13
Find particular solutions of the following partial differential equations:
(i) 3 ∂²u/∂x² + 4 ∂²u/∂x∂y − ∂u/∂y = e^(x−3y)
(ii) 3 ∂²u/∂x² − ∂u/∂y = e^x sin(x + y)
Solution (i) In operator form the equation is
(3Dx² + 4Dx Dy − Dy) u = e^(x−3y)
so
up = [1/(3Dx² + 4Dx Dy − Dy)] e^(x−3y)
   = [1/(3 + 4(−3) − (−3))] e^(x−3y)    by (11.54)
   = −(1/6) e^(x−3y)
(ii) Here
up = [1/(3Dx² − Dy)] e^x sin(x + y)
   = e^x [1/(3(Dx + 1)² − Dy)] sin(x + y)    by (11.55)
   = e^x [1/(3Dx² + 6Dx + 3 − Dy)] sin(x + y)
   = e^x [1/(3(−1) + 6Dx + 3 − Dy)] sin(x + y)    replacing Dx² by −1
   = e^x [1/(6Dx − Dy)] sin(x + y)
   = e^x [(6Dx + Dy)/(36Dx² − Dy²)] sin(x + y)
   = e^x [7 cos(x + y)/(−35)]
   = −(1/5) e^x cos(x + y).
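Both particular solutions can be confirmed by substitution (a minimal sketch in Python with sympy, assumed tooling):

import sympy as sp

x, y = sp.symbols('x y')

# (i) particular solution of 3*u_xx + 4*u_xy - u_y = exp(x - 3*y)
u1 = -sp.exp(x - 3*y)/6
lhs1 = 3*sp.diff(u1, x, 2) + 4*sp.diff(u1, x, y) - sp.diff(u1, y)
print(sp.simplify(lhs1 - sp.exp(x - 3*y)))          # prints 0

# (ii) particular solution of 3*u_xx - u_y = exp(x)*sin(x + y)
u2 = -sp.exp(x)*sp.cos(x + y)/5
lhs2 = 3*sp.diff(u2, x, 2) - sp.diff(u2, y)
print(sp.simplify(lhs2 - sp.exp(x)*sp.sin(x + y)))  # prints 0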
Example 11.14 Find the general solution of the equation
∂²u/∂t² − c² ∂²u/∂x² = e^(−x) sin t
Solution: In operator form,
(Dt² − c² Dx²) u = e^(−x) sin t
A particular solution is
up = [1/(Dt² − c² Dx²)] e^(−x) sin t
   = e^(−x) [1/(Dt² − c²(Dx − 1)²)] sin t
   = e^(−x) [1/(−1 − c²)] sin t        replacing Dt² by −1 and Dx by 0
   = −[1/(c² + 1)] e^(−x) sin t
The complementary function is the general solution of the homogeneous wave equation
∂²u/∂t² − c² ∂²u/∂x² = 0
namely uc = φ(x − ct) + ψ(x + ct). Hence the general solution of the given equation is
u(x, t) = φ(x − ct) + ψ(x + ct) − [1/(c² + 1)] e^(−x) sin t.
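The particular solution is easily checked (a minimal sketch in Python with sympy, assumed tooling):

import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)

# Particular solution found above for u_tt - c**2*u_xx = exp(-x)*sin(t)
u_p = -sp.exp(-x)*sp.sin(t)/(c**2 + 1)

residual = sp.diff(u_p, t, 2) - c**2*sp.diff(u_p, x, 2) - sp.exp(-x)*sp.sin(t)
print(sp.simplify(residual))   # prints 0
# The complementary part phi(x - c*t) + psi(x + c*t) satisfies the homogeneous
# wave equation for any smooth phi and psi.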
11.5 Monge's Method for a Special Class of Nonlinear (Quasi-linear) Equations
Let
p = ∂u/∂x, q = ∂u/∂y, r = ∂²u/∂x², s = ∂²u/∂x∂y, t = ∂²u/∂y²
and consider a second-order equation
F(x, y, u, p, q, r, s, t) = 0   (11.59)
Monge's method consists in finding a first integral of the form
η = f(ξ)   (11.60)
where ξ and η are known functions of x, y, u, p and q and the function f is arbitrary; that is, in finding relations of the type (11.60) such that equation (11.59) can be derived from equation (11.60). The relations needed for this are obtained from (11.60) by partial differentiation.
It may be noted that not every equation of the type (11.59) has a first integral of the type (11.60). By eliminating f′(ξ) from the equations obtained by differentiating (11.60), we find that any second-order partial differential equation which possesses a first integral of the type (11.60) must be expressible in the form
R1 r + S1 s + T1 t + U1 (rt − s²) = V1   (11.63)
where R1, S1, T1, U1 and V1 are functions of x, y, u, p and q determined by ξ and η. The equation takes the form
R1 r + S1 s + T1 t = V1   (11.67)
if and only if the Jacobian ξp ηq − ξq ηp = 0 identically. Equation (11.67) is a nonlinear equation, because the coefficients R1, S1, T1, V1 are functions of p and q as well as of x, y and u; in fact it is a quasi-linear equation. We explain here the method for equations of the form
R r + S s + T t = V   (11.68)
for which a first integral of the form (11.60) exists. For any function u of x and y,
dp = r dx + s dy and dq = s dx + t dy.
Eliminating r and t from this pair of equations and equation (11.68), we obtain Monge's subsidiary equations
R dy² − S dx dy + T dx² = 0
R dp dy + T dq dx − V dx dy = 0
Example 11.15
Solve the equation
(∂u/∂y)² ∂²u/∂x² − 2 (∂u/∂x)(∂u/∂y) ∂²u/∂x∂y + (∂u/∂x)² ∂²u/∂y² = 0,
that is, q² r − 2pq s + p² t = 0.
Solution: Here R = q², S = −2pq, T = p² and V = 0, so the first subsidiary equation becomes
(p dx + q dy)² = 0   (11.73)
By the equation du = p dx + q dy and (11.73) we get du = 0, which gives the integral u = c1. The second subsidiary equation, combined with p dx = −q dy, gives q dp = p dq, which has the solution p = f(u) q with f an arbitrary function. The first-order equation p − f(u) q = 0 then has the auxiliary system
dx/1 = dy/(−f(u)) = du/0
and integrating along the characteristics (on which u is constant) gives the general solution
y + x f(u) = g(u)
where f and g are arbitrary functions.
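A concrete instance can be verified symbolically (a minimal sketch in Python with sympy, assumed tooling; the choices f(u) = u and g(u) = 0 are illustrative only and give the explicit solution u = −y/x):

import sympy as sp

x, y = sp.symbols('x y', nonzero=True)

# From y + x*f(u) = g(u) with f(u) = u, g(u) = 0 we get u = -y/x explicitly.
u = -y/x
p, q = sp.diff(u, x), sp.diff(u, y)
r, s, t = sp.diff(u, x, 2), sp.diff(u, x, y), sp.diff(u, y, 2)

print(sp.simplify(q**2*r - 2*p*q*s + p**2*t))   # prints 0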
Write down the order and degree of the partial differential equations in problems 1-5.
1. ∂u/∂x + ∂u/∂y = u²
2. ∂²u/∂x² = ∂u/∂t
3. (∂u/∂x)³ + ∂u/∂y = 0
4. ∂u/∂t + 100 ∂u/∂x = 0
5. (∂u/∂x)² + (∂u/∂y)³ = 0
∂²u/∂x² + ∂²u/∂y² = 0
x ux − y uy = 0
Examine whether cos(xy), e^(xy) and (xy)³ are solutions of this partial differential equation.
11. 4 ∂²u/∂t² − 12 ∂²u/∂x∂t + 9 ∂²u/∂x² = 0
12. 8 ∂²u/∂x² − 2 ∂²u/∂x∂y − 3 ∂²u/∂y² = 0
For what values of x and y are the following partial differential equations hyperbolic, parabolic or elliptic?
Find complete solutions of the partial differential equations in problems 20-26.
20. 2(u + xp + yq) = y p²
21. u² = pqxy
22. xp + 3yq = 2(u − x²q²)
23. pq = 1
24. p²y(1 + x²) = q x²
25. u = p² − q²
26. p²q² + x²y² = x²q²(x² − y²)
27. Discuss the method for finding a complete solution of an equation of the type
F(u, p, q) = 0
Solve the partial differential equations in problems 28-33.
28. ∂²u/∂x² + 12 ∂u/∂x + 2 = 0
29. 4 ∂²u/∂x² − 16 ∂²u/∂x∂y + 15 ∂²u/∂y² = 0
30. 3 ∂²u/∂x² + 4 ∂²u/∂x∂y − ∂u/∂y = 0
31. 3 ∂²u/∂x² − ∂u/∂y = sin(ax + by)
32. 3 ∂²u/∂x² − 2 ∂²u/∂x∂y − 5 ∂²u/∂y² = 3x + y + e^(x−y)
33. ∂²u/∂x² = ∂²u/∂y²
u u u u u u
2 2
34. - x -
2
x y x xy y x
2
2u u
2
u u 2u 2u u u 2u u 2u
35. 2 - 2 - = -
x y x q xy y 2 x x y 2 y xy
36.