Computational Notes - September 2015, v2
Lectures 1 to 6
2015/16
Contents

1 Introduction to modeling techniques
  1.1 Models classification
  1.2 Examples of derivation of few classical PDEs
  1.3 Partial differential equations: nomenclature
  1.4 Analytical solution methods for PDEs
  1.5 Numerical solution methods for PDEs
  1.6 Classical PDE, generalities and basic solution methods
  1.7 Example of solution
  1.8 General and particular solution
  1.9 The wave equation
  1.10 The diffusion equation
Important disclaimer
These notes are meant to be a reference for students following the course 3MA010 and cover only lectures 1 to 4. The notes are written in a concise way and are not meant to be a replacement for following the classes or for studying the material in the suggested books. The notes, as far as possible, reproduce material from the original sources (quoted where appropriate). This gives students the possibility to easily go back to the original, and more complete, study material.
List of sources
NB: Lectures are based on material from a variety of books and web sources. These notes collect most of this material, from the different sources, together with some examples. Whenever applicable, a detailed reference to the original source for exercises/paragraphs/chapters is provided. Below is a short list of the relevant sources:
List of books
Essential Mathematical Methods for the Physical Sciences
by K. F. Riley and M. P. Hobson
University of Cambridge 2011
ISBN: 9780521761147
Several lectures follow closely Chapters 10 and 11 of the book.
Partial Differential Equations
by S.W. Rienstra, J.H.M. ten Thije Boonkkamp and R.M.M. Mattheij
Society for Industrial & Applied Mathematics, U.S. 1987
ISBN-10: 0898715946
ISBN-13: 978-0898715941
More advanced book for both analytical and numerical methods for solution of PDEs.
Finite Difference and Spectral Methods for Ordinary and Partial Differential Equations
Lloyd N. Trefethen
Available online: http://people.maths.ox.ac.uk/trefethen/pdetext.html
Finite Difference Schemes and Partial Differential Equations
John Strikwerda
Society for Industrial and Applied Mathematics 2007
ISBN-10: 089871639X
ISBN-13: 978-0898716399
Good reference on numerical analysis and numerical methods to solve PDEs.
Methods of Mathematical Physics
R. Courant, D. Hilbert
Wiley-VCH 1989
ISBN-10: 0471504475
Introduction to Partial Differential Equations with MATLAB
J.M. Cooper
Birkhauser Basel 1998
ISBN: 978-0-8176-3967-9
Lecture 1
1.1 Models classification
    1.1.1 Systematic models
    1.1.2 Constructing models
    1.1.3 Canonical models
1.2 Examples of derivation of few classical PDEs
1.3 Partial differential equations: nomenclature
1.4 Analytical solution methods for PDEs
1.5 Numerical solution methods for PDEs
1.6 Classical PDE, generalities and basic solution methods
1.7 Example of solution
1.8 General and particular solution
    1.8.1 First order equation
    1.8.2 First order inhomogeneous equations
    1.8.3 Second order equation
1.9 The wave equation
1.10 The diffusion equation
1.1 Models classification

1.1.1 Systematic models
Systematic models are also referred to as asymptotic or reducing models. The starting point is an over-complete model that is considered appropriate to describe the problem, but that is over-complete because it may include effects that are negligibly small or uninteresting, thus making the mathematical problem unnecessarily complex. By using knowledge about the system, one can simplify the complete model towards a model that is easier to handle and to interpret, while retaining the interesting effects.
Example 1.1: From a real to an ideal pendulum

A classical example of a systematic model is the motion of an ideal pendulum when compared to the motion of a real one. A pendulum with a point mass suspended by a weightless cord of length L, see Figure 1.1, is described by the following non-linear equation of motion:
$$\frac{d^2\theta}{dt^2} = -\frac{g}{L}\sin(\theta) \qquad (1.1)$$
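The reduction to the linearized small-angle model can be checked numerically. The following is a minimal Python sketch (not part of the original notes, which use MATLAB scripts); the values of g, L and the initial angle are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0           # gravity [m/s^2] and cord length [m], illustrative values

def nonlinear(t, y):        # y = [theta, dtheta/dt], full Eq. (1.1)
    return [y[1], -(g / L) * np.sin(y[0])]

def linearized(t, y):       # small-angle reduction sin(theta) ~ theta
    return [y[1], -(g / L) * y[0]]

theta0 = 0.5                # initial angle [rad]
t_eval = np.linspace(0.0, 10.0, 500)
sol_full = solve_ivp(nonlinear,  (0, 10), [theta0, 0.0], t_eval=t_eval)
sol_lin  = solve_ivp(linearized, (0, 10), [theta0, 0.0], t_eval=t_eval)

# The two trajectories drift apart as theta0 grows: the reduced model is accurate
# only while the neglected terms of sin(theta) remain small.
print("max |theta_full - theta_lin| =", np.abs(sol_full.y[0] - sol_lin.y[0]).max())
```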
1.1.2 Constructing models
Also referred to as building or lumped-parameter models. The problem description is built step by step from the bottom up, from simpler to more complex, by adding effects and elements until the required description or accuracy is achieved. Examples of such models are given in Section 1.2.
1.1.3 Canonical models

Also known as characteristic or quintessential models. An existing model is reduced in order to describe only a certain aspect of the problem, a typical case being the Burgers equation, which is a reduced version of the full Navier-Stokes equations.
Example 1.2: Burgers equation

This section closely follows: R.M.M. Mattheij et al., page 135, Example 7.5.

The Navier-Stokes equations for incompressible viscous flow are given by
$$\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^2\mathbf{v}$$
Retaining only the one-dimensional advection and viscous terms gives the Burgers equation which, via the Cole-Hopf change of variables, is reduced to a linear equation related to the heat equation:
$$\frac{\partial \phi}{\partial t} - \nu\,\frac{\partial^2 \phi}{\partial x^2} = C(t)\,\phi$$
This latter kind of equation is well understood and allows many exact solutions.
1.2 Examples of derivation of few classical PDEs
The rate of change of the heat content Q of a volume V bounded by a surface S is
$$\frac{dQ}{dt} = \int_S (k\nabla T)\cdot\hat{\mathbf n}\,dS = \int_V \nabla\cdot(k\nabla T)\,dV$$
where $\hat{\mathbf n}$ is the outward-pointing unit normal to S. The passage from an integral over a surface to an integral over a volume is based on the Gauss divergence theorem. The rate of change of Q can also be expressed as
$$\frac{dQ}{dt} = \int_V \rho\,c_p\,\frac{\partial T}{\partial t}\,dV$$
where we used the Leibniz rule and the symbols $c_p$, $\rho$ represent specific heat and density, respectively. Equating the above results and assuming a material with constant properties, we obtain the three-dimensional heat diffusion equation
$$\kappa\,\nabla^2 T = \frac{\partial T}{\partial t}, \qquad \kappa = \frac{k}{\rho c_p} \qquad (1.3)$$
Figure 1.2: Chain of coupled spring elements.

The typical solutions of this equation are waves propagating with a wave velocity given by the square root of the ratio of stiffness to density. More precisely, this equation describes longitudinal waves, i.e. oscillations in the direction of propagation. This is similar to what one obtains for sound waves propagating in a compressible fluid such as air; in that case the air stiffness corresponds to $\kappa = \gamma p$, where $\gamma \approx 1.4$ is the adiabatic index (ratio of specific heats) and p the atmospheric pressure.
$$F = T\sin(\theta_2) - T\sin(\theta_1) \qquad (1.7)$$
Assuming that both angles $\theta_1$ and $\theta_2$ are small, one can make the approximation $\sin\theta \approx \tan\theta$. Since at any position the slope is such that $\tan\theta = \partial u/\partial x$, one can write
$$F = T\left[\frac{\partial u}{\partial x}(x+\Delta x) - \frac{\partial u}{\partial x}(x)\right] \approx T\,\frac{\partial^2 u(x,t)}{\partial x^2}\,\Delta x \qquad (1.8)$$
This upward force must be equated, according to Newton's law, to the product of the vertical acceleration times the mass of the small string element:
$$\rho\,\Delta x\,\frac{\partial^2 u(x,t)}{\partial t^2} = T\,\frac{\partial^2 u(x,t)}{\partial x^2}\,\Delta x \qquad (1.9)$$
and thus
$$\frac{\partial^2 u(x,t)}{\partial t^2} = c^2\,\frac{\partial^2 u(x,t)}{\partial x^2} \qquad (1.10)$$
where $c^2 = T/\rho$ is the square of the speed of propagation of waves.
Assuming that the number of individuals is very large, we can take the limit N to infinity or, equivalently, h to 0, and thus p becomes a continuous function of space:
$$p(x, y) = \frac{1}{4}\Big(p(x+h, y) + p(x-h, y) + p(x, y+h) + p(x, y-h)\Big) \qquad (1.12)$$
Expanding each term in a Taylor series and keeping the leading contributions, this can be rewritten in the following form:
$$\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2} = \nabla^2 p = 0 \qquad (1.13)-(1.14)$$
This is the celebrated Laplace equation that typically emerges to describe equilibrium phenomena where information is exchanged in all directions and discontinuities and gradients get smoothed out. A very famous case where the Laplace equation emerges is the distribution of temperature in heat-conduction problems once a stationary equilibrium is achieved.
[Figure: the discrete grid of neighbouring points x_{j-1}, x_j, x_{j+1} and y_{j-1}, y_j, y_{j+1} around (x_j, y_j) used in the random-walk derivation.]
1.3 Partial differential equations: nomenclature
The difference between partial differential equations (PDEs) and ordinary differential equations (ODEs) is that in a PDE more than one independent variable is present, while in an ODE only one independent variable (and derivatives with respect to it) enters the equation (e.g. time). While ODEs are special cases of PDEs, their general behaviour is quite different. Here some basic concepts related to PDEs are presented.
Order: the order of the highest derivative appearing in the equation.
Linear
Semi-linear: the coefficients of the highest order derivatives are functions only of the independent variables.
Quasi-linear: the equation is linear in its highest derivatives; when the equation is a linear combination of derivatives but the coefficients of the highest derivative, say of order n, depend at most upon derivatives of order (n-1), then the equation is called quasi-linear.
Non-linear
Homogeneous
BVP (boundary value problem)
IVP (initial value problem)
Dirichlet BC: the value of u is specified at each point of the boundary.
Neumann BC: the value of ∂u/∂n, i.e. the normal derivative of u, is specified at each point of the boundary.
Cauchy BC
Robin BC: a linear combination of the value of the function u and of its normal derivative ∂u/∂n is specified on the boundary.
Mixed BC: one type of boundary data is given on one part of the boundary while another type of boundary data is given on a different part of the boundary.
Periodic BC
Domain of influence
Domain of dependence
Evolutionary problems
Stationary problems
Examples:

1.
$$\frac{\partial u}{\partial t} + \frac{1}{2}\frac{\partial u^2}{\partial x} = 0$$
It is the celebrated inviscid Burgers equation in its conservation form, which is a first-order quasi-linear equation since it can also be written in the advection form
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0$$
where clearly the coefficient b(x, y, u) = u.

2.
$$|\nabla u| = c$$
It is the well-known eikonal equation, which is a non-linear first-order equation. Its non-linearity can be clearly deduced if it is written in the following form:
$$\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial u}{\partial z}\right)^2 = c^2$$
For a problem posed in G × [0, ∞), typical boundary conditions on ∂G × [0, ∞) are:

1. Dirichlet BC:
$$u = 0 \;\text{ on } \partial G \times [0, \infty) \qquad (1.15)$$
2. Neumann BC:
$$\frac{\partial u}{\partial n} = 0 \;\text{ on } \partial G \times [0, \infty) \qquad (1.16)$$
3. Robin BC:
$$\frac{\partial u}{\partial n} + h\,u = 0 \;\text{ on } \partial G \times [0, \infty) \qquad (1.17)$$

For the one-dimensional wave equation
$$\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2} \qquad (1.18)$$
we define the pure IVP, specifying the initial displacement f(x) and the initial velocity g(x) at the initial instant of time t = 0, as
$$u(x, 0) = f(x), \qquad u_t(x, 0) = g(x) \qquad \text{for } x \in \mathbb{R}$$
1.4 Analytical solution methods for PDEs
Separation of variables: method in which the final solution is found as the product of separate functions of the independent variables.
Method of characteristics: a method that in some special cases allows one to find characteristic curves on which the PDE reduces to a system of ODEs.
Integral transform: integral transformations can turn an equation into a simpler one, for example into a separable PDE. A classical example is the use of Fourier analysis, which diagonalizes the heat equation using the eigenfunctions represented by sinusoidal waves.
Change of variables: sometimes PDEs can be transformed into simpler ones by an appropriate change of variables (e.g. the Cole-Hopf transformation that transforms the Burgers equation into the heat equation).
Fundamental solution: non-homogeneous equations can be solved by first finding the fundamental solution (the solution to a point source, or Dirac delta) and then taking the convolution with the boundary condition to get the solution.
Superposition principle: particularly useful in the case of linear equations; it makes use of the fact that the sum of two solutions is still a solution.
Methods for non-linear equations: no general solution methods exist for non-linear equations. Numerical approximation plays an important role here. Many different methods can be helpful in specific cases.
1.7 Example of solution
The simplest equations can be integrated directly. For a function of one variable,
$$\frac{df(x)}{dx} = 0 \qquad (1.19)$$
$$f(x) = \text{const} \qquad (1.20)$$
while for a function of two variables
$$\frac{\partial f(x, y)}{\partial x} = 0 \qquad (1.21)$$
$$f = f(y) \qquad (1.22)$$
i.e. the "integration constant" is an arbitrary function of the other variable. Similarly, the equation
$$\frac{\partial^2 u}{\partial x\,\partial y} = 0 \qquad (1.23)$$
has the general solution
$$u = w(x) + v(y) \qquad (1.24)$$
with w and v arbitrary functions; if a right-hand side is present, the solution is obtained by direct integration from reference points $x_0$ and $y_0$ (equations (1.25)-(1.27)).

Consider now the equation $u_x - u_y = 0$. Introducing the new variables
$$\xi = x + y, \qquad \eta = x - y \qquad (1.28)$$
$$u(x, y) = \phi(\xi, \eta) \qquad (1.29)$$
the equation becomes
$$2\,\frac{\partial \phi}{\partial \eta} = 0 \qquad (1.30)$$
so that
$$\phi = w(\xi) \qquad (1.31)$$
$$u = w(x + y) \qquad (1.32)$$
Similarly, if $\alpha$ and $\beta$ are constants, the general solution of the differential equation
$$\alpha\,u_x + \beta\,u_y = 0 \qquad (1.33)$$
is of the form
$$u = w(\beta x - \alpha y) \qquad (1.34)$$
According to elementary theorems of differential calculus, the PDE
$$u_x\,g_y - u_y\,g_x = 0 \qquad (1.35)$$
where g(x, y) is any given function of x and y, states that the Jacobian ∂(u, g)/∂(x, y) of u and g with respect to x and y vanishes. This means that u depends on g, i.e., that
$$u = w[g(x, y)],$$
where w is an arbitrary function of the quantity g.
1.8 General and particular solution

1.8.1 First order equation
The most general first-order linear PDE in two independent variables is
$$A(x, y)\,\frac{\partial u}{\partial x} + B(x, y)\,\frac{\partial u}{\partial y} + C(x, y)\,u = R(x, y) \qquad (1.36)$$
where A, B, C and R are given functions. For A = 0 or B = 0 the equation is just an ODE in which the arbitrary integration constant is now a function of the other independent variable, either x or y.
Example 1.11: First order PDE: general solution

This section closely follows: Riley & Hobson, page 394.

As an example, consider the first-order equation for u(x, y)
$$x\,\frac{\partial u}{\partial x} + 3u = x^2$$
equivalent to
$$\frac{\partial u}{\partial x} + \frac{3u}{x} = x.$$
Multiplying through by the integrating factor $x^3$ we find
$$\frac{\partial}{\partial x}\left(x^3 u\right) = x^4$$
which can be straightforwardly integrated with respect to x:
$$x^3 u = \frac{x^5}{5} + f(y)$$
and finally
$$u = \frac{x^2}{5} + \frac{f(y)}{x^3}$$
where now the integration constant is, actually, an arbitrary function of y.
Let us assume for the time being that C = R = 0 and let us search for a solution in the form u(x, y) = f(p), where p is an unknown function of x and y:
$$\frac{\partial u}{\partial x} = \frac{df(p)}{dp}\,\frac{\partial p}{\partial x} \qquad (1.37)$$
$$\frac{\partial u}{\partial y} = \frac{df(p)}{dp}\,\frac{\partial p}{\partial y} \qquad (1.38)$$
Substituting into the PDE (with C = R = 0), the factor df(p)/dp is common to both terms:
$$\left[A(x, y)\,\frac{\partial p}{\partial x} + B(x, y)\,\frac{\partial p}{\partial y}\right]\frac{df(p)}{dp} = 0 \qquad (1.39)$$
so it is sufficient that
$$A(x, y)\,\frac{\partial p}{\partial x} + B(x, y)\,\frac{\partial p}{\partial y} = 0 \qquad (1.40)$$
We now look for the conditions under which f(p) remains constant as x and y vary; this is equivalent to requiring that p itself remains constant. For this condition to be fulfilled, we need x and y to vary in such a way that
$$dp = \frac{\partial p}{\partial x}\,dx + \frac{\partial p}{\partial y}\,dy = 0 \qquad (1.41)$$
The forms of (1.40) and (1.41) become exactly the same if we require that
$$\frac{dx}{A(x, y)} = \frac{dy}{B(x, y)} \qquad (1.42)$$
Example 1.12: Solution of homogeneous equation (1)

Given the equation
$$x\,\frac{\partial u}{\partial x} - 2y\,\frac{\partial u}{\partial y} = 0$$
find the solution that takes the value (i) 2y + 1 on the line x = 1 and, then, a solution that has the value (ii) 4 at the point (1, 1).

We seek a solution of the form u(x, y) = f(p), which will be constant along lines in the (x, y) plane that satisfy the relation
$$\frac{dx}{x} = \frac{dy}{-2y},$$
see Eq. (1.42) where, for this equation, A = x and B = -2y. We are now ready to integrate this relation:
$$\log x = c - \frac{1}{2}\log y$$
which, finally, gives $x = c_0\,y^{-1/2}$. If we identify the constant of integration $c_0$ with $p^{1/2}$, we have that $p = x^2 y$. The general solution of the PDE is therefore
$$u(x, y) = f(x^2 y),$$
with f an arbitrary function.
For the boundary condition (i), the particular solution required is given by
$$u(x, y) = 2(x^2 y) + 1$$
while for the boundary condition (ii) some acceptable solutions are
$$u(x, y) = x^2 y + 3, \qquad u(x, y) = 4x^2 y, \qquad u(x, y) = 4.$$
All these three solutions are particular examples of the general solution
$$u(x, y) = x^2 y + 3 + g(x^2 y)$$
where g(p) is an arbitrary function subject to the only condition that g(1) = 0.
So far we have considered the case where the PDE contains no term proportional to u. In the case that C(x, y) ≠ 0, the procedure needs to be adapted and one seeks a solution in the form u(x, y) = h(x, y) f(p).
Example 1.13: Solution of homogeneous equation (2)

This section closely follows: Riley & Hobson, page 396.

Consider the equation
$$x\,\frac{\partial u}{\partial x} + 2\,\frac{\partial u}{\partial y} - 2u = 0. \qquad (1.43)$$
To start with, we look for solutions in the form u(x, y) = h(x, y) f(p). We can write the following equations:
$$\frac{\partial u}{\partial x} = \frac{\partial h}{\partial x}\,f(p) + h\,\frac{df(p)}{dp}\,\frac{\partial p}{\partial x} \qquad (1.44)$$
$$\frac{\partial u}{\partial y} = \frac{\partial h}{\partial y}\,f(p) + h\,\frac{df(p)}{dp}\,\frac{\partial p}{\partial y} \qquad (1.45)$$
Substituting into (1.43) and grouping terms gives
$$\left\{x\,\frac{\partial h}{\partial x} + 2\,\frac{\partial h}{\partial y} - 2h\right\}f(p) + h\left\{x\,\frac{\partial p}{\partial x} + 2\,\frac{\partial p}{\partial y}\right\}\frac{df(p)}{dp} = 0 \qquad (1.46)$$
From inspection of Eq. (1.46), we see that the first term in curly brackets vanishes for any h(x, y) that solves the original PDE (it has the same expression as the original PDE), while the second term in curly brackets can be treated as previously done:
$$x\,\frac{\partial p}{\partial x} + 2\,\frac{\partial p}{\partial y} = 0 \;\Rightarrow\; x = c\,e^{y/2} \;\Rightarrow\; p = x\,e^{-y/2} \qquad (1.47)$$
The general solution is then
$$u(x, y) = h(x, y)\,f\!\left(x\,e^{-y/2}\right) \qquad (1.48)$$
where f(p) is an arbitrary function and h(x, y) is any solution to Eq. (1.43).
1.8.2 First order inhomogeneous equations

An equation is said to be homogeneous if, given a solution u(x, y), then λu(x, y) is also a solution for any λ. The problem is said to be homogeneous if the boundary conditions satisfied by u(x, y) are also satisfied by λu(x, y).
The interest in homogeneity vs. inhomogeneity of a PDE is that, similarly to ODEs, the general solution of the inhomogeneous problem can be written as the sum of any particular solution of the problem and the general solution of the corresponding homogeneous problem.
As an example, consider the equation
$$y\,\frac{\partial u}{\partial x} - x\,\frac{\partial u}{\partial y} + u = f(x, y) \qquad (1.49)$$
subject to the boundary condition u(0, y) = g(y). The general solution can be written as
$$u(x, y) = v(x, y) + w(x, y) \qquad (1.50)$$
where v(x, y) is any solution of the inhomogeneous equation such that v(0, y) = g(y) and w(x, y) is the general solution of the homogeneous equation:
$$y\,\frac{\partial w}{\partial x} - x\,\frac{\partial w}{\partial y} + w = 0 \qquad (1.51)$$
Consider, as a concrete case, the equation
$$y\,\frac{\partial u}{\partial x} - x\,\frac{\partial u}{\partial y} = 3x \qquad (1.52)$$
To start with, we look for solutions of the corresponding homogeneous equation in the form u(x, y) = f(p). Following the procedure exposed above, we see that u(x, y) will be constant along lines satisfying the relation
$$\frac{dx}{y} = \frac{dy}{-x}$$
and after integration we get
$$\frac{x^2}{2} + \frac{y^2}{2} = c$$
from which, imposing c = p/2, we obtain the general solution of the homogeneous equation in the form $u(x, y) = f(x^2 + y^2)$, where f is an arbitrary function that will be determined once appropriate boundary conditions are imposed.
Proceeding further, we seek a particular integral of Eq. (1.52). For this simple case, we note that a particular integral is u(x, y) = -3y, and so the general solution of (1.52) will be the sum of the two contributions
$$u(x, y) = f(x^2 + y^2) - 3y$$
To determine the arbitrary function f, we apply the boundary condition $u(x, 0) = x^2$, which requires that $u(x, 0) = f(x^2) = x^2$, that is f(z) = z, and so the particular solution in this case is
$$u(x, y) = x^2 + y^2 - 3y$$
If the boundary condition is a one-point boundary condition, say u(1, 0) = f(1) = 2, one possibility is f(z) = 2z and so we obtain
$$u(x, y) = 2x^2 + 2y^2 - 3y + g(x^2 + y^2)$$
where g is any arbitrary function for which g(1) = 0. Alternatively, a simpler choice is f(z) = 2, which leads to
$$u(x, y) = 2 - 3y + h(x^2 + y^2)$$
where, again, h(1) = 0.
1.8.3 Second order equation
The most general second-order linear PDE with constant coefficients in two independent variables is
$$A\,\frac{\partial^2 u}{\partial x^2} + B\,\frac{\partial^2 u}{\partial x\,\partial y} + C\,\frac{\partial^2 u}{\partial y^2} + D\,\frac{\partial u}{\partial x} + E\,\frac{\partial u}{\partial y} + F\,u = R(x, y) \qquad (1.53)$$
The equation is classified according to the sign of the discriminant:
hyperbolic if $B^2 > 4AC$,
parabolic if $B^2 = 4AC$,
elliptic if $B^2 < 4AC$.
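The discriminant test can be expressed as a tiny helper. The following Python sketch is not part of the original notes; the sample coefficient triples correspond to the wave, Laplace and diffusion-type equations discussed in this chapter.

```python
def classify(A, B, C):
    """Classify the second-order PDE (1.53) from its principal-part coefficients."""
    disc = B**2 - 4 * A * C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# 1D wave equation (A=1, B=0, C=-1/c^2), Laplace (1, 0, 1), diffusion-type (0, 0, 1):
print(classify(1, 0, -1), classify(1, 0, 1), classify(0, 0, 1))
```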
Clearly, if A, B and C are functions of x and y instead of being constants, then the nature of the PDE may be different in different parts of the domain.
Here we illustrate the nature of the PDE by considering a special case, namely homogeneous equations, R(x, y) = 0, for which the coefficients A, ..., F are not functions of space but are constants.
We focus in particular on the special case D = E = F = 0, so that only second-order derivatives remain. This is for example the case of the one-dimensional wave equation (1.18) and of the two-dimensional Laplace equation (1.14), but not of the diffusion equation (1.3), as the latter contains first-order derivatives.
We seek a solution in the form
$$u(x, y) = f(p) \qquad (1.54)$$
where f(p) is a function such that we hope to be able to obtain a common factor $d^2f(p)/dp^2$:
$$\frac{\partial u}{\partial x} = \frac{df(p)}{dp}\,\frac{\partial p}{\partial x} \qquad (1.55)$$
Clearly, one will not obtain a single factor involving a second-order derivative of f(p) unless p is a linear function of x and y. Assuming a solution of the form u(x, y) = f(ax + by) and evaluating the partial derivatives:
$$\frac{\partial u}{\partial x} = a\,\frac{df(p)}{dp} \qquad (1.56)$$
$$\frac{\partial u}{\partial y} = b\,\frac{df(p)}{dp} \qquad (1.57)$$
$$\frac{\partial^2 u}{\partial x^2} = a^2\,\frac{d^2 f(p)}{dp^2} \qquad (1.58)$$
$$\frac{\partial^2 u}{\partial x\,\partial y} = ab\,\frac{d^2 f(p)}{dp^2} \qquad (1.59)$$
$$\frac{\partial^2 u}{\partial y^2} = b^2\,\frac{d^2 f(p)}{dp^2} \qquad (1.60)$$
Substituting into the PDE (with D = E = F = R = 0) gives
$$\left(A a^2 + B a b + C b^2\right)\frac{d^2 f(p)}{dp^2} = 0 \qquad (1.61)$$
This is precisely the form we were looking for. If the term inside the parentheses is zero, then the equation is satisfied for any function f(p). The condition is thus:
$$A a^2 + B a b + C b^2 = 0 \qquad (1.62)$$
and from this second-order equation one obtains the following two solutions for the ratio b/a:
$$\frac{b}{a} = \frac{1}{2C}\left[-B \pm \left(B^2 - 4AC\right)^{1/2}\right] \qquad (1.63)$$
If we call $\lambda_1$ and $\lambda_2$ the two ratios b/a, solutions of the second-order equation, then any function of the two variables $p_1 = x + \lambda_1 y$ and $p_2 = x + \lambda_2 y$ is a solution of the original PDE. Thus, in general, the solution may be written as
$$u(x, y) = f(x + \lambda_1 y) + g(x + \lambda_2 y) \qquad (1.64)$$
where f and g are arbitrary functions.
The solution of the equation $d^2 f(p)/dp^2 = 0$ provides only the trivial solution u(x, y) = kx + ly + m, for which all second derivatives are identically zero.
Example 1.15: General solution of the one-dimensional wave equation

This section closely follows: Riley & Hobson, page 401.

As an example we look at the solution of the following wave equation:
$$\frac{\partial^2 u}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0 \qquad (1.65)$$
This equation is of the form (1.53) with A = 1, B = 0 and $C = -1/c^2$. Consequently $\lambda_{1,2}$ are solutions of
$$1 - \frac{\lambda_{1,2}^2}{c^2} = 0 \qquad (1.66)$$
namely $\lambda_1 = c$ and $\lambda_2 = -c$. This means that the generic solution can be expressed as
$$u(x, t) = f(x - ct) + g(x + ct) \qquad (1.67)$$
where f and g are two arbitrary functions corresponding to travelling solutions in the positive and negative x direction with speed c.
Consider now the two-dimensional Laplace equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \qquad (1.68)$$
and again we look for a solution in the form of a function f(p) where $p = x + \lambda y$ and $\lambda$ satisfies:
$$1 + \lambda^2 = 0 \qquad (1.69)$$
This condition requires that $\lambda = \pm i$, thus $p = x \pm iy$, and the general solution is:
$$u(x, y) = f(x + iy) + g(x - iy) \qquad (1.70)$$
From the two examples above it is clear that the nature of the solution, i.e. the appropriate combination of x and y, depends upon whether $B^2 > 4AC$ or $B^2 < 4AC$. This is the criterion that distinguishes whether the PDE is hyperbolic or elliptic.
As a general result, hyperbolic and elliptic equations, given the condition that the constants A, B and C are real, have solutions with arguments respectively of the form $x + \lambda y$ or $x + i\mu y$, where $\lambda$ and $\mu$ are real constants.
The case of parabolic equations, i.e. $B^2 = 4AC$, is special because $\lambda_1 = \lambda_2$ and thus only one appropriate combination of x and y is possible:
$$u(x, y) = f\!\left(x - \frac{B}{2C}\,y\right) \qquad (1.71)$$
In order to find the second part of the general solution one may try a solution of the form:
$$u(x, y) = h(x, y)\,g\!\left(x - \frac{B}{2C}\,y\right) \qquad (1.72)$$
Substituting this into the equation and using the fact that $A = B^2/4C$, the terms involving g cancel and one is left with
$$\left(A\,\frac{\partial^2 h}{\partial x^2} + B\,\frac{\partial^2 h}{\partial x\,\partial y} + C\,\frac{\partial^2 h}{\partial y^2}\right) g = 0 \qquad (1.73)$$
Thus it is required that h be any solution of the original PDE. As any solution will do, one can take the simplest one, h(x, y) = x; this allows us to construct the general solution of the parabolic PDE as:
$$u(x, y) = f\!\left(x - \frac{B}{2C}\,y\right) + x\,g\!\left(x - \frac{B}{2C}\,y\right) \qquad (1.74)$$
Now the boundary condition u(0, y) = 0 implies f(p) = 0, and the other boundary condition, $u(x, 1) = x^2$, gives:
$$x\,g(x - 1) = x^2 \qquad (1.78)$$
so that g(p) = p + 1, with a particular solution given by
$$u(x, y) = x\,(p + 1) = x\,(x - y + 1) \qquad (1.79)$$
As the boundary conditions are prescribed along two boundaries, x = 0 and y = 1, the solution is completely determined and contains no arbitrary function.
Here we provide an alternative derivation of the general solutions (1.64) and (1.74) by changing variables in the original PDE before solving it. At that point the solution becomes very easy to find, but of course this is only possible thanks to the insight that we already have on the expected solutions.
Starting from Eq. (1.53), we change to the new variables:
$$\zeta = x + \lambda_1 y \qquad (1.80)$$
$$\eta = x + \lambda_2 y \qquad (1.81)$$
The partial derivatives transform as
$$\frac{\partial}{\partial x} = \frac{\partial}{\partial \zeta} + \frac{\partial}{\partial \eta} \qquad (1.82)$$
$$\frac{\partial}{\partial y} = \lambda_1\,\frac{\partial}{\partial \zeta} + \lambda_2\,\frac{\partial}{\partial \eta} \qquad (1.83)$$
so that the equation
$$A\,\frac{\partial^2 u}{\partial x^2} + B\,\frac{\partial^2 u}{\partial x\,\partial y} + C\,\frac{\partial^2 u}{\partial y^2} = 0 \qquad (1.84)$$
becomes
$$\left[2A + B(\lambda_1 + \lambda_2) + 2C\lambda_1\lambda_2\right]\frac{\partial^2 u}{\partial \zeta\,\partial \eta} = 0 \qquad (1.85)-(1.86)$$
Since for hyperbolic and elliptic equations the bracket is non-zero, this reduces to
$$\frac{\partial^2 u}{\partial \zeta\,\partial \eta} = 0 \qquad (1.87)$$
whose general solution is
$$u(\zeta, \eta) = f(\zeta) + g(\eta) \qquad (1.88)$$
i.e., back in the original variables, $u(x, y) = f(x + \lambda_1 y) + g(x + \lambda_2 y)$, in agreement with (1.64). (1.89)
If the equation is parabolic (i.e. $B^2 = 4AC$) we use the alternative set of variables:
$$\zeta = x + \lambda y, \qquad \eta = x \qquad (1.90)$$
In these variables the equation reduces to
$$\frac{\partial^2 u}{\partial \eta^2} = 0 \qquad (1.91)-(1.92)$$
which, integrated twice with respect to $\eta$, gives
$$u(x, y) = x\,g(x + \lambda y) + f(x + \lambda y) \qquad (1.93)$$
1.9 The wave equation
The general solution of the one-dimensional wave equation is
$$u(x, t) = f(x - ct) + g(x + ct) \qquad (1.94)$$
where f and g are arbitrary functions that represent propagation in the positive and negative directions. In the case where f(p) = g(p) this may result in a wave that does not progress, i.e. a standing wave. Supposing
$$f(p) = g(p) = A\cos(kp + \phi) \qquad (1.95)$$
then the solution can be written as:
$$u(x, t) = A\left[\cos(kx - kct + \phi) + \cos(kx + kct + \phi)\right] = 2A\cos(kct)\cos(kx + \phi) \qquad (1.96)$$
and thus the shape of the wave does not move, but its amplitude oscillates in time with frequency kc. At any point that satisfies $\cos(kx + \phi) = 0$ there is no displacement; such points are called nodes.
So far the discussion has considered the wave equation without any boundary condition. How does one impose boundary conditions on the wave equation? This problem is usually treated by means of the method of separation of variables (see Chapter 5). Here we consider d'Alembert's solution u(x, t) of the wave equation with the following initial conditions:

initial displacement:
$$u(x, 0) = \phi(x) \qquad (1.97)$$
initial velocity:
$$\frac{\partial u(x, 0)}{\partial t} = \psi(x) \qquad (1.98)$$
We need to find the functions f and g that are consistent with the assigned values at t = 0. This implies that
$$\phi(x) = u(x, 0) = f(x - 0) + g(x + 0) \qquad (1.99)$$
$$\psi(x) = \frac{\partial u(x, 0)}{\partial t} = -c f'(x - 0) + c g'(x + 0) \qquad (1.100)$$
Integrating the second relation with respect to x gives
$$-f(p) + g(p) = \frac{1}{c}\int_{p_0}^{p} \psi(q)\,dq + K \qquad (1.101)-(1.102)$$
for some integration extreme $p_0$ and with a suitable constant K (depending on $p_0$). Putting this together with $\phi = f(x) + g(x)$ gives us:
$$f(p) = \frac{\phi(p)}{2} - \frac{1}{2c}\int_{p_0}^{p}\psi(q)\,dq - \frac{K}{2} \qquad (1.103)$$
$$g(p) = \frac{\phi(p)}{2} + \frac{1}{2c}\int_{p_0}^{p}\psi(q)\,dq + \frac{K}{2} \qquad (1.104)$$
Adding these two last equations, the first evaluated with p = x - ct and the second with p = x + ct, we obtain the solution to the problem in the form:
$$u(x, t) = \frac{1}{2}\left[\phi(x - ct) + \phi(x + ct)\right] + \frac{1}{2c}\int_{x - ct}^{x + ct}\psi(q)\,dq \qquad (1.105)$$
What is the physical interpretation of what we have found? The solution is composed of three terms: the first two represent the influence of the initial displacement that started at position x - ct or x + ct and travelled rightwards or leftwards, arriving at x at time t. The third term can be seen as the accumulated displacement at position x of all parts of the initial velocity condition that could reach x within a time t, travelling either backward or forward.
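Formula (1.105) is easy to evaluate numerically. The following Python sketch (not part of the original notes) uses a Gaussian initial displacement released from rest as an illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

def dalembert(x, t, phi, psi, c=1.0):
    """Evaluate Eq. (1.105): u = [phi(x-ct)+phi(x+ct)]/2 + (1/2c) * integral of psi."""
    integral, _ = quad(psi, x - c * t, x + c * t)
    return 0.5 * (phi(x - c * t) + phi(x + c * t)) + integral / (2.0 * c)

phi = lambda x: np.exp(-x**2)    # initial displacement
psi = lambda x: 0.0              # zero initial velocity
for t in (0.0, 1.0, 2.0):
    # the initial bump splits into two half-amplitude pulses travelling left and right
    print(t, dalembert(0.0, t, phi, psi))
```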
Extension to 3D. The extension to the 3D wave equation of a solution similar to the one just discussed is rather straightforward. The 3D wave equation reads:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0 \qquad (1.106)$$
and, similarly to the 1D case, we can search for solutions that are functions of a linear combination of all four variables:
$$p = lx + my + nz + \mu t \qquad (1.107)$$
A solution will be acceptable under the condition:
$$\left(l^2 + m^2 + n^2 - \frac{\mu^2}{c^2}\right)\frac{d^2 f(p)}{dp^2} = 0 \qquad (1.108)$$
i.e. $\mu^2 = c^2(l^2 + m^2 + n^2)$; choosing the normalization
$$l^2 + m^2 + n^2 = 1 \qquad (1.109)-(1.110)$$
gives $\mu = \pm c$. This condition is equivalent to saying that (l, m, n) are the Cartesian components of a unit vector $\hat{\mathbf n}$ pointing along the direction of propagation of the wave. The argument p can then be written as $p = \hat{\mathbf n}\cdot\mathbf{r} - ct$, and the general solution of the wave equation in 3D is:
$$u(x, y, z, t) = u(\mathbf{r}, t) = f(\hat{\mathbf n}\cdot\mathbf{r} - ct) + g(\hat{\mathbf n}\cdot\mathbf{r} + ct) \qquad (1.111)$$
1.10 The diffusion equation
The one-dimensional diffusion equation reads
$$\kappa\,\frac{\partial^2 u(x, t)}{\partial x^2} = \frac{\partial u}{\partial t} \qquad (1.112)$$
where the constant $\kappa$ has dimensions of $[\text{length}]^2[\text{time}]^{-1}$. The actual value of the constant is a property of the material and of the nature of the process (e.g. diffusion of the concentration of a solute, heat flux, etc.). The methods seen so far cannot be applied to this equation, as it is differentiated a different number of times with respect to x and to t. It is easy to see that any attempt to search for a solution in the form u(x, t) = f(p), with p = ax + bt, will not lead to a form where the function f can be cancelled out.
A simple way of solving this equation is to set both members of the equation equal to a constant, $\lambda$:
$$\kappa\,\frac{\partial^2 u}{\partial x^2} = \lambda, \qquad \frac{\partial u}{\partial t} = \lambda \qquad (1.113)-(1.114)$$
Integrating the two relations separately gives
$$u(x, t) = \frac{\lambda}{2\kappa}\,x^2 + x\,g(t) + h(t), \qquad u(x, t) = \lambda t + m(x)$$
These solutions are compatible if g(t) = g is a constant, $h(t) = \lambda t$ and $m(x) = (\lambda/2\kappa)x^2 + gx$. An acceptable solution is thus:
$$u(x, t) = \frac{\lambda}{2\kappa}\,x^2 + gx + \lambda t + \text{const} \qquad (1.115)$$
We remarked that a solution that is a function of a linear combination of x and t cannot work, and so we seek solutions by combining the independent variables in particular ways. Due to the physical dimensions of $\kappa$, the following combination of variables is dimensionless:
$$\eta = \frac{x^2}{\kappa t} \qquad (1.116)$$
By substitution we check whether we can find solutions in the form $u(x, t) = f(\eta)$. Evaluating the derivatives:
$$\frac{\partial u}{\partial x} = \frac{df(\eta)}{d\eta}\,\frac{\partial \eta}{\partial x} = \frac{2x}{\kappa t}\,\frac{df(\eta)}{d\eta} \qquad (1.117)$$
$$\frac{\partial^2 u}{\partial x^2} = \frac{2}{\kappa t}\,\frac{df(\eta)}{d\eta} + \left(\frac{2x}{\kappa t}\right)^{2}\frac{d^2 f(\eta)}{d\eta^2} \qquad (1.118)$$
$$\frac{\partial u}{\partial t} = -\frac{x^2}{\kappa t^2}\,\frac{df(\eta)}{d\eta} \qquad (1.119)$$
and substituting into the diffusion equation one obtains that it can, indeed, be written solely in terms of $\eta$:
$$4\eta\,\frac{d^2 f(\eta)}{d\eta^2} + (2 + \eta)\,\frac{df(\eta)}{d\eta} = 0 \qquad (1.120)$$
This is a simple ODE that can be solved in the following way. Using the notation $f'(\eta) = df(\eta)/d\eta$ we have:
$$\frac{f''}{f'} = -\frac{1}{2\eta} - \frac{1}{4} \qquad (1.121)$$
Integrating, we have
$$\ln\!\big(f'(\eta)\big) = -\ln \eta^{1/2} - \frac{\eta}{4} + c, \qquad \ln\!\big[\eta^{1/2} f'(\eta)\big] = -\frac{\eta}{4} + c \qquad (1.122)$$
$$f'(\eta) = \frac{A}{\eta^{1/2}}\,\exp(-\eta/4) \qquad (1.123)$$
Integrating once more,
$$f(\eta) = A\int_{\eta_0}^{\eta} \mu^{-1/2}\,e^{-\mu/4}\,d\mu \qquad (1.124)$$
and, after the substitution
$$\zeta = \frac{\mu^{1/2}}{2} = \frac{x}{2(\kappa t)^{1/2}} \qquad (1.125)$$
the solution takes the form
$$u(x, t) = f(\eta) = B\int_{\zeta_0}^{\zeta} \exp(-{\zeta'}^2)\,d\zeta' \qquad (1.126)$$
In the expression above B is a constant and x and t appear only in the upper integration limit, $\zeta$, and only in the combination $x\,t^{-1/2}$. If $\zeta_0 = 0$ then u(x, t) is the error function $\mathrm{erf}[x/2(\kappa t)^{1/2}]$. Only non-negative values of x and t are considered, so $\zeta \ge 0$.
To understand the physical meaning of the solution, we may think of u as representing a temperature field. We may want to know which temperature distribution it represents, e.g. in the case $\zeta_0 = 0$. Because the solution is a function of $x\,t^{-1/2}$, it is clear that all points x at times t such that $x\,t^{-1/2}$ has the same value will have the same temperature. In other words, at any time t, the region with a given value of the temperature has moved along the positive x-axis a distance proportional to $t^{1/2}$. This is a typical feature of diffusion processes. We notice that at t = 0 the variable $\zeta \to \infty$ and u becomes independent of x (except at x = 0), while at x = 0 one has that u is identically zero for all t.
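The self-similar error-function solution can be verified directly. The Python sketch below (not part of the original notes) uses an illustrative diffusivity and checks both the similarity property and the PDE itself by finite differences.

```python
import numpy as np
from scipy.special import erf

kappa = 1.0                      # diffusivity, illustrative value

def u(x, t):
    """Similarity solution of Eq. (1.112) with zeta_0 = 0: u = erf(x / (2 sqrt(kappa t)))."""
    return erf(x / (2.0 * np.sqrt(kappa * t)))

# Self-similarity: points with the same x / sqrt(t) have the same value of u.
print(u(1.0, 1.0), u(2.0, 4.0), u(3.0, 9.0))   # all equal

# Finite-difference check that kappa * u_xx is close to u_t at an interior point.
x0, t0, d = 1.0, 1.0, 1e-4
u_xx = (u(x0 + d, t0) - 2 * u(x0, t0) + u(x0 - d, t0)) / d**2
u_t  = (u(x0, t0 + d) - u(x0, t0 - d)) / (2 * d)
print(kappa * u_xx, u_t)                       # nearly equal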
Example: point release of heat in a thin sheet

Consider a thin sheet of thickness b, thermal conductivity k, specific heat s and density $\rho$, in which an amount of heat E is released at a point at time t = 0. For $r \ge b$ the excess temperature has the form
$$u(r, t) = \frac{\alpha}{t}\,\exp\!\left(-\frac{r^2}{2\beta t}\right) \qquad (1.127)$$
where $\alpha$ and $\beta$ are constants. Show that (i) $\beta = 2k/(s\rho)$; (ii) the excess heat energy is independent of t, and evaluate $\alpha$; (iii) the total heat flow through any circle of radius r is E.

Solution. The equation relevant for this problem is the equation for heat diffusion:
$$k\,\nabla^2 u(r, t) = s\rho\,\frac{\partial u(r, t)}{\partial t} \qquad (1.128)$$
We look for the solution for $r \ge b$ and treat the problem as having circular symmetry. The equation becomes:
$$\frac{k}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial u}{\partial r}\right) = s\rho\,\frac{\partial u}{\partial t} \qquad (1.129)$$
where u = u(r, t). (i) Substituting the solution into the equation, both sides become proportional to
$$\left(\frac{r^2}{2\beta t} - 1\right)\frac{1}{t^2}\,\exp\!\left(-\frac{r^2}{2\beta t}\right) \qquad (1.130)$$
with prefactors $2k\alpha/\beta$ and $s\rho\,\alpha$ respectively, from which we can see that $\beta = 2k/(s\rho)$.
(ii) The excess heat in the system at any time t is
$$b s\rho \int_0^{\infty} u(r, t)\,2\pi r\,dr = 2\pi b s\rho\,\alpha \int_0^{\infty} \frac{r}{t}\,\exp\!\left(-\frac{r^2}{2\beta t}\right) dr = 2\pi b s\rho\,\alpha\beta \qquad (1.131)$$
The excess heat is thus independent of t and must be equal to the heat input E. As a consequence:
$$\alpha = \frac{E}{2\pi b s\rho\,\beta} = \frac{E}{4\pi b k} \qquad (1.132)$$
Lecture 2
2.1 First order quasi-linear equations
This section closely follows: Applied Partial Differential Equations, John Norbury, web lecture notes, University of Oxford.

We start by considering the following first-order, quasi-linear PDE:
$$a(x, y, u)\,\frac{\partial u}{\partial x} + b(x, y, u)\,\frac{\partial u}{\partial y} = c(x, y, u) \qquad (2.1)$$
where a, b and c are assumed to be smooth (continuously differentiable) functions of the independent variables x and y, and u(x, y) is the solution.
Using this simple equation, which is relevant to many applications, we will illustrate the concepts of Cauchy data, characteristics and weak solutions.
Special cases:
If a and b do not depend on u, the equation is called semi-linear:
$$a(x, y)\,\frac{\partial u}{\partial x} + b(x, y)\,\frac{\partial u}{\partial y} = c(x, y, u) \qquad (2.2)$$
If, in addition, c is linear in u, the equation is linear:
$$a(x, y)\,\frac{\partial u}{\partial x} + b(x, y)\,\frac{\partial u}{\partial y} = c_1(x, y)\,u + c_2(x, y) \qquad (2.3)$$
Method of Characteristics

The solution u(x, y) can be thought of as a surface z = u(x, y) in three dimensions, and it is possible to compute the normal to this surface by taking the gradient:
$$\mathbf{n} \propto \nabla\big(u(x, y) - z\big) = \left(\frac{\partial u}{\partial x},\; \frac{\partial u}{\partial y},\; -1\right) \qquad (2.4)$$
while the PDE (2.2) can be rewritten as:
$$(a,\; b,\; c)\cdot\mathbf{n} = 0 \qquad (2.5)$$
and this means that the vector $(a, b, c)^T$ is always tangent to the surface representing the solution of the PDE. Geometrically, the above condition allows us to define the characteristic curves $(x(\tau), y(\tau), z(\tau))$, parameterized by $\tau$, that are curves everywhere tangent to the solution surface and that satisfy
$$\frac{dx}{d\tau} = a(x, y, u) \qquad (2.6)$$
$$\frac{dy}{d\tau} = b(x, y, u) \qquad (2.7)$$
$$\frac{du}{d\tau} = c(x, y, u) \qquad (2.8)$$
The projections of the characteristics on the x-y plane, $(x(\tau), y(\tau))$, are the characteristic projections.
Figure 2.1: Schematic showing the characteristics parameterized by τ and pointing in the direction (a, b, c)ᵀ, emerging from the initial curve, in turn parameterized by s. The curve Γ and the characteristic projections are the projections of the initial curve and of the characteristic curves, respectively.
The formulation of the PDE in terms of characteristics allows us to define a solution to the PDE in parametric form by solving a system of ODEs. Here we assume that the boundary data, i.e. the Cauchy data, are given on a curve $(x_0(s), y_0(s), u_0(s))$, whose projection on the (x, y) plane is called Γ and is parameterized by another parameter s, see Figure 2.1. For any value of s we can find a characteristic that passes through it, with the initial conditions:
$$x = x_0(s) \qquad (2.9)$$
$$y = y_0(s) \qquad (2.10)$$
$$u = u_0(s) \qquad (2.11)$$
The characteristic itself is parameterized with τ; solving Eqs. (2.6)-(2.8), the parametric solutions are given by:
$$x = x(s, \tau) \qquad (2.12)$$
$$y = y(s, \tau) \qquad (2.13)$$
$$u = u(s, \tau) \qquad (2.14)$$
Example 2.1: Quasi-linear equation solved by characteristics

Consider the equation
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 1 \qquad (2.15)$$
for u(x, t) in t > 0, given the initial condition u = x at t = 0. The equations for the characteristics are, according to (2.6)-(2.8):
$$\frac{dt}{d\tau} = 1 \qquad (2.16)$$
$$\frac{dx}{d\tau} = u \qquad (2.17)$$
$$\frac{du}{d\tau} = 1 \qquad (2.18)$$
Taking τ = 0 at t = 0, the first and third equations give t = τ and u = s + τ, with x = s at t = 0. This gives the solution $x = s + s\tau + \tfrac{1}{2}\tau^2$ (2.19), which can be expressed in terms of s and t; finally one can solve for u, giving
$$u = \frac{x + t + \tfrac{1}{2}t^2}{1 + t} \qquad (2.20)$$
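The characteristic system (2.16)-(2.18) can also be integrated numerically and compared against the closed form (2.20). The following Python sketch is not part of the original notes; starting values of s are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic ODEs (2.16)-(2.18) for u_t + u u_x = 1 with u(x, 0) = x.
def rhs(tau, w):            # w = [t, x, u]
    t, x, u = w
    return [1.0, u, 1.0]

def u_exact(x, t):          # closed-form solution (2.20)
    return (x + t + 0.5 * t**2) / (1.0 + t)

tau_end = 2.0
for s in (-1.0, 0.0, 1.0):                       # points on the initial curve
    sol = solve_ivp(rhs, (0.0, tau_end), [0.0, s, s])
    t, x, u = sol.y[:, -1]
    print(f"s={s:+.1f}: numeric u={u:.6f}, exact u={u_exact(x, t):.6f}")
```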
Cauchy data

Cauchy data are boundary conditions for a PDE that, at least locally, determine the solution. In the case of a first-order quasi-linear PDE, the Cauchy data correspond to the prescription of the values of u along a curve in the x-y plane, i.e. $u = u_0(s)$ where $x = x_0(s)$ and $y = y_0(s)$, with s parameterizing the curve Γ.
The curve Γ along which the Cauchy data are given should be nowhere tangent to $(a, b)^T$; if this happens, the characteristic points along the initial curve instead of departing from it, and in general the initial data will not agree with the ODE satisfied by u along the characteristic.
Example 2.2: Consistency of initial data

The equation
$$\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = 1 \qquad (2.21)$$
has the general solution
$$u = \frac{x + y}{2} + F(x - y) \qquad (2.22)$$
2.3.1 Cauchy-Kowalevski theorem

We want to formalize the condition for existence of a solution. A necessary condition for the existence of a unique solution to the PDE (2.23) in a neighbourhood of Γ is the existence of the first derivatives of u on Γ:
$$a(x, y, u)\,\frac{\partial u}{\partial x} + b(x, y, u)\,\frac{\partial u}{\partial y} = c(x, y, u) \qquad (2.23)$$
Along Γ the data give, by the chain rule,
$$\frac{du_0}{ds} = \frac{\partial u}{\partial x}\,\frac{dx_0}{ds} + \frac{\partial u}{\partial y}\,\frac{dy_0}{ds} \qquad (2.24)$$
Equations (2.23) and (2.24) provide a pair of equations for ∂u/∂x and ∂u/∂y on Γ. The derivatives can thus be obtained from this system of equations under the condition that its determinant is non-zero:
$$a\,\frac{dy_0}{ds} - b\,\frac{dx_0}{ds} \neq 0 \qquad (2.25)$$
If the condition is satisfied then both u and its first derivatives are defined on Γ. This condition is equivalent to what was previously discussed, namely that Γ must not be tangent to a characteristic projection.
When the determinant is zero, there is either no solution or an infinity of solutions. The two cases correspond, respectively, to the following conditions:
$$\frac{1}{a}\frac{dx_0}{ds} = \frac{1}{b}\frac{dy_0}{ds} \neq \frac{1}{c}\frac{du_0}{ds} \qquad \text{no solutions} \qquad (2.26)$$
$$\frac{1}{a}\frac{dx_0}{ds} = \frac{1}{b}\frac{dy_0}{ds} = \frac{1}{c}\frac{du_0}{ds} \qquad \text{infinitely many solutions} \qquad (2.27)$$
The same argument carried out for the existence of the first derivatives of u on Γ can be repeated to inquire about the existence of the second derivatives. Differentiating the data along Γ,
$$\frac{d}{ds}\!\left(\frac{\partial u}{\partial x}\right) = \frac{\partial^2 u}{\partial x^2}\,\frac{dx_0}{ds} + \frac{\partial^2 u}{\partial x\,\partial y}\,\frac{dy_0}{ds} \qquad (2.28)$$
while differentiating the PDE itself with respect to x gives
$$a\,\frac{\partial^2 u}{\partial x^2} + b\,\frac{\partial^2 u}{\partial x\,\partial y} + \left(\frac{\partial a}{\partial x} + \frac{\partial a}{\partial u}\frac{\partial u}{\partial x}\right)\frac{\partial u}{\partial x} + \left(\frac{\partial b}{\partial x} + \frac{\partial b}{\partial u}\frac{\partial u}{\partial x}\right)\frac{\partial u}{\partial y} = \frac{\partial c}{\partial x} + \frac{\partial c}{\partial u}\frac{\partial u}{\partial x} \qquad (2.29)$$
Together with (2.28), these are two equations for ∂²u/∂x² and ∂²u/∂x∂y, and the condition for this system to have a unique solution is the same as for the system (2.25).
Conclusion: if a, b and c are analytic, the same argument can be continued, giving the same condition at every order of the derivatives of u on Γ. A Taylor series can thus be constructed for u(x, y) on the curve Γ. This is the essence of the Cauchy-Kowalevski theorem, which states that the PDE (2.23) has a unique solution in some neighbourhood of Γ if a, b and c are analytic functions and the condition (2.25) is satisfied.
2.3.2 Domain of definition

When the initial data are not given on an infinite curve but, e.g., only on a finite interval, then the solution is defined in the region that is reached by the characteristics originating from the points in the initial interval. This region is called the domain of definition.
Example 2.3: Domain of definition

Consider
$$\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = u^3 \qquad (2.30)$$
with initial conditions u = y on x = 0 and 0 < y < 3. The equations for the characteristics are:
$$\frac{dx}{d\tau} = 1 \qquad (2.31)$$
$$\frac{dy}{d\tau} = 1 \qquad (2.32)$$
$$\frac{du}{d\tau} = u^3 \qquad (2.33)$$
With the initial data x = 0, y = s, u = s at τ = 0, integration gives
$$x = \tau, \qquad y = s + \tau \qquad (2.34)-(2.35)$$
$$u = \frac{s}{\sqrt{1 - 2s^2\tau}} \qquad (2.36)$$
Eliminating s and τ, the solution is
$$u = \frac{y - x}{\sqrt{1 - 2x(y - x)^2}} \qquad (2.37)$$
which ceases to exist on the curve
$$2x(y - x)^2 = 1 \qquad (2.38)$$
as represented in Figure 2.2, where the domain of definition of the solution is shown as bounded by the curve given by (2.38) and by the characteristic projections y = x and y = x + 3.
Example 2.4: Domain of definition (2)

Consider the equation
$$x\,\frac{\partial u}{\partial x} + y\,\frac{\partial u}{\partial y} = 0 \qquad (2.39)$$
with initial data u = s at the points (x, y) = (1, s) for 0 < s < 1. The characteristic equations
$$\frac{dx}{d\tau} = x, \qquad \frac{dy}{d\tau} = y, \qquad \frac{du}{d\tau} = 0 \qquad (2.40)-(2.42)$$
give
$$x = e^{\tau} \qquad (2.43)$$
$$y = s\,e^{\tau} \qquad (2.44)$$
$$u = s \qquad (2.45)$$
for 0 < s < 1. Eliminating s and τ we obtain the solution u = y/x for 0 < y/x < 1. The solution is not uniquely defined at the origin, thus the domain of definition is 0 < y/x < 1 and x > 0, as shown in Figure 2.3, where the characteristic projections are drawn.
Figure 2.3: Characteristic projections for equation (2.39) and with the domain of definition bounded by the
thick black curve. The domain of definition does not contain the origin.
Example 2.5: Burgers equation with sinusoidal initial data

Consider the inviscid Burgers equation
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0 \qquad (2.46)$$
for t > 0 and with initial data u = sin(x) for 0 ≤ x ≤ 2π at t = 0. The solution in parametric form is u = sin(s), t = τ, x = s + τ sin(s), or
$$u = \sin(x - t u) \qquad (2.47)$$
in implicit form. The characteristic lines are represented in Figure 2.4. As can be seen, the characteristics cross at some finite distance. The Jacobian is ∂(x, t)/∂(s, τ) = 1 + t cos(s), and J = 0 on the curve x = s - tan(s), t = -sec(s), drawn in Figure 2.4 as a thick red curve. The solution is represented in Figure 2.5.
Non-uniqueness

The domain of definition may be further restricted by u ceasing to be a single-valued function of x and y. A unique mapping between (τ, s) and (x, y) requires that a unique characteristic passes through each point in the (x, y) plane. If the following determinant is zero, the characteristics start to intersect:
$$J = \frac{\partial(x, y)}{\partial(s, \tau)} = \frac{\partial x}{\partial s}\frac{\partial y}{\partial \tau} - \frac{\partial y}{\partial s}\frac{\partial x}{\partial \tau} = b\,\frac{\partial x}{\partial s} - a\,\frac{\partial y}{\partial s} \qquad (2.48)$$
and thus the method of characteristics fails to provide a valid solution.
Figure 2.4: Characteristic projections for the Burgers equation with initial conditions u = sin(x), as from Example 2.5, and domain of definition bounded by the thick red curve, where the Jacobian J = 0.
Figure 2.5: Solution of Eq. (2.47) plotted versus x for t = 0.5, 1.0, 1.5, 2.0. The initial condition u = sin(x) is shown in blue. Note that at some instant of time the solution becomes multivalued, and so the solution obtained from the method of characteristics is no longer valid, see Fig. 2.4.
2.4 Weak solutions
We can extend the solution of the equation to the case where u is smooth everywhere but only C⁰ on a line C, on which the first derivatives of u are discontinuous. If the curve C is parameterized by x = x(ξ) and y = y(ξ), then we can use the superscripts ± to indicate the solution on one or the other side of C:
$$\frac{du^+}{d\xi} = \frac{\partial u^+}{\partial x}\,\frac{dx}{d\xi} + \frac{\partial u^+}{\partial y}\,\frac{dy}{d\xi} \qquad (2.49)$$
$$\frac{du^-}{d\xi} = \frac{\partial u^-}{\partial x}\,\frac{dx}{d\xi} + \frac{\partial u^-}{\partial y}\,\frac{dy}{d\xi} \qquad (2.50)$$
while the function u itself is continuous across C, so $u^+ = u^-$. From this it follows that
$$\frac{du^+}{d\xi} = \frac{du^-}{d\xi} \qquad (2.51)$$
and thus
$$\frac{dx}{d\xi}\left(\frac{\partial u^+}{\partial x} - \frac{\partial u^-}{\partial x}\right) + \frac{dy}{d\xi}\left(\frac{\partial u^+}{\partial y} - \frac{\partial u^-}{\partial y}\right) = 0 \qquad (2.52)$$
and finally
$$\frac{dx}{d\xi}\left[\frac{\partial u}{\partial x}\right]^{+}_{-} + \frac{dy}{d\xi}\left[\frac{\partial u}{\partial y}\right]^{+}_{-} = 0 \qquad (2.53)$$
where $[u]^{+}_{-} = u^+ - u^-$ denotes the jump across C. Both $u^+$ and $u^-$ are classical solutions of the PDE, in the sense that they are C¹, and thus
$$a\,\frac{\partial u}{\partial x} + b\,\frac{\partial u}{\partial y} = c \qquad (2.54)$$
holds on both sides; subtracting the two relations gives
$$a\left[\frac{\partial u}{\partial x}\right]^{+}_{-} + b\left[\frac{\partial u}{\partial y}\right]^{+}_{-} = 0 \qquad (2.55)$$
The two equations (2.53) and (2.55) form a homogeneous system for the jumps in the derivatives on C. If the determinant of the system is non-zero, the only solution is the trivial one (no jump); a non-trivial discontinuity in the derivatives can therefore exist only if the determinant vanishes, which is equivalent to the condition
$$b\,\frac{dx}{d\xi} - a\,\frac{dy}{d\xi} = 0 \qquad (2.56)$$
i.e. the curve C must be a characteristic projection.
As an example, consider the equation
$$\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = 1 \qquad (2.57)$$
with the boundary data on y = 0
$$u(x, 0) = \begin{cases} 0 & x < 0 \\ x & x \ge 0 \end{cases} \qquad (2.58)$$
The characteristic equations are dx = dy = du, with general solution u = x + f(x - y). The boundary condition gives
$$f(s) = \begin{cases} -s & s < 0 \\ 0 & s \ge 0 \end{cases} \qquad (2.59)$$
and the solution, therefore, is
$$u = \begin{cases} y & x < y \\ x & x \ge y \end{cases} \qquad (2.60)$$
Thus the function u is continuous across the characteristic y = x, but its derivative is not.
Lecture 3
In this chapter we will discuss fundamental aspects related to the numerical approximation of PDEs by means of finite differences. The fundamental concepts that will be introduced and discussed are: finite difference approximation of differential operators (time and space derivatives), well-posed problems, truncation error, scheme consistency, dissipative or dispersive properties of a scheme, scheme stability (von Neumann stability analysis), and scheme convergence.
We will recall how to derive finite difference approximations of derivatives in order to define a numerical scheme for solving partial differential equations. We will derive the truncation error of a scheme so as to be able to determine the accuracy of the computed solution. We will also define very important properties of a numerical scheme (consistency, stability, well-posedness) and their relation with the convergence of the numerical solution towards the exact solution of a partial differential equation.
3.1 Finite difference approximation of derivatives

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes.

Consider a partial differential equation for u(x, t) defined in a rectangular domain D = {(x, t) | a < x < b, t₀ < t < t_max}. In order to numerically solve PDEs one can approximate the derivatives so as to evaluate them numerically. We discretize the function over a mesh that is equispaced by a factor h in space and Δt in time. This means that $x_j = j\,h$ and $t_n = n\,\Delta t$.

The time derivative
The Taylor expansion of the function u(x, t) gives:
$$u(x, t+\Delta t) = u(x, t) + \frac{\partial u(x, t)}{\partial t}\,\Delta t + \frac{1}{2}\frac{\partial^2 u(x, t)}{\partial t^2}\,\Delta t^2 + O(\Delta t^3) \qquad (3.1)$$
and thus:
$$\frac{u(x, t+\Delta t) - u(x, t)}{\Delta t} = \frac{\partial u(x, t)}{\partial t} + \frac{\Delta t}{2}\frac{\partial^2 u(x, t)}{\partial t^2} + O(\Delta t^2) \qquad (3.2)$$
$$\frac{\partial u(x, t)}{\partial t} = \frac{u(x, t+\Delta t) - u(x, t)}{\Delta t} - \frac{\Delta t}{2}\frac{\partial^2 u(x, t)}{\partial t^2} + O(\Delta t^2) \qquad (3.3)$$
The spatial first derivative
$$u(x+h, t) = u(x, t) + \frac{\partial u(x, t)}{\partial x}\,h + \frac{\partial^2 u(x, t)}{\partial x^2}\frac{h^2}{2} + \frac{\partial^3 u(x, t)}{\partial x^3}\frac{h^3}{6} + \frac{\partial^4 u(x, t)}{\partial x^4}\frac{h^4}{24} + O(h^5) \qquad (3.4)$$
$$u(x-h, t) = u(x, t) - \frac{\partial u(x, t)}{\partial x}\,h + \frac{\partial^2 u(x, t)}{\partial x^2}\frac{h^2}{2} - \frac{\partial^3 u(x, t)}{\partial x^3}\frac{h^3}{6} + \frac{\partial^4 u(x, t)}{\partial x^4}\frac{h^4}{24} + O(h^5) \qquad (3.5)$$
and subtracting
$$u(x+h, t) - u(x-h, t) = 2\,\frac{\partial u(x, t)}{\partial x}\,h + 2\,\frac{\partial^3 u(x, t)}{\partial x^3}\frac{h^3}{6} + O(h^5) \qquad (3.6)$$
and thus
$$\frac{\partial u(x, t)}{\partial x} = \frac{u(x+h, t) - u(x-h, t)}{2h} - \frac{\partial^3 u(x, t)}{\partial x^3}\frac{h^2}{6} + O(h^4) \qquad (3.7)$$
Figure 3.1: Stencil for the first-order spatial derivative (points $u_{j-1}^n$, $u_j^n$, $u_{j+1}^n$ at time level $n\,\Delta t$).
The spatial second derivative
$$u(x+h, t) = u(x, t) + \frac{\partial u(x, t)}{\partial x}\,h + \frac{\partial^2 u(x, t)}{\partial x^2}\frac{h^2}{2} + \frac{\partial^3 u(x, t)}{\partial x^3}\frac{h^3}{6} + \frac{\partial^4 u(x, t)}{\partial x^4}\frac{h^4}{24} + \ldots \qquad (3.8)$$
$$u(x-h, t) = u(x, t) - \frac{\partial u(x, t)}{\partial x}\,h + \frac{\partial^2 u(x, t)}{\partial x^2}\frac{h^2}{2} - \frac{\partial^3 u(x, t)}{\partial x^3}\frac{h^3}{6} + \frac{\partial^4 u(x, t)}{\partial x^4}\frac{h^4}{24} + \ldots \qquad (3.9)$$
and summing
$$u(x+h, t) + u(x-h, t) = 2u(x, t) + 2\,\frac{\partial^2 u(x, t)}{\partial x^2}\frac{h^2}{2} + 2\,\frac{\partial^4 u(x, t)}{\partial x^4}\frac{h^4}{24} + \ldots \qquad (3.10)$$
and
$$\frac{\partial^2 u(x, t)}{\partial x^2} = \frac{u(x+h, t) - 2u(x, t) + u(x-h, t)}{h^2} - \frac{\partial^4 u(x, t)}{\partial x^4}\frac{h^2}{12} + \ldots \qquad (3.11)$$
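The truncation orders of (3.7) and (3.11) can be verified empirically. The Python sketch below (not part of the original notes) uses sin(x) as an illustrative test function.

```python
import numpy as np

# Check the truncation orders of Eqs. (3.7) and (3.11) on a smooth test function.
f  = np.sin
d1 = np.cos                       # exact first derivative
d2 = lambda x: -np.sin(x)         # exact second derivative
x0 = 0.7

for h in (0.1, 0.05, 0.025):
    central_1st = (f(x0 + h) - f(x0 - h)) / (2 * h)                 # Eq. (3.7)
    central_2nd = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2        # Eq. (3.11)
    print(f"h={h:6.3f}  err_1st={abs(central_1st - d1(x0)):.2e}"
          f"  err_2nd={abs(central_2nd - d2(x0)):.2e}")
# Halving h reduces both errors by roughly a factor 4, confirming O(h^2) accuracy.
```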
3.2 Discretization of the advection-diffusion equation

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes.

Based on the expressions for the discretization of the differential operators, we can now write the discretized form of the advection-diffusion equation:
$$\frac{\partial u}{\partial t} + v\,\frac{\partial u}{\partial x} = D\,\frac{\partial^2 u}{\partial x^2} \qquad (3.12)$$
where for simplicity we consider the case where the advection velocity v = const. At $t = t_n = n\,\Delta t$ and $x = x_j = j\,h$ we denote
n
u
=
t (xj ,tn )
j
n
u
u
=
x j
x (xj ,tn )
u
t
3.2
2u
x2
n
j
3.3
2 u
=
x2 (xj ,tn )
u
t
n
+v
u
x
n
=D
2u
x2
n
(3.13)
j
n
u
t
=
j
un+1
unj
j
+ O(t)
t
(3.14)
n
unj+1 unj1
u
=
+ O(h2 )
x j
2h
2 n
unj+1 2unj + unj1
u
=
+ O(h2 )
x2 j
h2
(3.15)
(3.16)
Dt
vt n
uj+1 unj1 + 2 unj+1 2unj + unj1
2h
h
(3.17)
Figure 3.2: Graphical representation of the stencil used to approximate the advection-diffusion equation via forward Euler in time and centred differences in space (FTCS): the new value $u_j^{n+1}$ is computed from $u_{j-1}^n$, $u_j^n$, $u_{j+1}^n$.
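A minimal Python sketch of the FTCS update (3.17) on a periodic grid follows (not part of the original notes); the grid size, v, D and the initial Gaussian profile are illustrative choices.

```python
import numpy as np

N, L = 200, 1.0
h = L / N
v, D = 1.0, 0.005
dt = 0.4 * h**2 / D               # satisfies the stability conditions (3.71) and (3.74)

x = np.arange(N) * h
u = np.exp(-200 * (x - 0.5)**2)   # initial condition: a narrow Gaussian

def ftcs_step(u):
    up = np.roll(u, -1)           # u_{j+1} (periodic boundary)
    um = np.roll(u, +1)           # u_{j-1}
    return u - v * dt / (2 * h) * (up - um) + D * dt / h**2 * (up - 2 * u + um)

u0_sum = u.sum()
for _ in range(500):
    u = ftcs_step(u)
print("discrete 'mass' conserved:", np.isclose(u.sum(), u0_sum))
```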
3.2.1 Systematic construction of difference approximations

For a function of a single variable u(x), denote $u_i = u(x_i)$. The Taylor expansion gives
$$u_{i+1} = u_i + h\left.\frac{\partial u}{\partial x}\right|_i + \frac{h^2}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i + \frac{h^3}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^4) \qquad (3.18)$$
$$u_{i-1} = u_i - h\left.\frac{\partial u}{\partial x}\right|_i + \frac{h^2}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i - \frac{h^3}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^4) \qquad (3.19)$$
Figure 3.3: Stencil used to approximate the temporal derivative with forward Euler ($u_j^n \to u_j^{n+1}$).

Figure 3.4: Stencil used to approximate the spatial derivative with centred differences ($u_{j-1}^n$, $u_j^n$, $u_{j+1}^n$).
The forward difference:
$$D_+ u\big|_i = \frac{u_{i+1} - u_i}{h} = \left.\frac{\partial u}{\partial x}\right|_i + \frac{h}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i + \frac{h^2}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^3) \qquad (3.20)$$
The backward difference:
$$D_- u\big|_i = \frac{u_i - u_{i-1}}{h} = \left.\frac{\partial u}{\partial x}\right|_i - \frac{h}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i + \frac{h^2}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^3) \qquad (3.21)$$
Both are first-order accurate, while the central difference for the first-order derivative has an error of $O(h^2)$:
$$\frac{1}{2}\left(D_+ + D_-\right)u\big|_i = \frac{u_{i+1} - u_{i-1}}{2h} = \left.\frac{\partial u}{\partial x}\right|_i + \frac{h^2}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^4) \qquad (3.22)$$
The standard centred approximation of the second derivative is
$$\left.\frac{\partial^2 u}{\partial x^2}\right|_i = \frac{u_{i+1} - 2u_i + u_{i-1}}{h^2} + O(h^2) \qquad (3.23)$$
One-sided approximations can be constructed systematically by seeking a combination
$$\frac{\alpha u_i + \beta u_{i+1} + \gamma u_{i+2}}{h} \approx \left.\frac{\partial u}{\partial x}\right|_i \qquad (3.24)$$
and using
$$u_{i+1} = u_i + h\left.\frac{\partial u}{\partial x}\right|_i + \frac{h^2}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i + \frac{h^3}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^4) \qquad (3.25)$$
$$u_{i+2} = u_i + 2h\left.\frac{\partial u}{\partial x}\right|_i + \frac{(2h)^2}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i + \frac{(2h)^3}{6}\left.\frac{\partial^3 u}{\partial x^3}\right|_i + O(h^4) \qquad (3.26)$$
thus:
$$\frac{\alpha u_i + \beta u_{i+1} + \gamma u_{i+2}}{h} = \frac{\alpha + \beta + \gamma}{h}\,u_i + (\beta + 2\gamma)\left.\frac{\partial u}{\partial x}\right|_i + \frac{h}{2}(\beta + 4\gamma)\left.\frac{\partial^2 u}{\partial x^2}\right|_i + O(h^2) \qquad (3.27)$$
Requiring this to reproduce the first derivative to the highest possible order gives the conditions
$$\alpha + \beta + \gamma = 0, \qquad \beta + 2\gamma = 1, \qquad \beta + 4\gamma = 0 \qquad (3.28)-(3.30)$$
whose solution $\alpha = -3/2$, $\beta = 2$, $\gamma = -1/2$ yields the one-sided, second-order accurate formula
$$\left.\frac{\partial u}{\partial x}\right|_i = \frac{-3u_i + 4u_{i+1} - u_{i+2}}{2h} + O(h^2) \qquad (3.31)$$
In the same way, for the second derivative one writes
$$\frac{\alpha u_i + \beta u_{i+1} + \gamma u_{i+2}}{h^2} = \frac{\alpha + \beta + \gamma}{h^2}\,u_i + \frac{\beta + 2\gamma}{h}\left.\frac{\partial u}{\partial x}\right|_i + \frac{\beta + 4\gamma}{2}\left.\frac{\partial^2 u}{\partial x^2}\right|_i + O(h) \qquad (3.32)-(3.33)$$
and the conditions
$$\alpha + \beta + \gamma = 0, \qquad \beta + 2\gamma = 0, \qquad \beta + 4\gamma = 2 \qquad (3.34)-(3.36)$$
give
$$\left.\frac{\partial^2 u}{\partial x^2}\right|_i = \frac{u_i - 2u_{i+1} + u_{i+2}}{h^2} + O(h) \qquad (3.37)$$
Examples of higher-order approximations of the first derivative are
$$\left.\frac{\partial u}{\partial x}\right|_i = \frac{2u_{i+1} + 3u_i - 6u_{i-1} + u_{i-2}}{6h} + O(h^3) \qquad (3.38)$$
$$\left.\frac{\partial u}{\partial x}\right|_i = \frac{-u_{i+2} + 6u_{i+1} - 3u_i - 2u_{i-1}}{6h} + O(h^3) \qquad (3.39)$$
$$\left.\frac{\partial u}{\partial x}\right|_i = \frac{-u_{i+2} + 8u_{i+1} - 8u_{i-1} + u_{i-2}}{12h} + O(h^4) \qquad (3.40)$$
while the corresponding fourth-order centred approximation of the second derivative is
$$\left.\frac{\partial^2 u}{\partial x^2}\right|_i = \frac{-u_{i+2} + 16u_{i+1} - 30u_i + 16u_{i-1} - u_{i-2}}{12h^2} + O(h^4) \qquad (3.41)$$
3.3 Consistency

This section closely follows: Strikwerda, Finite Difference Schemes and Partial Differential Equations, Section 1.4.

An important concept characterizing a finite difference scheme is the so-called consistency. We say that, given a partial differential equation, P u = f, and a finite difference scheme, $P_{\Delta t,h}\,v = f$, the finite difference scheme is consistent with the partial differential equation if, for any smooth function φ(t, x),
$$P\phi\big|_j^n - P_{\Delta t, h}\,\phi \to 0 \quad \text{as } \Delta t,\, h \to 0,$$
the convergence being point-wise convergence at each point (t, x).
Note that for some schemes we may have to restrict the manner in which Δt and h tend to zero in order for the scheme to be consistent. We illustrate this definition with the following examples, where the consistency of two numerical schemes, which will later be described in detail, is analysed.
Example 3.1: Consistency of the Forward-Time Forward-Space scheme for the linear advection equation

For the linear advection equation
$$\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = 0$$
the operator P is $\partial/\partial t + a\,\partial/\partial x$. The difference operator $P_{\Delta t,h}$ for the forward-time forward-space scheme is given by
$$P_{\Delta t, h}\,\phi = \frac{\phi_j^{n+1} - \phi_j^n}{\Delta t} + a\,\frac{\phi_{j+1}^n - \phi_j^n}{h},$$
where $\phi_j^n = \phi(n\Delta t, jh)$. We begin with the Taylor series of the function φ in t and x about the point (nΔt, jh). We have that
$$\phi_j^{n+1} = \phi_j^n + \Delta t\,\phi_t + \frac{1}{2}\Delta t^2\,\phi_{tt} + O(\Delta t^3), \qquad (3.42)$$
$$\phi_{j+1}^n = \phi_j^n + h\,\phi_x + \frac{1}{2}h^2\,\phi_{xx} + O(h^3), \qquad (3.43)$$
where the derivatives on the right-hand side are evaluated at (nΔt, jh), and so
$$P_{\Delta t, h}\,\phi = \phi_t + a\,\phi_x + \frac{1}{2}\Delta t\,\phi_{tt} + \frac{1}{2}a h\,\phi_{xx} + O(\Delta t^2) + O(h^2).$$
Thus
$$P\phi\big|_j^n - P_{\Delta t, h}\,\phi = -\frac{1}{2}\Delta t\,\phi_{tt} - \frac{1}{2}a h\,\phi_{xx} + O(\Delta t^2) + O(h^2) \;\to\; 0 \quad \text{as } (\Delta t, h) \to 0$$
and therefore, according to the definition, this scheme is consistent with the given partial differential equation.
Example 3.2: Consistency of the Lax-Friedrichs scheme for the linear advection equation

For the same equation, the Lax-Friedrichs scheme reads
$$P_{\Delta t, h}\,\phi = \frac{\phi_j^{n+1} - \tfrac{1}{2}\left(\phi_{j+1}^n + \phi_{j-1}^n\right)}{\Delta t} + a\,\frac{\phi_{j+1}^n - \phi_{j-1}^n}{2h}.$$
Using
$$\frac{\phi_{j+1}^n - \phi_{j-1}^n}{2h} = \phi_x + \frac{1}{6}h^2\,\phi_{xxx} + O(h^4), \qquad \frac{1}{2}\left(\phi_{j+1}^n + \phi_{j-1}^n\right) = \phi_j^n + \frac{1}{2}h^2\,\phi_{xx} + O(h^4),$$
substituting these expressions and Eq. (3.42) into the scheme, we obtain
$$P_{\Delta t, h}\,\phi = \phi_t + a\,\phi_x + \frac{1}{2}\Delta t\,\phi_{tt} - \frac{1}{2}\Delta t^{-1}h^2\,\phi_{xx} + \frac{1}{6}a h^2\,\phi_{xxx} + O(h^4 + \Delta t^{-1}h^4 + \Delta t^2).$$
The term proportional to $\Delta t^{-1}h^2$ does not vanish for arbitrary ways of letting (Δt, h) tend to zero: the scheme is consistent only if $h^2/\Delta t \to 0$.
Note also that consistency is a necessary condition for a scheme to be convergent, but it is not a sufficient condition. We will later determine under which conditions a scheme is convergent.
3.4 Semi-discretization of the diffusion equation

Consider the one-dimensional diffusion equation
$$\frac{\partial u(x, t)}{\partial t} = \frac{\partial^2 u(x, t)}{\partial x^2} \qquad (3.44)$$
By finite difference approximation of the 1D Laplacian, and writing $u_j(t)$ for $u(x_j, t)$,
$$\frac{du_j(t)}{dt} = \frac{u_{j+1}(t) - 2u_j(t) + u_{j-1}(t)}{h^2} \qquad (3.45)$$
which is a system of ODEs evolved along the lines of discretization of the equation. This system can be written in matrix form as follows:
$$\frac{d\mathbf{u}}{dt} = A\,\mathbf{u} \qquad (3.46)$$
where
$$A = \frac{1}{h^2}\begin{pmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 \end{pmatrix} \qquad (3.47)$$
Based on this discretization, the ODEs can be integrated numerically with the standard methods for ODEs.
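The matrix A is easy to build and inspect. The Python sketch below is not part of the original notes; it assumes homogeneous Dirichlet boundaries and an illustrative grid size.

```python
import numpy as np

# Matrix A of Eq. (3.47) for the semi-discrete diffusion equation du/dt = A u.
N = 50
h = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

# All eigenvalues are real and negative, so every discrete mode decays in time,
# as for the continuum equation.
eigs = np.linalg.eigvalsh(A)
print(eigs.min(), eigs.max())

# Forward Euler time stepping of this ODE system is stable only for
# dt <= 2/|lambda_max|, roughly h^2/2: the same restriction as Eq. (3.71).
```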
3.5 Stability

Figure 3.5: Stability of the ODE integration: for time steps Δt larger than 2·10⁻², the forward Euler method becomes unstable. Made with MATLAB script ODE_stability.m.
We consider one of the simplest examples: the numerical integration of the following ODE with initial condition f(0) = 1:
$$\frac{df}{dt} = -\lambda f \qquad (3.48)$$
which has the exact exponential solution $f(t) = e^{-\lambda t}$, monotonically decreasing. Using the forward Euler discretization scheme for the time derivative:
$$f^{n+1} = f^n - \lambda f^n \Delta t = (1 - \lambda\Delta t)\,f^n. \qquad (3.49)$$
The scheme is stable, i.e. the numerical solution does not grow, provided
$$\left|\frac{f^{n+1}}{f^n}\right| = |1 - \lambda\Delta t| \le 1 \qquad (3.50)-(3.51)$$
If instead we use the backward (implicit) Euler scheme,
$$f^{n+1} = f^n - \lambda f^{n+1}\Delta t \qquad (3.52)$$
from which
$$\frac{f^{n+1}}{f^n} = \frac{1}{1 + \lambda\Delta t} \qquad (3.53)$$
and the amplification factor satisfies
$$\left|\frac{f^{n+1}}{f^n}\right| \le 1 \qquad (3.54)-(3.55)$$
for any Δt > 0: the implicit scheme is unconditionally stable.
As a slightly richer example, consider the ODE
$$\frac{du}{dt} = -100\left(u - \cos t\right) - \sin t, \qquad u(0) = 1, \qquad (3.56)$$
whose exact solution is u(t) = cos t. The forward Euler discretization reads
$$u^{n+1} = u^n - 100\,\Delta t\left(u^n - \cos t_n + \frac{1}{100}\sin t_n\right) \qquad (3.57)$$
In Figure 3.5 the numerical solution is shown for different time steps Δt. It can be shown that the stability requirement is given by
$$|1 - 100\,\Delta t| \le 1$$
and, therefore, for $\Delta t > 2.00\cdot10^{-2}$, instability occurs.
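The blow-up above the stability limit is easy to reproduce. The Python sketch below mirrors the MATLAB script ODE_stability.m used for Figure 3.5, but is not part of the original notes; the two time steps straddle the limit Δt = 2·10⁻².

```python
import numpy as np

# Forward Euler, Eq. (3.57), applied to the stiff model problem (3.56).
def forward_euler(dt, t_end=3.0):
    n = int(t_end / dt)
    u, t = 1.0, 0.0
    for _ in range(n):
        u = u - 100 * dt * (u - np.cos(t) + np.sin(t) / 100)
        t += dt
    return u

for dt in (1.9e-2, 2.1e-2):
    print(f"dt={dt:.3f}:  u(3) = {forward_euler(dt): .3e}   (exact cos(3) = {np.cos(3.0): .3e})")
# For dt just above 2e-2 the numerical solution blows up, as in Figure 3.5.
```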
3.6 Von Neumann stability analysis

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes - Lecture 3.

Here we illustrate one of the possible methods to investigate the stability of a scheme for solving PDEs. The basic idea is to check that a small perturbation around the solution does not grow in time. Thus, in order to study the stability of the numerical method, we study the evolution of a small perturbation around the solution $u_j^n$:
$$u_j^n \;\to\; u_j^n + \delta u_j^n \qquad (3.58)$$
and in particular we focus on the evolution of the amplitude of the Fourier modes associated with the perturbation. In order to do that we can simply Fourier transform the perturbation:
$$\delta u_j^n = \delta u(x_j)^n = \sum_{k=-\infty}^{\infty} \widehat{\delta u}_k^{\,n}\, e^{i k x_j} \qquad (3.59)$$
For the sake of simplicity we will consider here the evolution of a monochromatic perturbation, i.e. the evolution of a wave with a single wavenumber:
$$\delta u_j^n = \widehat{\delta u}_k^{\,n}\, e^{i k x_j} \qquad (3.60)$$
and we use the same expression to evaluate the perturbation at the nearby points involved in the stencil:
$$\delta u_{j+1}^n = \widehat{\delta u}_k^{\,n}\, e^{i k x_{j+1}} = \widehat{\delta u}_k^{\,n}\, e^{i k x_j}\, e^{i k h} \qquad (3.61)$$
$$\delta u_{j-1}^n = \widehat{\delta u}_k^{\,n}\, e^{i k x_{j-1}} = \widehat{\delta u}_k^{\,n}\, e^{i k x_j}\, e^{-i k h} \qquad (3.62)$$
Inserting the monochromatic perturbation into the FTCS scheme (3.17) and dividing by $\widehat{\delta u}_k^{\,n}\, e^{ikx_j}$, the amplification factor $G = \widehat{\delta u}_k^{\,n+1}/\widehat{\delta u}_k^{\,n}$ is found to be
$$G = 1 - i\,\frac{v\,\Delta t}{h}\sin(kh) + 2\,\frac{D\,\Delta t}{h^2}\left(\cos(kh) - 1\right) = 1 - 4\,\frac{D\,\Delta t}{h^2}\sin^2\!\frac{kh}{2} - i\,\frac{v\,\Delta t}{h}\sin(kh) \qquad (3.66)-(3.67)$$
Stability requires $|G| \le 1$ for all wavenumbers k. (3.68)
Diffusion equation. This corresponds to the case v = 0. The amplification factor simplifies to
$$G = \frac{\widehat{\delta u}_k^{\,n+1}}{\widehat{\delta u}_k^{\,n}} = 1 - 4\,\frac{D\,\Delta t}{h^2}\sin^2\!\frac{kh}{2} \qquad (3.69)$$
and the requirement $|G| \le 1$ for all k gives
$$4\,\frac{D\,\Delta t}{h^2} \le 2, \qquad \text{that is} \qquad \frac{D\,\Delta t}{h^2} \le \frac{1}{2} \qquad (3.70)-(3.71)$$
Advection equation. This corresponds to the case D = 0. The amplification factor is:
$$G = \frac{\widehat{\delta u}_k^{\,n+1}}{\widehat{\delta u}_k^{\,n}} = 1 - i\,\frac{v\,\Delta t}{h}\sin(kh) \qquad (3.72)$$
and the absolute value of this complex number is always larger than unity, implying that the method is unconditionally unstable for the integration of the pure advection equation.

Advection-diffusion equation. For the general case we report only the result of the stability analysis, which requires some extra work:
$$\frac{D\,\Delta t}{h^2} \le \frac{1}{2} \qquad (3.73)$$
and, simultaneously,
$$\frac{v^2\,\Delta t}{D} \le 2 \qquad (3.74)$$
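The amplification factor (3.66)-(3.67) can be scanned numerically over all resolvable wavenumbers. The Python sketch below is not part of the original notes; the parameter values are illustrative.

```python
import numpy as np

# |G(k)| for the FTCS scheme, G = 1 - 4 D dt/h^2 sin^2(kh/2) - i v dt/h sin(kh).
def max_abs_G(v, D, dt, h, nk=400):
    kh = np.linspace(0, np.pi, nk)
    G = 1 - 4 * D * dt / h**2 * np.sin(kh / 2)**2 - 1j * v * dt / h * np.sin(kh)
    return np.abs(G).max()

h = 0.01
for v, D, dt in [(1.0, 0.005, 4e-3),   # satisfies (3.73) and (3.74): max|G| = 1
                 (1.0, 0.0,   4e-3)]:  # pure advection: max|G| > 1, unstable
    print(f"v={v}, D={D}, dt={dt}:  max|G| = {max_abs_G(v, D, dt, h):.4f}")
```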
In the case of two- or three-dimensional problems, a similar type of analysis can be conducted where the amplitude of the perturbation is now expressed in terms of two- or three-dimensional Fourier amplitudes:
$$\delta u_{j,l}^n = \widehat{\delta u}_{k_j, k_l}^{\,n}\, e^{i(k_j x_j + k_l x_l)} \qquad (3.75)$$
The diffusive stability condition for the FTCS scheme then becomes
$$\text{in 2D:}\quad \frac{D\,\Delta t}{h^2} \le \frac{1}{4}, \qquad \text{in 3D:}\quad \frac{D\,\Delta t}{h^2} \le \frac{1}{6} \qquad (3.76)-(3.77)$$
3.8 Convergence

A very important problem, besides the stability of the method, is the issue of the convergence of the finite-difference approximation to the solution of the continuum PDE.

Lax equivalence theorem: given a well-posed initial value problem and a finite difference approximation that satisfies the consistency condition, stability is the necessary and sufficient condition for convergence to the solution of the problem.
3.8.1 Dispersion relation

This section closely follows: Trefethen 1994, Finite Difference and Spectral Methods for ODEs and PDEs.

Any time-dependent scalar linear PDE with constant coefficients on an unbounded domain admits plane-wave solutions
$$u(x, t) = e^{i(kx + \omega t)} \qquad (3.78)$$
where k is the wave number and ω the frequency. The PDE imposes a relation between the possible values of k and ω:
$$\omega = \omega(k) \qquad (3.79)$$
This relation is known as the dispersion relation. (NB: in general, for a given k more than one frequency may be admissible; the number of possible values of ω depends upon the order of the PDE, thus we talk of a dispersion relation and not of a dispersion function.) Considering k real, ω may be real or complex, according to the PDE.
Here below is a short list of dispersion relations for common and basic PDEs:
$$u_t = u_x \qquad \Rightarrow \qquad \omega = k \qquad (3.80)$$
$$u_{tt} = u_{xx} \qquad \Rightarrow \qquad \omega = \pm k \qquad (3.81)$$
$$u_t = u_{xx} \qquad \Rightarrow \qquad i\omega = -k^2 \qquad (3.82)$$
$$u_t = i\,u_{xx} \qquad \Rightarrow \qquad \omega = -k^2 \qquad (3.83)$$
Figure 3.6: (Left panel) Dispersion relations for the four PDEs (3.80)-(3.83); (Right panel) dispersion relations for the finite difference approximations of the same PDEs, as from (3.84)-(3.87). The dotted black line corresponds to the dispersion relation of the original PDEs. Note that the symbols δ₀ and δ_xx identify the finite difference representations of the first and second derivative with respect to x, respectively. Made with MATLAB script dispersion_relation.m.
Figure 3.7: Dispersion relations for the finite difference approximations of the PDEs as the order of the discretization of the space derivatives is increased. Higher-order discretizations match the dispersion relation of the original PDEs more closely at small wavenumbers k. Made with MATLAB script dispersion_relation_2.m.
Replacing the space derivatives with centred finite differences, the corresponding semi-discrete dispersion relations are
$$\omega = \frac{1}{h}\sin(kh) \qquad (3.84)$$
$$\omega^2 = \frac{4}{h^2}\sin^2\!\frac{kh}{2} \qquad (3.85)$$
$$i\omega = -\frac{4}{h^2}\sin^2\!\frac{kh}{2} \qquad (3.86)$$
$$\omega = -\frac{4}{h^2}\sin^2\!\frac{kh}{2} \qquad (3.87)$$
All these formulas are readily obtained by substituting the plane wave (3.78) into the finite difference discretization with $x = x_j$. The behaviour of the dispersion relations for the finite difference case is plotted in Figure 3.6 (right panel) and, as can be seen, they deviate from the ones obtained for the PDEs. A general remark is that the dispersion relation of the discretized PDE approximates well that of the PDE for small k, while deviations are always visible for larger wavenumbers k.
Unless other requirements impose otherwise, one would clearly like to have a discretization that approximates the dispersion relation of the continuum PDE to the highest possible degree. Figure 3.7 shows that increasing the order of the discretization of the space derivatives yields dispersion relations that better match the dispersion relation of the PDEs.
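The agreement at small kh and the departure near kh = π can be tabulated directly. The short Python sketch below is not part of the original notes; h is an illustrative value.

```python
import numpy as np

# Compare the exact dispersion relation of u_t = u_x (omega = k, Eq. 3.80)
# with the semi-discrete centred-difference one (omega = sin(kh)/h, Eq. 3.84).
h = 0.1
k = np.linspace(0, np.pi / h, 6)          # resolvable wavenumbers up to k = pi/h
for ki in k:
    we = ki                                # exact
    wd = np.sin(ki * h) / h                # semi-discrete
    print(f"k={ki:6.2f}   omega_exact={we:6.2f}   omega_discrete={wd:6.2f}")
# The two agree for small kh and depart strongly as kh approaches pi.
```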
Analogous (fully discrete) dispersion relations can be given for complete schemes applied to the PDE $u_t = u_x$ with $\lambda = \Delta t/h = 0.5$: BTCS, Crank-Nicolson, Leap Frog and Lax-Wendroff (Eqs. (3.88)-(3.91)).

Consider now three linear operators, corresponding to pure advection (L₁), advection-diffusion (L₂) and advection-dispersion (L₃), and the associated PDEs
$$L_1 u = 0, \qquad L_2 u = 0, \qquad L_3 u = 0 \qquad (3.92)-(3.93)$$
together with a wavelike solution
$$u(x, t) = e^{i(kx - \omega t)} \qquad (3.94)$$
It can be seen that this wavelike solution is an eigenfunction of the three linear operators:
$$L_1 u = \left(-i\omega + ikU\right)u = \lambda_1 u \qquad (3.95)$$
and similarly for L₂ and L₃ (3.96)-(3.97). From these relations it follows that the eigenfunction is a solution of the PDE Lu = 0 if ω and k are such that they satisfy λ(ω, k) = 0.
The equation λ(ω, k) = 0 is called the dispersion relation. For the case of the three PDEs above this corresponds to:
$$\omega(k) = Uk \qquad (3.98)-(3.99)$$
$$\omega(k) = Uk - iDk^2 \qquad (3.100)$$
$$\omega(k) = Uk + \mu k^3 \qquad (3.101)$$
As said, ω(k) in general has the form
$$\omega(k) = \alpha(k) + i\,\beta(k) \qquad (3.102)$$
3.16
The real part of ω determines the phase velocity of each wave component,
c(k) = Re(ω(k))/k = α(k)/k   (3.103)
For the three PDEs L1 u = 0, L2 u = 0 and L3 u = 0 this gives, respectively,
L1 :   c(k) = U k / k = U   (3.108)
L2 :   c(k) = U k / k = U   (3.109)
L3 :   c(k) = (U k + μk³)/k = U + μk²   (3.110)
And thus in the case L3 the solution presents dispersion, since waves with different wavelengths travel with
different velocities.
For what concerns the amplitude, the situation is as follows:
L1 :   β(k) = 0   (3.113)
L2 :   β(k) = −Dk²   (3.114)
L3 :   β(k) = 0   (3.115)
and thus in the case L2 the amplitude presents dissipation: it decays as exp(−Dk² t).
Summarizing: given a linear homogeneous PDE operator, we call it dissipative if and only if Im(ω(k)) < 0 and dispersive
if and only if Re(ω(k)) is not linear in k.
3.9 The modified equation
This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
The numerical approximation of the derivatives introduces error terms in the equation. The numerical scheme can therefore be interpreted as an exact discretization not of the original equation, but of a different, modified equation. The structure of the terms by which the modified equation differs from the original one can teach us much about the properties of
the numerical approximation.
Consider the upwind scheme for the advection equation. The Taylor expansions of the grid values around (x_j, t_n) read
u_{j+1}^n = u_j^n + h u_x + (h²/2) u_xx + (h³/6) u_xxx + O(h⁴)   (3.116)
u_{j−1}^n = u_j^n − h u_x + (h²/2) u_xx − (h³/6) u_xxx + O(h⁴)   (3.117)
u_j^{n+1} = u_j^n + Δt u_t + (Δt²/2) u_tt + (Δt³/6) u_ttt + O(Δt⁴)   (3.118)
The upwind scheme is
( u_j^{n+1} − u_j^n )/Δt + (U/h)( u_j^n − u_{j−1}^n ) = 0   (3.120)
Using the Taylor expansions given by (3.116)-(3.118) and inserting them into (3.120), we obtain
u_t + U u_x = −(Δt/2) u_tt + (U h/2) u_xx − (Δt²/6) u_ttt − (U h²/6) u_xxx + O(h³) + O(Δt³)   (3.121)
and we use the equation itself to write all time derivatives in terms of space derivatives. Differentiating (3.121) with respect to t and, separately, multiplying its x-derivative by U gives, to leading order,
u_tt + U u_xt = −(Δt/2) u_ttt + (U h/2) u_xxt + . . .   (3.122)
U u_tx + U² u_xx = −(U Δt/2) u_ttx + (U² h/2) u_xxx + . . .   (3.123)
so that, subtracting,
u_tt = −U u_xt + O(Δt, h) = U² u_xx + O(Δt, h)   (3.126)
Substituting back into (3.121) and keeping the leading terms, the upwind scheme is found to solve exactly the modified equation
u_t + U u_x = (U h/2)(1 − σ) u_xx + O(h², hΔt, Δt²),   with σ = U Δt / h   (3.127)
i.e. an advection-diffusion equation with numerical diffusion coefficient (U h/2)(1 − σ), which is positive provided
U Δt / h < 1   (3.129)
The same procedure applied to the Lax-Wendroff scheme gives a modified equation whose leading error term is dispersive rather than dissipative,
u_t + U u_x = −(U h²/6)(1 − σ²) u_xxx + . . .   (3.130)
The dissipative and dispersive nature of the first-order and second-order methods, respectively, is visible in
Figure 3.8. Indeed, in the modified equation of the upwind method the leading error term is proportional to the second
derivative and is responsible for the dissipation of the scheme, while for the Lax-Wendroff scheme the leading term is
proportional to the third derivative and is responsible for the dispersive behaviour of the scheme.
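A minimal MATLAB sketch of this comparison is given below (it is not the shock_triangular_modified_equation.m script used for Figure 3.8; the step initial condition, domain and parameters are assumptions chosen to mimic that figure):

% Upwind vs. Lax-Wendroff advection of a step (sketch, assumed parameters)
U = 1; h = 0.1; dt = 0.25*h; s = U*dt/h;
x = (-2:h:6)'; uu = double(x < 0); ul = uu;     % step initial condition
nsteps = round(4/dt);
for n = 1:nsteps
    uu(2:end-1) = uu(2:end-1) - s*(uu(2:end-1) - uu(1:end-2));          % upwind
    ul(2:end-1) = ul(2:end-1) - 0.5*s*(ul(3:end) - ul(1:end-2)) ...     % Lax-Wendroff
                + 0.5*s^2*(ul(3:end) - 2*ul(2:end-1) + ul(1:end-2));
end
plot(x, uu, 'o-', x, ul, 's-', x, double(x < U*nsteps*dt), 'k--');
legend('upwind (dissipative)', 'Lax-Wendroff (dispersive)', 'exact');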
3.9.1 Artificial viscosity
The wiggling oscillations that appear near the discontinuity can be damped out by the addition of an artificial viscosity.
Here we consider the equation in conservative form
∂u/∂t + ∂F/∂x = 0   (3.131)
where F = U u. The artificial viscosity can be added by modifying the flux as
F' = F − ν ∂u/∂x   (3.132)
where ν = D h² |∂u/∂x|, thus
∂u/∂t + ∂F/∂x = ∂/∂x ( ν ∂u/∂x ) = ∂/∂x ( D h² |∂u/∂x| ∂u/∂x )   (3.133)
The artificial viscosity produces effects similar to physical viscosity on a scale of the order of the grid scale.
It acts mainly across discontinuities and is negligible elsewhere. The factor h² ensures that the viscous term
remains of higher order.
This is visible in Figure 3.9, where the upwind scheme is compared with the simple Lax-Wendroff
scheme (D = 0) and with the Lax-Wendroff scheme with the artificial diffusive term added (D = 0.5). Note
that the effect of this additional term is concentrated in proximity of the discontinuity.
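The following MATLAB fragment sketches how the modified flux (3.132)-(3.133) can be appended to a Lax-Wendroff update (it is not the artificial_viscosity.m script of Figure 3.9; parameters and boundary handling are assumptions):

% Lax-Wendroff with the artificial diffusive flux of Eqs. (3.132)-(3.133) (sketch)
U = 1; h = 0.0125; dt = 0.25*h; s = U*dt/h; D = 0.5;
x = (-1:h:3)'; u = double(x < 0); nsteps = round(2.75/dt);
for n = 1:nsteps
    unew = u;
    % standard Lax-Wendroff update for F = U*u
    unew(2:end-1) = u(2:end-1) - 0.5*s*(u(3:end) - u(1:end-2)) ...
                  + 0.5*s^2*(u(3:end) - 2*u(2:end-1) + u(1:end-2));
    % artificial viscosity: nu = D*h^2*|u_x|, added as a centred d/dx( nu*u_x )
    ux = (u(3:end) - u(1:end-2))/(2*h);       % centred u_x at interior nodes
    q  = D*h^2*abs(ux).*ux;                   % diffusive flux nu*u_x
    unew(3:end-2) = unew(3:end-2) + dt*(q(3:end) - q(1:end-2))/(2*h);
    u = unew;
end
plot(x, u, '-', x, double(x < U*nsteps*dt), 'k--');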
Figure 3.8: Comparison between the analytical solution and the dissipative (upwind) and dispersive (Lax-Wendroff) numerical schemes for h = 0.1, Δt = 0.25 h, n = 81 and at t = 4.00. Made with MATLAB script
shock_triangular_modified_equation.m
Figure 3.9: Comparison between the analytical solution, the upwind scheme and the Lax-Wendroff scheme for D = 0 (classical Lax-Wendroff scheme) and with the addition of artificial viscosity, D = 0.5, with h = 0.0125, Δt = 0.25 h,
n = 321 and at t = 2.75. Made with MATLAB script artificial_viscosity.m
Lecture 4
4.1 Elliptic equations: iterative (relaxation) methods
This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
We start by considering the case of the 2D Poisson equation
∂²u/∂x² + ∂²u/∂y² = S   (4.1)
on a two-dimensional mesh with spacing h. Applying central differences for the discretization:
( u_{i+1,j} − 2u_{i,j} + u_{i−1,j} )/h² + ( u_{i,j+1} − 2u_{i,j} + u_{i,j−1} )/h² = S_{i,j}   (4.2)
Solving for u_{i,j}:
u_{i,j} = (1/4)( u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − h² S_{i,j} )   (4.3)
This expression can be taken as a starting point for iterative (relaxation) solution methods (note that here
n + 1 denotes new values and n denotes old values):

Jacobi
u_{i,j}^{n+1} = (1/4)( u_{i+1,j}^n + u_{i−1,j}^n + u_{i,j+1}^n + u_{i,j−1}^n − h² S_{i,j} )   (4.4)

Gauss-Seidel
u_{i,j}^{n+1} = (1/4)( u_{i+1,j}^n + u_{i−1,j}^{n+1} + u_{i,j+1}^n + u_{i,j−1}^{n+1} − h² S_{i,j} )   (4.5)

Note that in the Gauss-Seidel method, new values are used as soon as they become available, see the terms
u_{i−1,j}^{n+1} and u_{i,j−1}^{n+1} in the RHS of Eq. (4.5).
Notice also that the iterative procedure is even simpler than Jacobi, since the values can be updated in place.

Successive Over-Relaxation (SOR)
u_{i,j}^{n+1} = (ω/4)( u_{i+1,j}^n + u_{i−1,j}^{n+1} + u_{i,j+1}^n + u_{i,j−1}^{n+1} − h² S_{i,j} ) + (1 − ω) u_{i,j}^n   (4.6)
where ω is the relaxation parameter.

As a test case we consider the Laplace equation
∂²u/∂x² + ∂²u/∂y² = 0   (4.8)
with the boundary conditions specified in (4.9).
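A minimal MATLAB sketch of the SOR iteration (4.6) is given below. It is not the SOR_Laplace2D.m script quoted in Figure 4.1; the Dirichlet data (u = 1 on the top side, u = 0 elsewhere) and the stopping rule are assumptions:

% SOR solution of the Laplace equation (4.8) on the unit square (sketch)
N = 50; h = 1/(N-1); omega = 1.5; tol = 1e-5;
u = zeros(N,N); u(:,N) = 1;                 % assumed boundary condition
S = zeros(N,N);                             % no source term for Laplace
res = inf; it = 0;
while res > tol
    res = 0;
    for i = 2:N-1
        for j = 2:N-1
            ugs = 0.25*(u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1) - h^2*S(i,j));
            res = res + abs(ugs - u(i,j));
            u(i,j) = u(i,j) + omega*(ugs - u(i,j));   % Eq. (4.6); omega = 1 is Gauss-Seidel
        end
    end
    res = res/(N-2)^2; it = it + 1;
end
fprintf('converged in %d iterations\n', it);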
Figure 4.1: Solution via the SOR iterative procedure of the Laplace equation with boundary conditions given by
(4.9), for ω = 1.5. Made with MATLAB script SOR_Laplace2D.m
Method         ω      Iterations to convergence
Gauss-Seidel   1.0    1584
SOR            1.2    1051
SOR            1.5    518
SOR            1.7    265
SOR            1.9    99
SOR            1.95   202

Table 4.1: Number of iterations needed to reach the required average residual, for the Gauss-Seidel and
SOR methods, in the solution of eq. (4.8) under the boundary conditions given by eq. (4.9). Note that M and N
are the numbers of grid nodes along x and y, respectively; in this case M = N = 50.
The Jacobi iteration can also be interpreted as the time integration, until steady state, of the unsteady diffusion equation associated with the Poisson problem,
∂u/∂t = ∂²u/∂x² + ∂²u/∂y² − S   (4.10)
Discretizing with forward Euler in time and centered differences in space:
( u_{i,j}^{n+1} − u_{i,j}^n )/Δt = ( u_{i+1,j}^n + u_{i−1,j}^n + u_{i,j+1}^n + u_{i,j−1}^n − 4u_{i,j}^n )/h² − S_{i,j}   (4.11)
that is
u_{i,j}^{n+1} = (1 − 4Δt/h²) u_{i,j}^n + (Δt/h²)( u_{i+1,j}^n + u_{i−1,j}^n + u_{i,j+1}^n + u_{i,j−1}^n − h² S_{i,j} )   (4.12)
For the largest stable time step, Δt/h² = 1/4, this update reduces to the Jacobi iteration
u_{i,j}^{n+1} = (1/4)( u_{i+1,j}^n + u_{i−1,j}^n + u_{i,j+1}^n + u_{i,j−1}^n − h² S_{i,j} )   (4.14)
4.2 Hyperbolic equations: numerical schemes
This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
In this section, numerical schemes for hyperbolic equations are presented, along with their accuracy and stability properties.
The application of some of them to the numerical solution of both linear (linear advection equation) and non-linear
(inviscid Burgers equation) first-order hyperbolic equations is illustrated.
In particular, the dissipative (for first-order accurate in space methods) or dispersive (for second-order accurate
in space methods) nature of the different schemes is demonstrated through the advection of a discontinuous function,
which is a solution of the linear advection equation.
Finally, a numerical scheme for second-order linear hyperbolic equations is presented and applied to the
classical wave equation.
A typical example of a hyperbolic equation is the second-order wave equation:
∂²u/∂t² − c² ∂²u/∂x² = 0   (4.15)
This equation can be rewritten as a system of two first-order PDEs. Thus, for the sake of simplicity we will
focus first on the first-order equation
∂u/∂t + U ∂u/∂x = 0   (4.16)
whose analytic solution is easily obtained by solving the equations for the characteristics:
dx/dt = U   (4.17)
du/dt = 0   (4.18)
4.2.1 FTCS scheme
The equation is discretized with a forward difference in time and a central difference in space:
u_j^{n+1} = u_j^n − (U Δt/2h)( u_{j+1}^n − u_{j−1}^n )   (4.19)
A von Neumann analysis gives the amplification factor
G = 1 − i (U Δt/h) sin(kh)   (4.20)
so that |G| ≥ 1 for every Δt, and thus the scheme is unconditionally unstable. Here and in the following we will indicate with σ = U Δt / h the
Courant number.
Figure 4.2: A graphical representation corresponding to the stencil used to approximate the temporal
derivative as forward Euler and the spatial derivative as centered difference: FTCS scheme.
4.2.2 The upwind scheme
The upwind scheme is similar to the previous scheme but uses a one-sided (directional) difference in space.
Time is discretized in the same forward difference manner:
u_j^{n+1} = u_j^n − (Δt/h) U ( u_j^n − u_{j−1}^n )   (4.21)
It must be noticed that the upwind or downwind nature of the scheme depends on the local sign of the
advecting velocity U, and thus one may refer to the following generalized upwind scheme:
u_j^{n+1} = u_j^n − (Δt/h) U ( u_j^n − u_{j−1}^n )   if U > 0   (4.22)
u_j^{n+1} = u_j^n − (Δt/h) U ( u_{j+1}^n − u_j^n )   if U < 0   (4.23)
By defining
U⁺ = (1/2)( U + |U| )   (4.24)
U⁻ = (1/2)( U − |U| )   (4.25)
the two cases can be combined into
u_j^{n+1} = u_j^n − (Δt/h) [ U⁺ ( u_j^n − u_{j−1}^n ) + U⁻ ( u_{j+1}^n − u_j^n ) ]   (4.26)
Substituting U⁺ and U⁻:
u_j^{n+1} = u_j^n − U (Δt/2h)( u_{j+1}^n − u_{j−1}^n ) + (|U| Δt/2h)( u_{j+1}^n − 2u_j^n + u_{j−1}^n )   (4.27)
which displays a clear structure: a central difference term plus a numerical viscosity term, with numerical viscosity
D_num = |U| h / 2   (4.28)
The equation for the growth of a small perturbation reads
( u_j^{n+1} − u_j^n )/Δt + (U/h)( u_j^n − u_{j−1}^n ) = 0   (4.29)
and testing the stability of a wave perturbation with wavenumber k, i.e. writing
u_j^n = û^n e^{i k x_j}   (4.30)
one obtains
û^{n+1} = [ 1 − σ (1 − e^{−ikh}) ] û^n   (4.31)
with σ = U Δt / h.
For the stability we are interested in knowing when the amplification factor G = û^{n+1}/û^n satisfies |G| ≤ 1 (see the section on the stability analysis via the
von Neumann method).
The stability condition can be assessed graphically by the construction in Figure 4.5, and it corresponds to
the condition σ ≤ 1, or equivalently
U Δt / h ≤ 1   (4.32)
This condition is called CFL condition and it was first introduced by Courant, Friedrichs and Lewy.
Another way to derive the stability condition is via direct algebra.
G = 1 − σ + σ e^{−ikh} = 1 − σ + σ cos(kh) − i σ sin(kh)   (4.33)
and thus
|G|² = (1 − σ + σ cos kh)² + σ² sin² kh   (4.34)
= (1 − σ)² + 2σ(1 − σ) cos kh + σ²   (4.36)
= 1 − 2σ + 2σ² + 2σ(1 − σ) cos kh   (4.37)
and
|G|² ≤ 1   if σ ≤ 1   (4.38)
Figure 4.3: A graphical representation corresponding to the stencil used in the upwind scheme.
Remarks: the upwind scheme is exceptionally robust, but its low accuracy in space and time makes it unsuitable for most
applications.
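A minimal MATLAB sketch of one generalized-upwind update, Eq. (4.26), is given below (function and variable names are assumptions; boundary nodes are simply left untouched):

% One generalized upwind update, Eq. (4.26) (sketch; save as upwind_step.m)
function u = upwind_step(u, U, dt, h)
    Up = 0.5*(U + abs(U));     % U+, multiplies the backward difference
    Um = 0.5*(U - abs(U));     % U-, multiplies the forward difference
    un = u;
    u(2:end-1) = un(2:end-1) - (dt/h)*( Up*(un(2:end-1) - un(1:end-2)) ...
                                      + Um*(un(3:end)   - un(2:end-1)) );
end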
4.2.3 The CFL condition
The CFL condition is a necessary condition for the stability of explicit schemes for hyperbolic equations, and it has a rather
simple physical interpretation: it amounts to the requirement that the numerical domain of dependence
contains the physical domain of dependence. In other words, the numerical speed of propagation must
be greater than or equal to the speed of propagation of the PDE. This condition can be interpreted as providing
a limitation on the time step.
A consequence of this is that: There are no explicit, unconditionally stable and consistent finite
difference schemes for hyperbolic systems.
Figure 4.4: The geometrical interpretation of the CFL condition for the upwind scheme. The characteristics
of the equation must travel slower than one grid spacing per time step.
Figure 4.5: A geometrical interpretation of the stability condition for the upwind scheme. The red circle
bounds values of amplification factor G for which the scheme is stable.
4.2.4 The BTCS (implicit) scheme
The BTCS scheme uses a backward (implicit) difference in time and a central difference in space:
( u_j^{n+1} − u_j^n )/Δt + U ( u_{j+1}^{n+1} − u_{j−1}^{n+1} )/(2h) = 0   (4.39)
that is
(σ/2) u_{j+1}^{n+1} + u_j^{n+1} − (σ/2) u_{j−1}^{n+1} = u_j^n   (4.40)
or equivalently
a_j u_{j+1}^{n+1} + b_j u_j^{n+1} + c_j u_{j−1}^{n+1} = d_j   (4.41)
which is a tridiagonal linear system to be solved at every time step.
Figure 4.6: A graphical representation corresponding to the stencil used to approximate the derivatives of
the backward Euler central space scheme.
4.2.5 The Lax-Friedrichs scheme
In the Lax-Friedrichs scheme the value u_j^n appearing in the time derivative of the FTCS scheme is replaced by the average of its neighbours,
(1/2)( u_{j+1}^n + u_{j−1}^n )   (4.42)
so that the scheme reads
u_j^{n+1} = (1/2)( u_{j+1}^n + u_{j−1}^n ) − (U Δt/2h)( u_{j+1}^n − u_{j−1}^n )   (4.43)
Its amplification factor is G = cos(kh) − iσ sin(kh), so the scheme is stable for σ ≤ 1.   (4.44)
Figure 4.7: A graphical representation corresponding to the stencil used in Lax-Friedrichs scheme.
4.2.6 The Leap-Frog scheme
One of the simplest second-order (in time) schemes is the following Leap-Frog method, based on the centered time difference
∂u/∂t |_j^n = ( u_j^{n+1} − u_j^{n−1} )/(2Δt) + O(Δt²)   (4.45)
which gives
u_j^{n+1} = u_j^{n−1} − (U Δt/h)( u_{j+1}^n − u_{j−1}^n )   (4.46)
and corresponds to the following modified equation:
u_t + U u_x = (U h²/6)(σ² − 1) u_xxx + . . .   (4.47)
Figure 4.8: A graphical representation corresponding to the stencil used in the Leap-Frog scheme.
4.2.7 The Lax-Wendroff scheme (I)
The Lax-Wendroff scheme is obtained from the Taylor expansion in time
u(x, t + Δt) = u + Δt ∂u/∂t + (Δt²/2) ∂²u/∂t² + (Δt³/6) ∂³u/∂t³ + . . .   (4.48)
and using
∂u/∂t = −U ∂u/∂x   (4.49)
one obtains
∂²u/∂t² = ∂/∂t( ∂u/∂t ) = −U ∂/∂x( ∂u/∂t ) = U² ∂²u/∂x²   (4.50)
so that
u(x, t + Δt) = u − U Δt ∂u/∂x + U² (Δt²/2) ∂²u/∂x² + O(Δt³)   (4.51)
Discretizing the space derivatives with central differences:
u_j^{n+1} = u_j^n − (U Δt/2h)( u_{j+1}^n − u_{j−1}^n ) + (U² Δt²/2h²)( u_{j+1}^n − 2u_j^n + u_{j−1}^n )   (4.52)
This scheme is second order accurate in space and in time and stable for
U Δt / h ≤ 1   (4.53)
Figure 4.9: A graphical representation corresponding to the stencil used to approximate the derivative in
the Lax-Wendroff (I) scheme.
4.2.8 The two-step Lax-Wendroff scheme (II)
This is the two-step Lax-Wendroff scheme, split into two steps.
Step 1 (Lax): provisional values at the half points and at the half time level,
( u_{j+1/2}^{n+1/2} − ( u_{j+1}^n + u_j^n )/2 )/(Δt/2) + U ( u_{j+1}^n − u_j^n )/h = 0   (4.54)
Step 2 (Leapfrog):
( u_j^{n+1} − u_j^n )/Δt + U ( u_{j+1/2}^{n+1/2} − u_{j−1/2}^{n+1/2} )/h = 0   (4.55)
It is stable for
U Δt / h ≤ 1   (4.56)
and second order accurate in time and space. For linear equations LW-II is identical to LW-I.
Figure 4.10: A graphical representation corresponding to the stencil used for the Lax-Wendroff (II) scheme.
4.2.9 The MacCormack scheme
This scheme is similar to LW-II but without the intermediate points j + 1/2 and j − 1/2.
The predictor step provides a provisional value ū based on a forward difference:
ū_j^{n+1} = u_j^n − U (Δt/h)( u_{j+1}^n − u_j^n )   (4.57)
In the corrector step the predicted value is corrected by using a backward difference (over a time step Δt/2):
u_j^{n+1} = u_j^{n+1/2} − U (Δt/2h)( ū_j^{n+1} − ū_{j−1}^{n+1} )   (4.58)
where
u_j^{n+1/2} = ( u_j^n + ū_j^{n+1} )/2   (4.59)
Combining the two steps:
u_j^{n+1} = (1/2)[ u_j^n + ū_j^{n+1} − U (Δt/h)( ū_j^{n+1} − ū_{j−1}^{n+1} ) ]   (4.60)
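A minimal MATLAB sketch of one MacCormack step, Eqs. (4.57)-(4.60), is shown below (function and variable names are assumptions; boundary nodes are left untouched):

% One MacCormack step, Eqs. (4.57)-(4.60) (sketch; save as maccormack_step.m)
function u = maccormack_step(u, U, dt, h)
    s  = U*dt/h;
    up = u;                                   % predictor: forward difference, Eq. (4.57)
    up(1:end-1) = u(1:end-1) - s*(u(2:end) - u(1:end-1));
    un = u;                                   % corrector, combined form of Eq. (4.60)
    un(2:end-1) = 0.5*( u(2:end-1) + up(2:end-1) - s*(up(2:end-1) - up(1:end-2)) );
    u = un;
end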
Figure 4.11: A graphical representation corresponding to the stencil used for the MacCormack scheme.
4.2.10 The second-order upwind (Beam-Warming) scheme
The scheme uses an upwind-biased predictor
ū_j = u_j^n − U (Δt/h)( u_j^n − u_{j−1}^n )   (4.61)
followed by the corrector
u_j^{n+1} = (1/2)[ u_j^n + ū_j − U (Δt/h)( ū_j − ū_{j−1} ) − U (Δt/h)( u_j^n − 2u_{j−1}^n + u_{j−2}^n ) ]   (4.62)
and combining the two steps
u_j^{n+1} = u_j^n − σ ( u_j^n − u_{j−1}^n ) + (σ/2)(σ − 1)( u_j^n − 2u_{j−1}^n + u_{j−2}^n )   (4.63)
where σ = U Δt / h.
The scheme is stable for 0 ≤ σ ≤ 2 and second order accurate in time and space.
Figure 4.12: A graphical representation corresponding to the stencil used for the second order upwind method
(Beam-Warming)
Figure 4.13: A graphical representation corresponding to the stencil used to approximate the temporal
derivative as forward Euler.
Summary of the schemes for the linear advection equation u_t + U u_x = 0 (σ = U Δt / h):

FTCS:   u_j^{n+1} = u_j^n − (σ/2)( u_{j+1}^n − u_{j−1}^n )
  Leading term of the modified equation: −(U² Δt/2) u_xx
  Stability: unconditionally unstable

Upwind:   u_j^{n+1} = u_j^n − σ ( u_j^n − u_{j−1}^n )
  Leading term: (U h/2)(1 − σ) u_xx
  Stability: stable for σ ≤ 1

BTCS (implicit):   ( u_j^{n+1} − u_j^n )/Δt + U ( u_{j+1}^{n+1} − u_{j−1}^{n+1} )/(2h) = 0
  Leading terms: (U² Δt/2) u_xx − ( U h²/6 + U³ Δt²/3 ) u_xxx
  Stability: unconditionally stable

Lax-Friedrichs:   u_j^{n+1} = ( u_{j+1}^n + u_{j−1}^n )/2 − (σ/2)( u_{j+1}^n − u_{j−1}^n )
  Leading terms: (U h/2)( 1/σ − σ ) u_xx + (U h²/3)( 1 − σ² ) u_xxx
  Stability: conditionally consistent, stable for σ ≤ 1

Leapfrog:   ( u_j^{n+1} − u_j^{n−1} )/(2Δt) + U ( u_{j+1}^n − u_{j−1}^n )/(2h) = 0
  Leading term: (U h²/6)( σ² − 1 ) u_xxx
  Stability: stable for σ ≤ 1

Lax-Wendroff (I):   ( u_j^{n+1} − u_j^n )/Δt + U ( u_{j+1}^n − u_{j−1}^n )/(2h) − U² Δt ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/(2h²) = 0
  Leading term: (U h²/6)( σ² − 1 ) u_xxx
  Stability: stable for σ ≤ 1

Lax-Wendroff (II):   two-step scheme (4.54)-(4.55)
  Leading term: as LW-I
  Stability: stable for σ ≤ 1

MacCormack:   predictor-corrector scheme (4.57)-(4.60)
  Leading term: as LW-I
  Stability: stable for σ ≤ 1

Beam-Warming:   u_j^{n+1} = u_j^n − σ ( u_j^n − u_{j−1}^n ) + (σ/2)(σ − 1)( u_j^n − 2u_{j−1}^n + u_{j−2}^n )
  Leading terms: (U h²/6)(1 − σ)(2 − σ) u_xxx − (U h³/8)(1 − σ)²(2 − σ) u_xxxx
  Stability: stable for 0 ≤ σ ≤ 2
4.2.11 Non-linear advection: shocks and the entropy condition
For the linear advection equation
∂u/∂t + U ∂u/∂x = 0   (4.64)
the characteristics are parallel straight lines,
dx/dt = U   (4.65)
du/dt = 0   (4.66)
For a non-linear equation such as the inviscid Burgers equation
∂u/∂t + u ∂u/∂x = 0   (4.67)
the slope of each characteristic depends on the local value of the solution, so that characteristics can intersect and a discontinuity (shock) can form. As initial condition consider the discontinuous (Riemann) data
u(x, 0) = u_L for x < x_0,   u_R for x > x_0   (4.68)
or its smoothed version
u(x, 0) = u_L for x < −a;   (1/2)[ (u_L + u_R) − (u_L − u_R) x/a ] for −a < x < a;   u_R for x > a   (4.71)
More generally, for a conservation law
∂u/∂t + ∂F(u)/∂x = 0   (4.73)
the characteristics are given by
dt/ds = 1,   dx/ds = ∂F/∂u   (4.74), (4.75)
or
dx/dt = ∂F/∂u = F′(u)   (4.76)
The entropy condition corresponds to the requirement that the characteristics enter the discontinuity, and
thus the shock speed C must lie between the characteristic speeds on the two sides:
F′(u_L) > C > F′(u_R)   (4.77)
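As a concrete illustration (a standard worked example, not taken from a specific figure of these notes): for the inviscid Burgers equation F(u) = u²/2, so F′(u) = u. The Rankine-Hugoniot jump condition gives the shock speed
C = ( F(u_L) − F(u_R) )/( u_L − u_R ) = ( u_L² − u_R² )/( 2(u_L − u_R) ) = ( u_L + u_R )/2
and the entropy condition (4.77) becomes u_L > (u_L + u_R)/2 > u_R, i.e. it simply requires u_L > u_R. For u_L < u_R the discontinuity is not admissible and a rarefaction wave forms instead.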
Figure 4.14: Initial condition for eq.(4.67) given by (4.71).
4.2.12 Numerical examples: advection of a discontinuity
We now solve numerically the linear advection equation
∂u/∂t + U ∂u/∂x = 0   (4.78)
with the discontinuous initial condition
u(x, 0) = 1 for x < 0,   0 for x > 0   (4.79)
In Figures 4.15 to 4.18, the numerical solutions of eq. (4.78) with initial condition (4.79) obtained with different
numerical schemes are plotted for h = 0.10, Δt = 0.05, n = 41 and at t = 2.75, while in Figures 4.19
to 4.22 the numerical solutions for the same equation and initial condition are plotted for h = 0.05, Δt = 0.025, n = 81
and at t = 2.75. Note that the second-order methods tend to capture the solution shape better
(higher accuracy) but show an oscillating behaviour in proximity of the shock, while first-order methods
(e.g. upwind) are dissipative and less accurate but the solution preserves monotonicity (it does not oscillate).
Figure 4.15: Shock-like numerical solution of the linear advection equation using the upwind scheme with h = 0.1,
Δt = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.16: Shock-like numerical solution of the linear advection equation using the Lax-Wendroff (I) scheme
with h = 0.1, Δt = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.17: Shock-like numerical solution of the linear advection equation using the Leapfrog scheme with h = 0.1,
Δt = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.18: Shock-like numerical solution of the linear advection equation using the MacCormack scheme with
h = 0.1, Δt = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.19: Shock-like numerical solution of the linear advection equation using the upwind scheme with h = 0.05,
Δt = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.20: Shock-like numerical solution of the linear advection equation using the Lax-Wendroff (I) scheme with
h = 0.05, Δt = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.21: Shock-like numerical solution of the linear advection equation using the Leapfrog scheme with h = 0.05,
Δt = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.
Figure 4.22: Shock-like numerical solution of the linear advection equation using the MacCormack scheme with
h = 0.05, Δt = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.
4.3 Parabolic equations: the diffusion equation
This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
As a model equation we consider the diffusion equation:
∂u/∂t = ∂²u/∂x²   (4.80)
for t > 0 and in the interval a < x < b. This equation requires an initial condition
u(x, 0) = u₀(x)   (4.81)
and boundary conditions, either of Dirichlet type
u(a, t) = φ_a(t)   (4.82)
u(b, t) = φ_b(t)   (4.83)
or of Neumann type
∂u/∂x (a, t) = ψ_a(t)   (4.84)
∂u/∂x (b, t) = ψ_b(t)   (4.85)
or a combination of both.
Notice that parabolic equations can be viewed as the limit of hyperbolic equations with two families of
characteristics (actually one) when the propagation speed goes to infinity.
4.3.1 FTCS scheme
The equation is discretized as
∂u/∂t |_j^n = ∂²u/∂x² |_j^n   (4.86)
where
∂u/∂t |_j^n ≈ ( u_j^{n+1} − u_j^n )/Δt   (4.87)
and
∂²u/∂x² |_j^n ≈ ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h²   (4.88)
thus
( u_j^{n+1} − u_j^n )/Δt = ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h²   (4.89)
or equivalently:
u_j^{n+1} = u_j^n + (Δt/h²)( u_{j+1}^n − 2u_j^n + u_{j−1}^n )   (4.90)
The scheme corresponds to the following (modified) equation:
∂u/∂t − ∂²u/∂x² = (h²/12)(1 − 6r) ∂⁴u/∂x⁴ + O(Δt², h²Δt, h⁴)   (4.91)
with
r = Δt/h²   (4.92)
Figure 4.23: A graphical representation corresponding to the stencil used to approximate the temporal
derivative as forward Euler and the spatial derivatives as centered in space.
The FTCS scheme for the diffusion equation:
has accuracy O(Δt, h²);
has no odd-derivative terms in the modified equation, thus it is dissipative (and not dispersive);
stability (von Neumann): substituting u_j^n = û^n e^{ikx_j} gives
û^{n+1} = [ 1 − 4 (Δt/h²) sin²(kh/2) ] û^n   (4.93)
and requiring
−1 ≤ 1 − 4 (Δt/h²) sin²(kh/2) ≤ 1 for every k   (4.94)
gives the stability condition
Δt/h² ≤ 1/2   (4.95)
Domain of dependence
As can be seen from the discretization, at each time step the updated value depends at most on the
first neighbours at the previous time step. This means that a node in the centre may be influenced solely by
the initial data for a long time, without feeling the boundary conditions. This may result in an unphysical
behaviour of the solution.
It can be easily verified that for Δt → 0 and h → 0 the computational domain of dependence contains the physical
domain of dependence. We have seen that von Neumann stability requires the even stronger condition Δt ≤ h²/2, i.e. Δt ∝ h².
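The following MATLAB sketch integrates the diffusion equation with the FTCS update (4.90) (initial and boundary data are assumptions, not taken from these notes):

% FTCS solution of u_t = u_xx on a < x < b (sketch, assumed data)
a = 0; b = 1; N = 51; h = (b-a)/(N-1); x = linspace(a,b,N)';
r = 0.4; dt = r*h^2;                  % r <= 1/2, Eq. (4.95)
u = sin(pi*x);                        % assumed initial condition u0(x), zero at both ends
for n = 1:round(0.1/dt)
    u(2:end-1) = u(2:end-1) + r*(u(3:end) - 2*u(2:end-1) + u(1:end-2));
    % u(1) and u(end) are left at their (Dirichlet) boundary values
end
plot(x, u);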
4.3.2 BTCS (implicit) scheme
The BTCS scheme evaluates the space derivative at the new time level:
( u_j^{n+1} − u_j^n )/Δt = ( u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1} )/h²   (4.96)
that is
−r u_{j+1}^{n+1} + (2r + 1) u_j^{n+1} − r u_{j−1}^{n+1} = u_j^n   (4.97)
where r = Δt/h².
This is a tri-diagonal linear system that must be solved, by a matrix method, at every time step.
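A minimal MATLAB sketch of one BTCS step, Eq. (4.97), written as a tridiagonal solve, is shown below (function names and the fixed Dirichlet end values uL, uR are assumptions; in practice a sparse matrix or the Thomas algorithm would be used):

% One BTCS step, Eq. (4.97) (sketch; save as btcs_step.m)
function u = btcs_step(u, r, uL, uR)
    u = u(:); N = numel(u);
    main = (1 + 2*r)*ones(N-2,1);            % diagonal for the interior unknowns
    off  = -r*ones(N-3,1);                   % sub- and super-diagonals
    A = diag(main) + diag(off,1) + diag(off,-1);
    rhs = u(2:end-1);
    rhs(1)   = rhs(1)   + r*uL;              % known boundary values moved to the RHS
    rhs(end) = rhs(end) + r*uR;
    u(2:end-1) = A\rhs;                      % tridiagonal system solved directly
    u(1) = uL; u(end) = uR;
end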
This method corresponds to the following modified equation:
∂u/∂t − ∂²u/∂x² = (h²/12)(1 + 6r) ∂⁴u/∂x⁴ + O(Δt², h²Δt, h⁴)   (4.98)
with
r = Δt/h²   (4.99)
and the amplification factor from a von Neumann type analysis is
G = [ 1 + 2r (1 − cos kh) ]⁻¹   (4.100)
so that |G| ≤ 1 for every r and the scheme is unconditionally stable.
Figure 4.24: A graphical representation corresponding to the stencil used to approximate the temporal
derivative as backward Euler.
4.3.3 The Crank-Nicolson scheme
The Crank-Nicolson scheme averages the space derivative between the old and new time levels:
( u_j^{n+1} − u_j^n )/Δt = (1/2) [ ( u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1} )/h² + ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h² ]   (4.101)
that is
−r u_{j+1}^{n+1} + 2(r + 1) u_j^{n+1} − r u_{j−1}^{n+1} = r u_{j+1}^n − 2(r − 1) u_j^n + r u_{j−1}^n   (4.102)
with r = Δt/h². The amplification factor is
G = [ 1 − r(1 − cos kh) ] / [ 1 + r(1 − cos kh) ]   (4.104)
so that |G| ≤ 1 for every r and the scheme is unconditionally stable.
4.3.4 The θ method
The θ method can interpolate, by varying the parameter θ, between the explicit and the implicit schemes:
( u_j^{n+1} − u_j^n )/Δt = θ ( u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1} )/h² + (1 − θ)( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h²   (4.105)
where 0 ≤ θ ≤ 1 and:
Figure 4.25: A graphical representation corresponding to the stencil used in Crank-Nicolson method.
Figure 4.26: A graphical representation corresponding to the stencil used in the θ method.
θ = 0 gives the explicit FTCS scheme;
θ = 1 gives the implicit BTCS scheme;
θ = 1/2 gives the Crank-Nicolson scheme.
A minimal sketch of the corresponding update is given below.
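The following MATLAB sketch performs one θ-method step (4.105) using sparse matrices (function and variable names are assumptions; the boundary values are held fixed at their current values):

% One theta-method step, Eq. (4.105) (sketch; save as theta_step.m)
function u = theta_step(u, r, theta)
    u = u(:); N = numel(u);
    e = ones(N-2,1);
    L = spdiags([e -2*e e], -1:1, N-2, N-2);   % second-difference operator on interior nodes
    I = speye(N-2);
    rhs = (I + (1-theta)*r*L)*u(2:end-1);      % explicit part, level n
    bc = zeros(N-2,1); bc(1) = u(1); bc(end) = u(end);   % fixed boundary values
    rhs = rhs + (1-theta)*r*bc;                % boundary contribution at level n
    u(2:end-1) = (I - theta*r*L) \ (rhs + theta*r*bc);   % implicit solve, level n+1
end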
4.3.5 The Richardson scheme
The Richardson scheme combines a centered (leap-frog) difference in time with the centered second difference in space:
( u_j^{n+1} − u_j^{n−1} )/(2Δt) = ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h²   (4.106)
Despite being second order accurate, the scheme is unconditionally unstable (see the summary table at the end of this section).
Figure 4.27: A graphical representation corresponding to the stencil used in the Richardson scheme.
4.3.6 The DuFort-Frankel scheme
In the DuFort-Frankel scheme the term 2u_j^n in the second difference of the Richardson scheme is replaced by u_j^{n+1} + u_j^{n−1}:
( u_j^{n+1} − u_j^{n−1} )/(2Δt) = ( u_{j+1}^n − u_j^{n+1} − u_j^{n−1} + u_{j−1}^n )/h²   (4.108)
which can still be solved explicitly for u_j^{n+1}:
u_j^{n+1} (1 + 2r) = u_j^{n−1} + 2r ( u_{j+1}^n − u_j^{n−1} + u_{j−1}^n )   (4.109)
The scheme is unconditionally stable but only conditionally consistent: it is consistent with the diffusion equation only if Δt/h → 0.
Figure 4.28: A graphical representation corresponding to the stencil used in the DuFort-Frankel method.
Summary of the schemes for the diffusion equation u_t = u_xx (r = Δt/h²):

FTCS:   ( u_j^{n+1} − u_j^n )/Δt = ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h²
  Leading term of the modified equation: (h²/12)(1 − 6r) u_xxxx
  Accuracy O(Δt, h²); stable for r ≤ 1/2

BTCS:   ( u_j^{n+1} − u_j^n )/Δt = ( u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1} )/h²
  Leading term: (h²/12)(1 + 6r) u_xxxx
  Accuracy O(Δt, h²); unconditionally stable

Crank-Nicolson:   ( u_j^{n+1} − u_j^n )/Δt = ( u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1} + u_{j+1}^n − 2u_j^n + u_{j−1}^n )/(2h²)
  Leading term: (h²/12) u_xxxx (plus higher-order terms in u_xxxxxx)
  Accuracy O(Δt², h²); unconditionally stable

Richardson:   ( u_j^{n+1} − u_j^{n−1} )/(2Δt) = ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/h²
  Unconditionally unstable

DuFort-Frankel:   ( u_j^{n+1} − u_j^{n−1} )/(2Δt) = ( u_{j+1}^n − u_j^{n+1} − u_j^{n−1} + u_{j−1}^n )/h²
  Leading term: (h²/12)(1 − 12r²) u_xxxx
  Unconditionally stable, but only conditionally consistent

3-level implicit:   ( 3u_j^{n+1} − 4u_j^n + u_j^{n−1} )/(2Δt) = ( u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1} )/h²
  Leading term: (h²/12) u_xxxx
  Unconditionally stable
Lecture 5
Lecture 6