Script PDE

Contents

Preliminary Concepts
Introduction
4.1 Definition, Notation and Classification
4.2 Finite difference method
4.3 von Neumann stability analysis
Advection Equation
5.1 FTCS Method
5.2 Upwind Methods
Burgers Equation
6.1 Hopf-Cole Transformation
6.2 General Solution of the 1D Burgers Equation
6.3 Forced Burgers Equation
6.4 Numerical Treatment
Sine-Gordon Equation
8.1 Kink and antikink solitons
8.2 Numerical treatment
Part I
In this part, we discuss the standard numerical techniques used to integrate systems
of ordinary differential equations (ODEs).
Chapter 1
Preliminary Concepts
Chapter 2
Consider the initial value problem (IVP) for a system of ODEs,

ẋ = f(t, x),   (2.1)

where x(t) = (x_1(t), x_2(t), ..., x_n(t))^T and f: [a, b] × R^n → R^n, together with a specified initial value

x(a) = x_0.   (2.2)

We discretize the interval [a, b] with the grid points

t_j = a + j h,   j = 0, 1, ..., M,   h = (b − a)/M.   (2.3)

The first Euler step yields x_1 = x_0 + h f(t_0, x_0); the second step reads

x_2 = x_1 + h f(t_1, x_1).
The process is repeated and generates a sequence of points x_0, x_1, x_2, ..., x_M that approximates the solution x(t). The general step of Euler's method is [?, 45]

x_{j+1} = x_j + h f(t_j, x_j),   t_{j+1} = t_j + h,   j = 0, 1, ..., M − 1.   (2.4)
Notice that the Euler method (2.4) is an explicit method, i.e., x_{j+1} is given explicitly in terms of known quantities such as x_j and f(t_j, x_j).
From a geometrical point of view, one starts at the point (t_0, x_0) of the (t, x)-plane, moves along the tangent line to the solution x(t), and ends up at the point (t_1, x_1). This point is then used to compute the next slope f(t_1, x_1) and to locate the next approximation point (t_2, x_2), etc.
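The update rule (2.4) can be sketched in a few lines of Python (a minimal illustration; the function name and interface are our own choice):

```python
def euler(f, x0, a, b, M):
    """Explicit Euler method (2.4): x_{j+1} = x_j + h f(t_j, x_j)."""
    h = (b - a) / M
    t, x = a, x0
    ts, xs = [t], [x]
    for _ in range(M):
        x = x + h * f(t, x)   # move along the tangent line at (t, x)
        t = t + h
        ts.append(t)
        xs.append(x)
    return ts, xs
```

For a vector-valued problem, x0 and the return value of f(t, x) would simply be arrays; the loop is unchanged.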
Example 1
Let us use Euler's method (2.4) to solve approximately a simple IVP [?]

ẋ = x,   x(0) = 1.   (2.5)

The exact solution is x(t) = exp(t), so we can calculate the correct value at the end of the time interval, i.e.,

x(1) = e = 2.71828...

Let us find the numerical approximation of (2.5) for different step sizes h = {0.1, 1e-2, 1e-3, 1e-4, 1e-5} and calculate the difference between the numerical value x_end obtained at the end of the time interval and the exact value e. The results are shown in Table 2.1. The presented results demonstrate that the error at the end of the interval is proportional to the step size h.
Table 2.1 Numerical results obtained by Euler's method for Eq. (2.5)

h      x_end    |x_end − e|
0.1    2.5937   0.12
1e-2   2.7048   1.35e-2
1e-3   2.7169   1.4e-3
1e-4   2.7181   1.35e-4
1e-5   2.7183   1.35e-5
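The entries of Table 2.1 can be reproduced directly (a sketch; `euler_end` is a throwaway helper of our own):

```python
import math

def euler_end(h):
    """Integrate x' = x, x(0) = 1 with Euler steps up to t = 1."""
    x, n = 1.0, round(1.0 / h)
    for _ in range(n):
        x += h * x            # x_{j+1} = x_j + h x_j = (1 + h) x_j
    return x

for h in [0.1, 1e-2, 1e-3]:
    x_end = euler_end(h)
    print(f"h = {h:g}: x_end = {x_end:.4f}, error = {abs(x_end - math.e):.2e}")
```

The ratio error/h stays roughly constant, confirming the O(h) behavior visible in the table.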
Example 2
Now let us solve a nonlinear IVP [?]

ẋ = t − x²,   x(0) = x_0,   over t ∈ [0, T],   (2.6)

for different values of x_0 and T using the method (2.4). Figure 2.1 shows numerical solutions of Eq. (2.6) over the time interval [0, 9] for four different initial values x_0 = {−0.7, 0, 1, 3}. One can see that the solutions corresponding to different x_0 converge to the same curve. But if we compute the solution again for a longer time interval, say t ∈ [0, 900] for, e.g., x_0 = 0, the numerical solution starts to oscillate from some moment in time on (see Fig. 2.2 (a)), and the character of the oscillations becomes chaotic. This effect indicates the instability of Euler's method, at least at the chosen value of the time step. However, the effect disappears if we repeat the calculation with a smaller h (see Fig. 2.2 (b) for details).
The presented examples raise a number of questions. One of these is the question of convergence: as the step size h tends to zero, do the values of the numerical solution approach the corresponding values of the actual solution? Assuming that the answer is affirmative, there remains the important practical question of how rapidly the numerical approximation converges to the solution. In other words, how small a step size is needed to guarantee a given level of accuracy? We discuss these questions below.
The local truncation error of the method is defined as

τ_{j+1} = [x(t_{j+1}) − x(t_j)]/h − f(t_j, x_j),   j = 0, 1, ..., M − 1.   (2.7)

A method is called consistent if

τ_{j+1} → 0   for h_max → 0.   (2.8)

Expanding the exact solution in a Taylor series about t_j gives

τ_{j+1} = (h/2!) x″(t_j) + ... .   (2.9)

Thus, every time we take a step using Euler's method (2.4), we incur a local error h τ_{j+1} of O(h²), i.e., the error per step of the Euler method is proportional to the square of the step size h, and the proportionality factor depends on the second derivative of the solution.
The local truncation error (2.7) is different from the global discretization (truncation) error e_j, which is defined as the difference between the true solution and the computed solution, i.e.,

e_j = x(t_j) − x_j,   j = 0, ..., M,   (2.10)

where x(t_j) denotes the exact solution at step j and x_j stands for its numerical approximation. The concept of the global discretization error is connected with the notion of convergence of the method, namely, the numerical scheme is convergent,
Fig. 2.2 Numerical implementation of Euler's method for Eq. (2.6) over a long time interval, T = 900, and initial value x_0 = 0. (a) With h = 0.05, illustrating numerical instability of the scheme (2.4). (b) The instability disappears for a smaller step size h = 0.025.
if

max_{t_j} ‖e_j‖ → 0   for h_max → 0.   (2.11)

Moreover, one can also say that the scheme possesses the order of convergence p if

max_{t_j} ‖e_j‖ ≤ K h_max^p,   h_max → 0,   K = const.   (2.12)

In particular, the final global error at the end point t = b is

E(x(b), h) = ‖x(b) − x_M‖.   (2.13)
In most cases we do not know the exact solution, and hence the final global error (2.13) cannot be evaluated directly. However, if we neglect round-off errors, it is reasonable to assume that the global error after M time steps is roughly M times the error committed in a single step; since M is proportional to 1/h, one order of h is lost relative to the local error. For example, for Euler's method (2.4) the accumulated error would be

E(x(b), h) ≈ Σ_{j=1}^{M} (h²/2) |x″| ≈ M (h²/2) |x″| = ((b − a)/2) |x″| h = O(h),

so that

E(x(b), h) ≈ K h,   K = const,

and

E(x(b), h/2) ≈ K h/2 = (1/2) E(x(b), h).

Hence, if the step size in Euler's method (2.4) is reduced by a factor of 1/2, we can expect that the final global truncation error (2.13) will be reduced by the same factor (see also Example 2 of Section 2.1).
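This halving argument is easy to check numerically; the sketch below estimates the observed convergence order from the errors at h and h/2 (helper name and test problem are our own):

```python
import math

def euler_solve(f, x0, a, b, M):
    """Euler's method (2.4), returning only the end value x_M."""
    h = (b - a) / M
    t, x = a, x0
    for _ in range(M):
        x += h * f(t, x)
        t += h
    return x

# global error E(x(b), h) for x' = x, x(0) = 1, b = 1
E_h  = abs(euler_solve(lambda t, x: x, 1.0, 0.0, 1.0, 100) - math.e)
E_h2 = abs(euler_solve(lambda t, x: x, 1.0, 0.0, 1.0, 200) - math.e)
ratio = E_h / E_h2            # close to 2 for a first-order method
p = math.log2(ratio)          # observed convergence order
print(f"E(h)/E(h/2) = {ratio:.3f}, observed order p = {p:.3f}")
```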
Integrating the ODE over one time step gives

x(t_{j+1}) = x(t_j) + ∫_{t_j}^{t_{j+1}} f(t, x(t)) dt.   (2.14)

Now a numerical integration method can be used to approximate the definite integral in Eq. (2.14). From the geometrical point of view, the right-hand side of (2.14) corresponds to the area S under the curve f(t, x(t)) between t_j and t_{j+1}. For example, Euler's method (2.4) consists of approximating the right-hand side of (2.14) by the area of the rectangle S_r with height f(t_j, x(t_j)) and width h, i.e., one obtains Eq. (2.4), namely

x_{j+1} = x_j + S_r = x_j + h f(t_j, x(t_j)).

Clearly, a better approximation to the area S can be obtained if we use the trapezium with area

S_t = (h/2) [ f(t_j, x(t_j)) + f(t_{j+1}, x(t_{j+1})) ],

yielding

x_{j+1} = x_j + (h/2) [ f(t_j, x(t_j)) + f(t_{j+1}, x(t_{j+1})) ].   (2.15)
Notice that the r.h.s. of Eq. (2.15) contains the yet unknown value x_{j+1}. In order to overcome this difficulty we use Euler's approximation (2.4) to replace f(t_{j+1}, x(t_{j+1})) with f(t_{j+1}, x_j + h f(t_j, x(t_j))). After it is substituted into Eq. (2.15), the resulting expression is called Heun's, trapezoid, or improved Euler's method:

x_{j+1} = x_j + (h/2) [ f(t_j, x_j) + f(t_j + h, x_j + h f(t_j, x_j)) ].   (2.16)

The accumulated global error of the method can be estimated as

E(x(b), h) ≈ Σ_{j=1}^{M} (h³/12) |x‴| ≈ ((b − a)/12) |x‴| h² = O(h²).   (2.17)
Again, if we perform two computations using the step sizes h and h/2, we obtain

E(x(b), h) ≈ K h²,   K = const,

and

E(x(b), h/2) ≈ K h²/4 = (1/4) E(x(b), h).

Thus, if the step size in Heun's method is reduced by a factor of 1/2, we can expect that the final global truncation error will be reduced by a factor of 1/4.
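A minimal sketch of Heun's method (2.16), together with the same error-halving experiment (helper name and test problem are our own):

```python
import math

def heun_solve(f, x0, a, b, M):
    """Heun's (improved Euler) method (2.16)."""
    h = (b - a) / M
    t, x = a, x0
    for _ in range(M):
        k = f(t, x)                              # slope at the left end
        x += 0.5 * h * (k + f(t + h, x + h * k)) # average with predicted right slope
        t += h
    return x

E_h  = abs(heun_solve(lambda t, x: x, 1.0, 0.0, 1.0, 100) - math.e)
E_h2 = abs(heun_solve(lambda t, x: x, 1.0, 0.0, 1.0, 200) - math.e)
print(f"error ratio = {E_h / E_h2:.2f}")   # close to 4, i.e. O(h^2)
```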
Example 1
Use Euler's method (2.4) and Heun's method (2.16) to solve the IVP for the ODE describing the behavior of the simple harmonic oscillator,

ẍ + ω² x = 0,   x(0) = 0,   ẋ(0) = v_0,   (2.18)

over the time interval t ∈ [0, T], where the frequency ω and the initial velocity v_0 are given constants. The exact solution is

x(t) = (v_0/ω) sin(ω t).   (2.19)

Rewriting (2.18) as a system of two first-order equations gives

ẋ = y,   ẏ = −ω² x,   (2.20)

with initial conditions x(0) = 0, y(0) = v_0.
We begin with an analysis of system (2.20) using the ideas of phase space [46]. If we multiply both sides of the first equation of (2.20) by ω² x and both sides of the second equation of the system by y and add the two together, we get the following relation:

y ẏ + ω² x ẋ = 0.

Notice that the l.h.s. of the relation above is a total time derivative, so one can rewrite the last relation as

d/dt [ (1/2) y² + (1/2) ω² x² ] = 0   ⇒   (1/2) y² + (1/2) ω² x² := I_1,   (2.21)
Fig. 2.3 (a) Phase diagram for the linear oscillator (2.18), corresponding to the different values of the energy I_1 = {0.1, 0.5, 1, 3} and ω = 0.5. (b) Numerical solution of (2.18) over the interval [0, 20π] by methods (2.4) (blue points) and (2.16) (red line) with the step size h = 0.05. The black curve corresponds to the exact solution of Eq. (2.18).
where I_1 = const is usually called a constant of motion or a first integral, which can be interpreted as the mechanical energy of the system. From the geometrical point of view, one can speak about phase curves that form a set of ellipses in the phase space with coordinates (x, y). These ellipses cut the x-axis at x = ±√(2 I_1)/ω and the y-axis at y = ±√(2 I_1).
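The first integral I_1 is a convenient diagnostic for the two schemes: along the exact solution it is constant, while a numerical method lets it drift. A sketch (ω = 0.5, v_0 = 1 are sample values chosen here; the step functions are our own helpers):

```python
def step_euler(x, y, h, w2):
    """One Euler step (2.4) for the system (2.20); w2 = omega^2."""
    return x + h * y, y - h * w2 * x

def step_heun(x, y, h, w2):
    """One Heun step (2.16) for the system (2.20)."""
    xp, yp = x + h * y, y - h * w2 * x           # Euler predictor
    return (x + 0.5 * h * (y + yp),
            y + 0.5 * h * (-w2 * x - w2 * xp))

def energy(x, y, w2):
    return 0.5 * y**2 + 0.5 * w2 * x**2          # first integral I_1, Eq. (2.21)

w2, h = 0.25, 0.05                               # omega = 0.5
x1, y1 = 0.0, 1.0                                # Euler state, v0 = 1
x2, y2 = 0.0, 1.0                                # Heun state
E0 = energy(0.0, 1.0, w2)
for _ in range(2000):                            # integrate up to t = 100
    x1, y1 = step_euler(x1, y1, h, w2)
    x2, y2 = step_heun(x2, y2, h, w2)
print(abs(energy(x1, y1, w2) - E0), abs(energy(x2, y2, w2) - E0))
```

The Euler energy grows systematically (the numerical orbit spirals outwards), while Heun keeps I_1 nearly constant over the same interval.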
A general explicit m-stage Runge-Kutta method reads

x(t_{n+1}) := x_{n+1} = x_n + h Σ_{i=1}^{m} c_i k_i,   (2.22)

where

k_1 = f(t_n, x_n),
k_2 = f(t_n + α_2 h, x_n + h β_21 k_1),
k_3 = f(t_n + α_3 h, x_n + h (β_31 k_1 + β_32 k_2)),
...
k_m = f(t_n + α_m h, x_n + h Σ_{j=1}^{m−1} β_mj k_j).
Examples
1. Let m = 1. Then

k_1 = f(t_n, x_n),   x_{n+1} = x_n + h c_1 f(t_n, x_n).

On the other hand, the Taylor expansion yields

x_{n+1} = x_n + h ẋ(t_n) + O(h²) = x_n + h f(t_n, x_n) + O(h²)   ⇒   c_1 = 1.

Thus, the one-stage RK method is equivalent to the explicit Euler's method (2.4). Note that the method (2.4) is of the first order of accuracy. Thus we can speak about the RK method of the first order.
2. Now consider the case m = 2. In this case Eq. (2.22) is equivalent to the system

k_1 = f(t_n, x_n),
k_2 = f(t_n + α_2 h, x_n + h β_21 k_1),   (2.23)
x_{n+1} = x_n + h (c_1 k_1 + c_2 k_2).

Now let us write down the Taylor series expansion of x in the neighborhood of t_n up to the h² term, i.e.,

x_{n+1} = x_n + h (dx/dt)|_{t_n} + (h²/2) (d²x/dt²)|_{t_n} + O(h³),

where

d²x/dt² = d f(t, x)/dt = ∂f/∂t + f ∂f/∂x.

Hence the Taylor series expansion can be rewritten as

x_{n+1} − x_n = h f(t_n, x_n) + (h²/2) [ ∂f/∂t + f ∂f/∂x ]_{(t_n, x_n)} + O(h³).   (2.24)
On the other hand, the term k_2 in the proposed RK method can also be expanded as

k_2 = f(t_n + α_2 h, x_n + h β_21 k_1) = f(t_n, x_n) + α_2 h (∂f/∂t)|_{(t_n, x_n)} + h β_21 f (∂f/∂x)|_{(t_n, x_n)} + O(h²).

Now, substituting this relation for k_2 into the last equation of (2.23), we obtain the following expression:

x_{n+1} − x_n = h (c_1 + c_2) f(t_n, x_n) + h² c_2 α_2 (∂f/∂t)|_{(t_n, x_n)} + h² c_2 β_21 f (∂f/∂x)|_{(t_n, x_n)} + O(h³).

Comparing the last equation with Eq. (2.24), we can write down the system of algebraic equations for the unknown coefficients:

c_1 + c_2 = 1,
c_2 α_2 = 1/2,
c_2 β_21 = 1/2.

The system involves four unknowns in three equations. That is, one additional condition must be supplied to solve the system. We discuss two useful choices, namely α_2 = 1, which with c_1 = c_2 = 1/2 reproduces Heun's method (2.16), and α_2 = 1/2, which with c_1 = 0, c_2 = 1 yields the midpoint method.
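Both members of this family fit in one short routine; below, α_2 is the free parameter, and the remaining coefficients follow from the order conditions (a sketch with our own naming):

```python
import math

def rk2_step(f, t, x, h, alpha=1.0):
    """One step of the two-stage RK family (2.23); alpha plays the role of alpha_2.
    alpha = 1 gives Heun's method, alpha = 1/2 the midpoint method."""
    c2 = 1.0 / (2.0 * alpha)                   # from c_2 * alpha_2 = 1/2
    c1 = 1.0 - c2                              # from c_1 + c_2 = 1
    k1 = f(t, x)
    k2 = f(t + alpha * h, x + h * alpha * k1)  # beta_21 = alpha_2
    return x + h * (c1 * k1 + c2 * k2)

# second-order accuracy on x' = x, x(0) = 1, regardless of alpha:
for alpha in (0.5, 1.0):
    x, t, h = 1.0, 0.0, 0.01
    for _ in range(100):
        x = rk2_step(lambda t, x: x, t, x, h, alpha)
        t += h
    print(alpha, abs(x - math.e))              # error of order h^2
```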
RK4 Methods
One member of the family of Runge-Kutta methods (2.22) is often referred to as the RK4 method or classical RK method and represents one of the solutions corresponding to the case m = 4. In this case, by matching coefficients with those of the Taylor series one obtains the following system of equations [30]:

c_1 + c_2 + c_3 + c_4 = 1,
β_21 = α_2,
β_31 + β_32 = α_3,
β_41 + β_42 + β_43 = α_4,
c_2 α_2 + c_3 α_3 + c_4 α_4 = 1/2,
c_2 α_2² + c_3 α_3² + c_4 α_4² = 1/3,
c_2 α_2³ + c_3 α_3³ + c_4 α_4³ = 1/4,
c_3 α_2 β_32 + c_4 (α_2 β_42 + α_3 β_43) = 1/6,
c_3 α_2 α_3 β_32 + c_4 α_4 (α_2 β_42 + α_3 β_43) = 1/8,
c_3 α_2² β_32 + c_4 (α_2² β_42 + α_3² β_43) = 1/12,
c_4 α_2 β_32 β_43 = 1/24.

The system involves thirteen unknowns in eleven equations. That is, two additional conditions must be supplied to solve the system. The most common choice is [37]

α_2 = 1/2,   β_31 = 0.
The corresponding Butcher tableau is presented in Table 2.3.

Table 2.3 The Butcher tableau corresponding to the RK4 method.

0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
     | 1/6  1/3  1/3  1/6

The tableau 2.3 yields

x_{n+1} = x_n + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4),   (2.25)

where

k_1 = f(t_n, x_n),
k_2 = f(t_n + h/2, x_n + (h/2) k_1),
k_3 = f(t_n + h/2, x_n + (h/2) k_2),
k_4 = f(t_n + h, x_n + h k_3).
This method is reasonably simple and robust and is a good general candidate for the numerical solution of ODEs when combined with an intelligent adaptive step-size routine or an embedded method (e.g., the so-called Runge-Kutta-Fehlberg method (RKF45)).
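For reference, the classical RK4 step (2.25) in Python (a minimal sketch; the helper name is our own):

```python
import math

def rk4_step(f, t, x, h):
    """One classical RK4 step, Eq. (2.25)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# fourth-order accuracy on x' = x, x(0) = 1:
x, t, h = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    x = rk4_step(lambda t, x: x, t, x, h)
    t += h
print(abs(x - math.e))    # error ~ 2e-6 even with the coarse step h = 0.1
```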
Remark:
Notice that besides the classical method (2.25), one can also construct other RK4 methods. We mention only the so-called 3/8-Runge-Kutta method. The Butcher tableau corresponding to this method is presented in Table 2.4.

Table 2.4 The Butcher tableau corresponding to the 3/8-Runge-Kutta method.

0    |
1/3  | 1/3
2/3  | -1/3  1
1    | 1    -1    1
     | 1/8   3/8   3/8   1/8
The RK4 scheme can also be motivated by quadrature. Integration of the ODE over one step gives

x(t_{n+1}) = x(t_n) + ∫_{t_n}^{t_{n+1}} f(t, x) dt.   (2.26)

Now, if Simpson's rule is applied, the approximation to the integral of the last equation reads [?]

∫_{t_n}^{t_{n+1}} f(t, x) dt ≈ (h/6) [ f(t_n, x(t_n)) + 4 f(t_n + h/2, x(t_n + h/2)) + f(t_{n+1}, x(t_{n+1})) ].   (2.27)
On the other hand, the values k_1, k_2, k_3 and k_4 are approximations for slopes of the curve x: k_1 is the slope at the left end of the interval, k_2 and k_3 describe two estimates of the slope in the middle of the time interval, whereas k_4 corresponds to the slope at the right end. Hence, we can choose f(t_n, x(t_n)) = k_1 and f(t_{n+1}, x(t_{n+1})) = k_4, whereas for the value in the middle we choose the average of k_2 and k_3, i.e.,

f(t_n + h/2, x(t_n + h/2)) ≈ (k_2 + k_3)/2.

Then Eq. (2.26) becomes

x_{n+1} = x_n + (h/6) [ k_1 + 4 (k_2 + k_3)/2 + k_4 ],

which is equivalent to the RK4 scheme (2.25).
The local error of the underlying Simpson approximation is

ε_{n+1} = (h⁵/2880) x⁽⁴⁾.

Now we can estimate the final global error (2.13), if we suppose that only the error above is present. After M steps the accumulated error for the RK4 method reads

E(x(b), h) ≈ Σ_{k=1}^{M} (h⁵/2880) |x⁽⁴⁾| ≈ ((b − a)/2880) |x⁽⁴⁾| h⁴ = O(h⁴).

That is, the RK4 method (2.25) is of the fourth order. Now let us compare two approximations obtained using the time steps h and h/2. For the step size h we have

E(x(b), h) ≈ K h⁴,

with K = const. Hence, for the step h/2 we get

E(x(b), h/2) ≈ K h⁴/16 = (1/16) E(x(b), h).

That is, if the step size in (2.25) is reduced by a factor of two, the global error of the method will be reduced by a factor of 1/16.
Remark:
In general, there are two ways to improve the accuracy:
1. One can reduce the time step h, i.e., the number of steps increases;
2. A method of higher convergence order can be used.
However, increasing the convergence order p is reasonable only up to some limit, given by the so-called Butcher barrier [45], which says that the number of stages m grows faster than the order p. In other words, for m ≥ 5 there are no explicit RK methods with convergence order p = m (the corresponding system is unsolvable). Hence, in order to reach convergence order five one needs six stages. Notice that increasing the number of stages further does not raise the order correspondingly (e.g., m = 7 stages yield at most order p = 6).
To estimate the local error in practice, one can use step doubling. Let x_1 denote the result of one step of size h and x_2 the result of two consecutive steps of size h/2, both starting from the same point (t_n, x_n). For a method of order p we have

x(t_{n+1}) = x_1 + C h^{p+1} + O(h^{p+2}),
x(t_{n+1}) = x_2 + 2 C (h/2)^{p+1} + O(h^{p+2}).

Subtracting the two estimates gives

|x_1 − x_2| = C h^{p+1} (1 − 2^{−p})   ⇒   C = |x_1 − x_2| / ((1 − 2^{−p}) h^{p+1}).

Substituting the relation for C into the second estimate for the true solution, we get

x(t_{n+1}) = x_2 + Δ + O(h^{p+2}),

where

Δ = |x_1 − x_2| / (2^p − 1)

can be considered as a convenient indicator of the truncation error. That is, we have improved our estimate to the order p + 1. For example, for p = 4 we get

x(t_{n+1}) = x_2 + |x_1 − x_2|/15 + O(h⁶).

This estimate is accurate to fifth order, one order higher than with the original step h. However, this method is not efficient. First of all, it requires a significant amount of computation (we should solve the equation three times at each time step). The second point is that we have no possibility to control the truncation error of the method (higher order does not always mean higher accuracy).
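The improvement Δ = |x_1 − x_2|/(2^p − 1) can be checked with one RK4 step (p = 4; the helper and the test problem x' = x are our own choices):

```python
import math

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, x: x                     # test problem, exact value exp(h) after one step
t0, x0, h, p = 0.0, 1.0, 0.5, 4
x1 = rk4_step(f, t0, x0, h)            # one step of size h
xm = rk4_step(f, t0, x0, h / 2)        # two steps of size h/2
x2 = rk4_step(f, t0 + h / 2, xm, h / 2)
delta = abs(x1 - x2) / (2**p - 1)      # error indicator
exact = math.exp(h)
print(abs(x1 - exact), abs(x2 - exact), delta)
```

Here delta tracks the true error of x_2, and x_2 + delta is closer to the exact value than x_2 itself.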
However, we can use the estimate Δ for step size control, namely, we can compare Δ with some desired accuracy ε_0 (see Fig. 2.4).

Fig. 2.4 Flow diagram of the step size control by use of the step doubling method: given t_j, x_j and the current step h_j, one computes the solution with one step h_j and with two steps h_j/2 and evaluates Δ. If Δ > ε_0, the step size is halved, h_{j+1} := h_j/2, and the step is repeated; otherwise the step is accepted, t_{j+1} = t_j + h_j, j := j + 1.
Alternatively, using the estimate Δ, we can formulate the following problem of adaptive step size control: using the given values x_j and t_j, find the largest possible step size h_new such that the truncation error after the step with this step size remains below some given desired accuracy ε_0, i.e.,

C h_new^{p+1} = (h_new/h)^{p+1} |x_1 − x_2| / (1 − 2^{−p}) ≤ ε_0.

That is, up to a constant absorbed in the safety factor below,

h_new = h (ε_0/Δ)^{1/(p+1)}.

Then, if the two answers are in close agreement, the approximation is accepted. If Δ > ε_0, the step size has to be decreased, whereas the relation Δ < ε_0 means that the step size can be increased in the next step.
Notice that because our estimate of the error is not exact, we should include some safety factor γ ≲ 1 [45, 37]; usually γ = 0.8, 0.9. The flow diagram corresponding to the adaptive step size control is shown in Fig. 2.5.
Notice one additional technical point. The choice of the desired error ε_0 depends on the IVP we are interested in. In some applications it is convenient to set ε_0 proportional to h [37]. In this case the exponent 1/(p+1) in the estimate of the new time step is no longer correct when the step must be reduced (if h is reduced from a too-large value, the new predicted value h_new will fail to meet the desired accuracy, so instead of 1/(p+1) we should scale with 1/p; see [37] for details). That is, the optimal new step size can be written as

h_new = γ h (ε_0/Δ)^{1/(p+1)},   Δ ≤ ε_0,
h_new = γ h (ε_0/Δ)^{1/p},     Δ > ε_0.   (2.28)
Fig. 2.5 Flow diagram of the adaptive step size control by use of the step doubling method: given t_j, x_j, the desired accuracy ε_0 and the current step h, one computes x(t_j + h) with one step h and with two steps h/2 and evaluates Δ. If Δ < ε_0, the step is accepted, h_new := γ h (ε_0/Δ)^{1/(p+1)}, t_{j+1} = t_j + h_new, j := j + 1; otherwise h_new := γ h (ε_0/Δ)^{1/p} and the step is reiterated.
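The whole control loop of Fig. 2.5 fits in a short routine (a sketch in the notation of (2.28), with γ = 0.9 and our own helper names):

```python
def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive(f, x0, a, b, h, eps0, p=4, gamma=0.9):
    """Step-doubling control: accept a step if Delta < eps0, cf. Eq. (2.28)."""
    t, x = a, x0
    while t < b - 1e-12:
        h = min(h, b - t)
        x1 = rk4_step(f, t, x, h)                 # one step h
        xm = rk4_step(f, t, x, h / 2)             # two steps h/2
        x2 = rk4_step(f, t + h / 2, xm, h / 2)
        delta = abs(x1 - x2) / (2**p - 1)
        if delta < eps0:                          # accept, then try a larger step
            t, x = t + h, x2
            h = gamma * h * (eps0 / max(delta, 1e-16))**(1 / (p + 1))
        else:                                     # reject, retry with a smaller step
            h = gamma * h * (eps0 / delta)**(1 / p)
    return x

x_end = adaptive(lambda t, x: x, 1.0, 0.0, 1.0, 0.5, 1e-8)
print(x_end)
```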
Runge-Kutta-Fehlberg method
An alternative step size adjustment algorithm is based on the embedded Runge-Kutta formulas, originally invented by Fehlberg, and is called the Runge-Kutta-Fehlberg method (RKF45) [45, ?]. At each step, two different approximations for the solution are made and compared. Usually a fourth-order method with five stages is used together with a fifth-order method with six stages that uses all of the points of the first one. The general form of a Runge-Kutta method with six stages is

k_1 = f(t, x),
k_2 = f(t + α_2 h, x + h β_21 k_1),
...
k_6 = f(t + α_6 h, x + h Σ_{j=1}^{5} β_6j k_j).

The approximation of fourth order reads

x_{n+1} = x_n + h Σ_{i=1}^{6} c_i k_i + O(h⁵),

and a better value for the solution is determined using a Runge-Kutta method of fifth order:

x̃_{n+1} = x_n + h Σ_{i=1}^{6} c̃_i k_i + O(h⁶).

The two particular choices of the unknown parameters of the method are given in Tables 2.5-2.6.
The error estimate is

Δ = |x̃_{n+1} − x_{n+1}| = h | Σ_{i=1}^{6} (c̃_i − c_i) k_i |.

As was mentioned above, if we take the current step h and produce an error Δ, the corresponding optimal step h_opt is estimated as

h_opt = γ h (tol/Δ)^{0.2},

where tol is a desired accuracy and γ is a safety factor, γ ≲ 1. Then, if the two answers are in close agreement, the approximation is accepted. If Δ > tol the step size has to be decreased, whereas the relation Δ < tol means that the step size can be increased in the next step. Using Eq. (2.28), the optimal step can then be written as

h_opt = γ h (tol/Δ)^{0.2},   Δ ≤ tol,
h_opt = γ h (tol/Δ)^{0.25},   Δ > tol.
2.4.2 Examples
2.4.2.1 Lotka-Volterra competition model
The Lotka-Volterra competition equations are a simple model of the population dynamics of species competing for some common resource. For two given populations with sizes x and y the model equations are [?]

ẋ = a x (b − x − c y),
ẏ = d y (e − y − f x).   (2.29)

Here the positive constant c represents the effect species two has on the population of species one, and the positive constant f describes the effect species one has on the population of species two. Let us analyse the system (2.29) using the parameters

a = 0.004,  b = 50,  c = 0.75,  d = 0.001,  e = 100,  f = 3.

Fixed points
Equations (2.29) have four fixed points (x*, y*):

(0, 0),   (b, 0) = (50, 0),   (0, e) = (0, 100),
( (b − c e)/(1 − c f), (e − b f)/(1 − c f) ) = (20, 40).
Linear stability
In order to analyse the linear stability of (2.29) one derives the corresponding Jacobian

J = [ a (b − 2x − c y)    −a c x
      −d f y              d (e − 2y − f x) ].

Now one can calculate J for all fixed point values (x*, y*) and derive the eigenvalues (λ_1, λ_2) (see Table 2.7).
Table 2.7 Eigenvalues and linear stability analysis for the four fixed points of the system (2.29)

(x*, y*)    (λ_1, λ_2)         stability
(0, 0)      (0.2, 0.1)         −
(0, 100)    (−0.1, −0.1)       +
(50, 0)     (−0.2, −0.05)      +
(20, 40)    (0.027, −0.14)     −
Numerical results
Table 2.7 indicates that the trivial fixed point, corresponding to the case that both populations die out, is unstable. Furthermore, the fixed point (20, 40), corresponding to the case that both populations coexist, is unstable too. That is, neither will both populations die out nor will both survive: one population always outcompetes the other. Which population survives (and which dies out) depends on the initial conditions (see Fig. 2.6).
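This sensitivity can be checked numerically; the sketch below integrates (2.29) with RK4 from a point near the unstable coexistence state (helper names, the initial condition and the integration time are our own ad hoc choices):

```python
def rk4_step(f, t, u, h):
    """One RK4 step (2.25) for a system given as a list of components."""
    k1 = f(t, u)
    k2 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k1)])
    k3 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k2)])
    k4 = f(t + h, [x + h * k for x, k in zip(u, k3)])
    return [x + h / 6 * (A + 2 * B + 2 * C + D)
            for x, A, B, C, D in zip(u, k1, k2, k3, k4)]

a, b, c, d, e, f_ = 0.004, 50, 0.75, 0.001, 100, 3

def rhs(t, u):
    x, y = u
    return [a * x * (b - x - c * y), d * y * (e - y - f_ * x)]

u, t, h = [21.0, 40.0], 0.0, 0.1        # start near the unstable point (20, 40)
for _ in range(50000):                  # integrate up to t = 5000
    u = rk4_step(rhs, t, u, h)
    t += h
print(u)                                # one population wins, the other dies out
```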
2.4.2.2 Lotka-Volterra predator-prey model
The classical Lotka-Volterra predator-prey equations for the prey population x and the predator population y read

ẋ = a x − b x y,
ẏ = c x y − d y.   (2.30)

Here a > 0 and c > 0 are the prey's and the predator's growth rates, whereas b > 0 and d > 0 describe the prey's and the predator's death rates, respectively.
Typical numerical coefficients are a = 2, b = 0.02, c = 0.0002, d = 0.8.
The model (2.30) predicts a cyclical relationship between predator and prey numbers. To see this effect, first we find the two fixed points (x*, y*) of the system:

(0, 0),   (d/c, a/b) = (4·10³, 10²).
The Jacobian of (2.30) is

J = [ a − b y    −b x
      c y        c x − d ].

Now one can calculate the eigenvalues for both fixed points (see Table 2.8).

Table 2.8 Eigenvalues and linear stability analysis for the two fixed points of the system (2.30)

(x*, y*)      (λ_1, λ_2)              stability
(0, 0)        (a, −d)                 unstable (saddle)
(d/c, a/b)    (i√(a d), −i√(a d))     neutrally stable

Furthermore, one can also construct a first integral of the system (2.30).
Fig. 2.6 Solutions of (2.29) on the (x, y) phase plane: depending on the initial conditions, the trajectories approach either (50, 0) or (0, 100).
Fig. 2.7 Numerical solution of the system (2.30), calculated with the RK4 method. (a) Solutions on the phase plane, corresponding to two different initial conditions: (5·10³, 120) and (3·10³, 10²). (b) A cyclical relationship between predator and prey numbers, calculated for the initial condition (5·10³, 120).
Namely, consider the function V(x, y) = c x − d ln x + b y − a ln y. Its time derivative along solutions of (2.30) vanishes:

dV/dt = (c − d/x) ẋ + (b − a/y) ẏ = (c − d/x) x (a − b y) + (b − a/y) y (c x − d) = 0.

That is, solutions of (2.30) cannot leave the level curves of V. This is illustrated in Fig. 2.7 (a), where two numerical solutions, corresponding to the two different initial conditions (5·10³, 120) and (3·10³, 10²), are presented. The green point denotes the neutrally stable fixed point. Oscillations of both populations, corresponding to the initial value (5·10³, 120), are presented in Fig. 2.7 (b).
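The conservation of V provides a convenient accuracy check for the integrator (a sketch; step size and integration time are our own ad hoc choices):

```python
import math

def rk4_step(f, t, u, h):
    """One RK4 step (2.25) for a system given as a list of components."""
    k1 = f(t, u)
    k2 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k1)])
    k3 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k2)])
    k4 = f(t + h, [x + h * k for x, k in zip(u, k3)])
    return [x + h / 6 * (A + 2 * B + 2 * C + D)
            for x, A, B, C, D in zip(u, k1, k2, k3, k4)]

a, b, c, d = 2.0, 0.02, 0.0002, 0.8

def rhs(t, u):
    x, y = u
    return [a * x - b * x * y, c * x * y - d * y]

def V(u):
    x, y = u
    return c * x - d * math.log(x) + b * y - a * math.log(y)

u, t, h = [5000.0, 120.0], 0.0, 0.001
V0 = V(u)
for _ in range(10000):                  # integrate up to t = 10, a few cycles
    u = rk4_step(rhs, t, u, h)
    t += h
print(abs(V(u) - V0))                   # the first integral is nearly conserved
```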
Now suppose that the prey's growth rate is periodic in time, e.g.,

a := a (1 + η sin(ω t)),

where η ∈ [0, 1) is the modulation amplitude and ω a fixed modulation frequency. In this case, depending on the control parameter η, quasiperiodic or even chaotic behaviour can be expected. Figure 2.8 illustrates an example of quasiperiodic behaviour.
As a further example, consider a driven torsion pendulum with an eccentric mass,

J θ̈ + K θ̇ + D θ − N(θ) = F sin(Ω t + φ_0).   (2.31)

Here θ denotes the rotation angle, J is the moment of inertia of the pendulum about the axis of rotation, K is a damping constant, D stands for the torque per unit angle, and, finally, N = m g r sin(θ) is the projection of the moment exerted by the eccentric mass (m is an external mass, r is the radius of the wheel). In addition, F sin(Ω t + φ_0) is an external forcing with amplitude F, frequency Ω and free phase φ_0.
Fig. 2.8 Numerical solution for (2.30) calculated with the RK4 method for the case η = 0.4. Initial condition is (5·10³, 120). (a) Solutions on the phase plane; (b) quasiperiodic oscillations of the prey's population; (c) the first return map.
In order to solve Eq. (2.31) numerically we rewrite it as a system of first-order ODEs. The substitution x := θ, y := θ̇, z := t leads to the system

ẋ = y,
ẏ = −a y − b x + c sin(x) + d sin(Ω z),   (2.32)
ż = 1,

with

a := K/J,   b := D/J,   c := m g r/J,   d := F/J.

We solve the system (2.32) with the classical RK4 method (2.25) with the time step h = 0.025 over the time interval t ∈ [0, 150]. The other parameters are

a = 0.799,   b = 9.44,   c = 14.68,   d = 2.1.
Furthermore, we use the frequency Ω of the external forcing as a control parameter. We start at Ω = 2.5. The result is shown in Fig. 2.9 (a)-(c). Figure 2.9 (a) shows the solution of (2.32) in the phase space (θ, θ̇). One can see that the solution corresponds to forced oscillations with period one (see Fig. 2.9 (b) as well). Period-one oscillations can also be recognised from the first return map (Fig. 2.9 (c)). Now we change the control parameter to Ω = 2.32. The results can be seen in Fig. 2.10 (a)-(c). In this case the system oscillates between two values, so one can speak about period-two oscillations (or about a period-doubling bifurcation). A further change of Ω leads to a second period-doubling bifurcation, and period-four oscillations set in (see Fig. 2.11 (a)-(c)). Finally, we change Ω further, and chaotic oscillations can be observed (see Fig. 2.12 (a)-(c)). The first return map, shown in Fig. 2.12 (c), indicates the structure of the chaotic motion: the nth maximal value of θ is predicted by the (n−1)th one.
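The computations above can be reproduced with the following sketch (RK4 on the system (2.32); only boundedness of the forced oscillations is checked here, the bifurcation sequence has to be read off from the plots):

```python
import math

def rk4_step(f, t, u, h):
    """One RK4 step (2.25) for a system given as a list of components."""
    k1 = f(t, u)
    k2 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k1)])
    k3 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k2)])
    k4 = f(t + h, [x + h * k for x, k in zip(u, k3)])
    return [x + h / 6 * (A + 2 * B + 2 * C + D)
            for x, A, B, C, D in zip(u, k1, k2, k3, k4)]

a, b, c, d = 0.799, 9.44, 14.68, 2.1
omega = 2.5                              # control parameter Omega

def rhs(t, u):
    x, y, z = u
    return [y, -a * y - b * x + c * math.sin(x) + d * math.sin(omega * z), 1.0]

u, t, h = [1.0, 0.0, 0.0], 0.0, 0.025    # initial angle chosen ad hoc
xs = []
for _ in range(6000):                    # integrate up to t = 150
    u = rk4_step(rhs, t, u, h)
    t += h
    xs.append(u[0])
print(min(xs), max(xs))                  # forced oscillations stay bounded
```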
Fig. 2.9 Solution of Eq. (2.31) corresponding to Ω = 2.5. (a) Solution on the phase plane. (b) Period-one oscillations on the (t, θ) plot. (c) The first return map.
The next example is the famous Lorenz system,

ẋ = σ (y − x),
ẏ = r x − y − x z,   (2.33)
ż = x y − b z.

Here σ > 0 is the Prandtl number, r > 0 stands for the normalized Rayleigh number, whereas b > 0 is a geometric factor. The function x(t) is proportional to the intensity of the convection motion, y(t) is proportional to the temperature difference between ascending and descending currents, and z(t) is proportional to the distortion of the vertical temperature profile from the linear one.
This system was investigated by Edward Lorenz in 1963. Its purpose was to provide a simplified model of atmospheric convection [?, 46].
Fig. 2.10 Solution of Eq. (2.31), corresponding to the period-doubling bifurcation for Ω = 2.32. (a) Solution on the phase plane. (b) Period-two oscillations on the (t, θ) plane. (c) The first return map.
Fig. 2.11 Solution of Eq. (2.31), corresponding to the second period-doubling bifurcation for Ω = 2.3. (a) Solution on the phase plane. (b) Period-four oscillations on the (t, θ) plane. (c) The first return map.
Fig. 2.12 Solution of Eq. (2.31), corresponding to the chaotic oscillations for Ω = 2.25. (a) Solution on the phase plane. (b) Chaotic oscillations on the (t, θ) plane. (c) The first return map, indicating the chaotic regime.
Symmetry
The system (2.33) admits the symmetry

(x, y, z) → (−x, −y, z).

Fixed Points
The fixed points (x*, y*, z*) are:
(a) x* = y* = z* = 0, which corresponds to the state of no convection;
(b) C± = (±√(b(r − 1)), ±√(b(r − 1)), r − 1), which correspond to the state of steady convection. Note that both solutions exist only for r > 1.
Linear Stability
The Jacobian of the system (2.33) reads

J(x*, y*, z*) = [ −σ        σ     0
                  r − z*   −1    −x*
                  y*        x*   −b ].

(a) The trivial solution (x*, y*, z*) = (0, 0, 0): in this case the linearized equation for z(t),

ż = −b z,

decouples, and the stability in the (x, y) directions is governed by the 2×2 matrix

J_0 = [ −σ   σ
         r  −1 ].

The stability of (2.33) can be determined using the trace and the determinant of J_0:

Sp(J_0) = −σ − 1 < 0,   det(J_0) = σ (1 − r),

so the origin is stable for r < 1 and unstable for r > 1.
(b) The nontrivial solutions C±: the characteristic polynomial of the Jacobian reads

λ³ + (σ + b + 1) λ² + (r + σ) b λ + 2 σ b (r − 1) = 0.

The eigenvalues consist of one real negative root and a pair of complex conjugate roots [?]. The marginally stable complex roots can be found using the ansatz λ = iω. Substitution into the characteristic polynomial leads to the expression for the critical Rayleigh number r_H,

r_H = σ (σ + b + 3)/(σ − b − 1),   σ > b + 1,

while the third root is

λ_3 = −(σ + b + 1) < 0.

That is, the nontrivial solutions C+ and C− are stable for

1 < r < r_H,   σ > b + 1.

The nontrivial solutions lose stability at r_H via a Hopf bifurcation. One can show that this bifurcation is subcritical [?]. That is, the limit cycles are unstable and exist only for r < r_H. For the standard parameter values

b = 8/3,   σ = 10,

one obtains

r_H = 470/19 ≈ 24.74.
Fig. 2.13 Solutions of the Lorenz equations (2.33), corresponding to different values of r. (a) r = 0.5: the origin is stable; (b) r = 3: the origin is unstable, and all trajectories converge to one of the stable nontrivial fixed points C+ or C−; (c) r = 16: the basins of attraction around C+ and C− are no longer distinct.
Fig. 2.14 (a) Solution of the Lorenz equations (2.33) on the (t, z) plane, computed at r = 26. (b) Solution of (2.33) at r = 26 in the three-dimensional phase space. (c) The Lorenz map.
Summarizing, for σ = 10 and b = 8/3 the behavior of (2.33) is as follows:
0 < r < 1: The origin is a stable node (see Fig. 2.13 (a)).
1 < r < 24.74: The origin becomes unstable and bifurcates into a pair of solutions C+ and C−. All trajectories converge to either one or the other point (see Fig. 2.13 (b)). At r ≈ 13.926 a homoclinic bifurcation occurs at the origin, i.e., beyond this point trajectories can cross forward and backward between C+ and C− before settling down to one of them (see Fig. 2.13 (c)).
r = 24.74: Both C+ and C− become unstable via a subcritical Hopf bifurcation.
r > 24.74: After an initial transient the solution settles into an irregular, aperiodic oscillation (see Fig. 2.14 (a)). In the phase space, the time spent wandering near the sets around C+ and C− becomes infinite, and the set becomes a strange attractor (see Fig. 2.14 (b)).
Lorenz map
Lorenz found a way to analyse the dynamics on the strange attractor. He considered a projection of the three-dimensional phase space onto the (t, z) plane. The idea was that the nth local maximum of the function z(t), denoted z_n, should predict z_{n+1}. To check this, one can estimate the local maxima of the function z(t) and plot z_{n+1} versus z_n. The resulting function, presented in Fig. 2.14 (c), is now called the Lorenz map.
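The Lorenz map can be reproduced with a short script (a sketch; maxima are found by a simple three-point comparison, and the transient cutoff is an ad hoc choice):

```python
def rk4_step(f, t, u, h):
    """One RK4 step (2.25) for a system given as a list of components."""
    k1 = f(t, u)
    k2 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k1)])
    k3 = f(t + h / 2, [x + h / 2 * k for x, k in zip(u, k2)])
    k4 = f(t + h, [x + h * k for x, k in zip(u, k3)])
    return [x + h / 6 * (A + 2 * B + 2 * C + D)
            for x, A, B, C, D in zip(u, k1, k2, k3, k4)]

sigma, r, b = 10.0, 26.0, 8.0 / 3.0      # chaotic regime, r > r_H

def lorenz(t, u):
    x, y, z = u
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

u, t, h = [1.0, 1.0, 1.0], 0.0, 0.01
zs = []
for _ in range(20000):                   # integrate up to t = 200
    u = rk4_step(lorenz, t, u, h)
    t += h
    zs.append(u[2])

# local maxima z_n of z(t), skipping the transient; the Lorenz map is z_{n+1} vs z_n
zmax = [zs[i] for i in range(1000, len(zs) - 1)
        if zs[i - 1] < zs[i] > zs[i + 1]]
print(len(zmax), min(zmax), max(zmax))
```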
tZn+1
f (t, x(t)) dt .
tn
Now a numerical integration method can be used to approximate the definite integral in the last
equation. The Adams-Bashforth-Moulton method is a multi-step method that proceeds in two
steps [30, 37]. The first step is called the Adams-Bashforth predictor. The predictor uses the
Lagrange polynomial approximation for the function f(t, x(t)) based on the nodes (t_{n−3}, f_{n−3}),
(t_{n−2}, f_{n−2}), (t_{n−1}, f_{n−1}) and (t_n, f_n). After integrating over the interval [t_n, t_{n+1}] the predictor
reads
p_{n+1} = x_n + (h/24) (−9 f_{n−3} + 37 f_{n−2} − 59 f_{n−1} + 55 f_n) .   (2.34)
The second step is the Adams-Moulton corrector and is developed similarly. A second Lagrange
polynomial for the function f(t, x(t)) is constructed. In this case it is based on the points (t_{n−2}, f_{n−2}),
(t_{n−1}, f_{n−1}), (t_n, f_n) and the new point (t_{n+1}, f_{n+1}) = (t_{n+1}, f(t_{n+1}, p_{n+1})). After integrating over
the interval [t_n, t_{n+1}] the following relation for the corrector is obtained:
x_{n+1} = x_n + (h/24) (f_{n−2} − 5 f_{n−1} + 19 f_n + 9 f_{n+1}) .   (2.35)
Notice that the method (2.34)-(2.35) is not self-starting, i.e., four initial points (t_n, x_n), n =
0, 1, 2, 3, must be given in order to estimate the points (t_n, x_n) for n ≥ 4.
Error Estimation
The local truncation errors for both the predictor (2.34) and the corrector (2.35) are of the order
O(h⁵), namely [30]

x(t_{n+1}) − p_{n+1} = (251/720) x⁽⁵⁾ h⁵ ,
x(t_{n+1}) − x_{n+1} = −(19/720) x⁽⁵⁾ h⁵ .
That is, for small values of h one can eliminate the terms with the fifth derivative, and the error estimate
reads

x(t_{n+1}) − x_{n+1} ≈ −(19/270) (x_{n+1} − p_{n+1}) .   (2.36)

Equation (2.36) gives an estimate of the local truncation error based on the two computed values
p_{n+1} and x_{n+1}, without requiring the unknown derivative x⁽⁵⁾.
Example 1
Solve the IVP

x′ = t² − x ,  x₀ := x(0) = 1   (2.37)

over the time interval t ∈ [0, 5] with the Adams-Bashforth-Moulton method (2.34)-(2.35) using the
time step h = 0.05. The three starting values x₁, x₂ and x₃ can be calculated via the classical RK4
method. The exact solution of the problem is [30]

x(t) = t² − 2t + 2 − e^{−t} .
The result of the calculation, together with the exact solution, is presented in Fig. 2.15.
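The computation can be reproduced with a short script (a sketch; function names are ours, and the right-hand side corresponds to the IVP (2.37) as reconstructed above):

```python
import math

def f(t, x):
    return t * t - x          # right-hand side of (2.37)

def rk4_step(t, x, h):
    """Classical RK4 step, used to generate the starting values."""
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def abm4(x0, t0, t1, h):
    """Adams-Bashforth-Moulton predictor-corrector (2.34)-(2.35)."""
    n = round((t1 - t0) / h)
    ts = [t0 + i * h for i in range(n + 1)]
    xs = [x0]
    for i in range(3):                      # RK4 starter values x1, x2, x3
        xs.append(rk4_step(ts[i], xs[i], h))
    fs = [f(ts[i], xs[i]) for i in range(4)]
    for i in range(3, n):
        # Adams-Bashforth predictor (2.34)
        p = xs[i] + h/24 * (-9*fs[i-3] + 37*fs[i-2] - 59*fs[i-1] + 55*fs[i])
        # Adams-Moulton corrector (2.35)
        fp = f(ts[i+1], p)
        x_new = xs[i] + h/24 * (fs[i-2] - 5*fs[i-1] + 19*fs[i] + 9*fp)
        xs.append(x_new)
        fs.append(f(ts[i+1], x_new))
    return ts, xs

ts, xs = abm4(1.0, 0.0, 5.0, 0.05)
exact = [t*t - 2*t + 2 - math.exp(-t) for t in ts]
err = max(abs(a - b) for a, b in zip(xs, exact))
```

With h = 0.05 the maximum deviation from the exact solution stays far below the plotting resolution of Fig. 2.15.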
Chapter 3
A boundary value problem (BVP) is a problem, typically an ODE or a PDE, which has values
assigned on the physical boundary of the domain in which the problem is specified. Let us consider
a general ODE of the form

x⁽ⁿ⁾ = f(t, x, x′, x″, …, x⁽ⁿ⁻¹⁾) ,  t ∈ [a, b] ,   (3.1)

supplemented by n conditions imposed at the boundary points t = a and t = b.   (3.2)

In particular, for a linear second-order ODE one obtains the BVP

x″(t) = p(t) x′(t) + q(t) x(t) + r(t) ,  t ∈ [a, b] ,  x(a) = α ,  x(b) = β .   (3.3)
The main idea of the method is to reduce the solution of the BVP (3.3) to the solution of an initial
value problem [45, 30]. Namely, let us consider two special IVPs for two functions u(t) and v(t).
Suppose that u(t) is a solution of the IVP

u″ = p(t) u′ + q(t) u + r(t) ,  u(a) = α ,  u′(a) = 0 ,

and that v(t) is a solution of the homogeneous IVP

v″ = p(t) v′ + q(t) v ,  v(a) = 0 ,  v′(a) = 1 .

Then the linear combination

x(t) = u(t) + c v(t) ,  c = const ,   (3.4)

is a solution to the BVP (3.3). The unknown constant c can be found from the boundary condition at
the right end of the time interval, i.e.,

x(b) = u(b) + c v(b) = β  ⟹  c = (β − u(b)) / v(b) .

Hence the solution of the BVP (3.3) reads

x(t) = u(t) + ((β − u(b)) / v(b)) v(t) .
Example 1
Let us solve the BVP [30]

x″(t) = (2t/(1 + t²)) x′(t) − (2/(1 + t²)) x(t) + 1 ,  x(0) = 1.25 ,  x(4) = −0.95   (3.5)

over the time interval t ∈ [0, 4] using the linear shooting method (3.4).
According to Eq. (3.4) the solution of this problem has the form

x(t) = u(t) − ((0.95 + u(4)) / v(4)) v(t) ,
where u(t) is the solution of

u″(t) = (2t/(1 + t²)) u′(t) − (2/(1 + t²)) u(t) + 1 ,  u(0) = 1.25 ,  u′(0) = 0 ,

and v(t) is the solution of

v″(t) = (2t/(1 + t²)) v′(t) − (2/(1 + t²)) v(t) ,  v(0) = 0 ,  v′(0) = 1 .

The numerical solution of the problem (3.5) as well as both functions u(t) and v(t) are presented in
Fig. 3.1.
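The linear shooting recipe can be sketched as follows; to keep the check simple we use a different linear test problem than (3.5), namely x″ = x + 1, x(0) = 0, x(1) = e − 1, whose exact solution is x(t) = eᵗ − 1 (the test problem and all names are our own choices):

```python
import math

# linear BVP x'' = p(t) x' + q(t) x + r(t), x(a)=alpha, x(b)=beta
p = lambda t: 0.0
q = lambda t: 1.0
r = lambda t: 1.0
a, b, alpha, beta = 0.0, 1.0, 0.0, math.e - 1.0

def integrate(y0, dy0, inhom, n=200):
    """RK4 for y'' = p y' + q y + inhom(t); returns the list of y values."""
    h = (b - a) / n
    y, dy = y0, dy0
    ys = [y]
    for i in range(n):
        t = a + i * h
        def f(t, y, dy):
            return dy, p(t) * dy + q(t) * y + inhom(t)
        k1 = f(t, y, dy)
        k2 = f(t + h/2, y + h/2*k1[0], dy + h/2*k1[1])
        k3 = f(t + h/2, y + h/2*k2[0], dy + h/2*k2[1])
        k4 = f(t + h, y + h*k3[0], dy + h*k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dy += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        ys.append(y)
    return ys

u = integrate(alpha, 0.0, r)              # u(a)=alpha, u'(a)=0, inhomogeneous
v = integrate(0.0, 1.0, lambda t: 0.0)    # v(a)=0,     v'(a)=1, homogeneous
c = (beta - u[-1]) / v[-1]                # c = (beta - u(b)) / v(b)
x = [ui + c * vi for ui, vi in zip(u, v)] # x(t) = u(t) + c v(t), Eq. (3.4)
```

By construction x(b) matches β exactly, and the interior values agree with the exact solution to RK4 accuracy.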
[Fig. 3.1: the numerical solution x(t) of (3.5) together with the auxiliary functions u(t) and v(t).]

For a nonlinear problem the decomposition above no longer applies. Let

x″(t) = f(t, x(t), x′(t)) ,  x(a) = α ,  x(b) = β ,  t ∈ [a, b]   (3.6)
be the BVP in question, and let x(t, s) denote the solution of the IVP

x″(t) = f(t, x(t), x′(t)) ,  x(a) = α ,  x′(a) = s ,  t ∈ [a, b] ,   (3.7)
where s is a parameter that can be varied. The IVP (3.7) is solved with different values of s with,
e.g., the RK4 method until the boundary condition on the right side, x(b) = β, is fulfilled. As
mentioned above, the solution x(t, s) of (3.7) depends on the parameter s. Let us define the function

F(s) := x(b, s) − β .
If the BVP (3.6) has a solution, then the function F(s) has a root, which is just the value of the
slope x′(a) giving the solution x(t) of the BVP in question. The zeros of F(s) can be found with,
e.g., Newton's method [37].
Newton's method is probably the best known method for finding numerical approximations
to the zeros of a real-valued function. The idea of the method is to use the first few terms of the
Taylor series of the function F(s) in the vicinity of a suspected root, i.e.,

F(s_n + h) = F(s_n) + F′(s_n) h + O(h²) ,

where s_n is the nth approximation of the root. Now if one inserts h = s − s_n, one obtains

F(s) ≈ F(s_n) + F′(s_n)(s − s_n) .

As the next approximation s_{n+1} to the root we choose the zero of this linearization, i.e.,

F(s_{n+1}) = F(s_n) + F′(s_n)(s_{n+1} − s_n) = 0  ⟹  s_{n+1} = s_n − F(s_n)/F′(s_n) .
The derivative F′(s_n) can be calculated using the forward difference formula

F′(s_n) ≈ (F(s_n + Δs) − F(s_n)) / Δs ,   (3.8)
Fig. 3.2 Numerical solution of BVP (3.9) with the single shooting method. (a) The function F(s) =
x(1, s) − 1 is presented. Green points depict the two zeros of this function, which can be found with
Newton's method. (b) Two solutions of (3.9) corresponding to two different values of the parameter s
(the red line corresponds to s = −35.8, the blue one to s = −8.0).
where Δs is small. Notice that this procedure can be unstable near a horizontal asymptote or a local
extremum.
Example 1
Consider a simple nonlinear BVP [45]

x″(t) = (3/2) x(t)² ,  x(0) = 4 ,  x(1) = 1   (3.9)

over the interval t ∈ [0, 1] and let us solve it numerically with the single shooting method discussed
above. First of all we define the corresponding IVP

x″(t) = (3/2) x(t)² ,  x(0) = 4 ,  x′(0) = s

over t ∈ [0, 1] and solve it for different values of s, e.g., s ∈ [−100, 0], with the classical RK4
method. The result of the calculation is presented in Fig. 3.2 (a). One can see that the function F(s) =
x(1, s) − 1 admits two zeros, depicted in Fig. 3.2 (a) as green points. In order to find them we use
Newton's method, discussed above. The method gives an approximation to both zeros of the
function F(s), s ≈ {−35.8, −8.0}, which give the correct slope x′(0). Both solutions, corresponding
to the two different values of s, are presented in Fig. 3.2 (b).
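The whole procedure for (3.9) can be sketched in a few lines, combining RK4 integration with the Newton iteration (3.8) (function names and tolerances are our own choices):

```python
def shoot(s, n=400):
    """Integrate x'' = 1.5 x^2, x(0)=4, x'(0)=s with RK4; return x(1)."""
    h = 1.0 / n
    x, y = 4.0, s                         # y = x'
    f = lambda x, y: (y, 1.5 * x * x)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1])
        k4 = f(x + h*k3[0], y + h*k3[1])
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x

def newton(s0, tol=1e-10, ds=1e-6):
    """Newton iteration for F(s) = x(1, s) - 1 with the forward
    difference (3.8) for F'(s)."""
    s = s0
    for _ in range(50):
        F = shoot(s) - 1.0
        dF = (shoot(s + ds) - shoot(s)) / ds
        s_new = s - F / dF
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

s1 = newton(-8.0)      # root near s = -8
s2 = newton(-36.0)     # second root near s = -35.9
```

For this particular BVP the first root is exactly s = −8 (the corresponding solution is x(t) = 4/(1 + t)²), which gives a convenient consistency check.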
Example 2
Let us consider a linear eigenvalue problem of the form

x″ + λ x = 0 ,  x(0) = x(1) = 0 ,  x′(0) = 1   (3.10)

over t ∈ [0, 1] with the simple shooting method. The exact solution is

λ = n²π² ,  n ∈ ℕ .

In order to apply the simple shooting method we consider a corresponding first-order IVP
with an additional equation for the unknown eigenvalue λ:

x′ = y ,  y′ = −λ x ,  λ′ = 0 ,

with

x(0) = 0 ,  y(0) = 1 ,  λ(0) = s ,

where s is a free shooting parameter. Here we choose s = {0.5, 50, 100}. Results of the shooting
with these initial parameters are shown in Fig. 3.3. One can see that the numerical solutions correspond to the first three eigenvalues λ = {π², (2π)², (3π)²}.
Example 3
Consider a nonlinear BVP of the fourth order [?]

x⁽⁴⁾(t) − (1 + t²) x″(t)² + 5 x(t)² = 0 ,  t ∈ [0, 1]   (3.11)

with

x(0) = 1 ,  x′(0) = 0 ,  x″(1) = −2 ,  x‴(1) = −3 .

Our goal is to solve this equation with the simple shooting method. To this end, first we rewrite the
equation as a system of four first-order ODEs:

x₁′ = x₂ ,  x₂′ = x₃ ,  x₃′ = x₄ ,  x₄′ = (1 + t²) x₃² − 5 x₁² ,

with

x₁(0) = 1 ,  x₂(0) = 0 ,  x₃(1) = −2 ,  x₄(1) = −3 .

As the second step we consider the corresponding IVP

x₁′ = x₂ ,  x₂′ = x₃ ,  x₃′ = x₄ ,  x₄′ = (1 + t²) x₃² − 5 x₁²

with

x₁(0) = 1 ,  x₂(0) = 0 ,  x₃(0) = p ,  x₄(0) = q ,

with two free shooting parameters p and q. The solution of this IVP fulfils the following two requirements:

F₁(p, q) := x₃(1, p, q) + 2 = 0 ,
F₂(p, q) := x₄(1, p, q) + 3 = 0 .
That is, a system of nonlinear algebraic equations should be solved to find (p, q). The zeros of the
system can be found with Newton's method (3.8). In this case, with s = (p, q)ᵀ and F = (F₁, F₂)ᵀ,
the iteration step reads

s_{i+1} = s_i − DF(s_i)⁻¹ F(s_i) ,

where DF is the Jacobian matrix

DF = ( ∂F₁/∂p  ∂F₁/∂q ; ∂F₂/∂p  ∂F₂/∂q ) ,

whose entries can again be approximated by difference quotients.
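The two-parameter Newton iteration with a forward-difference Jacobian can be sketched generically; here it is exercised on a simple algebraic system with the known root (2, 1) rather than on the shooting functions F₁, F₂ of (3.11) (the toy system and all names are our own choices):

```python
def newton2(F, s0, ds=1e-7, tol=1e-12, itmax=50):
    """Newton's method for F: R^2 -> R^2 with a forward-difference Jacobian."""
    p, q = s0
    for _ in range(itmax):
        F1, F2 = F(p, q)
        # forward-difference approximation of the Jacobian DF
        dF1p = (F(p + ds, q)[0] - F1) / ds
        dF1q = (F(p, q + ds)[0] - F1) / ds
        dF2p = (F(p + ds, q)[1] - F2) / ds
        dF2q = (F(p, q + ds)[1] - F2) / ds
        det = dF1p * dF2q - dF1q * dF2p
        # solve DF * (dp, dq) = (F1, F2) by Cramer's rule
        dp = (F1 * dF2q - F2 * dF1q) / det
        dq = (dF1p * F2 - dF2p * F1) / det
        p, q = p - dp, q - dq
        if abs(dp) + abs(dq) < tol:
            break
    return p, q

# toy system: F1 = p + q - 3, F2 = p*q - 2, with roots (1, 2) and (2, 1)
F = lambda p, q: (p + q - 3.0, p * q - 2.0)
p, q = newton2(F, (2.5, 0.5))
```

For the shooting problem, `F(p, q)` would instead integrate the IVP above and return (x₃(1) + 2, x₄(1) + 3).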
Another approach is to discretize the BVP directly. To this end we introduce a uniform mesh

t_i = a + i h ,  h = (b − a)/N ,  i = 0, 1, …, N .

Difference quotient approximations for the derivatives can then be used to solve the BVP in question [45, 30].
In particular, using a Taylor expansion in the vicinity of the point t_i, for the first derivative one obtains
x′(t_i) = (x(t_{i+1}) − x(t_i)) / h + O(h) ,   (3.12)

or, using the backward difference,

x′(t_i) = (x(t_i) − x(t_{i−1})) / h + O(h) .   (3.13)
We can combine these two approaches and derive a central difference, which yields a more accurate
approximation:

x′(t_i) = (x(t_{i+1}) − x(t_{i−1})) / (2h) + O(h²) .   (3.14)

The second derivative x″(t_i) can be found in the same way using a linear combination of different
Taylor expansions. For example, a central difference reads

x″(t_i) = (x(t_{i+1}) − 2 x(t_i) + x(t_{i−1})) / h² + O(h²) .   (3.15)
Let us now return to the linear BVP

x″(t) = p(t) x′(t) + q(t) x(t) + r(t) ,  t ∈ [a, b] ,  x(a) = α ,  x(b) = β ,

and introduce the notation x(t_i) = x_i, p(t_i) = p_i, q(t_i) = q_i and r(t_i) = r_i. Then, using Eq. (3.14)
and Eq. (3.15), one can rewrite Eq. (3.3) as a difference equation:

x₀ = α ,
(x_{i+1} − 2 x_i + x_{i−1}) / h² = p_i (x_{i+1} − x_{i−1}) / (2h) + q_i x_i + r_i ,  i = 1, …, N − 1 ,
x_N = β .
Now we can multiply both sides of the second equation by h² and collect the terms involving x_{i−1},
x_i and x_{i+1}. As a result we get a system of linear equations

(1 + (h/2) p_i) x_{i−1} − (2 + h² q_i) x_i + (1 − (h/2) p_i) x_{i+1} = h² r_i ,  i = 1, 2, …, N − 1 ,
or, in matrix notation,

A x = b ,   (3.16)

where A is the tridiagonal (N − 1) × (N − 1) matrix with the entries

A_{i,i} = −(2 + h² q_i) ,  A_{i,i−1} = 1 + (h/2) p_i ,  A_{i,i+1} = 1 − (h/2) p_i ,

x = (x₁, x₂, …, x_{N−1})ᵀ, and the right-hand side is

b = (h² r₁ − γ₁ α , h² r₂ , … , h² r_{N−2} , h² r_{N−1} − γ_N β)ᵀ ,

where

γ₁ = 1 + (h/2) p₁ ,  γ_N = 1 − (h/2) p_{N−1} .
Our goal is to find the unknown vector x. To this end we should solve the linear system with the matrix A. This
matrix has a band structure and is tridiagonal. For matrices of this kind a tridiagonal matrix algorithm (TDMA),
also known as the Thomas algorithm, can be used (see Appendix A for details).
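A minimal sketch of the Thomas algorithm (forward elimination followed by back substitution; the function signature is our own choice):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system. a: sub-, b: main, c: super-diagonal,
    d: right-hand side. a[0] and c[-1] are unused."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check on a small diagonally dominant system
a = [0.0, 1.0, 1.0, 1.0]
b = [4.0, 4.0, 4.0, 4.0]
c = [1.0, 1.0, 1.0, 0.0]
d = [5.0, 6.0, 6.0, 5.0]
x = thomas(a, b, c, d)     # exact solution: [1, 1, 1, 1]
```

The algorithm requires only O(N) operations and O(N) storage, in contrast to O(N³) for general Gaussian elimination.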
Example
Solve the linear BVP [?]

x″(t) − (1 + t²) x(t) = −1 ,  x(−1) = x(1) = 0   (3.17)

over t ∈ [−1, 1] with the finite difference method. First we introduce the discrete set of nodes t_i = −1 + i h
with a given step h. According to the notation used in the previous section, p(t) = 0, q(t) = 1 + t²,
r(t) = −1, α = β = 0. Hence, the linear system (3.16) we are interested in reads
x_{i−1} − (2 + h² q_i) x_i + x_{i+1} = −h² ,  i = 1, …, N − 1 ,  x₀ = x_N = 0 ,

i.e., a tridiagonal system with the diagonal entries −(2 + h² q_i), off-diagonal entries equal to one,
and the right-hand side −h². The numerical solution of the problem in question is presented in Fig. 3.5.
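The finite-difference solution of (3.17) can be sketched as follows, with the tridiagonal system solved by an inline Thomas elimination (a sketch under the sign conventions reconstructed above; all names are ours):

```python
def solve_bvp(N=200):
    """Finite-difference solution of x'' - (1+t^2) x = -1, x(-1)=x(1)=0."""
    h = 2.0 / N
    t = [-1.0 + i * h for i in range(N + 1)]
    q = [1.0 + ti * ti for ti in t]
    # interior unknowns: x_{i-1} - (2 + h^2 q_i) x_i + x_{i+1} = -h^2
    n = N - 1
    sub = [1.0] * n
    main = [-(2.0 + h * h * q[i + 1]) for i in range(n)]
    sup = [1.0] * n
    rhs = [-h * h] * n
    # Thomas algorithm: forward elimination, then back substitution
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], rhs[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return t, [0.0] + x + [0.0]     # reattach the boundary values

t, x = solve_bvp()
```

The computed profile is positive in the interior and symmetric about t = 0, as the symmetry of q(t) requires.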
As a further application of the finite difference method, consider a Sturm-Liouville eigenvalue problem of the form

x″(t) + q(t) x(t) = λ v(t) x(t) ,  t ∈ [a, b] ,   (3.18)
x(a) = 0 ,  x(b) = 0 ,

where λ is an unknown eigenvalue.
Introducing the notation x_i := x(t_i), q_i := q(t_i), v_i := v(t_i), we can write down a difference equation for
Eq. (3.18):

x₀ = 0 ,
(x_{i+1} − 2 x_i + x_{i−1}) / h² + q_i x_i − λ v_i x_i = 0 ,  i = 1, …, N − 1 ,
x_N = 0 .
If v_i ≠ 0 for all i, we can rewrite the difference equation above as an algebraic eigenvalue problem

(A − λ I) x = 0 ,   (3.19)
where A is the tridiagonal matrix with the entries

A_{i,i} = −2/(h² v_i) + q_i/v_i ,  A_{i,i−1} = A_{i,i+1} = 1/(h² v_i) ,  i = 1, …, N − 1 .

The eigenvalues λ of A approximate the eigenvalues of the continuous problem (3.18).
Part II
In this part, we discuss the standard numerical techniques used to integrate partial differential
equations (PDEs). Here we focus on the finite difference method.
Chapter 4
Introduction
Consider, as a simple example, the first-order PDE

∂u(x, y)/∂x = 0 .

This equation implies that the function u(x, y) is independent of x. Hence the general solution
of this equation is u(x, y) = f(y), where f is an arbitrary function of y. The analogous ordinary
differential equation is

du/dx = 0 ,
its general solution is u(x) = c, where c is a constant. This example illustrates that general solutions
of ODEs involve arbitrary constants, whereas solutions of PDEs involve arbitrary functions.
In general, one can classify PDEs with respect to different criteria, e.g.:
Order;
Dimension;
Linearity;
Initial/Boundary value problem, etc.
By the order of a PDE we understand the order of the highest derivative that occurs. A PDE is said
to be linear if it is linear in the unknown functions and their derivatives, with coefficients depending only on the independent variables. The independent variables typically include one or more space
dimensions and sometimes the time dimension as well.
For example, the wave equation

∂²u(x,t)/∂t² = a² ∂²u(x,t)/∂x²

is a one-dimensional, second-order linear PDE. In contrast, the Fisher equation of the form

∂u(r,t)/∂t = Δu(r,t) + u(r,t) − u(r,t)² ,

where r = (x, y), is a two-dimensional, second-order nonlinear PDE.
For instance, the Laplace equation

∂²φ/∂x² + ∂²φ/∂y² = 0

is elliptic, as a = c = 1, b = 0, D = −4 < 0. In general, elliptic PDEs describe processes that have
already reached a steady state, and hence are time-independent.
The one-dimensional wave equation for some amplitude A(x,t),

∂²A/∂t² − v² ∂²A/∂x² = 0 ,

with positive dispersion velocity v is hyperbolic (a = 1, b = 0, c = −v², D = 4v² > 0). Hyperbolic PDEs describe time-dependent, conservative processes, such as convection, that are not
evolving toward a steady state.
The next example is the diffusion equation for the particle density ρ(x,t),

∂ρ/∂t = D ∂²ρ/∂x² ,

where D > 0 is the diffusion coefficient. This equation is parabolic (a = D, b = c =
0, D = 0). Parabolic PDEs describe time-dependent, dissipative processes, such as diffusion, that
are evolving toward a steady state.
Each of these classes should be investigated separately as different methods are required for
each class. The next point to emphasize is that as all the coefficients of the PDE can depend on x
and y, this classification concept is local.
- Dirichlet conditions specify the values of the dependent variables at the boundary points.
- Neumann conditions specify the values of the normal gradients on the boundary.
- Robin conditions define a linear combination of the Dirichlet and Neumann conditions.
Moreover, it is useful to classify the PDE in question in terms of initial value problem (IVP)
and boundary value problem (BVP).
- Initial value problem: PDE in question describes time evolution, i.e., the initial space-distribution
is given; the goal is to find how the dependent variable propagates in time ( e.g., the diffusion
equation).
- Boundary value problem: A static solution of the problem should be found in some region, and
the dependent variable is specified on its boundary (e.g., the Laplace equation).
x_i = x₀ + i Δx ,  i = 0, 1, …, M ;  t_j = t₀ + j Δt ,  j = 0, 1, …, T ;
u(x + Δx) = u(x) + Δx ∂u/∂x + (Δx²/2!) ∂²u/∂x² + (Δx³/3!) ∂³u/∂x³ + … = u(x) + Σ_{n=1}^{∞} (Δxⁿ/n!) ∂ⁿu/∂xⁿ .   (4.1)
∂u/∂x = (u(x + Δx) − u(x))/Δx − (Δx/2!) ∂²u/∂x² − (Δx²/3!) ∂³u/∂x³ − … .   (4.2)
If we truncate the right-hand side of the last equation after the first term, for Δx ≪ 1 the last equation
becomes

∂u/∂x = (u(x + Δx) − u(x))/Δx + O(Δx) = Δ_i u/Δx + O(Δx) ,   (4.3)

where

Δ_i u = u(x + Δx) − u(x) := u_{i+1} − u_i
is called a forward difference. The backward expansion of the function u reads

u(x − Δx) = u(x) − Δx ∂u/∂x + (Δx²/2!) ∂²u/∂x² − (Δx³/3!) ∂³u/∂x³ + … ,   (4.4)

so for the first derivative one obtains

∂u/∂x = (u(x) − u(x − Δx))/Δx + O(Δx) = ∇_i u/Δx + O(Δx) ,

where

∇_i u = u(x) − u(x − Δx) := u_i − u_{i−1}   (4.5)
is called a backward difference. One can see that both forward and backward differences are of the
order O(Δx). We can combine these two approaches and derive a central difference, which yields
a more accurate approximation. If we subtract the backward expansion (4.4) from the forward expansion (4.1), one obtains

u(x + Δx) − u(x − Δx) = 2Δx ∂u/∂x + 2 (Δx³/3!) ∂³u/∂x³ + … ,   (4.6)

which is equivalent to

∂u/∂x = (u(x + Δx) − u(x − Δx))/(2Δx) + O(Δx²) .   (4.7)

Note that the central difference (4.7) is of the order O(Δx²).
The second derivative can be found in the same way using a linear combination of different
Taylor expansions. For instance, consider

u(x + 2Δx) = u(x) + 2Δx ∂u/∂x + ((2Δx)²/2!) ∂²u/∂x² + ((2Δx)³/3!) ∂³u/∂x³ + …   (4.8)

Subtracting Eq. (4.1) multiplied by two from the last equation, one gets

u(x + 2Δx) − 2u(x + Δx) = −u(x) + Δx² ∂²u/∂x² + Δx³ ∂³u/∂x³ + … ,   (4.9)

so the forward approximation of the second derivative reads

∂²u/∂x² = (u(x + 2Δx) − 2u(x + Δx) + u(x))/Δx² + O(Δx) .   (4.10)

Similarly one can obtain the expression for the second derivative in terms of the backward expansion,
i.e.,

∂²u/∂x² = (u(x) − 2u(x − Δx) + u(x − 2Δx))/Δx² + O(Δx) .   (4.11)

Finally, if we add Eqs. (4.1) and (4.4), an expression for the central second derivative reads

∂²u/∂x² = (u(x + Δx) − 2u(x) + u(x − Δx))/Δx² + O(Δx²) .   (4.12)

One can see that the approximation (4.12) is more accurate than (4.10) and (4.11).
In an analogous way one can obtain finite difference approximations to higher-order derivatives and
differential operators. The coefficients for the first few derivatives for the case of forward, backward
and central differences are given in Tables 4.1, 4.2, 4.3.
[Tables 4.1-4.3: finite difference coefficients for the forward, backward and central approximations of the first derivatives.]

Mixed derivatives
A finite difference approximation for mixed partial derivatives can be calculated in the same
way. For example, let us find the central approximation for the derivative ∂²u/∂x∂y:

∂²u/∂x∂y = ∂/∂x [ (u(x, y + Δy) − u(x, y − Δy))/(2Δy) + O(Δy²) ]
  = (u(x+Δx, y+Δy) − u(x−Δx, y+Δy) − u(x+Δx, y−Δy) + u(x−Δx, y−Δy))/(4 Δx Δy) + O(Δx², Δy²) .
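The orders of accuracy derived above can be verified numerically (a sketch; we test the formulas (4.3), (4.7) and (4.12) on u(x) = sin x, and all names are our own choices):

```python
import math

def forward(u, x, dx):  return (u(x + dx) - u(x)) / dx            # Eq. (4.3)
def central(u, x, dx):  return (u(x + dx) - u(x - dx)) / (2 * dx) # Eq. (4.7)
def central2(u, x, dx): return (u(x + dx) - 2*u(x) + u(x - dx)) / dx**2  # (4.12)

u, x = math.sin, 1.0
# errors for two step sizes; halving dx should divide the error
# by 2 for an O(dx) formula and by 4 for an O(dx^2) formula
e_f = [abs(forward(u, x, dx) - math.cos(x)) for dx in (0.1, 0.05)]
e_c = [abs(central(u, x, dx) - math.cos(x)) for dx in (0.1, 0.05)]
e_2 = [abs(central2(u, x, dx) + math.sin(x)) for dx in (0.1, 0.05)]

order_f = math.log(e_f[0] / e_f[1], 2)   # ~1 for the forward difference
order_c = math.log(e_c[0] / e_c[1], 2)   # ~2 for the central difference
order_2 = math.log(e_2[0] / e_2[1], 2)   # ~2 for the central 2nd derivative
```

The measured orders match the truncation error analysis of Eqs. (4.3), (4.7) and (4.12).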
A nonequidistant mesh
In the section above we considered different numerical approximations for the derivatives
using an equidistant mesh. However, in many applications it is convenient to use a nonequidistant
mesh, where the spatial steps fulfill the following rule:

Δx_i = γ Δx_{i−1} .

If γ = 1 the mesh is equidistant. Let us now calculate the first derivative of the function
u(x) to second-order accuracy:

u(x + γΔx) = u(x) + γΔx ∂u/∂x + ((γΔx)²/2!) ∂²u/∂x² + ((γΔx)³/3!) ∂³u/∂x³ + …   (4.13)

Adding the last equation to Eq. (4.4) multiplied by γ, one obtains the expression for the second
derivative

∂²u/∂x² = (u(x + γΔx) − (1 + γ) u(x) + γ u(x − Δx)) / (½ γ(γ + 1) Δx²) + O(Δx) .   (4.14)

Substitution of the last equation into Eq. (4.4) yields

∂u/∂x = (u(x + γΔx) − (1 − γ²) u(x) − γ² u(x − Δx)) / (γ(1 + γ) Δx) + O(Δx²) .   (4.15)

von Neumann stability analysis
A time-stepping scheme advances the numerical solution by one time step via

u^{j+1} = T(u^j) .   (4.16)
Here T is a nonlinear operator, depending on the numerical scheme in question. The successive
application of T results in a sequence of values

u⁰, u¹, u², … ,

that approximate the exact solution of the problem. However, at each time step we add a small
error ε^j, i.e., the sequence above reads

u⁰ + ε⁰, u¹ + ε¹, u² + ε², … ,

where ε^j is the cumulative rounding error at time t_j. Thus we obtain

u^{j+1} + ε^{j+1} = T(u^j + ε^j) .   (4.17)
After linearization of the last equation (we suppose that a Taylor expansion of T is possible), the
linear equation for the perturbation takes the form

ε^{j+1} = (∂T(u^j)/∂u^j) ε^j := G ε^j .   (4.18)
This equation is called the error propagation law, whereas the linearization matrix G is called
the amplification matrix [21]. Now the stability of the numerical scheme in question depends on
the eigenvalues g_μ of the matrix G. In other words, the scheme is stable if and only if

|g_μ| ≤ 1  for all μ .

Now the question is how this information can be used in practice. The first point to emphasize is
that in general one deals with u(x_i, t_j) := u_i^j, so one can write

ε_i^{j+1} = Σ_{i′} G_{ii′} ε_{i′}^j ,   (4.19)

where

G_{ii′} = ∂T(u^j)_i / ∂u_{i′}^j .
Furthermore, the spatial variation of ε_i^j (the rounding error at the time step t_j at the point x_i) can be
expanded in a finite Fourier series on the interval [0, L]:

ε_i^j = Σ_k e^{ikx_i} ξ^j(k) ,   (4.20)

where k is the wavenumber and ξ^j(k) are the Fourier coefficients. Since the rounding error tends to
grow or decay exponentially with time, it is reasonable to assume that ξ^j(k) varies exponentially
with time, i.e.,

ε_i^j = Σ_k e^{ωt_j} e^{ikx_i} ,
where ω = ω(k) is a constant. The next point to emphasize is that the functions e^{ikx_i} are eigenfunctions of
the matrix G, so the last expansion can be interpreted as an expansion in eigenfunctions of G. In
addition, the equation for the error is linear, so it is enough to examine the growth of the error of a
typical term of the sum. Thus, from the practical point of view, one takes the error ε_i^j just as

ε_i^j = e^{ωt_j} e^{ikx_i} .

The substitution of this expression into Eq. (4.19) results in the following relation:

ε_i^{j+1} = g(k) ε_i^j .   (4.21)

That is, one can interpret e^{ikx_i} as an eigenvector corresponding to the eigenvalue g(k). The value
g(k) is often called an amplification factor. Finally, the stability criterion is then given by

|g(k)| ≤ 1  ∀k .   (4.22)
Chapter 5
Advection Equation
Let us consider a continuity equation for the one-dimensional drift of incompressible fluid. In the
case that a particle density u(x,t) changes only due to convection processes one can write
u(x, t + Δt) = u(x − c Δt, t) .

If Δt is sufficiently small, the Taylor expansion of both sides gives

u(x,t) + Δt ∂u(x,t)/∂t ≈ u(x,t) − c Δt ∂u(x,t)/∂x ,

or, equivalently,

∂u/∂t + c ∂u/∂x = 0 .   (5.1)

Here u = u(x,t), x ∈ ℝ, and c is a nonzero constant velocity. Equation (5.1) is called the
advection equation and describes the motion of a scalar u as it is advected by a known velocity
field. According to the classification given in Sec. 4.1, Eq. (5.1) is a hyperbolic PDE. The unique
solution of (5.1) is determined by the initial condition u₀ := u(x, 0):

u(x,t) = u₀(x − ct) .   (5.2)

The solution (5.2) is just the initial function u₀ shifted by ct to the right (for c > 0) or to the left
(for c < 0); it remains constant along the characteristic curves (du/ds = 0).
Now we focus on different explicit methods to solve advection equation (5.1) numerically on the
periodic domain [0, L] with a given initial condition u0 = u(x, 0).
x_i = i Δx ,  i = 0, …, M ,  Δx = L/M ;  t_j = j Δt ,  j = 0, …, T .
Adopting a forward temporal difference scheme (4.3) and a centered spatial difference scheme (4.7),
Eq. (5.1) yields

(u_i^{j+1} − u_i^j)/Δt = −c (u_{i+1}^j − u_{i−1}^j)/(2Δx) ,

or

u_i^{j+1} = u_i^j − (cΔt/(2Δx)) (u_{i+1}^j − u_{i−1}^j) .   (5.3)

Here we use the notation u_i^j := u(x_i, t_j). A schematic representation of the FTCS approximation (5.3) is
shown in Fig. 5.2.
The von Neumann analysis of Sec. 4.3, with ε_i^{j+1} = g(k) ε_i^j, where ε_i^j stands for the cumulative rounding error at time t_j, yields for the scheme (5.3) the amplification factor

|g(k)|² = 1 + (c²Δt²/Δx²) sin²(kΔx) .

The von Neumann stability condition (4.22) for the amplification factor g(k) reads |g(k)| ≤ 1 ∀k.
One can see that the magnitude of the amplification factor g(k) is greater than unity for all k.
This implies that the instability occurs for all given c, Δt and Δx, i.e., the FTCS scheme (5.3) is
unconditionally unstable.
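The unconditional instability is easy to observe directly (a sketch; grid parameters are our own choices, matching those used later in this chapter):

```python
import math

def ftcs(M=100, steps=2000, c=0.5, L=10.0, dt=0.05):
    """FTCS scheme (5.3) for u_t + c u_x = 0 on a periodic grid."""
    dx = L / M
    coef = c * dt / (2.0 * dx)
    u = [math.exp(-(i * dx - 2.0) ** 2) for i in range(M)]
    for _ in range(steps):
        # build the new time level from the old one (periodic indices)
        u = [u[i] - coef * (u[(i + 1) % M] - u[i - 1]) for i in range(M)]
    return u

u = ftcs()
growth = max(abs(v) for v in u)   # grows far beyond the initial peak of 1

# amplification factor |g(k)|^2 = 1 + (c dt/dx)^2 sin^2(k dx) > 1
c, dt, dx = 0.5, 0.05, 0.1
g = abs(complex(1.0, -(c * dt / dx) * math.sin(0.5)))   # mode with k dx = 0.5
```

The short-wavelength components of the rounding error are amplified at every step, so the solution is eventually swamped by growing oscillations.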
A first-order remedy is to replace the centered spatial difference by a one-sided difference taken in the upwind direction:

(u_i^{j+1} − u_i^j)/Δt = −c (u_i^j − u_{i−1}^j)/Δx ,
u_i^{j+1} = u_i^j − (cΔt/Δx) (u_i^j − u_{i−1}^j) ,  (c > 0) ,   (5.4)

and

(u_i^{j+1} − u_i^j)/Δt = −c (u_{i+1}^j − u_i^j)/Δx ,
u_i^{j+1} = u_i^j − (cΔt/Δx) (u_{i+1}^j − u_i^j) ,  (c < 0) .   (5.5)

Note that the upwind scheme (5.4) corresponds to the case of positive velocity c, whereas
Eq. (5.5) stands for the case c < 0. The next point to emphasize is that both schemes (5.4)-(5.5) are
only first-order accurate in space and time. Schematic representations of both upwind methods are presented
in Fig. 5.3.
In matrix form the upwind scheme (5.4) reads

u^{j+1} = A u^j ,   (5.6)

where, with α := cΔt/Δx, A is the sparse matrix with the diagonal entries 1 − α, the subdiagonal
entries α, and the corner element A_{1,M} = α. The corner element (boxed in the original) indicates the
influence of the periodic boundary conditions. Similarly, one can represent the scheme (5.5) in the form (5.6)
with the matrix with diagonal entries 1 + α, superdiagonal entries −α, and the corner element
A_{M,1} = −α, which again displays the influence of the periodic boundary conditions.
The von Neumann analysis gives

ε_i^{j+1} = g(k) ε_i^j ,

where the amplification factor g(k) for, e.g., the upwind scheme (5.4) is given by

g(k) = 1 − α (1 − e^{−ikΔx}) = 1 − α + α e^{−iφ} ,  α = cΔt/Δx ,  φ = kΔx .
Fig. 5.3 Schematic visualization of the first-order upwind methods. (a) Upwind scheme (5.4) for
c > 0. (b) Upwind scheme (5.5) for c < 0.
The condition |g(k)| ≤ 1 then leads to

0 ≤ cΔt/Δx ≤ 1 ,  i.e.,  Δt ≤ Δx/c .   (5.7)
That is, the method (5.4) is conditionally stable: it is stable if and only if the physical velocity
c is not bigger than the spreading velocity Δx/Δt of the numerical method. This is equivalent to
the condition that the time step Δt must be smaller than the time taken for the wave to travel the
distance of the spatial step Δx. A schematic illustration of the stability condition (5.7) is shown in Fig. .
Condition (5.7) is called the Courant-Friedrichs-Lewy (CFL) stability criterion, and α = cΔt/Δx is
called the Courant number. The condition (5.7) is named after R. Courant, K. Friedrichs, and H. Lewy,
who described it in their paper in 1928 [38].
Numerical results
Figure 5.5 shows an example of a calculation in which the upwind scheme (5.4) is used to advect
a Gaussian pulse. The parameters of the calculation are chosen as follows:

Space interval: L = 10
Initial condition: u₀(x) = exp(−(x − 2)²)
Space discretization step: Δx = 0.1
Time discretization step: Δt = 0.05
Velocity: c = 0.5
Number of time steps: T = 200

For the parameter values given above the stability condition (5.7) is fulfilled, so the scheme (5.4)
is stable. On the other hand, one can see that the wave form shows evidence of dispersion. We
discuss this problem in detail in the next section.
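The calculation above can be reproduced with a few lines (a sketch; function names are ours, parameters as in the table):

```python
import math

def upwind(M=100, steps=200, c=0.5, L=10.0, dt=0.05):
    """First-order upwind scheme (5.4) for c > 0 on a periodic grid."""
    dx = L / M
    alpha = c * dt / dx        # Courant number, 0.25 here: CFL fulfilled
    u = [math.exp(-(i * dx - 2.0) ** 2) for i in range(M)]
    mass0 = sum(u)
    for _ in range(steps):
        # u_i^{j+1} = (1 - alpha) u_i^j + alpha u_{i-1}^j
        u = [(1.0 - alpha) * u[i] + alpha * u[i - 1] for i in range(M)]
    return u, mass0

u, mass0 = upwind()
peak = max(u)      # the pulse is damped by numerical diffusion
```

Because the update is a convex combination of neighboring values for 0 ≤ α ≤ 1, the scheme preserves positivity and conserves the total "mass" on the periodic domain; the price is the strong smearing of the pulse visible in Fig. 5.5.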
A first improvement of the FTCS scheme is the Lax method, in which u_i^j in (5.3) is replaced by the
average of its neighbors:

u_i^{j+1} = ½ (u_{i+1}^j + u_{i−1}^j) − (cΔt/(2Δx)) (u_{i+1}^j − u_{i−1}^j) .   (5.8)
In this case the matrix A of the linear system (5.6) is a sparse matrix with zero main
diagonal,

A = ( 0 a 0 0 … 0 0 b ;
      b 0 a 0 … 0 0 0 ;
      0 b 0 a … 0 0 0 ;
      … ;
      0 0 0 0 … b 0 a ;
      a 0 0 0 … 0 b 0 ) ,

where

a = ½ − cΔt/(2Δx) ,  b = ½ + cΔt/(2Δx) ,

and the corner elements (boxed in the original) represent the influence of the periodic boundary conditions.
The von Neumann stability analysis yields the amplification factor

g(k) = cos kΔx − i (cΔt/Δx) sin kΔx ,

so that the stability condition |g(k)| ≤ 1 amounts to

cΔt/Δx ≤ 1 ,

which is again the Courant-Friedrichs-Lewy condition (5.7). In fact, all stable explicit differencing
schemes for solving the advection equation (5.1) are subject to the CFL constraint, which determines
the maximum allowable time step Δt.
Numerical results
Consider a realization of the Lax method (5.8) for a concrete numerical example:

Space interval: L = 10
Initial condition: u₀(x) = exp(−10(x − 2)²)
Space discretization step: Δx = 0.05
Time discretization step: Δt = 0.05
Velocity: c = 0.5
Number of time steps: T = 200
As can be seen from Fig. 5.7 (a), like the upwind method (5.4), the Lax scheme introduces a
spurious dispersion effect into the advection problem (5.1). Although the pulse is advected at
the correct speed (i.e., it appears approximately stationary in the co-moving frame x − ct, see
Fig. 5.7 (b)), it does not remain the same shape as it should.
Fig. 5.7 Numerical implementation of the Lax method (5.8). Parameters are: advection velocity
c = 0.5, length of the space interval L = 10, space and time discretization steps Δx = 0.05 and
Δt = 0.05, number of time steps T = 200, and initial condition u₀(x) = exp(−10(x − 2)²). (a)
Time evolution of u(x,t) for different time moments. Solutions at t = 0, 100, 150, 200 are shown.
(b) Time evolution in the co-moving frame x − ct at t = 0, 100, 200.
Fig. 5.8 Illustration of the dispersion relation for the Lax method calculated for different values of
the Courant number α. (a) Real part of ω. (b) Imaginary part of ω.
Fourier Analysis
One can try to understand the origin of the dispersion effect with the help of the dispersion relation.
The ansatz of a Fourier mode of the form

u_i^j ∼ e^{ikx_i − iωt_j}

for Eq. (5.8) results in the following relation:

e^{−iωΔt} = cos kΔx − i α sin kΔx ,

where again α = cΔt/Δx. For α = 1 the right hand side of this relation is equal to exp(−ikΔx)
and one obtains

ω = k Δx/Δt = k c .

That is, in this case the Lax method (5.8) is exact (the phase velocity ω/k equals c). However, in
the general case one should suppose ω = ω₁ + iω₂, i.e., the Fourier modes are of the form

u(x,t) ∼ e^{ikx − i(ω₁ + iω₂)t} = e^{i(kx − ω₁t)} e^{ω₂t} ,

and the corresponding dispersion relation reads

ωΔt = (ω₁ + iω₂)Δt = i ln(cos kΔx − i α sin kΔx) .   (5.9)

Hence, if ω₂ ≤ 0 one is dealing with damped waves, which decay exponentially with the time constant
1/|ω₂|. Furthermore, from Eq. (5.9) it can be seen that for α < 1 Fourier modes with wavelengths
of a few grid constants (λ = 2π/k ≈ 4Δx) are not only damped (see Fig. 5.8 (b)) but also
propagate with an essentially greater phase velocity ω₁/k than the long-wave components
(see Fig. 5.8 (a)). Now the question we are interested in is: what is the reason for this unphysical
behavior? To answer this question let us rewrite the difference scheme (5.8):
(u_i^{j+1} − u_i^j)/Δt = −c (u_{i+1}^j − u_{i−1}^j)/(2Δx) + (u_{i+1}^j − 2u_i^j + u_{i−1}^j)/(2Δt) .

Expanding all terms in Taylor series, this difference scheme is seen to approximate the equation

∂u/∂t = −c ∂u/∂x + (Δx²/(2Δt)) ∂²u/∂x² − (Δt/2) ∂²u/∂t² .   (5.10)
Although the last term in (5.10) tends to zero as Δt → 0, the behavior of the first diffusion-like term
depends on how Δt and Δx tend to zero together. That is, the Lax method is not a consistent way to solve Eq. (5.1).
This message becomes clear if one calculates the partial derivative using Eq. (5.1),

∂²u/∂t² = c² ∂²u/∂x² .

Substitution of the last expression into Eq. (5.10) results in an equation which, in addition to the
advection term, includes a diffusion term as well:

∂u/∂t = −c ∂u/∂x + D ∂²u/∂x² ,

where

D = Δx²/(2Δt) − c²Δt/2

is a positive diffusion constant (for α < 1). Now the unphysical behavior of the Fourier modes becomes clear:
we have integrated the wrong equation! That is, other numerical approximations should be used to
solve Eq. (5.1) in a more correct way.
The Lax-Wendroff method avoids this inconsistency. First, intermediate values at the half time step
t_{j+1/2} and the half mesh points x_{i±1/2} are computed with the Lax method (5.8),

u_{i−1/2}^{j+1/2} = ½ (u_i^j + u_{i−1}^j) − (cΔt/(2Δx)) (u_i^j − u_{i−1}^j) ,
u_{i+1/2}^{j+1/2} = ½ (u_{i+1}^j + u_i^j) − (cΔt/(2Δx)) (u_{i+1}^j − u_i^j) .   (5.11)

Then we use the central difference of these values to approximate the derivative ∂u/∂x at (x_i, t_{j+1/2}),
i.e.,

u_i^{j+1} = u_i^j − (cΔt/Δx) (u_{i+1/2}^{j+1/2} − u_{i−1/2}^{j+1/2}) .   (5.12)

Eliminating the intermediate values, the scheme (5.11)-(5.12) can be written as a one-step three-point
scheme,

u_i^{j+1} = b_{−1} u_{i−1}^j + b₀ u_i^j + b₁ u_{i+1}^j ,

where
b_{−1} = (α/2)(α + 1) ,  b₀ = 1 − α² ,  b₁ = (α/2)(α − 1) ,

and α = cΔt/Δx is the Courant number. The matrix A of the linear system (5.6) is a sparse matrix of the form
A = ( b₀ b₁ 0 0 … 0 0 b_{−1} ;
      b_{−1} b₀ b₁ 0 … 0 0 0 ;
      0 b_{−1} b₀ b₁ … 0 0 0 ;
      … ;
      0 0 0 0 … b_{−1} b₀ b₁ ;
      b₁ 0 0 0 … 0 b_{−1} b₀ ) ,

where the corner elements (boxed in the original) stand for the influence of the periodic boundary conditions.
Notice that the three-point scheme (5.12) is second-order accurate in space and time. The distinguishing feature of the Lax-Wendroff method is that, for the linear advection equation (5.1), it is
the only explicit scheme of second-order accuracy in space and time.
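The one-step form of the scheme can be sketched directly; after t = 20 the pulse has travelled exactly once around the periodic domain, so it can be compared with the initial condition (parameter values are our own choices):

```python
import math

def lax_wendroff(M=200, steps=400, c=0.5, L=10.0, dt=0.05):
    """One-step Lax-Wendroff scheme (5.12) on a periodic grid."""
    dx = L / M                      # alpha = c dt/dx = 0.5: CFL fulfilled
    a = c * dt / dx
    bm = 0.5 * a * (a + 1.0)        # b_{-1}
    b0 = 1.0 - a * a                # b_0
    bp = 0.5 * a * (a - 1.0)        # b_{+1}
    u = [math.exp(-(i * dx - 2.0) ** 2) for i in range(M)]
    for _ in range(steps):
        u = [bm * u[i - 1] + b0 * u[i] + bp * u[(i + 1) % M]
             for i in range(M)]
    return u

u = lax_wendroff()
# after t = steps*dt = 20 the pulse has moved c*t = 10, i.e. exactly once
# around the periodic domain, so it should nearly match the initial pulse
err = max(abs(u[i] - math.exp(-(i * 0.05 - 2.0) ** 2)) for i in range(200))
```

The small residual error after a full revolution illustrates the second-order accuracy; the same test with the first-order upwind scheme would smear the pulse far more strongly.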
The second way to derive the Lax-Wendroff difference scheme is based on the idea that we would
like to get a scheme that is second-order accurate in space and time. First of all, we use a Taylor series
expansion in time, namely

u(x_i, t_{j+1}) = u(x_i, t_j) + Δt ∂_t u(x_i, t_j) + (Δt²/2) ∂_t² u(x_i, t_j) + O(Δt³) .

Next one replaces the time derivatives in the last expression by space derivatives according
to the relation

∂_t⁽ⁿ⁾ u = (−c)ⁿ ∂_x⁽ⁿ⁾ u .

Hence

u(x_i, t_{j+1}) = u(x_i, t_j) − cΔt ∂_x u(x_i, t_j) + (c²Δt²/2) ∂_x² u(x_i, t_j) + O(Δt³) .

Finally, the space derivatives are approximated by the central differences (4.7), (4.12), resulting in the
Lax-Wendroff scheme (5.12).
The von Neumann analysis of the scheme (5.12) yields

|g(k)|² = 1 − α² (1 − α²) (1 − cos(kΔx))² ,

so the stability condition |g(k)| ≤ 1 requires

1 − α² ≥ 0  ⟺  α = cΔt/Δx ≤ 1 ,

i.e., once more the CFL condition (5.7).
Fourier analysis
In order to check for dispersion, let us calculate the dispersion relation for the scheme (5.12).
The ansatz of the form exp(i(kx_i − ωt_j)) results in

e^{−iωΔt} = 1 + α² (cos(kΔx) − 1) − iα sin(kΔx) ,
ωΔt = (ω₁ + iω₂)Δt = i ln[1 + α² (cos(kΔx) − 1) − iα sin(kΔx)] .

One can easily see that in the case of (5.12) dispersion (see Fig. 5.10 (a)) as well as damping
(diffusion) (see Fig. 5.10 (b)) of the Fourier modes take place. However, as can be seen in Fig. 5.10
and Fig. 5.11, dispersion and diffusion are weaker than for the Lax method (5.8) and appear at
much smaller wavelengths. Because of these properties, and taking into account the fact that the
method (5.12) is of second order, it has become a standard scheme for approximating Eq. (5.1).
Moreover, the scheme (5.12) can be generalized to the case of a conservation equation in general
form,
∂u/∂t + ∂F(u)/∂x = 0 ,   (5.13)

where u = u(x, t) and the form of the flux function F(u) depends on the problem we are interested in.
One can apply the Lax-Wendroff method (5.12) to Eq. (5.13). With F_i^j := F(u_i^j) one obtains
the following difference scheme:
the following differential scheme
t
1 j
j+ 1
j
j
ui + ui1
Fi j Fi1
,
u 12 =
i 2
2
2x
t
1 j
j+ 1
j
u + ui+1
F j Fi j ,
u 12 =
i+ 2
2 i
2x i+1
t
j+ 1
j+ 1
uij+1 = uij
F 12 F 12 .
(5.14)
i 2
x i+ 2
Fig. 5.10 Illustration of the dispersion relation for the Lax-Wendroff method, calculated for different values of σ. (a) Real part of ω (dispersion). (b) Imaginary part of ω (diffusion).
Fig. 5.11 Numerical implementation of the Lax-Wendroff method (5.12). Parameters are: advection velocity c = 0.5, length of the space interval L = 10, space and time discretization steps Δx = 0.05 and Δt = 0.05, number of time steps T = 800, and initial condition u₀(x) = exp(−(x−2)²). (a) Time evolution of u(x,t) at different time moments. (b) Time evolution in the co-moving frame x − ct at t = 0, 400, 800.
Chapter 6
Burgers Equation
One of the major challenges in the field of complex systems is a thorough understanding of the phenomenon of turbulence. Direct numerical simulations (DNS) have substantially contributed to our understanding of the disordered flow phenomena inevitably arising at high Reynolds numbers. However, a successful theory of turbulence is still lacking which would allow one to predict features of technologically important phenomena like turbulent mixing, turbulent convection, and turbulent combustion on the basis of the fundamental fluid dynamical equations. This is due to the fact that already the evolution equation for the simplest fluids, the so-called Newtonian incompressible fluids, has to take into account nonlinear as well as nonlocal properties:
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = -\nabla p + \nu\,\nabla^2\mathbf{u} + \mathbf{F}\,. \qquad (6.1)$$
Nonlinearity stems from the convective term and the pressure term, whereas nonlocality enters due to the pressure term. Due to incompressibility, the pressure is defined by a Poisson equation,
$$\nabla^2 p = -\nabla\cdot\bigl[(\mathbf{u}\cdot\nabla)\,\mathbf{u}\bigr]\,. \qquad (6.2)$$
In 1939 the Dutch scientist J.M. Burgers [4] simplified the Navier-Stokes equation (6.1) by simply dropping the pressure term. In contrast to Eq. (6.1), this equation can be investigated in one spatial dimension (physicists like to denote this as a 1+1 dimensional problem in order to stress that there is one spatial and one temporal coordinate):
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2} + F(x,t)\,. \qquad (6.3)$$
Note that usually the Burgers equation is considered without the external force F(x,t). However, we shall include this external force field.
The Burgers equation (6.3) is nonlinear, and one expects to find phenomena similar to turbulence. However, as has been shown by Hopf [20] and Cole [6], the homogeneous Burgers equation lacks the most important property attributed to turbulence: the solutions do not exhibit chaotic features like sensitivity with respect to initial conditions. This can be shown explicitly using the Hopf-Cole transformation, which transforms the Burgers equation into a linear parabolic equation. From the numerical point of view, however, this is of importance since it allows one to compare numerically obtained solutions of the nonlinear equation with the exact one. This comparison is important for investigating the quality of the applied numerical schemes. Furthermore, the equation still has interesting applications in physics and astrophysics. We will briefly mention some of them.
$$\frac{\partial h(x,t)}{\partial t} - \frac12\Bigl(\frac{\partial h(x,t)}{\partial x}\Bigr)^2 = \nu\,\frac{\partial^2 h(x,t)}{\partial x^2} + F(x,t)\,. \qquad (6.4)$$
This equation is obtained from the simple advection equation for a surface at z = h(x,t) moving with velocity U(x,t),
$$\frac{\partial h(x,t)}{\partial t} + U\,\frac{\partial h(x,t)}{\partial x} = 0\,. \qquad (6.5)$$
The velocity is assumed to be proportional to the gradient of h(x,t), i.e., the surface evolves in the direction of its gradient. Surface diffusion is described by the diffusion term.
The Burgers equation (6.3) is obtained from the KPZ equation just by forming the gradient of h(x,t):
$$u(x,t) = -\frac{\partial h(x,t)}{\partial x}\,. \qquad (6.6)$$
Consider now the linear diffusion equation
$$\frac{\partial \varphi(x,t)}{\partial t} = \nu\,\frac{\partial^2 \varphi(x,t)}{\partial x^2}\,. \qquad (6.7)$$
We perform the ansatz
$$\varphi(x,t) = e^{h(x,t)/2\nu} \qquad (6.8)$$
and determine
$$\frac{\partial \varphi}{\partial t} = \frac{1}{2\nu}\,\frac{\partial h}{\partial t}\,e^{h/2\nu}\,, \qquad \frac{\partial^2 \varphi}{\partial x^2} = \Bigl[\frac{1}{2\nu}\,\frac{\partial^2 h}{\partial x^2} + \frac{1}{4\nu^2}\Bigl(\frac{\partial h}{\partial x}\Bigr)^2\Bigr]\,e^{h/2\nu}\,, \qquad (6.9)$$
leading to
$$\frac{\partial h}{\partial t} - \frac12\Bigl(\frac{\partial h}{\partial x}\Bigr)^2 = \nu\,\frac{\partial^2 h}{\partial x^2}\,. \qquad (6.10)$$
However, this is exactly the Kardar-Parisi-Zhang equation (6.4) (without forcing). The complete transformation is then obtained by combining (6.6) and (6.8):
$$u(x,t) = -2\nu\,\frac{\partial}{\partial x}\,\ln\varphi(x,t)\,. \qquad (6.11)$$
We explicitly see that the Hopf-Cole transformation turns the nonlinear Burgers equation into the linear heat conduction equation. Since the heat conduction equation is explicitly solvable in terms of the so-called heat kernel, we obtain a general solution of the Burgers equation. Before we construct this general solution, we want to emphasize that the Hopf-Cole transformation applied to the multi-dimensional Burgers equation only leads to the general solution provided the initial condition u(x, 0) is a gradient field. For general initial conditions, especially for initial fields with nonvanishing curl, the solution cannot be constructed using the Hopf-Cole transformation and, consequently, is not known in analytical terms. In one spatial dimension it is not necessary to distinguish between these two cases.
The initial condition for φ reads
$$\varphi(x,0) = e^{-\frac{1}{2\nu}\int^x dx'\,u(x',0)}\,, \qquad (6.12)$$
and the solution of the heat equation (6.7) is given by the convolution with the heat kernel,
$$\varphi(x,t) = \int dx'\,G(x-x',t)\,\varphi(x',0)\,, \qquad (6.13)$$
$$G(x-x',t) = \frac{1}{\sqrt{4\pi\nu t}}\,e^{-\frac{(x-x')^2}{4\nu t}}\,. \qquad (6.14)$$
In terms of the initial condition (6.12) the solution explicitly reads
$$\varphi(x,t) = \frac{1}{\sqrt{4\pi\nu t}}\int dx'\,e^{-\frac{(x-x')^2}{4\nu t}\,-\,\frac{1}{2\nu}\int^{x'} dx''\,u(x'',0)}\,. \qquad (6.15)$$
The d-dimensional solution of the Burgers equation (6.3) for initial fields which are gradient fields is obtained analogously:
$$\varphi(\mathbf{x},t) = \frac{1}{(4\pi\nu t)^{d/2}}\int d\mathbf{x}'\,e^{-\frac{(\mathbf{x}-\mathbf{x}')^2}{4\nu t}\,-\,\frac{1}{2\nu}\int^{\mathbf{x}'} d\mathbf{x}''\cdot\,\mathbf{u}(\mathbf{x}'',0)}\,. \qquad (6.16)$$
Again, we see that the solution exists provided the line integral
$$\int^{\mathbf{x}} d\mathbf{x}'\cdot\mathbf{u}(\mathbf{x}',0) \qquad (6.17)$$
is independent of the integration contour.
We can investigate the limiting case of vanishing viscosity, ν → 0. In the expression for φ(x,t), Eq. (6.16), the integral is dominated by the minimum of the exponent,
$$\min_{x'}\Bigl[\frac{(x-x')^2}{4t} + \frac12\int^{x'} dx''\,u(x'',0)\Bigr]\,. \qquad (6.18)$$
This leads to the so-called characteristics (see App. B)
$$x = x' + t\,u(x',0)\,, \qquad (6.19)$$
which we have already met in the discussion of the advection equation (5.1) (see Chapter 5). A special solution of the viscid Burgers equation is
$$u(x,t) = 1 - \tanh\Bigl(\frac{x - x_c - t}{2\nu}\Bigr)\,. \qquad (6.20)$$
For the forced Burgers equation the Hopf-Cole transformation leads to the linear parabolic equation
$$\frac{\partial \varphi(x,t)}{\partial t} = \nu\,\frac{\partial^2 \varphi(x,t)}{\partial x^2} - \frac{1}{2\nu}\,U(x,t)\,\varphi(x,t)\,, \qquad (6.21)$$
where the force is derived from the potential U(x,t),
$$F(x,t) = -\frac{\partial}{\partial x}\,U(x,t)\,. \qquad (6.22)$$
The relationship with the Schrödinger equation for a particle moving in the potential U(x,t) is obvious. Recently, the Burgers equation with a fluctuating force has been investigated [36]. Interestingly, the Burgers equation with a linear force, i.e. a quadratic potential
$$U(x,t) = a(t)\,x^2\,, \qquad (6.23)$$
for an arbitrary time-dependent coefficient a(t), could be solved analytically [15].
In the following we consider the unforced viscous Burgers equation,
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}\,.$$
When ν = 0, the Burgers equation becomes the inviscid Burgers equation,
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0\,, \qquad (6.24)$$
which is a prototype for equations whose solutions can develop discontinuities (shock waves). As was mentioned above, like the solution of the advection equation (5.1), the solution of Eq. (6.24) can be constructed by the method of characteristics (see App. B). Suppose we have an initial value problem, i.e., a smooth function u(x, 0) = u₀(x), x ∈ ℝ, is given. In this case the coefficients A, B and C are
$$A = u\,, \qquad B = 1\,, \qquad C = 0\,.$$
Equations (B.2)-(B.3) read
$$\frac{dt}{ds} = 1\,, \quad t(0) = 0 \;\Rightarrow\; t = s\,,$$
$$\frac{du}{ds} = 0\,, \quad u(0) = u_0(x_0) \;\Rightarrow\; u(s,x_0) = u_0(x_0)\,,$$
$$\frac{dx}{ds} = u\,, \quad x(0) = x_0 \;\Rightarrow\; x = u_0(x_0)\,t + x_0\,.$$
Hence the general solution of (6.24) takes the implicit form
$$u(x,t) = u_0\bigl(x - u(x,t)\,t\bigr)\,. \qquad (6.25)$$
Eq. (6.25) is an implicit relation that determines the solution of the inviscid Burgers equation. Note that the characteristics are straight lines, but not all the lines have the same slope, so it is possible for the characteristics to intersect. If we write the characteristics as
$$t = \frac{x}{u_0(x_0)} - \frac{x_0}{u_0(x_0)}\,,$$
one can see that the slope 1/u₀(x₀) of the characteristics depends on the point x₀ and on the initial function u₀. For the inviscid Burgers equation (6.24), the time T_c at which the characteristics first cross and a shock forms, the breaking time, can be determined exactly as
$$T_c = \frac{-1}{\min_x\{u_x(x,0)\}}\,.$$
This relation can be used if Eq. (6.24) has smooth initial data (so that it is differentiable). From the formula for T_c we can see that the solution will break and a shock will form if u_x(x, 0) is negative at some point. From the numerical point of view it is convenient to rewrite the Burgers equation as
$$\frac{\partial u}{\partial t} + \frac12\,\frac{\partial}{\partial x}\bigl(u^2\bigr) = 0\,. \qquad (6.26)$$
Equation (6.26) describes a one-dimensional conservation law (5.13) with F = u²/2 and can be solved, e.g., with the upwind method (5.4) or with the Lax-Wendroff method (5.14).
Space interval: L = 10
Initial condition: u₀(x) = exp(−(x−3)²)
Space discretization step: Δx = 0.05
Time discretization step: Δt = 0.05
Number of time steps: T = 36
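With the parameters of this example, the conservation form (6.26) can be integrated with a simple first-order upwind step. The sketch below assumes u ≥ 0, so the characteristic speed points to the right and the flux difference is taken backward; boundary treatment is deliberately minimal.

```python
import numpy as np

L, dx, dt, steps = 10.0, 0.05, 0.05, 36
x = np.arange(0.0, L, dx)
u = np.exp(-(x - 3.0)**2)                # initial condition u0(x)

for _ in range(steps):
    F = 0.5 * u**2                       # flux F(u) = u^2/2 of Eq. (6.26)
    u[1:] -= dt / dx * (F[1:] - F[:-1])  # upwind: characteristics run rightwards
```

The upwind step is monotone here because the characteristic speed stays in [0, 1] while Δt/Δx = 1, so no new extrema are created; the shock is captured, though smeared by numerical diffusion.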
[Figure: numerical solution of Eq. (6.26), time evolution of u(x,t) for x ∈ [0,10].]
For piecewise constant Riemann initial data with u_l > u_r, the weak solution of Eq. (6.24) contains a shock travelling with the velocity s = (u_l + u_r)/2,
$$u(x,t) = \begin{cases} u_l\,, & x < s\,t\,;\\ u_r\,, & x > s\,t\,. \end{cases} \qquad (6.28)$$
Space interval: L = 10
Initial condition: u_l = 0.8, u_r = 0.2
Space discretization step: Δx = 0.05
Time discretization step: Δt = 0.05
Number of time steps: T = 100

The initial condition is:
$$u(x,0) = u_0(x) = \begin{cases} 0.8\,, & x < 5\,;\\ 0.2\,, & x \ge 5\,. \end{cases} \qquad (6.29)$$
Fig. 6.3 (a) Numerical solution of the inviscid Burgers equation (6.24) for the Riemann problem with u_l > u_r. (b) Characteristics of Eq. (6.24) with initial conditions (6.29). The red line indicates the curve x = a + ct.
u_l < u_r: In this case there are infinitely many weak solutions. One of them is again a travelling discontinuity of the form (6.28) with the same velocity (see Fig. 6.4 (a)). Note that in this case the characteristics go out of the shock (Fig. 6.4 (b)), and the solution is not stable against perturbations.
Fig. 6.4 (a) Numerical solution of the inviscid Burgers equation (6.24) for the Riemann problem with u_l < u_r. (b) Characteristics of the inviscid Burgers equation with initial conditions of the form (6.29) with u_l < u_r. The red line indicates the curve x = a + ct.
Chapter 7
Wave Equation
Another classical example of a hyperbolic PDE is the wave equation. The wave equation is a second-order linear hyperbolic PDE that describes the propagation of a variety of waves, such as sound or water waves. It arises in different fields such as acoustics, electromagnetics, or fluid dynamics. In its simplest form, the wave equation refers to a scalar function u = u(r,t), r ∈ ℝⁿ, that satisfies
$$\frac{\partial^2 u}{\partial t^2} = c^2\,\nabla^2 u\,. \qquad (7.1)$$
Here ∇² denotes the Laplacian in ℝⁿ and c is a constant speed of the wave propagation. An even more compact form of Eq. (7.1) is given by
$$\Box^2 u = 0\,,$$
where $\Box^2 = \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}$ is the d'Alembertian.
In one spatial dimension the wave equation reads
$$\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}\,. \qquad (7.2)$$
The one-dimensional wave equation (7.2) can be solved exactly by d'Alembert's method, using a Fourier transform method, or via separation of variables. To illustrate the idea of the d'Alembert method, let us introduce new coordinates (ξ, η) by means of the transformation
$$\xi = x - ct\,, \qquad \eta = x + ct\,. \qquad (7.3)$$
In the new coordinates one obtains
$$\frac{1}{c^2}\,u_{tt} = u_{\xi\xi} - 2\,u_{\xi\eta} + u_{\eta\eta}\,,$$
which, together with the analogous expression for $u_{xx}$, reduces Eq. (7.2) to
$$\frac{\partial^2 u}{\partial\xi\,\partial\eta} = 0\,. \qquad (7.4)$$
That is, the function u remains constant along the curves (7.3), i.e., Eq. (7.3) describes the characteristic curves of the wave equation (7.2) (see App. B). Moreover, one can see that the derivative ∂u/∂ξ does not depend on η, i.e.,
$$\frac{\partial}{\partial\eta}\Bigl(\frac{\partial u}{\partial\xi}\Bigr) = 0 \;\Rightarrow\; \frac{\partial u}{\partial\xi} = f(\xi)\,.$$
Integration with respect to ξ yields
$$u(\xi,\eta) = F(\xi) + G(\eta)\,,$$
where F is the primitive function of f, and G is the constant of integration, in general a function of η. Turning back to the coordinates (x, t) one obtains the general solution of Eq. (7.2),
$$u(x,t) = F(x-ct) + G(x+ct)\,. \qquad (7.5)$$
Consider the initial value problem
$$u(x,0) = f(x)\,, \qquad u_t(x,0) = g(x)\,, \qquad t \ge 0\,. \qquad (7.6)$$
To write down the general solution of the IVP for Eq. (7.2), one needs to express the arbitrary functions F and G in terms of the initial data f and g. Using the relation
$$\frac{\partial}{\partial t}\,F(x-ct) = -c\,F'(x-ct)\,, \qquad \text{where}\quad F'(\xi) := \frac{dF(\xi)}{d\xi}\,,$$
one obtains
$$u(x,0) = F(x) + G(x) = f(x)\,; \qquad u_t(x,0) = c\bigl(-F'(x) + G'(x)\bigr) = g(x)\,.$$
After differentiation of the first equation with respect to x one can solve the system in terms of F′(x) and G′(x), i.e.,
$$F'(x) = \frac12\Bigl(f'(x) - \frac1c\,g(x)\Bigr)\,, \qquad G'(x) = \frac12\Bigl(f'(x) + \frac1c\,g(x)\Bigr)\,.$$
Hence
$$F(x) = \frac12\,f(x) - \frac{1}{2c}\int_0^x g(y)\,dy + C\,, \qquad G(x) = \frac12\,f(x) + \frac{1}{2c}\int_0^x g(y)\,dy - C\,,$$
where the integration constant C is chosen in such a way that the initial condition F(x) + G(x) = f(x) is fulfilled. Altogether one obtains
$$u(x,t) = \frac12\bigl(f(x-ct) + f(x+ct)\bigr) + \frac{1}{2c}\int_{x-ct}^{x+ct} g(y)\,dy\,. \qquad (7.7)$$
The standard explicit scheme for Eq. (7.2) is obtained by approximating both second derivatives with central differences,
$$\frac{u_i^{j+1} - 2u_i^j + u_i^{j-1}}{\Delta t^2} = c^2\,\frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{\Delta x^2}\,, \qquad (7.8)$$
which leads, with σ = cΔt/Δx, to
$$u_i^{j+1} = -u_i^{j-1} + 2(1-\sigma^2)\,u_i^j + \sigma^2\bigl(u_{i+1}^j + u_{i-1}^j\bigr)\,. \qquad (7.9)$$
The second initial condition is implemented with the help of a virtual point $u_i^{-1}$,
$$u_t(x_i,0) = g(x_i) = \frac{u_i^1 - u_i^{-1}}{2\Delta t} + O(\Delta t^2)\,. \qquad (7.10)$$
In order to investigate the stability of the scheme (7.9) we use the von Neumann ansatz
$$u_i^j = g^j\,e^{ikx_i}\,,$$
which leads to the following expression for the amplification factor g(k):
$$g^2 = 2(1-\sigma^2)\,g - 1 + 2\sigma^2 g\,\cos(k\Delta x)\,.$$
After several transformations the last expression becomes just a quadratic equation for g, namely
Fig. 7.2 Schematic visualization of the implicit numerical scheme (7.12) for (7.2).
$$g^2 - 2\beta\,g + 1 = 0\,, \qquad (7.11)$$
where
$$\beta = 1 - 2\sigma^2\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\,.$$
The solutions of the equation for g(k) read
$$g_{1,2} = \beta \pm \sqrt{\beta^2 - 1}\,.$$
Notice that if |β| > 1 then at least one of the absolute values of g₁,₂ is bigger than one. Therefore one should require |β| ≤ 1, i.e.,
$$g_{1,2} = \beta \pm i\sqrt{1-\beta^2}$$
and
$$|g|^2 = \beta^2 + 1 - \beta^2 = 1\,.$$
That is, the scheme (7.9) is conditionally stable. The stability condition reads
$$-1 \le 1 - 2\sigma^2\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr) \le 1\,,$$
which is equivalent to the standard CFL condition (5.7),
$$\frac{c\,\Delta t}{\Delta x} \le 1\,.$$
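A minimal implementation of the explicit scheme (7.9), including the virtual-point start-up (7.10), might look as follows. The Gaussian initial pulse is illustrative; σ = 0.5 satisfies the CFL condition.

```python
import numpy as np

c, L, dx, dt, steps = 1.0, 10.0, 0.1, 0.05, 200
s2 = (c * dt / dx)**2                     # sigma^2 = 0.25
x = np.arange(0.0, L + dx, dx)
f = np.exp(-(x - L/2)**2)                 # initial displacement
g = np.zeros_like(x)                      # initial velocity

u_prev = f.copy()
u = f + dt * g                            # first row via the virtual point (7.10)
u[1:-1] += 0.5 * s2 * (f[2:] - 2*f[1:-1] + f[:-2])
u[0] = u[-1] = 0.0                        # fixed ends

for _ in range(steps - 1):
    u_next = np.zeros_like(u)             # boundaries remain zero
    u_next[1:-1] = -u_prev[1:-1] + 2*(1 - s2)*u[1:-1] + s2*(u[2:] + u[:-2])
    u_prev, u = u, u_next
```

The initial pulse splits into two travelling waves of half amplitude, as predicted by the d'Alembert formula (7.7), and they reflect with inverted sign at the fixed ends.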
An implicit scheme for Eq. (7.2) is obtained by replacing the space derivative with the average of the central differences evaluated at the time levels j+1 and j−1,
$$\frac{u_i^{j+1} - 2u_i^j + u_i^{j-1}}{\Delta t^2} = \frac{c^2}{2\Delta x^2}\Bigl[\bigl(u_{i+1}^{j+1} - 2u_i^{j+1} + u_{i-1}^{j+1}\bigr) + \bigl(u_{i+1}^{j-1} - 2u_i^{j-1} + u_{i-1}^{j-1}\bigr)\Bigr]\,. \qquad (7.12)$$
The von Neumann ansatz
$$u_i^j = g^j\,e^{ikx_i}$$
now yields the quadratic equation
$$\gamma\,g^2 - 2g + \gamma = 0\,, \qquad \text{with}\quad \gamma = 1 + 2\sigma^2\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\,.$$
[Fig. 7.3: dispersion relations ωΔt/π versus kΔx/π of the explicit scheme (7.9) and the implicit scheme (7.12), calculated for σ = 1, 0.8, 0.2.]
One can see that γ ≥ 1 for all k. Hence the solutions g₁,₂ take the form
$$g_{1,2} = \frac{1 \pm i\sqrt{\gamma^2 - 1}}{\gamma}$$
and
$$|g|^2 = \frac{1 + (\gamma^2 - 1)}{\gamma^2} = 1\,.$$
That is, the implicit scheme (7.12) is absolutely stable.
Now the question is whether the implicit scheme (7.12) is better than the explicit scheme (7.9) from the numerical point of view. To answer this question, let us analyse the dispersion relation for the wave equation (7.2) as well as for both schemes (7.9) and (7.12). The exact dispersion relation is
$$\omega = c\,k\,,$$
i.e., all Fourier modes propagate without dispersion with the same phase velocity ω/k = c. Using the ansatz $u_i^j \sim e^{ikx_i - i\omega t_j}$ for the explicit method (7.9) one obtains
$$\cos(\omega\Delta t) = 1 - \sigma^2\bigl(1 - \cos(k\Delta x)\bigr)\,, \qquad (7.13)$$
whereas for the implicit method (7.12)
$$\cos(\omega\Delta t) = \frac{1}{1 + \sigma^2\bigl(1 - \cos(k\Delta x)\bigr)}\,. \qquad (7.14)$$
One can see that for σ → 0 both methods provide the same result; otherwise the explicit scheme (7.9) always outperforms the implicit one (see Fig. 7.3). For σ = 1 the scheme (7.9) becomes exact, while (7.12) deviates more and more from the exact value of ω with increasing σ. Hence, for Eq. (7.2) there is no motivation to use the implicit scheme instead of the explicit one.
7.1.3 Examples
Example 1.
Use the explicit method (7.9) to solve the one-dimensional wave equation (7.2):
$$u_{tt} = 4\,u_{xx} \qquad\text{for}\quad x \in [0,L] \quad\text{and}\quad t \in [0,T]\,, \qquad (7.15)$$
with boundary conditions
$$u(0,t) = 0 \qquad\text{and}\qquad u(L,t) = 0\,,$$
and initial conditions
$$u(x,0) = f(x) \qquad\text{and}\qquad u_t(x,0) = g(x) = 0\,.$$
Other parameters are:

Space interval: L = 10
Space discretization step: Δx = 0.1
Time discretization step: Δt = 0.05
Number of time steps: T = 20

First one can find the d'Alembert solution. In the case of zero initial velocity Eq. (7.7) becomes
$$u(x,t) = \frac{f(x-ct) + f(x+ct)}{2}\,,$$
i.e., the solution is just a sum of two travelling waves, each with the initial form f(x)/2. The numerical solution of (7.15) is shown in Fig. 7.4.
Example 2.
Solve Eq. (7.15) with the same boundary conditions. Assume now that the initial distributions of position and velocity are
$$u(x,0) = f(x) = 0 \qquad\text{and}\qquad u_t(x,0) = g(x) = \begin{cases} 0\,, & x \in [0,x_1]\,;\\ g_0\,, & x \in [x_1,x_2]\,;\\ 0\,, & x \in [x_2,L]\,. \end{cases}$$
Other parameters are:

g₀ = 0.5
x₁ = L/4, x₂ = 3L/4
Space interval: L = 10
Space discretization step: Δx = 0.1
Time discretization step: Δt = 0.05
Number of time steps: T = 400
Example 3.
Solve the wave equation
$$u_{tt} = c^2\,u_{xx} \qquad\text{for}\quad x \in [0,L]\,, \qquad (7.16)$$
with boundary conditions u(0,t) = u(L,t) = 0 and initial conditions
$$u(x,0) = f(x) = \sin\Bigl(\frac{n\pi x}{L}\Bigr) \qquad\text{and}\qquad u_t(x,0) = g(x) = 0\,, \qquad n = 1, 2, 3, \ldots\,.$$
Other parameters are:

Space interval: L = 1
Space discretization step: Δx = 0.01
Time discretization step: Δt = 0.0025
Number of time steps: T = 2000
Usually a vibrating string produces a sound whose frequency is constant. Therefore, since frequency characterizes the pitch, the sound produced is a constant note. Vibrating strings are the basis of any string instrument like the guitar or cello. If the speed of propagation c is known, one can calculate the frequency of the sound produced by the string. The speed of propagation of a wave c is equal to the wavelength λ multiplied by the frequency f:
$$c = \lambda\,f\,.$$
If the length of the string is L, the fundamental harmonic is the one produced by the vibration whose nodes are the two ends of the string, so L is half of the wavelength of the fundamental harmonic. Hence
$$f = \frac{c}{2L}\,.$$
Solutions of the equation in question are given in the form of standing waves. A standing wave is a wave that remains in a constant position. This phenomenon can occur because the medium is moving in the opposite direction to the wave, or it can arise in a stationary medium as a result of interference between two waves travelling in opposite directions (see Fig. 7.6).
Fig. 7.6 Standing waves in a string. The fundamental mode and the first five overtones are shown.
The red dots represent the wave nodes.
Finally, we solve the two-dimensional wave equation for u = u(x,y,t) on the rectangular domain [0,L] × [0,L] with Dirichlet boundary conditions. Other parameters are:

Space interval: L = 1
Space discretization step: Δx = Δy = 0.01
Time discretization step: Δt = 0.0025
Number of time steps: T = 2000
Initial condition: u(x,y,0) = 4 x² y (1−x)(1−y)
The numerical solution of the problem for two different time moments, t = 0 and t = 500, can be seen in Fig. 7.7.
Fig. 7.7 Numerical solution of the two-dimensional wave equation, shown for t = 0 and t = 500.
Chapter 8
Sine-Gordon Equation
The sine-Gordon equation is a nonlinear hyperbolic partial differential equation involving the d'Alembert operator and the sine of the unknown function. The equation, as well as several solution techniques, was known in the nineteenth century in the course of the study of various problems of differential geometry. The equation grew greatly in importance in the 1970s, when it was realized that it leads to solitons (the so-called kink and antikink). The sine-Gordon equation appears in a number of physical applications [27, 11, 46], including applications in relativistic field theory, Josephson junctions [39] and mechanical transmission lines [42, 39].
The equation reads
$$u_{tt} - u_{xx} + \sin u = 0\,, \qquad (8.1)$$
where u = u(x, t). In the case of a mechanical transmission line, u(x, t) describes the angle of rotation of the pendulums. Note that in the low-amplitude case (sin u ≈ u) Eq. (8.1) reduces to the Klein-Gordon equation
$$u_{tt} - u_{xx} + u = 0\,,$$
admitting solutions in the form
$$u(x,t) = u_0\cos(kx - \omega t)\,, \qquad \omega = \sqrt{1 + k^2}\,. \qquad (8.2)$$
Looking for travelling-wave solutions of Eq. (8.1) of the form u(x,t) = u(ξ) with ξ = x − ct, one obtains $(1-c^2)\,u_{\xi\xi} = \sin u$. Multiplying by $u_\xi$ and integrating once yields
$$\frac{1-c^2}{2}\,u_\xi^2 = c_1 - \cos u\,,$$
where c₁ is an arbitrary constant of integration. Notice that we look for solutions for which u → 0 and u_ξ → 0 as ξ → ±∞, so c₁ = 1. Now we can rewrite the last equation as
$$\frac{du}{\sin\frac{u}{2}} = \pm\frac{2}{\sqrt{1-c^2}}\,d\xi\,. \qquad (8.3)$$
Integration gives
$$\pm\frac{2\,(\xi - \xi_0)}{\sqrt{1-c^2}} = 2\,\ln\tan\frac{u}{4}\,,$$
or
$$u(\xi) = 4\arctan\exp\Bigl(\pm\frac{\xi - \xi_0}{\sqrt{1-c^2}}\Bigr)\,.$$
That is, the solution of Eq. (8.1) becomes
$$u(x,t) = 4\arctan\exp\Bigl(\pm\frac{x - x_0 - ct}{\sqrt{1-c^2}}\Bigr)\,. \qquad (8.4)$$
Equation (8.4) represents a localized solitary wave, travelling at any velocity |c| < 1. The ± signs correspond to localized solutions which are called kink and antikink, respectively. For the mechanical transmission line, when ξ increases from −∞ to +∞ the pendulums rotate from 0 to 2π for the kink and from 0 to −2π for the antikink (see Fig. 8.1).
One can show [27, 39] that Eq. (8.1) admits more solutions of the form
$$u(x,t) = 4\arctan\frac{F(x)}{G(t)}\,,$$
where F and G are appropriate functions. Namely, one distinguishes the kink-kink and the kink-antikink collisions as well as the breather solution. The kink-kink collision solution reads
$$u(x,t) = 4\arctan\Bigl[\frac{c\,\sinh\bigl(x/\sqrt{1-c^2}\bigr)}{\cosh\bigl(ct/\sqrt{1-c^2}\bigr)}\Bigr] \qquad (8.5)$$
and describes the collision between two kinks with respective velocities c and −c, approaching the origin for t → −∞ and moving away from it with velocities ±c for t → +∞ (see Fig. 8.2). In a similar way one can construct the solution corresponding to the kink-antikink collision. The solution has the form
$$u(x,t) = 4\arctan\Bigl[\frac{\sinh\bigl(ct/\sqrt{1-c^2}\bigr)}{c\,\cosh\bigl(x/\sqrt{1-c^2}\bigr)}\Bigr]\,. \qquad (8.6)$$
The breather solution, which is also called a breather mode or breather soliton [39], is given by
$$u_B(x,t) = 4\arctan\Bigl[\frac{\sqrt{1-\omega^2}}{\omega}\,\frac{\sin(\omega t)}{\cosh\bigl(\sqrt{1-\omega^2}\,x\bigr)}\Bigr]\,, \qquad (8.7)$$
which is periodic in time for frequencies ω < 1 and decays exponentially when moving away from x = 0.
Now we are in a good position to look for numerical solutions of Eq. (8.1).
We supplement Eq. (8.1) with initial conditions
$$u(x,0) = f(x)\,, \qquad u_t(x,0) = g(x)\,. \qquad (8.8)$$
Let us try to apply the simple explicit scheme (7.9) to Eq. (8.1). The discretization scheme reads
$$u_i^{j+1} = -u_i^{j-1} + 2(1-\sigma^2)\,u_i^j + \sigma^2\bigl(u_{i+1}^j + u_{i-1}^j\bigr) - \Delta t^2\,\sin\bigl(u_i^j\bigr) \qquad (8.9)$$
with σ = Δt/Δx, i = 0, …, M and j = 0, …, T. For the implementation of the second initial condition one needs again the virtual point $u_i^{-1}$,
$$u_t(x_i,0) = g(x_i) = \frac{u_i^1 - u_i^{-1}}{2\Delta t} + O(\Delta t^2)\,,$$
which leads to the first time row
$$u_i^1 = (1-\sigma^2)\,f(x_i) + \frac{\sigma^2}{2}\bigl(f(x_{i-1}) + f(x_{i+1})\bigr) + \Delta t\,g(x_i) - \frac{\Delta t^2}{2}\,\sin\bigl(f(x_i)\bigr)\,. \qquad (8.10)$$
In addition, no-flux boundary conditions lead to the following expressions for the two virtual space points $u_{-1}^j$ and $u_{M+1}^j$:
$$\frac{\partial u}{\partial x}\Bigr|_{x=a} = \frac{u_1^j - u_{-1}^j}{2\Delta x} = 0 \;\Rightarrow\; u_{-1}^j = u_1^j\,,$$
$$\frac{\partial u}{\partial x}\Bigr|_{x=b} = \frac{u_{M+1}^j - u_{M-1}^j}{2\Delta x} = 0 \;\Rightarrow\; u_{M+1}^j = u_{M-1}^j\,.$$
One can rewrite the difference scheme in a more general matrix form. In matrix notation the second time row is given by
$$\mathbf{u}^1 = \Delta t\,\boldsymbol{\gamma} + A\,\mathbf{u}^0 - \frac{\Delta t^2}{2}\,\boldsymbol{\Phi}^0\,, \qquad (8.11)$$
where
$$\boldsymbol{\gamma} = \bigl(g(a),\,g(x_1),\,g(x_2),\,\ldots,\,g(x_{M-1}),\,g(b)\bigr)^T\,,$$
$$\boldsymbol{\Phi}^0 = \bigl(\sin(u_0^0),\,\sin(u_1^0),\,\ldots,\,\sin(u_{M-1}^0),\,\sin(u_M^0)\bigr)^T$$
and
$$A = \begin{pmatrix}
1-\sigma^2 & \boxed{\sigma^2} & 0 & \ldots & 0\\
\sigma^2/2 & 1-\sigma^2 & \sigma^2/2 & \ldots & 0\\
0 & \sigma^2/2 & 1-\sigma^2 & \ldots & 0\\
\vdots & & \ddots & & \vdots\\
0 & \ldots & & \boxed{\sigma^2} & 1-\sigma^2
\end{pmatrix}\,.$$
The boxed elements indicate the influence of the boundary conditions. The other time rows can also be written in matrix form as
$$\mathbf{u}^{j+1} = -\mathbf{u}^{j-1} + B\,\mathbf{u}^j - \Delta t^2\,\boldsymbol{\Phi}^j\,, \qquad j = 1,\ldots,T-1\,, \qquad (8.12)$$
where B = 2A and
$$\boldsymbol{\Phi}^j = \bigl(\sin(u_0^j),\,\sin(u_1^j),\,\ldots,\,\sin(u_{M-1}^j),\,\sin(u_M^j)\bigr)^T\,.$$
Now we can apply the explicit scheme (8.9) described above to Eq. (8.1). Let us solve it on the interval [−L, L] with no-flux boundary conditions using the following parameter set:

Space interval: L = 20
Space discretization step: Δx = 0.1
Time discretization step: Δt = 0.05
Number of time steps: T = 1800
Velocity of the kink: c = 0.2

We start with the numerical representation of the kink and antikink solutions. The initial condition for the kink is
$$f(x) = 4\arctan\exp\Bigl(\frac{x}{\sqrt{1-c^2}}\Bigr)\,, \qquad g(x) = -\frac{2c}{\sqrt{1-c^2}}\,\mathrm{sech}\Bigl(\frac{x}{\sqrt{1-c^2}}\Bigr)\,.$$
Figure 8.4 (a) shows the space-time plot of the numerical kink solution. For the antikink the initial condition reads
$$f(x) = 4\arctan\exp\Bigl(-\frac{x}{\sqrt{1-c^2}}\Bigr)\,, \qquad g(x) = \frac{2c}{\sqrt{1-c^2}}\,\mathrm{sech}\Bigl(\frac{x}{\sqrt{1-c^2}}\Bigr)\,.$$
The numerical solution is shown in Fig. 8.4 (b).
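A compact sketch of the kink run (scheme (8.9), first row as in (8.10), no-flux virtual points) is given below; the step counts are illustrative. After integrating up to t ≈ 20 the kink centre, where u = π, has moved to x ≈ ct.

```python
import numpy as np

L, dx, dt, c, steps = 20.0, 0.1, 0.05, 0.2, 400
x = np.arange(-L, L + dx, dx)
s2 = (dt / dx)**2                         # sigma^2 = 0.25
w = np.sqrt(1.0 - c**2)
f = 4.0 * np.arctan(np.exp(x / w))        # kink at t = 0
g = -2.0 * c / w / np.cosh(x / w)         # its initial velocity profile

u_prev = f.copy()
u = f + dt * g                            # first row, cf. (8.10)
u[1:-1] += 0.5 * s2 * (f[2:] - 2*f[1:-1] + f[:-2]) - 0.5 * dt**2 * np.sin(f[1:-1])

for _ in range(steps):
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2*u[1:-1] + u[:-2]
    lap[0] = 2.0 * (u[1] - u[0])          # no-flux: u_{-1} = u_1
    lap[-1] = 2.0 * (u[-2] - u[-1])       # no-flux: u_{M+1} = u_{M-1}
    u_prev, u = u, -u_prev + 2*u + s2 * lap - dt**2 * np.sin(u)
```

The kink shape is preserved and it translates with velocity c, which is easy to verify by locating the crossing u = π.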
Fig. 8.4 Numerical solution of Eq. (8.1), calculated with the scheme (8.9) for the case of (a) the
kink and (b) antikink solitons, moving with the velocity c = 0.2. Space-time information is shown.
Now we are in a position to find numerical solutions corresponding to kink-kink and kink-antikink collisions. For the kink-kink collision we choose
$$f(x) = 4\arctan\exp\Bigl(\frac{x + L/2}{\sqrt{1-c^2}}\Bigr) + 4\arctan\exp\Bigl(\frac{x - L/2}{\sqrt{1-c^2}}\Bigr)\,,$$
$$g(x) = -\frac{2c}{\sqrt{1-c^2}}\,\mathrm{sech}\Bigl(\frac{x + L/2}{\sqrt{1-c^2}}\Bigr) + \frac{2c}{\sqrt{1-c^2}}\,\mathrm{sech}\Bigl(\frac{x - L/2}{\sqrt{1-c^2}}\Bigr)\,,$$
whereas for the kink-antikink collision the initial conditions are
$$f(x) = 4\arctan\exp\Bigl(\frac{x + L/2}{\sqrt{1-c^2}}\Bigr) + 4\arctan\exp\Bigl(-\frac{x - L/2}{\sqrt{1-c^2}}\Bigr)\,,$$
$$g(x) = -\frac{2c}{\sqrt{1-c^2}}\,\mathrm{sech}\Bigl(\frac{x + L/2}{\sqrt{1-c^2}}\Bigr) - \frac{2c}{\sqrt{1-c^2}}\,\mathrm{sech}\Bigl(\frac{x - L/2}{\sqrt{1-c^2}}\Bigr)\,.$$
The numerical solutions corresponding to both cases are presented in Fig. 8.5 (a)-(b), respectively.
Fig. 8.5 Space-time representation of the numerical solution of Eq. (8.1) for (a) the kink-kink collision and (b) the kink-antikink collision.

Finally, for the case of the breather we choose
$$f(x) = 0\,, \qquad g(x) = 4\sqrt{1-c^2}\,\mathrm{sech}\bigl(\sqrt{1-c^2}\,x\bigr)\,.$$
The corresponding numerical solution is presented in Fig. 8.6.
Chapter 9
Korteweg-de Vries Equation
The Korteweg-de Vries (KdV) equation is a partial differential equation derived by Korteweg and de Vries [26] to describe weakly nonlinear shallow-water waves. The nondimensionalized version of the equation reads
$$\frac{\partial u}{\partial t} = 6u\,\frac{\partial u}{\partial x} - \frac{\partial^3 u}{\partial x^3}\,, \qquad (9.1)$$
where u = u(x, t). The factor of 6 is convenient for reasons of complete integrability, but can easily be scaled out if desired. Equation (9.1) was found to have solitary wave solutions, vindicating the observations of a solitary channel wave made by Russell [41].
Looking for localized travelling-wave solutions u(x,t) = u(ξ), ξ = x − ct, one finds
$$u(\xi) = -\frac{c}{2}\,\mathrm{sech}^2\Bigl(\frac{\sqrt{c}}{2}\,(\xi - \xi_0)\Bigr)\,,$$
where ξ₀ is an arbitrary constant. In (x, t) coordinates the travelling-wave solution reads
$$u(x,t) = -\frac{c}{2}\,\mathrm{sech}^2\Bigl(\frac{\sqrt{c}}{2}\,(x - x_0 - ct)\Bigr)\,. \qquad (9.2)$$
Equation (9.2) describes a localized travelling-wave solution with negative amplitude (see Fig. 9.1 (a)), which is called a soliton. The term soliton was first introduced by Zabusky and Kruskal [53], who studied Eq. (9.1) with periodic boundary conditions numerically. They found [53, 46, 27] that an initial condition of the form u(x, 0) = cos(2πx/L), x ∈ [0, L], broke up into a train of solitary waves with successively larger amplitudes. Moreover, the solitons seemed to be almost unaffected in shape by passing through each other (though this could cause a change in their positions). An example of a two-soliton solution is shown in Fig. 9.1 (b).
Fig. 9.1 Solitary solutions of the KdV equation (9.1). (a) A single-soliton solution (9.2) for c = 2, calculated at t = 1. (b) A two-soliton solution, calculated at t = 0.3.
The simplest explicit discretization of Eq. (9.1) uses a forward difference in time and central differences in space,
$$u_i^{j+1} = u_i^j + 3\,\frac{\Delta t}{\Delta x}\,u_i^j\bigl(u_{i+1}^j - u_{i-1}^j\bigr) - \frac{\Delta t}{2\Delta x^3}\bigl(u_{i+2}^j - 2u_{i+1}^j + 2u_{i-1}^j - u_{i-2}^j\bigr)\,. \qquad (9.3)$$
Since Eq. (9.1) is nonlinear, a direct verification of the stability of the scheme (9.3) with the help of the von Neumann analysis (see Sec. 4.3) is not possible. However, one can examine the stability of the linear equation
$$u_t = -u_{xxx}\,. \qquad (9.4)$$
Using the usual ansatz (4.21), the following criterion for Eq. (9.4) can be obtained [24]:
$$\Delta t \le \frac{1}{m}\,\Delta x^3\,, \qquad (9.5)$$
where
$$m = \max_k\,\bigl|\sin(2k\Delta x) - 2\sin(k\Delta x)\bigr| = \frac{3\sqrt{3}}{2} \approx 2.6\,.$$
That is, the linear equation (9.4) is conditionally stable, which is not surprising for explicit schemes. However, if we apply the scheme (9.3), one can see that after several integration steps a numerical instability occurs (see Fig. 9.2). That is, the scheme (9.3) is unstable and has to be modified.
The first idea is to modify the relation for the time derivative. As was mentioned above, the direct usage of the central difference formula at the first step is impossible due to the initial condition. On the other hand, the artificial point $u_i^{-1}$ is essential only at the first time step. Hence, at the first time step (j = 0) the scheme (9.3) can be used, whereas for j = 1, …, T the central difference formula is applied:
$$\frac{\partial u}{\partial t} \approx \frac{u_i^{j+1} - u_i^{j-1}}{2\Delta t}\,.$$
In addition, we replace $u_i^j$ in the nonlinear term on the right-hand side by the average
$$u_i^j \approx \frac13\bigl(u_{i-1}^j + u_i^j + u_{i+1}^j\bigr)\,.$$
That is, the final modified scheme reads
$$u_i^{j+1} = u_i^{j-1} + 2\,\frac{\Delta t}{\Delta x}\bigl(u_{i-1}^j + u_i^j + u_{i+1}^j\bigr)\bigl(u_{i+1}^j - u_{i-1}^j\bigr) - \frac{\Delta t}{\Delta x^3}\bigl(u_{i+2}^j - 2u_{i+1}^j + 2u_{i-1}^j - u_{i-2}^j\bigr)\,. \qquad (9.6)$$
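One leapfrog step of the scheme (9.6) can be sketched as follows; periodic boundary conditions are assumed and implemented with array rolls (the names are illustrative).

```python
import numpy as np

def zk_step(u, u_prev, dt, dx):
    """One leapfrog step of the Zabusky-Kruskal scheme (9.6) for
    u_t = 6 u u_x - u_xxx with periodic boundary conditions."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    nonlinear = 2.0 * dt / dx * (um1 + u + up1) * (up1 - um1)
    dispersive = dt / dx**3 * (up2 - 2*up1 + 2*um1 - um2)
    return u_prev + nonlinear - dispersive
```

The average (u_{i−1} + u_i + u_{i+1})/3 in the nonlinear term makes the scheme conserve the discrete mass Σᵢ uᵢ exactly on a periodic grid; the time step has to respect the criterion (9.5).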
Let us apply the modified scheme (9.6) to Eq. (9.1) for the case of a two-soliton solution. That is, we solve Eq. (9.1) on the interval x ∈ [−L, L] according to

Space interval: L = 10
Space discretization step: Δx = 0.18
Time discretization step: Δt = 2·10⁻³
Number of time steps: T = 10⁵

starting from a two-soliton initial condition. The resulting space-time evolution of u(x,t) is shown in the corresponding figure.
Chapter 10
Diffusion Equation
The diffusion equation is a partial differential equation which describes density fluctuations in a material undergoing diffusion. The equation can be written as
$$\frac{\partial u(\mathbf{r},t)}{\partial t} = \nabla\cdot\bigl(D(u(\mathbf{r},t),\mathbf{r})\,\nabla u(\mathbf{r},t)\bigr)\,, \qquad (10.1)$$
where u(r,t) is the density of the diffusing material at location r = (x, y, z) and time t, and D(u(r,t), r) denotes the collective diffusion coefficient for density u at location r. If the diffusion coefficient does not depend on the density, i.e., D is constant, then Eq. (10.1) reduces to the following linear equation:
$$\frac{\partial u(\mathbf{r},t)}{\partial t} = D\,\nabla^2 u(\mathbf{r},t)\,. \qquad (10.2)$$
Equation (10.2) is also called the heat equation and also describes the distribution of heat in a given region over time.
Equation (10.2) can be derived in a straightforward way from the continuity equation, which states that a change of density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed:
$$\frac{\partial u}{\partial t} + \nabla\cdot\mathbf{j} = 0\,,$$
where j is the flux of the diffusing material. Equation (10.2) can be obtained easily from the last equation when combined with the phenomenological Fick's first law, which assumes that the flux of the diffusing material in any part of the system is proportional to the local density gradient:
$$\mathbf{j} = -D\,\nabla u(\mathbf{r},t)\,.$$
We first consider the one-dimensional diffusion equation
$$\frac{\partial u(x,t)}{\partial t} = D\,\frac{\partial^2 u(x,t)}{\partial x^2} \qquad (10.3)$$
on the interval x ∈ [0, L] with initial condition
$$u(x,0) = f(x)\,, \qquad x \in [0,L]\,, \qquad (10.4)$$
and boundary conditions
$$u(0,t) = u(L,t) = 0\,, \qquad t > 0\,. \qquad (10.5)$$
We use the separation ansatz
$$u(x,t) = X(x)\,T(t)\,, \qquad (10.6)$$
which leads to
$$X''(x) + \lambda\,X(x) = 0\,, \qquad T'(t) + D\lambda\,T(t) = 0\,. \qquad (10.7)$$
Let us consider the first equation for X(x). Taking into account the boundary conditions (10.5) one obtains (T(t) ≢ 0, as we are looking for nontrivial solutions)
$$u(0,t) = X(0)T(t) = 0 \;\Rightarrow\; X(0) = 0\,, \qquad u(L,t) = X(L)T(t) = 0 \;\Rightarrow\; X(L) = 0\,.$$
Three cases have to be distinguished:
1. λ < 0:
$$X(x) = C_1\,e^{\sqrt{-\lambda}\,x} + C_2\,e^{-\sqrt{-\lambda}\,x}\,.$$
Taking into account the boundary conditions one gets C₁ = C₂ = 0, so for λ < 0 only the trivial solution exists.
2. λ = 0:
$$X(x) = C_1\,x + C_2\,.$$
Again, due to the boundary conditions, one gets only the trivial solution of the problem (C₁ = C₂ = 0).
3. λ > 0:
$$X(x) = C_1\cos(\sqrt{\lambda}\,x) + C_2\sin(\sqrt{\lambda}\,x)\,.$$
The condition X(0) = 0 gives C₁ = 0, whereas
$$X(L) = C_2\sin(\sqrt{\lambda}\,L) = 0 \;\Rightarrow\; \sqrt{\lambda_n} = \frac{n\pi}{L}\,, \qquad n = 1, 2, \ldots\,.$$
Hence
$$X_n(x) = C_n\sin\Bigl(\frac{n\pi}{L}\,x\Bigr)\,.$$
That is, the second equation for the function T(t) takes the form
$$T'(t) + D\Bigl(\frac{n\pi}{L}\Bigr)^2\,T(t) = 0 \;\Rightarrow\; T_n(t) = B_n\exp\Bigl(-D\Bigl(\frac{n\pi}{L}\Bigr)^2 t\Bigr)\,,$$
where Bₙ is a constant.
Altogether, the general solution of the problem (10.3) can be written as
$$u(x,t) = \sum_{n=1}^{\infty} A_n\sin\Bigl(\frac{n\pi}{L}\,x\Bigr)\exp\Bigl(-D\Bigl(\frac{n\pi}{L}\Bigr)^2 t\Bigr)\,, \qquad A_n = \text{const}\,.$$
In order to find Aₙ one can use the initial condition (10.4). Indeed, if we write the function f(x) as a Fourier series, we obtain
$$f(x) = \sum_{n=1}^{\infty} F_n\sin\Bigl(\frac{n\pi}{L}\,x\Bigr) = \sum_{n=1}^{\infty} A_n\sin\Bigl(\frac{n\pi}{L}\,x\Bigr)\,,$$
$$A_n = F_n = \frac{2}{L}\int_0^L f(\xi)\sin\Bigl(\frac{n\pi}{L}\,\xi\Bigr)\,d\xi\,.$$
The solution finally reads
$$u(x,t) = \sum_{n=1}^{\infty}\Bigl[\frac{2}{L}\int_0^L f(\xi)\sin\Bigl(\frac{n\pi}{L}\,\xi\Bigr)d\xi\Bigr]\sin\Bigl(\frac{n\pi}{L}\,x\Bigr)\exp\Bigl(-D\Bigl(\frac{n\pi}{L}\Bigr)^2 t\Bigr)\,. \qquad (10.8)$$
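The series (10.8) is easy to evaluate numerically and serves as a reference solution for the schemes discussed below. The sketch computes the sine coefficients by a simple quadrature; the grid resolution and truncation order are illustrative.

```python
import numpy as np

def heat_series(x, t, f, L, D, n_terms=50):
    """Evaluate the separation-of-variables solution (10.8); the Fourier
    sine coefficients of f are computed by simple numerical quadrature."""
    xi = np.linspace(0.0, L, 2000, endpoint=False)
    dxi = L / 2000.0
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        An = 2.0 / L * np.sum(f(xi) * np.sin(n*np.pi*xi/L)) * dxi
        u += An * np.sin(n*np.pi*x/L) * np.exp(-D * (n*np.pi/L)**2 * t)
    return u
```

For f(x) = sin(πx/L) only the n = 1 coefficient survives, so the series reduces to a single exponentially decaying mode.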
The simplest explicit scheme for Eq. (10.3) is the FTCS (forward-time centered-space) scheme,
$$\frac{u_i^{j+1} - u_i^j}{\Delta t} = D\,\frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{\Delta x^2}\,,$$
or, with γ = DΔt/Δx²,
$$u_i^{j+1} = (1-2\gamma)\,u_i^j + \gamma\bigl(u_{i+1}^j + u_{i-1}^j\bigr)\,. \qquad (10.9)$$
In order to check the stability of the scheme (10.9) we apply again the ansatz (4.21) (see Sec. 4.3), considering a single Fourier mode, and obtain the following equation for the amplification factor g(k):
$$g = (1-2\gamma) + 2\gamma\cos(k\Delta x)\,,$$
from which
$$g(k) = 1 - 4\gamma\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\,.$$
The stability condition for the method (10.9) reads
$$|g(k)| \le 1\,,$$
which results in
$$\gamma \le \frac12 \quad\Longleftrightarrow\quad \Delta t \le \frac{\Delta x^2}{2D}\,. \qquad (10.10)$$

[Fig. 10.2: exact versus numerical dispersion relation of the FTCS scheme, plotted for γ = 0.1, 0.2, 0.3, 0.4.]
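A minimal FTCS run with a time step safely below the limit (10.10) can be sketched as follows; the initial profile anticipates Example 1 below, and the step counts are illustrative.

```python
import numpy as np

D, L, M = 1.0, 1.0, 50
dx = L / M
dt = 0.4 * dx**2 / (2 * D)          # safely below the stability limit (10.10)
gamma = D * dt / dx**2              # gamma = 0.2 here

x = np.linspace(0.0, L, M + 1)
u = 4.0 * x * (1.0 - x)             # initial profile, cf. Example 1 below
for _ in range(100):                # ends at x=0 and x=L stay at zero
    u[1:-1] = (1 - 2*gamma) * u[1:-1] + gamma * (u[2:] + u[:-2])
```

Because γ ≤ 1/2 all coefficients of the update are nonnegative, so each new value is a convex combination of old values: the solution stays positive and decays monotonically towards zero.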
Although the method (10.9) is conditionally stable, the derived stability condition (10.10) hides an uncomfortable property: a doubling of the spatial resolution Δx requires a simultaneous reduction of the time step Δt by a factor of four in order to maintain numerical stability. Certainly, the above constraint limits us to absurdly small time steps in high-resolution calculations.
The next point to emphasize is numerical dispersion. Indeed, let us compare the exact dispersion relation for Eq. (10.3) with the relation obtained by means of the scheme (10.9). If we consider perturbations of the form exp(ikx − iωt), the dispersion relation for Eq. (10.3) reads
$$-i\omega = -D\,k^2\,.$$
On the other hand, the FTCS scheme (10.9) leads to the following relation:
$$e^{-i\omega\Delta t} = 1 - 4\gamma\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\,,$$
or, in other words,
$$-i\omega\,\Delta t = \ln\Bigl[1 - 4\gamma\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\Bigr]\,.$$
The comparison between the exact and numerical dispersion relations is shown in Fig. 10.2. One can see that both relations are in good agreement only for kΔx ≪ 1. For γ > 0.25 the method is still stable, but the values of ω can become complex, i.e., the Fourier modes drop off performing damped oscillations (see Fig. 10.2 for γ = 0.3 and γ = 0.4). Now, if we try to make the time step smaller, in the limit Δt → 0 (or γ → 0) we obtain
$$-i\omega\,\Delta t \approx -4\gamma\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr) \quad\Longrightarrow\quad -i\omega \approx -D\,k^2\,\frac{\sin^2\bigl(\frac{k\Delta x}{2}\bigr)}{\bigl(\frac{k\Delta x}{2}\bigr)^2}\,,$$
i.e., we get the correct dispersion relation only if the space step Δx is small enough, too.
One could try to improve the accuracy in time by using the central difference (leapfrog) scheme
$$\frac{u_i^{j+1} - u_i^{j-1}}{2\Delta t} = D\,\frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{\Delta x^2}\,,$$
or, with γ = DΔt/Δx²,
$$u_i^{j+1} = u_i^{j-1} + 2\gamma\bigl(u_{i+1}^j - 2u_i^j + u_{i-1}^j\bigr)\,. \qquad (10.11)$$
Unfortunately, one can show that the scheme (10.11) is unconditionally unstable. Indeed, the amplification factor g(k) in this case fulfils the following equation:
$$g^2 + 2\alpha\,g - 1 = 0\,, \qquad \alpha = 4\gamma\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\,,$$
giving
$$g_{1,2} = -\alpha \pm \sqrt{\alpha^2 + 1}\,.$$
Since |g₂(k)| > 1 for all values of k, the scheme (10.11) is unconditionally unstable.
The leapfrog scheme can be stabilized by replacing $u_i^j$ in the diffusive term by the time average $\bigl(u_i^{j+1} + u_i^{j-1}\bigr)/2$, which results in the DuFort-Frankel scheme
$$\frac{u_i^{j+1} - u_i^{j-1}}{2\Delta t} = D\,\frac{u_{i+1}^j - \bigl(u_i^{j+1} + u_i^{j-1}\bigr) + u_{i-1}^j}{\Delta x^2}\,,$$
which can be solved explicitly for $u_i^{j+1}$:
$$u_i^{j+1} = \frac{1-\alpha}{1+\alpha}\,u_i^{j-1} + \frac{\alpha}{1+\alpha}\bigl(u_{i+1}^j + u_{i-1}^j\bigr)\,, \qquad (10.12)$$
where α = 2DΔt/Δx². When the usual von Neumann stability analysis is applied to the method (10.12), the amplification factor g(k) can be found from
$$(1+\alpha)\,g^2 - 2\alpha\,g\cos(k\Delta x) + (\alpha - 1) = 0\,.$$
It can easily be shown that the stability condition is fulfilled for all values of α, so the method (10.12) is unconditionally stable. However, this does not imply that Δx and Δt can be made indefinitely large; we must still worry about the accuracy of the method. Indeed, consider the Taylor expansion.
Substituting the Taylor expansions of $u_i^{j\pm1}$ and $u_{i\pm1}^j$ into
$$\frac{u_i^{j+1} - u_i^{j-1}}{2\Delta t} = D\,\frac{u_{i+1}^j - u_i^{j+1} - u_i^{j-1} + u_{i-1}^j}{\Delta x^2}$$
yields
$$u_t + O(\Delta t^2) = D\,u_{xx} + O(\Delta x^2) - D\,\frac{\Delta t^2}{\Delta x^2}\,u_{tt} + O\Bigl(\frac{\Delta t^4}{\Delta x^2}\Bigr)\,.$$
In other words, the method (10.12) has order of accuracy
$$O\Bigl(\Delta t^2,\;\Delta x^2,\;\frac{\Delta t^2}{\Delta x^2}\Bigr)\,.$$
For consistency one needs Δt/Δx → 0 as Δt → 0 and Δx → 0; otherwise (10.12) is inconsistent. This constitutes an effective restriction on Δt. For large Δt, however, the scheme (10.12) is consistent with another equation, of the form
$$D\,\frac{\Delta t^2}{\Delta x^2}\,u_{tt} + u_t = D\,u_{xx}\,. \qquad (10.13)$$
A first implicit method is obtained by evaluating the space derivative at the new time level (the backward-time centered-space, or BTCS, scheme):
$$\frac{u_i^{j+1} - u_i^j}{\Delta t} = D\,\frac{u_{i+1}^{j+1} - 2u_i^{j+1} + u_{i-1}^{j+1}}{\Delta x^2}\,.$$
The von Neumann analysis gives the amplification factor
$$g(k) = \Bigl[1 + 4\gamma\sin^2\Bigl(\frac{k\Delta x}{2}\Bigr)\Bigr]^{-1}\,.$$
That is, the BTCS scheme is unconditionally stable. However, the method has order of accuracy O(Δt, Δx²), i.e., first order in time and second order in space. Is it possible to improve it? The answer is given below.
\[
\frac{u_i^{j+1}-u_i^{j}}{\Delta t} \;=\; D\,\frac{u_{i+1}^{j+1/2}-2u_i^{j+1/2}+u_{i-1}^{j+1/2}}{\Delta x^2}\,.
\]

The approximation used for the space derivative is just an average of the approximations at the points (x_i, t_j) and (x_i, t_{j+1}):

\[
\frac{u_i^{n+1}-u_i^{n}}{\Delta t} \;=\; D\,\frac{\big(u_{i+1}^{n+1}-2u_i^{n+1}+u_{i-1}^{n+1}\big)+\big(u_{i+1}^{n}-2u_i^{n}+u_{i-1}^{n}\big)}{2\,\Delta x^2}\,. \qquad (10.14)
\]
All terms on the right-hand side of Eq. (10.14) are known. Hence, the equations in (10.14) form a tridiagonal linear system

\[
A\,u = b\,.
\]

The amplification factor for Eq. (10.14) reads

\[
g(k) = \frac{1-\gamma\,(1-\cos k\Delta x)}{1+\gamma\,(1-\cos k\Delta x)}\,.
\]

Since γ and 1 − cos kΔx are non-negative, the denominator of the last expression is always at least as large as the numerator in absolute value. That is, |g| ≤ 1, i.e., the method (10.14) is unconditionally stable.
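The three amplification factors derived above can be compared numerically. The following is a minimal sketch (the value of γ is an arbitrary choice, deliberately above the explicit stability limit 1/2):

```python
import numpy as np

gamma = 2.0                              # gamma = D*dt/dx^2, above the FTCS limit 1/2
theta = np.linspace(1e-6, np.pi, 400)    # theta = k*dx over one Brillouin zone
s = np.sin(theta / 2)**2

g_ftcs = 1 - 4 * gamma * s                         # explicit FTCS
g_btcs = 1 / (1 + 4 * gamma * s)                   # implicit BTCS (10.13)
g_cn = (1 - 2 * gamma * s) / (1 + 2 * gamma * s)   # Crank-Nicolson (10.14)

print(np.abs(g_ftcs).max() > 1)    # True: FTCS is unstable for gamma > 1/2
print(np.abs(g_btcs).max() <= 1)   # True: BTCS is unconditionally stable
print(np.abs(g_cn).max() <= 1)     # True: Crank-Nicolson is unconditionally stable
```

Note that 1 − cos kΔx = 2 sin²(kΔx/2), which is why the Crank-Nicolson factor appears here with 2γs.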
10.1.3 Examples

Example 1

Use the FTCS explicit method (10.9) to solve the one-dimensional heat equation

\[
u_t = u_{xx}\,,
\]

on the interval x ∈ [0, L], if the initial heat distribution is given by u(x, 0) = f(x), and the temperature at both ends of the interval is u(0,t) = T_l, u(L,t) = T_r. The other parameters are chosen according to the table below:

Space interval             L = 1
Number of space points     M = 10
Number of time steps       T = 30
Boundary conditions        T_l = T_r = 0
Initial heat distribution  f(x) = 4x(1 − x)
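The explicit update for this example can be sketched in Python. This is a minimal sketch: the grid sizes follow the table above, and the choice γ = 0.4 (which fixes Δt = γΔx²) is an assumption made here to stay below the stability limit 1/2:

```python
import numpy as np

# Parameters from the example table
L, M, T = 1.0, 10, 30
Tl = Tr = 0.0
dx = L / M
gamma = 0.4                # gamma = dt/dx^2 must not exceed 1/2 for stability
dt = gamma * dx**2

x = np.linspace(0.0, L, M + 1)
u = 4.0 * x * (1.0 - x)    # initial heat distribution f(x) = 4x(1-x)
u[0], u[-1] = Tl, Tr       # Dirichlet boundary conditions

for _ in range(T):
    # FTCS: u_i^{j+1} = u_i^j + gamma*(u_{i+1}^j - 2 u_i^j + u_{i-1}^j)
    u[1:-1] += gamma * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(u.max())  # the maximum decays from 1.0 towards 0
```

Only the interior points are updated, so the boundary values stay fixed at T_l and T_r throughout.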
Example 2

Use the implicit BTCS method (10.13) to solve the one-dimensional diffusion equation

\[
u_t = u_{xx}\,,
\]

on the interval x ∈ [−L, L], if the initial distribution is a Gaussian pulse of the form u(x, 0) = exp(−x²) and the flux at both ends of the interval vanishes, u_x(−L,t) = u_x(L,t) = 0. For the other parameters see the table below:

Space interval             L = 5
Space discretization step  Δx = 0.1
Time discretization step   Δt = 0.05
Number of time steps       T = 200

The solution of the problem is shown in Fig. 10.7.
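One way to implement the BTCS step is sketched below. This is a sketch under two assumptions not spelled out in the text: the no-flux boundaries are realized by mirroring the neighbour into the boundary rows (entries −2γ), and the tridiagonal system is solved with a generic dense solver instead of the TDMA:

```python
import numpy as np

L, dx, dt, T = 5.0, 0.1, 0.05, 200
n = int(2 * L / dx) + 1
gamma = dt / dx**2              # D = 1

x = np.linspace(-L, L, n)
u = np.exp(-x**2)               # Gaussian initial pulse

# BTCS matrix: (1 + 2*gamma) on the diagonal, -gamma off-diagonal;
# the first/last rows use -2*gamma to mimic the no-flux boundaries
A = (np.diag((1 + 2 * gamma) * np.ones(n))
     + np.diag(-gamma * np.ones(n - 1), 1)
     + np.diag(-gamma * np.ones(n - 1), -1))
A[0, 1] = A[-1, -2] = -2 * gamma

for _ in range(T):
    u = np.linalg.solve(A, u)   # implicit step: A u^{j+1} = u^j

print(u.max())  # the pulse flattens out; no-flux walls keep the heat inside
```

Because A is diagonally dominant with non-positive off-diagonal entries, the solution stays positive while the maximum decays, as expected for pure diffusion.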
Example 3

Use the Crank-Nicolson method (10.14) to solve the one-dimensional heat equation

\[
u_t = 1.44\,u_{xx}\,,
\]

on the interval x ∈ [0, L], if the initial heat distribution is u(x, 0) = f(x) and, again, the temperature at both ends of the interval is given as u(0,t) = T_l, u(L,t) = T_r. The other parameters are chosen as:

Space interval             L = 1
Space discretization step  Δx = 0.1
Time discretization step   Δt = 0.05
Number of time steps       T = 15
Boundary conditions        T_l = 2, T_r = 0.5
Initial heat distribution  f(x) = 2 − 1.5x + sin(πx)
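A compact sketch of the Crank-Nicolson step for this example (an assumption made here: the Dirichlet boundaries are kept fixed by freezing the first and last rows of the discrete Laplacian, which is one of several equivalent ways to impose them):

```python
import numpy as np

D, L, dx, dt, T = 1.44, 1.0, 0.1, 0.05, 15
Tl, Tr = 2.0, 0.5
n = int(L / dx) + 1
gamma = D * dt / dx**2

x = np.linspace(0.0, L, n)
u = 2.0 - 1.5 * x + np.sin(np.pi * x)   # f(x) = 2 - 1.5x + sin(pi x)
u[0], u[-1] = Tl, Tr

# Crank-Nicolson: (I + gamma/2 * K) u^{n+1} = (I - gamma/2 * K) u^n,
# where K is the 1D discrete Laplacian; boundary rows are zeroed so that
# the boundary values are carried over unchanged each step
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
K[0, :] = K[-1, :] = 0.0
A = np.eye(n) + 0.5 * gamma * K
B = np.eye(n) - 0.5 * gamma * K

for _ in range(T):
    u = np.linalg.solve(A, B @ u)

print(u[0], u[-1])  # boundaries stay at Tl and Tr
```

In practice both A and B would be stored as tridiagonal bands and the solve done with the TDMA of Appendix A; the dense form is used here only for brevity.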
Consider now the two-dimensional diffusion equation

\[
\frac{\partial u}{\partial t} \;=\; D\Big(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\Big), \qquad (10.15)
\]

where u = u(x, y, t), x ∈ [a_x, b_x], y ∈ [a_y, b_y]. Suppose that the initial condition is given and the function u satisfies boundary conditions in both the x- and the y-direction.

As before, we discretize in time on the uniform grid t_n = t_0 + nΔt, n = 0, 1, 2, .... Furthermore, in both the x- and the y-direction we use the uniform grids

\[
x_i = x_0 + i\,\Delta x,\quad i = 0,\dots,M, \qquad \Delta x = \frac{b_x-a_x}{M+1}\,,
\]
\[
y_j = y_0 + j\,\Delta y,\quad j = 0,\dots,N, \qquad \Delta y = \frac{b_y-a_y}{N+1}\,.
\]

The explicit FTCS scheme becomes

\[
\frac{u_{ij}^{n+1}-u_{ij}^{n}}{\Delta t} \;=\; D\Big(\frac{u_{i+1,j}^{n}-2u_{ij}^{n}+u_{i-1,j}^{n}}{\Delta x^2}+\frac{u_{i,j+1}^{n}-2u_{ij}^{n}+u_{i,j-1}^{n}}{\Delta y^2}\Big). \qquad (10.16)
\]

The usual von Neumann ansatz leads to the stability condition

\[
D\,\Delta t\Big(\frac{1}{\Delta x^2}+\frac{1}{\Delta y^2}\Big)\le\frac{1}{2}
\quad\Longleftrightarrow\quad
\Delta t \le \frac{\Delta x^2\,\Delta y^2}{2D\,(\Delta x^2+\Delta y^2)}\,, \qquad (10.17)
\]

which for Δx = Δy reduces to Δt ≤ Δx²/(4D). The implicit BTCS scheme reads

\[
-\gamma_1\big(u_{i+1,j}^{n+1}+u_{i-1,j}^{n+1}\big) + (1+2\gamma_1+2\gamma_2)\,u_{ij}^{n+1} - \gamma_2\big(u_{i,j+1}^{n+1}+u_{i,j-1}^{n+1}\big) = u_{ij}^{n}\,, \qquad (10.18)
\]

with γ₁ = DΔt/Δx² and γ₂ = DΔt/Δy².
Let us consider the approximation (10.18) on the 5×5 grid, i.e., i, j = 0, ..., 4. Moreover, suppose that Dirichlet boundary conditions are given, that is, all values u_{0j}, u_{4j}, u_{i0}, u_{i4} are known. Suppose also that n = 1 and define β = 1 + 2γ₁ + 2γ₂. Then the approximation above leads to the nine algebraic equations (one for each interior point, with the known boundary values moved to the right-hand side):

\[
\beta\,u_{ij}^{2} - \gamma_1\big(u_{i+1,j}^{2}+u_{i-1,j}^{2}\big) - \gamma_2\big(u_{i,j+1}^{2}+u_{i,j-1}^{2}\big) = u_{ij}^{1}\,, \qquad i, j = 1, 2, 3,
\]

where the superscript denotes the time level.
Formally, one can rewrite the system above in the matrix form Au = b. Ordering the unknowns as u = (u²₁₁, u²₁₂, u²₁₃, u²₂₁, ..., u²₃₃)ᵀ, the 9×9 matrix reads

\[
A=\begin{pmatrix}
\beta & -\gamma_2 & 0 & -\gamma_1 & 0 & 0 & 0 & 0 & 0\\
-\gamma_2 & \beta & -\gamma_2 & 0 & -\gamma_1 & 0 & 0 & 0 & 0\\
0 & -\gamma_2 & \beta & 0 & 0 & -\gamma_1 & 0 & 0 & 0\\
-\gamma_1 & 0 & 0 & \beta & -\gamma_2 & 0 & -\gamma_1 & 0 & 0\\
0 & -\gamma_1 & 0 & -\gamma_2 & \beta & -\gamma_2 & 0 & -\gamma_1 & 0\\
0 & 0 & -\gamma_1 & 0 & -\gamma_2 & \beta & 0 & 0 & -\gamma_1\\
0 & 0 & 0 & -\gamma_1 & 0 & 0 & \beta & -\gamma_2 & 0\\
0 & 0 & 0 & 0 & -\gamma_1 & 0 & -\gamma_2 & \beta & -\gamma_2\\
0 & 0 & 0 & 0 & 0 & -\gamma_1 & 0 & -\gamma_2 & \beta
\end{pmatrix},
\]

and the right-hand side b contains the old time level together with the known boundary values, e.g., b₁ = u¹₁₁ + γ₁u²₀₁ + γ₂u²₁₀ for the first row.
The matrix A is a five-band matrix. Nevertheless, despite the fact that the scheme is unconditionally stable, two of the five bands lie so far from the main diagonal that simple O(n) algorithms like the TDMA are difficult or even impossible to apply.
The idea of the alternating direction implicit (ADI) method is to split each time step into two half-steps, treating one direction implicitly at a time:

\[
\frac{u_{ij}^{n+1/2}-u_{ij}^{n}}{\Delta t/2} \;=\; D\Big(\frac{u_{i+1,j}^{n+1/2}-2u_{ij}^{n+1/2}+u_{i-1,j}^{n+1/2}}{\Delta x^2}+\frac{u_{i,j+1}^{n}-2u_{ij}^{n}+u_{i,j-1}^{n}}{\Delta y^2}\Big),
\]
\[
\frac{u_{ij}^{n+1}-u_{ij}^{n+1/2}}{\Delta t/2} \;=\; D\Big(\frac{u_{i+1,j}^{n+1/2}-2u_{ij}^{n+1/2}+u_{i-1,j}^{n+1/2}}{\Delta x^2}+\frac{u_{i,j+1}^{n+1}-2u_{ij}^{n+1}+u_{i,j-1}^{n+1}}{\Delta y^2}\Big).
\]

With

\[
\gamma_1 = \frac{D\,\Delta t}{2\Delta x^2}\,, \qquad \gamma_2 = \frac{D\,\Delta t}{2\Delta y^2}\,,
\]

we get

\[
-\gamma_1 u_{i+1,j}^{n+1/2} + (1+2\gamma_1)\,u_{ij}^{n+1/2} - \gamma_1 u_{i-1,j}^{n+1/2} = \gamma_2 u_{i,j+1}^{n} + (1-2\gamma_2)\,u_{ij}^{n} + \gamma_2 u_{i,j-1}^{n}\,,
\]
\[
-\gamma_2 u_{i,j+1}^{n+1} + (1+2\gamma_2)\,u_{ij}^{n+1} - \gamma_2 u_{i,j-1}^{n+1} = \gamma_1 u_{i+1,j}^{n+1/2} + (1-2\gamma_1)\,u_{ij}^{n+1/2} + \gamma_1 u_{i-1,j}^{n+1/2}\,. \qquad (10.19)
\]
Instead of the five-band matrix of the BTCS method (10.18), here each time step is obtained in two sweeps, and each sweep requires solving only a tridiagonal system of equations. The ADI method is second order in time and space and is unconditionally stable [22] (the ADI method in 3D, however, is only conditionally stable).
10.2.2 Examples

Use the ADI method (10.19) to solve the two-dimensional diffusion equation

\[
\partial_t u(\mathbf{r}, t) = \nabla^2 u(\mathbf{r}, t)\,,
\]

where u = u(r, t), r ∈ R², on the domain r ∈ [0, L] × [0, L], if the initial distribution is a Gaussian pulse of the form u(x, y, 0) = exp(−20(x − L/2)² − 20(y − L/2)²) and the normal derivative of u vanishes on the boundary (no-flux conditions). The other parameters are chosen according to the table below:

Space interval             L = 1
Number of points           M = 100 (Δx = Δy)
Time discretization step   Δt = 0.001
Number of time steps       T = 40

The solution of the problem is shown in Fig. 10.9.
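The two ADI sweeps can be sketched compactly in Python. This is a sketch only: the grid is reduced to M = 40 for a quick run, the Neumann boundaries are approximated by simple mirroring, and the tridiagonal solves use a generic dense solver instead of the TDMA:

```python
import numpy as np

L, M, dt, T = 1.0, 40, 0.001, 40          # reduced grid for a quick demo
dx = L / (M - 1)
g = dt / (2 * dx**2)                      # gamma_1 = gamma_2 (dx = dy, D = 1)

xs = np.linspace(0, L, M)
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = np.exp(-20 * (X - L/2)**2 - 20 * (Y - L/2)**2)

# Implicit operator for one direction: (1+2g) on the diagonal, -g off it,
# with mirrored first/last rows for the no-flux boundaries
A = (np.diag((1 + 2*g) * np.ones(M)) - np.diag(g * np.ones(M-1), 1)
     - np.diag(g * np.ones(M-1), -1))
A[0, 1] = A[-1, -2] = -2 * g

def explicit(v):
    """Explicit centered second difference along axis 0, mirrored boundaries."""
    w = np.pad(v, ((1, 1), (0, 0)), mode="reflect")
    return v + g * (w[2:, :] - 2*v + w[:-2, :])

for _ in range(T):
    u = np.linalg.solve(A, explicit(u.T).T)   # sweep 1: x implicit, y explicit
    u = np.linalg.solve(A, explicit(u).T).T   # sweep 2: y implicit, x explicit

print(float(u.sum()) * dx * dx)   # total "heat" under no-flux boundaries
```

`np.linalg.solve` acts on the columns of its right-hand side, which is exactly one tridiagonal solve per grid line; in a production code each sweep would call the TDMA instead.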
Fig. 10.9 Numerical solution of the two-dimensional diffusion equation 10.2 by means of the ADI
method (10.19), calculated at four different time moments: (a) t=0; (b) t=10; (c) t=20; (d) t=40.
Chapter 11

Reaction-diffusion (RD) equations arise naturally in systems consisting of many interacting components (e.g., chemical reactions) and are widely used to describe pattern-formation phenomena in a variety of biological, chemical and physical systems. The principal ingredients of all these models are equations of the form

\[
\partial_t \mathbf{u} = D\,\nabla^2 \mathbf{u} + R(\mathbf{u})\,, \qquad (11.1)
\]

where u = u(r, t) is a vector of concentration variables, R(u) describes the local reaction kinetics and the Laplace operator ∇² acts on the vector u componentwise. D denotes a diagonal diffusion coefficient matrix. Note that we suppose the system (11.1) to be isotropic and uniform, so D is represented by a scalar matrix, independent of the coordinates.
Initially the subject was discussed and investigated mostly in the mathematical community (see, e.g., [16], where the nonlinear diffusion equation is discussed in detail). The interest of physicists in this type of front was stimulated in the early 1980s by the work of G. Dee and coworkers on the theory of dendritic solidification [12]. Examples of such fronts can be found in various physical [28, 52], chemical [43, 14] as well as biological [3] systems.

Notice that for Eq. (11.3) the propagating front always relaxes to a unique shape and velocity,

\[
c = 2\sqrt{D}\,, \qquad (11.4)
\]

if the initial profile is well-localized [1, 2, 50].
Numerical treatment

Let us consider Eq. (11.3) and suppose that an initial distribution u(x, 0) = f(x) as well as no-flux boundary conditions are given. We can try to apply the implicit BTCS method (10.13) (see Chapter 10) for the linear part of the equation, taking the nonlinearity explicitly, i.e.,

\[
\frac{u_i^{j+1}-u_i^{j}}{\Delta t} \;=\; D\,\frac{u_{i+1}^{j+1}-2u_i^{j+1}+u_{i-1}^{j+1}}{\Delta x^2} + R(u_i^{j})\,,
\]

where R(u_i^j) = u_i^j − (u_i^j)². We can rewrite the last equation in the matrix form

\[
A\,\mathbf{u}^{n+1} = \mathbf{u}^{n} + \Delta t\,R(\mathbf{u}^{n})\,, \qquad (11.5)
\]
(11.5)
0
. . .0
1 + 2 2
. . .0
1 + 2
1 + 2 . . .0 ,
A= 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0
...
2 1 + 2
= Dt/x2 . The boxed elements indicate the influence of no-flux boundary conditions.
As an example, let us solve Eq. (11.3) on the interval x ∈ [−L, L] with the scheme (11.5). The parameters are:

Space interval             L = 50
Space discretization step  Δx = 0.2
Time discretization step   Δt = 0.05
Number of time steps       T = 800
Diffusion coefficient      D = 1
Initial distribution       f(x) = 0.05 exp(−5x²)
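The semi-implicit scheme (11.5) for this example can be sketched as follows (a sketch: the matrix A is inverted once up front instead of being solved by the TDMA each step, and the step count is reduced to keep the demo fast; the propagation scenario is unchanged):

```python
import numpy as np

L, dx, dt, T, D = 50.0, 0.2, 0.05, 800, 1.0
n = int(2 * L / dx) + 1
gamma = D * dt / dx**2

x = np.linspace(-L, L, n)
u = 0.05 * np.exp(-5 * x**2)      # small localized initial perturbation

# BTCS matrix with no-flux boundary rows (the "boxed" -2*gamma entries)
A = (np.diag((1 + 2*gamma) * np.ones(n)) - np.diag(gamma * np.ones(n-1), 1)
     - np.diag(gamma * np.ones(n-1), -1))
A[0, 1] = A[-1, -2] = -2 * gamma
Ainv = np.linalg.inv(A)           # A is constant, so factor/invert once

for _ in range(T):
    # semi-implicit step: A u^{n+1} = u^n + dt * R(u^n),  R(u) = u - u^2
    u = Ainv @ (u + dt * (u - u**2))

print(u.min())   # the whole interval approaches the stable state u = 1
```

Running this reproduces the qualitative picture described below: fronts spread from the perturbation at roughly the speed (11.4) until u = 1 fills the interval.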
The numerical solution for six different time moments is shown in Fig. 11.1. One can see that a small localized initial fluctuation around u = 0 leads to an instability that develops in a nonlinear way: a front propagates away from the initial perturbation. Finally the uniform stable state u = 1 is established on the whole space interval.
Consider now a bistable medium, described by an equation with a cubic nonlinearity,

\[
u_t = D\,u_{xx} + u\,(1-u)\,(u-\alpha)\,, \qquad \alpha \in (0, 1)\,. \qquad (11.6)
\]
The fundamental form of a pattern in bistable infinite one-component media is a trigger wave, which represents a propagating front of transition from one stationary state into the other. In the literature other nomenclature, e.g., switching waves, is also used. The propagation velocity of a flat front is uniquely determined by the properties of the bistable medium. Indeed, moving to a frame travelling with constant velocity, ξ := x − ct, and considering a partial solution of the form u = u(ξ), one obtains the equation

\[
D\,u'' + c\,u' + R(u) = 0
\]

with boundary conditions

\[
u(-\infty) = u_-\,, \qquad u(+\infty) = u_+\,.
\]

Introducing the potential R(u) = −∂V(u)/∂u, one can show that in this situation the velocity of the front can be determined as [16]

\[
c = \frac{V(u_+)-V(u_-)}{\displaystyle\int_{-\infty}^{+\infty} \big(u'(\xi)\big)^2\, d\xi}\,.
\]
The numerator of the last equation uniquely defines the direction of the velocity. In particular, if V(u₊) = V(u₋), the front velocity equals zero, so a stationary front is also a solution in bistable one-component media. However, localized states in the form of a domain, which can be produced by connecting two fronts propagating in opposite directions, are normally unstable. Indeed, for an arbitrary choice of parameters one of the states (with V(u₊) or V(u₋)) will dominate. This causes either collapse or expansion of the two-front solution.
As an example, let us solve Eq. (11.6) with the scheme (11.5). The parameters are:

Space interval             L = 10
Space discretization step  Δx = 0.04
Time discretization step   Δt = 0.05
Number of time steps       T = 150
Diffusion coefficient      D = 1

For the single-front cases the initial distribution is a step,

\[
u(x, 0)=\begin{cases} u_-\,, & x \in [-L, 0]\,,\\ u_+\,, & x \in (0, L]\,, \end{cases}
\]

while for the two-front cases a domain of the form u₋, u₊, u₋ on consecutive subintervals is used.
Fig. 11.2 Numerical solution of Eq. (11.6), calculated with the scheme (11.5) for four different cases: (a) a front propagating to the right for α = 0.8; (b) a front propagating to the left for α = 0.1; (c) collision of two fronts, α = 0.8; (d) scattering of two fronts, α = 0.1.
Space interval             L = 10
Space discretization step  Δx = 0.04
Time discretization step   Δt = 0.05
Number of time steps       T = 100
Diffusion coefficient      D = 1

The initial distributions for the two cases (a front and a pulse, respectively) are

\[
u(x, 0)=\begin{cases} u_-\,, & x \le 0\,,\\ u_+\,, & x > 0\,, \end{cases}
\qquad\text{and}\qquad
u(x, 0)=\begin{cases} u_-\,, & x \in [-L, -L/4]\,,\\ u_+\,, & x \in (-L/4, L/4)\,,\\ u_-\,, & x \in [L/4, L]\,. \end{cases}
\]
The solutions of the problem corresponding to both cases are shown in Fig. 11.3.
Fig. 11.3 Numerical solution of Eq. (11.7) by means of scheme (11.5): a) A stable stationary front.
b) A stable stationary pulse.
Consider now a two-component reaction-diffusion system,

\[
\partial_t \mathbf{u} = D\,\nabla^2 \mathbf{u} + R(\mathbf{u})\,, \qquad (11.8)
\]

where u = u(r, t) = (u, v)ᵀ is a vector of concentration variables, R(u) = (f(u, v), g(u, v))ᵀ describes, as before, the local reaction kinetics, and the Laplace operator ∇² acts on the vector u componentwise. D denotes the diagonal diffusion coefficient matrix

\[
D=\begin{pmatrix} D_u & 0\\ 0 & D_v \end{pmatrix}.
\]
Let u₀ = (u₀, v₀)ᵀ be a homogeneous (steady-state) solution of the system (11.8), i.e., f(u₀, v₀) = g(u₀, v₀) = 0. Suppose that this solution is stable in the absence of diffusion, namely that the real parts of all eigenvalues of the Jacobi matrix

\[
A = \Big(\frac{\partial R}{\partial \mathbf{u}}\Big)_{\mathbf{u}=\mathbf{u}_0} = \begin{pmatrix} f_u & f_v\\ g_u & g_v \end{pmatrix},
\]

describing the local dynamics of the system (11.8), are less than zero. For a 2×2 matrix this is equivalent to the simple well-known conditions on the trace and the determinant of the matrix A (Vieta's formulas), namely

\[
\mathrm{Sp}(A) = \lambda_1+\lambda_2 = f_u + g_v < 0\,, \qquad \det(A) = \lambda_1\lambda_2 = f_u g_v - f_v g_u > 0\,. \qquad (11.9)
\]
Keeping Eq. (11.9) in mind, let us see whether the presence of the diffusion term can change the stability of u₀. To this end, consider a small perturbation δu, i.e., u = u₀ + δu, and the corresponding linearized equation for it:

\[
\partial_t\,\delta\mathbf{u} = D\,\nabla^2\delta\mathbf{u} + A\,\delta\mathbf{u}\,. \qquad (11.10)
\]

After decomposing δu into modes, δu ∝ a_k e^{ik·r}, we get the equation

\[
\dot{\mathbf{a}}_k = B\,\mathbf{a}_k\,, \qquad (11.11)
\]

where B = A − k²D.
As mentioned above, the stability conditions for the system (11.11) with a 2×2 matrix B can be written as

\[
\mathrm{Sp}(B) < 0 \;\;\forall k\,, \qquad \det(B) > 0 \;\;\forall k\,, \qquad (11.12)
\]

where

\[
\mathrm{Sp}(B) = f_u + g_v - k^2 (D_u + D_v)\,, \qquad (11.13)
\]
\[
\det(B) = D_u D_v\,k^4 - (D_u g_v + D_v f_u)\,k^2 + \det A\,. \qquad (11.14)
\]
Notice that for k = 0 the conditions (11.12) are equivalent to the stability criterion (11.9) for the local dynamics. In particular, this implies that Sp(B) < 0 for all k (see the gray curve in Fig. 11.4 for illustration), so the instability of the homogeneous solution can occur only through a violation of the second condition in (11.12), that is, det(B) should become zero for some k. This means that the instability sets in at the point where the equation det(B) = 0 has a multiple root. To find it we can simply calculate the minimum of the function T(k) = det(B):

\[
T'(k) = 4 D_u D_v\,k^3 - 2\,(D_u g_v + D_v f_u)\,k = 0
\quad\Longrightarrow\quad
k^2 = \frac{1}{2}\Big(\frac{f_u}{D_u}+\frac{g_v}{D_v}\Big).
\]
From the last equation it can be seen that the situation described above is possible only if

\[
D_u g_v + D_v f_u > 0\,. \qquad (11.15)
\]

The critical wavenumber is

\[
k_c = \sqrt{\frac{1}{2}\Big(\frac{f_u}{D_u}+\frac{g_v}{D_v}\Big)}\,, \qquad (11.16)
\]

and the condition det(B) ≤ 0 at k = k_c reads

\[
k_c^4 = \Big[\frac{1}{2}\Big(\frac{f_u}{D_u}+\frac{g_v}{D_v}\Big)\Big]^2 > \frac{\det A}{D_u D_v}\,. \qquad (11.17)
\]
The instability scenario described above is illustrated in Fig. 11.4, where three different cases of the dependence of the function T(k) = det(B) on the wave vector k are presented. In Fig. 11.4 (a) the function T(k) has no roots, so the stability of u₀ is not affected; the same holds in case (b), where T(k) > 0 for all k although the function has a minimum. Finally, in Fig. 11.4 (c), T(k) = 0 for k = k_c, indicating the onset of instability.
Hence, the full system of conditions for the instability of the homogeneous solution u₀ is

\[
f_u + g_v < 0\,, \qquad
f_u g_v - f_v g_u > 0\,, \qquad
D_u g_v + D_v f_u > 0\,, \qquad
\Big(\frac{f_u}{D_u}+\frac{g_v}{D_v}\Big)^2 > \frac{4\det A}{D_u D_v}\,. \qquad (11.18)
\]
A detailed description of the mechanism of the Turing instability can also be found in [32, 31, 23].

While the conditions for the onset of a Turing bifurcation are rather simple, the determination of the nature of the pattern that is selected is a more difficult problem, since beyond the bifurcation point a finite band of wavenumbers is unstable. Pattern selection is usually approached by studying amplitude equations that are valid near the onset of the instability. To determine which modes are selected, modes and their complex conjugates are usually treated in pairs, so that the concentration field, expanded about the homogeneous solution, reads

\[
\mathbf{u}(\mathbf{r}, t) = \mathbf{u}_0 + \sum_{j=1}^{n}\big(A_j(t)\,e^{i\mathbf{k}_j\cdot\mathbf{r}} + \text{c.c.}\big),
\]

where the k_j are different wavevectors such that |k_j| = k_c. In one-dimensional space the situation is rather simple, as the result of the instability is a spatially periodic structure. In two space dimensions this form leads to stripes for n = 1, rhombi (or squares) for n = 2 and hexagons for n = 3. The pattern and wavelength that are selected depend on the coefficients of the nonlinear amplitude equations for the complex amplitudes A_j, but some conclusions about the selected pattern can be made using, e.g., symmetry arguments. In particular, in the case of a hexagonal pattern, in which the three wave vectors are mutually situated at an angle of 2π/3, i.e., k₁ + k₂ + k₃ = 0, the absence of inversion symmetry (u ↦ −u) leads to an additional quadratic nonlinearity in the amplitude equation. This, in turn, means that the hexagonal pattern has the maximum growth rate near the threshold and is therefore preferred (for details see [10]).

The general procedure for the derivation of such amplitude equations, based on mode projection techniques, can be found in detail in [19]. Another approach, using a multiple-scale expansion, was developed in [33].
As an example, consider the Brusselator model,

\[
u_t = D_u\,\nabla^2 u + a - (b+1)\,u + u^2 v\,, \qquad (11.19)
\]
\[
v_t = D_v\,\nabla^2 v + b\,u - u^2 v\,. \qquad (11.20)
\]

Here u = u(x, y, t), v = v(x, y, t), and a, b are positive constants. The steady-state solution is

\[
u_0 = a\,, \qquad v_0 = \frac{b}{a}\,.
\]

The matrix B = A − k²D reads

\[
B = \begin{pmatrix} b-1-D_u k^2 & a^2\\ -b & -a^2 - D_v k^2 \end{pmatrix},
\]

so for the local dynamics (k = 0)

\[
\mathrm{Sp}(A) = b - 1 - a^2 < 0\,, \qquad \det(A) = -(b-1)\,a^2 + a^2 b = a^2 > 0\,.
\]

Note that the violation of the first condition above leads to the Hopf bifurcation, i.e., the onset of the Hopf instability is

\[
\mathrm{Sp}(A) \ge 0 \quad\Longrightarrow\quad b \ge b_H = 1 + a^2\,.
\]
The critical wavenumber is

\[
k_c = \sqrt{\frac{1}{2}\Big(\frac{b-1}{D_u} - \frac{a^2}{D_v}\Big)}\,,
\]

and the condition (11.17) yields the Turing instability threshold

\[
b \ge b_T = \Big(1 + a\sqrt{\frac{D_u}{D_v}}\Big)^2\,, \qquad (11.21)
\]

which can lie below the Hopf threshold only if D_u/D_v < 1.
In Fig. 11.5 both b_H (blue line) and b_T (red line) are shown as functions of a. The thresholds of the two instabilities coincide at the codimension-two Turing-Hopf point b_H = b_T, i.e., at

\[
a_c = \frac{2\sqrt{\eta}}{1-\eta}\,,
\]

where η = D_u/D_v.
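The threshold formulas can be verified numerically. The following sketch evaluates them for the parameter set of the stripe-pattern example below and checks that det(B) indeed vanishes at k = k_c when b = b_T (all numbers follow from the formulas above, not from the original text):

```python
import numpy as np

def brusselator_thresholds(a, Du, Dv):
    """Hopf and Turing thresholds for the Brusselator steady state (a, b/a)."""
    eta = Du / Dv
    bH = 1 + a**2                       # Hopf:   Sp(A) = b - 1 - a^2 = 0
    bT = (1 + a * np.sqrt(eta))**2      # Turing: Eq. (11.21)
    return bH, bT

a, Du, Dv = 3.0, 5.0, 12.0              # parameters of the stripe example
bH, bT = brusselator_thresholds(a, Du, Dv)

# at b = bT the minimum of det(B) touches zero at k = kc
b = bT
kc = np.sqrt(0.5 * ((b - 1) / Du - a**2 / Dv))
detB = (Du * Dv * kc**4
        - (Du * (-a**2) + Dv * (b - 1)) * kc**2
        + a**2)                         # det A = a^2 for the Brusselator

print(bH, bT, detB)   # detB is zero up to round-off; here bT < bH
```

For a = 3 this gives b_H = 10 and b_T ≈ 8.62, so with b = 9 the Turing instability is active while the Hopf instability is not, consistent with the stripe pattern obtained below.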
From a numerical point of view, one can apply the ADI scheme (10.19), taking the nonlinear terms explicitly. The parameters are:

Space interval             L = 50
Space discretization step  Δx = 0.5
Time discretization step   Δt = 0.05
Number of time steps       T = 4000
Diffusion coefficients     D_u = 5, D_v = 12
Reaction kinetics          a = 3, b = 9
The result of the calculation is shown in Fig. 11.6. The uniform state becomes unstable in favor of a finite-wavenumber perturbation. That is, starting with a randomly perturbed homogeneous solution (see Fig. 11.6 (a)), one obtains a high-amplitude stripe pattern, shown in Fig. 11.6 (c).
Fig. 11.6 Stripe pattern, obtained as a numerical solution of Eq. (11.19) by means of the modified
ADI scheme (10.19) for three different time moments: a) t = 0; b) t = 2000; c) t = 4000.
Appendix A

The tridiagonal matrix algorithm (TDMA), also known as the Thomas algorithm, is a simplified form of Gaussian elimination that can be used to solve a tridiagonal system of equations

\[
a_i x_{i-1} + b_i x_i + c_i x_{i+1} = y_i\,, \qquad i = 1, \dots, n\,, \qquad (A.1)
\]
or, in matrix form,

\[
\begin{pmatrix}
b_1 & c_1 & 0 & \dots & \dots & 0\\
a_2 & b_2 & c_2 & \dots & \dots & 0\\
0 & a_3 & b_3 & c_3 & \dots & 0\\
\dots & \dots & \dots & \dots & \dots & c_{n-1}\\
0 & \dots & \dots & 0 & a_n & b_n
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
=
\begin{pmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{pmatrix}.
\]
The TDMA is based on the Gaussian elimination procedure and consists of two parts: a forward elimination phase and a backward substitution phase [37]. Let us consider the system (A.1) for i = 1, ..., n and the following modification of the first two equations:

\[
\mathrm{Eq}_{i=2}\cdot b_1 - \mathrm{Eq}_{i=1}\cdot a_2\,,
\]

which results in

\[
(b_1 b_2 - c_1 a_2)\,x_2 + c_2 b_1\,x_3 = b_1 y_2 - a_2 y_1\,.
\]

The effect is that x₁ has been eliminated from the second equation. In the same manner one can eliminate x₂, using the modified second equation and the third one (for i = 3):

\[
(b_1 b_2 - c_1 a_2)\,\mathrm{Eq}_{i=3} - a_3\cdot(\text{mod. } \mathrm{Eq}_{i=2})\,,
\]

which gives

\[
\big(b_3 (b_1 b_2 - c_1 a_2) - c_2 b_1 a_3\big)\,x_3 + c_3 (b_1 b_2 - c_1 a_2)\,x_4 = y_3 (b_1 b_2 - c_1 a_2) - (y_2 b_1 - y_1 a_2)\,a_3\,.
\]
If the procedure is repeated up to the nth equation, the last equation will involve the unknown x_n only. This value can then be used to solve the modified equation for i = n − 1, and so on, until all unknowns x_i are found (backward substitution phase). That is, we are looking for a backward recursion of the form

\[
x_{i-1} = \alpha_i x_i + \beta_i\,. \qquad (A.2)
\]

If we put this ansatz into Eq. (A.1) and solve the resulting equation with respect to x_i, the following relation is obtained:

\[
x_i = -\frac{c_i}{a_i\alpha_i + b_i}\,x_{i+1} + \frac{y_i - a_i\beta_i}{a_i\alpha_i + b_i}\,, \qquad (A.3)
\]

i.e.,

\[
\alpha_{i+1} = -\frac{c_i}{a_i\alpha_i + b_i}\,, \qquad \beta_{i+1} = \frac{y_i - a_i\beta_i}{a_i\alpha_i + b_i}\,. \qquad (A.4)
\]

Equation (A.4) provides the recursion formulas for the coefficients α_i and β_i. The starting values can be derived from the first (i = 1) equation of (A.1):

\[
x_1 = \frac{y_1}{b_1} - \frac{c_1}{b_1}\,x_2
\quad\Longrightarrow\quad
\alpha_2 = -\frac{c_1}{b_1}\,, \quad \beta_2 = \frac{y_1}{b_1}\,,
\]

which is consistent with (A.4) if one sets α₁ = β₁ = 0.
The last thing we need is the value of x_n for the first backward substitution. We can obtain it by putting the ansatz

\[
x_{n-1} = \alpha_n x_n + \beta_n
\]

into the last (i = n) equation of (A.1):

\[
a_n(\alpha_n x_n + \beta_n) + b_n x_n = y_n\,,
\]

yielding

\[
x_n = \frac{y_n - a_n\beta_n}{a_n\alpha_n + b_n}\,.
\]

One gets this value directly from Eq. (A.2) if one formally puts x_{n+1} = 0.
Altogether, the TDMA can be written as:

1. Set α₁ = β₁ = 0;
2. Evaluate for i = 1, ..., n

\[
\alpha_{i+1} = -\frac{c_i}{a_i\alpha_i + b_i}\,, \qquad \beta_{i+1} = \frac{y_i - a_i\beta_i}{a_i\alpha_i + b_i}\,;
\]

3. Set x_{n+1} = 0;
4. Find for i = n + 1, ..., 2

\[
x_{i-1} = \alpha_i x_i + \beta_i\,.
\]

The algorithm requires only O(n) operations instead of the O(n³) required by full Gaussian elimination.
Limitation
The TDMA is only applicable to matrices that are diagonally dominant, i.e.,
|bi | > |ai | + |ci |,
i = 1, . . ., n.
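The four steps translate directly into code; a minimal sketch in Python, using the α/β recursion exactly as in (A.4):

```python
def tdma(a, b, c, y):
    """Solve a_i x_{i-1} + b_i x_i + c_i x_{i+1} = y_i (a[0] and c[-1] unused)."""
    n = len(b)
    alpha = [0.0] * (n + 1)          # alpha_1 = beta_1 = 0  (step 1)
    beta = [0.0] * (n + 1)
    for i in range(n):               # forward sweep (step 2), Eq. (A.4)
        denom = a[i] * alpha[i] + b[i]
        alpha[i + 1] = -c[i] / denom
        beta[i + 1] = (y[i] - a[i] * beta[i]) / denom
    x = [0.0] * n
    x[n - 1] = beta[n]               # x_{n+1} = 0  (step 3)
    for i in range(n - 1, 0, -1):    # backward substitution (step 4)
        x[i - 1] = alpha[i] * x[i] + beta[i]
    return x

# quick check on a small diagonally dominant system with solution (1, 1, 1)
a = [0.0, 1.0, 1.0]
b = [4.0, 4.0, 4.0]
c = [1.0, 1.0, 0.0]
y = [5.0, 6.0, 5.0]
print(tdma(a, b, c, y))   # close to [1.0, 1.0, 1.0]
```

The test system satisfies the diagonal dominance condition above (4 > 1 + 1), so the sweep cannot hit a vanishing denominator.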
Appendix B

The method of characteristics can be used to solve initial value problems for general first-order PDEs [7]. Let us consider a quasilinear equation of the form

\[
A\,\frac{\partial u}{\partial x} + B\,\frac{\partial u}{\partial t} + C\,u = 0\,, \qquad u(x, 0) = u_0\,, \qquad (B.1)
\]
where u = u(x,t), and A, B and C can be functions of the independent variables and u. The idea of the method is to change coordinates from (x,t) to a new coordinate system (x₀, s), in which Eq. (B.1) becomes an ordinary differential equation along certain curves in the (x,t) plane. Such curves, (x(s), t(s)), along which the solution of (B.1) reduces to an ODE, are called the characteristic curves. The variable s varies along a characteristic, whereas x₀ changes along the line t = 0 in the (x,t) plane and remains constant along each characteristic. Now if we choose

\[
\frac{dx}{ds} = A \qquad\text{and}\qquad \frac{dt}{ds} = B\,, \qquad (B.2)
\]
then we have

\[
\frac{du}{ds} = u_x\,\frac{dx}{ds} + u_t\,\frac{dt}{ds} = A\,u_x + B\,u_t\,,
\]

and Eq. (B.1) becomes the ordinary differential equation

\[
\frac{du}{ds} + C\,u = 0\,. \qquad (B.3)
\]
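As a quick illustration (a constructed example, not taken from the text): for constant coefficients A = c, B = 1 and C = λ, the characteristics of (B.1) are the straight lines x = x₀ + ct with s = t, and (B.3) gives u = u₀(x₀) e^{−λt}. This can be checked numerically against the PDE itself:

```python
import math

c, lam = 2.0, 0.5                      # A = c, B = 1, C = lam (assumed constants)
u0 = lambda x: math.exp(-x**2)         # initial profile u(x, 0)

def u_characteristics(x, t):
    """u(x,t) via the characteristic through (x,t): x0 = x - c*t, s = t."""
    return u0(x - c * t) * math.exp(-lam * t)

# verify the PDE residual u_t + c*u_x + lam*u with small central differences
h, x, t = 1e-5, 0.7, 0.3
ut = (u_characteristics(x, t + h) - u_characteristics(x, t - h)) / (2 * h)
ux = (u_characteristics(x + h, t) - u_characteristics(x - h, t)) / (2 * h)
residual = ut + c * ux + lam * u_characteristics(x, t)
print(abs(residual) < 1e-6)   # True: the characteristic solution satisfies the PDE
```

The same construction works for variable A, B, C; the ODEs (B.2) and (B.3) are then integrated numerically along each characteristic.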
References

1. David Acheson. From Calculus to Chaos. Oxford University Press, New York, 1997.
2. D. G. Aronson and H. F. Weinberger. Multidimensional nonlinear diffusion arising in population genetics. Advances in Mathematics, 30:33–76, 1978.
3. E. Ben-Jacob, H. Brand, G. Dee, L. Kramer, and J. S. Langer. Pattern propagation in nonlinear dissipative systems. Physica D, 14:348–364, 1985.
4. N. F. Britton. Reaction-Diffusion Equations and their Applications to Biology. Academic Press, New York, 1986.
5. J. M. Burgers. The Nonlinear Diffusion Equation. D. Reidel, Dordrecht, 1974.
6. J. G. Charney, R. Fjoertoft, and J. von Neumann. Numerical integration of the barotropic vorticity equation. Tellus, 2:237–254, 1950.
7. J. D. Cole. On a quasi-linear parabolic equation occurring in aerodynamics. Quarterly of Applied Mathematics, 9:225–236, 1951.
8. R. Courant and D. Hilbert. Methods of Mathematical Physics II. Wiley, 1962.
9. R. Courant, E. Isaacson, and M. Rees. On the solution of non-linear hyperbolic differential equations. Communications on Pure and Applied Mathematics, 5:243–255, 1952.
10. J. Crank and P. Nicolson. A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Proceedings of the Cambridge Philosophical Society, 43:50–67, 1947.
11. M. C. Cross and P. C. Hohenberg. Pattern formation outside of equilibrium. Reviews of Modern Physics, 65(3):851–1112, 1993.
12. A. S. Davydov. Solitons in Molecular Systems (Mathematics and its Applications). Reidel, Dordrecht, 1985.
13. G. Dee and J. S. Langer. Propagating pattern selection. Phys. Rev. Lett., 50:383–386, 1983.
14. U. Ebert and W. van Saarloos. Front propagation into unstable states: universal algebraic convergence towards uniformly translating pulled fronts. Physica D, 146:1–99, 2000.
15. I. R. Epstein and K. Showalter. Nonlinear chemical dynamics: Oscillations, patterns, and chaos. J. of Phys. Chem., 100:13132–13147, 1996.
16. S. Eule and R. Friedrich. A note on the forced Burgers equation. Physics Letters A, 351:238, 2006.
17. P. C. Fife. Mathematical Aspects of Reacting and Diffusing Systems. Lecture Notes in Biomathematics, volume 28. Springer, Berlin, 1979.
18. R. A. Fisher. The wave of advance of advantageous genes. Ann. Eugenics, 7:355, 1937.
19. P. Glansdorff and I. Prigogine. Thermodynamic Theory of Structure, Stability and Fluctuations. Wiley, New York, 1971.
20. H. Haken. Synergetics, An Introduction. 3rd ed. Springer Ser. Synergetics, Berlin, Heidelberg, New York, 1983.
21. E. Hopf. The partial differential equation ut + uux = μuxx. Communications on Pure and Applied Mathematics, 3:201–230, 1950.
22. E. Isaacson and H. B. Keller. Analysis of Numerical Methods. Wiley, 1965.
23. J. Douglas Jr. and J. E. Gunn. A general formulation of alternating direction methods: Part I. Parabolic and hyperbolic problems. Numerische Mathematik, 6(1):428–453, 1964.
24. R. Kapral. Pattern formation in chemical systems. Physica D, 86(1-2):149–157, 1995.
25. W. Kinzel and G. Reents. Physik per Computer. Programmierung physikalischer Probleme mit Mathematica und C. Spektrum Akademischer Verlag, 1996.
26. A. Kolmogorov, I. Petrovsky, and N. Piscounov. A study of the equation of diffusion with increase in the quantity of matter, and its application to a biological problem. Moscow Univ. Bull. Math. A, 1:1, 1937.
27. D. J. Korteweg and F. de Vries. On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves. Philosophical Magazine, 39:422–443, 1895.
28. G. L. Lamb Jr. Elements of Soliton Theory. Wiley, New York, 1980.