
APPM 623

Numerical methods for Partial Differential Equations

Prof CP Olivier
Chapter 1

Review of Mathematical Preliminaries

1.1 Finite difference formulas


The idea behind the finite difference formulas is to provide a numerical approximation for derivatives by only using function values. To derive these formulas, we use Taylor series expansions. The general form of a Taylor series expansion of the function f ∈ C^{n+1}[a, b] is given by
\[
f(x+h) = \sum_{j=0}^{n} \frac{h^j}{j!} f^{(j)}(x) + \frac{h^{n+1}}{(n+1)!} f^{(n+1)}(\xi)
= f(x) + h f'(x) + \frac{1}{2} h^2 f''(x) + \cdots + \frac{1}{n!} h^n f^{(n)}(x) + \frac{1}{(n+1)!} h^{n+1} f^{(n+1)}(\xi),
\]
for some ξ between x and x + h.
The idea behind central difference formulas is to construct linear combinations of f (x+h), f (x−h),
f (x + 2h), f (x − 2h), · · · in order to retain one derivative term, and eliminate as many of the remaining
terms as possible.

1.1.1 First derivative


Forward difference formula
\[
f'(x) = \frac{f(x+h) - f(x)}{h} + O(h)
\]
Backward difference formula
\[
f'(x) = \frac{f(x) - f(x-h)}{h} + O(h)
\]
Central difference formula
\[
f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2)
\]


Five point stencil

\[
f'(x) = \frac{f(x - 2h) - 8f(x - h) + 8f(x + h) - f(x + 2h)}{12h} + O(h^4)
\]

1.1.2 Second derivative


Central difference formula
\[
f''(x) = \frac{f(x + h) - 2f(x) + f(x - h)}{h^2} + O(h^2).
\]
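As a quick sanity check of these orders of accuracy, the following minimal NumPy sketch (the test function f = sin and the step sizes are arbitrary choices, not part of the notes) halves h and prints the errors, which should shrink roughly by factors of 2, 4, 16 and 4 respectively.

```python
import numpy as np

# Numerical check of the difference formulas above (illustrative only).
f, df = np.sin, np.cos
d2f = lambda x: -np.sin(x)
x = 1.0
for h in (0.1, 0.05, 0.025):
    fwd  = (f(x + h) - f(x)) / h                                            # O(h)
    ctr  = (f(x + h) - f(x - h)) / (2 * h)                                  # O(h^2)
    five = (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)   # O(h^4)
    sec  = (f(x + h) - 2*f(x) + f(x - h)) / h**2                            # O(h^2)
    print(h, abs(fwd - df(x)), abs(ctr - df(x)), abs(five - df(x)), abs(sec - d2f(x)))
```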

1.1.3 Higher order approximations


Example: Show the derivation of the five-point stencil for f'(x).

1.1.4 Central difference formula for higher order derivatives


Example: Show the derivation of the central difference formula for f^{(4)}(x).

Exercise 1.1
1. Derive the five-point stencil for f''(x).
2. Derive the central difference formula for f'''(x).

1.2 The fourth-order Runge-Kutta method


1.2.1 System of first-order ODEs

\[
\begin{aligned}
\dot{x}_1 &= f_1(t, x_1, x_2, \cdots, x_n), \\
\dot{x}_2 &= f_2(t, x_1, x_2, \cdots, x_n), \\
&\vdots \\
\dot{x}_n &= f_n(t, x_1, x_2, \cdots, x_n),
\end{aligned}
\]
with initial condition x_1(0) = x_{10}, x_2(0) = x_{20}, \cdots, x_n(0) = x_{n0}.


In vector form, let x = [x_1, x_2, \cdots, x_n]^T, and let the function f : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n be defined as
\[
f(t, x) = \begin{pmatrix} f_1(t, x) \\ f_2(t, x) \\ \vdots \\ f_n(t, x) \end{pmatrix}.
\]

Introduce a grid with t_j = jh, where h is the timestep size. We now introduce the notation
\[
x_j = x(t_j).
\]

The fourth-order Runge-Kutta method is then given by
\[
x_{j+1} = x_j + \frac{1}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right),
\]
where
\[
k_1 = h\, f(t_j, x_j), \quad
k_2 = h\, f\!\left(t_j + \tfrac{h}{2},\; x_j + \tfrac{1}{2}k_1\right), \quad
k_3 = h\, f\!\left(t_j + \tfrac{h}{2},\; x_j + \tfrac{1}{2}k_2\right), \quad
k_4 = h\, f(t_j + h,\; x_j + k_3).
\]
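The following is a minimal sketch of one Runge-Kutta step in NumPy; the test problem x' = -x and the step size are illustrative assumptions, not prescribed by the notes.

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(t, x)
    k2 = h * f(t + h/2, x + k1/2)
    k3 = h * f(t + h/2, x + k2/2)
    k4 = h * f(t + h, x + k3)
    return x + (k1 + 2*k2 + 2*k3 + k4) / 6

# Example: x' = -x, x(0) = 1, integrated to t = 1 (exact value is e^{-1}).
f = lambda t, x: -x
x, h = np.array([1.0]), 0.1
for j in range(10):
    x = rk4_step(f, j * h, x, h)
print(x, np.exp(-1.0))
```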

Exercise 1.2
Rewrite the following initial value problem for a system of second-order ODEs as a system of first-order ODEs:
\[
\ddot{y}_1 = -2y_1 + y_2, \qquad \ddot{y}_2 = -2y_2 + y_1,
\]
\[
y_1(t_0) = y_{10}, \quad \dot{y}_1(t_0) = u_{10}, \quad y_2(t_0) = y_{20}, \quad \dot{y}_2(t_0) = u_{20}.
\]

1.3 The Fourier series and Discrete Fourier Transform


1.3.1 The Fourier series
The Fourier series expansion of the function f that is L-periodic (i.e. f (x + L) = f (x) for all x) on
the interval x ∈ [−L/2, L/2] is given by

\[
f(x) = \sum_{n=-\infty}^{\infty} F_n e^{\frac{2\pi i n x}{L}}.
\]

The basis functions satisfy the orthogonality condition
\[
\int_{-L/2}^{L/2} e^{\frac{2\pi i (n-m)x}{L}} \, dx =
\begin{cases}
L & \text{if } m = n, \\
0 & \text{if } m \neq n.
\end{cases}
\]

This shows that the basis functions are linearly independent. Using this identity, one can derive the
complex Fourier coefficients Fn , given by
\[
F_n = \frac{1}{L} \int_{-L/2}^{L/2} f(x)\, e^{-\frac{2\pi i n x}{L}} \, dx.
\]

Theorem: If the function f (x) is real, it follows that F−n = Fn∗ .



Proof: Assume that f(x) is real. Since the basis functions e^{2πinx/L} are linearly independent, it follows that each pair
\[
S_n = F_n e^{\frac{2\pi i n x}{L}} + F_{-n} e^{-\frac{2\pi i n x}{L}}
\]
must also be real. Let F_n = a + ib and F_{-n} = c + id. We need to show that c = a and d = -b. Substituting these expressions into the sum, and decomposing the basis functions into their real and imaginary parts, gives
\[
S_n = (a + c)\cos\frac{2\pi n x}{L} + (d - b)\sin\frac{2\pi n x}{L}
+ i\left[ (b + d)\cos\frac{2\pi n x}{L} + (a - c)\sin\frac{2\pi n x}{L} \right].
\]
To ensure S_n ∈ R, the imaginary part must vanish. Therefore b + d = 0 and a - c = 0, so that c = a and d = -b. Hence F_{-n} = a - ib = F_n^*.

It follows that
\[
f'(x) = \sum_{n=-\infty}^{\infty} \left( \frac{2\pi i n}{L} \right) F_n e^{\frac{2\pi i n x}{L}},
\]
\[
f''(x) = \sum_{n=-\infty}^{\infty} \left( \frac{2\pi i n}{L} \right)^2 F_n e^{\frac{2\pi i n x}{L}}
= \sum_{n=-\infty}^{\infty} \left( -\frac{4\pi^2 n^2}{L^2} \right) F_n e^{\frac{2\pi i n x}{L}},
\]
and generally
\[
f^{(p)}(x) = \sum_{n=-\infty}^{\infty} \left( \frac{2\pi i n}{L} \right)^p F_n e^{\frac{2\pi i n x}{L}}.
\]

1.3.2 Discrete Fourier Transform


The Fourier series maps the function f(x), defined on the entire continuous interval x ∈ [−L/2, L/2], onto the Fourier coefficients F_n for n ∈ Z. In many applications, it is useful to apply the Fourier series to the function on a discrete subset of the interval. To this end, we introduce a grid with N evenly spaced subintervals. Assuming that N is even (a necessary condition for optimal speeds of the fast Fourier transform), one can express the grid points as x_j = jh, for j = −N/2, −N/2 + 1, \cdots, N/2, where h = L/N is the length of the subintervals.
For the function values evaluated at these points, we can then introduce the Fourier series in discrete form by using the function values f_j = f(x_j) for j = −N/2, −N/2 + 1, \cdots, N/2 − 1, given by the truncated series
\[
f_j = \sum_{n=-N/2}^{N/2-1} F_n e^{\frac{2\pi i n x_j}{L}}.
\]
Figure 1.1: Illustration of aliasing error

Note that, due to the L-periodicity of the function, f_{N/2} = f(L/2) = f(L/2 − L) = f_{−N/2}. As such, we exclude f_{N/2} from this formula, as it doesn't provide any additional information. By substituting x_j = jh = jL/N into this formula, one obtains the formula for the inverse discrete Fourier transform
\[
f_j = \sum_{n=-N/2}^{N/2-1} F_n e^{\frac{2\pi i j n}{N}}
\]
for j = −N/2, −N/2 + 1, \cdots, N/2 − 1.


On the other hand, the discrete Fourier transform maps the function values onto the Fourier coefficients, thus approximating the formula for the continuous case
\[
F_n = \frac{1}{L} \int_{-L/2}^{L/2} f(x)\, e^{-\frac{2\pi i n x}{L}} \, dx.
\]
To approximate this integral, we apply the trapezoidal rule. By substituting the expressions for x_j and h into the resulting sum, one obtains the discrete Fourier transform as the sum
\[
F_n = \frac{1}{N} \sum_{j=-N/2}^{N/2-1} f_j e^{-\frac{2\pi i j n}{N}}.
\]

By truncating the Fourier series, one introduces a special kind of truncation error, known as the aliasing error. This error arises from the fact that, on the given grid, e^{2πij(n+N)/N} = e^{2πijn/N} due to the periodicity of the exponential. As such, frequencies outside the range n = −N/2, −N/2 + 1, \cdots, N/2 − 1 will be computed as the lower frequency within this range, thus producing an aliasing error.
To illustrate this, consider the function f(x) = cos x on the interval x ∈ [−π, π] with N = 6 subintervals. In Figure 1.1 one curve shows the function f(x), while the dots show the function values at the gridpoints. A second curve shows the function g(x) = cos 7x. Here we see that this function intersects the same grid points as the function f(x). Thus, in terms of the grid, this function would be identified as f(x). This problem is reduced by increasing the number of subintervals N.
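The aliasing of cos 7x onto cos x on the N = 6 grid can be checked directly; the short NumPy sketch below is illustrative only (it assumes the grid of Section 1.3.2).

```python
import numpy as np

# Aliasing on the N = 6 grid: cos(7x) takes exactly the same values as cos(x).
L, N = 2 * np.pi, 6
j = np.arange(-N // 2, N // 2)          # j = -3, ..., 2
x = j * L / N                           # grid points on [-pi, pi)
print(np.allclose(np.cos(x), np.cos(7 * x)))   # True
```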

- Discuss the Gibbs phenomenon during the Matlab practical session.


Chapter 2

Finite difference methods for the heat equation

2.1 Heat equation with Dirichlet boundary conditions


The heat equation in one dimension is given by

\[
\frac{\partial u}{\partial t} = \alpha^2 \frac{\partial^2 u}{\partial x^2}. \tag{2.1}
\]
To close the problem, one must define a set of boundary conditions, along with an initial condition.
The Dirichlet boundary conditions are given by

u(0, t) = 0, u(L, t) = 0,

while the initial condition is the value of u when t = 0, given by

u(x, 0) = f (x)

for some function f .


In order to apply the finite difference method, we discretize the domain, that is, the xt-plane. To do this, we start by introducing a grid with N + 1 points on the interval x ∈ [0, L]. The gridpoints are then given by
\[
x_i = ih, \quad \text{for } i = 0, 1, 2, \cdots, N,
\]
where h = L/N is the interval length. The points are therefore given by x_0 = 0, x_1 = h, x_2 = 2h, \cdots, x_N = Nh = L.
Similarly, we introduce a grid on the time variable. For a timestep k, we set
\[
t_j = jk.
\]
Unlike the grid in the x domain, the time domain is unbounded. Therefore, if one wants the solution at some termination time T, the grid is defined as t_0, t_1, \cdots, t_{T/k}, where t_{T/k} = (T/k) × k = T.


Having defined the grid, we now introduce a shorthand notation for the function values at each
gridpoint. This is simply given by
uij = u (xi , tj ) .
For the given boundary conditions, we know that u(0, t) = 0. Since u (0, tj ) = u0j , it follows that
u0j = 0 for all j. Similarly, uN,j = 0 for all j (from the second boundary condition). The initial
condition can be treated in a similar manner. Since, t0 = 0, it follows that ui,0 = u (xi , 0) = f (xi ) for
all i = 1, 2, · · · , N − 1.
The idea behind finite difference methods is to use finite difference formulas to approximate the partial derivatives in (2.1). To do this, we note that the Taylor series expansion of u(x, t) with respect to a single variable (x or t) reduces to a similar form as the Taylor series for a function of one variable. In particular, we have
\[
u(x + h, t) = u(x, t) + h \frac{\partial u}{\partial x}(x, t) + \frac{1}{2} h^2 \frac{\partial^2 u}{\partial x^2}(x, t) + O(h^3).
\]
Similarly, a Taylor series expansion in the t direction gives
\[
u(x, t + k) = u(x, t) + k \frac{\partial u}{\partial t}(x, t) + \frac{1}{2} k^2 \frac{\partial^2 u}{\partial t^2}(x, t) + O(k^3).
\]
From this, it follows that we can use the same formulas obtained in Chapter 1 to approximate the
partial derivatives for the heat equation.
In all the finite difference schemes for the heat equation, we will use the central difference formula to approximate the derivative ∂²u/∂x². To do so, notice that the central difference formula for the second partial derivative can be written as
\[
\frac{\partial^2 u}{\partial x^2}(x, t) = \frac{u(x + h, t) - 2u(x, t) + u(x - h, t)}{h^2} + O(h^2).
\]
If we evaluate this formula at (x_i, t_j), we get
\[
\frac{\partial^2 u}{\partial x^2}(x_i, t_j) = \frac{u(x_i + h, t_j) - 2u(x_i, t_j) + u(x_i - h, t_j)}{h^2} + O(h^2)
= \frac{u_{i+1,j} - 2u_{ij} + u_{i-1,j}}{h^2} + O(h^2). \tag{2.2}
\]
The reduction in notation follows from the fact that x_i + h = ih + h = (i + 1)h = x_{i+1}, and similarly x_i − h = x_{i−1}.

2.1.1 The forward difference scheme


For the forward difference scheme, we use the forward difference formula to approximate the time derivative. For partial derivatives, the forward difference formula is given by
\[
\frac{\partial u}{\partial t}(x_i, t_j) = \frac{u(x_i, t_j + k) - u(x_i, t_j)}{k} + O(k)
= \frac{u_{i,j+1} - u_{ij}}{k} + O(k). \tag{2.3}
\]

If we now substitute the approximations (2.2) and (2.3) into the heat equation (2.1), we get
\[
\frac{u_{i,j+1} - u_{ij}}{k} = \alpha^2 \frac{u_{i+1,j} - 2u_{ij} + u_{i-1,j}}{h^2} + O(k + h^2).
\]
If we solve for u_{i,j+1}, we get
\[
u_{i,j+1} = u_{ij} + \frac{k\alpha^2}{h^2} \left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right). \tag{2.4}
\]
The method can now be constructed as follows: From the initial condition, we know the values
ui,0 for i = 1, 2, · · · , N − 1. Note that u0,j = uN,j = 0 from the boundary conditions. As such,
these function values are known from the boundary conditions, so that we do not need to approximate
them. If we substitute j = 0 into (2.4), we can obtain the solutions at ui,1 for all the i values. This
gives the solution at t = t1 = k. By evaluating (2.4) for j = 1, we can obtain the solutions ui,2 for
i = 1, 2, · · · , N − 1. We can reiterate this process until we find the solution at the termination time
t = T = tT /k .
For finite difference schemes, one should always pay special attention to the boundary conditions. In this case, they affect the formulas when i = 1 and i = N − 1. In the first case, when i = 1, we get
\[
u_{1,j+1} = u_{1,j} + \frac{k\alpha^2}{h^2} \left( u_{2,j} - 2u_{1,j} + u_{0,j} \right).
\]
From the boundary condition, we know that u_{0,j} = 0. By substituting this into this approximation, we simply get
\[
u_{1,j+1} = u_{1,j} + \frac{k\alpha^2}{h^2} \left( u_{2,j} - 2u_{1,j} \right).
\]
Similarly, when i = N − 1, the boundary condition gives that u_{N,j} = 0, so that
\[
u_{N-1,j+1} = u_{N-1,j} + \frac{k\alpha^2}{h^2} \left( u_{N,j} - 2u_{N-1,j} + u_{N-2,j} \right)
= u_{N-1,j} + \frac{k\alpha^2}{h^2} \left( -2u_{N-1,j} + u_{N-2,j} \right).
\]
If we set λ = kα2 /h2 , one can write the resulting set of equations at timestep j as the following
system of equations:

\[
\begin{aligned}
u_{1,j+1} &= (1 - 2\lambda)u_{1,j} + \lambda u_{2,j} \\
u_{2,j+1} &= \lambda u_{1,j} + (1 - 2\lambda)u_{2,j} + \lambda u_{3,j} \\
u_{3,j+1} &= \lambda u_{2,j} + (1 - 2\lambda)u_{3,j} + \lambda u_{4,j} \\
&\vdots \\
u_{N-2,j+1} &= \lambda u_{N-3,j} + (1 - 2\lambda)u_{N-2,j} + \lambda u_{N-1,j} \\
u_{N-1,j+1} &= \lambda u_{N-2,j} + (1 - 2\lambda)u_{N-1,j}
\end{aligned}
\]
By introducing the vector u_j = [u_{1,j}, u_{2,j}, \cdots, u_{N-1,j}]^T, we can write this in matrix form
\[
u_{j+1} = A u_j,
\]
where A is the tridiagonal matrix
\[
A = \begin{pmatrix}
1 - 2\lambda & \lambda & 0 & \cdots & 0 \\
\lambda & 1 - 2\lambda & \lambda & \cdots & 0 \\
0 & \lambda & 1 - 2\lambda & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \lambda \\
0 & \cdots & 0 & \lambda & 1 - 2\lambda
\end{pmatrix}.
\]
This form is not efficient for computation, but it is useful for analysis, in particular for the stability analysis considered in Section 2.1.4.
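As an illustration of how the scheme is marched in time, here is a minimal NumPy sketch of the forward difference update (2.4) with homogeneous Dirichlet boundaries; the diffusivity, grid size, final time and initial condition are arbitrary choices, not prescribed by the notes.

```python
import numpy as np

# Forward difference scheme (2.4) for the heat equation with Dirichlet boundaries.
alpha, L, N = 1.0, 1.0, 50
h = L / N
k = 0.4 * h**2 / alpha**2          # respects the stability bound k <= h^2/(2*alpha^2)
lam = k * alpha**2 / h**2
x = np.linspace(0.0, L, N + 1)
u = np.sin(np.pi * x / L)          # example initial condition f(x)

T = 0.1
for _ in range(int(T / k)):
    u_new = u.copy()
    # interior points i = 1, ..., N-1; u[0] and u[N] stay 0 (Dirichlet)
    u_new[1:-1] = u[1:-1] + lam * (u[2:] - 2*u[1:-1] + u[:-2])
    u = u_new
```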

2.1.2 The backward difference


The backward difference scheme is derived in a very similar way to the forward difference scheme. The difference is that we use the backward difference formula for the time derivative, given by
\[
\frac{\partial u}{\partial t}(x_i, t_j) = \frac{u(x_i, t_j) - u(x_i, t_j - k)}{k} + O(k)
= \frac{u_{i,j} - u_{i,j-1}}{k} + O(k). \tag{2.5}
\]
If we now substitute the approximations (2.2) and (2.5) into the heat equation (2.1), we get
\[
\frac{u_{i,j} - u_{i,j-1}}{k} = \alpha^2 \frac{u_{i+1,j} - 2u_{ij} + u_{i-1,j}}{h^2} + O(k + h^2).
\]

Taking all terms with subscript j to the left hand side, and moving the term with subscript j − 1 to the right hand side, gives
\[
u_{i,j} - \frac{k\alpha^2}{h^2} \left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right) = u_{i,j-1}. \tag{2.6}
\]
Once again the values u_{i,0} for i = 1, 2, \cdots, N − 1 are known from the initial condition, and u_{0,j} = u_{N,j} = 0 from the boundary conditions. In contrast to the forward difference method, these equations must be solved simultaneously. To show this, let's consider the system of equations that arises for a fixed choice of j.
From the boundary conditions, it follows that the equation for i = 1 is given by
\[
u_{1,j} - \frac{k\alpha^2}{h^2} \left( u_{2,j} - 2u_{1,j} \right) = u_{1,j-1}.
\]
Similarly, when i = N − 1 we get
\[
u_{N-1,j} - \frac{k\alpha^2}{h^2} \left( -2u_{N-1,j} + u_{N-2,j} \right) = u_{N-1,j-1}.
\]
If we set λ = kα2 /h2 , one can write the resulting set of equations at timestep j as the following
system of equations:

\[
\begin{aligned}
(1 + 2\lambda) u_{1,j} - \lambda u_{2,j} &= u_{1,j-1} \\
-\lambda u_{1,j} + (1 + 2\lambda)u_{2,j} - \lambda u_{3,j} &= u_{2,j-1} \\
-\lambda u_{2,j} + (1 + 2\lambda)u_{3,j} - \lambda u_{4,j} &= u_{3,j-1} \\
&\vdots \\
-\lambda u_{N-3,j} + (1 + 2\lambda)u_{N-2,j} - \lambda u_{N-1,j} &= u_{N-2,j-1} \\
-\lambda u_{N-2,j} + (1 + 2\lambda)u_{N-1,j} &= u_{N-1,j-1}
\end{aligned}
\]
By using the vector u_j = [u_{1,j}, u_{2,j}, \cdots, u_{N-1,j}]^T, we can write this in matrix form
\[
A u_j = u_{j-1}, \tag{2.7}
\]
where A is the tridiagonal matrix
\[
A = \begin{pmatrix}
1 + 2\lambda & -\lambda & 0 & \cdots & 0 \\
-\lambda & 1 + 2\lambda & -\lambda & \cdots & 0 \\
0 & -\lambda & 1 + 2\lambda & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & -\lambda \\
0 & \cdots & 0 & -\lambda & 1 + 2\lambda
\end{pmatrix}.
\]

To solve (2.7), it should be noted that u_{j-1} is known, and u_j is unknown. Therefore we must use a linear solver such as Gaussian elimination.
The backward difference method is also O(k + h^2) accurate, precisely like the forward difference scheme.
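A minimal sketch of one implicit step is shown below (assuming NumPy; a dense solve is used for clarity, although in practice one would exploit the tridiagonal structure with, for example, a banded solver or the Thomas algorithm). The parameters and initial condition are illustrative.

```python
import numpy as np

# Backward difference (implicit) scheme: solve A u_j = u_{j-1} at every step.
alpha, L, N, k = 1.0, 1.0, 50, 1e-3
h = L / N
lam = k * alpha**2 / h**2

n = N - 1                                   # number of interior unknowns
A = ((1 + 2*lam) * np.eye(n)
     + np.diag(-lam * np.ones(n - 1), 1)
     + np.diag(-lam * np.ones(n - 1), -1))

x = np.linspace(0.0, L, N + 1)
u = np.sin(np.pi * x / L)[1:-1]             # interior values of the initial condition

for _ in range(100):
    u = np.linalg.solve(A, u)               # advance one timestep
```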

2.1.3 The Crank-Nicolson scheme


The Crank-Nicolson method combines the forward and backward difference schemes. To obtain the general formula, we evaluate ½ × (2.4) + ½ × (2.6), where in the latter equation the indices j and j − 1 are replaced by j + 1 and j. We then get
\[
\frac{1}{2} u_{i,j+1} - \frac{k\alpha^2}{2h^2}\left( u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1} \right) + \frac{1}{2} u_{i,j+1}
= \frac{1}{2} u_{ij} + \frac{k\alpha^2}{2h^2}\left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right) + \frac{1}{2} u_{ij},
\]
so that
\[
u_{i,j+1} - \frac{k\alpha^2}{2h^2}\left( u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1} \right)
= u_{i,j} + \frac{k\alpha^2}{2h^2}\left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right). \tag{2.8}
\]
In matrix form, this can be written as
\[
A u_{j+1} = B u_j, \tag{2.9}
\]
where
\[
A = \begin{pmatrix}
1 + \lambda & -\frac{\lambda}{2} & 0 & \cdots & 0 \\
-\frac{\lambda}{2} & 1 + \lambda & -\frac{\lambda}{2} & \cdots & 0 \\
0 & -\frac{\lambda}{2} & 1 + \lambda & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & -\frac{\lambda}{2} \\
0 & \cdots & 0 & -\frac{\lambda}{2} & 1 + \lambda
\end{pmatrix},
\qquad
B = \begin{pmatrix}
1 - \lambda & \frac{\lambda}{2} & 0 & \cdots & 0 \\
\frac{\lambda}{2} & 1 - \lambda & \frac{\lambda}{2} & \cdots & 0 \\
0 & \frac{\lambda}{2} & 1 - \lambda & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \frac{\lambda}{2} \\
0 & \cdots & 0 & \frac{\lambda}{2} & 1 - \lambda
\end{pmatrix}.
\]
To solve (2.9), we first compute the right hand side by setting b_j = B u_j. We then find u_{j+1} by solving the system A u_{j+1} = b_j.
Finally, for the purposes of stability analysis, we note that the Crank-Nicolson method can be written in the form
\[
u_{j+1} = A^{-1} B u_j.
\]
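A minimal NumPy sketch of the Crank-Nicolson stepping (build A and B once, then solve A u_{j+1} = B u_j at each step) is shown below; the parameters and initial condition are illustrative assumptions.

```python
import numpy as np

# Crank-Nicolson scheme (2.9) for the heat equation with Dirichlet boundaries.
alpha, L, N, k = 1.0, 1.0, 50, 1e-3
h = L / N
lam = k * alpha**2 / h**2
n = N - 1

I = np.eye(n)
S = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # off-diagonal pattern
A = (1 + lam) * I - (lam / 2) * S
B = (1 - lam) * I + (lam / 2) * S

x = np.linspace(0.0, L, N + 1)
u = np.sin(np.pi * x / L)[1:-1]      # interior values of the initial condition

for _ in range(100):
    u = np.linalg.solve(A, B @ u)
```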

2.1.4 Stability Analysis


As mentioned above, each of the three numerical schemes discussed above can be written in the form
\[
u_{j+1} = C u_j,
\]
where C is the matrix associated with the scheme. For nontrivial initial conditions, one usually introduces a small round-off error. In other words, practically speaking, the initial condition has the form
\[
u_0 + e,
\]
where |e| ≪ 1. From the initial condition, we therefore have
\[
\begin{aligned}
u_1 &= C u_0 + C e, \\
u_2 &= C u_1 = C^2 u_0 + C^2 e, \\
&\vdots \\
u_n &= C u_{n-1} = \cdots = C^n u_0 + C^n e.
\end{aligned}
\]
Here C^n u_0 represents the solution at t_n, while C^n e represents the error. If the matrix C has an eigenvalue α satisfying |α| > 1, the error term e grows exponentially fast, leading to a complete dominance of the error. We refer to such a numerical scheme as unstable. Conversely, if |α| ≤ 1 for all eigenvalues of C, the scheme is called stable.
The method used to perform the stability analysis is known as von Neumann stability analysis, sometimes also referred to as Fourier stability analysis. A useful property of eigenvalues (used in the exercises) is given below:

Theorem 2.1: If α is an eigenvalue of the matrix A with corresponding eigenvector x, then 1/α is an eigenvalue of the matrix B = A^{-1} with corresponding eigenvector x.

Proof: Since α is an eigenvalue of A with corresponding eigenvector x, it follows that
\[
A x = \alpha x.
\]
Multiplying both sides with A^{-1} from the left then gives
\[
A^{-1} A x = A^{-1} \alpha x
\quad\therefore\quad x = \alpha A^{-1} x
\quad\therefore\quad \frac{1}{\alpha} x = A^{-1} x
\quad\therefore\quad B x = \frac{1}{\alpha} x.
\]
Since B x = \frac{1}{\alpha} x, we can conclude that 1/α is an eigenvalue of B with corresponding eigenvector x.

In the following, we consider the stability criterion for the forward difference scheme. As we showed earlier, the forward difference scheme can be represented in matrix form as
\[
u_{j+1} = A u_j,
\]
where
\[
A = \begin{pmatrix}
1 - 2\lambda & \lambda & 0 & \cdots & 0 \\
\lambda & 1 - 2\lambda & \lambda & \cdots & 0 \\
0 & \lambda & 1 - 2\lambda & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \lambda \\
0 & \cdots & 0 & \lambda & 1 - 2\lambda
\end{pmatrix}
\]
and λ = kα²/h². For the von Neumann stability analysis, we assume that the matrix A has eigenvectors of the form
\[
\varphi = [\varphi_1, \varphi_2, \cdots, \varphi_{N-1}]^T,
\]
where
\[
\varphi_j = e^{i\beta x_j} = e^{i\beta j h} \tag{2.10}
\]
for some unknown values of β. We now look for eigenvalues γ satisfying

Aφ = γφ. (2.11)

To find possible eigenvalues, we consider row j of (2.11), given by

λφj−1 + (1 − 2λ)φj + λφj+1 = γφj . (2.12)


Substituting (2.10) into (2.12) gives
\[
\lambda e^{i\beta (j-1) h} + (1 - 2\lambda) e^{i\beta j h} + \lambda e^{i\beta (j+1) h} = \gamma e^{i\beta j h}.
\]
Dividing this equation by e^{iβjh} leads to
\[
\lambda e^{-i\beta h} + (1 - 2\lambda) + \lambda e^{i\beta h} = \gamma.
\]
From the identity e^{ix} + e^{-ix} = 2\cos x, it follows that
\[
\gamma = 2\lambda \cos \beta h + 1 - 2\lambda.
\]

Since −1 ≤ cos βh ≤ 1, it follows that
\[
1 - 4\lambda \le \gamma \le 1.
\]
To ensure stability, all eigenvalues must be less than or equal to 1 in absolute value. Therefore, we must choose λ in a way that ensures that the eigenvalues are greater than or equal to −1. This gives
\[
1 - 4\lambda \ge -1,
\]
so that λ ≤ 1/2. Since λ = kα²/h², we get the stability criterion
\[
k \le \frac{h^2}{2\alpha^2}.
\]
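This criterion can be confirmed numerically; the sketch below (an illustrative NumPy check, not part of the notes) shows that the spectral radius of A stays below 1 for λ ≤ 1/2 and exceeds 1 otherwise.

```python
import numpy as np

# Spectral radius of the forward difference matrix A for a few values of lambda.
N = 20
for lam in (0.4, 0.5, 0.6):
    A = ((1 - 2*lam) * np.eye(N - 1)
         + lam * np.diag(np.ones(N - 2), 1)
         + lam * np.diag(np.ones(N - 2), -1))
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    print(lam, rho)    # stays below 1 for lam <= 1/2, exceeds 1 for lam > 1/2
```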

Exercise 2.1.4
Show that the backward difference and Crank-Nicolson schemes are unconditionally stable.

2.2 Heat equation with generalized boundary conditions


2.2.1 Neumann boundary conditions
The heat equation with Neumann boundary conditions is given by
\[
\frac{\partial u}{\partial t} = \alpha^2 \frac{\partial^2 u}{\partial x^2}, \tag{2.13}
\]
with boundary conditions
\[
\frac{\partial u}{\partial x}(0, t) = 0, \qquad \frac{\partial u}{\partial x}(L, t) = 0,
\]
and initial condition
\[
u(x, 0) = f(x).
\]
The only difference between this problem and the heat equation with Dirichlet boundary conditions is, of course, the boundary conditions.
In order to solve this problem numerically, we introduce a step length h and grid x_i exactly as in the previous example. One important difference, however, is that the solution at the boundary points u_0 and u_N is unknown. Indeed, we only know the slope at these points. As such, we need to include these points as unknowns. We therefore look to solve for the unknown vectors
\[
u_j = [u_0(t_j), u_1(t_j), \cdots, u_N(t_j)]^T,
\]
where t_j = jk for some timestep k.


To start off, let's consider the forward difference scheme. By using the forward difference approximation for ∂u/∂t, and the central difference approximation for ∂²u/∂x², as before, we obtain the same finite difference formula, namely
\[
u_{i,j+1} = u_{i,j} + \frac{k\alpha^2}{h^2} \left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right)
\]
for i = 0, 1, 2, \cdots, N.
The resulting equation works fine for i = 1, 2, \cdots, N − 1. However, at the two boundary points i = 0 and i = N one obtains
\[
u_{0,j+1} = u_{0,j} + \frac{k\alpha^2}{h^2} \left( u_{1,j} - 2u_{0,j} + u_{-1,j} \right) \tag{2.14}
\]
and
\[
u_{N,j+1} = u_{N,j} + \frac{k\alpha^2}{h^2} \left( u_{N+1,j} - 2u_{N,j} + u_{N-1,j} \right), \tag{2.15}
\]
respectively. For i = 0, we see a term u_{-1,j}, corresponding to a function value at x_{-1} = −h. Note that this point lies outside the spatial domain x ∈ [0, L]. Similarly, for i = N, we get the point u_{N+1,j}, corresponding to the function value at x_{N+1} = L + h, which is again outside the domain x ∈ [0, L].

In order to deal with these problems, we use the boundary conditions. Let's first consider the left boundary i = 0. From the boundary condition, we know that
\[
\frac{\partial u}{\partial x}(0, t_j) = 0.
\]
We may now use the central difference formula to approximate this derivative. We then get
\[
\frac{u_{1,j} - u_{-1,j}}{2h} = 0.
\]
Solving for u_{-1,j}, one obtains the approximation
\[
u_{-1,j} = u_{1,j}.
\]
Substituting this into (2.14) then gives the following equation for i = 0:
\[
u_{0,j+1} = u_{0,j} + \frac{k\alpha^2}{h^2} \left( 2u_{1,j} - 2u_{0,j} \right).
\]
A similar treatment can be performed on the right boundary. Since x_N = L, we can use the approximation
\[
\frac{\partial u}{\partial x}(L, t_j) = \frac{u_{N+1,j} - u_{N-1,j}}{2h}.
\]
From the boundary condition, we can set the right hand side equal to zero, giving u_{N+1,j} = u_{N-1,j}. It follows that the equation associated with i = N is given by
\[
u_{N,j+1} = u_{N,j} + \frac{k\alpha^2}{h^2} \left( -2u_{N,j} + 2u_{N-1,j} \right).
\]
If we write out the equations from i = 0 to i = N, we get
\[
\begin{aligned}
u_{0,j+1} &= u_{0,j} + \frac{k\alpha^2}{h^2} \left( 2u_{1,j} - 2u_{0,j} \right), \\
u_{1,j+1} &= u_{1,j} + \frac{k\alpha^2}{h^2} \left( u_{2,j} - 2u_{1,j} + u_{0,j} \right), \\
u_{2,j+1} &= u_{2,j} + \frac{k\alpha^2}{h^2} \left( u_{3,j} - 2u_{2,j} + u_{1,j} \right), \\
&\vdots \\
u_{N-1,j+1} &= u_{N-1,j} + \frac{k\alpha^2}{h^2} \left( u_{N,j} - 2u_{N-1,j} + u_{N-2,j} \right), \\
u_{N,j+1} &= u_{N,j} + \frac{k\alpha^2}{h^2} \left( -2u_{N,j} + 2u_{N-1,j} \right).
\end{aligned}
\]
In matrix form, this can be represented as

uj+1 = Auj ,

where A is the tridiagonal matrix
\[
A = \begin{pmatrix}
1 - 2\lambda & 2\lambda & 0 & 0 & \cdots & 0 \\
\lambda & 1 - 2\lambda & \lambda & 0 & \cdots & 0 \\
0 & \lambda & 1 - 2\lambda & \lambda & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & \lambda & 1 - 2\lambda & \lambda \\
0 & \cdots & \cdots & 0 & 2\lambda & 1 - 2\lambda
\end{pmatrix},
\]
and u_j = [u_0(t_j), u_1(t_j), \cdots, u_N(t_j)]^T.

A similar treatment of the boundary conditions can be used to derive backward difference and Crank-Nicolson schemes for Neumann boundary conditions.
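The only change relative to the Dirichlet matrix is the 2λ entries in the first and last rows; a small sketch of how this matrix can be assembled is given below (illustrative, assuming NumPy).

```python
import numpy as np

# Forward difference matrix for Neumann boundary conditions.
def neumann_forward_matrix(N, lam):
    A = ((1 - 2*lam) * np.eye(N + 1)
         + lam * np.diag(np.ones(N), 1)
         + lam * np.diag(np.ones(N), -1))
    A[0, 1] = 2 * lam          # left boundary row:  u_{0,j+1} = (1-2*lam)u_0 + 2*lam*u_1
    A[N, N - 1] = 2 * lam      # right boundary row: u_{N,j+1} = 2*lam*u_{N-1} + (1-2*lam)u_N
    return A
```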

Exercise 2.2.1
1) Derive the system of equations for the backward difference method of the heat equation with Neumann boundary conditions, and write the equations in matrix form.

2) Repeat (1) for the Crank-Nicolson method.

2.2.2 Robin boundary conditions


Robin boundary conditions consist of a linear combination of Dirichlet and Neumann boundary conditions. A general form is given by
\[
\alpha_1(t) u(0, t) + \beta_1(t) \frac{\partial u}{\partial x}(0, t) = \gamma_1(t), \qquad
\alpha_2(t) u(L, t) + \beta_2(t) \frac{\partial u}{\partial x}(L, t) = \gamma_2(t).
\]
In the following, we derive the forward difference scheme for boundary conditions of this form, and consider constant values for the α, β and γ functions, that is,
\[
\alpha_1 u(0, t) + \beta_1 \frac{\partial u}{\partial x}(0, t) = \gamma_1, \tag{2.16}
\]
\[
\alpha_2 u(L, t) + \beta_2 \frac{\partial u}{\partial x}(L, t) = \gamma_2. \tag{2.17}
\]
The same ideas can be applied to the backward difference and Crank-Nicolson methods.
As before, the boundary conditions only affect the equations (2.4) for i = 0 and i = N. For i = 0, one obtains
\[
u_{0,j+1} = u_{0,j} + \frac{k\alpha^2}{h^2} \left( u_{1,j} - 2u_{0,j} + u_{-1,j} \right). \tag{2.18}
\]
In order to deal with u_{-1,j}, we need to use the boundary condition. By using the central difference formula for ∂u/∂x, we get
\[
\frac{\partial u}{\partial x}(0, t_j) \approx \frac{u_{1,j} - u_{-1,j}}{2h}.
\]

Substituting this into the left boundary condition (2.16), one obtains the formula
\[
\alpha_1 u_{0,j} + \beta_1 \frac{u_{1,j} - u_{-1,j}}{2h} = \gamma_1.
\]
Solving for u_{-1,j}, one gets
\[
u_{-1,j} = u_{1,j} + \frac{2h}{\beta_1} \left( \alpha_1 u_{0,j} - \gamma_1 \right).
\]
Substituting this back into (2.18) then gives
\[
u_{0,j+1} = u_{0,j} + \frac{k\alpha^2}{h^2} \left[ u_{1,j} - 2u_{0,j} + u_{1,j} + \frac{2h}{\beta_1}\left( \alpha_1 u_{0,j} - \gamma_1 \right) \right]
= \left( 1 - 2\lambda + \frac{2h\alpha_1}{\beta_1}\lambda \right) u_{0,j} + 2\lambda u_{1,j} - \frac{2h\lambda}{\beta_1}\gamma_1,
\]
where λ = kα²/h². A similar treatment of the boundary condition (2.17) gives the following equation for i = N:
\[
u_{N,j+1} = 2\lambda u_{N-1,j} + \left( 1 - 2\lambda - \frac{2h\alpha_2}{\beta_2}\lambda \right) u_{N,j} + \frac{2h\lambda}{\beta_2}\gamma_2.
\]
The resulting system of equations is then given by
\[
\begin{aligned}
u_{0,j+1} &= \left( 1 - 2\lambda + \frac{2h\alpha_1}{\beta_1}\lambda \right) u_{0,j} + 2\lambda u_{1,j} - \frac{2h\lambda}{\beta_1}\gamma_1, \\
u_{1,j+1} &= \lambda u_{0,j} + (1 - 2\lambda)u_{1,j} + \lambda u_{2,j}, \\
u_{2,j+1} &= \lambda u_{1,j} + (1 - 2\lambda)u_{2,j} + \lambda u_{3,j}, \\
&\vdots \\
u_{N-1,j+1} &= \lambda u_{N-2,j} + (1 - 2\lambda)u_{N-1,j} + \lambda u_{N,j}, \\
u_{N,j+1} &= 2\lambda u_{N-1,j} + \left( 1 - 2\lambda - \frac{2h\alpha_2}{\beta_2}\lambda \right) u_{N,j} + \frac{2h\lambda}{\beta_2}\gamma_2.
\end{aligned}
\]
By introducing the vector u_j = [u_{0,j}, u_{1,j}, \cdots, u_{N,j}]^T, we can write this in matrix form
\[
u_{j+1} = A u_j + b,
\]

where
\[
A = \begin{pmatrix}
1 - 2\lambda + \frac{2h\alpha_1}{\beta_1}\lambda & 2\lambda & 0 & \cdots & 0 \\
\lambda & 1 - 2\lambda & \lambda & \cdots & 0 \\
0 & \lambda & 1 - 2\lambda & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \lambda \\
0 & \cdots & 0 & 2\lambda & 1 - 2\lambda - \frac{2h\alpha_2}{\beta_2}\lambda
\end{pmatrix},
\qquad
b = \begin{pmatrix}
-\frac{2h\lambda}{\beta_1}\gamma_1 \\
0 \\
\vdots \\
0 \\
\frac{2h\lambda}{\beta_2}\gamma_2
\end{pmatrix}.
\]
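A small sketch of how A and b can be assembled for constant Robin coefficients is given below; the function and parameter names are illustrative, and one timestep is then u_next = A @ u + b.

```python
import numpy as np

# Forward difference matrices for constant Robin boundary conditions (2.16)-(2.17).
def robin_forward_matrices(N, lam, h, a1, b1, g1, a2, b2, g2):
    A = ((1 - 2*lam) * np.eye(N + 1)
         + lam * np.diag(np.ones(N), 1)
         + lam * np.diag(np.ones(N), -1))
    A[0, 0] = 1 - 2*lam + 2*h*a1/b1 * lam
    A[0, 1] = 2 * lam
    A[N, N] = 1 - 2*lam - 2*h*a2/b2 * lam
    A[N, N - 1] = 2 * lam
    b = np.zeros(N + 1)
    b[0] = -2*h*lam/b1 * g1
    b[N] = 2*h*lam/b2 * g2
    return A, b
```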

Exercise 2.2.2
1. Derive the equations for the backward difference scheme with the general form of Robin boundary
conditions, and express the resulting equations in matrix form.

2. Repeat (1) for the Crank-Nicolson method.

2.2.3 Periodic boundary conditions


Periodic boundary conditions correspond to a function u(x, t) satisfying the condition
\[
u(x + L, t) = u(x, t)
\]
for all x and t. One way to interpret this is a circular domain where, after moving L units in x, one returns to the same position.
The introduction of periodic boundary conditions into a finite difference scheme is surprisingly straightforward. Unlike the previous examples, where the values x_{-1} and x_{N+1} lie outside the spatial domain, in this case one obtains
\[
u_{-1,j} = u(-h, t_j) = u(L - h, t_j) = u_{N-1,j},
\]

since x_{N-1} = (N − 1) × L/N = L − h. Similarly,
\[
u_{N,j} = u(L, t_j) = u(L - L, t_j) = u_{0,j},
\]
since x_0 = 0. Notice that, in this problem, we have u_{0,j} = u_{N,j}. We therefore do not need to include u_{N,j} as an unknown, since it is simply equal to u_{0,j}. As such, the unknown function values are given by the vector
\[
u_j = [u_{0,j}, u_{1,j}, \cdots, u_{N-1,j}]^T. \tag{2.19}
\]
Now, the first boundary point occurs at i = 0, giving
\[
u_{0,j+1} = u_{0,j} + \frac{k\alpha^2}{h^2} \left( u_{1,j} - 2u_{0,j} + u_{-1,j} \right).
\]
Since u_{-1,j} = u_{N-1,j}, this simply becomes
\[
u_{0,j+1} = u_{0,j} + \frac{k\alpha^2}{h^2} \left( u_{1,j} - 2u_{0,j} + u_{N-1,j} \right).
\]
For the right boundary at i = N − 1, we get
\[
u_{N-1,j+1} = u_{N-1,j} + \frac{k\alpha^2}{h^2} \left( u_{N,j} - 2u_{N-1,j} + u_{N-2,j} \right).
\]
Once again, we simply substitute u_{N,j} = u_{0,j} to get
\[
u_{N-1,j+1} = u_{N-1,j} + \frac{k\alpha^2}{h^2} \left( u_{0,j} - 2u_{N-1,j} + u_{N-2,j} \right).
\]
This can be expressed in matrix form as
\[
u_{j+1} = A u_j,
\]
where u_j takes the form (2.19) and
\[
A = \begin{pmatrix}
1 - 2\lambda & \lambda & 0 & \cdots & 0 & \lambda \\
\lambda & 1 - 2\lambda & \lambda & \cdots & 0 & 0 \\
0 & \lambda & 1 - 2\lambda & \ddots & \vdots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \lambda & 0 \\
0 & \cdots & 0 & \lambda & 1 - 2\lambda & \lambda \\
\lambda & 0 & \cdots & 0 & \lambda & 1 - 2\lambda
\end{pmatrix}.
\]
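Because of the wrap-around corner entries, the whole periodic update can also be written without building A, using a cyclic shift; the sketch below is illustrative (assuming NumPy), with arbitrary parameters and initial condition.

```python
import numpy as np

# Forward difference scheme with periodic boundaries via np.roll.
alpha, L, N, k = 1.0, 2*np.pi, 64, 1e-3
h = L / N
lam = k * alpha**2 / h**2
x = np.arange(N) * h                 # x_0, ..., x_{N-1}; x_N coincides with x_0
u = np.cos(2*np.pi*x / L)            # example initial condition

for _ in range(100):
    u = u + lam * (np.roll(u, -1) - 2*u + np.roll(u, 1))
```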

Exercise 2.2.3
1. Derive the equations for the backward difference scheme with periodic boundary conditions, and
express the resulting equations in matrix form.
2. Repeat (1) for the Crank-Nicolson method.

2.2.4 Time-dependent boundary conditions


Thus far, we have assumed that the boundary conditions do not depend on time. Time dependence, however, can easily be incorporated into finite difference schemes.
As an example, consider the heat equation under the boundary conditions
\[
u(0, t) = \sin t, \qquad u(L, t) = \cos t.
\]
In order to incorporate these conditions into the numerical scheme, we must now evaluate the boundary conditions at each time step t_j, given by u(0, t_j) = sin t_j and u(L, t_j) = cos t_j. Notice that, as with the Dirichlet boundary conditions, the function values at x_0 and x_N are known. As such, we only need to approximate the function values at the grid points x_i for i = 1, 2, \cdots, N − 1.
Now consider the forward difference scheme. For grid points i = 2, 3, \cdots, N − 2, the boundary conditions do not come into play, and one obtains the same set of equations as those derived above. However, at x_1, we have
\[
u_{1,j+1} = u_{1,j} + \frac{k\alpha^2}{h^2} \left( u_{2,j} - 2u_{1,j} + u_{0,j} \right).
\]
From the boundary conditions we get u_{0,j} = sin t_j. Similarly, at x_{N-1} one substitutes u_{N,j} = cos t_j.
Chapter 3

Finite difference method for the wave equation

Another fundamental partial differential equation is the wave equation, given by
\[
\frac{\partial^2 u}{\partial t^2} = \alpha^2 \frac{\partial^2 u}{\partial x^2}. \tag{3.1}
\]
This equation is accompanied by two boundary conditions, and two initial conditions of the form
\[
u(x, 0) = f(x), \qquad \frac{\partial u}{\partial t}(x, 0) = g(x). \tag{3.2}
\]
The wave equation is well suited to a finite difference treatment. In the following, we derive finite difference schemes for Dirichlet boundary conditions and Neumann boundary conditions.

3.1 Dirichlet boundary conditions


For Dirichlet boundary conditions, we look to solve the wave equation (3.1) with initial conditions of
the form (3.2) and Dirichlet boundary conditions given by

u(0, t) = u(L, t) = 0.

To start off, we discretize the domain by introducing a grid on the x axis with N + 1 equally
distributed points, given by
xi = x0 + ih = ih,
where the stepsize is given by h = L/N . Similarly, for some chosen timestep k, we discretize time by
introducing
tj = t0 + jk = jk.
We then use the notation uij = u (xi , tj ) for the function value at the gridpoints. Since u0,j =
u (x0 , tj ) = 0 for all j, we do not need to approximate uij at i = 0. Similarly, from the other
boundary condition, we do not need to approximate uij when i = N . As such, we only consider


the grid i = 1, 2, · · · , N − 1. If we have a termination time T , then we look for the solution uij for
j = 0, 1, · · · , T /k. Note that this step is identical to the finite difference discretization for the heat
equation with Dirichlet boundary conditions.
We can now apply the central difference formula for the second derivative to both partial derivatives. When expressed in terms of the gridpoints, one obtains
\[
\frac{\partial^2 u}{\partial t^2}(x_i, t_j) = \frac{1}{k^2} \left( u_{i,j+1} - 2u_{ij} + u_{i,j-1} \right) + O(k^2),
\]
while
\[
\frac{\partial^2 u}{\partial x^2}(x_i, t_j) = \frac{1}{h^2} \left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right) + O(h^2).
\]
By substituting these equations into the wave equation (3.1), and solving for u_{i,j+1}, one obtains
\[
u_{i,j+1} = 2u_{ij} - u_{i,j-1} + \frac{\alpha^2 k^2}{h^2} \left( u_{i+1,j} - 2u_{ij} + u_{i-1,j} \right) + O(h^2 + k^2). \tag{3.3}
\]
At the boundary i = 1 one obtains
\[
u_{1,j+1} \approx 2u_{1,j} - u_{1,j-1} + \frac{\alpha^2 k^2}{h^2} \left( u_{2,j} - 2u_{1,j} \right),
\]
since u_{0,j} = 0 for all j from the boundary conditions. Similarly, at i = N − 1 one obtains
\[
u_{N-1,j+1} \approx 2u_{N-1,j} - u_{N-1,j-1} + \frac{\alpha^2 k^2}{h^2} \left( -2u_{N-1,j} + u_{N-2,j} \right),
\]
since u_{N,j} = 0 from the boundary conditions.
At this stage, it should be noted, however, that the general equation (3.3) contains terms at tj+1 ,
tj and tj−1 . To find the solution at t1 , one must therefore use the point ui,−1 , a function value
that is not defined since the solution starts at t = 0, meaning that the solution is undefined at
ui,−1 = u (xi , t−1 ) = u (xi , −k) . We must therefore find an alternative approach to find the solution
at t1 .
To this end, we use both initial conditions u(x, 0) = f(x) and ∂u/∂t(x, 0) = g(x). To utilize this information, we take a Taylor series expansion of u(x, t_1) = u(x, k) about the point t = 0, giving
\[
u(x, 0 + k) = u(x, 0) + k \frac{\partial u}{\partial t}(x, 0) + \frac{1}{2} k^2 \frac{\partial^2 u}{\partial t^2}(x, 0) + O(k^3).
\]
Since u(x, 0) = f(x) and ∂u/∂t(x, 0) = g(x), one then gets
\[
u(x, 0 + k) = f(x) + k g(x) + \frac{1}{2} k^2 \frac{\partial^2 u}{\partial t^2}(x, 0) + O(k^3).
\]
In order to approximate the third term in the expansion, one uses the fact that ∂²u/∂t² = α² ∂²u/∂x² from the wave equation. If we now write this in the subscript notation, and use the central difference formula for ∂²u/∂x² at (x_i, 0), one obtains
\[
u_{i,1} = f(x_i) + k g(x_i) + \frac{k^2 \alpha^2}{2} \left( \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{h^2} + O(h^2) \right) + O(k^3).
\]

Finally, we can therefore use the following equation for j = 0:
\[
u_{i,1} = f(x_i) + k g(x_i) + \frac{\alpha^2 k^2}{2h^2} \left[ f(x_{i+1}) - 2f(x_i) + f(x_{i-1}) \right] + O(k^3 + h^2 k^2). \tag{3.4}
\]
In conclusion, for j = 0 one uses equation (3.4), and for j = 1, 2, \cdots, T/k one uses the general equation (3.3).
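A minimal sketch of this stepping is given below (assuming NumPy): the first step uses (3.4) and subsequent steps use (3.3). The parameters and initial data are illustrative, and the conservative choice of time step reflects a stability restriction of the form αk/h ≤ 1 that is not derived in these notes.

```python
import numpy as np

# Explicit scheme for the wave equation with Dirichlet boundaries.
alpha, L, N = 1.0, 1.0, 100
h = L / N
k = 0.5 * h / alpha                      # conservative time step (assumption)
r2 = (alpha * k / h) ** 2
x = np.linspace(0.0, L, N + 1)
f = np.sin(np.pi * x / L)                # u(x, 0)
g = np.zeros_like(x)                     # u_t(x, 0)

u_prev = f.copy()
u = f.copy()
u[1:-1] = f[1:-1] + k*g[1:-1] + 0.5*r2*(f[2:] - 2*f[1:-1] + f[:-2])   # equation (3.4)

for _ in range(200):                     # equation (3.3) for j >= 1
    u_next = np.zeros_like(u)            # boundary values stay 0
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_prev, u = u, u_next
```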
Chapter 4

The method of lines

Up to this point, we considered finite difference methods for solving PDEs numerically. One issue with these methods is that the order of accuracy associated with the time step is fairly low. The forward difference method for the heat equation produced an error of O(k). For the wave equation, we obtained an error of O(k²). In this chapter, we consider a different approach to dealing with the time integration, that is, to advancing from one timestep to the next. For this method, we will obtain stepsize errors of O(∆t⁴) by using the method of lines.
To introduce the approach, we reconsider the heat and wave equations. Finally, we will apply the method to a nonlinear partial differential equation, namely the Korteweg-de Vries equation.

4.1 The heat equation


To start off, we revisit the heat equation with Dirichlet boundary conditions, given by
\[
\frac{\partial u}{\partial t} = \alpha^2 \frac{\partial^2 u}{\partial x^2},
\]
with initial condition
\[
u(x, 0) = g(x)
\]
and boundary conditions
\[
u(0, t) = u(L, t) = 0.
\]
We start off by introducing a grid on the interval x ∈ [0, L] with N + 1 points in the usual manner, given by x_i = ih with h = L/N. Once more, due to the boundary conditions, we look to approximate the solution for i = 1, 2, \cdots, N − 1. In addition, we consider a time step; however, for these problems, we denote the timestep as ∆t.
The idea is now to approximate the spatial partial derivative ∂²u/∂x², but keep the temporal derivative ∂u/∂t. For i = 1, we then get
\[
\dot{u}_1 = \frac{\alpha^2}{h^2} \left[ u_2(t) - 2u_1(t) \right],
\]
where the overdot corresponds to differentiation with respect to t. Similarly, for i = 2 we get
\[
\dot{u}_2 = \frac{\alpha^2}{h^2} \left[ u_3(t) - 2u_2(t) + u_1(t) \right].
\]


Indeed, for 2 ≤ i ≤ N − 2 one obtains the general formula
\[
\dot{u}_i = \frac{\alpha^2}{h^2} \left[ u_{i+1}(t) - 2u_i(t) + u_{i-1}(t) \right].
\]
Finally, for i = N − 1 one gets
\[
\dot{u}_{N-1} = \frac{\alpha^2}{h^2} \left[ -2u_{N-1}(t) + u_{N-2}(t) \right].
\]
Since we discretized the spatial derivative, but not the time derivative, we refer to the resulting set of equations as a semi-discretization.
Let us now consider the resulting problem, consisting of a system of coupled differential equations. Firstly, since each u_i is a function of t only, we are dealing with ordinary derivatives, not partial derivatives. This means that we have reduced the partial differential equation to a system of ordinary differential equations. We can now proceed to solve this system numerically.
To do this, let's start by writing the system in vector form:
\[
\begin{pmatrix} \dot{u}_1 \\ \dot{u}_2 \\ \dot{u}_3 \\ \vdots \\ \dot{u}_{N-2} \\ \dot{u}_{N-1} \end{pmatrix}
= \frac{\alpha^2}{h^2}
\begin{pmatrix} u_2 - 2u_1 \\ u_3 - 2u_2 + u_1 \\ u_4 - 2u_3 + u_2 \\ \vdots \\ u_{N-1} - 2u_{N-2} + u_{N-3} \\ -2u_{N-1} + u_{N-2} \end{pmatrix}.
\]
By introducing the vector u(t) = [u_1(t), u_2(t), \cdots, u_{N-1}(t)]^T, this system can be written in the form
\[
\dot{u} = f(u), \tag{4.1}
\]
where
\[
f(u) = \frac{\alpha^2}{h^2}
\begin{pmatrix} u_2 - 2u_1 \\ u_3 - 2u_2 + u_1 \\ u_4 - 2u_3 + u_2 \\ \vdots \\ u_{N-1} - 2u_{N-2} + u_{N-3} \\ -2u_{N-1} + u_{N-2} \end{pmatrix}.
\]
In addition, we have the initial condition
\[
u(0) = [g_1, g_2, \cdots, g_{N-1}]^T,
\]
where g_i = g(x_i) is obtained from the initial condition.


The resulting initial value problem can now be solved by means of the fourth-order Runge-Kutta method. If we set t_j = j∆t and u_j = u(t_j), then the fourth-order Runge-Kutta method gives
\[
u_{j+1} = u_j + \frac{1}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right),
\]
where
\[
k_1 = \Delta t\, f(u_j), \quad
k_2 = \Delta t\, f\!\left(u_j + \tfrac{1}{2}k_1\right), \quad
k_3 = \Delta t\, f\!\left(u_j + \tfrac{1}{2}k_2\right), \quad
k_4 = \Delta t\, f(u_j + k_3).
\]
For this scheme, the following two important aspects hold:

1. The method is O(h² + ∆t⁴) accurate.
2. The scheme is explicit, resulting in conditional stability. Therefore the time step ∆t must be chosen sufficiently small relative to the spatial stepsize h.
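A minimal sketch of the method of lines for the heat equation (semi-discretize in x, advance (4.1) with RK4) is given below, assuming NumPy; the parameters, step sizes and initial condition are illustrative choices.

```python
import numpy as np

# Method of lines for the heat equation with Dirichlet boundaries.
alpha, L, N, dt = 1.0, 1.0, 50, 1e-4
h = L / N
x = np.linspace(0.0, L, N + 1)
u = np.sin(np.pi * x / L)[1:-1]          # interior values of g(x)

def f(u):
    d2 = np.empty_like(u)
    d2[0] = u[1] - 2*u[0]                        # i = 1      (u_0 = 0)
    d2[1:-1] = u[2:] - 2*u[1:-1] + u[:-2]        # 2 <= i <= N-2
    d2[-1] = -2*u[-1] + u[-2]                    # i = N-1    (u_N = 0)
    return alpha**2 / h**2 * d2

for _ in range(1000):                    # RK4 time stepping
    k1 = dt * f(u)
    k2 = dt * f(u + k1/2)
    k3 = dt * f(u + k2/2)
    k4 = dt * f(u + k3)
    u = u + (k1 + 2*k2 + 2*k3 + k4) / 6
```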

4.2 Wave equation


Let's now consider the wave equation with Dirichlet boundary conditions, given by
\[
\frac{\partial^2 u}{\partial t^2} = \alpha^2 \frac{\partial^2 u}{\partial x^2},
\]
\[
u(0, t) = u(L, t) = 0,
\]
\[
u(x, 0) = g_1(x), \qquad \frac{\partial u}{\partial t}(x, 0) = g_2(x).
\]
By introducing the same spatial grid, and using the same notation as above, one obtains
\[
\ddot{u} = f(u),
\]
with initial conditions
\[
u_0 = [g_{11}, g_{12}, \cdots, g_{1,N-1}]^T
\]
and
\[
\dot{u}_0 = [g_{21}, g_{22}, \cdots, g_{2,N-1}]^T,
\]
where g_{pq} = g_p(x_q). Notice that the resulting system of ODEs is a system of second-order ODEs. To solve this by means of the Runge-Kutta method, we must rewrite this system of equations as a first-order system of ODEs. To do this, consider the general equation corresponding to 2 ≤ i ≤ N − 2, given by
\[
\ddot{u}_i = \frac{\alpha^2}{h^2} \left( u_{i+1} - 2u_i + u_{i-1} \right).
\]
By introducing the variable v_i = \dot{u}_i, this can be rewritten as
\[
\dot{u}_i = v_i, \qquad
\dot{v}_i = \frac{\alpha^2}{h^2} \left( u_{i+1} - 2u_i + u_{i-1} \right),
\]
where each v_i is a function of t only. We therefore introduce the vector
\[
w = \begin{pmatrix} u \\ v \end{pmatrix},
\]

where v(t) = [v_1(t), v_2(t), \cdots, v_{N-1}(t)]^T. The resulting system of ODEs then becomes
\[
\dot{w} = f(w),
\]
where
\[
f(w) = \begin{pmatrix}
v_1 \\ v_2 \\ \vdots \\ v_{N-1} \\
\frac{\alpha^2}{h^2}\left( u_2 - 2u_1 \right) \\
\frac{\alpha^2}{h^2}\left( u_3 - 2u_2 + u_1 \right) \\
\frac{\alpha^2}{h^2}\left( u_4 - 2u_3 + u_2 \right) \\
\vdots \\
\frac{\alpha^2}{h^2}\left( u_{N-1} - 2u_{N-2} + u_{N-3} \right) \\
\frac{\alpha^2}{h^2}\left( -2u_{N-1} + u_{N-2} \right)
\end{pmatrix}.
\]
The initial condition is then given by
\[
w_0 = \begin{pmatrix}
g_{11} \\ g_{12} \\ \vdots \\ g_{1,N-1} \\ g_{21} \\ g_{22} \\ \vdots \\ g_{2,N-1}
\end{pmatrix}.
\]

Exercise
Derive the method of lines numerical scheme for the Korteweg-de Vries equation, given by
\[
\frac{\partial u}{\partial t} + \frac{1}{2} \frac{\partial^3 u}{\partial x^3} + u \frac{\partial u}{\partial x} = 0,
\]
using periodic boundary conditions and initial condition u(x, 0) = f(x), and the finite difference formula
\[
f'''(x) = \frac{f(x + 2h) - 2f(x + h) + 2f(x - h) - f(x - 2h)}{2h^3} + O(h^2).
\]
Chapter 5

The pseudospectral method

In the previous chapter we introduced an alternative method to integrate over time for a partial
differential equation. In this chapter, we consider an alternative approach to approximating the spatial
derivatives by using the Fourier series.
As a starting point, it is important to note that this method applies only to periodic boundary
conditions. The violation of this set of boundary conditions means that the Fourier series can no longer
provide an accurate approximation of the function. To start off, let’s consider the heat equation with
periodic boundary conditions.

5.1 Heat equation


We consider the standard form of the heat equation
\[
\frac{\partial u}{\partial t} = \alpha^2 \frac{\partial^2 u}{\partial x^2},
\]
with periodic boundary conditions and initial condition u(x, 0) = f(x).
The idea is to use the Fourier series to approximate the second partial derivative with respect to x. To do this, we start in the continuous case, where the Fourier series of the solution u(x, t) is given by
\[
u(x, t) = \sum_{n=-\infty}^{\infty} U_n(t)\, e^{\frac{2\pi i n x}{L}}, \tag{5.1}
\]
where
\[
U_n(t) = \frac{1}{L} \int_{-L/2}^{L/2} u(x, t)\, e^{-\frac{2\pi i n x}{L}} \, dx. \tag{5.2}
\]


We may now formally differentiate (5.1) twice with respect to x to get
\[
\frac{\partial^2 u}{\partial x^2}(x, t) = \sum_{n=-\infty}^{\infty} \left( \frac{2\pi i n}{L} \right)^2 U_n(t)\, e^{\frac{2\pi i n x}{L}}
= \sum_{n=-\infty}^{\infty} \left( -\frac{4\pi^2 n^2}{L^2} \right) U_n(t)\, e^{\frac{2\pi i n x}{L}}. \tag{5.3}
\]

The spatial domain can be discretized by introducing a grid with N + 1 points on the interval [−L/2, L/2]. Let x_j = jh, where h = L/N and N is an even number. We now introduce the notation
\[
u_j(t) = u(x_j, t).
\]
From the boundary conditions, it follows that u_{-N/2}(t) = u_{N/2}(t) for all t. We therefore consider only the u_j(t) values for j = −N/2, \cdots, N/2 − 1.
We now write the function values in vector form as follows:
\[
u(t) = \left[ u_{-N/2}(t), u_{-N/2+1}(t), \cdots, u_{N/2-1}(t) \right]^T.
\]
We proceed by approximating the Fourier coefficients in (5.3) by means of the discrete Fourier transform. To this end, we use the notation
\[
\mathcal{F}\{u(t)\} = U(t),
\]
where
\[
U(t) = \left[ U_{-N/2}(t), U_{-N/2+1}(t), \cdots, U_{N/2-1}(t) \right]^T
\]
satisfies the discrete Fourier transform
\[
U_n(t) = \frac{1}{N} \sum_{j=-N/2}^{N/2-1} u_j(t)\, e^{-\frac{2\pi i j n}{N}}.
\]

The Fourier coefficients of the second derivative, −\frac{4\pi^2 n^2}{L^2} U_n(t), can then be approximated as follows:
\[
\text{vector of Fourier coefficients of } \frac{\partial^2 u}{\partial x^2}(x_j, t) = -\frac{4\pi^2}{L^2}\, n^2 \odot U(t),
\]
where n = [−N/2, −N/2 + 1, \cdots, N/2 − 1]^T, n² = n ⊙ n, and ⊙ represents the element-wise or Hadamard product, defined as
\[
x \odot y = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \odot \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
= \begin{pmatrix} x_1 y_1 \\ x_2 y_2 \\ \vdots \\ x_n y_n \end{pmatrix}.
\]
Finally, we obtain our approximation of the second derivative by taking the inverse discrete Fourier transform of the vector of Fourier coefficients of the second partial derivative, given by
\[
u_{xx}(t) = \mathcal{F}^{-1}\left\{ -\frac{4\pi^2}{L^2}\, n^2 \odot \mathcal{F}\{u\} \right\}.
\]
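A minimal NumPy sketch of this pseudospectral second derivative is given below; note that NumPy's FFT stores the wavenumbers in a different order than −N/2, …, N/2 − 1, which is handled here with np.fft.fftfreq. The grid size and the test function are illustrative assumptions.

```python
import numpy as np

# Pseudospectral second derivative: u_xx = F^{-1}{ -(4*pi^2*n^2/L^2) * F{u} }.
L, N = 2 * np.pi, 64
h = L / N
x = -L/2 + np.arange(N) * h              # grid points x_j (x_{N/2} omitted, as in the notes)
u = np.exp(np.sin(x))                    # arbitrary smooth periodic test function

n_over_L = np.fft.fftfreq(N, d=h)        # equals n/L, in FFT ordering
mult = -(2 * np.pi * n_over_L) ** 2      # i.e. -4*pi^2*n^2/L^2
uxx = np.fft.ifft(mult * np.fft.fft(u)).real

exact = (np.cos(x)**2 - np.sin(x)) * np.exp(np.sin(x))   # d^2/dx^2 of exp(sin x)
print(np.max(np.abs(uxx - exact)))       # spectrally small error
```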

5.1.1 Pseudospectral forward difference scheme


For a pseudospectral forward difference scheme, we now discretize time by using t_j = jk for some time step k. By substituting the forward difference approximation into the heat equation, one obtains (in vector form)
\[
u^{(j+1)} = u^{(j)} + k\alpha^2\, \mathcal{F}^{-1}\left\{ -\frac{4\pi^2}{L^2}\, n^2 \odot \mathcal{F}\{u^{(j)}\} \right\},
\]
where
\[
u^{(j)} = \left[ u_{-N/2}(t_j), u_{-N/2+1}(t_j), \cdots, u_{N/2-1}(t_j) \right]^T.
\]


5.1.2 Pseudospectral method of lines


For a pseudospectral method of lines scheme, one represents the semi-discrete equations (in vector form) as
\[
\dot{u}(t) = f(u),
\]
where
\[
f(u) = \alpha^2\, \mathcal{F}^{-1}\left\{ -\frac{4\pi^2}{L^2}\, n^2 \odot \mathcal{F}\{u\} \right\}.
\]
By discretizing t by t_j = j∆t for some time step ∆t, one then obtains the formula
\[
u^{(j+1)} = u^{(j)} + \frac{1}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right),
\]
where
\[
k_1 = \Delta t\, f\!\left(u^{(j)}\right), \quad
k_2 = \Delta t\, f\!\left(u^{(j)} + \tfrac{1}{2}k_1\right), \quad
k_3 = \Delta t\, f\!\left(u^{(j)} + \tfrac{1}{2}k_2\right), \quad
k_4 = \Delta t\, f\!\left(u^{(j)} + k_3\right).
\]

Exercises
1. Apply the pseudospectral method to the KdV equation
\[
\frac{\partial u}{\partial t} + \frac{1}{2} \frac{\partial^3 u}{\partial x^3} + u \frac{\partial u}{\partial x} = 0
\]
with periodic boundary conditions and initial condition u(x, 0) = f(x). Implement both a forward difference scheme and a method of lines scheme.
2. Apply the pseudospectral method to the nonlinear Schrödinger equation
\[
i \frac{\partial \psi}{\partial t} + \frac{\partial^2 \psi}{\partial x^2} + 2 |\psi|^2 \psi = 0
\]
with periodic boundary conditions and initial condition ψ(x, 0) = f(x). Implement both a forward difference scheme and a method of lines scheme.
Chapter 6

The split-step method

The split-step method provides an interesting approach to approximating the time derivative by reducing the (typically nonlinear) partial differential equation to a sequence of simpler PDEs, in particular, into a linear and a strictly nonlinear PDE. The method works best for periodic boundary conditions, where the linear equation can be solved by means of a spectral method. In addition, the method excels when the nonlinear problem can be solved analytically. A famous example of this is the nonlinear Schrödinger equation, which will be considered later.

6.1 General form of split-step methods


A partial differential equation with linear and nonlinear terms, along with a first-order temporal derivative, can be expressed in the form
\[
\frac{\partial \psi}{\partial t} = (L + N)\psi,
\]
where L is a linear operator and N is a nonlinear operator. For an initial condition ψ(x̄, t̄), the formal solution of this equation at t = t̄ + ∆t is given by
\[
\psi(\bar{x}, \bar{t} + \Delta t) = e^{(L + N)\Delta t}\, \psi(\bar{x}, \bar{t}), \tag{6.1}
\]
where the exponential operator can be expressed as a Taylor series as follows:
\[
e^{A} = I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 + \frac{1}{24}A^4 + \cdots.
\]
If we apply this expansion to the operator in the solution (6.1), one obtains
\[
e^{(L + N)\Delta t} = I + \Delta t\,(L + N) + \frac{(\Delta t)^2}{2}\left( L^2 + LN + NL + N^2 \right) + O(\Delta t^3). \tag{6.2}
\]
To start off, we consider the first-order split-step method.


6.1.1 First-order split-step method


The first-order split-step method uses the following approximation for the exponential operator:
\[
e^{(L + N)\Delta t} \approx e^{L\Delta t} e^{N\Delta t}.
\]
In order to determine the magnitude of the local truncation error, let us perform a Taylor series expansion of the split operator, given by
\[
e^{L\Delta t} e^{N\Delta t} = \left( I + L\Delta t + \tfrac{1}{2}L^2\Delta t^2 + O(\Delta t^3) \right)\left( I + N\Delta t + \tfrac{1}{2}N^2\Delta t^2 + O(\Delta t^3) \right)
= I + (L + N)\Delta t + \frac{\Delta t^2}{2}\left( L^2 + 2LN + N^2 \right) + O(\Delta t^3). \tag{6.3}
\]
Comparing (6.2) and (6.3), we see that the expressions would be equal provided that LN = NL. However, for nonlinear operators, this is generally not the case. Therefore, since LN ≠ NL, it follows that
\[
e^{(L + N)\Delta t} = e^{L\Delta t} e^{N\Delta t} + O(\Delta t^2),
\]
in other words, the local truncation error is of order O(∆t²). Since we apply the method iteratively, it follows that the method is first-order accurate.

6.1.2 Second-order split-step method


For the second-order split-step method, we consider a splitting of the form
\[
e^{(L + N)\Delta t} = e^{[(\alpha_1 + \alpha_2)L + (\beta_1 + \beta_2)N]\Delta t}
\approx e^{\alpha_1 L\Delta t}\, e^{\beta_1 N\Delta t}\, e^{\alpha_2 L\Delta t}\, e^{\beta_2 N\Delta t}, \tag{6.4}
\]
where we want to choose α_1, α_2, β_1 and β_2 in a way that ensures that the local truncation error is O(∆t³). Notice that α_1 + α_2 = β_1 + β_2 = 1.
Expanding the RHS of (6.4), one obtains
\[
\begin{aligned}
e^{\alpha_1 L\Delta t} e^{\beta_1 N\Delta t} e^{\alpha_2 L\Delta t} e^{\beta_2 N\Delta t}
&= \left( I + \alpha_1 L\Delta t + \tfrac{1}{2}\alpha_1^2 L^2\Delta t^2 + \cdots \right)\left( I + \beta_1 N\Delta t + \tfrac{1}{2}\beta_1^2 N^2\Delta t^2 + \cdots \right) \\
&\qquad \times \left( I + \alpha_2 L\Delta t + \tfrac{1}{2}\alpha_2^2 L^2\Delta t^2 + \cdots \right)\left( I + \beta_2 N\Delta t + \tfrac{1}{2}\beta_2^2 N^2\Delta t^2 + \cdots \right) \\
&= I + (\alpha_1 + \alpha_2)L\Delta t + (\beta_1 + \beta_2)N\Delta t + \tfrac{1}{2}(\alpha_1 + \alpha_2)^2 L^2\Delta t^2 \\
&\qquad + (\alpha_1\beta_1 + \alpha_1\beta_2 + \alpha_2\beta_2)\,LN\Delta t^2 + \beta_1\alpha_2\, NL\Delta t^2 + \tfrac{1}{2}(\beta_1 + \beta_2)^2 N^2\Delta t^2 + O(\Delta t^3).
\end{aligned} \tag{6.5}
\]
By comparing (6.2) and (6.5), and remembering that α_1 + α_2 = β_1 + β_2 = 1, it then follows that the following two equations must be satisfied:
\[
\alpha_1\beta_1 + \alpha_1\beta_2 + \alpha_2\beta_2 = \frac{1}{2}, \tag{6.6}
\]
and
\[
\beta_1\alpha_2 = \frac{1}{2}. \tag{6.7}
\]
Starting with the latter, we set α_2 = 1, so that β_1 = 1/2. It follows that α_1 = 0 and β_2 = 1/2. Note that these choices also satisfy (6.6). Substitution into (6.4) then gives
\[
e^{(L + N)\Delta t} = e^{\frac{1}{2}N\Delta t}\, e^{L\Delta t}\, e^{\frac{1}{2}N\Delta t} + O(\Delta t^3).
\]
In other words, the local truncation error is third order. Since the method is applied iteratively, it follows that the method is second-order accurate in terms of ∆t.

6.2 Split-step method for NLS equation


We now consider the NLS equation with periodic boundary conditions, given by
\[
i\frac{\partial \psi}{\partial t} + \frac{\partial^2 \psi}{\partial x^2} + 2|\psi|^2 \psi = 0.
\]
This can be expressed in operator form as
\[
\frac{\partial \psi}{\partial t} = (L + N)\psi,
\]
where
\[
L\psi = i\frac{\partial^2 \psi}{\partial x^2}
\]
and
\[
N\psi = 2i|\psi|^2 \psi.
\]
To use the first-order split-step method to advance from t = t̄ to t = t̄ + ∆t, one computes
\[
\psi(\bar{x}, \bar{t} + \Delta t) \approx e^{L\Delta t} e^{N\Delta t} \psi(\bar{x}, \bar{t}).
\]
The first step is to find the intermediate solution
\[
\tilde{\psi}(\bar{x}, \bar{t}) = e^{N\Delta t} \psi(\bar{x}, \bar{t}).
\]
One can then approximate the solution using
\[
\psi(\bar{x}, \bar{t} + \Delta t) = e^{L\Delta t} \tilde{\psi}(\bar{x}, \bar{t}).
\]
As it turns out, each of these operators can be applied analytically.

6.2.1 Solving eN ∆t ψ analytically


In order to solve e^{N∆t}ψ analytically, we note that ψ̃(x̄, t̄) is the solution of
\[
\frac{\partial \psi}{\partial t} = 2i|\psi|^2 \psi
\]
at t = t̄ + ∆t with initial condition ψ(x̄, t̄). To solve this equation analytically, one must show that (do this at home)
\[
\frac{\partial}{\partial t}|\psi|^2 = 0.
\]
Therefore, the equation reduces to a simple first-order linear ODE with general solution
\[
\psi(\bar{x}, \bar{t} + \Delta t) = e^{2i|\psi(\bar{x}, \bar{t})|^2 \Delta t}\, \psi(\bar{x}, \bar{t}).
\]
It therefore follows that the analytical expression for the intermediate solution is given by
\[
\tilde{\psi}(\bar{x}, \bar{t}) = e^{2i|\psi(\bar{x}, \bar{t})|^2 \Delta t}\, \psi(\bar{x}, \bar{t}). \tag{6.8}
\]

6.2.2 Solving eL∆t ψ analytically


In order to find an analytical expression for e^{L∆t}ψ, we look to solve the linear PDE
\[
\frac{\partial \psi}{\partial t} = i\frac{\partial^2 \psi}{\partial x^2} \tag{6.9}
\]
at t = t̄ + ∆t for some given solution ψ(x̄, t̄). To do this, we expand ψ(x̄, t̄) as a Fourier series
\[
\psi(\bar{x}, \bar{t}) = \sum_{n=-\infty}^{\infty} \Psi_n(\bar{t})\, e^{\frac{2\pi i n \bar{x}}{L}}.
\]
We can then formally evaluate the partial derivatives, given by
\[
\frac{\partial \psi}{\partial t}(\bar{x}, \bar{t}) = \sum_{n=-\infty}^{\infty} \dot{\Psi}_n(\bar{t})\, e^{\frac{2\pi i n \bar{x}}{L}},
\]
and
\[
\frac{\partial^2 \psi}{\partial x^2}(\bar{x}, \bar{t}) = \sum_{n=-\infty}^{\infty} \left( -\frac{4\pi^2 n^2}{L^2} \right) \Psi_n(\bar{t})\, e^{\frac{2\pi i n \bar{x}}{L}}.
\]
Substituting these expressions into (6.9) then gives
\[
\sum_{n=-\infty}^{\infty} \left( \dot{\Psi}_n + \frac{4i\pi^2 n^2}{L^2} \Psi_n \right) e^{\frac{2\pi i n \bar{x}}{L}} = 0.
\]
Since the exponential functions e^{2πinx̄/L} are linearly independent, it follows that the coefficient must be zero for each choice of n, that is,
\[
\dot{\Psi}_n + \frac{4i\pi^2 n^2}{L^2} \Psi_n = 0
\]
for each n ∈ Z. This is just a first-order linear ODE with analytical solution
\[
\Psi_n(\bar{t} + \Delta t) = e^{-\frac{4i\pi^2 n^2}{L^2}\Delta t}\, \Psi_n(\bar{t}).
\]
Substitution into the Fourier series then gives
\[
\psi(\bar{x}, \bar{t} + \Delta t) = \sum_{n=-\infty}^{\infty} e^{-\frac{4i\pi^2 n^2}{L^2}\Delta t}\, \Psi_n(\bar{t})\, e^{\frac{2\pi i n \bar{x}}{L}}. \tag{6.10}
\]

6.2.3 Discretization and resulting split-step scheme


Having obtained analytical solutions for the two exponential operators, we can now obtain a numerical scheme by discretizing the problem. To do so, we introduce a grid on x ∈ [−L/2, L/2] consisting of N + 1 points. Assuming that N is even, we let x_j = jh, where h = L/N and j = −N/2, −N/2 + 1, \cdots, N/2 − 1. Let ψ_j(t̄) = ψ(x_j, t̄). We look to approximate
\[
e^{L\Delta t} e^{N\Delta t} \psi(\bar{x}, \bar{t})
\]
at these grid points.

To start off, we use the analytical solution (6.8) to find the intermediate solution
\[
\tilde{\psi}_j(\bar{t}) = e^{2i|\psi_j(\bar{t})|^2 \Delta t}\, \psi_j(\bar{t}),
\]
for j = −N/2, \cdots, N/2 − 1, where ψ̃_j(t̄) = ψ̃(x_j, t̄).

To solve
\[
\psi(\bar{x}, \bar{t} + \Delta t) = e^{L\Delta t} \tilde{\psi}(\bar{x}, \bar{t}),
\]
we use the discrete Fourier transform in a similar way to how we implemented it for the pseudospectral method. To this end, we introduce the vector
\[
\theta(t) = \left[ \tilde{\psi}_{-N/2}(t), \tilde{\psi}_{-N/2+1}(t), \cdots, \tilde{\psi}_{N/2-1}(t) \right]^T,
\]
and let
\[
\Theta(t) = \left[ \tilde{\Psi}_{-N/2}(t), \tilde{\Psi}_{-N/2+1}(t), \cdots, \tilde{\Psi}_{N/2-1}(t) \right]^T,
\]
where
\[
\tilde{\Psi}_n(t) = \frac{1}{N} \sum_{j=-N/2}^{N/2-1} \tilde{\psi}_j(t)\, e^{-\frac{2\pi i j n}{N}},
\qquad
\tilde{\psi}_j(t) = \sum_{n=-N/2}^{N/2-1} \tilde{\Psi}_n(t)\, e^{\frac{2\pi i j n}{N}}.
\]
Here
\[
\mathcal{F}\{\theta(t)\} = \Theta(t)
\]
denotes the fast Fourier transform and
\[
\mathcal{F}^{-1}\{\Theta(t)\} = \theta(t)
\]
denotes the inverse fast Fourier transform. From the analytical solution (6.10), we then get that
\[
\psi(\bar{t} + \Delta t) = \mathcal{F}^{-1}\left\{ v \odot \mathcal{F}\{\theta(\bar{t})\} \right\},
\]
where
\[
v = \left[ e^{-\frac{4i\pi^2 (-N/2)^2}{L^2}\Delta t},\; e^{-\frac{4i\pi^2 (-N/2+1)^2}{L^2}\Delta t},\; \cdots,\; e^{-\frac{4i\pi^2 (N/2-1)^2}{L^2}\Delta t} \right]^T.
\]
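A minimal NumPy sketch of this first-order split-step scheme is given below. The domain length, grid size, time step and the sech-shaped initial condition are illustrative assumptions, and np.fft.fftfreq supplies the wavenumbers n/L in NumPy's FFT ordering instead of the −N/2, …, N/2 − 1 ordering used above.

```python
import numpy as np

# First-order split-step scheme for the NLS equation of Section 6.2.
L, N, dt = 40.0, 256, 1e-3
h = L / N
x = -L / 2 + np.arange(N) * h
psi = 1.0 / np.cosh(x) + 0j                 # example initial condition

n_over_L = np.fft.fftfreq(N, d=h)           # = n/L in FFT ordering
v = np.exp(-4j * np.pi**2 * n_over_L**2 * dt)   # e^{-4 i pi^2 n^2 dt / L^2}

for _ in range(1000):
    psi = np.exp(2j * np.abs(psi)**2 * dt) * psi   # nonlinear factor, equation (6.8)
    psi = np.fft.ifft(v * np.fft.fft(psi))         # linear factor, equation (6.10)
```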
