Exercises From Finite Difference Methods For Ordinary and Partial Differential Equations
Contents
Chapter 1
  Exercise 1.1 (derivation of finite difference formula)
  Exercise 1.2 (use of fdstencil)
Chapter 2
  Exercise 2.1 (inverse matrix and Green's functions)
  Exercise 2.2 (Green's function with Neumann boundary conditions)
  Exercise 2.3 (solvability condition for Neumann problem)
  Exercise 2.4 (boundary conditions in bvp codes)
  Exercise 2.5 (accuracy on nonuniform grids)
  Exercise 2.6 (ill-posed boundary value problem)
  Exercise 2.7 (nonlinear pendulum)
Chapter 3
  Exercise 3.1 (code for Poisson problem)
  Exercise 3.2 (9-point Laplacian)
Chapter 4
  Exercise 4.1 (Convergence of SOR)
  Exercise 4.2 (Forward vs. backward Gauss-Seidel)
Chapter 5
  Exercise 5.1 (Uniqueness for an ODE)
  Exercise 5.2 (Lipschitz constant for an ODE)
  Exercise 5.3 (Lipschitz constant for a system of ODEs)
  Exercise 5.4 (Duhamel's principle)
  Exercise 5.5 (matrix exponential form of solution)
  Exercise 5.6 (matrix exponential form of solution)
  Exercise 5.7 (matrix exponential for a defective matrix)
  Exercise 5.8 (Use of ode113 and ode45)
  Exercise 5.9 (truncation errors)
  Exercise 5.10 (Derivation of Adams-Moulton)
  Exercise 5.11 (Characteristic polynomials)
  Exercise 5.12 (predictor-corrector methods)
  Exercise 5.13 (Order of accuracy of Runge-Kutta methods)
  Exercise 5.14 (accuracy of TR-BDF2)
  Exercise 5.15 (Embedded Runge-Kutta method)
  Exercise 5.16 (accuracy of a Runge-Kutta method)
  Exercise 5.17 (R(z) for the trapezoidal method)
  Exercise 5.18 (R(z) for Runge-Kutta methods)
  Exercise 5.19 (Padé approximations)
  Exercise 5.20 (R(z) for Runge-Kutta methods)
  Exercise 5.21 (starting values)
Chapter 6
  Exercise 6.1 (Lipschitz constant for a one-step method)
  Exercise 6.2 (Improved convergence proof for one-step methods)
  Exercise 6.3 (consistency and zero-stability of LMMs)
  Exercise 6.4 (Solving a difference equation)
  Exercise 6.5 (Solving a difference equation)
  Exercise 6.6 (Solving a difference equation)
  Exercise 6.7 (Convergence of backward Euler method)
  Exercise 6.8 (Fibonacci sequence)
Chapter 7
  Exercise 7.1 (Convergence of midpoint method)
  Exercise 7.2 (Example 7.10)
  Exercise 7.3 (stability on a kinetics problem)
  Exercise 7.4 (damped linear pendulum)
  Exercise 7.5 (fixed point iteration of implicit methods)
Chapter 8
  Exercise 8.1 (stability region of TR-BDF2)
  Exercise 8.2 (Stiff decay process)
  Exercise 8.3 (Stability region of RKC methods)
  Exercise 8.4 (Implicit midpoint method)
  Exercise 8.5 (The θ-method)
Chapter 9
  Exercise 9.1 (leapfrog for heat equation)
  Exercise 9.2 (codes for heat equation)
  Exercise 9.3 (heat equation with discontinuous data)
  Exercise 9.4 (Jacobi iteration as time stepping)
  Exercise 9.5 (Diffusion and decay)
Chapter 10
  Exercise 10.1 (One-sided and centered methods)
  Exercise 10.2 (Eigenvalues of A for upwind)
  Exercise 10.3 (skewed leapfrog)
  Exercise 10.4 (trapezoid method for advection)
  Exercise 10.5 (modified equation for Lax-Wendroff)
  Exercise 10.6 (modified equation for Beam-Warming)
  Exercise 10.7 (modified equation for trapezoidal)
  Exercise 10.8 (computing with Lax-Wendroff and upwind)
  Exercise 10.9 (computing with leapfrog)
  Exercise 10.10 (Lax-Richtmyer stability of leapfrog as a one-step method)
  Exercise 10.11 (Modified equation for Gauss-Seidel)
Chapter 11
  Exercise 11.1 (two-dimensional Lax-Wendroff)
  Exercise 11.2 (Strang splitting)
  Exercise 11.3 (accuracy of IMEX method)
Chapter 1 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Use the method of undetermined coefficients to set up the 5 × 5 Vandermonde system
that would determine a fourth-order accurate finite difference approximation to u''(x)
based on 5 equally spaced points,

    u''(x) = c_{-2} u(x-2h) + c_{-1} u(x-h) + c_0 u(x) + c_1 u(x+h) + c_2 u(x+2h) + O(h^4).
(b) Compute the coefficients using the matlab code fdstencil.m available from the website,
and check that they satisfy the system you determined in part (a).
(c) Test this finite difference formula to approximate u''(1) for u(x) = sin(2x) with values of
h from the array hvals = logspace(-1, -4, 13). Make a table of the error vs. h for
several values of h and compare against the predicted error from the leading term of the
expression printed by fdstencil. You may want to look at the m-file chap1example1.m
for guidance on how to make such a table.
Also produce a log-log plot of the absolute value of the error vs. h.
You should observe the predicted accuracy for larger values of h. For smaller values,
numerical cancellation in computing the linear combination of u values impacts the ac-
curacy observed.
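A minimal sketch of such a test is given below. It assumes the coefficient routine fdcoeffV.m
(also available from the book's website) is on the path; alternatively, paste in the coefficients
printed by fdstencil in part (b).

    % Convergence test for the 5-point approximation to u''(1) with u(x) = sin(2x)
    u = @(x) sin(2*x);
    uexact = -4*sin(2);                  % exact value of u''(1)
    hvals = logspace(-1, -4, 13);
    err = zeros(size(hvals));
    for i = 1:length(hvals)
        h = hvals(i);
        x = 1 + h*(-2:2);                % the 5 equally spaced points centered at x = 1
        c = fdcoeffV(2, 1, x);           % weights for u''(1); fdstencil prints the same values
        err(i) = abs(c * u(x).' - uexact);
    end
    loglog(hvals, err, 'o-'), xlabel('h'), ylabel('|error|')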
Chapter 2 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Write out the 5 × 5 matrix A from (2.43) for the boundary value problem u''(x) = f(x)
with u(0) = u(1) = 0 for h = 0.25.
(b) Write out the 5 × 5 inverse matrix A−1 explicitly for this problem.
(c) If f (x) = x, determine the discrete approximation to the solution of the boundary value
problem on this grid and sketch this solution and the five Green’s functions whose sum
gives this solution.
(a) Determine the Green's functions for the two-point boundary value problem u''(x) = f(x)
on 0 < x < 1 with a Neumann boundary condition at x = 0 and a Dirichlet condition at
x = 1, i.e., find the function G(x, x̄) solving
(b) Using this as guidance, find the general formulas for the elements of the inverse of the
matrix in equation (2.54). Write out the 5 × 5 matrices A and A−1 for the case h = 0.25.
(a) Modify the m-file bvp2.m so that it implements a Dirichlet boundary condition at x = a
and a Neumann condition at x = b and test the modified program.
(b) Make the same modification to the m-file bvp4.m, which implements a fourth order
accurate method. Again test the modified program.
Exercise 2.5 (accuracy on nonuniform grids)
In Example 1.4 a 3-point approximation to u''(x_i) is determined based on u(x_{i-1}), u(x_i), and
u(x_{i+1}) (by translating from x_1, x_2, x_3 to general x_{i-1}, x_i, and x_{i+1}). It is also determined that
the truncation error of this approximation is (1/3)(h_{i-1} - h_i)u'''(x_i) + O(h^2), where h_{i-1} = x_i - x_{i-1}
and h_i = x_{i+1} - x_i, so the approximation is only first order accurate in h if h_{i-1} and h_i are
O(h) but h_{i-1} ≠ h_i.
The program bvp2.m is based on using this approximation at each grid point, as described
in Example 2.3. Hence on a nonuniform grid the local truncation error is O(h) at each point,
where h is some measure of the grid spacing (e.g., the average spacing on the grid). If we
assume the method is stable, then we expect the global error to be O(h) as well as we refine
the grid.
(a) However, if you run bvp2.m you should observe second-order accuracy, at least provided
you take a smoothly varying grid (e.g., set gridchoice = ’rtlayer’ in bvp2.m). Verify
this.
(b) Suppose that the grid is defined by xi = X(zi ) where zi = ih for i = 0, 1, . . . , m + 1
with h = 1/(m + 1) is a uniform grid and X(z) is some smooth mapping of the interval
[0, 1] to the interval [a, b]. Show that if X(z) is smooth enough, then the local truncation
error is in fact O(h^2). Hint: x_i - x_{i-1} ≈ hX'(z_i).
(c) What average order of accuracy is observed on a random grid? To test this, set gridchoice
= ’random’ in bvp2.m and increase the number of tests done, e.g., by setting mvals =
round(logspace(1,3,50)); to do 50 tests for values of m between 10 and 1000.
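One simple way to turn such a sequence of tests into an average order of accuracy is a
least-squares fit of log(error) against log(h). A minimal sketch, assuming the spacings and
corresponding errors from the bvp2.m runs have been collected in vectors hvals and errvals
(placeholder names):

    p = polyfit(log(hvals(:)), log(errvals(:)), 1);
    fprintf('estimated average order of accuracy: %.2f\n', p(1))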
(d) You might expect the linear system in part (c) to be singular since the boundary value
problem is not well posed. It is not, because of discretization error. Compute the
eigenvalues of the matrix A for this problem and show that an eigenvalue approaches
0 as h → 0. Also show that kA−1 k2 blows up as h → 0 so that the discretization is
unstable.
(a) Write a program to solve the boundary value problem for the nonlinear pendulum as
discussed in the text. See if you can find yet another solution for the boundary conditions
illustrated in Figures 2.4 and 2.5.
(b) Find a numerical solution to this BVP with the same general behavior as seen in Figure
2.5 for the case of a longer time interval, say T = 20, again with α = β = 0.7. Try larger
values of T . What does maxi θi approach as T is increased? Note that for large T this
solution exhibits “boundary layers”.
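A minimal sketch of the Newton iteration for this exercise is given below, assuming the scaled
pendulum equation θ''(t) + sin(θ(t)) = 0 from the text (check the scaling, boundary conditions,
and Jacobian used there); the parameter values follow part (b) and the initial guess is an
arbitrary choice that can be varied to search for different solutions.

    T = 20;  alpha = 0.7;  beta = 0.7;  m = 199;     % values suggested in part (b)
    h = T/(m+1);  t = linspace(0, T, m+2)';
    theta = 0.7 + 0.5*sin(pi*t/T);                   % initial guess; vary this to find other solutions
    theta([1 end]) = [alpha; beta];
    for iter = 1:50
        G = (theta(1:m) - 2*theta(2:m+1) + theta(3:m+2))/h^2 + sin(theta(2:m+1));
        J = spdiags([ones(m,1), -2*ones(m,1) + h^2*cos(theta(2:m+1)), ones(m,1)], -1:1, m, m)/h^2;
        delta = -(J\G);
        theta(2:m+1) = theta(2:m+1) + delta;
        if norm(delta, inf) < 1e-10, break, end
    end
    plot(t, theta), xlabel('t'), ylabel('\theta(t)')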
Chapter 3 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Test this script by performing a grid refinement study to verify that it is second order
accurate.
(b) Modify the script so that it works on a rectangular domain [ax , bx ] × [ay , by ], but still
with ∆x = ∆y = h. Test your modified script on a non-square domain.
(c) Further modify the code to allow ∆x 6= ∆y and test the modified script.
(a) Show that the 9-point Laplacian (3.17) has the truncation error derived in Section 3.5.
Hint: To simplify the computation, note that the 9-point Laplacian can be written as the
5-point Laplacian (with known truncation error) plus a finite difference approximation
that models (1/6)h^2 u_xxyy + O(h^4).
(b) Modify the matlab script poisson.m to use the 9-point Laplacian (3.17) instead of the
5-point Laplacian, and to solve the linear system (3.18) where fij is given by (3.19).
Perform a grid refinement study to verify that fourth order accuracy is achieved.
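For part (b), one compact way to assemble the 9-point Laplacian is sketched below; this is only
the matrix: the unknown ordering must match the one used in poisson.m, and the right-hand side
(3.19) and boundary terms still have to be handled as in that script.

    m = 50;  h = 1/(m+1);                     % m interior points in each direction (example values)
    e = ones(m, 1);
    S = spdiags([e -2*e e], -1:1, m, m);      % 1D second-difference matrix
    I = speye(m);
    A5 = (kron(I, S) + kron(S, I)) / h^2;     % 5-point Laplacian
    A9 = A5 + kron(S, S) / (6*h^2);           % 9-point Laplacian (3.17): adds the u_xxyy correction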
Chapter 4 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Run this program for each method and produce a plot similar to Figure 4.2.
(b) The convergence behavior of SOR is very sensitive to the choice of ω (omega in the code).
Try changing from the optimal ω to ω = 1.8 or 1.95.
(c) Let g(ω) = ρ(G(ω)) be the spectral radius of the iteration matrix G for a given value of
ω. Write a program to produce a plot of g(ω) for 0 ≤ ω ≤ 2 (a minimal sketch appears after this exercise).
(d) From equations (4.22) one might be tempted to try to implement SOR as
for iter=1:maxiter
uGS = (DA - LA) \ (UA*u + rhs);
u = u + omega * (uGS - u);
end
where the matrices have been defined as in iter_bvp_Asplit.m. Try this computation-
ally and observe that it does not work well. Explain what is wrong with this and derive
the correct expression (4.24).
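For part (c), a minimal self-contained sketch for the model problem u''(x) = f(x); the splitting
A = DA - LA - UA mirrors the matrices quoted in part (d), but the convention should be checked
against iter_bvp_Asplit.m.

    m = 99;  h = 1/(m+1);  e = ones(m, 1);
    A  = spdiags([e -2*e e], -1:1, m, m) / h^2;
    DA = diag(diag(A));  LA = -tril(A, -1);  UA = -triu(A, 1);    % A = DA - LA - UA
    omegas = linspace(0.05, 1.95, 80);
    g = zeros(size(omegas));
    for i = 1:length(omegas)
        w = omegas(i);
        G = (DA - w*LA) \ ((1-w)*DA + w*UA);   % standard SOR iteration matrix G(omega)
        g(i) = max(abs(eig(full(G))));          % spectral radius
    end
    plot(omegas, g), xlabel('\omega'), ylabel('\rho(G(\omega))')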
(a) The Gauss-Seidel method for the discretization of u''(x) = f(x) takes the form (4.5) if
we assume we are marching forwards across the grid, for i = 1, 2, . . . , m. We can also
define a backwards Gauss-Seidel method by setting
(b) Implement this method in iter_bvp_Asplit.m and observe that it converges at the same
rate as forward Gauss-Seidel for this problem.
(c) Modify the code so that it solves the boundary value problem

        εu''(x) - au'(x) = f(x)

with u(0) = 0 and u(1) = 0, where a ≥ 0 and the u'(x_i) term is discretized by the
one-sided approximation (U_i - U_{i-1})/h. Test both forward and backward Gauss-Seidel
for the resulting linear system with a = 1 and ε = 0.0005. You should find that they
behave very differently:
[Figure: errors vs. iteration number (0 to 100) for the forward and backward Gauss-Seidel sweeps, plotted on a log scale from 10^0 down to 10^{-16}.]
Explain intuitively why sweeping in one direction works so much better than in the other.
Hint: Note that this equation is the steady equation for an advection-diffusion PDE
u_t(x, t) + au_x(x, t) = εu_xx(x, t) - f(x). You might consider how the methods behave in
the case ε = 0.
Chapter 5 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
Consider the initial value problem

    u'(t) = log(u(t)),    u(0) = 2.

(a) Determine the best possible Lipschitz constant for this function over 2 ≤ u < ∞.

(b) Explain why we know that this problem has a unique solution for all t ≥ 0 based on
the existence and uniqueness theory described in Section 5.2.1. (Hint: Argue that f is
Lipschitz continuous in a domain that the solution never leaves, though the domain is
not symmetric about η = 2 as assumed in the theorem quoted in the book.)
Determine the Lipschitz constant for this system in the max-norm || · ||_∞ and the 1-norm || · ||_1.
(See Appendix A.3.)
The initial value problem

    v''(t) = -4v(t),    v(0) = v_0,    v'(0) = v_0'

has the solution v(t) = v_0 cos(2t) + (1/2)v_0' sin(2t). Determine this solution by rewriting the ODE as
a first order system u' = Au so that u(t) = e^{At}u(0) and then computing the matrix exponential
using (D.30) in Appendix D.
with initial conditions specified at time t = 0. Solve this problem in two different ways:
(a) Solve the first equation, which only involves u1 , and then insert this function into the
second equation to obtain a nonhomogeneous linear equation for u2 . Solve this using
(5.8).
(b) Write the system as u' = Au and compute the matrix exponential using (D.30) to obtain
the solution.
with initial conditions specified at time t = 0. Solve this problem in two different ways:
(a) Solve the first equation, which only involves u1 , and then insert this function into the
second equation to obtain a nonhomogeneous linear equation for u2 . Solve this using
(5.8).
(b) Write the system as u' = Au and compute the matrix exponential using (D.35) to obtain
the solution. (See Appendix C.3 for a discussion of the Jordan Canonical form in the
defective case.)
(a) Verify that the function
v(t) = -sin(2t) + t^2 - 3
is a solution to this problem. How do you know it is the unique solution?
(b) Rewrite this problem as a first order system of the form u'(t) = f(u(t), t), where u(t) ∈ R^3.
Make sure you also specify the initial condition u(0) = η as a 3-vector.
(c) Use the matlab function ode113 to solve this problem over the time interval 0 ≤ t ≤ 2.
Plot the true and computed solutions to make sure you’ve done this correctly.
(d) Test the matlab solver by specifying different tolerances spanning several orders of
magnitude. Create a table showing the maximum error in the computed solution for
each tolerance and the number of function evaluations required to achieve this accuracy.
(e) Repeat part (d) using the matlab function ode45, which uses an embedded pair of
Runge-Kutta methods instead of Adams-Bashforth-Moulton methods.
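A minimal sketch for parts (c)-(e); the right-hand side function f and the initial 3-vector eta
are the ones constructed in part (b), and the first solution component is assumed to be v(t)
(adjust to your ordering). The same loop can be reused with ode45 for part (e).

    vtrue = @(t) -sin(2*t) + t.^2 - 3;
    tols = [1e-3 1e-6 1e-9 1e-12];
    err = zeros(size(tols));  nfe = err;
    for i = 1:length(tols)
        opts = odeset('RelTol', tols(i), 'AbsTol', tols(i));
        sol = ode113(@f, [0 2], eta, opts);
        tt = linspace(0, 2, 1000);
        err(i) = max(abs(deval(sol, tt, 1) - vtrue(tt)));
        nfe(i) = sol.stats.nfevals;            % number of f evaluations used
    end
    disp([tols(:) err(:) nfe(:)])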
(a) Using the expression for the local truncation error in Section 5.9.1,
Interpolate a quadratic polynomial p(t) through the three values f (U n ), f (U n+1 ) and
f (U n+2 ) and then integrate this polynomial exactly to obtain the formula. The coeffi-
cients of the polynomial will depend on the three values f (U n+j ). It’s easiest to use the
“Newton form” of the interpolating polynomial and consider the three times t_n = -k,
t_{n+1} = 0, and t_{n+2} = k, so that p(t) has the form

    p(t) = A + B(t + k) + C(t + k)t,

where A, B, and C are the appropriate divided differences based on the data. Then
integrate from 0 to k. (The method has the same coefficients at any time, so this is
valid.)
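If the Symbolic Math Toolbox is available, the interpolation-and-integration step can be checked
mechanically; the expected result quoted in the last comment is the 2-step Adams-Moulton formula.

    syms t k f0 f1 f2                      % f0 = f(U^n), f1 = f(U^{n+1}), f2 = f(U^{n+2})
    A = f0;
    B = (f1 - f0)/k;                       % first divided difference
    C = (f2 - 2*f1 + f0)/(2*k^2);          % second divided difference
    p = A + B*(t + k) + C*(t + k)*t;       % Newton form through (-k,f0), (0,f1), (k,f2)
    simplify(int(p, t, 0, k))              % expect (k/12)*(-f0 + 8*f1 + 5*f2)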
Exercise 5.11 (Characteristic polynomials)
Determine the characteristic polynomials ρ(ζ) and σ(ζ) for the following linear multistep
methods. Verify that (5.48) holds in each case.
(a) Verify that the predictor-corrector method (5.51) is second order accurate.
(b) Show that the predictor-corrector method obtained by predicting with the 2-step Adams-
Bashforth method followed by correcting with the 2-step Adams-Moulton method is third
order accurate.
    0    |
    1/2  |  1/2
    1    |  0    1
    1    |  0    0    1

    0    |
    1/3  |  1/3
    2/3  |  0    2/3
    -----+---------------
         |  1/4  0
Use the approach suggested in the Remark at the bottom of page 129 to test the accuracy
of the TR-BDF2 method (5.36).
    0    |
    1    |  1
    1/2  |  1/4  1/4
    -----+---------------
         |  1/2  1/2  0
(a) Determine the leading term of the truncation error (i.e., the O(k 2 ) term) for the Runge-
Kutta method (5.30) of Example 5.11.
(b) Do the same for the method (5.32) for the non-autonomous case.
(a) Apply the trapezoidal method to the equation u' = λu and show that

        U^{n+1} = ((1 + z/2)/(1 - z/2)) U^n,

    where z = λk.

(b) Let

        R(z) = (1 + z/2)/(1 - z/2).

    Show that R(z) = e^z + O(z^3) and conclude that the one-step error of the trapezoidal
    method on this problem is O(k^3) (as expected since the method is second order accurate).

    Hint: One way to do this is to use the “Neumann series” expansion

        1/(1 - z/2) = 1 + (z/2) + (z/2)^2 + (z/2)^3 + · · ·

    and then multiply this series by (1 + z/2). A more general approach to checking the
    accuracy of rational approximations to e^z is explored in the next exercises.
Exercise 5.18 (R(z) for Runge-Kutta methods)
Any r-stage Runge-Kutta method applied to u' = λu will give an expression of the form

    U^{n+1} = R(z)U^n
where z = λk and R(z) is a rational function, a ratio of polynomials in z each having degree
at most r. For an explicit method R(z) will simply be a polynomial of degree r and for an
implicit method it will be a more general rational function.
Since u(t_{n+1}) = e^z u(t_n) for this problem, we expect that a pth order accurate method will
give a function R(z) satisfying

    R(z) = e^z + O(z^{p+1}),                                        (Ex5.18a)

as discussed in the Remark on page 129. The rational function R(z) also plays a role in stability
analysis as discussed in Section 7.6.2.

One can determine the value of p in (Ex5.18a) by expanding e^z in a Taylor series about z = 0,
writing the O(z^{p+1}) term as Cz^{p+1} + O(z^{p+2}), multiplying through by the denominator
of R(z), and then collecting terms. For example, for the trapezoidal method of Exercise 5.17,

    (1 + z/2)/(1 - z/2) = 1 + z + (1/2)z^2 + (1/6)z^3 + · · · + Cz^{p+1} + O(z^{p+2})

gives

    1 + (1/2)z = (1 - (1/2)z)(1 + z + (1/2)z^2 + (1/6)z^3 + · · · + Cz^{p+1} + O(z^{p+2}))
               = 1 + (1/2)z - (1/12)z^3 + · · · + Cz^{p+1} + O(z^{p+2})

and so

    Cz^{p+1} = (1/12)z^3 + · · · ,

from which we conclude that p = 2.
(a) Let

        R(z) = (1 + (1/3)z)/(1 - (2/3)z + (1/6)z^2).

    Determine p for this rational function as an approximation to e^z.
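A quick numerical way to double-check the value of p found by hand (a sketch; the slope of the
log-log error curve should be approximately p + 1):

    R = @(z) (1 + z/3) ./ (1 - 2*z/3 + z.^2/6);
    z = logspace(-3, -1, 20);
    c = polyfit(log(z), log(abs(R(z) - exp(z))), 1);
    fprintf('log-log slope = %.2f, so p is about %d\n', c(1), round(c(1)) - 1)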
Exercise 5.19 (Padé approximations)
A rational function R(z) = P(z)/Q(z) with degree m in the numerator and degree n in the
denominator is called the (m, n) Padé approximation to a function f(z) if

    f(z) - R(z) = O(z^{q+1})

with q as large as possible. The Padé approximation can be uniquely determined by expanding
f(z) in a Taylor series about z = 0 and then considering the series

    Q(z)f(z) - P(z),

collecting powers of z, and choosing the coefficients of P and Q to make as many terms vanish
as possible. Trying to require that they all vanish will give a system of infinitely many linear
equations for the coefficients. Typically these cannot all be satisfied simultaneously, while
requiring the maximal number to hold will give a nonsingular linear system. (Note that the
(m, 0) Padé approximation is simply the first m + 1 terms of the Taylor series.)
For this exercise, consider the exponential function f(z) = e^z.
(a) Determine the (1, 1) Padé approximation of the form

        R(z) = (1 + a_1 z)/(1 + b_1 z).

    Note that this rational function arises from the trapezoidal method applied to u' = λu
    (see Exercise 5.18).
(b) Determine the (2, 2) Padé approximation of the form

        R(z) = (1 + a_1 z + a_2 z^2)/(1 + b_1 z + b_2 z^2)

    (where, as before, z = λk when R(z) arises from a one-step method applied to u' = λu).
(a) Show that if the Runge-Kutta method is applied to the equation u' = λu the formulas
    (5.34) can be written concisely as

        Y = U^n e + zAY,
        U^{n+1} = U^n + zb^T Y,

    and hence

        U^{n+1} = [I + zb^T(I - zA)^{-1} e] U^n.                    (Ex5.20a)
(b) Recall that by Cramer's rule, if B is an r × r matrix then the ith element of the
    vector y = B^{-1} e is given by

        y_i = det(B_i)/det(B),
where Bi is the matrix B with the ith column replaced by e, and det denotes the deter-
minant.
In the expression (Ex5.20a), B = I − zA and each element of B is linear in z. From
the definition of the determinant it follows that det(B) will be a polynomial of degree
at most r, while det(Bi ) will be a polynomial of degree at most r − 1 (since the column
vector e does not involve z).
From these facts, conclude that (Ex5.20a) yields U n+1 = R(z)U n where R(z) is a rational
function of degree at most (r, r).
(c) Explain why an explicit Runge-Kutta method (for which A is strictly lower triangular)
results in R(z) being a polynomial of degree at most r (i.e., a rational function of degree
at most (r, 0)).
(d) Use (Ex5.20a) to determine the function R(z) for the TR-BDF2 method (5.36). Note
that in this case I − zA is lower triangular and you can compute (I − zA) −1 e by forward
substitution. You should get the same result as in Exercise 5.18(c).
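With the Symbolic Math Toolbox, (Ex5.20a) can be evaluated directly from a tableau. The tableau
below is only a placeholder (it is the 2-stage representation of the trapezoidal method, so the
result should reproduce Exercise 5.17); substitute the TR-BDF2 coefficients from (5.36) for
part (d).

    syms z
    A = sym([0 0; 1/2 1/2]);       % placeholder tableau: trapezoidal method as a 2-stage RK method
    b = sym([1/2; 1/2]);
    e = ones(2, 1);
    R = simplify(1 + z * b.' * ((eye(2) - z*A) \ e))   % should give (1 + z/2)/(1 - z/2)
    taylor(R, z, 'Order', 5)       % compare with the series for exp(z) to read off the order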
Chapter 6 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) U^{n+2} = (1/2)U^{n+1} + (1/2)U^n + 2kf(U^{n+1})
(b) U^{n+1} = U^n
The difference equation U^{n+2} = U^n with starting values U^0 and U^1 has the solution

    U^n = (1/2)(U^0 + U^1) + (1/2)(U^0 - U^1)(-1)^n.

Now consider the difference equation U^{n+4} = U^n with four starting values U^0, U^1, U^2, U^3.
Use the roots of the characteristic polynomial to find an analogous representation of the solution
to this equation.
(a) Determine the general solution to the linear difference equation
        2U^{n+3} - 5U^{n+2} + 4U^{n+1} - U^n = 0.
Hint: One root of the characteristic polynomial is at ζ = 1.
(b) Determine the solution to this difference equation with the starting values U^0 = 11,
U^1 = 5, and U^2 = 1. What is U^{10}?
(d) Suppose you use the values of β0 and β1 just determined in this LMM. Is this a convergent
method?
(c) Consider the iteration

        [ U^{n+1} ]   [   0     1 ] [ U^n     ]
        [ U^{n+2} ] = [ -0.25   1 ] [ U^{n+1} ].
The matrix appearing here is the “companion matrix” (D.19) for the above difference
equation. If this matrix is called A, then we can determine U n from the starting values
using the nth power of this matrix. Compute An as discussed in Appendix D.2 and show
that this gives the same solution found in part (b).
U^{n+1} = Φ(U^n)
and so this shows that the implicit backward Euler method is convergent.
(a) Show that for large n the ratio Fn /Fn−1 approaches the “golden ratio” φ ≈ 1.618034.
(b) Show that the result of part (a) holds if any two integers are used as the starting values
F0 and F1 , assuming they are not both zero.
(c) Is this true for all real starting values F0 and F1 (not both zero)?
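A quick numerical illustration of parts (a) and (b):

    F = [3, 7];                          % any two starting integers, not both zero
    for n = 3:30
        F(n) = F(n-1) + F(n-2);
    end
    ratios = F(2:end) ./ F(1:end-1);
    disp(ratios(end))                    % compare with (1 + sqrt(5))/2 = 1.6180...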
Chapter 7 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Choose a time step based on the stability analysis indicated in Example 7.12 and deter-
mine whether the numerical solution remains bounded in this case.
(b) How large can you choose k before you observe instability in your program?
(a) Modify the m-file to also implement the 2-step explicit Adams-Bashforth method AB2.
(b) Test the midpoint, trapezoid, and AB2 methods (all of which are second order accurate)
for each of the following cases (and perhaps others of your choice) and comment on the
behavior of each method.
(iii) a = 100, b = 10 (more damped).
    Û^0 = U^n + kf(U^n)
    for j = 0, 1, . . . , N - 1
        Û^{j+1} = U^n + kf(Û^j)
    end
    U^{n+1} = Û^N.
Note that this can be interpreted as a fixed point iteration for solving the nonlinear
equation
U^{n+1} = U^n + kf(U^{n+1})
of the backward Euler method. Since the backward Euler method is implicit and has a
stability region that includes the entire left half plane, as shown in Figure 7.1(b), one
might hope that this predictor-corrector method also has a large stability region.
Plot the stability region SN of this method for N = 2, 5, 10, 20 (perhaps using plotS.m
from the webpage) and observe that in fact the stability region does not grow much in
size (a minimal sketch that avoids plotS.m appears after this exercise).
(d) Using the result of part (b), show that the fixed point iteration being used in the predictor-
corrector method of part (c) can only be expected to converge if |kλ| < 1 for all eigen-
values λ of the Jacobian matrix f'(u).
(e) Based on the result of part (d) and the shape of the stability region of Backward Euler,
what do you expect the stability region SN of part (c) to converge to as N → ∞?
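For part (c), a minimal sketch that avoids plotS.m. It assumes you have verified that applying
the iteration to u' = λu gives U^{n+1} = R_N(z)U^n with R_N(z) = 1 + z + · · · + z^{N+1} (worth
checking by hand); the stability region S_N is then the set where |R_N(z)| ≤ 1.

    [x, y] = meshgrid(linspace(-3, 3, 500));
    z = x + 1i*y;
    hold on
    for N = [2 5 10 20]
        RN = polyval(ones(1, N+2), z);   % 1 + z + ... + z^(N+1)
        contour(x, y, abs(RN), [1 1])    % boundary of S_N
    end
    axis equal, grid on, hold off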
Chapter 8 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Use decaytest.m to determine how many function evaluations are used for four different
choices of tol.
Modify the m-file decay1.m to solve this system by adding u4 = [D] and using the initial
data u4 = 0. Test your modified program with a modest value of K3 , e.g., K3 = 3, to
make sure it gives reasonable results and produces a plot of all 4 components of u.
(d) Test ode113 with K3 = 1000 and the four tolerances used in decaytest.m. You should
observe two things:
(i) The number of function evaluations required is much larger than when solving
(Ex8.2a), even though the solution is essentially the same,
(ii) The number of function evaluations doesn’t change much as the tolerance is reduced.
(e) Plot the computed solution from part (d) with tol = 1e-2 and tol = 1e-4 and com-
ment on what you observe.
(f) Test your modified system with three different values of K3 = 500, 1000 and 2000. In
each case use tol = 1e-6. You should observe that the number of function evaluations
needed grows linearly with K3 . Explain why you would expect this to be true (rather
than being roughly constant, or growing at some other rate such as quadratic in K_3).
About how many function evaluations would be required if K_3 = 10^7?
(g) Repeat part (f) using ode15s in place of ode113. Explain why the number of function
evaluations is much smaller and now roughly constant for large K_3. Also try K_3 = 10^7.
The first step is Backward Euler to determine an approximation to the value at the midpoint
in time and the second step is the midpoint method using this value.
Consider the θ-method

    U^{n+1} = U^n + k[(1 - θ)f(U^n) + θf(U^{n+1})],

where θ is a fixed parameter. Note that θ = 0, 1/2, 1 all give familiar methods.
(b) Plot the stability region S for θ = 0, 1/4, 1/2, 3/4, 1 and comment on how the stability
region will look for other values of θ.
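For part (b), a minimal plotting sketch, assuming the amplification factor
R(z) = (1 + (1 - θ)z)/(1 - θz) obtained by applying the method to u' = λu (this is the quantity
whose modulus determines S):

    [x, y] = meshgrid(linspace(-6, 6, 500));
    z = x + 1i*y;
    hold on
    for theta = [0 0.25 0.5 0.75 1]
        R = (1 + (1-theta)*z) ./ (1 - theta*z);
        contour(x, y, abs(R), [1 1])     % boundary of the stability region for this theta
    end
    axis equal, grid on, hold off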
Chapter 9 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(b) Suppose we take k = αh2 for some fixed α > 0 and refine the grid. For what values of α
(if any) will this method be Lax-Richtmyer stable and hence convergent?
Hint: Consider the MOL interpretation and the stability region of the time-discretization
being used.
(a) The m-file heat_CN.m solves the heat equation ut = κuxx using the Crank-Nicolson
method. Run this code, and by changing the number of grid points, confirm that it is
second-order accurate. (Observe how the error at some fixed time such as T = 1 behaves
as k and h go to zero with a fixed relation between k and h, such as k = 4h.)
You might want to use the function error_table.m to print out this table and estimate
the order of accuracy, and error_loglog.m to produce a log-log plot of the error vs. h.
See bvp_2.m for an example of how these are used.
(b) Modify heat_CN.m to produce a new m-file heat_trbdf2.m that implements the TR-
BDF2 method on the same problem. Test it to confirm that it is also second order
accurate. Explain how you determined the proper boundary conditions in each stage of
this Runge-Kutta method.
(c) Modify heat_CN.m to produce a new m-file heat_FE.m that implements the forward Euler
explicit method on the same problem. Test it to confirm that it is O(h^2) accurate as
h → 0 provided k = 24h^2 is used, which is within the stability limit for κ = 0.02.
Note how many more time steps are required than with Crank-Nicolson or TR-BDF2,
especially on finer grids.
(d) Test heat_FE.m with k = 26h^2, for which it should be unstable. Note that the instability
does not become apparent until about time 1.6 for the parameter values κ = 0.02, m =
39, β = 150. Explain why the instability takes several hundred time steps to appear,
and why it appears as a sawtooth oscillation.
Hint: What wave numbers ξ are growing exponentially for these parameter values?
What is the initial magnitude of the most unstable eigenmode in the given initial data?
The expression (16.52) for the Fourier transform of a Gaussian may be useful.
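A small sketch related to this hint, assuming the unit interval with Dirichlet boundary
conditions as in heat_CN.m, so that the relevant wave numbers are ξ_p = pπ for p = 1, . . . , m
(check this against the setup in that script):

    kappa = 0.02;  m = 39;  h = 1/(m+1);  k = 26*h^2;
    p = 1:m;  xi = p*pi;
    g = 1 - 4*kappa*k/h^2 * sin(xi*h/2).^2;   % forward Euler amplification factors
    p(abs(g) > 1)                              % the modes that grow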
(a) Modify heat_CN.m to solve the heat equation for -1 ≤ x ≤ 1 with step function initial
    data

        u(x, 0) = 1 if x < 0,    u(x, 0) = 0 if x ≥ 0.                    (Ex9.3a)

    With appropriate Dirichlet boundary conditions, the exact solution is

        u(x, t) = (1/2) erfc(x/√(4κt)),                                   (Ex9.3b)

    where erfc is the complementary error function

        erfc(x) = (2/√π) ∫_x^∞ e^{-z^2} dz.
(i) Test this routine with m = 39 and k = 4h. Note that there is an initial rapid transient
decay of the high wave numbers that is not captured well with this size time step.
(ii) How small do you need to take the time step to get reasonable results? For a
suitably small time step, explain why you get much better results by using m = 38
than m = 39. What is the observed order of accuracy as k → 0 when k = αh with
α suitably small and m even?
(b) Modify heat_trbdf2.m (see Exercise 9.2) to solve the heat equation for −1 ≤ x ≤ 1
with step function initial data as above. Test this routine using k = 4h and estimate the
order of accuracy as k → 0 with m even. Why does the TR-BDF2 method work better
than Crank-Nicolson?
Consider the PDE
ut = κuxx − γu, (Ex9.5a)
which models a diffusion with decay provided κ > 0 and γ > 0. Consider methods of the form
    U_j^{n+1} = U_j^n + (k/(2h^2))[U_{j-1}^n - 2U_j^n + U_{j+1}^n + U_{j-1}^{n+1} - 2U_j^{n+1} + U_{j+1}^{n+1}]
                       - kγ[(1 - θ)U_j^n + θU_j^{n+1}]                    (Ex9.5b)
where θ is a parameter. In particular, if θ = 1/2 then the decay term is modeled with the same
centered-in-time approach as the diffusion term and the method can be obtained by applying
the Trapezoidal method to the MOL formulation of the PDE. If θ = 0 then the decay term
is handled explicitly. For more general reaction-diffusion equations it may be advantageous
to handle the reaction terms explicitly since these terms are generally nonlinear, so making
them implicit would require solving nonlinear systems in each time step (whereas handling the
diffusion term implicitly only gives a linear system to solve in each time step).
(a) By computing the local truncation error, show that this method is O(k^p + h^2) accurate,
where p = 2 if θ = 1/2 and p = 1 otherwise.
(b) Using von Neumann analysis, show that this method is unconditionally stable if θ ≥ 1/2.
(c) Show that if θ = 0 then the method is stable provided k ≤ 2/γ, independent of h.
Chapter 10 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
        D_0 = (1/(2h)) *
              [  0   1   0   0  -1 ]
              [ -1   0   1   0   0 ]
              [  0  -1   0   1   0 ]
              [  0   0  -1   0   1 ]
              [  1   0   0  -1   0 ]                              (Ex10.1b)
for a second-order accurate centered approximation. (These are illustrated for a grid with
m + 1 = 5 unknowns and h = 1/5.)
The advection equation ut + aux = 0 on the interval 0 ≤ x ≤ 1 with periodic boundary
conditions gives rise to the MOL discretization U'(t) = -aDU(t) where D is one of the
matrices above.
(a) Discretizing U' = -aD_-U by forward Euler gives the first order upwind method

        U_j^{n+1} = U_j^n - (ak/h)(U_j^n - U_{j-1}^n),                    (Ex10.1c)

    where the index j runs from 0 to m with addition of indices performed mod m + 1 to
    incorporate the periodic boundary conditions.

    Suppose instead we discretize the MOL equation by the second-order Taylor series
    method,

        U^{n+1} = U^n - akD_-U^n + (1/2)(ak)^2 D_-^2 U^n.                 (Ex10.1d)

    Compute D_-^2 and also write out the formula for U_j^{n+1} that results from this method.
(b) How accurate is the method derived in part (a) compared to the Beam-Warming method,
which is also a 3-point one-sided method?
(c) Suppose we make the method (Ex10.1d) more symmetric:

        U^{n+1} = U^n - (ak/2)(D_+ + D_-)U^n + (1/2)(ak)^2 D_+D_- U^n.    (Ex10.1e)

    Write out the formula for U_j^{n+1} that results from this method. What standard method is
    this?
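For experimenting with this exercise, the periodic difference matrices can be built as sparse
matrices; a sketch for the 5-point grid shown above (the scaling lines at the end reproduce the
integer entries of the displayed matrix):

    mp1 = 5;  h = 1/mp1;                       % m+1 = 5 unknowns, h = 1/5
    e = ones(mp1, 1);
    Dm = spdiags([-e e], [-1 0], mp1, mp1);  Dm(1, end) = -1;                    % D_-
    Dm = Dm / h;
    D0 = spdiags([-e e], [-1 1], mp1, mp1);  D0(1, end) = -1;  D0(end, 1) = 1;   % D_0
    D0 = D0 / (2*h);
    full(h*Dm), full(2*h*D0)                   % compare the second with (Ex10.1b)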
(a) Produce a plot similar to those shown in Figure 10.1 for the upwind method (10.21) with
the same values of a = 1, h = 1/50 and k = 0.8h used in that figure.
(b) Produce the corresponding plot if the one-sided method (10.22) is instead used with the
same values of a, h, and k.
Note that if ak/h ≈ 1 then this stencil roughly follows the characteristic of the advection
equation and might be expected to be more accurate than standard leapfrog. (If ak/h = 1 the
method is exact.)
(b) For what range of Courant number ak/h does this method satisfy the CFL condition?
(c) Show that the method is in fact stable for this range of Courant numbers by doing von
Neumann analysis. Hint: Let γ(ξ) = e^{iξh} g(ξ) and show that γ satisfies a quadratic
equation closely related to the equation (10.34) that arises from a von Neumann analysis
of the leapfrog method.
Consider the method

    U_j^{n+1} = U_j^n - (ak/(2h))(U_j^n - U_{j-1}^n + U_j^{n+1} - U_{j-1}^{n+1})          (Ex10.4a)
for the advection equation ut + aux = 0 on 0 ≤ x ≤ 1 with periodic boundary conditions.
(a) This method can be viewed as the trapezoidal method applied to an ODE system U'(t) =
AU (t) arising from a method of lines discretization of the advection equation. What is
the matrix A? Don’t forget the boundary conditions.
(b) Suppose we want to fix the Courant number ak/h as k, h → 0. For what range of
Courant numbers will the method be stable if a > 0? If a < 0? Justify your answers
in terms of eigenvalues of the matrix A from part (a) and the stability regions of the
trapezoidal method.
(c) Apply von Neumann stability analysis to the method (Ex10.4a). What is the amplifica-
tion factor g(ξ)?
(d) For what range of ak/h will the CFL condition be satisfied for this method (with periodic
boundary conditions)?
(e) Suppose we use the same method (Ex10.4a) for the initial-boundary value problem with
u(0, t) = g0 (t) specified. Since the method has a one-sided stencil, no numerical boundary
condition is needed at the right boundary (the formula (Ex10.4a) can be applied at x m+1 ).
For what range of ak/h will the CFL condition be satisfied in this case? What are the
eigenvalues of the A matrix for this case and when will the method be stable?
(a) Observe how this behaves with m+1 = 50, 100, 200 grid points. Change the final time to
tfinal = 0.1 and use the m-files error_table.m and error_loglog.m to verify second
order accuracy.
(b) Modify the m-file to create a version advection_up_pbc.m implementing the upwind
method and verify that this is first order accurate.
(c) Keep m fixed and observe what happens with advection_up_pbc.m if the time step k is
reduced, e.g. try k = 0.4h, k = 0.2h, k = 0.1h. When a convergent method is applied
to an ODE we expect better accuracy as the time step is reduced and we can view
the upwind method as an ODE solver applied to an MOL system. However, you should
observe decreased accuracy as k → 0 with h fixed. Explain this apparent paradox. Hint:
What ODE system are we solving more accurately? You might also consider the modified
equation (10.44).
(a) Modify the m-file to create a version advection_lf_pbc.m implementing the leapfrog
method and verify that this is second order accurate. Note that you will have to specify
two levels of initial data. For the convergence test set Uj1 = u(xj , k), the true solution at
time k.
(b) Modify advection_lf_pbc.m so that the initial data consists of a wave packet
    η(x) = exp(-β(x - 0.5)^2) sin(ξx)                                     (Ex10.9a)
Work out the true solution u(x, t) for this data. Using β = 100, ξ = 80 and U j1 = u(xj , k),
test that your code still exhibits second order accuracy for k and h sufficiently small.
(c) Using β = 100, ξ = 150 and Uj1 = u(xj , k), estimate the group velocity of the wave packet
computed with leapfrog using m = 199 and k = 0.4h. How well does this compare with
the value (10.52) predicted by the modified equation?
(a) Show that the matrix B defined by (Ex10.10a) has 2(m + 1) eigenvectors of the form

        [ g_p^- u^p ]       [ g_p^+ u^p ]
        [    u^p    ]  ,    [    u^p    ]  ,     for p = 1, 2, . . . , m + 1,        (Ex10.10b)

    where u^p ∈ R^{m+1} are the eigenvectors of A given by (10.12) and g_p^± are the two roots
    of a quadratic equation. Explain how this quadratic equation relates to (10.34) (what
    values of ξ are relevant for this grid?)
    What are the eigenvalues of B?
(c) The result of part (b) is not sufficient to prove that leapfrog is Lax-Richtmyer stable.
The matrix B is not normal and the matrix of right eigenvectors R with columns given
by (Ex10.10b) is not unitary. By (D.8) in Appendix D we have
To prove uniform power boundedness and stability we must show that the condition
number of R is uniformly bounded as k → 0 provided (Ex10.10c) is satisfied.
Prove this by the following steps:
(i) Let

        U = (1/√(m+1)) [ u^1  u^2  · · ·  u^{m+1} ] ∈ R^{(m+1)×(m+1)}                (Ex10.10e)

    be an appropriately scaled right eigenvector matrix of A. Show that with this
    scaling, U is a unitary matrix: U^H U = I.
(ii) Show that the right eigenvector matrix of B can be written as

        R = [ U G^-   U G^+ ]
            [   U       U   ]                                                        (Ex10.10f)
(v) Use the previous result to show that

        ||R^{-1}||_2 ≤ C/(1 - ν^2)                                                   (Ex10.10h)

    for some constant C, where ν = ak/h is the Courant number.
(vi) Conclude from the above steps that B is uniformly power bounded and hence the
leapfrog method is Lax-Richtmyer stable provided that |ν| < 1.
(d) Show that the leapfrog method with periodic boundary conditions is also stable in the
case |ak/h| = 1 if m + 1 is not divisible by 4. Find a good set of initial data U 0 and U 1
to illustrate the instability that arises if m + 1 is divisible by 4 and perform a calculation
that demonstrates nonconvergence in this case.
Chapter 11 Exercises
From: Finite Difference Methods for Ordinary and Partial Differential Equations
by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/~rjl/fdmbook
(a) Derive the two-dimensional Lax-Wendroff method from (11.6) by using standard centered
approximations to ux , uy , uxx and uyy and the approximation
    u_xy(x_i, y_j) ≈ (1/(4h^2))[(U_{i+1,j+1} - U_{i-1,j+1}) - (U_{i+1,j-1} - U_{i-1,j-1})].     (Ex11.1a)
(b) Compute the leading term of the truncation error to show that this method is second
order accurate.
(a) Show that the Strang splitting is second order accurate on the problem (11.18) by com-
paring
    exp((1/2)Ak) exp(Bk) exp((1/2)Ak)                                                (Ex11.2a)
with (11.22). (A quick numerical check of this claim is sketched after this exercise.)
(b) Show that second order accuracy on (11.18) can also be achieved by alternating the
splitting (11.17) in even numbered time steps with
    U* = N_B(U^n, k),
    U^{n+1} = N_A(U*, k).                                                            (Ex11.2b)
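For part (a), a quick numerical check of the claim: for generic non-commuting matrices the
one-step Strang-splitting error should be O(k^3), so halving k should reduce it by about a
factor of 8.

    rng(1);  A = randn(4);  B = randn(4);      % generic non-commuting matrices
    for k = [0.1 0.05 0.025]
        E = expm((A+B)*k) - expm(A*k/2)*expm(B*k)*expm(A*k/2);
        fprintf('k = %5.3f   error = %.3e\n', k, norm(E))
    end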