Appendix I

Multistep Methods
In this appendix, we provide a brief introduction to the derivation of linear multistep methods and analyze the conditions on the coefficients that are necessary and sufficient to guarantee convergence of order $P$.
Among the seven basic examples in Chapter 5, one was a two-step method, the leapfrog method. Multistep methods potentially obtain more accurate approximations from fewer costly evaluations of a vector field if prior evaluations and values of the solution are stored for later use. With each additional stored value comes the need for an additional value for initialization not provided by the analytical problem. Each of these carries an additional degree of potential instability and adds to the cost of changing the step-size. In spite of this, some multistep methods have desirable absolute stability properties. We will also describe some relationships between the accuracy and stability of these methods.
Recall that we are considering methods for approximating solutions of the IVP
$$\mathbf{y}' = \mathbf{f}(t, \mathbf{y}), \qquad \mathbf{y}(t_o) = \mathbf{y}_o, \qquad t \in [t_o, t_o + T], \qquad \mathbf{y} \in \mathbb{R}^D, \tag{I.1}$$

satisfying a Lipschitz condition in some norm on $\mathbb{R}^D$,

$$\| \mathbf{f}(t, \mathbf{y}_1) - \mathbf{f}(t, \mathbf{y}_2) \| \le L\, \| \mathbf{y}_1 - \mathbf{y}_2 \|. \tag{I.2}$$
For simplicity of exposition, we will henceforth use notation for the scalar case, but unless otherwise noted, generalizations to the case of systems are straightforward. In scalar notation, linear $m$-step methods take the form

$$y_{n+1} = \sum_{j=0}^{m-1} a_j\, y_{n-j} + h \sum_{j=-1}^{m-1} b_j\, y'_{n-j}, \qquad y'_j = f(t_j, y_j),$$

the method being implicit when $b_{-1} \neq 0$ and explicit when $b_{-1} = 0$.
For the two-step Adams-Moulton method AM2, integrating the quadratic interpolant $P_{2AM}(s)$ of $y'$ at $t_{n+1}$, $t_n$, $t_{n-1}$ over $[t_n, t_{n+1}]$ gives

$$\int_{t_n}^{t_{n+1}} P_{2AM}(s)\, ds = h\left[ \frac{y'(t_{n+1}) + y'(t_n)}{2} - \frac{y'(t_{n+1}) - 2 y'(t_n) + y'(t_{n-1})}{12} \right].$$
Discretizing, we find that AM2 is given by
$$y_{n+1} = y_n + h \left( \tfrac{5}{12}\, y'_{n+1} + \tfrac{8}{12}\, y'_n - \tfrac{1}{12}\, y'_{n-1} \right).$$
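As a concrete illustration, here is a minimal sketch of one AM2 step in Python (our construction, not from the text): the implicit occurrence of $y'_{n+1} = f(t_{n+1}, y_{n+1})$ is resolved by fixed-point iteration, one common choice (a Newton solve is the usual alternative for stiff problems).

```python
def am2_step(f, t_n, h, y_n, f_n, f_nm1, iters=5):
    """One AM2 step: y_{n+1} = y_n + h(5/12 f_{n+1} + 8/12 f_n - 1/12 f_{n-1}).
    The implicit value is resolved by fixed-point iteration, which is a
    contraction when h*L < 12/5, with L the Lipschitz constant of f."""
    known = y_n + h * (8.0 * f_n - f_nm1) / 12.0  # explicit part of the update
    y_next = y_n + h * f_n                        # crude predictor to start
    for _ in range(iters):
        y_next = known + h * (5.0 / 12.0) * f(t_n + h, y_next)
    return y_next
```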
Another strategy for deriving multistep methods obtains the coefficients $a_j$, $b_j$ as solutions of linear equations that guarantee the method is formally accurate of order $P$. These conditions are related to the order of accuracy of a convergent method by the local truncation error of the method, $\epsilon_n$. This quantity measures by how much a solution of the differential equation fails to satisfy the difference equation, in the sense
$$y(t_{n+1}) = \sum_{j=0}^{m-1} a_j\, y(t_{n-j}) + h \sum_{j=-1}^{m-1} b_j\, y'(t_{n-j}) + \epsilon_n, \quad \text{where } y'(t_j) = f(t_j, y(t_j)). \tag{I.4}$$
In its top row, Table I.1 contains the first few terms of the Taylor expansion of the left-hand side of (I.4), $y(t_{n+1})$, about $t_n$, in powers of $h$. Below the line, the rows contain the Taylor expansions of the terms $y(t_n - jh)$ and $y'(t_n - jh)$ on the right of (I.4), where we have placed terms of the same order in the same column and set $q = m - 1$ for compactness of notation.
Algebraic conditions that determine a bound on the order of $\epsilon_n$ are obtained by comparing the collective expansions of both sides. The terms in each column are multiples of $h^p y_n^{(p)}$. If we form a common denominator by multiplying the $b$ terms by $p/p$, the right-hand side matches the left-hand side in the column of order $p$ exactly when

$$\sum_{j=0}^{m-1} (-j)^p\, a_j + p \sum_{j=-1}^{m-1} (-j)^{p-1}\, b_j = 1. \tag{I.5}$$
Table I.1. Taylor expansions about $t_n$ of the terms of (I.4).

$$
\begin{aligned}
y_{n+1} &= y_n + y'_n h + \tfrac{1}{2} y''_n h^2 + \tfrac{1}{6} y'''_n h^3 + \cdots \\[4pt]
b_{-1} h\, y'_{n+1} &= b_{-1} y'_n h + b_{-1} y''_n h^2 + \tfrac{1}{2} b_{-1} y'''_n h^3 + \cdots \\
{}+ a_0\, y_n &= a_0 y_n \\
{}+ b_0 h\, y'_n &= b_0 y'_n h \\
{}+ a_1\, y_{n-1} &= a_1 y_n - a_1 y'_n h + \tfrac{1}{2} a_1 y''_n h^2 - \tfrac{1}{6} a_1 y'''_n h^3 + \cdots \\
{}+ b_1 h\, y'_{n-1} &= b_1 y'_n h - b_1 y''_n h^2 + \tfrac{1}{2} b_1 y'''_n h^3 + \cdots \\
&\ \ \vdots \\
{}+ a_q\, y_{n-q} &= a_q y_n - a_q y'_n\, qh + \tfrac{1}{2} a_q y''_n\, q^2 h^2 - \tfrac{1}{6} a_q y'''_n\, q^3 h^3 + \cdots \\
{}+ b_q h\, y'_{n-q} &= b_q y'_n h - b_q y''_n\, qh^2 + \tfrac{1}{2} b_q y'''_n\, q^2 h^3 + \cdots
\end{aligned}
$$
In this case we know that the method formally approximates the differential equation; this guarantees that the equation being approximated is the one that we intended. The more subtle issue of convergence of a numerical method involves determining whether solutions of the approximating equation (in this case the multistep method) do indeed approximate solutions of the approximated equation as the discretization parameter tends to zero. The root condition for 0-stability discussed in Chapter 5, together with consistency, is necessary and sufficient for a multistep method to be convergent. If, in addition, (I.5) is satisfied for all $p \le P$, then the convergence is with global order of accuracy $P$.
Four of our working example methods of Chapter 5 and three additional methods discussed above fit into the linear $m$-step framework with $m \le 2$. Table I.2 summarizes the nonzero coefficients defining these methods and identifies the value of $P$ for which the matching conditions up to order $P$ are satisfied, but not the conditions of order $P + 1$. For reference, the conditions for $p = 2$, $3$, and $4$ are
$$\sum_{j=0}^{m-1} j^2\, a_j - 2 \sum_{j=-1}^{m-1} j\, b_j = 1,$$

$$-\sum_{j=0}^{m-1} j^3\, a_j + 3 \sum_{j=-1}^{m-1} j^2\, b_j = 1,$$

and

$$\sum_{j=0}^{m-1} j^4\, a_j - 4 \sum_{j=-1}^{m-1} j^3\, b_j = 1,$$
respectively.
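These conditions are easy to check mechanically. The sketch below (our construction) evaluates the order-$p$ matching condition in the form (I.5) for the AM2 coefficients $a_0 = 1$, $b_{-1} = \frac{5}{12}$, $b_0 = \frac{8}{12}$, $b_1 = -\frac{1}{12}$, using exact rational arithmetic to avoid spurious rounding:

```python
from fractions import Fraction as F

def matching_lhs(p, a, b):
    """Left-hand side of the order-p condition (I.5):
    sum_j (-j)^p a_j + p * sum_j (-j)^(p-1) b_j, which equals 1
    exactly when the method matches the Taylor expansion at order p."""
    s = sum(c * (-j) ** p for j, c in a.items())
    if p >= 1:
        s += p * sum(c * (-j) ** (p - 1) for j, c in b.items())
    return s

# AM2: y_{n+1} = y_n + h(5/12 y'_{n+1} + 8/12 y'_n - 1/12 y'_{n-1})
a = {0: F(1)}
b = {-1: F(5, 12), 0: F(8, 12), 1: F(-1, 12)}
for p in range(5):
    print(p, matching_lhs(p, a, b))  # 1 for p = 0,...,3; 2 at p = 4, so P = 3
```

The failure at $p = 4$ is consistent with AM2 having local truncation error of order 4 and global order of accuracy 3, as discussed below.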
of this system is

$$y_{n+1} = y_{n-1} + 2h \left( \tfrac{1}{6}\, y'_{n+1} + \tfrac{4}{6}\, y'_n + \tfrac{1}{6}\, y'_{n-1} \right), \tag{I.10}$$
known as Milne's corrector. We can also interpret this as integrating the quadratic interpolant of $y'$ at $t_{n+1}$, $t_n$, $t_{n-1}$ (the Simpson-parabolic rule) to approximate the integral in

$$y_{n+1} - y_{n-1} = \int_{t_{n-1}}^{t_{n+1}} y'(s)\, ds.$$
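As a quick check of this interpretation, the following sketch (our construction, using sympy) integrates the three Lagrange basis polynomials for the nodes $t_{n+1}$, $t_n$, $t_{n-1}$ over $[t_{n-1}, t_{n+1}]$ and recovers the weights $2h(\frac{1}{6}, \frac{4}{6}, \frac{1}{6})$ appearing in (I.10):

```python
import sympy as sp

s, h = sp.symbols('s h', positive=True)
nodes = [h, 0, -h]  # t_{n+1}, t_n, t_{n-1}, measured from t_n
for xi in nodes:
    # Lagrange basis polynomial: value 1 at xi, value 0 at the other nodes
    L = sp.prod([(s - xj) / (xi - xj) for xj in nodes if xj != xi])
    print(sp.integrate(sp.expand(L), (s, -h, h)))  # h/3, 4*h/3, h/3
```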
If $p_w(r)$ has multiple roots, we can index any set of them consecutively, $r_j(w) = \cdots = r_{j+s}(w)$, in which case we replace the corresponding terms in (I.12) by terms of the form $c_{j+k}\, n^k\, r_j^n$, $k = 0, \ldots, s$.
As $w \to 0$, the roots $r_j(w)$ approach corresponding roots of $\rho(r)$. We can use the fact that some root $r(w)$ must approximate $e^w = 1 + w + \cdots$ as $w \to 0$ as another derivation of the consistency conditions (I.7). Since $e^0 = 1$ must be a root of $p_0$, we have $p_0(1) = 1 - \sum_j a_j = 0$, which is the zeroth-order consistency condition. Treating $r(w)$ as a curve defined implicitly by the relation $P(r, w) = p_w(r) = 0$ and differentiating implicitly with respect to $w$ at $(r, w) = (1, 0)$, we obtain
$$-\sum_{j=-1}^{m-1} b_j + r'(w) \left( m - \sum_{j=0}^{m-1} a_j \big( m - (j+1) \big) \right) = 0.$$
order of the local truncation error; any more accuracy is wasted. For one-step methods, the initial value can be considered exact, since it is given in the IVP, though even this value may include experimental or computational errors. But for $m$-step methods with $m > 1$, we must use one-step methods to generate one or more additional values. Once we have a second initial value, we could also use a two-step method to generate a third, then a three-step method to generate a fourth, and so on. No matter how we choose to do this, it is the order of the (local truncation) error of the initial values that limits the global error of the solution. For this reason, it is sufficient to initialize a method whose local truncation error has order $P + 1$ using a method whose local truncation error has order $P$.

For example, the local truncation error of the leapfrog method has order 3. If $y_0 = y_o$, the exact initial value, and we use Euler's Method, whose local truncation error has order 2, to obtain $y_1$ from $y_0$, the resulting method has global order of accuracy 2. If we use the midpoint method or Heun's Method, whose local truncation errors both have order 3, the global order of accuracy of the resulting methods is still 2, no more accurate than if we use Euler's Method to initialize. But if we use the lower-order approximation $y_1 = y_0$, a method whose local truncation error has order 1 and is not even consistent, the savings of one evaluation of $f$ degrades the convergence of all subsequent steps to global order 1.

As another example, the two-step implicit Adams-Moulton Method, AM2, has local truncation error of order 4. If we initialize it with the midpoint method or Heun's Method, we achieve the greatest possible global order of accuracy, 3. Initializing with RK4 will not improve this behavior, and initializing with Euler's Method degrades the order to 2. So the reason for including initial errors in the analysis of error propagation for one-step methods is clarified when we consider multistep methods.
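The leapfrog example above is easy to reproduce numerically. The sketch below (our construction, on the model problem $y' = -y$, $y(0) = 1$) starts leapfrog with a single Euler step and exhibits global order 2:

```python
import math

def leapfrog(f, y0, y1, h, n):
    """Leapfrog: y_{k+1} = y_{k-1} + 2h f(t_k, y_k); returns y_n."""
    ym, y = y0, y1
    for k in range(1, n):
        ym, y = y, ym + 2 * h * f(k * h, y)
    return y

f = lambda t, y: -y                 # model problem y' = -y, y(0) = 1
T = 1.0
for n in (50, 100, 200, 400):
    h = T / n
    y1 = 1.0 + h * f(0.0, 1.0)      # one Euler step: error O(h^2)
    err = abs(leapfrog(f, 1.0, y1, h, n) - math.exp(-T))
    print(n, err)                   # errors drop ~4x per halving: order 2
```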
When $y_{n+1}$ is only defined implicitly, the ease with which we can determine its value from $y_n$ (and previous values in the case of a multistep method) is significant from both practical and theoretical points of view. In the first place, a solution might not even exist for all values of $h > 0$. For a simple one-step method such as the Backward Euler Method, it can fail to have a solution even for the linear equation
its principal and parasitic roots have the same magnitude and it is relatively stable. This shows that relative stability is strictly weaker than the strong root condition.
For Euler's Method, the Backward Euler Method, and the trapezoidal method, $\rho(r) = r - 1$. Since there are no nonprincipal roots, they satisfy the strong root condition, the root condition, and the relative stability condition by default. Both the explicit and the implicit $m$-step Adams Methods, AB$m$ and AM$m$, are designed to have $\rho(r) = r^m - r^{m-1} = (r - 1) r^{m-1}$, so that all parasitic roots are zero! These methods satisfy the strong root condition as nicely as possible. For BDF2, $\rho(r) = r^2 - \frac{4}{3} r + \frac{1}{3} = (r - 1)\left(r - \frac{1}{3}\right)$ satisfies the strong root condition. For higher $m$, BDF$m$ is designed to have order of accuracy $m$ if the method is convergent. However, these methods are only 0-stable for $m \le 6$, so BDF$m$ is not convergent for $m \ge 7$.
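We can check this last claim numerically. The sketch below uses the standard generating polynomial $\rho(r) = \sum_{j=1}^{m} \frac{1}{j} (r-1)^j r^{m-j}$ for BDF$m$ (quoted here without derivation; for $m = 2$ it gives $\frac{3}{2}\left(r^2 - \frac{4}{3} r + \frac{1}{3}\right)$, matching the factorization above) and computes the largest root modulus:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def bdf_rho(m):
    """Ascending coefficients of rho(r) = sum_{j=1}^m (1/j) (r-1)^j r^(m-j)."""
    rho = np.zeros(m + 1)
    for j in range(1, m + 1):
        rho[m - j:] += P.polypow([-1.0, 1.0], j) / j  # (r-1)^j times r^(m-j)
    return rho

for m in range(1, 8):
    radius = max(abs(np.roots(bdf_rho(m)[::-1])))  # np.roots takes descending coeffs
    print(m, round(float(radius), 4))  # stays 1.0 through m = 6, exceeds 1 at m = 7
```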
If we apply Milne's corrector, the implicit two-step method whose local truncation error has maximal order, to the model problem $y' = \lambda y$, it takes the form (writing $w = \lambda h$)

$$y_{n+1} = y_{n-1} + \left( \frac{w}{3}\, y_{n+1} + \frac{4w}{3}\, y_n + \frac{w}{3}\, y_{n-1} \right).$$
Solutions are linear combinations $y_n = c_+ r_+^n + c_- r_-^n$, where $r_\pm$ are the roots of

$$p_w(r) = (1 - w/3)\, r^2 - (4w/3)\, r - (1 + w/3).$$
By setting $u = w/3$ and multiplying by $1/(1 - u) = 1 + u + \cdots$, to first order in $u$ these roots satisfy

$$r^2 - 4u(1 + \cdots)\, r - (1 + 2u + \cdots) = 0,$$

or

$$r_{\pm} = 2u \pm \sqrt{4u^2 + 1 + 2u}.$$
Using the binomial expansion $(1 + 2u)^{1/2} = 1 + u + \cdots$, to first order in $u$ we find $r_+ \approx 1 + 3u$ and $r_- \approx -1 + u$. The root $r_+ \approx 1 + 3u = 1 + \lambda h$ approximates the solution of the model problem $y' = \lambda y$. As $\lambda h \to 0$ with $u$ in a neighborhood of the negative real axis near the origin, the other root $r_- \approx -1 + u$ has magnitude greater than 1, showing that Milne's corrector is not relatively stable.
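A short numerical check (our construction) confirms this: for small negative real $w$, one root of $p_w(r)$ tracks $e^w < 1$, while the parasitic root stays near $-1 + u$, with modulus $1 + |w|/3 > 1$.

```python
import numpy as np

# Roots of p_w(r) = (1 - w/3) r^2 - (4w/3) r - (1 + w/3) for w = lambda*h < 0
for w in (-0.3, -0.15, -0.075):
    r = np.roots([1 - w / 3, -4 * w / 3, -(1 + w / 3)])
    print(w, sorted(abs(r)))  # smaller modulus ~ e^w < 1, larger ~ 1 + |w|/3 > 1
```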
Like the leapfrog method, it satisfies the root condition, so it is stable, but
Appendix J

Iterative Interpolation and Its Error
In this appendix we give a brief review of iterative polynomial interpolation and corresponding error estimates used in the development and analysis of numerical methods for differential equations.
The unique polynomial of degree $n$,

$$p_{x_0, \ldots, x_n}(x) = \sum_{j=0}^{n} a_j x^j, \tag{J.1}$$
where
$$L_{i, x_0, \ldots, x_n}(x) = \prod_{0 \le j \le n,\ j \ne i} \frac{x - x_j}{x_i - x_j}. \tag{J.4}$$
Here, we develop $p_{x_0, \ldots, x_n}(x)$ inductively, starting from $p_{x_0}(x) = y_0$ and letting

$$p_{x_0, \ldots, x_{j+1}}(x) = p_{x_0, \ldots, x_j}(x) + c_{j+1} (x - x_0) \cdots (x - x_j), \qquad j = 0, \ldots, n-1, \tag{J.5}$$
(so that each successive term does not disturb the correctness of the prior interpolation) and defining $c_{j+1}$ so that $p_{x_0, \ldots, x_{j+1}}(x_{j+1}) = y_{j+1}$, i.e.,
$$c_{j+1} = \frac{y_{j+1} - p_{x_0, \ldots, x_j}(x_{j+1})}{(x_{j+1} - x_0) \cdots (x_{j+1} - x_j)} = f[x_0, \ldots, x_{j+1}]. \tag{J.6}$$
Comparing (J.6) with (J.3), (J.4) gives an alternate explicit expression for $f[x_0, \ldots, x_n]$, the leading coefficient of the polynomial of degree $n$ that interpolates $f$ at $x_0, \ldots, x_n$:
$$f[x_0, \ldots, x_n] = \sum_{i=0}^{n} \frac{f(x_i)}{\prod_{j \ne i} (x_i - x_j)}. \tag{J.7}$$
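To make the construction concrete, here is a minimal sketch of (J.5) and (J.6) in Python (the function names are ours). The inner loop evaluates the current interpolant at the new node while accumulating the product $(x_{j+1} - x_0) \cdots (x_{j+1} - x_j)$, which is exactly the denominator required by (J.6):

```python
def newton_coeffs(xs, ys):
    """Divided differences c_j = f[x_0, ..., x_j], built iteratively as in
    (J.5)-(J.6): each new coefficient fixes the newest node without
    disturbing the interpolation at the earlier ones."""
    cs = [ys[0]]  # p_{x_0}(x) = y_0
    for m in range(1, len(xs)):
        p, w = 0.0, 1.0
        for k, c in enumerate(cs):  # evaluate p_{x_0,...,x_{m-1}} at x_m
            p += c * w
            w *= xs[m] - xs[k]      # w ends as prod_k (x_m - x_k)
        cs.append((ys[m] - p) / w)  # (J.6)
    return cs

def newton_eval(cs, xs, x):
    """Evaluate the Newton-form interpolant by nested multiplication."""
    p = cs[-1]
    for k in range(len(cs) - 2, -1, -1):
        p = p * (x - xs[k]) + cs[k]
    return p

# Interpolating f(x) = x^3 at four nodes reproduces it exactly:
xs = [0.0, 1.0, 2.0, 4.0]
cs = newton_coeffs(xs, [x ** 3 for x in xs])
print(newton_eval(cs, xs, 3.0))  # 27.0 up to rounding
```

This incremental organization is what makes the Newton form convenient for the methods in this book: adding one more interpolation node costs one more coefficient, with the earlier ones left unchanged.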