
CHAPTER 5: Linear Multistep Methods

Multistep: use information from many steps


→ Higher order possible with fewer function
evaluations than with RK.
→ Convenient error estimates.
→ Changing stepsize or order is more difficult!
Recall ẏ = f (t, y), t ≥ t0.
Denote

fl = f (tl, yl),
yl ≈ y(tl).

A k-step linear multistep method is

  Σ_{j=0}^{k} αj yn−j = ∆t Σ_{j=0}^{k} βj fn−j ,

where the αj and βj are the method's coefficients.

To avoid degeneracy, assume

  α0 ≠ 0,   |αk| + |βk| ≠ 0.

To eliminate scaling: α0 = 1.

The LMM is explicit if β0 = 0,
implicit otherwise.

Note 1. Past k integration steps are equally spaced.

Assume f has as many bounded derivatives as needed.

Note 2. LMM is called linear


because it is linear in f ,
unlike RK methods. (review!)

Note 3. f itself need not be linear!

The most popular LMMs are based on polynomial


interpolation (even those that are not still use
interpolation to change step-size).
Review : Polynomial interpolation and
divided differences

We interpolate f(t) at k distinct points t1, t2, · · · , tk
by the unique polynomial φ(t) of degree < k:

  φ(t) = φ0 + φ1 t + φ2 t² + · · · + φk−1 t^{k−1}   (k unknowns),

such that φ(tl) = f(tl), l = 1, 2, · · · , k   (k equations).

→ Can set up and solve linear system for φi.

→ Can write down Lagrange interpolating polynomial.

→ Can use Newton form (divided differences).

Note 4. These are simply different descriptions of


the same polynomial!
For the straight line through (x1, y1) and (x2, y2):

  y = ( (y1 − y2)/(x1 − x2) ) x + ( (y1 x2 − y2 x1)/(x2 − x1) )        (power basis)

  y = y1 (x − x2)/(x1 − x2) + y2 (x − x1)/(x2 − x1)                    (Lagrange form)

  y = y1 + ( (y2 − y1)/(x2 − x1) ) (x − x1)                            (Newton form)

Newton's form

  φ(t) = f[t1] + f[t1, t2](t − t1) + · · ·
         + f[t1, t2, · · · , tk](t − t1)(t − t2) · · · (t − tk−1),

where the divided differences are defined by

  f[tl] = f(tl),
  f[tl, tm] = ( f[tl] − f[tm] ) / ( tl − tm ),
  ...
  f[tl, · · · , tl+i] = ( f[tl+1, · · · , tl+i] − f[tl, · · · , tl+i−1] ) / ( tl+i − tl ).
Interpolation error at any point t:

  f(t) − φ(t) = f[t1, · · · , tk, t] Π_{i=1}^{k} (t − ti)

              = O((∆t)^k)   if all ti are within O(∆t).

Note 5. f[t1, t2, · · · , tk, t] ≈ f^(k)(t) / k!   if ∆t is small.
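
The following is a minimal Python sketch (not from the notes; helper names are mine) of computing divided differences and evaluating the Newton form:

import numpy as np

def divided_differences(t, f):
    # Newton-form coefficients f[t1], f[t1,t2], ..., f[t1,...,tk]
    # for nodes t and values f
    t = np.asarray(t, dtype=float)
    c = np.array(f, dtype=float)
    for i in range(1, len(t)):
        # after this pass, c[j] holds the divided difference f[t_{j-i}, ..., t_j]
        c[i:] = (c[i:] - c[i-1:-1]) / (t[i:] - t[:-i])
    return c

def newton_eval(t, c, x):
    # Evaluate the Newton-form interpolant at x (nested multiplication)
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - t[i]) + c[i]
    return p

# Interpolate f(t) = cos(t) at 4 nodes spaced by dt = 0.1; error is O(dt^4)
nodes = np.array([0.0, 0.1, 0.2, 0.3])
c = divided_differences(nodes, np.cos(nodes))
print(abs(newton_eval(nodes, c, 0.15) - np.cos(0.15)))
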
5.1.1 Adams methods

• Most popular non-stiff solvers

  ẏ = f(t, y)
  ⇒ y(tn) = y(tn−1) + ∫_{tn−1}^{tn} f(t, y(t)) dt.

Approximate f (t, y(t)) with interpolating polynomial.

Comparing with Σ_{j=0}^{k} αj yn−j = ∆t Σ_{j=0}^{k} βj fn−j ,
we see α0 = 1, α1 = −1, αj = 0, j = 2, 3, · · · , k.

For the k-step explicit Adams method (Adams–Bashforth),
interpolate f through the k previous points
tn−1, tn−2, · · · , tn−k .
It can be shown that

  yn = yn−1 + ∆t Σ_{j=1}^{k} βj fn−j ,

where

  βj = (−1)^{j−1} Σ_{i=j−1}^{k−1} C(i, j−1) γi ,

  γi = (−1)^i ∫_0^1 C(−s, i) ds,

with C(·, ·) denoting the binomial coefficient (see Note 6).

Note 6. C(s, i) = s(s − 1)(s − 2) · · · (s − i + 1) / i!   (i factors in the numerator),
and C(s, 0) = 1.
k-step methods ↔ use information at k points
tn−1, tn−2, · · · , tn−k .
Also called a (k + 1)-value method:
k + 1 ↔ total memory requirement.

• Local truncation error:

  Cp+1 (∆t)^p y^(p+1)(tn) + O((∆t)^{p+1}),   with p = k.

Note 7. Only 1 new function evaluation per step.

Example 1. First-order AB (k = 1):

  yn = yn−1 + ∆t β1 fn−1,

where β1 = (−1)^0 Σ_{i=0}^{0} C(i, 0) γi = γ0 ,   (verify)

and γ0 = (−1)^0 ∫_0^1 C(−s, 0) ds = 1.   (verify)

So yn = yn−1 + ∆t fn−1:

Forward Euler!
k = 2:  yn = yn−1 + ∆t β1 fn−1 + ∆t β2 fn−2.

  β1 = (−1)^0 Σ_{i=0}^{1} C(i, 0) γi = γ0 + γ1 = 3/2

  β2 = (−1)^1 Σ_{i=1}^{1} C(i, 1) γi = −γ1 = −1/2

  γ0 = (−1)^0 ∫_0^1 C(−s, 0) ds = 1

  γ1 = (−1)^1 ∫_0^1 C(−s, 1) ds = ∫_0^1 s ds = 1/2

AB2:  yn = yn−1 + ∆t [ (3/2) fn−1 − (1/2) fn−2 ].

Verify β1, β2, γ0, γ1, and AB2.
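
As a hedged illustration (my own sketch, not the chapter's code), a fixed-step AB2 integrator; the exact value at t0 + ∆t is supplied as the extra starting value here:

import numpy as np

def ab2(f, t0, y0, y1, dt, nsteps):
    # Fixed-step AB2: y_n = y_{n-1} + dt*(3/2*f_{n-1} - 1/2*f_{n-2}).
    # Needs the extra starting value y1 ~ y(t0 + dt).
    ys = [y0, y1]
    fs = [f(t0, y0), f(t0 + dt, y1)]
    for n in range(2, nsteps + 1):
        ys.append(ys[-1] + dt * (1.5 * fs[-1] - 0.5 * fs[-2]))
        fs.append(f(t0 + n * dt, ys[-1]))   # one new f-evaluation per step
    return np.array(ys)

# Convergence check on y' = -y, y(0) = 1 (exact solution exp(-t)),
# using the exact value at t0 + dt as the extra starting value.
f = lambda t, y: -y
for dt in (0.1, 0.05, 0.025):
    y = ab2(f, 0.0, 1.0, np.exp(-dt), dt, round(1.0 / dt))
    print(dt, abs(y[-1] - np.exp(-1.0)))    # error drops ~4x per halving: order 2
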

• In general, AB methods have small regions of


absolute stability!

→ We look for implicit methods:

Implicit Adams methods ↔ Adams–Moulton (AM).

Derivation is similar to AB,
but the interpolating polynomial has degree ≤ k
↔ use information at tn also.

  yn = yn−1 + ∆t Σ_{j=0}^{k} βj fn−j .
AM methods have order p = k + 1.
There are 2 AM1 methods
— one of order 1 and one of order 2.
  yn = yn−1 + ∆t fn   (backward Euler)        yn = yn−1 + (∆t/2) [fn + fn−1]   (trapezoidal method)

AM2:  yn = yn−1 + (∆t/12) [5 fn + 8 fn−1 − fn−2].

Note 8. • AM have smaller error constants than AB


(of the same order).

• They have one less data point for a given order.

• AM have much larger stability regions than AB.

• AB-AM often used as predictor-corrector pair.

• AM derived by straightforward polynomial


interpolation.
5.1.2 BDF (Backward Differentiation
Formulas)

• Most popular multistep method for stiff problems.

• Defining feature: only β0 ≠ 0;
i.e., only fn is used.

• Motivation: obtain a formula with stiff decay.

Apply the LMM to ẏ = λ(y − g(t)) and let ∆t Re(λ) → −∞:

  ⇒ Σ_{j=0}^{k} βj (yn−j − g(tn−j)) → 0.

So to have yn − g(tn) → 0 for arbitrary g(t),
we must have β0 ≠ 0 and β1 = β2 = · · · = βk = 0.
Recall, Adams methods fit a polynomial to past values
of f and integrate it.

In contrast, BDF methods fit a polynomial to past


values of y and set the derivative of the polynomial at
tn equal to fn:

  Σ_{i=0}^{k} αi yn−i = ∆t β0 f(tn, yn).

Note 9. • BDF methods are implicit


→ Usually implemented with modified Newton
(more later).

• Only the first 6 BDF methods are stable!


(order 6 is the bound on BDF order)

• BDF1 is backward Euler.


5.1.3 Initial values

With one-step methods, y0 = y(t0) is all we need!


With a k-step LMM, we need k − 1 additional starting
values.
→ These values must be O((∆t)p) accurate!
(↔ or within error tolerance).
Possible solutions:

• Start with a RK method of order p.

• (More elegant?)
Recursively use a j-step method,
building j = 1, 2, · · · , k − 1.
→ Gradually build up to k-step method.
– Nice conceptually because you can use only
methods from one family (this is what codes
do in practice).
– But order is less with lower k!
→ Need error control
(→ and order control!)
Example 2.

  ẏ = −5t y² + 5/t − 1/t²,   y(1) = 1.
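
A small sketch (mine) of the starting-value strategies on this example, whose exact solution is y(t) = 1/t: generate the extra starting value y1 either with one step of a 2nd-order RK method or with one forward Euler (1-step Adams) step, then continue with AB2:

import numpy as np

def f(t, y):
    # Example 2: y' = -5*t*y**2 + 5/t - 1/t**2, y(1) = 1; exact solution y(t) = 1/t
    return -5.0 * t * y**2 + 5.0 / t - 1.0 / t**2

def ab2_from(y1, dt, nsteps):
    # March AB2 from t0 = 1, given the extra starting value y1 ~ y(1 + dt)
    ys = [1.0, y1]
    fs = [f(1.0, ys[0]), f(1.0 + dt, ys[1])]
    for n in range(2, nsteps + 1):
        ys.append(ys[-1] + dt * (1.5 * fs[-1] - 0.5 * fs[-2]))
        fs.append(f(1.0 + n * dt, ys[-1]))
    return ys[-1]

dt, nsteps = 0.01, 100                                   # integrate from t = 1 to t = 2
# (a) one step of a 2nd-order RK method (explicit midpoint) for y1
y1_rk = 1.0 + dt * f(1.0 + dt / 2, 1.0 + dt / 2 * f(1.0, 1.0))
# (b) one step of the 1-step Adams method (forward Euler) for y1
y1_ab1 = 1.0 + dt * f(1.0, 1.0)
print(abs(ab2_from(y1_rk, dt, nsteps) - 0.5))            # error vs exact y(2) = 1/2
print(abs(ab2_from(y1_ab1, dt, nsteps) - 0.5))
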
5.2 Order, 0-stability, Convergence

It is still true that


Consistency + 0-stability = Convergence
(order at least 1)

For RK, 0-stability was easy,


high order was difficult!

For LMM, high order is straightforward:


just use enough past values.

But now 0-stability is not trivial.


5.2.1 Order

→ Simple but general derivation.

→ Not only order, but also leading coefficient of


truncation error.

→ Very useful in designing LMMs!


Define Lh y(t) = Σ_{j=0}^{k} [ αj y(t − j∆t) − ∆t βj ẏ(t − j∆t) ].

→ Truncation error:

  dn = (1/∆t) Lh y(tn).

Because the exact solution satisfies ẏ(t) = f(t, y(t)),

  Lh y(t) = Σ_{j=0}^{k} [ αj y(t − j∆t) − ∆t βj f(t − j∆t, y(t − j∆t)) ].

Expanding each term about t gives

  Lh y(t) = C0 y(t) + C1 ∆t ẏ(t) + · · · + Cq (∆t)^q y^(q)(t) + · · · ,
where

  C0 = Σ_{j=0}^{k} αj ,

  Ci = (−1)^i [ (1/i!) Σ_{j=1}^{k} j^i αj + (1/(i−1)!) Σ_{j=0}^{k} j^{i−1} βj ] ,   i = 1, 2, · · · .

For a method of order p,

C0 = C1 = · · · = Cp = 0,   Cp+1 ≠ 0.

Cp+1 is called the error constant.

The first few equations:

  C0 :  0 = α0 + α1 + α2 + · · · + αk
  C1 :  0 = (α1 + 2α2 + · · · + kαk) + (β0 + β1 + · · · + βk)
  C2 :  0 = (1/2!)(α1 + 2²α2 + · · · + k²αk) + (β1 + 2β2 + · · · + kβk)
  ...
Example 3. • Forward Euler: α1 = −1, β1 = 1 (α0 = 1, β0 = 0)
  C0 :  α0 + α1 = 1 − 1 = 0
  C1 :  −(α1 + β0 + β1) = −(−1 + 0 + 1) = 0
  C2 :  (1/2)α1 + β1 = −1/2 + 1 = 1/2 ≠ 0  ⇒ first order.
 
• AB2:  yn = yn−1 + ∆t [ (3/2) fn−1 − (1/2) fn−2 ]
  ⇒ α0 = 1, α1 = −1, β0 = 0, β1 = 3/2, β2 = −1/2   (α2 = 0)
  C0 :  α0 + α1 + α2 = 1 − 1 + 0 = 0
  C1 :  −[(α1 + 2α2) + (β0 + β1 + β2)] = −(−1 + 0 + 0 + 3/2 − 1/2) = 0
  C2 :  (1/2!)[α1 + 2²α2] + [β1 + 2β2] = (1/2)(−1) + 3/2 − 1 = 0
  C3 :  −{ (1/3!)[α1 + 2³α2] + (1/2!)[β1 + 2²β2] } = −( −1/6 + (1/2)(3/2 − 2) ) = 5/12 ≠ 0
  ⇒ second order
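
These order conditions are easy to check in code. A sketch (mine, exact rational arithmetic) that evaluates C0, C1, C2, ... for given LMM coefficients using the formula above:

from math import factorial
from fractions import Fraction

def lmm_error_coeffs(alpha, beta, imax=5):
    # C_0,...,C_imax for an LMM with coefficients alpha = [a0..ak], beta = [b0..bk],
    # using C_i = (-1)^i [ (1/i!) sum_{j>=1} j^i a_j + (1/(i-1)!) sum_{j>=0} j^(i-1) b_j ]
    k = len(alpha) - 1
    C = [sum(Fraction(a) for a in alpha)]                 # C_0 = sum of alpha_j
    for i in range(1, imax + 1):
        s = sum(Fraction(j)**i * Fraction(alpha[j]) for j in range(1, k + 1)) / factorial(i)
        s += sum(Fraction(j)**(i - 1) * Fraction(beta[j]) for j in range(k + 1)) / factorial(i - 1)
        C.append((-1)**i * s)
    return C

# AB2: alpha = [1, -1, 0], beta = [0, 3/2, -1/2]
print(lmm_error_coeffs([1, -1, 0], [0, Fraction(3, 2), Fraction(-1, 2)]))
# C0 = C1 = C2 = 0 and C3 = 5/12: order 2, error constant 5/12
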
Example 4. You can derive/design LMMs similarly;
e.g., derive the 2-step BDF:

  yn + α1 yn−1 + α2 yn−2 = ∆t β0 fn.

  C0 :  1 + α1 + α2 = 0
  C1 :  α1 + 2α2 + β0 = 0          ⇒  α1 = −4/3,  α2 = 1/3,  β0 = 2/3
  C2 :  (1/2!)(α1 + 4α2) = 0

(verify!)
Also C3 = −2/9. (verify!)

→ You can design “optimal” methods,


e.g., highest order for given k; smallest Cp+1.
But beware of stability! Do not take it for granted!

Recall: Consistency ↔ Order p ≥ 1.

For a LMM, this means

  Σ_{j=0}^{k} αj = 0,
  Σ_{j=1}^{k} j αj + Σ_{j=0}^{k} βj = 0.
Define the characteristic polynomials

  ρ(ξ) = Σ_{j=0}^{k} αj ξ^{k−j} ,

  σ(ξ) = Σ_{j=0}^{k} βj ξ^{k−j} .

In these terms, a LMM is consistent iff

  ρ(1) = 0  and  ρ′(1) = σ(1).


5.2.2 Root Condition

We now derive (simple) conditions on the roots of


ρ(ξ) to guarantee 0-stability.

(Then adding consistency, we get convergence.)

We already know that for consistency ρ(1) = 0.

∴ ξ = 1 must be a root of ρ(ξ).

(This is called the principal root.)

The remaining analysis shows that 0-stability requires all other
roots of ρ(ξ) to have magnitude strictly less than 1, or,
if their magnitude equals 1, their multiplicity cannot
exceed 1 (they cannot be repeated).

The combination of these two requirements is known


as the root condition.

Formal statement to follow shortly.


5.2.3 0-Stability and Convergence

Recall: 0-stability has a complicated definition.

Essentially, we are looking at the stability of the


difference equation as ∆t → 0.

It turns out to be sufficient to consider ẏ = 0.

Then we obtain

α0yn + α1yn−1 + · · · + αk yn−k = 0.

This must be stable for the LMM to be stable!


This is a linear, constant-coefficient difference
equation.
We guess a solution of the form

  yn = ξ^n.

Then we obtain

  α0 ξ^n + α1 ξ^{n−1} + · · · + αk ξ^{n−k} = 0,

or

  ξ^{n−k} [ α0 ξ^k + α1 ξ^{k−1} + · · · + αk ] = 0,

where the bracketed factor is ρ(ξ).

For yn = ξ^n to be stable,

  |yn| ≤ |yn−1|  ⇔  |ξ^n| ≤ |ξ^{n−1}|  ⇒  |ξ| ≤ 1.

Theorem 1. The LMM is 0-stable iff all roots ξi of


ρ(ξ) satisfy
|ξi| ≤ 1,
and if |ξi| = 1, then ξi is a simple root.

This is called the root condition.


If

• the root condition is satisfied,

• the method has order p,

• the starting values are order p,

then the method is convergent with order p.


Exercise: Use this to show one-step methods are
(trivially) 0-stable.
Example 5. Consider

  yn = −4yn−1 + 5yn−2 + 4∆t fn−1 + 2∆t fn−2.

This turns out to be the most accurate explicit 2-step
method (in terms of LTE, the local truncation error).
However,

  ρ(ξ) = ξ² + 4ξ − 5 = (ξ − 1)(ξ + 5),

where the factor (ξ − 1) is always present if the method is consistent.

→ We see ξ2 = −5.
→ |ξ2| > 1 ⇒ the method is unstable!

Consider solving ẏ = 0 with y0 = 0, y1 = ε.

Then
  y2 = −4y1 = −4ε
  y3 = −4y2 + 5y1 = 21ε
  y4 = −4y3 + 5y2 = −104ε
  ...
It blows up!
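
A quick numerical illustration of this blow-up (my own sketch): inspect the roots of ρ and iterate the difference equation for ẏ = 0 with a tiny perturbation ε in y1:

import numpy as np

# rho(xi) = xi^2 + 4*xi - 5 for this "most accurate" explicit 2-step method
print(np.roots([1.0, 4.0, -5.0]))      # roots are 1 and -5; |-5| > 1 violates the root condition

# Difference equation for y' = 0: y_n = -4*y_{n-1} + 5*y_{n-2}
eps = 1e-12                             # tiny perturbation in the starting value y1
y_older, y_old = 0.0, eps               # y0 = 0, y1 = eps
for n in range(2, 41):
    y_older, y_old = y_old, -4.0 * y_old + 5.0 * y_older
print(y_old)                            # about -1.5e15: the perturbation grows like (-5)^n
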

Consider now the test equation ẏ = λy.


Applying a LMM:

  Σ_{j=0}^{k} (αj − ∆tλ βj) yn−j = 0.   (verify)

→ Linear, constant-coefficient difference equation with
solutions of the form ξi^n, where ξi is a root of

  ρ(ξ) − ∆tλ σ(ξ) = 0.

For Re(λ) < 0, we want |yn| < |yn−1|.
This is not possible if there are extraneous roots of
ρ(ξ) with magnitude 1.

Recall, for consistency, we need ρ(1) = 0.


This is called the principal root.
All others are called “extraneous roots”.

This leads us to the following definitions:

Definition 1. A LMM is

• strongly stable if all extraneous roots satisfy

|ξi| < 1;

• weakly stable if at least one extraneous root satisfies

  |ξi| = 1,

and the method is 0-stable (no root has |ξi| > 1).
Example 6. Weak stability is dangerous!
Consider Milne’s method

  yn = yn−2 + (∆t/3)(fn + 4fn−1 + fn−2)

applied to ẏ = λy; the error en satisfies

  en = en−2 + (∆tλ/3)(en + 4en−1 + en−2).   (verify)

Guessing en = ξ^n, we have

  (1 − (1/3)∆tλ) ξ² − (4/3)∆tλ ξ − (1 + (1/3)∆tλ) = 0.   (verify)

The roots are

  ξ = [ (2/3)∆tλ ± sqrt( 1 + (1/3)(∆tλ)² ) ] / ( 1 − (1/3)∆tλ ).

Expanding yields

  ξ1 = e^{∆tλ} + O((∆tλ)^5)        ← principal
  ξ2 = −e^{−∆tλ/3} + O((∆t)^3)     ← extraneous

If Re(λ) < 0, then |ξ2| > 1, so the extraneous mode ξ2^n grows without bound: the solution is unstable!

=⇒ Any useful LMM must be strongly stable.

In this context, Dahlquist proved that

A strongly stable k-step LMM has order at most k + 1.

Example 7. All Adams methods have

ρ(ξ) = ξ^k − ξ^{k−1} = ξ^{k−1}(ξ − 1).

→ Extraneous roots all zero! (“optimally” strongly


stable)

→ AM have optimal order.

Example 8. BDF methods are not 0-stable for k > 6!


5.3 Absolute Stability

Recall: Absolute stability is the property

|yn| ≤ |yn−1|

when the numerical scheme is applied to ẏ = λy


with Re(λ) ≤ 0.

A method is A-stable if its region of absolute stability


contains the left half-plane ∆tRe(λ) < 0.

A-stability is very difficult for LMMs to attain!

We have the following results from Dahlquist (1960s):

• An explicit LMM cannot be A-stable.

• An A-stable LMM cannot have p > 2.

• The trapezoidal method is the “best” second-order,
A-stable LMM in terms of error constant (C3 = 1/12).
For stiff problems, A-stability may not be crucial.
→ Stiff decay may be more useful!

BDF methods sacrifice A-stability for stiff decay.


5.4 Implementation of LMMs

Recall the implicit k-step linear multistep method

  Σ_{j=0}^{k} αj yn−j = ∆t Σ_{j=0}^{k} βj fn−j ,   β0 ≠ 0.

→ Solve a system of m nonlinear equations each step.

Use functional iteration (for non-stiff systems) or
(modified) Newton iteration (for stiff systems).

The initial guess can be from using an interpolant of


past y or f values or via an explicit LMM.
5.4.1 Implicit LMMs

• Functional iteration:

  yn^(ν+1) = ∆tβ0 f(tn, yn^(ν)) − Σ_{j=1}^{k} αj yn−j + ∆t Σ_{j=1}^{k} βj fn−j ,   ν = 0, 1, · · · .

Note 10. • Only appropriate for nonstiff problems


• Iterate to “convergence” (as described below).
• If no convergence in 2-3 iterations, or rate of
convergence too slow, reject current step and retry
with smaller step size.
5.4.2 Predictor-Corrector Methods

Often in codes for non-stiff problems, nonlinear


equations are not solved “exactly”, i.e., down to a
small multiple of unit roundoff.
Instead, only a fixed number of iterations is used.

• First use an explicit LMM to predict yn^(0).
  (This is “better” than predicting yn^(0) = yn−1.)
  e.g., k-step AB of order k:

  P :  yn^(0) = −α̂1 yn−1 − · · · − α̂k yn−k + ∆t [ β̂1 fn−1 + · · · + β̂k fn−k ].

• Evaluate the right-hand side:

  E :  fn^(0) = f(tn, yn^(0)).


• Correct using the implicit LMM,
  e.g., k-step AM of order k + 1:

  C :  yn^(1) = −α1 yn−1 − · · · − αk yn−k + ∆t [ β0 fn^(0) + · · · + βk fn−k ].

We can stop here (PEC method),
or the steps (EC) can be iterated ν times:
→ a P(EC)^ν method.
It is advantageous (and natural!) to end with an E step:
→ use the best available guess for yn when forming fn−1 for the next step.
This turns out to significantly enhance the region of
absolute stability over the plain P(EC)^ν method.
The most common scheme is PECE (ν = 1).
Note 11. • This is an explicit method.

• Because corrector is not iterated to convergence,


the order, error, and stability properties are not
generally the same as those of the corrector.
• Should only be used for non-stiff problems.

Example 9. 2-step AB predictor + 1-step (order 2) AM corrector.

Given yn−1, fn−1, fn−2:

  1. P :  yn^(0) = yn−1 + (∆t/2) [3 fn−1 − fn−2]
  2. E :  fn^(0) = f(tn, yn^(0))
  3. C :  yn = yn−1 + (∆t/2) [fn^(0) + fn−1]
  4. E :  fn = f(tn, yn)

→ Explicit, 2nd-order method with LTE

  dn = −(1/12) (∆t)² y′′′(tn) + O((∆t)³)

(same as dn for the corrector).

This is always the case
(roughly because yn^(0) is also order k + 1 and enters
the corrector formula multiplied by ∆t).
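
A minimal sketch (mine) of this PECE scheme, AB2 predictor with trapezoidal corrector, checked for second-order convergence on ẏ = −y:

import numpy as np

def pece(f, t0, y0, y1, dt, nsteps):
    # PECE: AB2 predictor, trapezoidal (order-2 AM) corrector, final re-evaluation.
    ys = [y0, y1]
    fs = [f(t0, y0), f(t0 + dt, y1)]
    for n in range(2, nsteps + 1):
        tn = t0 + n * dt
        yp = ys[-1] + dt / 2 * (3 * fs[-1] - fs[-2])   # P
        fp = f(tn, yp)                                 # E
        yc = ys[-1] + dt / 2 * (fp + fs[-1])           # C
        ys.append(yc)
        fs.append(f(tn, yc))                           # E (reused as f_{n-1} next step)
    return np.array(ys)

# Second-order convergence check on y' = -y, y(0) = 1
f = lambda t, y: -y
for dt in (0.1, 0.05, 0.025):
    y = pece(f, 0.0, 1.0, np.exp(-dt), dt, round(1.0 / dt))
    print(dt, abs(y[-1] - np.exp(-1.0)))               # error ~ (dt)^2
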
5.4.3 Modified Newton Iteration

For stiff problems, you need some kind of Newton


iteration to solve the nonlinear equations.

  yn − ∆tβ0 f(tn, yn) = − Σ_{j=1}^{k} [ αj yn−j − ∆t βj fn−j ] ,

where the right-hand side is known!

Newton's iteration:

  yn^(ν+1) = yn^(ν) − [ I − ∆tβ0 (∂f/∂y) ]^{−1}
             ( α0 yn^(ν) − ∆tβ0 f(tn, yn^(ν)) + Σ_{j=1}^{k} [ αj yn−j − ∆tβj fn−j ] ),   (verify)

with ∂f/∂y evaluated at y = yn^(ν).

Initial guess yn^(0) from an interpolant through past values.
→ Not the cheapest possible!

Idea: Update ∂f/∂y and the LU factors of ( I − ∆tβ0 ∂f/∂y ) only
when necessary!
e.g.,

• Iteration does not converge.

• Stepsize changed significantly.

• Order changed.

• Some number of steps have passed.

Iteration has converged when

  ( ρ / (1 − ρ) ) |yn^(ν+1) − yn^(ν)| < NTOL,

where NTOL is the Newton tolerance (≈ (1/3) ETOL) and ρ is a measure of the convergence rate,

  ρ = ( |yn^(ν+1) − yn^(ν)| / |yn^(1) − yn^(0)| )^{1/ν} ,   ν = 1, 2, · · · .
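
A rough scalar sketch (mine) of one BDF1 (backward Euler) step solved by a modified Newton iteration with a frozen Jacobian; the test problem and tolerances are illustrative assumptions:

import numpy as np

def bdf1_step(f, dfdy, tn, y_prev, dt, ntol=1e-10, maxit=10):
    # One backward Euler (BDF1) step: solve y - dt*f(tn, y) = y_prev by a
    # modified Newton iteration; the (scalar) Jacobian is evaluated once and frozen.
    y = y_prev                              # crude predictor: previous value
    J = 1.0 - dt * dfdy(tn, y)              # frozen iteration "matrix" I - dt*beta0*df/dy
    for _ in range(maxit):
        resid = y - dt * f(tn, y) - y_prev
        dy = -resid / J
        y += dy
        if abs(dy) < ntol:                  # crude stand-in for the rho/(1-rho) test
            break
    return y

# Stiff test problem with stiff decay: y' = lam*(y - cos(t)) - sin(t), y(0) = 1,
# exact solution y = cos(t)
lam = -1.0e4
f = lambda t, y: lam * (y - np.cos(t)) - np.sin(t)
dfdy = lambda t, y: lam
dt, t, y = 0.1, 0.0, 1.0
for _ in range(10):
    t += dt
    y = bdf1_step(f, dfdy, t, y, dt)
print(y, np.cos(t))                          # close, despite dt*|lam| = 1000
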
5.5 Software Design Issues

• Error estimation and control
  ↔ varying step size, varying order

• Solving nonlinear algebraic equations (done)

Less straightforward than for one-step methods!


5.5.1 Variable Stepsize Formulas

→ A crucial part of a practical code!


Recall, Σ_{j=0}^{k} αj yn−j = ∆t Σ_{j=0}^{k} βj fn−j
assumes k equal steps!

If we change the stepsize to ∆tn (≠ ∆t),
then we need (k − 1) new values at the points
tn−1 − ∆tn, tn−1 − 2∆tn, · · · , tn−1 − (k − 1)∆tn.

We had
tn−1 − ∆t, tn−1 − 2∆t, · · · , tn−1 − (k − 1)∆t.
There are 3 main strategies to obtain the missing
values.

We now illustrate in terms of BDF2.


This requires interpolation in y.

For Adams methods, the interpolation is in terms of


past values of f .

• Fixed-coefficient strategy: In this case, we want to


use the formula with fixed coefficients ↔ constant
∆t, so we generate the missing solution values at
evenly spaced points by interpolation.

BDF2:  (3/2) ( yn − (4/3) yn−1 + (1/3) yn−2 ) = ∆tn f(tn, yn)

→ Suppose we want to take a step ∆tn having just


taken steps ∆tn−1, ∆tn−2.
Do quadratic (polynomial) interpolation to get
y at tn−1 + ∆tn.
Advantage: simple!
Method coefficients can be pre-computed.
Work is proportional to number of equations m.
Disadvantage: interpolation error
Interpolation must be performed every time ∆t
changes.
Theoretical and empirical evidence shows them to
be less stable than other methods.
e.g., Gear, Tu, and Watanabe show that it is not
hard to come up with examples that are unstable
because of this strategy.
→ Important when ∆tn needs to be changed often
or by a large amount.
• Variable-coefficient strategy: coeffs depend on ∆t.

This strategy does not require that past values be


evenly spaced.

It turns out that stability is better when formulas


are derived based on unequally spaced data.

This allows codes to change step size and order more


frequently (and drastically), but it is less efficient
than the alternatives.

In fact, this makes the variable-coefficient strategies


the method of choice for variable step size codes
applied to non-stiff problems.

Recall, BDF was derived by interpolating past y


values, then forcing the derivative of the interpolant
to agree with f at t = tn.

Now do the same thing but without constant step!


Use Newton interpolant form: (BDF2)

φ(t) = yn + [yn, yn−1](t − tn)


+[yn, yn−1, yn−2](t − tn)(t − tn−1).
Then, at t = tn,

  φ̇(tn) = [yn, yn−1] + [yn, yn−1, yn−2](tn − tn−1).   (verify)
So the collocation condition φ̇(tn) = f (tn, yn)
translates to

f (tn, yn) = [yn, yn−1] + ∆tn[yn, yn−1, yn−2]. (∗)


Writing out the divided differences in (∗),

  ∆tn f(tn, yn) = yn − yn−1
                  + ( (∆tn)² / (∆tn + ∆tn−1) ) [ (yn − yn−1)/∆tn − (yn−1 − yn−2)/∆tn−1 ].
  (verify)

→ The Newton iteration matrix is thus

  ( 1 + ∆tn/(∆tn + ∆tn−1) ) I − ∆tn ∂f/∂y.   (verify)
In general, dependence on previous k − 1 stepsizes!

→ Not possible in practice to effectively freeze the


Newton iteration matrix.
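
A small sketch (mine, exact rational arithmetic) checking the variable-step BDF2 relation (∗) written out above: with equal steps it reduces to the fixed-step BDF2 formula:

from fractions import Fraction as Fr

def vbdf2_residual(yn, ynm1, ynm2, fn, dtn, dtnm1):
    # Residual of the variable-step relation (*):
    # dtn*f(tn,yn) = yn - ynm1 + dtn^2/(dtn+dtnm1)*((yn-ynm1)/dtn - (ynm1-ynm2)/dtnm1)
    return (yn - ynm1
            + dtn**2 / (dtn + dtnm1) * ((yn - ynm1) / dtn - (ynm1 - ynm2) / dtnm1)
            - dtn * fn)

def bdf2_residual(yn, ynm1, ynm2, fn, dt):
    # Residual of fixed-step BDF2: (3/2)*(yn - 4/3*ynm1 + 1/3*ynm2) = dt*f(tn,yn)
    return Fr(3, 2) * (yn - Fr(4, 3) * ynm1 + Fr(1, 3) * ynm2) - dt * fn

# Arbitrary data; with equal steps the two residuals agree exactly
yn, ynm1, ynm2, fn, dt = Fr(7), Fr(5), Fr(2), Fr(3), Fr(1, 10)
print(vbdf2_residual(yn, ynm1, ynm2, fn, dt, dt))   # Fraction(6, 5)
print(bdf2_residual(yn, ynm1, ynm2, fn, dt))        # Fraction(6, 5)
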
• Fixed leading-coefficient strategy:
→ An “optimal” tradeoff between the efficiency of
fixed coefficients and the stability of variable ones.
Demonstrate on k-step BDF.
Construct predictor polynomial that interpolates the
(unequally spaced) yn−i, i = 1, 2, · · · , k + 1,

φ(tn−i) = yn−i.

Now require a second interpolating polynomial to


match the predictor polynomial on a uniform mesh
and satisfy the ODE at t = tn:

  ψ(tn − i∆tn) = φ(tn − i∆tn),   i = 1, 2, · · · , k,

  ψ′(tn) = f(tn, ψ(tn)).

Then take yn = ψ(tn).

• Stability properties are intermediate to other two


strategies, but efficiency matches fixed-coefficient.
5.5.2 Estimating and controlling local
error

As usual, local error is easier to control than global


error. (why?)

Recall, ∆tn ( ‖dn‖ + O((∆t)^{p+1}) ) = ‖ln‖ ( 1 + O(∆tn) ).

→ Codes try to estimate and control ∆tn ‖dn‖.

To make life easier, assume the “starting values” are exact.

For predictor-corrector methods, the error estimate can be
expressed in terms of the predictor-corrector difference.

Predictor:  d̂n = Ĉp+1 (∆t)^p y^(p+1)(tn) + O((∆t)^{p+1}).

Now consider

  yn − yn^(0) = (Cp+1 − Ĉp+1) (∆t)^p y^(p+1)(tn) + O((∆t)^{p+1}),

where yn^(0) is the predicted and yn the corrected value.
“Solving” this for (∆t)^p y^(p+1) gives the corrector error:

  dn = Cp+1 (∆t)^p y^(p+1)(tn) + O((∆t)^{p+1})
     = ( Cp+1 / (Cp+1 − Ĉp+1) ) (yn − yn^(0)) + O((∆t)^{p+1}).

This is Milne's estimate.

Note 12. Use difference of approximations of same


order (unlike embedded RK).

e.g., k-step AB + (k − 1)-step AM
⇒ an O((∆t)^k) predictor-corrector pair with two function
evaluations per step.

Note 13. • If predictor is order k−1, then advancing


with corrected value of order k is local extrapolation.

• For general LMMs, dn estimated directly using


divided differences; e.g., for BDF2, if φ(t) is the
quadratic through yn, yn−1, and yn−2, then

fn = φ̇(tn) = [yn, yn−1]+∆tn[yn, yn−1, yn−2]+rn,


where

rn = ∆tn(∆tn + ∆tn−1)[yn, yn−1, yn−2, yn−3].

Then
dn ≈ β0rn.

• Step accepted if

  ∆t ‖dn‖ ≤ ETOL,

  where ∆t ‖dn‖ is the estimated truncation error.
Choosing the stepsize and order for the next step:
Begin by forming estimates of the error for methods of
order p − 2, p − 1, p (the current order), and p + 1.

Then ...

• Choose next order to maximize ∆t.


• Raise or lower the order according to whether the sequence

  ‖(∆t)^{p−1} y^(p−1)‖,  ‖(∆t)^p y^(p)‖,  ‖(∆t)^{p+1} y^(p+1)‖,  ‖(∆t)^{p+2} y^(p+2)‖

is increasing or decreasing.

If the size of the terms decreases, the Taylor series is behaving (increase the order);
else lower the order.
Now, given the order p̂ for the next step,
choose ∆tn+1 = α ∆tn, where

  ‖ α^{p̂+1} (∆tn)^{p̂+1} Cp̂+1 y^(p̂+1) ‖ = frac · ETOL,

with EST = ‖ (∆tn)^{p̂+1} Cp̂+1 y^(p̂+1) ‖ and frac ≈ 0.9,

  ⇒ α = ( frac · ETOL / EST )^{1/(p̂+1)} .
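
In code this step-size update is a one-liner; a sketch (mine; the cap on α is a common practical safeguard, not from the notes):

def next_stepsize(dt, est, etol, p_hat, frac=0.9, alpha_max=2.0):
    # dt_{n+1} = alpha*dt with alpha = (frac*ETOL/EST)^(1/(p_hat+1));
    # the cap alpha_max is a common practical safeguard (my addition)
    alpha = (frac * etol / est) ** (1.0 / (p_hat + 1))
    return min(alpha, alpha_max) * dt

print(next_stepsize(dt=0.01, est=2.0e-7, etol=1.0e-6, p_hat=2))   # ~0.0165
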
5.5.3 Off-step points

What if you need the solution at a non-mesh point ?

→ Easy and cheap to construct polynomial interpolant.


(Harder for one-step methods!)

But note that the natural interpolant for BDF is


continuous (but not differentiable), and the natural
interpolant for Adams methods is not continuous (but
its derivative “is”).

Interpolants with higher orders of smoothness are


known.
