MAT 461/561: 12.3 Multistep Methods

James V. Lambers

April 6, 2020

Announcements

Multistep Methods
In one-step methods, during each time step, f(t, y) in the ODE y' = f(t, y) may be evaluated several times to obtain high-order accuracy (e.g. RK4 uses four evaluations per step), but this is not necessary! High-order accuracy (that is, error that is O(h^p) with p > 1) can be achieved without so many evaluations in each time step.
General form of a multistep method:

$$\sum_{i=0}^{m} \alpha_i y_{n+1-i} = h \sum_{i=0}^{m} \beta_i f(t_{n+1-i}, y_{n+1-i})$$

where m is the number of steps (m = 1 for a one-step method). Shorthand: f_k = f(t_k, y_k).


Pro: only one new evaluation of f per time step.
Con: cannot be used immediately! A one-step method must supply the first m − 1 time steps as "starting values". Example: for m = 3, use a one-step method to get y_1, y_2; then the multistep method can produce y_3.
A multistep method is explicit if β_0 = 0, and is implicit otherwise. Therefore, to obtain y_{n+1} in an implicit method, a root-finding method (e.g. Newton's or the Secant method) can be used to find a root of

$$F(y) = \alpha_0 y + \sum_{i=1}^{m} \alpha_i y_{n+1-i} - h\beta_0 f(t_{n+1}, y) - h\sum_{i=1}^{m} \beta_i f_{n+1-i} = 0.$$
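As a concrete sketch of this idea (the test problem y' = -y, the step size, and the choice of the 1-step Adams-Moulton, i.e. trapezoidal, method are assumptions for illustration), one implicit step can be solved with the secant method:

```python
# Sketch (not from the notes): one implicit time step solved with the
# secant method.  Test problem y' = -y and the step size are assumptions.

def secant(F, y0, y1, tol=1e-12, maxit=50):
    """Find a root of F starting from the two guesses y0, y1."""
    for _ in range(maxit):
        F0, F1 = F(y0), F(y1)
        if abs(F1 - F0) < 1e-300:        # avoid division by zero
            break
        y2 = y1 - F1 * (y1 - y0) / (F1 - F0)
        if abs(y2 - y1) < tol:
            return y2
        y0, y1 = y1, y2
    return y1

def implicit_step(f, tn, yn, h):
    """One step of the 1-step Adams-Moulton (trapezoidal) method:
       y_{n+1} = y_n + h/2 * (f_{n+1} + f_n), solved for y_{n+1}."""
    fn = f(tn, yn)
    F = lambda y: y - yn - 0.5 * h * (f(tn + h, y) + fn)
    # A forward Euler predictor supplies the second secant guess.
    return secant(F, yn, yn + h * fn)

f = lambda t, y: -y              # y' = -y, exact solution e^{-t}
h = 0.1
y1 = implicit_step(f, 0.0, 1.0, h)
# For this linear problem the trapezoidal step has the closed form
# y1 = y0 * (1 - h/2) / (1 + h/2), which the secant iteration recovers.
```

For a linear f the function F is linear in y, so the secant iteration lands on the root in one step; for nonlinear problems it iterates.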

Adams Methods
By integrating y' = f(t, y) on [t_n, t_{n+1}], we get

$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(s, y(s))\,ds$$

An Adams Method is based on approximating this integral using a quadrature rule in which f(s, y(s)) is fit with a polynomial using interpolation points t_{n+1-m}, t_{n+2-m}, ..., t_{n-1}, t_n, and possibly t_{n+1} in the case of an implicit method.
In all Adams Methods, α_0 = 1, α_1 = −1, and α_i = 0 for i > 1.
Explicit Adams Methods (β_0 = 0) are called Adams-Bashforth Methods, while implicit Adams Methods are called Adams-Moulton Methods.

Example: derivation of the 3-step Adams-Bashforth Method (m = 3), which has the form

$$y_{n+1} = y_n + h(\beta_1 f_n + \beta_2 f_{n-1} + \beta_3 f_{n-2})$$

Let p_2(t) be the interpolating polynomial that fits f(t, y(t)) at t_n, t_{n-1}, t_{n-2}, expressed in Lagrange form:

$$p_2(t) = \sum_{i=0}^{2} f_{n-i}\, L_{2,i}(t)$$
We then have

$$\int_{t_n}^{t_{n+1}} f(s, y(s))\,ds \approx \int_{t_n}^{t_{n+1}} p_2(s)\,ds = h \int_0^1 \tilde{p}_2(u)\,du$$

where u = (t_{n+1} − s)/h, so that du = −(1/h) ds and [t_n, t_{n+1}] is mapped to [0, 1] (the orientation is reversed, which cancels the minus sign in du). The interpolation points t_n, t_{n-1}, t_{n-2} are mapped to 1, 2, 3, respectively. We then have

$$h \int_0^1 \tilde{p}_2(u)\,du = h \int_0^1 \left[ f_n \tilde{L}_{2,0}(u) + f_{n-1} \tilde{L}_{2,1}(u) + f_{n-2} \tilde{L}_{2,2}(u) \right] du$$

$$= h \int_0^1 \left[ f_n \frac{(u-2)(u-3)}{(1-2)(1-3)} + f_{n-1} \frac{(u-1)(u-3)}{(2-1)(2-3)} + f_{n-2} \frac{(u-1)(u-2)}{(3-1)(3-2)} \right] du$$

$$= h \left[ f_n \int_0^1 \frac{(u-2)(u-3)}{(1-2)(1-3)}\,du + f_{n-1} \int_0^1 \frac{(u-1)(u-3)}{(2-1)(2-3)}\,du + f_{n-2} \int_0^1 \frac{(u-1)(u-2)}{(3-1)(3-2)}\,du \right]$$

Therefore

$$\beta_1 = \int_0^1 \frac{(u-2)(u-3)}{(1-2)(1-3)}\,du = \frac{23}{12}, \qquad
\beta_2 = \int_0^1 \frac{(u-1)(u-3)}{(2-1)(2-3)}\,du = -\frac{16}{12}, \qquad
\beta_3 = \int_0^1 \frac{(u-1)(u-2)}{(3-1)(3-2)}\,du = \frac{5}{12}$$

and our method is

$$y_{n+1} = y_n + \frac{h}{12}\left[ 23 f_n - 16 f_{n-1} + 5 f_{n-2} \right]$$
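These three weights can be double-checked with exact rational arithmetic. The following sketch (not from the notes) integrates each Lagrange basis polynomial over [0, 1] by multiplying out its factors:

```python
# Verify the Adams-Bashforth-3 weights by exact integration of the
# Lagrange basis polynomials on [0, 1], using rational arithmetic.
from fractions import Fraction

def integrate_lagrange(node, nodes):
    """Integral over [0, 1] of the Lagrange basis polynomial that is
       1 at `node` and 0 at the other `nodes`."""
    # Build polynomial coefficients (lowest degree first) by multiplying
    # out the factors (u - m) / (node - m).
    coeffs = [Fraction(1)]
    for m in nodes:
        if m == node:
            continue
        scale = Fraction(1, node - m)
        new = [Fraction(0)] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] += c * scale      # the u * c term
            new[k] += -m * c * scale     # the -m * c term
        coeffs = new
    # Integrate term by term on [0, 1]: u^k integrates to 1/(k+1).
    return sum(c / (k + 1) for k, c in enumerate(coeffs))

nodes = [1, 2, 3]    # images of t_n, t_{n-1}, t_{n-2} under the map u
betas = [integrate_lagrange(p, nodes) for p in nodes]
print(betas)   # 23/12, -16/12 (reduced to -4/3), 5/12
```

Note that `Fraction` reduces −16/12 to −4/3; the weights agree with the derivation above.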
In general, an m-step Adams-Bashforth method has error O(h^m), and an m-step Adams-Moulton method has error O(h^{m+1}).
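The derived method and its order can be tested numerically. This sketch (the test problem y' = -y and the step sizes are assumptions) supplies exact starting values y_1, y_2 so that only the multistep error is measured:

```python
# Sketch: 3-step Adams-Bashforth on y' = -y, y(0) = 1, integrated to
# t = 1.  Starting values are taken from the exact solution e^{-t}.
import math

def ab3(f, t0, y_start, h, nsteps):
    """Advance with y_{n+1} = y_n + h/12 (23 f_n - 16 f_{n-1} + 5 f_{n-2}).
       y_start must hold the m = 3 starting values y_0, y_1, y_2."""
    ys = list(y_start)
    fs = [f(t0 + i * h, y) for i, y in enumerate(ys)]
    for n in range(2, nsteps):
        ys.append(ys[-1] + h / 12 * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3]))
        fs.append(f(t0 + (n + 1) * h, ys[-1]))
    return ys[-1]

f = lambda t, y: -y
errors = []
for h in (0.1, 0.05):
    n = round(1.0 / h)
    start = [math.exp(-i * h) for i in range(3)]
    errors.append(abs(ab3(f, 0.0, start, h, n) - math.exp(-1.0)))
# Halving h should shrink the error by roughly 2^3 = 8, consistent
# with the O(h^3) error of a 3-step Adams-Bashforth method.
print(errors[0] / errors[1])
```

The observed error ratio near 8 is the numerical signature of third-order accuracy.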

Predictor-Corrector Methods
A predictor-corrector method uses a pair of Adams-Bashforth and Adams-Moulton methods to avoid having to apply an iterative method, such as Newton's Method, to the implicit (Adams-Moulton) method. The procedure is called PECE: Predict, Evaluate, Correct, Evaluate.

1. Predict: use the predictor (Adams-Bashforth method) to compute ỹ_{n+1}.

2. Evaluate: compute f̃_{n+1} = f(t_{n+1}, ỹ_{n+1}).

3. Correct: use the corrector (Adams-Moulton method), with f̃_{n+1} in place of f_{n+1}, to obtain y_{n+1}.

4. Evaluate: compute f_{n+1} = f(t_{n+1}, y_{n+1}) for the next time step.

Example: the predictor is the 2-step Adams-Bashforth method,

$$\tilde{y}_{n+1} = y_n + \frac{h}{2}\left[ 3 f_n - f_{n-1} \right]$$

Then let f̃_{n+1} = f(t_{n+1}, ỹ_{n+1}), and correct with the 2-step Adams-Moulton method,

$$y_{n+1} = y_n + \frac{h}{12}\left[ 5 \tilde{f}_{n+1} + 8 f_n - f_{n-1} \right]$$

Finally, for the next time step, f_{n+1} = f(t_{n+1}, y_{n+1}).
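The four PECE stages from this example can be sketched as follows (the test problem y' = -y, the step size, and the exact starting value y_1 are assumptions):

```python
# Sketch: PECE with the AB2 predictor and AM2 corrector on y' = -y.
import math

def pece_step(f, tn, yn, fn, fnm1, h):
    """One Predict-Evaluate-Correct-Evaluate step; returns (y_{n+1}, f_{n+1})."""
    y_pred = yn + h / 2 * (3 * fn - fnm1)               # Predict  (AB2)
    f_pred = f(tn + h, y_pred)                          # Evaluate
    y_new = yn + h / 12 * (5 * f_pred + 8 * fn - fnm1)  # Correct  (AM2)
    return y_new, f(tn + h, y_new)                      # Evaluate

f = lambda t, y: -y
h = 0.1
ys = [1.0, math.exp(-h)]          # y_0 and an exact starting value y_1
fs = [f(0.0, ys[0]), f(h, ys[1])]
for n in range(1, 10):            # advance from t = h to t = 1
    y, fval = pece_step(f, n * h, ys[-1], fs[-1], fs[-2], h)
    ys.append(y)
    fs.append(fval)
print(abs(ys[-1] - math.exp(-1.0)))   # small: the corrector is order 3
```

Each step costs exactly two f evaluations, regardless of how stiff the implicit equation would have been to solve by iteration.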

Backward Differentiation Formulas


A Backward Differentiation Formula (BDF) is an implicit multistep method in which the only nonzero β is β_0 = 1. The α_i are obtained by approximating y' at t_{n+1} using a backward difference. General form:

$$\sum_{i=0}^{m} \alpha_i y_{n+1-i} = h f_{n+1}$$

where

$$\alpha_i = h\, L'_{m,i}(t_{n+1})$$

and L_{m,i}(t) is the ith Lagrange polynomial for the points t_{n+1}, t_n, t_{n-1}, ..., t_{n+1-m} (see Section 9.1); the factor of h makes each α_i dimensionless.
An m-step BDF has error O(h^m). Unfortunately, no BDF of order greater than 6 is "stable".
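The α_i can be computed directly from this formula. The sketch below (not from the notes) does so for m = 2 with equally spaced points; working in units of h, the points t_{n+1}, t_n, t_{n-1} become 0, −1, −2, and the factor of h is absorbed:

```python
# Compute the BDF2 coefficients alpha_i = h * L'_{2,i}(t_{n+1}) with
# exact rational arithmetic, in units of h.
from fractions import Fraction

def lagrange_deriv_at(node, nodes, x):
    """Derivative at x of the Lagrange basis polynomial for `node`,
       via the product-rule sum over the remaining factors."""
    others = [m for m in nodes if m != node]
    denom = 1
    for m in others:
        denom *= node - m
    total = Fraction(0)
    for j in range(len(others)):
        prod = Fraction(1)
        for k, m in enumerate(others):
            if k != j:
                prod *= x - m
        total += prod
    return total / denom

points = [0, -1, -2]    # t_{n+1}, t_n, t_{n-1} in units of h
alphas = [lagrange_deriv_at(p, points, 0) for p in points]
print(alphas)   # BDF2: 3/2, -2, 1/2
```

This recovers the familiar BDF2 formula (3/2) y_{n+1} − 2 y_n + (1/2) y_{n-1} = h f_{n+1}.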
