Chap 5
Linear Multistep Methods
fl = f(tl, yl),   yl ≈ y(tl).
α0 ≠ 0,   |αk| + |βk| ≠ 0.
To eliminate scaling: α0 = 1.
LMM is explicit if β0 = 0,
implicit otherwise.
Newton’s form:
φ(t) = f[t1] + f[t1, t2](t − t1) + · · · + f[t1, t2, · · · , tk](t − t1)(t − t2) · · · (t − tk−1),

f(t) − φ(t) = f[t1, · · · , tk, t] ∏_{i=1}^{k} (t − ti).

Note 5. f[t1, t2, · · · , tk, t] ≈ f^(k)(t)/k! if ∆t is small.
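As a concrete sketch (not part of the notes; function names are mine), the divided-difference coefficients and the Newton-form evaluation can be computed as follows:

```python
def divided_differences(ts, fs):
    """Return [f[t1], f[t1,t2], ..., f[t1,...,tk]] by the standard
    in-place divided-difference recurrence."""
    coef = list(fs)
    n = len(ts)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (ts[i] - ts[i - j])
    return coef

def newton_eval(ts, coef, t):
    """Evaluate phi(t) = f[t1] + f[t1,t2](t - t1) + ... via nested (Horner) form."""
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = p * (t - ts[i]) + coef[i]
    return p
```

For closely spaced points this also lets one check Note 5 numerically: the top coefficient of the table built on k + 1 nearby points approximates f^(k)(t)/k!.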
5.1.1 Adams methods
ẏ = f(t, y)
⇒ y(tn) = y(tn−1) + ∫_{tn−1}^{tn} f(t, y(t)) dt.
Comparing with ∑_{j=0}^{k} αj yn−j = ∆t ∑_{j=0}^{k} βj fn−j,
we see α0 = 1, α1 = −1, αj = 0, j = 2, 3, · · · , k.
yn = yn−1 + ∆t ∑_{j=1}^{k} βj fn−j,

where βj = (−1)^{j−1} ∑_{i=j−1}^{k−1} binom(i, j−1) γi,

γi = (−1)^i ∫_0^1 binom(−s, i) ds.

Note 6. binom(s, i) = s(s − 1)(s − 2) · · · (s − i + 1)/i! (i factors in the numerator), and binom(s, 0) = 1.
k-step methods ↔ use information at k points
tn−1, tn−2, · · · , tn−k .
Also called a (k + 1)-value method:
k + 1 ↔ total memory requirement.
k = 1: yn = yn−1 + ∆t β1 fn−1,
where β1 = (−1)^0 ∑_{i=0}^{0} binom(i, 0) γi = γ0, (verify)
and γ0 = (−1)^0 ∫_0^1 binom(−s, 0) ds = 1. (verify)
⇒ yn = yn−1 + ∆t fn−1.
Forward Euler!
k = 2: yn = yn−1 + ∆t β1 fn−1 + ∆t β2 fn−2,
β1 = (−1)^0 ∑_{i=0}^{1} binom(i, 0) γi = γ0 + γ1 = 3/2,
β2 = (−1)^1 ∑_{i=1}^{1} binom(i, 1) γi = −γ1 = −1/2,
γ0 = (−1)^0 ∫_0^1 binom(−s, 0) ds = 1,
γ1 = (−1)^1 ∫_0^1 binom(−s, 1) ds = ∫_0^1 s ds = 1/2.
⇒ yn = yn−1 + (∆t/2)(3fn−1 − fn−2), the two-step Adams–Bashforth method (AB2).
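The γi and βj formulas above are easy to check mechanically. A small sketch in exact rational arithmetic (my code, not the notes'):

```python
from fractions import Fraction
from math import comb, factorial

def gamma(i):
    """gamma_i = (-1)^i * integral_0^1 binom(-s, i) ds
               = integral_0^1 s(s+1)...(s+i-1)/i! ds, computed exactly."""
    poly = [Fraction(1)]                  # coefficients of prod_{m<i} (s+m), low degree first
    for m in range(i):
        new = [Fraction(0)] * (len(poly) + 1)
        for d, c in enumerate(poly):
            new[d + 1] += c               # term * s
            new[d] += m * c               # term * m
        poly = new
    integral = sum(c / Fraction(d + 1) for d, c in enumerate(poly))
    return integral / factorial(i)

def ab_beta(k):
    """beta_j (j = 1..k) of the k-step Adams-Bashforth method:
    beta_j = (-1)^(j-1) * sum_{i=j-1}^{k-1} binom(i, j-1) * gamma_i."""
    return [(-1) ** (j - 1) * sum(comb(i, j - 1) * gamma(i) for i in range(j - 1, k))
            for j in range(1, k + 1)]

print(ab_beta(1))  # forward Euler: beta_1 = 1
print(ab_beta(2))  # AB2: beta = 3/2, -1/2
```

Running `ab_beta(3)` similarly reproduces the familiar AB3 weights 23/12, −16/12, 5/12.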
⇒ ∑_{j=0}^{k} βj (yn−j − g(tn−j)) → 0.
∑_{i=0}^{k} αi yn−i = ∆t β0 f(tn, yn).
• (More elegant?) Recursively use a j-step method, j = 1, 2, · · · , k − 1,
→ gradually build up to the k-step method.
– Nice conceptually because you can use only methods from one family (this is what codes do in practice).
– But the order is lower for smaller k!
→ Need error control
(→ and order control!)
Example 2.
ẏ = −5t y² + 5/t − 1/t², y(1) = 1.
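For instance (my driver code, not the notes'), AB2 started with one forward Euler step reproduces the exact solution y(t) = 1/t of Example 2 to O(∆t²); note that y = 1/t gives ẏ = −1/t² and −5t/t² + 5/t − 1/t² = −1/t², so it does satisfy the ODE:

```python
def f(t, y):
    """Right-hand side of Example 2."""
    return -5.0 * t * y * y + 5.0 / t - 1.0 / t ** 2

def ab2(f, t0, y0, dt, n):
    """n steps of two-step Adams-Bashforth, started with one forward Euler step."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + dt * f_prev            # starting step (forward Euler)
    t += dt
    for _ in range(n - 1):
        f_cur = f(t, y)
        y = y + dt * (1.5 * f_cur - 0.5 * f_prev)
        f_prev = f_cur
        t += dt
    return t, y

tn, yn = ab2(f, 1.0, 1.0, 0.001, 1000)   # integrate from t = 1 to t = 2
print(abs(yn - 1.0 / tn))                 # small error vs the exact y = 1/t
```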
5.2 Order, 0-stability, Convergence
→ Truncation error
dn = (1/∆t) Lh y(tn).
Expand about t:
Lh y(t) = C0 y(t) + C1 ∆t ẏ(t) + · · · + Cq (∆t)^q y^(q)(t) + · · · ,
where
C0 = ∑_{j=0}^{k} αj,
Ci = (−1)^i [ (1/i!) ∑_{j=1}^{k} j^i αj + (1/(i − 1)!) ∑_{j=0}^{k} j^{i−1} βj ], i = 1, 2, · · · .
C0 = C1 = · · · = Cp = 0, Cp+1 ≠ 0.
C0: 1 + α1 + α2 = 0
C1: α1 + 2α2 + β0 = 0
C2: (1/2!)(α1 + 4α2) = 0
⇒ α1 = −4/3, α2 = 1/3, β0 = 2/3. (verify!)
Also C3 = −2/9. (verify!)
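Both "verify!" steps can be checked mechanically from the Ci formula above; a sketch in exact arithmetic (mine, not the notes'):

```python
from fractions import Fraction
from math import factorial

def C(i, alpha, beta):
    """Coefficient C_i of the truncation-error expansion of an LMM,
    with alpha, beta indexed j = 0..k as in the notes."""
    k = len(alpha) - 1
    if i == 0:
        return sum(alpha)
    s = Fraction(1, factorial(i)) * sum(j ** i * alpha[j] for j in range(1, k + 1))
    s += Fraction(1, factorial(i - 1)) * sum(j ** (i - 1) * beta[j] for j in range(k + 1))
    return (-1) ** i * s

# BDF2: y_n - 4/3 y_{n-1} + 1/3 y_{n-2} = 2/3 dt f_n
alpha = [Fraction(1), Fraction(-4, 3), Fraction(1, 3)]
beta = [Fraction(2, 3), Fraction(0), Fraction(0)]
print([C(i, alpha, beta) for i in range(4)])  # C0 = C1 = C2 = 0, C3 = -2/9
```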
For a LMM, this means ∑_{j=0}^{k} αj = 0,
∑_{j=1}^{k} j αj + ∑_{j=0}^{k} βj = 0.
Define characteristic polynomials
ρ(ξ) = ∑_{j=0}^{k} αj ξ^{k−j},
σ(ξ) = ∑_{j=0}^{k} βj ξ^{k−j}.
Try yn = ξ^n.
For yn = ξ^n to be stable,
|yn| ≤ |yn−1|, i.e., |ξ^n| ≤ |ξ^{n−1}|
⇒ |ξ| ≤ 1.
ρ(ξ) = ξ² + 4ξ − 5 = (ξ − 1)(ξ + 5).
(The root ξ1 = 1 is always present if the method is consistent.)
→ We see ξ2 = −5.
→ |ξ2| > 1 ⇒ method is unstable!
Then, with y0 = 0, y1 = 1:
y2 = −4y1 = −4
y3 = −4y2 + 5y1 = 21
y4 = −4y3 + 5y2 = −104
...
It blows up!
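The blow-up is easy to reproduce (starting values y0 = 0, y1 = 1 are assumed, to match the numbers above):

```python
# The extraneous root xi2 = -5 dominates: each step roughly multiplies
# the solution by -5, regardless of how small dt is.
y = [0, 1]                            # assumed starting values
for n in range(2, 8):
    y.append(-4 * y[-1] + 5 * y[-2])  # y_n = -4 y_{n-1} + 5 y_{n-2}
print(y)  # [0, 1, -4, 21, -104, 521, -2604, 13021]
```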
∑_{j=0}^{k} (αj − ∆tλ βj) yn−j = 0 (verify)
ρ(ξ) − ∆tλ σ(ξ) = 0.
For Re(λ) < 0, we want |yn| < |yn−1|.
This is not possible if there are extraneous roots of
ρ(ξ) with magnitude 1.
Definition 1. A LMM is 0-stable if all roots ξi of ρ(ξ) = 0 satisfy
|ξi| ≤ 1, and
any root with |ξi| = 1 is simple.
yn = yn−2 + (∆t/3)(fn + 4fn−1 + fn−2)
en = en−2 + (∆tλ/3)(en + 4en−1 + en−2). (verify)
Guessing en = ξ^n, we have
ξ = [ (2/3)∆tλ ± √(1 + (1/3)(∆tλ)²) ] / (1 − (1/3)∆tλ).
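Numerically (my sketch, not the notes'), for Re(λ) < 0 the minus root is the troublemaker: it behaves like −e^{−∆tλ/3}, so its magnitude exceeds 1 while the principal root tracks e^{∆tλ}:

```python
from math import sqrt, exp

def milne_roots(z):
    """The two roots xi of the error recursion above, with z = dt*lambda."""
    disc = sqrt(1.0 + z * z / 3.0)
    denom = 1.0 - z / 3.0
    return (2.0 * z / 3.0 + disc) / denom, (2.0 * z / 3.0 - disc) / denom

xi1, xi2 = milne_roots(-0.1)
print(xi1, xi2)  # principal root ~ e^z < 1; extraneous root ~ -e^(-z/3), |xi2| > 1
```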
Expanding for small ∆tλ, the extraneous root behaves like −e^{−∆tλ/3}, so for Re(λ) < 0 it has magnitude > 1 and |yn| ≤ |yn−1| fails.
∑_{j=0}^{k} αj yn−j = ∆t ∑_{j=0}^{k} βj fn−j, β0 ≠ 0.
• Functional iteration
yn^(ν+1) = ∆tβ0 f(tn, yn^(ν)) − ∑_{j=1}^{k} αj yn−j + ∆t ∑_{j=1}^{k} βj fn−j, ν = 0, 1, · · · .
• First use an explicit LMM to predict yn^(0).
(This is “better” than predicting yn^(0) = yn−1.)
e.g., k-step AB of order k:
1. P: yn^(0) = yn−1 + (∆t/2)[3fn−1 − fn−2]
2. E: fn^(0) = f(tn, yn^(0))
3. C: yn = yn−1 + (∆t/2)[fn^(0) + fn−1]
4. E: fn = f(tn, yn)
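One step of this PECE pair, as a sketch (function and argument names are mine):

```python
def pece_step(f, tn, y_prev, f_prev, f_prev2, dt):
    """One PECE step: AB2 predictor, trapezoidal corrector."""
    y_pred = y_prev + 0.5 * dt * (3.0 * f_prev - f_prev2)  # 1. P
    f_pred = f(tn, y_pred)                                 # 2. E
    y_new = y_prev + 0.5 * dt * (f_pred + f_prev)          # 3. C
    f_new = f(tn, y_new)                                   # 4. E
    return y_new, f_new
```

In a real driver the returned f_new and the incoming f_prev become the history for the next step, so each step costs exactly two f evaluations.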
yn − ∆tβ0 f(tn, yn) = −∑_{j=1}^{k} [αj yn−j − ∆tβj fn−j].
(The right-hand side is known!)
Newton’s iteration:
yn^(ν+1) = yn^(ν) − (I − ∆tβ0 ∂f/∂y)⁻¹ (α0 yn^(ν) − ∆tβ0 f(tn, yn^(ν)) + ∑_{j=1}^{k} (αj yn−j − ∆tβj fn−j)), (verify)
with ∂f/∂y evaluated at y = yn^(ν).
Initial guess yn^(0) from interpolant through past values.
→ Not the cheapest possible!
Idea: Update ∂f/∂y and the LU factors of I − ∆tβ0 ∂f/∂y only
when necessary!
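A scalar illustration of the idea (backward Euler, so k = 1 and β0 = 1; in the scalar case the "LU factorization" is just the number M, and all names here are mine):

```python
def modified_newton_step(f, dfdy, tn, y_prev, dt, tol=1e-12, maxit=25):
    """Solve y = y_prev + dt * f(tn, y) by Newton with a frozen
    iteration matrix M = 1 - dt * df/dy, evaluated once per step."""
    y = y_prev                       # crude initial guess (a predictor would be better)
    M = 1.0 - dt * dfdy(tn, y)       # frozen; a matrix code would LU-factor this once
    for _ in range(maxit):
        r = y - y_prev - dt * f(tn, y)   # residual of the implicit equation
        dy = -r / M                      # "back-substitution" with the frozen factor
        y += dy
        if abs(dy) < tol:
            break
    return y
```

Freezing M sacrifices quadratic convergence for linear convergence, but saves a Jacobian evaluation and refactorization on every iteration.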
e.g.,
• Order changed.
ρ = ( |yn^(ν+1) − yn^(ν)| / |yn^(1) − yn^(0)| )^{1/ν}, ν = 1, 2, · · · .
5.5 Software Design Issues
We had
tn−1 − ∆t, tn−1 − 2∆t, ··· , tn−1 − (k − 1)∆t.
There are 3 main strategies to obtain the missing
values.
BDF2: (3/2)(yn − (4/3)yn−1 + (1/3)yn−2) = ∆tn f(tn, yn),
φ(tn−i) = yn−i.
Now consider
yn − yn^(0) = (Cp+1 − Ĉp+1)(∆t)^p y^(p+1)(tn) + O((∆t)^{p+1}),
yn^(0): predicted
yn: corrected
(∆t)^p y^(p+1): “solve” for dn below
∴ dn = Cp+1 (∆t)^p y^(p+1)(tn) + O((∆t)^{p+1})   (corrector error)
     = [Cp+1/(Cp+1 − Ĉp+1)] (yn − yn^(0)) + O((∆t)^{p+1}).
Milne’s estimate
Then
dn ≈ β0 rn.
• Step accepted if ∆t‖dn‖ ≤ ETOL,
where ∆t‖dn‖ is the estimated truncation error.
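Since ∆t‖dn‖ scales like (∆t)^{p+1}, a standard acceptance/stepsize rule follows; the safety factor and growth cap below are my choices, not the notes':

```python
def accept_and_resize(dt, est, etol, p, safety=0.9, growmax=5.0):
    """est = dt * ||d_n|| ~ C * dt**(p+1); return (accepted?, next dt)."""
    accepted = est <= etol
    if est == 0.0:
        return accepted, dt * growmax          # error estimate vanished; just grow
    dt_new = dt * min(growmax, safety * (etol / est) ** (1.0 / (p + 1)))
    return accepted, dt_new
```

On rejection the same formula shrinks dt, and the step is retried with the new stepsize.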
Choosing stepsize and order for the next step:
Begin by forming estimates of the error by methods of order p − 2, p − 1, p (the current order), and p + 1.
Then check whether this sequence of error estimates
is increasing or decreasing.