Generalized Second Derivative Linear Multistep Methods For Ordinary Differential Equations
https://doi.org/10.1007/s11075-022-01260-8
ORIGINAL PAPER
Received: 6 December 2021 / Accepted: 8 January 2022 / Published online: 14 April 2022
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
Abstract
This paper investigates the modified extended second derivative backward
differentiation formulae from the second derivative general linear methods point
of view. This opens some room for maneuver in developing methods with superior
features by perturbing the abscissa vector of the methods. The proposed methods
are constructed to have better accuracy and stability properties than the original
ones. These improvements are verified by numerical experiments.
Ali Abdi ([email protected])
Tahereh Majidi ([email protected])
Gholamreza Hojjati ([email protected])

1 Introduction

In designing efficient numerical methods for solving stiff initial value problems
(IVPs) of ordinary differential equations (ODEs), one should consider the accuracy
and stability properties together with a modest computational cost. Usually, there
is a conflict between these aims: although linear multistep methods (LMMs) have low
computational cost, they suffer from poor accuracy and stability properties because
of their structure and the existence of order barriers for methods with desirable
stability properties [19]. In contrast, Runge–Kutta (RK) methods usually do not
have such drawbacks, whereas their computational cost grows considerably with the
number of stages and hence with the order. To achieve a good balance between these
features and circumvent the barriers, various modifications of these traditional
methods have been made. One of the most famous and successful directions
for LMMs is the super-future point technique based on the backward differentiation
formula (BDF), which led to the extended BDF (EBDF) [16], modified EBDF
(MEBDF) [17], and matrix-free MEBDF [25]. The discovery of general linear
methods (GLMs) by Butcher [9], as a middle ground between the traditional LMMs,
RK methods, and their modifications, makes it possible to derive methods that are
neither RK methods nor LMMs nor slight modifications of these methods. In recent
decades, GLMs have been investigated, for instance,
in [12–15, 27] and the efficient codes dim18 [10], dim13s [28], and irks14
[8] based on these methods have been developed for nonstiff and stiff ODEs.
Considering MEBDF from the GLMs point of view, perturbations of these methods
in two classes, referred to as perturbed MEBDF (PMEBDF) and fully perturbed
MEBDF (FPMEBDF), which improve the stability properties while preserving the
order, have been introduced in [20].
One of the successful directions to circumvent the barriers for traditional methods
is to develop methods incorporating the second derivative of the solution. Various
methods of this kind have been introduced in the classes of RK and multistep
methods; see, for instance, [18, 21, 23]. Furthermore, GLMs have also been extended
to second derivative GLMs (SGLMs), which include the second derivative of the
solution in the formula. For more details on SGLMs, see [3–5, 7, 11]. Second
derivative BDF methods (SDBDFs) [23], a popular class of second derivative
multistep methods, have been equipped with the super-future point technique; the
resulting methods are referred to as extended SDBDF (ESDBDF) and modified
ESDBDF (MESDBDF) [24]. The k-step ESDBDF methods for the numerical
solution of the initial value problem
y'(t) = f(y(t)),   t ∈ [t_0, T],   y(t_0) = y_0,                                  (1)

are defined by

Σ_{j=0}^{k} α_j y_{n+j} = hβ_k f_{n+k} + h^2 (γ_k g_{n+k} − γ_{k+1} g_{n+k+1}).    (2)

The scheme (2) requires a separate predictor to compute y_{n+k+1}, and then (2) is
used as a corrector. The predictor used in [24] is the k-step SDBDF method defined by
y_{n+k} + Σ_{j=0}^{k−1} α_j y_{n+j} = hβ_k f_{n+k} + h^2 γ_k g_{n+k},    (3)
where the coefficients are computed so that the formula has order k + 1. Indeed,
to implement the ESDBDF method (2), the following three-stage k-step scheme was
proposed, in which the first two stages are the k-step SDBDF method (3):
Stage 1: Compute ȳ_{n+k} as the solution of the k-step SDBDF

ȳ_{n+k} + Σ_{j=0}^{k−1} α_j y_{n+j} = hβ_k f̄_{n+k} + h^2 γ_k ḡ_{n+k}.    (4)

Stage 2: Compute ȳ_{n+k+1} as the solution of the k-step SDBDF advanced one step,

ȳ_{n+k+1} + α_{k−1} ȳ_{n+k} + Σ_{j=0}^{k−2} α_j y_{n+j+1} = hβ_k f̄_{n+k+1} + h^2 γ_k ḡ_{n+k+1}.    (5)

Stage 3: Compute y_{n+k} as the solution of the corrector obtained from (2) by
replacing g_{n+k+1} with ḡ_{n+k+1},

Σ_{j=0}^{k} α_j y_{n+j} = hβ_k f_{n+k} − h^2 γ_{k+1} ḡ_{n+k+1} + h^2 γ_k g_{n+k}.    (6)
This modification makes all stages of the scheme share the same Jacobian matrix
I_m − hβ_k ∂f/∂y − h^2 γ_k ∂g/∂y, with I_m the identity matrix of dimension m,
which reduces the computational cost in a practical implementation.
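The SDBDF coefficients of order k + 1 can be obtained numerically by requiring exactness for polynomial test solutions. The following sketch (an illustration, not the authors' code; `sdbdf_coefficients` is a hypothetical helper name) solves the resulting linear system for the α_j, β_k, γ_k of the k-step SDBDF (3):

```python
import numpy as np

def sdbdf_coefficients(k):
    """Coefficients alpha_0..alpha_{k-1}, beta_k, gamma_k of the k-step SDBDF
    y_{n+k} + sum_j alpha_j y_{n+j} = h beta_k f_{n+k} + h^2 gamma_k g_{n+k}
    of order k+1, obtained by requiring exactness for y(t) = t^q, q = 0..k+1."""
    M = np.zeros((k + 2, k + 2))
    rhs = np.zeros(k + 2)
    for q in range(k + 2):
        for j in range(k):                      # columns for alpha_0..alpha_{k-1}
            M[q, j] = float(j) ** q             # note 0**0 == 1.0
        # derivative terms are evaluated at the step index n+k (i.e., t = k)
        M[q, k] = -q * float(k) ** (q - 1) if q >= 1 else 0.0            # beta_k
        M[q, k + 1] = -q * (q - 1) * float(k) ** (q - 2) if q >= 2 else 0.0  # gamma_k
        rhs[q] = -float(k) ** q                 # alpha_k = 1 moved to the right side
    x = np.linalg.solve(M, rhs)
    return x[:k], x[k], x[k + 1]

alpha, beta, gamma = sdbdf_coefficients(1)
# k = 1 recovers the second-order implicit Taylor method:
# y_{n+1} - y_n = h f_{n+1} - (h^2/2) g_{n+1}, i.e. alpha_0 = -1, beta_1 = 1, gamma_1 = -1/2
print(alpha, beta, gamma)
```

For k = 1 this reproduces the one-step SDBDF analytically; higher k follow from the same system.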
In this paper, considering MESDBDF methods as SGLMs, we analyze the local
truncation error of the methods, and then we introduce a more general class of
these methods whose stability and accuracy properties are improved while their
structure and computational complexity are preserved.
The organization of the paper is as follows. In Section 2, we review SGLMs
and their basic features together with the representation of MESDBDF methods as
SGLMs.

Numerical Algorithms (2022) 91:227–250

2 SGLMs

In this section, we briefly review the main features of SGLMs and the representation
of MESDBDF methods as SGLMs.
SGLMs with s internal stages and r external stages for the numerical solution of
(1) are defined by

Y^{[n]} = h(A ⊗ I_m) f(Y^{[n]}) + h^2 (Ā ⊗ I_m) g(Y^{[n]}) + (U ⊗ I_m) y^{[n−1]},
y^{[n]} = h(B ⊗ I_m) f(Y^{[n]}) + h^2 (B̄ ⊗ I_m) g(Y^{[n]}) + (V ⊗ I_m) y^{[n−1]},    (7)

n = 1, 2, ..., N, where Nh = T − t_0, h is the stepsize, and ⊗ is the Kronecker
product of two matrices. Here, the vector Y^{[n]} = [Y_i^{[n]}]_{i=1}^{s} denotes approximations
of stage order q to the vector y(t_{n−1} + ch) = [y(t_{n−1} + c_i h)]_{i=1}^{s}, i.e.,
Y^{[n]} = Σ_{k=0}^{p} (c^k/k!) ⊗ h^k y^{(k)}(t_{n−1}) + O(h^{q+1}),    (8)

and the input and output vectors are approximations of order p to the corresponding
Nordsieck-type vectors, i.e.,

y^{[n−1]} = Σ_{k=0}^{p} q_k ⊗ h^k y^{(k)}(t_{n−1}) + O(h^{p+1}),    (9)

y^{[n]} = Σ_{k=0}^{p} q_k ⊗ h^k y^{(k)}(t_n) + O(h^{p+1}),    (10)

for some vectors q_0, q_1, ..., q_p ∈ R^r.
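The Kronecker products in (7) simply apply the coefficient matrices simultaneously across all m solution components. A small numpy check of this structural identity (an illustration with arbitrary matrices, not an actual SGLM):

```python
import numpy as np

rng = np.random.default_rng(0)
s, r, m = 3, 2, 4                      # stages, external stages, problem dimension
A = rng.standard_normal((s, s))
U = rng.standard_normal((s, r))
F = rng.standard_normal((s, m))        # stage derivatives f(Y_i), stacked row-wise
yin = rng.standard_normal((r, m))      # input vectors y^{[n-1]}, stacked row-wise

# (A ⊗ I_m) acting on the row-stacked vector equals the row-wise combination A @ F
lhs = np.kron(A, np.eye(m)) @ F.reshape(-1) + np.kron(U, np.eye(m)) @ yin.reshape(-1)
rhs = (A @ F + U @ yin).reshape(-1)
assert np.allclose(lhs, rhs)
```

In practice one therefore never forms the Kronecker products explicitly; the stage equations are evaluated blockwise.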
In order to derive the order and stage conditions for SGLMs of order p and stage
order q = p or p − 1, we introduce the Vandermonde matrix C, the shifting matrix
K, and the matrix W by

C = [ e  c  c^2/2!  ···  c^p/p! ],    K = [ 0  e_1  e_2  ···  e_p ],
W = [ q_0  q_1  q_2  ···  q_p ],

together with the upper triangular Toeplitz matrix

E = exp(K) = ⎡ 1   1/1!   1/2!   ···   1/p!      ⎤
             ⎢ 0   1      1/1!   ···   1/(p−1)!  ⎥
             ⎢ 0   0      1      ⋱     ⋮         ⎥
             ⎢ ⋮   ⋮      ⋱      ⋱     1/1!      ⎥
             ⎣ 0   0      0      ···   1         ⎦ ,

with e_j as the j-th standard unit vector in R^{p+1}. Defining the matrix X̃ as the p
first columns of a given matrix X, the following results have been proved.
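These matrices are easy to form numerically; since K is nilpotent (K^{p+1} = 0), exp(K) is the finite sum Σ_{i=0}^{p} K^i/i!. A small sketch under the definitions above (the abscissa vector is arbitrary here):

```python
import numpy as np
from math import factorial

p = 4
c = np.array([0.25, 0.5, 1.0])                     # an arbitrary abscissa vector
# C = [e, c, c^2/2!, ..., c^p/p!]
C = np.column_stack([c**k / factorial(k) for k in range(p + 1)])
K = np.eye(p + 1, k=1)                             # shifting matrix [0, e_1, ..., e_p]
# exp(K) as the finite series of a nilpotent matrix
E = sum(np.linalg.matrix_power(K, i) / factorial(i) for i in range(p + 1))

# E is upper triangular Toeplitz with entries 1/(j-i)!
for i in range(p + 1):
    for j in range(p + 1):
        expected = 1.0 / factorial(j - i) if j >= i else 0.0
        assert abs(E[i, j] - expected) < 1e-12
```

The first column of C is the all-ones vector e, matching the q = 0 consistency condition.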
Theorem 1 [4, 7] Assume that y^{[n−1]} satisfies (9). Then, the SGLM (7) of order p
and stage order q = p satisfies (8) and (10) if and only if

UW = C − ACK − ĀCK^2,
VW = WE − BCK − B̄CK^2.
Theorem 2 [2] Assume that y^{[n−1]} satisfies (9). Then, the SGLM (7) of order p and
stage order q = p − 1 satisfies (8) and (10) if and only if

UW̃ = C̃ − A(CK)͂ − Ā(CK)͂K,    (11)
VW = WE − BCK − B̄CK^2.    (12)
The linear stability properties of SGLMs are characterized by the stability function
of the methods. The representation of the scheme (4)–(6) as an SGLM has been given
in [22] as follows. By substituting (4) into (5), we get

ȳ_{n+k+1} = α_{k−1}α_0 y_n + Σ_{j=0}^{k−2} (α_{k−1}α_{j+1} − α_j) y_{n+j+1} − hα_{k−1}β_k f̄_{n+k} + hβ_k f̄_{n+k+1}
            − h^2 α_{k−1}γ_k ḡ_{n+k} + h^2 γ_k ḡ_{n+k+1}.
The local truncation error of SGLMs in the case q = p has already been analyzed in
[4]. In this section, we study it for the case q = p − 1, which is applicable
to MESDBDF schemes, where the order of the first two stages is one unit less than
the order of the overall method.
Definition 1 [4] The local truncation error, lte(t_n), of SGLM (7) at the point t_n is
given by

lte(t_n) = (W ⊗ I_m) z(t_n, h) − h(B ⊗ I_m) f(Ŷ^{[n]})
           − h^2 (B̄ ⊗ I_m) g(Ŷ^{[n]}) − (VW ⊗ I_m) z(t_{n−1}, h),    (15)

where Ŷ^{[n]} is defined by

Ŷ^{[n]} = h(A ⊗ I_m) f(Ŷ^{[n]}) + h^2 (Ā ⊗ I_m) g(Ŷ^{[n]}) + (UW ⊗ I_m) z(t_{n−1}, h),    (16)
and the vector z(t, h) stands for the Nordsieck vector defined by

z(t, h) := [ y(t), hy'(t), ..., h^p y^{(p)}(t) ]^T.
The following theorem gives a specific form for lte(tn ) of SGLMs at the point tn
in the case q = p − 1.
Theorem 3 The local truncation error lte(tn ) of the method (7) with q = p − 1 at
the point tn is given by
lte(t_n) = (ϕ_p ⊗ I_m) h^{p+1} y^{(p+1)}(t_{n−1})
           + (ψ_p ⊗ I_m) h^{p+1} (∂f/∂y)(t_{n−1}, y(t_{n−1})) y^{(p)}(t_{n−1}) + O(h^{p+2}),    (17)

with

ϕ_p = W E_{p+1} − B C_p − B̄ C_{p−1},    (18)

and

ψ_p = B (C_p − A C_{p−1} − Ā C_{p−2} − U q_p),    (19)
Proof Since the method (7) has order p and stage order q = p − 1, we have

Ŷ^{[n]} = y(et_{n−1} + ch) + γ(t_{n−1}) h^p + O(h^{p+1}),    (20)

where e is the all-ones s-dimensional vector, y(et_{n−1} + ch) = [y(t_{n−1} + c_i h)]_{i=1}^{s},
and γ(t_{n−1}) stands for the principal part of the error in the stage values Ŷ^{[n]}.
Substituting the last relation into (16) leads to

y(et_{n−1} + ch) + γ(t_{n−1}) h^p = h(A ⊗ I_m) y'(et_{n−1} + ch)
                                   + h^2 (Ā ⊗ I_m) y''(et_{n−1} + ch)
                                   + (UW ⊗ I_m) z(t_{n−1}, h) + O(h^{p+1}),    (21)
where y'(et_{n−1} + ch) = [y'(t_{n−1} + c_i h)]_{i=1}^{s} and y''(et_{n−1} + ch) = [y''(t_{n−1} + c_i h)]_{i=1}^{s}.
We expand the components of the vectors y(et_{n−1} + ch), y'(et_{n−1} + ch),
and y''(et_{n−1} + ch) in Taylor series around t_{n−1} and use the order conditions (11)
appearing in Theorem 2; then, by comparing the coefficients of h^p, we get

γ(t_{n−1}) = (−(C_p − A C_{p−1} − Ā C_{p−2} − U q_p) ⊗ I_m) y^{(p)}(t_{n−1}).    (22)
Also, using (20) and Taylor series, we can write

h f(Ŷ^{[n]}) = h f(y(et_{n−1} + ch) + γ(t_{n−1}) h^p + O(h^{p+1}))
  = h f(y(et_{n−1} + ch)) + (∂f/∂y)(y(et_{n−1} + ch)) γ(t_{n−1}) h^{p+1} + O(h^{p+2})
  = h y'(et_{n−1} + ch) + (∂f/∂y)(y(et_{n−1} + ch)) γ(t_{n−1}) h^{p+1} + O(h^{p+2})
  = (CK ⊗ I_m) z(t_{n−1}, h) + (C_p ⊗ I_m) h^{p+1} y^{(p+1)}(t_{n−1})
    + (∂f/∂y)(y(et_{n−1})) γ(t_{n−1}) h^{p+1} + O(h^{p+2}),    (23)
and

h^2 g(Ŷ^{[n]}) = h^2 g(y(et_{n−1} + ch) + γ(t_{n−1}) h^p + O(h^{p+1}))
  = h^2 g(y(et_{n−1} + ch)) + O(h^{p+2})
  = h^2 y''(et_{n−1} + ch) + O(h^{p+2})
  = (CK^2 ⊗ I_m) z(t_{n−1}, h) + (C_{p−1} ⊗ I_m) h^{p+1} y^{(p+1)}(t_{n−1}) + O(h^{p+2}),    (24)
and also

z(t_n, h) = (E ⊗ I_m) z(t_{n−1}, h) + (E_{p+1} ⊗ I_m) h^{p+1} y^{(p+1)}(t_{n−1}) + O(h^{p+2}).    (25)
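Relation (25) is easy to verify numerically: for a smooth solution, the Nordsieck vector at t_n equals E times the one at t_{n−1} up to O(h^{p+1}). A quick check with y(t) = e^t (an illustration, not from the paper):

```python
import numpy as np
from math import factorial, exp

p, h, t = 3, 0.01, 0.0
# upper triangular Toeplitz matrix E = exp(K) with entries 1/(j-i)!
E = np.array([[1.0 / factorial(j - i) if j >= i else 0.0
               for j in range(p + 1)] for i in range(p + 1)])

def z(t):
    # Nordsieck vector [y, h y', ..., h^p y^{(p)}] for y(t) = e^t
    return np.array([h**k * exp(t) for k in range(p + 1)])

residual = z(t + h) - E @ z(t)
# every component is O(h^{p+1}) = O(1e-8) for h = 0.01, p = 3
assert np.max(np.abs(residual)) < 1e-7
```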
p(w, z) = 1/(1 − λz − μz^2)^3 · Σ_{j=0}^{k} a_j(z) w^j,    (30)

on the boundary of the stability region of the methods as the solutions of the
equations

Σ_{j=0}^{k} a_j(z) e^{ijϑ_ν} = 0,    ν = 1, 2, ..., n,

with

I_ν = { arctan(Im(z)/Re(z)) : Re(z) < 0, Σ_{j=0}^{k} a_j(z) e^{ijϑ_ν} = 0 },    ν = 1, ..., n,
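For a one-step method this boundary computation collapses to checking the stability function directly. As a minimal illustration (not one of the paper's GSLMMs), the 1-step SDBDF y_{n+1} = y_n + h f_{n+1} − (h²/2) g_{n+1} applied to y' = λy, g = λ²y, z = hλ gives R(z) = 1/(1 − z + z²/2), and a grid scan over the left half-plane is consistent with A-stability:

```python
import numpy as np

def R(z):
    # stability function of the 1-step SDBDF (second-order implicit Taylor method)
    return 1.0 / (1.0 - z + 0.5 * z**2)

x = np.linspace(-5.0, 0.0, 101)
y = np.linspace(-20.0, 20.0, 201)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y
# |R(z)| <= 1 on a sample grid of the closed left half-plane
assert np.max(np.abs(R(Z))) <= 1.0 + 1e-12
```

On the imaginary axis |1 − iy + (iy)²/2|² = 1 + y⁴/4 ≥ 1, which is the boundary-locus statement for this method.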
5 Examples of GSLMMs

In this section, we present the coefficients of the constructed GSLMMs of orders
p = 3, 4, ..., 11. Here, we only give the optimal values of η1 and η2 and the
coefficient matrices A, Ā, and U; we refrain from giving the matrices B, B̄, and
V and the abscissa vector c, which are of the form (28) and (29). For the methods
of orders p = 3, 4, 5, 6, taking into account that the original MESDBDF methods
are A-stable, we search for the pairs (η1, η2) for which the derived GSLMMs are
also A-stable with a value of ‖ec_p(η1, η2)‖_1 very close to its minimum.
p = 3:

η1 = −388/593,    η2 = −262/1113,

A = ⎡ 0.345699831365936   0                   0                 ⎤
    ⎢ 0.418900348328583   0.345699831365936   0                 ⎥
    ⎣ 0.654300168634064   0                   0.345699831365936 ⎦ ,
Ā = ⎡ −0.0597541867032183   0                     0                   ⎤
    ⎢ −0.0570750288614188   −0.0597541867032183   0                   ⎥
    ⎣ −0.0567051189885751   0.0445680163663581    −0.0597541867032183 ⎦ ,

U = ⎡ 1 ⎤
    ⎢ 1 ⎥
    ⎣ 1 ⎦ .
p = 4:

η1 = 249/5000,    η2 = 1087/1066,

A = ⎡ 0.894620349779437   0                   0                 ⎤
    ⎢ 0.943766813757910   0.894620349779437   0                 ⎥
    ⎣ 0.000013507606583   0                   0.894620349779437 ⎦ ,

Ā = ⎡ −0.310542598088171    0                    0                  ⎤
    ⎢ −0.356838365106060    −0.310542598088171   0                  ⎥
    ⎣ −0.0781184735770454   0.046709612907379    −0.310542598088171 ⎦ ,

U = ⎡ 1.155179650220563   −0.1551796502205635 ⎤
    ⎢ 1.181312648845392   −0.1813126488453925 ⎥
    ⎣ 1.105366142613980   −0.1053661426139802 ⎦ .
p = 5:

η1 = 559/1250,    η2 = −826/1013,

A = ⎡ 1.04493723752626    0                  0                ⎤
    ⎢ −0.09792722027844   1.04493723752626   0                ⎥
    ⎣ −0.04304115422279   0                  1.04493723752626 ⎦ ,

Ā = ⎡ −0.375955066021513   0                    0                  ⎤
    ⎢ 0.0992465090613176   −0.375955066021513   0                  ⎥
    ⎣ 0.0733392030974382   −0.187046389359499   −0.375955066021513 ⎦ ,

U = ⎡ 1.51430995958412   −0.626357156694508   0.112047197110383  ⎤
    ⎢ 0.09895709651888   1.03967598714780     −0.138633083666673 ⎥
    ⎣ 1.00417004819333   −0.01023617969013    0.006066131496801  ⎦ .
p = 6:

η1 = 729/994,    η2 = −739/658,

A = ⎡ 1.09175537776366      0                  0                ⎤
    ⎢ 0.0371625192846510    1.09175537776366   0                ⎥
    ⎣ −0.0371433717354413   0                  1.09175537776366 ⎦ ,

Ā = ⎡ −0.383917386720813    0                    0                  ⎤
    ⎢ −0.0172342496730442   −0.383917386720813   0                  ⎥
    ⎣ 0.0460856740122321    −0.272839434295113   −0.383917386720813 ⎦ ,

U^T = ⎡ 2.05731033248579     −0.419754094936734   1.01351493633442    ⎤
      ⎢ −1.57401178271781    1.60792315408183     −0.0937747193205140 ⎥
      ⎢ 0.617737592629104    −0.208602224353128   0.0923926236095538  ⎥
      ⎣ −0.101036142397075   0.0204331652080347   −0.0121328406234596 ⎦ .

This method is A-stable with

‖ec_6(η1, η2)‖_1 ≈ 0.0034 ≤ ‖ec_6(0, 1)‖_1 ≈ 0.7568.
p = 7:

η1 = 223/2500,    η2 = −3689/2094,

A = ⎡ 0.726882299008286    0                   0                 ⎤
    ⎢ 0.050932833410381    0.726882299008286   0                 ⎥
    ⎣ −0.073087869282132   0                   0.726882299008286 ⎦ ,

Ā = ⎡ −0.168142504383280   0                    0                  ⎤
    ⎢ −0.024811668011971   −0.168142504383280   0                  ⎥
    ⎣ 0.038580253381901    0.529256740672978    −0.168142504383280 ⎦ ,

U^T = ⎡ 1.586335028857337     −0.2647399059918332   0.9094349903363600  ⎤
      ⎢ −0.8991616013627598   0.9156607214238014    0.5974111911303670  ⎥
      ⎢ 0.4171872087187099    0.4343919385010570    −0.5840935009085844 ⎥
      ⎢ −0.1199120277867760   −0.0963216452358359   0.0844190373544743  ⎥
      ⎣ 0.0155513915734884    0.0110088913028108    −0.0071717179126169 ⎦ .

This method is A-stable with

‖ec_7(η1, η2)‖_1 ≈ 0.0014 ≤ ‖ec_7(0, 1)‖_1 ≈ 2.7009.
p = 8:

η1 = 795/898,    η2 = 2653/369,

A = ⎡ 1.00376386286450     0                  0                ⎤
    ⎢ −383.843160941161    1.00376386286450   0                ⎥
    ⎣ −0.090298614832670   0                  1.00376386286450 ⎦ ,

Ā = ⎡ −0.303392797830922   0                    0                  ⎤
    ⎢ 368.189424432527     −0.303392797830922   0                  ⎥
    ⎣ 0.062283265416413    −0.000019812804409   −0.303392797830922 ⎦ ,

U^T = ⎡ 3.15712186021681     2106.26512124348    0.947076133927057  ⎤
      ⎢ −4.40382431527382    −5713.03643169425   0.329989469072597  ⎥
      ⎢ 3.60710937581011     6435.29315894129    −0.471047631906984 ⎥
      ⎢ −1.81450979299187    −3939.59233578207   0.259456647103144  ⎥
      ⎢ 0.518916223851599    1290.15441427517    −0.074574192442845 ⎥
      ⎣ −0.064813351612829   −178.083926983629   0.009099574247030  ⎦ .

This method is A-stable with

‖ec_8(η1, η2)‖_1 ≈ 0.0081 ≤ ‖ec_8(0, 1)‖_1 ≈ 8.7592.
p = 9:

η1 = 599/998,    η2 = −894/2107,

A = ⎡ 0.854665353891539    0                   0                 ⎤
    ⎢ −0.044094794206219   0.854665353891539   0                 ⎥
    ⎣ 0.133545384315309    0                   0.854665353891539 ⎦ ,

Ā = ⎡ −0.217328023674293    0                    0                  ⎤
    ⎢ 0.025768746703558     −0.217328023674293   0                  ⎥
    ⎣ −0.0250865040464106   −0.311199417769920   −0.217328023674293 ⎦ ,

U^T = ⎡ 2.85207185421918     0.373589765611886    0.993168943541146  ⎤
      ⎢ −3.98854388676492    1.37105727258033     0.040557875205721  ⎥
      ⎢ 3.75659882783086     −1.29483689299705    −0.055811465556413 ⎥
      ⎢ −2.40095117561742    0.809769808136031    0.030917257293390  ⎥
      ⎢ 0.998077215422595    −0.330690921784529   −0.010905729761232 ⎥
      ⎢ −0.243872859896778   0.079724522351708    0.002291774385616  ⎥
      ⎣ 0.026620024806484    −0.008613553898379   −0.000218655108228 ⎦ .

This method is A(89.01°)-stable with

‖ec_9(η1, η2)‖_1 ≈ 0.0059 ≤ ‖ec_9(0, 1)‖_1 ≈ 26.9126.
In Fig. 1, the region of absolute stability of the GSLMM of order p = 9 has been
plotted and compared with that for MESDBDF of the same order.
p = 10:

η1 = 1021/612,    η2 = −999/1000,

A = ⎡ 1.13103276675478    0                  0                ⎤
    ⎢ 0.008473424615545   1.13103276675478   0                ⎥
    ⎣ 0.016749071756286   0                  1.13103276675478 ⎦ ,

Ā = ⎡ −0.367894383836064   0                    0                  ⎤
    ⎢ −0.004729018694588   −0.367894383836064   0                  ⎥
    ⎣ −0.005500558260371   −0.497000817990068   −0.367894383836064 ⎦ ,

U^T = ⎡ 8.52958947240140     −0.385211585082337   1.09236462944105   ⎤
      ⎢ −22.9792413935374    1.59917126544019     −0.524048993916518 ⎥
      ⎢ 33.4483929738593     −0.065341971394488   0.724479797961413  ⎥
      ⎢ −31.1416649735185    −0.344046741390260   −0.430189094080153 ⎥
      ⎢ 19.1273561024877     0.301346895947004    0.181355616070281  ⎥
      ⎢ −7.54665185579959    −0.135873239011399   −0.052563088963023 ⎥
      ⎢ 1.74152037016807     0.033549924280162    0.009376435129413  ⎥
      ⎣ −0.179300696060988   −0.003594548788872   −0.000775301642467 ⎦ .

This method is A(85.76°)-stable with

‖ec_10(η1, η2)‖_1 ≈ 0.0097 ≤ ‖ec_10(0, 1)‖_1 ≈ 79.9393.
In Fig. 2, the region of absolute stability of the GSLMM of order p = 10 has been
plotted and compared with that for MESDBDF of the same order.
Fig. 1 Regions of absolute stability of GSLMM (solid line) and MESDBDF (dashed line) for k = 7
Fig. 2 Regions of absolute stability of GSLMM (solid line) and MESDBDF (dashed line) for k = 8
p = 11:

η1 = 2447/1542,    η2 = −1733/2000,

A = ⎡ 1.058056544651493    0                   0                 ⎤
    ⎢ 0.0058915880426507   1.058056544651493   0                 ⎥
    ⎣ 0.0156549147344995   0                   1.058056544651493 ⎦ ,

Ā = ⎡ −0.3184611259077115   0                     0                   ⎤
    ⎢ −0.0028358925963986   −0.3184611259077115   0                   ⎥
    ⎣ −0.0050152548558462   −0.3709175051135338   −0.3184611259077115 ⎦ ,

U^T = ⎡ 9.45028547028813    −0.406201798902460   1.05220117801065   ⎤
      ⎢ −28.3606413769783   2.16183816440191     −0.290273764690985 ⎥
      ⎢ 47.4661594414867    −1.16181684318959    0.416868129988613  ⎥
      ⎢ −52.5362679459510   0.566078898898886    −0.273655690753791 ⎥
      ⎢ 40.0717878368428    −0.195076404035980   0.131216353347548  ⎥
      ⎢ −20.9787129560454   0.035035822214096    −0.045914123193849 ⎥
      ⎢ 7.23490297448344    0.002219182972755    0.011098976291083  ⎥
      ⎢ −1.48538588577429   −0.002434544088596   −0.001655853187798 ⎥
      ⎣ 0.137872441647813   0.000357521728979    0.000114794188534  ⎦ .

This method is A(80.72°)-stable with

‖ec_11(η1, η2)‖_1 ≈ 0.0053 ≤ ‖ec_11(0, 1)‖_1 ≈ 232.1534.
In Fig. 3, the region of absolute stability of the GSLMM of order p = 11 has been
plotted and compared with that for MESDBDF of the same order.
For k = 1, 2, . . . , 6, the resulting methods are A-stable and for k = 7, 8, 9,
the methods are A(α)-stable with larger angles α in comparison with those for
Fig. 3 Regions of absolute stability of GSLMM (solid line) and MESDBDF (dashed line) for k = 9
MESDBDF. Furthermore, in all cases, GSLMMs have noticeably smaller error con-
stants than MESDBDFs. These results are reported in Table 2. For the sake of
comparison, the values of angle α of A(α)-stability of GSLMMs together with those
of MESDBDFs, SDBDFs, and GLMMs [26] are reported in Table 3.
6 Numerical experiments
Table 2 Error coefficients and angles of A(α)-stability for GSLMMs described in Section 5

                                    GSLMM                                     MESDBDF
k   η1           η2            ϕp         ψp         ‖ecp‖1    α        ‖ecp‖1     α
1   −388/593     −262/1113     0.0045     −0.00022   0.0047    90°      0.1250     90°
2   249/5000     1087/1066     10^{−7}    0.000081   0.000081  90°      0.0025     90°
3   559/1250     −826/1013     −0.0033    0.000023   0.0033    90°      0.1705     90°
4   729/994      −739/658      −0.00332   −0.00008   0.0034    90°      0.7568     90°
5   223/2500     −3689/2094    −0.00071   0.00064    0.0014    90°      2.7009     89.86°
6   795/898      2653/369      −0.0057    −0.0024    0.0081    90°      8.7592     88.49°
7   599/998      −894/2107     0.0031     0.0028     0.0059    89.01°   26.9126    85.43°
8   1021/612     −999/1000     0.0038     −0.0059    0.0097    85.76°   79.9393    81.81°
9   2447/1542    −1733/2000    0.0026     0.0027     0.0053    80.72°   232.1534   76.34°
Table 3 Angles α of A(α)-stability for GSLMMs, MESDBDFs, SDBDFs, and GLMMs for k =
1, 2, ..., 8

k    p   α (GSLMM)    p   α (MESDBDF)    p   α (SDBDF)    p   α (GLMM)
Moreover, the results of the proposed methods are compared with those of the
GLMMs derived in [26].

To verify the theoretical stability improvements, we consider the linear stiff system

y_1' = −α y_1 − β y_2 + (α + β − 1) e^{−t},    y_1(0) = 1,
y_2' = β y_1 − α y_2 + (α − β − 1) e^{−t},     y_2(0) = 0,    (31)
with the exact solution y_1(t) = y_2(t) = e^{−t}. In our numerical experiments, we
take α = 10, β = 290. Since the eigenvalues of the Jacobian of the system are
−α ± iβ, in implementing the methods the stepsize h must be chosen such that
the points z := −hα ± ihβ lie inside the absolute stability region of the methods.
As presented in Fig. 4, these points lie outside (respectively, inside) the stability
region of MESDBDFs and GLMMs with k = 7 and k = 8 for the stepsize h = 0.01
(respectively, h = 0.005). These points, however, lie inside the absolute stability
region of GSLMMs for both stepsizes. As expected, MESDBDFs and GLMMs
diverge for the stepsize h = 0.01, while this is not the case for GSLMMs. These
phenomena are illustrated in Figs. 5 and 6.
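The claim about the spectrum is immediate to verify: the Jacobian of (31) is constant, and its eigenvalues are −α ± iβ. A quick numpy check with the values used above (α = 10, β = 290):

```python
import numpy as np

alpha, beta = 10.0, 290.0
J = np.array([[-alpha, -beta],
              [  beta, -alpha]])             # Jacobian of the linear system (31)
eigs = np.linalg.eigvals(J)
assert np.allclose(sorted(eigs, key=lambda z: z.imag),
                   [-alpha - 1j * beta, -alpha + 1j * beta])

# scaled eigenvalues that must lie in the stability region
for h in (0.01, 0.005):
    z = h * eigs
    print(h, z)                              # h = 0.01 gives -0.1 ± 2.9i
```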
To show the theoretical accuracy improvements, we consider the HIRES problem
[23, 30]

y_1' = −1.71 y_1 + 0.43 y_2 + 8.32 y_3 + 0.0007,                  y_1(0) = 1,
y_2' = 1.71 y_1 − 8.75 y_2,                                       y_2(0) = 0,
y_3' = −10.03 y_3 + 0.43 y_4 + 0.035 y_5,                         y_3(0) = 0,
y_4' = 8.32 y_2 + 1.71 y_3 − 1.12 y_4,                            y_4(0) = 0,
y_5' = −1.745 y_5 + 0.43 (y_6 + y_7),                             y_5(0) = 0,
y_6' = −280 y_6 y_8 + 0.69 y_4 + 1.71 y_5 − 0.43 y_6 + 0.69 y_7,  y_6(0) = 0,
y_7' = 280 y_6 y_8 − 1.81 y_7,                                    y_7(0) = 0,
y_8' = −280 y_6 y_8 + 1.81 y_7,                                   y_8(0) = 0.0057,    (32)
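As a self-contained illustration of integrating (32) (not the authors' implementation: implicit Euler with a finite-difference Newton Jacobian stands in here for the methods of the paper, using the standard HIRES right-hand side):

```python
import numpy as np

def hires(y):
    y1, y2, y3, y4, y5, y6, y7, y8 = y
    return np.array([
        -1.71*y1 + 0.43*y2 + 8.32*y3 + 0.0007,
         1.71*y1 - 8.75*y2,
        -10.03*y3 + 0.43*y4 + 0.035*y5,
         8.32*y2 + 1.71*y3 - 1.12*y4,
        -1.745*y5 + 0.43*(y6 + y7),
        -280.0*y6*y8 + 0.69*y4 + 1.71*y5 - 0.43*y6 + 0.69*y7,
         280.0*y6*y8 - 1.81*y7,
        -280.0*y6*y8 + 1.81*y7,
    ])

def jac_fd(f, y, eps=1e-7):
    # finite-difference Jacobian approximation
    n, f0 = len(y), f(y)
    J = np.zeros((n, n))
    for j in range(n):
        yp = y.copy(); yp[j] += eps
        J[:, j] = (f(yp) - f0) / eps
    return J

y = np.array([1.0, 0, 0, 0, 0, 0, 0, 0.0057])
T, N = 321.8122, 4096
h = T / N
for _ in range(N):
    # implicit Euler step: solve y_new - y - h f(y_new) = 0 by Newton iteration
    ynew = y.copy()
    for _ in range(6):
        res = ynew - y - h * hires(ynew)
        J = np.eye(8) - h * jac_fd(hires, ynew)
        ynew -= np.linalg.solve(J, res)
    y = ynew

assert np.all(np.isfinite(y))
```

Being only first order, implicit Euler is far less accurate than the methods compared in the paper, but it shows the fixed-Jacobian Newton structure shared by the schemes above.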
Fig. 4 Boundaries of the stability regions near the origin of GSLMMs, MESDBDFs, and GLMMs with
k = 7 and k = 8 together with the points z = −10h + 290hi for h = 0.01 and h = 0.005
with t ∈ [0, 321.8122]. The errors of MESDBDFs and GSLMMs for k = 1 (as
A-stable methods) and k = 7 (as A(α)-stable methods) applied to the HIRES
problem for various values of N = 2^{10+i}, i = 1, 2, 3, with Nh = 321.8122,
are reported in Table 4. To compute the error of the methods, we use the reference
solution obtained by the MATLAB function ode15s with the very tight tolerances
Fig. 5 Errors versus t for GSLMMs, MESDBDFs, and GLMMs with k = 7 applied to problem (31) for
the fixed stepsizes h = 0.01 (left) and h = 0.005 (right). Every fiftieth (left) and hundredth (right) point
is plotted
Fig. 6 Errors versus t for GSLMMs, MESDBDFs, and GLMMs with k = 8 applied to problem (31) for
the fixed stepsizes h = 0.01 (left) and h = 0.005 (right). Every fiftieth (left) and hundredth (right) point
is plotted
atol = rtol = 10^{−14}. In this table, “Ratio” stands for the ratio of the error of the
MESDBDF methods to that of the GSLMMs; this quantity shows how much more
accurate the latter is than the former.
To show the capability of the proposed methods in solving stiff problems of higher
dimension, we consider the CUSP problem [23]

y_i' = −ε^{−1} (y_i^3 + a_i y_i + b_i) + σ/(Δx)^2 (y_{i−1} − 2y_i + y_{i+1}),
a_i' = b_i + 0.07 v_i + σ/(Δx)^2 (a_{i−1} − 2a_i + a_{i+1}),                            (33)
b_i' = (1 − a_i^2) b_i − a_i − 0.4 y_i + 0.035 v_i + σ/(Δx)^2 (b_{i−1} − 2b_i + b_{i+1}),
Table 4 Numerical results of MESDBDF and GSLMM for k = 1 and k = 7 applied to problem (32)
for i = 1, 2, ..., N, where

v_i = u_i / (u_i + 0.1),    u_i = (y_i − 0.7)(y_i − 1.3),
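The right-hand side of (33) with the periodic closure y_{N+1} = y_1 (and likewise for a, b) vectorizes naturally with np.roll; a sketch (an illustration, with assumed parameter values ε = 10⁻⁴, σ = 1/144, N = 32, which are typical CUSP settings, not taken from this paper):

```python
import numpy as np

eps, sigma, N = 1e-4, 1.0 / 144.0, 32   # assumed CUSP parameters
dx = 1.0 / N
D = sigma / dx**2

def cusp_rhs(y, a, b):
    # periodic second differences via np.roll implement y_{N+1} = y_1, etc.
    lap = lambda w: np.roll(w, 1) - 2*w + np.roll(w, -1)
    u = (y - 0.7) * (y - 1.3)
    v = u / (u + 0.1)
    dy = -(y**3 + a*y + b) / eps + D * lap(y)
    da = b + 0.07*v + D * lap(a)
    db = (1 - a**2)*b - a - 0.4*y + 0.035*v + D * lap(b)
    return dy, da, db

# sanity check: spatially constant states have zero diffusion contribution
y0 = np.zeros(N); a0 = np.zeros(N); b0 = np.zeros(N)
dy, da, db = cusp_rhs(y0, a0, b0)
assert np.allclose(dy, 0.0)
assert np.allclose(da, 0.07 * 0.91 / 1.01)   # v_i = 0.91/1.01 at y = 0
```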
Fig. 7 Errors versus nfe for GSLMMs, MESDBDFs, and GLMMs with k = 1 (top) and k = 7 (bottom)
applied to problem (33)
and the periodic boundary conditions y_{N+1} = y_1, a_{N+1} = a_1, b_{N+1} = b_1,
with t_out = 1.1. In our implementation, the second derivative function is computed
as g(·) = f_y(·)f(·); also, we use (f_y(·))^2 as a piecewise constant approximation
to the Jacobian g_y(·). In this way, we implement the second derivative methods
without additional computational cost. Errors versus the number of function
evaluations, nfe, for GSLMMs, MESDBDFs, and GLMMs with k = 1 and k = 7
applied to this problem are shown in Fig. 7. These results illustrate that GSLMMs
are more cost-effective than the other two methods.
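When f_y is not formed explicitly, the product g = f_y f can also be approximated matrix-free by a directional finite difference, g(y) ≈ (f(y + δ f(y)) − f(y))/δ; a sketch (an illustration, not the paper's implementation):

```python
import numpy as np

def second_derivative(f, y, delta=1e-7):
    # directional difference approximating g(y) = f_y(y) f(y)
    fy = f(y)
    return (f(y + delta * fy) - fy) / delta

# check on a linear field f(y) = M y, where exactly g(y) = M^2 y
M = np.array([[-2.0, 1.0], [0.5, -3.0]])
f = lambda y: M @ y
y = np.array([1.0, -1.0])
g = second_derivative(f, y)
assert np.allclose(g, M @ M @ y, atol=1e-5)
```

For a linear field the difference quotient is exact up to rounding, which makes this a convenient unit test for any g-evaluation routine.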
7 Conclusion
Data availability Data sharing not applicable to this article as no datasets were generated or analyzed
during the current study.
Declarations
References
1. Abdi, A.: Construction of high-order quadratically stable second-derivative general linear methods for
the numerical integration of stiff ODEs. J. Comput. Appl. Math. 303, 218–228 (2016)
2. Abdi, A., Behzad, B.: Efficient Nordsieck second derivative general linear methods: construction and
implementation. Calcolo 55(28), 1–16 (2018)
3. Abdi, A., Braś, M., Hojjati, G.: On the construction of second derivative diagonally implicit multistage
integration methods for ODEs. Appl. Numer. Math. 76, 1–18 (2014)
4. Abdi, A., Conte, D.: Implementation of second derivative general linear methods. Calcolo 57, 1–29
(2020)
5. Abdi, A., Hojjati, G.: An extension of general linear methods. Numer. Algorithms 57, 149–167 (2011)
6. Abdi, A., Hojjati, G.: Implementation of Nordsieck second derivative methods for stiff ODEs. Appl.
Numer. Math. 94, 241–253 (2015)
7. Abdi, A., Hojjati, G.: Maximal order for second derivative general linear methods with Runge–Kutta
stability. Appl. Numer. Math. 61, 1046–1058 (2011)
8. Abdi, A., Jackiewicz, Z.: Towards a code for nonstiff differential systems based on general linear
methods with inherent Runge–Kutta stability. Appl. Numer. Math. 136, 103–121 (2019)
9. Butcher, J.C.: On the convergence of numerical solutions to ordinary differential equations. Math.
Comp. 20, 1–10 (1966)
10. Butcher, J.C., Chartier, P., Jackiewicz, Z.: Experiments with a variable-order type 1 DIMSIM code.
Numer. Algorithms 22, 237–261 (1999)
11. Butcher, J.C., Hojjati, G.: Second derivative methods with RK stability. Numer. Algorithms 40, 415–
429 (2005)
12. Butcher, J.C., Jackiewicz, Z.: Construction of diagonally implicit general linear methods of type 1 and
2 for ordinary differential equations. Appl. Numer. Math. 21, 385–415 (1996)
13. Butcher, J.C., Jackiewicz, Z.: Construction of high order DIMSIMs for ordinary differential equations.
Appl. Numer. Math. 27, 1–12 (1998)
14. Butcher, J.C., Jackiewicz, Z.: Diagonally implicit general linear methods for ordinary differential
equations. BIT 33, 452–472 (1993)
15. Butcher, J.C., Jackiewicz, Z.: Implementation of diagonally implicit multistage integration methods
for ordinary differential equations. SIAM J. Numer. Anal. 34, 2119–2141 (1997)
16. Cash, J.R.: On the integration of stiff systems of ODEs using extended backward differentiation
formulae. Numer. Math. 34, 235–246 (1980)
17. Cash, J.R.: The integration of stiff initial value problems in ODEs using modified extended backward
differentiation formulae. Comput. Math. Appl. 9, 645–657 (1983)
18. Chan, R.P.K., Tsai, A.Y.J.: On explicit two-derivative Runge–Kutta methods. Numer. Algorithms 53,
171–194 (2010)
19. Dahlquist, G.: A special stability problem for linear multistep methods. BIT 3, 27–43 (1963)
20. D’Ambrosio, R., Izzo, G., Jackiewicz, Z.: Perturbed MEBDF methods. Comput. Math. Appl. 63,
851–861 (2012)
21. Enright, W.H.: Second derivative multistep methods for stiff ordinary differential equations. SIAM J.
Numer. Anal. 11, 321–331 (1974)
22. Ezzeddine, A.K., Hojjati, G., Abdi, A.: Perturbed second derivative multistep methods. J. Numer.
Math. 23, 235–245 (2015)
23. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic
Problems. Springer, Berlin (2010)
24. Hojjati, G., Rahimi Ardabili, M.Y., Hosseini, S.M.: New second derivative multistep methods for stiff
systems. Appl. Math. Model. 30, 466–476 (2006)
25. Hosseini, S.M., Hojjati, G.: Matrix free MEBDF method for the solution of stiff systems of ODEs.
Math. Comput. Modell. 29, 67–77 (1999)
26. Izzo, G., Jackiewicz, Z.: Generalized linear multistep methods for ordinary differential equations.
Appl. Numer. Math. 114, 165–178 (2017)
27. Jackiewicz, Z.: General Linear Methods for Ordinary Differential Equations. Wiley, New Jersey
(2009)
28. Jackiewicz, Z.: Implementation of DIMSIMs for stiff differential systems. Appl. Numer. Math. 42,
251–267 (2002)
29. Lambert, J.D.: Numerical Methods for Ordinary Differential Systems. Wiley, New York (1991)
30. Mazzia, F., Iavernaro, F., Magherini, C.: Test set for initial value problem solvers. University of Bari
(2003)
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.