ANZIAM J. 50 (2009), 333–342
doi:10.1017/S1446181109000030

PRACTICAL RUNGE–KUTTA METHODS FOR SCIENTIFIC COMPUTATION

J. C. BUTCHER¹

(Received 9 November, 2007; revised 18 November, 2008)

Abstract
Implicit Runge–Kutta methods have a special role in the numerical solution of stiff
problems, such as those found by applying the method of lines to the partial differential
equations arising in physical modelling. Of particular interest in this paper are the high-
order methods based on Gaussian quadrature and the efficiently implementable singly
implicit methods.

2000 Mathematics subject classification: primary 65L05.


Keywords and phrases: implicit Runge–Kutta methods, implicit Euler method, stiff
problems, A-stability, L-stability, method of lines, Gauss–Legendre quadrature, DIRK
methods, SIRK methods, Laguerre polynomials.

1. Introduction

The whole process of solving a scientific problem, including formulation, model construction and numerical approximation, requires an incredible range of skills and knowledge. Very few mathematical scientists can expect to be actively involved in all steps of the process, but Stephen White was such a scientist. Although his primary interests were in applied mathematical modelling, he took a serious interest in the computational algorithms that are now an integral part of the problem-solving process.

Crucial to many applied areas is the numerical solution of ordinary or partial differential equations. For partial differential equations in which diffusion plays a significant role, the method of lines is often used to replace continuous dependence on space variables by a discretized approximation, resulting in a high-dimensional system of ordinary differential equations. The initial value problems which result are typically highly stiff, and this is where implicit Runge–Kutta methods have a natural role.

Although Runge–Kutta methods were invented more than 100 years ago, implicit Runge–Kutta methods have been known for less than 50 years.

¹ Department of Mathematics, University of Auckland, 38 Princes St, Science Centre, Building 303, Level 3, Auckland Central; e-mail: [email protected].
© Australian Mathematical Society 2009.

Today implicit methods have become much more important than explicit methods, even though their computational costs are far higher. The reason is that implicit methods are less limited in their performance by stability restrictions. The aim of this paper is to give some of the flavour of what is possible with implicit Runge–Kutta methods.

Throughout the paper, we will consider the solution of an autonomous $N$-dimensional initial value problem written in the form
\[
  y'(t) = f(y(t)), \qquad y(t_0) = y_0, \qquad f : \mathbb{R}^N \to \mathbb{R}^N.
\]

The simplest of all methods for the step-by-step solution of this problem is the (explicit) Euler method. This consists of forming approximations to the solution at $t_0 + nh$, $n = 1, 2, \ldots$, where the time-step $h$ is here assumed to be constant, for the sake of simplicity. The approximations are given by
\[
  y_n = y_{n-1} + h f(y_{n-1}), \qquad n = 1, 2, \ldots. \tag{1.1}
\]
In contrast to this process, in which each quantity is computed explicitly from known quantities, we have the implicit Euler method. In this method the term $h f(y_{n-1})$ in (1.1) is changed to $h f(y_n)$. We then obtain the sequence of approximations given by
\[
  y_n = y_{n-1} + h f(y_n). \tag{1.2}
\]

The numerical scheme (1.2) requires the solution of a nonlinear algebraic equation in each step. For many practical problems, this is an overwhelming cost. Problems in which this cost is worth paying are known as stiff problems.
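To make the distinction between (1.1) and (1.2) concrete, here is a minimal sketch (my own helper names, not from the paper) of one explicit Euler step and one implicit Euler step, the latter solved by a Newton iteration using a user-supplied Jacobian.

```python
import numpy as np

def explicit_euler_step(f, y, h):
    """One step of (1.1): y_n = y_{n-1} + h f(y_{n-1})."""
    return y + h * f(y)

def implicit_euler_step(f, jac, y, h, tol=1e-12, max_iter=20):
    """One step of (1.2): solve y_n = y_{n-1} + h f(y_n) by Newton iteration.
    jac(y) is assumed to return the Jacobian matrix of f at y."""
    y_new = y + h * f(y)                      # explicit Euler step as a predictor
    n = y.size
    for _ in range(max_iter):
        residual = y_new - y - h * f(y_new)
        if np.linalg.norm(residual) < tol:
            break
        J = np.eye(n) - h * jac(y_new)        # Jacobian of the residual
        y_new = y_new - np.linalg.solve(J, residual)
    return y_new
```

For the linear test problem $y' = qy$ with $q \ll 0$, the explicit step is stable only when $|1 + hq| \le 1$, whereas the implicit step is stable for every $h > 0$.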
To set the scene, stiffness is explained using a standard example problem in Section 2. This is followed in Section 3 by a brief introduction to Runge–Kutta methods and, in Section 4, to implicit Runge–Kutta methods in particular. The high-order methods based on Gaussian quadrature are discussed in Section 5 and the lower-order, but more efficient, DIRK and SIRK methods in Sections 6 and 7, respectively.

2. A classical example of a stiff problem


Consider the two-dimensional diffusion equation with Dirichlet boundary conditions on the unit square:
\[
  \frac{\partial u}{\partial t} = \nabla^2 u, \qquad u(x, y) = 0 \ \text{on the boundary of } [0, 1] \times [0, 1].
\]
The spectrum of the Laplacian is the set
\[
  \{ -\pi^2 (m^2 + n^2) : m, n = 1, 2, \ldots \}. \tag{2.1}
\]
To solve the problem numerically, discretise with $N$ points in each space direction, using the standard five-point approximation to the Laplacian.

The discretized problem becomes
\[
  \frac{dU}{dt} = MU,
\]
where $M$ is a specific banded $N^2 \times N^2$ matrix. In replacing an unbounded operator by an approximation represented by a finite-dimensional matrix, we cannot expect a well-conditioned result. To make this comparison quantitative, we will find the eigenvalues of $M$. The spectrum of $M$ is
\[
  \sigma(M) = \left\{ -4(N+1)^2 \left[ \sin^2\!\frac{m\pi}{2(N+1)} + \sin^2\!\frac{n\pi}{2(N+1)} \right] : m, n = 1, 2, \ldots, N \right\}.
\]
The members of $\sigma(M)$ run from about $-2\pi^2$ to about $-8(N+1)^2$, compared with the members of the set (2.1), which lie in $(-\infty, -2\pi^2]$. For accuracy, $2\pi^2 h$ should be small and, for stability, in the case of the Euler method, $8(N+1)^2 h$ should be small. However, for the implicit Euler method there is no such restriction due to stability.

To solve this problem by the Euler method, we need to compute approximations $y_1, y_2, \ldots, y_{n-1}, y_n, \ldots$, using the formula $y_n = (I + hM) y_{n-1}$. However, for the implicit Euler method, we need to do the more costly computation $y_n = (I - hM)^{-1} y_{n-1}$. Costly though this is, it is simple compared with what is required for non-linear problems, where Newton iterations have to be carried out.
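The following sketch (an illustrative construction of mine, not code from the paper) builds the five-point discretization matrix $M$ on an $N \times N$ interior grid, checks that its spectrum spans roughly $-2\pi^2$ to $-8(N+1)^2$, and advances the semi-discrete system by one explicit and one implicit Euler step.

```python
import numpy as np

def laplacian_2d(N):
    """Five-point Laplacian on an N x N interior grid of the unit square,
    with Dirichlet boundary conditions and mesh width 1/(N+1)."""
    h = 1.0 / (N + 1)
    T = (np.diag(-2.0 * np.ones(N)) +
         np.diag(np.ones(N - 1), 1) +
         np.diag(np.ones(N - 1), -1)) / h**2
    I = np.eye(N)
    return np.kron(I, T) + np.kron(T, I)        # banded N^2 x N^2 matrix M

N = 20
M = laplacian_2d(N)
eigs = np.linalg.eigvalsh(M)
print(eigs.max(), -2 * np.pi**2)                # least negative eigenvalue ~ -2*pi^2
print(eigs.min(), -8 * (N + 1)**2)              # most negative eigenvalue ~ -8*(N+1)^2

# One time step of dU/dt = M U with step size h_t:
h_t = 1e-3
U = np.random.rand(N * N)
U_explicit = U + h_t * (M @ U)                            # y_n = (I + hM) y_{n-1}
U_implicit = np.linalg.solve(np.eye(N * N) - h_t * M, U)  # y_n = (I - hM)^{-1} y_{n-1}
```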

3. Introduction to Runge–Kutta methods


It will be convenient to consider only autonomous initial value problems
\[
  y'(t) = f(y(t)), \qquad y(t_0) = y_0, \qquad f : \mathbb{R}^N \to \mathbb{R}^N.
\]
The simple Euler method,
\[
  y_n = y_{n-1} + h f(y_{n-1}), \qquad h = t_n - t_{n-1},
\]
can be made more accurate by using either the mid-point or the trapezoidal rule quadrature formula:
\[
  y_n = y_{n-1} + h f\!\left( y_{n-1} + \tfrac{1}{2} h f(y_{n-1}) \right),
\]
\[
  y_n = y_{n-1} + \tfrac{1}{2} h f(y_{n-1}) + \tfrac{1}{2} h f\!\left( y_{n-1} + h f(y_{n-1}) \right).
\]
These methods, from Runge's 1895 paper [9], are "second-order" because the error in a single step behaves like $O(h^3)$. At a specific output point the error is $O(h^2)$. A few years later, Heun [7] gave a full explanation of third-order methods and Kutta [8] gave a detailed analysis of fourth-order methods.

In the early days of Runge–Kutta methods, the aim was to find explicit methods of second and higher order. However, more recently an important aim has been to find methods suitable for the solution of stiff problems.

In carrying out a step of a Runge–Kutta method, we evaluate $s$ stage values $Y_1, Y_2, \ldots, Y_s$ and $s$ stage derivatives $F_1, F_2, \ldots, F_s$, using the formula $F_i = f(Y_i)$. Each $Y_i$ is found as a linear combination of the $F_j$ added on to $y_0$,
\[
  Y_i = y_0 + h \sum_{j=1}^{s} a_{ij} F_j \approx y(t_0 + c_i h),
\]
and the approximation at $t_1 = t_0 + h$ is found from
\[
  y_1 = y_0 + h \sum_{i=1}^{s} b_i F_i \approx y(t_0 + h).
\]

We represent the method by a tableau and introduce the matrix $A$ and the vectors $b^{T}$ and $c$:
\[
  \begin{array}{c|c}
    c & A \\ \hline
      & b^{T}
  \end{array}
  \;=\;
  \begin{array}{c|cccc}
    c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\
    c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\
    \vdots & \vdots & \vdots & & \vdots \\
    c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\ \hline
        & b_1 & b_2 & \cdots & b_s
  \end{array}
\]
where $c_i = \sum_{j=1}^{s} a_{ij}$, $i = 1, 2, \ldots, s$. If the method is explicit, so that $c_1 = 0$ and $a_{ij} = 0$ unless $i > j$, a simplified tableau can be used in which the diagonal and upper triangular parts of $A$ are omitted.

For the two examples of methods made famous by Runge [9], the corresponding tableaux are
\[
  \begin{array}{c|cc}
    0 & & \\
    \tfrac{1}{2} & \tfrac{1}{2} & \\ \hline
      & 0 & 1
  \end{array}
  \qquad \text{and} \qquad
  \begin{array}{c|cc}
    0 & & \\
    1 & 1 & \\ \hline
      & \tfrac{1}{2} & \tfrac{1}{2}
  \end{array}
\]
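As an illustration of how the tableau drives a step, here is a small sketch (my own helper, not from the paper) of a generic explicit Runge–Kutta step, applied with Runge's mid-point tableau above.

```python
import numpy as np

def explicit_rk_step(f, y0, h, A, b):
    """One step of an explicit Runge-Kutta method given by its tableau (A, b):
    Y_i = y0 + h * sum_j a_ij F_j,  F_i = f(Y_i);  y1 = y0 + h * sum_i b_i F_i."""
    s = len(b)
    F = [None] * s
    for i in range(s):
        Yi = y0 + h * sum(A[i][j] * F[j] for j in range(i))  # only j < i enter (explicit)
        F[i] = f(Yi)
    return y0 + h * sum(b[i] * F[i] for i in range(s))

# Runge's mid-point method: c = (0, 1/2), b = (0, 1)
A_mid = [[0.0, 0.0],
         [0.5, 0.0]]
b_mid = [0.0, 1.0]

f = lambda y: -y                      # simple linear test problem y' = -y
y = np.array([1.0])
y = explicit_rk_step(f, y, 0.1, A_mid, b_mid)
print(y)                              # close to exp(-0.1), with O(h^3) error in the step
```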

Third- and fourth-order methods require three and four stages, respectively. Examples of these can be found in the papers by Heun [7] and Kutta [8] and in [2] and [5]. The pattern that has emerged, that order $s$ can be attained with an $s$-stage explicit method, is an illusion; order five requires six stages, order six requires seven stages and order seven requires nine stages. It is also known that for $p \ge 8$, $s \ge p + 3$ stages are needed. This bound is tight for $p = 8$ but, above this order, little more is known. For details of the order conditions and the derivation of particular methods of various orders, see [2].


4. Implicit Runge–Kutta methods


If $A$ is a full matrix, one step of the method consists of the evaluation of $Y_1, Y_2, \ldots, Y_s$ which satisfy
\[
  Y_i = y_0 + h \sum_{j=1}^{s} a_{ij} f(Y_j).
\]
We write this in the form
\[
  Y = \mathbf{1} \otimes y_0 + h (A \otimes I_N) F,
\]
where $Y$ and $F$ are $sN$-dimensional vectors, for an $N$-dimensional problem, and $\mathbf{1} \in \mathbb{R}^s$ has every component equal to 1.

For $N$ large, this problem is very difficult to solve without excessive cost and usually a simplified form of Newton is used. Let $J$ denote the Jacobian matrix for $f$ computed "recently", so that the Newton corrections
\[
  Y_i \to Y_i - D_i, \qquad i = 1, 2, \ldots, s,
\]
satisfy
\[
  Y_i - D_i = y_0 + h \sum_{j=1}^{s} a_{ij} (F_j - J D_j). \tag{4.1}
\]
Write (4.1) in the compact form
\[
  (I \otimes I_N - h A \otimes J) D = Y - \mathbf{1} \otimes y_0 - h (A \otimes I_N) F \tag{4.2}
\]

and assess the cost of the numerical process. This is in two parts: first the costs associated with an occasional recomputation and factorization, and secondly the costs involved in an actual iteration.

The "occasional" costs are the evaluation of $J$ followed by the factorization of the $(sN) \times (sN)$ matrix $I \otimes I_N - h A \otimes J$ at a cost of $s^3 N^3$ multiplied by a small number. The costs per iteration consist of the evaluation of the $s$ values of $f$, the evaluation of $Y - \mathbf{1} \otimes y_0 - h (A \otimes I_N) F$, and finally the solution of a pre-factored $(sN) \times (sN)$ linear system (4.2) at a cost of $s^2 N^2$ multiplied by a small constant. The factors $N^3$ and $N^2$ seem to be intrinsic requirements in implementing a multistage method. In contrast, the challenge in developing efficient methods is to lower the $s^3$ and $s^2$ coefficients.
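A minimal sketch of one simplified Newton correction (4.2) is given below (my own helper name; it forms the Kronecker product explicitly for clarity, whereas in practice the $(sN) \times (sN)$ matrix would be factorized once and reused across iterations).

```python
import numpy as np

def newton_correction(f, J, A, y0, h, Y):
    """One simplified Newton correction for the stage equations
    Y = 1 (x) y0 + h (A (x) I_N) F, following (4.1)-(4.2).
    Y is an (s, N) array holding the current stage guesses."""
    s, N = Y.shape
    F = np.array([f(Yi) for Yi in Y])                 # stage derivatives F_i = f(Y_i)
    residual = (Y - y0 - h * (A @ F)).reshape(s * N)  # Y - 1(x)y0 - h (A(x)I_N) F
    lhs = np.eye(s * N) - h * np.kron(A, J)           # I(x)I_N - h A(x)J
    D = np.linalg.solve(lhs, residual)
    return Y - D.reshape(s, N)                        # update Y_i -> Y_i - D_i
```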
In addition to the order and the implementation costs, a third vital question about implicit Runge–Kutta methods is their stability behaviour with stiff problems. Associated with each method is a stability function $R(z)$. This is defined in terms of a linear problem $y' = qy$, for which $R(hq)$ is the growth factor; that is, $y_n = R(hq) y_{n-1}$. Write $z = hq$ and substitute $hF = zY$, so that $(I - zA) Y = y_0 \mathbf{1}$ and $y_1 = z b^{T} Y + y_0$. Eliminating $Y$, we find
\[
  R(z) = y_1 / y_0 = 1 + z b^{T} (I - z A)^{-1} \mathbf{1}.
\]


We want stable behaviour for the exact solution, which corresponds to $\operatorname{Re}(z) \le 0$, to imply stable behaviour of the computed solution. This means that for $z$ in the left half-plane, $|R(z)| \le 1$. This property is referred to as A-stability. Some methods have the additional property that $R(\infty) = 0$. A-stable methods, which possess this additional requirement, are said to be L-stable, and for many problems this is a desirable property. For a discussion of the advantages of L-stability over simple A-stability, see [6].
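As a quick numerical illustration (a sketch with my own helper name, not from the paper), $R(z) = 1 + z\, b^{T}(I - zA)^{-1}\mathbf{1}$ can be evaluated directly from the tableau; for the implicit Euler method, $R(z) = 1/(1 - z)$, which is L-stable.

```python
import numpy as np

def stability_function(A, b, z):
    """R(z) = 1 + z * b^T (I - z A)^{-1} 1 for a Runge-Kutta tableau (A, b)."""
    s = len(b)
    one = np.ones(s)
    return 1.0 + z * (b @ np.linalg.solve(np.eye(s) - z * A, one))

# Implicit Euler: A = [[1]], b = [1]  =>  R(z) = 1/(1 - z)
A_ie = np.array([[1.0]])
b_ie = np.array([1.0])
print(abs(stability_function(A_ie, b_ie, 1e6j)) <= 1)     # bounded on the imaginary axis
print(abs(stability_function(A_ie, b_ie, -1e9)) < 1e-6)   # R(z) -> 0 as z -> -infinity
```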
Our task is now to explore various families of implicit methods and ask, for each family, to what extent we can achieve the three desirable properties of high order, good stability and moderate implementation costs.

5. Methods based on Gaussian quadrature


It is remarkable that associated with each Gaussian quadrature formula on $[0, 1]$ there exists a Runge–Kutta method with the same order $2s$ as the quadrature formula itself. These methods are simply constructed. First, the $c_i$ are chosen as the zeros of the polynomial $P_s(2x - 1)$, to give orthogonality on $[0, 1]$, where $P_s$ is the Legendre polynomial of degree $s$. The $b_i$ are then chosen so that the quadrature formula
\[
  \int_0^1 \phi(x)\, dx \approx \sum_{i=1}^{s} b_i \phi(c_i) \tag{5.1}
\]
holds whenever $\phi$ is a polynomial of degree not exceeding $s - 1$. The orthogonality condition then implies that (5.1) actually holds up to degree $2s - 1$. By substituting
\[
  \ell_j(x) = \prod_{i \ne j} \frac{x - c_i}{c_j - c_i}, \qquad j = 1, 2, \ldots, s,
\]
as in Lagrange interpolation, we find
\[
  b_j = \int_0^1 \ell_j(x)\, dx.
\]
The elements of $A$ are also related to quadrature formulae, but on intervals $[0, c_i]$. Specifically,
\[
  a_{ij} = \int_0^{c_i} \ell_j(x)\, dx.
\]
Methods defined in this way are always A-stable. Although they are not also L-stable, closely related methods exist with this additional property, such as the Radau methods; see [6]. Even though the order of Gauss methods is $2s$, in practice it is not always possible to observe rapid convergence of approximations, as $h \to 0$, because the stage order is only $s$. Methods based on Gaussian quadrature have been discovered to play an important role in the solution of problems in Hamiltonian mechanics [4], but their effectiveness for stiff problems is limited by their high implementation costs.


We present the single example, with $s = 2$:
\[
  \begin{array}{c|cc}
    \tfrac{1}{2} - \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} & \tfrac{1}{4} - \tfrac{\sqrt{3}}{6} \\[2pt]
    \tfrac{1}{2} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} \\ \hline
      & \tfrac{1}{2} & \tfrac{1}{2}
  \end{array}
\]
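Following this recipe, a short sketch (my own construction using standard NumPy polynomial utilities, not code from the paper) builds the Gauss tableau for any $s$; for $s = 2$ it should reproduce the array above.

```python
import numpy as np
from numpy.polynomial import legendre, polynomial as P

def gauss_tableau(s):
    """s-stage Gauss Runge-Kutta tableau: c_i are the zeros of P_s(2x - 1);
    b_j = int_0^1 l_j(x) dx;  a_ij = int_0^{c_i} l_j(x) dx."""
    # Zeros of the Legendre polynomial P_s on [-1, 1], mapped to [0, 1] and sorted.
    c = np.sort((legendre.legroots([0] * s + [1]) + 1) / 2)
    A = np.zeros((s, s))
    b = np.zeros(s)
    for j in range(s):
        # Lagrange basis polynomial l_j with l_j(c_i) = delta_ij, as power-series coefficients.
        lj = P.polyfromroots(np.delete(c, j))
        lj = lj / P.polyval(c[j], lj)
        antideriv = P.polyint(lj)              # antiderivative vanishing at x = 0
        b[j] = P.polyval(1.0, antideriv)
        A[:, j] = P.polyval(c, antideriv)
    return A, b, c

A2, b2, c2 = gauss_tableau(2)
print(c2)   # approximately [1/2 - sqrt(3)/6, 1/2 + sqrt(3)/6]
print(A2)   # approximately [[1/4, 1/4 - sqrt(3)/6], [1/4 + sqrt(3)/6, 1/4]]
print(b2)   # approximately [1/2, 1/2]
```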

6. Diagonally implicit methods


If $A$ is lower triangular, with constant diagonals, we get the diagonally implicit Runge–Kutta (DIRK) methods of Alexander [1]. The following method illustrates what is possible for DIRK methods:
\[
  \begin{array}{c|ccc}
    \lambda & \lambda & & \\
    \tfrac{1}{2}(1 + \lambda) & \tfrac{1}{2}(1 - \lambda) & \lambda & \\
    1 & \tfrac{1}{4}(-6\lambda^2 + 16\lambda - 1) & \tfrac{1}{4}(6\lambda^2 - 20\lambda + 5) & \lambda \\ \hline
      & \tfrac{1}{4}(-6\lambda^2 + 16\lambda - 1) & \tfrac{1}{4}(6\lambda^2 - 20\lambda + 5) & \lambda
  \end{array}
\]
where $\lambda \approx 0.4358665215$ satisfies $\tfrac{1}{6} - \tfrac{3}{2}\lambda + 3\lambda^2 - \lambda^3 = 0$.


This method has order 3 and the stability function is
\[
  R(z) = \frac{1 + (1 - 3\lambda) z + \left( \tfrac{1}{2} - 3\lambda + 3\lambda^2 \right) z^2}{(1 - \lambda z)^3}.
\]
Because the numerator has degree only 2, $R(\infty) = 0$. Because $|R(z)| \le 1$ when $\operatorname{Re}(z) \le 0$, it is A-stable, and therefore also L-stable.
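As a sanity check (my own sketch, not the paper's), one can compute $\lambda$ numerically and confirm that $|R(\mathrm{i}y)| \le 1$ on the imaginary axis and that $R(z) \to 0$ as $z \to -\infty$.

```python
import numpy as np

# lambda is the root of 1/6 - (3/2)x + 3x^2 - x^3 = 0 that lies near 0.4359
roots = np.roots([-1.0, 3.0, -1.5, 1.0 / 6.0]).real
lam = roots[np.argmin(np.abs(roots - 0.4359))]

def R(z):
    """Stability function of the third-order DIRK method above."""
    num = 1.0 + (1.0 - 3.0 * lam) * z + (0.5 - 3.0 * lam + 3.0 * lam ** 2) * z ** 2
    return num / (1.0 - lam * z) ** 3

y = np.linspace(-1000.0, 1000.0, 200001)
print(np.max(np.abs(R(1j * y))))   # about 1, and never larger: A-stability on the imaginary axis
print(abs(R(-1e9)))                # tiny: R(z) -> 0 as z -> -infinity, i.e. L-stability
```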
For a general implicit method with $s = 3$, the two components of the cost would be $(27 N^3, 9 N^2)$. But in this case, because of the special structure, they are now only $(3 N^3, 3 N^2)$. This is a major step forward, but low stage order still bedevils us. We attempt to overcome this handicap using singly implicit methods.

7. Singly implicit Runge–Kutta methods


A singly implicit Runge–Kutta (SIRK) method is characterized by the equation $\sigma(A) = \{\lambda\}$. That is, $A$ has a one-point spectrum. While DIRK methods are a special case, they seem to possess advantages that are not possessed by the whole family. That is, for DIRK methods the stages can be computed independently and sequentially from equations of the form $Y_i - h\lambda f(Y_i) = {}$ a known quantity. Each stage requires the same factorized matrix $I_N - h\lambda J$ to permit solution by a modified Newton iteration process (where $J \approx \partial f / \partial y$).

Our aim is to extend this advantage to SIRK methods in general. The secret lies in the inclusion of a transformation to Jordan canonical form into the computation.


Suppose the matrix $T$ transforms $A$ to canonical form by the formula $T^{-1} A T = \bar{A}$, where
\[
  \bar{A} = \lambda
  \begin{bmatrix}
    1 & 0 & \cdots & 0 & 0 \\
    -1 & 1 & \cdots & 0 & 0 \\
    \vdots & \vdots & \ddots & \vdots & \vdots \\
    0 & 0 & \cdots & 1 & 0 \\
    0 & 0 & \cdots & -1 & 1
  \end{bmatrix}.
\]
We now consider a single Newton iteration, simplified by the use of the same approximate Jacobian $J$ for each stage. Assume that the incoming approximation is $y_0$, and that we are attempting to compute
\[
  y_1 = y_0 + h (b^{T} \otimes I_N) F,
\]
where we recall that $Y$ and $F$ are made up from the $s$ subvectors $Y_i$ and $F_i = f(Y_i)$, respectively. The implicit equations to be solved are
\[
  Y = \mathbf{1} \otimes y_0 + h (A \otimes I_N) F,
\]
where we recall that $\mathbf{1}$ is the vector in $\mathbb{R}^s$ with every component equal to 1. The Newton iteration consists of solving the linear system
\[
  (I \otimes I_N - h A \otimes J) D = Y - \mathbf{1} \otimes y_0 - h (A \otimes I_N) F,
\]
and then updating $Y \to Y - D$.
To benefit from the SIRK property, write
\[
  \bar{Y} = (T^{-1} \otimes I_N) Y, \qquad \bar{F} = (T^{-1} \otimes I_N) F, \qquad \bar{D} = (T^{-1} \otimes I_N) D,
\]
so that
\[
  (I \otimes I_N - h \bar{A} \otimes J)\, \bar{D} = (T^{-1} \otimes I_N)\bigl( Y - \mathbf{1} \otimes y_0 - h (A \otimes I_N) F \bigr).
\]
Because we are doing the back-substitutions using the transformed matrix $\bar{A}$, the cost components are reduced from $(s^3 N^3, s^2 N^2)$ to $(N^3, s N^2)$, just as for a DIRK method. There are extra costs associated with the transformations, but these consist of a moderate number multiplied by $s^2 N$. For large problems, these $N$ terms are completely swamped by the $N^2$ and $N^3$ terms and they can be regarded as a small overhead.
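The reason the transformed system is cheap can be seen in a short sketch (my own construction under the stated assumptions, not the paper's implementation): with $\bar{A}$ lower bidiagonal with constant diagonal $\lambda$, the block system splits into $s$ solves that all reuse a single factorization of $I_N - h\lambda J$.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_transformed(J, lam, h, rbar):
    """Solve (I (x) I_N - h Abar (x) J) Dbar = rbar, where Abar = lam * (I - E)
    is the lower-bidiagonal canonical form (E = subdiagonal shift) and rbar is the
    transformed right-hand side, stored as an (s, N) array.  One N x N factorization
    of I_N - h*lam*J serves every stage."""
    s, N = rbar.shape
    lu = lu_factor(np.eye(N) - h * lam * J)      # factorize once, reuse for every stage
    Dbar = np.zeros_like(rbar)
    for i in range(s):
        rhs = rbar[i].copy()
        if i > 0:
            rhs -= h * lam * (J @ Dbar[i - 1])   # coupling to the previous transformed stage
        Dbar[i] = lu_solve(lu, rhs)
    return Dbar
```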
To obtain practical methods in the SIRK family, we will seek methods with high stage order. In fact, we will aim to achieve order at least $s$ together with stage order $s$. Stage order $s$ means that
\[
  \sum_{j=1}^{s} a_{ij} \phi(c_j) = \int_0^{c_i} \phi(t)\, dt,
\]
for $\phi$ any polynomial of degree $s - 1$. This implies that
\[
  A c^{k-1} = \frac{1}{k} c^k, \qquad k = 1, 2, \ldots, s,
\]
where the vector powers are interpreted component by component. This is equivalent to
\[
  A^k c^0 = \frac{1}{k!} c^k, \qquad k = 1, 2, \ldots, s.
\]
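These conditions are easy to verify numerically; for example (my own sketch, using the two-stage Gauss tableau of Section 5, which has stage order $s = 2$):

```python
import numpy as np

# Stage-order check A c^{k-1} = c^k / k, k = 1, 2, for the two-stage Gauss method.
r = np.sqrt(3) / 6
A = np.array([[0.25, 0.25 - r],
              [0.25 + r, 0.25]])
c = np.array([0.5 - r, 0.5 + r])
for k in (1, 2):
    print(k, np.max(np.abs(A @ c ** (k - 1) - c ** k / k)))   # differences at rounding level
```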
From the Cayley–Hamilton theorem,
\[
  (A - \lambda I)^s c^0 = 0,
\]
which can be expanded in the form
\[
  \sum_{i=0}^{s} \binom{s}{i} (-\lambda)^{s-i} A^i c^0 = 0.
\]
Hence, each component of $c$ satisfies
\[
  \sum_{i=0}^{s} \frac{1}{i!} \binom{s}{i} \left( -\frac{x}{\lambda} \right)^{i} = 0,
\]
so that
\[
  L_s\!\left( \frac{x}{\lambda} \right) = 0,
\]
where $L_s$ denotes the Laguerre polynomial of degree $s$. Let $\xi_1, \xi_2, \ldots, \xi_s$ denote the zeros of $L_s$, so that
\[
  c_i = \lambda \xi_i, \qquad i = 1, 2, \ldots, s.
\]
Before discussing the choice of $\lambda$, we remark that it is possible to give an explicit expression for the transformation matrix. It is in fact equal to the generalized Vandermonde matrix
\[
  T =
  \begin{bmatrix}
    L_0(\xi_1) & L_1(\xi_1) & \cdots & L_{s-1}(\xi_1) \\
    L_0(\xi_2) & L_1(\xi_2) & \cdots & L_{s-1}(\xi_2) \\
    \vdots & \vdots & & \vdots \\
    L_0(\xi_s) & L_1(\xi_s) & \cdots & L_{s-1}(\xi_s)
  \end{bmatrix}.
\]
The choice of $\lambda$ should be made taking stability into account. A convenient option is $\lambda = \xi_k^{-1}$, for some $k$. This will mean that $c_k = 1$ and that the stability function is zero at infinity. For $s$ as high as eight (with the exception of seven) this leads to an A-stable method.

However, for $s > 2$, $k$ has to be chosen less than $s$. This means that $c_{k+1}, c_{k+2}, \ldots, c_s$ are all greater than 1. Furthermore, as $s$ increases, the amount by which some abscissae exceed 1 steadily increases. This has to be reckoned as a disadvantage of the method.
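These quantities are easy to produce numerically; the following sketch (my own helper names, not from the paper) computes the Laguerre zeros $\xi_i$, takes $\lambda = \xi_k^{-1}$ so that $c_k = 1$, and forms the generalized Vandermonde matrix $T$, showing that for $k < s$ some abscissae do exceed 1.

```python
import numpy as np
from numpy.polynomial import laguerre

def sirk_nodes_and_transform(s, k):
    """Zeros xi_i of the Laguerre polynomial L_s, abscissae c_i = lam * xi_i with
    lam = 1 / xi_k (so that c_k = 1), and the matrix T with T[i, j] = L_j(xi_i)."""
    xi = np.sort(laguerre.lagroots([0] * s + [1]))   # zeros of L_s, in increasing order
    lam = 1.0 / xi[k - 1]                            # lambda = xi_k^{-1}, k counted from 1
    c = lam * xi
    T = np.column_stack([laguerre.lagval(xi, [0] * j + [1]) for j in range(s)])
    return lam, c, T

lam, c, T = sirk_nodes_and_transform(s=3, k=2)
print(c)                  # c_2 = 1, while c_3 > 1 for this choice of k < s
print(np.linalg.cond(T))  # conditioning of the transformation matrix
```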
Fortunately, this apparent barrier to accurate and stable numerical modelling using singly implicit methods is not insuperable. It is possible to pull the abscissae back into [0, 1], in fact to any set of distinct points that one chooses, without losing the numerical property that really matters; see [3]. The trick is to replace "order" by "effective order". For most applications of effective order, complicated starting, finishing and step-changing schemes have to be added to the method, but in this case there are no significant overheads arising from a variable stepsize implementation.

Assuming that singly implicit methods are implemented as efficiently as possible, using techniques such as those discussed in this section, it must be asked how competitive they are likely to be when faced with demanding and large-scale stiff problems. While tests with a range of problems have shown singly implicit methods to give accurate and reliable results, they need to be compared in rigorous tests against well-known successful codes such as those based on Radau quadrature [6], but there are reasons to expect that they will exhibit their own specific advantages. They have high stage order and no order reduction should be anticipated, but this is at the cost, compared with Radau methods, of lower order per stage. However, the most significant advantage of the new methods is that the singly implicit methods scale up in an efficient manner for high-dimensional problems, simply because a single factorized matrix can be used for all the transformed stages. Furthermore, the cost of the back-substitutions is the same per stage as for the BDF (backward difference formulae) methods, and less than for Radau methods. The transformations, which seem to be an additional overhead for high-order singly implicit methods, are relatively insignificant for large problems.

References
[1] R. Alexander, "Diagonally implicit Runge–Kutta methods for stiff ODEs", SIAM J. Numer. Anal. 14 (1977) 1006–1021.
[2] J. C. Butcher, Numerical methods for ordinary differential equations, 2nd edn (Wiley, Chichester, 2008).
[3] J. C. Butcher and D. J. L. Chen, "ESIRK methods and variable stepsize", Appl. Numer. Math. 28 (1998) 193–207.
[4] E. Hairer, C. Lubich and G. Wanner, Geometric numerical integration. Structure-preserving algorithms for ordinary differential equations (Springer-Verlag, Berlin, 2002).
[5] E. Hairer, S. P. Nørsett and G. Wanner, Solving ordinary differential equations I. Nonstiff problems, Volume 8 of Springer Series in Comput. Math. (Springer, Berlin, 1993).
[6] E. Hairer and G. Wanner, Solving ordinary differential equations II. Stiff and differential-algebraic problems, Volume 14 of Springer Series in Comput. Math. (Springer, Berlin, 1996).
[7] K. Heun, "Neue Methoden zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen", Z. Math. Phys. 45 (1900) 23–38.
[8] W. Kutta, "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen", Z. Math. Phys. 46 (1901) 435–453.
[9] C. Runge, "Über die numerische Auflösung von Differentialgleichungen", Math. Ann. 46 (1895) 167–178.
