Higher Order Numerical Methods for Fractional Order Differential Equations
Citation: Pal, K. K. (2015). Higher order numerical methods for fractional order differential equations. Doctoral dissertation, University of Chester, United Kingdom.
August, 2015
Abstract
This thesis explores higher order numerical methods for solving fractional differential
equations.
Firstly, we consider two approaches to constructing higher order numerical methods for solving fractional differential equations. Based on a direct discretization of the fractional differential operator, we show that the order of convergence of the numerical method for the linear fractional differential equation with 0 < α < 1 is O(h^{3−α}), where α denotes the order of the fractional derivative. Based on a discretization of the integral in the equivalent form of the non-linear fractional differential equation, the order of convergence of the numerical method is O(h^3) for α ≥ 1 and O(h^{1+2α}) for 0 < α ≤ 1 for sufficiently smooth functions.
Secondly, we introduce extrapolation algorithms for accelerating the convergence order
of the two considered numerical methods. Numerical experiments are given for each
algorithm to show that the numerical results are consistent with the theoretical results.
Finally, we introduce a higher order algorithm for solving two-sided space-fractional partial differential equations. The space-fractional derivatives we consider here are the left-handed and right-handed Riemann-Liouville fractional derivatives, which are expressed by using Hadamard finite-part integrals. We approximate the Hadamard finite-part integrals by using piecewise quadratic interpolation polynomials and obtain a numerical approximation of the space-fractional derivative with convergence order O(∆x^{3−α}), 1 < α < 2. A shifted implicit finite difference method is applied for solving the two-sided space-fractional partial differential equation, and we prove that the order of convergence of the finite difference method is O(∆t + ∆x^{min(3−α,β)}), 1 < α < 2, β > 0, where ∆t and ∆x denote the time and space stepsizes, respectively, α is the order of the fractional derivative, and β is the Lipschitz exponent associated with the exact solution. Numerical examples, where the solutions have varying degrees of smoothness, are presented and compared with the exact analytical solutions in order to assess the practical performance of the method against the theoretical order of convergence.
Declaration
No part of the work referred to in this thesis has been submitted in support of an application for another degree or qualification of this or any other institution of learning. However, some parts of the material contained herein have been published previously.
Publications
• Y. Yan, K. Pal and N. J. Ford [93], Higher order numerical methods for solving
fractional differential equations, BIT Numer. Math., 54 (2014), 555-584.
• K. Pal, F. Liu and Y. Yan [73], Numerical solutions for fractional differential equa-
tions by extrapolation, Lecture Notes in Computer Science, Springer series, Volume
9045 (2015), 299-306.
• K. Pal, F. Liu, Y. Yan and G. Roberts [74], Finite difference method for two-sided
space-fractional partial differential equations, Lecture Notes in Computer Science,
Springer series, Volume 9045 (2015), 307-314.
• N. J. Ford, K. Pal and Y. Yan [42], An algorithm for the numerical solution of
two-sided space-fractional partial differential equations, Computational Methods in
Applied Mathematics, 15 (2015), 497-514.
Conference presentations
• A higher order numerical method for solving fractional differential equations (FDEs)
(Diethelm’s Method); Sixth Conference on Finite Difference Methods: Theory and
Applications, June 18-23, 2014, Lozenetz, Bulgaria.
• A higher order numerical method for solving fractional differential equations (FDEs)
(Predictor-corrector method); 6th International Conference on Computational Meth-
ods in Applied Mathematics, Sep 28- Oct 4, 2014, Strobl, Austria.
• Basic concepts of fractional differential equations (FDEs); SCI Early Career Research Meeting, 6th Nov 2014, Thornton Science Park, University of Chester.
Poster presentations
Acknowledgements:
I am grateful to my PhD supervisors Dr. Yubin Yan and Professor Neville J. Ford for
their continuous help, encouragement and guidance.
I would like to take the opportunity to thank all the colleagues and friends of the mathematics research group at the University of Chester for accepting me as a visiting lecturer in 2014-2015 and providing a friendly and stimulating working environment.
Furthermore, I would like to thank all the staff in the Department of Mathematics
and international center at University of Chester for their contribution during my PhD
studies. I also would like to thank Nicola Banks for commenting on a draft of this thesis.
1 Introduction
1.1 Fractional calculus
1.2 Basic functions of fractional calculus
1.2.1 Gamma Function
1.2.2 Beta Function
1.2.3 Mittag-Leffler function
1.3 Structure of the thesis
Chapter 1

Introduction
However, interest in the specific topic of fractional calculus surged only at the end of the last century. Fractional differential equations, that is, those involving derivatives of real or complex order, have assumed an important role in modelling the anomalous dynamics of many processes related to complex systems in the most diverse areas of science and engineering [4]. During the last 25 years there has been a spectacular increase in the use of fractional differential models to simulate the dynamics of many different anomalous processes, especially those involving ultra-slow diffusion. The following table, based only on the Scopus database, reflects this state of affairs clearly [4].
Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt,  Re(z) > 0,   (1.2.1)

which is the Euler integral of the second kind and converges in the right half of the complex plane Re(z) > 0. Indeed, if z = x + iy, we have

Γ(x + iy) = ∫_0^∞ e^{−t} t^{x−1+iy} dt = ∫_0^∞ e^{−t} t^{x−1} e^{iy log(t)} dt,

which is convergent for any x > 0. The reduction formula of the gamma function is

Γ(z + 1) = ∫_0^∞ e^{−t} t^z dt = [−e^{−t} t^z]_0^∞ + z ∫_0^∞ e^{−t} t^{z−1} dt = zΓ(z).

Since Γ(1) = 1, the recurrence shows that for any positive integer n we have Γ(n + 1) = n! [72].
The beta function is defined by

B(z, w) = ∫_0^1 τ^{z−1} (1 − τ)^{w−1} dτ,  Re(z) > 0, Re(w) > 0,

which is Euler's integral of the first kind. By using the Laplace transform, the beta function can be written in terms of the gamma function:

B(z, w) = Γ(z)Γ(w) / Γ(z + w),  Re(z) > 0, Re(w) > 0.   (1.2.5)
The Mittag-Leffler function also plays a very important role in the research of fractional calculus. The classical Mittag-Leffler function of one parameter is defined by [51]

E_α(z) := Σ_{k=0}^∞ z^k / Γ(αk + 1),  z ∈ ℂ, Re(α) > 0.   (1.2.6)

In particular, when α = 1 and α = 2, we have E_1(z) = e^z and E_2(z) = cosh(√z). The Mittag-Leffler type function with two parameters α, β is defined by the series expansion [76]

E_{α,β}(z) := Σ_{k=0}^∞ z^k / Γ(αk + β),  α > 0, β > 0.
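Since the Mittag-Leffler function appears repeatedly in what follows, a minimal sketch of evaluating it by truncating the series above may be helpful; the function name and truncation length are our own choices here, and the plain series is adequate only for moderate |z|.

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, beta=1.0, kmax=100):
    """Truncated series for E_{alpha,beta}(z); adequate for moderate |z| only."""
    k = np.arange(kmax)
    return np.sum(z**k / gamma(alpha*k + beta))

# sanity checks against the special cases quoted above
print(mittag_leffler(1.0, 1.0), np.exp(1.0))             # E_1(z) = e^z
print(mittag_leffler(2.0, 2.0), np.cosh(np.sqrt(2.0)))   # E_2(z) = cosh(sqrt(z))
```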
2.1 Introduction

Fractional differential equations provide an excellent mathematical tool for the description of memory and hereditary properties of various materials and processes [10]. These operators are non-local, which is their most significant advantage in applications. The standard derivative of a function depends on the values of the function only near the point in question, while the fractional derivative encapsulates information about the function's behaviour from the earliest point in time up to the present.

Several analytical methods have been proposed to solve FDEs, for example the Laplace transform, Mellin transform, Fourier transform, model synthesis, eigenvector expansion, etc. Most of these methods are applicable only to linear FDEs and cannot be applied to non-linear FDEs.
2.2 Definitions

In this section we will introduce some of the fundamental definitions of fractional derivatives and integrals, such as the Riemann-Liouville integral, the Riemann-Liouville fractional derivative, the Caputo derivative and the Hadamard finite-part integral. We will also discuss some theorems and facts related to fractional calculus that we will apply in our research.
For n = 0 we set J_a^0 := I, the identity operator; in this case the operator is quite convenient for further manipulations. In fact,

lim_{n→0} J_a^n f(t) = lim_{n→0} (1/Γ(n)) ∫_a^t (t − τ)^{n−1} f(τ) dτ
 = lim_{n→0} (1/Γ(n)) ∫_a^t f(τ) d(−(t − τ)^n / n)
 = lim_{n→0} (1/Γ(n + 1)) [ f(a)(t − a)^n + ∫_a^t f′(τ)(t − τ)^n dτ ]
 = 1 · [ f(a) · 1 + ∫_a^t f′(τ) · 1 dτ ]
 = f(a) + f(t) − f(a) = f(t).
Example 1. Suppose f(t) = t²; find the value of ^R_0D_t^{1/2} f(t).

Solution: Here p = 1/2 lies in the interval 0 < p < 1, so n = 1. Using (2.2.4) gives

^R_0D_t^{1/2} f(t) = D^1 [ ^R_0D_t^{−1/2} f(t) ] = (d/dt) (1/Γ(1/2)) ∫_0^t (t − τ)^{−1/2} τ² dτ.   (2.2.5)
Suppose n − 1 < p < n, p > 0. The Caputo fractional derivative is defined as [76]

^C_0D_t^p f(t) = ^R_0D_t^{p−n} [D^n f(t)] = (1/Γ(n − p)) ∫_0^t (t − τ)^{n−p−1} [D^n f(τ)] dτ.   (2.2.6)
Example 2. Suppose f(t) = t²; find the value of ^C_0D_t^{1/2} f(t).

Solution: Here p = 1/2 lies in the interval 0 < p < 1, so n = 1. Using (2.2.6) gives

^C_0D_t^{1/2} f(t) = ^R_0D_t^{−1/2} [D^1 f(t)] = (1/Γ(1/2)) ∫_0^t (t − τ)^{−1/2} (d/dτ) f(τ) dτ.   (2.2.7)

Carrying out the integration gives ^C_0D_t^{1/2} t² = (8/(3√π)) t^{3/2} = (Γ(3)/Γ(5/2)) t^{3/2}.
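As a quick numerical check of Example 2 (a sketch of our own, not part of the thesis), the integral in (2.2.7) can be evaluated with an algebraic-weight quadrature and compared against the closed form above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

t = 0.8
# integrate f'(tau) = 2*tau against the weight (t - tau)^(-1/2) on [0, t]
val, _ = quad(lambda tau: 2*tau, 0, t, weight='alg', wvar=(0, -0.5))
approx = val / gamma(0.5)
exact = gamma(3)/gamma(2.5) * t**1.5      # = 8 t^{3/2} / (3 sqrt(pi))
print(approx, exact)                      # both ~ 1.0766
```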
Remark 3. Suppose p > 0 and n − 1 < p < n; then the relation between the Riemann-Liouville and Caputo fractional derivatives can be expressed by the theorem [25] below. Similarly, we can prove the case for n − 1 < p < n, n > 1, i.e.,

^R_0D_t^p f(t) = ^C_0D_t^p f(t) + Σ_{k=0}^{n−1} (f^{(k)}(0) / Γ(k − p + 1)) t^{k−p}.   (2.2.9)
The Hadamard finite-part integral is one of the most important mathematical tools in fractional derivatives, integral equations and partial differential equations. Let ℕ denote the set of all natural numbers; then, for p ∉ ℕ, on a general interval [a, b] the Hadamard finite-part integral is defined in [24] as follows:

⨍_a^b (x − a)^{−p} f(x) dx := Σ_{k=0}^{⌊p⌋−1} f^{(k)}(a)(b − a)^{k+1−p} / ((k + 1 − p) k!) + ∫_a^b (x − a)^{−p} R_{⌊p⌋−1}(x, a) dx,   (2.2.10)

where

R_μ(x, a) := (1/μ!) ∫_a^x (x − y)^μ f^{(μ+1)}(y) dy,   (2.2.11)

⨍ denotes the Hadamard finite-part integral, and ⌊p⌋ denotes the largest integer not exceeding p, p ∉ ℕ.

The Hadamard finite-part integral is the mathematical tool by which a boundary value problem for a partial differential equation with integer-order singularities can be reformulated, and it also accommodates singularities of non-integer order.
In particular, from [24] we can see that the Riemann-Liouville fractional derivative ^R_aD_x^p f of order p > 0, p ∉ ℕ, of the function f may be expressed as a finite-part integral according to

^R_aD_x^p f(x) = (1/Γ(−p)) ⨍_a^x (x − y)^{−p−1} f(y) dy.   (2.2.12)
Grünwald and Letnikov independently developed another non-integer order derivative at nearly the same time as Riemann and Liouville developed the Riemann-Liouville fractional derivative. Later on, many other authors used this Grünwald-Letnikov fractional derivative to construct numerical methods for fractional differential equations.

The Grünwald-Letnikov fractional derivative can be expressed as follows [51]. Let α ∈ ℝ⁺. The operator ^{GL}D_a^α is defined by

^{GL}D_a^α f(t) = lim_{h→0} (Δ_h^α f)(t) / h^α = lim_{h→0, mh=t−a} h^{−α} Σ_{k=0}^m (−1)^k (α choose k) f(t − kh),   (2.2.13)
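A minimal sketch of this formula (our own illustration, with hypothetical function names) uses the standard coefficient recurrence w_0 = 1, w_k = w_{k−1}(1 − (α + 1)/k) for (−1)^k (α choose k):

```python
import numpy as np
from math import gamma

def gl_derivative(f, t, alpha, n=1000):
    """Grünwald-Letnikov approximation of D^alpha f at t, with a = 0."""
    h = t/n
    w = np.ones(n + 1)
    for k in range(1, n + 1):
        w[k] = w[k-1]*(1 - (alpha + 1)/k)     # (-1)^k binom(alpha, k)
    return np.dot(w, f(t - h*np.arange(n + 1))) / h**alpha

# check against D^alpha t^2 = Gamma(3)/Gamma(3 - alpha) t^{2-alpha} at t = 1
alpha = 0.5
print(gl_derivative(lambda s: s**2, 1.0, alpha), gamma(3)/gamma(3 - alpha))
```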
finite-part integral in the equivalent form of the considered equations and defined a numerical method with order of convergence O(h^{2−α}), 0 < α < 1; see also [30]. Here we aim to use a second-degree compound quadrature formula to approximate the Hadamard finite-part integral and obtain a method of higher convergence order. In [29], Kai Diethelm, Neville J. Ford and Alan D. Freed introduced an Adams-type predictor-corrector method for solving both linear and nonlinear fractional differential equations. In their numerical algorithm the authors converted the considered equations into a Volterra integral equation, approximated the integral by using a piecewise linear interpolation polynomial, and proved that the order of convergence of the numerical method is O(h²) for 1 < α < 2 and O(h^{1+α}) for 0 < α < 1 if ^C_0D_t^α y(t) ∈ C²[0, T]. We will consider the equation

^C_0D_x^q y(x) = f(x, y(x)),   (2.4.1)

subject to the initial conditions

y^{(k)}(0) = y_0^{(k)},  k = 0, 1, . . . , m − 1,   (2.4.2)

where ^C_0D_x^q y(x) represents the Caputo fractional derivative of order q > 0, with m − 1 < q < m,

^C_0D_x^q y(x) := (1/Γ(m − q)) ∫_0^x (x − u)^{m−1−q} y^{(m)}(u) du.   (2.4.3)
The existence and the uniqueness of the solution are described by Diethelm and Ford [27] in the following two theorems, which are very similar to the corresponding classical theorems known in the case of first-order equations.

Theorem 2.4.1. (Existence) [27] Assume that D := [0, X*] × [y_0^0 − α, y_0^0 + α] with some X* > 0 and some α > 0, and let the function f : D → ℝ be continuous. Furthermore, define X := min{X*, (αΓ(q + 1)/‖f‖_∞)^{1/q}}. Then there exists a function y : [0, X] → ℝ solving the initial value problem (2.4.1)-(2.4.2).

Theorem 2.4.2. (Uniqueness) [27] Assume that D := [0, X*] × [y_0^0 − α, y_0^0 + α] with some X* > 0 and some α > 0. Furthermore, let the function f : D → ℝ be bounded on D and fulfil a Lipschitz condition with respect to the second variable, i.e.,

|f(x, y) − f(x, ỹ)| ≤ L |y − ỹ|

for some constant L > 0 independent of x, y and ỹ. Then there exists at most one function y : [0, X] → ℝ solving the initial value problem (2.4.1)-(2.4.2).
To prove the existence and uniqueness theorems we need the following results.

Lemma 2.4.3. [27] If the function f is continuous, then the initial value problem (2.4.1)-(2.4.2) is equivalent to the non-linear Volterra integral equation of the second kind,

y(x) = Σ_{k=0}^{m−1} (x^k / k!) y^{(k)}(0) + (1/Γ(q)) ∫_0^x (x − z)^{q−1} f(z, y(z)) dz,   (2.4.4)

with m − 1 < q ≤ m. In other words, every solution of the Volterra equation (2.4.4) is also a solution of our initial value problem (2.4.1)-(2.4.2), and vice versa.
Theorem 2.4.4. [27] Let U be a nonempty closed subset of a Banach space E, and let α_n ≥ 0 for every n be such that Σ_{n=0}^∞ α_n converges. Moreover, let the mapping A : U → U satisfy the inequality

‖A^n u − A^n v‖ ≤ α_n ‖u − v‖

for every n ∈ ℕ and every u, v ∈ U. Then A has a uniquely defined fixed point u*. Furthermore, for any u_0 ∈ U, the sequence (A^n u_0)_{n=1}^∞ converges to this fixed point u*.
y = Ay,

and in order to prove our desired uniqueness result, we have to show that A has a unique fixed point. Let us therefore investigate the properties of the operator A. First we note that, for 0 ≤ x_1 ≤ x_2 ≤ X,
|(Ay)(x_1) − (Ay)(x_2)|
 = (1/Γ(q)) | ∫_0^{x_1} (x_1 − z)^{q−1} f(z, y(z)) dz − ∫_0^{x_2} (x_2 − z)^{q−1} f(z, y(z)) dz |   (2.4.8)
 = (1/Γ(q)) | ∫_0^{x_1} [(x_1 − z)^{q−1} − (x_2 − z)^{q−1}] f(z, y(z)) dz + ∫_{x_1}^{x_2} (x_2 − z)^{q−1} f(z, y(z)) dz |
 ≤ (‖f‖_∞ / Γ(q)) [ ∫_0^{x_1} |(x_2 − z)^{q−1} − (x_1 − z)^{q−1}| dz + ∫_{x_1}^{x_2} (x_2 − z)^{q−1} dz ]
 = (‖f‖_∞ / Γ(q + 1)) (2(x_2 − x_1)^q + x_1^q − x_2^q),   (2.4.9)

proving that Ay is a continuous function. Moreover, for y ∈ U and x ∈ [0, X], we find

|(Ay)(x) − y_0^0| = (1/Γ(q)) | ∫_0^x (x − z)^{q−1} f(z, y(z)) dz | ≤ (1/Γ(q + 1)) ‖f‖_∞ x^q
 ≤ (1/Γ(q + 1)) ‖f‖_∞ X^q ≤ (1/Γ(q + 1)) ‖f‖_∞ · (αΓ(q + 1)/‖f‖_∞) = α.
Thus, we have shown that Ay ∈ U if y ∈ U; i.e., A maps the set U to itself. The next step is to prove that, for every n ∈ ℕ_0 and every x ∈ [0, X], we have

‖A^n y − A^n ȳ‖_{L^∞[0,x]} ≤ ((L x^q)^n / Γ(1 + qn)) ‖y − ȳ‖_{L^∞[0,x]}.   (2.4.10)

This can be seen by induction: in the case n = 0, the statement is trivially true, and the induction step n − 1 → n follows by inserting the definition of A and applying the Lipschitz condition.
Remark 1. Note that Theorem 2.4.4 not only asserts that the solution is unique; it actually gives us (at least theoretically) a means of determining this solution by a Picard-type iteration process.
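To make the remark concrete, here is a rough sketch (our own, with hypothetical names) of such a Picard-type iteration applied to the Volterra form (2.4.4) on a uniform grid, using a product rectangle rule that integrates the singular kernel exactly on each subinterval:

```python
import numpy as np
from math import gamma

def picard(f, q, y0, T, n=50, iters=20):
    """Picard iteration for y(x) = y0 + (1/Gamma(q)) int_0^x (x-z)^{q-1} f(z, y(z)) dz."""
    x = np.linspace(0.0, T, n + 1)
    y = np.full(n + 1, y0)
    for _ in range(iters):
        fy = f(x, y)
        ynew = np.empty(n + 1); ynew[0] = y0
        for i in range(1, n + 1):
            k = np.arange(i)
            # exact moments of the kernel over [x_k, x_{k+1}], f frozen at the left node
            w = ((x[i] - x[k])**q - (x[i] - x[k+1])**q) / q
            ynew[i] = y0 + np.dot(w, fy[k]) / gamma(q)
        y = ynew
    return x, y

# e.g. C_0 D^q y = -y, y(0) = 1: the iterates approach E_q(-x^q)
x, y = picard(lambda x, y: -y, 0.5, 1.0, 0.5)
```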
Remark 2. Without the Lipschitz assumption on f the solution need not be unique. To see this, look at the simple one-dimensional example

^C_0D_x^q y = y^k,  0 < q < 1,

with initial condition y(0) = 0. Consider 0 < k < 1, so that the function on the right-hand side of the differential equation is continuous but the Lipschitz condition is violated. Obviously, the zero function is a solution of the initial value problem. However, setting p_j(x) = x^j, we recall that

^C_0D_x^q p_j(x) = (Γ(j + 1) / Γ(j + 1 − q)) p_{j−q}(x).

Thus, the function y(x) = (Γ(j + 1)/Γ(j + 1 − q))^{1/(1−k)} x^j with j = q/(1 − k) also solves the problem, proving that the solution is not unique.
Proof. of Theorem 2.4.1 [27]: We begin with arguments similar to those of the previous proof. In particular, we use the same operator A defined in (2.4.7) and recall that it maps the nonempty, convex, and closed set U = {y ∈ C[0, X] : ‖y − y_0^0‖_∞ ≤ α} to itself.

We shall now prove that A is a continuous operator. A stronger result, (2.4.10), has been derived above, but in that derivation we used the Lipschitz property of f, which we do not assume to hold here. Therefore, we proceed differently and note that, since f is continuous on the compact set D, it is uniformly continuous there. Thus, given an arbitrary ε > 0, we can find δ > 0 such that

|f(x, y) − f(x, z)| < ε Γ(q + 1)/X^q whenever |y − z| < δ.   (2.4.11)

Now let y, ȳ ∈ U be such that ‖y − ȳ‖ < δ. Then, in view of (2.4.11),

|f(x, y(x)) − f(x, ȳ(x))| < ε Γ(q + 1)/X^q,   (2.4.12)

and the remainder of the proof proceeds by considering the image set

A(U) := {Ay : y ∈ U}.
• Fractional calculus in electrochemistry and tracer fluid flows: The fractional advection-dispersion equation (FADE) is used in groundwater hydrology to model the transport of passive tracers carried by fluid flow in a porous medium. Dispersion (or spreading) of tracers depends strongly on the scale of observation. In general, there are three different mechanisms of dispersion [17]: molecular diffusion, variations in the permeability field (microdispersion), and variations of the fluid velocity on the macroscopic scale (macrodispersion).

• Continuous time random walk (CTRW) model: The CTRW models impose a random waiting time between particle jumps [67], and the non-local CTRW model is a good phenomenological description of the tick-by-tick dynamics which can take into account the pathological time evolution of financial markets. This non-local CTRW model is very much related to fractional calculus. For details see [63, 68].

• Fractional order dynamical systems in control theory [17]: This is the generalization of the classical PID-controller, the concept of the PI^λD^μ-controller, involving a fractional-order integrator and a fractional-order differentiator [76], which has been found to provide more efficient control of fractional order dynamic systems.
Chapter 3

3.1 Introduction

We consider numerical methods for solving the fractional differential equation

^C_0D_t^α y(t) = f(t, y(t)),  0 < t < T,   (3.1.1)

y^{(k)}(0) = y_0^{(k)},  k = 0, 1, . . . , ⌈α⌉ − 1,   (3.1.2)

where the y_0^{(k)} may be arbitrary real numbers and α > 0. Here ^C_0D_t^α denotes the Caputo fractional differential operator of order α.
Diethelm [26] expressed the fractional derivative using the Hadamard finite-part integral, approximated the integral by using a quadrature formula, and obtained an implicit numerical algorithm for solving a linear fractional differential equation. Diethelm and Luchko [32] used the observation that a fractional differential equation has an exact solution which can be expressed as a Mittag-Leffler type function; they then used convolution quadrature and a discretised operational calculus to produce an approximation to this Mittag-Leffler function. Blank [5] applied a collocation method to approximate the fractional differential equation. Podlubny [76] used the Grünwald-Letnikov method to approximate the fractional derivative, defined an implicit finite difference method for solving (3.1.1)-(3.1.2), and proved that the order of convergence is O(h), where h is the stepsize. Gorenflo [47] introduced a second order, O(h²), difference method for solving (3.1.1)-(3.1.2), but the conditions to achieve the desired accuracy are restrictive. In [28], the authors converted the equations (3.1.1)-(3.1.2) into a Volterra integral equation, approximated the integral by using a piecewise linear interpolation polynomial, and introduced a fractional Adams-type predictor-corrector method for solving (3.1.1)-(3.1.2), proving that the order of convergence of the numerical method is min{2, 1 + α} for 0 < α ≤ 2 if ^C_0D_t^α y ∈ C²[0, T]. Deng [18] modified the method in [28] and introduced a new predictor-corrector method for solving (3.1.1)-(3.1.2); the convergence order is proved to be min{2, 1 + 2α} for α ∈ (0, 1]. In [95], the authors introduced a so-called Jacobi-predictor-corrector approach to solve (3.1.1)-(3.1.2), which is based on polynomial interpolation and the Gauss-Lobatto quadrature with respect to some Jacobi weight function; the computational cost is O(N), N = 1/h, and any desired convergence order can be obtained. In [9], a higher order numerical method for solving (3.1.1)-(3.1.2) is obtained, where a quadratic interpolation polynomial was used to approximate the integral. Ford, Morgado and Rebelo recently (see [41]) used a non-polynomial collocation method to achieve good convergence properties without assuming any smoothness of the solution. There are also several works related to the fixed memory principle and the nested memory concept for solving (3.1.1)-(3.1.2); see, e.g., [44, 29, 18, 19, 22].
In [26], Diethelm considered the following linear fractional differential equation, with 0 < α < 1,

^C_0D_t^α y(t) = βy(t) + f(t),  0 ≤ t ≤ 1,   (3.1.3)
y(0) = y_0,   (3.1.4)

where β < 0 and f is a given function on the interval [0, 1]. Diethelm [26] used a first-degree compound quadrature formula to approximate the Hadamard finite-part integral in the equivalent form of (3.1.3)-(3.1.4), defined a numerical method for solving (3.1.3)-(3.1.4), and proved that the order of convergence of the numerical method is O(h^{2−α}), 0 < α < 1. Here we approximate the Hadamard finite-part integral by using a second-degree compound quadrature formula and obtain an asymptotic expansion of the error for solving (3.1.3)-(3.1.4), which implies that the order of convergence of the numerical method is O(h^{3−α}), 0 < α < 1. Moreover, a high order finite difference method (O(h^{3−α}), 0 < α < 2) for approximating the Riemann-Liouville fractional derivative is given, which may be applied to construct high order numerical methods for solving time-space-fractional partial differential equations.
We consider the linear fractional differential equation

^C_0D_t^α y(t) = βy(t) + f(t),  0 ≤ t ≤ 1,   (3.2.1)
y(0) = y_0,   (3.2.2)

where α is the order of the derivative, f is a given function on the interval [0, 1], β ≤ 0, and y is the unknown function. From the definition of the Riemann-Liouville fractional derivative we have the following lemma.
Lemma 3.2.1. [35] The Riemann-Liouville derivative (3.2.4) can be written as the Hadamard finite-part integral

^R_0D_t^α y(t) = (1/Γ(−α)) ⨍_0^t (t − τ)^{−1−α} y(τ) dτ,

where 0 < α ≤ 1 and ⨍ denotes the Hadamard finite-part integral.
Lemma 3.2.2. [26] Assume that 0 = t_0 < t_1 < t_2 < · · · < t_k < · · · < t_n = 1 is a partition of the interval [0, 1] and 0 < α < 1. Then, at t = t_j,

^R_0D_t^α y(t_j) = h^{−α} Σ_{k=0}^j ω_{kj} y(t_j − t_k) + (t_j^{−α}/Γ(−α)) R_j,  j = 1, 2, 3, . . . , n,

where the ω_{kj} are called the weights and R_j is the remainder term; here ω is the new variable (y(τ) = y(t_j − t_j ω)) introduced in the proof, h is the time stepsize, and the weights ω_{kj} satisfy

Γ(2 − α) ω_{kj} =
 1,  k = 0,
 −2k^{1−α} + (k − 1)^{1−α} + (k + 1)^{1−α},  k = 1, 2, . . . , j − 1,
 −(α − 1)k^{−α} + (k − 1)^{1−α} − k^{1−α},  k = j.

The proof of Lemma 3.2.2 is straightforward and requires a piecewise linear Lagrange interpolation polynomial.
Proof. We have

^R_0D_t^α y(t_j) = (1/Γ(−α)) ⨍_0^{t_j} y(τ)/(t_j − τ)^{α+1} dτ.

Substituting t_j − τ = t_j ω gives

^R_0D_t^α y(t_j) = (t_j^{−α}/Γ(−α)) ⨍_0^1 y(t_j − t_j ω)/ω^{α+1} dω,

where g_1(ω) below is the piecewise linear interpolation polynomial of g(ω) := y(t_j − t_j ω) with equispaced nodes, and R_j is the remainder term. Note that, on [(k − 1)/j, k/j],

g_1(ω) = ((ω − k/j)/((k − 1)/j − k/j)) g((k − 1)/j) + ((ω − (k − 1)/j)/(k/j − (k − 1)/j)) g(k/j).

Thus,

⨍_0^1 g(ω) ω^{−(1+α)} dω ≈ ⨍_0^1 g_1(ω) ω^{−(1+α)} dω = Q_j(g).   (3.2.5)
Applying the Lagrange interpolation polynomial on each subinterval on the right-hand side of (3.2.5) gives

⨍_0^{1/j} g_1(ω) ω^{−(1+α)} dω = ⨍_0^{1/j} [ ((ω − 1/j)/(0 − 1/j)) g(0) + ((ω − 0)/(1/j − 0)) g(1/j) ] ω^{−(α+1)} dω,

∫_{1/j}^{2/j} g_1(ω) ω^{−(1+α)} dω = ∫_{1/j}^{2/j} [ ((ω − 2/j)/(1/j − 2/j)) g(1/j) + ((ω − 1/j)/(2/j − 1/j)) g(2/j) ] ω^{−(α+1)} dω,

. . .

∫_{(j−1)/j}^{j/j} g_1(ω) ω^{−(1+α)} dω = ∫_{(j−1)/j}^{j/j} [ ((ω − j/j)/((j − 1)/j − j/j)) g((j − 1)/j) + ((ω − (j − 1)/j)/(j/j − (j − 1)/j)) g(j/j) ] ω^{−(α+1)} dω.
Therefore, evaluating each of these integrals exactly and collecting the coefficients of g(k/j), we obtain

Q_j(g) = Σ_{k=0}^j α_{kj} g(k/j) = Σ_{k=0}^j α_{kj} y(t_j − t_k),

where, for k = 0,

α_{0j} = −1/(α(1 − α)j^{−α}),

for k = j,

α_{jj} = ((α − 1)j^{−α} − (j − 1)^{1−α} + j^{1−α}) / (α(1 − α)j^{−α}),

and, for k = 1, 2, 3, . . . , j − 1,

α_{kj} = (2k^{1−α} − (k − 1)^{1−α} − (k + 1)^{1−α}) / (α(1 − α)j^{−α}).
Thus, we get

^R_0D_t^α y(t_j) = (t_j^{−α}/Γ(−α)) ⨍_0^1 g(ω) ω^{−(α+1)} dω = (t_j^{−α}/Γ(−α)) [ Σ_{k=0}^j α_{kj} y(t_j − t_k) + R_j(g) ]
 = h^{−α} Σ_{k=0}^j ω_{kj} y(t_j − t_k) + (t_j^{−α}/Γ(−α)) R_j(g),

where

α(α − 1)j^{−α} α_{kj} = Γ(2 − α) ω_{kj} =
 1,  k = 0,
 −2k^{1−α} + (k − 1)^{1−α} + (k + 1)^{1−α},  k = 1, 2, . . . , j − 1,
 −(α − 1)k^{−α} + (k − 1)^{1−α} − k^{1−α},  k = j.
We remark that Lemma 3.2.2 for 0 < α < 1 can be extended to the case 1 < α < 2 to yield the following weights:

α(1 − α)j^{−α} α_{kj} =
 −1,  k = 0,
 α,  k = 1, j = 1,
 2 − 2^{1−α},  k = 1, j > 1,
 2k^{1−α} − (k − 1)^{1−α} − (k + 1)^{1−α},  k = 2, 3, . . . , j − 1, j ≥ 3,
 (α − 1)k^{−α} − (k − 1)^{1−α} + k^{1−α},  k = j, j ≥ 2.

These weights are obtained by following the same process as in Lemmas 3.2.1 and 3.2.2; the only difference lies in the Hadamard finite-part integral.
Theorem 3.2.3. [26] Let 0 < α < 1. Assume y(t_j) and y_j are the exact and approximate solutions of (3.2.7) and (3.2.8), respectively. Also, assume that the function involved is sufficiently smooth. Then there exists a constant C = C(α, g, β) such that

|y(t_j) − y_j| ≤ C h^{2−α},  j = 1, 2, . . . , n.
Lemma 3.2.4. Let 0 < α < 1 be the order of the derivative and let the sequence (d_j) satisfy

d_1 = 1,
d_j = 1 + α(1 − α)j^{−α} Σ_{k=1}^j α_{kj} d_{j−k}.

Then we have

1 ≤ d_j ≤ C_α j^α,  j = 1, 2, . . . , n,

where the positive constant C_α = 1/[(−α)(−α + 1)Γ(−α)Γ(α + 1)].
Let e_j = y(t_j) − y_j. Note that

α_{0j} = 1/(−α(1 − α)j^{−α}) < 0,  Γ(−α) < 0,  β < 0,  α_{kj} > 0 for k ≥ 1.

Then we have

|e_j| ≤ (1/(−α_{0j})) ( Σ_{k=1}^j α_{kj} |e_{j−k}| + |R_j| )
 ≤ α(1 − α)j^{−α} ( Σ_{k=1}^j α_{kj} |e_{j−k}| + j^{α−2} t_j² ‖y″‖_∞ )
 ≤ α(1 − α)h² ‖y″‖_∞ + α(1 − α)j^{−α} Σ_{k=1}^j α_{kj} |e_{j−k}|.

Denoting a = α(1 − α)h² ‖y″‖_∞ and assuming for simplicity that e_0 = 0, we get

|e_j| ≤ a + α(1 − α)j^{−α} Σ_{k=1}^j α_{kj} |e_{j−k}|,  j = 1, 2, . . . , n,

and hence

|e_j| ≤ a d_j,  j = 1, 2, . . . , n,

where

d_1 = 1,
d_j = 1 + α(1 − α)j^{−α} Σ_{k=1}^j α_{kj} d_{j−k}.
Next we will show that y_n − y(t_n) has an asymptotic expansion. We have the following theorem.

Theorem 3.2.5. [33] Let t_n = 1 be fixed. Let y_n and y(t_n) be the solutions of (3.2.8) and (3.2.7), respectively. Then there exist coefficients C_μ(α) and C*_μ(α) such that the sequence {y_n} possesses an asymptotic expansion of the form

y(t_n) = y_n + Σ_{μ=2}^{M_1} C_μ(α) n^{α−μ} + Σ_{μ=1}^{M_2} C*_μ(α) n^{−2μ} + O(n^{−M_3})  for n → ∞,

where the coefficients depend on g; here g_1(t) is the linear interpolation polynomial of g(t) on [0, 1].
Proof of Theorem 3.2.5. To illustrate the idea of the proof, we assume, e.g., that y ∈ C^{m+2}[0, 1], m = 4. Then we shall prove that there exist C̃_2, C̃*_1, C̃_3, C̃_4, C̃_5, C̃*_2 such that

y(t_n) − y_n = C̃_2 n^{α−2} + C̃*_1 n^{−2} + C̃_3 n^{α−3} + C̃_4 n^{α−4} + C̃_5 n^{α−5} + C̃*_2 n^{−4} + O(n^{α−5}).   (3.2.9)

To prove (3.2.9), we consider y_j − y(t_j), j → ∞, for fixed t_j such that t_j/t_n = j/n = c_0, a constant; that is, n depends on j. Here t_n = 1, t_j = c_0. Write

ε_j = y(t_j) − y_j.   (3.2.10)

By Lemma 3.2.6, we see that, for g ∈ C^{m+2}[0, 1], m ≥ 2 (here m = 4), we have

R_j(g) = ⨍_0^1 t^{−1−α} g(t) dt − ⨍_0^1 t^{−1−α} g_1(t) dt = Σ_{k=0}^{j−1} ∫_{k/j}^{(k+1)/j} t^{−1−α} [g(t) − g_1(t)] dt,

and

R_j(g) = d̃_2 n^{α−2} + d̃*_1 n^{−2} + d̃_3 n^{α−3} + d̃_4 n^{α−4} + d̃_5 n^{α−5} + d̃*_2 n^{−4} + O(n^{α−5}).   (3.2.12)

Hence

ε_j = y(t_j) − y_j = C̃_2 n^{α−2} + C̃*_1 n^{−2} + C̃_3 n^{α−3} + C̃_4 n^{α−4} + C̃_5 n^{α−5} + . . . ,   (3.2.13)

where

C̃_ℓ = d̃_ℓ / (−c_0^α Γ(−α)β − 1/α),  ℓ = 2, 3, 4, 5,
C̃*_ℓ = d̃*_ℓ / (−c_0^α Γ(−α)β − 1/α),  ℓ = 1, 2.

This shows that ε_1 possesses an asymptotic expansion with respect to powers of n, and we can check, indeed, by comparing the coefficients of powers of n, that

ε_1 = C̃_2 n^{α−2} + C̃*_1 n^{−2} + C̃_3 n^{α−3} + C̃_4 n^{α−4} + C̃_5 n^{α−5} + C̃*_2 n^{−4} + O(n^{α−5}).

The same argument gives

ε_ℓ = C̃_2 n^{α−2} + C̃*_1 n^{−2} + C̃_3 n^{α−3} + C̃_4 n^{α−4} + C̃_5 n^{α−5} + C̃*_2 n^{−4} + O(n^{α−5}),  ℓ = 0, 1, 2, . . . , j − 1,

and hence ε_j possesses an asymptotic expansion with respect to powers of n; comparing the coefficients of powers of n, we obtain

ε_j = C̃_2 n^{α−2} + C̃*_1 n^{−2} + C̃_3 n^{α−3} + C̃_4 n^{α−4} + C̃_5 n^{α−5} + C̃*_2 n^{−4} + O(n^{α−5}),  j → ∞.
^R_0D_t^α [y(t) − y_0] = βy(t) + f(t),  0 ≤ t ≤ 1,   (3.3.1)

where ^R_0D_t^α y(t) denotes the Riemann-Liouville fractional derivative, which (by Lemma 3.2.1) can be written as the Hadamard finite-part integral (1/Γ(−α)) ⨍_0^t (t − τ)^{−1−α} y(τ) dτ. At the node t_{2j} = 2j/(2M), the equation (3.3.1) satisfies

^R_0D_t^α [y(t_{2j}) − y_0] = βy(t_{2j}) + f(t_{2j}),  j = 1, 2, . . . , M,   (3.3.4)

and at the node t_{2j+1} = (2j + 1)/(2M), the equation (3.3.1) satisfies

^R_0D_t^α [y(t_{2j+1}) − y_0] = βy(t_{2j+1}) + f(t_{2j+1}),  j = 0, 1, 2, . . . , M − 1.   (3.3.5)

We have

^R_0D_t^α y(t_{2j}) = (1/Γ(−α)) ⨍_0^{t_{2j}} (t_{2j} − τ)^{−1−α} y(τ) dτ = (t_{2j}^{−α}/Γ(−α)) ⨍_0^1 w^{−1−α} y(t_{2j} − t_{2j} w) dw.   (3.3.6)
For every j, we replace g(w) = y(t_{2j} − t_{2j} w) in the integral in (3.3.6) by a piecewise quadratic interpolation polynomial with equispaced nodes 0, 1/(2j), 2/(2j), . . . , 2j/(2j). We then have

⨍_0^1 w^{−1−α} g(w) dw = ⨍_0^1 w^{−1−α} g_2(w) dw + R_{2j}(g),   (3.3.7)
where

(−α)(−α + 1)(−α + 2)(2j)^{−α} α_{l,2j} =
 2^{−α}(α + 2),  l = 0,
 (−α)2^{2−α},  l = 1,
 (−α)(−2^{−α}α) + ½F_0(2),  l = 2,
 −F_1(k),  l = 2k − 1, k = 2, 3, . . . , j,
 ½(F_2(k) + F_0(k + 1)),  l = 2k, k = 2, 3, . . . , j − 1,
 ½F_2(j),  l = 2j,

with

F_0(k) = (2k − 1)(2k)[(2k)^{−α} − (2k − 2)^{−α}](−α + 1)(−α + 2)
 − [(2k − 1) + 2k][(2k)^{−α+1} − (2k − 2)^{−α+1}](−α)(−α + 2)
 + [(2k)^{−α+2} − (2k − 2)^{−α+2}](−α)(−α + 1),

F_1(k) = (2k − 2)(2k)[(2k)^{−α} − (2k − 2)^{−α}](−α + 1)(−α + 2)
 − [(2k − 2) + 2k][(2k)^{−α+1} − (2k − 2)^{−α+1}](−α)(−α + 2)
 + [(2k)^{−α+2} − (2k − 2)^{−α+2}](−α)(−α + 1),

and

F_2(k) = (2k − 2)(2k − 1)[(2k)^{−α} − (2k − 2)^{−α}](−α + 1)(−α + 2)
 − [(2k − 2) + (2k − 1)][(2k)^{−α+1} − (2k − 2)^{−α+1}](−α)(−α + 2)
 + [(2k)^{−α+2} − (2k − 2)^{−α+2}](−α)(−α + 1).
Proof. For fixed 2j, let 0 < 1/(2j) < 2/(2j) < · · · < 2j/(2j) = 1 be a partition of [0, 1]. Denote w_l = l/(2j), l = 0, 1, 2, . . . , 2j. We then have, for k = 1, 2, . . . , j, the expression (3.3.10) for the local quadratic interpolation on [w_{2k−2}, w_{2k}].

For the odd nodes, g_2(w) is the piecewise quadratic interpolation polynomial of g(w) with the nodes 0, 1/(2j + 1), 2/(2j + 1), . . . , 2j/(2j + 1), and R_{2j+1}(g) is the remainder term. Similarly, we can prove the following lemma.
Note that

α_{0,2j} = 2^{−α}(α + 2) / ((−α)(−α + 1)(−α + 2)(2j)^{−α}) < 0,   (3.3.14)

and α_{k,2j} > 0 for k > 0, k ≠ 2. For k = 2, there exists α_1 ∈ (0, 1) such that α_{2,2j} ≥ 0 for 0 < α < α_1 and α_{2,2j} ≤ 0 for α_1 < α < 1.
and, with j = 1, 2, . . . , M − 1,

y(t_{2j+1}) = (1/(α_{0,2j+1} − t_{2j+1}^α Γ(−α)β)) [ t_{2j+1}^α Γ(−α) f(t_{2j+1}) − Σ_{k=1}^{2j} α_{k,2j+1} y(t_{2j+1−k}) + y_0 Σ_{k=0}^{2j} α_{k,2j+1} − R_{2j+1}(g) − t_{2j+1}^α ∫_0^{t_1} (t_{2j+1} − τ)^{−1−α} y(τ) dτ ].   (3.3.16)

Here α_{0,l} − t_l^α Γ(−α)β < 0, l = 2j, 2j + 1, which follows from (3.3.14) and Γ(−α) < 0, β < 0, and α_{0,2j+1} = α_{0,2j}.

Let y_{2j} ≈ y(t_{2j}) and y_{2j+1} ≈ y(t_{2j+1}) denote the approximations of the exact solutions y(t_{2j}) and y(t_{2j+1}), respectively. Assume that the starting values y_0 and y_1 are given. We define the following numerical methods for solving (3.3.1): with j = 1, 2, . . . , M,

y_{2j} = (1/(α_{0,2j} − t_{2j}^α Γ(−α)β)) [ t_{2j}^α Γ(−α) f(t_{2j}) − Σ_{k=1}^{2j} α_{k,2j} y_{2j−k} + y_0 Σ_{k=0}^{2j} α_{k,2j} ],   (3.3.17)

and, with j = 1, 2, . . . , M − 1,

y_{2j+1} = (1/(α_{0,2j+1} − t_{2j+1}^α Γ(−α)β)) [ t_{2j+1}^α Γ(−α) f(t_{2j+1}) − Σ_{k=1}^{2j} α_{k,2j+1} y_{2j+1−k} + y_0 Σ_{k=0}^{2j} α_{k,2j+1} − t_{2j+1}^α ∫_0^{t_1} (t_{2j+1} − τ)^{−1−α} y(τ) dτ ].   (3.3.18)
Remark 5. In practice, we need to approximate ∫_0^{t_1} (t_{2j+1} − τ)^{−1−α} y(τ) dτ. One way is to divide the interval [0, t_1] into small subintervals 0 ≤ t_1^1 ≤ t_1^2 ≤ · · · ≤ t_1^N = t_1 and apply a quadrature rule on each subinterval.
Theorem 3.3.3. Let 0 < α < 1 and let M be a positive integer. Let 0 = t_0 < t_1 < t_2 < · · · < t_{2j} < t_{2j+1} < · · · < t_{2M} = 1 be a partition of [0, 1] and h the stepsize. Let y(t_{2j}), y(t_{2j+1}), y_{2j} and y_{2j+1} be the exact solutions and the approximate solutions of (3.3.15)-(3.3.18), respectively. Assume that the function y ∈ C^{m+2}[0, 1], m ≥ 3. Further assume that we obtain the exact starting values y_0 = y(0) and y_1 = y(t_1). Then there exist coefficients c_μ = c_μ(α) and c*_μ = c*_μ(α) such that the sequence {y_l}, l = 0, 1, 2, . . . , 2M, possesses an asymptotic expansion of the form

y(t_{2M}) − y_{2M} = Σ_{μ=3}^{m+1} c_μ (2M)^{α−μ} + Σ_{μ=2}^{μ*} c*_μ (2M)^{−2μ} + o((2M)^{α−m−1}),  for M → ∞,

that is,

y(t_{2M}) − y_{2M} = Σ_{μ=3}^{m+1} c_μ h^{μ−α} + Σ_{μ=2}^{μ*} c*_μ h^{2μ} + o(h^{m+1−α}),  for h → 0,

where μ* is the integer satisfying 2μ* < m + 1 − α < 2(μ* + 1), and c_μ and c*_μ are certain coefficients that depend on y.
To prove Theorem 3.3.3, we need the following lemma on the asymptotic expansions of the remainder terms R_{2j}(g) and R_{2j+1}(g) in (3.3.7) and (3.3.12).

Lemma 3.3.4. Let 0 < α < 1 and g ∈ C^{m+2}[0, 1], m ≥ 3. Let R_{2j}(g) and R_{2j+1}(g) be the remainder terms in (3.3.7) and (3.3.12), respectively. Then we have, with l = 2, 3, . . . , 2j, 2j + 1, . . . , 2M,

R_l(g) = Σ_{μ=3}^{m+1} d_μ l^{α−μ} + Σ_{μ=2}^{μ*} d*_μ l^{−2μ} + o(l^{α−m−1}),   (3.3.19)

where μ* is the integer satisfying 2μ* < m + 1 − α < 2(μ* + 1), and d_μ and d*_μ are certain coefficients that depend on g.
Proof. We follow the proof of Theorem 1.3 in [33], where piecewise linear Lagrange interpolation polynomials are used.

We first consider the case l = 2j for j = 1, 2, . . . , M. Let 0 = w_0 < w_1 < w_2 < · · · < w_{2j} = 1, w_k = k/(2j), k = 0, 1, 2, . . . , 2j, be a partition of [0, 1], and let h_1 = 1/(2j) be the stepsize. Let g_2(w) denote the piecewise quadratic Lagrange interpolation polynomial defined by (3.3.9) on [w_{2l}, w_{2l+2}], l = 0, 1, 2, . . . , j − 1. Then we have

R_{2j}(g) = ⨍_0^1 w^{−1−α} g(w) dw − ⨍_0^1 w^{−1−α} g_2(w) dw
 = Σ_{l=0}^{j−1} ∫_{w_{2l}}^{w_{2l+2}} w^{−1−α} [g(w) − g_2(w)] dw
 = Σ_{l=0}^{j−1} ∫_0^1 (w_{2l} + 2h_1 s)^{−1−α} [ g(w_{2l} + 2h_1 s) − ½(2s − 1)(2s − 2) g(w_{2l}) + (2s)(2s − 2) g(w_{2l+1}) − ½(2s)(2s − 1) g(w_{2l+2}) ] (2h_1) ds.

By the Taylor formula, we have

g(w_{2l}) = g(w_{2l} + 2h_1 s) + g′(w_{2l} + 2h_1 s)(−2h_1 s)/1! + g″(w_{2l} + 2h_1 s)(−2h_1 s)²/2! + · · · + g^{(m)}(w_{2l} + 2h_1 s)(−2h_1 s)^m/m! + R^{(1)}_{m+1},

g(w_{2l+1}) = g(w_{2l} + 2h_1 s) + g′(w_{2l} + 2h_1 s)(h_1 − 2h_1 s)/1! + g″(w_{2l} + 2h_1 s)(h_1 − 2h_1 s)²/2! + · · · + g^{(m)}(w_{2l} + 2h_1 s)(h_1 − 2h_1 s)^m/m! + R^{(2)}_{m+1},

g(w_{2l+2}) = g(w_{2l} + 2h_1 s) + g′(w_{2l} + 2h_1 s)(2h_1 − 2h_1 s)/1! + g″(w_{2l} + 2h_1 s)(2h_1 − 2h_1 s)²/2! + · · · + g^{(m)}(w_{2l} + 2h_1 s)(2h_1 − 2h_1 s)^m/m! + R^{(3)}_{m+1},   (3.3.20)

where R^{(i)}_{m+1}, i = 1, 2, 3, denote the remainder terms. Thus we obtain

R_{2j}(g) = (2h_1) Σ_{l=0}^{j−1} ∫_0^1 (w_{2l} + 2h_1 s)^{−1−α} [ Σ_{r=0}^{m−3} h_1^{r+3} g^{(r+3)}(w_{2l} + 2h_1 s) π_r(s) ] ds
 + (2h_1) Σ_{l=0}^{j−1} ∫_0^1 (w_{2l} + 2h_1 s)^{−1−α} ε_{m+1}(s) ds = I + II,

where ε_{m+1}(s) depends on the remainder terms R^{(i)}_{m+1}, i = 1, 2, 3, and the π_r(s) are some functions of s.
For I, we have

I = Σ_{r=0}^{m−3} h_1^{r+3} ∫_0^1 [ Σ_{l=0}^{j−1} 2h_1 (w_{2l} + 2h_1 s)^{−1−α} g^{(r+3)}(w_{2l} + 2h_1 s) ] π_r(s) ds.

Applying Theorem 3.2 in [56], we have, with w̄_l = w_{2l} and h̄_1 = 2h_1,

Σ_{l=0}^{j−1} 2h_1 (w_{2l} + 2h_1 s)^{−1−α} g^{(r+3)}(w_{2l} + 2h_1 s) = Σ_{l=0}^{j−1} h̄_1 (w̄_l + h̄_1 s)^{−1−α} g^{(r+3)}(w̄_l + h̄_1 s)
 = Σ_{ν=0}^{m−r−3} a_ν(s) h_1^ν + Σ_{ν=0}^{m−r−2} a_{0,ν}(s) h_1^{ν−α} + o(h_1^{m−r−2}),

and hence

I = Σ_{r=0}^{m−3} Σ_{ν=0}^{m−r−2} [ ∫_0^1 a_{0,ν}(s) π_r(s) ds ] h_1^{3+r+ν−α} + · · · + o(h_1^{m+1})
 = Σ_{μ=3}^{m+1} d_μ (2j)^{α−μ} + Σ_{μ=2}^{μ*} d*_μ (2j)^{−2μ} + o((2j)^{−m−1}),   (3.3.21)

where μ* is the integer satisfying 2μ* < m + 1 − α < 2(μ* + 1), and d_μ and d*_μ are certain coefficients that depend on g. We remark that the expansion does not contain any odd integer powers of (2j), which follows from the argument in the proof of Theorem 1.3 in [33].

For II, we have, following the argument of the proof of Theorem 1.3 in [33],

II = Σ_{l=0}^{j−1} 2h_1 ∫_0^1 (w_{2l} + 2h_1 s)^{−1−α} ε_{m+1}(s) ds = o((2j)^{α−m−1}).
For the case l = 2j + 1 we write similarly

R_{2j+1}(g) = ⨍_0^{2j/(2j+1)} w^{−1−α} g(w) dw − ⨍_0^{2j/(2j+1)} w^{−1−α} g_2(w) dw
 = Σ_{l=0}^{j−1} ∫_{w_{2l}}^{w_{2l+2}} w^{−1−α} [g(w) − g_2(w)] dw
 = Σ_{l=0}^{j−1} ∫_0^1 (w_{2l} + 2h_1 s)^{−1−α} [ g(w_{2l} + 2h_1 s) − ½(2s − 1)(2s − 2) g(w_{2l}) + (2s)(2s − 2) g(w_{2l+1}) − ½(2s)(2s − 1) g(w_{2l+2}) ] (2h_1) ds.

Following the same argument as for the case l = 2j, we show that (3.3.19) also holds for l = 2j + 1. Together these estimates complete the proof of Lemma 3.3.4.
Proof of Theorem 3.3.3. We follow the proof of Theorem 2.1 in [33], where piecewise linear Lagrange interpolation polynomials are used to approximate the Hadamard finite-part integral.

Let t_{2M} = 1 be fixed. We will investigate the difference

e_l = y(t_l) − y_l,  for l → ∞, with t_l = lh = l/(2M) = c,

where c is a constant; that is, l = c · (2M), or M = l/(2c).

Let us first consider the case l = 2j. Subtracting (3.3.17) from (3.3.15), we have, noting t_{2j} = (2j)h = 2j/(2M) = c,

e_{2j} = (1/(α_{0,2j} − (2j/(2M))^α Γ(−α)β)) [ − Σ_{k=1}^{2j} α_{k,2j} (y(t_{2j−k}) − y_{2j−k}) − R_{2j}(g) ]
 = (1/(c^α Γ(−α)β − α_{0,2j})) [ Σ_{k=1}^{2j} α_{k,2j} e_{2j−k} + R_{2j}(g) ].   (3.3.23)
Note that g(·) = y(t_{2j} − t_{2j} ·) ∈ C^{m+2}[0, 1], m ≥ 3; by Lemma 3.3.4 we have

R_{2j}(g) = Σ_{μ=3}^{m+1} d_μ (2j)^{α−μ} + Σ_{μ=2}^{μ*} d*_μ (2j)^{−2μ} + o((2j)^{α−m−1}),  for j → ∞,   (3.3.24)

where μ* is the integer satisfying 2μ* < m + 1 − α < 2(μ* + 1), and d_μ and d*_μ are certain coefficients that depend on g. Since (2j)/(2M) = c, we can write (3.3.24) as

R_{2j}(g) = Σ_{μ=3}^{m+1} d̃_μ (2M)^{α−μ} + Σ_{μ=2}^{μ*} d̃*_μ (2M)^{−2μ} + o((2M)^{α−m−1}),  for j → ∞.   (3.3.25)

Choose

c_μ = d̃_μ / (−c^α Γ(−α)β − 1/α),  μ = 3, 4, . . . , m + 1,   (3.3.26)
c*_μ = d̃*_μ / (−c^α Γ(−α)β − 1/α),  μ = 2, . . . , μ*;   (3.3.27)

we will prove below that (3.3.22) holds for the coefficients c_μ, c*_μ defined in (3.3.26) and (3.3.27).
We shall use mathematical induction to prove (3.3.22). By assumption e_0 = 0, e_1 = 0, hence (3.3.22) holds for l = 0, 1 with the coefficients given by (3.3.26) and (3.3.27). Let us now consider the case l = 2. Noting that α_{0,l} = 2^{−α}(α + 2)(2Mc)^α / ((−α)(−α + 1)(−α + 2)) and applying Lemma 3.3.4, we have

e_2 = y(t_2) − y_2 = (1/(c^α Γ(−α)β − α_{0,2})) [ Σ_{k=1}^2 α_{k,2} e_{2−k} + R_2(g) ]
 = (1/(c^α Γ(−α)β − α_{0,2})) [ ( Σ_{μ=3}^{m+1} c_μ (2M)^{α−μ} + Σ_{μ=2}^{μ*} c*_μ (2M)^{−2μ} + o((2M)^{α−m−1}) ) ( Σ_{k=0}^2 α_{k,2} − α_{0,2} ) + R_2(g) ].   (3.3.28)

Expanding R_2(g) by Lemma 3.3.4 gives (3.3.29); this shows that e_2 possesses an asymptotic expansion with respect to the powers of 2M, and it is easy to check, by comparing the coefficients of powers of (2M), see [33], that

e_2 = Σ_{μ=3}^{m+1} c_μ (2M)^{α−μ} + Σ_{μ=2}^{μ*} c*_μ (2M)^{−2μ} + o((2M)^{α−m−1}).
Assume now that (3.3.22) holds for l = 0, 1, . . . , 2j − 1. Then, following the same argument as for (3.3.29), noting Σ_{k=0}^{2j} α_{k,2j} = −1/α and applying Lemma 3.3.4, we obtain (3.3.30). This shows that e_{2j} possesses an asymptotic expansion with respect to the powers of 2M, and it is easy to check, by comparing the coefficients of powers of (2M), see [33], that

e_{2j} = Σ_{μ=3}^{m+1} c_μ (2M)^{α−μ} + Σ_{μ=2}^{μ*} c*_μ (2M)^{−2μ} + o((2M)^{α−m−1}).

The same argument for the odd nodes gives (3.3.31), which again shows that e_{2j+1} possesses an asymptotic expansion with respect to the powers of 2M; comparing the coefficients of powers of 2M, see [33],

e_{2j+1} = Σ_{μ=3}^{m+1} c_μ (2M)^{α−μ} + Σ_{μ=2}^{μ*} c*_μ (2M)^{−2μ} + o((2M)^{α−m−1}).

Hence (3.3.22) also holds for l = 2j + 1. Together these estimates complete the proof of (3.3.22). Applying l = 2M in (3.3.22), we complete the proof of Theorem 3.3.3.
We consider the example

^C_0D_t^α y(t) = βy(t) + f(t),  t ∈ [0, 1],   (3.4.1)
y(0) = y_0,   (3.4.2)

where y_0 = 0, 0 < α < 1, β = −1 and f(t) = (t² + 2t^{2−α}/Γ(3 − α)) + (t³ + 3! t^{3−α}/Γ(4 − α)). The exact solution is y(t) = t² + t³.

The main purpose is to check the order of convergence of the numerical method with respect to the fractional order α. For various choices of α ∈ (0, 1), we computed the errors at t = 1. We choose the stepsize h = 1/(5 × 2^l), l = 1, 2, . . . , 7, i.e., we divide the interval [0, 1] into n = 1/h small intervals with nodes 0 = t_0 < t_1 < · · · < t_n = 1, and compute the error e(t_n) = y(t_n) − y_n. By Theorem 3.3.3, we have |e(t_n)| ≈ C h^{3−α}.

To observe the order of convergence we compute the error |e(t_n)| at t_n = 1 for different values of h. Denote by |e_h(t_n)| the error at t_n = 1 for the stepsize h, and let h_l = 1/(5 × 2^l) for fixed l = 1, 2, . . . , 7. We then have

|e_{h_l}(t_n)| / |e_{h_{l+1}}(t_n)| ≈ C h_l^{3−α} / (C h_{l+1}^{3−α}) = 2^{3−α},

which implies that the order of convergence satisfies 3 − α ≈ log_2( |e_{h_l}(t_n)| / |e_{h_{l+1}}(t_n)| ). In Tables 3.4.1-3.4.2, we compute the experimentally determined orders of convergence (EOC) for different values of α. The numerical results are consistent with the theoretical results.
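For readers who wish to reproduce an experiment of this kind, the following is a minimal sketch (our own, not the thesis code): it implements the simpler first-degree finite-part quadrature method of Diethelm [26], whose order is O(h^{2−α}) rather than the O(h^{3−α}) of the quadratic method analysed above, on the test problem (3.4.1)-(3.4.2), and prints the EOC.

```python
import numpy as np
from scipy.special import gamma

alpha, beta = 0.5, -1.0
f = lambda t: (t**2 + 2*t**(2-alpha)/gamma(3-alpha)) \
            + (t**3 + 6*t**(3-alpha)/gamma(4-alpha))

def solve(n):
    h = 1.0/n
    t = h*np.arange(n + 1)
    y = np.zeros(n + 1)                                  # y(0) = y_0 = 0
    for j in range(1, n + 1):
        c = 1.0/(alpha*(1 - alpha)*j**(-alpha))
        w = np.empty(j + 1)
        w[0] = -c                                        # alpha_{0j}
        k = np.arange(1, j)
        w[1:j] = c*(2*k**(1-alpha) - (k-1)**(1-alpha) - (k+1)**(1-alpha))
        w[j] = c*((alpha-1)*j**(-alpha) - (j-1)**(1-alpha) + j**(1-alpha))
        g = t[j]**(-alpha)/gamma(-alpha)                 # prefactor t_j^{-a}/Gamma(-a)
        rhs = f(t[j]) - g*(np.dot(w[1:], y[j-1::-1]) - y[0]*w.sum())
        y[j] = rhs/(g*w[0] - beta)
    return abs(y[n] - 2.0)                               # exact y(1) = 1 + 1 = 2

errs = [solve(n) for n in (40, 80, 160, 320)]
print([round(float(np.log2(errs[i]/errs[i+1])), 3) for i in range(3)])  # EOC -> 2 - alpha
```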
(Figure 3.1: log_2(|e(t)|) plotted against log_2(h).)
Let y = log_2(|e(t_n)|) and x = log_2(h). In Figure 3.1, we plot the function y = y(x) for the different values of x = log_2(h), where h = 1/(5 × 2^l), l = 1, 2, . . . , 7. To observe the order of convergence, we also plot the straight line y = (3 − α)x, where α = 0.1, 0.2, 0.4, 0.5, 0.7, 0.8. We see that the two lines are parallel, which means that the order of convergence of the numerical method is O(h^{3−α}).
(Further plots of log_2(|e(t)|) against log_2(h) for the remaining values of α.)
Chapter 4

4.1 Introduction

A predictor-corrector approximation method [29] for fractional differential equations was developed by Kai Diethelm, Neville J. Ford and Alan D. Freed. The popularity of this method is due to its suitability for both linear and nonlinear problems and to the easy implementation of a computational algorithm.
We consider numerical methods for solving the fractional differential equation

^C_0D_t^α y(t) = f(t, y(t)),  0 < t < T,   (4.1.1)

y^{(k)}(0) = y_0^{(k)},  k = 0, 1, . . . , ⌈α⌉ − 1,   (4.1.2)

where the y_0^{(k)} may be arbitrary real numbers and α > 0. Here ^C_0D_t^α denotes the Caputo fractional differential operator of order α.

The approach for solving the fractional differential equation (4.1.1)-(4.1.2) is based on the discretization of the integral in the equivalent form of (4.1.1)-(4.1.2); see [28]. It is well known that (4.1.1)-(4.1.2) is equivalent to the Volterra integral equation

y(t) = Σ_{ν=0}^{⌈α⌉−1} (t^ν/ν!) y_0^{(ν)} + (1/Γ(α)) ∫_0^t (t − u)^{α−1} f(u, y(u)) du.   (4.1.3)

In [29], the authors approximated the integral in (4.1.3) by using a piecewise linear interpolation polynomial and introduced a fractional Adams method for solving (4.1.1)-(4.1.2), and proved that the order of convergence of the numerical method is O(h²) for 1 < α < 2 and O(h^{1+α}) for 0 < α < 1 if ^C_0D_t^α y(t) ∈ C²[0, T].
Let m be a positive integer, let 0 = t_0 < t_1 < t_2 < · · · < t_{2j} < t_{2j+1} < · · · < t_{2m} = T be a partition of [0, T], and let h be the stepsize. Note that the system (4.1.1)-(4.1.2) is equivalent to (4.1.3) (the second of the initial conditions appearing only for 1 < α < 2, of course). At the node t = t_{2j+1}, j = 1, 2, . . . , m − 1, we have

y(t_{2j+1}) = y_0 + y_0^{(1)} t_{2j+1}/1! + (1/Γ(α)) ∫_0^{t_{2j+1}} (t_{2j+1} − u)^{α−1} f(u, y(u)) du
 = y_0 + y_0^{(1)} t_{2j+1}/1! + (1/Γ(α)) ∫_0^{t_1} (t_{2j+1} − u)^{α−1} f(u, y(u)) du + (1/Γ(α)) ∫_{t_1}^{t_{2j+1}} (t_{2j+1} − u)^{α−1} f(u, y(u)) du
 = y_0 + y_0^{(1)} t_{2j+1}/1! + (1/Γ(α)) ∫_0^{t_1} (t_{2j+1} − u)^{α−1} f(u, y(u)) du + (1/Γ(α)) ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u + h, y(u + h)) du.   (4.2.3)

We will replace f(u, y(u)) in the integral ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u, y(u)) du in (4.2.2) by a piecewise quadratic polynomial Q_2(u), for t_{2l} ≤ u ≤ t_{2l+2}, l = 0, 1, 2, . . . , j − 1, with j = 1, 2, . . . , m, and obtain

∫_0^{t_{2j}} (t_{2j} − u)^{α−1} Q_2(u) du = Σ_{k=0}^{2j} c_{k,2j} f(t_{k+1}, y(t_{k+1})),   (4.2.7)
where

c_{k,2j} = (h^α / (α(α + 1)(α + 2))) ×
 ½F_0(0),  k = 0,
 ½F_0(l) + ½F_2(l − 1),  k = 2l, l = 1, 2, . . . , j − 1,
 −F_1(l),  k = 2l + 1, l = 0, 1, 2, . . . , j − 1,
 ½F_2(j − 1),  k = 2j,

and

F_0(l) = α(α + 1)[(2j − 2l)^{α+2} − (2j − 2l − 2)^{α+2}]
 + α(α + 2)[2(2j) − (2l + 1) − (2l + 2)][(2j − 2l − 2)^{α+1} − (2j − 2l)^{α+1}]
 + (α + 1)(α + 2)(2j − 2l − 1)(2j − 2l − 2)[(2j − 2l)^α − (2j − 2l − 2)^α],

F_1(l) = α(α + 1)[(2j − 2l)^{α+2} − (2j − 2l − 2)^{α+2}]
 + α(α + 2)[2(2j) − (2l) − (2l + 2)][(2j − 2l − 2)^{α+1} − (2j − 2l)^{α+1}]
 + (α + 1)(α + 2)(2j − 2l)(2j − 2l − 2)[(2j − 2l)^α − (2j − 2l − 2)^α],

F_2(l) = α(α + 1)[(2j − 2l)^{α+2} − (2j − 2l − 2)^{α+2}]
 + α(α + 2)[2(2j) − (2l) − (2l + 1)][(2j − 2l − 2)^{α+1} − (2j − 2l)^{α+1}]
 + (α + 1)(α + 2)(2j − 2l)(2j − 2l − 1)[(2j − 2l)^α − (2j − 2l − 2)^α].
Proof. We note that

∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} (u − t_{2l+1})(u − t_{2l+2}) du
 = ∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} [(u − t_{2j}) + (t_{2j} − t_{2l+1})][(u − t_{2j}) + (t_{2j} − t_{2l+2})] du
 = ∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α+1} du − ∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^α (2t_{2j} − t_{2l+1} − t_{2l+2}) du + ∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} (t_{2j} − t_{2l+1})(t_{2j} − t_{2l+2}) du
 = h^{α+2} [ ((2j − 2l)^{α+2} − (2j − 2l − 2)^{α+2})/(α + 2) + (2·2j − (2l + 1) − (2l + 2))((2j − 2l − 2)^{α+1} − (2j − 2l)^{α+1})/(α + 1) + (2j − 2l − 1)(2j − 2l − 2)((2j − 2l)^α − (2j − 2l − 2)^α)/α ]
 = (h^{α+2}/(α(α + 1)(α + 2))) F_0(l),  l = 0, 1, 2, 3, . . . , j − 1.
Similarly, we have

∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} (u − t_{2l})(u − t_{2l+2}) du
 = ∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} [(u − t_{2j}) + (t_{2j} − t_{2l})][(u − t_{2j}) + (t_{2j} − t_{2l+2})] du
 = ∫ (t_{2j} − u)^{α+1} du − ∫ (t_{2j} − u)^α (2t_{2j} − t_{2l} − t_{2l+2}) du + ∫ (t_{2j} − u)^{α−1} (t_{2j} − t_{2l})(t_{2j} − t_{2l+2}) du
 = (h^{α+2}/(α(α + 1)(α + 2))) [ α(α + 1)((2j − 2l)^{α+2} − (2j − 2l − 2)^{α+2}) + α(α + 2)(2·2j − 2l − (2l + 2))((2j − 2l − 2)^{α+1} − (2j − 2l)^{α+1}) + (α + 1)(α + 2)(2j − 2l)(2j − 2l − 2)((2j − 2l)^α − (2j − 2l − 2)^α) ]
 = (h^{α+2}/(α(α + 1)(α + 2))) F_1(l),  l = 0, 1, 2, 3, . . . , j − 1,

and

∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} (u − t_{2l})(u − t_{2l+1}) du
 = ∫_{t_{2l}}^{t_{2l+2}} (t_{2j} − u)^{α−1} [(u − t_{2j}) + (t_{2j} − t_{2l})][(u − t_{2j}) + (t_{2j} − t_{2l+1})] du
 = ∫ (t_{2j} − u)^{α+1} du − ∫ (t_{2j} − u)^α (2t_{2j} − t_{2l} − t_{2l+1}) du + ∫ (t_{2j} − u)^{α−1} (t_{2j} − t_{2l})(t_{2j} − t_{2l+1}) du
 = (h^{α+2}/(α(α + 1)(α + 2))) [ α(α + 1)((2j − 2l)^{α+2} − (2j − 2l − 2)^{α+2}) + α(α + 2)(2·2j − 2l − (2l + 1))((2j − 2l − 2)^{α+1} − (2j − 2l)^{α+1}) + (α + 1)(α + 2)(2j − 2l)(2j − 2l − 1)((2j − 2l)^α − (2j − 2l − 2)^α) ]
 = (h^{α+2}/(α(α + 1)(α + 2))) F_2(l),  l = 0, 1, 2, 3, . . . , j − 1.
Thus we get

∫_0^{t_{2j}} (t_{2j} − u)^{α−1} P_2(u) du
 = Σ_{l=0}^{j−1} (h^α/(α(α + 1)(α + 2))) [ ½F_0(l) g(t_{2l}) − F_1(l) g(t_{2l+1}) + ½F_2(l) g(t_{2l+2}) ]
 = Σ_{k=0}^{2j} c_{k,2j} f(t_k, y(t_k)).
We now define a fractional Adams numerical method for solving (4.1.3). Let y_l ≈ y(t_l) denote the approximation of y(t_l), l = 0, 1, 2, . . . , 2m. The corrector formula is defined by

y_{2j} = y_0 + y_0^{(1)} t_{2j}/1! + (1/Γ(α)) [ Σ_{k=0}^{2j−1} c_{k,2j} f(t_k, y_k) + c_{2j,2j} f(t_{2j}, y^P_{2j}) ],  j = 1, 2, . . . , m,   (4.2.8)

and

y_{2j+1} = y_0 + y_0^{(1)} t_{2j+1}/1! + (1/Γ(α)) ∫_0^{t_1} (t_{2j+1} − u)^{α−1} f(u, y(u)) du
 + (1/Γ(α)) [ Σ_{k=0}^{2j−1} c_{k,2j} f(t_{k+1}, y_{k+1}) + c_{2j,2j} f(t_{2j+1}, y^P_{2j+1}) ],  j = 1, 2, . . . , m − 1.   (4.2.9)

The remaining problem is the determination of the predictor formula required to calculate y^P_{2j} and y^P_{2j+1}. The idea is the same as the one described above: we replace f(u, y(u)) and f(u + h, y(u + h)) in the integrals on the right-hand sides of equations (4.2.2) and (4.2.3), respectively, by piecewise linear interpolation polynomials and obtain

y^P_{2j} = y_0 + y_0^{(1)} t_{2j}/1! + (1/Γ(α)) [ Σ_{k=0}^{2j−1} a_{k,2j} f(t_k, y_k) + a_{2j,2j} f(t_{2j}, y^{PP}_{2j}) ],  j = 1, 2, . . . , m,   (4.2.10)

and, with j = 1, 2, . . . , m − 1,

y^P_{2j+1} = y_0 + y_0^{(1)} t_{2j+1}/1! + (1/Γ(α)) [ Σ_{k=0}^{2j} a_{k,2j+1} f(t_k, y_k) + a_{2j+1,2j+1} f(t_{2j+1}, y^{PP}_{2j+1}) ].   (4.2.11)
Similarly, to calculate y^{PP}_{2j} and y^{PP}_{2j+1}, we replace f(u, y(u)) and f(u + h, y(u + h)) in the integrals on the right-hand sides of equations (4.2.2) and (4.2.3), respectively, by piecewise constants and obtain

y^{PP}_{2j} = y_0 + y_0^{(1)} t_{2j}/1! + (1/Γ(α)) Σ_{k=0}^{2j−1} b_{k,2j} f(t_k, y_k),  j = 1, 2, . . . , m,   (4.2.12)

and

y^{PP}_{2j+1} = y_0 + y_0^{(1)} t_{2j+1}/1! + (1/Γ(α)) Σ_{k=0}^{2j} b_{k,2j+1} f(t_k, y_k),  j = 1, 2, . . . , m − 1,   (4.2.13)

where

b_{k,n+1} = (h^α/α) [ (n + 1 − k)^α − (n − k)^α ].   (4.2.14)

Our basic fractional Adams method is now completely described by equations (4.2.8)-(4.2.13).

Remark 8. In practice, we need to approximate the integral in (4.2.9). We shall use the same ideas as in Remark 5.

We have thus completed the description of our numerical algorithm. Now we will discuss the error analysis of the scheme.
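Before turning to the analysis, the overall predict-evaluate-correct structure may be easier to see in code. The sketch below (our own, with hypothetical names) implements the classical one-step-per-node Adams predictor-corrector of [29] for 0 < α ≤ 1, i.e. the linear-interpolation variant of order O(h^{1+α}); the quadratic two-step scheme (4.2.8)-(4.2.13) refines this same structure.

```python
import numpy as np
from math import gamma

def fde_pece(f, alpha, y0, T, n):
    """Adams PECE for C_0 D^alpha y = f(t, y), y(0) = y0, 0 < alpha <= 1."""
    h = T/n
    t = h*np.arange(n + 1)
    y = np.zeros(n + 1); y[0] = y0
    fval = np.zeros(n + 1); fval[0] = f(t[0], y0)
    for j in range(1, n + 1):
        k = np.arange(j)
        # predictor: product rectangle rule (weights b_{k,j})
        b = h**alpha/gamma(alpha + 1)*((j - k)**alpha - (j - 1 - k)**alpha)
        yp = y0 + np.dot(b, fval[:j])
        # corrector: product trapezoidal rule (weights a_{k,j}), f at t_j from the predictor
        a = (j - k + 1.0)**(alpha + 1) + (j - 1.0 - k)**(alpha + 1) - 2*(j - k + 0.0)**(alpha + 1)
        a[0] = (j - 1.0)**(alpha + 1) - (j - 1 - alpha)*j**alpha
        y[j] = y0 + h**alpha/gamma(alpha + 2)*(np.dot(a, fval[:j]) + f(t[j], yp))
        fval[j] = f(t[j], y[j])
    return t, y
```

Halving h and tabulating log_2 of the error ratios, as in Chapter 3, reproduces the O(h^{1+α}) rate for smooth right-hand sides.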
Theorem 4.3.1. Let y(t_k) and y_k, k = 0, 1, . . . , 2m, be the exact and approximate solutions of (4.2.2)-(4.2.3) and (4.2.8)-(4.2.9), respectively, and assume ^C_0D_t^α y ∈ C³[0, T]. Assume that y_0 = y(0) and y_1 = y(t_1) exactly. Then there exists a positive constant C_0 > 0 such that

max_{0≤k≤2m} |y(t_k) − y_k| ≤ C_0 h^{1+2α},  if 0 < α ≤ 1,
max_{0≤k≤2m} |y(t_k) − y_k| ≤ C_0 h³,  if 1 < α ≤ 2.
Lemma 4.3.2 (Theorem 2.4 [28]). Let 0 < α ≤ 2. If z ∈ C¹[0, T], then there is a constant C_1^α depending only on α such that

| ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} z(u) du − Σ_{k=0}^{2j−1} b_{k,2j} z(t_k) | ≤ C_1^α t_{2j}^α h,

where b_{k,2j} = (h^α/α)[(2j − k)^α − (2j − 1 − k)^α].

Lemma 4.3.3 (Theorem 2.5 [28]). Let 0 < α ≤ 2. If z ∈ C²[0, T], then there is a constant C_2^α depending only on α such that

| ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} z(u) du − Σ_{k=0}^{2j} a_{k,2j} z(t_k) | ≤ C_2^α t_{2j}^α h².
Lemma 4.3.4. Let 0 < α ≤ 2. If z ∈ C³[0, T], then there is a constant C_3^α depending only on α such that

| ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} z(u) du − Σ_{k=0}^{2j} c_{k,2j} z(t_k) | ≤ C_3^α t_{2j}^α h³,   (4.3.1)

and

| ∫_{t_1}^{t_{2j+1}} (t_{2j+1} − u)^{α−1} z(u) du − Σ_{k=0}^{2j} c_{k,2j} z(t_{k+1}) | ≤ C_3^α t_{2j+1}^α h³.   (4.3.2)
Proof. We have

I = ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} z(u) du − Σ_{k=0}^{2j} c_{k,2j} z(t_k)
 = ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} z(u) du − ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} P_2(u) du,   (4.3.3)

where P_2(u) is the piecewise quadratic interpolation polynomial of z(u) defined by (4.2.4). Thus we have

|I| = | Σ_{k=0}^{j−1} ∫_{t_{2k}}^{t_{2k+2}} (t_{2j} − u)^{α−1} (z(u) − P_2(u)) du |
 = | Σ_{k=0}^{j−1} ∫_{t_{2k}}^{t_{2k+2}} (t_{2j} − u)^{α−1} (z‴(ξ)/3!) (u − t_{2k})(u − t_{2k+1})(u − t_{2k+2}) du |
 ≤ (‖z‴‖_∞/3!) (2h)³ ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} du = C_3^α t_{2j}^α h³,

and the bound (4.3.2) follows in the same way.
Lemma 4.3.5. [28] Let 0 < α ≤ 2 and let m be a positive integer. Let a_{k,2j} and b_{k,2j}, k = 0, 1, 2, . . . , 2j, j = 1, 2, . . . , m, be introduced in (4.2.10) and (4.2.12), respectively. Then we have

Σ_{k=0}^{2j} a_{k,2j} ≤ (1/α) T^α,  Σ_{k=0}^{2j} b_{k,2j} ≤ (1/α) T^α,  j = 1, 2, . . . , m.

Lemma 4.3.6. The weights c_{k,2j} satisfy (4.3.4) and

Σ_{k=0}^{2j} |c_{k,2j}| ≤ (1/α) T^α.   (4.3.5)

The proof uses the sign properties

F_1(l) ≤ 0,  l = 0, 1, 2, . . . , j − 1,   (4.3.7)
F_0(l) + F_2(l − 1) ≥ 0,  l = 1, 2, . . . , j − 1,   (4.3.8)

where (4.3.8) follows from (4.3.9). Finally, we can also show F_0(0) ≥ 0 and F_2(j − 1) ≥ 0. Hence we prove (4.3.4).
Proof of Theorem 4.3.1. We first consider the case 1 < α ≤ 2. We use mathematical induction. Note that, by assumption, |y(t_0) − y_0| = 0 and |y(t_1) − y_1| = 0. Assume that

|y(t_k) − y_k| ≤ C_0 h³,  k = 0, 1, . . . , 2j − 1.   (4.3.10)

Subtracting (4.2.8) from the exact relation at t_{2j} and splitting as in Lemma 4.3.4 gives

Γ(α)(y(t_{2j}) − y_{2j}) = [ ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u, y(u)) du − ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} P_2(u) du ]
 + Σ_{k=0}^{2j−1} c_{k,2j} [ f(t_k, y(t_k)) − f(t_k, y_k) ] + c_{2j,2j} [ f(t_{2j}, y(t_{2j})) − f(t_{2j}, y^P_{2j}) ]
 = I_1 + II_1 + III_1.

For II_1, we have, by Lemma 4.3.6 and the Lipschitz condition (4.2.1),

|II_1| ≤ Σ_{k=0}^{2j−1} |c_{k,2j}| |f(t_k, y(t_k)) − f(t_k, y_k)| ≤ Σ_{k=0}^{2j−1} |c_{k,2j}| L |y(t_k) − y_k| ≤ (1/α) T^α L max_{0≤k≤2j−1} |y(t_k) − y_k|.
For III_1, we have

|III_1| ≤ c_{2j,2j} |f(t_{2j}, y(t_{2j})) − f(t_{2j}, y^P_{2j})| ≤ D_3^α h^α L |y(t_{2j}) − y^P_{2j}|.

Now let us consider the bound for |y(t_{2j}) − y^P_{2j}|. We have

Γ(α)(y(t_{2j}) − y^P_{2j}) = ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u, y(u)) du − Σ_{k=0}^{2j−1} a_{k,2j} f(t_k, y_k) − a_{2j,2j} f(t_{2j}, y^{PP}_{2j})
 = [ ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u, y(u)) du − ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} P_1(u) du ]
 + Σ_{k=0}^{2j−1} a_{k,2j} [ f(t_k, y(t_k)) − f(t_k, y_k) ] + a_{2j,2j} [ f(t_{2j}, y(t_{2j})) − f(t_{2j}, y^{PP}_{2j}) ]
 = I_2 + II_2 + III_2.

For II_2, we have, by Lemma 4.3.5 and the Lipschitz condition (4.2.1),

|II_2| ≤ Σ_{k=0}^{2j−1} a_{k,2j} |f(t_k, y(t_k)) − f(t_k, y_k)| ≤ Σ_{k=0}^{2j−1} a_{k,2j} L |y(t_k) − y_k| ≤ (1/α) T^α L max_{0≤k≤2j−1} |y(t_k) − y_k|,

and for III_2,

|III_2| ≤ a_{2j,2j} |f(t_{2j}, y(t_{2j})) − f(t_{2j}, y^{PP}_{2j})| ≤ D_2^α h^α L |y(t_{2j}) − y^{PP}_{2j}|.

We also need to consider the bound for |y(t_{2j}) − y^{PP}_{2j}|. We have

Γ(α)(y(t_{2j}) − y^{PP}_{2j}) = ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u, y(u)) du − Σ_{k=0}^{2j−1} b_{k,2j} f(t_k, y_k)
 = [ ∫_0^{t_{2j}} (t_{2j} − u)^{α−1} f(u, y(u)) du − Σ_{k=0}^{2j−1} b_{k,2j} f(t_k, y(t_k)) ] + Σ_{k=0}^{2j−1} b_{k,2j} [ f(t_k, y(t_k)) − f(t_k, y_k) ]
 = I_3 + II_3,

and

|II_3| ≤ (1/α) T^α L max_{0≤k≤2j−1} |y(t_k) − y_k|.
Combining these estimates with Lemmas 4.3.2-4.3.4 gives

Γ(α) |y(t_{2j}) − y_{2j}| ≤ C_3^α T^α h³ + (1/α) T^α L max_{0≤k≤2j−1} |y(t_k) − y_k|
 + (D_3^α h^α L/Γ(α)) [ C_2^α T^α h² + (1/α) T^α L max_{0≤k≤2j−1} |y(t_k) − y_k| ]
 + (D_3^α h^α L/Γ(α))(D_2^α h^α L/Γ(α)) [ C_1^α T^α h + (1/α) T^α L max_{0≤k≤2j−1} |y(t_k) − y_k| ]
 ≤ C_3^α T^α h³ + (D_3^α L C_2^α T^α h^{2+α})/Γ(α) + (D_3^α D_2^α L² C_1^α T^α h^{1+2α})/Γ(α)²
 + [ (1/α) T^α L + (D_3^α L² ((1/α)T^α) h^α)/Γ(α) + (D_3^α D_2^α L³ ((1/α)T^α) h^{2α})/Γ(α)² ] max_{0≤k≤2j−1} |y(t_k) − y_k|.   (4.3.11)

We first choose T sufficiently small, see Lemma 3.1 in [28], such that (1/Γ(α + 1)) T^α L ≤ 1/2. Then we fix this value of T and make the sum of the remaining terms on the right-hand side of (4.3.11) smaller than (C_0/2)h³ (for sufficiently small h) by choosing C_0 sufficiently large. Hence we obtain, for 1 < α ≤ 2,

|y(t_{2j}) − y_{2j}| ≤ (C_0/2)h³ + (C_0/2)h³ = C_0 h³.   (4.3.12)

The same argument shows that |y(t_{2j+1}) − y_{2j+1}| ≤ C_0 h³, j = 1, 2, . . . , m − 1.
Example 9. Consider the nonlinear fractional differential equation, see [28],

^C_0D_t^α y(t) = (40320/Γ(9 − α)) t^{8−α} − 3 (Γ(5 + α/2)/Γ(5 − α/2)) t^{4−α/2} + (9/4)Γ(α + 1) + ((3/2) t^{α/2} − t⁴)³ − [y(t)]^{3/2},  0 < t < 1.

The initial conditions were chosen to be homogeneous (y(0) = 0, y′(0) = 0; the latter only in the case 1 < α < 2). This equation has been chosen because it exhibits a difficult (nonlinear and nonsmooth) right-hand side, and yet we are able to find its exact solution, thus allowing us to compare the numerical results for this nontrivial case to the exact results. Indeed, the exact solution of this initial value problem is

y(t) = t⁸ − 3t^{4+α/2} + (9/4) t^α,

and hence

^C_0D_t^α y(t) = (40320/Γ(9 − α)) t^{8−α} − 3 (Γ(5 + α/2)/Γ(5 − α/2)) t^{4−α/2} + (9/4)Γ(α + 1),

which implies ^C_0D_t^α y ∈ C³[0, T] for arbitrary T > 0 and 0 < α ≤ 2, and thus the conditions of Theorem 4.3.1 are satisfied. Hence we expect

max_{0≤k≤2m} |y(t_k) − y_k| ≤ C_0 h^{1+2α},  if 0 < α ≤ 1,
max_{0≤k≤2m} |y(t_k) − y_k| ≤ C_0 h³,  if 1 < α ≤ 2.
Table 4.4.1: Numerical results at t = 1 in Example 9 with different fractional orders α < 1.

Table 4.4.2: Numerical results at t = 1 in Example 9 with different fractional orders α > 1.
(Figures 4.4.1 and 4.4.2: log_2(|e(t)|) against log_2(h).)
Let y = log_2(|e(t_n)|) and x = log_2(h). We plot the function y = y(x) for the different values of x = log_2(h), where h = 1/(5 × 2^l), l = 1, 2, . . . , 7. To observe the order of convergence, we also plot the straight line y = (1 + 2α)x, where α = 0.35. We see that these two lines are almost parallel, which confirms that the order of convergence of the numerical method is O(h^{1+2α}).

In Figure 4.4.2, we plot the order of convergence for α = 1.25. We plot the function y = y(x) for the different values of x = log_2(h), where h = 1/(5 × 2^l), l = 1, 2, . . . , 7. To observe the order of convergence, we also plot the straight line y = 3x. We observe that the order of convergence is higher than 3 (almost 1 + 2α).
Chapter 5

5.1 Introduction

The aim of this chapter is to discuss convergence acceleration methods for fractional differential equations by an extrapolation procedure. We will consider Richardson extrapolation algorithms for solving higher order fractional differential equations. Richardson extrapolation is an idea which can often be used to improve the convergence order of a numerical method: from a method of order O(h^{k_0}) we can obtain a method of order O(h^{k_1}), k_0 < k_1.
Consider approximating A = f′(x_0). By the Taylor formula,

A = f′(x_0) = (f(x_0 + h) − f(x_0))/h − (f″(x_0)/2!) h − (f‴(x_0)/3!) h² − . . .

Denote

A_0(h) = (f(x_0 + h) − f(x_0))/h;

then we have

A = A_0(h) + a_0 h + a_1 h² + . . .   (5.1.1)

Using the stepsize h/t, t > 0 (for example t = 2), we get the approximation A_0(h/t), i.e.,

A = A_0(h/t) + a_0 (h/t) + a_1 (h/t)² + . . .   (5.1.2)

Multiplying both sides of (5.1.2) by t, we get

tA = tA_0(h/t) + a_0 h + a_1 t(h/t)² + . . .   (5.1.3)

Subtracting (5.1.1) from (5.1.3), we get

(t − 1)A = [tA_0(h/t) − A_0(h)] + O(h²),

i.e.,

A = (tA_0(h/t) − A_0(h))/(t − 1) + O(h²).

Denote

A_1(h) = (tA_0(h/t) − A_0(h))/(t − 1);

then A = A_1(h) + O(h²). Thus we see that A_1(h) is a numerical method of order O(h²). Letting t = 2, we get the extrapolation formula

A_1(h) = 2A_0(h/2) − A_0(h).
70
with 0 < k0 < k1 < k2 < . . . , we use the stepsize ht , t > 0. (for example t = 2) to get the
approximation A0 ( ht ), i.e.
h h h h
A = A0 ( ) + a0 ( )k0 + a1 ( )k1 + a2 ( )k2 + . . . . (5.2.2)
t t t t
Multiplying tk0 in both sides, we get
h h h
tk0 A = tk0 A0 ( ) + a0 (h)k0 + a1 tk0 ( )k1 + a2 tk0 ( )k2 + . . . . (5.2.3)
t t t
Subtracting (5.2.1) from (5.2.3), we have
tk0 A0 ( ht ) − A0 (h)
A= + b1 hk1 + b2 hk2 + . . .
tk0 − 1
Denote
tk0 A0 ( ht ) − A0 (h)
A1 (h) = ,
tk0 − 1
we have
Thus A1 (h) is a numerical method of convergence order O(hk1 ), we can continue this
process to construct the numerical methods of order O(hk2 ), O(hk3 ), . . . . Choose t = 2
we first calculate A0 (h), A0 ( h2 ), A0 ( 2h2 ), A0 ( 2h3 ) which has convergence order O(hk0 ).
We next calculate A1 (h), A1 ( h2 ), A1 ( 2h2 ), . . . which has convergence order O(hk1 ). Sim-
ilarly, we can calculate A2 (h), A2 ( h2 ), A2 ( 2h2 ), . . . which has convergence order O(hk2 ).
We proceed by setting up a triangular array ( so-called Romberg tableau) of approxi-
mation value for A of the form
A0 (h)
A0 ( h2 ) A1 (h)
A0 ( 2h2 ) A1 ( h2 ) A2 (h)
A0 ( 2h3 ) A1 ( 2h2 ) A2 ( h2 )
A0 ( 2h4 ) A1 ( 2h3 ) A2 ( 2h2 )
71
. . .
. . .
. . .
Here
2k0 A0 ( h2 ) − A0 (h) h 2k0 A0 ( 2h2 ) − A0 ( h2 )
A1 (h) = , A1 ( ) = ,
2k0 − 1 2 2k0 − 1
To observe the order O(hk0 ) from A0 (h), A0 ( h2 ), A0 ( 2h2 ), . . . we can use the following
idea.
Note that,
h h h
|e0 ( )| = |A − A0 ( )| ≤ C( )k0 .
2 2 2
Thus
|e0 (h)| hk0 |e0 (h)|
≈ = 2k0 , k0 = log2 .
|e0 ( h2 )| ( h2 )k0 |e0 ( h2 )|
Hence one can calculat all the values
|e0 (h)| |e0 ( h2 )| |e0 ( 2h2 )|
log2 , log2 , log2 ,...
|e0 ( h2 )| |e0 ( 2h2 )| |e0 ( 2h3 )|
and observe that the values should be k0 approximately.
numerical methods for quadratic interpolation polynomials. Numerical results show that
the approximate solutions of these two numerical methods have the expected asymptotic
expansions.
We consider the Richardson extrapolation algorithms for solving the following frac-
tional order differential equation
C α
0 Dt y(t) = f (t, y(t)), 0 < t ≤ T, (5.2.4)
(k)
y (k) (0) = y0 , k = 0, 1, 2, . . . , dαe − 1, (5.2.5)
(k)
where the y0 may be arbitrary real numbers and α > 0. Here C α
0 Dt denotes the differential
C α
0 Dt y(t) = βy(t) + f (t), 0 ≤ t ≤ 1, (5.2.6)
y(0) = y0 , (5.2.7)
where β < 0 and f is a given function on [0, 1]. Diethelm and Walz [33] proved that
the approximate solution of the numerical algorithm in [26] has an asymptotic expansion.
For the general nonlinear fractional differential equation (5.2.4) -(5.2.5),Diethelm, Ford
and Freed [28] introduced a fractional Adams-type predictor-corrector method for solv-
ing (5.2.4)-(5.2.5) and numerical evidence suggests that the approximate solution of the
numerical method in [28] has also an asymptotic expansion.
73
Recently, Yan, Pal and Ford [93] extended the numerical method in [26] and obtained a
high order numerical method for solving (5.2.6) - (5.2.7) and proved that the approximate
solution has an asymptotic expansion.
In this section we will consider a higher order numerical method for solving (5.2.6)-(5.2.7).
It is well-known that (5.2.6)-(5.2.7) is equivalent to, with 0 < α < 1,
R α
0 Dt [y(t) − y0 ] = βy(t) + f (t), 0 ≤ t ≤ 1, (5.3.1)
where R α
0 Dt y(t) denotes the Riemann-Liouville fractional derivative defined by,
R α
0 Dt [y(tj ) − y0 ] = βy(tj ) + f (tj ), j = 1, 2, . . . , M.
Note that
tj t−α 1
I I
1 −1−α j
R α
0 Dt y(tj ) = (tj − u) y(u) du = w−1−α y(tj − tj w) dw. (5.3.4)
Γ(−α) 0 Γ(−α) 0
H1
For every j, we denote g(w) = y(tj − tj w) and approximate 0
w−1−α g(w) dw by
H1
0
w−1−α g1 (w) dw, where g1 (w) is the piecewise linear interpolation polynomial on the
74
t−α 1
I
j
R α
0 Dt y(tj ) = w−1−α y(tj − tj w) dw
Γ(−α) 0
j I
t−α
j
X tk
−1−α
= w g1 (w) dw + Rj (g)
Γ(−α) k=1 tk−1
j
t−α
j
X
= αk,j y(tj−k ) + Rj (g) ,
Γ(−α) k=0
where αk,j , k = 0, 1, 2, . . . , j are weights and Rj (g) is the remainder term. Thus (5.3.1)
satisfies, with j = 1, 2, . . . , M ,
1 h
y(tj ) = α
tαj Γ(−α)f (tj )
α0,j − tj Γ(−α)β
j j i
X X
− αk,j y(tj−k ) + y0 αk,j − Rj (g) . (5.3.5)
k=1 k=0
Let yj ≈ y(tj ) be the approximate solutions of y(tj ). We define the following finite
difference method for solving (5.2.6) - (5.2.7), with j = 1, 2, . . . , M ,
j j
1 h
α
X X i
yj = t Γ(−α)f (t j ) − α k,j yj−k + y0 α k,j . (5.3.6)
α0,j − tαj Γ(−α)β j k=1 k=0
Diethelm and Walz [33] proved the following asymptotic expansion theorem.
Theorem 5.3.1 (Theorem 2.1 in [33]). Let 0 < α < 1 and M be a positive integer. Let
0 = t0 < t1 < t2 < · · · < tj < · · · < tM = 1 be a partition of [0, 1] and h the stepsize. Let
y(tj ) and yj be the exact and the approximate solutions of (5.3.5) and (5.3.6), respectively.
Assume that the function y ∈ C m+2 [0, 1], m ≥ 2. Then there exist coefficients cµ = cµ (α)
and c∗µ = c∗µ (α) such that the sequence {yl }, l = 0, 1, 2, . . . , M possesses an asymptotic
expansion of the form
µ ∗
m+1
X X
y(tM ) − yM = cµ (M ) α−µ
+ c∗µ (M )−2µ + o((M )α−m−1 ), for M → ∞,
µ=2 µ=1
that is,
µ ∗
m+1
X X
y(tM ) − yM = cµ h µ−α
+ c∗µ h2µ + o(hm+1−α ), for h → 0,
µ=2 µ=1
where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.
75
Yan, Pal and Ford [93] extended the numerical method in Diethelm and Walz [33] and
obtained a high order numerical method for solving (5.2.6)- (5.2.7). Let M be a fixed
positive integer and let 0 = t0 < t1 < t2 < · · · < t2j < t2j+1 < · · · < t2M = 1 be a partition
2j
of [0, 1] and h the stepsize. At the nodes t2j = 2M
, the equations (5.2.6)- (5.2.7) satisfy
R α
0 Dt [y(t2j ) − y0 ] = βy(t2j ) + f (t2j ), j = 1, 2, . . . , M,
2j+1
and at the nodes t2j+1 = 2M
, the equations (5.2.6)- (5.2.7) satisfy
R α
0 Dt [y(t2j+1 ) − y0 ] = βy(t2j+1 ) + f (t2j+1 ), j = 0, 1, 2, . . . , M − 1. (5.3.7)
Note that
t2j t−α 1
I I
1 −1−α 2j
R α
0 Dt y(t2j ) = (t2j −u) y(u) du = w−1−α y(t2j −t2j w) dw. (5.3.8)
Γ(−α) 0 Γ(−α) 0
H1
For every j, we denote g(w) = y(t2j − t2j w) and approximate 0
w−1−α g(w) dw by
H1
0
w−1−α g2 (w) dw, where g2 (w) is the piecewise quadratic interpolation polynomials on
the nodes wl = l/2j, l = 0, 1, 2, . . . , 2j. More precisely, we have, for k = 1, 2, . . . , j,
Thus
t−α 1
I
2j
R α
0 Dt y(t2j ) = w−1−α y(t2j − t2j w) dw
Γ(−α) 0
j I
t−α
2j
X w2k
= w−1−α g2 (w) dw + R2j (g)
Γ(−α) k=1 w2k−2
2j
t−α
2j
X
= αk,2j y(t2j−k ) + R2j (g)
Γ(−α) k=0
76
where R2j (g) is the remainder term and αk,2j , k = 0, 1, 2, . . . , 2j are weights given by
Here
F0 (k) =(2k − 1)(2k) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)
−α+1 −α+1
− (2k − 1) + 2k (2k) − (2k − 2) (−α)(−α + 2)
+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1),
F1 (k) =(2k − 2)(2k) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)
− (2k − 2) + 2k (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)
−α+2 −α+2
+ (2k) − (2k − 2) (−α)(−α + 1),
and
−α −α
F2 (k) =(2k − 2)(2k − 1) (2k) − (2k − 2) (−α + 1)(−α + 2)
− (2k − 2) + (2k − 1) (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)
+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1).
(5.3.9)
77
2j+1
At the nodes t2j+1 = 2M
, j = 0, 1, 2, . . . , M − 1, we have
I t2j+1
1
R α
0 Dt y(t2j+1 ) = (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
I t1
1
= (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
I 2j
t−α
2j+1 2j+1
+ w−1−α y(t2j+1 − t2j+1 w) dw.
Γ(−α) 0
H 2j
For every j, we denote g(w) = y(t2j+1 − t2j+1 w) and approximate 02j+1 w−1−α g(w) dw
H 2j
by 02j+1 w−1−α g2 (w) dw, where g2 (w) is the piecewise quadratic interpolation polynomials
l
on the nodes wl = 2j+1
, l = 0, 1, 2, . . . , 2j. We then get
Z t1
1
R α
0 Dt y(t2j+1 ) = (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
j I
t−α
2j+1
X w2k
+ w−1−α g2 (w) dw + R2j+1 (g)
Γ(−α) k=1 w2k−2
Z t1
1
= (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
2j
t−α
2j+1
X
+ αk,2j+1 y(t2j+1−k ) + R2j+1 (g)
Γ(−α) k=0
where R2j+1 (g) is the remainder term and αk,2j+1 = αk,2j , k = 0, 1, 2, . . . , 2j. Hence
2j
1 h X
y(t2j+1 ) = tα2j+1 Γ(−α)f (t2j+1 ) − αk,2j+1 y(t2j+1−k )
α0,2j+1 − tα2j+1 Γ(−α)β k=1
2j Z t1 i
X
+ y0 αk,2j+1 − R2j+1 (g) − tα2j+1 (t2j+1 − u)−1−α y(u) du . (5.3.10)
k=0 0
Here α0,l − tαl Γ(−α)β < 0, l = 2j, 2j + 1, which follow from Γ(−α) < 0, β < 0 and
α0,2j+1 = α0,2j .
Let y2j ≈ y(t2j ) and y2j+1 ≈ y(t2j+1 ) denote the approximate solutions of y(t2j ) and
y(t2j+1 ), respectively. We define the following numerical methods for solving (5.2.6)-
(5.2.7), with j = 1, 2, . . . , M ,
2j 2j
1 h
α
X X i
y2j = t2j Γ(−α)f (t2j ) − αk,2j y2j−k + y0 αk,2j , (5.3.11)
α0,2j − tα2j Γ(−α)β k=1 k=0
78
and, with j = 1, 2, . . . , M − 1,
2j
1 h
α
X
y2j+1 = t2j+1 Γ(−α)f (t2j+1 ) − αk,2j+1 y2j+1−k
α0,2j+1 − tα2j+1 Γ(−α)β k=1
2j Z t1 i
X
+ y0 αk,2j+1 − tα2j+1 (t2j+1 − u)−1−α y(u) du . (5.3.12)
k=0 0
Theorem 5.3.2 (for proof see the Theorem 3.3.3). Let 0 < α < 1 and M be a positive
integer. Let 0 = t0 < t1 < t2 < · · · < t2j < t2j+1 < · · · < t2M = 1 be a partition of
[0, 1] and h the stepsize. Let y(t2j ), y(t2j+1 ), y2j and y2j+1 be the exact and the approx-
imate solutions of (5.3.9) - (5.3.12), respectively. Assume that y ∈ C m+2 [0, 1], m ≥ 3.
Further assume that we can approximate the starting value y1 and the starting integral
R t1
0
(t2j+1 − τ )−1−α y(τ ) dτ in (5.3.12) by using some numerical methods and obtain the
required accuracy. Then there exist coefficients cµ = cµ (α) and c∗µ = c∗µ (α) such that the
sequence {yl }, l = 0, 1, 2, . . . , 2M possesses an asymptotic expansion of the form
µ ∗
m+1
X X
y(t2M ) − y2M = cµ (2M )α−µ + c∗µ (2M )−2µ + o((2M )α−m−1 ), for M → ∞,
µ=3 µ=2
that is,
µ ∗
m+1
X X
y(t2M ) − y2M = cµ hµ−α + c∗µ h2µ + o(hm+1−α ), for h → 0,
µ=3 µ=2
where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.
H1 H1
We denote g(w) = y(t1 − t1 w) and approximate 0
w−1−α g(w) dw by 0
w−1−α g2 (w) dw,
where g2 (w) is the quadratic interpolation polynomial on the nodes 0, 21 , 1 defined by
(w − 21 )(w − 1) (w − 0)(w − 1) 1
g2 (w) = 1 g(0) + 1 g( )
(0 − 2 )(0 − 1) ( 2 − 0)( 21 − 1) 2
(w − 0)(w − 12 )
+ g(1), for w ∈ [0, 1]. (5.3.13)
(1 − 0)(1 − 21 )
We then get
t−α t−α
I 1 I 1
−1−α (1)
R α
0 Dt y(t1 ) = 1
w g(w) dw = 1
w−1−α g2 (w) dw + R2
Γ(−α) 0 Γ(−α) 0
−α
t (1)
= 1 w10 y(t1 ) + w11 y(t 1 ) + w12 y(t0 ) + R2 , (5.3.14)
Γ(−α) 2
where
1
−1−α (w − 12 )(w − 1) 1
(w − 0)(w − 1)
I I
w10 = w dw, w11 = w−1−α dw,
0 (0 − 12 )(0 − 1) 0 ( 12 − 0)( 12 − 1)
1
(w − 0)(w − 21 )
I
w12 = w−1−α dw, (5.3.15)
0 (1 − 0)(1 − 12 )
(1)
and the remainder term R2 satisfies, [26]
(1)
|R2 | ≤ Ct31 |y 000 |∞ .
3 3 1 (2)
y(t 1 ) = y(t0 ) + y(t1 ) − y(t2 ) + R2 ,
2 8 4 8
(2) 1 3 000
where R2 = 12
h y (c). Hence we have
R α t−α
1
(1) (2)
0 Dt y(t1 ) = B̂0 y(t2 ) + B̂1 y(t1 ) + B̂2 y(t0 ) + R2 + R2 , (5.3.16)
Γ(−α)
where
3 3 1
B̂2 = w12 + w11 , B̂1 = w10 + w11 , B̂0 = − w11 .
8 4 8
Therefore we have, at t = t1 ,
1
y(t1 ) = tα1 Γ(−α)f (t1 ) − B̂0 y(t2 ) − B̂2 y(t0 )
B̂1 − tα1 Γ(−α)β
2
X
(1) (2)
+ y0 B̂k − R2 − R2 . (5.3.17)
k=0
80
At t = t2 , we have
t2
t−α
I I 1
1 −1−α
R α
0 Dt y(t2 ) = (t2 − u) y(u) du = 2
w−1−α y(t2 − t2 w) dw.
Γ(−α)
0 Γ(−α) 0
H 1 −1−α H1
We denote g(w) = y(t2 −t2 w) and approximate the integral 0 w g(w) dw by 0 w−1−α g2 (w) dw,
where g2 (w) is defined as in (5.3.13). We have
t−α t−α
I 1 I 1
R α 2 −1−α 2 −1−α (3)
0 D t y(t2 ) = w g(w) dw = w g 2 (w) dw + R2
Γ(−α) 0 Γ(−α) 0
t−α 0 (3)
= 2 w1 y(t2 ) + w11 y(t1 ) + w12 y(t0 ) + R2 , (5.3.18)
Γ(−α)
(3)
where w1j , j = 0, 1, 2 are defined as in (5.3.15) and the remainder term R2 satisfies, [26]
(3)
|R2 | ≤ Ct32 |y 000 |∞ .
Therefore we have, at t = t2 ,
2 2
1
α
X
k
X
k (3)
y(t2 ) = 0 t Γ(−α)f (t2 ) − w y(t2−k ) + y 0 w − R2 .
w1 − tα2 Γ(−α)β 2 k=1
1
k=0
1
(5.3.19)
2
1 X
y1 = tα1 Γ(−α)f (t1 ) − B̂0 y2 − B̂2 y0 + y0 B̂k , (5.3.20)
B̂1 − tα1 Γ(−α)β k=0
2 2
1 α
k
X
k
X
y2 = 0 t Γ(−α)f (t2 ) − w1 y2−k + y0 w1 . (5.3.21)
w1 − tα2 Γ(−α)β 2 k=1 k=0
1
(1) (2)
e1 = B̂0 e2 + B̂2 e0 − R2 − R2 , (5.3.22)
B̂1 − tα1 Γ(−α)β
2
1 X
k (3)
e2 = 0 w e2−k − R2 . (5.3.23)
w1 − tα2 Γ(−α)β k=1 1
(3)
|e2 | ≤ C|R2 | ≤ Ch3 ,
81
and
(1) (2)
|e1 | ≤ C(|R2 | + |R2 |) ≤ Ch3 .
R t1
We next consider how to approximate the starting integral 0
(t2j+1 − u)−1−α y(u) du
in (5.3.12) with j ≥ 1. Note that this integral is the usual integral since j ≥ 1 and
Z t1 Z 1
−1−α
(t2j+1 − u) y(u) du = t1 (t2j + t1 w)−1−α y(t1 − t1 w) dw.
0 0
R1
Denoting g(w) = y(t1 − t1 w) and approximating the integral 0 (t2j + t1 w)−1−α g(w) dw by
R1
(t + t1 w)−1−α g2 (w) dw, where g2 (w) is defined by (5.3.13), we have
0 2j
Z t1
(1)
(t2j+1 − u)−1−α y(u) du = t1 wj0 y(t1 ) + wj1 y(t 1 ) + wj2 y(t0 ) + Rj , (5.3.24)
2
0
where
1
−1−α (w − 12 )(w − 1)
Z
wj0 = (t2j + t1 w) dw,
0 (0 − 21 )(0 − 1)
1
(w − 0)(w − 1)
Z
wj1 = (t2j + t1 w)−1−α dw,
0 ( 12 − 0)( 12 − 1)
1
(w − 0)(w − 21 )
Z
wj2 = (t2j + t1 w)−1−α dw,
0 (1 − 0)(1 − 21 )
(1)
and the remainder term Rj satisfies, [26]
Z 1
(1)
|Rj | ≤ (t2j + t1 w)−1−α (Ct31 ) dw ≤ Ch3 t−α
2j ≤ Ch
3−α
.
0
3 3 1 (2)
y(t 1 ) = y(t0 ) + y(t1 ) − y(t2 ) + R2 ,
2 8 4 8
(2)1 3 000
where R2 = 12 h y (c). Hence we have
Z t1
(1) (2)
(t2j+1 − u)−1−α y(u) du = t1 B̂0,j y(t2 ) + B̂1,j y(t1 ) + B̂2,j y(t0 ) + Rj + Rj ,
0
where
3 3 1
B̂2,j = wj2 + wj1 , B̂1,j = wj0 + wj1 , B̂0,j = − wj1 .
8 4 8
R t1
We shall approximate the integral 0 (t2j+1 − u)−1−α y(u) du by
Z t1
−1−α
(t2j+1 − u) y(u) du ≈ t1 B̂0,j y2 + B̂1,j y1 + B̂2,j y0 , (5.3.25)
0
82
Z t1
(t2j+1 − u)−1−α y(u) du − t1 B̂0,j y2 + B̂1,j y1 + B̂2,j y0
0
2
X
(1) (2)
= t1 B̂k,j e2−k + Rj + R2 ≤ Ct1 (Ch3 + Ch3−α ) ≤ Ch4−α .
k=0
After obtaining y1 and y2 by (5.3.20) and (5.3.21), we then use (5.3.11), (5.3.12) and
(5.3.25) to calculate y3 , y4 , . . . , y2M .
To determine the predictor formula for yjP , we approximate f (u, y(u)) in (5.4.2) by the
piecewise constant function P0 (u) on the nodes 0 = t0 < t1 < t2 , · · · < tj and obtain
Z tj Z tj j
X
α−1 α−1
(tj − u) f (u, y(u)) du ≈ (tj − u) P0 (u) du = bk,j f (tk , y(tk )),
0 0 k=0
where bk,j , k = 0, 1, 2, . . . , j −1 are some weights, see [29]. The predictor formula is defined
by
j−1
(1) tj 1 X
yjP = y0 + y0 + bk,j f (tk , yk ), j = 1, 2, . . . , M. (5.4.4)
1! Γ(α) k=0
j = 1, 2, . . . , M , where
(u − t2l+1 )(u − t2l+2 )
P2 (u) = f (t2l , y(t2l ))
(t2l − t2l+1 )(t2l − t2l+2 )
(u − t2l )(u − t2l+2 )
+ f (t2l+1 , y(t2l+1 ))
(t2l+1 − t2l )(t2l+1 − t2l+2 )
(u − t2l )(u − t2l+1 )
+ f (t2l+2 , y(t2l+2 )), (5.4.7)
(t2l+2 − t2l )(t2l+2 − t2l+1 )
and
(1)
f (u, y(u)) − P2 (u) = Rl ,
where
(2)
f (u + h, y(u + h)) − Q2 (u) = Rl ,
where
and
Z t2j 2j
X
(t2j − u)α−1 Q2 (u) du = ck,2j f (tk+1 , y(tk+1 )),
0 k=0
85
where
1
F (0),
2 0
if k = 0,
1
hα F (l) + 12 F2 (l − 1), if k = 2l, l = 1, 2, . . . , j − 1,
2 0
ck,2j =
α(α + 1)(α + 2)
−F1 (l), if k = 2l + 1, l = 0, 1, 2, . . . , j − 1,
1 F (j − 1), if k = 2j,
2 2
and
F0 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
α+1 α+1
+ α(α + 2) 2(2j) − (2l + 1) − (2l + 2) (2j − 2l − 2) − (2j − 2l)
α α
+ (α + 1)(α + 2) (2j − 2l − 1)(2j − 2l − 2) (2j − 2l) − (2j − 2l − 2) ,
F1 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
+ α(α + 2) 2(2j) − (2l) − (2l + 2) (2j − 2l − 2)α+1 − (2j − 2l)α+1
α α
+ (α + 1)(α + 2) (2j − 2l)(2j − 2l − 2) (2j − 2l) − (2j − 2l − 2) ,
F2 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
+ α(α + 2) 2(2j) − (2l) − (2l + 1) (2j − 2l − 2)α+1 − (2j − 2l)α+1
+ (α + 1)(α + 2) (2j − 2l)(2j − 2l − 1) (2j − 2l)α − (2j − 2l − 2)α .
and, with j = 0, 1, 2, . . . , M − 1,
Z t1
(1) t2j+1 1
y2j+1 = y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2j−1
1 X P
+ ck,2j f (tk+1 , yk+1 ) + c2j+1,2j+1 f (t2j+1 , y2j+1 ) , (5.4.10)
Γ(α) k=0
The remaining problem is the determination of the predictor formula required to calcu-
P P
late y2j and y2j+1 . The idea is the same as the one described above: we replace f (u, y(u))
86
and f (u + h, y(u + h)) of the integrals on the right-hand sides of equations (5.4.5) and
(5.4.6), respectively, by the piecewise linear interpolation polynomials and obtain, with
j = 1, 2, . . . , M,
2j−1
(1) t2j 1 X
P PP
y2j = y0 + y0 + ak,2j f (tk , yk ) + a2j,2j f (t2j , y2j ) , (5.4.11)
1! Γ(α) k=0
and, with j = 0, 1, 2, . . . , M − 1,
2j
(1) t2j+1 1 X
P PP
y2j+1 = y0 + y0 + ak,2j+1 f (tk , yk ) + a2j+1,2j+1 f (t2j+1 , y2j+1 ) , (5.4.12)
1! Γ(α) k=0
Similarly, to calculate ykP P , we replace f (u, y(u)) and f (u + h, y(u + h)) in the integrals
in (5.4.5) and (5.4.6), respectively by the piecewise constants and obtain
2j−1
PP (1) t2j 1 X
y2j = y0 + y0 + bk,2j f (tk , yk ), j = 1, 2, . . . , M, (5.4.13)
1! Γ(α) k=0
and
2j
PP (1) t2j+1 1 X
y2j+1 = y0 + y0 + bk,2j+1 f (tk , yk ), j = 1, 2, . . . , M − 1, (5.4.14)
1! Γ(α) k=0
hα
bk,n+1 = (n + 1 − k)α − (n − k)α . (5.4.15)
α
Our basic fractional Adams method, is completely described now by equations (5.4.9)
Rt
- (5.4.14). Assume that the starting value y1 and the starting integral 0 1 (t2j+1 −
u)−1−α f (u, y(u)) du in (5.4.10) can be approximate by using some numerical methods
and satisfy the required accuracy, Yan, Pal and Ford [93] proved the error estimates for
yl − y(tl ), l = 1, 2, . . . , 2M .
87
In this subsection we shall consider how to approximate the starting value y1 and the
Rt
initial integral 0 1 (t2j+1 − u)−1−α y(u) du in (5.4.10). We will follow the idea in Cao and
Xu [9].
At t = t1 , we have
Z t1
(1) t1 1
y(t1 ) = y0 + y0 + (t1 − u)α−1 f (u, y(u)) du.
1! Γ(α) 0
Approximating g(u) = f (u, y(u)) on [0, t1 ] by the following quadratic interpolation poly-
nomial
(u − t 1 )(u − t1 ) (u − t0 )(u − t1 )
2
P2 (u) = f (0, y(0)) + f (t 1 , y(t 1 ))
(t0 − t 1 )(t0 − t1 ) (t 1 − t0 )(t 1 − t1 ) 2 2
2 2 2
(u − t0 )(u − t 1 )
+ 2
f (t1 , y(t1 )) for u ∈ [t0 , t1 ],
(t1 − t0 )(t1 − t 1 )
2
where
3 3 1
f (t 1 , y(t 1 )) ≈ f (t0 , y(t0 )) + f (t1 , y(t1 )) − f (t2 , y(t2 )),
2 2 8 4 8
where
3 3 1
(2)
f (t 1 , y(t 1 )) − f (t0 , y(t0 )) + f (t1 , y(t1 )) − f (t2 , y(t2 )) = R1 (u),
2 2 8 4 8
(2) 1 000
and R1 (u) = 16
f (c2 )h3 , c2 ∈ (0, t2 ).
We then obtain
Z t1
(1) t1 1
y(t1 ) = y0 + y0 + (t1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2
(1) t1 1 X
= y0 + y0 + B̂i f (ti , y(ti ))
1! Γ(α) i=0
Z t1 Z t1
α−1 (1) (2)
+ (t1 − u) R1 (u) du + (t1 − u)α−1 R1 (u) du, (5.4.16)
0 0
88
where
Z t1
α−1
(u − t 1 )(u − t1 ) 3
Z t1
(u − t0 )(u − t1 )
B̂0 = (t1 − u) 2
du + (t1 − u)α−1 du,
0 (t0 − t 1 )(t0 − t1 ) 8 0 (t 1 − t0 )(t 1 − t1 )
2 2 2
t1 t1
(u − t0 )(u − t1 ) (u − t0 )(u − t1 )
Z Z
3
B̂1 = (t1 − u)α−1 du + (t1 − u)α−1 du,
4 0 (t 1 − t0 )(t 1 − t1 ) 0 (t1 − t0 )(t1 − t 1 )
2 2 2
t1
(u − t0 )(u − t1 )
Z
1
B̂2 = − (t1 − u)α−1 du.
8 0 (t 1 − t0 )(t 1 − t1 )
2 2
Let yl ≈ y(tl ), l = 0, 1, 2, denote the approximations of y(tl ) and we define the following
numerical method for y1 .
2
(1) t1 1 X
y1 = y0 + y0 + B̂i f (ti , yi ). (5.4.17)
1! Γ(α) i=0
At t = t2 , we have
Z t2
(1) t2 1
y(t2 ) = y0 + y0 + (t2 − u)α−1 f (u, y(u)) du.
1! Γ(α) 0
Approximating g(u) = f (u, y(u)) on [0, t2 ] by the following quadratic interpolation poly-
nomial
(u − t1 )(u − t2 ) (u − t0 )(u − t2 )
P2 (u) = f (0, y(0)) + f (t1 , y(t1 ))
(t0 − t1 )(t0 − t2 ) (t1 − t0 )(t1 − t2 )
(u − t0 )(u − t1 )
+ f (t2 , y(t2 )), for u ∈ [t0 , t2 ],
(t2 − t0 )(t2 − t1 )
where
(1) f 000 (c3 )
f (u, y(u)) − P2 (u) = R2 (u) = (u − t0 )(u − t1 )(u − t2 ), c3 ∈ (t0 , t2 ).
3!
We obtain
Z t2
(1) t2 1
y(t2 ) = y0 + y0 + (t2 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2 Z t2
(1) t2 1 X
α−1 (1)
= y0 + y0 + B̃i f (ti , y(ti )) + (t2 − u) R2 (u) du , (5.4.18)
1! Γ(α) i=0 0
where
t2
(u − t1 )(u − t2 )
Z
B̃0 = (t2 − u)α−1 du,
0 (t0 − t1 )(t0 − t2 )
Z t2
(u − t0 )(u − t2 )
B̃1 = (t2 − u)α−1 du,
0 (t1 − t0 )(t1 − t2 )
Z t2
(u − t0 )(u − t1 )
B̃2 = (t2 − u)α−1 du.
0 (t2 − t0 )(t2 − t1 )
89
(1) t2 1
y2 = y0 + y0 + B̃0 f (t0 , y0 ) + B̃1 f (t1 , y1 ) + B̃2 f (t2 , y2 ) , (5.4.19)
1! Γ(α)
Rt
Similarly, we can approximate the starting integral 0 1 (t2j+1 − u)α−1 f (u, y(u)) du in
(5.4.10) by using the same idea as in (5.4.16) and obtain
Z t1
(t2j+1 − u)α−1 f (u, y(u)) du
0
= B̂0,j f (t0 , y0 ) + B̂1,j f (t1 , y1 ) + B̂2,j f (t2 , y2 )
Z t1 Z t1
α−1 (1) (2)
+ (t2j+1 − u) R1 (u) du + (t2j+1 − u)α−1 R1 (u) du,
0 0
(5.4.20)
where
Z t1
α−1
(u − t 1 )(u − t1 ) 3
Z t1
(u − t0 )(u − t1 )
B̂0,j = (t2j+1 − u) 2
du + (t2j+1 − u)α−1 du,
0 (t0 − t 1 )(t0 − t1 ) 8 0 (t 1 − t0 )(t 1 − t1 )
2 2 2
t1 t1
(u − t0 )(u − t1 ) (u − t0 )(u − t1 )
Z Z
3
B̂1,j = (t2j+1 − u)α−1 du + (t2j+1 − u)α−1 du,
4 0 (t 1 − t0 )(t 1 − t1 ) 0 (t1 − t0 )(t1 − t 1 )
2 2 2
t1
(u − t0 )(u − t1 )
Z
1
B̂2,j =− (t2j+1 − u)α−1 du.
8 0 (t 1 − t0 )(t 1 − t1 )
2 2
1 (1) t1
y1 = y0 + y0 +
B̂0 f (t0 , y0 ) + B̂1 f (t1 , y1 ) + B̂2 f (t2 , y2 ) , (5.4.21)
1! Γ(α)
(1) t2 1
y2 = y0 + y0 + B̃0 f (t0 , y0 ) + B̃1 f (t1 , y1 ) + B̃2 f (t2 , y2 ) , (5.4.22)
1! Γ(α)
and, with j = 1, 2, . . . , M ,
2j−1
(1) t2j 1 X
P
y2j = y0 + y0 + ck,2j f (tk , yk ) + c2j,2j f (t2j , y2j ) , (5.4.23)
1! Γ(α) k=0
and, with j = 0, 1, 2, . . . , M − 1,
(1) t2j+1
1
y2j+1 = y0 + + y0 B̂0,j f (t0 , y0 ) + B̂1,j f (t1 , y1 ) + B̂2,j f (t2 , y2 )
1! Γ(α)
2j−1
1 X P
+ ck,2j f (tk+1 , yk+1 ) + c2j,2j f (t2j+1 , y2j+1 ) , (5.4.24)
Γ(α) k=0
90
P P
where the predictor terms y2j and y2j+1 can be obtained by (5.4.11) and (5.4.12). We
then have the following error estimates.
C α
Theorem 5.4.1. Let 0 < α ≤ 2 and assume that 0 Dt y ∈ C 3 [0, T ] for some suitable T .
Let y(tk ) and yk , k = 0, 1, 2, . . . , 2M, t2M = T be the solutions of (5.4.1) and (5.4.21) -
(5.4.24). Then, for sufficiently small h, there exists a positive constant C0 > 0 such that
C0 h1+2α , if 0 < α ≤ 1,
max |y(tk ) − yk | ≤
0≤k≤2M C0 h3 , if 1 < α ≤ 2.
In this section we will consider two examples for solving the linear differential equation
(5.2.6)-(5.2.7) by using the algorithm (5.3.11)-(5.3.12). Theorem 5.3.2 shows that the
approximate solution y2M has the asymptotic expansion
µ∗
m+1
X X
y(t2M ) − y2M = cµ hµ−α + c∗µ h2µ + o(hm+1−α ), as h → 0,
µ=3 µ=2
where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.
Let A = y(t2M ), t2M = 1 and assume that A0 (h) = y2M is the approximate solution of
A with stepsize h. We then have by Theorem 5.3.2, with 0 < α < 1,
|A − A0 (h)| = O(hλ1 ).
Let A0 (h/2) denote the approximate solution of A with stepsize h/2. Then we have
2λ1 A = 2λ1 A0 (h/2) + 2λ1 a1 (h/2)λ1 + 2λ1 a2 (h/2)λ2 + 2λ1 a3 (h/2)λ3 + . . . . (5.5.3)
91
where
2λ1 A0 (h/2) − A0 (h/2)
A1 (h) = ,
2λ1 − 1
which implies that A1 (h) is an approximation of A with the convergence order O(hλ2 ),
that is
|A − A1 (h)| = O(hλ2 ).
Continuing these processes, we obtain the high order approximations A2 (h), A3 (h), . . .
of A. In Table 5.5.1, we proceed by setting up a triangle array ( a so-called Romberg
tableau) of approximate values for A.
A0 (h)
A0 (h/2) A1 (h)
A0 (h/22 ) A1 (h/2) A2 (h)
A0 (h/23 ) A1 (h/22 ) A2 (h/2) A3 (h)
.. .. .. ..
. . . . ...
C α 3!
0 Dt y(t) + y(t) = t3 + t3−α , t ∈ [0, 1], (5.5.5)
Γ(4 − α)
y(0) = 0, (5.5.6)
92
Step size Error of the method 1st extra. error 2nd extra. error
1/10 3.3237e-004
1/20 5.1939e-005 9.3242e-007
1/40 8.0506e-006 6.8029e-008 4.0268e-009
1/80 1.2432e-006 5.0402e-009 2.1066e-010
1/160 1.9164e-007 3.7765e-010 1.1023e-011
1/320 2.9516e-008 2.8511e-011 5.9381e-013
Table 5.5.3: Orders (“EOC ”) for equations (5.5.5)-(5.5.6) with α = 0.3, taken at t = 1.
93
Step size Error of the method 1st extra. error 2nd extra. error
1/10 1.1296e-003
1/20 2.0412e-004 5.3779e-006
1/40 3.6454e-005 4.5025e-007 2.7527e-008
1/80 6.4759e-006 3.8539e-008 1.3800e-009
1/160 1.1475e-006 3.3431e-009 6.9354e-011
1/320 2.0310e-007 2.9225e-010 3.5604e-012
Table 5.5.5: Orders (“EOC ”) for equations (5.5.5)-(5.5.6) with α = 0.5, taken at t = 1.
Step size Error of the method 1st extra. error 2nd extra. error
1/10 7.6048e-003
1/20 1.8568e-003 1.0809e-004
1/40 4.4205e-004 1.1665e-005 1.0659e-006
1/80 1.0412e-004 1.3139e-006 5.2739e-008
1/160 2.4402e-005 1.5070e-007 2.8748e-009
1/320 5.7054e-006 1.7432e-008 1.6257e-010
Table 5.5.7: Orders (“EOC ”) for equations (5.5.5)-(5.5.6) with α = 0.9, taken at t = 1.
Step size Error of the method 1st extra. error 2nd extra. error
1/10 1.4571e-004
1/20 2.3118e-005 8.2097e-007
1/40 3.6127e-006 6.5021e-008 2.0039e-009
1/80 5.6030e-007 5.1186e-009 1.2514e-010
1/160 8.6565e-008 4.0106e-010 7.8051e-012
1/320 1.3348e-008 3.1315e-011 4.9268e-013
Table 5.5.9: Orders (“EOC ”) for equations (5.5.7)-(5.5.8) with α = 0.3, taken at t = 1.
Step size Error of the method 1st extra. error 2nd extra. error
1/10 5.0921e-004
1/20 9.2881e-005 3.4801e-006
1/40 1.6676e-005 3.1186e-007 4.6709e-009
1/80 2.9708e-006 2.7831e-008 2.9143e-010
1/160 5.2721e-007 2.4764e-009 1.8053e-011
1/320 9.3380e-008 2.1991e-010 1.1328e-012
Table 5.5.11: Orders (“EOC ”) for equations (5.5.7)-(5.5.8) with α = 0.5, taken at t = 1.
In this subsection we will consider one example for solving (5.2.4)-(5.2.5) by using the
algorithm (5.4.9)-(5.4.14). We will numerically check that, with 1 < α ≤ 2,
Step size Error of the method 1st extra. error 2nd extra. error
1/10 3.5534e-003
1/20 8.5873e-004 3.8951e-005
1/40 2.0381e-004 4.5703e-006 3.1078e-008
1/80 4.7950e-005 5.3459e-007 1.7728e-009
1/160 1.1233e-005 6.2442e-008 1.0563e-010
1/320 2.6257e-006 7.2882e-009 6.3909e-012
Table 5.5.13: Orders (“EOC ”) for equations (5.5.7)-(5.5.8) with α = 0.9, taken at t = 1.
where λ1 = 2 + α, λ2 = 4, λ3 = 3 + α, . . . .
tableau with α = 1.3, 1.5, 1.9. In all cases of α under consideration, we observe that the
first column converges as h2+α . The second column converges as h4 and the last column
converges as h3+α . We also observe that when α is close to 2, the convergence seems to be
even a bit faster. But when α is close to 1, the convergence is a bit slower than expected.
Step size Error of the method 1st extra. error 2nd extra. error 3rd extra error
1/10 7.1066e-004
1/20 6.8623e-005 3.9303e-006
1/40 5.6000e-006 1.5219e-006 1.3613e-006
1/80 4.3070e-007 1.5346e-007 6.2236e-008 7.2391e-009
1/160 3.2640e-008 1.2343e-008 2.9345e-009 2.3700e-010
1/320 2.5021e-009 9.0367e-010 1.4107e-010 8.3232e-012
Step size The method 1st extrapolation 2nd extrapolation 3rd extrapolation
1/10
1/20 3.37
1/40 3.62 1.39
1/80 3.70 3.31 4.45
1/160 3.72 3.64 4.41 4.93
1/320 3.71 3.77 4.38 4.83
Table 5.5.15: Orders (“EOC ”) for equation (5.5.9) with α = 1.3, taken at t = 1.
98
Step size Error of the method 1st extra. error 2nd extra. error 3rd extra error
1/10 1.3107e-003
1/20 1.0256e-004 1.4581e-005
1/40 7.2525e-006 1.9886e-006 1.1491e-006
1/80 4.9046e-007 1.6518e-007 4.3614e-008 7.5021e-009
1/160 3.2450e-008 1.1957e-008 1.7426e-009 1.9344e-010
1/320 2.1236e-009 8.1682e-010 7.4128e-011 3.0170e-012
Step size The method 1st extrapolation 2nd extrapolation 3rd extrapolation
1/10
1/20 3.68
1/40 3.82 2.87
1/80 3.89 3.59 4.72
1/160 3.92 3.79 4.65 5.28
1/320 3.93 3.87 4.56 6.00
Table 5.5.17: Orders (“EOC ”) for equation (5.5.9) with α = 1.5, taken at t = 1.
Step size Error of the method 1st extra. error 2nd extra. error 3rd extra error
1/10 1.9057e-003
1/20 1.2585e-004 1.9355e-006
1/40 8.0391e-006 4.1927e-007 3.1819e-007
1/80 5.0764e-007 3.3082e-008 7.3362e-009 3.4360e-009
1/160 3.1910e-008 2.2452e-009 1.8944e-010 5.8225e-011
1/320 2.0046e-009 1.4248e-010 2.2910e-012 4.1943e-012
Step size The method 1st extrapolation 2nd extrapolation 3rd extrapolation
1/10
1/20 3.92
1/40 3.97 2.21
1/80 3.99 3.66 5.44
1/160 3.99 3.88 5.28 5.88
1/320 3.99 3.98 6.36 3.80
Table 5.5.19: Orders (“EOC ”) for equation (5.5.9) with α = 1.9, taken at t = 1.
Chapter 6
6.1 Introduction
Space fractional derivatives are used to model anomalous diffusion or dispersion, a phe-
nomenon observed in many problems, where particles spread faster than the classical
models predict. When a fractional derivative replaces the second derivative in a diffusion
or dispersion model, it leads to enhanced diffusion (also called superdiffusion), see Meer-
schaert and Tadjeran [66]. Space-fractional diffusion equations have been investigated
by West and Seshadri [91] and Gorenflo and Mainardi [48] and Gorenflo [47] . A linear
interpolation polynomial was used to approximate the Hadamard integral generated by
fractional derivative and the rate of the convergence of the proposed numerical method
is O(h2−α ) [26].
In this chapter we will discuss a finite difference method for solving space-fractional
partial differential equation. The space-fractional derivatives are the left-handed and
right-handed Riemann-Liouville fractional derivatives which can be expressed by using
the Hadamard finite-part integrals.
We will examine the stability, consistency and convergence of the proposed finite dif-
ference method. The Hadamard finite-part integrals are approximated by using piecewise
quadratic interpolation polynomials and a numerical approximation scheme of the space-
fractional derivative with convergence order O(∆x3−α ) (1 < α < 2) is obtained. A shifted
100
101
implicit finite difference method is introduced for solving two-sided space-fractional partial
differential equations and we prove that the order of convergence of the finite difference
method is O(∆t + ∆xmin(3−α,β) ), 1 < α < 2, β > 0, where ∆t, ∆x denote the time and
space stepsizes, respectively, and β is related to the smoothness of the exact solution u.
ut (t, x) = C+ (t, x) R α R α
0 Dx u(t, x), +C− (t, x) x D1 u(t, x) + f (t, x), 0 < x < 1, (6.2.1)
Here the function f (t, x) is a source/sink term. The functions C+ (t, x) ≥ 0 and C− (t, x) ≥
0 may be interpreted as transport related coefficients. The addition of a classical advective
term −ν(t, x) ∂u(t,x)
∂x
in (6.2.1) does not impact the analysis performed in this chapter,
and has been omitted to simplify the notation. The left-handed fractional derivative
R α
0 Dx f (x) and right-handed fractional derivative R α
x D1 f (x) in (6.2.1) are Riemann-Liouville
There are several ways to approximate the Riemann- Liouville fractional derivative.
Let 0 = x0 < x1 < · · · < xj < · · · < xM = 1 be a partition of [0, 1] and ∆x the stepsize.
Based on the definition of the Grünwald-Letnikov derivative, one can approximate the
left-handed and right-handed Riemann-Liouville fractional derivatives by see [66])
j
X (α)
R α
0 Dx f (xj ) = ∆x−α wk f (xj−k ) + O(∆x), (6.2.6)
k=0
102
and
M −j
X (α)
R α
x D1 f (xj ) = ∆x−α wk f (xj+k ) + O(∆x), (6.2.7)
k=0
(α)
where wk are some weights and the order of convergence in (6.2.6) or (6.2.7) is O(∆x)
for any α > 0. Meerschaert and Tadjeran [64] proposed finite difference approximations
for fractional advection-dispersion flow equations. They used the Grünwald method to
approximate the space-fractional derivative and proved that the standard finite difference
method is unconditionally unstable, but the shifted finite difference method is uncondi-
tionally stable.
Lubich [53] obtained approximations of order 2 - 6 in the form of (6.2.6), where the
(α)
coefficients wk are just the coefficients of the Taylor series expansions of some generating
(α)
functions wl (z), l = 2, 3, 4, 5, 6. The L2 scheme and its modification L2C scheme are
introduced in Oldham and Spanier [72], Lynch, et al. [62] as follows. Note that, with
1 < α < 2,
On each subinterval [xl , xl+1 ], one approximates the integral by using the linear interpo-
ξ−xl+1 00 ξ−xl
lation polynomial P1 (ξ) = xl −xl+1
f (xl ) + xl+1 −xl
f 00 (xl+1 ) and obtains, with some weights
w̄k,j , k = 0, 1, 2, . . . , j,
j−1 Z xl+1 j
1 X X
C α
0 Dx f (xj ) ≈ (xj − ξ)1−α P1 (ξ) dξ = ∆x2−α w̄k,j f 00 (xk ).
Γ(2 − α) l=0 xl k=0
and third order approximations for the Grünwald and shifted Grünwald formulae with
weighted averages of Caputo derivatives.
Let us review some numerical methods for solving space-fractional partial differen-
tial equations. There are many different numerical methods for solving space-fractional
partial differential equations in literature: Choi at al. [12] applied the backward Euler fi-
nite difference method with the right-shifted Grünward formula for the Riemann-Liouville
space fractional derivative term and proved the existence using Leray-Schauder fixed point
theorem and finally the convergence order O(∆x + ∆t) are considered. By using shifted
Grünwald-Letnikov formulae (6.2.6) and (6.2.7), Meerschaert and Tadjeran [66] intro-
duced a finite difference method for solving two-sided space-fractional partial differential
equations (6.2.1)- (6.2.3) and proved that the convergence order of spatial discretization is
O(∆x). Meerschaert and Tadjeran (2004) [64] also considered the finite difference method
for solving the 1D fractional advection-dispersion equation, with 1 < α < 2,
by using the shifted Grünwald-Letnikov formula on a finite domain and they proved that
the convergence order of spatial discretization is O(∆x). Tadjeran, Meerschaert and
Scheffler [88] and Tadjeran and Meerschaert [89] applied the shifted Grünwald-Letnikov
formula and extrapolation techniques to fractional diffusion equations in 1D and 2D and
104
obtained a second-order accurate finite difference method. Liu et al. [61] transformed
the fractional advection-dispersion equation into a system of ordinary differential equa-
tions, which was then solved using backward difference formulae. Chen and Liu [11]
used a technique combining the alternating direction implicit-Euler method with Richard-
son extrapolation to establish an unconditionally stable second-order accuracy difference
method to approximate a 2D fractional advection-dispersion equation with variable co-
efficients on a finite domain. Podlubny et al. [77] developed a matrix approach to dis-
cretize fractional diffusion equations with various combinations of time-space-fractional
derivatives. Shen et al. [79, 80] presented explicit and implicit difference approxima-
tions for the Riesz fractional advection-dispersion equations and the space-time Riesz-
Caputo fractional advection-dispersion equations. Shen et al. [81] considered a novel
numerical approximation for the space fractional advection-dispersion equation. See also
[2, 13, 60, 65, 82, 83, 85, 94, 86].
There are other numerical methods for solving space-fractional partial differential
equations: the finite element methods, see [20, 21, 38, 39, 37, 40] and the spectral methods
[57, 58].
In this chapter, we will use the idea in Diethelm [26] to define a finite difference
method for solving (6.2.1)- (6.2.3), see recent works for this method [93, 43, 45, 46]. We
first express the fractional derivative by using the Hadamard finite-part integral, i.e., with
1 < α < 2,
x x
d2
Z I
1 1
R α
0 Dx f (x) = 1−α
(x − ξ) f (ξ) dξ = (x − ξ)−α−1 f (ξ) dξ.
Γ(2 − α) dx2 0 Γ(−α) 0
Based on these approximation schemes, we define a shifted finite difference method for
solving (6.2.1)-(6.2.3). We proved that the convergence order of the numerical method is
O(∆t + ∆xmin(3−α,β) ), 1 < α < 2, β > 0.
105
Denoting g(w) = f (xj − xj w) and substituting g(w) in (6.3.2) by the following linear
interpolation polynomial P1 (w) on [ l−1
j
, jl ], l = 1, 2, . . . , j,
l l−1
w− j
l − 1 w− j l
P1 (w) = l−1 l
g + l l−1
g ,
j
− j
j j
− j
j
R α
we obtain an approximation to 0 Dx f (xj ), 1 < α < 2,
j
X
R α −α
0 Dx f (xj ) = ∆x wk,j f (xj−k ) + O(∆x2−α ), (6.3.3)
k=0
where
1, for k = 0,
21−α − 2,
for k = 1,
Γ(2 − α)wk,j =
(k + 1)1−α − 2k 1−α + (k − 1)1−α , for k = 2, 3, . . . , j − 1,
−j 1−α + (j − 1)1−α − (α − 1)j −α , for k = j.
106
Lemma 6.3.1. Let 1 < α < 2. The coefficients wk,j in (6.3.3) satisfy
Proof. From the properties of wkj it is obvious that w1,j < 0 and wk,j > 0 We can show
that the sum of coefficients wkj are always negative. For example, Let, j=3 we have,
b0 , k = 0,
b −b ,
k = 1,
0 1
Γ(2 − α)wk3 =
b2 − b1 , k = 2,
b −b ,
k = 3,
3 2
Therefore,
3
X
wk3 = b0 + (b1 − b0 ) + (b2 − b1 ) + (b3 − b2 )
k=0
= b3 = (1 − α)3−α < 0. (6.3.4)
In general, we have
j
X
wkj = b0 + (b1 − b0 ) + (b2 − b1 ) + · · · + (bj − bj−1 ) + (bj+1 − bj )
k=0
= bj+1 = (1 − α)(j + 1)−α < 0. (6.3.5)
j
X M −j
X
−1 −α
∆t (un+1
j − unj ) − (∆x) wk,j unj−k + wk,M −j unj+k
k=0 k=0
j
X M −j
X
−1 −α
Ujn+1 Ujn n n
+ fjn ,
∆t − = ∆x wk,j Uj−k + wk,M −j Uj+k (6.3.8)
k=0 k=0
n
with U0n = UM = 0, and Uj0 = u0 (xj ), j = 0, 1, 2, . . . , M − 1. Here the weights wk,j and
wk,M −j , are given in (6.3.3).
Lemma 6.3.2. The explicit finite difference method (6.3.8) is unconditionally unstable
Assume that we have some errors in the starting values Uj0 , i.e.,
To consider the stability, we assume that only the term Ui0 , for some fixed i, has the error,
and other terms have no errors, i.e.,
0j = 0, j 6= i.
Then we have
i
X M
X −i
Ūi1 = 1 + w0i λ + w0,(M −j) λ Ūi0 + λ 0 0
+ ∆tfi0 . (6.3.10)
wki Ūi−k +λ wk,M −i Ūi+k
k=1 k=1
That is, the error is amplified by the factor µi = 1 + w0i λ + w0,(M −i) when the finite
difference equation is advanced by one time step. After n time steps, one may write
n
ni = 1 + w0i λ + w0,(M −i) 0i .
Note that, by Lemma 6.3.1, w0i = 1/Γ(2 − α) > 0, and w0,(M −i) = 1/Γ(2 − α) > 0 we
have 1 + w0i λ + w0,(M −i) > 1. Thus |ni | → ∞ as n → ∞, which implies that the method
is unstable.
j
X M −j
X
−1 −α
Ujn+1 Ujn n+1 n+1
+ fjn+1 ,
∆t − = ∆x wk,j Uj−k + wk,M −j Uj+k (6.3.11)
k=0 k=0
with U0n = UM
n
= 0, and Uj0 = u0 (xj ), j = 0, 1, 2, . . . , M − 1. Here the weights wk,j and
wk,M −j , are given in (6.3.3).
Lemma 6.3.3. The implicit finite difference method (6.3.11) is unconditionally unstable
Although this is an implicit Euler method, the problem can be solved explicitly by a
left-to -right sweep across the x domain due to the Dirichlet boundary condition at the
left boundary. For example, the value U31 can be explicitly determined by U01 , U11 , U21 and
U30 . Now let us consider the stability. Let 0j = Ūj0 − Uj0 , j = 0, 1, 2, . . . , M be the error
generated by Uj0 . Assume that 0j = 0, j 6= i, that is Ui0 is the only term that has an error
for fixed i. Let n = 0, we have
i M −i
1 1 X X
Ui1 = 0
U + λ 1
wki Ui−k +λ 1 1
wk,M −i Ui+k +∆tfi ,
1 − w0i λ − w0,M −i λ i 1 − w0i λ − w0,M −i λ k=1 k=1
(6.3.13)
and
i M −i
1 1 X X
Ūi1 = 0
Ūi + λ 1
wki Ūi−k +λ 1
wk,M +i Ūi+k +∆fi1 .
1 − w0i λ − w0,M −i λ 1 − w0i λ − w0,M −i λ k=1 k=1
(6.3.14)
1
1i = 0 .
1 − w0i λ − w0,M −i λ i
Note that, by Lemma 6.3.1, w0i > 0, and w0,M −i > 0 which implies that 1−w0i λ−w0,M −i <
1 and therefore
1
> 1.
1 − w0i λ − w0,M −i
We now introduce the shifted Diethelm’s FDM for space-fractional PDEs. At the node
(tn+1 , xj ), we may write the equation (6.2.1) into the shifted form, with j = 1, 2, . . . , M −1,
ut (tn+1 , xj ) − R α
0 Dx u(tn+1 , xj+1 ) +R D
x 1
α
u(tn+1 , xj−1 ) = fjn+1 + σjn+1 , (6.3.15)
110
Discretizing ut (tn+1 , xj ) at tn+1 by using the backward Euler method and discretizing
R α R α
0 Dx u(tn+1 , xj+1 ) and x D1 u(tn+1 , xj−1 ) by using (6.3.3) and (6.3.6) at xj+1 and xj−1
respectively, we get, with unj = u(tn , xj ), fjn = f (tn , xj ),
j+1
X M −(j−1)
X
−1 −α
∆t (un+1
j − unj ) − (∆x) wk,j+1 un+1
j+1−k + wk,M −(j−1) un+1
j−1+k
k=0 k=0
j+1
X M −(j−1)
X
−1 −α
Ujn+1 −Ujn n+1 n+1
+fjn+1 , (6.3.17)
∆t = ∆x wk,j+1 Uj+1−k + wk,M −(j−1) Uj−1+k
k=0 k=0
with U0n+1 = UM
n+1
= 0, and Uj0 = u0 (xj ), j = 0, 1, 2, . . . , M − 1. Here the weights wk,j+1
and wk,M −(j−1) , are given in (6.3.3).
where
1 − λw12 − λw1M −λw02 − λw2M ... −λwM −1,M
−λw23 − λw0,M −1 1 − λw13 − λw1,M −1 ... −λwM −2,M −1
A= ,
−λw34 −λw24 − λw0,M −2 ... −λwM −3,M −2
.. .. .. ..
. . . .
−λwM −1,M −λwM −2,M −λw2,M − λw0,2 1 − λw1,M − λw12
and
U1n+1 U1n f1n+1
n+1 n+1
U2n
U2 f2
U n+1 =
..
,
n
U =
..
,
F n+1 =
..
.
. . .
n+1 n n+1
UM −1 UM −1 fM −1
x1
x2
Let µ denote an eigenvalue of A and ξ = 6= 0 the corresponding eigenvector,
..
.
xM −1
that is,
Aξ = µξ.
Denote
or
M −1
X xj
µ = aii + aij .
j=1,j6=i
xi
112
Note that
we have
xi+1 xi−1 x1
µ = 1 − λ w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
xi xi xi
xi−1 xi+1 xM −1
− λ w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wi,M −(i−1) .
xi xi xi
x Pi
Since xji < 1, j 6= i and, by Lemma 6.3.1, wk,i+1 > 0, k 6= 1 and k=0 wk,i+1 <
Pi+1 −α
P M −(i−1)−1 P M −(i−1)
k=0 wk,i+1 = (1 − α)(i + 1) < 0, and k=0 wk,M −(i−1) < k=0 wk,M −(i−1) =
(1 − α)(M − (i − 1))−α < 0, we have
xi+1 xi−1 x1
w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
xi xi xi
xi−1 xi+1 xM −1
+ w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wi,M −(i−1)
xi xi xi
< w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
+ w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wi,M −(i−1) < 0,
We may use the Gershgorin lemma to simplify the proof above. In fact, we have,
noting that w1,j < 0, wk,j > 0, k 6= 1, k = 0, 2, 3, . . . , j,
M
X −1
ri = |aik | = λ(w0,i+1 + w2,i+1 + w3,i+1 + · · · + wi,i+1 )
k=1,k6=i
for i = 1, 2, . . . , M − 1.
Since aii = 1 − λw1,i+1 − λw1,M −i+1 , i = 1, 2, . . . , M − 1, we have
aii − ri > 1, i = 1, 2, . . . , M − 1.
Thus all the eigenvalues of A are larger than or equal to 1, which implies that the matrix
A is invertible and there exists a matrix norm k · k such that kA−1 k ≤ 1. Hence the
numerical method (6.3.17) is unconditionally stable.
The proof of the Theorem 6.3.4 is complete.
We now consider the error estimates of the shifted finite difference method (6.3.17).
Theorem 6.3.5. Let u(tn+1 , xj ) and Ujn+1 be the solutions of (6.3.15) and (6.3.17), re-
spectively. Assume that u(t, x) satisfies the Lipschitz conditions, with some β > 0,
R α
0 Dx u(t, x) −R α β
0 Dx u(t, y) ≤ Cα |x − y| , (6.3.20)
R α
x D1 u(t, x) −R α β
x D1 u(t, y) ≤ Cα |x − y| . (6.3.21)
Then we have
j+1
X M −(j−1)
X
−1 −α
en+1 enj wk,j+1 en+1 wk,M −(j−1) en+1
∆t j − − ∆x j+1−k + j−1+k
k=0 k=0
= σjn+1 + τjn+1 .
Using Lemma 6.3.1 and assumptions (6.3.20)- (6.3.21), we have, with R = (∆t+∆xmin(2−α,β) ,
|e1 |∞ = sup |e1j | = |e1l | ≤ |e1l | 1 − λ(w0,l+1 + w1,l+1 + · · · + wl+1,l+1 )
j
− λ(w0,M −(l−1) + w1,M −(l−1) + · · · + wM −(l−1),M −(l−1) )
− λw0,M −(l−1) |e1l | − λw1,M −(l−1) |e1l | − · · · − λwM −(l−1),M −(l−1) |e1l |
− λw0,M −(l−1) |e1l−1 | − λw1,M −(l−1) |e1l | − · · · − λwM −(l−1),M −(l−1) |e1M |
− λw0,M −(l−1) e1l−1 − λw1,M −(l−1) e1l − · · · − λwM −(l−1),M −(l−1) e1M |
≤ |e0l | + ∆tR.
|e1 |∞ ≤ ∆tR.
For every j, we replace g(w) = f (x2j − x2j w) in the integral in (6.4.2) by piecewise
quadratic interpolation polynomials with the equispaced nodes 0, 2j1 , 2j2 , . . . , 2j
2j
. We then
have
I 1 I 1
−1−α
w g(w) dw = w−1−α P2 (w) dw + R2j (g), (6.4.3)
0 0
where P2 (w) is the piecewise quadratic interpolation polynomial of g(w) defined on the
equispaced nodes 0, 2j1 , 2j2 , . . . , 2j
2j
and R2j (g) is the remainder term.
2j+1
At the node x2j+1 = j = 1, 2, . . . , m − 1 we have
2m
,
I x2j+1
1
R α
0 Dx f (x2j+1 ) = (x2j+1 − ξ)−1−α f (ξ) dξ
Γ(−α) 0
Z x1
1
= (x2j+1 − ξ)−1−α f (ξ) dξ
Γ(−α) 0
I 2j
x−α
2j+1 2j+1
+ w−1−α f (x2j+1 − x2j+1 w) dw. (6.4.4)
Γ(−α) 0
116
where Q2 (w) is the piecewise quadratic interpolation polynomial of g(w) defined on the
1 2 2j
nodes 0, 2j+1 , 2j+1 , . . . , 2j+1 and R2j+1 (g) is the remainder term.
We have,
Lemma 6.4.1. [93]. Let 1 < α < 2 and let M = 2m where m is a fixed positive integer.
Let 0 = x0 < x1 < x2 < · · · < x2j < x2j+1 < · · · < xM = 1 be a partition of [0, 1]. Assume
that f (x) is a sufficiently smooth function. Then we have, with j = 1, 2, . . . , m,
2j
R α
x−α
2j
X
0 Dx f (x) = αl,2j f (x2j−l ) + R2j (f )
x=x2j Γ(−α) l=0
2j
−α
X x−α
2j
= ∆x wl,2j f (x2j−l ) + R2j (f ), (6.4.6)
l=0
Γ(−α)
and, with j = 1, 2, . . . , m − 1,
Z x1
1
R α
0 Dx f (x) = (x2j+1 − ξ)−1−α f (ξ) dξ
x=x2j+1 Γ(−α) 0
2j
x−α
2j+1
X
+ αl,2j+1 f (x2j+1−l ) + R2j+1 (f )
Γ(−α) l=0
2j
x−α
Z x1
1 −1−α −α
X 2j+1
= (x2j+1 − ξ) f (ξ) dξ + ∆x wl,2j+1 f (x2j+1−l ) + R2j+1 (f ),
Γ(−α) 0 l=0
Γ(−α)
(6.4.7)
117
where
and
Here the integral denotes the Hadamard finite-part integral [3]. We approximate g(ω) on
[0,1] by the quadratic interpolation polynomials g2 (ω), where
2k−1
(ω − 2j
)(ω − 2k
2j
) 2k − 2
g2 (ω) = 2k−2 2k−1 2k−2 2k
g( ) (6.4.8)
( 2j − 2j )( 2j − 2j ) 2j
(ω − 2k−2
2j
)(ω − 2k
2j
) 2k − 1
+ 2k−1 2k−2 2k−1 2k g( )
( 2j − 2j )( 2j − 2j ) 2j
(ω − 2k−2
2j
)(ω − 2k−1
2j
) 2k 2k − 2 2k
+ 2k 2k−2 2k 2k−1 g( ), f or ω∈[ , ], k = 1, 2, . . . , j.
( 2j − 2j )( 2j − 2j ) 2j 2j 2j
Let us now find the values of
I 1 I 2 Z 4 Z 2j
2j 2j 2j
−1−α
ω g2 (ω)dω = [ + +··· + ]ω −1−α g2 (ω)dω,
2 2j−2
0 0 2j 2j
H 2j2
where the integral 0 g2 (ω)ω −1−α dω is a Hadamard finite-part integral. By the definition
of the Hadamard finite-part integral, we get
I 2 g2 (0)( 2j2 )−α Z 2j −1−α Z ω 0
2
2j
−1−α
g2 (ω)ω dω = + ω [ g2 (y)dy]dω (6.4.9)
0 −α 0 0
Z 2
2−α 2j
= −α
g2 (0) + ω −1−α (g2 (ω) − g2 (0))dω
(−α)(2j) 0
−α Z 2 2
2 2j
−1−α (2j)
h 1 2
= −α
g(0) + ω (ω 2 − ( + )ω)g(0)
(−α)(2j) 0 2 2j 2j
2 2
(2j) 2 1 (2j) 1 2 i
+ (ω 2 − (0 + )ω)g( ) + (ω 2 − (0 + )ω)g( ) dω
−1 2j 2j 2 2j 2j
−α
2 (α + 2)
= g(0)
(−α)(−α + 1)(−α + 2)(2j)−α
22−α 1
+ g( )
(−α + 1)(−α + 2)(2j)−α 2j
−2−α α 2
+ g( )
(−α + 1)(−α + 2)(2j)−α 2j
119
Similarly, we have
Z 2k−2
2j
−α
(−α)(−α + 1)(−α + 2)(2j) g2 (ω)ω −1−α dω
2k
2j
1 2k − 2 2k − 1 1 2k
= F0 (k)g( ) + (−1)F1 (k)g( ) + F2 (k)g( ),
2 2j 2j 2 2j
where Fi (k), i = 0, 1, 2 and k = 1, 2, 3, . . . , j are defined as above.
The weights wl,2j have some special properties which are summarized in the following
Lemma 6.4.2 .
Lemma 6.4.2. Let 1 < α < 2. The coefficients wl,2j in (6.4.6) satisfy
Proof. It is easy to show that w0,2j > 0 and w1,2j < 0. We now prove that wk,2j > 0, k =
2, 3, . . . , 2j. We first show that
w2l−1,2j > 0, l = 2, 3, . . . , j.
Note that
Γ(3 − α)w2l−1,2j = 2 (2l − 2)−α+2 − (2l)−α+2 + 2(−α + 2) (2l − 2)−α+1 + (2l)−α+1 .
2n
Note that the sequence an = n!
is decreasing. Hence we see that, with 1 < α < 2,
I(m) > 0.
w2l,2j > 0, l = 1, 2, . . . , j − 1.
−α+2
2 −α+2 2 1
I(m) =m + (−2)(1 − ) + (−1)(−α + 2)(1 − )−α+1
m m m
2 −α+2 2 −α+1 1
+ 2(1 + ) + (−1)(−α + 2)(1 + )
m m m
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 23 · 2 22
= m−α+2 − +
m3 3! 2!
5 4
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 2 · 2 2
+ −
m5 5! 4!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 27 · 2 26
+ − + . . .
m7 7! 6!
3 2
1 2(−α + 2)(−α + 1)(−α) 2 · 2 2
= 1+α −
m m0 3! 2!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 25 · 2 24
+ −
m2 5! 4!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)(−α − 3)(−α − 4)
+
m4
27 · 2 26
− + ... .
7! 6!
Note that
2n · 2 2n−1 2n−1 2 · 2
− = − 1 ≤ 0, n ≥ 4.
n! (n − 1)! (n − 1)! n
121
Hence we get
21+α 1 h 2(−α + 2)(−α + 1)(−α) 23 · 2 22
I(m) ≥ −
m1+α 21+α m0 3! 2!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 25 · 2 24
+ −
22 5! 4!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)(−α − 3)(−α − 4) 27 · 2 26 i
+ − + . . .
24 7! 6!
1+α
2
= 1+α I(2), m = 2, 3, 4, . . . .
m
It is easy to show that, with 1 < α < 2,
I(2) = 2−α+2 3α − 6 + 2−α (6 + α) > 0.
Thus we get
I(m) > 0, m = 2, 3, 4, . . . .
−α+2
2 −α+1 1 2 −α+2 1
I(m) =m (3α − 6)(1 − ) − 2(1 − ) + (α − 2) + 2
m m m m
(−α + 2)(−α + 1) (−2) (−2)2
= (−3) + (−2)
m2 1! 2!
2
(−α + 2)(−α + 1)(−α) (−2) (−2)3
+ (−3) + (−2)
m3 2! 3!
(−α + 2)(−α + 1)(−α)(−α − 1) (−2)3 (−2)4
+ (−3) + (−2) + ....
m4 3! 4!
Note that
(−2)n (−2)( n + 1) (−2)n −2
(−3) + (−2) = (−3) + (−2)
n! (n + 1)! n! n+1
n
(−2) 4 −3n + 1
= (−3) + = (−2)n ,
n! n+1 (n + 1)!
122
I(m) < 0, m = 4, 6, 8, . . . .
and, with j = 0, 1, 2, . . . , m − 2,
Z xM
1
R α
x D1 f (x) = (ξ − x2j+1 )−1−α f (ξ) dξ
x=x2j+1 Γ(−α) xM −1
M −(2j+1)−1
−α
X x−α
2j+1
+ ∆x wl,M −(2j+1) f (x2j+1+l ) + R2j+1 (f ). (6.4.14)
l=0
Γ(−α)
n n
Let U2j ≈ u(tn , x2j ) and U2j+1 ≈ u(tn , x2j+1 ) denote the approximate solutions of
u(tn , x2j ) and u(tn , x2j+1 ), respectively. We define the following explicit numerical method
for solving (6.2.1) - (6.2.3).
2j
X M −2j
X
−1 n+1 n −α n
wk,M −2j un2j+k
∆t U2j − U2j = ∆x wk,2j U2j−k +
k=0 k=0
n
+ f2j , j = 1, 2, . . . , m − 1, (6.4.15)
2j+1 M −2j−1
X X
−1 n+1 n −α n n
∆t U2j+1 − U2j+1 = ∆x wk,2j+1 U2j+1−k + wk,M −2j−1 U2j+1+k
k=0 k=0
+ f¯2j+1
n
+ Qn2j , j = 0, 1, 2, . . . , m − 1, (6.4.16)
Lemma 6.4.3. The standard explicit numerical method (6.4.15) - (6.4.16) is uncondi-
tionally unstable.
123
where
Z x1
1
f¯2j+1
0
= 0
f2j+1 + (x2j+1 − ξ)−1−α u(ξ, tn+1 ) dξ.
Γ(−α) 0
Assume that we have some errors in the starting values Ul0 , i.e.,
0
To consider the stability, we assume that only the term U2j 0
, for some fixed j0 , has the
error, and other terms have no errors. That is
0l = 0, l 6= 2j0 .
Then we have
2j0 M −2j 0
X X
1
0 0 0 0
Ū2j0
= 1+w0,2j0 λ+w0,M −2j 0 λ Ū2j0 +λ wk,2j0 Ū2j0 −k +λ wk,M −2j 0 Ū2j 0 −k
+∆tf2j0
.
k=1 k=1
(6.4.19)
That is, the error is amplified by the factor µ2j0 = 1 + w0,2j0 λ + w0,M −2j 0 λ when the finite
difference equation is advanced by one time step. After n time steps, one may write
n
n2j0 = 1 + w0,2j0 λ + w0,M −2j 0 λ 02j0 .
Note that w0,2j0 = 1/Γ(3 − α) > 0, and +w0,M −2j 0 = 1/Γ(3 − α) > 0 we have 1 +
w0,2j0 λ + w0,M −2j 0 λ > 1. Thus |n2j0 | → ∞ as n → ∞, which implies that the method is
unstable.
124
Similarly, we can introduce the standard implicit numerical method and show that the
standard implicit numerical method is also unconditionally unstable.
We now introduce the shifted Diethelm FDM for space-fractional PDEs. Let 0 = t0 <
t1 < t2 < · · · < tn < . . . be the time partition and ∆t the time stepsize. At the nodes
2j
x2j = 2m
,j = 1, 2, . . . , m − 1, we have, by (6.2.1),
R α R α n+1 n+1
ut (tn+1 , x2j ) − 0 Dx u(tn+1 , x2j+1 ) + x D1 u(tn+1 , x2j−1 ) = f2j + σ2j , (6.4.20)
2j+1
and at the nodes x2j+1 = 2m
,j = 1, 2, . . . , m − 1,
R α n+1 n+1
ut (tn+1 , x2j+1 ) − 0 Dx u(tn+1 , x2j+2 ) +R D
x 1
α
u(tn+1 , x2j ) = f2j+1 + σ2j+1 , (6.4.21)
where
n+1
σ2j =− R 0 D α
x u(t n+1 , x2j+1 ) − R α
0 D x u(t n+1 , x 2j )
− R α R α
x D1 u(tn+1 , x2j−1 ) − x D1 u(tn+1 , x2j ) ,
n+1 R α R α
σ2j+1 = − 0 Dx u(tn+1 , x2j+2 ) − 0 Dx u(tn+1 , x2j+1 )
− R D
x 1
α
u(t n+1 , x2j ) − R α
D
x 1 u(t n+1 , x2j+1 ) .
2j
X M −(2j−1)−1
X
−1 −α
un+1 un2j wk,2j+1 un+1 wk,M −(2j−1) un+1
∆t 2j − = ∆x 2j+1−k + 2j−1+k
k=0 k=0
n+1
+ f2j + Qn+1 n+1 n+1
2j + σ2j + τ2j , j = 1, 2, . . . , m − 1, (6.4.22)
2j+2 M −2j
X X
∆t−1 un+1 n −α n+1 n+1
2j+1 − u2j+1 = ∆x w u
k,2j+2 2j+2−k + w u
k,M −2j 2j+k
k=0 k=0
n+1 n+1 n+1
+ f2j+1 + σ2j+1 + τ2j+1 , j = 0, 1, 2, . . . , m − 1, (6.4.23)
where the truncation errors τln+1 = O(∆t + ∆x3−α ), l = 1, 2, . . . , Ṁ − 1 [25], [26] and
Z x1 Z xM
1 −1−α 1
n+1
Q2j = (x2j+1 − ξ) u(ξ, tn+1 ) dξ + (ξ − x2j+1 )−1−α u(ξ, tn+1 ) dξ.
Γ(−α) 0 Γ(−α) xM −1
(6.4.24)
125
n n
Let U2j ≈ u(tn , x2j ) and U2j+1 ≈ u(tn , x2j+1 ) denote the approximate solutions of
u(tn , x2j ) and u(tn , x2j+1 ), respectively. We define the following implicit shifted numerical
method for solving (6.2.1) - (6.2.3).
2j
X M −(2j−1)−1
X
−1 n+1 −α n+1
n
wk,M −(2j−1) un+1
∆t U2j − U2j = ∆x wk,2j+1 U2j+1−k + 2j−1+k
k=0 k=0
n+1
+ f2j + Qn+1
2j , j = 1, 2, . . . , m − 1, (6.4.25)
2j+2 M −2j
X X
−1 n+1 n −α n+1 n+1
∆t U2j+1 − U2j+1 = ∆x wk,2j+2 U2j+2−k + wk,M −2j U2j+k
k=0 k=0
n+1
+ f2j+1 , j = 0, 1, 2, . . . , m − 1, (6.4.26)
where Qn+1
2j is defined below in (6.4.24).
Lemma 6.4.4. The shifted implicit method (6.4.25)- (6.4.26) is unconditionally stable.
Proof. For simplicity, we only consider the left-hand Riemman-Liouville fractional deriva-
tive for stability analysis. With λ = ∆t/∆xα we write (6.4.25)-(6.4.26) into one equation,
with l = 1, 2, . . . , 2j, 2j + 1, . . . , 2m − 1,
l+1
X
n+1
−λw0,l+1 Ul+1 + (1 − λw1,l+1 )Uln+1 −λ n+1
wk,l+1 Ul+1−k = Uln + kFln+1 , (6.4.27)
k=2
Z x1
1
f¯2j
n+1
= (x2j+1 − ξ)−1−α u(ξ, tn+1 ) dξ + f2j
n+1
.
Γ(−α) 0
Further we write (6.4.27) into the following linear system with 2m − 1 equations and
2m − 1 unknowns.
AU n+1 = U n + kF n+1 ,
126
where
1 − λw12 −λw02
−λw23 1 − λw13 −λw03
A= ,
.. ..
..
.
. .
−λw2m−1,2m −λw2m−2,2m ... 1 − λw1,2m
and
U1n+1 U1n F1n+1
n+1 n+1
U2n
U2 F2
U n+1 =
..
,
n
U =
..
,
F n+1 =
..
.
. . .
n+1 n n+1
U2m−1 U2m−1 F2m−1
x1
x2
Let µ be an eigenvalue of A. Let ξ = 6= 0 be the corresponding eigenvector.
..
.
x2m−1
Then we have
Aξ = µξ.
Denote
i.e.,
2m−1
X xj
µ = aii + .
j=1,j6=i
xi
Note that aii = 1 − λw1,i+1 , ai,i+1 = −λw0,i+1 , ai,i−1 = −λw2,i+1 , . . . , ai,1 = −λwi,i+1 . We
have
i−1
xi+1 X xj
µ = (1 − λw1,i+1 ) − λw0,i+1 −λ wi−j+1,i+1
xi j=1
xi
xi+1 xi−1 x1
= 1 − λ w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 .
xi xi xi
127
xj Pi+1
Note that xi
< 1, j 6= i and wk,i+1 > 0, k 6= 1 and, by Lemma 6.3.1, k=0 wk,i+1 < 0, we
have
xi+1 xi−1 x1
w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
xi xi xi
< w0,i+1 + w1,i+1 + w2,i+1 + . . . wi,i+1 < 0.
Hence
xi+1 xi−1 x1
µ = 1 − λ w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 > 1.
xi xi xi
Since all the eigenvalues µ of matrix A satisfy |µ| ≥ 1, the matrix A is invertible and all
eigenvalues of A−1 are less than 1, which implies that there exists a matrix norm k · k
such that kA−1 k ≤ 1 and
Lemma 6.4.5. The eigenvalues of the matrix A lie in the disks centered at aii with radius
P
ri = k6=i |aik |.
By using Lemma 6.4.5, we shall prove all the eigenvalues of A are larger than or equal
to 1. In fact, we have
2m−1
X
ri = |aik | = λ(w0,i+1 + w2,i+1 + w3,i+1 + · · · + wi,i+1 ).
k=1,k6=i
By Lemma 3.4.2 we have w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 < 0, which implies that
aii − ri > 1 and therefore all the eigenvalues µ of A satisfy
(ξ − x 1 )(ξ − x1 ) (ξ − x0 )(ξ − x1 )
2
P2 (ξ) = u(tn+1 , x0 ) + u(tn+1 , x 1 )
(x0 − x 1 )(x0 − x1 ) (x 1 − x0 )(x 1 − x1 ) 2
2 2 2
(ξ − x0 )(ξ − x 1 )
+ 2
u(tn+1 , x1 ), for ξ ∈ [x0 , x1 ],
(x1 − x0 )(x1 − x 1 )
2
where
3 3 1
u(tn+1 , x 1 ) ≈ u(tn+1 , x0 ) + u(tn+1 , x1 ) − u(tn+1 , x2 ),
2 8 4 8
where
33 1
(2)
u(tn+1 , x 1 ) − u(tn+1 , x0 ) + u(tn+1 , x1 ) − u(tn+1 , x2 ) = R1 (ξ),
2 8 4 8
(2) 1 000
and R1 (ξ) = 16
u (tn+1 , c2 )h3 , c2 ∈ (0, x2 ).
We then have
Z x1 2
1 −1−α
X
(x2j+1 − ξ) u(tn+1 , ξ) dξ = B̂i u(tn+1 , xi ) + R1 ,
Γ(−α) 0 i=0
where
Z x1
α−1
(ξ − x 1 )(ξ − x1 ) 3
Z x1
(ξ − x0 )(ξ − x1 )
B̂0 = (x1 − ξ) 2
dξ + (x1 − ξ)α−1 dξ,
0 (x0 − x 1 )(x0 − x1 ) 8 0 (x 1 − x0 )(x 1 − x1 )
2 2 2
x1 x1
(ξ − x0 )(ξ − x1 ) (ξ − x0 )(ξ − x1 )
Z Z
3
B̂1 = (x1 − ξ)α−1 dξ + (x1 − ξ)α−1 dξ,
4 0 (x 1 − x0 )(x 1 − x1 ) 0 (x1 − x0 )(x1 − x 1 )
2 2 2
x1
(ξ − x0 )(ξ − x1 )
Z
1
B̂2 = − (x1 − ξ)α−1 dξ,
8 0 (x 1 − x0 )(x 1 − x1 )
2 2
and
Z x1 Z x1
−1−α (1) (2)
R1 = (x2j+1 − ξ) R1 (ξ) dξ + (x2j+1 − ξ)−1−α R1 (ξ) dξ.
0 0
129
Z x1 Z x1
−1−α (1) (2)
|R1 | ≤ (x2j+1 − ξ) |R1 (ξ)| dξ + (x2j+1 − ξ)−1−α |R1 (ξ)| dξ
Z0 x1 0
Hence, we have
Z x1 2
X
−1−α
(x2j+1 − ξ) u(tn+1 , ξ) dξ − B̂i un+1
i = O(∆x3−α ).
0 i=0
2 M
n+1 1 X 1 X
S2j = B̂i Uin+1 + B̃i Uin+1 . (6.4.28)
Γ(−α) i=0 Γ(−α) i=M −2
Theorem 6.4.6. Let 1 < α < 2 and let u(tn+1 , xl ) and Uln+1 , l = 1, 2, . . . , M − 1 be
the solutions of (6.4.22)- (6.4.23) and (6.4.25)-(6.4.26), respectively. Assume that u(t, x)
satisfies the Lipschitz conditions, with some β > 0,
R α
0 Dx u(t, x) −R α β
0 Dx u(t, y) ≤ Cα |x − y| , (6.4.29)
R α
x D1 u(t, x) −R α β
x D1 u(t, y) ≤ Cα |x − y| . (6.4.30)
We have
and, for l = 2j + 1, j = 0, 1, 2 . . . , m − 1,
2j+2 M −2j
X X
−1 −α
∆t en+1
2j+1 − en2j+1 − ∆x wk,M −(2j+2) en+1
2j+2+k + wk,M −2j en+1
2j+k + R.
k=0 k=0
= en2j + ∆tR,
and, for l = 2j + 1, j = 0, 1, 2 . . . , m − 1,
= en2j+1 + ∆tR.
Assume that |e1 |∞ = supl |e1l | = |e2k | for some k, we get, by Lemma 6.4.2, with
R = (∆t + ∆xmin(3−α,β) ),
|e1 |∞ = sup |e1l | = |e12k | ≤ |e12k | 1 − λ(w0,2k+1 + w1,2k+1 + . . . w2k,2k+1 )
l
− λ(w0,M −(2k−1) + w1,M −(2k−1) + · · · + wM −(2k−1)−1,M −(2k−1) )
− λw0,M −(2k−1) |e12k−1 | − λw1,M −(2k−1) |e12k | − · · · − λwM −(2k−1)−1,M −(2k−1) )|e1M −1 |
− λw0,M −(2k−1) |e12k−1 | − λw1,M −(2k−1) |e12k | − · · · − λwM −(2k−1)−1,M −(2k−1) |e1M −1 |
≤ |e02k | + ∆tR.
131
Assume that |e1 |∞ = supl |e1l | = |e2k+1 | for some k, we get, by Lemma 6.4.2, with
R = (∆t + ∆xmin(3−α,β) ),
1
|e |∞ = sup |e1l | = |e12k+1 |
≤ |e12k+1 |
1 − λ(w0,2k+2 + w1,2k+2 + . . . w2k+2,2k+2 )
l
− λ(w0,M −2k + w1,M −2k + · · · + wM −2k,M −2k )
− λw0,M −2k |e12k | − λw1,M −2k |e12k+1 | + · · · − λwM −2k,M −2k )|e1M |
− λw0,M −2k |e12k | − λw1,M −2k |e12k+1 | − · · · − λwM −2k,M −2k |e1M |
≤ |e02k+1 | + ∆tR.
Hence we obtain
|e1 |∞ ≤ ∆tR.
∂u(t, x) R α
− 0 Dx u(t, x) = f (t, x), 0 < x < 1, t > 0, (6.5.1)
∂t
u(t, 0) = ϕ1 (t), u(t, 1) = ϕ2 (t), (6.5.2)
where ϕ1 (t), ϕ2 (t) are some suitable functions of t and u0 (x) is the initial condition.
Let us recall the numerical method introduced in the previous section. Let m be a
positive integer and let 0 = x0 < x1 < x2 < · · · < x2m = 1 be a space partition of [0, 1]
and ∆x the space stepsize. Let 0 = t0 < t1 < t2 < · · · < xN = 1 be a time partition of
[0, 1] and ∆t the time stepsize.
At x = xl , t = tn , we have, with l = 1, 2, . . . , 2m − 1, and n = 1, 2, . . . , N ,
∂u(t, x)
−R α
0 Dx u(t, x) = f (t, x) , (6.5.4)
∂t x=xl ,t=tn x=xl ,t=tn x=xl ,t=tn
To get a stable finite difference scheme for this time-dependent problem, we need to
consider the following shifted equation, that is,
∂u(t, x)
−R α
0 Dx u(t, x) = f (t, x) + ρl (tn ), (6.5.7)
∂t x=xl ,t=tn x=xl+1 ,t=tn x=xl ,t=tn
where
ρl (tn ) = − R α
0 Dx u(t, x) −R
0 Dxα u(t, x) .
x=xl+1 ,t=tn x=xl ,t=tn
Note that,
and, with l = 2j + 1, j = 0, 1, 2, . . . , m − 1,
Z x2j+2
1
R α
0 Dx u(t, x) = (x2j+2 − ξ)−1−α u(tn , ξ) dξ
x=xl+1 ,t=tn Γ(−α) 0
2j+2
X
= ∆x−α wk,2j+2 u(tn , x2j+2−k ) + O(∆x3−α ),
k=0
n n−1 2j
U2j − U2j −α
X
n
− ∆x wk,2j+1 U2j+1−k = f (x2j , tn ) + ρn2j
∆t k=0
Z x1
1
+ (x2j+1 − ξ)−1−α u(ξ, tn ) dξ, j = 1, 2, . . . , m − 1,
Γ(−α) 0
n n−1 2j+2
U2j+1 − U2j+1 −α
X
n
− ∆x wk,2j+2 U2j+2−k = f (x2j+1 , tn ) + ρn2j+1
∆t k=0
Z x1
1
+ (x2j+1 − ξ)−1−α u(ξ, tn ) dξ, j = 0, 1, 2, . . . , m − 1,
Γ(−α) 0
or, with $\lambda = \Delta t/\Delta x^{\alpha}$,
$$U^n_{2j} - \lambda\sum_{k=0}^{2j} w_{k,2j+1}\, U^n_{2j+1-k} = U^{n-1}_{2j} + \Delta t f(x_{2j},t_n) + \Delta t\rho^n_{2j} + \frac{\Delta t}{\Gamma(-\alpha)}\int_0^{x_1}(x_{2j+1}-\xi)^{-1-\alpha}u(\xi,t_n)\,d\xi, \quad j = 1,2,\ldots,m-1, \qquad (6.5.10)$$
$$U^n_{2j+1} - \lambda\sum_{k=0}^{2j+2} w_{k,2j+2}\, U^n_{2j+2-k} = U^{n-1}_{2j+1} + \Delta t f(x_{2j+1},t_n) + \Delta t\rho^n_{2j+1}, \quad j = 0,1,2,\ldots,m-1. \qquad (6.5.11)$$
The numerical method (6.5.10)-(6.5.11) can be written in the following matrix form:
$$A U^n = U^{n-1} + \Delta t F^n + \Delta t \rho^n + \Delta t I^n + B^n_l + B^n_r,$$
where
$$
U^n = \begin{pmatrix} U^n_1 \\ U^n_2 \\ U^n_3 \\ \vdots \\ U^n_{2m-1} \end{pmatrix}, \qquad
F^n = \begin{pmatrix} f(x_1,t_n) \\ f(x_2,t_n) \\ f(x_3,t_n) \\ \vdots \\ f(x_{2m-1},t_n) \end{pmatrix}, \qquad
\rho^n = \begin{pmatrix}
-\big({}^R_0D^\alpha_x u(x_2,t_n) - {}^R_0D^\alpha_x u(x_1,t_n)\big) \\
-\big({}^R_0D^\alpha_x u(x_3,t_n) - {}^R_0D^\alpha_x u(x_2,t_n)\big) \\
-\big({}^R_0D^\alpha_x u(x_4,t_n) - {}^R_0D^\alpha_x u(x_3,t_n)\big) \\
\vdots \\
-\big({}^R_0D^\alpha_x u(x_{2m},t_n) - {}^R_0D^\alpha_x u(x_{2m-1},t_n)\big)
\end{pmatrix},
$$
and
$$
I^n = \begin{pmatrix}
0 \\ \frac{1}{\Gamma(-\alpha)}\int_0^{x_1}(x_3-\xi)^{-1-\alpha}u(t_n,\xi)\,d\xi \\ 0 \\ \vdots \\ \frac{1}{\Gamma(-\alpha)}\int_0^{x_1}(x_{2m-1}-\xi)^{-1-\alpha}u(t_n,\xi)\,d\xi \\ 0
\end{pmatrix}, \qquad
B^n_l = \begin{pmatrix} \lambda w_{2,2}\,u(t_n,x_0) \\ 0 \\ \lambda w_{4,4}\,u(t_n,x_0) \\ 0 \\ \vdots \\ \lambda w_{2m,2m}\,u(t_n,x_0) \end{pmatrix}, \qquad
B^n_r = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \lambda w_{0,2m}\,u(t_n,x_{2m}) \end{pmatrix},
$$
and
$$
A = \begin{pmatrix}
1-\lambda w_{1,2} & -\lambda w_{0,2} & 0 & 0 & \cdots & 0 \\
-\lambda w_{2,3} & 1-\lambda w_{1,3} & -\lambda w_{0,3} & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\
-\lambda w_{2m-2,2m-1} & -\lambda w_{2m-3,2m-1} & \cdots & \cdots & 1-\lambda w_{1,2m-1} & -\lambda w_{0,2m-1} \\
-\lambda w_{2m-1,2m} & -\lambda w_{2m-2,2m} & \cdots & \cdots & -\lambda w_{2,2m} & 1-\lambda w_{1,2m}
\end{pmatrix}.
$$
Here $B^n_l$ and $B^n_r$ are determined by the Dirichlet boundary conditions $u(t_n,x_0)$ and $u(t_n,x_{2m})$. We then use MATLAB to obtain all the approximate solutions $U^n$, $n = 1, 2, \ldots, N$.
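To make the solution step concrete, the following Python sketch assembles the matrix $A$ and advances one implicit time step. It is an illustration under our reading of the scheme, not the original MATLAB code: the callable `w(k, n)` stands for the weights $w_{k,n}$ defined earlier in the chapter, and the right-hand side vectors `F`, `rho`, `I_vec`, `Bl`, `Br` are assumed to be supplied as above.

```python
import numpy as np

def assemble_A(w, m, lam):
    """Assemble the (2m-1) x (2m-1) matrix A of the shifted implicit scheme
    (6.5.10)-(6.5.11): row l couples U_l^n to U_{l+1-k}^n through the
    weights w(k, l+1).  Contributions from the boundary points x_0 and
    x_{2m} are excluded here; they enter via B_l^n and B_r^n instead."""
    M = 2 * m - 1                      # number of interior grid points
    A = np.eye(M)
    for row in range(M):
        l = row + 1                    # grid index of this row
        for k in range(l + 2):         # weights w_{k,l+1}, k = 0, ..., l+1
            j = l + 1 - k              # grid index hit by this weight
            if 1 <= j <= M:            # skip boundary columns j = 0 and j = 2m
                A[row, j - 1] -= lam * w(k, l + 1)
    return A

def time_step(A, U_prev, dt, F, rho, I_vec, Bl, Br):
    """One implicit step: solve A U^n = U^{n-1} + dt*(F^n + rho^n + I^n) + B_l^n + B_r^n."""
    return np.linalg.solve(A, U_prev + dt * (F + rho + I_vec) + Bl + Br)
```

Solving the dense linear system with `np.linalg.solve` mirrors the MATLAB backslash solve; for large $m$, a banded or iterative solver would be preferable, but that is an implementation choice beyond the scheme itself.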
Example 14. Consider the following space-fractional partial differential equation with homogeneous Dirichlet boundary conditions,
$$u_t(t,x) = {}^R_0D^\alpha_x u(t,x) + f(t,x), \quad 0 < x < 2,\; t > 0, \qquad (6.5.12)$$
where
$$f(t,x) = -4e^{-t}x^2(2-x)^2 - 4e^{-t}\Big(4\frac{\Gamma(2+1)}{\Gamma(2-\alpha+1)}x^{2-\alpha} - 4\frac{\Gamma(3+1)}{\Gamma(3-\alpha+1)}x^{3-\alpha} + \frac{\Gamma(4+1)}{\Gamma(4-\alpha+1)}x^{4-\alpha}\Big).$$
The exact solution is $u(t,x) = 4e^{-t}x^2(2-x)^2$.
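For reference, the exact solution and the forcing term of this example can be coded directly. The following Python sketch reflects our reading of the reconstructed formulas above (note $\Gamma(2+1) = \Gamma(3)$, and so on); `gamma` is `scipy.special.gamma`.

```python
import numpy as np
from scipy.special import gamma

def u_exact(t, x):
    # Exact solution of Example 14, as reconstructed above.
    return 4.0 * np.exp(-t) * x**2 * (2.0 - x)**2

def f_rhs(t, x, alpha):
    # Forcing term f(t, x) of (6.5.12), consistent with f = u_t - D^alpha u.
    return (-4.0 * np.exp(-t) * x**2 * (2.0 - x)**2
            - 4.0 * np.exp(-t) * (4.0 * gamma(3.0) / gamma(3.0 - alpha) * x**(2.0 - alpha)
                                  - 4.0 * gamma(4.0) / gamma(4.0 - alpha) * x**(3.0 - alpha)
                                  + gamma(5.0) / gamma(5.0 - alpha) * x**(4.0 - alpha)))
```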
Here $|e^N|_\infty$ denotes the $L^\infty$-norm of the error at time $t_N = 1$. In our numerical example, we know the exact solution $u$, so we can calculate $\rho^n$ exactly. In general, we may need to approximate $\rho^n$ by using the computed solutions $U^n$ with some higher order numerical methods.
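When the exact solution is available, $\rho^n$ follows directly from its definition. A minimal sketch, assuming a callable `rl_deriv(x, t)` that returns the left Riemann-Liouville derivative of the exact solution at a point (this callable is an assumption introduced for illustration):

```python
import numpy as np

def shift_correction(rl_deriv, x, t_n):
    """Compute rho_l(t_n) = -( D^alpha u(x_{l+1}, t_n) - D^alpha u(x_l, t_n) )
    for the interior points l = 1, ..., 2m-1; x holds the grid x_0, ..., x_{2m}."""
    d = np.array([rl_deriv(xi, t_n) for xi in x])   # derivative at x_0, ..., x_{2m}
    return -(d[2:] - d[1:-1])                        # entries for l = 1, ..., 2m-1
```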
To observe the convergence order with respect to $\Delta x$, we choose $\Delta t = 2^{-10}$ sufficiently small and the different space stepsizes $h_l = \Delta x = 2^{-l}$, $l = 3, 4, 5, 6, 7$. Hence the error will be dominated by $\Delta x^{\gamma}$. Now let $|e^N_l|_\infty = |U^N - u(t_N)|_\infty$ denote the $L^\infty$-norm of the error at $t_N = 1$ obtained by using the space stepsize $h_l$. For the fixed space stepsizes $h_l = 2^{-l}$, $l = 3, 4, 5, 6, 7$, we have
$$|e^N_l|_\infty \approx C h_l^{\gamma}, \qquad (6.5.15)$$
so that
$$\frac{|e^N_l|_\infty}{|e^N_{l+1}|_\infty} \approx \frac{h_l^{\gamma}}{h_{l+1}^{\gamma}} = 2^{\gamma}, \quad \text{that is,} \quad \gamma \approx \log_2\big(|e^N_l|_\infty / |e^N_{l+1}|_\infty\big).$$
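In practice $\gamma$ is estimated from successive error norms. A minimal sketch, where the error values are purely illustrative placeholders, not results from the thesis:

```python
import numpy as np

# errors[i] = |e_l^N|_inf obtained with stepsize h_l = 2^{-l}, l = 3, ..., 7
errors = np.array([2.1e-2, 5.9e-3, 1.6e-3, 4.5e-4, 1.2e-4])  # illustrative values only

# EOC between consecutive stepsizes: gamma ~ log2(|e_l| / |e_{l+1}|)
eoc = np.log2(errors[:-1] / errors[1:])
print(eoc)
```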
Recall that the shifted Grünwald difference operator of [66],
$$A^\alpha_{h,p}u(x) = \Delta x^{-\alpha}\sum_{k=0}^{\infty} g^{(\alpha)}_k\, u\big(x-(k-p)\Delta x\big),$$
approximates the Riemann-Liouville fractional derivative uniformly with first order accuracy, i.e.,
$$A^\alpha_{h,p}u(x) = {}^R_{-\infty}D^\alpha_x u(x) + O(\Delta x),$$
where $p$ is a positive integer and $g^{(\alpha)}_k = (-1)^k\binom{\alpha}{k}$. For a well-defined function $u(x)$ on a bounded interval $[a,b]$ with $u(a) = 0$ or $u(b) = 0$, the function $u(x)$ can be extended by zero for $x < a$ or $x > b$, and then the left and right Riemann-Liouville fractional derivatives of order $\alpha$ of $u(x)$ at each point $x$ can be approximated by the shifted Grünwald difference operator $A^\alpha_{h,p}u(x)$.
[Figure: $\log_2(\text{error})$ plotted against $\log_2(\Delta x)$, two panels.]
In [90], the authors introduced a weighted and shifted Grünwald difference operator which has second order accuracy for approximating the Riemann-Liouville fractional derivative. However, the approximation of the left or right Riemann-Liouville fractional derivatives in [66, 90] by the shifted Grünwald difference operator on a finite interval $[a,b]$ requires that $u(a) = 0$ or $u(b) = 0$, respectively. In Table 6.5.2, we give the experimentally determined orders of convergence (EOC) for $\alpha = 1.2, 1.4, 1.6, 1.8$ obtained with the Grünwald difference method in [66]. We only observe first order convergence.
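For comparison with the Grünwald difference method of [66], the weights $g^{(\alpha)}_k$ can be generated by the standard recurrence $g_0 = 1$, $g_k = \big(1 - (\alpha+1)/k\big)g_{k-1}$. A minimal sketch, assuming a uniform grid and zero extension to the left of the interval; the shift `p` and the helper names are illustrative choices, not code from [66]:

```python
import numpy as np

def grunwald_weights(alpha, N):
    """g_k = (-1)^k * binom(alpha, k) via g_0 = 1, g_k = (1 - (alpha+1)/k) g_{k-1}."""
    g = np.empty(N + 1)
    g[0] = 1.0
    for k in range(1, N + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def shifted_grunwald_left(u, dx, alpha, p=1):
    """First-order shifted Grünwald approximation of the left Riemann-Liouville
    derivative on a uniform grid, assuming u is extended by zero to the left
    (the requirement discussed above)."""
    n = len(u)
    g = grunwald_weights(alpha, n + p)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + p + 1):     # terms whose node x - (k-p)*dx lies in the grid
            j = i - k + p
            if 0 <= j < n:
                d[i] += g[k] * u[j]
    return d / dx**alpha
```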
Example 15. We consider the same equation as in Example 14, but with the nonhomogeneous Dirichlet boundary condition,
$$u_t(t,x) = {}^R_0D^\alpha_x u(t,x) + f(t,x), \quad 0 < x < 2,\; t > 0, \qquad (6.5.17)$$
where
$$f(t,x) = -4e^{-t}x^2(2-x)^2 - 4e^{-t}\Big(4\frac{\Gamma(2+1)}{\Gamma(2-\alpha+1)}x^{2-\alpha} - 4\frac{\Gamma(3+1)}{\Gamma(3-\alpha+1)}x^{3-\alpha} + \frac{\Gamma(4+1)}{\Gamma(4-\alpha+1)}x^{4-\alpha}\Big) + 5\frac{\Gamma(1)}{\Gamma(1-\alpha)}x^{-\alpha}.$$
We use the same notation as in Example 14. In Table 6.5.3, we give the experimentally determined orders of convergence (EOC) for $\alpha = 1.2, 1.4, 1.6, 1.8$. We see that the convergence order is less than $3-\alpha$; this is because of the nonhomogeneous boundary conditions.
The approximation of the Riemann-Liouville fractional derivative by the Grünwald difference operator on $[a,b]$ in Meerschaert and Tadjeran [66] requires that the function has a zero extension for $x < a$ and $x > b$. Hence the function should satisfy zero boundary conditions on the finite interval in order to obtain a good approximation of its fractional derivative by the Grünwald difference operator. In this example, since the Dirichlet boundary conditions are not homogeneous, we observe in Table 6.5.4 that the convergence order of the algorithm based on the Grünwald difference method is rather low. However, the shifted Diethelm method works well for the nonhomogeneous Dirichlet boundary conditions and the convergence order is approximately equal to 1 in this example. This is another advantage of the shifted Diethelm method compared with the Grünwald difference method in Meerschaert and Tadjeran [66].
Example 16. Consider the following space-fractional partial differential equation,
$$u_t(t,x) = {}^R_0D^\alpha_x u(t,x) + f(t,x), \quad 0 < x < 1,\; t > 0, \qquad (6.5.20)$$
where
$$f(t,x) = -e^{-t}x^{\alpha_1} - e^{-t}\frac{\Gamma(\alpha_1+1)}{\Gamma(\alpha_1+1-\alpha)}x^{\alpha_1-\alpha}.$$
The exact solution is $u(t,x) = e^{-t}x^{\alpha_1}$. In our numerical simulations, we first consider the nonsmooth solution with $\alpha_1 = \alpha$. We then consider the smooth solution with $\alpha_1 = 3$. Note that, for $\alpha_1 = \alpha$,
$${}^R_0D^\alpha_x u(t,x) = e^{-t}\,\Gamma(\alpha+1)$$
is independent of $x$ and bounded by some constant $C$, which implies that the following Lipschitz condition holds for any $\beta > 0$,
$$\big|{}^R_0D^\alpha_x u(t,x) - {}^R_0D^\alpha_x u(t,y)\big| = 0 \le C|x-y|^{\beta}.$$
Example 17. Consider the same equation as in Example 16, but with nonhomogeneous boundary conditions,
$$u_t(t,x) = {}^R_0D^\alpha_x u(t,x) + f(t,x), \quad 0 < x < 1,\; t > 0, \qquad (6.5.23)$$
where
$$f(t,x) = -e^{-t}x^{\alpha_1} - e^{-t}\frac{\Gamma(\alpha_1+1)}{\Gamma(\alpha_1+1-\alpha)}x^{\alpha_1-\alpha}.$$
The exact solution is $u(t,x) = e^{-t}x^{\alpha_1} + 1$. In our numerical simulations, we consider the smooth solution with $\alpha_1 = 3$.
Example 18. Consider the following two-sided space-fractional partial differential equation [66],
$$u_t(t,x) = c_+(t,x)\, {}^R_0D^\alpha_x u(t,x) + c_-(t,x)\, {}^R_xD^\alpha_2 u(t,x) + f(t,x), \quad 0 < x < 2,\; t > 0, \qquad (6.5.26)$$
where
$$f(t,x) = -32e^{-t}\Big(x^2 + (2-x)^2 - 2.5\big(x^3 + (2-x)^3\big) + \frac{25}{22}\big(x^4 + (2-x)^4\big)\Big).$$
We use the same notation as in Example 14. In Table 6.5.8, we give the experimentally determined orders of convergence (EOC) for $\alpha = 1.2, 1.4, 1.6, 1.8$. We see that the convergence order is almost $3-\alpha$; the $O(\Delta x^{3-\alpha})$ term dominates the convergence order in this example.
This thesis has extended the existing numerical methods (Diethelm's method and the fractional Adams-type method) to obtain higher order convergence for the solution of fractional order differential equations.
By applying a quadratic interpolation polynomial to discretize the Hadamard finite-part integral in Diethelm's method, the convergence order is $O(h^{3-\alpha})$ for $0 < \alpha < 1$, whereas the existing order of convergence is $O(h^{2-\alpha})$ for $0 < \alpha < 1$. For the Adams-type approximation method we found that the convergence order is $O(h^{1+2\alpha})$ for $0 < \alpha < 1$ and $O(h^3)$ for $1 < \alpha < 2$, which is higher than the existing results. The advantage of this method is that we can solve non-linear as well as linear fractional differential equations, and we can avoid the non-linear calculations of the Newton iteration process.
In Chapter 5, the Richardson extrapolation algorithm was discussed as a tool to accelerate the order of convergence of the two considered numerical methods. The extrapolation algorithm is applicable if the sequence of approximate solutions of the problem possesses an asymptotic expansion, and it was proved that the two approximation methods that we considered possess such an expansion. We also discussed how to approximate the initial value and the initial integrals of the proposed numerical methods.
Finally, we considered the finite difference method for solving space-fractional partial differential equations. We proved that both the standard explicit and implicit finite difference methods are unconditionally unstable. To find a stable finite difference method, we introduced an implicit shifted Diethelm finite difference method for solving two-sided space-fractional partial differential equations. We proved that the method is unconditionally stable and that the order of convergence of the finite difference method is $O(\Delta t + \Delta x^{\min(3-\alpha,\beta)})$, $1 < \alpha < 2$, $\beta > 0$, where $\Delta t, \Delta x$ denote the time and space stepsizes, respectively.
The importance of research into fractional order differential equations and their significance for future applications warrant continued study. We propose some possible research topics in this active research area:
• Higher order numerical methods for solving fractional differential equations with variable stepsizes.
[3] R. L. Bagley, R. A. Calico, Fractional order state equations for the control of vis-
coelastic structures, Journal of Guidance, Control and Dynamics, 14 (1991) 304-
311.
[7] C. Brezinski and M. Redivo Zaglia, Extrapolation Methods, Theory and Practice,
Elsevier Science Publishers, North Holland, (1991).
[9] J. Cao and C. Xu, A high order schema for the numerical solution of the fractional
ordinary differential equations, J. Comp. Phys., 238(2013), 154-168.
[11] S. Chen and F. Liu, ADI-Euler and extrapolation methods for the two-dimension
fractional advection-dispersion equation, J. Appl. Math. Comput., 26 (2008), 295-
311.
[12] H. W. Choi, S. K. Chung and Y. J. Lee, Numerical solutions for space fractional dis-
persion equations with nonlinear source terms, Bull. Korean Math. Soc., 47 (2010),
1225-1234.
[13] J. A. Connolly, The numerical solution of fractional and distributed order differential equations, University of Liverpool (University of Chester), Dec-2004.
[18] W. H. Deng, Numerical algorithm for the time fractional Fokker-Planck equation,
J. Comp. Phys., 227(2007), 1510-1522.
[19] W. H. Deng, Short memory principle and a predict-corrector approach for fractional
differential equations, J. Comput. Appl. Math., 206(2007), 174-188.
[20] W. H. Deng, Finite element method for the space and time fractional Fokker-Planck
equation, SIAM J. Numer. Anal., 47(2008), 204-226.
[22] W. H. Deng and C. Li, Numerical schemes for fractional ordinary differential equa-
tions, Numerical Modelling, edited by: Prof. Peep Miidla, Chapter 16, 355-374,
Publisher InTech, 2012.
[27] K. Diethelm and N. J. Ford, Analysis of fractional differential equations, J. Math. Anal. Appl., 265 (2002), 229-248.
[28] K. Diethelm, N. J. Ford and A. D. Freed, Detailed error analysis for a fractional Adams method, Numerical Algorithms, 36 (2004), 31-52.
[29] K. Diethelm, N. J. Ford and A. D. Freed, A predictor-corrector approach for the numerical solution of fractional differential equations, Nonlinear Dynamics, 29 (2002), 3-22.
[30] K. Diethelm, J.M. Ford, N.J. Ford and M. Weilbeer, Pitfalls in fast numerical solvers
for fractional differential equations, J. Comp. Appl. Math., 186(2006), 482-503.
[31] K. Diethelm and A. D. Freed, On the solution of nonlinear fractional-order differential equations used in the modelling of viscoelasticity, in "Scientific Computing ...".
[32] K. Diethelm and Y. Luchko, Numerical solution of linear multi-term initial value problems of fractional order, J. Comput. Anal. Appl., 6 (2004), 243-263.
[33] K. Diethelm and G. Walz, Numerical solution of fractional order differential equations by extrapolation, Numerical Algorithms, 16 (1997), 231-253.
[35] D. Elliott, An asymptotic analysis of two algorithms for certain Hadamard finite-part integrals, IMA J. Numer. Anal., 13 (1993), 445-462.
[38] V. J. Ervin and J. P. Roop, Variational formulation for the stationary frac-
tional advection dispersion equation, Numer. Methods Partial Differential Equa-
tions, 22(2006), 558-576.
[40] G.J. Fix and J. P. Roop, Least squares finite-element solution of a fractional order
two-point boundary value problem, Comput. Math. Appl., 48(2004), 1017-1033.
[42] N. J. Ford, K. Pal and Y. Yan, An algorithm for the numerical solution of two-sided
space-fractional partial differential equations, Computational Methods in Applied
Mathematics, 15(2015), 497-514.
[45] N. J. Ford, J. Xiao and Y. Yan, A finite element method for time fractional partial
differential equations, Fractional Calculus and Applied Analysis, 14 (2011), 454-474.
[46] N. J. Ford, J. Xiao and Y. Yan, Stability of a numerical method for space-time-
fractional telegraph equation, Computational Methods in Applied Mathematics,
12(2012), 273-288.
[47] R. Gorenflo, Fractional Calculus: Some Numerical Methods, CISM Lecture Notes,
1996.
[48] R. Gorenflo and F. Mainardi, Random walk models for space-fractional diffusion processes, Fractional Calculus and Applied Analysis, 1 (1998), 167-191.
[52] P. Kumar and O. P. Agrawal, An approximate method for numerical solution of fractional differential equations, Signal Processing, 86 (2006), 2602-2610.
[53] Ch. Lubich, A stability analysis of convolution quadratures for Abel-Volterra integral equations, IMA J. Numer. Anal., 6 (1986), 87-101.
[54] Ch. Lubich, Fractional linear multistep methods for Abel-Volterra integral equations of the second kind, Math. Comp., 45 (1985), 463-469.
[55] Ch. Lubich, Discretized fractional calculus, SIAM J. Math. Anal., 17 (1986), 704-719.
[57] X. J. Li and C. J. Xu, A space-time spectral method for the time fractional diffusion
equation, SIAM J. Numer. Anal., 47(2009), 2108-2131.
[58] X. J. Li and C. J. Xu, Existence and uniqueness of the weak solution of the space-time fractional diffusion equation and a spectral method approximation, Commun. Comput. Phys., 8 (2010), 1016-1051.
[59] A. Le Mehaute and G. Crepy, Introduction to transfer and motion in fractal media:
the geometry of kinetics, Solid State Ionics, 9-10(1983) 17-30.
[60] C. P. Li and F. Zeng, Finite difference methods for fractional differential equations,
International Journal of Bifurcation and Chaos, 22(2012), 1230014 (28 pages).
[61] F. Liu, V. Anh and I. Turner, Numerical solution of space fractional Fokker-Planck
equation, J. Comp. Appl. Math., 166(2004), 209-219.
[67] M. M. Meerschaert and E. Scalas, Coupled continuous time random walks in finance,
Physica A, 370(2006), 114-118.
[68] R. Metzler and J. Klafter, The restaurant at the end of the random walk: recent
developments in the description of anomalous transport by fractional dynamics, J.
Phys. A: Math. Gen., 37(2004), R161-R208.
[72] K. Oldham and J. Spanier, The Fractional Calculus, Academic Press, San Diego,
1974.
[73] K. Pal, F. Liu and Y. Yan, Numerical solutions for fractional differential equations
by extrapolation, Lecture Notes in Computer Science, Springer series, 9045 (2015),
299-306.
[74] K. Pal, F. Liu, Y. Yan and G. Roberts, Finite difference method for two-sided
space-fractional partial differential equations, Lecture Notes in Computer Science,
Springer series, 9045 (2015), 307-314.
[78] D. A. Robinson, The use of control systems analysis in the neurophysiology of eye movements, Ann. Rev. Neurosci., 4 (1981), 463-503.
[79] S. Shen, F. Liu and V. Anh, Numerical approximations and solution techniques
for the space-time Riesz-Caputo fractional advection-diffusion equation, Numerical
Algorithms, 56(2011), 383-403.
[80] S. Shen, F. Liu, V. Anh and I. Turner, The fundamental solution and numerical
solution of the Riesz fractional advection-dispersion equation, IMA J. Appl. Math.,
73(2008), 850-872.
[81] S. Shen, F. Liu, V. Anh, I. Turner and J. Chen, A novel numerical approxima-
tion for the space fractional advection-dispersion equation, IMA Journal of Applied
Mathematics, 79(2014), 431 - 444.
[82] D. P. Simpson, I. W. Turner and M. Ilic, A generalised matrix transfer technique for the numerical solution of fractional-in-space partial differential equations, preprint (2007).
[83] E. Sousa, Finite difference approximations for a fractional advection diffusion prob-
lem, J. Comput. Phys., 228(2009), 4038-4054.
[84] E. Sousa, How to approximate the fractional derivative of order 1 < α ≤ 2, In-
ternational Journal of Bifurcation and Chaos, 22(2012), 1250075, (13 pages) DOI:
10.1142/S0218127412500757.
[85] E. Sousa and C. Li, A weighted finite difference method for the fractional diffusion
equation based on the Riemann-Liouville derivative, Applied Numerical Mathemat-
ics, 90(2015), 22-37.
[86] L. J. Su, W. Q. Wang and Q. Y. Xu, Finite difference methods for fractional dis-
persion equations, Applied Mathematics and Computation, 216(2010), 3329-3334.
[90] W. Tian, H. Zhou and W. H. Deng, A class of second order difference approx-
imations for solving space fractional diffusion equations, Math. Comp., 84(2015),
1703-1727.
[91] B. West and V. Seshadri, Linear systems with Lévy fluctuations, Physica,
A113(1982), 203-216.
[93] Y. Yan, K. Pal and N. J. Ford, Higher order numerical methods for solving fractional
differential equations, BIT Numer. Math., 54 (2014), 555-584.
[94] Q. Yang, F. Liu and I. Turner, Numerical methods for fractional partial differential
equations with Riesz space fractional derivatives, Appl. Math. Model., 34(2010),
200-218.