
Higher Order Numerical Methods for Fractional Order Differential Equations

Item Type: Thesis or dissertation
Author: Pal, Kamal
Citation: Pal, K. K. (2015). Higher order numerical methods for fractional order differential equations. (Doctoral dissertation). University of Chester, United Kingdom.
Publisher: University of Chester
Usage policy: The full-text may be used and/or reproduced in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes, provided that: a full bibliographic reference is made to the original source; a link is made to the metadata record in ChesterRep; the full-text is not changed in any way; the full-text must not be sold in any format or medium without the formal permission of the copyright holders. For more information please email [email protected]
Item License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Link to Item: http://hdl.handle.net/10034/613354


Higher order numerical
methods for fractional order
differential equations

Thesis submitted in accordance with the requirements of the University of Chester for the degree of Doctor of Philosophy by

Kamal Kanti Pal

August, 2015

Abstract
This thesis explores higher order numerical methods for solving fractional differential
equations.
Firstly, we consider two approaches to constructing higher order numerical methods for solving fractional differential equations. Based on a direct discretization of the fractional differential operator, we show that the order of convergence for the linear fractional differential equation with 0 < α < 1 is O(h^{3−α}), where α denotes the order of the fractional derivative. Based on a discretization of the integral in the equivalent form of the non-linear fractional differential equation, the order of convergence of the numerical method is O(h³) for α ≥ 1 and O(h^{1+2α}) for 0 < α ≤ 1, for sufficiently smooth functions.
Secondly, we introduce extrapolation algorithms for accelerating the convergence order
of the two considered numerical methods. Numerical experiments are given for each
algorithm to show that the numerical results are consistent with the theoretical results.
Finally, we introduce a higher order algorithm for solving two-sided space-fractional partial differential equations. The space-fractional derivatives we consider here are left-handed and right-handed Riemann-Liouville fractional derivatives, which are expressed by using Hadamard finite-part integrals. We approximate the Hadamard finite-part integrals by using piecewise quadratic interpolation polynomials and obtain a numerical approximation of the space-fractional derivative with convergence order O(Δx^{3−α}), 1 < α < 2. A shifted implicit finite difference method is applied for solving the two-sided space-fractional partial differential equation, and we prove that the order of convergence of the finite difference method is O(Δt + Δx^{min(3−α,β)}), 1 < α < 2, β > 0, where Δt and Δx denote the time and space stepsizes, respectively, α is the order of the fractional derivative and β is the Lipschitz constant related to the exact solution. Numerical examples, where the solutions have varying degrees of smoothness, are presented and compared with the exact analytical solution, to assess the practical performance of the method against the theoretical order of convergence.

Declaration
No part of the work referred to in this thesis has been submitted in support of an application for another degree or qualification of this or any other institution of learning. However, some parts of the material contained herein have been published previously.

Publications
• Y. Yan, K. Pal and N. J. Ford [93], Higher order numerical methods for solving
fractional differential equations, BIT Numer. Math., 54 (2014), 555-584.

• K. Pal, F. Liu and Y. Yan [73], Numerical solutions for fractional differential equa-
tions by extrapolation, Lecture Notes in Computer Science, Springer series, Volume
9045 (2015), 299-306.

• K. Pal, F. Liu, Y. Yan and G. Roberts [74], Finite difference method for two-sided
space-fractional partial differential equations, Lecture Notes in Computer Science,
Springer series, Volume 9045 (2015), 307-314.

• N. J. Ford, K. Pal and Y. Yan [42], An algorithm for the numerical solution of
two-sided space-fractional partial differential equations, Computational Methods in
Applied Mathematics, 15 (2015), 497-514.

Conference presentations

• Finite difference methods for solving space-fractional partial differential equations; Faculty of Applied Sciences Post-graduate Research Conference, 27th June 2013, University of Chester.

• A higher order numerical method for solving fractional differential equations (FDEs)
(Diethelm’s Method); Sixth Conference on Finite Difference Methods: Theory and
Applications, June 18-23, 2014, Lozenetz, Bulgaria.

• A higher order numerical method for solving fractional differential equations (FDEs)
(Predictor-corrector method); 6th International Conference on Computational Meth-
ods in Applied Mathematics, Sep 28- Oct 4, 2014, Strobl, Austria.

• Basic concepts of fractional differential equations (FDEs); SCI Early Career Research Meeting on 6th Nov, 2014, in Thornton Science Park, University of Chester.

• Predictor-corrector approach for solving fractional differential equations (FDEs); 26th Biennial Numerical Analysis Conference, 23rd to 26th June, 2015, Glasgow.

Poster presentations

• Finite difference methods for space-fractional partial differential equations; Faculty Postgraduate Conference, Faculty of Applied Science, University of Chester, 22nd June, 2012.

• Predictor-corrector approach for solving fractional differential equations; London Mathematical Society (LMS) 150th Anniversary celebration seminar at University of Chester, Thornton Science Park, 3rd July, 2015.

Acknowledgements:
I am grateful to my PhD supervisors Dr. Yubin Yan and Professor Neville J. Ford for
their continuous help, encouragement and guidance.

I would like to take the opportunity to thank all the colleagues and friends of the mathematics research group at the University of Chester for accepting me as a visiting lecturer in 2014-2015 and providing a friendly and stimulating working environment.

Furthermore, I would like to thank all the staff in the Department of Mathematics and the international centre at the University of Chester for their contribution during my PhD studies. I would also like to thank Nicola Banks for commenting on a draft of this thesis.

Finally, I would like to express my special acknowledgements to all of my family members and relatives for their continuous support and encouragement.
Contents

1 Introduction 1
1.1 Fractional calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Basic functions of fractional calculus . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Beta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Mittag-Leffler function . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Structure of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Fractional differential equations (FDEs) 6


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Riemann-Liouville (R-L) fractional integral . . . . . . . . . . . . . . 7
2.2.2 Riemann-Liouville fractional derivative . . . . . . . . . . . . . . . . 8
2.2.3 Caputo fractional derivative . . . . . . . . . . . . . . . . . . . . . . 9
2.2.4 Hadamard finite-part integral . . . . . . . . . . . . . . . . . . . . . 10
2.2.5 Grünwald-Letnikov fractional derivative . . . . . . . . . . . . . . . 10
2.3 Numerical methods for solving FDEs . . . . . . . . . . . . . . . . . . . . . 11
2.4 Existence and uniqueness of the solution of FDEs . . . . . . . . . . . . . . 12
2.5 Applications of fractional differential equations . . . . . . . . . . . . . . . . 17

3 Higher order numerical method for fractional ODEs (Diethelm’s method) 20


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Diethelm’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.1 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3.3 Extending Diethelm’s method . . . . . . . . . . . . . . . . . . . . . . . . . 33


3.3.1 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4 Numerical examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4 Higher order numerical method for fractional ODEs (predictor-corrector method) 50
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2 Fractional Adams-type algorithm (quadratic interpolation polynomial) . . 51
4.3 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4 Numerical examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5 Higher order numerical methods for fractional differential equations by extrapolation 68
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.2 Richardson extrapolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.3 The linear fractional differential equation . . . . . . . . . . . . . . . . . . . 73
5.3.1 The numerical method . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3.2 Approximating the starting values and the starting integrals . . . . 78
5.4 The nonlinear fractional differential equation . . . . . . . . . . . . . . . . . 82
5.4.1 The numerical method . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.4.2 Approximating the starting values and the starting integrals . . . . 87
5.5 Numerical simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.5.1 The linear fractional differential equation . . . . . . . . . . . . . . . 90
5.5.2 The nonlinear fractional differential equation . . . . . . . . . . . . . 95

6 Finite difference method (FDM) for space-fractional PDEs 100


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.2 Brief reviews of FDM for solving space-fractional PDEs . . . . . . . . . . . 101
6.3 FDM based on linear interpolation . . . . . . . . . . . . . . . . . . . . . . 105
6.4 FDM based on quadratic interpolation . . . . . . . . . . . . . . . . . . . . 115
6.4.1 Initial integral approximation . . . . . . . . . . . . . . . . . . . . . 128
6.4.2 Error estimates of the shifted Diethelm FDMs . . . . . . . . . . . . 129

6.5 Numerical simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

7 Conclusions and possibilities for further work 144


List of Figures

3.4.1 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 7 with α = 0.1 . . . 46
3.4.2 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 7 with α = 0.2 . . . 47
3.4.3 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 7 with α = 0.4 . . . 47
3.4.4 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 7 with α = 0.5 . . . 48
3.4.5 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 7 with α = 0.7 . . . 48
3.4.6 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 7 with α = 0.8 . . . 49

4.4.1 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 9 with α = 0.35 . . . 67
4.4.2 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 9 with α = 1.25 . . . 67

6.5.1 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 14 with α = 1.20 . . . 137
6.5.2 The experimentally determined orders of convergence (“EOC”) at t = 1 in Example 14 with α = 1.80 . . . 137

List of Tables

1.1.1 Evolution in the number of publications on fractional differential equations and their applications . . . 2

3.4.1 Numerical results at t = 1 for β = −1 . . . 45
3.4.2 Numerical results at t = 1 for β = −1 . . . 46

4.4.1 Numerical results at t = 1 in Example 9 with the different fractional order α < 1 . . . 66
4.4.2 Numerical results at t = 1 in Example 9 with the different fractional order α > 1 . . . 66

5.5.1 Romberg tableau of approximate solutions . . . 91
5.5.2 Errors for equations (5.5.5)-(5.5.6) with α = 0.3, taken at t = 1 . . . 92
5.5.3 Orders (“EOC”) for equations (5.5.5)-(5.5.6) with α = 0.3, taken at t = 1 . . . 92
5.5.4 Errors for equations (5.5.5)-(5.5.6) with α = 0.5, taken at t = 1 . . . 93
5.5.5 Orders (“EOC”) for equations (5.5.5)-(5.5.6) with α = 0.5, taken at t = 1 . . . 93
5.5.6 Errors for equations (5.5.5)-(5.5.6) with α = 0.9, taken at t = 1 . . . 93
5.5.7 Orders (“EOC”) for equations (5.5.5)-(5.5.6) with α = 0.9, taken at t = 1 . . . 94
5.5.8 Errors for equations (5.5.7)-(5.5.8) with α = 0.3, taken at t = 1 . . . 94
5.5.9 Orders (“EOC”) for equations (5.5.7)-(5.5.8) with α = 0.3, taken at t = 1 . . . 95
5.5.10 Errors for equations (5.5.7)-(5.5.8) with α = 0.5, taken at t = 1 . . . 95
5.5.11 Orders (“EOC”) for equations (5.5.7)-(5.5.8) with α = 0.5, taken at t = 1 . . . 95
5.5.12 Errors for equations (5.5.7)-(5.5.8) with α = 0.9, taken at t = 1 . . . 96
5.5.13 Orders (“EOC”) for equations (5.5.7)-(5.5.8) with α = 0.9, taken at t = 1 . . . 96
5.5.14 Errors for equation (5.5.9) with α = 1.3, taken at t = 1 . . . 97
5.5.15 Orders (“EOC”) for equation (5.5.9) with α = 1.3, taken at t = 1 . . . 97
5.5.16 Errors for equation (5.5.9) with α = 1.5, taken at t = 1 . . . 98
5.5.17 Orders (“EOC”) for equation (5.5.9) with α = 1.5, taken at t = 1 . . . 98
5.5.18 Errors for equation (5.5.9) with α = 1.9, taken at t = 1 . . . 98
5.5.19 Orders (“EOC”) for equation (5.5.9) with α = 1.9, taken at t = 1 . . . 99

6.5.1 The experimentally determined orders of convergence (EOC) at t = 1 in Example 14 by using the shifted Diethelm method . . . 136
6.5.2 The experimentally determined orders of convergence (EOC) at t = 1 in Example 14 by using the shifted Grünwald method . . . 138
6.5.3 The experimentally determined orders of convergence (EOC) at t = 1 in Example 15 by using the shifted Diethelm method . . . 139
6.5.4 The experimentally determined orders of convergence (EOC) at t = 1 in Example 15 by using the shifted Grünwald method . . . 140
6.5.5 The experimentally determined orders of convergence (EOC) at t = 1 in Example 16 for α1 = α . . . 141
6.5.6 The experimentally determined orders of convergence (EOC) at t = 1 in Example 16 for α1 = 3 . . . 141
6.5.7 The experimentally determined orders of convergence (EOC) at t = 1 in Example 17 for α1 = 3 . . . 142
6.5.8 The experimentally determined orders of convergence (EOC) at t = 1 in Example 18 by using the shifted Diethelm method . . . 143
Chapter 1

Introduction

1.1 Fractional calculus


Fractional calculus deals with the study of so-called fractional order integral and derivative operators over real and complex domains, and with their applications. It does not mean the calculus of fractions [76], nor does it mean a fraction of any calculus - differential, integral or the calculus of variations. Fractional calculus is a name for the theory of integrals and derivatives of arbitrary order, which unifies and generalizes the notions of integer-order differentiation and n-fold integration.

Fractional derivatives and fractional integrals are by no means new in mathematics. In recent years a huge interest in fractional calculus has arisen because of its applicability to vast areas of scientific interest. In the 18th and 19th centuries many brilliant scientists were motivated to focus their attention on fractional calculus [4]. For instance, we can mention Euler (1738), Laplace (1812), Fourier (1822), Abel (1823-1826), Liouville (1832-1873), Riemann (1847), Holmgren (1865-1867), Grünwald (1867-1872), Letnikov (1868-1872), Laurent (1884), Nekrassov (1888), Krug (1890), Hadamard (1892), Heaviside (1892-1912), Pincherle (1902), Hardy and Littlewood (1917-1928), Weyl (1917), Lévy (1923), Marchaud (1927), Davis (1924-1936), Zygmund (1935-1945), Love (1938-1996), Erdelyi (1939-1965), Kober (1940), Widder (1941), Riesz (1949).

However, interest in the specific topic of fractional calculus surged only at the end of the last century. Fractional differential equations, that is, those involving derivatives of real and complex order, have assumed an important role in modelling the anomalous dynamics of many processes related to complex systems in the most diverse areas of science and engineering [4]. During the last 25 years there has been a spectacular increase in the use of fractional differential models to simulate the dynamics of many different anomalous processes, especially those involving ultra-slow diffusion. The following table is based only on the Scopus database, but it reflects this state of affairs clearly [4]:

Words in title or abstract                1960-1980   1981-1990   1991-2000   2001-2010
Fractional Brownian Motion                        2          38         532        1295
Anomalous Diffusion                             185         261         626        1205
Anomalous Relaxation                             21          23          70          61
Superdiffusion or Subdiffusion                    0          22         121         521
Fractional Models, Kinetics, Dynamics            11          24         128         443
Fractional Differential Equations                 1           1          74         943

Table 1.1.1: Evolution in the number of publications on fractional differential equations and their applications.

1.2 Basic functions of fractional calculus


In fractional calculus, the gamma function and the beta function are the basic mathematical tools needed to understand the origin of its computational challenges. The gamma function generalizes the factorial n! and allows n to take non-integer and even complex values [76].

1.2.1 Gamma Function

The gamma function Γ(z) is defined by the integral [51]

    Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt,   Re(z) > 0,    (1.2.1)

which is the Euler integral of the second kind and converges in the right half of the complex plane Re(z) > 0. Indeed, if z = x + iy, we have

    Γ(x + iy) = ∫_0^∞ e^{−t} t^{x−1+iy} dt = ∫_0^∞ e^{−t} t^{x−1} e^{iy log(t)} dt
              = ∫_0^∞ e^{−t} t^{x−1} [cos(y log(t)) + i sin(y log(t))] dt,    (1.2.2)

which is convergent for any x > 0. The reduction formula of the gamma function is

    Γ(z + 1) = zΓ(z),   Re(z) > 0,    (1.2.3)

which can be proved by integrating by parts [76], with z > 0, z ∈ ℝ:

    Γ(z + 1) = ∫_0^∞ e^{−t} t^z dt = [−e^{−t} t^z]_0^∞ + z ∫_0^∞ e^{−t} t^{z−1} dt = zΓ(z).

Since Γ(1) = 1, the recurrence shows that for any positive integer z [72],

    Γ(z + 1) = zΓ(z) = z(z − 1)Γ(z − 1) = · · · = z(z − 1)(z − 2) · · · 2 · 1 · Γ(1) = z!.
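The reduction formula (1.2.3) is easy to check numerically. The following small Python sketch (an illustrative addition, not part of the original text) uses the standard library's math.gamma:

    import math

    # Reduction formula: Gamma(z + 1) = z * Gamma(z), checked at a non-integer point
    z = 2.7
    assert math.isclose(math.gamma(z + 1), z * math.gamma(z), rel_tol=1e-12)

    # For positive integers the recurrence collapses to the factorial: Gamma(n + 1) = n!
    for n in range(1, 10):
        assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-12)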

1.2.2 Beta Function

The beta function B(z, w) is defined by [76]

    B(z, w) = ∫_0^1 t^{z−1} (1 − t)^{w−1} dt,   Re(z) > 0, Re(w) > 0,    (1.2.4)

which is Euler's integral of the first kind. By using the Laplace transform, the beta function can be written in terms of the gamma function:

    B(z, w) = Γ(z)Γ(w)/Γ(z + w),   Re(z) > 0, Re(w) > 0.    (1.2.5)
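Identity (1.2.5) can likewise be checked by comparing a direct quadrature of (1.2.4) against the gamma-function expression; a minimal Python sketch (illustrative, with an arbitrarily chosen test point):

    import math

    def beta_quad(z, w, n=100000):
        # Midpoint rule for B(z, w) = int_0^1 t^(z-1) * (1 - t)^(w-1) dt; for z, w > 1
        # the integrand is bounded, so the plain midpoint rule suffices
        h = 1.0 / n
        return h * sum(((i + 0.5) * h) ** (z - 1) * (1 - (i + 0.5) * h) ** (w - 1)
                       for i in range(n))

    z, w = 2.5, 3.5
    print(beta_quad(z, w))                                     # ~0.03681...
    print(math.gamma(z) * math.gamma(w) / math.gamma(z + w))   # same value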

1.2.3 Mittag-Leffler function

The Mittag-Leffler function also plays a very important role in the research of fractional calculus. The classical Mittag-Leffler function of one parameter is defined by [51]

    E_α(z) := Σ_{k=0}^∞ z^k / Γ(αk + 1),   z ∈ ℂ, Re(α) > 0.    (1.2.6)

In particular, when α = 1 and α = 2 we have E_1(z) = e^z and E_2(z) = cosh(√z).

The Mittag-Leffler type function with two parameters α, β is defined by the series expansion as follows [76]:

    E_{α,β}(z) := Σ_{k=0}^∞ z^k / Γ(αk + β),   (α > 0, β > 0).
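For moderate arguments the series (1.2.6) can be summed directly; a small Python sketch (illustrative; the truncation at 50 terms is an arbitrary choice adequate for small |z|):

    import math

    def mittag_leffler(alpha, z, terms=50):
        # Truncated series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1)
        return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

    z = 0.8
    print(mittag_leffler(1.0, z), math.exp(z))                # E_1(z) = e^z
    print(mittag_leffler(2.0, z), math.cosh(math.sqrt(z)))    # E_2(z) = cosh(sqrt(z))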

1.3 Structure of the thesis


In Chapter 2, we will look into the basic preliminaries and fundamentals of fractional differential equations. Some of the important solution methods of fractional calculus are discussed in this chapter. Additionally, we will present the existence and uniqueness theorems for the solution.

In Chapter 3, we will discuss Diethelm's numerical method for solving fractional ordinary differential equations (ODEs). In [26], Diethelm considered a linear fractional differential equation and used a first-degree compound quadrature formula to approximate the Hadamard finite-part integral in the equivalent form of the considered equations, thereby defining a numerical method for solving them. Here we approximate the Hadamard finite-part integral by using the second-degree compound quadrature formula and obtain a higher order numerical method for the considered fractional differential equations.
In Chapter 4, we will discuss another numerical method, the fractional Adams-type method (also called the predictor-corrector method) for solving fractional differential equations (FDEs), which was developed by Kai Diethelm, Neville J. Ford and Alan D. Freed. In [29], the authors approximated the equivalent integral equation by using a piecewise linear interpolation polynomial and introduced a fractional Adams method for solving fractional ODEs. We will use piecewise quadratic interpolation polynomials to approximate the integral and introduce a high order fractional Adams method for solving fractional ODEs.
In Chapter 5, we will consider the Richardson extrapolation technique for solving fractional differential equations. In this chapter we will also discuss the approximations of the starting values and the starting integrals that appear in the numerical algorithm based on piecewise quadratic interpolation polynomials.

In Chapter 6, we will consider a finite difference method for space-fractional partial differential equations. We will examine the stability, consistency and convergence of the proposed finite difference method. A shifted implicit finite difference method is introduced for solving two-sided space-fractional partial differential equations, and we prove that the order of convergence of the finite difference method is O(Δt + Δx^{min(3−α,β)}), 1 < α < 2, β > 0, where Δt and Δx denote the time and space stepsizes, respectively.

In Chapter 7, we will summarize the thesis and indicate plans for further research.
Chapter 2

Fractional differential equations


(FDEs)

2.1 Introduction
Fractional differential equations provide an excellent mathematical tool for the description of memory and hereditary properties of various materials and processes [10]. The fractional operators are non-local, which is their most significant advantage in applications. The standard derivative of a function carries only local information about the function's behaviour near the point in question, while the fractional derivative encapsulates information about the function's behaviour from the earliest point in time up to the present.

The advantages of fractional differential equations become apparent in modelling mechanical and electrical properties of real materials, as well as in the description of rheological properties of rocks [10]. FDEs have been used successfully to model the frequency-dependent damping behaviour of many viscoelastic materials [52], a cardiac electrophysiological model [8], an electrochemical process [50] and a radial flow problem [64]. Many papers have also illustrated the application of FDEs in dielectric polarization [87] and the control of viscoelastic structures [3].

Several analytical methods have been proposed to solve FDEs, for example the Laplace transform, the Mellin transform, the Fourier transform, model synthesis and eigenvector expansion. Most of these methods are applicable only to linear FDEs and cannot be applied to non-linear FDEs.

Recent developments have seen a tremendous interest in numerical approximation of the solutions of FDEs, since numerical methods can be effectively applied to both linear and non-linear FDEs (see Diethelm [25, 31], Lubich [55]). As pointed out by Diethelm and Freed [31], most initial value problems (IVPs) for FDEs are equivalent to Volterra integral equations. Therefore the numerical schemes for Volterra integral equations can be applied to FDEs. Lubich [53, 54] took advantage of the fact that FDEs can be converted into Volterra integral equations. Diethelm and Walz [33] presented an extrapolation method for the numerical solution of FDEs. This was based on the algorithm of [26], where the application of extrapolation was justified. The algorithm used the Hadamard finite-part integral stated in [24] to determine the weights of the numerical method. Diethelm et al. [29] presented a predictor-corrector numerical method for solving FDEs. It was demonstrated that the Adams-Moulton predictor-corrector method for ODEs can be extended to a predictor-corrector method for FDEs, and a detailed error analysis for the fractional Adams method was produced.

2.2 Definitions
In this section we will introduce some of the fundamental definitions of fractional derivatives and integrals, such as the Riemann-Liouville integral, the Riemann-Liouville fractional derivative, the Caputo derivative and the Hadamard finite-part integral. We will also discuss some theorems and facts related to fractional calculus that we will apply in our research.

2.2.1 Riemann-Liouville (R-L) fractional integral

Let n ∈ ℝ⁺. The operator J_a^n, defined on L_1(a, b) by [25]

    J_a^n f(t) := (1/Γ(n)) ∫_a^t (t − τ)^{n−1} f(τ) dτ,    (2.2.1)

for a ≤ t ≤ b, is called the Riemann-Liouville fractional integral operator of order n.

For n = 0 we set J_a^0 := I, the identity operator, which is quite convenient for further manipulations. In fact,

    lim_{n→0} J_a^n f(t) = lim_{n→0} (1/Γ(n)) ∫_a^t (t − τ)^{n−1} f(τ) dτ
                        = lim_{n→0} (1/Γ(n)) ∫_a^t f(τ) d( −(t − τ)^n / n )
                        = lim_{n→0} (1/Γ(n + 1)) [ f(a)(t − a)^n + ∫_a^t f′(τ)(t − τ)^n dτ ]
                        = 1 · [ f(a) · 1 + ∫_a^t f′(τ) · 1 dτ ]
                        = f(a) + f(t) − f(a) = f(t).

Thus, J_a^0 f(t) = f(t).

Moreover, as noted in [25], in the case n ≥ 1 it is obvious that the integral J_a^n f(t) exists for every t ∈ [a, b], because the integrand is the product of an integrable function f and the continuous function (t − ·)^{n−1}. One of the most important properties of the Riemann-Liouville integral is the following.

Theorem 2.2.1. [25] Let α, β ≥ 0 and f ∈ L_1(a, b). Then

    J_a^α J_a^β f = J_a^{α+β} f    (2.2.2)

holds almost everywhere on [a, b]. If additionally f ∈ C[a, b] or α + β ≥ 1, then the identity holds everywhere on [a, b]. In particular, Theorem 2.2.1 gives the commutative property

    J_a^α J_a^β = J_a^β J_a^α.    (2.2.3)
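The semigroup property (2.2.2) can be illustrated on monomials, using the classical identity J_0^α t^p = (Γ(p + 1)/Γ(p + 1 + α)) t^{p+α} (a standard formula assumed here, not derived in the text). A minimal Python sketch:

    import math

    def J_monomial(alpha, coef, p):
        # J_0^alpha applied to coef * t^p, returned as (coefficient, exponent)
        return coef * math.gamma(p + 1) / math.gamma(p + 1 + alpha), p + alpha

    a, b, p = 0.3, 0.9, 2.0
    c, q = J_monomial(b, 1.0, p)        # J^beta t^2
    c, q = J_monomial(a, c, q)          # J^alpha J^beta t^2
    print(c, q)
    print(J_monomial(a + b, 1.0, p))    # J^(alpha+beta) t^2: same coefficient, same power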

2.2.2 Riemann-Liouville fractional derivative

Suppose p > 0. We define the Riemann-Liouville fractional derivative as [76]

    ᴿ₀D_t^p f(t) = D^n [ᴿ₀D_t^{p−n} f(t)] = D^n ( (1/Γ(n − p)) ∫_0^t (t − τ)^{n−p−1} f(τ) dτ ),   p > 0,    (2.2.4)

where D^n = d^n/dt^n and n − 1 < p < n. Recall that D^n = d^n/dt^n is the derivative part, while ᴿ₀D_t^{p−n} f(t) = J_0^{n−p} f(t) is the Riemann-Liouville integral part.

Example 1. Suppose f(t) = t². Find the value of ᴿ₀D_t^{1/2} f(t).

Solution: Here p = 1/2 lies in the interval 0 < p < 1, so that n = 1. Using (2.2.4) gives

    ᴿ₀D_t^{1/2} f(t) = D^1 [ᴿ₀D_t^{−1/2} f(t)] = (d/dt) ( (1/Γ(1/2)) ∫_0^t (t − τ)^{−1/2} τ² dτ ).    (2.2.5)

Evaluating the beta-type integral, ∫_0^t (t − τ)^{−1/2} τ² dτ = (Γ(1/2)Γ(3)/Γ(7/2)) t^{5/2}, and differentiating gives ᴿ₀D_t^{1/2} t² = (Γ(3)/Γ(5/2)) t^{3/2} = 8t^{3/2}/(3√π).
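The inner integral in (2.2.5) can also be checked numerically; the substitution u = √(t − τ) removes the endpoint singularity, turning the integrand into the smooth function 2(t − u²)². A short Python sketch (illustrative, with an arbitrary test point t):

    import math

    def inner_integral(t, n=4000):
        # int_0^t (t - tau)^(-1/2) * tau^2 dtau with tau = t - u^2, dtau = -2u du,
        # which gives int_0^sqrt(t) 2*(t - u^2)^2 du; midpoint rule on the smooth integrand
        h = math.sqrt(t) / n
        return h * sum(2.0 * (t - ((i + 0.5) * h) ** 2) ** 2 for i in range(n))

    t = 1.5
    exact = math.gamma(0.5) * math.gamma(3) / math.gamma(3.5) * t ** 2.5
    print(inner_integral(t), exact)    # the two values agree closely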

2.2.3 Caputo fractional derivative

Suppose n − 1 < p < n and p > 0. We define the Caputo fractional derivative as [76]

    ᶜ₀D_t^p f(t) = ᴿ₀D_t^{p−n} [D^n f(t)] = (1/Γ(n − p)) ∫_0^t (t − τ)^{n−p−1} [D^n f(τ)] dτ.    (2.2.6)

Example 2. Suppose f(t) = t². Find the value of ᶜ₀D_t^{1/2} f(t).

Solution: Here p = 1/2 lies in the interval 0 < p < 1, so that n = 1. Using (2.2.6) gives

    ᶜ₀D_t^{1/2} f(t) = ᴿ₀D_t^{−1/2} [D^1 f(t)] = (1/Γ(1/2)) ∫_0^t (t − τ)^{−1/2} (d/dτ) f(τ) dτ = (1/Γ(1/2)) ∫_0^t (t − τ)^{−1/2} · 2τ dτ.    (2.2.7)

Since f(0) = 0, this coincides with the Riemann-Liouville derivative of Example 1, giving ᶜ₀D_t^{1/2} t² = 8t^{3/2}/(3√π).
Remark 3. Suppose p > 0 and n − 1 < p < n. Then the relation between the Riemann-Liouville and Caputo fractional derivatives can be expressed by the following theorem [25].

Theorem 2.2.2. Let p > 0 and n − 1 < p < n. We have

    ᴿ₀D_t^p f(t) = ᶜ₀D_t^p f(t) + Σ_{k=0}^{n−1} ( f^{(k)}(0) / Γ(−p + k + 1) ) t^{k−p}.    (2.2.8)

Proof. We only consider the case n = 1 and 0 < p < 1:

    ᴿ₀D_t^p f(t) = D^1 [ᴿ₀D_t^{p−1} f(t)] = (d/dt) ( (1/Γ(1 − p)) ∫_0^t (t − τ)^{−p} f(τ) dτ )
      = (d/dt) (1/Γ(1 − p)) ( [ −f(τ) (t − τ)^{−p+1}/(−p + 1) ]_{τ=0}^{τ=t} + ∫_0^t ( (t − τ)^{−p+1}/(−p + 1) ) f′(τ) dτ )
      = (d/dt) (1/Γ(1 − p)) ( f(0) t^{1−p}/(1 − p) + ∫_0^t ( (t − τ)^{1−p}/(1 − p) ) f′(τ) dτ )
      = (1/Γ(1 − p)) f(0) t^{−p} + (1/Γ(1 − p)) ∫_0^t (∂/∂t) ( (t − τ)^{1−p}/(1 − p) ) f′(τ) dτ
      = (1/Γ(1 − p)) f(0) t^{−p} + (1/Γ(1 − p)) ∫_0^t (t − τ)^{−p} f′(τ) dτ
      = ᶜ₀D_t^p f(t) + (1/Γ(1 − p)) f(0) t^{−p}.

Similarly, we can prove the case n − 1 < p < n, n > 1, i.e.,

    ᴿ₀D_t^p f(t) = ᶜ₀D_t^p f(t) + Σ_{k=0}^{n−1} ( f^{(k)}(0) / Γ(−p + k + 1) ) t^{k−p}.    (2.2.9)
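Relation (2.2.8) can be checked on a function with f(0) ≠ 0, say f(t) = t² + 1, using the standard monomial identities ᴿ₀D_t^p t^q = (Γ(q + 1)/Γ(q + 1 − p)) t^{q−p} and ᶜ₀D_t^p c = 0 for a constant c (both assumed here, not derived above). A minimal Python sketch:

    import math

    p, t = 0.5, 2.0

    def rl_mono(q):
        # Riemann-Liouville derivative of order p of t^q, evaluated at time t
        return math.gamma(q + 1) / math.gamma(q + 1 - p) * t ** (q - p)

    rl = rl_mono(2) + rl_mono(0)      # R-L derivative of t^2 + 1 (acts term by term)
    caputo = rl_mono(2)               # Caputo derivative kills the constant term
    correction = t ** (-p) / math.gamma(1 - p)   # k = 0 term of (2.2.8) with f(0) = 1
    print(rl, caputo + correction)    # the two sides of (2.2.8) agree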

2.2.4 Hadamard finite-part integral

The Hadamard finite-part integral is one of the most important mathematical tools in fractional derivatives, integral equations and partial differential equations. Let ℕ denote the set of all natural numbers. Then, for p ∉ ℕ, the Hadamard finite-part integral on a general interval [a, b] is defined in [24] as follows:

    ⨍_a^b (x − a)^{−p} f(x) dx    (2.2.10)
      := Σ_{k=0}^{⌊p⌋−1} f^{(k)}(a) (b − a)^{k+1−p} / ( (k + 1 − p) k! ) + ∫_a^b (x − a)^{−p} R_{⌊p⌋−1}(x, a) dx,

where

    R_μ(x, a) := (1/μ!) ∫_a^x (x − y)^μ f^{(μ+1)}(y) dy,    (2.2.11)

and ⨍ denotes the Hadamard finite-part integral; ⌊p⌋ denotes the largest integer not exceeding p, where p ∉ ℕ.

The Hadamard finite-part integral is the mathematical tool that arises when reformulating boundary value problems for partial differential equations with integer-order singularities, and it is also encountered for non-integer-order singularities.

In particular, from [24] we can see that the Riemann-Liouville fractional derivative ᴿₐD_x^p f of order p > 0, p ∉ ℕ, of the function f may be expressed as a finite-part integral according to

    ᴿₐD_x^p f(x) = (1/Γ(−p)) ⨍_a^x (x − y)^{−p−1} f(y) dy.    (2.2.12)

2.2.5 Grünwald-Letnikov fractional derivative

Grünwald and Letnikov independently developed another non-integer-order derivative at nearly the same time as Riemann and Liouville developed the Riemann-Liouville fractional derivative to solve fractional differential equations. Later on, many other authors used this Grünwald-Letnikov fractional derivative to construct numerical methods for fractional differential equations.

The Grünwald-Letnikov fractional derivative can be expressed as follows [51]. Let α ∈ ℝ⁺. The operator ᴳᴸD_a^α defined by

    ᴳᴸD_a^α f(t) = lim_{h→0} (Δ_h^α f)(t) / h^α = lim_{h→0, mh=t−a} h^{−α} Σ_{k=0}^{m} (−1)^k \binom{α}{k} f(t − kh),    (2.2.13)

for a ≤ t ≤ b, is called the Grünwald-Letnikov fractional derivative of order α. Here (Δ_h^α f)(t) is a fractional formulation of the backward difference. This definition holds for an arbitrary function f(t), but the convergence of the infinite sum cannot be ensured for all functions.
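Definition (2.2.13) translates directly into a first-order numerical scheme once h is fixed: the coefficients (−1)^k \binom{α}{k} obey the recurrence c_k = c_{k−1}(k − 1 − α)/k. A minimal Python sketch (illustrative; the test function t² and its fractional derivative 2t^{2−α}/Γ(3 − α) are standard choices, not taken from the text):

    import math

    def gl_derivative(f, alpha, t, m=2000):
        # Truncated Gruenwald-Letnikov sum with a = 0 and h = t/m
        h = t / m
        c, s = 1.0, f(t)                # c = (-1)^k * binom(alpha, k), starting at k = 0
        for k in range(1, m + 1):
            c *= (k - 1 - alpha) / k    # recurrence for the next coefficient
            s += c * f(t - k * h)
        return s / h ** alpha

    alpha, t = 0.5, 1.0
    print(gl_derivative(lambda x: x * x, alpha, t))
    print(2.0 / math.gamma(3 - alpha) * t ** (2 - alpha))   # exact value; O(h) agreement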

2.3 Numerical methods for solving FDEs


In this section we will briefly review some numerical methods for fractional differential equations. A number of numerical and analytical methods have been developed for various types of FDEs, for example the variational iteration method, the fractional differential transform method, the Adomian decomposition method, the homotopy perturbation method and the power series method [4].

The Hadamard finite-part integral was used by Diethelm [24] to obtain an approximate algorithm for solving fractional differential equations. Podlubny [76] used the Grünwald-Letnikov method to solve FDEs, and more recently Diethelm, Ford and Freed [29] introduced a fractional Adams-type predictor-corrector method for solving FDEs. Lubich [53] wrote the fractional differential equation in the form of an Abel-Volterra integral equation, used the convolution quadrature method to approximate the fractional integral, and obtained an approximate solution of the fractional differential equation.

In our research we aim to use Diethelm's algorithm and the Adams-type predictor-corrector algorithm to construct higher order numerical methods for solving fractional differential equations, methods that are more accurate and cost-effective in mathematical modelling. Diethelm [26] considered a linear fractional differential equation, with 0 < α < 1, and used a first-degree compound quadrature formula to approximate the Hadamard finite-part integral in the equivalent form of the considered equations, defining a numerical method with order of convergence O(h^{2−α}), 0 < α < 1; see also [30]. Here we aim to use the second-degree compound quadrature formula to approximate the Hadamard finite-part integral, leading to a method of higher convergence order. In [29], Kai Diethelm, Neville J. Ford and Alan D. Freed introduced the Adams-type predictor-corrector method for solving both linear and nonlinear fractional differential equations. In their numerical algorithm the authors converted the considered equations into a Volterra integral equation and then approximated the integral by using a piecewise linear interpolation polynomial, proving that the order of convergence of the numerical method is O(h²) for 1 < α < 2 and O(h^{1+α}) for 0 < α < 1 if ᶜ₀D_t^α y(t) ∈ C²[0, T]. We will use piecewise quadratic interpolation polynomials to approximate the integral and introduce a high order fractional Adams method for solving the fractional differential equations, and we prove that the order of convergence of our numerical method is higher than the order of the method in [29].

2.4 Existence and uniqueness of the solution of FDEs


Existence and uniqueness of the solution are very important mathematical elements for any differential equation. In this section we will discuss the existence and uniqueness of solutions of FDEs in the Riemann-Liouville sense, with the initial conditions specified according to Caputo's suggestion [27], thus allowing for an interpretation in a physically meaningful way.

Let us consider the initial value problem, with m − 1 < q < m, m ≥ 1,

    ᶜ₀D_x^q y(x) = f(x, y(x)),    (2.4.1)
    y^{(k)}(0) = y_0^k,   k = 0, 1, 2, . . . , m − 1,    (2.4.2)

where ᶜ₀D_x^q y(x) represents the Caputo fractional derivative of order q > 0, with m − 1 < q < m,

    ᶜ₀D_x^q y(x) := (1/Γ(m − q)) ∫_0^x (x − u)^{m−1−q} y^{(m)}(u) du.    (2.4.3)

The existence and the uniqueness of the solution are described by Diethelm and Ford [27] in the following two theorems, which are very similar to the corresponding classical theorems known in the case of first-order equations.

Theorem 2.4.1. (Existence) [27] Assume that D := [0, X*] × [y_0^0 − α, y_0^0 + α] with some X* > 0 and some α > 0, and let the function f : D → ℝ be continuous. Furthermore, define X := min{ X*, (αΓ(q + 1)/‖f‖_∞)^{1/q} }. Then there exists a function y : [0, X] → ℝ solving the initial value problem (2.4.1)-(2.4.2).

Theorem 2.4.2. (Uniqueness) [27] Assume that D := [0, X*] × [y_0^0 − α, y_0^0 + α] with some X* > 0 and some α > 0. Furthermore, let the function f : D → ℝ be bounded on D and fulfil a Lipschitz condition with respect to the second variable, i.e.

    |f(x, y) − f(x, z)| ≤ L |y − z|,

with some constant L > 0 independent of x, y, and z. Then, denoting X as in Theorem 2.4.1, there exists at most one function y : [0, X] → ℝ solving the initial value problem (2.4.1)-(2.4.2).

To prove the existence and uniqueness theorems we need the following results.

Lemma 2.4.3. [27] If the function f is continuous, then the initial value problem (2.4.1)-(2.4.2) is equivalent to the nonlinear Volterra integral equation of the second kind,

    y(x) = Σ_{k=0}^{m−1} (x^k / k!) y^{(k)}(0) + (1/Γ(q)) ∫_0^x (x − z)^{q−1} f(z, y(z)) dz,    (2.4.4)

with m − 1 < q ≤ m. In other words, every solution of the Volterra equation (2.4.4) is also a solution of our initial value problem (2.4.1)-(2.4.2), and vice versa.

Theorem 2.4.4. [27] Let U be a nonempty closed subset of a Banach space E, and let α_n ≥ 0 for every n be such that Σ_{n=0}^∞ α_n converges. Moreover, let the mapping A : U → U satisfy the inequality

    ‖A^n u − A^n v‖ ≤ α_n ‖u − v‖    (2.4.5)

for every n ∈ ℕ and every u, v ∈ U. Then A has a uniquely determined fixed point u*. Furthermore, for any u_0 ∈ U, the sequence (A^n u_0)_{n=1}^∞ converges to this fixed point u*.

Proof of Theorem 2.4.2 (Uniqueness): [27] As we identified previously, we need only discuss the case 0 < q < 1. In this situation, the Volterra equation (2.4.4) reduces to

    y(x) = y_0^0 + (1/Γ(q)) ∫_0^x (x − z)^{q−1} f(z, y(z)) dz.    (2.4.6)

We thus introduce the set U = { y ∈ C[0, X] : ‖y − y_0^0‖_∞ ≤ α }. Obviously, this is a closed subset of the Banach space of all continuous functions on [0, X], equipped with the Chebyshev norm. Since the constant function y ≡ y_0^0 is in U, we also see that U is not empty. On U we define the operator A by

    (Ay)(x) = y_0^0 + (1/Γ(q)) ∫_0^x (x − z)^{q−1} f(z, y(z)) dz.    (2.4.7)

Using this operator, the equation under consideration can be rewritten as

    y = Ay,

and in order to prove our desired uniqueness result, we have to show that A has a unique fixed point. Let us therefore investigate the properties of the operator A. First we note that, for 0 ≤ x_1 ≤ x_2 ≤ X,

    |(Ay)(x_1) − (Ay)(x_2)|
      = (1/Γ(q)) | ∫_0^{x_1} (x_1 − z)^{q−1} f(z, y(z)) dz − ∫_0^{x_2} (x_2 − z)^{q−1} f(z, y(z)) dz |    (2.4.8)
      = (1/Γ(q)) | ∫_0^{x_1} ( (x_1 − z)^{q−1} − (x_2 − z)^{q−1} ) f(z, y(z)) dz − ∫_{x_1}^{x_2} (x_2 − z)^{q−1} f(z, y(z)) dz |
      ≤ (‖f‖_∞ / Γ(q)) [ ∫_0^{x_1} ( (x_1 − z)^{q−1} − (x_2 − z)^{q−1} ) dz + ∫_{x_1}^{x_2} (x_2 − z)^{q−1} dz ]
      = (‖f‖_∞ / Γ(q + 1)) ( 2(x_2 − x_1)^q + x_1^q − x_2^q ),    (2.4.9)

proving that Ay is a continuous function. Moreover, for y ∈ U and x ∈ [0, X], we find

    |(Ay)(x) − y_0^0| = (1/Γ(q)) | ∫_0^x (x − z)^{q−1} f(z, y(z)) dz | ≤ (1/Γ(q + 1)) ‖f‖_∞ x^q
                      ≤ (1/Γ(q + 1)) ‖f‖_∞ X^q ≤ (1/Γ(q + 1)) ‖f‖_∞ · αΓ(q + 1)/‖f‖_∞ = α.

Thus, we have shown that Ay ∈ U if y ∈ U; i.e., A maps the set U to itself. The next step is to prove that, for every n ∈ ℕ_0 and every x ∈ [0, X], we have

    ‖A^n y − A^n ȳ‖_{L_∞[0,x]} ≤ ( (Lx^q)^n / Γ(1 + qn) ) ‖y − ȳ‖_{L_∞[0,x]}.    (2.4.10)

This can be seen by induction. In the case n = 0, the statement is trivially true. For the induction step n − 1 → n, we write

    ‖A^n y − A^n ȳ‖_{L_∞[0,x]} = ‖A(A^{n−1} y) − A(A^{n−1} ȳ)‖_{L_∞[0,x]}
      = (1/Γ(q)) sup_{0≤w≤x} | ∫_0^w (w − z)^{q−1} [ f(z, A^{n−1} y(z)) − f(z, A^{n−1} ȳ(z)) ] dz |.

In the next steps, we use the Lipschitz assumption on f and the induction hypothesis and find

    ‖A^n y − A^n ȳ‖_{L_∞[0,x]} ≤ (L/Γ(q)) sup_{0≤w≤x} ∫_0^w (w − z)^{q−1} | A^{n−1} y(z) − A^{n−1} ȳ(z) | dz
      ≤ (L/Γ(q)) ∫_0^x (x − z)^{q−1} sup_{0≤w≤z} | A^{n−1} y(w) − A^{n−1} ȳ(w) | dz
      ≤ ( L^n / (Γ(q) Γ(1 + q(n − 1))) ) ∫_0^x (x − z)^{q−1} z^{q(n−1)} sup_{0≤w≤z} | y(w) − ȳ(w) | dz
      ≤ ( L^n / (Γ(q) Γ(1 + q(n − 1))) ) sup_{0≤w≤x} | y(w) − ȳ(w) | ∫_0^x (x − z)^{q−1} z^{q(n−1)} dz
      = ( L^n / (Γ(q) Γ(1 + q(n − 1))) ) · ( Γ(q) Γ(1 + q(n − 1)) / Γ(1 + qn) ) x^{qn} ‖y − ȳ‖_{L_∞[0,x]},

which is our desired result (2.4.10). As a consequence, we find, taking Chebyshev norms on our fundamental interval [0, x],

    ‖A^n y − A^n ȳ‖_∞ ≤ ( (Lx^q)^n / Γ(1 + qn) ) ‖y − ȳ‖_∞.

We have now shown that the operator A fulfils the assumptions of Theorem 2.4.4 with α_n = (Lx^q)^n / Γ(1 + qn). In order to apply that theorem, we only need to verify that the series Σ_{n=0}^∞ α_n converges. This, however, is a well known result: the limit

    Σ_{n=0}^∞ (Lx^q)^n / Γ(1 + qn) = E_q(Lx^q)

is the Mittag-Leffler function of order q, evaluated at Lx^q (see [36, Chapter 18] for general results on Mittag-Leffler functions or [49] for details on the role of these functions in fractional calculus). Therefore, we may apply the fixed point theorem and deduce the uniqueness of the solution of our differential equation.

Remark 1. Note that Theorem 2.4.4 not only asserts that the solution is unique; it actually gives us (at least theoretically) a means of determining this solution by a Picard-type iteration process.
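A grid-based Picard iteration makes this concrete. The sketch below (an illustration, not an algorithm from the thesis) discretizes the integral in (2.4.6) with a product rectangle rule, using ∫_{x_j}^{x_{j+1}} (x_i − z)^{q−1} dz = ((x_i − x_j)^q − (x_i − x_{j+1})^q)/q, and applies it to the test problem ᶜD^q y = −y, y(0) = 1, whose exact solution is the Mittag-Leffler function E_q(−x^q):

    import math

    q, X, N, y0 = 0.6, 1.0, 400, 1.0
    x = [i * X / N for i in range(N + 1)]
    f = lambda z, y: -y                      # right-hand side of the test problem

    y = [y0] * (N + 1)                       # Picard iteration started from y = y0
    for _ in range(60):
        y_new = [y0]
        for i in range(1, N + 1):
            s = 0.0
            for j in range(i):               # product rectangle rule on [x_j, x_{j+1}]
                w = ((x[i] - x[j]) ** q - (x[i] - x[j + 1]) ** q) / q
                s += w * f(x[j], y[j])
            y_new.append(y0 + s / math.gamma(q))
        y = y_new

    ml = lambda z: sum(z ** k / math.gamma(q * k + 1) for k in range(60))
    print(y[-1], ml(-X ** q))                # close; differs by the quadrature error only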

Remark 2. Without the Lipschitz assumption on f the solution need not be unique. To see this, look at the simple one-dimensional example

    ᶜ₀D_x^q y = y^k,   0 < q < 1,

with initial condition y(0) = 0. Consider 0 < k < 1, so that the function on the right-hand side of the differential equation is continuous but the Lipschitz condition is violated. Obviously, the zero function is a solution of the initial value problem. However, setting p_j(x) = x^j, we recall that

    ᶜ₀D_x^q p_j(x) = ( Γ(j + 1) / Γ(j + 1 − q) ) p_{j−q}(x).

Thus, the function y(x) = ( Γ(j + 1 − q) / Γ(j + 1) )^{1/(1−k)} x^j with j = q/(1 − k) also solves the problem, proving that the solution is not unique.

Proof of Theorem 2.4.1: [27] We begin by arguments similar to those of the previous proof. In particular, we use the same operator A defined in (2.4.7) and recall that it maps the nonempty, convex, and closed set U = { y ∈ C[0, X] : ‖y − y_0^0‖_∞ ≤ α } to itself.

We shall now prove that A is a continuous operator. A stronger result, (2.4.10), has been derived above, but in that derivation we used the Lipschitz property of f, which we do not assume to hold here. Therefore, we proceed differently and note that, since f is continuous on the compact set D, it is uniformly continuous there. Thus, given an arbitrary ε > 0, we can find δ > 0 such that

    |f(x, y) − f(x, z)| < ε Γ(q + 1)/X^q   whenever |y − z| < δ.    (2.4.11)

Now let y, ȳ ∈ U be such that ‖y − ȳ‖ < δ. Then, in view of (2.4.11),

    |f(x, y(x)) − f(x, ȳ(x))| < ε Γ(q + 1)/X^q    (2.4.12)

for all x ∈ [0, X]. Hence,

    |(Ay)(x) − (Aȳ)(x)| = (1/Γ(q)) | ∫_0^x (x − z)^{q−1} ( f(z, y(z)) − f(z, ȳ(z)) ) dz |
        ≤ ( ε Γ(q + 1) / (X^q Γ(q)) ) ∫_0^x (x − z)^{q−1} dz = ε x^q / X^q ≤ ε,

proving the continuity of the operator A.

Next we look at the set of functions

    A(U) := { Ay : y ∈ U }.

For z ∈ A(U) we find that, for all x ∈ [0, X],

    |z(x)| = |(Ay)(x)| ≤ |y_0^0| + (1/Γ(q)) | ∫_0^x (x − z)^{q−1} f(z, y(z)) dz | ≤ |y_0^0| + (1/Γ(q + 1)) ‖f‖_∞ X^q,

which means that A(U) is bounded in a pointwise sense. Moreover, for 0 ≤ x_1 ≤ x_2 ≤ X, we have found in the proof of Theorem 2.4.2 that

    |(Ay)(x_1) − (Ay)(x_2)| ≤ (‖f‖_∞ / Γ(q + 1)) ( x_1^q − x_2^q + 2(x_2 − x_1)^q ) ≤ 2 (‖f‖_∞ / Γ(q + 1)) (x_2 − x_1)^q.

Thus, if |x_2 − x_1| < δ, then

    |(Ay)(x_1) − (Ay)(x_2)| ≤ 2 (‖f‖_∞ / Γ(q + 1)) δ^q.

Noting that the expression on the right-hand side is independent of y, we see that the set A(U) is equicontinuous. Then the Arzelà-Ascoli theorem yields that every sequence of functions from A(U) has a uniformly convergent subsequence, and therefore A(U) is relatively compact. Schauder's fixed point theorem then asserts that A has a fixed point. By construction, a fixed point of A is a solution of our initial value problem.

2.5 Applications of fractional differential equations


Applications come from a very wide range of science and engineering. Fractional differential equations are becoming increasingly used as a modelling tool for understanding many aspects of nonlocality, for example:

• Fractional-order viscoelasticity models in blood flow [14]: The integer-order stress-strain relation models (such as Hooke's law for elastic solids, Newton's law for viscous liquids, and the Voigt and standard linear (Kelvin-Zener) models) for heterogeneous soft tissue with complex bio-mechanical properties in blood flow provide a reasonable qualitative description; however, they do not satisfactorily describe the real situation [75, 17]. On the other hand, for cell and tissue biomechanics the fractional-order models proposed by Craiem and Armentano [16], Craiem et al. [15], and Doehring et al. [23] seem to be more adequate from both quantitative and qualitative viewpoints.

• Fractional-order models of neurons in biology [17]: In 1981, the neurodynamics of the vestibulo-ocular reflex (VOR) were described by the model of Robinson [78]. This model is based on direct and integrated parallel pathways to the motoneurons. Its first-order transfer functions approximate time and frequency domain data from canal afferents, vestibular and prepositus nuclei neurons, and motoneurons. Anastasio [1] recognized some difficulties of the classical integer-order models in describing the behaviour of premotor neurons in the vestibulo-ocular reflex system. To overcome this problem he proposed a fractional-order model in terms of the Laplace transform of the premotor neuron discharge rate and showed that fractional differentiation and integration can effectively be used to describe various aspects of vestibulo-oculomotor dynamics.

• Fractional calculus in physics: Fractional derivatives are involved in the modelling of electrical circuits and the generalized voltage divider [17]. Le Mehaute and Crepy [59] suggested that electrical circuits may contain a "fractance", an electrical element with fractional-order impedance.

• Fractional calculus in electrochemistry and tracer fluid flows: The fractional advection-dispersion equation (FADE) is used in groundwater hydrology to model the transport of passive tracers carried by fluid flow in a porous medium. Dispersion (or spreading) of tracers depends strongly on the scale of observation. In general, there are three different mechanisms of dispersion [17]: molecular diffusion, variations in the permeability field (microdispersion), and variations of the fluid velocity in the porous medium (macrodispersion). These mechanisms take place at different scales. At large scales, dispersion is essentially controlled by permeability heterogeneity. A fractal structure model for heterogeneous media has been developed by Maloy et al.; for details see [69].

• Continuous time random walk (CTRW) model: CTRW models impose a random waiting time between particle jumps [67], and the non-local CTRW model is a good phenomenological description of tick-by-tick dynamics which can take into account the pathological time evolution of financial markets. This non-local CTRW model is closely related to fractional calculus. For details see [63, 68].

• Cardiac electrical propagation model [8]: Fractional diffusion models describe electrical propagation in heterogeneous media, with cardiac muscle as a representative case of composite biological tissue. They describe the propagation of electrical excitation via the cable equation; for details see [8].

• Fractional-order dynamical systems in control theory [17]: This is the generalization of the classical PID controller to the PI^λD^μ controller, involving a fractional-order integrator and a fractional-order differentiator [76], which has been found to give more efficient control of fractional-order dynamic systems.
Chapter 3

Higher order numerical method for


fractional ODEs (Diethelm’s method)

3.1 Introduction
We consider numerical methods for solving the fractional differential equation

    ᶜ₀D_t^α y(t) = f(t, y(t)),   0 < t < T,    (3.1.1)
    y^{(k)}(0) = y_0^k,   k = 0, 1, 2, . . . , ⌈α⌉ − 1,    (3.1.2)

where the y_0^k may be arbitrary real numbers and α > 0. Here ᶜ₀D_t^α denotes the differential operator in the sense of Caputo, with n − 1 < α < n,

    ᶜ₀D_t^α y(t) = (1/Γ(n − α)) ∫_0^t (t − u)^{n−α−1} y^{(n)}(u) du,

where n = ⌈α⌉ is the smallest integer ≥ α.


Existence and uniqueness of solutions for (3.1.1)-(3.1.2) have been studied, for example, in Podlubny [76] and Diethelm and Ford [27]. Numerical methods for solving fractional differential equations have been considered by many authors, and we mention here a few key contributions. Lubich [53] wrote the fractional differential equation in the form of an Abel-Volterra integral equation and used the convolution quadrature method to approximate the fractional integral, obtaining approximate solutions of the fractional differential equation. Diethelm [26] wrote the fractional Riemann-Liouville derivative by using the Hadamard finite-part integral, approximated the integral by using a quadrature formula, and obtained an implicit numerical algorithm for solving a linear fractional differential equation. Diethelm and Luchko [32] used the observation that a fractional differential equation has an exact solution which can be expressed as a Mittag-Leffler type function; they then used convolution quadrature and discretised operational calculus to produce an approximation to this Mittag-Leffler function. Blank [5] applied a collocation method to approximate the fractional differential equation. Podlubny [76] used the Grünwald-Letnikov method to approximate the fractional derivative, defined an implicit finite difference method for solving (3.1.1)-(3.1.2), and proved that the order of convergence is O(h), where h is the stepsize. Gorenflo [47] introduced a second order O(h²) difference method for solving (3.1.1)-(3.1.2), but the conditions to achieve the desired accuracy are restrictive. In [28], the authors converted the equations (3.1.1)-(3.1.2) into a Volterra integral equation and then approximated the integral by using a piecewise linear interpolation polynomial, introducing a fractional Adams-type predictor-corrector method for solving (3.1.1)-(3.1.2) and proving that the order of convergence of the numerical method is min{2, 1 + α} for 0 < α ≤ 2 if ᶜ₀D_t^α y ∈ C²[0, T]. Deng [18] modified the method in [28] and introduced a new predictor-corrector method for solving (3.1.1)-(3.1.2), whose convergence order is proved to be min{2, 1 + 2α} for α ∈ (0, 1]. In [95], the authors introduced a so-called Jacobi predictor-corrector approach to solve (3.1.1)-(3.1.2), which is based on polynomial interpolation and the Gauss-Lobatto quadrature with respect to some Jacobi weight function; the computational cost is O(N), N = 1/h, and any desired convergence order can be obtained. In [9], a higher order numerical method for solving (3.1.1)-(3.1.2) is obtained, where a quadratic interpolation polynomial was used to approximate the integral. Ford, Morgado and Rebelo recently (see [41]) used a non-polynomial collocation method to achieve good convergence properties without assuming any smoothness of the solution. There are also several works related to the fixed memory principle and the nested memory concept for solving (3.1.1)-(3.1.2); see, e.g., [44, 29, 18, 19, 22].
In [26], Diethelm considered the following linear fractional differential equation, with 0 < α < 1,

    ᶜ₀D_t^α y(t) = βy(t) + f(t),   0 ≤ t ≤ 1,    (3.1.3)
    y(0) = y_0,    (3.1.4)

where β < 0 and f is a given function on the interval [0, 1]. Diethelm [26] used a first-degree compound quadrature formula to approximate the Hadamard finite-part integral in the equivalent form of (3.1.3)-(3.1.4), defined a numerical method for solving (3.1.3)-(3.1.4), and proved that the order of convergence of the numerical method is O(h^{2−α}), 0 < α < 1. Here we approximate the Hadamard finite-part integral by using the second-degree compound quadrature formula and obtain an asymptotic expansion of the error for solving (3.1.3)-(3.1.4), which implies that the order of convergence of the numerical method is O(h^{3−α}), 0 < α < 1. Moreover, a high order finite difference method (O(h^{3−α}), 0 < α < 2) for approximating the Riemann-Liouville fractional derivative is given, which may be applied to construct high order numerical methods for solving time-space-fractional partial differential equations.

3.2 Diethelm’s method


In this section we review Diethelm's method for solving fractional differential equations, where the Hadamard finite-part integral is approximated by piecewise linear interpolation polynomials.

Consider, with 0 < α < 1,

    ᶜ₀D_t^α y(t) = βy(t) + f(t),    (3.2.1)
    y(0) = y_0.    (3.2.2)

It is well known that (3.2.1)-(3.2.2) is equivalent to, with 0 < α < 1,

    ᴿ₀D_t^α [y(t) − y_0] = βy(t) + f(t),   0 ≤ t ≤ 1,    (3.2.3)

where α is the order of the derivative, f is a given function on the interval [0, 1], β ≤ 0 and y is the unknown function. From the definition of the Riemann-Liouville fractional derivative in Chapter 2, for 0 < α < 1 we get

    ᴿ₀D_t^α y(t) = (1/Γ(1 − α)) (d/dt) ∫_0^t (t − τ)^{−α} y(τ) dτ.    (3.2.4)

Let us recall Diethelm's numerical algorithm based on the piecewise linear interpolation polynomial with equispaced nodes. The lemmas below will help the reader to understand the construction of the numerical method for solving fractional differential equations.

Lemma 3.2.1. [35] The Hadamard finite-part integral form of the Riemann-Liouville derivative (3.2.4) can be written as

    ᴿ₀D_t^α y(t) = (1/Γ(−α)) ⨍_0^t (t − τ)^{−1−α} y(τ) dτ,

where 0 < α ≤ 1 and ⨍ represents the Hadamard finite-part integral.

Lemma 3.2.2. [26] Assume that 0 = t_0 < t_1 < t_2 < · · · < t_k < · · · < t_n = 1 is a partition of the interval [0, 1] and 0 < α < 1. Then, at t = t_j,

    ᴿ₀D_t^α [y(t_j)] = h^{−α} Σ_{k=0}^{j} ω_{kj} y(t_j − t_k) + (t_j^{−α}/Γ(−α)) R_j,   j = 1, 2, 3, . . . , n,

where the ω_{kj} are called the weights and R_j is the remainder term, satisfying

    |R_j| ≤ C j^{α−2} ‖y″(t_j − t_j ω)‖_∞,   0 < ω ≤ 1,

where ω is the new variable [y(τ) = y(t_j − t_j ω)] introduced in the proof, h is the time step size, and the weights ω_{kj} satisfy

    Γ(2 − α) ω_{kj} = { 1,                                           k = 0,
                        −2k^{1−α} + (k − 1)^{1−α} + (k + 1)^{1−α},   k = 1, 2, . . . , j − 1,
                        −(α − 1)k^{−α} + (k − 1)^{1−α} − k^{1−α},    k = j.

The proof of Lemma 3.2.2 is straightforward and requires a piecewise linear Lagrange interpolation polynomial.

Proof. We have

    ᴿ₀D_t^α y(t_j) = (1/Γ(−α)) ⨍_0^{t_j} y(τ) / (t_j − τ)^{α+1} dτ.

Suppose t_j − τ = t_j ω; then

    ᴿ₀D_t^α y(t_j) = (t_j^{−α}/Γ(−α)) ⨍_0^1 y(t_j − t_j ω) / ω^{α+1} dω.

Performing another substitution such that g(ω) = y(t_j − t_j ω), we have

    ᴿ₀D_t^α y(t_j) = (t_j^{−α}/Γ(−α)) ⨍_0^1 g(ω) ω^{−(α+1)} dω.

For every j, we replace the integrand by a piecewise interpolation polynomial with the equispaced nodes 0, 1/j, 2/j, 3/j, . . . , j/j. That is,

    ⨍_0^1 g(ω) ω^{−α−1} dω = ⨍_0^1 g_1(ω) ω^{−α−1} dω + R_j,

where g_1(ω) is the piecewise linear interpolation polynomial of g(ω) with the equispaced nodes and R_j is the remainder term.

Note that

    g_1(ω) = ( (ω − k/j) / ((k−1)/j − k/j) ) g((k−1)/j) + ( (ω − (k−1)/j) / (k/j − (k−1)/j) ) g(k/j)   on [(k−1)/j, k/j].

Thus,

    ⨍_0^1 g(ω) ω^{−(1+α)} dω ≈ ⨍_0^1 g_1(ω) ω^{−(1+α)} dω = Q_j(g).    (3.2.5)

Here we observe generally that

    Q_j(g) = ⨍_0^1 g_1(ω) ω^{−(1+α)} dω = ⨍_0^{1/j} g_1(ω) ω^{−(1+α)} dω + Σ_{k=2}^{j} ∫_{(k−1)/j}^{k/j} g_1(ω) ω^{−(1+α)} dω.    (3.2.6)

Applying the Lagrange interpolation polynomial on each integral on the right-hand side of (3.2.6) gives

    ⨍_0^{1/j} g_1(ω) ω^{−(1+α)} dω = ⨍_0^{1/j} [ ( (ω − 1/j)/(0 − 1/j) ) g(0) + ( (ω − 0)/(1/j − 0) ) g(1/j) ] ω^{−(α+1)} dω,
    ∫_{1/j}^{2/j} g_1(ω) ω^{−(1+α)} dω = ∫_{1/j}^{2/j} [ ( (ω − 2/j)/(1/j − 2/j) ) g(1/j) + ( (ω − 1/j)/(2/j − 1/j) ) g(2/j) ] ω^{−(α+1)} dω,
    . . .
    ∫_{(j−1)/j}^{1} g_1(ω) ω^{−(1+α)} dω = ∫_{(j−1)/j}^{1} [ ( (ω − 1)/((j−1)/j − 1) ) g((j−1)/j) + ( (ω − (j−1)/j)/(1 − (j−1)/j) ) g(1) ] ω^{−(α+1)} dω.

For the finite-part integral over [0, 1/j], the definition (2.2.10) (here ⌊1 + α⌋ = 1) gives

    ⨍_0^{1/j} g_1(ω) ω^{−(1+α)} dω
      = g_1(0) (1/j)^{−α}/(−α) + ∫_0^{1/j} ω^{−(1+α)} ( ∫_0^ω g_1′(y) dy ) dω
      = (1/((−α) j^{−α})) g(0) + ( j g(1/j) − j g(0) ) ∫_0^{1/j} ω^{−α} dω
      = (1/((−α) j^{−α})) g(0) + ( j g(1/j) − j g(0) ) / ( (1 − α) j^{1−α} )
      = ( 1/((−α) j^{−α}) − 1/((1 − α) j^{−α}) ) g(0) + ( 1/((1 − α) j^{−α}) ) g(1/j)
      = ( 1/((1 − α) j^{−α}) ) g(1/j) − ( 1/(α(1 − α) j^{−α}) ) g(0).
Now we consider the general subinterval. For 2 ≤ k ≤ j,

    ∫_{(k−1)/j}^{k/j} g_1(ω) ω^{−(1+α)} dω
      = g((k−1)/j) ∫_{(k−1)/j}^{k/j} j (k/j − ω) ω^{−(1+α)} dω + g(k/j) ∫_{(k−1)/j}^{k/j} j (ω − (k−1)/j) ω^{−(1+α)} dω
      = g((k−1)/j) ∫_{(k−1)/j}^{k/j} ( k ω^{−(1+α)} − j ω^{−α} ) dω + g(k/j) ∫_{(k−1)/j}^{k/j} ( j ω^{−α} − (k − 1) ω^{−(1+α)} ) dω
      = g((k−1)/j) [ (k/(−α)) ( (k/j)^{−α} − ((k−1)/j)^{−α} ) − (j/(1−α)) ( (k/j)^{1−α} − ((k−1)/j)^{1−α} ) ]
        + g(k/j) [ (j/(1−α)) ( (k/j)^{1−α} − ((k−1)/j)^{1−α} ) − ((k−1)/(−α)) ( (k/j)^{−α} − ((k−1)/j)^{−α} ) ].
Therefore, collecting the coefficient of each nodal value g(k/j),

    Q_j(g) = ( 1/((1 − α) j^{−α}) ) g(1/j) − ( 1/(α(1 − α) j^{−α}) ) g(0)
             + (the sum over k = 2, . . . , j of the two bracketed coefficients above)
           = Σ_{k=0}^{j} α_{kj} g(k/j) = Σ_{k=0}^{j} α_{kj} y(t_j − t_k),

where the α_{kj} satisfy the following. When k = 0,

    α_{0j} = −1 / ( α(1 − α) j^{−α} ),

and when k = j (the contribution of g(1) from the last subinterval),

    α_{jj} = (j/(1−α)) ( 1 − ((j−1)/j)^{1−α} ) − ((j−1)/(−α)) ( 1 − ((j−1)/j)^{−α} )
           = j/(1−α) + (j−1)/α − (j−1)^{1−α}/((1−α) j^{−α}) − (j−1)^{1−α}/(α j^{−α})
           = ( (α − 1) j^{−α} − (j − 1)^{1−α} + j^{1−α} ) / ( α(1 − α) j^{−α} ).
27

For k = 1, 2, 3, 4, . . . , j − 1, we have
 −α  −α  −α
k+1 k+1 j k+1 k+1 k
αkj = − −
−α j 1−α j −α j
 1−α  1−α  −α  1−α
j k j k k−1 k j k−1
+ + − −
1−α j 1−α j −α j 1−α j
 −α
k−1 k−1
+
−α j
   
1 1 1−α 1 1
= − (k + 1) + − + (k − 1)1−α
−αj −α (1 − α)j −α (1 − α)j −α (−α)j −α
"  −α  −α #  
k+1 k k−1 k 1 1
+ − − + + k 1−α
−α j −α j (1 − α)j −α (1 − α)j −α
α−1−α −α + (α − 1)
= −α
(k + 1)1−α + (k − 1)1−α
α(1 − α)j α(1 − α)j −α
 −α
2k k 2
+ + −α
k 1−α
α j (1 − α)j
1−α
(k + 1) (k − 1)1−α 2k 1−α
= − +
α(1 − α)j −α α(1 − α)j −α α(1 − α)j −α
1 h
1−α 1−α 1−α
i
= 2k − (k − 1) − (k + 1) .
α(1 − α)j −α

Thus, we get
j
t−α 1 t−α
I hX i
R α j −(α+1) j
0 Dt y(tj ) = g(ω)ω dω = αkj y(tj − tk ) + Rj (g)
Γ(−α) 0 Γ(−α) k=0
j
α
X t−α
j
= h wkj y(tj − tk ) + Rj (g),
k=0
Γ(−α)
where



 1, k=0

−α
α(α−1)j αkj = Γ(2−α)ωkj = −2k 1−α + (k − 1)1−α + (k + 1)1−α , k = 1, 2, ..., j − 1


 −(α − 1)k −α + (k − 1)1−α − k 1−α ,

k=j

Together these estimates complete the proof of Lemma 3.2.2.

Thus the solution of (3.2.3) has the form


j
1 h
α
X
y(tj ) = t Γ(−α)f (tj ) − αkj y(tj−k ) (3.2.7)
α0j − tαj Γ(−α)β j k=1
j i
X
+ y0 αkj − Rj (g) ,
k=0
28

where

|Rj (g)| ≤ Cj α−2 t2j ||y 00 ||∞ .

Let yj ≈ y(tj ) denote the approximate solution of y(tj ), j = 1, 2, 3, . . . , n, then based on


(3.2.7) we can define the following numerical method for solving (3.2.3) as
j j
" #
1 X X
yj = tα Γ(−α)f (tj ) − αkj yj−k + y0 αkj . (3.2.8)
α0j − tαj Γ(−α)β j k=1 k=0

We remark that Lemma 3.2.2 for 0 < α < 1 can be extended to the case for 1 < α < 2
to yield the following weights,



 −1, k=0


α, k = 1, j = 0





α(1 − α)j −α αkj = 2 − 21−α , k = 1, j > 1


 2k 1−α − (k − 1)1−α − (k + 1)1−α ,

 k = 1, 2, . . . , j − 1, j ≥ 3



 (α − 1)k −α − (k − 1)1−α + k 1−α ,

k = j, j ≥ 2.

These weights are obtained by following the same process from Lemmas 3.2.1 and 3.2.2.
The only difference lies from the Hadamard finite-part integral.

3.2.1 Error analysis

Theorem 3.2.3. [26] Let 0 < α < 1. Assume y(tj ) and yj are the exact and approximate
solutions of (3.2.7) and (3.2.8), respectively. Also, assume that the function involved is
sufficiently smooth, then there exists a constant C = C(α, g, β), such that

|y(tj ) − yj | ≤ Ch2−α ||y 00 ||∞ . j = 1, 2, . . . , n.

To prove the Theorem 3.2.3, we need the following Lemma.

Lemma 3.2.4. Let 0 < α < 1 be the order of derivative and the sequence (dj ) satisfy

 d =1
1
 d = 1 + α(1 − α)j −α Pj α d .
j k=1 kj j−k

Then, we have

1 ≤ dj ≤ C α j α , j = 1, 2, . . . , n.
1
where the positive constant Cα = [(−α)(−α+1)Γ(−α)Γ(α+1)]
29

Proof. of Theorem 3.2.3:


Assume

ej = y(tj ) − yj .

then we have the error equation, subtracting (3.2.7) from (3.2.8),


j
" #
1 X
ej = − αkj ej−k − Rj .
α0j − tαj Γ(−α)β k=1

Note that
1
α0j = < 0, Γ(−α) < 0, β < 0, αkj > 0.
−α(1 − α)j −α
then we have
j
!
1 X
|ej | ≤ αkj |ej−k | + |Rj |
−α0j k=1
j
!
X
≤ α(1 − α)j −α αkj |ej−k | + j α−2 t2j ||y 00 ||∞
k=1
j
X
2 00 −α
≤ α(1 − α)h ||y ||∞ + α(1 − α)j αkj |ej−k | .
k=1
2 00
By denoting a = α(1 − α)h ||y ||∞ and assume for simplicity that e0 = 0 then we get
j
X
−α
|ej | ≤ a + α(1 − α)j αkj |ej−k | , j = 1, 2, . . . , n,
k=1

which implies that

|ej | ≤ adj , j = 1, 2, . . . , n,

where

 d =1
1
 d = 1 + α(1 − α)j −α Pj αkj dj−k
j k=1

Hence, the proof of Theorem 3.2.3 is complete.


30

Next we will show that yn − y(tn ) has an asymptotic expansion. We have the following
Theorem.

Theorem 3.2.5. [33] Let tn = 1 be fixed. Let yn and y(tn ) be the solutions of (3.2.8)
and (3.2.7), respectively. Then there exist coefficients Cµ (α) and Cµ∗ (α) such that the
sequence {yn }possesses an asymptotic expansion of the form.

M1
X M2
X
y(tn ) = yn + Cµ (α)n α−µ
+ Cµ∗ (α)n−2µ + O(n−M3 ) f or n → ∞,
µ=2 µ=1

where M1 and M2 depend on the smoothness of y, and M3 = min{M1 − α, 2M2 }.

To prove Theorem 3.2.5, we need the following

Lemma 3.2.6. [Theorem 1.3 in [33]]


Let 0 < α < 1 and let g ∈ C m+2 [0, 1], m≥2
Then
I 1 I 1
−1−α
Rj (g) = t g(t)dt − t−1−α g1 (t)dt
0 0
j−1 k+1 µ∗
XZ m+1
j X X
−1−α
= t [g(t) − g1 (t)]dt = dµ j α−µ
+ d∗µ j −2µ + O(j α−m−1 ),
k
k=0 j µ=2 µ=1

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), dµ and d∗µ are certain
coefficients that depend on g. Here g1 (t) is the linear interpolation polynomial of g(t) on
[0, 1].

For example, assume that g ∈ C m+2 [0, 1], m = 4. Then we have

Rj (g) = d2 j α−2 + d∗1 j −2 + d3 j α−3 + d4 j α−4 + d5 j α−5 + d∗2 j −4 + O(j α−5 ),

Here µ∗ = 2 , and 2 × 2 < 5 − α < 2(2 + 1).

Proof of Theorem 3.2.5. To understand the idea of the proof. We assume, e.g, that
y ∈ C m+2 [0, 1], m = 4. Then we shall prove that, there exist C̃2 , C̃1∗ , C̃3 , C̃4 , C̃2∗ , such
that

y(tn ) − yn = C̃2 nα−2 + C1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 (3.2.9)

+C̃2∗ n−4 + O(nα−5 ), n → ∞,


31

tj j
To prove (3.2.9), we will consider yj − y(tj ), j → ∞, for fixed tj such that tn
= n
=
c0 , c0 is a constant, that is n depends on j. Here tn = 1, tj = c0 .

For example, choose c0 = 21 , then when j = 1 we have n = 2 and when j = 2, we have


n = 4,. . . . We will prove that

εj = y(tj ) − yj = C̃2 nα−2 + C1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 (3.2.10)

+C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 ), j → ∞,

Then let j = n. We get (3.2.9)

By Lemma 3.2.6, we see that, for g ∈ C m+2 [0, 1], m ≥ 2, (m = 4), we have

I 1 I 1 j−1 I k+1
X j
−1−α −1−α
Rj (g) = t g(t)dt − t g1 (t)dt = t−1−α [g(t) − g1 (t)]dt
k
0 0 k=0 j

= d2 j α−2 + d∗1 j −2 + d3 j α−3 + d4 j α−4 + d5 j α−5 + d∗2 j −4 + O(j α−5 ). (3.2.11)

Note that j = c0 n, we can write (3.2.11) into

Rj (g) = d˜2 nα−2 + d˜∗1 n−2 + d˜3 nα−3 + d˜4 nα−4 + d˜5 nα−5 + d˜∗2 n−4 + O(nα−5 ). (3.2.12)

Next we will prove that

εj = y(tj ) − yj = C̃2 nα−2 + C̃1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 (3.2.13)

+C̃2∗ n−4 + O(nα−5 ), j → ∞,

where
1
C̃` = d˜` , ` = 2, 3, 4, 5.
−cα0 Γ(−α)β− α1
1
C̃`∗ = d˜∗ , ` = 1, 2.
−c0 Γ(−α)β − α1 `
α

Suppose that (3.2.13) holds, then (3.2.9) follows by replacing j by n since j =


1, 2, . . . , n.
32

We shall use mathematical induction to prove (3.2.13).

Step 1: When j = 0, we have ε0 = 0, therefore (3.2.13) is true.


Step 2: When j = 1, we have, by (3.2.7) and (3.2.8),
1
1  hX i 
ε1 = y(t1 ) − y1 = (nc0 )α
ε0 αk1 − α01 + R1 [g] .
cα0 Γ(−α)β − α(α−1) k=0
P1
That is, noting that k=0 αk1 = − α1 , by (3.2.10),
h 1 i
(nc0 )α − cα0 Γ(−α)β ε1
α(α − 1)
1h i
= C̃2 nα−2 + C̃1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 )
α
1 h i
+ (nc0 )α C̃2 nα−2 + C̃1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 )
α(α − 1)
h i
+ d˜2 nα−2 + d˜∗ n−2 + d˜3 nα−3 + d˜4 nα−4 + d˜5 nα−5 + d˜∗ n−4 + O(nα−5 ) .
1 2

This shows that ε1 possesses an asymptotic expansion w.r.t powers of n, and we can
check indeed by comparing the coefficients of powers of n,

ε1 = C̃2 nα−2 + C̃1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 ).

Step 3: Assume that

ε` = C̃2 nα−2 + C̃1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 ), ` = 0, 1, 2, . . . , j − 1.

Then we have, same as in Step 2,


j
h 1 α α
i X 
(nc0 ) − c0 Γ(−α)β εj = αkj εj−k + Rj [g]
α(α − 1) k=1
h 
∗ −2
+ C̃2 n α−2
+ C̃1 n + C̃3 nα−3
+ C̃4 n α−4
+ C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 )
j
X  i
αkj − α0j + Rj [g] .
k=0
Pj
Note that k=0 αkj = − α1 , we have, by (3.2.12)
h 1 i
(nc0 )α − cα0 Γ(−α)β εj
α(α − 1)
1h α−2 ∗ −2 α−3 α−4 α−5 ∗ −4 α−5
i
= C̃2 n + C̃1 n + C̃3 n + C̃4 n + C̃5 n + C̃2 n + O(n )
α
1 α
h
α−2 ∗ −2 α−3 α−4 α−5 ∗ −4 α−5
i
+ (nc0 ) C̃2 n + C̃1 n + C̃3 n + C̃4 n + C̃5 n + C̃2 n + O(n )
α(α − 1)
h i
+ d˜2 nα−2 + d˜∗ n−2 + d˜3 nα−3 + d˜4 nα−4 + d˜5 nα−5 + d˜∗ n−4 + O(nα−5 ) .
1 2
33

This shows that εj possesses an asymptotic expansion w.r.t powers of n, and we can check
indeed, comparing with the coefficients of powers of n,

εj = C̃2 nα−2 + C̃1∗ n−2 + C̃3 nα−3 + C̃4 nα−4 + C̃5 nα−5 + C̃2∗ n−4 + O(nα−5 ), j → ∞.

Thus (3.2.13) holds.

Together these estimates complete the proof of Theorem 3.2.5.

3.3 Extending Diethelm’s method


In this section we will consider a higher order numerical method for solving (3.1.3)-(3.1.4).
It is well-known that (3.1.3)-(3.1.4) is equivalent, with 0 < α < 1, to the following
problem:

R α
0 Dt [y(t) − y0 ] = βy(t) + f (t), 0 ≤ t ≤ 1, (3.3.1)

where R α
0 Dt y(t) denotes the Riemann-Liouville fractional derivative defined by,

with 0 < α < 1,


Z t
1 d
R α
0 Dt y(t) = (t − τ )−α y(τ ) dτ. (3.3.2)
Γ(1 − α) dt 0

The Riemann-Liouville fractional derivative R α


0 Dt y(t) can be written as [26]
I t
1
R α
0 Dt y(t) = (t − τ )−1−α y(τ ) dτ, (3.3.3)
Γ(−α) 0
H
where the integral denotes the Hadamard finite-part integral.
In [26], Diethelm approximated the Hadamard finite-part integral in (3.3.3) by piece-
wise linear interpolation polynomials and defined a numerical method for solving (3.3.1).
In this section, we will approximate the Hadamard finite-part integral by using piecewise
quadratic interpolation polynomials.
Let M be a fixed positive integer and let 0 = t0 < t1 < t2 < · · · < t2j < t2j+1 < · · · <
2j
t2M = 1 be a partition of [0, 1] and h the stepsize. At node t2j = 2M
, the equation (3.3.1)
satisfies

R α
0 Dt [y(t2j ) − y0 ] = βy(t2j ) + f (t2j ), j = 1, 2, . . . , M, (3.3.4)
34

2j+1
and at node t2j+1 = 2M
, the equation (3.3.1) satisfies

R α
0 Dt [y(t2j+1 ) − y0 ] = βy(t2j+1 ) + f (t2j+1 ), j = 0, 1, 2, . . . , M − 1. (3.3.5)

Let us first consider the discretization of (3.3.4). Note that

t−α
I t2j I 1
1 −1−α 2j
R α
0 Dt y(t2j ) = (t2j −τ ) y(τ ) dτ = w−1−α y(t2j −t2j w) dw. (3.3.6)
Γ(−α) 0 Γ(−α) 0
For every j, we replace g(w) = y(t2j − t2j w) in the integral in (3.3.6) by a piecewise
quadratic interpolation polynomial with equispaced nodes 0, 2j1 , 2j2 , . . . , 2j
2j
. We then have
I 1 I 1
−1−α
w g(w) dw = w−1−α g2 (w) dw + R2j (g), (3.3.7)
0 0

where g2 (w), defined by (3.3.9), is the piecewise quadratic interpolation polynomial of


g(w) with equispaced nodes 0, 2j1 , 2j2 , . . . , 2j
2j
and R2j (g) is the remainder term.

Lemma 3.3.1. Let 0 < α < 1. We have


I 1 2j k
X
−1−α
w g2 (w) dw = αk,2j g , (3.3.8)
0 k=0
2j

where

2−α (α + 2), for l = 0,







(−α)22−α ,



 for l = 1,



(−α)(−2−α α) + 21 F0 (2),




 for l = 2,



 −F1 (k),

for l = 2k − 1,
(−α)(−α + 1)(−α + 2)(2j)−α αl,2j =
k = 2, 3, . . . , j,







 1


 2
(F2 (k) + F0 (k + 1)), for l = 2k,







 k = 2, 3, . . . , j − 1,


 1
F (j), for l = 2j,

2 2

 
F0 (k) =(2k − 1)(2k) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)
  
− (2k − 1) + 2k (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)
 
−α+2 −α+2
+ (2k) − (2k − 2) (−α)(−α + 1),
35

 
−α −α
F1 (k) =(2k − 2)(2k) (2k) − (2k − 2) (−α + 1)(−α + 2)
  
− (2k − 2) + 2k (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)
 
+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1),

and
 
F2 (k) =(2k − 2)(2k − 1) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)
  
−α+1 −α+1
− (2k − 2) + (2k − 1) (2k) − (2k − 2) (−α)(−α + 2)
 
+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1).

1 2 2j
Proof. For fixed 2j, let 0 < 2j
< 2j
< ··· < 2j
= 1 be a partition of [0, 1]. Denote
l
wl = 2j
, l = 0, 1, 2, . . . , 2j. We then have, for k = 1, 2, . . . , j,

(w − w2k−1 )(w − w2k )


g2 (w) = g(w2k−2 )
(w2k−2 − w2k−1 )(w2k−2 − w2k )
(w − w2k−2 )(w − w2k )
+ g(w2k−1 )
(w2k−1 − w2k−2 )(w2k−1 − w2k )
(w − w2k−2 )(w − w2k−1 ) i
+ g(w2k ), for w ∈ [w2k−2 , w2k . (3.3.9)
(w2k − w2k−2 )(w2k − w2k−1 )

Let us now consider


I 1 hI w2 Z w4 Z w2j i
−1−α
w g2 (w) dw = + +··· + w−1−α g2 (w) dw.
0 0 w2 w2j−2

By the definition of the Hadamard finite-part integral [24], we obtain


I w2 Z w2 hZ w
−1−α g2 (0)(w2 )−α −1−α
i
w g2 (w) dw = + w g20 (y) dy dw
0 −α 0 0
Z w2
2−α
= g2 (0) + w−1−α (g2 (w) − g2 (0)) dw.
(−α)(2j)−α 0

(3.3.10)
36

By using (3.3.9), we have


I w2
g2 (w)w−1−α dw
0
Z w2
2−α −1−α (2j)
h 2
2

= g(0) + w w − (w1 + w2 )w g(0)
(−α)(2j)−α 0 2
(2j)2  2  (2j)2  2  i
+ w − (0 + w2 )w g(w1 ) + w − (0 + w1 )w g(w2 ) dw
−1 2
−α
2 (α + 2) 22−α
= g(0) + g(w1 )
(−α)(−α + 1)(−α + 2)(2j)−α (−α + 1)(−α + 2)(2j)−α
−2−α α
+ g(w2 ).
(−α + 1)(−α + 2)(2j)−α
Similarly, we have, after a simple calculation,
Z w2k+2
−α
(−α)(−α + 1)(−α + 2)(2j) g2 (w)w−1−α dw
w2k
1 1
= F0 (k)g(w2k−2 ) + (−1)F1 (k)g(w2k−1 ) + F2 (k)g(w2k ),
2 2
where Fi (k), i = 0, 1, 2 and k = 2, 3, ..., j are defined as above.
Together these estimates lead to (3.3.8) and the proof of Lemma 3.3.1 is complete.

Next we consider the discretization of (3.3.5). At the node


2j+1
t2j+1 = j = 1, 2, . . . , M − 1. We have
2M
,
I t2j+1
1
R α
0 Dt y(t2j+1 ) = (t2j+1 − τ )−1−α y(τ ) dτ
Γ(−α) 0
I 2j
t−α
Z t1
1 −1−α 2j+1 2j+1
= (t2j+1 − τ ) y(τ ) dτ + w−1−α y(t2j+1 − t2j+1 w) dw.
Γ(−α) 0 Γ(−α) 0
(3.3.11)

For j = 1, 2, . . . , M −1, we replace g(w) = y(t2j+1 −t2j+1 w) in the integral in (3.3.11) by


1 2 2j
a piecewise quadratic interpolation polynomial with equispaced nodes 0, 2j+1 , 2j+1 , . . . , 2j+1 .
We then have, for a sufficient smooth function g(w),
I 2j I 2j
2j+1 2j+1
−1−α
w g(w) dw = w−1−α g2 (w) dw + R2j+1 (g), (3.3.12)
0 0

where g2 (w) is the piecewise quadratic interpolation polynomial of g(w) with the nodes
1 2 2j
0, 2j+1 , 2j+1 , . . . , 2j+1 and R2j+1 (g) is the remainder term.
Similarly, we can prove the following lemma.
37

Lemma 3.3.2. Let 0 < α < 1. We have


I 2j 2j
2j+1 X  k 
w−1−α g2 (w) dw = αk,2j+1 g , (3.3.13)
0 k=0
2j + 1
where αk,2j+1 = αk,2j , k = 0, 1, 2, . . . , 2j and αk,2j are given in Lemma 3.3.1.

Remark 4. By direct calculation, we can show that, with 0 < α < 1,

2−α (α + 2)
α0,2j = < 0, (3.3.14)
(−α)(−α + 1)(−α + 2)(2j)−α
and αk,2j > 0 for k > 0, k 6= 2. For k = 2, there exists α1 ∈ (0, 1) such that α2,2j ≥ 0 for
0 < α < α1 and α2,2j ≤ 0 for α1 < α < 1.

Now solutions of (3.3.1) satisfy, with j = 1, 2, . . . , M ,


2j 2j
1 h X X i
y(t2j ) = tα2j Γ(−α)f (t2j ) − αk,2j y(t2j−k ) + y0 αk,2j − R2j (g) ,
α0,2j − tα2j Γ(−α)β k=1 k=0
(3.3.15)

and, with j = 1, 2, . . . , M − 1,
2j
1 h X
y(t2j+1 ) = tα2j+1 Γ(−α)f (t2j+1 ) − αk,2j+1 y(t2j+1−k )
α0,2j+1 − tα2j+1 Γ(−α)β k=1
2j Z t1 i
X
+ y0 αk,2j+1 − R2j+1 (g) − tα2j+1 (t2j+1 − τ )−1−α y(τ ) dτ . (3.3.16)
k=0 0

Here α0,l −tαl Γ(−α)β < 0, l = 2j, 2j +1, which follow from (3.3.14) and Γ(−α) < 0, β < 0
and α0,2j+1 = α0,2j .
Let y2j ≈ y(t2j ) and y2j+1 ≈ y(t2j+1 ) denote the approximations of the exact solutions
y(t2j ) and y(t2j+1 ), respectively. Assume that the starting values y0 and y1 are given. We
define the following numerical methods for solving (3.3.1), with j = 1, 2, . . . , M,
2j 2j
1 h X X i
y2j = tα2j Γ(−α)f (t2j ) − αk,2j y2j−k + y0 αk,2j , (3.3.17)
α0,2j − tα2j Γ(−α)β k=1 k=0

and, with j = 1, 2, . . . , M − 1,
2j
1 h
α
X
y2j+1 = t2j+1 Γ(−α)f (t2j+1 ) − αk,2j+1 y2j+1−k
α0,2j+1 − tα2j+1 Γ(−α)β k=1
2j Z t1 i
X
+ y0 αk,2j+1 − tα2j+1 (t2j+1 − τ )−1−α y(τ ) dτ . (3.3.18)
k=0 0
38

R t1
Remark 5. In practice, we need to approximate 0
(t2j+1 − τ )−1−α y(τ ) dτ . One way
is to divide the integral [0, t1 ] into small intervals 0 ≤ t11 ≤ t21 ≤ · · · ≤ tN
1 = t1 with

stepsize h̃  h. We first obtain y1p ≈ y(tp1 ), p = 1, 2, . . . , N by using some numerical


method for solving fractional differential equation. Then we apply a quadrature formula
to approximate the integral.

3.3.1 Error analysis

We have the following asymptotic expansion theorem.

Theorem 3.3.3. Let 0 < α < 1 and M be a positive integer. Let 0 = t0 < t1 <
t2 < · · · < t2j < t2j+1 < · · · < t2M = 1 be a partition of [0, 1] and h the stepsize.
Let y(t2j ), y(t2j+1 ), y2j and y2j+1 be the exact solutions and the approximate solutions of
(3.3.15) - (3.3.18), respectively. Assume that the function y ∈ C m+2 [0, 1], m ≥ 3. Further
assume that we obtain the exact starting values y0 = y(0) and y1 = y(t1 ). Then there exist
coefficients cµ = cµ (α) and c∗µ = c∗µ (α) such that the sequence {yl }, l = 0, 1, 2, . . . , 2M
possesses an asymptotic expansion of the form
µ ∗
m+1
X X
y(t2M ) − y2M = cµ (2M ) α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 ), for M → ∞,
µ=3 µ=2

that is,
µ ∗
m+1
X X
y(t2M ) − y2M = cµ h µ−α
+ c∗µ h2µ + o(hm+1−α ), for h → 0,
µ=3 µ=2

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.

To prove Theorem 3.3.3, we need the following lemma for the asymptotic expansions
for the remainder terms R2j (g) and R2j+1 (g) in (3.3.7) and (3.3.12).

Lemma 3.3.4. Let 0 < α < 1 and g ∈ C m+2 [0, 1], m ≥ 3. Let R2j (g) and R2j+1 (g)
be the remainder terms in (3.3.7) and (3.3.12), respectively. Then we have, with l =
2, 3, . . . , 2j, 2j + 1, . . . , 2M,
µ∗
m+1
X X
Rl (g) = dµ lα−µ + d∗µ l−2µ + o(lα−m−1 ), (3.3.19)
µ=3 µ=2
39

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and dµ and d∗µ are certain
coefficients that depend on g.

Proof. We follow the proof of Theorem 1.3 in [33] where the piecewise linear Lagrange
interpolation polynomials are used.
We first consider the case l = 2j for j = 1, 2, . . . , M . Let 0 = w0 < w1 < w2 < · · · <
k 1
w2j = 1, wk = 2j
,k = 0, 1, 2, . . . , 2j be a partition of [0, 1]. Let h1 = 2j
be the stepsize.
Let g2 (w) denote the piecewise quadratic Lagrange interpolation polynomial defined by
(3.3.9) on [w2l , w2l+2 ], l = 0, 1, 2, . . . , j − 1. Then we have
I 1 I 1
−1−α
R2j (g) = w g(w) dw − w−1−α g2 (w) dw
0 0
j−1 Z w2l+2   j−1 Z 1 h
X X
−1−α
= w g(w) − g2 (w) dw = (w2l + 2h1 s)−1−α g(w2l + 2h1 s)
l=0 w2l l=0 0
1 1 i
− (2s − 1)(2s − 2)g(w2l ) − (2s)(2s − 2)g(w2l+1 ) + (2s)(2s − 1)g(w2l+2 ) (2h1 ) ds.
2 2
By using the Taylor formula, we have
g 0 (w2l + 2h1 s) g 00 (w2l + 2h1 s)
g(w2l ) = g(w2l + 2h1 s) + (−2h1 s) + (−2h1 s)2
1! 2!
g 000 (w2l + 2h1 s) g (M )
(w 2l + 2h 1 s) (1)
+ (−2h1 s)3 + · · · + (−2h1 s)m + Rm+1 ,
3! m!
g 0 (w2l + 2h1 s) g 00 (w2l + 2h1 s)
g(w2l+1 ) = g(w2l + 2h1 s) + (h1 − 2h1 s) + (h1 − 2h1 s)2
1! 2!
g 000 (w2l + 2h1 s) g (m)
(w 2l + 2h 1 s) (2)
+ (h1 − 2h1 s)3 + · · · + (h1 − 2h1 s)m + Rm+1 ,
3! m!
g 0 (w2l + 2h1 s) g 00 (w2l + 2h1 s)
g(w2l+2 ) = g(w2l + 2h1 s) + (2h1 − 2h1 s) + (2h1 − 2h1 s)2
1! 2!
g 000 (w2l + 2h1 s) g (m)
(w 2l + 2h 1 s) (3)
+ (2h1 − 2h1 s)3 + · · · + (2h1 − 2h1 s)m + Rm+1 ,
3! m!
(3.3.20)
(i)
where Rm+1 , i = 1, 2, 3 denote the remainder terms. Thus we obtain
j−1 Z 1 h m−3 i
X X
−1−α
R2j (g) =(2h1 ) (w2l + 2h1 s) hr+3
1 g (r+3)
(w 2l + 2h1 s)π r (s) ds
l=0 0 r=0
j−1 1
XZ
+ (2h1 ) (w2l + 2h1 s)−1−α m+1 (s) ds = I + II,
l=0 0

(i)
where m+1 (s) depends on the remainder terms Rm+1 , i = 1, 2, 3 and πr (s) are some
functions of s.
40

For I, we have
m−3 Z 1 h j−1 i
X X
I= hr+3
1 2h1 (w2l + 2h1 s)−1−α g (r+3) (w2l + 2h1 s) πr (s) ds.
r=0 0 l=0

Applying Theorem 3.2 in [56], we have, with w̄l = w2l , h̄1 = 2h1 ,
j−1
X
2h1 (w2l + 2h1 s)−1−α g (r+3) (w2l + 2h1 s)
l=0
j−1
X
= h̄1 (w̄l + h̄1 s)−1−α g (r+3) (w̄l + h̄1 s)
l=0
m−r−3
X m−r−2
X
= aj (s)hj1 + a0,j (s)hj−α
1 + o(hm−r−2
1 ),
j=0 j=0

with some suitable functions aj (s), j = 0, 1, . . . , m − r − 3 and


a0,j (s), j = 0, 1, . . . , m − r − 2, with r = 0, 1, 2, . . . , m − 3, m ≥ 3.
Hence we have, noting that h1 = (2j)−1 ,
m−3
X hZ 1 m−r−3
X i
I= h3+r
1 aj (s)hj1 πr (s) ds
r=0 0 j=0
m−3
X hZ 1 m−r−2
X i
+ h3+r
1 a0,j (s)hj−α
1 πr (s) ds) + o(hm+1
1 )
r=0 0 j=0
m−3
X m−r−3
X hZ 1 i
= aj (s)πr (s) ds h3+r+j
1
r=0 j=0 0

m−3
X m−r−2
X hZ 1 i
+ a0,j (s)πr (s) ds h3+r+j−α
1 + o(hm+1
1 )
r=0 j=0 0

µ ∗
m+1
X X
= dµ (2j) α−µ
+ d∗µ (2j)−2µ + o((2j)−m−1 ), (3.3.21)
µ=3 µ=2

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and dµ and d∗µ are certain
coefficients that depend on g. We remark that the expansion does not contain any odd
integer of powers of (2j) which follows from the argument in the proof of Theorem 1.3 in
[33].
For II, we have, following the argument of the proof for Theorem 1.3 in [33],
j−1 Z 1
X
II = 2h1 (w2l + 2h1 s)−1−α m+1 (s) ds = o((2j)α−m−1 ).
l=0 0
41

Thus (3.3.19) holds for l = 2j.


2l 2l+2 1
Next we consider the case l = 2j + 1. Denote w2l = 2j+1
, w2l+2 = 2j+1
and h1 = 2j+1
,
we have

I 2j I 2j
2j+1 2j+1
−1−α
R2j+1 (g) = w g(w) dw − w−1−α g2 (w) dw
0 0
j−1 w2l+2 j−1 Z 1
XZ   X h
−1−α −1−α
= w g(w) − g2 (w) dw = (w2l + 2h1 s) g(w2l + 2h1 s)
l=0 w2l l=0 0
1 1 i
− (2s − 1)(2s − 2)g(w2l ) − (2s)(2s − 2)g(w2l+1 ) + (2s)(2s − 1)g(w2l+2 ) (2h1 ) ds.
2 2

Following the same argument as for the case l = 2j, we show that (3.3.19) also holds for
l = 2j + 1. Together these estimates complete the proof of Lemma 3.3.4.

Proof of Theorem 3.3.3. We follow the proof of Theorem 2.1 in [33] where the piecewise
linear Lagrange interpolation polynomials are used to approximate the Hadamard finite-
part integral.
Let us fix tl = c to be a constant for l = 1, 2, . . . , 2M . Let t2M = 1 be fixed. We will
investigate the difference

l
el = y(tl ) − yl , for l → ∞, with tl = lh = = c,
2M

where h = 1/(2M ) is the stepsize. In other words, there is a constant c, independent of


M , such that

l = c · (2M ), or M = l/(2c),

and consequently, we see that if el possesses an asymptotic expansion with respect to l,


then e2M possesses at the same time one with respect to M , and vice versa.
We shall prove
µ∗
m+1
X X
el = y(tl )−yl = cµ (2M ) α−µ
+ c∗µ (2M )−2µ +o((2M )α−m−1 ), for l → ∞, (3.3.22)
µ=3 µ=2

for some suitable constants cµ , c∗µ which we will determine later.


42

Let us first consider the case l = 2j. Subtracting (3.3.17) from (3.3.15), we have,
2j
noting t2j = (2j)h = 2M
= c,
2j
1 h X i
e2j = 2j α − αk,2j (y(t2j−k ) − y2j−k ) − R2j (g)
α0,2j − ( 2M ) Γ(−α)β k=1
2j
1 X 
= α e
k,2j 2j−k + R2j (g) . (3.3.23)
cα Γ(−α)β − α0,2j k=1

Note that g(·) = y(t2j − t2j ·) ∈ C m+2 [0, 1], m ≥ 3, we have, by Lemma 3.3.4,
µ ∗
m+1
X X
R2j (g) = dµ (2j) α−µ
+ d∗µ (2j)−2µ + o((2j)α−m−1 ), for j → ∞, (3.3.24)
µ=3 µ=2

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and dµ and d∗µ are certain
coefficients that depend on g.
Note that (2j)/(2M ) = c, we can write (3.3.24) into
µ ∗
m+1
X X
R2j (g) = d˜µ (2M )α−µ + d˜∗µ (2M )−2µ + o((2M )α−m−1 ), for j → ∞. (3.3.25)
µ=3 µ=2

Choose
1
cµ = d˜µ , µ = 3, 4, . . . , m + 1, (3.3.26)
−cα Γ(−α)β − 1/α
1
c∗µ = d˜∗µ , µ = 1, 2, . . . , µ∗ , (3.3.27)
−cα Γ(−α)β − 1/α
we will prove below that (3.3.22) holds for the coefficents cµ , c∗µ defined in (3.3.26) and
(3.3.27).
We shall use mathematical induction to prove (3.3.22). By assumption e0 = 0, e1 = 0,
hence (3.3.22) holds for l = 0, 1 with the coefficients given by (3.3.26) and (3.3.27). Let
2−α (α+2)(2M c)α
us now consider the case for l = 2. We have, noting that α0,l = (−α)(−α+1)(−α+2)
and
applying Lemma 3.3.4,

2
1 X 
e2 = y(t2 ) − y2 = α αk,2 e2−k + R2 (g)
c Γ(−α)β − α0,2 k=1
h m+1 µ∗
1 X X 
= α cµ (2M )α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 )
c Γ(−α)β − α0,2 µ=3 µ=2
2
X  i
· αk,2 − α0,2 + R2 (g) . (3.3.28)
k=0
43

P2 2−α (α+2)(2M c)α


Thus we get, noting that k=0 αk,2 = −1/α and α0,2 = (−α)(−α+1)(−α+2)
,
h 2−α (α + 2)(2M c)α α
i
− c Γ(−α)β e2
(−α)(−α + 1)(−α + 2)
m+1 µ∗
1hX X i
= cµ (2M ) α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 )
α µ=3 µ=2
µ ∗
m+1
X X
− d˜µ (2M )α−µ − d˜∗µ (2M )−2µ + o((2M )α−m−1 )
µ=3 µ=2
m+1 µ∗
2−α (α + 2)(2M c)α h X X i
+ cµ (2M )α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 ) .
(−α)(−α + 1)(−α + 2) µ=3 µ=2

(3.3.29)
This shows that the sequence e2 possesses an asymptotic expansion with respect to the
powers of 2M , and it is easy to check that, by comparing with the coefficients of powers
of (2M ), see [33],
µ ∗
m+1
X X
e2 = cµ (2M ) α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 ).
µ=3 µ=2

Assume that (3.3.22) holds for l = 0, 1, . . . , 2j − 1. Then we have, following the same
argument for (3.3.29), noting 2j
P
k=0 αk,2j = −1/α and applying Lemma 3.3.4,

h 2−α (α + 2)(2M c)α i


− cα Γ(−α)β e2j
(−α)(−α + 1)(−α + 2)
m+1 µ∗
1hX α−µ
X
∗ −2µ α−m−1
i
= cµ (2M ) + cµ (2M ) + o((2M ) )
α µ=3 µ=2
µ ∗
m+1
X X
− d˜µ (2M )α−µ − d˜∗µ (2M )−2µ + o((2M )α−m−1 )
µ=3 µ=2
m+1 µ∗
2−α (α + 2)(2M c)α h X X i
+ cµ (2M )α−µ + c∗µ (2M )−2µ + o((2M )α−m−1 ) .
(−α)(−α + 1)(−α + 2) µ=3 µ=2

(3.3.30)
This shows that the sequence e2j possesses an asymptotic expansion with respect to
the powers of 2M , and it is easy to check that, by comparing with the coefficients of
powers of (2M ), see [33],
µ ∗
m+1
X X
e2j = cµ (2M ) α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 ).
µ=3 µ=2
44

Hence (3.3.22) holds for l = 2j.


Finally we assume that (3.3.22) holds for l = 0, 1, . . . , 2j. Then we have, following the
same argument for (3.3.30), noting 2j
P P2j
k=0 αk,2j+1 = k=0 αk,2j = −1/α, α0,2j+1 = α0,2j

and applying Lemma 3.3.4,

h 2−α (α + 2)(2M c)α i


− cα Γ(−α)β e2j+1
(−α)(−α + 1)(−α + 2)
m+1 µ∗
1hX α−µ
X
∗ −2µ α−m−1
i
= cµ (2M ) + cµ (2M ) + o((2M ) )
α µ=3 µ=2
µ∗
m+1
X X
− d˜µ (2M )α−µ
− d˜∗µ (2M )−2µ + o((2M )α−m−1 )
µ=3 µ=2
m+1 µ∗
2−α (α + 2)(2M c)α h X X i
+ cµ (2M )α−µ
+ c∗µ (2M )−2µ + o((2M )α−m−1 ) .
(−α)(−α + 1)(−α + 2) µ=3 µ=2

(3.3.31)

This again shows that the sequence e2j+1 possesses an asymptotic expansion with respect
to the powers of 2M , and it is easy to check that, by comparing with the coefficients of
powers of 2M , see [33],
µ ∗
m+1
X X
e2j+1 = cµ (2M )α−µ + c∗µ (2M )−2µ + o((2M )α−m−1 ).
µ=3 µ=2

Hence (3.3.22) holds also for l = 2j + 1. Together these estimates complete the proof of
(3.3.22). Applying l = 2M in (3.3.22), we complete the proof of Theorem 3.3.3.

Remark 6. In Theorem 3.3.3, we assume that y1 = y(t1 ) exactly. In practice y1 can be


approximated by using the ideas described in Remark 5.

3.4 Numerical examples


Example 7. Consider [26]

C α
0 Dt y(t) = βy(t) + f (t), t ∈ [0, 1], (3.4.1)

y(0) = y0 , (3.4.2)
45

where y0 = 0, 0 < α < 1, β = −1 and f (t) = (t2 +2t2−α /Γ(3−α))+(t3 +3!t3−α /Γ(4−α)).
The exact solution is y(t) = t2 + t3 .
The main purpose is to check the order of convergence of the numerical method with
respect to the fractional order α. For various choices of α ∈ (0, 1), we computed the errors
at t = 1. We choose the stepsize h = 1/(5 × 2l ), l = 1, 2, . . . , 7, i.e, we divided the interval
[0, 1] into n = 1/h small intervals with nodes 0 = t0 < t1 < · · · < tn = 1. Then we
compute the error e(tn ) = y(tn ) − yn . By Theorem 3.3.3, we have

|e(tn )| = |y(tn ) − yn | ≤ Ch3−α , (3.4.3)

To observe the order of convergence we shall compute the error |e(tn )| at tn = 1 for
the different values of h. Denote |eh (tn )| the error at tn = 1 for the stepsize h. Let
hl = h = 1/(5 × 2l ) for a fixed l = 1, 2, . . . , 7. We then have
|ehl (tn )| Ch3−α
≈ l
= 23−α ,
|ehl+1 (tn )| Ch3−α
l+1
 
|ehl (tn )|
which implies that the order of convergence satisfies 3 − α ≈ log2 |ehl+1 (tn )|
. In Tables
3.4.1- 3.4.2, we compute the experimentally determined orders of convergence (EOC) for
the different values of α. The numerical results are consistent with the theoretical results.

n EOC( α = .1 ) EOC( α = .2) EOC( α = .3) EOC ( α = .4) EOC ( α = .5)


10
20 2.8885 2.7870 2.6836 2.5790 2.4732
40 2.8941 2.7935 2.6919 2.5897 2.4871
80 2.8972 2.7963 2.6961 2.5950 2.4937
160 2.8987 2.7985 2.6981 2.5976 2.4969
320 2.8994 2.7993 2.6991 2.5988 2.4985
640 2.9003 2.7998 2.6995 2.5994 2.4992

Table 3.4.1: Numerical results at t = 1 for β = −1


and f (t) = (t2 + 2t(2 − α)/Γ(3 − α)) + (t3 + 3!t3−α /Γ(4 − α))

In Figures 3.4.1 - 3.4.6, we plot the experimentally determined orders of convergence.


We have from (3.4.3)

log2 (|e(tn )|) ≤ log2 (C) + (3 − α)log2 (h).


46

n EOC( α = .6 ) EOC( α = .7) EOC( α = .8) EOC ( α = .9)


10
20 2.3662 2.2579 2.1476 2.0351
40 2.3840 2.2804 2.1760 2.0709
80 2.3923 2.2905 2.1885 2.0861
160 2.3962 2.2954 2.1944 2.0932
320 2.3981 2.2977 2.1972 2.0967
640 2.3991 2.2989 2.1986 2.0983

Table 3.4.2: Numerical results at t = 1 for β = −1


and f (t) = (t2 + 2t(2 − α)/Γ(3 − α)) + (t3 + 3!t3−α /Γ(4 − α))

−5

−10

−15
log2(|e(t)|)

−20

−25

−30

−35
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 3.4.1: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 7 with α = 0.1

Let y = log2 (|e(tn )|) and x = log2 (h). In Figure 3.1, we plot the function y = y(x) for the
different values of x = log2 (h) where h = 1/(5×2l ), l = 1, 2, . . . , 7. To observe the order of
convergence, we also plot the straight line y = (3−α)x, where α = 0.1, 0.2, 0.4, 0.5, 0.7, 0.8.
We see that these two lines are exactly parallel which means that the order of convergence
of the numerical method is O(h3−α ).
47

−5

−10

−15
log2(|e(t)|)

−20

−25

−30
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 3.4.2: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 7 with α = 0.2

−8

−10

−12

−14

−16
log2(|e(t)|)

−18

−20

−22

−24

−26

−28
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 3.4.3: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 7 with α = 0.4
48

−8

−10

−12

log2(|e(t)|) −14

−16

−18

−20

−22

−24

−26
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 3.4.4: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 7 with α = 0.5

−6

−8

−10

−12
log2(|e(t)|)

−14

−16

−18

−20

−22

−24
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 3.4.5: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 7 with α = 0.7
49

−6

−8

−10

−12
log2(|e(t)|)

−14

−16

−18

−20

−22
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 3.4.6: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 7 with α = 0.8
Chapter 4

Higher order numerical method for


fractional ODEs (predictor-corrector
method)

4.1 Introduction
A predictor-corrector approximation method [29] for fractional differential equations has
been developed by the three well-known mathematicians Kai Diethelm, Neville J. Ford
and Alan D. Freed. The popularity of this method is due to its suitability for use both
for linear and for nonlinear problems and the easy implementation of a computational
algorithm.
We consider numerical methods for solving the fractional differential equations

C α
0 Dt y(t) = f (t, y(t)), 0 < t < T, (4.1.1)

y (k) (0) = y0k , k = 0, 1, 2, . . . , dαe − 1, (4.1.2)

where the y0k may be arbitrary real numbers and α > 0. Here C α
0 Dt denotes the differential

operator in the sense of Caputo denoted by, with n − 1 < α < n


Z t
C α 1
0 Dt y(t) = (t − u)n−α−1 y (n) (u) du,
Γ(n − α) 0

where n = dαe is the smallest integer ≥ α.

50
51

The approach for solving the fractional differential equation (4.1.1)-(4.1.2) is based on
the discretization of the integral in the equivalent form of (4.1.1)-(4.1.2), see [28]. It is
well-known that (4.1.1)-(4.1.2) is equivalent to the Volterra integral equation
dαe−1 ν Z t
(ν) t 1
X
y(t) = y0 + (t − u)α−1 f (u, y(u)) du. (4.1.3)
ν=0
ν! Γ(α) 0

In [29], the authors approximated the integral in (4.1.3) by using a piecewise linear
interpolation polynomial and introduced a fractional Adams method for solving (4.1.1)-
(4.1.2) and proved that the order of convergence of the numerical method is O(h2 ) for
1 < α < 2 and O(h1+α ) for 0 < α < 1 if C α 2
0 Dt y(t) ∈ C [0, T ]

We will use piecewise quadratic interpolation polynomials to approximate the integral


in (4.1.3) and introduce a high order fractional Adams method for solving (4.1.3) and prove
that the order of convergence of the numerical method is min{3, 1 + 2α} for α ∈ (0, 2] if
C α
0 Dt y(t) ∈ C 3 [0, T ]. This method has higher convergence order than the method in [29].
It is easier to implement our numerical algorithm compared with the method in [95] where
the Jacobi-Gauss-Lobatto nodes must be calculated at each time level. Our method is
simpler than the method in [9] in the sense that we are using a predictor-corrector method
and therefore we do not need to solve the nonlinear system at each time level.

4.2 Fractional Adams-type algorithm (quadratic inter-


polation polynomial)
In this section we will consider a higher order numerical method for solving (4.1.1)-(4.1.2).
For simplicity we only consider the case where 0 < α ≤ 2 since the case α > 2 does not
seem to be of major practical interest [28].
To make sure that (4.1.1)-(4.1.2) has a unique solution, we assume that f (u, ·) satisfies
a Lipschitz condition, i.e., there exists a constant L such that

|f (u, x) − f (u, y)| ≤ L|x − y|, ∀ x, y ∈ R. (4.2.1)

Let m be a positive integer and let 0 = t0 < t1 < t2 < · · · < t2j < t2j+1 < · · · < t2m = T
be a partition of [0, T ] and h the stepsize. Note that the system (4.1.1)-(4.1.2) is equivalent
52

to (4.1.3). Let us now consider the discretization of (4.1.3). At node t = t2j , j =


1, 2, . . . , m, we have
Z t2j
(1) t2j 1
y(t2j ) = y0 + y0 + (t2j − u)α−1 f (u, y(u)) du. (4.2.2)
1! Γ(α) 0

(The second of the initial conditions only for 1 < α < 2 of course). At node t = t2j+1 , j =
1, 2, . . . , m − 1, we have
Z t2j+1
(1) t2j+1 1
y(t2j+1 ) = y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
Z t1
(1) t2j+1 1
= y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
Z t2j+1
1
+ (t2j+1 − u)α−1 f (u, y(u)) du
Γ(α) t1
Z t1
(1) t2j+1 1
= y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
Z t2j
1
+ (t2j − u)α−1 f (u + h, y(u + h)) du (4.2.3)
Γ(α) 0
Rt
We will replace f (u, y(u)) in the integral 0 2j (t2j − u)α−1 f (u, y(u)) du in (4.2.2) by the
following piecewise quadratic polynomial, for t2l ≤ u ≤ t2l+2 , l = 0, 1, 2, . . . , j − 1 with
j = 1, 2, . . . , m,

(u − t2l+1 )(u − t2l+2 )


f (u, y(u)) ≈ P2 (u) = f (t2l , y(t2l ))
(t2l − t2l+1 )(t2l − t2l+2 )
(u − t2l )(u − t2l+2 )
+ f (t2l+1 , y(t2l+1 ))
(t2l+1 − t2l )(t2l+1 − t2l+2 )
(u − t2l )(u − t2l+1 )
+ f (t2l+2 , y(t2l+2 )). (4.2.4)
(t2l+2 − t2l )(t2l+2 − t2l+1 )
Rt
Similarly, we will replace f (u+h, y(u+h)) in the integral 0 2j (t2j −u)α−1 f (u+h, y(u+h)) du
in (4.2.3) by the following piecewise quadratic polynomial, for t2l ≤ u ≤ t2l+2 , l =
0, 1, 2, . . . , j − 1, j = 1, 2, . . . , m − 1,

(u − t2l+1 )(u − t2l+2 )


f (u + h, y(u + h)) ≈ Q2 (u) = f (t2l+1 , y(t2l+1 ))
(t2l − t2l+1 )(t2l − t2l+2 )
(u − t2l )(u − t2l+2 )
+ f (t2l+2 , y(t2l+2 ))
(t2l+1 − t2l )(t2l+1 − t2l+2 )
(u − t2l )(u − t2l+1 )
+ f (t2l+3 , y(t2l+3 )). (4.2.5)
(t2l+2 − t2l )(t2l+2 − t2l+1 )

We then have the following lemma:


53

Lemma 4.2.1. Let 0 < α ≤ 2. We have


Z t2j 2j
X
α−1
(t2j − u) P2 (u) du = ck,2j f (tk , y(tk )), (4.2.6)
0 k=0

and
Z t2j 2j
X
α−1
(t2j − u) Q2 (u) du = ck,2j f (tk+1 , y(tk+1 )), (4.2.7)
0 k=0

where

1



 F (0),
2 0
if k = 0,


1

hα F (l) + 12 F2 (l − 1), if k = 2l, l = 1, 2, . . . , j − 1,

2 0

ck,2j =
α(α + 1)(α + 2) 


 −F1 (l), if k = 2l + 1, l = 0, 1, 2, . . . , j − 1,



 1 F (j − 1), if k = 2j,

2 2

and
 
α+2 α+2
F0 (l) = α(α + 1) (2j − 2l) − (2j − 2l − 2)
  
α+1 α+1
+ α(α + 2) 2(2j) − (2l + 1) − (2l + 2) (2j − 2l − 2) − (2j − 2l)
  
+ (α + 1)(α + 2) (2j − 2l − 1)(2j − 2l − 2) (2j − 2l)α − (2j − 2l − 2)α ,
 
F1 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
  
α+1 α+1
+ α(α + 2) 2(2j) − (2l) − (2l + 2) (2j − 2l − 2) − (2j − 2l)
  
+ (α + 1)(α + 2) (2j − 2l)(2j − 2l − 2) (2j − 2l)α − (2j − 2l − 2)α ,
 
F2 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
  
+ α(α + 2) 2(2j) − (2l) − (2l + 1) (2j − 2l − 2)α+1 − (2j − 2l)α+1
  
α α
+ (α + 1)(α + 2) (2j − 2l)(2j − 2l − 1) (2j − 2l) − (2j − 2l − 2) ,
54

Proof. We have

t2j j−1 Z t2l+2  (u − t


2l+1 )(u − t2l+2 )
Z X
α−1
(t2j − u) P2 (u)du = (t2j − u)α−1 g(t2l )
0 t2l
(t2l − t2l+1 )(t2l − t2l+2 )
l=0
(u − t2l )(u − t2l+2 ) (u − t2l )(u − t2l+1 ) 
+ g(t2l+1 ) + g(t2l+2 ) du
(t2l+1 − t2l )(t2l+1 − t2l+2 ) (t2l+2 − t2l )(t2l+2 − t2l+1 )
j−1  t2l+2
(u − t2l+1 )(u − t2l+2 )
X Z
= (t2j − u)α−1 g(t2l )du
l=0 t2l
(−h)(−2h)
Z t2l+2
(u − t2l )(u − t2l+2 )
+ (t2j − u)α−1 g(t2l+1 )du
t2l (h)(−h)
Z t2l+2
(u − t2l )(u − t2l+1 ) 
+ (t2j − u)α−1 g(t2l+2 )du .
t2l (2h)(h)

Note that,
Z t2l+2
(t2j − u)α−1 (u − t2l+1 )(u − t2l+2 )du
t2l
Z t2l+2
= (t2j − u)α−1 [(u − t2j ) + (t2j − t2l+1 )][(u − t2j ) + (t2j − t2l+2 )]du
t
Z 2lt2l+2 Z t2l+2
α+1
= (t2j − u) du − (t2j − u)α (t2j − t2l+1 − t2l+2 )du
t t2l
Z 2lt2l+2
+ (t2j − u)α−1 (t2j − t2l+1 )(t2j − t2l+2 )du
t2l
 (2j − 2l)α+2 − (2j − 2l − 2)α+2
= hα+2
α+2
[2.2j − (2l + 1) − (2l + 2)][(2j − 2l)α+1 − (2j − 2l − 2)α+1 ]
+
α+1
[2j − (2l + 1)][2j − (2l + 2)][(2j − 2l)α − (2j − 2l − 2)α ] 
+
α
α+2
h 
= α(α + 1)[(2j − 2l)α+2 − (2j − 2l − 2)α+2 ]
α(α + 1)(α + 2)
+ α(α + 2)[2.2j − (2l + 1) − (2l + 2)][(2j − 2l)α+1 − (2j − 2l − 2)α+1 ]

α α
+ (α + 1)(α + 2)(2j − 2l − 1)(2j − 2l − 2)[(2j − 2l) − (2j − 2l − 2)
hα+2
= F0 (l), l = 0, 1, 2, 3 . . . , j − 1.
α(α + 1)(α + 2)
55

Similarly, we have,
Z t2l+2
(t2j − u)α−1 (u − t2l )(u − t2l+2 )du
t2l
Z t2l+2
= (t2j − u)α−1 [(u − t2j ) + (t2j − t2l )][(u − t2j ) + (t2j − t2l+2 )]du
t
Z 2lt2l+2 Z t2l+2
α+1
= (t2j − u) du − (t2j − u)α (2t2j − t2l − t2l+2 )du
t t2l
Z 2lt2l+2
+ (t2j − u)α−1 (t2j − t2l )(t2j − t2l+2 )du
t2l
hα+2 
= α(α + 1)[(2j − 2l)α+2 − (2j − 2l − 2)α+2 ]
α(α + 1)(α + 2)
+ α(α + 2)[2.2j − 2l − (2l + 2)][(2j − 2l − 2)α+1 − (2j − 2l)α+1 ]

α α
+ (α + 1)(α + 2)(2j − 2l)(2j − 2l − 2)[(2j − 2l) − (2j − 2l − 2)
hα+2
= F1 (l), l = 0, 1, 2, 3 . . . , (j − 1).
α(α + 1)(α + 2)
and,
Z t2l+2
(t2j − u)α−1 (u − t2l )(u − t2l+1 )du
t2l
Z t2l+2
= (t2j − u)α−1 [(u − t2j ) + (t2j − t2l )][(u − t2j ) + (t2j − t2l+1 )]du
t
Z 2lt2l+2 Z t2l+2
α+1
= (t2j − u) du − (t2j − u)α (2t2j − t2l − t2l+1 )du
t t2l
Z 2lt2l+2
+ (t2j − u)α−1 (t2j − t2l )(t2j − t2l+1 )du
t2l
 (2j − 2l)α+2 − (2j − 2l − 1)α+2
= hα+2
α+2
[2.2j − 2l − (2l + 1)][(2j − 2l − 1)α+1 − (2j − 2)α+1 ]
+
α+1
(2j − 2l)(2j − 2l − 1)[(2j − 2l)α − (2j − 2l − 1)α ] 
+
α
α+2
h 
= α(α + 1)[(2j − 2l)α+2 − (2j − 2l − 1)α+2 ]
α(α + 1)(α + 2)
+ α(α + 2)[2.2j − 2l − (2l + 1)][(2j − 2l − 1)α+1 − (2j − 2l)α+1 ]

+ (α + 1)(α + 2)(2j − 2l)(2j − 2l − 1)[(2j − 2l)α − (2j − 2l − 1)α
hα+2
= F2 (l), l = 0, 1, 2, 3 . . . , j − 1.
α(α + 1)(α + 2)
Thus we get,
56

Z t2j
(t2j − u)α−1 P2 (u)du
0
j−1 
X hα F0 (l)
= g(t2l )
l=0
α(α + 1)(α + 2) 2
hα F1 (l) hα F2 (l) 
− g(t2l+1 ) + g(t2l+2 )
α(α + 1)(α + 2) 2 α(α + 1)(α + 2) 2
2j
hα X
= ck,2j f (tk , y(tk )),
α(α + 1)(α + 2) k=0

where, ck,2j is given in (4.2.7)

We now define a fractional Adams numerical method for solving (4.1.3). Let yl ≈ y(tl )
denote the approximation of y(tl ), l = 0, 1, 2, . . . , 2m. The corrector formula is defined
by
2j−1
(1) t2j 1 X 
P
y2j = y0 + y0 + ck,2j f (tk , yk ) + c2j,2j f (t2j , y2j ) , j = 1, 2, . . . , m, (4.2.8)
1! Γ(α) k=0

and
Z t1
(1) t2j+1 1
y2j+1 = y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2j−1
1 X P

+ ck,2j f (tk+1 , yk+1 ) + c2j,2j f (t2j+1 , y2j+1 ) , j = 1, 2, . . . , m − 1.
Γ(α) k=0
(4.2.9)

The remaining problem is the determination of the predictor formula required to calcu-
P P
late y2j and y2j+1 . The idea is the same as the one described above: we replace f (u, y(u))
and f (u + h, y(u + h)) of the integrals on the right-hand sides of equations (4.2.2) and
(4.2.3), respectively, by the piecewise linear interpolation polynomials and obtain
2j−1
(1) t2j 1 X 
P PP
y2j = y0 +y0 + ak,2j f (tk , yk )+a2j,2j f (t2j , y2j ) , j = 1, 2, . . . , m, (4.2.10)
1! Γ(α) k=0

and, with j = 1, 2, . . . , m − 1,
2j
(1) t2j+1 1 X 
P PP
y2j+1 = y0 + y0 + ak,2j+1 f (tk , yk ) + a2j+1,2j+1 f (t2j+1 , y2j+1 ) , (4.2.11)
1! Γ(α) k=0
57

where the weights are [28]



nα+1 − (n − α)(n + 1)α , if k = 0,







ak,n+1 = (n − k + 2)α+1 + (n − k)α+1 − 2(n − k + 1)α+1 , if 1 ≤ k ≤ n,
α(α + 1) 


 1, if k = n + 1.

PP PP
Similarly, to calculate y2j and y2j+1 , we replace f (u, y(u)) and f (u + h, y(u + h)) in
the integrals on the right-hand sides of equations (4.2.2) and (4.2.3), respectively, by the
piecewise constants and obtain
2j−1
PP (1) t2j 1 X
y2j = y0 + y0 + bk,2j f (tk , yk ), j = 1, 2, . . . , m, (4.2.12)
1! Γ(α) k=0

and
2j
PP (1) t2j+1 1 X
y2j+1 = y0 + y0 + bk,2j+1 f (tk , yk ), j = 1, 2, . . . , m − 1. (4.2.13)
1! Γ(α) k=0

where the weights are [28]

hα  
bk,n+1 = (n + 1 − k)α − (n − k)α . (4.2.14)
α

Our basic fractional Adams method, is completely described now by equations (4.2.8)
- (4.2.13).

Remark 8. In practice, we need to approximate the integral in (4.2.9). We shall use the
same ideas as in Remark 5.

We have thus completed the description of our numerical algorithm. Now we will
discuss the error analysis of the scheme.

4.3 Error analysis


We have the following theorem.
C α
Theorem 4.3.1. Let 0 < α ≤ 2 and assume that 0 Dt y ∈ C 3 [0, T ] for some suitable
chosen T . Let y(tk ) and yk , k = 0, 1, 2, . . . , 2m, t2m = T be the solutions of (4.2.2),
58

(4.2.3), (4.2.8), (4.2.9), respectively. Assume that y0 = y(0) and y1 = y(t1 ) exactly. Then
there exists a positive constant C0 > 0 such that

 C0 h1+2α , if 0 < α ≤ 1,

max |y(tk ) − yk | ≤
0≤k≤2m  C0 h3 , if 1 < α ≤ 2.

To prove this theorem, we need some lemmas.

Lemma 4.3.2 ( Theorem 2.4 [28]). Let 0 < α ≤ 2. If z ∈ C 1 [0, T ], then there is a
constant C1α depending only on α such that
Z t2j 2j−1
X
α−1
(t2j − u) z(u) du − bk,2j z(tk ) ≤ C1α tα2j h.
0 k=0

where bk,2j are the weights defined by,

hα  α α

bk,2j = (2j − k) − (2j − 1 − k) .
α

Lemma 4.3.3 ( Theorem 2.5 [28]). Let 0 < α ≤ 2. If z ∈ C 2 [0, T ], then there is a
constant C2α depending only on α such that
Z t2j 2j
X
α−1
(t2j − u) z(u) du − ak,2j z(tk ) ≤ C2α tα2j h2 .
0 k=0

where ak,2j are the weights defined by,



 (2j − 1)α+1 − (2j − 1 − α)(2j)α , if k = 0,






ak,2j = α+1
+ (2j − 1 − k)α+1 − 2(2j − k)α+1 ,
α(α + 1)  (2j − k + 1) if 1 ≤ k ≤ 2j − 1,


 1, if k = 2j.

Lemma 4.3.4. Let 0 < α ≤ 2. If z ∈ C 3 [0, T ], then there is a constant C3α depending
only on α such that
Z t2j 2j
X
α−1
(t2j − u) z(u) du − ck,2j z(tk ) ≤ C3α tα2j h3 . (4.3.1)
0 k=0

and
Z t2j+1 2j
X
α−1
(t2j+1 − u) z(u) du − ck,2j z(tk+1 ) ≤ C3α tα2j+1 h3 , (4.3.2)
t1 k=0
59

where ck,2j are the weights defined in 4.2.7

Proof. We have
Z t2j 2j
X
α−1
I= (t2j − u) z(u) du − ck,2j z(tk )
0 k=0
Z t2j Z t2j
= (t2j − u)α−1 z(u) du − (t2j − u)α−1 P2 (u) du, (4.3.3)
0 0

where P2 (u) is the piecewise quadratic interpolation polynomial of z(u), defined by (4.2.4).
Thus we have
j−1 Z t2k+2  
X
α−1
|I| = (t2j − u) z(u) − P2 (u) du
k=0 t2k
j−1 t2k+2
XZ z 000 (ξ)
= (t2j − u)α−1 (u − t2k )(u − t2k+1 )(u − t2k+2 ) du
k=0 t2k 3!
000 t2j
kf k∞
Z
≤ (2h)3 (t2j − u)α−1 du = C3α tα2j h3 ,
3! 0

which shows (4.3.1). Similarly, we can show (4.3.2).

Lemma 4.3.5. [28] Let 0 < α ≤ 2 and m be a positive integer. Let ak,2j and bk,2j , k =
0, 1, 2, . . . , 2j, j = 1, 2, . . . , m be introduced in (4.2.10) and (4.2.12), respectively. Then
we have

ak,2j ≥ 0, bk,2j ≥ 0, k = 0, 1, 2, . . . , 2j,

and
2j 2j
X 1 X 1 α
ak,2j ≤ T α, bk,2j ≤ T . j = 1, 2, . . . , m.
k=0
α k=0
α

Further, there exist constants D1α and D2α such that

a2j,2j = D2α hα , b2j,2j = D1α hα , j = 1, 2, . . . , m.

Lemma 4.3.6. Let 0 < α ≤ 2. Let ck,2j , k = 0, 1, 2, . . . , 2j, j = 1, 2, . . . , m be introduced


in (4.2.8). Then we have

ck,2j ≥ 0, k = 0, 1, 2, . . . , 2j, (4.3.4)


60

and
2j
X 1 α
ck,2j ≤ T . (4.3.5)
k=0
α

Further there exists a constant D3α such that

c2j,2j = D3α hα , j = 1, 2, . . . , m. (4.3.6)

Proof. We first show that

F1 (l) ≤ 0, l = 0, 1, 2, . . . , j − 1. (4.3.7)

It is easy to show that



F1 (l) = 2 (2j − 2l)α+2 − (α + 2)(2j − 2l)α+1 − (2j − 2l − 2)α+2

− (α + 2)(2j − 2l − 2)α+1 , l = 0, 1, 2, . . . , j − 1.

Further, after some direct calculations, we can show that

(γ + 1)(n + 2)γ + (γ + 1)nγ + nγ+1 − (n + 2)γ+1 ≥ 0, ∀ n ∈ Z+ , γ > 0.

By putting n = 2j − 2l − 2 and γ = α + 1, we get (4.3.7).


Next we show

F0 (l) + F2 (l − 1) ≥ 0, l = 1, 2, . . . , j − 1. (4.3.8)

It is easy to show that

F0 (l) + F2 (l − 1) = 2(2j − 2l + 2)α+2 − (α + 2)(2j − 2l + 2)α+1 − 6(α + 2)(2j − 2l)α+1

− 2(2j − 2l − 2)α+2 − (α + 2)(2j − 2l − 2)α+1 .

Further, after some direct calculations, we can show that

2(n+4)α+2 −(α+2)(n+4)α+1 −6(α+2)(n+2)α+1 −2nα+2 −(α+2)nα+1 ≥ 0, ∀ n ∈ Z+ .

(4.3.9)

Hence (4.3.8) follows from (4.3.9). Finally we can also show F0 (0) ≥ 0 and F2 (j − 1) ≥ 0.
Hence we prove (4.3.4).
61

Further (4.3.5) follows from


2j Z t2j
X 1 1
ck,2j = (t2j − u)α−1 du = tα2j ≤ T α .
k=0 0 α α
For (4.3.6), we have, by Lemma 4.2.1, c2j,2j = 21 F2 (j − 1) = D3α hα , with the suitable
constant D3α . Together these estimates complete the proof of Lemma 4.3.6.

Proof of Theorem 4.3.1. We first consider the case where 1 < α ≤ 2. We will use mathe-
matical induction. Note that, by assumptions, |y(t0 ) − y0 | = 0, |y(t1 ) − y1 | = 0. Assume
that

|y(tk ) − yk | ≤ C0 h3 , (4.3.10)

is true for k = 0, 1, 2, . . . , 2j − 1, j = 1, 2, . . . , m. We must prove that this also holds for


k = 2j. In fact, we have, with j = 1, 2, . . . , m,
 
Γ(α) y(t2j ) − y2j
Z t2j  2j−1
X 
α−1
= (t2j − u) f (u, y(u)) du − ck,2j f (tk , yk ) − c2j,2j f (t2j , tP2j )
0 k=0
Z t2j Z t2j
= (t2j − u)α−1 f (u, y(u)) du − (t2j − u)α−1 P2 (u) du
0 0
Z t2j  2j−1
X 
α−1
+ (t2j − u) P2 (u) du − ck,2j f (tk , yk ) − c2j,2j f (t2j , tP2j )
0 k=0
Z t2j Z t2j 
α−1
= (t2j − u) f (u, y(u)) du − (t2j − u)α−1 P2 (u) du
0 0
2j−1    
X
+ ck,2j f (tk , y(tk )) − f (tk , yk ) + c2j,2j f (t2j , y(t2j )) − f (t2j , tP2j )
k=0

= I1 + II1 + III1 .

For I1 , we have, by Lemma 4.3.4,


Z t2j Z t2j
α−1
|I1 | = (t2j − u) f (u, y(u)) du − (t2j − u)α−1 P2 (u) du ≤ C3α T α h3 .
0 0

For II1 , we have, by Lemma 4.3.6 and the Lipschitz condition (4.2.1),
2j−1 2j−1
X X
|II1 | ≤ ck,2j |f (tk , y(tk )) − f (tk , yk )| ≤ ck,2j L|y(tk ) − yk |
k=0 k=0
1
≤ T α L max |y(tk ) − yk |.
α 0≤k≤2j−1
62

For III1 , we have, by Lemma 4.3.6 and the Lipschitz condition,

P
|III1 | ≤ c2j,2j |f (t2j , y(t2j )) − f (t2j , y2j )| ≤ D3α hα L|y(t2j ) − y2j
P
|.

P
Now let us consider the bound for |y(t2j ) − y2j |. We have

 
P
Γ(α) y(t2j ) − y2j
Z t2j  2j−1
X 
α−1
= (t2j − u) f (u, y(u)) du − ak,2j f (tk , yk ) − a2j,2j f (t2j , tP2jP )
0 k=0
Z t2j Z t2j 
= (t2j − u)α−1 f (u, y(u)) du − (t2j − u)α−1 P1 (u) du
0 0
2j−1    
X
+ ak,2j f (tk , y(tk )) − f (tk , yk ) + a2j,2j f (t2j , y(t2j )) − f (t2j , tP2jP )
k=0

= I2 + II2 + III2 .

For I2 , we have, by Lemma 4.3.3,


Z t2j Z t2j
α−1
|I2 | = (t2j − u) f (u, y(u)) du − (t2j − u)α−1 P1 (u) du ≤ C2α T α h2 .
0 0

For II2 , we have, by Lemma 4.3.5 and the Lipschitz condition (4.2.1),
2j−1 2j−1
X X
|II2 | ≤ ak,2j |f (tk , y(tk )) − f (tk , yk )| ≤ ak,2j |y(tk ) − yk |
k=0 k=0
1
≤ T α L max |y(tk ) − yk |.
α 0≤k≤2j−1

For III2 , we have, by Lemma 4.3.5 and Lipschitz condition (4.2.1),

PP
|III2 | ≤ a2j,2j |f (t2j , y(t2j )) − f (t2j , y2j )| ≤ D2α hα L|y(t2j ) − y2j
PP
|.

PP
We also need to consider the bound for |y(t2j ) − y2j |. We have

  Z t2j 2j−1
X
PP α−1
Γ(α) y(t2j ) − y2j = (t2j − u) f (u, y(u)) du − bk,2j f (tk , yk )
0 k=0
Z t2j 2j−1
X
= (t2j − u)α−1 f (u, y(u)) du − bk,2j f (tk , y(tk ))
0 k=0
2j−1  
X
+ bk,2j f (tk , y(tk )) − f (tk , yk ) = I3 + II3 .
k=0
63

For I3 , we have, by Lemma 4.3.2, |I3 | ≤ C1α T α h.


For II3 , we have, by Lemma 4.3.5 and Lipschitz condition (4.2.1),

1 α
|II3 | ≤ T L max |y(tk ) − yk |.
α 0≤k≤2j−1

Together these estimates, we have

1
Γ(α)|y(t2j ) − y2j | ≤ C3α T α h3 + T α L max |y(tk ) − yk |
α 0≤k≤2j−1
1  1
+ D3α hα L C2α T α h2 + T α L max |y(tk ) − yk |
Γ(α) α 0≤k≤2j−1

α α 1 h
α α 1 α i
+ D2 h L C T h + T L max |y(tk ) − yk |
Γ(α) 1 α 0≤k≤2j−1
α α α 2+α
h D LC2 T h Dα Dα L2 C1α T α h1+2α i
≤ C3α T α h3 + 3 + 3 2
Γ(α) Γ(α)2
h1 Dα L2 ( α1 T α )hα D3α D2α ( α1 T α )L3 h2α i
+ T αL + 3 + max |y(tk ) − yk |.
α Γ(α) Γ(α)2 0≤k≤2j−1

By mathematical induction (4.3.10), we have


h C α T α h3 D3α LC2α T α h2+α D3α D2α L2 C1α T α h1+2α i
3
|y(t2j ) − y2j | ≤ + +
Γ(α) Γ(α)2 Γ(α)3
h 1 α D3α L2 ( α1 T α )hα D3α D2α ( α1 T α )L3 h2α i
+ T L+ + 2
C0 h3 .
Γ(α + 1) Γ(α + 1)Γ(α) Γ(α + 1)Γ(α)
(4.3.11)

1
We first choose T sufficiently small, see Lemma 3.1 in [28] such that Γ(α+1)
T αL ≤ 12 .
Then we fix this value for T and make the sum of the remaining terms in the right hand
C0 3
side of (4.3.11) smaller than 2
h (for sufficiently small h) by choosing C0 sufficiently
large. Hence we obtain, for 1 < α ≤ 2,

C0 3 C0 3
|y(t2j ) − y2j | ≤ h + h = C0 h3 . (4.3.12)
2 2

We also need to show that if (4.3.10) is true for k = 0, 1, 2, . . . , 2j with j = 1, 2, . . . , m−


64

1, then it also holds for k = 2j + 1. In fact, we have, with j = 1, 2, . . . , m − 1,


  Z t2j+1
Γ(α) y(t2j+1 ) − y2j+1 = (t2j+1 − u)α−1 f (u, y(u)) du
0
Z t1 2j−1 
X
− (t2j+1 − u)α−1 f (u, y(u)) du + P
ck,2j f (tk+1 , yk+1 ) + c2j,2j f (t2j+1 , y2j+1 )
0 k=0
Z t2j+1  2j−1
X 
α−1 P
= (t2j+1 − u) f (u, y(u)) du − ck,2j f (tk+1 , yk+1 ) + c2j,2j f (t2j+1 , y2j+1 )
t1 k=0
Z t2j+1 Z t2j+1 
= (t2j+1 − u)α−1 f (u, y(u)) du − (t2j+1 − u)α−1 Q2 (u) du
t1 t1
2j−1    
X
P
+ ck,2j f (tk+1 , y(tk+1 )) − f (tk+1 , yk+1 ) + c2j,2j f (t2j+1 , y(t2j+1 )) − f (t2j+1 , y2j+1 ) .
k=0

Using the same arguments as proving (4.3.12), we can show

|y(t2j+1 ) − y2j+1 | ≤ C0 h3 , j = 1, 2, . . . , m − 1.

Hence we complete the proof for the case where 1 < α ≤ 2.


For the case 0 < α ≤ 1. Note that, by the assumptions, |y(t0 ) − y0 | = 0, and
|y(t1 ) − y1 | = 0. Assume that

|y(tk ) − yk | ≤ C0 h1+2α , (4.3.13)

for k = 0, 1, 2, . . . , 2j − 1, j = 1, 2, . . . , m. We must prove that this also holds for k = 2j.


In fact, by using the same arguments as showing (4.3.12), we get

h C α T α h3 D3α LC2α T α h2+α D3α D2α L2 C1α T α h1+2α i


3
|y(t2j ) − y2j | ≤ + +
Γ(α) Γ(α)2 Γ(α)3
h 1 α D3α L2 ( α1 T α )hα D3α D2α ( α1 T α )L3 h2α i
+ T L+ + 2
C0 h1+2α .
Γ(α + 1) Γ(α + 1)Γ(α) Γ(α + 1)Γ(α)
(4.3.14)
1
As in the case for 1 < α ≤ 2, we first choose T sufficiently small such that Γ(α+1)
T αL ≤
1
2
. Then we fix this value for T and make the sum of the remaining terms in the right had
C0 1+2α
side of (4.3.14) smaller than 2
h (for sufficiently small h) by choosing C0 sufficiently
large.
Hence we obtain, for 0 < α ≤ 1,
C0 1+2α C0 1+2α
|y(t2j ) − y2j | ≤ h + h = C0 h1+2α . (4.3.15)
2 2
65

Similarly, we can show that if (4.3.13) is true for k = 0, 1, 2, . . . , 2j with j = 1, 2, . . . , m−


1, then it is also true for k = 2j + 1. Together these estimates complete the proof of The-
orem 4.3.1.

4.4 Numerical examples


Example 9. [28] This example deals with the nonlinear fractional differential equation
where the unknown solution y has a smooth derivative of order α. Specifically we shall
look at the equation

C α 40320 8−α Γ(5 + α/2) 4−α/2 9 3 3


0 Dt y(t) = t −3 t + Γ(α + 1) + tα/2 − t4 − [y(t)]3/2 .
Γ(9 − α) Γ(5 − α/2) 4 2

The initial conditions were chosen to be homogeneous (y(0) = 0, y 0 (0) = 0; the latter
only in the case 1 < α < 2). This equation has been chosen because it exhibits a difficult
(nonlinear and nonsmooth) right-hand side, and yet we are able to find its exact solution,
thus allowing us to compare the numerical results for this nontrivial case to the exact
results. Indeed, the exact solution of this initial value problem is

9
y(t) = t8 − 3t4+α/2 + tα ,
4

and hence

C α 40320 8−α Γ(5 + α/2) 4−α/2 9


0 Dt y(t) = t −3 t + Γ(α + 1),
Γ(9 − α) Γ(5 − α/2) 4

which implies C α 3
0 Dt y ∈ C [0, T ] for arbitrary T > 0 and 0 < α ≤ 2, and thus the conditions

of Theorem 4.3.1 are fulfilled.


For various choices of α ∈ (0, 2], we compute the errors at tn = 1. We choose the
stepsize h = 1/(5 × 2l ), l = 1, 2, . . . , 7, i.e, we divided the interval [0, 1] into n = 1/h
small subintervals with nodes 0 = t0 < t1 < · · · < tn = 1. Then we compute the error
e(tn ) = y(tn ) − yn . By Theorem 4.3.1, we have


 C0 h1+2α ,

if 0 < α ≤ 1,
max |y(tk ) − yk | ≤
0≤k≤2m  C0 h3 , if 1 < α ≤ 2.

66

In Tables 4.4.1-4.4.2, we compute the orders of convergence for different values of α.


We observe that the order of convergence is O(h1+2α ) for 0 < α ≤ 1. But the observed
order of convergence is higher than 3 for 1 < α ≤ 2 in this example. For example, when
α = 1.35, the experimentally determined order is 3.5. When α = 1.65, the experimentally
determined order of convergence (EOC) is almost 4.

n EOC ( α = .35) EOC ( α = .40) EOC ( α = .45) EOC ( α = .50)


10
20 1.2475 1.2993 1.2965 1.2037
40 1.5302 1.6834 1.7891 1.8583
80 1.7461 1.8758 1.9787 2.0638
160 1.8293 1.9391 2.0350 2.1232
320 1.8518 1.9482 2.0391 2.1284
640 1.8478 1.9356 2.0233 2.1135

Table 4.4.1: Numerical results at t = 1 in Example 9 with the different fractional order
α<1

n EOC ( α = 1.35 ) EOC ( α = 1.40) EOC ( α = 1.60) EOC ( α = 1.65)


10
20 3.4810 3.5611 3.7581 3.7921
40 3.6886 3.7438 3.8753 3.8963
80 3.7695 3.8198 3.9268 3.9414
160 3.7977 3.8517 3.9526 3.99637
320 3.7944 3.8588 3.99667 3.9772
640 3.7662 3.8355 3.9810 3.9380

Table 4.4.2: Numerical results at t = 1 in Example 9 with the different fractional order
α>1

In Figure 4.4.1, we plot the order of convergence. We have

log2 (|e(tn )|) ≤ log2 (C) + (1 + 2α)log2 (h).


67

−4

−6

−8

log2(|e(t)|)
−10

−12

−14

−16

−18
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 4.4.1: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 9 with α = 0.35

−5

−10

−15
log2(|e(t)|)

−20

−25

−30

−35
−10 −9 −8 −7 −6 −5 −4 −3
log2(h)

Figure 4.4.2: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 9 with α = 1.25

Let y = log2 (|e(tn )|) and x = log2 (h). We plot the function y = y(x) for the different
values of x = log2 (h) where h = 1/(5 × 2l ), l = 1, 2, . . . , 7. To observe the order of
convergence, we also plot the straight line y = (1 + 2α)x, where α = 0.35. We see that
these two lines are almost parallel which confirms that the order of convergence of the
numerical method is O(h1+2α ).
In Figure 4.4.2, we will plot the order of convergence for α = 1.25. We plot the function
y = y(x) for the different values of x = log2 (h) where h = 1/(5 × 2l ), l = 1, 2, . . . , 7. To
observe the order of convergence, we also plot the straight line y = 3x. We observe that
the order of convergence is higher than 3 ( almost 1 + 2α).
Chapter 5

Higher order numerical methods for


fractional differential equations by
extrapolation

5.1 Introduction
The aim of this chapter is to discuss convergence acceleration methods for fractional dif-
ferential equation by extrapolation procedure. We will consider Richardson extrapolation
algorithms for solving higher order fractional differential equations. Richardson extrapo-
lation is an idea which can often be used to improve the convergence order of the numerical
method: from a method of order O(hk0 ) we can get a method of order O(hk1 , k0 < k1 ).

Suppose that we want to approximate a quantity A, we have available approximation


A(h) for stepsize h > 0.

For example, we want to approximate

A = f 0 (x0 ).

By Taylor formula,
0 f (x0 + h) − f (x0 ) f 00 (x0 ) f 000 (x0 ) 2
A = f (x0 ) = − h− h − ...
h 2! 3!

68
69

Denote
f (x0 + h) − f (x0 )
A0 (h) = ,
h
we have

A = A0 (h) + a0 h + a1 h2 + . . . (5.1.1)

we note that A0 (h) is a numerical method of order O(h).

We use the stepsize ht , t > 0, for example t = 2, we get the approximate A0 ( ht ) i.e.,
h h h
A = A0 ( ) + a0 ( ) + a1 ( )2 + . . . (5.1.2)
t t t
Multiplying t in both sides of (5.1.2) we get
h h
tA = tA0 ( ) + a0 h + a1 t( )2 + . . . . (5.1.3)
t t
Subtracting (5.1.1) from (5.1.3), we get
h
(t − 1)A = [tA0 ( ) − A0 (h)] + O(h2 ).
t
i.e.
tA0 ( ht ) − A0 (h)
A= + O(h2 ),
t−1
Denote
tA0 ( ht ) − A0 (h)
A1 (h) = ,
t−1
we get

A = A1 (h) + O(h2 ).

Thus we see that A1 (h) is a numerical method of O(h2 ). Let t = 2 , we get the extrapo-
lation formula
h
A1 (h) = 2A0 ( ) − A0 (h).
2
70

5.2 Richardson extrapolation


Let us now consider the general idea of the Richardson extrapolation. Assume that A(h)
is the approximation of a quantity A, where h is the stepsize . We also assume that

A = A0 (h) + a0 hk0 + a1 hk1 + a2 hk2 + . . . (5.2.1)

with 0 < k0 < k1 < k2 < . . . , we use the stepsize ht , t > 0. (for example t = 2) to get the
approximation A0 ( ht ), i.e.
h h h h
A = A0 ( ) + a0 ( )k0 + a1 ( )k1 + a2 ( )k2 + . . . . (5.2.2)
t t t t
Multiplying tk0 in both sides, we get
h h h
tk0 A = tk0 A0 ( ) + a0 (h)k0 + a1 tk0 ( )k1 + a2 tk0 ( )k2 + . . . . (5.2.3)
t t t
Subtracting (5.2.1) from (5.2.3), we have
tk0 A0 ( ht ) − A0 (h)
A= + b1 hk1 + b2 hk2 + . . .
tk0 − 1
Denote
tk0 A0 ( ht ) − A0 (h)
A1 (h) = ,
tk0 − 1
we have

A = A1 (h) + b1 hk1 + b2 hk2 + . . .

Thus A1 (h) is a numerical method of convergence order O(hk1 ), we can continue this
process to construct the numerical methods of order O(hk2 ), O(hk3 ), . . . . Choose t = 2
we first calculate A0 (h), A0 ( h2 ), A0 ( 2h2 ), A0 ( 2h3 ) which has convergence order O(hk0 ).
We next calculate A1 (h), A1 ( h2 ), A1 ( 2h2 ), . . . which has convergence order O(hk1 ). Sim-
ilarly, we can calculate A2 (h), A2 ( h2 ), A2 ( 2h2 ), . . . which has convergence order O(hk2 ).
We proceed by setting up a triangular array ( so-called Romberg tableau) of approxi-
mation value for A of the form
A0 (h)
A0 ( h2 ) A1 (h)
A0 ( 2h2 ) A1 ( h2 ) A2 (h)
A0 ( 2h3 ) A1 ( 2h2 ) A2 ( h2 )
A0 ( 2h4 ) A1 ( 2h3 ) A2 ( 2h2 )
71

. . .
. . .
. . .
Here
2k0 A0 ( h2 ) − A0 (h) h 2k0 A0 ( 2h2 ) − A0 ( h2 )
A1 (h) = , A1 ( ) = ,
2k0 − 1 2 2k0 − 1

2k1 A1 ( h2 ) − A1 (h) h 2k1 A1 ( 2h2 ) − A1 ( h2 )


A2 (h) = , A2 ( ) =
2k1 − 1 2 2k1 − 1

To observe the order O(hk0 ) from A0 (h), A0 ( h2 ), A0 ( 2h2 ), . . . we can use the following
idea.

Note that,

|e0 (h)| = |A − A0 (h)| ≤ Chk0 .

h h h
|e0 ( )| = |A − A0 ( )| ≤ C( )k0 .
2 2 2
Thus
|e0 (h)| hk0 |e0 (h)|
≈ = 2k0 , k0 = log2 .
|e0 ( h2 )| ( h2 )k0 |e0 ( h2 )|
Hence one can calculat all the values
|e0 (h)| |e0 ( h2 )| |e0 ( 2h2 )|
log2 , log2 , log2 ,...
|e0 ( h2 )| |e0 ( 2h2 )| |e0 ( 2h3 )|
and observe that the values should be k0 approximately.

Similarly, we can calculate


|e1 (h)| |e1 ( h2 )| |e1 ( 2h2 )|
log2 , log2 , log2 ,...
|e1 ( h2 )| |e1 ( 2h2 )| |e1 ( 2h3 )|
and observe that the values should be k1 approximately.
In this chapter we will consider two extrapolation algorithms for solving fractional
differential equations. One algorithm is for solving a linear fractional differential equation
which is based on the direct discretization of the fractional differential operator. Another
algorithm is for solving the nonlinear fractional differential equation which is based on the
discretization of the equivalent integral form of the fractional differential equation. We
also discuss in detail how to determine the starting values and the starting integrals in the
72

numerical methods for quadratic interpolation polynomials. Numerical results show that
the approximate solutions of these two numerical methods have the expected asymptotic
expansions.
We consider the Richardson extrapolation algorithms for solving the following frac-
tional order differential equation

C α
0 Dt y(t) = f (t, y(t)), 0 < t ≤ T, (5.2.4)
(k)
y (k) (0) = y0 , k = 0, 1, 2, . . . , dαe − 1, (5.2.5)

(k)
where the y0 may be arbitrary real numbers and α > 0. Here C α
0 Dt denotes the differential

operator in the sense of Caputo defined by,


Z t
C α 1
0 Dt y(t) = (t − u)n−α−1 y (n) (u) du,
Γ(n − α) 0

where n = dαe is the smallest integer ≥ α.


Extrapolation can be used to accelerate the convergence of a given sequence, [6, 7, 92].
Its applicability depends on the fact that a given sequence of the approximate solutions
of the problem possesses an asymptotic expansion. Let us review some extrapolation
algorithms for solving fractional differential equations. For the linear fractional differential
equation, Diethelm [26] introduced an algorithm for solving the following linear differential
equation of fractional order, with 0 < α < 1,

C α
0 Dt y(t) = βy(t) + f (t), 0 ≤ t ≤ 1, (5.2.6)

y(0) = y0 , (5.2.7)

where β < 0 and f is a given function on [0, 1]. Diethelm and Walz [33] proved that
the approximate solution of the numerical algorithm in [26] has an asymptotic expansion.
For the general nonlinear fractional differential equation (5.2.4) -(5.2.5),Diethelm, Ford
and Freed [28] introduced a fractional Adams-type predictor-corrector method for solv-
ing (5.2.4)-(5.2.5) and numerical evidence suggests that the approximate solution of the
numerical method in [28] has also an asymptotic expansion.
73

Recently, Yan, Pal and Ford [93] extended the numerical method in [26] and obtained a
high order numerical method for solving (5.2.6) - (5.2.7) and proved that the approximate
solution has an asymptotic expansion.

5.3 The linear fractional differential equation

5.3.1 The numerical method

In this section we will consider a higher order numerical method for solving (5.2.6)-(5.2.7).
It is well-known that (5.2.6)-(5.2.7) is equivalent to, with 0 < α < 1,

R α
0 Dt [y(t) − y0 ] = βy(t) + f (t), 0 ≤ t ≤ 1, (5.3.1)

where R α
0 Dt y(t) denotes the Riemann-Liouville fractional derivative defined by,

with 0 < α < 1,


Z t
1 d
R α
0 Dt y(t) = (t − u)−α y(u) du. (5.3.2)
Γ(1 − α) dt 0

By using the Hadamard finite-part integral, R α


0 Dt y(t) can be written as
I t
1
R α
0 Dt y(t) = (t − u)−1−α y(u) du. (5.3.3)
Γ(−α) 0
H
Here the integral denotes a Hadamard finite-part integral [25].
Let M be a positive integer and let 0 = t0 < t1 < · · · < tj < · · · < tM = 1 be a
partition of [0, 1]. At t = tj , we have

R α
0 Dt [y(tj ) − y0 ] = βy(tj ) + f (tj ), j = 1, 2, . . . , M.

Note that
tj t−α 1
I I
1 −1−α j
R α
0 Dt y(tj ) = (tj − u) y(u) du = w−1−α y(tj − tj w) dw. (5.3.4)
Γ(−α) 0 Γ(−α) 0
H1
For every j, we denote g(w) = y(tj − tj w) and approximate 0
w−1−α g(w) dw by
H1
0
w−1−α g1 (w) dw, where g1 (w) is the piecewise linear interpolation polynomial on the
74

nodes 0, 1j , 2j , . . . , jj = 1. We then obtain

t−α 1
I
j
R α
0 Dt y(tj ) = w−1−α y(tj − tj w) dw
Γ(−α) 0
j I
t−α
j
X tk
−1−α

= w g1 (w) dw + Rj (g)
Γ(−α) k=1 tk−1
j
t−α
j
X 
= αk,j y(tj−k ) + Rj (g) ,
Γ(−α) k=0

where αk,j , k = 0, 1, 2, . . . , j are weights and Rj (g) is the remainder term. Thus (5.3.1)
satisfies, with j = 1, 2, . . . , M ,
1 h
y(tj ) = α
tαj Γ(−α)f (tj )
α0,j − tj Γ(−α)β
j j i
X X
− αk,j y(tj−k ) + y0 αk,j − Rj (g) . (5.3.5)
k=1 k=0

Let yj ≈ y(tj ) be the approximate solutions of y(tj ). We define the following finite
difference method for solving (5.2.6) - (5.2.7), with j = 1, 2, . . . , M ,
j j
1 h
α
X X i
yj = t Γ(−α)f (t j ) − α k,j yj−k + y0 α k,j . (5.3.6)
α0,j − tαj Γ(−α)β j k=1 k=0

Diethelm and Walz [33] proved the following asymptotic expansion theorem.

Theorem 5.3.1 (Theorem 2.1 in [33]). Let 0 < α < 1 and M be a positive integer. Let
0 = t0 < t1 < t2 < · · · < tj < · · · < tM = 1 be a partition of [0, 1] and h the stepsize. Let
y(tj ) and yj be the exact and the approximate solutions of (5.3.5) and (5.3.6), respectively.
Assume that the function y ∈ C m+2 [0, 1], m ≥ 2. Then there exist coefficients cµ = cµ (α)
and c∗µ = c∗µ (α) such that the sequence {yl }, l = 0, 1, 2, . . . , M possesses an asymptotic
expansion of the form
µ ∗
m+1
X X
y(tM ) − yM = cµ (M ) α−µ
+ c∗µ (M )−2µ + o((M )α−m−1 ), for M → ∞,
µ=2 µ=1

that is,
µ ∗
m+1
X X
y(tM ) − yM = cµ h µ−α
+ c∗µ h2µ + o(hm+1−α ), for h → 0,
µ=2 µ=1

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.
75

Yan, Pal and Ford [93] extended the numerical method in Diethelm and Walz [33] and
obtained a high order numerical method for solving (5.2.6)- (5.2.7). Let M be a fixed
positive integer and let 0 = t0 < t1 < t2 < · · · < t2j < t2j+1 < · · · < t2M = 1 be a partition
2j
of [0, 1] and h the stepsize. At the nodes t2j = 2M
, the equations (5.2.6)- (5.2.7) satisfy

R α
0 Dt [y(t2j ) − y0 ] = βy(t2j ) + f (t2j ), j = 1, 2, . . . , M,

2j+1
and at the nodes t2j+1 = 2M
, the equations (5.2.6)- (5.2.7) satisfy

R α
0 Dt [y(t2j+1 ) − y0 ] = βy(t2j+1 ) + f (t2j+1 ), j = 0, 1, 2, . . . , M − 1. (5.3.7)

Note that
t2j t−α 1
I I
1 −1−α 2j
R α
0 Dt y(t2j ) = (t2j −u) y(u) du = w−1−α y(t2j −t2j w) dw. (5.3.8)
Γ(−α) 0 Γ(−α) 0
H1
For every j, we denote g(w) = y(t2j − t2j w) and approximate 0
w−1−α g(w) dw by
H1
0
w−1−α g2 (w) dw, where g2 (w) is the piecewise quadratic interpolation polynomials on
the nodes wl = l/2j, l = 0, 1, 2, . . . , 2j. More precisely, we have, for k = 1, 2, . . . , j,

(w − w2k−1 )(w − w2k )


g2 (w) = g(w2k−2 )
(w2k−2 − w2k−1 )(w2k−2 − w2k )
(w − w2k−2 )(w − w2k )
+ g(w2k−1 )
(w2k−1 − w2k−2 )(w2k−1 − w2k )
(w − w2k−2 )(w − w2k−1 )
+ g(w2k ), for w ∈ [w2k−2 , w2k ].
(w2k − w2k−2 )(w2k − w2k−1 )

Thus

t−α 1
I
2j
R α
0 Dt y(t2j ) = w−1−α y(t2j − t2j w) dw
Γ(−α) 0
j I
t−α
2j
X w2k 
= w−1−α g2 (w) dw + R2j (g)
Γ(−α) k=1 w2k−2
2j
t−α
2j
X 
= αk,2j y(t2j−k ) + R2j (g)
Γ(−α) k=0
76

where R2j (g) is the remainder term and αk,2j , k = 0, 1, 2, . . . , 2j are weights given by

(−α)(−α + 1)(−α + 2)(2j)−α αl,2j



2−α (α + 2),



 for l = 0,



 (−α)22−α ,



 for l = 1,



 (−α)(−2−α α) + 1 F0 (2), for l = 2,

2
=
−F1 (k), for l = 2k − 1, k = 2, 3, . . . , j,







1
for l = 2k, k = 2, 3, . . . , j − 1,



 2
(F2 (k) + F0 (k + 1)),



 1 F (j),


2 2 for l = 2j.

Here
 
F0 (k) =(2k − 1)(2k) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)
  
−α+1 −α+1
− (2k − 1) + 2k (2k) − (2k − 2) (−α)(−α + 2)
 
+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1),

 
F1 (k) =(2k − 2)(2k) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)
  
− (2k − 2) + 2k (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)
 
−α+2 −α+2
+ (2k) − (2k − 2) (−α)(−α + 1),

and
 
−α −α
F2 (k) =(2k − 2)(2k − 1) (2k) − (2k − 2) (−α + 1)(−α + 2)
  
− (2k − 2) + (2k − 1) (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)
 
+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1).

Hence (5.3.1) satisfies, with j = 1, 2, . . . , M ,


2j 2j
1 h X X i
y(t2j ) = tα2j Γ(−α)f (t2j ) − αk,2j y(t2j−k ) + y0 αk,2j − R2j (g) .
α0,2j − tα2j Γ(−α)β k=1 k=0

(5.3.9)
77

2j+1
At the nodes t2j+1 = 2M
, j = 0, 1, 2, . . . , M − 1, we have
I t2j+1
1
R α
0 Dt y(t2j+1 ) = (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
I t1
1
= (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
I 2j
t−α
2j+1 2j+1
+ w−1−α y(t2j+1 − t2j+1 w) dw.
Γ(−α) 0
H 2j
For every j, we denote g(w) = y(t2j+1 − t2j+1 w) and approximate 02j+1 w−1−α g(w) dw
H 2j
by 02j+1 w−1−α g2 (w) dw, where g2 (w) is the piecewise quadratic interpolation polynomials
l
on the nodes wl = 2j+1
, l = 0, 1, 2, . . . , 2j. We then get
Z t1
1
R α
0 Dt y(t2j+1 ) = (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
j I
t−α
2j+1
X w2k 
+ w−1−α g2 (w) dw + R2j+1 (g)
Γ(−α) k=1 w2k−2
Z t1
1
= (t2j+1 − u)−1−α y(u) du
Γ(−α) 0
2j
t−α
2j+1
X 
+ αk,2j+1 y(t2j+1−k ) + R2j+1 (g)
Γ(−α) k=0

where R2j+1 (g) is the remainder term and αk,2j+1 = αk,2j , k = 0, 1, 2, . . . , 2j. Hence

2j
1 h X
y(t2j+1 ) = tα2j+1 Γ(−α)f (t2j+1 ) − αk,2j+1 y(t2j+1−k )
α0,2j+1 − tα2j+1 Γ(−α)β k=1
2j Z t1 i
X
+ y0 αk,2j+1 − R2j+1 (g) − tα2j+1 (t2j+1 − u)−1−α y(u) du . (5.3.10)
k=0 0

Here α0,l − tαl Γ(−α)β < 0, l = 2j, 2j + 1, which follow from Γ(−α) < 0, β < 0 and
α0,2j+1 = α0,2j .
Let y2j ≈ y(t2j ) and y2j+1 ≈ y(t2j+1 ) denote the approximate solutions of y(t2j ) and
y(t2j+1 ), respectively. We define the following numerical methods for solving (5.2.6)-
(5.2.7), with j = 1, 2, . . . , M ,
2j 2j
1 h
α
X X i
y2j = t2j Γ(−α)f (t2j ) − αk,2j y2j−k + y0 αk,2j , (5.3.11)
α0,2j − tα2j Γ(−α)β k=1 k=0
78

and, with j = 1, 2, . . . , M − 1,
2j
1 h
α
X
y2j+1 = t2j+1 Γ(−α)f (t2j+1 ) − αk,2j+1 y2j+1−k
α0,2j+1 − tα2j+1 Γ(−α)β k=1
2j Z t1 i
X
+ y0 αk,2j+1 − tα2j+1 (t2j+1 − u)−1−α y(u) du . (5.3.12)
k=0 0

Yan, Pal and Ford [93] proved the following Theorem.

Theorem 5.3.2 (for proof see the Theorem 3.3.3). Let 0 < α < 1 and M be a positive
integer. Let 0 = t0 < t1 < t2 < · · · < t2j < t2j+1 < · · · < t2M = 1 be a partition of
[0, 1] and h the stepsize. Let y(t2j ), y(t2j+1 ), y2j and y2j+1 be the exact and the approx-
imate solutions of (5.3.9) - (5.3.12), respectively. Assume that y ∈ C m+2 [0, 1], m ≥ 3.
Further assume that we can approximate the starting value y1 and the starting integral
R t1
0
(t2j+1 − τ )−1−α y(τ ) dτ in (5.3.12) by using some numerical methods and obtain the
required accuracy. Then there exist coefficients cµ = cµ (α) and c∗µ = c∗µ (α) such that the
sequence {yl }, l = 0, 1, 2, . . . , 2M possesses an asymptotic expansion of the form
µ ∗
m+1
X X
y(t2M ) − y2M = cµ (2M )α−µ + c∗µ (2M )−2µ + o((2M )α−m−1 ), for M → ∞,
µ=3 µ=2

that is,
µ ∗
m+1
X X
y(t2M ) − y2M = cµ hµ−α + c∗µ h2µ + o(hm+1−α ), for h → 0,
µ=3 µ=2

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.

5.3.2 Approximating the starting values and the starting integrals

To obtain the approximate solutions yl , l = 0, 1, 2, . . . , 2M numerically, we need to approx-


Rt
imate the starting value y1 and the initial integral 0 1 (t2j+1 − u)−1−α y(u) du in (5.3.12).
We shall consider these issues in this subsection and follow the idea in Cao and Xu [9].
At t = t1 , we have
t1
t−α 1
I I
1 −1−α
R α
0 Dt y(t1 ) = (t1 − u) y(u) du = 1 w−1−α y(t1 − t1 w) dw.
Γ(−α) 0 Γ(−α) 0
79

H1 H1
We denote g(w) = y(t1 − t1 w) and approximate 0
w−1−α g(w) dw by 0
w−1−α g2 (w) dw,
where g2 (w) is the quadratic interpolation polynomial on the nodes 0, 21 , 1 defined by
(w − 21 )(w − 1) (w − 0)(w − 1) 1
g2 (w) = 1 g(0) + 1 g( )
(0 − 2 )(0 − 1) ( 2 − 0)( 21 − 1) 2
(w − 0)(w − 12 )
+ g(1), for w ∈ [0, 1]. (5.3.13)
(1 − 0)(1 − 21 )
We then get
t−α t−α
I 1 I 1 
−1−α (1)
R α
0 Dt y(t1 ) = 1
w g(w) dw = 1
w−1−α g2 (w) dw + R2
Γ(−α) 0 Γ(−α) 0
−α 
t (1)

= 1 w10 y(t1 ) + w11 y(t 1 ) + w12 y(t0 ) + R2 , (5.3.14)
Γ(−α) 2

where
1
−1−α (w − 12 )(w − 1) 1
(w − 0)(w − 1)
I I
w10 = w dw, w11 = w−1−α dw,
0 (0 − 12 )(0 − 1) 0 ( 12 − 0)( 12 − 1)
1
(w − 0)(w − 21 )
I
w12 = w−1−α dw, (5.3.15)
0 (1 − 0)(1 − 12 )
(1)
and the remainder term R2 satisfies, [26]

(1)
|R2 | ≤ Ct31 |y 000 |∞ .

Further we approximate the value y(t 1 ) by, [9]


2

3 3 1 (2)
y(t 1 ) = y(t0 ) + y(t1 ) − y(t2 ) + R2 ,
2 8 4 8
(2) 1 3 000
where R2 = 12
h y (c). Hence we have

R α t−α
1

(1) (2)

0 Dt y(t1 ) = B̂0 y(t2 ) + B̂1 y(t1 ) + B̂2 y(t0 ) + R2 + R2 , (5.3.16)
Γ(−α)
where
3 3 1
B̂2 = w12 + w11 , B̂1 = w10 + w11 , B̂0 = − w11 .
8 4 8
Therefore we have, at t = t1 ,

1 
y(t1 ) = tα1 Γ(−α)f (t1 ) − B̂0 y(t2 ) − B̂2 y(t0 )
B̂1 − tα1 Γ(−α)β
2
X 
(1) (2)
+ y0 B̂k − R2 − R2 . (5.3.17)
k=0
80

At t = t2 , we have
t2
t−α
I I 1
1 −1−α
R α
0 Dt y(t2 ) = (t2 − u) y(u) du = 2
w−1−α y(t2 − t2 w) dw.
Γ(−α)
0 Γ(−α) 0
H 1 −1−α H1
We denote g(w) = y(t2 −t2 w) and approximate the integral 0 w g(w) dw by 0 w−1−α g2 (w) dw,
where g2 (w) is defined as in (5.3.13). We have

t−α t−α
I 1 I 1 
R α 2 −1−α 2 −1−α (3)
0 D t y(t2 ) = w g(w) dw = w g 2 (w) dw + R2
Γ(−α) 0 Γ(−α) 0
t−α  0 (3)

= 2 w1 y(t2 ) + w11 y(t1 ) + w12 y(t0 ) + R2 , (5.3.18)
Γ(−α)
(3)
where w1j , j = 0, 1, 2 are defined as in (5.3.15) and the remainder term R2 satisfies, [26]

(3)
|R2 | ≤ Ct32 |y 000 |∞ .

Therefore we have, at t = t2 ,
2 2
1 
α
X
k
X
k (3)

y(t2 ) = 0 t Γ(−α)f (t2 ) − w y(t2−k ) + y 0 w − R2 .
w1 − tα2 Γ(−α)β 2 k=1
1
k=0
1

(5.3.19)

Let yl ≈ y(tl ), l = 1, 2, be the approximate solutions of y(tl ). We define the following


numerical methods for solving yl , l = 1, 2.

2
1  X 
y1 = tα1 Γ(−α)f (t1 ) − B̂0 y2 − B̂2 y0 + y0 B̂k , (5.3.20)
B̂1 − tα1 Γ(−α)β k=0
2 2
1 α

k
X
k
 X
y2 = 0 t Γ(−α)f (t2 ) − w1 y2−k + y0 w1 . (5.3.21)
w1 − tα2 Γ(−α)β 2 k=1 k=0

Let el = y(tl ) − yl , l = 1, 2, denote the errors, we then have

1 
(1) (2)

e1 = B̂0 e2 + B̂2 e0 − R2 − R2 , (5.3.22)
B̂1 − tα1 Γ(−α)β
2
1 X
k (3)

e2 = 0 w e2−k − R2 . (5.3.23)
w1 − tα2 Γ(−α)β k=1 1

By using Gronwall’s Lemma, we get, [9]

(3)
|e2 | ≤ C|R2 | ≤ Ch3 ,
81

and

(1) (2)
|e1 | ≤ C(|R2 | + |R2 |) ≤ Ch3 .
R t1
We next consider how to approximate the starting integral 0
(t2j+1 − u)−1−α y(u) du
in (5.3.12) with j ≥ 1. Note that this integral is the usual integral since j ≥ 1 and
Z t1 Z 1
−1−α
(t2j+1 − u) y(u) du = t1 (t2j + t1 w)−1−α y(t1 − t1 w) dw.
0 0
R1
Denoting g(w) = y(t1 − t1 w) and approximating the integral 0 (t2j + t1 w)−1−α g(w) dw by
R1
(t + t1 w)−1−α g2 (w) dw, where g2 (w) is defined by (5.3.13), we have
0 2j
Z t1  
(1)
(t2j+1 − u)−1−α y(u) du = t1 wj0 y(t1 ) + wj1 y(t 1 ) + wj2 y(t0 ) + Rj , (5.3.24)
2
0

where
1
−1−α (w − 12 )(w − 1)
Z
wj0 = (t2j + t1 w) dw,
0 (0 − 21 )(0 − 1)
1
(w − 0)(w − 1)
Z
wj1 = (t2j + t1 w)−1−α dw,
0 ( 12 − 0)( 12 − 1)
1
(w − 0)(w − 21 )
Z
wj2 = (t2j + t1 w)−1−α dw,
0 (1 − 0)(1 − 21 )
(1)
and the remainder term Rj satisfies, [26]
Z 1
(1)
|Rj | ≤ (t2j + t1 w)−1−α (Ct31 ) dw ≤ Ch3 t−α
2j ≤ Ch
3−α
.
0

Further we approximate the value y(t 1 ) by


2

3 3 1 (2)
y(t 1 ) = y(t0 ) + y(t1 ) − y(t2 ) + R2 ,
2 8 4 8
(2)1 3 000
where R2 = 12 h y (c). Hence we have
Z t1  
(1) (2)
(t2j+1 − u)−1−α y(u) du = t1 B̂0,j y(t2 ) + B̂1,j y(t1 ) + B̂2,j y(t0 ) + Rj + Rj ,
0

where
3 3 1
B̂2,j = wj2 + wj1 , B̂1,j = wj0 + wj1 , B̂0,j = − wj1 .
8 4 8
R t1
We shall approximate the integral 0 (t2j+1 − u)−1−α y(u) du by
Z t1  
−1−α
(t2j+1 − u) y(u) du ≈ t1 B̂0,j y2 + B̂1,j y1 + B̂2,j y0 , (5.3.25)
0
82

and it is easy to show that

Z t1  
(t2j+1 − u)−1−α y(u) du − t1 B̂0,j y2 + B̂1,j y1 + B̂2,j y0
0
2
X 
(1) (2)
= t1 B̂k,j e2−k + Rj + R2 ≤ Ct1 (Ch3 + Ch3−α ) ≤ Ch4−α .
k=0

After obtaining y1 and y2 by (5.3.20) and (5.3.21), we then use (5.3.11), (5.3.12) and
(5.3.25) to calculate y3 , y4 , . . . , y2M .

5.4 The nonlinear fractional differential equation

5.4.1 The numerical method

In this subsection we will introduce the fractional Adams-type predictor-corrector method


for solving (5.2.4)-(5.2.5). Note that (5.2.4)-(5.2.5) is equivalent to the integral form, with
0 < α ≤ 2,
Z t
(1) t 1
y(t) = y0 + y0 + (t − u)α−1 f (u, y(u)) du. (5.4.1)
1! Γ(α) 0

(The second of the initial conditions is only for 1 < α ≤ 2 of course).


Let M be a positive integer and let 0 = t0 < t1 < t2 < · · · < tj < · · · < tM = T be a
partition of [0, T ] and h the stepsize. At t = tj , we have
Z tj
(1) tj 1
y(tj ) = y0 + y0 + (tj − u)α−1 f (u, y(u)) du. (5.4.2)
1! Γ(α) 0
Approximating f (u, y(u)) in (5.4.2) by the piecewise linear interpolation polynomial P1 (u)
on the nodes 0 = t0 < t1 < · · · < tj , we obtain
Z tj Z tj j
X
α−1 α−1
(tj − u) f (u, y(u)) du ≈ (tj − u) P1 (u) du = ak,j f (tk , y(tk )),
0 0 k=0

where ak,j , k = 0, 1, 2, . . . , j are some weights, see [29].


Let yj ≈ y(tj ) denote the approximate solution of y(tj ), j = 0, 1, 2, . . . , M . We define
the corrector formula of (5.4.2) by
j−1
(1) tj 1 X 
yj = y0 + y0 + ak,j f (tk , yk ) + aj,j f (tj , yjP ) , j = 1, 2, . . . , M. (5.4.3)
1! Γ(α) k=0
83

To determine the predictor formula for yjP , we approximate f (u, y(u)) in (5.4.2) by the
piecewise constant function P0 (u) on the nodes 0 = t0 < t1 < t2 , · · · < tj and obtain
Z tj Z tj j
X
α−1 α−1
(tj − u) f (u, y(u)) du ≈ (tj − u) P0 (u) du = bk,j f (tk , y(tk )),
0 0 k=0

where bk,j , k = 0, 1, 2, . . . , j −1 are some weights, see [29]. The predictor formula is defined
by
j−1
(1) tj 1 X
yjP = y0 + y0 + bk,j f (tk , yk ), j = 1, 2, . . . , M. (5.4.4)
1! Γ(α) k=0

The fractional Adams-type predictor-corrector method for solving (5.2.4)-(5.2.5) is com-


pletely described now by (5.4.3) and (5.4.4) with the weights ak,j , k = 0, 1, 2, . . . , j and
bk,j , k = 0, 1, 2, . . . , j. Diethelm, Ford and Freed [29] obtained the error estimates for the
methods (5.4.3) and (5.4.4).
In [93], Yan, Pal and Ford introduced a high order fractional Adams-type predictor-
corrector method for solving (5.2.4)-(5.2.5). Let M be a positive integer and let 0 = t0 <
t1 < t2 < · · · < t2j < t2j+1 < · · · < t2M = T be a partition of [0, T ] and h the stepsize.
Note that the system (5.2.4)-(5.2.5) is equivalent to (5.4.1). Let us now consider the
discretization of (5.4.1). At the nodes t = t2j , j = 1, 2, . . . , M , we have
Z t2j
(1) t2j 1
y(t2j ) = y0 + y0 + (t2j − u)α−1 f (u, y(u)) du. (5.4.5)
1! Γ(α) 0

At the nodes t = t2j+1 , j = 0, 1, 2, . . . , M − 1, we have


Z t1
(1) t2j+1 1
y(t2j+1 ) = y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
Z t2j+1
1
+ (t2j+1 − u)α−1 f (u, y(u)) du
Γ(α) t1
Z t1
(1) t2j+1 1
= y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
Z t2j
1
+ (t2j − u)α−1 f (u + h, y(u + h)) du. (5.4.6)
Γ(α) 0
Rt
We will replace f (u, f (u)) of the integral 0 2j (t2j − u)α−1 f (u, y(u)) du in (5.4.5) by the
following piecewise quadratic polynomial P2 (u), t2l ≤ u ≤ t2l+2 , l = 0, 1, 2, . . . , j − 1 with
84

j = 1, 2, . . . , M , where
(u − t2l+1 )(u − t2l+2 )
P2 (u) = f (t2l , y(t2l ))
(t2l − t2l+1 )(t2l − t2l+2 )
(u − t2l )(u − t2l+2 )
+ f (t2l+1 , y(t2l+1 ))
(t2l+1 − t2l )(t2l+1 − t2l+2 )
(u − t2l )(u − t2l+1 )
+ f (t2l+2 , y(t2l+2 )), (5.4.7)
(t2l+2 − t2l )(t2l+2 − t2l+1 )
and

(1)
f (u, y(u)) − P2 (u) = Rl ,

where

(1) f 000 (cl , y(cl ))


Rl = (u − t2l )(u − t2l+1 )(u − t2l+2 ), t2l ≤ cl ≤ t2l+2 .
3!
Rt
Similarly, we will replace f (u + h, f (u + h)) in the integral 0 2j (t2j − u)α−1 f (u +
h, y(u + h)) du in (5.4.6) by the following piecewise quadratic polynomial Q2 (u), for t2l ≤
u ≤ t2l+2 , l = 0, 1, 2, . . . , j − 1, j = 1, 2, . . . , M − 1, where
(u − t2l+1 )(u − t2l+2 )
Q2 (u) = f (t2l+1 , y(t2l+1 ))
(t2l − t2l+1 )(t2l − t2l+2 )
(u − t2l )(u − t2l+2 )
+ f (t2l+2 , y(t2l+2 ))
(t2l+1 − t2l )(t2l+1 − t2l+2 )
(u − t2l )(u − t2l+1 )
+ f (t2l+3 , y(t2l+3 )), (5.4.8)
(t2l+2 − t2l )(t2l+2 − t2l+1 )
and

(2)
f (u + h, y(u + h)) − Q2 (u) = Rl ,

where

(2) f 000 (dl , y(dl ))


Rl = (u − t2l )(u − t2l+1 )(u − t2l+2 ), t2l ≤ dl ≤ t2l+2 .
3!
We then have, with 0 < α ≤ 2, see [93],
Z t2j 2j
X
α−1
(t2j − u) P2 (u) du = ck,2j f (tk , y(tk )),
0 k=0

and
Z t2j 2j
X
(t2j − u)α−1 Q2 (u) du = ck,2j f (tk+1 , y(tk+1 )),
0 k=0
85

where

1



 F (0),
2 0
if k = 0,


1

hα F (l) + 12 F2 (l − 1), if k = 2l, l = 1, 2, . . . , j − 1,

2 0

ck,2j =
α(α + 1)(α + 2) 


 −F1 (l), if k = 2l + 1, l = 0, 1, 2, . . . , j − 1,



 1 F (j − 1), if k = 2j,

2 2

and
 
F0 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
  
α+1 α+1
+ α(α + 2) 2(2j) − (2l + 1) − (2l + 2) (2j − 2l − 2) − (2j − 2l)
  
α α
+ (α + 1)(α + 2) (2j − 2l − 1)(2j − 2l − 2) (2j − 2l) − (2j − 2l − 2) ,
 
F1 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
  
+ α(α + 2) 2(2j) − (2l) − (2l + 2) (2j − 2l − 2)α+1 − (2j − 2l)α+1
  
α α
+ (α + 1)(α + 2) (2j − 2l)(2j − 2l − 2) (2j − 2l) − (2j − 2l − 2) ,
 
F2 (l) = α(α + 1) (2j − 2l)α+2 − (2j − 2l − 2)α+2
  
+ α(α + 2) 2(2j) − (2l) − (2l + 1) (2j − 2l − 2)α+1 − (2j − 2l)α+1
  
+ (α + 1)(α + 2) (2j − 2l)(2j − 2l − 1) (2j − 2l)α − (2j − 2l − 2)α .

Let yl ≈ y(tl ) denote the approximate solutions of y(tl ), l = 0, 1, 2, . . . , 2M . We now


define a fractional Adams numerical method for solving (5.2.4)-(5.2.5). The corrector
formula is defined by, with j = 1, 2, . . . , M,
2j−1
(1) t2j 1 X 
P
y2j = y0 + y0 + ck,2j f (tk , yk ) + c2j,2j f (t2j , y2j ) , (5.4.9)
1! Γ(α) k=0

and, with j = 0, 1, 2, . . . , M − 1,
Z t1
(1) t2j+1 1
y2j+1 = y0 + y0 + (t2j+1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2j−1
1 X P

+ ck,2j f (tk+1 , yk+1 ) + c2j+1,2j+1 f (t2j+1 , y2j+1 ) , (5.4.10)
Γ(α) k=0

The remaining problem is the determination of the predictor formula required to calcu-
P P
late y2j and y2j+1 . The idea is the same as the one described above: we replace f (u, y(u))
86

and f (u + h, y(u + h)) of the integrals on the right-hand sides of equations (5.4.5) and
(5.4.6), respectively, by the piecewise linear interpolation polynomials and obtain, with
j = 1, 2, . . . , M,
2j−1
(1) t2j 1 X 
P PP
y2j = y0 + y0 + ak,2j f (tk , yk ) + a2j,2j f (t2j , y2j ) , (5.4.11)
1! Γ(α) k=0

and, with j = 0, 1, 2, . . . , M − 1,
2j
(1) t2j+1 1 X 
P PP
y2j+1 = y0 + y0 + ak,2j+1 f (tk , yk ) + a2j+1,2j+1 f (t2j+1 , y2j+1 ) , (5.4.12)
1! Γ(α) k=0

where the weights [28]



nα+1 − (n − α)(n + 1)α , if k = 0,







ak,n+1 = (n − k + 2)α+1 + (n − k)α+1 − 2(n − k + 1)α+1 if 1 ≤ k ≤ n,
α(α + 1) 



 1, if k = n + 1.

Similarly, to calculate ykP P , we replace f (u, y(u)) and f (u + h, y(u + h)) in the integrals
in (5.4.5) and (5.4.6), respectively by the piecewise constants and obtain
2j−1
PP (1) t2j 1 X
y2j = y0 + y0 + bk,2j f (tk , yk ), j = 1, 2, . . . , M, (5.4.13)
1! Γ(α) k=0

and
2j
PP (1) t2j+1 1 X
y2j+1 = y0 + y0 + bk,2j+1 f (tk , yk ), j = 1, 2, . . . , M − 1, (5.4.14)
1! Γ(α) k=0

where the weights [28]

hα  
bk,n+1 = (n + 1 − k)α − (n − k)α . (5.4.15)
α

Our basic fractional Adams method, is completely described now by equations (5.4.9)
Rt
- (5.4.14). Assume that the starting value y1 and the starting integral 0 1 (t2j+1 −
u)−1−α f (u, y(u)) du in (5.4.10) can be approximate by using some numerical methods
and satisfy the required accuracy, Yan, Pal and Ford [93] proved the error estimates for
yl − y(tl ), l = 1, 2, . . . , 2M .
87

5.4.2 Approximating the starting values and the starting integrals

In this subsection we shall consider how to approximate the starting value y1 and the
Rt
initial integral 0 1 (t2j+1 − u)−1−α y(u) du in (5.4.10). We will follow the idea in Cao and
Xu [9].
At t = t1 , we have
Z t1
(1) t1 1
y(t1 ) = y0 + y0 + (t1 − u)α−1 f (u, y(u)) du.
1! Γ(α) 0

Approximating g(u) = f (u, y(u)) on [0, t1 ] by the following quadratic interpolation poly-
nomial
(u − t 1 )(u − t1 ) (u − t0 )(u − t1 )
2
P2 (u) = f (0, y(0)) + f (t 1 , y(t 1 ))
(t0 − t 1 )(t0 − t1 ) (t 1 − t0 )(t 1 − t1 ) 2 2
2 2 2

(u − t0 )(u − t 1 )
+ 2
f (t1 , y(t1 )) for u ∈ [t0 , t1 ],
(t1 − t0 )(t1 − t 1 )
2

where

(1) f 000 (c1 )


f (u, y(u)) − P2 (u) = R1 (u) = (u − 0)(u − t 1 )(u − t1 ), c1 ∈ (0, t1 ).
3! 2

Further we approximate the value f (t 1 , y(t 1 )) by


2 2

3 3 1
f (t 1 , y(t 1 )) ≈ f (t0 , y(t0 )) + f (t1 , y(t1 )) − f (t2 , y(t2 )),
2 2 8 4 8

where
3 3 1 
(2)
f (t 1 , y(t 1 )) − f (t0 , y(t0 )) + f (t1 , y(t1 )) − f (t2 , y(t2 )) = R1 (u),
2 2 8 4 8
(2) 1 000
and R1 (u) = 16
f (c2 )h3 , c2 ∈ (0, t2 ).
We then obtain
Z t1
(1) t1 1
y(t1 ) = y0 + y0 + (t1 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2
(1) t1 1 X
= y0 + y0 + B̂i f (ti , y(ti ))
1! Γ(α) i=0
Z t1 Z t1
α−1 (1) (2)
+ (t1 − u) R1 (u) du + (t1 − u)α−1 R1 (u) du, (5.4.16)
0 0
88

where
Z t1
α−1
(u − t 1 )(u − t1 ) 3
Z t1
(u − t0 )(u − t1 )
B̂0 = (t1 − u) 2
du + (t1 − u)α−1 du,
0 (t0 − t 1 )(t0 − t1 ) 8 0 (t 1 − t0 )(t 1 − t1 )
2 2 2
t1 t1
(u − t0 )(u − t1 ) (u − t0 )(u − t1 )
Z Z
3
B̂1 = (t1 − u)α−1 du + (t1 − u)α−1 du,
4 0 (t 1 − t0 )(t 1 − t1 ) 0 (t1 − t0 )(t1 − t 1 )
2 2 2
t1
(u − t0 )(u − t1 )
Z
1
B̂2 = − (t1 − u)α−1 du.
8 0 (t 1 − t0 )(t 1 − t1 )
2 2

Let yl ≈ y(tl ), l = 0, 1, 2, denote the approximations of y(tl ) and we define the following
numerical method for y1 .
2
(1) t1 1 X
y1 = y0 + y0 + B̂i f (ti , yi ). (5.4.17)
1! Γ(α) i=0

At t = t2 , we have
Z t2
(1) t2 1
y(t2 ) = y0 + y0 + (t2 − u)α−1 f (u, y(u)) du.
1! Γ(α) 0

Approximating g(u) = f (u, y(u)) on [0, t2 ] by the following quadratic interpolation poly-
nomial
(u − t1 )(u − t2 ) (u − t0 )(u − t2 )
P2 (u) = f (0, y(0)) + f (t1 , y(t1 ))
(t0 − t1 )(t0 − t2 ) (t1 − t0 )(t1 − t2 )
(u − t0 )(u − t1 )
+ f (t2 , y(t2 )), for u ∈ [t0 , t2 ],
(t2 − t0 )(t2 − t1 )
where
(1) f 000 (c3 )
f (u, y(u)) − P2 (u) = R2 (u) = (u − t0 )(u − t1 )(u − t2 ), c3 ∈ (t0 , t2 ).
3!
We obtain
Z t2
(1) t2 1
y(t2 ) = y0 + y0 + (t2 − u)α−1 f (u, y(u)) du
1! Γ(α) 0
2 Z t2
(1) t2 1 X 
α−1 (1)
= y0 + y0 + B̃i f (ti , y(ti )) + (t2 − u) R2 (u) du , (5.4.18)
1! Γ(α) i=0 0

where
t2
(u − t1 )(u − t2 )
Z
B̃0 = (t2 − u)α−1 du,
0 (t0 − t1 )(t0 − t2 )
Z t2
(u − t0 )(u − t2 )
B̃1 = (t2 − u)α−1 du,
0 (t1 − t0 )(t1 − t2 )
Z t2
(u − t0 )(u − t1 )
B̃2 = (t2 − u)α−1 du.
0 (t2 − t0 )(t2 − t1 )
89

We then define the following numerical method for y2 .

(1) t2 1  
y2 = y0 + y0 + B̃0 f (t0 , y0 ) + B̃1 f (t1 , y1 ) + B̃2 f (t2 , y2 ) , (5.4.19)
1! Γ(α)
Rt
Similarly, we can approximate the starting integral 0 1 (t2j+1 − u)α−1 f (u, y(u)) du in
(5.4.10) by using the same idea as in (5.4.16) and obtain

Z t1
(t2j+1 − u)α−1 f (u, y(u)) du
0
 
= B̂0,j f (t0 , y0 ) + B̂1,j f (t1 , y1 ) + B̂2,j f (t2 , y2 )
Z t1 Z t1
α−1 (1) (2)
+ (t2j+1 − u) R1 (u) du + (t2j+1 − u)α−1 R1 (u) du,
0 0

(5.4.20)

where
Z t1
α−1
(u − t 1 )(u − t1 ) 3
Z t1
(u − t0 )(u − t1 )
B̂0,j = (t2j+1 − u) 2
du + (t2j+1 − u)α−1 du,
0 (t0 − t 1 )(t0 − t1 ) 8 0 (t 1 − t0 )(t 1 − t1 )
2 2 2
t1 t1
(u − t0 )(u − t1 ) (u − t0 )(u − t1 )
Z Z
3
B̂1,j = (t2j+1 − u)α−1 du + (t2j+1 − u)α−1 du,
4 0 (t 1 − t0 )(t 1 − t1 ) 0 (t1 − t0 )(t1 − t 1 )
2 2 2
t1
(u − t0 )(u − t1 )
Z
1
B̂2,j =− (t2j+1 − u)α−1 du.
8 0 (t 1 − t0 )(t 1 − t1 )
2 2

We now introduce the following corrector fractional Adams method.

1  (1) t1

y1 = y0 + y0 +
B̂0 f (t0 , y0 ) + B̂1 f (t1 , y1 ) + B̂2 f (t2 , y2 ) , (5.4.21)
1! Γ(α)
(1) t2 1  
y2 = y0 + y0 + B̃0 f (t0 , y0 ) + B̃1 f (t1 , y1 ) + B̃2 f (t2 , y2 ) , (5.4.22)
1! Γ(α)
and, with j = 1, 2, . . . , M ,
2j−1
(1) t2j 1 X 
P
y2j = y0 + y0 + ck,2j f (tk , yk ) + c2j,2j f (t2j , y2j ) , (5.4.23)
1! Γ(α) k=0

and, with j = 0, 1, 2, . . . , M − 1,

(1) t2j+1
1  
y2j+1 = y0 + + y0 B̂0,j f (t0 , y0 ) + B̂1,j f (t1 , y1 ) + B̂2,j f (t2 , y2 )
1! Γ(α)
2j−1
1 X P

+ ck,2j f (tk+1 , yk+1 ) + c2j,2j f (t2j+1 , y2j+1 ) , (5.4.24)
Γ(α) k=0
90

P P
where the predictor terms y2j and y2j+1 can be obtained by (5.4.11) and (5.4.12). We
then have the following error estimates.
C α
Theorem 5.4.1. Let 0 < α ≤ 2 and assume that 0 Dt y ∈ C 3 [0, T ] for some suitable T .
Let y(tk ) and yk , k = 0, 1, 2, . . . , 2M, t2M = T be the solutions of (5.4.1) and (5.4.21) -
(5.4.24). Then, for sufficiently small h, there exists a positive constant C0 > 0 such that

 C0 h1+2α , if 0 < α ≤ 1,

max |y(tk ) − yk | ≤
0≤k≤2M  C0 h3 , if 1 < α ≤ 2.

5.5 Numerical simulations

5.5.1 The linear fractional differential equation

In this section we will consider two examples for solving the linear differential equation
(5.2.6)-(5.2.7) by using the algorithm (5.3.11)-(5.3.12). Theorem 5.3.2 shows that the
approximate solution y2M has the asymptotic expansion
µ∗
m+1
X X
y(t2M ) − y2M = cµ hµ−α + c∗µ h2µ + o(hm+1−α ), as h → 0,
µ=3 µ=2

where µ∗ is the integer satisfying 2µ∗ < m + 1 − α < 2(µ∗ + 1), and cµ and c∗µ are certain
coefficients that depend on y.
Let A = y(t2M ), t2M = 1 and assume that A0 (h) = y2M is the approximate solution of
A with stepsize h. We then have by Theorem 5.3.2, with 0 < α < 1,

A = A0 (h) + a1 hλ1 + a2 hλ2 + a3 hλ3 + . . . , (5.5.1)

where λ1 = 3 − α, λ2 = 4 − α, λ3 = 4, λ4 = 5 − α, . . . . It is obvious that the convergence


order of A0 (h) is λ1 , that is

|A − A0 (h)| = O(hλ1 ).

Let A0 (h/2) denote the approximate solution of A with stepsize h/2. Then we have

A = A0 (h/2) + a1 (h/2)λ1 + a2 (h/2)λ2 + a3 (h/2)λ3 + . . . . (5.5.2)

Multiplying 2λ1 in both sides of (5.5.2), we have

2λ1 A = 2λ1 A0 (h/2) + 2λ1 a1 (h/2)λ1 + 2λ1 a2 (h/2)λ2 + 2λ1 a3 (h/2)λ3 + . . . . (5.5.3)
91

Subtracting (5.5.3) from (5.5.2), we get

A = A1 (h) + b1 hλ2 + b2 hλ3 + b3 hλ4 + . . . ,

where
2λ1 A0 (h/2) − A0 (h/2)
A1 (h) = ,
2λ1 − 1
which implies that A1 (h) is an approximation of A with the convergence order O(hλ2 ),
that is

|A − A1 (h)| = O(hλ2 ).

Continuing these processes, we obtain the high order approximations A2 (h), A3 (h), . . .
of A. In Table 5.5.1, we proceed by setting up a triangle array ( a so-called Romberg
tableau) of approximate values for A.

A0 (h)
A0 (h/2) A1 (h)
A0 (h/22 ) A1 (h/2) A2 (h)
A0 (h/23 ) A1 (h/22 ) A2 (h/2) A3 (h)
.. .. .. ..
. . . . ...

Table 5.5.1: Romberg tableau of approximate solutions

The convergence order of the approximate solution Ak (h) is λk+1 , k = 1, 2, . . . . To


obtain the experimentally determined order of convergence (“EOC”) we will calculate the
following ratios
|A − Ak (h/2l )| O((h/2l )λk+1 )
l+1
= l+1 λ
≈ 2λk+1 , k = 0, 1, 2, . . . , l = 0, 1, 2, . . . ,
|A − Ak (h/2 )| O((h/2 ) k+1 )
which implies that
 |A − A (h/2l )| 
k
λk+1 ≈ log2 , k = 0, 1, 2, . . . . (5.5.4)
|A − Ak (h/2l+1 )|
Example 10. Consider, [26]

C α 3!
0 Dt y(t) + y(t) = t3 + t3−α , t ∈ [0, 1], (5.5.5)
Γ(4 − α)
y(0) = 0, (5.5.6)
92

whose exact solution is given by y(t) = t3 .


Choose the stepsize h = 1/10. In Table 5.5.2, we display the errors of the algorithms
(5.3.11)-(5.3.12) at t = 1 and of the first two extrapolation steps in the Romberg tableau
with α = 0.3. In Table 5.5.3, we display the experimentally determined orders of con-
vergence (“EOC ”) at t = 1. We observe that the first column (the errors of the basic
algorithm without extrapolation) converges as h3−α . The second column (errors using
one extrapolation step)converges as h4−α , and the last column (two extrapolation steps)
converges as h4 . In Tables 5.5.4-5.5.5, we display the errors and the experimentally deter-
mined order of convergence (“EOC ”) with α = 0.5. In Tables 5.5.6-5.5.7, we display the
errors and the experimentally determined order of convergence (“EOC ”) with α = 0.9.
In all cases of α under consideration, we observe that the first column converges as h3−α .
The second column converges as h4−α and the last column converges as h4 .

Step size Error of the method 1st extra. error 2nd extra. error
1/10 3.3237e-004
1/20 5.1939e-005 9.3242e-007
1/40 8.0506e-006 6.8029e-008 4.0268e-009
1/80 1.2432e-006 5.0402e-009 2.1066e-010
1/160 1.9164e-007 3.7765e-010 1.1023e-011
1/320 2.9516e-008 2.8511e-011 5.9381e-013

Table 5.5.2: Errors for equations (5.5.5)-(5.5.6) with α = 0.3, taken at t = 1.

Step size The method 1st extrapolation 2nd extrapolation


1/10
1/20 2.68
1/40 2.69 3.78
1/80 2.70 3.75 4.26
1/160 2.70 3.74 4.26
1/320 2.70 3.73 4.21

Table 5.5.3: Orders (“EOC ”) for equations (5.5.5)-(5.5.6) with α = 0.3, taken at t = 1.
93

Step size Error of the method 1st extra. error 2nd extra. error
1/10 1.1296e-003
1/20 2.0412e-004 5.3779e-006
1/40 3.6454e-005 4.5025e-007 2.7527e-008
1/80 6.4759e-006 3.8539e-008 1.3800e-009
1/160 1.1475e-006 3.3431e-009 6.9354e-011
1/320 2.0310e-007 2.9225e-010 3.5604e-012

Table 5.5.4: Errors for equations (5.5.5)-(5.5.6) with α = 0.5, taken at t = 1.

Step size The method 1st extrapolation 2nd extrapolation


1/10
1/20 2.46
1/40 2.49 3.58
1/80 2.49 3.55 4.32
1/160 2.5 3.53 4.31
1/320 2.5 3.52 4.28

Table 5.5.5: Orders (“EOC ”) for equations (5.5.5)-(5.5.6) with α = 0.5, taken at t = 1.

Step size Error of the method 1st extra. error 2nd extra. error
1/10 7.6048e-003
1/20 1.8568e-003 1.0809e-004
1/40 4.4205e-004 1.1665e-005 1.0659e-006
1/80 1.0412e-004 1.3139e-006 5.2739e-008
1/160 2.4402e-005 1.5070e-007 2.8748e-009
1/320 5.7054e-006 1.7432e-008 1.6257e-010

Table 5.5.6: Errors for equations (5.5.5)-(5.5.6) with α = 0.9, taken at t = 1.

Example 11. Consider, [33]


C α 1 3! 24
0 Dt y(t) + y(t) = t4 − t3 − t3−α + t4−α , t ∈ [0, 1], (5.5.7)
2 Γ(4 − α) Γ(5 − α)
y(0) = 0, (5.5.8)
94

Step size The method 1st extrapolation 2nd extrapolation


1/10
1/20 2.03
1/40 2.07 3.21
1/80 2.09 3.15 4.34
1/160 2.09 3.12 4.20
1/320 2.10 3.11 4.14

Table 5.5.7: Orders (“EOC ”) for equations (5.5.5)-(5.5.6) with α = 0.9, taken at t = 1.

whose exact solution is given by y(t) = t4 − 12 t3 .


Choose the stepsize h = 1/10. In Tables 5.5.8 - 5.5.13, we display the errors of the
algorithms (5.3.11)-(5.3.12) at t = 1 and of the first two extrapolation steps in the Romberg
tableau with α = 0.3, 0.5, 0.9. In all cases of α under consideration, we observe that the
first column converges as h3−α . The second column converges as h4−α and the last column
converges as h4 . We observe that when α is close to 1, the convergence seems to be even
a bit faster. But when α is close to 0, the convergence is a bit slower than expected.

Step size Error of the method 1st extra. error 2nd extra. error
1/10 1.4571e-004
1/20 2.3118e-005 8.2097e-007
1/40 3.6127e-006 6.5021e-008 2.0039e-009
1/80 5.6030e-007 5.1186e-009 1.2514e-010
1/160 8.6565e-008 4.0106e-010 7.8051e-012
1/320 1.3348e-008 3.1315e-011 4.9268e-013

Table 5.5.8: Errors for equations (5.5.7)-(5.5.8) with α = 0.3, taken at t = 1.


95

Step size The method 1st extrapolation 2nd extrapolation


1/10
1/20 2.66
1/40 2.68 3.66
1/80 2.69 3.67 4.00
1/160 2.70 3.67 4.00
1/320 2.70 3.68 3.98

Table 5.5.9: Orders (“EOC ”) for equations (5.5.7)-(5.5.8) with α = 0.3, taken at t = 1.

Step size Error of the method 1st extra. error 2nd extra. error
1/10 5.0921e-004
1/20 9.2881e-005 3.4801e-006
1/40 1.6676e-005 3.1186e-007 4.6709e-009
1/80 2.9708e-006 2.7831e-008 2.9143e-010
1/160 5.2721e-007 2.4764e-009 1.8053e-011
1/320 9.3380e-008 2.1991e-010 1.1328e-012

Table 5.5.10: Errors for equations (5.5.7)-(5.5.8) with α = 0.5, taken at t = 1.

Step size The method 1st extrapolation 2nd extrapolation


1/10
1/20 2.45
1/40 2.48 3.48
1/80 2.49 3.49 4.00
1/160 2.50 3.49 4.01
1/320 2.50 3.50 3.99

Table 5.5.11: Orders (“EOC ”) for equations (5.5.7)-(5.5.8) with α = 0.5, taken at t = 1.

5.5.2 The nonlinear fractional differential equation

In this subsection we will consider one example for solving (5.2.4)-(5.2.5) by using the
algorithm (5.4.9)-(5.4.14). We will numerically check that, with 1 < α ≤ 2,

y(t2M ) − y2M = a1 hλ1 + a2 hλ2 + a3 hλ3 + . . . ,


96

Step size Error of the method 1st extra. error 2nd extra. error
1/10 3.5534e-003
1/20 8.5873e-004 3.8951e-005
1/40 2.0381e-004 4.5703e-006 3.1078e-008
1/80 4.7950e-005 5.3459e-007 1.7728e-009
1/160 1.1233e-005 6.2442e-008 1.0563e-010
1/320 2.6257e-006 7.2882e-009 6.3909e-012

Table 5.5.12: Errors for equations (5.5.7)-(5.5.8) with α = 0.9, taken at t = 1.

Step size The method 1st extrapolation 2nd extrapolation


1/10
1/20 2.05
1/40 2.08 3.09
1/80 2.09 3.10 4.13
1/160 2.10 3.10 4.07
1/320 2.10 3.10 4.05

Table 5.5.13: Orders (“EOC ”) for equations (5.5.7)-(5.5.8) with α = 0.9, taken at t = 1.

where λ1 = 2 + α, λ2 = 4, λ3 = 3 + α, . . . .

Example 12. Consider, with 1 < α ≤ 2, [28]

C α 40320 8−α Γ(5 + α/2) 4−α/2 9 3 3


0 Dt y(t) = t −3 t + Γ(α+1)+ t −t −[y(t)]3/2 . (5.5.9)
α/2 4
Γ(9 − α) Γ(5 − α/2) 4 2
The initial conditions were chosen to be homogeneous, i.e., y(0) = 0, y 0 (0) = 0. This
equation has been chosen because it exhibits a difficult (nonlinear and nonsmooth) right-
hand side, and yet we are able to find its exact solution, thus allowing us to compare the
numerical results for this nontrivial case to the exact results. Indeed, the exact solution
of this initial value problem is
9
y(t) = t8 − 3t4+α/2 + tα ,
4
Choose the stepsize h = 1/10. In Tables 5.5.14-5.5.19, we display the errors of the
algorithms (5.4.9) -(5.4.14) at t = 1 and of the first two extrapolation steps in the Romberg
97

tableau with α = 1.3, 1.5, 1.9. In all cases of α under consideration, we observe that the
first column converges as h2+α . The second column converges as h4 and the last column
converges as h3+α . We also observe that when α is close to 2, the convergence seems to be
even a bit faster. But when α is close to 1, the convergence is a bit slower than expected.

Step size Error of the method 1st extra. error 2nd extra. error 3rd extra error
1/10 7.1066e-004
1/20 6.8623e-005 3.9303e-006
1/40 5.6000e-006 1.5219e-006 1.3613e-006
1/80 4.3070e-007 1.5346e-007 6.2236e-008 7.2391e-009
1/160 3.2640e-008 1.2343e-008 2.9345e-009 2.3700e-010
1/320 2.5021e-009 9.0367e-010 1.4107e-010 8.3232e-012

Table 5.5.14: Errors for equation (5.5.9) with α = 1.3, taken at t = 1.

Step size The method 1st extrapolation 2nd extrapolation 3rd extrapolation
1/10
1/20 3.37
1/40 3.62 1.39
1/80 3.70 3.31 4.45
1/160 3.72 3.64 4.41 4.93
1/320 3.71 3.77 4.38 4.83

Table 5.5.15: Orders (“EOC ”) for equation (5.5.9) with α = 1.3, taken at t = 1.
98

Step size Error of the method 1st extra. error 2nd extra. error 3rd extra error
1/10 1.3107e-003
1/20 1.0256e-004 1.4581e-005
1/40 7.2525e-006 1.9886e-006 1.1491e-006
1/80 4.9046e-007 1.6518e-007 4.3614e-008 7.5021e-009
1/160 3.2450e-008 1.1957e-008 1.7426e-009 1.9344e-010
1/320 2.1236e-009 8.1682e-010 7.4128e-011 3.0170e-012

Table 5.5.16: Errors for equation (5.5.9) with α = 1.5, taken at t = 1.

Step size The method 1st extrapolation 2nd extrapolation 3rd extrapolation
1/10
1/20 3.68
1/40 3.82 2.87
1/80 3.89 3.59 4.72
1/160 3.92 3.79 4.65 5.28
1/320 3.93 3.87 4.56 6.00

Table 5.5.17: Orders (“EOC ”) for equation (5.5.9) with α = 1.5, taken at t = 1.

Step size Error of the method 1st extra. error 2nd extra. error 3rd extra error
1/10 1.9057e-003
1/20 1.2585e-004 1.9355e-006
1/40 8.0391e-006 4.1927e-007 3.1819e-007
1/80 5.0764e-007 3.3082e-008 7.3362e-009 3.4360e-009
1/160 3.1910e-008 2.2452e-009 1.8944e-010 5.8225e-011
1/320 2.0046e-009 1.4248e-010 2.2910e-012 4.1943e-012

Table 5.5.18: Errors for equation (5.5.9) with α = 1.9, taken at t = 1.


99

Step size The method 1st extrapolation 2nd extrapolation 3rd extrapolation
1/10
1/20 3.92
1/40 3.97 2.21
1/80 3.99 3.66 5.44
1/160 3.99 3.88 5.28 5.88
1/320 3.99 3.98 6.36 3.80

Table 5.5.19: Orders (“EOC ”) for equation (5.5.9) with α = 1.9, taken at t = 1.
Chapter 6

Finite difference method (FDM) for


space-fractional PDEs

6.1 Introduction
Space fractional derivatives are used to model anomalous diffusion or dispersion, a phe-
nomenon observed in many problems, where particles spread faster than the classical
models predict. When a fractional derivative replaces the second derivative in a diffusion
or dispersion model, it leads to enhanced diffusion (also called superdiffusion), see Meer-
schaert and Tadjeran [66]. Space-fractional diffusion equations have been investigated
by West and Seshadri [91] and Gorenflo and Mainardi [48] and Gorenflo [47] . A linear
interpolation polynomial was used to approximate the Hadamard integral generated by
fractional derivative and the rate of the convergence of the proposed numerical method
is O(h2−α ) [26].
In this chapter we will discuss a finite difference method for solving space-fractional
partial differential equation. The space-fractional derivatives are the left-handed and
right-handed Riemann-Liouville fractional derivatives which can be expressed by using
the Hadamard finite-part integrals.
We will examine the stability, consistency and convergence of the proposed finite dif-
ference method. The Hadamard finite-part integrals are approximated by using piecewise
quadratic interpolation polynomials and a numerical approximation scheme of the space-
fractional derivative with convergence order O(∆x3−α ) (1 < α < 2) is obtained. A shifted

100
101

implicit finite difference method is introduced for solving two-sided space-fractional partial
differential equations and we prove that the order of convergence of the finite difference
method is O(∆t + ∆xmin(3−α,β) ), 1 < α < 2, β > 0, where ∆t, ∆x denote the time and
space stepsizes, respectively, and β is related to the smoothness of the exact solution u.

6.2 Brief reviews of FDM for solving space-fractional


PDEs
Consider the following two-sided space-fractional partial differential equation, with 1 <
α < 2, t > 0,

ut (t, x) = C+ (t, x) R α R α
0 Dx u(t, x), +C− (t, x) x D1 u(t, x) + f (t, x), 0 < x < 1, (6.2.1)

u(t, 0) = ϕ1 (t), u(t, 1) = ϕ2 (t), (6.2.2)

u(0, x) = u0 (x), 0 < x < 1. (6.2.3)

Here the function f (t, x) is a source/sink term. The functions C+ (t, x) ≥ 0 and C− (t, x) ≥
0 may be interpreted as transport related coefficients. The addition of a classical advective
term −ν(t, x) ∂u(t,x)
∂x
in (6.2.1) does not impact the analysis performed in this chapter,
and has been omitted to simplify the notation. The left-handed fractional derivative
R α
0 Dx f (x) and right-handed fractional derivative R α
x D1 f (x) in (6.2.1) are Riemann-Liouville

fractional derivatives of order α defined by, with 1 < α < 2,


Z x
R α 1 d2
0 Dx f (x) = 2
(x − ξ)1−α f (ξ) dξ, (6.2.4)
Γ(2 − α) dx 0
and
1
d2
Z
R α 1
x D1 f (x) = (ξ − x)1−α f (ξ) dξ. (6.2.5)
Γ(2 − α) dx2 x

There are several ways to approximate the Riemann- Liouville fractional derivative.
Let 0 = x0 < x1 < · · · < xj < · · · < xM = 1 be a partition of [0, 1] and ∆x the stepsize.
Based on the definition of the Grünwald-Letnikov derivative, one can approximate the
left-handed and right-handed Riemann-Liouville fractional derivatives by see [66])

j
X (α)
R α
0 Dx f (xj ) = ∆x−α wk f (xj−k ) + O(∆x), (6.2.6)
k=0
102

and
M −j
X (α)
R α
x D1 f (xj ) = ∆x−α wk f (xj+k ) + O(∆x), (6.2.7)
k=0
(α)
where wk are some weights and the order of convergence in (6.2.6) or (6.2.7) is O(∆x)
for any α > 0. Meerschaert and Tadjeran [64] proposed finite difference approximations
for fractional advection-dispersion flow equations. They used the Grünwald method to
approximate the space-fractional derivative and proved that the standard finite difference
method is unconditionally unstable, but the shifted finite difference method is uncondi-
tionally stable.
Lubich [53] obtained approximations of order 2 - 6 in the form of (6.2.6), where the
(α)
coefficients wk are just the coefficients of the Taylor series expansions of some generating
(α)
functions wl (z), l = 2, 3, 4, 5, 6. The L2 scheme and its modification L2C scheme are
introduced in Oldham and Spanier [72], Lynch, et al. [62] as follows. Note that, with
1 < α < 2,

R α f (x0 )(xj − x0 )−α f 0 (x0 )(xj − x0 )1−α


0 Dx f (xj ) = +
Γ(1 − α) Γ(2 − α)
j−1 Z xl+1
1 X
+ s1−α f 00 (xj − s) ds.
2 − α l=0 xl
f (xj −xl )−2f (xj −xl+1 )+f (xj −xl+2 )
On each interval [xl , xl+1 ], f 00 (xj − s) is approximated by ∆x2
,
then the so-called L2 scheme is obtained and the convergence order is O(∆x). Similarly,
one can obtain L2C scheme. Diethelm [25, 26] expressed the Riemann-Liouville fractional
derivative into the equivalent Hadamard finite-part integral and then approximated the
Hadamard finite-part integral by piecewise linear interpolation polynomials to obtain an
approximation scheme to the fractional derivative for 0 < α < 1. More precisely, Diethelm
[26] obtained, with 0 < α < 1,
Z xj I xj
1 d −α 1
R α
0 Dx f (xj ) = (xj − ξ) dξ = (xj − ξ)−α−1 f (ξ) dξ
Γ(1 − α) dx 0 Γ(−α) 0
j
X
= ∆x−α wk,j f (xj−k ) + O(∆x2−α ),
k=0
H xj
where 0
denotes the Hadamard finite-part integral and wk,j are some weights.
Odibat [70, 71] introduced a computational algorithm for approximating the Caputo
fractional derivative and the convergence order is O(∆x2 ), see also Sousa [84]. The idea
103

is as follows. Note that, with 1 < α < 2,


Z xj
1
C α
0 Dx f (xj ) = (xj − ξ)1−α f 00 (ξ) dξ
Γ(2 − α) 0
j−1 Z xl+1
1 X
= (xj − ξ)1−α f 00 (ξ) dξ.
Γ(2 − α) l=0 xl

On each subinterval [xl , xl+1 ], one approximates the integral by using the linear interpo-
ξ−xl+1 00 ξ−xl
lation polynomial P1 (ξ) = xl −xl+1
f (xl ) + xl+1 −xl
f 00 (xl+1 ) and obtains, with some weights
w̄k,j , k = 0, 1, 2, . . . , j,
j−1 Z xl+1 j
1 X X
C α
0 Dx f (xj ) ≈ (xj − ξ)1−α P1 (ξ) dξ = ∆x2−α w̄k,j f 00 (xk ).
Γ(2 − α) l=0 xl k=0

f (xk+1 )−2f (xk )+f (xk−1 )


Further, Odibat [70] approximated f 00 (xk ) by ∆x2
and obtained a second
order approximation scheme to C α
0 Dx f (xj ). More recently, Dimitrov [34] obtained a second

and third order approximations for the Grünwald and shifted Grünwald formulae with
weighted averages of Caputo derivatives.
Let us review some numerical methods for solving space-fractional partial differen-
tial equations. There are many different numerical methods for solving space-fractional
partial differential equations in literature: Choi at al. [12] applied the backward Euler fi-
nite difference method with the right-shifted Grünward formula for the Riemann-Liouville
space fractional derivative term and proved the existence using Leray-Schauder fixed point
theorem and finally the convergence order O(∆x + ∆t) are considered. By using shifted
Grünwald-Letnikov formulae (6.2.6) and (6.2.7), Meerschaert and Tadjeran [66] intro-
duced a finite difference method for solving two-sided space-fractional partial differential
equations (6.2.1)- (6.2.3) and proved that the convergence order of spatial discretization is
O(∆x). Meerschaert and Tadjeran (2004) [64] also considered the finite difference method
for solving the 1D fractional advection-dispersion equation, with 1 < α < 2,

∂u(t, x) ∂u(t, x) ∂ α u(t, x)


= −ν(x) + d(x) + f (t, x),
∂t ∂x ∂xα

by using the shifted Grünwald-Letnikov formula on a finite domain and they proved that
the convergence order of spatial discretization is O(∆x). Tadjeran, Meerschaert and
Scheffler [88] and Tadjeran and Meerschaert [89] applied the shifted Grünwald-Letnikov
formula and extrapolation techniques to fractional diffusion equations in 1D and 2D and
104

obtained a second-order accurate finite difference method. Liu et al. [61] transformed
the fractional advection-dispersion equation into a system of ordinary differential equa-
tions, which was then solved using backward difference formulae. Chen and Liu [11]
used a technique combining the alternating direction implicit-Euler method with Richard-
son extrapolation to establish an unconditionally stable second-order accuracy difference
method to approximate a 2D fractional advection-dispersion equation with variable co-
efficients on a finite domain. Podlubny et al. [77] developed a matrix approach to dis-
cretize fractional diffusion equations with various combinations of time-space-fractional
derivatives. Shen et al. [79, 80] presented explicit and implicit difference approxima-
tions for the Riesz fractional advection-dispersion equations and the space-time Riesz-
Caputo fractional advection-dispersion equations. Shen et al. [81] considered a novel
numerical approximation for the space fractional advection-dispersion equation. See also
[2, 13, 60, 65, 82, 83, 85, 94, 86].
There are other numerical methods for solving space-fractional partial differential
equations: the finite element methods, see [20, 21, 38, 39, 37, 40] and the spectral methods
[57, 58].
In this chapter, we will use the idea in Diethelm [26] to define a finite difference
method for solving (6.2.1)- (6.2.3), see recent works for this method [93, 43, 45, 46]. We
first express the fractional derivative by using the Hadamard finite-part integral, i.e., with
1 < α < 2,
x x
d2
Z I
1 1
R α
0 Dx f (x) = 1−α
(x − ξ) f (ξ) dξ = (x − ξ)−α−1 f (ξ) dξ.
Γ(2 − α) dx2 0 Γ(−α) 0

Then we approximate f (ξ) by using piecewise quadratic interpolation polynomials and


obtain an approximation scheme of Riemann-Liouville fractional derivative. Similarly,
R α
we can approximate the right-handed Riemann-Liouville fractional derivative x D1 f (x).

Based on these approximation schemes, we define a shifted finite difference method for
solving (6.2.1)-(6.2.3). We proved that the convergence order of the numerical method is
O(∆t + ∆xmin(3−α,β) ), 1 < α < 2, β > 0.
105

6.3 FDM based on linear interpolation


In this section, we will introduce a finite difference method for solving (6.2.1)-(6.2.3) by
using the idea in Diethelm [25]. For simplicity, we assume C+ (t, x) = C− (t, x) = 1 and
ϕ1 (t) = ϕ2 (t) = 0. Recall that the Riemann-Liouville fractional derivative has the form,
with 1 < α < 2,
I x
1
R α
0 Dx f (x) = (x − ξ)−1−α f (ξ) dξ, (6.3.1)
Γ(−α) 0
Hx
where 0
(x − ξ)−1−α f (ξ) dξ denotes the Hadamard finite-part integral [25].
Let 0 = x0 < x1 < x2 < · · · < xj < · · · < xM = 1 be a partition of [0, 1] and ∆x the
stepsize. We then have, at x = xj , j = 1, 2, . . . , M ,
xj x−α 1
I I
1 −1−α j
R α
0 Dx f (xj ) = (xj − ξ) f (ξ) dξ = w−1−α f (xj − xj w) dw
Γ(−α) 0 Γ(−α) 0
j I l
x−α
j
X j
= w−1−α f (xj − xj w) dw. (6.3.2)
Γ(−α) l=1
l−1
j

Denoting g(w) = f (xj − xj w) and substituting g(w) in (6.3.2) by the following linear
interpolation polynomial P1 (w) on [ l−1
j
, jl ], l = 1, 2, . . . , j,

l l−1  
w− j
l − 1 w− j l
P1 (w) = l−1 l
g + l l−1
g ,
j
− j
j j
− j
j

R α
we obtain an approximation to 0 Dx f (xj ), 1 < α < 2,
j
X
R α −α
0 Dx f (xj ) = ∆x wk,j f (xj−k ) + O(∆x2−α ), (6.3.3)
k=0

where




 1, for k = 0,



 21−α − 2,

for k = 1,
Γ(2 − α)wk,j =
(k + 1)1−α − 2k 1−α + (k − 1)1−α , for k = 2, 3, . . . , j − 1,







 −j 1−α + (j − 1)1−α − (α − 1)j −α , for k = j.

106

Remark 13. The coefficients wk,j in (6.3.3) can also be written






 b0 , for k = 0,



 b 1 − b0 ,

for k = 1,
Γ(2 − α)wk,j =
bk − bk−1 , for k = 2, 3, . . . , j − 1,







 b̄ − b , for k = j,

j j−1

where bk = (k + 1)1−α − k 1−α , 0 ≤ k ≤ j − 1, b̄j = (1 − α)j −α . These are the same


coefficients as in the L2 scheme defined in [72, 61].

Lemma 6.3.1. Let 1 < α < 2. The coefficients wk,j in (6.3.3) satisfy

w1,j < 0, and wk,j > 0, k 6= 1, k = 0, 2, 3, . . . , j,


j
X
Γ(2 − α) wk,j = b̄j = (1 − α)j 1−α < 0.
k=0

Proof. From the properties of wkj it is obvious that w1,j < 0 and wk,j > 0 We can show
that the sum of coefficients wkj are always negative. For example, Let, j=3 we have,



 b0 , k = 0,


 b −b ,

k = 1,
0 1
Γ(2 − α)wk3 =


 b2 − b1 , k = 2,


 b −b ,

k = 3,
3 2

Therefore,
3
X
wk3 = b0 + (b1 − b0 ) + (b2 − b1 ) + (b3 − b2 )
k=0
= b3 = (1 − α)3−α < 0. (6.3.4)

In general, we have

j
X
wkj = b0 + (b1 − b0 ) + (b2 − b1 ) + · · · + (bj − bj−1 ) + (bj+1 − bj )
k=0
= bj+1 = (1 − α)(j + 1)−α < 0. (6.3.5)

Further, we have, when j → ∞


j
X
wkj = 0.
k=0
107

Similarly, we can obtain an approximation scheme for the right-handed Riemann-


R α
Liouville fractional derivative x D1 f (x). We have
M −j
X
R α −α
x D1 f (xj ) = ∆x wk,M −j f (xj+k ) + O(∆x2−α ), j = 0, 1, 2, . . . , M − 1, (6.3.6)
k=0

where wk,M −j , k = 0, 1, 2, . . . , M − j, j = 0, 1, 2, . . . , M − 1 are defined as in (6.3.3).


Discretizing ut (tn , xj ) by using forward Euler method at tn and discretizing R α
0 Dx u(tn , xj )
R α
and x D1 u(tn , xj ) by using (6.3.3) and (6.3.6) at xj respectively, we get, with unj =
u(tn , xj ), fjn = f (tn , xj ),

j
X M −j 
X
−1 −α
∆t (un+1
j − unj ) − (∆x) wk,j unj−k + wk,M −j unj+k
k=0 k=0

= fjn + τjn , (6.3.7)

where the truncation error τjn = O(∆t + ∆x2−α ) [25], [26].


Let Ujn ≈ u(tn , xj ) be the approximate solution of u(tn , xj ) at the node (tn , xj ).
We define an explicit finite difference method for solving (6.2.1) - (6.2.3), with j =
1, 2, . . . , M − 1,

j
X M −j 
X
−1 −α
Ujn+1 Ujn n n
+ fjn ,

∆t − = ∆x wk,j Uj−k + wk,M −j Uj+k (6.3.8)
k=0 k=0

n
with U0n = UM = 0, and Uj0 = u0 (xj ), j = 0, 1, 2, . . . , M − 1. Here the weights wk,j and
wk,M −j , are given in (6.3.3).

Lemma 6.3.2. The explicit finite difference method (6.3.8) is unconditionally unstable

Proof. Let n = 0. With λ = ∆t/∆xα , (6.3.8) can be written as, j = 1, 2, . . . , M − 1,


j M −j
X X
Uj1
 0 0 0
= 1 + w0j λ + w0,(M −j) λ Uj + λ wkj Uj−k + λ wk,(M −j) Uj+k + ∆tfj0 . (6.3.9)
k=1 k=1

Assume that we have some errors in the starting values Uj0 , i.e.,

Ūj0 = Uj0 + 0j , j = 0, 1, 2, . . . , M.


108

To consider the stability, we assume that only the term Ui0 , for some fixed i, has the error,
and other terms have no errors, i.e.,

0j = 0, j 6= i.

Then we have
i
X M
X −i
Ūi1 = 1 + w0i λ + w0,(M −j) λ Ūi0 + λ 0 0
+ ∆tfi0 . (6.3.10)

wki Ūi−k +λ wk,M −i Ūi+k
k=1 k=1

Subtracting (6.3.9) from (6.3.10) , we obtain

1i = 1 + w0i λ + w0,(M −i) 0i .




That is, the error is amplified by the factor µi = 1 + w0i λ + w0,(M −i) when the finite
difference equation is advanced by one time step. After n time steps, one may write

n
ni = 1 + w0i λ + w0,(M −i) 0i .

Note that, by Lemma 6.3.1, w0i = 1/Γ(2 − α) > 0, and w0,(M −i) = 1/Γ(2 − α) > 0 we
have 1 + w0i λ + w0,(M −i) > 1. Thus |ni | → ∞ as n → ∞, which implies that the method
is unstable.

Next we consider an implicit Euler method for solving (6.2.1)-(6.2.3), we define an


implicit finite difference method for solving (6.2.1) - (6.2.3), with j = 1, 2, . . . , M − 1,

j
X M −j 
X
−1 −α
Ujn+1 Ujn n+1 n+1
+ fjn+1 ,

∆t − = ∆x wk,j Uj−k + wk,M −j Uj+k (6.3.11)
k=0 k=0

with U0n = UM
n
= 0, and Uj0 = u0 (xj ), j = 0, 1, 2, . . . , M − 1. Here the weights wk,j and
wk,M −j , are given in (6.3.3).

Lemma 6.3.3. The implicit finite difference method (6.3.11) is unconditionally unstable

Proof. We have, with λ = ∆t/∆xα ,


j M −j
X X
(1−w0j λ−w0,M −j λ)Ujn+1 = Ujn +λ n+1
wkj Uj−k +λ n+1
wk,M −j Uj+k +∆tfjn+1 . (6.3.12)
k=1 k=1
109

Although this is an implicit Euler method, the problem can be solved explicitly by a
left-to -right sweep across the x domain due to the Dirichlet boundary condition at the
left boundary. For example, the value U31 can be explicitly determined by U01 , U11 , U21 and
U30 . Now let us consider the stability. Let 0j = Ūj0 − Uj0 , j = 0, 1, 2, . . . , M be the error
generated by Uj0 . Assume that 0j = 0, j 6= i, that is Ui0 is the only term that has an error
for fixed i. Let n = 0, we have

i M −i
1 1  X X 
Ui1 = 0
U + λ 1
wki Ui−k +λ 1 1
wk,M −i Ui+k +∆tfi ,
1 − w0i λ − w0,M −i λ i 1 − w0i λ − w0,M −i λ k=1 k=1
(6.3.13)

and
i M −i
1 1  X X 
Ūi1 = 0
Ūi + λ 1
wki Ūi−k +λ 1
wk,M +i Ūi+k +∆fi1 .
1 − w0i λ − w0,M −i λ 1 − w0i λ − w0,M −i λ k=1 k=1
(6.3.14)

Denote 1i = Ūi1 − Ui1 , we get, subtracting (6.3.13) from (6.3.14),

1
1i = 0 .
1 − w0i λ − w0,M −i λ i

After n time steps, we may write


 1 n
ni = 0i .
1 − w0i λ − w0,M −i λ

Note that, by Lemma 6.3.1, w0i > 0, and w0,M −i > 0 which implies that 1−w0i λ−w0,M −i <
1 and therefore

1
> 1.
1 − w0i λ − w0,M −i

Thus |ni | → ∞ as n → ∞, which means that the method is unconditionally unstable.

We now introduce the shifted Diethelm’s FDM for space-fractional PDEs. At the node
(tn+1 , xj ), we may write the equation (6.2.1) into the shifted form, with j = 1, 2, . . . , M −1,

 
ut (tn+1 , xj ) − R α
0 Dx u(tn+1 , xj+1 ) +R D
x 1
α
u(tn+1 , xj−1 ) = fjn+1 + σjn+1 , (6.3.15)
110

where fjn+1 = f (tn+1 , xj ) and


   
σjn+1 = − R0 Dx
α
u(t n+1 , xj+1 )− R α
0 Dx u(tn+1 , xj ) − R α
D
x 1 u(t n+1 , xj−1 )− R α
D
x 1 u(tn+1 , xj ) .

Discretizing ut (tn+1 , xj ) at tn+1 by using the backward Euler method and discretizing
R α R α
0 Dx u(tn+1 , xj+1 ) and x D1 u(tn+1 , xj−1 ) by using (6.3.3) and (6.3.6) at xj+1 and xj−1
respectively, we get, with unj = u(tn , xj ), fjn = f (tn , xj ),

j+1
X M −(j−1) 
X
−1 −α
∆t (un+1
j − unj ) − (∆x) wk,j+1 un+1
j+1−k + wk,M −(j−1) un+1
j−1+k
k=0 k=0

= fjn+1 + σjn+1 + τjn+1 , (6.3.16)

where the truncation error τjn+1 = O(∆t + ∆x2−α ) [25, 26].


Let Ujn ≈ u(tn , xj ) be the approximate solution of u(tn , xj ) at the node (tn , xj ). We
define an implicit shifted finite difference method for solving (6.2.1) - (6.2.3), with j =
1, 2, . . . , M − 1,

j+1
X M −(j−1) 
X
−1 −α
Ujn+1 −Ujn n+1 n+1
+fjn+1 , (6.3.17)

∆t = ∆x wk,j+1 Uj+1−k + wk,M −(j−1) Uj−1+k
k=0 k=0

with U0n+1 = UM
n+1
= 0, and Uj0 = u0 (xj ), j = 0, 1, 2, . . . , M − 1. Here the weights wk,j+1
and wk,M −(j−1) , are given in (6.3.3).

Theorem 6.3.4. The shifted implicit method (6.3.17) is unconditionally stable.

Proof. With λ = ∆t/∆xα , (6.3.17) can be written as, j = 1, 2, . . . , M − 1,


   
n+1
− λw0,j+1 − λw2,M −(j−1) Uj+1 + 1 − λw1,j+1 − λw1,M −(j−1) Ujn+1
j+1
X M −(j−1) 
X
n+1 n+1
−λ wk,j+1 Uj+1−k + wk,M −(j−1) Uj−1+k = Ujn + ∆tfjn+1 , (6.3.18)
k=2 k=2
n+1
U0n+1 = UM = 0, (6.3.19)

or in the matrix form,

AU n+1 = U n + ∆tF n+1 ,


111

where
 
1 − λw12 − λw1M −λw02 − λw2M ... −λwM −1,M
 
 
 
 −λw23 − λw0,M −1 1 − λw13 − λw1,M −1 ... −λwM −2,M −1
 

A= ,
 
 −λw34 −λw24 − λw0,M −2 ... −λwM −3,M −2 
.. .. .. ..
 
 
 . . . . 
 
−λwM −1,M −λwM −2,M −λw2,M − λw0,2 1 − λw1,M − λw12

and
     
U1n+1 U1n f1n+1
     
 n+1  n+1
 U2n
   
 U2    f2 
U n+1 =
 ..
,

n
U =
 ..
,
 F n+1 =
 ..
.

 .   .   . 
     
n+1 n n+1
UM −1 UM −1 fM −1

 
x1
 
 
 x2 
Let µ denote an eigenvalue of A and ξ =   6= 0 the corresponding eigenvector,
 .. 
 . 
 
xM −1
that is,

Aξ = µξ.

Denote

|xi | = max{|xj |, j = 1, 2, . . . , M − 1},


j

we have, for fixed i,


M
X −1
aij xj = µxi ,
j=1

or
M −1
X xj
µ = aii + aij .
j=1,j6=i
xi
112

Note that

ai,1 = −λwi,i+1 , . . . , ai,i−2 = −λw3,i+1 ,

ai,i−1 = −λw2,i+1 − λw0,M −(i−1) , aii = 1 − λw1,i+1 − λw1,M −(i−1) ,

ai,i+1 = −λw0,i+1 − λw2,M −(i−1) ,

ai,i+2 = −λw3,M −(i−1) , . . . , ai,M −1 = −λwM −(i−1)−1,M −(i−1) ,

we have
 xi+1 xi−1 x1 
µ = 1 − λ w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
xi xi xi
 xi−1 xi+1 xM −1 
− λ w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wi,M −(i−1) .
xi xi xi
x Pi
Since xji < 1, j 6= i and, by Lemma 6.3.1, wk,i+1 > 0, k 6= 1 and k=0 wk,i+1 <
Pi+1 −α
P M −(i−1)−1 P M −(i−1)
k=0 wk,i+1 = (1 − α)(i + 1) < 0, and k=0 wk,M −(i−1) < k=0 wk,M −(i−1) =
(1 − α)(M − (i − 1))−α < 0, we have

xi+1 xi−1 x1
w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
xi xi xi
xi−1 xi+1 xM −1
+ w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wi,M −(i−1)
xi xi xi
 
< w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
 
+ w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wi,M −(i−1) < 0,

which implies that µ > 1.


Since all the eigenvalues µ of matrix A satisfy |µ| ≥ 1, the matrix A is invertible and
all eigenvalues of A−1 are less than 1. Hence there exists a matrix norm k · k such that
kA−1 k ≤ 1 and

kU n+1 k = kA−1 (U n + kF n+1 )k ≤ kU n k + kkF n+1 k


n+1
X
0
≤ · · · ≤ kU k + k kF j k
j=1

≤ kU 0 k + tn+1 max kf (t)k ≤ C,


t≥0

which implies that the numerical method is stable.


113

We may use the Gershgorin lemma to simplify the proof above. In fact, we have,
noting that w1,j < 0, wk,j > 0, k 6= 1, k = 0, 2, 3, . . . , j,
M
X −1
ri = |aik | = λ(w0,i+1 + w2,i+1 + w3,i+1 + · · · + wi,i+1 )
k=1,k6=i

+ λ(w0,M −(i−1) + w2,M −(i−1) + w3,M −(i−1) + · · · + wM −i,M −(i−1) ),

for i = 1, 2, . . . , M − 1.
Since aii = 1 − λw1,i+1 − λw1,M −i+1 , i = 1, 2, . . . , M − 1, we have

aii − ri =1 − λ(w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 )

− λ(w0,M −(i−1) + w1,M −(i−1) + w2,M −(i−1) + · · · + wM −i,M −(i−1) ),

which implies that, by Lemma 6.3.1,

aii − ri > 1, i = 1, 2, . . . , M − 1.

By Gershgorin lemma, all the eigenvalues µ of A satisfy

1 < aii − ri < µ < aii + ri .

Thus all the eigenvalues of A are larger than or equal to 1, which implies that the matrix
A is invertible and there exists a matrix norm k · k such that kA−1 k ≤ 1. Hence the
numerical method (6.3.17) is unconditionally stable.
The proof of the Theorem 6.3.4 is complete.

We now consider the error estimates of the shifted finite difference method (6.3.17).

Theorem 6.3.5. Let u(tn+1 , xj ) and Ujn+1 be the solutions of (6.3.15) and (6.3.17), re-
spectively. Assume that u(t, x) satisfies the Lipschitz conditions, with some β > 0,

R α
0 Dx u(t, x) −R α β
0 Dx u(t, y) ≤ Cα |x − y| , (6.3.20)
R α
x D1 u(t, x) −R α β
x D1 u(t, y) ≤ Cα |x − y| . (6.3.21)

Then we have

max |u(tn+1 , xj ) − Ujn+1 | ≤ C(∆t + ∆xmin(β,2−α) ).


j
114

Proof. Let en+1


j = u(tn+1 , xj ) − Ujn+1 . Subtracting (6.3.17) from (6.3.16), we obtain the
following error equation,

j+1
X M −(j−1) 
X
−1 −α
en+1 enj wk,j+1 en+1 wk,M −(j−1) en+1

∆t j − − ∆x j+1−k + j−1+k
k=0 k=0

= σjn+1 + τjn+1 .

With λ = ∆t/∆xα , we have

(1 − λw1,j+1 − λw1,M −(j−1) )en+1


j

− λ w0,j+1 en+1 n+1 n+1


+ wj+1,j+1 en+1

j+1 + w2,j+1 ej−1 + · · · + wj,j+1 e1 0

− λ w0,M −(j−1) en+1 n+1 n+1



j−1 + w2,M −(j−1) ej+1 + · · · + wM −(j−1),M −(j−1) eM

= enj + ∆tσjn+1 + ∆tτjn+1 .

Using Lemma 6.3.1 and assumptions (6.3.20)- (6.3.21), we have, with R = (∆t+∆xmin(2−α,β) ,

|e1 |∞ = sup |e1j | = |e1l | ≤ |e1l | 1 − λ(w0,l+1 + w1,l+1 + · · · + wl+1,l+1 )
j

− λ(w0,M −(l−1) + w1,M −(l−1) + · · · + wM −(l−1),M −(l−1) )

= |e1l | − λw0,l+1 |e1l | − λw1,l+1 |e1l | − · · · − λwl+1,l+1 |e1l |

− λw0,M −(l−1) |e1l | − λw1,M −(l−1) |e1l | − · · · − λwM −(l−1),M −(l−1) |e1l |

≤ |e1l | − λw0,l+1 |e1l+1 | − λw1,l+1 |e1l | − · · · − λwl+1,l+1 |e10 |

− λw0,M −(l−1) |e1l−1 | − λw1,M −(l−1) |e1l | − · · · − λwM −(l−1),M −(l−1) |e1M |

≤ |e1l − λw0,l+1 e1l+1 − λw1,l+1 e1l − · · · − λwl+1,l+1 e10

− λw0,M −(l−1) e1l−1 − λw1,M −(l−1) e1l − · · · − λwM −(l−1),M −(l−1) e1M |

= |e0l + ∆tσl1 + ∆tτl1 |

≤ |e0l | + ∆tR.

Further, for simplicity, we assume that e0l = 0. Then we have

|e1 |∞ ≤ ∆tR.

Similarly, we can show that

|e2 |∞ ≤ |e1l | + ∆tR ≤ t2 R,


115

and in general, with 0 ≤ tn ≤ T ,

|en |∞ ≤ tn R ≤ C(∆t + ∆xmin(2−α,β) ).

The proof of Theorem 6.3.5 is now complete.

6.4 FDM based on quadratic interpolation


In this section, we will introduce a new finite difference method for solving (6.2.1)- (6.2.3).
For simplicity, we assume C+ (t, x) = C− (t, x) = 1 and ϕ1 (t) = ϕ2 (t) = 0.
The Riemann-Liouville fractional derivative R α
0 Dx f (x) can be written as [26]
I x
1
R α
0 Dx f (x) = (x − ξ)−1−α f (ξ) dξ. (6.4.1)
Γ(−α) 0
H
Here the integral denotes the Hadamard finite-part integral.
Let m be a fixed positive integer and M = 2m. Let 0 = x0 < x1 < x2 < · · · < x2j <
x2j+1 < · · · < x2m = 1 be a partition of [0, 1] and ∆x the stepsize.
2j
At the nodes x2j = 2m
,j = 1, 2, . . . , m, we have
x2j x−α 1
I I
1 −1−α 2j
R α
0 Dx f (x2j ) = (x2j − ξ) f (ξ) dξ = w−1−α f (x2j − x2j w) dw.
Γ(−α) 0 Γ(−α) 0
(6.4.2)

For every j, we replace g(w) = f (x2j − x2j w) in the integral in (6.4.2) by piecewise
quadratic interpolation polynomials with the equispaced nodes 0, 2j1 , 2j2 , . . . , 2j
2j
. We then
have
I 1 I 1
−1−α
w g(w) dw = w−1−α P2 (w) dw + R2j (g), (6.4.3)
0 0

where P2 (w) is the piecewise quadratic interpolation polynomial of g(w) defined on the
equispaced nodes 0, 2j1 , 2j2 , . . . , 2j
2j
and R2j (g) is the remainder term.
2j+1
At the node x2j+1 = j = 1, 2, . . . , m − 1 we have
2m
,
I x2j+1
1
R α
0 Dx f (x2j+1 ) = (x2j+1 − ξ)−1−α f (ξ) dξ
Γ(−α) 0
Z x1
1
= (x2j+1 − ξ)−1−α f (ξ) dξ
Γ(−α) 0
I 2j
x−α
2j+1 2j+1
+ w−1−α f (x2j+1 − x2j+1 w) dw. (6.4.4)
Γ(−α) 0
116

For every j, j = 1, 2, . . . , m − 1, we replace g(w) = f (x2j+1 − x2j+1 w) by a piecewise


1 2 2j
quadratic interpolation polynomial with the equispaced nodes 0, 2j+1 , 2j+1 , . . . , 2j+1 and
obtain
I 2j I 2j
2j+1 2j+1
−1−α
w g(w) dw = w−1−α Q2 (w) dw + R2j+1 (g), (6.4.5)
0 0

where Q2 (w) is the piecewise quadratic interpolation polynomial of g(w) defined on the
1 2 2j
nodes 0, 2j+1 , 2j+1 , . . . , 2j+1 and R2j+1 (g) is the remainder term.
We have,

Lemma 6.4.1. [93]. Let 1 < α < 2 and let M = 2m where m is a fixed positive integer.
Let 0 = x0 < x1 < x2 < · · · < x2j < x2j+1 < · · · < xM = 1 be a partition of [0, 1]. Assume
that f (x) is a sufficiently smooth function. Then we have, with j = 1, 2, . . . , m,
2j
R α
x−α
2j
X 
0 Dx f (x) = αl,2j f (x2j−l ) + R2j (f )
x=x2j Γ(−α) l=0
2j
−α
X x−α
2j
= ∆x wl,2j f (x2j−l ) + R2j (f ), (6.4.6)
l=0
Γ(−α)

and, with j = 1, 2, . . . , m − 1,
Z x1
1
R α
0 Dx f (x) = (x2j+1 − ξ)−1−α f (ξ) dξ
x=x2j+1 Γ(−α) 0
2j
x−α
2j+1
X 
+ αl,2j+1 f (x2j+1−l ) + R2j+1 (f )
Γ(−α) l=0
2j
x−α
Z x1
1 −1−α −α
X 2j+1
= (x2j+1 − ξ) f (ξ) dξ + ∆x wl,2j+1 f (x2j+1−l ) + R2j+1 (f ),
Γ(−α) 0 l=0
Γ(−α)
(6.4.7)
117

where

(−α)(−α + 1)(−α + 2)(2j)−α αl,2j



2−α (α + 2),



 for l = 0,



(−α)22−α ,




 for l = 1,



 (−α)(−2−α α) + 1 F0 (2), for l = 2,

2
=
−F1 (k), for l = 2k − 1, k = 2, 3, . . . , j,







1
for l = 2k, k = 2, 3, . . . , j − 1,

 (F (k) + F0 (k + 1)),
 2 2




 1 F2 (j),


2
for l = 2j,

F0 (k) =(2k − 1)(2k) (2k)−α − (2(k − 1))−α (−α + 1)(−α + 2)




− (2k − 1) + 2k (2k)−α+1 − (2(k − 1))−α+1 (−α)(−α + 2)


 

+ (2k)−α+2 − (2(k − 1))−α+2 (−α)(−α + 1),




F1 (k) =(2k − 2)(2k) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)




− (2k − 2) + 2k (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)


 

+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1),




F2 (k) =(2k − 2)(2k − 1) (2k)−α − (2k − 2)−α (−α + 1)(−α + 2)




− (2k − 2) + (2k − 1) (2k)−α+1 − (2k − 2)−α+1 (−α)(−α + 2)


 

+ (2k)−α+2 − (2k − 2)−α+2 (−α)(−α + 1).




Further we have, with l = 0, 1, 2, . . . , 2j,

Γ(3 − α)wl,2j = (−α)(−α + 1)(−α + 2)(2j)−α

and

αl,2j+1 = αl,2j , wl,2j+1 = wl,2j .

The remainder term Rl (f ) satisfies, for every f ∈ C 3 (0, 1),

|Rl (f )| ≤ C∆x3−α kf 000 k∞ , l = 2, 3, 4, . . . , M, with M = 2m.


118

Proof. By the relationship between Riemann-Liouville and Hadamard finite-part integral,


2j
for fixed 2j, let 0 < 2j1 < 2j2 < · · · < 2j = 1 be a partition of [0,1]. We then have
I 1 I 2 Z 4 Z 2j
2j 2j 2j
−1−α
ω g(ω)dω = [ + +··· + ]ω −1−α g(ω)dω.
2 2j−2
0 0 2j 2j

Here the integral denotes the Hadamard finite-part integral [3]. We approximate g(ω) on
[0,1] by the quadratic interpolation polynomials g2 (ω), where
2k−1
(ω − 2j
)(ω − 2k
2j
) 2k − 2
g2 (ω) = 2k−2 2k−1 2k−2 2k
g( ) (6.4.8)
( 2j − 2j )( 2j − 2j ) 2j
(ω − 2k−2
2j
)(ω − 2k
2j
) 2k − 1
+ 2k−1 2k−2 2k−1 2k g( )
( 2j − 2j )( 2j − 2j ) 2j
(ω − 2k−2
2j
)(ω − 2k−1
2j
) 2k 2k − 2 2k
+ 2k 2k−2 2k 2k−1 g( ), f or ω∈[ , ], k = 1, 2, . . . , j.
( 2j − 2j )( 2j − 2j ) 2j 2j 2j
Let us now find the values of
I 1 I 2 Z 4 Z 2j
2j 2j 2j
−1−α
ω g2 (ω)dω = [ + +··· + ]ω −1−α g2 (ω)dω,
2 2j−2
0 0 2j 2j

H 2j2
where the integral 0 g2 (ω)ω −1−α dω is a Hadamard finite-part integral. By the definition
of the Hadamard finite-part integral, we get
I 2 g2 (0)( 2j2 )−α Z 2j −1−α Z ω 0
2
2j
−1−α
g2 (ω)ω dω = + ω [ g2 (y)dy]dω (6.4.9)
0 −α 0 0
Z 2
2−α 2j
= −α
g2 (0) + ω −1−α (g2 (ω) − g2 (0))dω
(−α)(2j) 0
−α Z 2 2
2 2j
−1−α (2j)
h 1 2
= −α
g(0) + ω (ω 2 − ( + )ω)g(0)
(−α)(2j) 0 2 2j 2j
2 2
(2j) 2 1 (2j) 1 2 i
+ (ω 2 − (0 + )ω)g( ) + (ω 2 − (0 + )ω)g( ) dω
−1 2j 2j 2 2j 2j
−α
2 (α + 2)
= g(0)
(−α)(−α + 1)(−α + 2)(2j)−α
22−α 1
+ g( )
(−α + 1)(−α + 2)(2j)−α 2j
−2−α α 2
+ g( )
(−α + 1)(−α + 2)(2j)−α 2j
119

Similarly, we have
Z 2k−2
2j
−α
(−α)(−α + 1)(−α + 2)(2j) g2 (ω)ω −1−α dω
2k
2j

1 2k − 2 2k − 1 1 2k
= F0 (k)g( ) + (−1)F1 (k)g( ) + F2 (k)g( ),
2 2j 2j 2 2j
where Fi (k), i = 0, 1, 2 and k = 1, 2, 3, . . . , j are defined as above.

Together these estimates complete the proof of Lemma 6.4.1.

The weights wl,2j have some special properties which are summarized in the following
Lemma 6.4.2 .

Lemma 6.4.2. Let 1 < α < 2. The coefficients wl,2j in (6.4.6) satisfy

w1,2j < 0, (6.4.10)

wl,2j > 0, l 6= 1, l = 0, 2, 3, . . . , 2j, (6.4.11)


2j
X
Γ(3 − α) wl,2j < 0. (6.4.12)
l=0

Proof. It is easy to show that w0,2j > 0 and w1,2j < 0. We now prove that wk,2j > 0, k =
2, 3, . . . , 2j. We first show that

w2l−1,2j > 0, l = 2, 3, . . . , j.

Note that
   
Γ(3 − α)w2l−1,2j = 2 (2l − 2)−α+2 − (2l)−α+2 + 2(−α + 2) (2l − 2)−α+1 + (2l)−α+1 .

Let m = 2l. It is sufficient to show that, with m = 4, 6, . . . ,

I(m) = (m − 2)−α+2 − m−α+2 + (−α + 2)(m − 2)−α+1 + (−α + 2)m−α+1 > 0.

In fact, we have, by using binomial expansion,


 1 2 2 1
I(m) =m−α+2 − 1 + (−α + 2) + (1 − )−α+2 + (−α + 2)(1 − )−α+1
m m m m
 (−α + 2)(−α + 1)(−α)  23 22 
= m−α+2 − +
m3 3! 2!
(−α + 2)(−α + 1)(−α)(−α − 1)  24 23 
+ −
m4 4! 3!
5
(−α + 2)(−α + 1)(−α)(−α − 1) −2  24  
+ + + ... .
m5 5! 4!
120

2n
Note that the sequence an = n!
is decreasing. Hence we see that, with 1 < α < 2,

I(m) > 0.

We next prove that

w2l,2j > 0, l = 1, 2, . . . , j − 1.

Note that, with l = 2, 3, . . . , j − 1,


 
Γ(3 − α)w2l,2j = −3(−α + 2)(2l)−α+1 + (2l + 2)−α+2 − (2l − 2)−α+2
1  
− (−α + 2) (2l + 2)−α+1 + (2l − 2)−α+1 .
2

Let m = 2l. It is sufficient to show that, with m = 4, 6, . . . ,

I(m) = −6(−α + 2)m−α+1 + (−2)(m − 2)−α+2

+ (−1)(−α + 2)(m − 2)−α+1 + 2(m + 2)−α+2 + (−1)(−α + 2)(m + 2)−α+1 > 0.

In fact, we have, by using binomial expansion,

−α+2
 2 −α+2 2 1
I(m) =m + (−2)(1 − ) + (−1)(−α + 2)(1 − )−α+1
m m m
2 −α+2 2 −α+1 1 
+ 2(1 + ) + (−1)(−α + 2)(1 + )
m m m
 2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)  23 · 2 22 
= m−α+2 − +
m3 3! 2!
5 4
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 2 · 2 2

+ −
m5 5! 4!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)  27 · 2 26  
+ − + . . .
m7 7! 6!
3 2
1  2(−α + 2)(−α + 1)(−α) 2 · 2 2

= 1+α −
m m0 3! 2!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)  25 · 2 24 
+ −
m2 5! 4!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)(−α − 3)(−α − 4)
+
m4
 27 · 2 26  
− + ... .
7! 6!

Note that

2n · 2 2n−1 2n−1  2 · 2 
− = − 1 ≤ 0, n ≥ 4.
n! (n − 1)! (n − 1)! n
121

Hence we get
21+α 1 h 2(−α + 2)(−α + 1)(−α)  23 · 2 22 
I(m) ≥ −
m1+α 21+α m0 3! 2!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2) 25 · 2 24 

+ −
22 5! 4!
2(−α + 2)(−α + 1)(−α)(−α − 1)(−α − 2)(−α − 3)(−α − 4)  27 · 2 26  i
+ − + . . .
24 7! 6!
1+α
2
= 1+α I(2), m = 2, 3, 4, . . . .
m
It is easy to show that, with 1 < α < 2,
 
I(2) = 2−α+2 3α − 6 + 2−α (6 + α) > 0.

Thus we get

I(m) > 0, m = 2, 3, 4, . . . .

Similarly, we can show that w2j,2j > 0.


Finally we shall prove Γ(3 − α) 2j
P
l=0 wl,2j < 0. We have
2j
X −3(−α + 2) α−2
Γ(3−α) wl,2j = (2j)−α+1 −(2j)−α+2 + (2j +2)−α+1 +(2j +2)−α+2 .
l=0
2 2

Let m = 2j + 2, it is sufficient to show that, with m = 4, 6, 8, . . . ,

I(m) = −3(−α + 2)(m − 2)−α+1 − 2(m − 2)−α+2 + (α − 2)m−α+1 + 2 · m−α+2 < 0.

In fact, by using binomial expansion, we have

−α+2
 2 −α+1 1 2 −α+2 1 
I(m) =m (3α − 6)(1 − ) − 2(1 − ) + (α − 2) + 2
m m m m
(−α + 2)(−α + 1)  (−2) (−2)2 
= (−3) + (−2)
m2 1! 2!
2
(−α + 2)(−α + 1)(−α)  (−2) (−2)3 
+ (−3) + (−2)
m3 2! 3!
(−α + 2)(−α + 1)(−α)(−α − 1)  (−2)3 (−2)4 
+ (−3) + (−2) + ....
m4 3! 4!
Note that
(−2)n (−2)( n + 1) (−2)n  −2 
(−3) + (−2) = (−3) + (−2)
n! (n + 1)! n! n+1
n
(−2) 4  −3n + 1
= (−3) + = (−2)n ,
n! n+1 (n + 1)!
122

which implies that

I(m) < 0, m = 4, 6, 8, . . . .

Together these estimates complete the proof of Lemma 6.4.2.

Similarly, we can consider the approximation of right-handed fractional derivative


R α
x D1 f (x) at x = xl , l = 0, 1, 2, . . . , 2m − 2. Using the same argument as for the approxi-
R α
mation of 0 Dx f (x) at x = xl , we can show that, with j = 0, 1, 2, . . . , m − 1,
M −2j
R α −α
X x−α
2j
x D1 f (x) = ∆x wl,M −2j f (x2j+l ) + R2j (f ), (6.4.13)
x=x2j
l=0
Γ(−α)

and, with j = 0, 1, 2, . . . , m − 2,
Z xM
1
R α
x D1 f (x) = (ξ − x2j+1 )−1−α f (ξ) dξ
x=x2j+1 Γ(−α) xM −1
M −(2j+1)−1
−α
X x−α
2j+1
+ ∆x wl,M −(2j+1) f (x2j+1+l ) + R2j+1 (f ). (6.4.14)
l=0
Γ(−α)

n n
Let U2j ≈ u(tn , x2j ) and U2j+1 ≈ u(tn , x2j+1 ) denote the approximate solutions of
u(tn , x2j ) and u(tn , x2j+1 ), respectively. We define the following explicit numerical method
for solving (6.2.1) - (6.2.3).
2j
X M −2j 
X
−1 n+1 n −α n
wk,M −2j un2j+k

∆t U2j − U2j = ∆x wk,2j U2j−k +
k=0 k=0
n
+ f2j , j = 1, 2, . . . , m − 1, (6.4.15)
 2j+1 M −2j−1 
X X
−1 n+1 n −α n n

∆t U2j+1 − U2j+1 = ∆x wk,2j+1 U2j+1−k + wk,M −2j−1 U2j+1+k
k=0 k=0

+ f¯2j+1
n
+ Qn2j , j = 0, 1, 2, . . . , m − 1, (6.4.16)

where Qn2j is defined as in (6.4.24) below.

Lemma 6.4.3. The standard explicit numerical method (6.4.15) - (6.4.16) is uncondi-
tionally unstable.
123

Proof. Let n = 0. We have, with λ = ∆t/∆xα ,


2j
X
1
 0 0
U2j = 1 + w0,2j λ + +w0,M −2j λ U2j + λ wk,2j U2j−k
k=1
M −2j
X
0 0
+λ wk,M −2j U2j+k + ∆tf2j + ∆tQ02j , (6.4.17)
k=1
2j
X
1
 0 0
U2j+1 = 1 + w0,2j+1 λ + w0,M −2j−1 λ U2j+1 + λ wk,2j+1 U2j+1−k
k=1
M −2j−1
X
+λ 0
wk,M −2j−1 U2j+1+k + ∆tf¯2j+1
0
, (6.4.18)
k=1

where
Z x1
1
f¯2j+1
0
= 0
f2j+1 + (x2j+1 − ξ)−1−α u(ξ, tn+1 ) dξ.
Γ(−α) 0

Assume that we have some errors in the starting values Ul0 , i.e.,

Ūl0 = Ul0 + 0l , l = 0, 1, 2, . . . , 2j, 2j + 1, . . . , 2m.

0
To consider the stability, we assume that only the term U2j 0
, for some fixed j0 , has the
error, and other terms have no errors. That is

0l = 0, l 6= 2j0 .

Then we have
2j0 M −2j 0
X X
1
 0 0 0 0
Ū2j0
= 1+w0,2j0 λ+w0,M −2j 0 λ Ū2j0 +λ wk,2j0 Ū2j0 −k +λ wk,M −2j 0 Ū2j 0 −k
+∆tf2j0
.
k=1 k=1
(6.4.19)

Subtracting (6.4.17) from (6.4.19), we obtain

12j0 = 1 + w0,2j0 λ + w0,M −2j 0 λ 02j0 .




That is, the error is amplified by the factor µ2j0 = 1 + w0,2j0 λ + w0,M −2j 0 λ when the finite
difference equation is advanced by one time step. After n time steps, one may write
n
n2j0 = 1 + w0,2j0 λ + w0,M −2j 0 λ 02j0 .

Note that w0,2j0 = 1/Γ(3 − α) > 0, and +w0,M −2j 0 = 1/Γ(3 − α) > 0 we have 1 +
w0,2j0 λ + w0,M −2j 0 λ > 1. Thus |n2j0 | → ∞ as n → ∞, which implies that the method is
unstable.
124

Similarly, we can introduce the standard implicit numerical method and show that the
standard implicit numerical method is also unconditionally unstable.
We now introduce the shifted Diethelm FDM for space-fractional PDEs. Let 0 = t0 <
t1 < t2 < · · · < tn < . . . be the time partition and ∆t the time stepsize. At the nodes
2j
x2j = 2m
,j = 1, 2, . . . , m − 1, we have, by (6.2.1),
 
R α R α n+1 n+1
ut (tn+1 , x2j ) − 0 Dx u(tn+1 , x2j+1 ) + x D1 u(tn+1 , x2j−1 ) = f2j + σ2j , (6.4.20)

2j+1
and at the nodes x2j+1 = 2m
,j = 1, 2, . . . , m − 1,
 
R α n+1 n+1
ut (tn+1 , x2j+1 ) − 0 Dx u(tn+1 , x2j+2 ) +R D
x 1
α
u(tn+1 , x2j ) = f2j+1 + σ2j+1 , (6.4.21)

where
 
n+1
σ2j =− R 0 D α
x u(t n+1 , x2j+1 ) − R α
0 D x u(t n+1 , x 2j )
 
− R α R α
x D1 u(tn+1 , x2j−1 ) − x D1 u(tn+1 , x2j ) ,
 
n+1 R α R α
σ2j+1 = − 0 Dx u(tn+1 , x2j+2 ) − 0 Dx u(tn+1 , x2j+1 )
 
− R D
x 1
α
u(t n+1 , x2j ) − R α
D
x 1 u(t n+1 , x2j+1 ) .

Discretizing ut (tn+1 , xl ) by using the backward Euler method and discretizing R α


0 Dx u(tn+1 , xl )
R α
and x D1 u(tn+1 , xl ) by using (6.4.6) - (6.4.7) and (6.4.13) - (6.4.14), respectively, we get,
with unj = u(tn , xj ), fjn = f (tn , xj )

2j
X M −(2j−1)−1 
X
−1 −α
un+1 un2j wk,2j+1 un+1 wk,M −(2j−1) un+1

∆t 2j − = ∆x 2j+1−k + 2j−1+k
k=0 k=0
n+1
+ f2j + Qn+1 n+1 n+1
2j + σ2j + τ2j , j = 1, 2, . . . , m − 1, (6.4.22)
 2j+2 M −2j 
X X
∆t−1 un+1 n −α n+1 n+1

2j+1 − u2j+1 = ∆x w u
k,2j+2 2j+2−k + w u
k,M −2j 2j+k
k=0 k=0
n+1 n+1 n+1
+ f2j+1 + σ2j+1 + τ2j+1 , j = 0, 1, 2, . . . , m − 1, (6.4.23)

where the truncation errors τln+1 = O(∆t + ∆x3−α ), l = 1, 2, . . . , Ṁ − 1 [25], [26] and
Z x1 Z xM
1 −1−α 1
n+1
Q2j = (x2j+1 − ξ) u(ξ, tn+1 ) dξ + (ξ − x2j+1 )−1−α u(ξ, tn+1 ) dξ.
Γ(−α) 0 Γ(−α) xM −1
(6.4.24)
125

n n
Let U2j ≈ u(tn , x2j ) and U2j+1 ≈ u(tn , x2j+1 ) denote the approximate solutions of
u(tn , x2j ) and u(tn , x2j+1 ), respectively. We define the following implicit shifted numerical
method for solving (6.2.1) - (6.2.3).
2j
X M −(2j−1)−1 
X
−1 n+1 −α n+1
n
wk,M −(2j−1) un+1

∆t U2j − U2j = ∆x wk,2j+1 U2j+1−k + 2j−1+k
k=0 k=0
n+1
+ f2j + Qn+1
2j , j = 1, 2, . . . , m − 1, (6.4.25)
 2j+2 M −2j 
X X
−1 n+1 n −α n+1 n+1

∆t U2j+1 − U2j+1 = ∆x wk,2j+2 U2j+2−k + wk,M −2j U2j+k
k=0 k=0
n+1
+ f2j+1 , j = 0, 1, 2, . . . , m − 1, (6.4.26)

where Qn+1
2j is defined below in (6.4.24).

Lemma 6.4.4. The shifted implicit method (6.4.25)- (6.4.26) is unconditionally stable.

Proof. For simplicity, we only consider the left-hand Riemman-Liouville fractional deriva-
tive for stability analysis. With λ = ∆t/∆xα we write (6.4.25)-(6.4.26) into one equation,
with l = 1, 2, . . . , 2j, 2j + 1, . . . , 2m − 1,
l+1
X
n+1
−λw0,l+1 Ul+1 + (1 − λw1,l+1 )Uln+1 −λ n+1
wk,l+1 Ul+1−k = Uln + kFln+1 , (6.4.27)
k=2

with the boundary conditions U0n+1 = U2m


n+1
= 0, where

 f¯n+1 , l = 2j, j = 1, 2, . . . , m,

n+1 l
Fl =
 f n+1 , l = 2j + 1, j = 1, 2, . . . , m − 1,

l

and f¯ln+1 is defined as follows:

Z x1
1
f¯2j
n+1
= (x2j+1 − ξ)−1−α u(ξ, tn+1 ) dξ + f2j
n+1
.
Γ(−α) 0

Further we write (6.4.27) into the following linear system with 2m − 1 equations and
2m − 1 unknowns.

AU n+1 = U n + kF n+1 ,
126

where
 
1 − λw12 −λw02
 
 
 
−λw23 1 − λw13 −λw03
 
A= ,
.. ..
 
..
.
 
 . . 
 
−λw2m−1,2m −λw2m−2,2m ... 1 − λw1,2m
and
     
U1n+1 U1n F1n+1
     
 n+1  n+1
U2n
   
 U2     F2 
U n+1 =
 ..
,

n
U =
 ..
,
 F n+1 =
 ..
.

 .   .   . 
     
n+1 n n+1
U2m−1 U2m−1 F2m−1
 
x1
 
 
 x2 
Let µ be an eigenvalue of A. Let ξ =   6= 0 be the corresponding eigenvector.
 .. 
 . 
 
x2m−1
Then we have

Aξ = µξ.

Denote

|xi | = max{|xj |, j = 1, 2, . . . , 2m − 1}.


j

We have, for fixed i,


2m−1
X
aij xj = µxi ,
j=1

i.e.,
2m−1
X xj
µ = aii + .
j=1,j6=i
xi

Note that aii = 1 − λw1,i+1 , ai,i+1 = −λw0,i+1 , ai,i−1 = −λw2,i+1 , . . . , ai,1 = −λwi,i+1 . We
have
i−1
xi+1 X xj
µ = (1 − λw1,i+1 ) − λw0,i+1 −λ wi−j+1,i+1
xi j=1
xi
 xi+1 xi−1 x1 
= 1 − λ w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 .
xi xi xi
127

xj Pi+1
Note that xi
< 1, j 6= i and wk,i+1 > 0, k 6= 1 and, by Lemma 6.3.1, k=0 wk,i+1 < 0, we
have
xi+1 xi−1 x1
w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1
xi xi xi
< w0,i+1 + w1,i+1 + w2,i+1 + . . . wi,i+1 < 0.

Hence
 xi+1 xi−1 x1 
µ = 1 − λ w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 > 1.
xi xi xi
Since all the eigenvalues µ of matrix A satisfy |µ| ≥ 1, the matrix A is invertible and all
eigenvalues of A−1 are less than 1, which implies that there exists a matrix norm k · k
such that kA−1 k ≤ 1 and

kU n+1 k = kA−1 (U n + kF n+1 )k ≤ kU n k + kkF n+1 k


n+1
X
0
≤ · · · ≤ kU k + k kF j k ≤ kU 0 k + tn+1 max kf (t)k ≤ C,
0≤t≤T
j=1

which means that the numerical method is stable.


We may also use the following lemma to prove the stability.

Lemma 6.4.5. The eigenvalues of the matrix A lie in the disks centered at aii with radius
P
ri = k6=i |aik |.

By using Lemma 6.4.5, we shall prove all the eigenvalues of A are larger than or equal
to 1. In fact, we have
2m−1
X
ri = |aik | = λ(w0,i+1 + w2,i+1 + w3,i+1 + · · · + wi,i+1 ).
k=1,k6=i

We have, with aii = 1 − λw1,i+1 ,

aii − ri = 1 − λ(w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 ).

By Lemma 3.4.2 we have w0,i+1 + w1,i+1 + w2,i+1 + · · · + wi,i+1 < 0, which implies that
aii − ri > 1 and therefore all the eigenvalues µ of A satisfy

1 < aii − ri < µ < aii + ri .


128

6.4.1 Initial integral approximation


R x1
To approximate the integral 1
Γ(−α) 0
(x2j+1 − ξ)−1−α u(tn+1 , ξ) dξ in (6.4.24), we denote
g(ξ) = u(tn+1 , ξ) and approximate g(ξ) on [0, x1 ] by the following quadratic interpolation
polynomials, [9],

(ξ − x 1 )(ξ − x1 ) (ξ − x0 )(ξ − x1 )
2
P2 (ξ) = u(tn+1 , x0 ) + u(tn+1 , x 1 )
(x0 − x 1 )(x0 − x1 ) (x 1 − x0 )(x 1 − x1 ) 2
2 2 2

(ξ − x0 )(ξ − x 1 )
+ 2
u(tn+1 , x1 ), for ξ ∈ [x0 , x1 ],
(x1 − x0 )(x1 − x 1 )
2

where

(1) u000 (tn+1 , c1 )


u(tn+1 , ξ) − P2 (ξ) = R1 (ξ) = (ξ − x0 )(ξ − x 1 )(ξ − x1 ), c1 ∈ (0, x1 ).
3! 2

Further we approximate the value u(tn+1 , x 1 ) by


2

3 3 1
u(tn+1 , x 1 ) ≈ u(tn+1 , x0 ) + u(tn+1 , x1 ) − u(tn+1 , x2 ),
2 8 4 8

where
33 1 
(2)
u(tn+1 , x 1 ) − u(tn+1 , x0 ) + u(tn+1 , x1 ) − u(tn+1 , x2 ) = R1 (ξ),
2 8 4 8
(2) 1 000
and R1 (ξ) = 16
u (tn+1 , c2 )h3 , c2 ∈ (0, x2 ).
We then have
Z x1 2
1 −1−α
X
(x2j+1 − ξ) u(tn+1 , ξ) dξ = B̂i u(tn+1 , xi ) + R1 ,
Γ(−α) 0 i=0

where
Z x1
α−1
(ξ − x 1 )(ξ − x1 ) 3
Z x1
(ξ − x0 )(ξ − x1 )
B̂0 = (x1 − ξ) 2
dξ + (x1 − ξ)α−1 dξ,
0 (x0 − x 1 )(x0 − x1 ) 8 0 (x 1 − x0 )(x 1 − x1 )
2 2 2
x1 x1
(ξ − x0 )(ξ − x1 ) (ξ − x0 )(ξ − x1 )
Z Z
3
B̂1 = (x1 − ξ)α−1 dξ + (x1 − ξ)α−1 dξ,
4 0 (x 1 − x0 )(x 1 − x1 ) 0 (x1 − x0 )(x1 − x 1 )
2 2 2
x1
(ξ − x0 )(ξ − x1 )
Z
1
B̂2 = − (x1 − ξ)α−1 dξ,
8 0 (x 1 − x0 )(x 1 − x1 )
2 2

and
Z x1 Z x1
−1−α (1) (2)
R1 = (x2j+1 − ξ) R1 (ξ) dξ + (x2j+1 − ξ)−1−α R1 (ξ) dξ.
0 0
129

It is easy to show that

Z x1 Z x1
−1−α (1) (2)
|R1 | ≤ (x2j+1 − ξ) |R1 (ξ)| dξ + (x2j+1 − ξ)−1−α |R1 (ξ)| dξ
Z0 x1 0

≤ (x2j+1 − ξ)−1−α C∆x3 dξ ≤ C∆x3 ∆x−α = C∆x3−α .


0

Hence, we have
Z x1 2
X
−1−α
(x2j+1 − ξ) u(tn+1 , ξ) dξ − B̂i un+1
i = O(∆x3−α ).
0 i=0

Similarly, we have, for some suitable weights B̃i , i = 0, 1, 2,


Z xM M
X
−1−α
(ξ − x2j+1 ) u(tn+1 , ξ) dξ − B̃i Uin+1 = O(∆x3−α ).
xM −1 i=M −2

Based on the analysis above, we approximate Qn+1


2j in (6.4.25) by

2 M
n+1 1 X 1 X
S2j = B̂i Uin+1 + B̃i Uin+1 . (6.4.28)
Γ(−α) i=0 Γ(−α) i=M −2

It is easy to say that


2 M
1 X X 
Qn+1
2j − n+1
B̂i ui + B̃i un+1
i = O(∆x3−α ).
Γ(−α) i=0 i=M −2

6.4.2 Error estimates of the shifted Diethelm FDMs

Theorem 6.4.6. Let 1 < α < 2 and let u(tn+1 , xl ) and Uln+1 , l = 1, 2, . . . , M − 1 be
the solutions of (6.4.22)- (6.4.23) and (6.4.25)-(6.4.26), respectively. Assume that u(t, x)
satisfies the Lipschitz conditions, with some β > 0,

R α
0 Dx u(t, x) −R α β
0 Dx u(t, y) ≤ Cα |x − y| , (6.4.29)
R α
x D1 u(t, x) −R α β
x D1 u(t, y) ≤ Cα |x − y| . (6.4.30)

We have

max |u(tn+1 , xl ) − Uln+1 | ≤ C(∆t + ∆xmin(β,3−α) ).


1≤l≤M −1
130

Proof of Theorem 6.4.6. Let en+1


l = u(tn+1 , xl ) − Uln+1 , l = 1, 2, . . . , M − 1. Subtracting
(6.4.22) - (6.4.23) from (6.4.25)-(6.4.26), We obtain the following error equation, for l =
2j, j = 1, 2 . . . , m − 1, with R = (∆t + ∆xmin(3−α,β) ),
  2j
X M −(2j−1)−1 
X
−1 −α
∆t en+1
2j − en2j − ∆x wk,2j+1 en+1
2j+1−k + wk,M −(2j−1) en+1
2j−1+k + R,
k=0 k=0

and, for l = 2j + 1, j = 0, 1, 2 . . . , m − 1,
   2j+2 M −2j 
X X
−1 −α
∆t en+1
2j+1 − en2j+1 − ∆x wk,M −(2j+2) en+1
2j+2+k + wk,M −2j en+1
2j+k + R.
k=0 k=0

With λ = ∆t/∆xα , we have, for l = 2j, j = 1, 2 . . . , m − 1,

(1 − λw1,2j+1 − λw1,M −(2j−1) )en+1


2j

− λ w0,2j+1 en+1 n+1 n+1



2j+1 + w2,2j+1 e2j−1 + · · · + w2j,2j+1 e1

− λ w0,M −(2j−1) en+1 n+1 n+1



2j−1 + w2,M −(2j−1) e2j+1 + · · · + wM −(2j−1)−1,M −(2j−1) eM −1

= en2j + ∆tR,

and, for l = 2j + 1, j = 0, 1, 2 . . . , m − 1,

(1 − λw1,2j+2 − λw1,M −2j )en+1


2j+1

− λ w0,2j+2 en+1 n+1 n+1

2j+2 + w2,2j+2 e2j + · · · + w2j+2,2j+2 e0

− λ w0,M −2j en+1 n+1 n+1



2j + w2,M −2j e2j+2 + · · · + wM −2j,M −2j eM

= en2j+1 + ∆tR.

Assume that |e1 |∞ = supl |e1l | = |e2k | for some k, we get, by Lemma 6.4.2, with
R = (∆t + ∆xmin(3−α,β) ),

|e1 |∞ = sup |e1l | = |e12k | ≤ |e12k | 1 − λ(w0,2k+1 + w1,2k+1 + . . . w2k,2k+1 )
l

− λ(w0,M −(2k−1) + w1,M −(2k−1) + · · · + wM −(2k−1)−1,M −(2k−1) )

≤ |e12k | − λw0,2k+1 |e12k+1 | − λw1,2k+1 |e12k | − · · · − λw2k,2k+1 |e11 |

− λw0,M −(2k−1) |e12k−1 | − λw1,M −(2k−1) |e12k | − · · · − λwM −(2k−1)−1,M −(2k−1) )|e1M −1 |

≤ |e12k | − λw0,2k+1 |e12k+1 | − λw1,2k+1 |e12k | − · · · − λw2k,2k+1 |e11 |

− λw0,M −(2k−1) |e12k−1 | − λw1,M −(2k−1) |e12k | − · · · − λwM −(2k−1)−1,M −(2k−1) |e1M −1 |

≤ |e02k | + ∆tR.
131

Assume that |e1 |∞ = supl |e1l | = |e2k+1 | for some k, we get, by Lemma 6.4.2, with
R = (∆t + ∆xmin(3−α,β) ),

1
|e |∞ = sup |e1l | = |e12k+1 |
≤ |e12k+1 |
1 − λ(w0,2k+2 + w1,2k+2 + . . . w2k+2,2k+2 )
l

− λ(w0,M −2k + w1,M −2k + · · · + wM −2k,M −2k )

= |e12k+1 | − λw0,2k+2 |e12k+2 | − λw1,2k+2 |e12k+1 | − · · · − λw2k+2,2k+2 |e10 |

− λw0,M −2k |e12k | − λw1,M −2k |e12k+1 | + · · · − λwM −2k,M −2k )|e1M |

≤ |e12k+1 | − λw0,2k+2 |e12k+1 | − λw1,2k+2 |e12k+1 | − · · · − λw2k+2,2k+2 |e10 |

− λw0,M −2k |e12k | − λw1,M −2k |e12k+1 | − · · · − λwM −2k,M −2k |e1M |

≤ |e02k+1 | + ∆tR.

Hence we obtain

sup |e1l | ≤ sup |e0l | + ∆tR.


l l

Further, for simplicity, we assume that e0l = 0. Then we have

|e1 |∞ ≤ ∆tR.

Similarly, we can show that

|e2 |∞ ≤ |e1 |∞ + ∆tR ≤ t2 R,

and in general, with 0 ≤ tn ≤ T ,

|en |∞ ≤ tn R ≤ C(∆t + ∆xmin(3−α,β) ).

The proof of Theorem 6.4.6 is now complete.

6.5 Numerical simulations


In this section, we will give some numerical examples. Let us consider the following
space-fractional partial differential equation with nonhomogeneous Dirichlet boundary
132

conditions, with 1 < α < 2,

∂u(t, x) R α
− 0 Dx u(t, x) = f (t, x), 0 < x < 1, t > 0, (6.5.1)
∂t
u(t, 0) = ϕ1 (t), u(t, 1) = ϕ2 (t), (6.5.2)

u(0, x) = u0 (x), (6.5.3)

where ϕ1 (t), ϕ2 (t) are some suitable functions of t and u0 (x) is the initial condition.
Let us recall the numerical method introduced in the previous section. Let m be a
positive integer and let 0 = x0 < x1 < x2 < · · · < x2m = 1 be a space partition of [0, 1]
and ∆x the space stepsize. Let 0 = t0 < t1 < t2 < · · · < xN = 1 be a time partition of
[0, 1] and ∆t the time stepsize.
At x = xl , t = tn , we have, with l = 1, 2, . . . , 2m − 1, and n = 1, 2, . . . , N ,

∂u(t, x)
−R α
0 Dx u(t, x) = f (t, x) , (6.5.4)
∂t x=xl ,t=tn x=xl ,t=tn x=xl ,t=tn

u(tn , 0) = ϕ1 (tn ), u(tn , 1) = ϕ2 (tn ), (6.5.5)

u(0, xl ) = u0 (xl ), (6.5.6)

To get a stable finite difference scheme for this time-dependent problem, we need to
consider the following shifted equation, that is,

∂u(t, x)
−R α
0 Dx u(t, x) = f (t, x) + ρl (tn ), (6.5.7)
∂t x=xl ,t=tn x=xl+1 ,t=tn x=xl ,t=tn

u(tn , 0) = ϕ1 (tn ), u(tn , 1) = ϕ2 (tn ), (6.5.8)

u(0, xl ) = u0 (xl ), (6.5.9)

where
 
ρl (tn ) = − R α
0 Dx u(t, x) −R
0 Dxα u(t, x) .
x=xl+1 ,t=tn x=xl ,t=tn

Note that,

∂u(t, x) u(tn , xl ) − u(tn−1 , xl )


= + O(∆t),
∂t x=xl ,t=tn ∆t
133

and, with l = 2j, j = 1, 2, . . . , m − 1,


I
1
R α
0 Dx u(t, x) = (x2j+1 − ξ)−1−α u(tn , ξ) dξ
x=xl+1 ,t=tn Γ(−α) xx0 2j+1
Z x1
1
= (x2j+1 − ξ)−1−α u(tn , ξ) dξ
Γ(−α) 0
2j
X
−α
+ ∆x wk,2j+1 u(tn , x2j+1−k ) + O(∆x3−α ),
k=0

and, with l = 2j + 1, j = 0, 1, 2, . . . , m − 1,
Z x2j+2
1
R α
0 Dx u(t, x) = (x2j+2 − ξ)−1−α u(tn , ξ) dξ
x=xl+1 ,t=tn Γ(−α) 0
2j+2
X
= ∆x−α wk,2j+2 u(tn , x2j+2−k ) + O(∆x3−α ),
k=0

where wk,2j+1 , wk,2j+2 are defined as in (6.4.6) and (6.4.7).


Denote Ujn ≈ u(tn , xj ). We define the following backward Euler method for solving
(6.5.1)-(6.5.3),

n n−1 2j
U2j − U2j −α
X
n
− ∆x wk,2j+1 U2j+1−k = f (x2j , tn ) + ρn2j
∆t k=0
Z x1
1
+ (x2j+1 − ξ)−1−α u(ξ, tn ) dξ, j = 1, 2, . . . , m − 1,
Γ(−α) 0
n n−1 2j+2
U2j+1 − U2j+1 −α
X
n
− ∆x wk,2j+2 U2j+2−k = f (x2j+1 , tn ) + ρn2j+1
∆t k=0
Z x1
1
+ (x2j+1 − ξ)−1−α u(ξ, tn ) dξ, j = 0, 1, 2, . . . , m − 1,
Γ(−α) 0
∆t
or, with λ = ∆xα
,
2j
X
n n n−1
U2j −λ wk,2j+1 U2j+1−k = U2j + ∆tf (x2j , tn ) + ∆tρn2j
k=0
Z x1
1
+ ∆t (x2j+1 − ξ)−1−α u(ξ, tn ) dξ, j = 1, 2, . . . , m − 1,
Γ(−α) 0

(6.5.10)
2j+2
X
n n n−1
U2j+1 −λ wk,2j+2 U2j+2−k = U2j+1 + ∆tf (x2j+1 , tn ) + ∆tρn2j+1 , j = 0, 1, 2, . . . , M − 1.
k=0

(6.5.11)
134

The numerical methods (6.5.10) - (6.5.11) can be written into the following matrix
form

AU n = U n−1 + ∆tF n + ∆tρn + ∆tI n + Bln + Brn ,

where
       
U1n f (x1 , tn )  − R 0 D α
x u(x ,
2 n t ) − R α
0 D x u(x ,
1 n t ) 
       
U2n f (x2 , tn ) − R D α
u(x , t ) − R α
D u(x , t )
   
3 n 2 n
     0 x 0 x

       
 
Un =  U3n , Fn =  , n
ρ = R α R α
− 0 Dx u(x4 , tn ) − 0 Dx u(x3 , tn ) ,
   
f (x3 , tn )  
.. ..
   
     .. 
 .   .  
 . 

       
n
U2m−1 f (x2m−1 , tn ) − R0 D α
x u(x 2m , t n ) − R α
0 D x u(x 2m−1 , tn )

and
 
0
 R x1 
1 −1−α
(x3 − ξ) u(tn , ξ) dξ 
 
Γ(−α)
0

 
 
n
 0 
I =
 ..
,

 . 
 
1
R x1 −1−α

 
 Γ(−α) 0
(x 2m−1 ξ) u(t n , ξ) dξ 
 
0
 
λw2,2 u(tn , x0 )
   
0 0
 
 
   
 λw4,4 u(tn , x0 )  0
   
 
..
   
Bln =   , Brn =  ,
   
0 .
..
   
   
 .   0 
   
 
 0  λw0,2m u(tn , x2m )
 
λw2m,2m u(tn , x0 )
and

 
1 − λw1,2 −λw0,2 0 0 ... 0
 
−λw2,3 1 − λw1,3 −λw0,3 0 ... 0
 
 
.. .. .. .. .. ..
 
A= .
 
. . . . . .
 
 −λw2m−2,2m−1 −λw2m−3,2m−1 . . . 1 − λw1,2m−1 −λw0,2m−1
 
... 
 
−λw2m−1,2m −λw2m−2,2m ... ... −λw2,2m 1 − λw1,2m
135

Here Bln and Brn are determined by the Dirichlet boundary conditions u(tn , x0 ) and
u(tn , x2m ). We then use MATLAB to obtain all the approximate solutions U n , n =
1, 2, . . . , N .

Example 14. Consider [12]

R α
ut (t, x) = 0 Dx u(t, x) + f (t, x), 0 < x < 2, t > 0 (6.5.12)

u(t, 0) = 0, u(t, 2) = 0, (6.5.13)

u(0, x) = 4x2 (2 − x)2 , 0 < x < 1, (6.5.14)

where

−t 2

2 Γ(2 + 1)
−t
f (t, x) = −4e x (2 − x) − 4e 4 x2−α
Γ(2 − α + 1)
Γ(3 + 1) Γ(4 + 1) 
−4 x3−α + x4−α ,
Γ(3 − α + 1) Γ(4 − α + 1)

The exact solution is u(t, x) = 4e−t x2 (2 − x)2 .

By Theorem 6.4.6, we have

|eN |∞ = |U N − u(tN )|∞ ≤ C(∆t + ∆xγ ), with γ = min(3 − α, β),

where |eN |∞ denotes the L∞ -norm of the error at time tN = 1. In our numerical example,
we know the exact solution u, so we can exactly calculate ρn . In general, we may need
to approximate ρn by using the computed solutions U n with some higher order numerical
methods.
To observe the convergence order with respect to ∆x, we choose ∆t = 2−10 sufficiently
small and the different space stepsizes hl = ∆x = 2−l , l = 3, 4, 5, 6, 7. Hence the error
will be dominated by ∆xγ . Now let |eN
l |∞ = |U
N
− u(tN )|∞ denote the L∞ -norm at
tN = 1 obtained by using the space stepsize hl . For the fixed space stepsize hl = 2−l , l =
3, 4, 5, 6, 7, we have

γ
|eN
l |∞ ≈ Chl , (6.5.15)

which implies that

|eN
l |∞ hγl
≈ γ = 2γ .
|eN |
l+1 ∞ hl+1
136

Hence the convergence order satisfies


 |eN | 

γ ≈ log2 Nl . (6.5.16)
|el+1 |∞
In Table 6.5.1, we obtain the experimentally determined orders of convergence (EOC) for
the different α = 1.2, 1.4, 1.6, 1.8. We see that the convergence order is almost 3 − α which
is consistent with our theoretical convergence order γ = min(3 − α, β). The order 3 − α
term dominates the convergence order in this example. Here and below we will call our
numerical method “the Shifted Diethelm method ”.
In Figures 6.5.1- 6.5.2, we plot the convergence orders with α = 1.20 and α = 1.80,
respectively. The convergence order is O(∆x3−α ) as produced in the Table 6.5.1.

∆t ∆x α = 1.2 α = 1.4 α = 1.6 α = 1.8


2−10 2−3
2−10 2−4 1.5009 1.5203 1.4714 1.5419
2−10 2−5 1.5813 1.4978 1.3432 1.3221
2−10 2−6 1.7058 1.5597 1.3262 1.2168
2−10 2−7 1.8136 1.6285 1.3504 1.1905

Table 6.5.1: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 14 by using the shifted Diethelm method

In [66], the shifted Grünwald difference operator



X
−α (α)
Aαh,p u(x) = ∆x gk u(x − (k − p)∆x)
k=0

approximates the Riemann-Liouville fractional derivative uniformly with first order accu-
racy, i.e.,

Aαh,p u(x) = R α
−∞ Dx u(x) + O(∆x),
(α)
where p is a positive integer and gk = (−1)k ( αk ). Considering a well defined function
u(x) on a bounded interval [a, b] if u(a) = 0 or u(b) = 0, the function u(x) can be
zero extended for x < a or x > b. And then the α order left and right Riemann-
Liouville fractional derivatives of u(x) at each point x can be approximated by the shifted
137

Convergence order, the reference line has slope $3− α$


0

−1

−2

−3
log2(error)
−4

−5

−6

−7

−8

−9
−5 −4.5 −4 −3.5 −3 −2.5 −2 −1.5 −1
log2(∆ x)

Figure 6.5.1: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 14 with α = 1.20

Convergence order, the reference line has slope $3− α$


1

−1
log2(error)

−2

−3

−4

−5

−6
−5 −4.5 −4 −3.5 −3 −2.5 −2 −1.5 −1
log2(∆ x)

Figure 6.5.2: The experimentally determined orders of convergence (“EOC ”) at t = 1 in


Example 14 with α = 1.80
138

Grünwald difference operator Aαh,p u(x). In [90], the authors introduced a weighted and
shifted Grünwald difference operator which has second order accuracy to approximate
the Riemann-Liouville fractional derivative. However the approximation of the left or
right Riemann-Liouville fractional derivatives in [66, 90] by using the shifted Grünwald
difference operator on finite interval [a, b] requires that u(a) = 0 or u(b) = 0 respectively.
In Table 6.5.2, we obtain the experimentally determined orders of convergence (EOC) for
the different α = 1.2, 1.4, 1.6, 1.8 by using the Grünwand difference method in [66]. We
only observe the first order convergence.

∆t ∆x α = 1.2 α = 1.4 α = 1.6 α = 1.8


2−10 2−3
2−10 2−4 0.8970 0.9660 1.1971 1.7665
2−10 2−5 0.9304 0.9997 1.0878 1.4690
2−10 2−6 0.9571 1.0004 1.0340 1.1946
2−10 2−7 0.9792 1.0033 1.0166 1.0674

Table 6.5.2: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 14 by using the shifted Grünwald method

Example 15. We consider the same equation as in Example 14, but with the nonhomo-
geneous Dirichlet boundary condition,

R α
ut (t, x) = 0 Dx u(t, x) + f (t, x), 0 < x < 2, t > 0 (6.5.17)

u(t, 0) = 5, u(t, 2) = 5, (6.5.18)

u(0, x) = 4x2 (2 − x)2 + 5, 0 < x < 1, (6.5.19)

where
 Γ(2 + 1) Γ(3 + 1)
f (t, x) = −4e−t x2 (2 − x)2 − 4e−t 4 x2−α − 4 x3−α
Γ(2 − α + 1) Γ(3 − α + 1)
Γ(4 + 1) Γ(1) 
+ x4−α + 5 x−α .
Γ(4 − α + 1) Γ(1 − α)

The exact solution is u(t, x) = 4e−t x2 (2 − x)2 + 5.


139

We use the same notations as in Example 14. In Table 6.5.3, we obtain the experimen-
tally determined orders of convergence (EOC) for the different α = 1.2, 1.4, 1.6, 1.8. We
see that the convergence order is less than 3 − α. This is because of the nonhomogeneous
boundary conditions.
The approximation of the Riemann-Liouville fractional derivative by using the Grünwald
difference operator on [a, b] in Meerschaert and Tadjeran [66] requires that the function
has the zero extension for x < a and x > b. Hence we require that the function should have
zero boundary conditions on the finite interval in order to get good approximation of the
fractional derivative of such function by using the Grünwald difference operator. In this
example, since the Dirichlet boundary conditions are not homogeneous, we observe that
in Table 6.5.4 the convergence order of the algorithm by using the Grünwald difference
method is rather low. However the shifted Diethelm method works well for the nonhomo-
geneous Dirichlet boundary conditions and the convergence order is approximately equal
to 1 in this example. This is another advantage of using the shifted Diethelm’s method
compared with the Grünwald difference method in Meerschaert and Tadjeran [66].

∆t ∆x α = 1.2 α = 1.4 α = 1.6 α = 1.8


2−10 2−3
2−10 2−4 1.4510 1.4687 1.5479 1.6511
2−10 2−5 1.4388 1.2905 1.2426 1.2030
2−10 2−6 1.3686 1.1039 0.9791 1.0037
2−10 2−7 1.0667 0.8199 0.7011 0.7089

Table 6.5.3: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 15 by using the shifted Diethelm method

Example 16. Consider [12]

R α
ut (t, x) = 0 Dx u(t, x) + f (t, x), 0 < x < 1, t > 0 (6.5.20)

u(t, 0) = 0, u(t, 1) = e−t , (6.5.21)

u(0, x) = xα1 , 0 < x < 1, (6.5.22)


140

∆t ∆x α = 1.2 α = 1.4 α = 1.6 α = 1.8


2−10 2−3
2−10 2−4 0.7821 0.3548 0.6070 1.1859
2−10 2−5 0.5424 0.2148 0.2738 0.5377
2−10 2−6 0.4045 0.2604 0.2348 0.2664
2−10 2−7 0.3801 0.3191 0.2580 0.1939

Table 6.5.4: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 15 by using the shifted Grünwald method

where

Γ(α1 + 1)
f (t, x) = −e−t xα1 − e−t xα1 −α .
Γ(α1 + 1 − α)

The exact solution is u(t, x) = e−t xα1 . In our numerical simulations, we first consider
the nonsmooth solutions with α1 = α. we then consider the smooth solutions with α1 = 3.

For the case α1 = α, we have


Z x
R α α1 2

R α−2

α1 2 1
0 Dx (x ) =D 0 Dx (x ) = D (x − τ )1−α τ α1 dτ = CD2 (x2 ) = C,
Γ(2 − α) 0

for some constant C, which implies that the following Lipschitz condition holds for any
β > 0,

R α
0 Dx u(t, x) −R α β
0 Dy u(t, y) = 0 ≤ C|x − y| .

In Table 6.5.5, we obtain the experimentally determined orders of convergence (EOC)


for the different α = 1.2, 1.4, 1.6, 1.8. We see that the convergence order is less than 3 − α.
This is because the exact solution u is not sufficiently smooth in this case.
For the case α1 = 3, we obtain, in Table 6.5.6, the experimentally determined orders
of convergence (EOC) for the different α = 1.2, 1.4, 1.6, 1.8. We see that the convergence
order is almost 3 − α.
141

∆t ∆x α = 1.2, α1 = 1.2 α = 1.4, α1 = 1.4 α = 1.6, α1 = 1.6 α = 1.8, α1 = 1.8


2−10 2−3
2−10 2−4 1.2981 1.1479 1.0375 0.9583
2−10 2−5 1.4639 1.3352 1.1884 1.0637
2−10 2−6 1.4405 1.4178 1.2836 1.1379
2−10 2−7 1.2192 1.4118 1.3292 1.1831

Table 6.5.5: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 16 for α1 = α

∆t ∆x α = 1.2, α1 = 3 α = 1.4, α1 = 3 α = 1.6, α1 = 3 α = 1.8, α1 = 3


2−10 2−3
2−10 2−4 1.3625 1.2416 1.1532 1.0745
2−10 2−5 1.5740 1.3951 1.2398 1.1111
2−10 2−6 1.7143 1.5008 1.3099 1.1440
2−10 2−7 1.8557 1.5754 1.3585 1.1690

Table 6.5.6: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 16 for α1 = 3

Example 17. Consider the same equation as in Example 16, but with nonhomogeneous
boundary conditions.

R α
ut (t, x) = 0 Dx u(t, x) + f (t, x), 0 < x < 1, t > 0 (6.5.23)

u(t, 0) = 1, u(t, 1) = e−t + 1, (6.5.24)

u(0, x) = xα1 + 1, 0 < x < 1, (6.5.25)

where

Γ(α1 + 1)
f (t, x) = −e−t xα1 − e−t xα1 −α .
Γ(α1 + 1 − α)
142

The exact solution is u(t, x) = e−t xα1 + 1. In our numerical simulations, we consider
the smooth solution with α1 = 3.

In Table 6.5.7, we obtain the experimentally determined orders of convergence (EOC)


for the different α = 1.2, 1.4, 1.6, 1.8. We see that the convergence order is almost 3 − α
even under the nonhomogeneous boundary conditions.

∆t ∆x α = 1.2, α1 = 3 α = 1.4, α1 = 3 α = 1.6, α1 = 3 α = 1.8, α1 = 3


2−10 2−3
2−10 2−4 1.3961 1.2732 1.1686 1.0791
2−10 2−5 1.5847 1.4090 1.2514 1.1165
2−10 2−6 1.7003 1.5015 1.3149 1.1474
2−10 2−7 1.7823 1.5581 1.3562 1.1698

Table 6.5.7: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 17 for α1 = 3

Example 18. Consider the following two-sided space-fractional partial differential equa-
tion, [66]

ut (t, x) = c+ (t, x) R α R α
0 Dx u(t, x) + c− (t, x) x D1 u(t, x) + f (t, x), 0 < x < 2, t > 0
(6.5.26)

u(t, 0) = u(t, 2) = 0, (6.5.27)

u(0, x) = 4x2 (2 − x)2 , 0 < x < 2, (6.5.28)

where

c+ (t, x) = Γ(1.2)x1.8 and c− (t, x) = Γ(1.2)(2 − x)1.8

−t

2 2 3 25 4
3 4

f (t, x) = −32e x + (2 − x) − 2.5(x + (2 − x) ) + (x + (2 − x) ) .
22

The exact solution is u(t, x) = 4e−t x2 (2 − x)2 .


143

We use the same notations as in Example 14. In Table 6.5.8, we obtain the experi-
mentally determined orders of convergence (EOC) for the different α = 1.2, 1.4, 1.6, 1.8.
We see that the convergence order is almost 3 − α. The order 3 − α term dominates the
convergence order in this example.

∆t ∆x α = 1.2 α = 1.4 α = 1.6 α = 1.8


2−10 2−3
2−10 2−4 1.3872 1.2531 1.1841 1.1424
2−10 2−5 1.5540 1.2676 1.1884 1.1425
2−10 2−6 1.6878 1.4151 1.2607 1.1280
2−10 2−7 1.7892 1.4580 1.1861 1.1961

Table 6.5.8: The experimentally determined orders of convergence (EOC) at t = 1 in


Example 18 by using the shifted Diethelm method
Chapter 7

Conclusions and possibilities for


further work

This thesis has extended the existing numerical methods (Diethelm’s numerical method
and Fractional Adams-type method) to obtain higher orders convergence in the solution
to fractional order differential equations.
Applying quadratic interpolation polynomial to discretize Hadamard finite-part inte-
gral in Diethelm’s method the convergence order is O(h3−α ) when, 0 < α < 1, whereas,
the existing order of convergence is O(h2−α ) when, 0 < α < 1. And in the Adams-type
approximation method we have found the convergence order is O(h1+2α ) for 0 < α < 1
and O(h3 ) for 1 < α < 2 which are higher than the existing results. The advantage of
the method is we can solve non-linear fractional differential equations as well as linear
fractional differential equations and we can avoid non-linear calculations in the Newton
iteration process.
In Chapter 5 the Richardson extrapolation algorithm was discussed as a tool to accel-
erate the order of convergence for our considered numerical methods. The extrapolation
algorithm is applicable if the sequence of the approximate solutions of the problem pos-
sesses an asymptotic expansion and it was proved that the two approximate methods that
we considered possess an asymptotic expansion. We also discussed how to approximate
the initial value and the initial integrals of the proposed numerical methods.
Finally, we consider the finite difference method for solving space-fractional partial
differential equations. We proved that both the standard explicit finite difference method

144
145

and implicit finite difference methods are unconditionally unstable. To find a stable
finite difference method we introduce implicit shifted Diethelm finite difference method
for solving two-sided space-fractional partial differential equations. We proved that, the
method is unconditionally stable and the order of convergence of the finite difference
method is O(∆t + ∆xmin(3−α,β) ), 1 < α < 2, β > 0, where ∆t, ∆x denote the time and
space stepsizes, respectively.
The importance of research into fractional order differential equations and their signif-
icance to future applications warrant continued study. We propose some possible research
topics in this active research area:

• Higher order numerical methods for solving fractional differential equation with
variable steps.

• Higher order numerical methods for solving time-fractional PDEs.

• Higher order numerical methods for solving time-space-fractional PDEs.


Bibliography

[1] T. J. Anastasio, The fractional-order dynamics of brainstem vestibulo-oculomotor


neurons , Biological Cybernetics, 72(1994), 69-79.

[2] B. Baeumer, M. Kovács and M. M. Meerschaert, Numerical solutions for fractional


reaction-diffusion equations, Comput. Math. Appl., 55(2008), 2212-2226.

[3] R. L. Bagley, R. A. Calico, Fractional order state equations for the control of vis-
coelastic structures, Journal of Guidance, Control and Dynamics, 14 (1991) 304-
311.

[4] D. Baleanu, K. Diethelm, E. Scalas and J. J. Trujillo, Fractional Calculus: Mod-


els and Numerical Methods, Series on Complexity, Nonlinearity and Chaos, World
Scientific, Vol 3 (2012).

[5] L. Blank, Numerical treatment of differential equations of fractional order, Manch-


ester Centre for Computational Mathematics, Numerical Analysis Report, (1996).

[6] C. Brezinski, A general extrapolation algorithm, Numer. Math., 35(1980), 175-187.

[7] C. Brezinski and M. Redivo Zaglia, Extrapolation Methods, Theory and Practice,
Elsevier Science Publishers, North Holland, (1991).

[8] A. Bueno-Orovio, D. Kay, V. Grau, B. Rodriguez and K. Burrage, Fractional


diffusion models of cardiac electrical propagation: role of structural heterogeneity
in dispersion of repolarization, Journal of the Royal Society Interface. 11 (2014)
http://dx.doi.org/10.1098/rsif.2014.0352.

[9] J. Cao and C. Xu, A high order schema for the numerical solution of the fractional
ordinary differential equations, J. Comp. Phys., 238(2013), 154-168.

146
147

[10] R. Caponetto, G. Dongola, L. Fortuna and I. Petráš, Fractional order systems:


Modelling and Control Applications, Singapore: World Scientific Series on Nonlinear
Science, Series A (2010), Vol 72.

[11] S. Chen and F. Liu, ADI-Euler and extrapolation methods for the two-dimension
fractional advection-dispersion equation, J. Appl. Math. Comput., 26 (2008), 295-
311.

[12] H. W. Choi, S. K. Chung and Y. J. Lee, Numerical solutions for space fractional dis-
persion equations with nonlinear source terms, Bull. Korean Math. Soc., 47 (2010),
1225-1234.

[13] J. A.Connolly, The numerical solution of fractional and distributed order differential
equations, University of Liverpool (University of Chester), Dec-2004.

[14] O. D. Craiem and R. L. Magin, Fractional order models of viscoelasticity as an


alternative in the analysis of red blood cell (RBC) membrane mechanics, Phys. Bios.,
7 (2010), 13001.

[15] O. D. Craiem, F. J. Rojo, J. M. Atienza, R. L. Armentano and G. V. Guinea,


Fractional-order viscoelasticity applied to described uniaxial stress relaxation of hu-
man arteries, Physics in Medicine and Biology, 53 (2008), 4543-4554.

[16] O. D. Craiem and R. L. Armentano, A fractional derivative model to described


arterial viscoelasticity, Biorheology, 44 (2007), 251-263.

[17] L. Debnath, Recent applications of fractional calculus to science and engineering,


Hindawi Publication Corp., 54(2003), 3413-3442.

[18] W. H. Deng, Numerical algorithm for the time fractional Fokker-Planck equation,
J. Comp. Phys., 227(2007), 1510-1522.

[19] W. H. Deng, Short memory principle and a predict-corrector approach for fractional
differential equations, J. Comput. Appl. Math., 206(2007), 174-188.

[20] W. H. Deng, Finite element method for the space and time fractional Fokker-Planck
equation, SIAM J. Numer. Anal., 47(2008), 204-226.
148

[21] W. H. Deng and J. S. Hesthaven, Discontinuous Galerkin methods for frac-


tional diffusion equations, ESAIM: Mathematical Modelling and Numerical analysis,
47(2013), 1845-1864.

[22] W. H. Deng and C. Li, Numerical schemes for fractional ordinary differential equa-
tions, Numerical Modelling, edited by: Prof. Peep Miidla, Chapter 16, 355-374,
Publisher InTech, 2012.

[23] T. C. Doegring, A. D. Freed, E. O. Carew, I. Vesely, et al, Fractional-order vis-


coelasticity of the aortic valve cusp: an alternative to quasilinear viscoelasticity, J.
Biomech. Eng., 127(2005), 700-708.

[24] K. Diethelm, Generalized compound quadrature formulae for finite-part integral,


IMA J. Numer. Anal., 17 (1997) 479- 493.

[25] K. Diethelm, The Analysis of Fractional Differential Equations, An Application-


Oriented Using Differential Operators of Caputo Type, Lecture Notes in Mathemat-
ics, Springer, (2010).

[26] K. Diethelm, An algorithm for the numerical solution of differential equation of


fractional order, Electron . Trans. Numer. Anal., 5 (1997) 1 - 6.

[27] K. Diethelm, N.J. Ford, Analysis of fractional differential equations, J. Math. Anal.
Appl., 265 (2002) 229 -248.

[28] K. Diethelm, N.J. Ford, A.D. Freed, Detailed error analysis for a fractional Adams
method, Numerical Algorithms, 36 (2004), 31 -52.

[29] K. Diethelm, N. J. Ford, A.D. Freed, A predictor-corrector approach for the numer-
ical solution of fractional differential equations, Nonlinear Dynamics, 29 (2002), 3-
22.

[30] K. Diethelm, J.M. Ford, N.J. Ford and M. Weilbeer, Pitfalls in fast numerical solvers
for fractional differential equations, J. Comp. Appl. Math., 186(2006), 482-503.

[31] K.Diethelm and A.D. Freed, On the solution of nonlinear fractional-order differ-
ential equations used in the modelling of viscoelasticity, in ”Scientific Computing
149

in Chemical Engineering II-Computational Fluid Dynamics, Reaction Engineering


and Molecular Properties” (F.Keil, W.Mackens, H. Voss and J. Werther, Eds.),
Springer-Verlage, Heidelberg, (1999), 217-224.

[32] K.Diethelm and Y. Luchko, Numerical solution of linear multi-term initial value
problems of fractional order, J. Comput. Anal. Appl., 6(2004), 243-263.

[33] K. Diethelm and G. Walz Numerical solution of fractional order differential equa-
tions by extrapolation, Numerical Algorithms, 16 (1997) 231 - 253.

[34] Y. Dimitrov, Numerical approximations for fractional differential equations, Journal


of Fractional Calculus and Applications, 5(2014), 1-45.

[35] D. Elliot, An asymptotic analysis of two algorithms for certain Hadamard finite-part
integrals, IMA J. Numerical Anal., 13 (1993) 445- 462.

[36] A. Erdelyi, W.Magnus, F. Oberhettinger, and F.G. Tricomi, Higher Transcendental


Functions, Vol. 3, McGraw-Hill, New York, 1955.

[37] V. J. Ervin, N. Heuer and J. P. Roop, Numerical approximation of a time dependent


nonlinear, space-fractional diffusion equation, SIAM J. Numer. Anal., 45(2007), 572-
591.

[38] V. J. Ervin and J. P. Roop, Variational formulation for the stationary frac-
tional advection dispersion equation, Numer. Methods Partial Differential Equa-
tions, 22(2006), 558-576.

[39] V. J. Ervin and J. P. Roop, Variational solution of fractional advection dispersion


equations on bounded domains in Rd , Numer. Methods Partial Differential Equa-
tions, 23(2007), 256-281.

[40] G.J. Fix and J. P. Roop, Least squares finite-element solution of a fractional order
two-point boundary value problem, Comput. Math. Appl., 48(2004), 1017-1033.

[41] N. J. Ford, M. L. Morgado, M. Rebelo, Nonpolynomial collocation approximation


of solutions to fractional differential equations, Fractional Calculus and Applied
Analysis, 16(2013), 874-891.
150

[42] N. J. Ford, K. Pal and Y. Yan, An algorithm for the numerical solution of two-sided
space-fractional partial differential equations, Computational Methods in Applied
Mathematics, 15(2015), 497-514.

[43] N. J. Ford, M. M. Rodrigues, J. Xiao and Y. Yan, Numerical analysis of a two-


parameter fractional telegraph equation, Journal of Computational and Applied
Mathematics, 249(2013), 95-106.

[44] N. J. Ford and A. C. Simpson, The numerical solution of fractional differential


equations: speed versus accuracy, Numer. Algorithms, 26(2001), 333-346.

[45] N. J. Ford, J. Xiao and Y. Yan, A finite element method for time fractional partial
differential equations, Fractional Calculus and Applied Analysis, 14 (2011), 454-474.

[46] N. J. Ford, J. Xiao and Y. Yan, Stability of a numerical method for space-time-
fractional telegraph equation, Computational Methods in Applied Mathematics,
12(2012), 273-288.

[47] R. Gorenflo, Fractional Calculus: Some Numerical Methods, CISM Lecture Notes,
1996.

[48] R. Gorenflo, and F. Mainardi, Random walk models for space-fractional diffusion
process, Fractional Calculus and Applied Analysis, 1(1998), 167-191.

[49] R. Gorenflo, F. Mainardi, Fractional calculus: Integral and differential equations of


fractional order, Springer Verlag, Wien and New York, 1997.

[50] M. Ichise, Y. Nagayanagi, T. Kojima, An analog simulation of non-integer order


transfer functions for analysis of electrode processes, J. Electroanalytical Chemistry
and Interfacial Electrochemistry, 33(1971), 253- 263.

[51] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and Applications of Fractional


Differential Equations, Elsevier, 2006.

[52] P. Kumar, O.P. Agrawal, An approximate method for numerical method of fractional
differential equation, Signal Proc., 86 (2006), 2602- 2610.
151

[53] CH. Lubich, A stability analysis of convolution quadraturea for Abel-Volterra integral
equations, IMA Journal Numer. Anal., 6 (1986), 87 - 101.

[54] CH. Lubich, Fractional linear multi-step methods for Abel-Volterra integral equations
of the second kind, Math. Comp., 45(1985), 463 - 469.

[55] CH. Lubich, Discretized fractional calculus, SIAM J. Math. Anal., 17 (1986), 704 -
719.

[56] J. N. Lyness, Finite-part integrals and the Euler-MacLaurin expansion. In R. V. M.


Zahar (ed.): Approximation and Computation, Internat. Ser. Numer. Math. 119,
Birkhäuser, Basel, 1994, 397-407.

[57] X. J. Li and C. J. Xu, A space-time spectral method for the time fractional diffusion
equation, SIAM J. Numer. Anal., 47(2009), 2108-2131.

[58] X. J. Li and C. J. Xu , Existence and uniqueness of the weak solution of the space-
time fractional diffusion equation and a spectral method approximation, Commun.
Comput. Phys., 8(2010), 1016-1051.

[59] A. Le Mehaute and G. Crepy, Introduction to transfer and motion in fractal media:
the geometry of kinetics, Solid State Ionics, 9-10(1983) 17-30.

[60] C. P. Li and F. Zeng, Finite difference methods for fractional differential equations,
International Journal of Bifurcation and Chaos, 22(2012), 1230014 (28 pages).

[61] F. Liu, V. Anh and I. Turner, Numerical solution of space fractional Fokker-Planck
equation, J. Comp. Appl. Math., 166(2004), 209-219.

[62] V. E. Lynch, B. A. Carreras, D. del-Castillo-Negrete, K. M. Ferreira-Mejias and H.


R. Hicks , Numerical methods for the solution of partial differential equations of
fractional order, J. Comput. Phys., 192(2003), 406-442.

[63] F. Mainardi, M. Raberto, R. Gorenflo and E. Scalas, Fractional calculus and


continuous-time finance II: the waiting-time distribution, Physica, 287(2000), 468-
481.
152

[64] M. M. Meerschaert and C. Tadjeran, Finite difference approximation for fractional


advection-dispersion flow equations, J. Comput. Appl. Math., 172 (2004), 65-77.

[65] M. M. Meerschaert, H. Scheffler and C. Tadjeran, Finite difference methods for


two-dimensional fractional dispersion equation, J. Comput. Phys., 211(2006), 249-
261.

[66] M. M. Meerschaert and C. Tadjeran, Finite difference approximations for two-


sided space-fractional partial differential equations, Applied Numerical Mathematics,
56(2006), 80-90.

[67] M. M. Meerschaert and E. Scalas, Coupled continuous time random walks in finance,
Physica A, 370(2006), 114-118.

[68] R. Metzler and J. Klafter, The restaurant at the end of the random walk: recent
developments in the description of anomalous transport by fractional dynamics, J.
Phys. A: Math. Gen., 37(2004), R161-R208.

[69] K. J. Maloy, J. Feder, F. Boger, and T. Jossang, Fractional structure of hydrody-


namic dispersion in porous media, Phys. Rev. Lett., 61(1988), 2925-2928.

[70] Z. M. Odibat, Computational algorithms for computing the fractional derivatives of


functions, Mathematics and Computers in Simulation, 79(2009), 2013-2020.

[71] Z. M. Odibat, Approximations of fractional integrals and Caputo fractional deriva-


tives, Applied Mathematics and Computation, 178(2006), 527-533.

[72] K. Oldham and J. Spanier, The Fractional Calculus, Academic Press, San Diego,
1974.

[73] K. Pal, F. Liu and Y. Yan, Numerical solutions for fractional differential equations
by extrapolation, Lecture Notes in Computer Science, Springer, 9045(2015), 299-306.

[74] K. Pal, F. Liu, Y. Yan and G. Roberts, Finite difference method for two-sided
space-fractional partial differential equations, Lecture Notes in Computer Science,
Springer, 9045(2015), 307-314.

[75] P. Perdikaris and G. E. Karniadakis, Fractional-order viscoelasticity in
one-dimensional blood flow models, Annals of Biomedical Engineering, 42(2014),
1012-1023.

[76] I. Podlubny, Fractional Differential Equations, Mathematics in Science and
Engineering, Vol. 198, Academic Press, 1999.

[77] I. Podlubny, A. Chechkin, T. Skovranek, Y. Q. Chen and B. Vinagre, Matrix
approach to discrete fractional calculus II: partial fractional differential equations,
J. Comput. Phys., 228(2009), 3137-3153.

[78] D. A. Robinson, The use of control systems analysis in the neurophysiology of eye
movements, Ann. Rev. Neurosci., 4(1981), 463-503.

[79] S. Shen, F. Liu and V. Anh, Numerical approximations and solution techniques
for the space-time Riesz-Caputo fractional advection-diffusion equation, Numerical
Algorithms, 56(2011), 383-403.

[80] S. Shen, F. Liu, V. Anh and I. Turner, The fundamental solution and numerical
solution of the Riesz fractional advection-dispersion equation, IMA J. Appl. Math.,
73(2008), 850-872.

[81] S. Shen, F. Liu, V. Anh, I. Turner and J. Chen, A novel numerical approximation
for the space fractional advection-dispersion equation, IMA Journal of Applied
Mathematics, 79(2014), 431-444.

[82] D. P. Simpson, I. W. Turner and M. Ilic, A generalised matrix transfer technique for
the numerical solution of fractional-in-space partial differential equations, Preprint
(2007).

[83] E. Sousa, Finite difference approximations for a fractional advection diffusion
problem, J. Comput. Phys., 228(2009), 4038-4054.

[84] E. Sousa, How to approximate the fractional derivative of order 1 < α ≤ 2,
International Journal of Bifurcation and Chaos, 22(2012), 1250075 (13 pages), DOI:
10.1142/S0218127412500757.

[85] E. Sousa and C. Li, A weighted finite difference method for the fractional diffusion
equation based on the Riemann-Liouville derivative, Applied Numerical Mathematics,
90(2015), 22-37.

[86] L. J. Su, W. Q. Wang and Q. Y. Xu, Finite difference methods for fractional
dispersion equations, Applied Mathematics and Computation, 216(2010), 3329-3334.

[87] H. H. Sun, A. A. Abdelwahab, B. Onaral, Linear approximation of transfer function
with a pole of fractional order, IEEE Trans. Automat. Control, AC-29(1984), 441-444.

[88] C. Tadjeran, M. M. Meerschaert and H. Scheffler, A second-order accurate
numerical approximation for the fractional diffusion equation, J. Comput. Phys.,
213(2006), 205-213.

[89] C. Tadjeran, M. M. Meerschaert, A second-order accurate numerical method for
the two-dimensional fractional diffusion equation, J. Comput. Phys., 220(2007),
813-823.

[90] W. Tian, H. Zhou and W. H. Deng, A class of second order difference
approximations for solving space fractional diffusion equations, Math. Comp.,
84(2015), 1703-1727.

[91] B. West and V. Seshadri, Linear systems with Lévy fluctuations, Physica A,
113(1982), 203-216.

[92] G. Walz, Asymptotics and Extrapolation, Akademie-Verlag, Berlin, 1996.

[93] Y. Yan, K. Pal and N. J. Ford, Higher order numerical methods for solving fractional
differential equations, BIT Numer. Math., 54(2014), 555-584.

[94] Q. Yang, F. Liu and I. Turner, Numerical methods for fractional partial differential
equations with Riesz space fractional derivatives, Appl. Math. Model., 34(2010),
200-218.

[95] L. Zhao and W. H. Deng, Jacobi-predictor-corrector approach for the fractional
ordinary differential equations, arXiv:1201.5952v2 [math.NA], 2012.
