
Rowan University

Rowan Digital Works

Theses and Dissertations

12-15-2000

Fractional derivatives
John M. Beach
Rowan University

Follow this and additional works at: https://rdw.rowan.edu/etd

Part of the Mathematics Commons

Recommended Citation
Beach, John M., "Fractional derivatives" (2000). Theses and Dissertations. 1630.
https://rdw.rowan.edu/etd/1630

This Thesis is brought to you for free and open access by Rowan Digital Works. It has been accepted for inclusion
in Theses and Dissertations by an authorized administrator of Rowan Digital Works. For more information, please
contact [email protected].
FRACTIONAL DERIVATIVES

By
John M. Beach

A Thesis

Submitted in partial fulfillment of the requirements of the


Master of Arts Degree
of
The Graduate School
at
Rowan University
December 15, 2000

Approved by
Professor

Date Approved


ABSTRACT

John M. Beach

Fractional Derivatives

Year: 2000

Advisor: Dr. Marcus W. Wright


Mathematics Program

In this thesis, the reader will not find a study of any kind; there is no methodology, questionnaire, interview, test, or data analysis. This thesis is simply a research paper on fractional derivatives, a topic that I have found to be fascinating. The reader should be delighted by a short history of the topic in Chapter 1, where he/she will read about the contributions made by some of the great mathematicians of the last three centuries.

In Chapter 2 the reader will find an intuitive approach for finding the general fractional derivative for functions such as e^(ax), x^p, and f(x). Other topics in Chapter 2 include branch lines and the Weyl Transform. All of the work performed by the intuitive approach is backed up by a rigorous approach using Complex Analysis in Chapter 3. In Chapter 4 the reader will find an excellent application of fractional derivatives in solving the tautochrone problem.

No paper on fractional derivatives could be complete without a chapter (5) on Oliver Heaviside. Heaviside's thoughts on rigorous formalism and his use of non-logical mathematics should delight the reader.

Lastly, the reader should enjoy my final thoughts on this topic as well as Heaviside's thoughts.
MINI-ABSTRACT

John M. Beach

Fractional Derivatives

Year: 2000

Advisor: Dr. Marcus W. Wright

Mathematics Program

In this thesis, the reader will not find a study of any kind; there is no methodology, questionnaire, interview, test, or data analysis. This thesis is simply a research paper on fractional derivatives, a topic that I have found to be fascinating. The reader should enjoy my final thoughts on this topic as well as my thoughts on Oliver Heaviside.
Acknowledgment

The author wishes to express his indebtedness to the following: Dr. Marcus W. Wright of the Rowan University Mathematics Department, for his guidance and encouragement throughout my years at the University, and Professor Thomas J. Osler, from the same institution, for his valuable references and lectures, without which this paper would not have been possible. Also, I express my indebtedness to my wife Lori for her patience and understanding due to the lack of time that we had together; also, to my family and friends for their understanding and encouragement, and to my mother for her countless prayers.

Lastly, I wish to express my indebtedness to my two brothers Frank and Robert for their endless pushing (or should I say nagging) and encouragement, with special thanks to Frank for his excellent proofreading, and to Robert for talking me into going back to school six and a half years ago, without which none of this would have been possible.
CONTENTS

I. A Short History of Fractional Derivatives
1. Leibniz 1690's
2. Euler 1738
3. Lacroix 1819
4. Fourier 1822
5. Abel 1823
6. Liouville 1832
7. The Fight of the Decade
8. Riemann (late 1800's)
9. Laurent 1884
10. Heaviside 1892
11. 1892 to 1974
12. The Great Explosion of 1974
13. References

II. An Intuitive Approach
1. Derivatives of e^(ax)
2. Derivatives of x^p
3. Derivatives of f(x)
4. The Big Fix
5. The Weyl Transform
6. References

III. A Rigorous Approach Using Complex Analysis
1. Riemann-Liouville Integral
2. Cauchy's Integral Formula
3. "Swinging" the Branch Line
4. Recap of the Cauchy and Riemann-Liouville Integrals
5. References

IV. The Tautochrone Problem
1. The Rules
2. The Potential Energy Function
3. The Conservation of Energy Law
4. The Law of Indexes
5. "Canned" Integrals from the Mathematical Handbook, by Murray R. Spiegel
6. Trigonometry Review
7. The Tautochrone is Solved
8. References

V. Oliver Heaviside
1. Heaviside's Thoughts on the Study of Mathematics and Physics
2. Heaviside's Thoughts on Rigorous Formalism
3. Lord Rayleigh's Thoughts on Mathematical Rigor
4. The Current in a Wire
5. Heaviside's Use of Non-Logical Mathematics
6. Heaviside's First Proof
7. Heaviside's Second Proof
8. Heaviside's Second Use of Non-Logical Mathematics
9. Heaviside's Statement to Justify His Unsound Mathematical Proof
10. The Writer's Closing Thoughts on Oliver Heaviside
11. References

Conclusion

Bibliography
Chapter 1

A Short History of Fractional Derivatives

Leibniz 1690's

The origin of fractional derivatives is not certain at this time, but we do know that Leibniz, the inventor of the notation dy/dx, had "toyed" with the idea in the 1600's. In 1695 L'Hopital asked Leibniz: "What if n be 1/2?" Surprisingly Leibniz [1] replied: "...You can see by that, sir, that one can express by an infinite series a quantity such as d^(1/2)xy or d^(1:2)xy. Although infinite series and geometry are distant relations, infinite series admits only the use of exponents that are positive and negative integers, and does not, as yet, know the use of fractional exponents..." As with most great mathematicians, Leibniz had a unique insight into the unknown. He stumbled onto fractional derivatives realizing that one day great things would come from his work. What they would be, he had no idea. In the same letter he continued: "...Thus it follows that d^(1/2)x will be equal to x√(dx : x). This is an apparent paradox from which, one day, useful consequences will be drawn..."

Leibniz's insight did not stop there. Three years later, in a letter to John Wallis, he discussed ways of using fractional derivatives in Wallis's infinite product for π/2. He states [2]: "...Differential calculus might have been used to achieve this result..." It should be evident that Leibniz did not have just a passing thought on fractional derivatives; he must have spent a considerable amount of time on the topic. I wish I were a fly on the wall in the 1600's.
Euler 1738

Euler, another great mathematician, toyed with the idea of fractional derivatives. Forty-three years after Leibniz went public with his controversial ideas of fractional derivatives, Euler stated in his 1738 dissertation [3]: "...When n is a positive integer, and if p should be a function of x, the ratio d^n p to dx^n can always be expressed algebraically, so that if n = 2 and p = x^3, then d^2 x^3 to dx^2 is 6x to 1. Now it is asked what kind of ratio can then be made if n be a fraction. The difficulty in this case can easily be understood. For if n is a positive integer d^n can be found by continued differentiation. Such a way, however, is not evident if n is a fraction. But yet with the help of interpolation which I have already explained in this dissertation, one may be able to expedite the matter..."

Searching through several books on this topic, I found only one "hit" in the 80 years after Euler's dissertation; that "hit" was Laplace. In 1812 Laplace mentioned fractional derivatives, in passing, by means of integrals. If he were around today I am sure he would be sorry he did not do more on the subject.
Lacroix 1819

In 1819 Lacroix wrote a 700-page textbook on differential and integral calculus. He stumbled over fractional derivatives in a two-page exercise: he develops the nth derivative and then generalizes it with the gamma function. He finishes the exercise with an example for when y = x and n = 1/2; he obtained:

d^(1/2)y / dx^(1/2) = 2√x / √π

It appears to me that Lacroix "missed the boat" on fractional derivatives. This will become evident to the reader in Chapter 2, equation (a8).
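As a quick check (a Python sketch, not part of the original exercise), Lacroix's value for y = x and n = 1/2 agrees with the gamma-function generalization that appears later as equation (a8); the function name below is my own.

```python
from math import gamma, sqrt, pi

def frac_deriv_power(p, a, x):
    """D^a x^p = Γ(p+1)/Γ(p-a+1) · x^(p-a)  (the generalization derived in Chapter 2)."""
    return gamma(p + 1) / gamma(p - a + 1) * x ** (p - a)

x = 2.0
lacroix = 2 * sqrt(x) / sqrt(pi)          # Lacroix's result for y = x, n = 1/2
assert abs(frac_deriv_power(1, 0.5, x) - lacroix) < 1e-12
```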

Fourier 1822

Three years later Fourier wrote about derivatives of arbitrary order, where he generalized his formula using u as an arbitrary number. He obtained:

d^u f(x) / dx^u = (1/2π) ∫ f(α) dα ∫ p^u cos[p(x - α) + uπ/2] dp

(both integrals taken over (-∞, ∞)). He stated [6]: "...The number u that appears above will be regarded as any quantity whatsoever, positive or negative..." Too bad Fourier did not go farther with this topic.
Abel 1823

Up to this point in time mathematicians had only "played" with the notion of fractional derivatives. One year after Fourier, Abel [7] took the proverbial ball and ran with it. While Abel was "toying" with the tautochrone problem he stumbled over the solution by using fractional calculus. Examples of the tautochrone problem will be shown later. Without going too far into the solution, Abel's general integral equation for k is given as follows:

k = ∫_0^x (x - t)^(-1/2) f(t) dt

where k is a known constant for the amount of time it takes a frictionless mass to slide down a curve, no matter where the mass starts. The function f is unknown and will be determined at a later time.

Abel "played" with general integral equations until he came up with the following:

k = √π · d^(-1/2) f(x) / dx^(-1/2)   (where k is a known constant)

Abel used Fourier's integral formulas to solve his problem but never gave him credit for the solution.
Liouville 1832

Nine years after Abel's solution, in 1832, the famous mathematician Liouville published three memoirs, which were the fruit of the first major study of fractional derivatives. Shortly after his three memoirs, Liouville published several papers on theoretical applications using fractional derivatives in the solutions. Liouville began with a well-known formula of his time:

D^m e^(ax) = a^m e^(ax)

He then let v be a derivative of arbitrary order, which yielded:

D^v e^(ax) = a^v e^(ax)

He "played" with it in an intuitive way with derivatives of arbitrary order and expanded the formula in a series until he came up with:

f(x) = Σ_(n=0)^∞ c_n e^(a_n x),  Re a_n > 0   (a)

which yielded:

D^v f(x) = Σ_(n=0)^∞ c_n a_n^v e^(a_n x)

The above formula is sometimes known as Liouville's first formula of fractional derivatives, an intuitive approach of arbitrary order v, where Liouville allowed v to be any number: rational, irrational, or complex. It should be easy to see that Liouville's first formula is applicable only to functions of the form (a).
Liouville may or may not have been aware of the narrowness of his first formula for fractional derivatives, but he came up with a second formula of fractional derivatives. He started with a definite integral:

I = ∫_0^∞ u^(a-1) e^(-xu) du,  a > 0, x > 0

He "played" with the formula by changing variables and operating on both sides with D^v to obtain his second formula of fractional derivatives:

D^v x^(-a) = (-1)^v [Γ(a + v) / Γ(a)] x^(-a-v),  a > 0   (b)

where v is any number: rational, irrational, or complex.
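A small sanity check (a Python sketch of my own, not from the historical record): for integer orders v, Liouville's second formula (b) reproduces the ordinary derivatives of x^(-a).

```python
from math import gamma

def liouville_second(v, a, x):
    """Liouville's second formula (b): D^v x^(-a) = (-1)^v Γ(a+v)/Γ(a) · x^(-a-v).
    Checked here only for integer v, where (-1)^v is unambiguous."""
    return (-1) ** v * gamma(a + v) / gamma(a) * x ** (-a - v)

# Ordinary calculus: d/dx x^(-1) = -x^(-2) and d^2/dx^2 x^(-1) = 2x^(-3)
x = 2.0
assert abs(liouville_second(1, 1, x) - (-(x ** -2))) < 1e-12
assert abs(liouville_second(2, 1, x) - (2 * x ** -3)) < 1e-12
```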

Although Liouville was the first to try solving fractional differential equations, he was not totally correct. He realized that his first and second formulas for fractional derivatives required restrictions too narrow to be of much use. His first formula was only good for the class (a), and his second formula was only good for functions of the form x^(-a) with a > 0. It is clear that Liouville was aware of this fact, since in one of his memoirs of 1834 [5] he says: "...The ordinary differential equation d^n y/dx^n = 0 has the complementary solution y_c = c_0 + c_1 x + c_2 x^2 + ... + c_(n-1) x^(n-1). Thus d^u y/dx^u = 0 (u arbitrary) should have a corresponding complementary solution..." While Liouville did come up with a corresponding complementary solution, it became a center of controversy during his time. One would wish that he had gone further in developing this topic.
The Fight of the Decade

From 1833 to 1848 several mathematicians ended up fighting over the work of Lacroix, Abel, and Liouville. In 1833 Peacock supported Lacroix's formula, while holding Liouville's formulas to be useless except for a few special cases. Peacock made several errors while trying to support Lacroix's formula; one of his biggest was the misapplication of symbolic operations, where he believed that the principles of symbolic algebra would hold true for derivatives.

On the other hand, Kelland supported Liouville on two separate occasions, once in 1839 and again in 1846, when he believed that Liouville's second formula had useful implications for functions of the form x^(-a).

In 1840 De Morgan [6] writes (referring to Lacroix's formula and Liouville's second formula): "...Both these systems may very possibly be parts of a more general system, but at present I incline to the conclusion that neither system has any claim to be considered as giving the form D^n x^m, though either may be a form..." Even De Morgan, one of the great mathematicians of all time, could not make up his mind on this matter. In 1848 William Center could not make up his mind either. He stated [7]: "...according to Liouville's system, by letting a = 0 the fractional derivative of unity equals zero because Γ(0) = ∞...The whole question is plainly reduced to what is d^u x^0 / dx^u. For when this is determined we shall determine at the same time which is the correct system..."

Well, who was right? It turns out that De Morgan was correct, for both Lacroix's formula and Liouville's second formula were incorporated into a more general formula years later.
Riemann (late 1800's)

Exactly when Riemann worked on fractional derivatives no one knows, for he never published any of it, but we do know he did the work in his student years. Riemann tried to find the general solution by way of the Taylor series, letting Ψ(x) be the complementary function, which yielded:

D^(-v) f(x) = [1/Γ(v)] ∫_c^x (x - t)^(v-1) f(t) dt + Ψ(x)   (c)

No one is sure that Riemann knew exactly what the outcome of a complementary function would be, for he used it to provide a "measure of the deviation". In 1880 Cayley [8] stated: "...The greatest difficulty in Riemann's theory, it appears to me, is the question of the meaning of a complementary function containing an infinity of arbitrary constants...Any satisfactory definition of a fractional operation will demand that this difficulty be removed..." Later in his paper, Cayley says: "...Riemann was hopelessly entangled in his version of a complementary function..." All too many times, when we get too close to a project that we are working on, we cannot see the forest for the proverbial trees. It appears to me that Riemann had this same problem. Riemann did little more with this topic, but we will see that he had tremendous insight, and several mathematicians built on his work.
Laurent 1884

Two mathematicians, Sonin and Letnikov, developed the prelude to the idea of fractional derivatives for modern mathematicians. In 1869 Sonin wrote a paper, "On Differentiation With Arbitrary Index", and Letnikov wrote four papers between 1868 and 1872 on the same topic. Both mathematicians started their work with Cauchy's integral formula:

D^n f(z) = [n! / (2πi)] ∫_C f(t) / (t - z)^(n+1) dt   (d)

where C represents a closed contour traversed once counterclockwise. Sonin and Letnikov were off to a great start, since it was permissible to generalize n!. Both knew about the gamma function and how v! = Γ(v + 1) extends the factorial to arbitrary, non-integer values of v. They knew that when n was an integer they would obtain a simple pole inside the closed contour. They saw that when n was not an integer they would no longer have a simple pole but a branch cut. Sonin and Letnikov realized the problem but did not provide a solution.

Unfortunately for Sonin and Letnikov, 12 years later, in 1884, Laurent solved the problem. Laurent, as well, started with Cauchy's integral formula (d). He used the rules of transformation, and his contour was an open path on a Riemann surface. He produced his definition of differentiation of arbitrary order:

cD_x^(-v) f(x) = [1/Γ(v)] ∫_c^x (x - t)^(v-1) f(t) dt,  Re v > 0   (e)
Do you notice what happens in Laurent's definition (e)? You should see that it is Riemann's definition (c) without his complementary function Ψ(x). It is important to note that when c = 0 Laurent obtained:

0D_x^(-v) f(x) = [1/Γ(v)] ∫_0^x (x - t)^(v-1) f(t) dt,  Re v > 0

This version is the most commonly used, and is named the Riemann-Liouville fractional integral. I believe that it should be named the Liouville-Riemann fractional integral, since Liouville tried to solve the problem first. In any event, he finally received recognition for his work. I wish he were around today to witness the fruits of his labor.
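The c = 0 definition can be checked numerically; the following Python sketch (my own, not part of the thesis) approximates the Riemann-Liouville half-integral of f(t) = t and compares it with the closed form (4/(3√π)) x^(3/2) obtained from the power rule.

```python
from math import gamma, sqrt, pi

def riemann_liouville(f, v, x, n=2000):
    """Riemann-Liouville fractional integral with lower limit c = 0, via a
    product-integration rule: on each subinterval f is frozen at the midpoint
    and the kernel (x-t)^(v-1) is integrated exactly (it is singular at t = x)."""
    h = x / n
    total = 0.0
    for k in range(n):
        t_mid = (k + 0.5) * h
        left, right = k * h, (k + 1) * h
        w = ((x - left) ** v - (x - right) ** v) / v   # exact kernel integral on the subinterval
        total += f(t_mid) * w
    return total / gamma(v)

# Half-integral (v = 1/2) of f(t) = t: closed form is (4 / (3√π)) x^(3/2)
x = 2.0
exact = 4 / (3 * sqrt(pi)) * x ** 1.5
assert abs(riemann_liouville(lambda t: t, 0.5, x) - exact) < 1e-3
```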

Heaviside 1892

Oliver Heaviside, a genius in his time, has become one of my "heroes," although he was an untrained scientist (as stated by Miller and Ross [10], p. 13) and not a mathematician. I look at things the same way he did: I prefer to use an intuitive approach when looking at problems. This was much more common in previous centuries than now. In 1892 he published several papers on linear functional operators, where his unorthodox methods led to solving certain engineering problems such as the transmission of electrical currents in cables, temperature distribution, and the submarine cable equation. His brilliant methods, solutions, and applications have been collected and named "Heaviside operational calculus." But back in his time his work was looked at with suspicion and distrust. He became a laughing stock of the mathematics community, since he was unable to back up his work with rigorous proofs. I can only thank God for a mathematician by the name of Bromwich. In 1919 Bromwich set out to prove all of Heaviside's work, which he did by rigorous proofs.
1892 to 1974

It is surprising that 82 years went by with only relatively few research papers written on the topic of fractional derivatives, especially since there was an explosion of new mathematicians during this time. Some of the few "greats" are: Al-Bassam, Davis, Erdélyi, Hardy, Kober, Littlewood, Love, Riesz, Samko, Sneddon, Weyl, Zygmund, and our own Dr. Thomas J. Osler. With all these new mathematicians coming on the scene, one would think there would be hundreds if not thousands of research papers. Even Davis [9] in 1936 said: "...The period of the formal development of operational methods may be regarded as having ended by 1900. The theory of integral equations was just beginning to stir the imagination of mathematicians and to reveal the possibilities of operational methods..." It seems to me, not a whole lot of imaginations were being stirred in 82 years. But then came the year 1974.
The Great Explosion of 1974

1974 was the year that research into fractional derivatives really exploded. The very first international conference on fractional calculus happened in 1974; it was held at the University of New Haven. Some of the above-mentioned mathematicians were in attendance, as well as Askey, Mikolás, and many others, including our own Dr. Thomas J. Osler. The above heading did say "great explosion": the 1974 conference really stirred the imagination of many of the above mentioned. In just a little over five years, more papers were written on fractional derivatives than had been written since the beginning of mathematical time, about 400 in total.

Then came the 1980's. The second international conference on fractional calculus took place ten years later, in 1984. It was held at the University of Strathclyde, Glasgow, Scotland. It seems that mathematicians from all over the world had jumped onto the proverbial bandwagon. Mathematicians from Japan, the Soviet Union, England, India, Canada, Venezuela, Scotland, and a host of smaller nations have all written on the topic. Some of the mathematicians who wrote on the fractional calculus include Saigo (1980), Owa (1990), and Nishimoto (1984, 1987, 1989, 1991), who wrote a four-volume set on applications; these three mathematicians are from Japan. In 1987 Marichev and Kilbas, from the Soviet Union, wrote an encyclopedia on the topic, along with applications. Raina and Saxena from India wrote several papers in the 1980's. Srivastava from Canada, Kalla from Venezuela, and McBride from Scotland all made it to the "top" through their work on fractional derivatives. Even our own Dr. Thomas Osler published or co-published 10 papers on the topic in the 1980's and 90's.
One would think that with the thousands of mathematicians in the world today there would be countless volumes of published works on this topic. Unfortunately, the fact is most mathematicians have no idea of the opportunities and applications of fractional calculus. Many would not even know where to start if given a simple problem. Even worse is the fact that many have only heard of fractional derivatives in passing, and some not at all.

I would like to end this short history with a quote from Miller and Ross [10]. They stated: "...The fractional calculus finds use in many fields of science and engineering, including fluid flow, rheology, diffusive transport akin to diffusion, electrical networks, electromagnetic theory, and probability. Some papers by P. C. Phillips [1989, 1990] have used the fractional calculus in statistics. R. L. Bagley [1990]; Bagley and Torvik [1986] have found uses for the fractional calculus in viscoelasticity and the electrochemistry of corrosion. It seems that hardly a field of science or engineering has remained untouched by this topic. Yet even though the subject is old, it is rarely included in today's curricula. Possibly, this is because many mathematicians are unfamiliar with its uses..."

Just in case the reader believes the rest of this paper will be a waste of time to read since the topic is not included in today's curricula, I will "tease" you into reading the rest of the paper by telling you now that I will show the solution to the tautochrone problem through the use of fractional derivatives as an application in Chapter 4 of this paper.
References

[1.1] Leibniz, G. W., 1695. Letter from Hanover, Germany, to G. F. A. L'Hopital, September 30, 1695, in Mathematische Schriften, 1849; reprinted 1962, Olms Verlag, Hildesheim, Germany, 2, 301-302.

[1.2] Leibniz, G. W., 1697. Letter from Hanover, Germany, to John Wallis, May 28, 1697, in Mathematische Schriften; reprinted 1962, Olms Verlag, Hildesheim, Germany, 4, 25.

[1.3] Euler, L., 1738. De progressionibus transcendentibus, seu quarum termini generales algebraice dari nequeunt, Commentarii Academiae Scientiarum Imperialis Petropolitanae, 5, p. 55.

[1.4] Fourier, J. B. J., 1822. Théorie analytique de la chaleur, Oeuvres de Fourier, Vol. 1, Firmin Didot, Paris, p. 508.

[1.5] Liouville, J., 1834. Mémoire sur le théorème des fonctions complémentaires, J. Reine Angew. Math. (Crelle's J.), 11, 1-19.

[1.6] De Morgan, A., 1840. The Differential and Integral Calculus Combining Differentiation, Integration, Development, Differential Equations, Differences, Summation, Calculus of Variations...with Applications to Algebra, Plane and Solid Geometry, Baldwin and Craddock, London; published in 25 parts under the superintendence of the Society for the Diffusion of Useful Knowledge, pp. 597-599.

[1.7] Center, W., 1848. On the value of (d/dx)^θ x^0 when θ is a positive proper fraction, Cambridge and Dublin Math. J., 3, 163-169.

[1.8] Cayley, A., 1880. Note on Riemann's paper, Math. Ann., 16, 81-82.

[1.9] Davis, H. T., 1927. The application of fractional operators to functional equations, Amer. J. Math., 49, 123-142.

[1.10] Miller, Kenneth S., and Ross, Bertram, 1993. An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, Inc., pp. 1-20.
Chapter 2

An Intuitive Approach

I would like to start at the "heart" of this paper by looking at a few well-known functions, and try to find various derivatives by means of an intuitive approach. I will be making use of the usual notation for derivatives, as follows:

df(x)/dx = D^1 f(x),  d^2 f(x)/dx^2 = D^2 f(x),  d^n f(x)/dx^n = D^n f(x)

Derivatives of e^(ax)

Let us begin by looking at derivatives of functions of the form e^(ax):

D^0 e^x = e^x    D^0 e^2x = e^2x              D^0 e^3x = e^3x
D^1 e^x = e^x    D^1 e^2x = 2e^2x             D^1 e^3x = 3e^3x
D^2 e^x = e^x    D^2 e^2x = 4e^2x = 2^2 e^2x  D^2 e^3x = 9e^3x = 3^2 e^3x
D^3 e^x = e^x    D^3 e^2x = 8e^2x = 2^3 e^2x  D^3 e^3x = 27e^3x = 3^3 e^3x
...
D^n e^x = e^x    D^n e^2x = 2^n e^2x          D^n e^3x = 3^n e^3x

May I be so bold as to generalize the derivatives for e^(ax)? Well, I can if I give the stipulation that n ≥ 0 and n must be an integer:

D^n e^(ax) = a^n e^(ax)   (a1)

Let us try to apply (a1) to integers when n < 0. We will obtain the following:

D^-1 e^x = e^x    D^-1 e^2x = (1/2)e^2x = 2^-1 e^2x    D^-1 e^3x = (1/3)e^3x = 3^-1 e^3x
D^-2 e^x = e^x    D^-2 e^2x = (1/4)e^2x = 2^-2 e^2x    D^-2 e^3x = (1/9)e^3x = 3^-2 e^3x
D^-3 e^x = e^x    D^-3 e^2x = (1/8)e^2x = 2^-3 e^2x    D^-3 e^3x = (1/27)e^3x = 3^-3 e^3x
...
D^-n e^x = e^x    D^-n e^2x = 2^-n e^2x                D^-n e^3x = 3^-n e^3x
Do you notice anything about the above results? Do they look familiar? They should; they are the indefinite integrals, as listed below:

D^-1 e^x = ∫ e^x dx          D^-1 e^2x = ∫ e^2x dx          D^-1 e^3x = ∫ e^3x dx
D^-2 e^x = ∫∫ e^x dx dx      D^-2 e^2x = ∫∫ e^2x dx dx      D^-2 e^3x = ∫∫ e^3x dx dx
D^-3 e^x = ∫∫∫ e^x dx dx dx  D^-3 e^2x = ∫∫∫ e^2x dx dx dx  D^-3 e^3x = ∫∫∫ e^3x dx dx dx
...
D^-n e^x = ∫...(n times)... e^x dx...(n times), and similarly for e^2x and e^3x.

May I boldly go where no graduate student has gone before and say that (a1) is valid for all n, positive or negative, as long as n is an integer. Being as bold as I am, it is not unreasonable to say that if n > 0 and n is an integer, then (a1) becomes the derivative formula for e^(ax), and if n < 0 and n is an integer, then (a1) becomes the indefinite integral formula for e^(ax).

Now comes the first "meaty" question. Does (a1) hold true if n is not an integer; better yet, what if n = 1/2? Will D^(1/2) e^(ax) = a^(1/2) e^(ax) hold true? Will the following be true?

D^(1/2) e^x = e^x   (a2)        D^(1/2) e^2x = 2^(1/2) e^2x   (a3)

We know that the rules of calculus tell us that D^n D^m y = D^(n+m) y, so let's try it out on D^(1/2) e^(ax) and see if my bold assumptions are true. We know D^1 e^(ax) = a e^(ax), so let's see what happens:

D^(1/2) D^(1/2) e^(ax) = D^(1/2) a^(1/2) e^(ax) = a^(1/2) a^(1/2) e^(ax) = a e^(ax)

So far my bold assumption is holding true; (a1) appears to be a good candidate.
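The index-law check above can be sketched numerically in Python (my own sketch, assuming the candidate rule (a1) extended to arbitrary order):

```python
from math import exp

def frac_D_exp(v, a, x):
    """Candidate rule (a1) extended to arbitrary order v: D^v e^(ax) = a^v e^(ax)."""
    return a ** v * exp(a * x)

# Index-law check: applying D^(1/2) twice should equal D^1 at any sample point
a, x = 2.0, 0.7
once = frac_D_exp(0.5, a, x)       # a^(1/2) e^(ax)
twice = a ** 0.5 * once            # applying D^(1/2) again multiplies by a^(1/2)
assert abs(twice - frac_D_exp(1.0, a, x)) < 1e-12
```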

I will now try to derive a second formula for the fractional derivative of e^x. We know that we can expand e^x in a Taylor series as follows:

e^x = Σ_(k=0)^∞ x^k / k!

Now, making use of the gamma function to find the derivative of the power series above will yield:

D^n e^x = Σ_(k=0)^∞ x^(k-n) / Γ(k - n + 1)   (a4)

Now, let's try our new formula for a few derivatives and see what happens.

D^1 e^x = Σ_(k=0)^∞ x^(k-1) / Γ(k - 1 + 1) = e^x
D^2 e^x = Σ_(k=0)^∞ x^(k-2) / Γ(k - 2 + 1) = e^x
D^3 e^x = Σ_(k=0)^∞ x^(k-3) / Γ(k - 3 + 1) = e^x

(The reader can verify that each of these summations reduces to the usual power series for e^x as given above.) Since it appears that formula (a4) is another good candidate for the derivative of e^x, let's see if it will work for fractional derivatives.

D^(1/2) e^x = Σ_(k=0)^∞ x^(k-1/2) / Γ(k - 1/2 + 1)   (a5)

As we can see, we have a problem: (a2) does not match (a5).
Just in case e^x was in some way embedded in (a5), I consulted the computer program Mathematica, which gave a remarkable result:

D^(1/2) e^x = 1/√(πx) + e^x erf(√x)

(i.e., the sum of the previous infinite series). Unfortunately, formula (a2) does not equal (a5), so there is no need for us to see what happens to (a3). Regrettably, this means that either formula (a1) or formula (a4) is not valid if n is a fraction. What went wrong? Which formula is invalid? I will unravel the secret later, but for now let's look at the derivatives of x^p.
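The disagreement can be seen numerically; the Python sketch below (my own) sums the series (a5) at x = 1, confirms it matches the closed form 1/√(πx) + e^x erf(√x) (a standard identity for this series), and shows it does not equal the (a1)-style candidate e^x.

```python
from math import gamma, exp, erf, sqrt, pi

def series_half_deriv(x, terms=60):
    """Formula (a5): D^(1/2) e^x = Σ x^(k-1/2) / Γ(k + 1/2), truncated at `terms`."""
    return sum(x ** (k - 0.5) / gamma(k + 0.5) for k in range(terms))

x = 1.0
candidate_a2 = exp(x)                                   # rule (a1) with a = 1: 1^(1/2) e^x = e^x
closed_form = 1 / sqrt(pi * x) + exp(x) * erf(sqrt(x))  # closed form of the series (a5)
assert abs(series_half_deriv(x) - closed_form) < 1e-12  # the series sums to the closed form
assert abs(candidate_a2 - closed_form) > 0.1            # ...and the two candidates disagree
```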

Derivatives of x^p

Let us now look at some derivatives of functions of the form x^p, using the normal notation.

D^0 x = x    D^0 x^2 = x^2    D^0 x^3 = x^3     D^0 x^4 = x^4      D^0 x^p = x^p
D^1 x = 1    D^1 x^2 = 2x     D^1 x^3 = 3x^2    D^1 x^4 = 4x^3     D^1 x^p = p x^(p-1)
D^2 x = 0    D^2 x^2 = 2      D^2 x^3 = 6x      D^2 x^4 = 12x^2    D^2 x^p = p(p-1) x^(p-2)
D^3 x = 0    D^3 x^2 = 0      D^3 x^3 = 6       D^3 x^4 = 24x      D^3 x^p = p(p-1)(p-2) x^(p-3)
D^4 x = 0    D^4 x^2 = 0      D^4 x^3 = 0       D^4 x^4 = 24       D^4 x^p = p(p-1)(p-2)(p-3) x^(p-4)

It does not take a leap of faith to see the pattern that is emerging:

D^n x^p = p(p-1)(p-2)(p-3)...(p-n+1) x^(p-n)

Now, if we use a little "trick" from Dr. Osler and multiply the numerator and denominator of the right-hand side of the above formula by (p-n)!, we obtain:

D^n x^p = [p! / (p-n)!] x^(p-n)   (a6)

Again, I will be so bold as to say that (a6) is the general formula for derivatives of the form x^p, if I give the stipulation that n ≥ 0 and n must be an integer. As a matter of fact, it is not difficult to see that (a6) is the general formula for the nth derivative of the form x^p, if n > 0 and n is an integer.

Let us try to apply (a6) to integers when n < 0. We will obtain the following:

D^-1 x = x^2/2     D^-1 x^2 = x^3/3     D^-1 x^3 = x^4/4      D^-1 x^5 = x^6/6
D^-2 x = x^3/6     D^-2 x^2 = x^4/12    D^-2 x^3 = x^5/20     D^-2 x^5 = x^7/42
D^-3 x = x^4/24    D^-3 x^2 = x^5/60    D^-3 x^3 = x^6/120    D^-3 x^5 = x^8/336

Do you notice the above results? Do they look familiar? They should; they are the indefinite integrals, as listed below:

D^-1 x^p = ∫ x^p dx
D^-2 x^p = ∫∫ x^p dx dx
D^-3 x^p = ∫∫∫ x^p dx dx dx
D^-n x^p = ∫...(n times)... x^p dx...(n times)

Again, I will boldly go where no graduate student has gone before and say that (a6) is valid for all n, positive or negative, as long as n is an integer. It is not unreasonable to say that if n > 0 and n is an integer, then (a6) becomes the derivative formula for x^p, and if n < 0 and n is an integer, then (a6) becomes the indefinite integral formula for x^p.

Now, with a little help and guidance from Dr. Osler, I will again make use of the gamma function; recall that z! = Γ(z + 1). Substituting for p! and (p - n)! in (a6) with the gamma function will yield:

D^n x^p = [Γ(p+1) / Γ(p-n+1)] x^(p-n)   (a7)

Since Γ(z) is defined for all values of z, integer or non-integer, I will now replace n in (a7) with a. Rewriting (a7) we obtain:

D^a x^p = [Γ(p+1) / Γ(p-a+1)] x^(p-a)   (a8)
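Unlike the situation with e^x, formula (a8) does satisfy the index law on powers of x. As a quick Python sketch (my own check, not part of the derivation), applying D^(1/2) twice to x recovers D^1 x = 1:

```python
from math import gamma

def frac_D_power(a, p, coeff, x):
    """Formula (a8) applied to coeff·x^p: coeff · Γ(p+1)/Γ(p-a+1) · x^(p-a)."""
    return coeff * gamma(p + 1) / gamma(p - a + 1) * x ** (p - a)

# D^(1/2) x = [Γ(2)/Γ(3/2)] x^(1/2); apply (a8) once more to that monomial.
x = 3.0
c1 = gamma(2) / gamma(1.5)              # coefficient after the first half-derivative
second = frac_D_power(0.5, 0.5, c1, x)  # half-derivative of c1 · x^(1/2)
assert abs(second - 1.0) < 1e-12        # D^(1/2) D^(1/2) x = D^1 x = 1
```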

Derivatives of f(x)

Fueled with a general derivative for functions of the form x^p, I will now derive a general derivative for functions of the form f(x). Knowing that we can expand f(x) in a Taylor's series and making use of (a8), I will derive a large variety of fractional derivatives of the form f(x). If

f(x) = Σ_(n=0)^∞ a_n x^n

then

D^a f(x) = Σ_(n=0)^∞ a_n D^a x^n

by "pushing" the D operator through sigma, yielding:


D^a f(x) = Σ_(n=0)^∞ a_n [Γ(n + 1)/Γ(n - a + 1)] x^(n-a)    (a9)

(a9) is a very good candidate for functions that can be expanded in a Taylor's series.
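Formula (a9) can be exercised numerically. The sketch below is my own check, not from the thesis; it applies (a9) term by term to e^x (lower limit 0) and compares against the known closed form involving the error function. Notably, the sum does not converge to e^x itself, which foreshadows the contradiction discussed next.

```python
from math import gamma, erf, exp, pi, sqrt, isclose

def half_deriv_exp(x, terms=60):
    # Term-by-term half-derivative of e^x via (a9):
    # sum over n of (1/n!) * Gamma(n+1)/Gamma(n+1/2) * x^(n-1/2)
    #   = sum over n of x^(n-1/2) / Gamma(n+1/2)
    return sum(x ** (n - 0.5) / gamma(n + 0.5) for n in range(terms))

x = 1.0
closed_form = exp(x) * erf(sqrt(x)) + 1 / sqrt(pi * x)
assert isclose(half_deriv_exp(x), closed_form, rel_tol=1e-9)

# Note: the result is NOT e^x, unlike what formula (a1) would suggest.
assert not isclose(half_deriv_exp(x), exp(x), rel_tol=1e-3)
```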

The Big Fix

The reader has just sat through seven boring pages of simple calculus, probably wondering where I am going. I first derived a possible candidate for the definition of the fractional derivative of the form e^(ax), which led to an unexpected contradiction. Then I derived another possible candidate for the form x^p and another candidate for the general function f(x). Well! Continue reading and I will try to clear up the contradictions.

Recalling from elementary calculus:

D^(-1)f(x) = ∫ f(x) dx

D^(-2)f(x) = ∫∫ f(x1) dx1 dx2

D^(-3)f(x) = ∫∫∫ f(x1) dx1 dx2 dx3

and so on.

However, we obtain a problem since the indefinite integrals have arbitrary constants. I

will get around this problem by using the following limits.

D^(-1)f(x) = ∫_0^x f(t) dt

The double integral will be integrated from 0 to x on the outside and from t1 to x on the inside; integrating the double integral in this fashion can be found in many elementary calculus books.

D^(-2)f(x) = ∫_0^x ∫_t1^x f(t1) dt2 dt1

We can "pull" the function f(t1) outside the second integral since it is not a function of t2,

which will yield:

D^(-2)f(x) = ∫_0^x f(t1) ∫_t1^x dt2 dt1

Notice that:

∫_t1^x dt2 = x - t1

Substituting this in the previous equation, and dropping the subscripts, will yield:

D^(-2)f(x) = ∫_0^x f(t)(x - t) dt

Following the same process will produce the following integrals:

D^(-3)f(x) = (1/2) ∫_0^x f(t)(x - t)^2 dt

In general, the nth integral will become:

D^(-n)f(x) = [1/(n - 1)!] ∫_0^x f(t)(x - t)^(n-1) dt

Recalling the fact that Γ(n) = (n - 1)!, and as previously done, I will replace -n with a, which will yield:

D^a f(x) = [1/Γ(-a)] ∫_0^x f(t)/(x - t)^(a+1) dt    (a10)

Notice that the above expression is looking a lot like the integral in Cauchy's integral formula. Well, the expression is rapidly becoming the definition of a general formula for a fractional derivative, but we still need to take care of a few problems. If a ≤ -1 the expression is a fractional derivative and can be used as a definition.

But, if a is greater than -1 the expression becomes undefined. The reader can easily see that as t approaches x, x - t approaches 0 and the integrand becomes unbounded. Although I have called the expression (a10) a fractional derivative, it is truly an integral if a ≤ -1. Therefore, (a10) will involve limits of integration, which is very uncommon when one thinks of derivatives. This is the reason why we had a problem when we tried to match (a1) and (a5); we did not take into consideration the idea of limits. It is common in the field of fractional derivatives to use the following notation, which sets the limits of integration

from b to x:

bD_x^a f(x) = [1/Γ(-a)] ∫_b^x f(t)/(x - t)^(a+1) dt    (a11)
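Definition (a11) can be tested directly by quadrature. Below is a minimal numeric check of my own (the substitution t = x - u^2 is just a standard trick to remove the endpoint singularity): with b = 0 and a = -1/2, the integral reproduces the closed form (a8) for f(t) = t.

```python
from math import gamma, isclose

def rl_half_integral(f, x, n=20000):
    # (a11) with b = 0 and a = -1/2:
    #   (1/Gamma(1/2)) * integral_0^x f(t) * (x - t)^(-1/2) dt
    # Substituting t = x - u^2 turns it into 2 * integral_0^sqrt(x) f(x - u^2) du.
    h = x ** 0.5 / n
    total = sum(f(x - ((i + 0.5) * h) ** 2) for i in range(n)) * h  # midpoint rule
    return 2 * total / gamma(0.5)

x = 2.0
exact = gamma(2) / gamma(2.5) * x ** 1.5   # (a8): D^(-1/2) x = [Γ(2)/Γ(5/2)] x^(3/2)
assert isclose(rl_half_integral(lambda t: t, x), exact, rel_tol=1e-8)
```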

Let's go back to the beginning of this chapter and look at (a1), where I tried to derive a fractional derivative of the form e^(ax) that would be valid for any n:

D^n e^(ax) = a^n e^(ax)    (a1)

Using my new formula (a11) I will try to find the limits of integration that validate (a1), so that it will become valid for any n. We know from calculus that D^(-1)e^(ax) = a^(-1)e^(ax); then:

bD_x^(-1) e^(ax) = ∫_b^x e^(at) dt = (1/a)e^(ax) - (1/a)e^(ab)

One can see that in order for (a1) to be valid we need to make:

(1/a)e^(ab) = 0

This will occur in four different fashions. The first two are the trivial cases when a = ±∞. The other two cases will occur when the function e^(ab) goes to zero, which happens when ab = -∞: when a is positive and b = -∞, or when a is negative and b = +∞.
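The limit argument is easy to see numerically. The snippet below is my own illustration: it takes a > 0 and pushes the lower limit b toward -∞, so the unwanted constant e^(ab)/a dies away and (a11) with n = -1 recovers exactly a^(-1)e^(ax).

```python
from math import exp, isclose

def lower_limit_integral(a, x, b):
    # (a11) with n = -1: integral_b^x e^(a t) dt = e^(a x)/a - e^(a b)/a
    return (exp(a * x) - exp(a * b)) / a

a, x = 2.0, 1.0
target = exp(a * x) / a                 # what (a1) demands: a^(-1) e^(ax)
errors = [abs(lower_limit_integral(a, x, b) - target) for b in (-5, -10, -20)]
assert errors[0] > errors[1] > errors[2]   # the error shrinks as b -> -infinity
assert isclose(lower_limit_integral(a, x, -40), target, rel_tol=1e-12)
```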

As a side note, if in equation (a11) a is positive, b = -∞, and n = -1, we get what is known as the Weyl fractional derivative, or the Weyl transform, which is denoted as:

-∞W_x^(-1) e^(ax) = ∫_(-∞)^x e^(at) dt

Weyl discovered this form of a fractional derivative in 1917 and his form is now widely used. It is not difficult for the reader to convince themselves that the Weyl fractional derivative can be used to solve the fractional derivative of the form e^(ax) for any n, positive or negative, so I will not waste the reader's time. Although I did not use the formula (a11) directly, it did lead me to the Weyl fractional derivative formula by way of the idea of limits.

Finally, to the reader's delight as well as my own, I have shown that the formula (a11) is the Weyl fractional derivative formula if we let b = -∞,

-∞W_x^n e^(ax) = a^n e^(ax)

and it is valid for any n, positive or negative. It is very surprising, at least to me, that the solution involved limits; one would not expect limits to be involved with derivatives, since we think of them as local properties of functions. Nevertheless, the contradiction between (a1) and (a5) is now solved.

Since limits were involved in the fractional derivative of the form e^(ax), let us look at the fractional derivative of the form x^p:

D^n x^p = [p!/(p - n)!] x^(p-n)

We know from calculus that:

D^(-1)x^p = x^(p+1)/(p + 1)

and making use of (a11) will yield:

bD_x^(-1) x^p = x^(p+1)/(p + 1) - b^(p+1)/(p + 1)

But we want:

D^(-1)x^p = x^(p+1)/(p + 1)

Therefore, I need

b^(p+1)/(p + 1) = 0

this will only happen when b = 0, which will yield:

0D_x^(-1) x^p = ∫_0^x t^p dt = x^(p+1)/(p + 1)

Again, it is not difficult for the reader to see that formula (a11) will be valid for

fractional derivatives of the form xP for any a, positive or negative if we set the lower

limit b = 0.

0D_x^a f(x) = [1/Γ(-a)] ∫_0^x f(t)/(x - t)^(a+1) dt    (a12)
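The b = 0 choice can also be confirmed against the gamma form of (a8): with a = -1, the coefficient Γ(p+1)/Γ(p+2) collapses to 1/(p+1), exactly the antiderivative coefficient. A one-line check of my own:

```python
from math import gamma, isclose

# (a8) with a = -1 must agree with the integral x^(p+1)/(p+1), for any p
for p in (0.5, 1, 2, 3.3, 7):
    assert isclose(gamma(p + 1) / gamma(p + 2), 1 / (p + 1), rel_tol=1e-12)
```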

It should now be clear to the reader that the fractional derivative (a11) might be made valid for any function that can be expanded in a Taylor's series. The "trick" to using formula (a11) is to find the lower limit b that works for the particular form of function that you are working on.

References

[2.1] Miller, Kenneth S., and Ross, Bertram, 1993. An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, Inc., pp. 33-35.

[2.2] Osler, Thomas J., Lectures from Complex Analysis Course at Rowan University, Spring Semester, 2000, Not Published.

[2.3] Osler, Thomas J., and Kleinz, Marcia, A Child's Garden of Fractional Derivatives, to be published.

Chapter 3

A Rigorous Approach

Using Complex Analysis

I will begin by looking at the Riemann Liouville Integral:

[1/Γ(-a)] ∫_0^z f(t)/(z - t)^(a+1) dt

This integral is widely accepted throughout the mathematical community, provided the stipulation is given that the real part of a is less than zero, Re(a) < 0. If Re(a) > 0 the integral will generally diverge and become useless for the needed application; a needs to be truly arbitrary with no restrictions. A nice way out of this problem is to start with the Cauchy Integral Formula.

f^(n)(z) = [n!/2πi] ∮ f(t)/(t - z)^(n+1) dt

Replacing n with an arbitrary number a, and replacing n! with the gamma function, will yield:

D^a f(z) = [Γ(a + 1)/2πi] ∮ f(t)/(t - z)^(a+1) dt

Right away one can see that there is a problem emerging, i.e., switching to the Cauchy Integral Formula has changed the singularities. This was precisely the problem in Chapter 2 of this paper, where (a2) did not equal (a5). (The reader should recall that the intuitive approach to e^(ax) did not equal the formula approach.)

If we let a be a fractional number (one that does not simplify to an integer), then a branch line will emerge as t goes to z, and there is no way to solve this kind of problem. This is the reason why equations (a2) and (a5) from Chapter 2 of this paper did not match. For example, if a = 1/2, then the value of the integral will become dependent

on where the contour crosses the branch line. I will show how to make the proper choice

of the placement of the branch line which will allow the Cauchy Integral fractional

derivative to generalize the Riemann Liouville Integral.

[Figure: the contour for 0D_z in the complex t-plane, with a branch line emanating from t = z.]

If we "swing" the branch line in such a way as to let it pass through the origin, and start the contour at the origin, we will be able to solve the problem.

[Figure: the branch line swung so that it passes through the origin of the complex t-plane.]


This adjustment to the branch line is denoted by the contour symbol with limits 0 to (z+), and is defined as the closed contour that starts at t = 0, circles t = z once in the positive sense, and returns to zero. The new formula will be given as:

0D_z^a f(z) = [Γ(a + 1)/2πi] ∫_0^(z+) f(t)/(t - z)^(a+1) dt    (b1)

(b1) has the advantage of not having as many restrictions on a as the Riemann Liouville Integral does; this is because t does not go anywhere near z.

The question that remains to be answered is: will (b1) match up with the Riemann Liouville Integral? For simplicity we will begin by letting z lie on the real axis of t.

[Figure: the contour collapsed onto the real axis of t, running from 0 out to z, around a small circle of radius ε at t = z, and back.]

It should not be difficult for the reader to see that as ε goes to zero the contribution of the "slanted segments" to the integral goes to zero. In fact, as ε goes to zero the contour will lie on the real axis of t. So we will do the computations as if the contour is on the real axis.

I will begin by looking at the bottom contour. On the bottom contour, t - z = ρe^(iθ) with ρ = z - t, and it is not difficult to see that θ = -π. Therefore the bottom integral will be given as follows:

∫_bottom = ∫_0^z f(t)(t - z)^(-(a+1)) dt    (b2)

I must first deal with the real and imaginary parts of (t - z)^(-(a+1)). Using some simple rules and identities from complex analysis will yield the following:

(t - z)^(-(a+1)) = (e^(ln(t-z)))^(-(a+1))
                = [e^(ln|t-z| + i arg(t-z))]^(-(a+1))
                = [e^(-(a+1)ln|t-z|)]·[e^(-i(a+1)arg(t-z))]

Since the bottom contour is going from 0 to z (from left to right), t < z, so I will use the absolute value identity:

|t - z| = z - t

Now, continuing with the above simplification, and using arg(t - z) = -π on the bottom contour:

(t - z)^(-(a+1)) = [e^(-(a+1)ln(z-t))]·[e^(iπ(a+1))]

Combining the above with (b2) and a little simplification I obtain:

∫_bottom = e^(iπ(a+1)) ∫_0^z f(t)(z - t)^(-(a+1)) dt

Simplifying a little more, and making use of the fact that e^(iπ(a+1)) = -e^(iπa), will yield:

∫_bottom = -e^(iπa) ∫_0^z f(t)(z - t)^(-(a+1)) dt

It should not be difficult for the reader to see that the integral around the circle

will equal zero since we will shrink the circle to nothing. Since this is a rigorous proof I

will give some details of this contour.

[Figure: the small circle of radius ε around t = z.]

I will begin with the contour around the circle, which will be given as:

∫_circle = ∫ f(t)(t - z)^(-(a+1)) dt

Parametrizing the circle will yield:

t = z + εe^(iθ)

then:

dt = iεe^(iθ) dθ

Combining the last three formulas will yield:

i i =
circleo f=0'
2 f (z+ Eeie )e-(a+l )e - i(a+l)O ied

Simplifying and taking the absolute value we get:

|∫_circle| = |∫_0^(2π) f(z + εe^(iθ)) ε^(-a) e^(-iaθ) i dθ|
          ≤ B ∫_0^(2π) |f(z + εe^(iθ))| ε^(-Re(a)) dθ -> 0 as ε -> 0, since Re(a) < 0,

where B = max|e^(-iaθ)|, 0 ≤ θ ≤ 2π.

Now we need to deal with the contour along the top. On the top contour, t - z = ρe^(iφ) with ρ = z - t, and it is not difficult to see that φ approaches π as ε goes to zero. Therefore the top integral will be given as follows:

∫_top = ∫_z^0 f(t)(z - t)^(-(a+1)) e^(-iπ(a+1)) dt

Simplifying in the same fashion as I did with the bottom integral will yield:

∫_top = e^(-iπ(a+1)) ∫_z^0 f(t)(z - t)^(-(a+1)) dt

Finally, reversing the limits of integration and using e^(-iπ(a+1)) = -e^(-iπa):

∫_top = e^(-iπa) ∫_0^z f(t)/(z - t)^(a+1) dt

Putting all of the parts together will yield:

∫_0^(z+) = ∫_bottom + ∫_circle + ∫_top
        = -e^(iπa) ∫_0^z f(t)/(z - t)^(a+1) dt + 0 + e^(-iπa) ∫_0^z f(t)/(z - t)^(a+1) dt

Simplifying:

∫_0^(z+) = (-e^(iπa) + e^(-iπa)) ∫_0^z f(t)/(z - t)^(a+1) dt

Factoring out a (-1) and multiplying the numerator and denominator by 2i will yield:

∫_0^(z+) = -2i [(e^(iπa) - e^(-iπa))/(2i)] ∫_0^z f(t)/(z - t)^(a+1) dt

Applying the trigonometric identity:

sin θ = (e^(iθ) - e^(-iθ))/(2i)

will yield:

∫_0^(z+) = (-2i sin(πa)) ∫_0^z f(t)/(z - t)^(a+1) dt

The above equation is the integral part of the Cauchy Integral formula. Inserting it into the formula will yield:

D^a f(z) = [Γ(a + 1)/2πi](-2i sin(πa)) ∫_0^z f(t)/(z - t)^(a+1) dt

Simplifying:

D^a f(z) = [Γ(a + 1)(-sin(πa))/π] ∫_0^z f(t)/(z - t)^(a+1) dt

Fortunately, an identity of the Gamma function is:

1/Γ(-a) = Γ(a + 1)(-sin(πa))/π

Replacing the Gamma identity into the above formula will yield:

D^a f(z) = [1/Γ(-a)] ∫_0^z f(t)/(z - t)^(a+1) dt    (b3)
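The gamma identity used in the last step is the reflection formula in disguise (Γ(z)Γ(1-z) = π/sin(πz) with z = -a). A quick numerical confirmation, my own addition:

```python
from math import gamma, sin, pi, isclose

# 1/Gamma(-a) = Gamma(a+1) * (-sin(pi*a)) / pi, for non-integer a
for a in (0.5, 0.3, -0.25, 1.7):
    lhs = 1 / gamma(-a)
    rhs = gamma(a + 1) * (-sin(pi * a)) / pi
    assert isclose(lhs, rhs, rel_tol=1e-10)
```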

As the reader can see, in the case when Re(a) < 0 the above formula, (b3), is truly the Riemann Liouville Integral, but is defined with fewer restrictions on a. Equation (b3) is good for all a such that a ≠ -1, -2, -3, ..., since the Gamma Function is not defined at these values.

If a = - 1, - 2, - 3,... one would then use the Riemann Liouville Integral.

References

[3.1] Miller, K. S., and Ross, B., An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, New York, 1993.

[3.2] Osler, Thomas J., Lectures from Complex Analysis Course at Rowan University, Spring Semester, 2000.

[3.3] Osler, Thomas J., and Kleinz, Marcia, A Child's Garden of Fractional Derivatives, to be published.

Chapter 4

The Tautochrone Problem

The tautochrone problem has been around for several hundred years and was first solved by Christiaan Huygens in 1673. However, in 1823 Niels Henrik Abel [4.1]

solved the tautochrone problem in an entirely different way by using fractional

derivatives. In 1998 Dr. Thomas J. Osler and Dr. Eduardo Flores, [4.2] both from Rowan

University, Glassboro, New Jersey, solved the tautochrone problem using fractional

derivatives as well, but they solved the problem for arbitrary potentials.

Building on Abel's, Osler's, and Flores's work, the following is an excellent

example of how the techniques of fractional derivatives can be used to solve a physical

problem. Simply put, the tautochrone problem finds the cycloid needed to produce the curve so that a frictionless mass, acted on only by the force of gravity, will slide down the curve to the origin in the same amount of time regardless of the starting point. The

rules are surprisingly simple:

1) The bead slides without friction.

2) Initial velocity is equal to zero.

3) Time to reach origin is T

4) T is independent of Yo, the initial height.

[Figure: a bead released at (x0, y0), shown at a point (x, y) on the curve, with speed v and gravity g acting downward.]

Solving the tautochrone problem is no easy task and cannot be done quickly. However, the end result is well worth the effort.

We begin with,

F = -∇V(x, y, z) = -(∂V/∂x i + ∂V/∂y j + ∂V/∂z k),

where V(x, y, z) is the potential function. Since the force of gravity acts only in the downward direction, ∂V/∂x = ∂V/∂z = 0, and the force field equation will become:

∂V/∂y = g

Integrating both sides of the equation yields:

V = gy + c

Letting c = 0 we get the potential function:

V(y) = gy

Recalling the Conservation of Energy Law,

PE = mgy, and KE = (1/2)mv^2

and letting m equal the unit mass, then:

PE = gy, and KE = (1/2)v^2

Fundamentally, the Conservation of Energy states that energy is conserved, neither lost nor produced. Therefore, Potential Energy (PE) plus Kinetic Energy (KE) is constant:

(1/2)v^2 + gy = (1/2)v0^2 + gy0


Since one of the initial conditions was that the initial speed is equal to zero,

(1/2)v^2 + gy = gy0
v^2 = 2g(y0 - y)
v = √(2g(y0 - y))

Since the units for speed are distance/time, which is a derivative of distance, and the mass is moving down the curve, the resulting equation is

v = -ds/dt = √(2g(y0 - y))

where s represents the arc length along the curve. Then:

-ds/√(y0 - y) = √(2g) dt

Integrating the previous equation will result in:

∫_(y=y0)^(y=0) -ds/√(y0 - y) = √(2g) ∫_(t=0)^(t=T) dt

Reversing the order of the limits of integration will force a positive integrand, as indicated below:

∫_(y=0)^(y=y0) ds/√(y0 - y) = √(2g) ∫_(t=0)^(t=T) dt

Integrating the right hand side yields:

∫_0^y0 ds/√(y0 - y) = √(2g) T    (d1)

Restating the general formula of fractional derivatives that we derived in Chapter 2 of this paper:

D^a f(x) = [1/Γ(-a)] ∫_0^x f(t)/(x - t)^(a+1) dt    (d2)

The fractional derivative begins to emerge from the left hand side of (d1). Using algebraic manipulation obtains:

∫_0^y0 (ds/dy) dy/√(y0 - y) = ∫_0^y0 (ds/dy) dy/(y0 - y)^(-1/2+1) = Γ(1/2) 0D_y0^(-1/2) (ds/dy)    (d2')

Using the fact that:

Γ(1/2) = √π

and making use of (d1) and (d2'), the result is:

√π 0D_y0^(-1/2) (ds/dy) = √(2g) T

Dividing both sides of the equation by the square root of π will result in:

0D_y0^(-1/2) (ds/dy) = √(2g) T/√π

Then:

ds/dy = 0D_y0^1 s

Therefore:

0D_y0^(-1/2) 0D_y0^1 s = √(2g) T/√π

Subsequently, making use of the Law of Indexes,

0D_y0^(1/2) s = √(2g) T/√π

And operating on both sides of the equation with 0D_y0^(-1/2), the result is:

0D_y0^(-1/2) 0D_y0^(1/2) s = (√(2g) T/√π) 0D_y0^(-1/2) 1

which yields:

s(y0) = (√(2g) T/√π) 0D_y0^(-1/2) 1

Recall that the object of this exercise was to find the curve down which a frictionless mass, acted on only by the force of gravity, will slide to the origin in the same amount of time regardless of the starting point.

Recalling formula (a8) from Chapter 2, we have:

0D_y0^(-1/2) 1 = y0^(1/2)/Γ(3/2)

Therefore we get:

s = (√(2g) T/√π) · y0^(1/2)/Γ(3/2)

Making use of the fact that Γ(3/2) = √π/2, then:

s = (2√(2g) T/π) y0^(1/2)

Rearranging the above equation and replacing y0 with y will yield an equation for the distance s:

s(y) = (2T/π)√(2gy)
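The defining property can now be verified numerically. Using ds/dy = (T/π)√(2g/y) from the derivation, the descent time t(y0) = (1/√(2g)) ∫_0^y0 (ds/dy)(y0 - y)^(-1/2) dy should equal T for every starting height. The sketch below is my own check (the substitution y = y0·sin(φ)^2 tames both endpoint singularities):

```python
from math import sin, cos, pi, sqrt, isclose

g, T = 9.8, 1.5   # any values work; T is the desired descent time

def descent_time(y0, n=4000):
    # t = (1/sqrt(2g)) * integral_0^y0 (ds/dy) / sqrt(y0 - y) dy,
    # with ds/dy = (T/pi)*sqrt(2g/y) and the substitution y = y0*sin(phi)^2.
    h = (pi / 2) / n
    total = 0.0
    for i in range(n):
        phi = (i + 0.5) * h
        y = y0 * sin(phi) ** 2
        ds_dy = (T / pi) * sqrt(2 * g / y)
        dy_dphi = 2 * y0 * sin(phi) * cos(phi)
        total += ds_dy * dy_dphi / sqrt(y0 - y) * h
    return total / sqrt(2 * g)

# the same time T, regardless of the starting height y0
assert isclose(descent_time(0.5), T, rel_tol=1e-9)
assert isclose(descent_time(2.0), T, rel_tol=1e-9)
```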

I am now going to obtain x and y in terms of an angle θ.

Since:

ds = √(1 + (dx/dy)^2) dy

and, from s(y), ds/dy = (T/π)√(2g/y), squaring both sides gives:

1 + (dx/dy)^2 = 2gT^2/(π^2 y)

Subtracting 1 and taking the square root on both sides:

dx/dy = √(2gT^2/(π^2 y) - 1)

After simplification, the result is:

dx/dy = √((2gT^2 - π^2 y)/(π^2 y))

Then multiplying both sides by dy and taking the integral of both sides:

x = ∫ √((2gT^2 - π^2 y)/(π^2 y)) dy    (d3)

Now, to the question of the usefulness of (d3); the problem is to integrate (d3). With the help of Dr. Osler, I consulted the Mathematical Handbook by Murray R. Spiegel [4.3]; the following "canned" integrals help:

∫ √((px + q)/(ax + b)) dx = √((ax + b)(px + q))/a + [(aq - bp)/(2a)] ∫ dx/√((ax + b)(px + q))

And:

∫ dx/√((ax + b)(px + q)) = [2/√(-ap)] tan^(-1) √(-p(ax + b)/(a(px + q)))

Combining the two previous equations yields:

∫ √((px + q)/(ax + b)) dx = √((ax + b)(px + q))/a + [(aq - bp)/(a√(-ap))] tan^(-1) √(-p(ax + b)/(a(px + q)))    (d4)

This integral may seem complicated, but it is possible to work with it. It is also helpful in the simplification of (d3). Multiply both sides of (d3) by π and rearrange the position of y:

xπ = ∫ √((-π^2 y + 2gT^2)/y) dy

Let:

a = 1
b = 0
q = 2gT^2
p = -π^2

Then, making good use of equation (d4), will yield:

xπ = √(y(-π^2 y + 2gT^2)) + (2gT^2/π) tan^(-1) √(π^2 y/(-π^2 y + 2gT^2))

Simplifying the equation we get:

xπ = √(y(2gT^2 - π^2 y)) + (2gT^2/π) tan^(-1)(π√y/√(2gT^2 - π^2 y))    (d5)

At this juncture I need to stop and review a little trigonometry. If:

θ = tan^(-1)(π√y/√(2gT^2 - π^2 y))    (d6)

Then:

sin θ = π√y/(√(2g) T)

And:

cos θ = √(2gT^2 - π^2 y)/(√(2g) T)

From the above three equations we can see the following:

π√y = √(2g) T sin θ

Squaring both sides and dividing both sides by π^2 will yield:

y = (2gT^2/π^2) sin^2 θ    (d7)

Since sin^2 θ = (1 - cos 2θ)/2, the result is:

y = (gT^2/π^2)(1 - cos 2θ)    (d8)

Combining (d5), (d6), (d7), and (d8) we will obtain:

xπ = √((2gT^2/π^2) sin^2 θ (2gT^2 - 2gT^2 sin^2 θ)) + (2gT^2/π) θ

Simplifying:

xπ = (2gT^2/π)(sin θ cos θ + θ)

Dividing both sides of the equation by π will yield:

x = (2gT^2/π^2)(sin θ cos θ + θ)

Recalling from trigonometry, sin 2θ = 2 sin θ cos θ, then:

x = (2gT^2/π^2)((1/2) sin 2θ + θ)

Finally:

x = (gT^2/π^2)(sin 2θ + 2θ)    (d9)

and:

y = (gT^2/π^2)(1 - cos 2θ)    (d10)

Recalling again that a cycloid is defined as follows:

x = ρ(sin φ + φ)    (d11)
y = ρ(1 - cos φ)    (d12)

Now, if we let:

ρ = gT^2/π^2
φ = 2θ

and if we substitute ρ and φ into (d9) we get:

x = ρ(sin φ + φ)

Lastly, if we substitute ρ and φ into (d10) we get:

y = ρ(1 - cos φ)

Therefore, we have a cycloid and the tautochrone problem is solved!
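The parametric answer can be double-checked: along the cycloid (d9)-(d10), the arc length measured from the origin should satisfy s^2 = 8ρy, i.e. s proportional to √y, which is exactly the relation the fractional-derivative argument produced. A numeric verification of my own:

```python
from math import sin, cos, sqrt, pi, isclose

g, T = 9.8, 1.5
rho = g * T ** 2 / pi ** 2   # from (d9)/(d10)

def arc_length(phi_max, n=20000):
    # Arc length of x = rho*(sin(phi)+phi), y = rho*(1-cos(phi)) from phi = 0.
    h = phi_max / n
    s = 0.0
    for i in range(n):
        phi = (i + 0.5) * h
        dx = rho * (cos(phi) + 1.0)   # dx/dphi
        dy = rho * sin(phi)           # dy/dphi
        s += sqrt(dx * dx + dy * dy) * h
    return s

# s^2 = 8*rho*y at several points along the curve
for phi in (0.5, 1.0, 2.0):
    y = rho * (1 - cos(phi))
    assert isclose(arc_length(phi) ** 2, 8 * rho * y, rel_tol=1e-6)
```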

References

[4.1] Miller, Kenneth S., and Ross, Bertram, 1993. An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, Inc., pp. 255-260.

[4.2] Osler, Thomas J., and Flores, Eduardo, The Tautochrone Under Arbitrary Potentials Using Fractional Derivatives, American Journal of Physics, vol. 67, August 1999, pp. 718-722.

[4.3] Spiegel, Murray R., 1968. Mathematical Handbook of Formulas and Tables, Schaum's Outline Series in Mathematics, McGraw-Hill Book Company, p. 72.

Chapter 5

Oliver Heaviside

No paper on Fractional Derivatives would be complete without having a chapter

on Oliver Heaviside. As I have mentioned in Chapter 1, Oliver Heaviside has become one

of my "heroes." I look at things the same way as he did. I prefer to use an intuitive

approach when looking at problems; rigorous proofs are not interesting to me, as they

were not to Heaviside. He believed in the scientific and experiential approach where one

would continue to "play" with a particular problem until a solution was found. He

believed that once a solution was found, and could be duplicated, there is no need to back

it up with a rigorous proof, the solution was the solution and there is no need to go any

further. He did not hate mathematics, nor do I, he just believed that rigorous proofs

hindered the physicist. (Note: All references made in this Chapter will come from

Heaviside's book, Electromagnetic Theory, republished in 1971.) Heaviside says, pp. 4,

"...mathematics being fundamentally an experimental science, like any other, it is clear

that the Science of Nature might be studied as a whole, the properties of space along with

the properties of the matter found moving about therein. This would be very

comprehensive, but I do not suppose that it would be generally practicable, though

possibly the best course for a large-minded man. Nevertheless, it is greatly to the

advantage of a student of physics that he should pick up his mathematics along with his

physics, if he can. For then the one will fit the other. This is the natural way, pursued by

the creators of analysis. If the student does not pick up so much logical mathematics of a

formal kind, he will, at any rate, get on in a manner suitable for progress in his physical

studies...Now, in working out physical problems there should be, in the first place, no

pretence of rigorous formalism. The physics will guide the physicist along somehow to

useful and important results, by the constant union of physical and geometrical or

analytical ideas. The practice of eliminating the physics by reducing a problem to a

purely mathematical exercise should be avoided as much as possible. The physics should

be carried on right through, to give life and reality to the problem, and to obtain the great

assistance, which the physics gives to the mathematics..."

Heaviside did not dislike mathematics. He questioned its usefulness to most

physical problems. He believed that a rose is just a rose and no further explanation is

needed. He further says, pp. 7, "...The best result of mathematics is to be able to do

without it. To show the truth of a paradox by example, I would remark that nothing is

more satisfactory to a physicist than to get rid of a formal demonstration of an analytical

theorem and to substitute a quasi-physical one, or a geometrical one free from co-ordinate

symbols, which will enable him to see the necessary truth of the theorem, and make it be

practically axiomatic..."

Heaviside and myself are not alone on this subject; he reproduces a passage from

the Preface to Lord Rayleigh's book on Treatise on Sound, pp. 5, which says, "...In the

mathematical investigation I have usually employed such methods as present themselves

naturally to a physicist. The pure mathematician will complain, and sometimes with

justice, of deficient rigor. But to this question there are two sides. For, however important

it may be to maintain uniformly high standard in pure mathematics, the physicist may

occasionally do well to rest content with arguments, which are fairly satisfactory and

conclusive from his point of view. To his mind, exercised in a different order of ideas, the

more severe procedure of the pure mathematician may appear not more but less

demonstrative. And further, in many cases of difficulty to insist upon the highest standard

would mean the exclusion of the subject altogether in view of the space that would be

required..."

Heaviside's lack of a formal mathematical schooling allowed him to do

manipulations on equations in such bizarre fashions that no mathematician in his right

mind would ever dream of doing. The following is just one example of how he solved a

fractional derivative that, of course, is not mathematically sound. As a side note, the

notation that I will use is not the same as Heaviside's since his will not fit the general

notation of this paper.

Heaviside studied the voltage and current in a cable. The situation is partially

based on an analogy with diffusion of heat in a rod. He used the following equations to

relate V and C, pp. 30-31:

-dC/dx = S dV/dt

and:

-dV/dx = RC

He combined the above two equations to give:

d^2V/dx^2 = RS dV/dt    (H)

He then introduced q to "satisfy" the above equation:

q^2 V = dV/dt

That is, Heaviside essentially let:

q^2 = d/dt

so that equation (H) becomes:

d^2V/dx^2 = RSq^2 V

He proceeded to "solve" (H) by assuming q was a constant (see pp. 4). This led Heaviside to the problem of determining the meaning of D^(1/2)1, where 1 represents what is now referred to as the "Heaviside function."

The following account of Heaviside's solution is taken from his book,

Electromagnetic Theory, pp. 286-288. Heaviside wanted to know the current in a wire at

any point along such a wire. He knew from experiments that the current at any point along

a very long wire satisfied:

D^(1/2)C = (πt)^(-1/2)

Heaviside started with the following notation and two equations:

* R = resistance, per unit length of the wire.

* S = permittance per unit length.

* V(x) = voltage at any point along the wire.

* C(t) = current in the wire at time t, where for t > 0, C(t) is assumed to be one.

* E = impressed voltage.

* Co = current at x = 0.
C0 = √(Sq/R) E

C0 = √(S/(Rπt)) E(t)

Heaviside set the two equations equal to each other, which yielded:

√(S/(Rπt)) E = √(Sq/R) C(t) E

Multiplying and dividing both sides by E, R, and S yielded:

DC = 1/(πt)    (e1)

At this point in time I must point out that the above three lines of work were pieced together from a few hundred pages of Heaviside's work. He knew that the end result would be a fractional derivative of the form

D^(1/2)C = (πt)^(-1/2)

since he stated, pp. 286, "...In order to avoid introducing the idea of fractional

differentiation from the theoretical standpoint, I took the value of D1/2C as known

experimentally...There is no question as to its value; that is settled by Fourier's

investigation in the theory of the diffusion of heat in conductors..."

His next step, from what I could tell, was not to square both sides of the equation, but simply to "push" the 1/2 power through equation (e1), which yielded:

D^(1/2)C = (πt)^(-1/2)    (e2)

The above is most definitely not mathematically logical, and it is the only step that Heaviside could have taken to get from (e1) to (e2).

Nevertheless he did give a semi-convincing formal proof using sound mathematics. He

started with a fractional derivative, and used a technique which he called, pp. 288, "...an

ingenious device, also well-known..."

(D^(1/2)C)^2 = [(2/π) ∫_0^∞ e^(-x^2 t) dx]·[(2/π) ∫_0^∞ e^(-y^2 t) dy]

(D^(1/2)C)^2 = (4/π^2) ∫_0^(π/2) ∫_0^∞ e^(-r^2 t) r dr dθ

(D^(1/2)C)^2 = (4/π^2)(π/2)[-e^(-r^2 t)/(2t)]_0^∞

(D^(1/2)C)^2 = 1/(πt)

D^(1/2)C = (πt)^(-1/2)
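Heaviside's "ingenious device" is the classical Gaussian-integral squaring trick, and both it and formula (a8) from Chapter 2 can be checked numerically. This sketch is my own addition; the infinite integral is truncated where the integrand is negligible.

```python
from math import gamma, exp, pi, sqrt, isclose

t = 0.7

# (2/pi * integral_0^inf e^{-u^2 t} du)^2 = 1/(pi*t)
n = 100000
upper = 10.0 / sqrt(t)        # truncation point; e^{-u^2 t} ~ e^{-100} there
h = upper / n
integral = sum(exp(-(((i + 0.5) * h) ** 2) * t) for i in range(n)) * h
assert isclose((2 / pi * integral) ** 2, 1 / (pi * t), rel_tol=1e-6)

# Formula (a8) agrees: D^(1/2) 1 = [Gamma(1)/Gamma(1/2)] * t^(-1/2) = 1/sqrt(pi*t)
assert isclose(gamma(1) / gamma(0.5) / sqrt(t), 1 / sqrt(pi * t), rel_tol=1e-12)
```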

I suppose that Heaviside was not totally convinced by this short proof, because he gave a second proof, containing trigonometry, which I will reproduce as follows.

Heaviside began with an infinitely long wire that was subjected at one end with an

impressed voltage E; the current produced was expressed by:

C = √(S/R) [D^(1/2)1] E    (H1)

Heaviside said, pp. 287, "...If E is constant, we have to find what D^(1/2)C means. Now, we

can work out this problem in Fourier series, first for a finite cable, and then proceed to

the limit..."

He let the wire be of length l, earthed at one end, and have an impressed voltage E at the other. The quantity s is given by:

s^2 = -RS(d/dt)

The voltage anywhere on the wire, at any x, due to E is given by:

V = [sin s(l - x)/sin sl] E

where V = E at the beginning of the wire and V = 0 at the end of the wire. Heaviside then used one of his expansion theorems, which yielded:

V = E(1 - x/l) - 2E Σ [sin(sx)/(sl)] e^(-s^2 t/RS)

where s has the values π/l, 2π/l, 3π/l, .... As l becomes infinitely large the previous equation is converted from a Fourier series to a definite integral as follows:

V = E - (2E/π) ∫_0^∞ [sin(sx)/s] e^(-s^2 t/RS) ds

Heaviside then says, pp. 288, "...The current at the beginning, x = 0, is got by [applying C = -(1/R) dV/dx] and then putting x = 0. This makes

C0 = (2E/Rπ) ∫_0^∞ e^(-s^2 t/RS) ds    (H2)

Heaviside's next step was to set H1 and H2 equal to each other, which would have yielded:

√(S/R) [D^(1/2)1] E = (2E/Rπ) ∫_0^∞ e^(-s^2 t/RS) ds    (e3)

He then used some substitution rules from calculus:

u^2 = s^2/(RS),    du = ds/√(RS)

He said, pp. 288, "...Comparing H1 and H2 and removing unnecessary constants, we see that

D^(1/2)C = (2/π) ∫_0^∞ e^(-u^2 t) du    (H3)

which is a well-known integral..." At this point Heaviside uses the first proof that I reproduced to arrive at the required result:

D^(1/2)C = (πt)^(-1/2)

Let us take a closer look at how Heaviside arrived at H3. He said, "...Comparing H1 and H2 and removing unnecessary constants...", which is (e3). I agree with removing unnecessary constants to simplify an equation, which will yield:

DC = [(2/π) ∫_0^∞ e^(-s^2 t) ds]^2

The only way Heaviside could arrive at H3 was to "push" the square root through the derivative to obtain a fractional derivative. Although Heaviside never stated the step he took to arrive at H3, one can easily see the unsound mathematical steps that he took.

Heaviside made a blanket statement to justify the above proof, he said, pp. 288,

"...The above is only one way in a thousand. I do not give any formal proof that all ways
properly followed must necessarily lead to the same result..." I am not sure what the

reader thinks, but I have always been under the impression that for a proof to be

mathematically sound it must be capable of being reproduced in another fashion so that

the end results are the same, or in the same form.

In closing this chapter, I suppose that the reader and I could debate and discuss Heaviside's methods for years without end. But Heaviside cannot be denied the great accomplishments that he made during his years as a scientist. Although I am not fond of rigorous proofs, I do know, and agree, that they are needed. What intrigued me

the most was the fact that Heaviside was able to make such great accomplishments in

electromagnetism as well as differential equations without any formal mathematical

studies. This is why I called him one of my "heroes".

Lastly, I will add that many mathematicians have been motivated to make some of

Heaviside's arguments rigorous. The reader can consult Hillel Poritsky's expository

account in the MAA Monthly to learn more about this matter.

References

[5.1] Heaviside, O., Electromagnetic Theory, vol. 2, Third Edition, Chelsea Publishing Company, New York, 1971.

[5.2] Poritsky, Hillel, Heaviside's Operational Calculus - Its Applications and Foundations, MAA Monthly, June 1936, pp. 331-344.

Conclusion

In Chapter 1 of this paper the reader has seen that not many mathematicians have even attempted to work on the topic of fractional derivatives. Most of the ones that did work on the topic only did so briefly, and many acknowledged its existence but did nothing with it. More work has been done on fractional derivatives in the last fifty years than was done before that time. I feel that it is a fascinating subject, but it has been overlooked far too long by the mathematical community.

In Chapter 2 the reader gets a small look into my mind to see how I feel about

using the intuitive approach. I have been blessed with the ability to "toy" with

mathematical problems and come up with valid solutions to them. Unfortunately, I cannot always back them up with rigorous proofs. On this particular topic, though, I have become very good at backing up my work with rigor by using Complex Analysis, as the

reader was able to see in Chapter 3.

Chapter 4, The Tautochrone Problem, is another example of my newfound ability

to back up my intuitive approaches with rigor by means of Complex Analysis. I have

found that many proofs can be done more easily with the help of Complex Analysis than

in the way they are commonly done. In fact, undergraduates can understand many of these

proofs more easily when they are presented in the Complex Analysis form.

Lastly, in Chapter 5 the reader was able to see my views on Oliver Heaviside.

Although, I do not agree with the methods he used in solving some of his problems, I do

understand what he was trying to accomplish. He knew the solutions to many of his

problems through experimental data, but he just did not have the mathematical training

needed to back up his work with rigor. Please do not get the wrong impression, I do not

agree with the unsound mathematical logic that Heaviside used, but I do agree with using

the intuitive approach whenever possible.

Bibliography

[1] Cayley, A., 1880. Note on Riemann's paper, Math. Ann., 16, 81-82.

[2] Center, W., 1848. On the value of (d/dx)^θ x^0 when θ is a positive proper fraction,

Cambridge Dublin Math. J., 3, 163-169.

[3] Davis, H. T., 1927. The application of fractional operators to functional equations,

Amer. J. Math., 49, 123-142.

[4] De Morgan, A., 1840. The Differential and Integral Calculus Combining

Differentiation, Integration, Development, Differential Equations, Differences,

Summation, Calculus of Variations...with Applications to Algebra, Plane and Solid

Geometry, Baldwin and Craddock, London; published in 25 parts under the

superintendence of the Society for the Diffusion of Useful Knowledge, pp. 597-599.

[5] Euler, L., 1738. De progressionibus transcendentibus, seu quarum termini generales

algebraice dari nequeunt, Commentarii Academiae Scientiarum Imperialis

Petropolitanae, 5, p. 55.

[6] Fourier, J. B. J., 1822. Théorie analytique de la chaleur, Oeuvres de Fourier, Vol. 1,

Firmin Didot, Paris, p. 508.

[7] Heaviside, O., Electromagnetic Theory, vol. 2, Third Edition, Chelsea Publishing

Company, New York, 1971.

[8] Leibniz, G. W., 1695a. Letter from Hanover, Germany, to G. F. A. L'Hôpital,

September 30, 1695, in Mathematische Schriften, 1849; reprinted 1962, Olms Verlag,

Hildesheim, Germany, 2, 301-302.

[9] Leibniz, G. W., 1697. Letter from Hanover, Germany, to John Wallis, May 28, 1697,

in Mathematische Schriften; reprinted 1962, Olms Verlag, Hildesheim, Germany, 4,

25.

[10] Liouville, J., 1834. Mémoire sur le théorème des fonctions complémentaires, J.

Reine Angew. Math. (Crelle's J.), 11, 1-19.

[11] Miller, Kenneth S., and Ross, Bertram, 1993. An Introduction to the Fractional

Calculus and Fractional Differential Equations, John Wiley & Sons, Inc., pp. 1-20,

255-260.

[12] Osler, Thomas J., Lectures from Complex Analysis Course at Rowan University,

Spring Semester, 2000, not published.

[13] Osler, Thomas J., and Flores, Eduardo, 1999. The Tautochrone Under Arbitrary

Potentials Using Fractional Derivatives, American Journal of Physics (American

Association of Physics Teachers), vol. 67, no. 8, August 1999, pp. 718-722.

[14] Osler, Thomas J., and Kleinz, Marcia, A Child's Garden of Fractional Derivatives,

to be published.

[15] Spiegel, Murray R., 1968. Mathematical Handbook of Formulas and Tables,

Schaum's Outline Series in Mathematics, McGraw-Hill Book Company, p. 72.
