
The University of Michigan

Industry Program of the College of Engineering

Notes on Mathematics For Engineers

Prepared by the American Nuclear Society,
University of Michigan Student Branch

October, 1959
IP-394
Acknowledgments

These notes were prepared by the University of Michigan Student Branch of the American
Nuclear Society, Committee on Math Notes, which included R.W. Albrecht, J.M. Carpenter,
D.L. Galbraith, E.H. Klevans, and R.J. Mack, all students in the University of Michigan
Department of Nuclear Engineering. Assistance was provided by Professors William Kerr
and Paul Zweifel.

The translation to LaTeX was done by Alison Chistopherson (class of 2013), with the
encouragement of Alex Bielajew, over the Winter and Summer of 2012. Other than this
paragraph, everything else is a verbatim transcription of the original document.

Foreword

This set of notes has been compiled with one primary objective in mind: to provide,
in one volume, a handy reference for a large number of the commonly-used mathematical
formulae, and to do so consistently with respect to notation, definition, and normalization.
Many of us keep these results available to us in an excessive number of references, in which
the notation or normalization varies, or formulae are so spread out that they are difficult to
find, and their use is time-consuming.
Short explanations are included, with some examples, to serve two purposes: first, to
recall to the user some of the ideas which may have slipped his mind since his detailed study
of the material; second, for those who have never studied the material, to make its use at
least plausible, and to help in his study of references.
No claim can be made that all results anyone ever uses are here, but it is hoped that
a sufficient quantity of material is included to make necessary only infrequent use of other
references, except for integral tables, etc. for elementary work. Of course, the user may find
it desirable to add some pages of his own.
Finally, it is recommended that those unfamiliar with the theory at any point not blindly
apply the formulae herein, for this is risky business at best; a text should be studied (and,
of course, understood) first.

Contents

1 Orthogonal Functions 2
1.1 Importance of Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Generation of Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Use of Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Operational Properties of Some Common Sets of Orthogonal Functions . . . 8
1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.1 Boundary Value Problem Satisfied by Sines and Cosines . . . . . . . 9
1.5.2 Orthogonal Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.3 Expansion in Fourier Series . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.4 Normalization Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.5 Orthonormal Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.6 Fourier Series, Range 0 ≤ x ≤ L . . . . . . . . . . . . . . . . . . . . 10
1.5.7 Expansion in Half-range Series . . . . . . . . . . . . . . . . . . . . . . 10
1.5.8 Full Range Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2 Legendre Polynomials 12
2.1 Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Differential Equation Satisfied by Pℓ(x) . . . . . . . . . . . . . . . . . . . . 12
2.4 Rodrigues’ Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Normalizing Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.7 Expansion in Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . 13
2.8 Normalized Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . . 14
2.9 Expansion in Normalized Polynomials . . . . . . . . . . . . . . . . . . . . . . 14
2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms . . . 14
2.9.2 Integral Representation of Pℓ (x) . . . . . . . . . . . . . . . . . . . . 14
2.9.3 Bounds on Pℓ (x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3 Associated Legendre Functions 16


3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Differential Equation Satisfied by Pℓm (x) . . . . . . . . . . . . . . . . . . . . 16

3.4 Expression of Pℓ (cos θ) in terms of Pℓm (x) . . . . . . . . . . . . . . . . . . . 17
3.5 Normalizing Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

4 Spherical Harmonics 19
4.1 Definition of Yℓm(Ω) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2 Expression of Pℓ(cos Θ) in terms of the Spherical Harmonics . . . . . . . . . 19
4.3 Orthonormality of Y m (Ω) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.4 Expansion in Spherical Harmonics . . . . . . . . . . . . . . . . . . . . . . . . 20
4.5 Differential Equation Satisfied by Yℓm (Ω) . . . . . . . . . . . . . . . . . . . . 20
4.6 Some Low-order Spherical Harmonics . . . . . . . . . . . . . . . . . . . . . . 21
4.7 A Useful Relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

5 Laguerre Polynomials 23
5.1 Derivative Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3 Differential Equation Satisfied by Lαn (x) . . . . . . . . . . . . . . . . . . . . . 23
5.4 Orthogonality, Range 0 ≤ x ≤ ∞ . . . . . . . . . . . . . . . . . . . . . . . . 23
5.5 Expansion in Laguerre Polynomials . . . . . . . . . . . . . . . . . . . . . . . 24
5.6 Expansion of xm in Laguerre Polynomials . . . . . . . . . . . . . . . . . . . 24
5.7 Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

6 Bessel Functions 25
6.1 Differential Equation Satisfied by Bessel Functions . . . . . . . . . . . . . . . 25
6.2 General Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.3 Series Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4 Properties of Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.5 Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.6 Recursion Formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.7 Differential Formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.8 Orthogonality, Range 0 ≤ x ≤ c . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.9 Expansion in Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.10 Bessel Integral Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

7 Modified Bessel Functions 30


7.1 Differential Equation Satisfied . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.2 General Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.3 Relation of Modified Bessel to Bessel . . . . . . . . . . . . . . . . . . . . . . 31
7.4 Properties of In . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

8 The Laplace Transformation 33


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.1.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.1.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

8.1.3 Existence Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.1.4 Analyticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.1.5 Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.1.6 Further Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
8.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8.2.1 Solving Simultaneous Equations . . . . . . . . . . . . . . . . . . . . . 36
8.2.2 Electric Circuit Example . . . . . . . . . . . . . . . . . . . . . . . . . 37
8.2.3 Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
8.3 Inverse Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.3.1 Heaviside Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.3.2 The Inversion Integral . . . . . . . . . . . . . . . . . . . . . . . . . . 41
8.4 Tables of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.4.1 Analyticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
8.4.2 Cauchy’s Integral Formula . . . . . . . . . . . . . . . . . . . . . . . . 47
8.4.3 Regular and Singular Points . . . . . . . . . . . . . . . . . . . . . . . 50
8.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

9 Fourier Transforms 52
9.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.1.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.1.2 Range of Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.1.3 Existence Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.2 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.2.1 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . 53
9.2.2 Relations Among Infinite Range Transforms . . . . . . . . . . . . . . 55
9.2.3 Transforms of Functions of Two Variables . . . . . . . . . . . . . . . 56
9.2.4 Fourier Exponential Transforms of Functions of Three Variables . . . 56
9.3 Summary of Fourier Transform Formulae . . . . . . . . . . . . . . . . . . . . 57
9.3.1 Finite Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
9.3.2 Infinite Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
9.4 Types of Problems to which Fourier Transforms May be Applied . . . . . . . 61
9.5 Inversion of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . 66
9.5.1 Finite Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
9.5.2 Infinite Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
9.5.3 Inversion of Fourier Exponential Transforms . . . . . . . . . . . . . . 67
9.6 Table of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
9.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

10 Miscellaneous Identities, Definitions, Functions, and Notations 72


10.1 Leibnitz Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
10.2 General Solution of First Order Linear Differential Equations . . . . . . . . . 72
10.3 Identities in Vector Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
10.4 Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

10.5 Index Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.6 Examples of Use of Index Notation . . . . . . . . . . . . . . . . . . . . . . . 77
10.7 The Dirac Delta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.8 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10.9 Error Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

11 Notes and Conversion Factors 82


11.1 Electrical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
11.1.1 Electrostatic CGS System . . . . . . . . . . . . . . . . . . . . . . . . 82
11.1.2 Electromagnetic CGS System . . . . . . . . . . . . . . . . . . . . . . 82
11.1.3 Practical System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
11.2 Tables of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
11.2.1 Energy Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
11.3 Physical Constants and Conversion Factors; Dimensional Analysis . . . . . . 83

Chapter 1

Orthogonal Functions

In general, orthogonal functions arise in the solution of certain boundary-value problems.


The use of the properties of orthogonal functions may often greatly simplify and systematize
the solution to such a problem, in addition to providing a natural way of making approximate
solutions. Let us first make clear the concept of orthogonality. We begin at what may seem
an improbable starting point.
Two vectors, A and B in a three-dimensional space are said to be “orthogonal” if the
dot (inner) product A · B vanishes:
$\mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + A_3B_3 = \sum_{i=1}^{3} A_iB_i = 0.$

This is easily generalized to a space with more than three dimensions. In an n-dimensional
space the concept of orthogonality is unchanged, except that then the sum is over n terms
Ai Bi , and we have for orthogonality
$\mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + \cdots + A_nB_n = \sum_{i=1}^{n} A_iB_i = 0.$

Now one may think of the components of a vector, $A_1, A_2, A_3$, as the values of a (real)
function at three values of its argument; say $A_1 = f(r_1)$, $A_2 = f(r_2)$, $A_3 = f(r_3)$, or, in terms
which will make our efforts here more clear, $A_i = f(r_i)$ $(i = 1, 2, 3)$. That is, $r$ has the
values $r_1, r_2, r_3$, and to get $A_i$, put $r_i$ in $f(r)$.
We may now think of r as having any number, say n, possible values in some range,
so that f(r) evaluated at the various r’s generates an n-dimensional vector. The step to
considering a function as an infinitely-many-dimensional vector is now a natural one; we
allow n to increase without bound, r taking all values in its range.
Let r have some range a ≤ r ≤ b in which it takes on n values such that rj − rj−1 = ∆rj ,
and suppose two such n-dimensional vectors f (rj ) and g(rj ) are thus generated. The inner
product of f with g is generalized as
$\sum_{j=1}^{n} f(r_j)\,g(r_j)\,\Delta r_j.$

Above, for $\mathbf{A}\cdot\mathbf{B}$, the index $j$ takes on only the discrete values 1, 2, 3; so $\Delta r_j$ is always unity in this
simple case. Now let us consider the case as n → ∞, where r takes all values between a and
b. The inner product, if it exists, is then
$\lim_{n\to\infty}\,\sum_{j=1}^{n} f(r_j)\,g(r_j)\,\Delta r_j.$

With proper restrictions on ∆rj (max ∆rj → 0) and on the range of r(a ≤ r ≤ b), this is
just the limit occurring in the definition of the ordinary integral. Thus we say that the inner
product of f(r) with g(r), which is often denoted (f,g), is
$(f,g) = \int_a^b f(r)\,g(r)\,dr.$

Of course, the range could be infinite.


This discussion constitutes the generalization of the dot, or inner, product, to functions.
Two functions are then by definition orthogonal over the range a ≤ r ≤ b when
$(f,g) = \int_a^b f(r)\,g(r)\,dr = 0.$

As we shall see, this definition is subject to generalization by the inclusion of a “weight


function” p(r), with which
$(f,g) = \int_a^b p(r)\,f(r)\,g(r)\,dr.$
Here f and g are said to be orthogonal “with respect to weight function p” over the range
a ≤ r ≤ b. The function p(r) may of course be unity.
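For example, $f(r) = \sin r$ and $g(r) = \cos r$ are orthogonal over $-\pi \le r \le \pi$ with weight $p(r) = 1$:

$(f,g) = \int_{-\pi}^{\pi}\sin r\,\cos r\,dr = \frac{1}{2}\int_{-\pi}^{\pi}\sin 2r\,dr = \left[-\frac{\cos 2r}{4}\right]_{-\pi}^{\pi} = 0.$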
Only the function f (r) ≡ 0, a ≤ r ≤ b, is orthogonal to itself. In general, we denote the
inner product of a function with itself as
$(f,f) = \int_a^b \left[f(r)\right]^2 dr = N_f^2,$

and call it the norm of the function.


Orthogonal sets may or may not possess two other properties, normality and completeness. A set of orthogonal functions $\{U_i(u)\}$ is said to be normal, or orthonormal, if $N_i^2 = 1$
for all $i$.
A set of functions $\{U_i(u)\}$ orthogonal on the interval $u_1 \le u \le u_2$ is complete if there
exists no other function orthogonal to all the $U_i$ on the same interval, with respect to the
same weight function, if one is involved.

1.1 Importance of Orthogonal Functions


The importance of orthogonal sets in mathematical physics may perhaps be indicated by
further considerations of their analogs: orthogonal coordinate vectors. It is true that any

N-dimensional vector may be defined in terms of its components along N coordinates, pro-
vided that no more than two of the reference coordinates are coplanar. But if the reference
coordinates are orthogonal, e.g., Cartesian coordinates, then the equations take a particu-
larly simple form. The situation is somewhat similar when it is desired to expand a function
in terms of a set of other functions – it is much simpler if the set is orthogonal.
Completeness is another important property. It is apparent that no two reference axes
will suffice for the definition of a vector in 3-dimensional space. The set of two reference axes
is not complete in ordinary space, since a third coordinate can be added which is orthogonal
to both of them. Addition of this coordinate makes the set complete. The situation with
orthogonal functions is exactly analogous. Some authors define a complete set as a set in
terms of which any other function defined on the same interval can be expressed.
Among the more common sets of orthogonal functions are the sines and cosines, Bessel
functions, Legendre polynomials, associated Legendre functions, spherical harmonics, and
Laguerre and Hermite polynomials; operational properties of these are listed in these notes.

1.2 Generation of Orthogonal Functions


In the mathematical formulation of physical problems, one often encounters partial dif-
ferential equations or integro-differential equations (which contain not only derivatives of
functions but also integrals of functions), with which are associated a set of boundary con-
ditions. If the equation and its boundary conditions are such as to be “separable” in one of
the variables one may attempt to apply the method of “separation of variables”.
Suppose we (admittedly rather abstractly) represent our equation as

$\mathcal{L}\{F(u,v,w,\dots)\} = 0 \qquad (1.2.1)$

where $\mathcal{L}\{\,\}$ is an operator involving the variables $u, v, w, \dots$, applied to the function
$F(u,v,w,\dots)$. For example, $\mathcal{L}\{\,\}$ might be something like

$\frac{\partial^2}{\partial u^2} + v + \int_{w_1}^{w_2} K(w,w')\,dw' + \Delta^2$

so that when $\mathcal{L}\{\,\}$ is applied to a function $F$ the equation appears

$\mathcal{L}\{F\} = \frac{\partial^2 F}{\partial u^2} + vF + \int_{w_1}^{w_2} K(w,w')\,F(u,v,w')\,dw' + \Delta^2 F = 0$

The process of separation proceeds as follows. One attempts to find a variable, say u,
such that if it is assumed that F (u, v, w, · · · ) may be written

F (u, v, w) = U(u)φ(v, w, · · · )

then Equation (1.2.1) can be written as follows:

$\mathcal{L}\{F\} = \mathcal{L}\{U(u)\,\varphi(v,w)\} = 0. \qquad (1.2.2)$
Now let

$\Upsilon\{U(u)\} = \Phi\{\varphi(v,w)\}.$

Here, $\Upsilon$ is an operator involving only $u$, and $\Phi$ is an operator involving only $v, w, \dots$. Returning to the example, assume

$F(u,v,w) = U(u)\,\varphi(v,w).$

Then,

$\mathcal{L}\{F\} = \mathcal{L}\{U\varphi\} = \frac{\partial^2(U\varphi)}{\partial u^2} + vU\varphi + \Delta^2 U\varphi + \int_{w_1}^{w_2} K(w,w')\,U\varphi\,dw'$

$\qquad\qquad\quad = \varphi\,\frac{d^2U}{du^2} + vU\varphi + \Delta^2 U\varphi + U\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' = 0$

We can, in this case, if $U\varphi \neq 0$, divide by $U\varphi$ to get

$\frac{1}{U}\,\frac{d^2U}{du^2} + v + \frac{1}{\varphi}\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' + \Delta^2 = 0.$

This can be rearranged to obtain the following result:

$\frac{1}{U}\,\frac{d^2U}{du^2} + \Delta^2 = -v - \frac{1}{\varphi}\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'$
In this case the separation has been successful, for on the left are functions of u only, while
on the right stand only functions of v and w. Now suppose we were to vary u, fixing v and w.
Then the right side would not change since it does not involve u, and is therefore a constant.
We therefore state this fact:
$\frac{1}{U}\,\frac{d^2U}{du^2} + \Delta^2 = \mu^2 = -v - \frac{1}{\varphi}\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'$

where µ2 is a constant called the “separation constant”. We may choose it at our discretion.
We now have two equations, where only one existed before:
$0 = \frac{d^2U}{du^2} + (\Delta^2 - \mu^2)\,U$

$0 = (v + \mu^2)\,\varphi + \int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'.$

Suppose, further, that the boundary conditions on $F$ are separable as well, reducing to conditions on $U$ alone; say

$U(u_1) = 0$

$\frac{dU}{du}(u_2) = U(u_2).$

These equations are separated; they involve neither $v$ nor $w$.
By the process of separation of variables we have, from the original equation involving
u, v, and w, generated a new set of equations, some of which (those involving u) are a
complete problem:

$\begin{cases} 0 = \dfrac{d^2U}{du^2} + (\Delta^2 - \mu^2)\,U \\ 0 = U(u_1) \\ \dfrac{dU}{du}(u_2) = U(u_2) \\ 0 = (v + \mu^2)\,\varphi + \displaystyle\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' \end{cases}$

This was our objective in applying the method of separation of variables. The process may
be repeated on the remaining equation or performed on another variable.
Now if, after separation, the u-equation can be put in the form

$\frac{d}{du}\!\left[r(u)\,\frac{dU}{du}\right] + \left[q(u) + \mu\,p(u)\right]U = 0, \qquad (1.2.3)$

and the boundary conditions assume the form

$a_1U(u_1) + a_2U'(u_1) + \alpha_1U(u_2) + \alpha_2U'(u_2) = 0$
$b_1U(u_1) + b_2U'(u_1) + \beta_1U(u_2) + \beta_2U'(u_2) = 0 \qquad (1.2.4)$
where a, b, α, and β are constants (some may be zero), then the system of differential equa-
tions and boundary conditions is called a “Sturm-Liouville system”1 . It will be noted that
the Sturm-Liouville system is very general, and includes many important equations as special
cases, for example the wave equation with those boundary conditions which are commonly
applied.
This system under quite general conditions generates a complete set of orthogonal functions, one for each of an infinite, discrete set of values of the parameter $\mu$. One finds the
values of $\mu$, called “eigenvalues”, for which solutions exist, and the solution functions corresponding to these eigenvalues, called “eigenfunctions”. If the eigenvalues are $\mu_1, \mu_2, \dots$
and the corresponding eigenfunctions are $U_1(u), U_2(u), \dots$, then in general the functions
are orthogonal over the range $u_1$ to $u_2$, with respect to the weight function $p(u)$:
$\int_{u_1}^{u_2} p(u)\,U_i(u)\,U_j(u)\,du = N_i^2\,\delta_{ij} \qquad (1.2.5)$

Here δ is the Kronecker delta, and p(u) is the same as in equation (1.2.3).
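As a familiar special case, take $r(u) = p(u) = 1$ and $q(u) = 0$ on $0 \le u \le L$, with boundary conditions $U(0) = U(L) = 0$. Equation (1.2.3) is then

$\frac{d^2U}{du^2} + \mu\,U = 0, \qquad U(0) = U(L) = 0,$

whose eigenvalues and eigenfunctions are

$\mu_n = \left(\frac{n\pi}{L}\right)^2, \qquad U_n(u) = \sin\left(\frac{n\pi u}{L}\right), \qquad n = 1, 2, \dots$

These are orthogonal on $0 \le u \le L$ with weight $p(u) = 1$ and $N_n^2 = L/2$; they are the half-range sine set of Section 1.5.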
One must take care to find all possible eigenvalues; when the equation and boundary
conditions are written exactly as in (1.2.3) and (1.2.4), they are all real. When all
eigenvalues are found, the set of eigenfunctions is complete, and any function reasonably
well behaved between u1 and u2 may be represented in terms of them. Say we seek an
expansion of a function f(u) in terms of our eigenfunctions,
$f(u) = \sum_i f_i\,U_i(u)$
1
pronounced LEE-oo-vil, NOT Loo-i-vil

where the $f_i$ are a set of constants. Multiplying on both sides by $p(u)\,U_j(u)$ and integrating
with respect to $u$ over the range $u_1$ to $u_2$ gives

$\int_{u_1}^{u_2} p(u)\,U_j(u)\,f(u)\,du = \int_{u_1}^{u_2} p(u)\,U_j(u)\sum_i f_i\,U_i(u)\,du = \sum_i f_i\int_{u_1}^{u_2} p(u)\,U_j(u)\,U_i(u)\,du$

As a consequence of the orthogonality of the $U_i(u)$’s, defined in equation (1.2.5), this becomes

$\sum_i f_i\,N_i^2\,\delta_{ij} = f_j\,N_j^2.$

We then solve for the $f_j$’s:

$f_j = \frac{1}{N_j^2}\int_{u_1}^{u_2} p(u)\,U_j(u)\,f(u)\,du = \frac{1}{N_j^2}\,(U_j, f).$

In order to justify the switching of the order of summation and integration here, and to
guarantee the existence of $(U_j, f)$, we usually require that $f(u)$ be absolutely integrable,
i.e., that

$\int_{u_1}^{u_2} |f(u)|\,du \quad \text{exists.}$
It is to be noted that the outline of procedure above requires the solution of an ordinary
differential equation, perhaps not an easy task, but one hopes not as difficult as the problem
of solving the partial differential equation. Very often the eigenfunctions which fit a given
problem are known, and so this process can be bypassed.
Different sets of eigenfunctions have different sets of operational properties, or sets of
relationships between members, which may be found useful.
We note in conclusion that sets of orthogonal functions are generated by means other
than Sturm-Liouville systems: by sets of differential equations and boundary conditions
which are not of Sturm-Liouville type, and by integral equations, to mention two.

1.3 Use of Orthogonal Functions


Let us consider a linear partial differential equation, outlining the elimination of one
variable from the equation. Under rather general conditions, we may expand F in an infinite
series of orthogonal functions which we assume are known:

$F(u,v,w,\dots) = \sum_{i=0}^{\infty} f_i(v,w,\dots)\,U_i(u)$

where

$f_i(v,w,\dots) = \frac{1}{N_i^2}\int_{u_1}^{u_2} p(u)\,F(u,v,w,\dots)\,U_i(u)\,du.$

The formula for the coefficient fi follows immediately from multiplying the first equation
by p(u)Ui (u) and integrating. Let the original partial differential equation be represented
abstractly as

$\mathcal{L}\{F(u,v,w,\dots)\} = S(u,v,w,\dots)$

where $\mathcal{L}$ is a linear differential operator and $\mathcal{L}\{F\}$ merely represents that part of the differential equation that involves $F$. Assume $F$ to be expanded in a series in $f_iU_i$; multiply the
equation by $p(u)\,U_j(u)$ and integrate over $u$ from $u_1$ to $u_2$:
$\mathcal{L}\{F\} = S$

$\mathcal{L}\Big\{\sum_i f_i\,U_i\Big\} = S$

$\int_{u_1}^{u_2} p(u)\,U_j(u)\;\mathcal{L}\Big\{\sum_i f_i\,U_i\Big\}\,du = \int_{u_1}^{u_2} S\,p(u)\,U_j(u)\,du = N_j^2\,S_j(v,w,\dots)$
where

$S_j = \frac{1}{N_j^2}\int_{u_1}^{u_2} S(u,v,w,\dots)\,p(u)\,U_j(u)\,du$

so that

$S(u,v,w,\dots) = \sum_j S_j\,U_j$

Now, by using the operational properties of the $U_i$’s, one reduces the equations (an infinite
set, one for each $j$) to a set in the $f_i$’s and $S_i$’s. The particular steps taken depend upon the
exact nature of the operator $\mathcal{L}$, and the set of equations may be coupled, i.e., $f$’s with several
indices may appear in the same equation (for example, $f_{i-1}$, $f_i$, $f_{i+1}$). These equations do
not involve derivatives with respect to the variable $u$, and we have gained in this respect. But
we have to contend with the infinite set of equations.
All is not lost at this point, for, as it turns out, the series for F, (called a generalized
Fourier series because of the manner in which the coefficients of Ui ’s are chosen in the
expansion for F), is the most rapidly convergent series possible in the Ui ’s. Thus solving for
only the coefficients of the leading terms in the series may enable us to obtain a satisfactory
approximation to F. Also, very often, we are interested only in one or two of the fi ’s on
physical or other grounds.

1.4 Operational Properties of Some Common Sets of Orthogonal Functions
There follow now a few pages on which are outlined, rather concisely, some basic oper-
ational properties of some commonly occurring sets of orthogonal functions. The familiar

set of sines and cosines used in the construction of Fourier series is included as an example.
These lists of properties contained in these notes are by no means complete, though they
may suffice for the solution of many problems. The references listed with each section give
detailed derivations, more extensive lists of properties, more discussions of the method and
its limitations, or examples of the use of orthogonal functions. It is recommended that one
unfamiliar with these functions read in some of these references, in order to avoid the pitfalls
of using mathematics beyond the realm of its applicability. These notes have been assembled
mainly for reference. Of special interest may be the table in Margenau and Murphy, page
254, which lists twelve special cases of the Sturm-Liouville equation with the name of the
orthogonal set which satisfies each one.

1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ


1.5.1 Boundary Value Problem Satisfied by Sines and Cosines

$\frac{d^2f}{dx^2} + k^2f = 0 \qquad k \text{ real}$
$f(-\ell) = f(\ell)$
$f'(-\ell) = f'(\ell)$

1.5.2 Orthogonal Set


   
$\left\{\sin\left(\frac{n\pi x}{\ell}\right),\ \cos\left(\frac{n\pi x}{\ell}\right)\right\} \qquad -\ell \le x \le \ell, \quad n = 0, 1, 2, \dots$

1.5.3 Expansion in Fourier Series


$F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left\{a_n\cos\left(\frac{n\pi x}{\ell}\right) + b_n\sin\left(\frac{n\pi x}{\ell}\right)\right\}$

where
$a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx \qquad n = 0, 1, 2, \dots$

$b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx \qquad n = 1, 2, \dots$

1.5.4 Normalization Factors

$N_n^2(\cos) = N_n^2(\sin) = \ell \qquad n = 1, 2, \dots$

$N_0^2(\cos) = 2\ell$

1.5.5 Orthonormal Set

   
$\left\{\frac{1}{\sqrt{2\ell}},\ \frac{1}{\sqrt{\ell}}\sin\left(\frac{n\pi x}{\ell}\right),\ \frac{1}{\sqrt{\ell}}\cos\left(\frac{n\pi x}{\ell}\right)\right\} \qquad -\ell \le x \le \ell,\ n = 1, 2, \dots$

1.5.6 Fourier Series, Range 0 ≤ x ≤ L


One may wish to expand a function defined on an interval 0 ≤ x ≤ L, in a Fourier series.
One may choose at his discretion either of two ways, expanding either in a series of sines or
one of cosines. The sine-series expansion yields an odd function and the cosine series yields
an even function when the series is considered as a continuation of the function outside the
range 0 ≤ x ≤ L. An “odd function” is one such that F(x)=-F(-x) and an “even function”
is one such that F(x)=F(-x). Even functions are symmetric about the y axis.
Of course, either series represents the function on the interval $0 \le x \le L$. Note that
on this range, either set $\{\sin(n\pi x/L)\}_{n=1}^{\infty}$ or $\{\cos(n\pi x/L)\}_{n=0}^{\infty}$ forms a complete set; thus the two
possible expansions.

1.5.7 Expansion in Half-range Series

$F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\left(\frac{n\pi x}{L}\right)$

or

$F(x) = \sum_{n=1}^{\infty} b_n\sin\left(\frac{n\pi x}{L}\right)$

where

$a_n = \frac{2}{L}\int_0^L F(x)\cos\left(\frac{n\pi x}{L}\right)dx, \qquad n = 0, 1, 2, \dots$

$b_n = \frac{2}{L}\int_0^L F(x)\sin\left(\frac{n\pi x}{L}\right)dx, \qquad n = 1, 2, \dots$
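For example, the half-range sine expansion of $F(x) = 1$ on $0 \le x \le L$ has coefficients

$b_n = \frac{2}{L}\int_0^L \sin\left(\frac{n\pi x}{L}\right)dx = \frac{2}{n\pi}\left(1 - \cos n\pi\right) = \begin{cases} 4/n\pi, & n \text{ odd} \\ 0, & n \text{ even} \end{cases}$

so that

$1 = \frac{4}{\pi}\left(\sin\frac{\pi x}{L} + \frac{1}{3}\sin\frac{3\pi x}{L} + \frac{1}{5}\sin\frac{5\pi x}{L} + \cdots\right), \qquad 0 < x < L.$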

1.5.8 Full Range Series
Consider the coefficients in a full-range expansion for an even function, i.e., $F(x)$ such
that $F(x) = F(-x)$.

$a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx = \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx + \int_0^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right]$

Putting $-x$ in for $x$ in the first integral,

$a_n = \frac{1}{\ell}\left[-\int_{\ell}^{0} F(-x)\cos\left(\frac{-n\pi x}{\ell}\right)dx + \int_0^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right]$

Using $F(-x) = F(x)$ and $\cos\left(\frac{-n\pi x}{\ell}\right) = \cos\left(\frac{n\pi x}{\ell}\right)$, and reversing the limits of integration,

$a_n = \frac{1}{\ell}\left[\int_0^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx + \int_0^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right] = \frac{2}{\ell}\int_0^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx \neq 0 \text{ in general.}$

Similarly,

$b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx = \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx + \int_0^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right]$

Putting $-x$ in for $x$,

$b_n = \frac{1}{\ell}\left[-\int_{\ell}^{0} F(-x)\sin\left(\frac{-n\pi x}{\ell}\right)dx + \int_0^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right]$

Using $F(-x) = F(x)$ and $\sin\left(\frac{-n\pi x}{\ell}\right) = -\sin\left(\frac{n\pi x}{\ell}\right)$, and reversing the limits of integration,

$b_n = \frac{1}{\ell}\left[-\int_0^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx + \int_0^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right] = 0 \qquad \forall\, n.$
Thus, the coefficients of the sine terms in the full range expansion of an even function are
all zero; the cosine coefficients do not, of course, vanish for all n. This situation is much the
same with respect to coefficients in the full range expansion of odd functions, except that in
this case it is the coefficients of the cosine terms which vanish, for n=0, 1, 2 . . ..

Chapter 2

Legendre Polynomials

2.1 Generating Function

If

$H(x,y) = \frac{1}{\sqrt{1 - 2xy + y^2}} = \sum_{\ell=0}^{\infty} P_\ell(x)\,y^\ell$

then

$P_\ell(x) = \frac{1}{\ell!}\left[\frac{\partial^\ell}{\partial y^\ell}\,H(x,y)\right]_{y=0}.$

2.2 Recurrence Relations

1. $(\ell+1)\,P_{\ell+1}(x) - (2\ell+1)\,x\,P_\ell(x) + \ell\,P_{\ell-1}(x) = 0$

2. $\ell\,P_{\ell-1}(x) - P_\ell'(x) + x\,P_{\ell-1}'(x) = 0$

2.3 Differential Equation Satisfied by $P_\ell(x)$

1. $(1-x^2)\,P_\ell''(x) - 2x\,P_\ell'(x) + \ell(\ell+1)\,P_\ell(x) = 0 \qquad \ell = 0, 1, 2, \dots$

Very often $x = \cos\theta$.

2. $\frac{d^2 v_\ell}{dx^2} + \frac{\ell(\ell+1)(1-x^2) + 1}{(1-x^2)^2}\,v_\ell = 0$, the normal form obtained with $v_\ell = \sqrt{1-x^2}\,P_\ell(x)$.

2.4 Rodrigues’ Formula

$P_\ell(x) = \frac{1}{2^\ell\,\ell!}\,\frac{d^\ell}{dx^\ell}\,(x^2 - 1)^\ell$
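For example, for $\ell = 2$:

$P_2(x) = \frac{1}{2^2\,2!}\,\frac{d^2}{dx^2}(x^2-1)^2 = \frac{1}{8}\,\frac{d^2}{dx^2}\left(x^4 - 2x^2 + 1\right) = \frac{1}{8}\left(12x^2 - 4\right) = \frac{1}{2}\left(3x^2 - 1\right),$

in agreement with the table of Section 2.9.1.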

2.5 Normalizing Factor


The norm of $P_\ell(x)$ is:

$N_\ell^2 = \int_{-1}^{1}\left[P_\ell(x)\right]^2 dx = \frac{2}{2\ell+1}$

2.6 Orthogonality
$\int_{-1}^{1} P_\ell(x)\,P_{\ell'}(x)\,dx = \delta_{\ell\ell'}\,N_\ell^2$

where $\delta_{\ell\ell'}$ is the Kronecker delta, defined as

$\delta_{\ell\ell'} = \begin{cases} 0, & \ell \neq \ell' \\ 1, & \ell = \ell' \end{cases}$

2.7 Expansion in Legendre Polynomials


Any function $f(x)$ which is defined over the range $-1 \le x \le 1$, and which is absolutely
integrable over this range, may be expanded in an infinite series of Legendre polynomials:

$f(x) = \sum_{\ell=0}^{\infty} f_\ell\,P_\ell(x)$

where

$f_\ell = \frac{2\ell+1}{2}\int_{-1}^{1} f(x)\,P_\ell(x)\,dx = \frac{1}{N_\ell^2}\int_{-1}^{1} f(x)\,P_\ell(x)\,dx$
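For example, $f(x) = x^2$ expands in just two terms. Since $x^2 = \frac{1}{3}P_0(x) + \frac{2}{3}P_2(x)$, the coefficients from the formula above are

$f_0 = \frac{1}{2}\int_{-1}^{1} x^2\,dx = \frac{1}{3}, \qquad f_2 = \frac{5}{2}\int_{-1}^{1} x^2\,\frac{3x^2-1}{2}\,dx = \frac{2}{3},$

and all others vanish.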

2.8 Normalized Legendre Polynomials

$\bar{P}_\ell(x) = \frac{P_\ell(x)}{N_\ell}$

2.9 Expansion in Normalized Polynomials


$g(x) = \sum_{\ell=0}^{\infty} g_\ell\,\bar{P}_\ell(x)$

where

$g_\ell = \int_{-1}^{1} g(x)\,\bar{P}_\ell(x)\,dx$

2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms

$\begin{array}{ll} P_0(x) = 1 & N_0^2 = 2 \\ P_1(x) = x & N_1^2 = \frac{2}{3} \\ P_2(x) = \frac{1}{2}(3x^2-1) & N_2^2 = \frac{2}{5} \\ P_3(x) = \frac{1}{2}(5x^3-3x) & N_3^2 = \frac{2}{7} \\ P_4(x) = \frac{1}{8}(35x^4-30x^2+3) & N_4^2 = \frac{2}{9} \\ P_5(x) = \frac{1}{8}(63x^5-70x^3+15x) & N_5^2 = \frac{2}{11} \\ P_6(x) = \frac{1}{16}(231x^6-315x^4+105x^2-5) & N_6^2 = \frac{2}{13} \end{array}$

2.9.2 Integral Representation of Pℓ (x)

$P_\ell(x) = \frac{1}{\pi}\int_0^{\pi}\left[x + \sqrt{x^2-1}\,\cos\varphi\right]^\ell d\varphi$

2.9.3 Bounds on Pℓ (x)
For $-1 \le x \le 1$,

$|P_\ell(x)| \le 1 \qquad \ell = 0, 1, 2, \dots$

References on Legendre Polynomials


1. D. Jackson; “Fourier Series and Orthogonal Polynomials”, Carus Math. Monographs,
Number 6, 1941, pp. 45-68.

2. H. Margenau and G. M. Murphy; “The Mathematics of Physics and Chemistry”, D.


Van Nostrand, First Edition, 1943, pp. 94-109.

3. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and


Company, 1927, pp. 302-320.

4. R.V. Churchill; “Fourier Series and Boundary Value Problems”, McGraw Hill, 1941, pp.
175-201. (For discussion of the concept of orthogonality, see Chap. 3.)

5. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).

Chapter 3

Associated Legendre Functions

3.1 Definition

1. $P_\ell^m(x) = (1-x^2)^{m/2}\,\frac{d^{|m|}}{dx^{|m|}}\,P_\ell(x) \qquad (0 \le m \le \ell)$

2. $P_\ell^m(x) = \frac{(1-x^2)^{m/2}}{2^\ell\,\ell!}\,\frac{d^{\ell+m}}{dx^{\ell+m}}\,(x^2-1)^\ell$

3.2 Recurrence Relations

1. $P_\ell^{m+1}(x) - \frac{2mx}{\sqrt{1-x^2}}\,P_\ell^m(x) + \left[\ell(\ell+1) - m(m-1)\right]P_\ell^{m-1}(x) = 0$

2. $x\,P_\ell^m(x) = \frac{(\ell+m)\,P_{\ell-1}^m(x) + (\ell-m+1)\,P_{\ell+1}^m(x)}{2\ell+1}$

3. $\sqrt{1-x^2}\,P_\ell^m(x) = \frac{P_{\ell+1}^{m+1}(x) - P_{\ell-1}^{m+1}(x)}{2\ell+1}$

4. $P_\ell^{m+1}(x) = \frac{2m}{\sqrt{1-x^2}\,(2\ell+1)}\left[(\ell+m)\,P_{\ell-1}^m(x) + (\ell-m+1)\,P_{\ell+1}^m(x)\right] - \left[\ell(\ell+1) - m(m-1)\right]P_\ell^{m-1}(x)$

3.3 Differential Equation Satisfied by $P_\ell^m(x)$

$(1-x^2)\,\frac{d^2}{dx^2}P_\ell^m(x) - 2x\,\frac{d}{dx}P_\ell^m(x) + \left[\ell(\ell+1) - \frac{m^2}{1-x^2}\right]P_\ell^m(x) = 0$

Since $P_\ell^m(x)$ is defined on the interval $-1 \le x \le 1$, in physical applications $P_\ell^m(x)$ is
often associated with an angle $\theta$ through the relation $x = \cos\theta$. Then the equation satisfied
by $P_\ell^m(x)$ may be found in the following form:

$\frac{d^2}{d\theta^2}P_\ell^m(x) + \cot\theta\,\frac{d}{d\theta}P_\ell^m(x) + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(x) = 0$

or

$\frac{1}{\sin\theta}\,\frac{d}{d\theta}\left(\sin\theta\,\frac{d}{d\theta}P_\ell^m(x)\right) + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(x) = 0$

3.4 Expression of $P_\ell(\cos\Theta)$ in terms of $P_\ell^m(x)$


1. Let $\theta_1, \varphi_1$ and $\theta_2, \varphi_2$ respectively denote the polar and azimuthal angles of
two lines passing through the origin. Then $\Theta$, the angle between these two lines, is
given by

$\cos\Theta = \cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1 - \varphi_2).$

With these definitions, $P_\ell(\cos\Theta)$ may be expressed as follows:

$P_\ell(\cos\Theta) = P_\ell(\cos\theta_1)\,P_\ell(\cos\theta_2) + 2\sum_{m=1}^{\ell}\frac{(\ell-m)!}{(\ell+m)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\cos m(\varphi_1 - \varphi_2)$

2. If $P_\ell^m(x)$ is defined in a slightly different manner that allows negative values for $m$,

$P_\ell^m(x) = (1-x^2)^{|m|/2}\,\frac{d^{|m|}}{dx^{|m|}}\,P_\ell(x) \qquad (|m| \le \ell)$

then the expansion may be written as follows:

$P_\ell(\cos\Theta) = \sum_{m=-\ell}^{\ell}\frac{(\ell-|m|)!}{(\ell+|m|)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\cos m(\varphi_1 - \varphi_2).$

3.5 Normalizing Factor


$(N_\ell^m)^2 = \int_{-1}^{1}\left[P_\ell^m(x)\right]^2 dx = \frac{2}{2\ell+1}\,\frac{(\ell+m)!}{(\ell-m)!}$

References
1. H. Margenau and G. M. Murphy; “The Mathematics of Physics and Chemistry”, D.
Van Nostrand, 1st Edition (1943).

2. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and
Company, 1927, pp. 302-320.

3. D. K. Holmes and R.V. Meghreblian, “Notes on Reactor Analysis, Part II, Theory”,
U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.

4. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).

Chapter 4

Spherical Harmonics

4.1 Definition of $Y_\ell^m(\Omega)$


The spherical harmonics are a complete, orthonormal set of complex functions of two
variables, defined on the unit sphere. Below, the vector symbol Ω will be used to denote a
pair of variables θ, ϕ here taken to be, respectively, the polar and azimuthal angles specifying
a point on the unit sphere with reference to a coordinate system at its center. With these
conventions, the functions Yℓm (Ω) are defined as:
$Y_\ell^m(\Omega) = \left[\frac{2\ell+1}{4\pi}\,\frac{(\ell-|m|)!}{(\ell+|m|)!}\right]^{1/2} P_\ell^m(\cos\theta)\,e^{im\varphi}.$

Writing $\mu = \cos\theta$, this becomes

$Y_\ell^m(\Omega) = \left[\frac{2\ell+1}{4\pi}\,\frac{(\ell-|m|)!}{(\ell+|m|)!}\right]^{1/2}\frac{(1-\mu^2)^{|m|/2}}{2^\ell\,\ell!}\,\frac{d^{\ell+|m|}}{d\mu^{\ell+|m|}}\,(\mu^2-1)^\ell\; e^{im\varphi}.$

Note: $Y_\ell^{m\,*} = (-1)^m\,Y_\ell^{-m}$, where $*$ denotes the complex conjugate.

4.2 Expression of $P_\ell(\cos\Theta)$ in terms of the Spherical Harmonics
Define θ1 , ϕ1 , i.e. Ω1 , and θ2 , ϕ2 , i.e. Ω2 as the polar and azimuthal angles specifying
two points on the unit sphere, with respect to a coordinate system at its center. Denote by
Θ the angle between the lines drawn from each point to the origin of the coordinates. Then:
$P_\ell(\cos\Theta) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell} Y_\ell^{m\,*}(\Omega_1)\,Y_\ell^m(\Omega_2)$

4.3 Orthonormality of $Y_\ell^m(\Omega)$

$\int Y_j^{k\,*}(\Omega)\,Y_\ell^m(\Omega)\,d\Omega = \delta_{j\ell}\,\delta_{km}$

where the integral over the vector $\Omega$ indicates a double integration over the full ranges of $\theta$ and
$\varphi$: $0 \le \theta \le \pi$, $0 \le \varphi \le 2\pi$.

4.4 Expansion in Spherical Harmonics


Any function, perhaps complex, of the variables $\theta$ and $\varphi$, i.e. $\Omega$, absolutely integrable
over $\theta$ and $\varphi$, can be expanded in terms of the functions $Y_\ell^m$:

$F(\Omega) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} F_\ell^m\,Y_\ell^m(\Omega)$

where

$F_\ell^m = \int F(\Omega)\,Y_\ell^{m\,*}(\Omega)\,d\Omega$

4.5 Differential Equation Satisfied by $Y_\ell^m(\Omega)$

$\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial Y}{\partial\theta}\right) + \frac{1}{\sin\theta}\,\frac{\partial^2 Y}{\partial\varphi^2} + \ell(\ell+1)\sin\theta\;Y = 0$

Assume $Y = \Phi(\varphi)\,\Theta(\theta)$ and let

$\frac{\partial^2 Y}{\partial\varphi^2} = -m^2\,Y.$

Imposing the condition that $Y$ is bounded at $\cos\theta = \pm 1$ makes $\ell$ an integer. Imposing that
$Y$ be single-valued in $\varphi$ makes $m$ an integer.

4.6 Some Low-order Spherical Harmonics

$Y_0^0(\Omega) = \frac{1}{\sqrt{4\pi}}$

$Y_1^{-1}(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{-i\varphi}$

$Y_1^0(\Omega) = \sqrt{\frac{3}{4\pi}}\,\cos\theta$

$Y_1^1(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{i\varphi}$
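As a check on the normalization, for $Y_1^0$:

$\int \left|Y_1^0\right|^2 d\Omega = \frac{3}{4\pi}\int_0^{2\pi}\!\!\int_0^{\pi}\cos^2\theta\,\sin\theta\,d\theta\,d\varphi = \frac{3}{4\pi}\cdot 2\pi\cdot\frac{2}{3} = 1,$

in accord with Section 4.3.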

4.7 A Useful Relationship


If the vector Ω is considered to represent a point on the unit sphere, its components can
be represented as follows:
$\Omega_x = \sin\theta\cos\varphi$
$\Omega_y = \sin\theta\sin\varphi$
$\Omega_z = \cos\theta$

If a new set of components is constructed as follows:

$\Omega_{-1} = \frac{1}{\sqrt{2}}(\Omega_x - i\Omega_y) = \frac{1}{\sqrt{2}}\sin\theta\,(\cos\varphi - i\sin\varphi)$
$\Omega_0 = \Omega_z = \cos\theta$
$\Omega_1 = \frac{1}{\sqrt{2}}(\Omega_x + i\Omega_y) = \frac{1}{\sqrt{2}}\sin\theta\,(\cos\varphi + i\sin\varphi)$

then these are easily seen to be expressible in terms of the $\ell = 1$ spherical harmonics (see Section 4.6):

$\Omega_{-1} = \sqrt{\frac{4\pi}{3}}\,Y_1^{-1}, \qquad \Omega_0 = \sqrt{\frac{4\pi}{3}}\,Y_1^0, \qquad \Omega_1 = \sqrt{\frac{4\pi}{3}}\,Y_1^1$

References
1. H. Margenau and G. M. Murphy; “The Mathematics of Physics and Chemistry”, D.
Van Nostrand, 1st Edition (1943).

2. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and
Company, 1927, pp. 302-320.

3. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).

4. D. K. Holmes and R.V. Meghreblian, “Notes on Reactor Analysis, Part II, Theory”,
U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.

5. L. I. Schiff, “Quantum Mechanics”, 2nd Edition, McGraw Hill, 1955, p. 73.

6. Whittaker and Watson; “Modern Analysis”, 4th Edition, Cambridge University Press
(1927), pp. 391-396.

Chapter 5

Laguerre Polynomials

5.1 Derivative Definition

$L_n^{(\alpha)}(x) = (-1)^n\,x^{-\alpha}\,e^{x}\,\frac{d^n}{dx^n}\left(x^{\alpha+n}\,e^{-x}\right)$

5.2 Generating Function


$H(x,t) = (1-t)^{-(\alpha+1)}\,e^{-\frac{xt}{1-t}} = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\,L_n^{(\alpha)}(x)\,t^n$

Thus

$L_n^{(\alpha)}(x) = (-1)^n\left[\frac{d^n}{dt^n}\,(1-t)^{-(\alpha+1)}\,e^{-\frac{xt}{1-t}}\right]_{t=0}$

5.3 Differential Equation Satisfied by $L_n^{(\alpha)}(x)$

$x\,\frac{d^2y_n}{dx^2} + (\alpha - x + 1)\,\frac{dy_n}{dx} + n\,y_n = 0 \qquad n = 0, 1, 2, \dots$

where $y_n = L_n^{(\alpha)}(x)$.

5.4 Orthogonality, Range $0 \le x \le \infty$

$\int_0^{\infty} x^{\alpha}\,e^{-x}\,L_m^{(\alpha)}(x)\,L_n^{(\alpha)}(x)\,dx = N_n^2\,\delta_{mn}$

where

$N_n^2 = n!\;\Gamma(n+\alpha+1)$

5.5 Expansion in Laguerre Polynomials


$F(x) = \sum_{n=0}^{\infty} f_n\,L_n^{(\alpha)}(x)$

where

$f_n = \frac{1}{N_n^2}\int_0^{\infty} x^{\alpha}\,e^{-x}\,F(x)\,L_n^{(\alpha)}(x)\,dx$

5.6 Expansion of $x^m$ in Laguerre Polynomials

$x^m = \sum_{n=0}^{m}\frac{(-1)^n\,m!\;\Gamma(\alpha+m+1)}{n!\;\Gamma(n+\alpha+1)}\,L_n^{(\alpha)}(x)$

5.7 Recurrence Relations

a. $(x - 2n - \alpha - 1)\,L_n^{(\alpha)}(x) = L_{n+1}^{(\alpha)}(x) + n(n+\alpha)\,L_{n-1}^{(\alpha)}(x)$

b. $\frac{d}{dx}L_{n+1}^{(\alpha)}(x) = (n+1)\left[\frac{d}{dx}L_n^{(\alpha)}(x) - L_n^{(\alpha)}(x)\right]$

Chapter 6

Bessel Functions

6.1 Differential Equation Satisfied by Bessel Functions

1. $\frac{d^2y}{dx^2} + \frac{1}{x}\,\frac{dy}{dx} + \left(1 - \frac{\nu^2}{x^2}\right)y = 0$

or

2. $\frac{1}{x}\,\frac{d}{dx}\left(x\,\frac{dy}{dx}\right) + \left(1 - \frac{\nu^2}{x^2}\right)y = 0$

(An extensive listing of other equations satisfied by Bessel functions is given in Reference 2.)

6.2 General Solutions


Assuming that A and B are arbitrary constants, the general solutions to the above equations are:

$y = A\,J_\nu(x) + B\,J_{-\nu}(x) \qquad (\nu \text{ non-integral})$

$y = A\,J_n(x) + B\,N_n(x) \qquad (n \text{ integral})$

where $J_n(x)$ are the Bessel functions of the first kind of order $n$ and $N_n(x)$ are the Neumann¹ functions, the Bessel functions of the second kind. The Neumann functions are also
frequently represented by the symbol $Y_n(x)$. For non-integral $\nu$,

$N_\nu(x) = \frac{J_\nu(x)\cos(\nu\pi) - J_{-\nu}(x)}{\sin(\nu\pi)}$

¹In Reference 4, these are called Weber functions.

For $\nu = n$, $n$ an integer, the above expression reduces to²

$N_n(x) = \frac{2}{\pi}\,J_n(x)\log\frac{x}{2} - \frac{1}{\pi}\sum_{r=0}^{\infty}(-1)^r\,\frac{(x/2)^{n+2r}}{r!\,(n+r)!}\left[F(r) + F(n+r)\right] - \frac{1}{\pi}\sum_{r=0}^{n-1}\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{2r-n}$

where $F(r) = \sum_{s=1}^{r}\frac{1}{s}$. A third function which sometimes finds use is the Hankel function,
or Bessel function of the third kind. There are two such functions, defined by

$H_\nu^{(1)}(x) = J_\nu(x) + i\,N_\nu(x) \qquad (\nu \text{ unrestricted})$

$H_\nu^{(2)}(x) = J_\nu(x) - i\,N_\nu(x)$

Then, for integral $n$, we see that a solution to Bessel’s equation will be

$y = A_1\,H_n^{(1)}(x) + A_2\,H_n^{(2)}(x)$

where $A_1$ and $A_2$ are arbitrary constants that may be complex. These functions bear
the same relation to the Bessel functions $J_\nu(x)$ and $N_\nu(x)$ as the functions $e^{\pm i\nu x}$ bear to
$\cos\nu x$ and $\sin\nu x$. They satisfy the same differential equation and recursion relations as
$J_\nu(x)$. Their importance results from the fact that they alone vanish for an infinite complex
argument, viz. $H^{(1)}$ if the imaginary part of the argument is positive, $H^{(2)}$ if it is negative;
i.e., $\lim_{r\to\infty} H^{(1)}(re^{i\theta}) = 0$ and $\lim_{r\to\infty} H^{(2)}(re^{-i\theta}) = 0$ for $0 \le \theta \le \pi$.
From the above equations, we can also write

$J_\nu(x) = \frac{1}{2}\left[H_\nu^{(2)}(x) + H_\nu^{(1)}(x)\right] \qquad (\nu \text{ unrestricted})$

$N_\nu(x) = \frac{i}{2}\left[H_\nu^{(2)}(x) - H_\nu^{(1)}(x)\right]$

6.3 Series Representation


For Bessel functions of the first kind of order $n$, a series representation is given as follows:

$J_n(x) = \sum_{r=0}^{\infty}\frac{(-1)^r\,x^{2r+n}}{2^{2r+n}\,r!\;\Gamma(1+n+r)}$

²This is shown in Reference 8, pg. 577.
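For example, for $n = 0$ the series gives

$J_0(x) = 1 - \frac{x^2}{4} + \frac{x^4}{64} - \frac{x^6}{2304} + \cdots,$

from which $J_0(0) = 1$; all higher-order $J_n$ vanish at $x = 0$.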

6.4 Properties of Bessel Functions

1. $J_{-n}(x) = (-1)^n\,J_n(x)$

2. $J_n(-x) = (-1)^n\,J_n(x)$

3. Bounds on $J_n(x)$ for integral $n \ge 0$ and real $x \ge 0$:
a. $|J_n(x)| \le 1$
b. $\left|\frac{d^k}{dx^k}J_n(x)\right| \le 1 \qquad k = 1, 2, \dots$

4. Limits of $J_n(x)$ for integral $n \ge 0$:
a. $\lim_{n\to\infty} J_n(x) = 0$
b. $\lim_{x\to\infty} J_n(x) = 0$
c. $J_n(x) \to \frac{x^n}{2^n\,n!}$ as $x \to 0$

5. Limits of $N_n(x)$ for integral $n \ge 0$ and real $x$:
a. $\lim_{x\to\infty} N_n(x) = 0$
b. $N_n(x) \to -\frac{(n-1)!}{\pi}\left(\frac{2}{x}\right)^n$ as $x \to 0$, for $n \ge 1$
c. $N_0(x) \to -\frac{2}{\pi}\log\frac{2}{\gamma x}$ as $x \to 0$ $\qquad (\gamma = 1.781)$

6. (Graphs of $J_n(x)$ and $N_n(x)$ appeared here in the original.)

6.5 Generating Function

$\exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right] = \sum_{n=-\infty}^{\infty} J_n(x)\,t^n \qquad (n \text{ integral})$

6.6 Recursion Formulae


a. $2\,J_n'(x) = J_{n-1}(x) - J_{n+1}(x)$

b. $\frac{2n}{x}\,J_n(x) = J_{n-1}(x) + J_{n+1}(x)$

c. $x\,J_n'(x) = x\,J_{n-1}(x) - n\,J_n(x) = n\,J_n(x) - x\,J_{n+1}(x)$
6.7 Differential Formulae

a. $\frac{d}{dx}\left[x^n\,J_n(x)\right] = x^n\,J_{n-1}(x)$

b. $\frac{d}{dx}\left[x^{-n}\,J_n(x)\right] = -x^{-n}\,J_{n+1}(x)$

6.8 Orthogonality, Range $0 \le x \le c$

1. $\int_0^c x\,J_n(\lambda_{nj}x)\,J_n(\lambda_{nk}x)\,dx = \delta_{jk}\,N_{nj}^2$

where the $\lambda_{nj}$ are the positive roots of the equation $J_n(\lambda c) = 0$ and $N_{nj}^2 = \frac{c^2}{2}\,J_{n+1}^2(\lambda_{nj}c)$.

2. $\int_0^c x\,J_n(\lambda_{n\ell}x)\,J_n(\lambda_{nm}x)\,dx = \delta_{\ell m}\,M_{n\ell}^2$

where the $\lambda_{n\ell}$ are the positive roots of the equation $\lambda c\,J_n'(\lambda c) = -h\,J_n(\lambda c)$, or its equivalent
$(n+h)\,J_n(\lambda c) - \lambda c\,J_{n+1}(\lambda c) = 0$, where $h$ is some constant $> 0$ and $n = 0, 1, 2, \dots$, and

$M_{n\ell}^2 = \frac{\lambda_{n\ell}^2c^2 + h^2 - n^2}{2\lambda_{n\ell}^2}\,J_n^2(\lambda_{n\ell}c)$

6.9 Expansion in Bessel Functions



$f(x) = \sum_{j=0}^{\infty} A_j\,J_n(\lambda_{nj}x)$

where $A_j$ is given by either of the two following expressions:

a. $A_j = \frac{2}{c^2\,J_{n+1}^2(\lambda_{nj}c)}\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx$ when $J_n(\lambda_{nj}c) = 0$

b. $A_j = \frac{2\lambda_{nj}^2}{\left(\lambda_{nj}^2c^2 + h^2 - n^2\right)J_n^2(\lambda_{nj}c)}\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx$ when $\lambda c\,J_n'(\lambda c) = -h\,J_n(\lambda c)$

6.10 Bessel Integral Form
$J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos\left(n\theta - x\sin\theta\right)d\theta \qquad (n = 0, 1, 2, \dots)$
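As a quick check, at $x = 0$ the integral gives

$J_n(0) = \frac{1}{\pi}\int_0^{\pi}\cos(n\theta)\,d\theta = \begin{cases} 1, & n = 0 \\ 0, & n = 1, 2, \dots \end{cases}$

in agreement with the series of Section 6.3.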

Chapter 7

Modified Bessel Functions

7.1 Differential Equation Satisfied


$\frac{d^2y}{dx^2} + \frac{1}{x}\,\frac{dy}{dx} - \left(1 + \frac{\nu^2}{x^2}\right)y = 0$

7.2 General Solutions

$y = A\,I_\nu(x) + B\,I_{-\nu}(x) \qquad (\nu \text{ non-integral})$

$y = A\,I_n(x) + B\,K_n(x) \qquad (n \text{ integral})$

where A and B are arbitrary constants,

$I_n(x)$ = Modified Bessel function of the first kind of order n,

$K_n(x)$ = Modified Bessel function of the second kind of order n.

For non-integral $\nu$,

$K_\nu(x) = \frac{\pi}{2}\,\frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)} = \frac{\pi}{2}\,i^{\nu+1}\,H_\nu^{(1)}(ix) = \frac{\pi}{2}\,i^{-\nu-1}\,H_\nu^{(2)}(-ix).$

For integral $n$,

$K_n(x) = (-1)^{n+1}\,I_n(x)\log\frac{x}{2} + \frac{1}{2}\sum_{r=0}^{n-1}(-1)^r\,\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{2r-n} + \frac{(-1)^n}{2}\sum_{r=0}^{\infty}\frac{(x/2)^{n+2r}}{r!\,(n+r)!}\left[F(r) + F(n+r)\right]$

where

$F(r) = \sum_{s=1}^{r}\frac{1}{s}$

7.3 Relation of Modified Bessel to Bessel


For $\nu$ unrestricted,

$I_\nu(x) = i^{-\nu}\,J_\nu(ix) = \frac{i^{-\nu}}{2}\left[H_\nu^{(1)}(ix) + H_\nu^{(2)}(ix)\right]$

$\phantom{I_\nu(x)} = \frac{x^\nu}{2^\nu\,\Gamma(\nu+1)}\sum_{r=0}^{\infty}\frac{1}{r!\,(\nu+1)_r}\left(\frac{x^2}{4}\right)^{r} = \sum_{r=0}^{\infty}\frac{1}{\Gamma(r+\nu+1)\,r!}\left(\frac{x}{2}\right)^{2r+\nu}$

where $(\alpha)_r = \alpha(\alpha+1)\cdots(\alpha+r-1)$ is the rising factorial.

7.4 Properties of In

1. $I_{-n}(x) = I_n(x)$

2. $2\,I_n'(x) = I_{n-1}(x) + I_{n+1}(x)$

3. $\frac{2n}{x}\,I_n(x) = I_{n-1}(x) - I_{n+1}(x)$

4. $x\,I_n'(x) = x\,I_{n-1}(x) - n\,I_n(x)$

5. $x\,I_n'(x) = n\,I_n(x) + x\,I_{n+1}(x)$

6. $I_0'(x) = I_1(x)$
A few comments on the notation of Jahnke and Emde:
1. A general cylindrical function $Z_p(x)$ is defined on page 144 by $Z_p(x) = c_1J_p(x) + c_2N_p(x)$,
where $c_1$, $c_2$ are arbitrary (real or complex) constants. Thus $Z_p(x)$ can apply
to $J_p(x)$ by letting $c_2 = 0$, to $N_p(x)$ for $c_1 = 0$, and to $H_p(x)$ by other choices of the
constants. All formulae on the pages following use this definition of $Z_p(x)$.
2. The function $I_p(x)$ is not listed as such, but is found as $i^{-p}J_p(ix)$ on pages 224-229.
3. The function $K_p(x)$ is not listed, but $\frac{2}{\pi}K_n(x) = i^{n+1}H_n^{(1)}(ix) = i^{-n-1}H_n^{(2)}(-ix)$. The
functions $iH_0^{(1)}(ix) = -iH_0^{(2)}(-ix)$ and $-H_1^{(2)}(-ix)$ are tabulated on pages 236-243.

4. This reference is full of extremely interesting, beautiful, and helpful pictures of many
functions, almost suitable for hanging in the living room.

References
1. G.N. Watson; “A Treatise on the Theory of Bessel Functions”, 2nd Edition, Cambridge
University Press, 1944, (exhaustive treatment)

2. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).

3. R.V. Churchill; “Fourier Series and Boundary Value Problems”, McGraw Hill, 1941,
pp. 175-201. (for discussion of concept of orthogonality, see Chap. 3).

4. I.N. Sneddon; “Special Functions of Mathematical Physics and Chemistry” University


Math. Texts, 1956 (thorough discussion for practical use. However, his use of subscript
is not always clear or general.)

5. D. Jackson; “Fourier Series and Orthogonal Polynomials”, Carus Math. Monographs,


Number 6, 1941, pp. 45-68.

6. Whittaker and Watson; “Modern Analysis”, 4th Edition, Cambridge University Press,
1958. (More rigorous development.)

7. N.W. McLachlan; “Bessel Functions for Engineers”, 2nd Edition, Oxford, Clarendon
Press, 1955.

8. H. and B.S. Jeffreys; “Mathematical Physics”, 3rd Edition, Cambridge University
Press, 1955. (Compact but rigorous presentation. They use a different notation, but it
is clearly defined.)

Chapter 8

The Laplace Transformation

8.1 Introduction
8.1.1 Description
The Laplace transformation permits many relatively complicated operations upon a func-
tion, such as differentiation and integration for instance, to be replaced by simpler algebraic
operations, such as multiplication or division, upon the transform. It is analogous to the
way in which such operations as multiplication and division are replaced by simpler pro-
cesses of addition and subtraction when we work not with numbers themselves but with
their logarithms.

8.1.2 Definition
The Laplace transformation applied to a function $f(t)$ associates with $f(t)$ a function of a new
variable $s$. This function of $s$ is denoted by $L\{f(t)\}$ or, where no confusion will result,
simply by $L(f)$ or $F(s)$; the transform is defined by:

$L(f) \equiv \int_0^{\infty} f(t)\,e^{-st}\,dt$
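For example, for $f(t) = e^{-at}$ the definition gives, for $\operatorname{Re}(s) > -a$,

$L\left(e^{-at}\right) = \int_0^{\infty} e^{-at}\,e^{-st}\,dt = \left[\frac{-e^{-(s+a)t}}{s+a}\right]_0^{\infty} = \frac{1}{s+a},$

the fourth entry of the table in Section 8.4.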

8.1.3 Existence Conditions


For a Laplace transform of $f(t)$ to exist and for $f(t)$ to be recoverable from its
transform, it is sufficient that $f(t)$ be of exponential order, i.e., that there should exist a
constant $a$ such that the product $e^{-at}|f(t)|$ is bounded for all values of $t$ greater than some
finite number $T$; and that $f(t)$ should be at least piecewise continuous over every finite
interval $0 \le t \le T$. More formally,

$\exists\, M, a, T : \quad e^{-at}\,|f(t)| \le M \quad \forall\, t > T.$

These conditions are usually met by functions occurring in physical problems. If a number
$a$ exists such that $e^{-at}|f(t)|$ is bounded, then $f$ is said to be of exponential order $a$.

8.1.4 Analyticity
If $f(t)$ is piecewise continuous and of exponential order $a$, the transform of $f(t)$, i.e., $F(s)$,
is an analytic function of $s$ for $\operatorname{Re}(s) > a$. Also, for $\operatorname{Re}(s) > a$, $\lim_{x\to\infty}F(s) = 0$
and $\lim_{|y|\to\infty}F(s) = 0$, where $s = x + iy$.

8.1.5 Theorems
Theorem I (Linearity)
The Laplace transform of a sum of functions is the sum of the transforms of the individual
functions:

$L(f+g) = L(f) + L(g)$

Theorem II (Linearity)
The Laplace transform of a constant times a function is the constant times the transform
of the function.
L(cf ) = cL(f )

Theorem III (Basic Operational Property)


If $f(t)$ is a continuous function of exponential order whose derivative is at least piecewise
continuous over every finite interval $0 \le t \le T$, and $\lim_{t\to 0^+}f(t) = f(0^+)$, then the Laplace
transform of the derivative of $f(t)$ is given by

$L(f') = s\,L(f) - f(0^+)$

and

$L(f'') = s^2\,L(f) - s\,f(0^+) - f'(0^+).$

The latter, of course, requires an extension of the continuity of $f(t)$ and its derivatives to
include $f''(t)$, and may be formally shown by partial integration. More generally, if $f(t)$ and
its first $n-1$ derivatives are continuous and $\frac{d^nf}{dt^n}$ is piecewise continuous, then

$L\left(\frac{d^nf}{dt^n}\right) = s^n\,L(f) - s^{n-1}\,f(0^+) - s^{n-2}\,f'(0^+) - \dots - f^{(n-1)}(0^+)$
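For example, applied to $f(t) = t^2$ (with $f(0^+) = f'(0^+) = 0$ and $f''(t) = 2$), the second formula gives

$L(2) = s^2\,L(t^2), \qquad \text{whence} \qquad L(t^2) = \frac{2}{s^3}.$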

Theorem IV (Transforms of Integrals)

If $f(t)$ is of exponential order and at least piecewise continuous, then the transform of
$\int_a^t f(t)\,dt$ is given by

$L\left[\int_a^t f(t)\,dt\right] = \frac{1}{s}\,L(f) + \frac{1}{s}\int_a^0 f(t)\,dt$

8.1.6 Further Properties


Below, let us assume all functions of the variable t are piecewise continuous, 0 ≤ t ≤ T ,
and of exponential order as t approaches infinity. Then the following theorems hold:

Theorem V

$L\left[e^{at}\,f(t)\right] = F(s-a)$

Theorem VI

If

$f_b(t) = \begin{cases} f(t-b), & t \ge b \\ 0, & t < b \end{cases}$

then

$L\left[f_b(t)\right] = e^{-bs}\,F(s)$

Theorem VII (Convolution)

$L\left[\int_0^t f(t-\tau)\,g(\tau)\,d\tau\right] = F(s)\,G(s) = L\left[\int_0^t g(t-\tau)\,f(\tau)\,d\tau\right]$

Theorem VIII (Derivatives of Transforms)

$L\left[t\,f(t)\right] = -\frac{dF(s)}{ds}$

More generally,

$L\left[t^n\,f(t)\right] = (-1)^n\,\frac{d^nF}{ds^n}$

8.2 Examples
8.2.1 Solving Simultaneous Equations
The problem is to solve for y from the simultaneous equations:
$y' + y + 3\int_0^t z\,dt = \cos t + 3\sin t$

$2y' + 3z' + 6z = 0 \qquad y_0 = -3,\ z_0 = 2$
Taking the Laplace transform of each equation, we have:

$L(y') + L(y) + 3\,L\left[\int_0^t z\,dt\right] = L(\cos t) + 3\,L(\sin t)$

$2\,L(y') + 3\,L(z') + 6\,L(z) = 0$
This system of equations can be reduced as follows by using the theorems and definitions
from the previous sections:

$\left[s\,L(y) + 3\right] + L(y) + \frac{3}{s}\,L(z) = \frac{s}{s^2+1} + \frac{3}{s^2+1}$

$2\left[s\,L(y) + 3\right] + 3\left[s\,L(z) - 2\right] + 6\,L(z) = 0$

Collecting the terms and transposing, we have

$(s+1)\,L(y) + \frac{3}{s}\,L(z) = \frac{s+3}{s^2+1} - 3$

$2s\,L(y) + 3(s+2)\,L(z) = 0$
The two original integro-differential equations have thus been reduced to two linear algebraic
equations in $L(y)$ and $L(z)$. Applying Cramer’s rule and solving for $L(y)$, since it is $y$ which
we want:

$L(y) = \frac{\begin{vmatrix} \frac{s+3}{s^2+1} - 3 & \frac{3}{s} \\ 0 & 3(s+2) \end{vmatrix}}{\begin{vmatrix} s+1 & \frac{3}{s} \\ 2s & 3(s+2) \end{vmatrix}} = \frac{3(s+2)\left(\frac{s+3}{s^2+1} - 3\right)}{3s(s+3)}$

where $3s(s+3)$ is the determinant of the matrix in the denominator and $3(s+2)\left(\frac{s+3}{s^2+1} - 3\right)$
is the determinant of the matrix in the numerator. $L(y)$ can also be written as

$L(y) = \frac{s+2}{s(s^2+1)} - 3\,\frac{s+2}{s(s+3)}.$
Now applying the method of partial fractions,

$L(y) = \left[\frac{2}{s} + \frac{1-2s}{s^2+1}\right] - \left[\frac{1}{s+3} + \frac{2}{s}\right] = -2\,\frac{s}{s^2+1} + \frac{1}{s^2+1} - \frac{1}{s+3}$

And, finding the inverse in the table of transforms (tables relating functions of $s$ to the
corresponding functions of $t$, found in Section 8.4 of these notes),

$y = -2\cos t + \sin t - e^{-3t} \qquad (t > 0)$
It should be noted that one of the inherent characteristics of solving differential equations
by the use of Laplace transforms is that the initial conditions are included in the solution.

8.2.2 Electric Circuit Example


Since Laplace transforms are widely used in the determination of the transient response
of electric circuits, suppose a resistor R, inductor L, DC voltage source E, and an open switch
are all connected in series. At time t=0, the switch closes. Find the equation of current flow
after the switch is closed.

1. Kirchhoff’s voltage law:

$E = iR + L\,\frac{di}{dt}$

2. Laplace transformation:

$\frac{E}{s} = I(s)\,R + Ls\,I(s) - L\,i(0^+)$

At $t = 0$, $i = 0$, so $i(0^+) = 0$.

3. Solving for $I(s)$:

$I(s) = \frac{E}{s(R + Ls)}$

4. Taking the inverse:

$i(t) = \frac{E}{R}\left(1 - e^{-\frac{Rt}{L}}\right)$
8.2.3 Transfer Functions
For certain control functions, and for representing the dynamic behavior of various devices
such as reactors, heat exchangers, etc., it is advantageous to use a “transfer function” because
of the convenience in manipulation it provides. The transfer functions of many elements of
a system, when strung together in a block diagram, represent a convenient way of writing
complicated system equations. The transfer function of a system may be defined as the ratio
of the output to the input of the system in transform (s) space. The conditions for using
transfer functions are:

1. Initial condition operator =0.

2. No loading between transfer functions.

3. Transfer function satisfies existence conditions for Laplace transformation.

4. Linear system.

Example of Transfer Function


Consider an input voltage source $e_i$ in series with a resistor $R$ and an inductor with
inductance $L$. Suppose the potential drop across the inductor is $e_0$. The objective is to find
the transfer function $\frac{E_0}{E_i}(s)$.

1. Equations:

$e_i = Ri + L\,\frac{di}{dt}$

$e_0 = L\,\frac{di}{dt}$

2. Transforms:

$E_i(s) = R\,I(s) + sL\,I(s)$

$E_0(s) = sL\,I(s)$

3. Solving for the transfer function:

$\frac{E_0}{E_i}(s) = \frac{sL\,I(s)}{(R + sL)\,I(s)} = \frac{sL}{R + sL} = \frac{s\tau}{1 + s\tau}$

(where $\tau = \frac{L}{R}$ is the circuit’s time constant.)

8.3 Inverse Transformations
8.3.1 Heaviside Methods
When solving equations by the Laplace transform technique, it is frequently the most
difficult part of the procedure to invert the transformed solution for F(s) into the desired
function f(t). A simple way of making this inversion, but unfortunately a method only
applicable to special cases, is to reduce the answer to a number of simple expressions by
using partial fractions, and then applying the Heaviside theorems as outlined below:

Theorem I
If $y(t) = L^{-1}\left[\frac{p(s)}{q(s)}\right]$, where $p(s)$ and $q(s)$ are polynomials, and the order of $q(s)$ is greater
than the order of $p(s)$, then the term in $y(t)$ corresponding to an unrepeated linear factor
$(s-a)$ of $q(s)$ is $\frac{p(a)}{q'(a)}\,e^{at}$ or $\frac{p(a)}{Q(a)}\,e^{at}$, where $Q(s)$ is the product of all factors of $q(s)$ except
$(s-a)$.

Example
If $L(f(t)) = \frac{s^2+2}{s(s+1)(s+2)}$, then what is $f(t)$?

a. Roots of the denominator are $s = 0$, $s = -1$, $s = -2$.

b. $p(s) = s^2 + 2$

c. $q(s) = s^3 + 3s^2 + 2s$; $\quad q'(s) = 3s^2 + 6s + 2$

d. $p(0) = 2$, $p(-1) = 3$, $p(-2) = 6$; $\quad q'(0) = 2$, $q'(-1) = -1$, $q'(-2) = 2$

e. $f(t) = \frac{2}{2}\,e^{0t} + \frac{3}{-1}\,e^{-t} + \frac{6}{2}\,e^{-2t} = 1 - 3e^{-t} + 3e^{-2t}$

Theorem II
If $y(t) = L^{-1}\left[\frac{p(s)}{q(s)}\right]$, where $p(s)$ and $q(s)$ are polynomials and the order of $q(s)$ is greater
than the order of $p(s)$, then the terms in $y(t)$ corresponding to the repeated linear factor
$(s-a)^r$ of $q(s)$ are:

$e^{at}\left[\frac{\varphi^{(r-1)}(a)}{(r-1)!} + \frac{\varphi^{(r-2)}(a)}{(r-2)!}\,\frac{t}{1!} + \dots + \frac{\varphi'(a)\,t^{r-2}}{1!\,(r-2)!} + \varphi(a)\,\frac{t^{r-1}}{(r-1)!}\right],$

where $\varphi(s)$ is the quotient of $p(s)$ and all the factors of $q(s)$ except $(s-a)^r$.

Example
If $L(f(t)) = \frac{s+3}{(s+2)^2(s+1)}$, then what is $f(t)$?

a. $\varphi(s) = \frac{s+3}{s+1}$; $\quad \varphi'(s) = \frac{(s+1) - (s+3)}{(s+1)^2} = \frac{-2}{(s+1)^2}$

b. $\varphi(-2) = -1$; $\quad \varphi'(-2) = -2$

The terms in $f(t)$ corresponding to $(s+2)^2$ are:

$e^{-2t}\left[\frac{-2}{1!} + \frac{(-1)\,t}{0!\,1!}\right] = -e^{-2t}\,(2 + t).$

Then, as in the example from Theorem I:

$p(s) = s + 3; \qquad q(s) = s^3 + 5s^2 + 8s + 4; \qquad q'(s) = 3s^2 + 10s + 8$

$p(-1) = 2, \qquad q'(-1) = 1.$

Thus,

$f(t) = -e^{-2t}\,(2+t) + 2e^{-t}$

Theorem III
If $y(t) = L^{-1}\left[\frac{p(s)}{q(s)}\right]$, where $p(s)$ and $q(s)$ are polynomials and the order of $q(s)$ is greater
than the order of $p(s)$, then the terms in $y(t)$ corresponding to an unrepeated quadratic
factor $\left[(s+a)^2 + b^2\right]$ of $q(s)$ are $\frac{e^{-at}}{b}\left(\varphi_i\cos bt + \varphi_r\sin bt\right)$, where $\varphi_r$ and $\varphi_i$ are respectively
the real and imaginary parts of $\varphi(-a+ib)$, and $\varphi(s)$ is the quotient of $p(s)$ and all the factors
of $q(s)$ except $\left[(s+a)^2 + b^2\right]$.


Example
If $L(f(t)) = \frac{s}{(s+2)^2(s^2+2s+2)}$, then what is $f(t)$?

a. Considering the linear factor as in the example of Theorem II,

$\varphi(s) = \frac{s}{s^2+2s+2}; \qquad \varphi'(s) = \frac{-s^2+2}{(s^2+2s+2)^2}$

$\varphi(-2) = -1; \qquad \varphi'(-2) = -\frac{1}{2}$

b. Considering the quadratic factor,

$s^2 + 2s + 2 = (s+1)^2 + 1^2$

$\varphi(s) = \frac{s}{(s+2)^2}.$

Therefore,

$\varphi(-a+ib) = \varphi(-1+i) = \frac{-1+i}{\left[(-1+i)+2\right]^2} = \frac{-1+i}{(1+i)^2} = \frac{-1+i}{2i} = \frac{1}{2} + \frac{i}{2}.$

So $\varphi_r = \varphi_i = \frac{1}{2}$, and the terms in $f(t)$ corresponding to $(s^2+2s+2)$ are $\frac{e^{-t}(\cos t + \sin t)}{2}$. By
adding the two partial inverses, we get the following answer:

$f(t) = -\frac{(1+2t)\,e^{-2t}}{2} + \frac{e^{-t}(\cos t + \sin t)}{2}$

8.3.2 The Inversion Integral


When the function cannot be reduced to a form amenable to inversion by tables of
transforms or Heaviside methods, there remains a most powerful method for the evaluation
of inverse transformations. The inversion is given by an integral in the complex s-plane,

$f(t) = \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} e^{st}\,F(s)\,ds$

where $\gamma$ is some real number chosen such that $F(s)$ is analytic (see Appendix A) for $\operatorname{Re}(s) \ge \gamma$,
and the Cauchy principal value of the integral is to be taken, i.e.

$f(t) = \frac{1}{2\pi i}\,\lim_{B\to\infty}\int_{\gamma-iB}^{\gamma+iB} e^{st}\,F(s)\,ds$

Let us illustrate the formal origin of the inversion integral in the following way. In the
complex plane, let $\varphi(z)$ be a function of $z$, analytic on the line $x = \gamma$ and in the entire half
plane $R$ to the right of this line. Moreover, let $|\varphi(z)|$ approach zero uniformly as $z$ becomes
infinite through this half plane. Then if $s_0$ is any point in the half plane $R$, we can choose a
semi-circular contour $c$, composed of the line segment $c_1$ and the arc $c_2$, and apply Cauchy’s
integral formula (see Appendix B):

$\varphi(s_0) = \frac{1}{2\pi i}\oint_c \frac{\varphi(z)}{z-s_0}\,dz$

Here, $\varphi(z)$ is analytic within and on the boundary $c$ of a simply connected region $R$, and $s_0$
is any point in the interior of $R$ (integration around $c$ being in the positive sense).
Thus, Cauchy’s integral formula yields

$\varphi(s) = \frac{1}{2\pi i}\oint_c \frac{\varphi(z)}{z-s}\,dz = \frac{1}{2\pi i}\int_{c_2}\frac{\varphi(z)}{z-s}\,dz + \frac{1}{2\pi i}\int_{\gamma+ib}^{\gamma-ib}\frac{\varphi(z)}{z-s}\,dz$
Now, for values of z on the path of integration c₂, and for b sufficiently large,

|z − s| ≥ b − |s − γ|,

so that

|∫_{c₂} φ(z)/(z − s) dz| ≤ ∫_{c₂} |φ(z)|/|z − s| |dz| ≤ [M/(b − |s − γ|)] ∫_{c₂} |dz| = πbM/(b − |s − γ|),

where M is the maximum value of |φ(z)| on c₂. As b → ∞, the fraction πb/(b − |s − γ|) → π,
while M approaches zero. Hence,

lim_{b→∞} ∫_{c₂} φ(z)/(z − s) dz = 0,

and the contour integral reduces to

φ(s) = lim_{b→∞} (1/2πi) ∫_{γ+ib}^{γ−ib} φ(z)/(z − s) dz = (1/2πi) ∫_{γ−i∞}^{γ+i∞} φ(z)/(s − z) dz,

where reversing the limits of integration has absorbed the sign of (z − s). Now taking the
inverse transform of the above equation,

f(t) = L⁻¹{φ(s)} = L⁻¹{ (1/2πi) ∫_{γ−i∞}^{γ+i∞} φ(z)/(s − z) dz }
     = (1/2πi) ∫_{γ−i∞}^{γ+i∞} L⁻¹[φ(z)/(s − z)] dz
     = (1/2πi) ∫_{γ−i∞}^{γ+i∞} φ(z) L⁻¹[1/(s − z)] dz
     = (1/2πi) ∫_{γ−i∞}^{γ+i∞} φ(z) e^{tz} dz,

since, from our table of transforms,

L⁻¹[1/(s − z)] = e^{zt}.

Our final equation, after switching from z to s as the dummy variable in the last integral, is
just the inversion integral which we set out to establish:

L⁻¹{φ(s)} = f(t) = (1/2πi) ∫_{γ−i∞}^{γ+i∞} φ(s) e^{st} ds.
At this point, it would be advantageous to know how to evaluate the integral on the right.
According to the residue theorem, the integral of e^{st}φ(s) around a path enclosing the isolated
singular points s₁, s₂, ..., s_n of e^{st}φ(s) has the value

2πi [ρ₁(t) + ρ₂(t) + ... + ρ_n(t)],

where ρn (t) is the residue of est φ(s) at s=sn . For discussion of residues, see Appendix C; for
singular points, Appendix D.
Let the path of integration be made up of the line segment from γ − ib to γ + ib and a closing
contour c₃. Then,

(1/2πi) ∫_{γ−ib}^{γ+ib} φ(s)e^{st} ds + (1/2πi) ∫_{c₃} φ(s)e^{st} ds = Σ_{n=1}^{N} ρ_n(t).

If the second integral, around c₃, vanishes as b → ∞, as often happens, we are led to the
immediate result that

L⁻¹{φ(s)} = f(t) = Σ_{n=1}^{N} ρ_n(t).

Note that in the formal derivation of the inversion formula, we assumed that φ(s) (and
therefore e^{st}φ(s)) is analytic for Re(s) ≥ γ, and that lim_{s→∞} |φ(s)| = 0 in that half-plane. In
our discussion of the residue form of the inversion, we work in the left half-plane. This is
because Laplace transforms have the property that they are analytic in a right half-plane,
and that in that half-plane, lim_{s→∞} |φ(s)| = 0.
Questions of the validity of the above procedures, alterations of contour, and applications
to problems are not dealt with here, as they are presented in detail in the references.
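As a concrete illustration of the residue form, here is a minimal Python sketch (sympy assumed available) that inverts F(s) = 1/[s(s + 1)] by summing the residues of e^{st}F(s) at its poles s = 0 and s = −1.

```python
from sympy import symbols, exp, residue, simplify

s, t = symbols('s t', positive=True)
F = 1 / (s * (s + 1))

# Residues of e^{st} F(s) at the isolated singular points s = 0 and s = -1
rho = [residue(exp(s*t) * F, s, s0) for s0 in (0, -1)]

f = simplify(sum(rho))
print(f)   # 1 - exp(-t), agreeing with the table entry for 1/[(s+a)(s+b)]
```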

8.4 Tables of Transforms

F(s)  →  f(t)

1  →  unit impulse at t = 0, δ(t)

s  →  unit doublet impulse at t = 0, δ′(t)

1/s  →  unit step at t = 0, u(t)

1/(s + a)  →  e^{−at}

1/[(s + a)(s + b)]  →  (e^{−at} − e^{−bt})/(b − a)

(s + c)/[(s + a)(s + b)]  →  [(c − a)e^{−at} − (c − b)e^{−bt}]/(b − a)

(s + c)/[s(s + a)(s + b)]  →  c/(ab) + [(c − a)/(a(a − b))] e^{−at} + [(c − b)/(b(b − a))] e^{−bt}

1/[(s + a)(s + b)(s + c)]  →  e^{−at}/[(c − a)(b − a)] + e^{−bt}/[(a − b)(c − b)] + e^{−ct}/[(a − c)(b − c)]

(s + d)/[(s + a)(s + b)(s + c)]  →  (d − a)e^{−at}/[(c − a)(b − a)] + (d − b)e^{−bt}/[(a − b)(c − b)] + (d − c)e^{−ct}/[(a − c)(b − c)]

(s² + es + d)/[(s + a)(s + b)(s + c)]  →  (a² − ea + d)e^{−at}/[(c − a)(b − a)] + (b² − eb + d)e^{−bt}/[(a − b)(c − b)] + (c² − ec + d)e^{−ct}/[(a − c)(b − c)]

1/(s² + b²)  →  (sin bt)/b

(s + d)/(s² + b²)  →  [√(d² + b²)/b] sin(bt + φ),   φ = tan⁻¹(b/d)

s/(s² + b²)  →  cos bt

1/[(s + a)² + b²]  →  (e^{−at}/b) sin bt

(s + a)/[(s + a)² + b²]  →  e^{−at} cos bt

(s + d)/[(s + a)² + b²]  →  [e^{−at} √((d − a)² + b²)/b] sin(bt + φ),   φ = tan⁻¹[b/(d − a)]

1/{s[(s + a)² + b²]}  →  1/b₀² + [e^{−at}/(b₀b)] sin(bt − φ),   φ = tan⁻¹[b/(−a)],   b₀² = a² + b²

(s + d)/{s[(s + a)² + b²]}  →  d/b₀² + [e^{−at} √((d − a)² + b²)/(b₀b)] sin(bt + φ),   φ = tan⁻¹[b/(d − a)] − tan⁻¹[b/(−a)],   b₀² = a² + b²

1/(s² − b²)  →  (sinh bt)/b

s/(s² − b²)  →  cosh bt

1/sⁿ  →  t^{n−1}/(n − 1)!,   n ∈ ℕ

1/s^ν  →  t^{ν−1}/Γ(ν),   ν > 0

1/[s²(s + a)]  →  (e^{−at} + at − 1)/a²

(s + d)/[s²(s + a)]  →  [(d − a)/a²] e^{−at} + (d/a)t + (a − d)/a²

(s + d)/(s + a)²  →  [(d − a)t + 1] e^{−at}

1/[s(s + a)²]  →  [1 − (at + 1)e^{−at}]/a²

(s + d)/[s(s + a)²]  →  d/a² + [((a − d)/a)t − d/a²] e^{−at}

(s² + bs + d)/[s(s + a)²]  →  d/a² + [((ba − a² − d)/a)t + (a² − d)/a²] e^{−at}

1/[s²(s² + b²)]  →  t/b² − (sin bt)/b³

1/[s³(s² + b²)]  →  (cos bt − 1)/b⁴ + t²/(2b²)

1/[s²(s² − b²)]  →  (sinh bt)/b³ − t/b²

1/[s³(s² − b²)]  →  (cosh bt − 1)/b⁴ − t²/(2b²)

1/(s² + b²)²  →  [sin bt − bt cos bt]/(2b³)

s/(s² + b²)²  →  (t sin bt)/(2b)

s²/(s² + b²)²  →  [sin bt + bt cos bt]/(2b)

(s² − b²)/(s² + b²)²  →  t cos bt

1/[(s + a)² + b²]²  →  e^{−at}[sin bt − bt cos bt]/(2b³)

(s + a)/[(s + a)² + b²]²  →  (t e^{−at} sin bt)/(2b)

[(s + a)² − b²]/[(s + a)² + b²]²  →  t e^{−at} cos bt

e^{−t₁s}/s²  →  (t − t₁) u(t − t₁)

(t₁s + 1)e^{−t₁s}/s²  →  t u(t − t₁)

(t₁²s² + 2t₁s + 2)e^{−t₁s}/s³  →  t² u(t − t₁)
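A table entry can also be spot-checked numerically. The sketch below assumes mpmath is available, with its `invertlaplace` routine for numerical Bromwich inversion; it inverts F(s) = 1/(s² + b²) at a few points and compares against (sin bt)/b.

```python
import mpmath as mp

b = 2.0
F = lambda s: 1 / (s**2 + b**2)

for t in (0.5, 1.0, 2.0):
    numeric = mp.invertlaplace(F, t, method='talbot')  # numerical contour inversion
    exact = mp.sin(b*t) / b
    print(t, numeric, exact)
```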

8.4.1 Analyticity
Let ω be a single-valued complex function of z such that ω = f(z) = u(x, y) + iv(x, y),
where u and v are real functions. The definition of the limit of f(z) as z approaches z₀ and
the theorems on limits of sums, products, and quotients correspond to those in the theory of
functions of a real variable. The neighborhoods involved are now two-dimensional, however,
and the condition

lim_{z→z₀} f(z) = u₀ + iv₀

is satisfied if and only if the two-dimensional limits of the real functions u(x, y) and v(x, y)
as x → x₀, y → y₀ have the values u₀ and v₀ respectively. Also, f(z) is continuous at
z = z₀ if and only if u(x, y) and v(x, y) are both continuous at (x₀, y₀).
The derivative of ω at a point z is

dω/dz = f′(z) = lim_{∆z→0} ∆ω/∆z = lim_{∆z→0} [f(z + ∆z) − f(z)]/∆z,

provided that this limit exists (it must be independent of direction). Now suppose one
chooses a path on which ∆y = 0, so that ∆z = ∆x. Then, since ∆ω = ∆u + i∆v,

dω/dz = lim_{∆x→0} (∆u/∆x + i ∆v/∆x) = ∂u/∂x + i ∂v/∂x.

The case when ∆x = 0, so that ∆z = i∆y, is similar:

dω/dz = lim_{∆y→0} (∆u + i∆v)/(i∆y) = ∂v/∂y − i ∂u/∂y.

Equating the real and imaginary parts of the above expressions, since we insist that the
derivative be independent of direction, we get

∂u/∂x = ∂v/∂y;   ∂v/∂x = −∂u/∂y.

These are known as the "Cauchy-Riemann conditions".
Now, the definition of analyticity is that "a function f(z) is said to be analytic at a
point z₀ if its derivative f′(z) exists at every point of some neighborhood of z₀". And it is
necessary and sufficient that f(z) = u + iv satisfy the Cauchy-Riemann conditions in order
for the function to have a derivative at the point z.
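As a quick illustration, the following Python sketch (sympy assumed available) verifies the Cauchy-Riemann conditions for f(z) = z², whose real and imaginary parts are u = x² − y² and v = 2xy.

```python
from sympy import symbols, I, re, im, expand, diff

x, y = symbols('x y', real=True)
f = expand((x + I*y)**2)         # f(z) = z^2
u, v = re(f), im(f)              # u = x^2 - y^2, v = 2xy

print(diff(u, x) - diff(v, y))   # 0  ->  u_x = v_y
print(diff(v, x) + diff(u, y))   # 0  ->  v_x = -u_y
```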

8.4.2 Cauchy’s Integral Formula


Theorem I
If f(z) is analytic at all points within and on a closed curve c, then ∮_c f(z) dz = 0.

Proof:

∮_c f(z) dz = ∮_c (u + iv)(dx + i dy) = ∮_c (u dx − v dy) + i ∮_c (v dx + u dy).

Applying Green's lemma to each integral,

∮_c f(z) dz = ∬_R (−∂v/∂x − ∂u/∂y) dx dy + i ∬_R (∂u/∂x − ∂v/∂y) dx dy.

Due to analyticity (the Cauchy-Riemann conditions), the integrands on the right vanish
identically, giving

∮_c f(z) dz = 0.   QED

Theorem II
If f(z) is analytic within and on the boundary c of a simply connected region R, and if z₀
is any point in the interior of R, then

f(z₀) = (1/2πi) ∮_c f(z)/(z − z₀) dz,

where the integration around c is in the positive sense.

Proof:

Let c₀ be a circle with center at z₀ whose radius ρ is sufficiently small that c₀ lies
entirely within R. The function f(z) is analytic everywhere within R; hence f(z)/(z − z₀)
is analytic everywhere within R except at z = z₀. By the "principle of deformation of
contours" (see any complex variable book), we have that

∮_c f(z)/(z − z₀) dz = ∮_{c₀} f(z)/(z − z₀) dz = ∮_{c₀} [f(z₀) + f(z) − f(z₀)]/(z − z₀) dz
                    = f(z₀) ∮_{c₀} dz/(z − z₀) + ∮_{c₀} [f(z) − f(z₀)]/(z − z₀) dz.

Considering the first integral, ∮_{c₀} dz/(z − z₀), and letting z − z₀ = ρe^{iθ}, dz = ρie^{iθ} dθ,
we get the following:

∫₀^{2π} ρie^{iθ}/(ρe^{iθ}) dθ = i ∫₀^{2π} dθ = 2πi.

With that, we may also observe the following:

1. |∮_{c₀} [f(z) − f(z₀)]/(z − z₀) dz| ≤ ∮_{c₀} |f(z) − f(z₀)|/|z − z₀| |dz|

2. On c₀, |z − z₀| = ρ.

3. |f(z) − f(z₀)| < ε provided that |z − z₀| = ρ < δ (due to the continuity of f(z)).

Choosing ρ to be less than δ, we write

|∮_{c₀} [f(z) − f(z₀)]/(z − z₀) dz| < ∮_{c₀} (ε/ρ) |dz| = (ε/ρ) ∮_{c₀} |dz| = (ε/ρ) 2πρ = 2πε.

Since the integral on the left is independent of ε, yet cannot exceed 2πε, which can be made
arbitrarily small, it follows that the absolute value of the integral is zero. We therefore have
that

∮_{c₀} f(z)/(z − z₀) dz = f(z₀)·2πi + 0,   or

f(z₀) = (1/2πi) ∮_{c₀} f(z)/(z − z₀) dz.   QED
This is known as “Cauchy’s integral formula”.
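A numerical check of the formula is easy to sketch in Python (numpy assumed available): integrate f(z)/(z − z₀) around a circle by a simple quadrature and compare with f(z₀), here for f(z) = e^z.

```python
import numpy as np

def cauchy_formula(f, z0, radius=1.0, n=2000):
    """Approximate (1/2πi) ∮ f(z)/(z - z0) dz on a circle about z0."""
    theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    z = z0 + radius * np.exp(1j * theta)                   # points on the contour
    dz = 1j * radius * np.exp(1j * theta) * (2*np.pi / n)  # dz increments
    return np.sum(f(z) / (z - z0) * dz) / (2j * np.pi)

z0 = 0.3 + 0.2j
print(cauchy_formula(np.exp, z0))   # ≈ exp(z0)
print(np.exp(z0))
```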

Calculation of Residues
Laurent Series
Theorem I:

If f(z) is analytic throughout the closed region R bounded by two concentric circles c₁
and c₂, then at any point in the annular ring bounded by the circles, f(z) can be represented
by a series of the following form, where a is the common center of the circles:

f(z) = Σ_{n=−∞}^{∞} a_n (z − a)ⁿ,
a_n = (1/2πi) ∮_c f(ω)/(ω − a)^{n+1} dω.

Each integral is taken in the counter-clockwise direction around any curve c lying within the
annulus and encircling its inner boundary (for proof see any complex variable book). This
series is called the Laurent series.

Residues
The coefficient a₋₁ of the term (z − a)⁻¹ in the Laurent expansion of a function f(z)
is related to the integral of the function through the formula

a₋₁ = (1/2πi) ∮_c f(z) dz.

In particular, the coefficient of (z − a)⁻¹ in the expansion of f(z) about an "isolated singular
point" is called the "residue" of f(z) at that point.
If we consider a simple closed curve c containing in its interior a number of isolated
singularities of a function f(z), then it can be shown that ∮_c f(z) dz = 2πi [r₁ + r₂ + ··· + r_n],
where r₁, r₂, ..., r_n are the residues of f(z) at the singular points within c.

Determination of Residues
The determination of residues by the use of series expansions is often quite tedious. An
alternative procedure for a simple (first order) pole at z = a can be obtained by writing

f(z) = a₋₁/(z − a) + a₀ + a₁(z − a) + ···,

multiplying by (z − a) to get

(z − a)f(z) = a₋₁ + a₀(z − a) + a₁(z − a)² + ···,

and letting z → a to get

a₋₁ = lim_{z→a} [(z − a)f(z)].

A general formula for the residue at a pole of order m is

(m − 1)! a₋₁ = lim_{z→a} d^{m−1}/dz^{m−1} [(z − a)^m f(z)].

For ratios of polynomials, the method of residues reduces to the Heaviside method for finding
inverse Laplace transforms.
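The order-m formula is easy to exercise symbolically. A minimal sympy sketch (assumption as before) computes the residue of f(z) = e^z/(z − 1)² at its double pole z = 1, both by the limit formula and with sympy's built-in `residue`.

```python
from sympy import symbols, exp, diff, limit, factorial, residue

z = symbols('z')
f = exp(z) / (z - 1)**2
m, a = 2, 1

# (m-1)! a_{-1} = lim_{z->a} d^{m-1}/dz^{m-1} [(z-a)^m f(z)]
a_minus_1 = limit(diff((z - a)**m * f, z, m - 1), z, a) / factorial(m - 1)
print(a_minus_1)             # E (the residue is e)
print(residue(f, z, a))      # E, agreeing with the limit formula
```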

8.4.3 Regular and Singular Points


If ω = f(z) possesses a derivative at z = z₀ and at every point in some neighborhood of
z₀, then f(z) is said to be "analytic" at z = z₀, and z₀ is called a "regular point" of the
function.
If a function is analytic at some point in every neighborhood of a point z₀, but not at z₀,
then z₀ is called a "singular point" of the function.
If a function is analytic at all points except z₀ in some neighborhood of z₀, then z₀ is an
"isolated singular point".
About an isolated singular point z₀, a function always has a Laurent series representation

f(z) = ··· + A₋₂/(z − z₀)² + A₋₁/(z − z₀) + A₀ + A₁(z − z₀) + ···,

where 0 < |z − z₀| < γ₀ and γ₀ is the radius of the neighborhood in which f(z) is analytic
except at z₀. The series of negative powers of (z − z₀) is called the "principal part" of f(z)
about the isolated singular point z₀. The point z₀ is an "essential singular point" of f(z) if
the principal part has an infinite number of non-vanishing terms. It is a "pole of order m"
if A₋ₘ ≠ 0 and A₋ₙ = 0 when n > m. It is called a "simple pole" when m = 1.

8.5 References
1. Churchill; "Operational Mathematics" - A complete discourse on operational methods,
well written and presented.

2. Wylie; "Advanced Engineering Mathematics" - A concise review of the highlights of
transformation calculus, both Fourier and Laplace transforms.

3. Murphy; "Basic Automatic Control Theory" - An exposition of Laplace transforms
with many good electrical engineering examples. A fairly complete table of Laplace
transforms.

4. Erdelyi, A., et al.; "Tables of Integral Transforms", Vols. I and II, McGraw Hill, New
York (1954). A very extensive compilation of transforms of many kinds.

5. Gardner and Barnes; "Transients in Linear Systems" - Extensive tables of transforms
of ratios of polynomials.

Chapter 9

Fourier Transforms

9.1 Definitions
9.1.1 Basic Definitions
In addition to the Laplace transform there exists another commonly-used set of trans-
forms called Fourier transforms. At least five different Fourier transforms may be distin-
guished. Their definitions are as follows:

Finite Range Cosine Transform

Fc[n] = Cₙ[f] ≡ ∫₀^π f(x) cos(nx) dx   (n = 0, 1, 2, ...)

Finite Range Sine Transform

Fs[n] = Sₙ[f] ≡ ∫₀^π f(x) sin(nx) dx   (n = 1, 2, 3, ...)

Infinite Range Cosine Transform

Fc[r] = Cᵣ[f] ≡ √(2/π) ∫₀^∞ f(x) cos(rx) dx   (0 ≤ r ≤ ∞)

Infinite Range Sine Transform

Fs[r] = Sᵣ[f] ≡ √(2/π) ∫₀^∞ f(x) sin(rx) dx   (0 ≤ r ≤ ∞)

Infinite Range Exponential Transform

Fe[r] = Eᵣ[f] ≡ (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{irx} dx   (−∞ ≤ r ≤ ∞)

9.1.2 Range of Definition


In the infinite range transforms, the transform variable is continuous; in the case of the
finite range transforms, the variable takes only positive integer values or zero. Considering

the range of integration used in the definition of each transform, we see that the finite range
transforms apply to functions defined on a finite interval, the infinite range sine and cosine
transforms to functions defined on a semi-infinite interval, while the exponential transform
applies to functions defined on the infinite interval.

9.1.3 Existence Conditions


As an existence condition for all these transforms, it is customarily required that the
function be absolutely integrable over the range, i.e., that ∫ |f(x)| dx exist. Note that although,
for the derivations to follow, the more stringent conditions of continuity or sectional continuity
are imposed upon the function, absolute integrability is all that is required in the general
case.
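As a quick numerical illustration of these definitions, the sketch below (Python with scipy assumed available) computes the infinite range sine transform of f(x) = e^{−x} and compares it with the closed form √(2/π) · r/(1 + r²).

```python
import numpy as np
from scipy.integrate import quad

def sine_transform(f, r):
    # S_r[f] = sqrt(2/pi) * integral_0^inf f(x) sin(rx) dx
    val, _ = quad(lambda x: f(x) * np.sin(r * x), 0, np.inf)
    return np.sqrt(2 / np.pi) * val

f = lambda x: np.exp(-x)
for r in (0.5, 1.0, 3.0):
    print(r, sine_transform(f, r), np.sqrt(2/np.pi) * r / (1 + r**2))
```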

9.2 Fundamental Properties


9.2.1 Transforms of Derivatives
Consider the finite range cosine transform of the derivative f′ of the function f,

Cₙ[f′] = ∫₀^π f′(x) cos(nx) dx.

Integrating by parts,

Cₙ[f′] = [f(x) cos(nx)]₀^π + n ∫₀^π f(x) sin(nx) dx
       = f(π) cos(nπ) − f(0⁺) + n Sₙ[f].

Since n is an integer,

Cₙ[f′] = (−1)ⁿ f(π) − f(0⁺) + n Sₙ[f].

Consider also

Sₙ[f′] = ∫₀^π f′(x) sin(nx) dx
       = [f(x) sin(nx)]₀^π − n ∫₀^π f(x) cos(nx) dx
       = −n Cₙ[f].

Now take for f, f = g′; by iteration we get the following:

Cₙ[f′] = Cₙ[g″] = (−1)ⁿ g′(π) − g′(0⁺) + n Sₙ[g′]
                = (−1)ⁿ g′(π) − g′(0⁺) − n² Cₙ[g].

Similarly,

Sₙ[g″] = −n Cₙ[g′]
       = −n[(−1)ⁿ g(π) − g(0)] − n² Sₙ[g]
       = n[g(0) − (−1)ⁿ g(π)] − n² Sₙ[g].

Now consider the infinite range cosine transform

Cᵣ[f′] = √(2/π) ∫₀^∞ f′(x) cos(rx) dx,

and again integrating by parts, and assuming lim_{x→∞} f(x) = 0 (as is natural for an
absolutely integrable function), we get

Cᵣ[f′] = √(2/π)[−f(0)] + r √(2/π) ∫₀^∞ f(x) sin(rx) dx
       = −√(2/π) f(0) + r Sᵣ[f].

Likewise,

Sᵣ[f′] = √(2/π) ∫₀^∞ f′(x) sin(rx) dx
       = −r √(2/π) ∫₀^∞ f(x) cos(rx) dx = −r Cᵣ[f].

Iterating once, we find

Cᵣ[f″] = −√(2/π) f′(0) + r Sᵣ[f′]
       = −√(2/π) f′(0) − r² Cᵣ[f].

Similarly,

Sᵣ[f″] = −r Cᵣ[f′]
       = r √(2/π) f(0) − r² Sᵣ[f].

Finally, consider

Eᵣ[f′] = (1/√(2π)) ∫_{−∞}^{∞} f′(x) e^{irx} dx,

and assuming lim_{|x|→∞} f(x) = 0,

Eᵣ[f′] = −(ir/√(2π)) ∫_{−∞}^{∞} f(x) e^{irx} dx = −ir Eᵣ[f].

Iterating, it follows that

Eᵣ[f″] = −r² Eᵣ[f].

In each case, we have assumed continuity for f ′ and f ′′ in order to perform the indicated
parts integrations. One may proceed with the iterations, obtaining relations involving trans-
forms of higher derivatives. Further properties are derivable with similar ease, the procedure
usually involving an integration by parts.
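These operational rules are easy to confirm on a concrete function. The sympy sketch below (same assumption as earlier examples) checks Sᵣ[f′] = −r Cᵣ[f] for f(x) = e^{−x}.

```python
from sympy import symbols, exp, sin, cos, integrate, sqrt, pi, oo, simplify

x = symbols('x', positive=True)
r = symbols('r', positive=True)
f = exp(-x)

Sr_fprime = sqrt(2/pi) * integrate(f.diff(x) * sin(r*x), (x, 0, oo))
Cr_f      = sqrt(2/pi) * integrate(f * cos(r*x), (x, 0, oo))

print(simplify(Sr_fprime + r * Cr_f))   # 0, i.e. S_r[f'] = -r C_r[f]
```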

9.2.2 Relations Among Infinite Range Transforms


It is interesting to note some relations among the infinite range transforms. Recalling
Euler's identity e^{irx} = cos(rx) + i sin(rx), we find that

Eᵣ[f] = (1/√(2π)) ∫_{−∞}^{∞} f(x) cos(rx) dx + (i/√(2π)) ∫_{−∞}^{∞} f(x) sin(rx) dx
      = (1/√(2π)) ∫₀^∞ f(x) cos(rx) dx + (1/√(2π)) ∫₀^∞ f(−x) cos(rx) dx
        + (i/√(2π)) ∫₀^∞ f(x) sin(rx) dx − (i/√(2π)) ∫₀^∞ f(−x) sin(rx) dx,

on folding each integral over x = 0, or

Eᵣ[f] = ½ { Cᵣ[f(−x)] + Cᵣ[f(x)] + i Sᵣ[f(x)] − i Sᵣ[f(−x)] },

which is not very interesting except when f(x) is either even or odd on the infinite interval:
if even, i.e. f(x) = f(−x), the exponential transform reduces to the cosine transform;
if odd, i.e. f(x) = −f(−x), the exponential transform reduces to the sine transform,
with a factor of i = √(−1).

9.2.3 Transforms of Functions of Two Variables
The transforms may also be used with functions of two or more variables; for example,
if f is a function of x and y, defined for 0 ≤ x ≤ π, 0 ≤ y ≤ π, then

Sₘ[f] = ∫₀^π f(x, y) sin(mx) dx
Sₙ[f] = ∫₀^π f(x, y) sin(ny) dy
Sₘ,ₙ[f] = ∫₀^π Sₘ[f] sin(ny) dy = ∫₀^π Sₙ[f] sin(mx) dx
        = ∫₀^π ∫₀^π f(x, y) sin(mx) sin(ny) dx dy.

Furthermore,

Sₘ,ₙ[∂²f/∂x²] = m { Sₙ[f(0, y)] − (−1)ᵐ Sₙ[f(π, y)] } − m² Sₘ,ₙ[f],

so that if

f(0, y) = f(π, y) = f(x, 0) = f(x, π) = 0,

then

Sₘ,ₙ[∂²f/∂x²] = −m² Sₘ,ₙ[f]
Sₘ,ₙ[∂²f/∂x² + ∂²f/∂y²] = −(m² + n²) Sₘ,ₙ[f].

Similar formulae may be derived for Cₘ,ₙ, and extensions can be worked out in analogy
to the single-variable properties. These transforms of more than one variable amount to
transforms of transforms, obtained by taking the transform of the function with respect to
a single variable, and subsequently taking the transform of this transformed function with
respect to another variable. If the boundary conditions in the various dimensions are not
all of the same type, more than one type of transformation may be used; one fairly common
combination is the Fourier plus the Laplace transformation.

9.2.4 Fourier Exponential Transforms of Functions of Three Vari-


ables
Consider a function of three variables, f(x₁, x₂, x₃), piecewise continuous and absolutely
integrable over the infinite range with respect to each variable. We may apply the exponential
transform with respect to each variable, defining the three-times transformed function

E_k[f] = g(k₁, k₂, k₃) = (1/(2π)^{3/2}) ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{i(k₁x₁ + k₂x₂ + k₃x₃)} f(x₁, x₂, x₃) dx₁ dx₂ dx₃.

Using vector notation, this may be rewritten as follows:

E_k[f] = g(k) = (1/(2π)^{3/2}) ∫ e^{ik·x} f(x) d³x,

where k has the components k₁, k₂, k₃, x has the components x₁, x₂, x₃, d³x = dx₁ dx₂ dx₃,
and the integration is to be taken over the full range, −∞ to ∞, of each variable. The inverse
transformation gives back f(x),

f(x) = (1/(2π)^{3/2}) ∫ e^{−ik·x} g(k) d³k.

Properties: suppose E_k[f] = g(k) and E_k[F] = G(k); then

1. E_k[∇f] = −ik g(k)
2. E_k[∇ · F] = −ik · G(k)
3. E_k[∇ × F] = −ik × G(k)
4. E_k[∇²f] = −k² g(k)

From a glance at formulae 1 to 4, we see that under this transformation the vector operator
∇ operating on a function transforms into the vector −ik times the transformed function.

9.3 Summary of Fourier Transform Formulae


9.3.1 Finite Transforms
Functions defined on any finite range can be transformed into functions defined on the
interval 0 ≤ x ≤ π.

1. Definitions
a. Sₙ[y] = ∫₀^π y(x) sin(nx) dx   (n = 1, 2, ...)

b. Cₙ[y] = ∫₀^π y(x) cos(nx) dx   (n = 0, 1, ...)

2. Inversions (0 ≤ x ≤ π)

a. y(x) = (2/π) Σ_{n=1}^{∞} Sₙ[y] sin(nx)

b. y(x) = (1/π) C₀[y] + (2/π) Σ_{n=1}^{∞} Cₙ[y] cos(nx)

3. Transforms of Derivatives

a. Sₙ[y′] = −n Cₙ[y]   (n = 1, 2, ...)
b. Cₙ[y′] = n Sₙ[y] − y(0) + (−1)ⁿ y(π)   (n = 0, 1, 2, ...)
c. Sₙ[y″] = −n² Sₙ[y] + n[y(0) − (−1)ⁿ y(π)]   (n = 1, 2, ...)
d. Cₙ[y″] = −n² Cₙ[y] − y′(0) + (−1)ⁿ y′(π)   (n = 0, 1, 2, ...)

(Note that for b and c the function must be known on the boundaries, and for d the
derivative must be known on the boundaries.)

4. Transforms of Integrals

a. Sₙ[∫₀ˣ y(ξ) dξ] = (1/n) Cₙ[y] − ((−1)ⁿ/n) C₀[y]   (n = 1, 2, ...)

b. Cₙ[∫₀ˣ y(ξ) dξ] = −(1/n) Sₙ[y]   (n = 1, 2, ...)

   C₀[∫₀ˣ y(ξ) dξ] = π C₀[y] − C₀[xy]

5. Convolution Properties

a. Define the convolution of p(x), q(x) for −π ≤ x ≤ π:

   p ∗ q ≡ ∫_{−π}^{π} p(x − ξ) q(ξ) dξ = q ∗ p.

b. Transforms of Convolutions
   Define extensions of f(x), where f(x) is given for 0 ≤ x ≤ π:

   Odd extension:   f₁(−x) = −f₁(x),   f₁(x + 2π) = f₁(x)
   Even extension:  f₂(−x) = f₂(x),    f₂(x + 2π) = f₂(x)

   1. 2 Sₙ[f] Sₙ[g] = −Cₙ[f₁ ∗ g₁]
   2. 2 Sₙ[f] Cₙ[g] = Sₙ[f₁ ∗ g₂]
   3. 2 Cₙ[f] Cₙ[g] = Cₙ[f₂ ∗ g₂]

6. Derivatives of Transforms

a. (d/dn) Sₙ[y] = Cₙ[xy]

b. (d/dn) Cₙ[y] = −Sₙ[xy]

(Here the differentiated transforms must be in a form valid for n as a continuous variable
instead of only for integral n.)

9.3.2 Infinite Transforms


It must be true that ∫₀^∞ |y(x)| dx or ∫_{−∞}^{∞} |y(x)| dx exists.

1. Definitions

a. Sᵣ[y] = √(2/π) ∫₀^∞ y(x) sin(rx) dx   (r ≥ 0)

b. Cᵣ[y] = √(2/π) ∫₀^∞ y(x) cos(rx) dx   (r ≥ 0)

c. Eᵣ[y] = (1/√(2π)) ∫_{−∞}^{∞} y(x) e^{irx} dx   (−∞ ≤ r ≤ ∞)

2. Inversions

a. y(x) = √(2/π) ∫₀^∞ Sᵣ[y] sin(rx) dr = Sₓ[Sᵣ[y]]   (x > 0)

b. y(x) = √(2/π) ∫₀^∞ Cᵣ[y] cos(rx) dr = Cₓ[Cᵣ[y]]   (x > 0)

c. y(x) = (1/√(2π)) ∫_{−∞}^{∞} Eᵣ[y] e^{−irx} dr = E₋ₓ[Eᵣ[y]] = Eₓ[E₋ᵣ[y]]

(The Cauchy principal value of the integral is to be taken.)

3. Transforms of Derivatives

a. Sᵣ[y′] = −r Cᵣ[y]
b. Cᵣ[y′] = r Sᵣ[y] − √(2/π) y(0)
c. Eᵣ[y′] = −ir Eᵣ[y]
d. Sᵣ[y″] = −r² Sᵣ[y] + √(2/π) r y(0)
e. Cᵣ[y″] = −r² Cᵣ[y] − √(2/π) y′(0)
f. Eᵣ[y″] = −r² Eᵣ[y]
g. Eᵣ[dⁿy/dxⁿ] = (−ir)ⁿ Eᵣ[y]

4. Transforms of Integrals

a. Sᵣ[∫₀ˣ y(ξ) dξ] = (1/r) Cᵣ[y]

b. Cᵣ[∫₀ˣ y(ξ) dξ] = −(1/r) Sᵣ[y]

(In a and b, ∫₀ˣ y(ξ) dξ must be piecewise continuous and approach zero as x → ∞.)

c. Eᵣ[∫_cˣ y(ξ) dξ] = (i/r) Eᵣ[y]

(Here c is any lower limit, and ∫_{−∞}ˣ y(ξ) dξ must be piecewise continuous and approach
zero as x → ∞.)

d. Sᵣ[∫₀ˣ ∫₀^λ y(ξ) dξ dλ] = −(1/r²) Sᵣ[y]

e. Cᵣ[∫₀ˣ ∫₀^λ y(ξ) dξ dλ] = −(1/r²) Cᵣ[y]

f. Eᵣ[∫₀ˣ ∫₀^λ y(ξ) dξ dλ] = −(1/r²) Eᵣ[y]

5. Some Relations Between Transforms, for real y(x)

a. Cᵣ[y] + i Sᵣ[y] = Eᵣ[y],   or   Cᵣ[y] = ℜ{Eᵣ[y]},   Sᵣ[y] = ℑ{Eᵣ[y]}

b. For y(x) = y(−x), with y(x) = O(e^{−εx}), ε > 0:   Eᵣ[y] = 2L[y]
   (where the Laplace transform variable is taken as ir).

6. Convolution Properties

a. 2 Sᵣ[f] Sᵣ[g] = Cᵣ[ ∫₀^∞ g(ξ){f(x + ξ) − f₁(x − ξ)} dξ ]

b. 2 Sᵣ[f] Cᵣ[g] = Sᵣ[ ∫₀^∞ g(ξ){f(x + ξ) + f₁(x − ξ)} dξ ]
                 = Cᵣ[ ∫₀^∞ f(ξ){g₂(x − ξ) − g(x + ξ)} dξ ]

c. 2 Cᵣ[f] Cᵣ[g] = Cᵣ[ ∫₀^∞ g(ξ){f₂(x − ξ) + f(x + ξ)} dξ ]

where the extensions are defined by

y₁(−x) = −y(x);   y₂(−x) = y(x),   for all x.

d. √(2π) Eᵣ[f] Eᵣ[g] = Eᵣ[ ∫_{−∞}^{∞} f(ξ) g(x − ξ) dξ ]

7. Derivatives of Transforms

a. (d/dr) Sᵣ[y] = Cᵣ[xy]

b. (d/dr) Cᵣ[y] = −Sᵣ[xy]

c. (d/dr) Eᵣ[y] = i Eᵣ[xy]

9.4 Types of Problems to which Fourier Transforms


May be Applied
General Discussion
It is to our great advantage to have some inkling as to just which transform to use and
where to use it. We have noted that finite-range transforms are useful on functions defined
over a finite range, Fc[r] and Fs[r] are useful on functions defined over semi-infinite intervals,
and Fe[r] on functions defined over the infinite range. Nevertheless, there is still more to be
said.
First, it goes almost without saying that if it can be avoided, it is undesirable to introduce
an unknown quantity into an equation. Now, if an equation in f defined over 0 ≤ x ≤ π
contains a differential operator which one wishes to reduce, say d²f/dx², and f(π) and f(0) are
known, while f′(π) and f′(0) are not, then clearly Sₙ is the best option, because in using it
we introduce f(π) and f(0) and need not know the value of f′ at any point. We would not
use Cₙ, because f′(0) and f′(π) are unknown and could not be solved for until much later in
the work. On the other hand, if f′(0) and f′(π) are known, then one uses Cₙ for the same
reason. The situation is similar with respect to the infinite range transforms: use Fs[r] to
reduce d²f/dx² when f(0) is known, and use Fc[r] when f′(0) is known. No such question arises
with respect to Fe[r].
We have noted at the start that the functions to which the Fourier transforms are appli-
cable are usually required to be absolutely integrable. This kind of knowledge of a function
is usually evident from the physical meaning of the function, before the function itself is
known. The Laplace transform, on the other hand, merely requires that the function be of
exponential order, i.e., |f(x)| < Me^{αx} for some constants M and α.

Example
Consider the following steady-state heat conduction problem in a medium with no internal
heat generation. Suppose we have a two-dimensional slab which is semi-infinite along the y-
axis (0 ≤ y ≤ ∞) and finite along the x-axis (0 ≤ x ≤ π). The left face at x = 0 is insulated
for y > a but has a heat flux q for 0 < y < a. The right face at x = π is insulated for all
y. Furthermore, suppose that the face at y = 0 is held at temperature T0 , 0 < x < π. The
position-dependent temperature T(x, y) is modeled by Laplace's equation with boundary
conditions:

∇²T = 0
−k ∂T/∂x(0, y) = q for 0 < y < a;   = 0 for y > a
∂T/∂x(π, y) = 0
T(x, 0) = T₀

We propose to do the problem by the method of Fourier transforms, but intuitively we know
lim_{y→∞} T ≡ T̄ ≠ 0, and that therefore the transform of T does not exist. However, the
function Θ ≡ T − T̄ is such that lim_{y→∞} Θ = lim_{y→∞} T − T̄ = T̄ − T̄ = 0, and its transform
may (in fact, does) exist. Let us, therefore, substitute T = Θ + T̄ into the above problem to
obtain the following:

∇²Θ = 0
−k ∂Θ/∂x(0, y) = q for 0 < y < a;   = 0 for y > a
∂Θ/∂x(π, y) = 0
Θ(x, 0) + T̄ = T₀  ⇒  Θ(x, 0) = T₀ − T̄ ≡ Θ₀

The structure of the problem is not essentially changed, except that now Θ₀ is not known,
since T̄ is not known.
We must reduce the operators ∂²Θ/∂x² and ∂²Θ/∂y². In x, ∂Θ/∂x(0, y) and ∂Θ/∂x(π, y) are known.
Thus, a finite range Fourier cosine transform is a good option here. In y, we know that
Θ(x, 0) = Θ₀ and that lim_{y→∞} Θ = 0. Therefore, an infinite range Fourier sine transform is
a good option. We shall denote the x-transformed functions by the superscript fn and the
y-transformed functions by the superscript F. Recall that

(∂²Θ/∂x²)^{fn} = −n² Θ^{fn} + (−1)ⁿ (∂Θ/∂x)(π, y) − (∂Θ/∂x)(0, y)

and

(∂²Θ/∂y²)^F = −r² Θ^F + r Θ(x, 0).

It is irrelevant in which order the transformations are applied or inverted, although one order
may prove nicer than another. Let us transform first with respect to x:

0 = −n² Θ^{fn} + (−1)ⁿ (∂Θ/∂x)(π, y) − (∂Θ/∂x)(0, y) + d²Θ^{fn}/dy²

(∂Θ/∂x)(0, y) = −q/k for 0 < y < a;   = 0 for y > a
(∂Θ/∂x)(π, y) = 0
Θ^{fn}(y = 0) = πΘ₀ for n = 0;   = 0 for n = 1, 2, ...

Then with respect to y (Churchill, pg. 300, formula 3):

0 = −n² Θ^{fnF} + (−1)ⁿ (∂Θ/∂x)^F(π) − (∂Θ/∂x)^F(0) − r² Θ^{fnF} + r Θ^{fn}(y = 0)

(∂Θ/∂x)^F(0) = −(q/k)(1 − cos ar)/r

(see Erdélyi, pg. 63, formula 1)

(∂Θ/∂x)^F(π) = 0.

Making substitutions yields the single algebraic equation:

0 = −(n² + r²) Θ^{fnF} + (q/k)(1 − cos ar)/r,   n = 1, 2, ...
0 = −r² Θ^{f0F} + (q/k)(1 − cos ar)/r + πΘ₀ r,   n = 0.

Solving for Θ^{fnF}:

Θ^{f0F} = (q/k)(1 − cos ar)/r³ + πΘ₀/r,   n = 0
Θ^{fnF} = (q/k)(1 − cos ar)/[r(r² + n²)],   n = 1, 2, ...
We propose to invert first with respect to r, but we would run into difficulties for n = 0. Let
us, therefore, integrate the x-transformed equations directly for n = 0 to get Θ^{f0}. We have

d²Θ^{f0}/dy² + q/k = 0,   0 < y < a
d²Θ^{f0}/dy² = 0,   y > a
Θ^{f0}(y = 0) = πΘ₀
lim_{y→∞} Θ^{f0} = 0.

Integrating,

Θ^{f0} = −(q/2k)y² + C₁y + C₂,   0 < y < a
Θ^{f0} = C₃y + C₄,   y > a.

Now lim_{y→∞} Θ^{f0} = 0 ⇒ C₃ = C₄ = 0.

Also,

Θ^{f0}(y = 0) = πΘ₀ = C₂.

It is necessary to cook up another condition to get C₁. In a problem of this type, we must
require ∂Θ/∂y and Θ to be continuous; therefore ∂Θ^{f0}/∂y and Θ^{f0} are continuous. Applying
these conditions at y = a,

dΘ^{f0}/dy (a⁻) = dΘ^{f0}/dy (a⁺):   0 = −qa/k + C₁;   C₁ = qa/k
Θ^{f0}(a⁻) = Θ^{f0}(a⁺):   0 = −qa²/2k + qa²/k + πΘ₀.

Somewhat surprisingly, applying this last condition yields

Θ₀ = −qa²/(2πk).

Thus

Θ^{f0} = −(q/2k)(y − a)²,   y ≤ a
Θ^{f0} = 0,   y ≥ a.

We have Θ^{f0}. Let us now invert Θ^{fnF} to get Θ^{fn}:

Θ^{fnF} = (q/k)(1 − cos ar)/[r(n² + r²)],   n = 1, 2, ...

The inverse sine transform of 1/[r(n² + r²)] is (1/n²)(1 − e^{−ny}) (see Erdélyi, pg. 65,
formula 20). Also, a property of the Fourier sine and cosine transforms is

F⁻¹[g(r) cos(ar)] = ½[G(y + a) + G(y − a)].

It is also true that, for the sine transform, if F⁻¹[g] = G(y), then G(−y) = −G(y).
Therefore,

F⁻¹[ (1 − cos ar)/(r(r² + n²)) ]
  = (1/n²)(1 − e^{−ny}) − (1/2n²)[ (1 − e^{−n(y+a)}) + (1 − e^{−n(y−a)}) ],   y > a
  = (1/n²)(1 − e^{−ny}) − (1/2n²)[ (1 − e^{−n(y+a)}) − (1 − e^{−n(a−y)}) ],   y < a

  = (1/n²)[ −e^{−ny} + ½ e^{−ny}(e^{−na} + e^{na}) ],   y > a
  = (1/n²)[ 1 − e^{−ny} − ½ e^{−na}(e^{ny} − e^{−ny}) ],   y < a

  = (e^{−ny}/n²)(cosh na − 1),   y > a
  = (1/n²)(1 − e^{−ny} − e^{−na} sinh ny),   y < a.

Now, lacking a known inversion to invert with respect to n, we use the series form

Θ = (1/π) Θ^{f0} + (2/π) Σ_{n=1}^{∞} Θ^{fn} cos(nx)

  = −(q/2πk)(y − a)² + (2q/πk) Σ_{n=1}^{∞} [1 − e^{−ny} − e^{−na} sinh(ny)] cos(nx)/n²,   y < a

  = (2q/πk) Σ_{n=1}^{∞} e^{−ny}(cosh(na) − 1) cos(nx)/n²,   y > a,

and recall that

T = Θ + T̄
T₀ = Θ₀ + T̄ = −qa²/(2πk) + T̄
T̄ = T₀ + qa²/(2πk)
T = Θ + T₀ + qa²/(2πk).
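Since the inversion ends in a Fourier series, a truncated sum is the practical route to numbers. Below is a minimal Python sketch (numpy assumed available; the parameter values q, k, a, T₀ are illustrative only, not from the text) that evaluates the series solution for T(x, y).

```python
import numpy as np

q, k, a, T0 = 1.0, 1.0, 0.5, 0.0   # illustrative parameter values
N = 200                            # truncation order of the cosine series

def T(x, y):
    n = np.arange(1, N + 1)
    if y < a:
        theta = (-(q/(2*np.pi*k)) * (y - a)**2
                 + (2*q/(np.pi*k)) * np.sum(
                     (1 - np.exp(-n*y) - np.exp(-n*a)*np.sinh(n*y))
                     * np.cos(n*x) / n**2))
    else:
        theta = (2*q/(np.pi*k)) * np.sum(
            np.exp(-n*y) * (np.cosh(n*a) - 1) * np.cos(n*x) / n**2)
    return theta + T0 + q*a**2 / (2*np.pi*k)

print(T(1.0, 0.25), T(1.0, 2.0))
```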

9.5 Inversion of Fourier Transforms


9.5.1 Finite Range
Inversions of the finite range transforms are easily seen to be a consequence of the com-
pleteness and orthogonality of the cosines, in the case of the cosine transform, and of the
sines, in the case of the sine transform, on the interval of integration. Indeed, one sees that
the integrals defining Cₙ and Sₙ are just the Fourier coefficients for expansions of f in a cosine
series or a sine series. Thus, their inversions are given by

f(x) = Cₙ⁻¹[Cₙ] = (1/π) C₀[f] + (2/π) Σ_{n=1}^{∞} Cₙ[f] cos(nx)   (0 ≤ x ≤ π)

f(x) = Sₙ⁻¹[Sₙ] = (2/π) Σ_{n=1}^{∞} Sₙ[f] sin(nx)   (0 ≤ x ≤ π).

Two facts, though obvious, should be noted with regard to these transforms. If the
function is defined over some range other than 0 ≤ x ≤ π, say 0 ≤ x ≤ L, there arises no
difficulty, since one can define a new variable such as ξ = πx/L. This way, x = L ⇒ ξ = π,
and f(x) = g(ξ) = f(Lξ/π). If the function is extended out of the range 0 ≤ x ≤ π, the
inversion of the cosine transform is the even extension, i.e.,

Cₙ⁻¹[Cₙ](x) = Cₙ⁻¹[Cₙ](−x),

while the inversion of the sine transform is the odd extension, i.e.,

Sₙ⁻¹[Sₙ](x) = −Sₙ⁻¹[Sₙ](−x).
9.5.2 Infinite Range
Inversions of the infinite range transforms follow from the Fourier integral theorem in
various forms. The inversion of the cosine transform, for example, arises from the formula

f(x) = (2/π) ∫₀^∞ cos(rx) ∫₀^∞ f(y) cos(ry) dy dr
     = ∫₀^∞ √(2/π) cos(rx) [ √(2/π) ∫₀^∞ f(y) cos(ry) dy ] dr.

The interior integral is just what we above defined as Cᵣ[f]. Therefore,

f(x) = Cᵣ⁻¹[Cᵣ] = √(2/π) ∫₀^∞ Cᵣ[f] cos(rx) dr.

The other inversions follow immediately in the same way from other forms of the Fourier
integral theorem. It is to our great advantage to note that, with the normalization factor
√(2/π) or 1/√(2π) inserted as above, the inversion integral is just the transform of the
transform, i.e.,

f(x) = Cᵣ⁻¹[Cᵣ] = Cₓ[Cᵣ[f]].

Similar formulae for the sine and exponential transforms are

f(x) = Sᵣ⁻¹[Sᵣ] = Sₓ[Sᵣ[f]]
f(x) = Eᵣ⁻¹[Eᵣ] = E₋ₓ[Eᵣ[f]].

Knowing this fact doubles the utility of a table of transforms, since it can be used backwards
as well as forwards. That is, given a transform one wishes to invert, one may first look for it
among the tabulated transformed functions; not finding an inversion there, one may equally
well look for the transform among the tabulated functions. If it is found there, the inversion
of the given transform is the transform of the tabulated function.
There are tables of both the finite range transforms and the infinite range transforms,
useful for the purpose of inverting these transforms. This, however, is just one way of
obtaining an inversion (the easiest, of course). In the case of the finite range transforms,
where the inversion is a Fourier series, and one does not know how to sum it or get the
inversion in closed form, the truncated series is a useful approximation to the inverse.
In the case of the infinite range transforms, the inversion integral is subject to evaluation
by the methods of complex integration and residue theory.

9.5.3 Inversion of Fourier Exponential Transforms


We have seen that if the transform of a function F(x) is

f(r) = (1/√(2π)) ∫_{−∞}^{∞} F(x) e^{irx} dx,

then (under the proper conditions on F) the function can be recovered from its transform
through the inversion formula

F(x) = (1/√(2π)) ∫_{−∞}^{∞} f(r) e^{−irx} dr.

Note that in the inversion formula, r is a real variable. Let us change the variable r to a
new (complex) variable, ir = s, and let φ(s) = f(r). Then

φ(s) = (1/√(2π)) ∫_{−∞}^{∞} F(x) e^{sx} dx,

and, since ds = i dr, the inversion becomes

F(x) = −(i/√(2π)) ∫_c φ(s) e^{−sx} ds,

where c is the path of integration in the complex s-plane (along the imaginary axis, Re(s) = 0),
and the Cauchy principal value is to be taken, so that

F(x) = −(i/√(2π)) lim_{β→∞} ∫_{−iβ}^{iβ} φ(s) e^{−sx} ds.

Suppose φ(s) is analytic in the left half plane ℜ(s) < 0, except at a finite number of
isolated singular points sₙ. Let us close the path c_β, −β ≤ ℑ(s) ≤ β, with a semicircle in
the left half-plane, choosing β so large as to include all finite singular points in the half plane
ℜ(s) < 0. Thus, c_β is a vertical segment along the imaginary axis, extending from −iβ to
iβ, and c′_β is a semicircle of radius β which intersects the real axis at x = −β and the
imaginary axis at iβ and −iβ; every singular point is contained within this half circle. By
Cauchy's residue theorem, we then have

∫_{c_β} e^{−sx} φ(s) ds + ∫_{c′_β} e^{−sx} φ(s) ds = 2πi Σ_{j=1}^{k} ρⱼ,

where ρⱼ denotes the residue of e^{−sx}φ(s) at the singular point sⱼ, and we have assumed that
there are k such singular points.
Since we have required that β be so large that c′_β includes all finite singular points in
the left half plane, in the limit as β → ∞ the right side remains constant, and we have the
following:

lim_{β→∞} ∫_{c_β} e^{−sx} φ(s) ds + lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds = 2πi Σ_{j=1}^{k} ρⱼ,

or

lim_{β→∞} ∫_{−iβ}^{iβ} e^{−sx} φ(s) ds = 2πi Σ_{j=1}^{k} ρⱼ − lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds.

Thus,

F(x) = −(i/√(2π)) lim_{β→∞} ∫_{−iβ}^{iβ} e^{−sx} φ(s) ds
     = √(2π) Σ_{j=1}^{k} ρⱼ + (i/√(2π)) lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds.

Many times, the limit on the right hand side is zero, or is easy to evaluate, so that the above
formula is a useful tool for inverting the transform.
Reasoning in similar fashion, but completing the path with a semi-circle in the right half
plane ℜ(s) > 0, we obtain a similar formula:

F(x) = −√(2π) Σ_{j=1}^{k′} ρ′ⱼ − (i/√(2π)) lim_{β→∞} ∫_{c″_β} e^{−sx} φ(s) ds,

where the curve c″_β is another semicircular path similar to c′_β except that it extends into
the right half plane rather than the left; hence, this curve intersects the real axis at x = β.
Another way to visualize the curve is to realize that combining the semicircle c′_β with the
semicircle c″_β results in a full circle of radius β centered at the origin of the complex plane.
For x < 0, one may find that the limit

lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds = 0,

so that the first formula becomes

F(x) = √(2π) Σ_{j=1}^{k} ρⱼ.

(Recall that the ρⱼ are residues at singular points in the left half-plane ℜ(s) < 0.) Again,
one may find that for x > 0, the limit

lim_{β→∞} ∫_{c″_β} e^{−sx} φ(s) ds = 0,

so that the second formula becomes

F(x) = −√(2π) Σ_{j=1}^{k′} ρ′ⱼ.

At x = 0,

F(0) = ½[F(0⁺) + F(0⁻)],

where

F(0⁺) = lim_{ε→0} F(ε),   F(0⁻) = lim_{ε→0} F(−ε).

9.6 Table of Transforms

y(x)  →  Sₙ(y) = ∫₀^a y(x) sin(nπx/a) dx,   n = 1, 2, ...

1  →  (a/nπ)[1 − (−1)ⁿ]

x  →  (a²/nπ)(−1)^{n+1}

x²  →  (a³/nπ)(−1)^{n+1} − 2(a/nπ)³[1 − (−1)ⁿ]

e^{cx}  →  [nπa/(n²π² + c²a²)][1 − (−1)ⁿ e^{ca}]

sin(ωx)  →  a/2   (n = ωa/π);   [nπa/(n²π² − ω²a²)](−1)^{n+1} sin(ωa)   (n ≠ ωa/π)

cos(ωx)  →  0   (n = ωa/π);   [nπa/(n²π² − ω²a²)][1 − (−1)ⁿ cos(ωa)]   (n ≠ ωa/π)

sinh(cx)  →  [nπa/(n²π² + c²a²)](−1)^{n+1} sinh(ca)

cosh(cx)  →  [nπa/(n²π² + c²a²)][1 − (−1)ⁿ cosh(ca)]

a − x  →  a²/nπ

x(a − x)  →  2(a/nπ)³[1 − (−1)ⁿ]

sin(ω(a − x))/sin(ωa)  →  nπa/(n²π² − ω²a²)   (n ≠ ωa/π)

sinh(c(a − x))/sinh(ca)  →  nπa/(n²π² + c²a²)
y(x)  →  Cₙ(y) = ∫₀^a y(x) cos(nπx/a) dx

1  →  a   (n = 0);   0   (n = 1, 2, ...)

x  →  a²/2   (n = 0);   (a/nπ)²[(−1)ⁿ − 1]   (n = 1, 2, ...)

x²  →  a³/3   (n = 0);   2a³(−1)ⁿ/(n²π²)   (n = 1, 2, ...)

e^{cx}  →  [a²c/(n²π² + c²a²)][(−1)ⁿ e^{ca} − 1]

sin(ωx)  →  0   (n = ωa/π);   [a²ω/(n²π² − ω²a²)][(−1)ⁿ cos(ωa) − 1]   (n ≠ ωa/π)

cos(ωx)  →  a/2   (n = ωa/π);   [a²ω/(n²π² − ω²a²)](−1)^{n+1} sin(ωa)   (n ≠ ωa/π)

sinh(cx)  →  [a²c/(n²π² + c²a²)][(−1)ⁿ cosh(ca) − 1]

cosh(cx)  →  [a²c/(n²π² + c²a²)](−1)ⁿ sinh(ca)

(x − a)²  →  a³/3   (n = 0);   2a³/(n²π²)   (n = 1, 2, ...)

cos(ω(a − x))/sin(ωa)  →  a²ω/(ω²a² − n²π²)   (n ≠ ωa/π)

cosh(c(a − x))/sinh(ca)  →  a²c/(n²π² + c²a²)

For a few additional transforms of this type, see Sneddon or Churchill (References 1 and 2).
For transforms of the form ∫_{−∞}^{∞} y(x)e^{inx} dx, ∫₀^∞ y(x) sin(nx) dx, and
∫₀^∞ y(x) cos(nx) dx, see Erdelyi (Reference 3).

9.7 References
1. Sneddon, Ian N.; "Fourier Transforms", McGraw Hill, 1951.
2. Churchill, Ruel V.; "Operational Mathematics", McGraw Hill, 1958.
3. Erdelyi; "Bateman Mathematical Tables", Volume I.

Chapter 10

Miscellaneous Identities, Definitions,


Functions, and Notations

10.1 Leibnitz Rule

If

f(x) = ∫_{a(x)}^{b(x)} g(x, y) dy,

then

(d/dx) f(x) = g[x, b(x)] b′(x) − g[x, a(x)] a′(x) + ∫_{a(x)}^{b(x)} ∂g(x, y)/∂x dy.
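A symbolic spot-check of the rule is straightforward; the sympy sketch below (same assumption as earlier examples) differentiates f(x) = ∫₀^{x²} sin(xy) dy both directly and via the rule.

```python
from sympy import symbols, sin, integrate, diff, simplify

x, y = symbols('x y', positive=True)
g = sin(x*y)
a, b = 0, x**2

f = integrate(g, (y, a, b))          # f(x) = (1 - cos(x**3))/x

direct = diff(f, x)
leibnitz = (g.subs(y, b)*diff(b, x) - g.subs(y, a)*diff(a, x)
            + integrate(diff(g, x), (y, a, b)))

print(simplify(direct - leibnitz))   # 0
```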

10.2 General Solution of First Order Linear Differen-


tial Equations
The objective is to solve for the y(x) which satisfies the differential equation

f(x) = dy/dx + a(x) y

with boundary condition

y(x₀) = y₀.
The procedure is to find an integrating factor. Define h such that

dh/dx = a(x),

i.e.,

h = ∫_c^x a(x′) dx′.

The integrating factor will be e^h, since

d(e^h y)/dx = e^h dy/dx + e^h (dh/dx) y = e^h dy/dx + e^h a(x) y = e^h f(x).

Then

∫_{y₀e^{h₀}}^{ye^h} d(e^h y) = ∫_{x₀}^{x} e^{h′} f(x′) dx′,

where

h₀ = ∫_c^{x₀} a(x′) dx′,
h′ = ∫_c^{x′} a(x″) dx″.

Thus,

y e^h − y₀ e^{h₀} = ∫_{x₀}^{x} e^{h′} f(x′) dx′,

and hence

y = y₀ e^{h₀−h} + ∫_{x₀}^{x} e^{h′−h} f(x′) dx′.

Recalling that

h = ∫_c^x a(x′) dx′,

we have

h′ − h = ∫_c^{x′} a(x″) dx″ − ∫_c^x a(x″) dx″ = −∫_{x′}^{x} a(x″) dx″,

and similarly h₀ − h = −∫_{x₀}^{x} a(x″) dx″.

Finally,

y = y₀ e^{−∫_{x₀}^{x} a(x″) dx″} + ∫_{x₀}^{x} f(x′) e^{−∫_{x′}^{x} a(x″) dx″} dx′.

(Note that the constant c appearing as a lower limit in the integral of the integrating factor
is not a boundary condition: it disappears in the final solution.)
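The closed form is easy to check on a concrete case. For dy/dx + y = x with y(0) = 1, the formula gives y = x − 1 + 2e^{−x}; the sympy sketch below (assumption as before) confirms this against `dsolve`.

```python
from sympy import symbols, Function, Eq, dsolve, exp, simplify

x = symbols('x')
y = Function('y')

sol = dsolve(Eq(y(x).diff(x) + y(x), x), y(x), ics={y(0): 1})
print(sol)                             # y(x) = x - 1 + 2*exp(-x)

formula = x - 1 + 2*exp(-x)            # from evaluating the boxed result above
print(simplify(sol.rhs - formula))     # 0
```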

10.3 Identities in Vector Analysis



Below, bold quantities are vectors, and ∇ is the vector differential operator ∇ = î ∂/∂x +
ĵ ∂/∂y + k̂ ∂/∂z, where î, ĵ, k̂ are unit vectors in the x, y, and z directions, respectively.

1. a · b × c = b · c × a = c · a × b
2. a × (b × c) = b(a · c) − c(a · b)
3. (a × b) · (c × d) = a · [b × (c × d)]
                     = a · [c(b · d) − d(b · c)]
                     = (a · c)(b · d) − (a · d)(b · c)
4. (a × b) × (c × d) = (a × b · d)c − (a × b · c)d
5. ∇(φ + ψ) = ∇φ + ∇ψ
6. ∇(φψ) = φ∇ψ + ψ∇φ
7. ∇(a · b) = (a · ∇)b + (b · ∇)a + a × (∇ × b) + b × (∇ × a)
8. ∇ · (a + b) = ∇ · a + ∇ · b
9. ∇ × (a + b) = ∇ × a + ∇ × b
10. ∇ · (φa) = a · ∇φ + φ∇ · a
11. ∇ × (φa) = (∇φ) × a + φ∇ × a
12. ∇ · (a × b) = b · ∇ × a − a · ∇ × b
13. ∇ × (a × b) = a(∇ · b) − b(∇ · a) + (b · ∇)a − (a · ∇)b
14. ∇ × ∇ × a = ∇(∇ · a) − ∇2 a
15. ∇ × ∇φ = 0
16. ∇·∇×a=0

If r = îx + ĵy + k̂z

17. ∇ · r = 3; ∇×r=0

If V represents a volume bounded by a closed surface S, with unit vector n̂ normal to S and
directed positive outwards, then

18. ∭_V (∇φ) dV = ∬_S φ n̂ dS

19. ∭_V (∇ · a) dV = ∬_S (a · n̂) dS   (Gauss' Theorem)

20. ∭_V (f ∇ · g) dV = ∬_S (f g · n̂) dS − ∭_V (g · ∇f) dV

21. ∭_V (g · ∇f) dV = ∬_S (f g · n̂) dS − ∭_V (f ∇ · g) dV

22. ∭_V (∇ × a) dV = ∬_S (n̂ × a) dS

23. ∭_V (φ∇²ψ − ψ∇²φ) dV = ∬_S (φ∇ψ − ψ∇φ) · n̂ dS   (Green's Theorem)

If S is an unclosed surface bounded by the contour C, and ds is a (vector) increment of
length along C, then

24. ∬_S (n̂ × ∇φ) dA = ∮_C φ ds

25. ∬_S (∇ × a) · n̂ dA = ∮_C a · ds   (Stokes' Theorem)

10.4 Coordinate Systems

Cartesian Coordinates

∇ = î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z
∇²φ = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²
d³r = dx dy dz

Cylindrical Coordinates

∇ = ρ̂₀ ∂/∂ρ + θ̂₀ (1/ρ) ∂/∂θ + k̂ ∂/∂z
∇²φ = (1/ρ) ∂/∂ρ (ρ ∂φ/∂ρ) + (1/ρ²) ∂²φ/∂θ² + ∂²φ/∂z²
d³r = ρ dρ dθ dz

Spherical Coordinates

∇ = r̂₀ ∂/∂r + θ̂₀ (1/r) ∂/∂θ + φ̂₀ (1/(r sin θ)) ∂/∂φ
∇²ψ = (1/r²) ∂/∂r (r² ∂ψ/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂ψ/∂θ) + (1/(r² sin²θ)) ∂²ψ/∂φ²
d³r = r² sin θ dr dθ dφ

10.5 Index Notation


A short note will be given on this notation, which greatly simplifies certain mathematical
problems (to mention one advantage). A convention is adopted here which is sufficient for
working with rectangular Cartesian coordinates. For more general coordinate systems, a
more elaborate convention is needed (see references 1, 2, and 3).
Consider a simple example which illustrates the utility and application of the index
notation. Suppose one has a set of three equations:

u = a_x x + a_y y + a_z z
v = b_x x + b_y y + b_z z
w = c_x x + c_y y + c_z z

By defining

u = u₁,  v = u₂,  w = u₃;
x = x₁,  y = x₂,  z = x₃;
a_x = a₁₁,  a_y = a₁₂,  a_z = a₁₃;
b_x = a₂₁,  b_y = a₂₂,  b_z = a₂₃;
c_x = a₃₁,  c_y = a₃₂,  c_z = a₃₃;

These may be written

u₁ = a₁₁x₁ + a₁₂x₂ + a₁₃x₃
u₂ = a₂₁x₁ + a₂₂x₂ + a₂₃x₃
u₃ = a₃₁x₁ + a₃₂x₂ + a₃₃x₃

or

u₁ = Σ_{α=1}^{3} a₁_α x_α,   u₂ = Σ_{α=1}^{3} a₂_α x_α,   u₃ = Σ_{α=1}^{3} a₃_α x_α,

or

uᵢ = Σ_{α=1}^{3} a_{iα} x_α   (i = 1, 2, 3).

To this point we have effected considerable simplification of the original equations. By the
introduction of the "summation convention", we can go even further. Notice that there are
two kinds of indices on the right: i, which occurs once in the product, and α, which occurs
twice. Index i is called a "singly-occurring" index; α is called a "doubly-occurring" index.
The convention to be introduced is:

1. Doubly-occurring indices are to be given all possible values and the results summed
within the equation.
2. Singly-occurring indices are to be assigned one value in an equation, but as many
equations are to be generated as there are available values for the index.

Thus from 1 we may drop the summation symbol, and by 2 we may drop the parenthesis
denoting values for i. With the summation convention in force, we have uᵢ = a_{iα}x_α, which
unambiguously represents the original equations, if α and i have the same range, which we
shall assume.
A nice but unnecessary finishing touch can be put on the convention, which seems to make
things clearer in the work: for all singly-occurring indices use lower-case Roman letters; for
all doubly-occurring indices, use lower-case Greek letters.
We remark that it is possible to have any number of indices on a quantity.
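In numerical work the summation convention maps directly onto numpy's `einsum`; a minimal sketch (numpy assumed available) evaluates uᵢ = a_{iα}x_α.

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)    # the coefficients a_{i,alpha}
x = np.array([1.0, 2.0, 3.0])

# 'ia,a->i': alpha is doubly-occurring (summed), i is singly-occurring (free)
u = np.einsum('ia,a->i', A, x)
print(u)        # identical to A @ x
print(A @ x)
```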

10.6 Examples of Use of Index Notation

A. Some Handy Symbols

1. δᵢⱼ = 1 if i = j;  0 if i ≠ j.

This is the Kronecker delta.

2. ε_{ijk} = +1 if i, j, k are all different and in cyclic order;
   −1 if i, j, k are all different and in anticyclic order;
   0 if i = j, i = k, or j = k.

This is called the Levi-Civita tensor density.

B. Some Relationships Expressed in Index Notation

1. Dot product (a scalar): a · b = a_α b_α

2. Cross product (a vector; consider the ith component):
   (a × b)ᵢ = ε_{iαβ} a_α b_β = −ε_{iβα} a_α b_β

3. Triple product (a scalar): a · b × c = a_α ε_{αβγ} b_β c_γ

4. Gradient (φ a scalar): (∇φ)ᵢ = ∂φ/∂xᵢ

5. Divergence (V a vector): ∇ · V = ∂V_α/∂x_α

6. Curl (V a vector): (∇ × V)ᵢ = ε_{iαβ} ∂V_β/∂x_α

7. Laplacian: ∇²φ = ∂²φ/(∂x_α ∂x_α)

8. ∂xᵢ/∂xⱼ = δᵢⱼ

C. Some Identities in δᵢⱼ and ε_{ijk} in 3-D Space (a numerical check of identity 4 is
sketched below)

1. δ_{αα} = 3
2. ε_{αβγ} ε_{αβγ} = 6
3. ε_{iαβ} ε_{jαβ} = 2δᵢⱼ
4. ε_{ijβ} ε_{kℓβ} = δ_{ik} δ_{jℓ} − δ_{iℓ} δ_{jk}
5. ε_{ijk} ε_{ℓmn} = δ_{iℓ}(δ_{jm}δ_{kn} − δ_{jn}δ_{km}) − δ_{im}(δ_{jℓ}δ_{kn} − δ_{jn}δ_{kℓ}) + δ_{in}(δ_{jℓ}δ_{km} − δ_{jm}δ_{kℓ})
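The sketch below (numpy assumed) builds ε_{ijk} explicitly and verifies identity 4, the ε-δ contraction, by brute force.

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0     # cyclic orders
    eps[i, k, j] = -1.0    # anticyclic orders

delta = np.eye(3)

# Identity 4: eps_{ij b} eps_{kl b} = delta_ik delta_jl - delta_il delta_jk
lhs = np.einsum('ijb,klb->ijkl', eps, eps)
rhs = np.einsum('ik,jl->ijkl', delta, delta) - np.einsum('il,jk->ijkl', delta, delta)
print(np.allclose(lhs, rhs))   # True
```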

10.7 The Dirac Delta Function
The Dirac delta "function" symbolizes an integration operation, and in this sense is not
strictly a function (in the interpretation of Professor R. V. Churchill). It cannot be the end
result of a calculation, but is meaningful only if an integration is to be carried out over its
argument. We define the δ-"function" as follows:

δ(x) ≡ 0,   x ≠ 0
∫_{−ε}^{ε} δ(x) dx = 1,   ε > 0
∫_{−ε}^{ε} f(x) δ(x) dx = f(0)

Very often it is convenient to think of the δ-"function" as a function zero everywhere
except where its argument is zero, but which is so large at the point where its argument
vanishes that its integral over any region of which that point is an interior point is equal
to unity. Mathematicians shudder at the idea of the δ-"function", but physicists have used
it for years (carefully), finding it of great utility.
Schiff's "Quantum Mechanics" lists some properties of the Dirac δ:

δ(x) = δ(−x)
δ′(x) = −δ′(−x)   (δ′(x) = dδ(x)/dx)
x δ(x) = 0
x δ′(x) = −δ(x)
δ(ax) = (1/a) δ(x),   a > 0
δ(x² − a²) = (1/2a)[δ(x − a) + δ(x + a)],   a > 0
∫ δ(a − x) δ(x − b) dx = δ(a − b)
f(x) δ(x − a) = f(a) δ(x − a)

Professor Churchill uses as a δ-"function" the operation

lim_{h→0} ∫_{−ε}^{ε} f(x) U(h, x) dx,   ε > 0,

where U(h, x) is the function

U(h, x) = 1/h for −h/2 ≤ x ≤ h/2;   0 for x < −h/2 or x > h/2.

Here, note that the limit is taken after integration. Other such representations are common,
like that given by Schiff:

δ(x) = lim_{g→∞} sin(gx)/(πx),

which means not that the limit is to be taken exactly as shown, but rather that it is taken
after integration, i.e., with this representation,

∫_{−ε}^{ε} δ(x) dx = lim_{g→∞} ∫_{−ε}^{ε} sin(gx)/(πx) dx
∫_{−ε}^{ε} f(x) δ(x) dx = lim_{g→∞} ∫_{−ε}^{ε} f(x) sin(gx)/(πx) dx.
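The "limit after integration" prescription can be seen numerically; the scipy sketch below (assumption as before) shows ∫ f(x) sin(gx)/(πx) dx approaching f(0) for f(x) = cos x as g grows.

```python
import numpy as np
from scipy.integrate import quad

def nascent_delta(x, g):
    # sin(gx)/(pi*x), with the removable singularity at x = 0 filled in
    return g/np.pi if x == 0.0 else np.sin(g*x)/(np.pi*x)

f = np.cos    # test function; f(0) = 1

for g in (10.0, 50.0, 200.0):
    val, _ = quad(lambda x: f(x)*nascent_delta(x, g), -1.0, 1.0, limit=1000)
    print(g, val)   # approaches f(0) = 1 as g grows
```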

Schiff gives still another representation, in terms of an integral:

δ(x) = (1/2π) ∫_{−∞}^{∞} e^{ixξ} dξ = lim_{g→∞} (1/2π) ∫_{−g}^{g} e^{ixξ} dξ
     = lim_{g→∞} (1/2π) · 2 sin(gx)/x
     = lim_{g→∞} sin(gx)/(πx);

thus

∫_{−ε}^{ε} f(x) δ(x) dx = ∫_{−ε}^{ε} ∫_{−∞}^{∞} (1/2π) e^{ixξ} f(x) dξ dx.

10.8 Gamma Function

A. Definition

Γ(x) = ∫₀^∞ e^{−t} t^{x−1} dt

B. Properties

a. Γ(x + 1) = x Γ(x)
b. Γ(n) = (n − 1)!   (Γ(1) = 1),   n ∈ ℕ
c. Γ(x) Γ(1 − x) = π/sin(πx)
d. Γ(x) Γ(x + ½) = 2^{1−2x} √π Γ(2x)
e. Γ(1 − b)/Γ(1 − a) = [(1 − a)/(1 − b)] · [(2 − a)/(2 − b)] · [(3 − a)/(3 − b)] ···
f. Γ(½) = √π

Since, by a, one may reduce Γ(x) to a product involving Γ of some number between 1 and 2,
a handy table for calculations is one like that found at the end of the Chemical Rubber
integral tables, for Γ(x), 1 ≤ x ≤ 2.

References
1. H. Margenau and G.M. Murphy; "The Mathematics of Physics and Chemistry", D.
Van Nostrand Company, Inc., New York, 1956, pp. 93-98.

2. Whittaker and Watson; "Modern Analysis", 4th Edition, Cambridge University Press
(1927), Chapter VII.

10.9 Error Function

erf(x) = (2/√π) ∫₀^x e^{−λ²} dλ

erfc(x) = 1 − erf(x) = (2/√π) ∫_x^∞ e^{−λ²} dλ

Chapter 11

Notes and Conversion Factors

11.1 Electrical Units


11.1.1 Electrostatic CGS System
1. The electrostatic cgs unit of charge, sometimes called the escoulomb or statcoulomb,
is that "point" charge which repels an "equal" point charge at a distance of 1 cm, in a
vacuum, with a force of 1 dyne.

2. The electrostatic cgs unit of field strength is that field in which 1 escoulomb experiences
a force of 1 dyne. It, therefore, is 1 dyne per escoulomb.

3. The electrostatic cgs unit of potential difference (or esvolt) is the difference of potential
between two points such that 1 erg of work is done in carrying 1 escoulomb from one
point to the other. It is 1 erg per escoulomb.

11.1.2 Electromagnetic CGS System


1. The unit magnetic pole is a “point” pole which repels an equal pole at a distance of 1
cm, in a vacuum, with a force of 1 dyne.

2. The unit magnetic field strength, the oersted, is that field in which a unit pole expe-
riences a force of 1 dyne. It therefore is 1 dyne per unit pole.

3. The absolute unit of current (or abampere) is that current which in a circular wire of
1-cm radius, produces a magnetic field of strength 2π dynes per unit pole at the center
of the circle. One abampere approximately equals 3 · 1010 esamperes, or 10 amp.

4. The electromagnetic cgs unit of charge (or abcoulomb) is the quantity of electricity
passing in 1 second through any cross section of a conductor carrying a steady current
of 1 abampere. One abcoulomb equals 10 coulombs.

5. The electromagnetic cgs unit of potential difference (or abvolt) is a potential difference
between two points such that 1 erg of work is done in transferring 1 abcoulomb from
one point to the other. One abvolt = 10⁻⁸ volt, which is approximately (1/3) · 10⁻¹⁰ esvolt.

11.1.3 Practical System

11.2 Table of Electrical Units

Quantity                     Practical       Electrostatic cgs        Electromagnetic cgs

Charge                       1 coulomb       3 · 10⁹ escoulombs       1/10 abcoulomb
Current                      1 ampere        3 · 10⁹ esamperes        1/10 abampere
Potential Difference         1 volt          1/300 esvolt             10⁸ abvolts
Electrical Field Strength    1 volt/cm       1/300 dyne/escoulomb     10⁸ abvolts/cm

11.2.1 Energy Relationships

• 1 esvolt × 1 escoulomb = 1 erg

• 1 abvolt × 1 abcoulomb = 1 erg

• 1 volt × 1 coulomb = 10⁷ ergs = 1 joule

• The electron volt equals the work done when an electron is moved from one point to
another differing in potential by 1 volt.

• 1 electron volt = 4.80 · 10⁻¹⁰ escoulomb × (1/300) esvolt = 1.60 · 10⁻¹² erg

11.3 Physical Constants and Conversion Factors; Dimensional Analysis

See http://physics.nist.gov/cuu/Constants/index.html.
