Math Notes
October, 1959
IP-394
Acknowledgments
These notes were prepared by the University of Michigan Student Branch of the American
Nuclear Society, Committee on Math Notes, which included R.W. Albrecht, J.M. Carpenter,
D.L. Galbraith, E.H. Klevans, and R.J. Mack, all students in the University of Michigan
Department of Nuclear Engineering. Assistance was provided by Professors William Kerr
and Paul Zweifel.
The translation to LaTeX was done by Alison Chistopherson (class of 2013), with the
encouragement of Alex Bielajew, over the Winter and Summer of 2012. Other than this
paragraph, everything else is a verbatim transcription of the original document.
Foreword
This set of notes has been compiled with one primary objective in mind: to provide,
in one volume, a handy reference for a large number of the commonly-used mathematical
formulae, and to do so consistently with respect to notation, definition, and normalization.
Many of us keep these results available to us in an excessive number of references, in which
the notation or normalization varies, or formulae are so spread out that they are difficult to
find, and their use is time-consuming.
Short explanations are included, with some examples, to serve two purposes: first, to
recall to the user some of the ideas which may have slipped his mind since his detailed study
of the material; second, for those who have never studied the material, to make its use at
least plausible, and to help in his study of references.
No claim can be made that all results anyone ever uses are here, but it is hoped that
a sufficient quantity of material is included to make necessary only infrequent use of other
references, except for integral tables, etc. for elementary work. Of course, the user may find
it desirable to add some pages of his own.
Finally, it is recommended that those unfamiliar with the theory at any point not blindly
apply the formulae herein, for this is risky business at best; a text should be studied (and,
of course, understood) first.
Contents
1 Orthogonal Functions 2
1.1 Importance of Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Generation of Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Use of Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Operational Properties of Some Common Sets of Orthogonal Functions . . . 8
1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.1 Boundary Value Problem Satisfied by Sines and Cosines . . . . . . . 9
1.5.2 Orthogonal Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.3 Expansion in Fourier Series . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.4 Normalization Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.5 Orthonormal Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.6 Fourier Series, Range 0 ≤ x ≤ L . . . . . . . . . . . . . . . . . . . . 10
1.5.7 Expansion in Half-range Series . . . . . . . . . . . . . . . . . . . . . . 10
1.5.8 Full Range Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Legendre Polynomials 12
2.1 Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Differential Equation Satisfied by Pℓ (x) . . . . . . . . . . . . . . . . . . . . 12
2.4 Rodrigues' Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Normalizing Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.7 Expansion in Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . 13
2.8 Normalized Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . . 14
2.9 Expansion in Normalized Polynomials . . . . . . . . . . . . . . . . . . . . . . 14
2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms . . . 14
2.9.2 Integral Representation of Pℓ (x) . . . . . . . . . . . . . . . . . . . . 14
2.9.3 Bounds on Pℓ (x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.4 Expression of Pℓ (cos θ) in terms of Pℓm (x) . . . . . . . . . . . . . . . . . . . 17
3.5 Normalizing Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4 Spherical Harmonics 19
4.1 Definition of Yℓm (Ω) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2 Expression of Pℓ (cos θ) in terms of the Spherical Harmonics . . . . . . . . . 19
4.3 Orthonormality of Yℓm (Ω) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.4 Expansion in Spherical Harmonics . . . . . . . . . . . . . . . . . . . . . . . . 20
4.5 Differential Equation Satisfied by Yℓm (Ω) . . . . . . . . . . . . . . . . . . . . 20
4.6 Some Low-order Spherical Harmonics . . . . . . . . . . . . . . . . . . . . . . 21
4.7 A Useful Relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5 Laguerre Polynomials 23
5.1 Derivative Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3 Differential Equation Satisfied by Lαn (x) . . . . . . . . . . . . . . . . . . . . . 23
5.4 Orthogonality, Range 0 ≤ x ≤ ∞ . . . . . . . . . . . . . . . . . . . . . . . . 23
5.5 Expansion in Laguerre Polynomials . . . . . . . . . . . . . . . . . . . . . . . 24
5.6 Expansion of X m in Laguerre Polynomials . . . . . . . . . . . . . . . . . . . 24
5.7 Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6 Bessel Functions 25
6.1 Differential Equation Satisfied by Bessel Functions . . . . . . . . . . . . . . . 25
6.2 General Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.3 Series Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4 Properties of Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.5 Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.6 Recursion Formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.7 Differential Formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.8 Orthogonality, Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.9 Expansion in Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.10 Bessel Integral Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.1.3 Existence Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.1.4 Analyticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.1.5 Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.1.6 Further Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
8.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8.2.1 Solving Simultaneous Equations . . . . . . . . . . . . . . . . . . . . . 36
8.2.2 Electric Circuit Example . . . . . . . . . . . . . . . . . . . . . . . . . 37
8.2.3 Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
8.3 Inverse Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.3.1 Heaviside Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.3.2 The Inversion Integral . . . . . . . . . . . . . . . . . . . . . . . . . . 41
8.4 Tables of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.4.1 Analyticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
8.4.2 Cauchy’s Integral Formula . . . . . . . . . . . . . . . . . . . . . . . . 47
8.4.3 Regular and Singular Points . . . . . . . . . . . . . . . . . . . . . . . 50
8.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9 Fourier Transforms 52
9.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.1.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.1.2 Range of Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.1.3 Existence Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.2 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.2.1 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . 53
9.2.2 Relations Among Infinite Range Transforms . . . . . . . . . . . . . . 55
9.2.3 Transforms of Functions of Two Variables . . . . . . . . . . . . . . . 56
9.2.4 Fourier Exponential Transforms of Functions of Three Variables . . . 56
9.3 Summary of Fourier Transform Formulae . . . . . . . . . . . . . . . . . . . . 57
9.3.1 Finite Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
9.3.2 Infinite Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
9.4 Types of Problems to which Fourier Transforms May be Applied . . . . . . . 61
9.5 Inversion of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . 66
9.5.1 Finite Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
9.5.2 Infinite Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
9.5.3 Inversion of Fourier Exponential Transforms . . . . . . . . . . . . . . 67
9.6 Table of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
9.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
10.5 Index Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.6 Examples of Use of Index Notation . . . . . . . . . . . . . . . . . . . . . . . 77
10.7 The Dirac Delta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.8 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10.9 Error Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Chapter 1
Orthogonal Functions
This is easily generalized to a space with more than three dimensions. In an n-dimensional
space the concept of orthogonality is unchanged, except that then the sum is over n terms
Ai Bi , and we have for orthogonality
\[ \mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + A_3B_3 = \sum_{i=1}^{n} A_iB_i = 0. \]
Now one may think of the components of a vector, A1 , A2 , A3 , as the values of a (real)
function at three values of its argument; say A1 = f (r1 ), A2 = f (r2 ), A3 = f (r3 ), or, in terms
which will make our efforts here more clear, Ai = f (ri )(ri = r1 , r2 , r3 ). That is, r has the
values r1 , r2 , r3 , and to get Ai , put ri in f(r).
We may now think of r as having any number, say n, possible values in some range,
so that f(r) evaluated at the various r’s generates an n-dimensional vector. The step to
considering a function as an infinitely-many-dimensional vector is now a natural one; we
allow n to increase without bound, r taking all values in its range.
Let r have some range a ≤ r ≤ b in which it takes on n values such that rj − rj−1 = ∆rj ,
and suppose two such n-dimensional vectors f (rj ) and g(rj ) are thus generated. The inner
product of f with g is generalized as
\[ \sum_{j=1}^{n} f(r_j)\,g(r_j)\,\Delta r_j. \]
Above, for A · B, rj takes on only the discrete values 1,2,3; so ∆r is always unity in this
simple case. Now let us consider the case as n → ∞, where r takes all values between a and
b. The inner product, if it exists, is then
\[ \lim_{n\to\infty}\sum_{j=1}^{n} f(r_j)\,g(r_j)\,\Delta r_j. \]
With proper restrictions on ∆rj (max ∆rj → 0) and on the range of r(a ≤ r ≤ b), this is
just the limit occurring in the definition of the ordinary integral. Thus we say that the inner
product of f(r) with g(r), which is often denoted (f,g), is
\[ (f,g) = \int_a^b f(r)\,g(r)\,dr. \]
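The limiting process above is exactly a numerical quadrature, so the inner product can be approximated directly from the defining Riemann sum. A brief sketch (an illustrative modern addition, not part of the original notes; the function name is ours):

```python
import math

def inner_product(f, g, a, b, n=20000):
    """Approximate (f, g) = integral_a^b f(r) g(r) dr by the Riemann sum
    sum_j f(r_j) g(r_j) dr_j discussed in the text."""
    dr = (b - a) / n
    total = 0.0
    for j in range(n):
        r = a + (j + 0.5) * dr  # midpoint of the j-th subinterval
        total += f(r) * g(r) * dr
    return total

# sin and cos are orthogonal on (-pi, pi); sin with itself gives pi
print(inner_product(math.sin, math.cos, -math.pi, math.pi))  # ~0
print(inner_product(math.sin, math.sin, -math.pi, math.pi))  # ~pi
```

The vanishing first result is the functional analogue of perpendicular vectors; the second is the squared "length" of sin on this interval.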
An N-dimensional vector may be defined in terms of its components along N coordinates, provided that no more than two of the reference coordinates are coplanar. But if the reference
coordinates are orthogonal, e.g., Cartesian coordinates, then the equations take a particu-
larly simple form. The situation is somewhat similar when it is desired to expand a function
in terms of a set of other functions – it is much simpler if the set is orthogonal.
Completeness is another important property. It is apparent that no two reference axes
will suffice for the definition of a vector in 3-dimensional space. The set of two reference axes
is not complete in ordinary space, since a third coordinate can be added which is orthogonal
to both of them. Addition of this coordinate makes the set complete. The situation with
orthogonal functions is exactly analogous. Some authors define a complete set as a set in
terms of which any other function defined on the same interval can be expressed.
Some more common sets of orthogonal functions are the sines and cosines, Bessel func-
tions, Legendre polynomials, associated Legendre functions, spherical harmonics, Laguerre
and Hermite polynomials; operational properties of which are listed in these notes.
The process of separation proceeds as follows. One attempts to find a variable, say u,
such that if it is assumed that F(u, v, w, · · · ) may be written
\[ F(u,v,w,\cdots) = U(u)\,\varphi(v,w,\cdots). \]
4
Now let
\[ \Upsilon\{U(u)\} = \Phi\{\varphi(v,w,\cdots)\}. \]
Here, Υ is an operator involving only u, and Φ is an operator involving only v, w, · · ·. Returning to the example, assume
\[ \mathcal{M}\{F\} = \frac{\partial^2 F}{\partial u^2} + vF + \Delta^2 F + \int_{w_1}^{w_2} K(w,w')\,F(u,v,w')\,dw' = 0. \]
Then,
\[ \mathcal{M}\{F\} = \mathcal{M}\{U\varphi\} = \frac{\partial^2 U\varphi}{\partial u^2} + vU\varphi + \Delta^2 U\varphi + \int_{w_1}^{w_2} K(w,w')\,U\varphi\,dw' \]
\[ = \varphi\,\frac{d^2U}{du^2} + vU\varphi + \Delta^2 U\varphi + U\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' = 0 \]
Dividing by \(U\varphi\), the part depending only on u may be set equal to a constant:
\[ \frac{1}{U}\frac{d^2U}{du^2} + \Delta^2 = \mu^2, \]
where µ² is a constant called the “separation constant”. We may choose it at our discretion. We now have two equations, where only one existed before:
\[ 0 = \frac{d^2U}{du^2} + (\Delta^2 - \mu^2)\,U \]
\[ 0 = (v + \mu^2)\,\varphi + \int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'. \]
\[ U(u_1) = 0, \qquad \frac{dU}{du}(u_2) = U(u_2). \]
These equations are separated; they do not involve v nor w.
By the process of separation of variables we have, from the original equation involving u, v, and w, generated a new set of equations, some of which (those involving u) are a complete problem:
\[ 0 = \frac{d^2U}{du^2} + (\Delta^2 - \mu^2)\,U, \qquad 0 = U(u_1), \qquad \frac{dU}{du}(u_2) = U(u_2) \]
\[ 0 = (v + \mu^2)\,\varphi + \int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' \]
This was our objective in applying the method of separation of variables. The process may
be repeated on the remaining equation or performed on another variable.
Now if, after separation, the u-equation can be put in the form
\[ \frac{d}{du}\left[r(u)\,\frac{dU}{du}\right] - \left[q(u) + \mu\,p(u)\right]U = 0, \tag{1.2.3} \]
and the boundary conditions assume the form
\[ \begin{cases} a_1U(u_1) + a_2U'(u_1) + \alpha_1U(u_2) + \alpha_2U'(u_2) = 0\\ b_1U(u_1) + b_2U'(u_1) + \beta_1U(u_2) + \beta_2U'(u_2) = 0 \end{cases} \tag{1.2.4} \]
where a, b, α, and β are constants (some may be zero), then the system of differential equa-
tions and boundary conditions is called a “Sturm-Liouville system”1 . It will be noted that
the Sturm-Liouville system is very general, and includes many important equations as special
cases, for example the wave equation with those boundary conditions which are commonly
applied.
This system under quite general conditions generates a complete set of orthogonal func-
tions, one for each of an infinite, discrete set of values of the parameter µ. One finds the
values of µ, called “eigenvalues”, for which solutions exist, and the solution functions corresponding to these eigenvalues, called “eigenfunctions”. If the eigenvalues are µ₁, µ₂, · · · and the corresponding eigenfunctions are U₁(u), U₂(u), · · ·, then in general, the functions are orthogonal over the range u₁ to u₂, with respect to the weight function p(u).
\[ \int_{u_1}^{u_2} p(u)\,U_i(u)\,U_j(u)\,du = N_i^2\,\delta_{ij} \tag{1.2.5} \]
Here δ is the Kronecker delta, and p(u) is the same as in equation (1.2.3).
One must take care to find all possible eigenvalues. When the equation and boundary conditions are written exactly as in (1.2.3) and (1.2.4), the eigenvalues are all real. When all
eigenvalues are found, the set of eigenfunctions is complete, and any function reasonably
well behaved between u1 and u2 may be represented in terms of them. Say we seek an
expansion of a function f(u) in terms of our eigenfunctions,
\[ f(u) = \sum_i f_i\,U_i(u) \]
¹ Pronounced LEE-oo-vil, NOT Loo-i-vil.
where the fi are a set of constants. Multiply on the right and left by Uj(u)p(u) and integrate with respect to u over the range u₁ to u₂:
\[ \int_{u_1}^{u_2} p(u)\,U_j(u)\,f(u)\,du = \int_{u_1}^{u_2} p(u)\,U_j(u)\sum_i f_i\,U_i(u)\,du = \sum_i f_i\int_{u_1}^{u_2} p(u)\,U_j(u)\,U_i(u)\,du \]
As a consequence of the orthogonality of the Ui (u)’s, defined in equation (1.2.5), this becomes
\[ \sum_i f_i\,N_i^2\,\delta_{ij} = f_j\,N_j^2. \]
Similarly, a function of several variables may be expanded as \(F(u,v,w,\cdots) = \sum_i f_i(v,w,\cdots)\,U_i(u)\), where
\[ f_i(v,w,\cdots) = \frac{1}{N_i^2}\int_{u_1}^{u_2} p(u)\,F(u,v,w,\cdots)\,U_i(u)\,du. \]
The formula for the coefficient fi follows immediately from multiplying the first equation by p(u)Ui(u) and integrating. Let the original partial differential equation be represented abstractly by
\[ \mathcal{M}\{F(u,v,w,\cdots)\} = S(u,v,w,\cdots) \]
where \(\mathcal{M}\) is a linear differential operator and \(\mathcal{M}\{F\}\) merely represents that part of the differential equation that involves F. Assume F to be expanded in a series in fiUi; multiply the equation by p(u)Uj(u) and integrate over u from u₁ to u₂:
\[ \mathcal{M}\{F\} = S \]
\[ \mathcal{M}\Big\{\sum_i f_i\,U_i\Big\} = S \]
\[ \int_{u_1}^{u_2} p(u)\,U_j(u)\,\mathcal{M}\Big\{\sum_i f_i\,U_i\Big\}\,du = \int_{u_1}^{u_2} S\,p(u)\,U_j(u)\,du = N_j^2\,S_j(v,w,\cdots) \]
where
\[ S_j = \frac{1}{N_j^2}\int_{u_1}^{u_2} S(u,v,w,\cdots)\,p(u)\,U_j(u)\,du \]
so that
\[ S(u,v,w,\cdots) = \sum_j S_j\,U_j(u). \]
Now, by using the operational properties of the Ui's, one reduces the equations (an infinite set, one for each j) to a set in the fi's and Sj's. The particular steps taken depend upon the exact nature of the operator \(\mathcal{M}\), and the set of equations may be coupled, i.e., f's with several indices may appear in the same equation (for example, f_{i−1}, f_i, f_{i+1}). These equations do not involve derivatives with respect to the variable u, and we have gained in this respect. But we have to contend with the infinite set of equations.
All is not lost at this point, for, as it turns out, the series for F (called a generalized Fourier series because of the manner in which the coefficients of the Ui's are chosen in the expansion for F) is the most rapidly convergent series possible in the Ui's. Thus solving for
only the coefficients of the leading terms in the series may enable us to obtain a satisfactory
approximation to F. Also, very often, we are interested only in one or two of the fi ’s on
physical or other grounds.
The set of sines and cosines used in the construction of Fourier series is included as an example. The lists of properties contained in these notes are by no means complete, though they
may suffice for the solution of many problems. The references listed with each section give
detailed derivations, more extensive lists of properties, more discussions of the method and
its limitations, or examples of the use of orthogonal functions. It is recommended that one
unfamiliar with these functions read in some of these references, in order to avoid the pitfalls
of using mathematics beyond the realm of its applicability. These notes have been assembled
mainly for reference. Of special interest may be the table in Margenau and Murphy, page 254, which lists twelve special cases of the Sturm-Liouville equation with the name of the orthogonal set which satisfies each one.
1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ

1.5.1 Boundary Value Problem Satisfied by Sines and Cosines
\[ \frac{d^2f}{dx^2} + k^2 f = 0 \qquad k\ \text{real} \]
\[ f(-\ell) = f(\ell), \qquad f'(-\ell) = f'(\ell) \]
1.5.3 Expansion in Fourier Series
\[ F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left\{ a_n\cos\frac{n\pi x}{\ell} + b_n\sin\frac{n\pi x}{\ell}\right\} \]
where
\[ a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\frac{n\pi x}{\ell}\,dx \qquad n = 0, 1, 2, \ldots \]
\[ b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\frac{n\pi x}{\ell}\,dx \qquad n = 1, 2, 3, \ldots \]
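The coefficient integrals above can be evaluated numerically for a concrete F(x). A short sketch (a modern illustration; helper names are ours) for F(x) = x on (−1, 1), whose sine coefficients are known to be \(b_n = 2(-1)^{n+1}/(n\pi)\):

```python
import math

def fourier_coeffs(F, ell, N, m=20000):
    """a_n and b_n of the formulas above, by midpoint quadrature on (-ell, ell)."""
    h = 2 * ell / m
    a = [0.0] * (N + 1)
    b = [0.0] * (N + 1)
    for j in range(m):
        x = -ell + (j + 0.5) * h
        for n in range(N + 1):
            a[n] += F(x) * math.cos(n * math.pi * x / ell) * h / ell
            b[n] += F(x) * math.sin(n * math.pi * x / ell) * h / ell
    return a, b

# F(x) = x on (-1, 1): all a_n vanish (odd function), b_n = 2(-1)^(n+1)/(n pi)
a, b = fourier_coeffs(lambda x: x, 1.0, 3)
print([round(v, 5) for v in b[1:]])  # ~[0.63662, -0.31831, 0.21221]
```

Note that the a_n vanish by symmetry, anticipating the half-range discussion below.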
1.5.4 Normalization Factors
\[ \frac{1}{\sqrt{2\ell}}, \qquad \frac{1}{\sqrt{\ell}}\sin\frac{n\pi x}{\ell}, \qquad \frac{1}{\sqrt{\ell}}\cos\frac{n\pi x}{\ell} \qquad -\ell \le x \le \ell,\quad n = 1, 2, \ldots \]
1.5.7 Expansion in Half-range Series
\[ F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{L} \]
or
\[ F(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{L} \]
where
\[ a_n = \frac{2}{L}\int_0^{L} F(x)\cos\frac{n\pi x}{L}\,dx, \qquad n = 0, 1, 2, \ldots \]
\[ b_n = \frac{2}{L}\int_0^{L} F(x)\sin\frac{n\pi x}{L}\,dx, \qquad n = 1, 2, 3, \ldots \]
1.5.8 Full Range Series
Consider the coefficients in a full-range expansion for an even function, i.e., F(x) such that F(x) = F(−x).
\[ a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\frac{n\pi x}{\ell}\,dx = \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\cos\frac{n\pi x}{\ell}\,dx + \int_{0}^{\ell} F(x)\cos\frac{n\pi x}{\ell}\,dx\right] \]
Putting −x in for x in the first integral,
\[ b_n = \frac{1}{\ell}\left[-\int_{0}^{\ell} F(-x)\sin\frac{n\pi x}{\ell}\,dx + \int_{0}^{\ell} F(x)\sin\frac{n\pi x}{\ell}\,dx\right], \]
which vanishes when F(−x) = F(x); similarly, \(a_n = \frac{2}{\ell}\int_0^{\ell} F(x)\cos\frac{n\pi x}{\ell}\,dx\).
Chapter 2
Legendre Polynomials
If
\[ H(x,y) = \frac{1}{\sqrt{1 - 2xy + y^2}} = \sum_{\ell=0}^{\infty} P_\ell(x)\,y^{\ell} \]
then
\[ P_\ell(x) = \frac{1}{\ell!}\,\frac{\partial^{\ell}}{\partial y^{\ell}} H(x,y)\bigg|_{y=0}. \]
1. \((1-x^2)\,P_\ell''(x) - 2x\,P_\ell'(x) + \ell(\ell+1)\,P_\ell(x) = 0 \qquad \ell \in \mathbb{Z}\)
Very often x = cos θ.
2. \(\dfrac{d^2u}{dx^2} + \dfrac{\ell(\ell+1)(1-x^2) + 1}{(1-x^2)^2}\,u = 0\), with \(u = (1-x^2)^{1/2}P_\ell(x)\)
2.4 Rodrigues' Formula
\[ P_\ell(x) = \frac{1}{2^{\ell}\,\ell!}\,\frac{d^{\ell}}{dx^{\ell}}\left(x^2 - 1\right)^{\ell} \]
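Rodrigues' formula is directly mechanizable, since differentiating a polynomial only shifts its coefficient list. A short sketch (a modern illustration; names are ours) that recovers the low-degree polynomials tabulated below:

```python
from fractions import Fraction
from math import comb, factorial

def legendre_rodrigues(l):
    """Coefficients of P_l(x) (c[k] multiplies x^k) from Rodrigues' formula:
    P_l = 1/(2^l l!) d^l/dx^l (x^2 - 1)^l."""
    # (x^2 - 1)^l expanded by the binomial theorem
    c = [0] * (2 * l + 1)
    for k in range(l + 1):
        c[2 * k] = comb(l, k) * (-1) ** (l - k)
    # differentiate l times: d/dx sum c[k] x^k = sum k c[k] x^(k-1)
    for _ in range(l):
        c = [k * c[k] for k in range(1, len(c))]
    scale = 2 ** l * factorial(l)
    return [Fraction(v, scale) for v in c]

# P_3(x) = (5x^3 - 3x)/2  ->  coefficients [0, -3/2, 0, 5/2]
print(legendre_rodrigues(3))
```

Exact rational arithmetic (`Fraction`) keeps the coefficients free of rounding error.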
2.6 Orthogonality
\[ \int_{-1}^{1} P_\ell(x)\,P_{\ell'}(x)\,dx = \delta_{\ell\ell'}\,N_\ell^2 \]
where
\[ N_\ell^2 = \frac{2}{2\ell+1}. \]

2.7 Expansion in Legendre Polynomials
\[ f(x) = \sum_{\ell=0}^{\infty} f_\ell\,P_\ell(x) \]
where
\[ f_\ell = \frac{2\ell+1}{2}\int_{-1}^{1} f(x)\,P_\ell(x)\,dx = \frac{1}{N_\ell^2}\int_{-1}^{1} f(x)\,P_\ell(x)\,dx \]
2.8 Normalized Legendre Polynomials
\[ \bar{P}_\ell(x) = \frac{P_\ell(x)}{N_\ell} \]

2.9 Expansion in Normalized Polynomials
\[ g(x) = \sum_{\ell=0}^{\infty} g_\ell\,\bar{P}_\ell(x) \]
where
\[ g_\ell = \int_{-1}^{1} g(x)\,\bar{P}_\ell(x)\,dx \]
2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms
\begin{align*}
P_0(x) &= 1 & N_0^2 &= 2\\
P_1(x) &= x & N_1^2 &= \tfrac{2}{3}\\
P_2(x) &= \tfrac{1}{2}(3x^2 - 1) & N_2^2 &= \tfrac{2}{5}\\
P_3(x) &= \tfrac{1}{2}(5x^3 - 3x) & N_3^2 &= \tfrac{2}{7}\\
P_4(x) &= \tfrac{1}{8}(35x^4 - 30x^2 + 3) & N_4^2 &= \tfrac{2}{9}\\
P_5(x) &= \tfrac{1}{8}(63x^5 - 70x^3 + 15x) & N_5^2 &= \tfrac{2}{11}\\
P_6(x) &= \tfrac{1}{16}(231x^6 - 315x^4 + 105x^2 - 5) & N_6^2 &= \tfrac{2}{13}
\end{align*}
2.9.2 Integral Representation of Pℓ (x)
\[ P_\ell(x) = \frac{1}{\pi}\int_0^{\pi}\left[x + \sqrt{x^2 - 1}\,\cos\varphi\right]^{\ell} d\varphi \]
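For −1 < x < 1 the square root in the integral representation is imaginary, yet the integral is real. A quick numerical check (modern sketch, ours) against the tabulated P₃:

```python
import cmath, math

def P_integral(l, x, m=4000):
    """Laplace's integral for P_l(x); for |x| < 1 the square root is imaginary,
    so complex arithmetic is used (the result comes out real)."""
    h = math.pi / m
    s = sum((x + cmath.sqrt(x * x - 1) * math.cos((k + 0.5) * h)) ** l
            for k in range(m)) * h / math.pi
    return s.real

# P_3(x) = (5x^3 - 3x)/2
x = 0.6
print(abs(P_integral(3, x) - (5 * x ** 3 - 3 * x) / 2) < 1e-9)  # True
```

The midpoint rule converges very rapidly here because the integrand extends to a smooth periodic function of ϕ.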
2.9.3 Bounds on Pℓ (x)
For −1 ≤ x ≤ 1,
\[ |P_\ell(x)| \le 1, \qquad \forall\ \ell \in \mathbb{Z},\ \ell \ge 0 \]
4. R.V. Churchill; “Fourier Series and Boundary Value Problems”, McGraw Hill, 1941, pp. 175-201. (for discussion of concept of orthogonality, see Chap. 3).
5. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).
Chapter 3

Associated Legendre Functions

3.1 Definition
1. \(P_\ell^m(x) = (1-x^2)^{m/2}\,\dfrac{d^{|m|}}{dx^{|m|}}\,P_\ell(x) \qquad (0 \le m \le \ell)\)
2. \(P_\ell^m(x) = \dfrac{(1-x^2)^{m/2}}{2^{\ell}\,\ell!}\,\dfrac{d^{\ell+m}}{dx^{\ell+m}}\left(x^2-1\right)^{\ell}\)
1. \(P_\ell^{m+1}(x) - \dfrac{2mx}{\sqrt{1-x^2}}\,P_\ell^m(x) + \left[\ell(\ell+1) - m(m-1)\right]P_\ell^{m-1}(x) = 0\)
2. \(x\,P_\ell^m(x) = \dfrac{(\ell+m)\,P_{\ell-1}^m(x) + (\ell-m+1)\,P_{\ell+1}^m(x)}{2\ell+1}\)
3. \(\sqrt{1-x^2}\,P_\ell^m(x) = \dfrac{P_{\ell+1}^{m+1}(x) - P_{\ell-1}^{m+1}(x)}{2\ell+1}\)
4. \(P_\ell^{m+1}(x) = \dfrac{2m}{\sqrt{1-x^2}\,(2\ell+1)}\left[(\ell+m)\,P_{\ell-1}^m(x) + (\ell-m+1)\,P_{\ell+1}^m(x)\right] - \left[\ell(\ell+1) - m(m-1)\right]P_\ell^{m-1}(x)\)
Since Pℓm (x) is defined on the interval −1 ≤ x ≤ 1, in physical applications Pℓm (x) is often associated with an angle θ through the relation x = cos θ. Then the equation satisfied by Pℓm (x) may be found in the following form:
\[ \frac{d^2}{d\theta^2}P_\ell^m(x) + \cot\theta\,\frac{d}{d\theta}P_\ell^m(x) + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(x) = 0 \]
or
\[ \frac{1}{\sin\theta}\,\frac{d}{d\theta}\left[\sin\theta\,\frac{d}{d\theta}P_\ell^m(x)\right] + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(x) = 0 \]
2. If Pℓm (x) is defined in a slightly different manner that allows negative values for m,
\[ P_\ell^m(x) = (1-x^2)^{|m|/2}\,\frac{d^{|m|}}{dx^{|m|}}\,P_\ell(x) \qquad (|m| \le \ell) \]
then the expansion may be written as follows:
\[ P_\ell(\cos\theta) = \sum_{m=-\ell}^{\ell}\frac{(\ell-|m|)!}{(\ell+|m|)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\,\cos\left[m(\varphi_1 - \varphi_2)\right]. \]
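The addition theorem above can be verified numerically from the definitions alone. In the sketch below (a modern illustration; helper names are ours), θ is taken as the angle between the directions (θ₁, ϕ₁) and (θ₂, ϕ₂), so that cos θ = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos(ϕ₁ − ϕ₂):

```python
from math import comb, factorial, cos, sin

def legendre_coeffs(l):
    """Coefficients of P_l(x) via Rodrigues' formula (c[k] multiplies x^k)."""
    c = [0] * (2 * l + 1)
    for k in range(l + 1):
        c[2 * k] = comb(l, k) * (-1) ** (l - k)
    for _ in range(l):
        c = [k * c[k] for k in range(1, len(c))]
    s = 2 ** l * factorial(l)
    return [v / s for v in c]

def P(l, m, x):
    """P_l^m(x) = (1-x^2)^(|m|/2) d^|m|/dx^|m| P_l(x), the definition above."""
    c = legendre_coeffs(l)
    for _ in range(abs(m)):
        c = [k * c[k] for k in range(1, len(c))]
    return (1 - x * x) ** (abs(m) / 2) * sum(ck * x ** k for k, ck in enumerate(c))

# two directions (t1, p1), (t2, p2); theta is the angle between them
t1, p1, t2, p2, ell = 0.7, 0.3, 1.2, 2.1, 4
cos_theta = cos(t1) * cos(t2) + sin(t1) * sin(t2) * cos(p1 - p2)
lhs = P(ell, 0, cos_theta)
rhs = sum(factorial(ell - abs(m)) / factorial(ell + abs(m))
          * P(ell, abs(m), cos(t1)) * P(ell, abs(m), cos(t2))
          * cos(m * (p1 - p2))
          for m in range(-ell, ell + 1))
print(abs(lhs - rhs) < 1e-9)  # True
```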
References
1. H. Margenau and G. M. Murphy; “The Mathematics of Physics and Chemistry”, D. Van Nostrand, 1st edition (1934).
2. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and
Company, 1927, pp. 302-320.
3. D. K. Holmes and R.V. Meghreblian, “Notes on Reactor Analysis, Part II, Theory”,
U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.
4. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).
Chapter 4
Spherical Harmonics
4.3 Orthonormality of Yℓm (Ω)
\[ \int_{\Omega} Y_j^{k\,*}(\Omega)\,Y_\ell^m(\Omega)\,d\Omega = \delta_{j\ell}\,\delta_{km} \]
where the integral over vector Ω indicates a double integration over the full ranges of θ and ϕ: 0 ≤ θ ≤ π, 0 ≤ ϕ ≤ 2π.
4.4 Expansion in Spherical Harmonics
\[ F(\Omega) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} F_{\ell m}\,Y_\ell^m(\Omega) \]
where
\[ F_{\ell m} = \int_{\Omega} F(\Omega)\,Y_\ell^{m\,*}(\Omega)\,d\Omega \]
4.5 Differential Equation Satisfied by Yℓm (Ω)
\[ \frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial Y}{\partial\theta}\right) + \frac{1}{\sin\theta}\,\frac{\partial^2 Y}{\partial\varphi^2} + \ell(\ell+1)\sin\theta\,Y = 0 \]
Assume Y = Φ(ϕ)Θ(θ) and let
\[ \frac{\partial^2 Y}{\partial\varphi^2} = -m^2\,Y \]
Imposing the conditions that Y is bounded at cos θ = ±1 makes ℓ an integer. Imposing that
Y is single-valued in ϕ makes m an integer.
4.6 Some Low-order Spherical Harmonics
\[ Y_0^0(\Omega) = \frac{1}{\sqrt{4\pi}} \]
\[ Y_1^{-1}(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{-i\varphi} \]
\[ Y_1^0(\Omega) = \sqrt{\frac{3}{4\pi}}\,\cos\theta \]
\[ Y_1^1(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{i\varphi} \]
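The orthonormality relation of 4.3 can be checked for these low-order harmonics by direct quadrature over the sphere. A sketch (a modern illustration; names are ours):

```python
import cmath, math

def Y(l, m, theta, phi):
    """The low-order harmonics listed above (no Condon-Shortley phase)."""
    if (l, m) == (0, 0):
        return 1 / math.sqrt(4 * math.pi)
    if (l, m) == (1, -1):
        return math.sqrt(3 / (8 * math.pi)) * math.sin(theta) * cmath.exp(-1j * phi)
    if (l, m) == (1, 0):
        return math.sqrt(3 / (4 * math.pi)) * math.cos(theta)
    if (l, m) == (1, 1):
        return math.sqrt(3 / (8 * math.pi)) * math.sin(theta) * cmath.exp(1j * phi)
    raise ValueError("harmonic not tabulated")

def overlap(l1, m1, l2, m2, n=200):
    """Integral of Y*_{l1 m1} Y_{l2 m2} over the sphere, dOmega = sin(theta) dtheta dphi."""
    s = 0.0
    dt, dp = math.pi / n, 2 * math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dt
        for j in range(n):
            phi = (j + 0.5) * dp
            s += (Y(l1, m1, theta, phi).conjugate() * Y(l2, m2, theta, phi)
                  * math.sin(theta) * dt * dp)
    return s

print(abs(overlap(1, 0, 1, 0)))   # ~1.0 (normalization)
print(abs(overlap(1, 1, 1, -1)))  # ~0.0 (orthogonality)
```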
References
1. H. Margenau and G. M. Murphy; “The Mathematics of Physics and Chemistry”, D. Van Nostrand, 1st edition (1934).
2. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and
Company, 1927, pp. 302-320.
3. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125.
(Lists some properties and tabulates functions).
4. D. K. Holmes and R.V. Meghreblian, “Notes on Reactor Analysis, Part II, Theory”,
U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.
6. Whittaker and Watson; “Modern Analysis”, 4th Edition, Cambridge University Press
(1927), pp. 391-396.
Chapter 5
Laguerre Polynomials
5.1 Derivative Definition
\[ L_n^{(\alpha)}(x) = (-1)^n\,x^{-\alpha}\,e^{x}\,\frac{d^n}{dx^n}\left(x^{\alpha+n}\,e^{-x}\right) \]

5.2 Generating Function
\[ H(x,t) = (1-t)^{-(\alpha+1)}\,e^{-\frac{xt}{1-t}} = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\,L_n^{(\alpha)}(x)\,t^n \]
Thus
\[ L_n^{(\alpha)}(x) = (-1)^n\,\frac{d^n}{dt^n}\left[(1-t)^{-(\alpha+1)}\,e^{-\frac{xt}{1-t}}\right]\bigg|_{t=0} \]
5.3 Differential Equation Satisfied by Lₙ(x)
\[ x\,\frac{d^2y_n}{dx^2} + (\alpha - x + 1)\,\frac{dy_n}{dx} + n\,y_n = 0 \qquad n = 0, 1, 2, \ldots \]

5.4 Orthogonality, Range 0 ≤ x ≤ ∞
\[ \int_0^{\infty} x^{\alpha}\,e^{-x}\,L_m^{(\alpha)}(x)\,L_n^{(\alpha)}(x)\,dx = N_n^2\,\delta_{mn} \]
5.5 Expansion in Laguerre Polynomials
\[ F(x) = \sum_{n=0}^{\infty} f_n\,L_n^{(\alpha)}(x) \]
where
\[ f_n = \frac{1}{N_n^2}\int_0^{\infty} x^{\alpha}\,e^{-x}\,F(x)\,L_n^{(\alpha)}(x)\,dx \]
5.7 Recurrence Relations
a. \((x - 2n - \alpha - 1)\,L_n^{(\alpha)}(x) = L_{n+1}^{(\alpha)}(x) + n(n+\alpha)\,L_{n-1}^{(\alpha)}(x)\)
b. \(L_{n+1}^{(\alpha)\,\prime}(x) = (n+1)\left[L_n^{(\alpha)}(x) - L_n^{(\alpha)\,\prime}(x)\right]\)
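Recurrence (a) can be checked against the derivative definition of 5.1, since for integer α the product rule turns the n-th derivative into an explicit polynomial. A sketch (a modern illustration; names are ours):

```python
from math import comb, perm

def laguerre(n, alpha):
    """Coefficients (low -> high degree) of L_n^(alpha) from the derivative
    definition L_n = (-1)^n x^-alpha e^x d^n/dx^n (x^(alpha+n) e^-x).
    Expanding by Leibniz's rule gives coeff of x^(n-k): (-1)^k C(n,k) (a+n)!/(a+n-k)!."""
    c = [0] * (n + 1)
    for k in range(n + 1):
        c[n - k] = (-1) ** k * comb(n, k) * perm(alpha + n, k)
    return c

def poly_eval(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

# check recurrence (a): (x - 2n - alpha - 1) L_n = L_{n+1} + n(n+alpha) L_{n-1}
n, alpha, x = 3, 2, 1.7
lhs = (x - 2 * n - alpha - 1) * poly_eval(laguerre(n, alpha), x)
rhs = (poly_eval(laguerre(n + 1, alpha), x)
       + n * (n + alpha) * poly_eval(laguerre(n - 1, alpha), x))
print(abs(lhs - rhs) < 1e-8)  # True
```

Note that this is the older (unnormalized) convention of these notes, not the modern one in which the n! is absorbed into the polynomial.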
Chapter 6
Bessel Functions
6.1 Differential Equation Satisfied by Bessel Functions
1. \(\dfrac{d^2y}{dx^2} + \dfrac{1}{x}\,\dfrac{dy}{dx} + \left(1 - \dfrac{\nu^2}{x^2}\right)y = 0\)
or
2. \(\dfrac{1}{x}\,\dfrac{d}{dx}\left(x\,\dfrac{dy}{dx}\right) + \left(1 - \dfrac{\nu^2}{x^2}\right)y = 0\)
(An extensive listing of other equations satisfied by Bessel functions is given in Reference 2.)
6.2 General Solutions
The general solution may be written
\[ y(x) = A_1\,J_n(x) + A_2\,N_n(x) \]
where Jn (x) are the Bessel functions of the first kind of order n and Nn (x) are the Neumann¹ functions, the Bessel functions of the second kind. The Neumann functions are also frequently represented by the symbol Yn (x). For non-integer ν,
\[ N_\nu(x) = \frac{J_\nu(x)\cos\nu\pi - J_{-\nu}(x)}{\sin\nu\pi}. \]
For ν = n and n ∈ Z, the above expression reduces to²
\[ N_n(x) = \frac{2}{\pi}J_n(x)\log\frac{x}{2} - \frac{1}{\pi}\sum_{r=0}^{\infty}(-1)^r\left[F(r) + F(n+r)\right]\frac{(x/2)^{n+2r}}{r!\,(n+r)!} - \frac{1}{\pi}\sum_{r=0}^{n-1}\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{2r-n} \]
where \(F(r) = \sum_{s=1}^{r}\frac{1}{s}\). A third function which sometimes finds use is the Hankel function, or Bessel function of the third kind. There are two such functions, defined by
\[ H_\nu^{(1)}(x) = J_\nu(x) + i\,N_\nu(x), \qquad H_\nu^{(2)}(x) = J_\nu(x) - i\,N_\nu(x), \]
so that the general solution may also be written \(y(x) = A_1H_\nu^{(1)}(x) + A_2H_\nu^{(2)}(x)\), where A₁ and A₂ are arbitrary constants that may be complex. These functions bear the same relation to the Bessel functions Jν (x) and Nν (x) as the functions \(e^{\pm i\nu x}\) bear to cos νx and sin νx. They satisfy the same differential equation and recursion relations as Jν (x). Their importance results from the fact that they alone vanish for an infinite complex argument, viz. H⁽¹⁾ if the imaginary part of the argument is positive, H⁽²⁾ if it is negative; i.e., \(\lim_{r\to\infty}H^{(1)}(re^{i\theta}) = 0\), \(\lim_{r\to\infty}H^{(2)}(re^{-i\theta}) = 0\), for 0 ≤ θ ≤ π.
From the above equations, we can also write
\[ J_\nu(x) = \frac{1}{2}\left[H_\nu^{(2)}(x) + H_\nu^{(1)}(x)\right] \qquad (\nu\ \text{unrestricted}) \]
\[ N_\nu(x) = \frac{i}{2}\left[H_\nu^{(2)}(x) - H_\nu^{(1)}(x)\right] \]
6.4 Properties of Bessel Functions
a. \(2\,J_n'(x) = J_{n-1}(x) - J_{n+1}(x)\)
b. \(\dfrac{2n}{x}\,J_n(x) = J_{n-1}(x) + J_{n+1}(x)\)
c. \(x\,J_n'(x) = x\,J_{n-1}(x) - n\,J_n(x) = n\,J_n(x) - x\,J_{n+1}(x)\)
6.7 Differential Formulae
a. \(\dfrac{d}{dx}\left[x^n\,J_n(x)\right] = x^n\,J_{n-1}(x)\)
b. \(\dfrac{d}{dx}\left[x^{-n}\,J_n(x)\right] = -x^{-n}\,J_{n+1}(x)\)
a. \(A_j = \dfrac{2}{c^2\left[J_{n+1}(\lambda_{nj}c)\right]^2}\displaystyle\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx\) when \(J_n(\lambda_{nj}c) = 0\)

b. \(A_j = \dfrac{2\lambda_{nj}^2}{\left(\lambda_{nj}^2c^2 + h^2 - n^2\right)\left[J_n(\lambda_{nj}c)\right]^2}\displaystyle\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx\) when \(\lambda c\,J_n'(\lambda c) = -h\,J_n(\lambda c)\)
6.10 Bessel Integral Form
\[ J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos\left(n\theta - x\sin\theta\right)d\theta \qquad (n = 0, 1, 2, \ldots) \]
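The integral form gives a simple way to compute Jₙ(x) and, with it, to verify the recursion formulae of 6.6. A sketch (a modern illustration; names are ours):

```python
import math

def J(n, x, m=2000):
    """Bessel J_n(x) from the integral form above, by midpoint quadrature.
    The integrand extends to a smooth 2*pi-periodic function, so the
    midpoint rule converges very rapidly."""
    h = math.pi / m
    return sum(math.cos(n * ((k + 0.5) * h) - x * math.sin((k + 0.5) * h))
               for k in range(m)) * h / math.pi

# spot checks: J_0(0) = 1, and recursion b: (2n/x) J_n = J_{n-1} + J_{n+1}
print(round(J(0, 0.0), 6))  # 1.0
n, x = 2, 1.3
print(abs(2 * n / x * J(n, x) - (J(n - 1, x) + J(n + 1, x))) < 1e-9)  # True
```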
Chapter 7

Modified Bessel Functions
where
\[ F(r) = \sum_{s=1}^{r}\frac{1}{s} \]
7.4 Properties of In
References
1. G.N. Watson; “A Treatise on the Theory of Bessel Functions”, 2nd Edition, Cambridge University Press, 1944. (exhaustive treatment)
2. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions).
3. R.V. Churchill; “Fourier Series and Boundary Value Problems”, McGraw Hill, 1941, pp. 175-201. (for discussion of concept of orthogonality, see Chap. 3).
4. This reference is full of extremely interesting, beautiful, and helpful pictures of many functions, almost suitable for hanging in the living room.
6. Whittaker and Watson; “Modern Analysis”, 4th Edition, Cambridge University Press, 1958. (more rigorous development)
7. N.W. McLachlan; “Bessel Functions for Engineers”, 2nd Edition, Oxford, Clarendon Press, 1955.
Chapter 8

Laplace Transforms
8.1 Introduction
8.1.1 Description
The Laplace transformation permits many relatively complicated operations upon a func-
tion, such as differentiation and integration for instance, to be replaced by simpler algebraic
operations, such as multiplication or division, upon the transform. It is analogous to the
way in which such operations as multiplication and division are replaced by simpler pro-
cesses of addition and subtraction when we work not with numbers themselves but with
their logarithms.
8.1.2 Definition
The Laplace transformation applied to a function f (t) associates a function of a new variable s with f (t). This function of s is denoted by \(\mathcal{L}\{f(t)\}\) or, where no confusion will result, simply by \(\mathcal{L}(f)\) or F(s); the transform is defined by:
\[ \mathcal{L}(f) \equiv \int_0^{\infty} f(t)\,e^{-st}\,dt \]
These conditions are usually met by functions occurring in physical problems. If a number a exists such that \(e^{-at}|f(t)|\) is bounded, then f is said to be of exponential order; the number a is called the exponential order of f (t).
8.1.4 Analyticity
If f (t) is piecewise continuous and of exponential order a, the transform of f (t), i.e., F(s), is an analytic function of s for Re(s) > a. Also, it is true that for Re(s) > a, \(\lim_{x\to\infty}F(s) = 0\) and \(\lim_{y\to\infty}F(s) = 0\), where s = x + iy.
8.1.5 Theorems
Theorem I (Linearity)
The Laplace transform of a sum of functions is the sum of the transforms of the individual functions.
\[ \mathcal{L}(f + g) = \mathcal{L}(f) + \mathcal{L}(g) \]
Theorem II (Linearity)
The Laplace transform of a constant times a function is the constant times the transform
of the function.
\[ \mathcal{L}(cf) = c\,\mathcal{L}(f) \]
Theorem III (Transforms of Derivatives)
\[ \mathcal{L}(f') = s\,\mathcal{L}(f) - f(0^+) \]
and
\[ \mathcal{L}(f'') = s^2\,\mathcal{L}(f) - s\,f(0^+) - f'(0^+) \]
The latter, of course, requires an extension of the continuity of f (t) and its derivatives to include f ′′ (t), and may be formally shown by partial integration. More generally, if f(t) and its first n−1 derivatives are continuous and \(d^nf/dt^n\) is piecewise continuous, then
\[ \mathcal{L}\left(\frac{d^nf}{dt^n}\right) = s^n\,\mathcal{L}(f) - s^{n-1}f(0^+) - s^{n-2}f'(0^+) - \cdots - f^{(n-1)}(0^+) \]
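The derivative theorem can be checked numerically from the defining integral, truncated at a large upper limit. A sketch (a modern illustration; names are ours) using f(t) = t³, for which f'' = 6t and f(0⁺) = f'(0⁺) = 0:

```python
import math

def laplace(f, s, T=60.0, m=200000):
    """Numerical Laplace transform: integral_0^T f(t) e^(-st) dt (T chosen
    large enough that the neglected tail is negligible for s = 2)."""
    h = T / m
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
               for k in range(m)) * h

# f(t) = t^3: check L(f'') = s^2 L(f) - s f(0+) - f'(0+), with f'' = 6t
s = 2.0
lhs = laplace(lambda t: 6 * t, s)            # transform of f''
rhs = s ** 2 * laplace(lambda t: t ** 3, s)  # boundary terms vanish here
print(abs(lhs - 6 / s ** 2) < 1e-4, abs(lhs - rhs) < 1e-4)  # True True
```

Here 6/s² is the tabulated transform of 6t, so both the theorem and the table entry are confirmed at once.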
Theorem IV (Transforms of Integrals)
If f (t) is of exponential order and at least piecewise continuous, then the transform of \(\int_a^t f(t)\,dt\) is given by
\[ \mathcal{L}\left[\int_a^t f(t)\,dt\right] = \frac{1}{s}\,\mathcal{L}(f) + \frac{1}{s}\int_a^0 f(t)\,dt \]
Theorem V
\[ \mathcal{L}\left[e^{at}f(t)\right] = F(s - a) \]
Theorem VI
If
\[ f_b(t) = \begin{cases} f(t-b), & t \ge b\\ 0, & t < b \end{cases} \]
then
\[ \mathcal{L}(f_b) = e^{-bs}\,F(s). \]

Theorem VII (Convolution)
\[ \mathcal{L}\left[\int_0^t f(t-\tau)\,g(\tau)\,d\tau\right] = F(s)\,G(s) = \mathcal{L}\left[\int_0^t g(t-\tau)\,f(\tau)\,d\tau\right] \]
Theorem VIII (Derivatives of Transforms)
\[ \mathcal{L}\left[t\,f(t)\right] = -\frac{dF(s)}{ds} \]
More generally,
\[ \mathcal{L}\left[t^n f(t)\right] = (-1)^n\,\frac{d^nF}{ds^n} \]
8.2 Examples
8.2.1 Solving Simultaneous Equations
The problem is to solve for y from the simultaneous equations:
\[ y' + y + 3\int_0^t z\,dt = \cos t + 3\sin t \]
\[ 2y' + 3z' + 6z = 0 \qquad y_0 = -3,\quad z_0 = 2 \]
Taking the Laplace transform of each equation, we have:
\[ \mathcal{L}(y') + \mathcal{L}(y) + 3\,\mathcal{L}\left[\int_0^t z\,dt\right] = \mathcal{L}(\cos t) + 3\,\mathcal{L}(\sin t) \]
\[ 2\,\mathcal{L}(y') + 3\,\mathcal{L}(z') + 6\,\mathcal{L}(z) = 0 \]
This system of equations can be reduced as follows by using the theorems and definitions from the previous sections:
\[ s\,\mathcal{L}(y) + 3 + \mathcal{L}(y) + \frac{3}{s}\,\mathcal{L}(z) = \frac{s}{s^2+1} + \frac{3}{s^2+1} \]
\[ 2\left[s\,\mathcal{L}(y) + 3\right] + 3\left[s\,\mathcal{L}(z) - 2\right] + 6\,\mathcal{L}(z) = 0 \]
Collecting the terms and transposing, we have
\[ (s+1)\,\mathcal{L}(y) + \frac{3}{s}\,\mathcal{L}(z) = \frac{s+3}{s^2+1} - 3 \]
\[ 2s\,\mathcal{L}(y) + 3(s+2)\,\mathcal{L}(z) = 0 \]
The two original integro-differential equations have thus been reduced to two linear algebraic equations in \(\mathcal{L}(y)\) and \(\mathcal{L}(z)\). Applying Cramer's rule and solving for \(\mathcal{L}(y)\), since it is y which we want:
\[ \mathcal{L}(y) = \frac{\begin{vmatrix}\frac{s+3}{s^2+1}-3 & \frac{3}{s}\\ 0 & 3(s+2)\end{vmatrix}}{\begin{vmatrix}s+1 & \frac{3}{s}\\ 2s & 3(s+2)\end{vmatrix}} = \frac{3(s+2)\left[\frac{s+3}{s^2+1}-3\right]}{3s(s+3)} \]
where \(3s(s+3)\) is the determinant of the matrix in the denominator and \(3(s+2)\left[\frac{s+3}{s^2+1}-3\right]\) is the determinant of the matrix in the numerator. \(\mathcal{L}(y)\) can also be written as
\[ \mathcal{L}(y) = \frac{s+2}{s(s^2+1)} - \frac{3(s+2)}{s(s+3)}. \]
Now applying the method of partial fractions,
\[ \mathcal{L}(y) = \frac{2}{s} + \frac{1-2s}{s^2+1} - \frac{1}{s+3} - \frac{2}{s} = -2\,\frac{s}{s^2+1} + \frac{1}{s^2+1} - \frac{1}{s+3} \]
And, finding the inverse in the table of transforms, which are tables relating functions of s to the corresponding functions of t, and will be found in section IV of this paper,
\[ y = -2\cos t + \sin t - e^{-3t} \qquad (t > 0) \]
It should be noted that one of the inherent characteristics of solving differential equations
by the use of Laplace transforms is that the initial conditions are included in the solution.
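The partial-fraction step above is easy to confirm numerically: the Cramer's-rule form and the term-by-term form of \(\mathcal{L}(y)\) must agree for every s, and the stated y(t) must meet the initial condition. A sketch (a modern illustration; names are ours):

```python
import math

def Ly_cramer(s):
    """L(y) as obtained from Cramer's rule above."""
    return (s + 2) / (s * (s ** 2 + 1)) - 3 * (s + 2) / (s * (s + 3))

def Ly_partial(s):
    """L(y) after partial fractions; each term is invertible from the tables."""
    return -2 * s / (s ** 2 + 1) + 1 / (s ** 2 + 1) - 1 / (s + 3)

# the two forms agree at sample points
for s in (0.5, 1.7, 4.0):
    assert abs(Ly_cramer(s) - Ly_partial(s)) < 1e-12

y = lambda t: -2 * math.cos(t) + math.sin(t) - math.exp(-3 * t)
print(y(0.0))  # -3.0 (matches the initial condition y0 = -3)
```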
8.2.3 Transfer Functions
For certain control functions, and for representing the dynamic behavior of various devices
such as reactors, heat exchangers, etc., it is advantageous to use a “transfer function” because
of the convenience in manipulation it provides. The transfer functions of many elements of
a system, when strung together in a block diagram, represent a convenient way of writing
complicated system equations. The transfer function of a system may be defined as the ratio
of the output to the input of the system in transform (s) space. The conditions for using
transfer functions are:
4. Linear system.
1. Equations
e_i = Ri + L di/dt

e₀ = L di/dt
2. Transforms
(E₀/E_i)(s) = sL I(s) / [(R + sL) I(s)] = sL/(R + sL) = sτ/(1 + sτ)

(where τ = L/R is the circuit's time constant.)
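A short numerical aside (ours, not the notes'): with an assumed time constant τ, the transfer function G(s) = sτ/(1 + sτ) can be evaluated on the imaginary axis s = iω to read off the steady-state frequency response.

```python
tau = 0.5  # assumed time constant L/R, in seconds

def G(s):
    # the transfer function derived above: G(s) = s*tau / (1 + s*tau)
    return s * tau / (1.0 + s * tau)

# Evaluated at s = i*omega this is a high-pass response: low frequencies are
# attenuated, high frequencies pass with gain approaching unity.
print(abs(G(1j * 0.01)) < 0.01, abs(abs(G(1j * 1000.0)) - 1.0) < 1e-3)  # True True
```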
38
8.3 Inverse Transformations
8.3.1 Heaviside Methods
When solving equations by the Laplace transform technique, it is frequently the most
difficult part of the procedure to invert the transformed solution for F(s) into the desired
function f(t). A simple way of making this inversion, but unfortunately a method only
applicable to special cases, is to reduce the answer to a number of simple expressions by
using partial fractions, and then applying the Heaviside theorems as outlined below:
Theorem I
If y(t) = L⁻¹[p(s)/q(s)], where p(s) and q(s) are polynomials, and the order of q(s) is greater
than the order of p(s), then the term in y(t) corresponding to an unrepeated linear factor
(s − a) of q(s) is [p(a)/q′(a)] e^{at} or [p(a)/Q(a)] e^{at}, where Q(s) is the product of all factors of q(s) except for
(s − a).
Example
If L(f(t)) = (s² + 2)/(s(s + 1)(s + 2)), then what is f(t)?
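The answer can be sketched numerically (an editorial addition; the helper name is ours). Theorem I assigns to each unrepeated factor (s − a) the coefficient lim_{s→a} (s − a)F(s), which here gives f(t) = 1 − 3e^{−t} + 3e^{−2t}:

```python
def F(s):
    # the transform in the example: (s^2 + 2) / (s (s+1) (s+2))
    return (s * s + 2.0) / (s * (s + 1.0) * (s + 2.0))

def heaviside_coeff(a, eps=1e-7):
    # lim_{s -> a} (s - a) F(s), approximated just off the pole
    s = a + eps
    return (s - a) * F(s)

coeffs = {a: round(heaviside_coeff(a), 4) for a in (0.0, -1.0, -2.0)}
print(coeffs)  # {0.0: 1.0, -1.0: -3.0, -2.0: 3.0}
```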
Theorem II
If y(t) = L⁻¹[p(s)/q(s)], where p(s) and q(s) are polynomials and the order of q(s) is greater
than the order of p(s), then the terms in y(t) corresponding to the repeated linear factor
(s − a)^r of q(s) are:

e^{at} [ φ^{(r−1)}(a)/(r−1)! + φ^{(r−2)}(a) t/[(r−2)! 1!] + … + φ′(a) t^{r−2}/[1!(r−2)!] + φ(a) t^{r−1}/(r−1)! ],

where φ(s) is the quotient of p(s) and all the factors of q(s) except for (s − a)^r.
39
Example
If L(f(t)) = (s + 3)/((s + 2)²(s + 1)), then what is f(t)?

a. φ(s) = (s + 3)/(s + 1);    φ′(s) = [(s + 1) − (s + 3)]/(s + 1)² = −2/(s + 1)²

b. φ(−2) = −1;    φ′(−2) = −2

The terms in f(t) corresponding to (s + 2)² are:

e^{−2t} [ −2/1! + (−1)t/(0! 1!) ] = −e^{−2t}(2 + t).

Then, as in the example from Theorem I:

p(s) = s + 3;    q(s) = s³ + 5s² + 8s + 4
q′(s) = 3s² + 10s + 8
p(−1) = 2;    q′(−1) = 1.

Thus,

f(t) = 2e^{−t} − (2 + t)e^{−2t}.
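The result f(t) = 2e^{−t} − (2 + t)e^{−2t} can be cross-checked (our addition) against F(s) = (s+3)/((s+2)²(s+1)) with the initial-value theorem: f(0⁺) = lim_{s→∞} sF(s), and, since f(0⁺) = 0, f′(0⁺) = lim_{s→∞} s²F(s).

```python
import math

def f(t):
    # candidate inverse transform from Theorems I and II
    return 2.0 * math.exp(-t) - (t + 2.0) * math.exp(-2.0 * t)

def F(s):
    return (s + 3.0) / ((s + 2.0) ** 2 * (s + 1.0))

s_big = 1e6
print(abs(s_big * F(s_big)) < 1e-5)             # f(0+) = 0  -> True
print(abs(s_big ** 2 * F(s_big) - 1.0) < 1e-4)  # f'(0+) = 1 -> True
```

The time-domain side agrees: f(0) = 2 − 2 = 0 and f′(0) = −2 − 1 + 4 = 1.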
Theorem III
If y(t) = L⁻¹[p(s)/q(s)], where p(s) and q(s) are polynomials and the order of q(s) is greater
than the order of p(s), then the terms in y(t) corresponding to an unrepeated quadratic
factor (s + a)² + b² of q(s) are (e^{−at}/b)(φ_i cos bt + φ_r sin bt), where φ_r and φ_i are, respectively,
the real and imaginary parts of φ(−a + ib), and φ(s) is the quotient of p(s) and all the factors
of q(s) except (s + a)² + b².
Example
If L(f(t)) = s/((s + 2)²(s² + 2s + 2)), then what is f(t)?

s² + 2s + 2 = (s + 1)² + 1²

φ(s) = s/(s + 2)².
40
Therefore,

φ(−a + ib) = φ(−1 + i) = (−1 + i)/[(−1 + i) + 2]²
           = (−1 + i)/(1 + i)² = (−1 + i)/(2i) = 1/2 + i/2.

So φ_r = φ_i = 1/2, and the terms in f(t) corresponding to (s² + 2s + 2) are e^{−t}(cos t + sin t)/2. The terms
corresponding to (s + 2)², obtained from Theorem II with φ(s) = s/(s² + 2s + 2), are −e^{−2t}(t + 1/2). By
adding the two partial inverses, we get the following answer:

f(t) = (1/2)e^{−t}(cos t + sin t) − e^{−2t}(t + 1/2).
The transformed function F(s) may also be inverted by means of the complex inversion integral

f(t) = (1/(2πi)) ∫_{γ−i∞}^{γ+i∞} e^{st} F(s) ds,

where γ is some real number chosen such that F(s) is analytic (see Appendix A) for Re(s) ≥ γ,
and the Cauchy principal value of the integral is to be taken, i.e.

f(t) = lim_{B→∞} (1/(2πi)) ∫_{γ−iB}^{γ+iB} e^{st} F(s) ds
Let us illustrate the formal origin of the inversion integral in the following way. In the
complex plane, let φ(z) be a function of z, analytic on the line x = γ, and in the entire half
plane R to the right of this line. Moreover, let |φ(z)| approach zero uniformly as z becomes
infinite through this half plane. Then if s₀ is any point in the half plane R, we can choose a
semi-circular contour c, composed of c₁ and c₂, as shown below, and apply Cauchy's integral
formula (see Appendix B),

φ(s₀) = (1/(2πi)) ∮_c φ(z)/(z − s₀) dz

Here, φ(z) is analytic within and on the boundary c of a simply connected region R and s₀
is any point in the interior of R (the integration around c is in the positive sense).
Thus, Cauchy's integral formula yields

φ(s) = (1/(2πi)) ∮_c φ(z)/(z − s) dz = (1/(2πi)) ∫_{c₂} φ(z)/(z − s) dz + (1/(2πi)) ∫_{γ+ib}^{γ−ib} φ(z)/(z − s) dz
41
Now, for values of z on the path of integration, c₂, and for b sufficiently large,

|z − s| ≥ b − |s − γ|

⇒ | ∫_{c₂} φ(z)/(z − s) dz | ≤ ∫_{c₂} |φ(z)|/|z − s| |dz|
                             ≤ [M/(b − |s − γ|)] ∫_{c₂} |dz|
                             = πbM/(b − |s − γ|)

where M is the maximum value of |φ(z)| on c₂. As b → ∞, the fraction b/(b − |s − γ|) → 1, and M
approaches zero. Hence,

lim_{b→∞} ∫_{c₂} φ(z)/(z − s) dz = 0
42
where ρ_n(t) is the residue of e^{st} φ(s) at s = s_n. For discussion of residues, see Appendix C; for
singular points, Appendix D.
Let the path of integration be made up of the line segments γ − ib, γ + ib, and c₃. Then,

(1/(2πi)) ∫_{γ−ib}^{γ+ib} φ(s) e^{st} ds + (1/(2πi)) ∫_{c₃} φ(s) e^{st} ds = Σ_{n=1}^{N} ρ_n(t)

If the second integral around c₃ vanishes for b → ∞, as often happens, we are led to the
immediate result that

L⁻¹ φ(s) = f(t) = Σ_{n=1}^{N} ρ_n(t)
Note that in the formal derivation of the inversion formula, we assumed that φ(s) (and
therefore e^{st} φ(s)) is analytic for Re(s) ≥ γ, and that lim_{s→∞} |φ(s)| = 0 in that half plane. In our
discussion of the residue form of the inversion, we work in the left half-plane. This is
because Laplace transforms have the property that they are analytic in a right half-plane,
and that in that plane, lim_{s→∞} |φ(s)| = 0.
Questions of the validity of the above procedures, alterations of contour, and applications
to problems are not dealt with here, as they are presented in detail in the references.
43
8.4 Tables of Transforms
F(s)                                    f(t)
1                                       unit impulse at t = 0, δ(t)
s                                       unit doublet impulse at t = 0, δ₂(t)
1/s                                     unit step at t = 0, u(t)
1/(s+a)                                 e^{−at}
1/[(s+a)(s+b)]                          (e^{−at} − e^{−bt})/(b−a)
(s+c)/[(s+a)(s+b)]                      [(c−a)e^{−at} − (c−b)e^{−bt}]/(b−a)
(s+c)/[s(s+a)(s+b)]                     c/(ab) + [(c−a)/(a(a−b))]e^{−at} + [(c−b)/(b(b−a))]e^{−bt}
1/[(s+a)(s+b)(s+c)]                     e^{−at}/[(c−a)(b−a)] + e^{−bt}/[(a−b)(c−b)] + e^{−ct}/[(a−c)(b−c)]
(s+d)/[(s+a)(s+b)(s+c)]                 (d−a)e^{−at}/[(c−a)(b−a)] + (d−b)e^{−bt}/[(a−b)(c−b)] + (d−c)e^{−ct}/[(a−c)(b−c)]
(s²+es+d)/[(s+a)(s+b)(s+c)]             (a²−ea+d)e^{−at}/[(c−a)(b−a)] + (b²−eb+d)e^{−bt}/[(a−b)(c−b)] + (c²−ec+d)e^{−ct}/[(a−c)(b−c)]
1/(s²+b²)                               sin(bt)/b
(s+d)/(s²+b²)                           [√(d²+b²)/b] sin(bt + φ),  φ = tan⁻¹(b/d)
s/(s²+b²)                               cos(bt)
44
F(s)                                    f(t)
1/[(s+a)²+b²]                           (e^{−at}/b) sin(bt)
(s+a)/[(s+a)²+b²]                       e^{−at} cos(bt)
(s+d)/[(s+a)²+b²]                       [e^{−at}√((d−a)²+b²)/b] sin(bt + φ),  φ = tan⁻¹[b/(d−a)]
1/{s[(s+a)²+b²]}                        1/b₀² + [e^{−at}/(b₀b)] sin(bt − φ),  φ = tan⁻¹[b/(−a)],  b₀² = a²+b²
(s+d)/{s[(s+a)²+b²]}                    d/b₀² + [e^{−at}√((d−a)²+b²)/(b₀b)] sin(bt + φ),  φ = tan⁻¹[b/(d−a)] − tan⁻¹[b/(−a)],  b₀² = a²+b²
1/(s²−b²)                               sinh(bt)/b
s/(s²−b²)                               cosh(bt)
1/sⁿ                                    t^{n−1}/(n−1)!,  ∀n ∈ ℕ
1/s^ν                                   t^{ν−1}/Γ(ν),  ∀ν > 0
1/[s²(s+a)]                             (e^{−at} + at − 1)/a²
45
F(s)                                    f(t)
(s+d)/(s+a)²                            [(d−a)t + 1] e^{−at}
1/[s(s+a)²]                             [1 − (at+1)e^{−at}]/a²
(s+d)/[s(s+a)²]                         d/a² + [((a−d)/a)t − d/a²] e^{−at}
(s²+bs+d)/[s(s+a)²]                     d/a² + [((ba−a²−d)/a)t + (a²−d)/a²] e^{−at}
1/[s²(s²+b²)]                           t/b² − sin(bt)/b³
1/[s³(s²+b²)]                           (cos(bt) − 1)/b⁴ + t²/(2b²)
1/[s²(s²−b²)]                           sinh(bt)/b³ − t/b²
1/[s³(s²−b²)]                           (cosh(bt) − 1)/b⁴ − t²/(2b²)
1/(s²+b²)²                              [sin(bt) − bt cos(bt)]/(2b³)
s/(s²+b²)²                              t sin(bt)/(2b)
s²/(s²+b²)²                             [sin(bt) + bt cos(bt)]/(2b)
(s²−b²)/(s²+b²)²                        t cos(bt)
1/[(s+a)²+b²]²                          (e^{−at}/(2b³))[sin(bt) − bt cos(bt)]
(s+a)/[(s+a)²+b²]²                      (te^{−at}/(2b)) sin(bt)
[(s+a)²−b²]/[(s+a)²+b²]²                te^{−at} cos(bt)
e^{−t₁s}/s²                             (t − t₁) u(t − t₁)
(t₁s+1)e^{−t₁s}/s²                      t u(t − t₁)
(t₁²s²+2t₁s+2)e^{−t₁s}/s³               t² u(t − t₁)
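As a spot check of the tables (an editorial addition), one entry can be verified by numerical quadrature: for F(s) = s/(s²+b²)² the table gives f(t) = t sin(bt)/(2b), and a truncated approximation of ∫₀^∞ e^{−st} f(t) dt should approach F(s). The parameter b and grid settings below are arbitrary choices.

```python
import math

b = 1.5  # arbitrary parameter for the check

def f(t):
    return t * math.sin(b * t) / (2.0 * b)

def F(s):
    return s / (s * s + b * b) ** 2

def laplace(g, s, T=80.0, n=400000):
    # trapezoidal approximation of ∫_0^T e^{-st} g(t) dt
    h = T / n
    acc = 0.5 * (g(0.0) + math.exp(-s * T) * g(T))
    for k in range(1, n):
        acc += math.exp(-s * k * h) * g(k * h)
    return acc * h

print(abs(laplace(f, 1.0) - F(1.0)) < 1e-4)  # True
```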
46
8.4.1 Analyticity
Let ω be a single valued complex function of z such that ω = f (z) = u(x, y) + iv(x, y)
where u and v are real functions. The definition of the limit of f (z) as z approaches z0 and
the theorems on limits of sums, products, and quotients correspond to those in the theory of
functions of a real variable. The neighborhoods involved are now two-dimensional, however,
and the condition

lim_{z→z₀} f(z) = u₀ + iv₀

is satisfied if and only if the two-dimensional limits of the real functions u(x, y) and v(x, y)
as x → x₀, y → y₀ have the values u₀ and v₀ respectively. Also, f(z) is continuous at
z = z₀ if and only if u(x, y) and v(x, y) are both continuous at (x₀, y₀).
The derivative of ω at a point z is:
dω/dz = f′(z) = lim_{Δz→0} Δω/Δz = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz,

provided that this limit exists (it must be independent of direction). Now suppose one
chooses a path on which Δy = 0 so that Δz = Δx. Then, since Δω = Δu + iΔv,

dω/dz = lim_{Δx→0} [Δu/Δx + i Δv/Δx] = ∂u/∂x + i ∂v/∂x

The case when Δx = 0 so that Δz = iΔy is similar:

dω/dz = lim_{Δy→0} [Δu/(iΔy) + i Δv/(iΔy)] = ∂v/∂y − i ∂u/∂y

Equating the real and imaginary parts of the above equations, since we insist that the
derivative must be independent of direction, we get

∂u/∂x = ∂v/∂y;    ∂v/∂x = −∂u/∂y.
These are known as the "Cauchy-Riemann conditions".
Now, the definition of analyticity is that "a function f(z) is said to be analytic at a
point z₀ if its derivative f′(z) exists at every point of some neighborhood of z₀". And, it is
necessary and sufficient that f(z) = u + iv satisfy the Cauchy-Riemann conditions in order
for the function to have a derivative at point z.
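The Cauchy-Riemann conditions are easy to confirm numerically for a concrete analytic function (our illustration): for ω = e^z, u = e^x cos y and v = e^x sin y, and central differences reproduce ∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y.

```python
import math

def u(x, y):
    return math.exp(x) * math.cos(y)

def v(x, y):
    return math.exp(x) * math.sin(y)

def d(f, x, y, wrt, h=1e-6):
    # central finite difference with respect to 'x' or 'y'
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

x0, y0 = 0.3, -0.7
print(abs(d(u, x0, y0, 'x') - d(v, x0, y0, 'y')) < 1e-6)  # True
print(abs(d(v, x0, y0, 'x') + d(u, x0, y0, 'y')) < 1e-6)  # True
```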
Theorem I

If f(z) is analytic within and on a simple closed curve c, then ∮_c f(z) dz = 0 (Cauchy's integral theorem).

Proof:
47
∮_c f(z) dz = ∮_c (u + iv)(dx + i dy) = ∮_c (u dx − v dy) + i ∮_c (v dx + u dy).

Applying Green's lemma to each integral,

∮_c f(z) dz = ∬_R (−∂v/∂x − ∂u/∂y) dx dy + i ∬_R (∂u/∂x − ∂v/∂y) dx dy.

Due to analyticity, the integrands on the right vanish identically, giving

∮_c f(z) dz = 0.    QED
c
Theorem II
If f(z) is analytic within and on the boundary c of a simply connected region R and if z₀
is any point in the interior of R, then

f(z₀) = (1/(2πi)) ∮_c f(z)/(z − z₀) dz

where the integration around c is in the positive sense.
Proof:
Let c₀ be a circle with center at z₀ whose radius ρ is sufficiently small such that c₀ lies
entirely within R. The function f(z) is analytic everywhere within R, hence f(z)/(z − z₀) is analytic
everywhere within R except at z = z₀. By the "principle of deformation of contours" (see
any complex variable book), we have that

∮_c f(z)/(z − z₀) dz = ∮_{c₀} f(z)/(z − z₀) dz.

Since f(z) is continuous at z₀, given any ε > 0 there is a δ > 0 such that |f(z) − f(z₀)| < ε whenever |z − z₀| < δ.
48
Choosing ρ to be less than δ, we write

| ∮_{c₀} [f(z) − f(z₀)]/(z − z₀) dz | < ∮_{c₀} (ε/ρ) |dz| = (ε/ρ) ∮_{c₀} |dz| = (ε/ρ) 2πρ = 2πε.

Since the integral on the left is independent of ε, yet cannot exceed 2πε, which can be made
arbitrarily small, it follows that the absolute value of the integral is zero. We therefore have
that

∮_{c₀} f(z)/(z − z₀) dz = f(z₀) 2πi + 0,    or

f(z₀) = (1/(2πi)) ∮_{c₀} f(z)/(z − z₀) dz.    QED
This is known as “Cauchy’s integral formula”.
Calculation of Residues
Laurent Series
Theorem I:
If f(z) is analytic throughout the closed region R, bounded by two concentric circles c₁
and c₂, then at any point in the annular ring bounded by the circles, f(z) can be represented
by a series of the following form, where a is the common center of the circles:

f(z) = Σ_{n=−∞}^{∞} a_n (z − a)ⁿ,    a_n = (1/(2πi)) ∮_c f(ω)/(ω − a)^{n+1} dω.

Each integral is taken in the counter-clockwise direction around any curve c lying within the
annulus and encircling its inner boundary (for proof see any complex variable book). This
series is called the Laurent series.
Residues
The coefficient, a₋₁, of the term (z − a)⁻¹ in the Laurent expansion of a function f(z),
is related to the integral of the function through the formula

a₋₁ = (1/(2πi)) ∮_c f(z) dz.

In particular, the coefficient of (z − a)⁻¹ in the expansion of f(z) about an "isolated singular
point" is called the "residue" of f(z) at that point.
If we consider a simple closed curve c containing in its interior a number of isolated
singularities of a function f(z), then it can be shown that ∮_c f(z) dz = 2πi [r₁ + r₂ + ⋯ + r_n],
where r₁, r₂, ⋯, r_n are the residues of f(z) at the singular points within c.
49
Determination of Residues
The determination of residues by the use of series expansions is often quite tedious. An
alternative procedure for a simple or first order pole at z = a can be obtained by writing

f(z) = a₋₁/(z − a) + a₀ + a₁(z − a) + ⋯

and multiplying by z − a to get

(z − a) f(z) = a₋₁ + a₀(z − a) + a₁(z − a)² + ⋯

and letting z → a to get

a₋₁ = lim_{z→a} (z − a) f(z).

For polynomials, the method of residues reduces to the Heaviside method for finding inverse
Laplace transforms.
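The limiting formula is easily exercised numerically (an editorial sketch; the function chosen is ours): f(z) = (3z + 1)/(z(z − 2)) has simple poles at z = 0 and z = 2 with residues −1/2 and 7/2.

```python
def f(z):
    return (3.0 * z + 1.0) / (z * (z - 2.0))

def residue(f, a, eps=1e-8):
    # a_{-1} = lim_{z -> a} (z - a) f(z), approximated just off the pole
    z = a + eps
    return (z - a) * f(z)

print(round(residue(f, 0.0), 6), round(residue(f, 2.0), 6))  # -0.5 3.5
```

The residues sum to 3, matching the large-z behavior f(z) ≈ 3/z, as the contour-integral formula above requires.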
where 0 < |z − z₀| < γ₀ and γ₀ is the radius of the neighborhood in which f(z) is analytic
except at z₀. This series of negative powers of (z − z₀) is called the "principal part" of f(z)
about the isolated singular point z₀. The point z₀ is an "essential singular point" of f(z) if
the principal part has an infinite number of non-vanishing terms. It is a "pole of order m"
if A₋ₘ ≠ 0 and A₋ₙ = 0 when n > m. It is called a "simple pole" when m = 1.
8.5 References
1. Churchill; “Operational Mathematics” - A complete discourse on operational methods,
well written and presented.
50
2. Wylie; “Advanced Engineering Mathematics” - A concise review of the highlights of
transformation calculus, both Fourier and Laplace transforms.
4. Erdelyi, A., et al; “Tables of Integral Transforms”, Vols. I and II, McGraw Hill, New
York (1954). Very extensive compilation of transforms of many kinds.
51
Chapter 9
Fourier Transforms
9.1 Definitions
9.1.1 Basic Definitions
In addition to the Laplace transform there exists another commonly-used set of transforms
called Fourier transforms. At least five different Fourier transforms may be distinguished.
Their definitions are as follows:

S_n[f] = √(2/π) ∫₀^π f(x) sin(nx) dx    (finite sine)
C_n[f] = √(2/π) ∫₀^π f(x) cos(nx) dx    (finite cosine)
S_r[f] = √(2/π) ∫₀^∞ f(x) sin(rx) dx    (infinite sine)
C_r[f] = √(2/π) ∫₀^∞ f(x) cos(rx) dx    (infinite cosine)
E_r[f] = (1/√(2π)) ∫_{−∞}^∞ f(x) e^{irx} dx    (exponential)
52
From the range of integration used in the definition of each transform, we see that the finite range
transforms apply to functions defined on a finite interval, the infinite range sine and cosine
transforms to functions defined on a semi-infinite interval, while the exponential transform
applies to functions defined on the infinite interval.
Integrating by parts,

C_n[f′] = f(x) cos(nx) |₀^π + n ∫₀^π f(x) sin(nx) dx
        = f(π) cos(nπ) − f(0⁺) + n S_n[f].

Since n is an integer, cos(nπ) = (−1)ⁿ, so that

C_n[f′] = (−1)ⁿ f(π) − f(0⁺) + n S_n[f].

Consider also,

S_n[f′] = ∫₀^π f′(x) sin(nx) dx
        = f(x) sin(nx) |₀^π − n ∫₀^π f(x) cos(nx) dx
        = −n C_n[f].
53
Now take for f, f = g′; and by iteration we get the following:

C_n[g″] = (−1)ⁿ g′(π) − g′(0⁺) + n S_n[g′] = (−1)ⁿ g′(π) − g′(0⁺) − n² C_n[g]

Similarly,

S_n[g″] = −n C_n[g′]
        = −n[(−1)ⁿ g(π) − g(0)] − n² S_n[g]
and again integrating by parts, and assuming lim_{x→∞} f(x) = 0, which is a consequence of
our condition of absolute integrability, we get

C_r[f′] = √(2/π) [−f(0)] + r √(2/π) ∫₀^∞ f(x) sin(rx) dx
        = −√(2/π) f(0) + r S_r[f].

Likewise,

S_r[f′] = √(2/π) ∫₀^∞ f′(x) sin(rx) dx
        = −r √(2/π) ∫₀^∞ f(x) cos(rx) dx = −r C_r[f].

Iterating once, we find

C_r[f″] = −√(2/π) f′(0) + r S_r[f′]
        = −√(2/π) f′(0) − r² C_r[f].

Similarly,

S_r[f″] = −r C_r[f′]
        = r √(2/π) f(0) − r² S_r[f].
54
Finally, consider

E_r[f′] = (1/√(2π)) ∫_{−∞}^∞ f′(x) e^{irx} dx
        = −ir (1/√(2π)) ∫_{−∞}^∞ f(x) e^{irx} dx = −ir E_r[f],

and, iterating,

E_r[f″] = −r² E_r[f].
In each case, we have assumed continuity for f ′ and f ′′ in order to perform the indicated
parts integrations. One may proceed with the iterations, obtaining relations involving trans-
forms of higher derivatives. Further properties are derivable with similar ease, the procedure
usually involving an integration by parts.
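The derivative relations can be spot-checked against a known pair (our addition): for f(x) = e^{−x} the standard results C_r[f] = √(2/π)/(1+r²) and S_r[f] = √(2/π) r/(1+r²) satisfy both C_r[f′] = −√(2/π) f(0) + rS_r[f] and S_r[f′] = −rC_r[f], since f′ = −f.

```python
import math

norm = math.sqrt(2.0 / math.pi)

for r in (0.5, 1.0, 3.0):
    C = norm / (1.0 + r * r)      # C_r[e^{-x}]
    S = norm * r / (1.0 + r * r)  # S_r[e^{-x}]
    # f' = -e^{-x}, so C_r[f'] = -C and S_r[f'] = -S
    assert abs(-C - (-norm * 1.0 + r * S)) < 1e-12  # C_r[f'] = -norm f(0) + r S_r[f]
    assert abs(-S - (-r * C)) < 1e-12               # S_r[f'] = -r C_r[f]
print("derivative relations verified")
```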
55
9.2.3 Transforms of Functions of Two Variables
The transforms may also be used with functions of two or more variables; for example,
if f is a function of x and y, defined for 0 ≤ x ≤ π, 0 ≤ y ≤ π, then
S_m[f] = ∫₀^π f(x, y) sin(mx) dx

S_n[f] = ∫₀^π f(x, y) sin(ny) dy

S_{m,n}[f] = ∫₀^π S_m[f] sin(ny) dy = ∫₀^π S_n[f] sin(mx) dx
           = ∫₀^π ∫₀^π f(x, y) sin(mx) sin(ny) dx dy.
Furthermore,
S_{m,n}[∂²f/∂x²] = m { S_n[f(0, y)] − (−1)^m S_n[f(π, y)] } − m² S_{m,n}[f]

so that if f vanishes on the boundary of the square, then

S_{m,n}[∂²f/∂x²] = −m² S_{m,n}[f]

S_{m,n}[∂²f/∂x² + ∂²f/∂y²] = −(m² + n²) S_{m,n}[f].
Similar formulae may be derived for Cm,n and extensions can be worked out in analogy
to the single-variable properties. These transforms of more than one variable amount to
transforms of transforms, obtained by taking the transform of the function with respect to
a single variable, and subsequently taking the transform of this transformed function with
respect to another variable. In fact, if the boundary conditions in the various dimensions
are not all of the same type, more than one type of transformation may be used. One
fairly common combination is the Fourier plus the Laplace transformation.
56
transform with respect to each variable, defining the three-times transformed function.

E_k[f] = g(k₁, k₂, k₃) = (1/(2π)^{3/2}) ∫_{−∞}^∞ ∫_{−∞}^∞ ∫_{−∞}^∞ e^{i(k₁x₁+k₂x₂+k₃x₃)} f(x₁, x₂, x₃) dx₁ dx₂ dx₃

E_k[f] = g(k) = (1/(2π)^{3/2}) ∫ e^{ik·x} f(x) d³x

where k has the components k₁, k₂, k₃, x has the components x₁, x₂, x₃, and d³x = dx₁dx₂dx₃,
and the integration is to be taken over the full range, −∞ to ∞, of each variable. The inverse
transformation gives back f(x),

f(x) = (1/(2π)^{3/2}) ∫ e^{−ik·x} g(k) d³k.
1. E_k[∇f] = −ik g(k)

2. E_k[∇ · F] = −ik · G(k)

3. E_k[∇ × F] = −ik × G(k)

4. E_k[∇²f] = −k² g(k)

From a glance at formulae 1 to 4, we see that under this transformation, the vector operator
∇ operating on a function transforms into the vector −ik times the transformed function.
1. Definitions

a. S_n[y] = √(2/π) ∫₀^π y(x) sin(nx) dx    n = 1, 2, …

b. C_n[y] = √(2/π) ∫₀^π y(x) cos(nx) dx    n = 0, 1, …
57
2. Inversions (0 ≤ x ≤ π)

a. y(x) = √(2/π) Σ_{n=1}^∞ S_n[y] sin(nx)

b. y(x) = √(2/π) Σ_{n=1}^∞ C_n[y] cos(nx) + (1/π) C₀[y]
3. Transforms of Derivatives

a. S_n[y′] = −n C_n[y]    n = 1, 2, …

b. C_n[y′] = n S_n[y] − y(0) + (−1)ⁿ y(π)    n = 0, 1, 2, …

c. S_n[y″] = −n² S_n[y] + n[y(0) − (−1)ⁿ y(π)]    n = 1, 2, …

d. C_n[y″] = −n² C_n[y] − y′(0) + (−1)ⁿ y′(π)    n = 0, 1, 2, …

(Note that for b and c, the functions must be known on the boundaries and for d, the
derivative must be known on the boundaries.)
4. Transforms of Integrals

a. S_n[∫₀ˣ y(ξ) dξ] = (1/n) C_n[y] − ((−1)ⁿ/n) C₀[y]    n = 1, 2, …

b. C_n[∫₀ˣ y(ξ) dξ] = −(1/n) S_n[y]

   C₀[∫₀ˣ y(ξ) dξ] = π C₀[y] − C₀[xy]
5. Convolution Properties
58
6. Derivatives of Transforms

a. (d/dn) S_n[y] = C_n[xy]

b. (d/dn) C_n[y] = −S_n[xy]

(Here the differentiated transforms must be in a form valid for n as a continuous variable
instead of only for integral n.)
1. Definitions

a. S_r[y] = √(2/π) ∫₀^∞ y(x) sin(rx) dx    r ≥ 0

b. C_r[y] = √(2/π) ∫₀^∞ y(x) cos(rx) dx    r ≥ 0

c. E_r[y] = (1/√(2π)) ∫_{−∞}^∞ y(x) e^{irx} dx    −∞ < r < ∞
2. Inversions

a. y(x) = √(2/π) ∫₀^∞ S_r[y] sin(rx) dr    x > 0
       = S_x[S_r[y]]

b. y(x) = √(2/π) ∫₀^∞ C_r[y] cos(rx) dr    x > 0
       = C_x[C_r[y]]

c. y(x) = (1/√(2π)) ∫_{−∞}^∞ E_r[y] e^{−irx} dr
       = E_{−x}[E_r[y]] = E_x[E_{−r}[y]]
59
3. Transforms of Derivatives

a. S_r[y′] = −r C_r[y]
b. C_r[y′] = r S_r[y] − y(0)
c. E_r[y′] = −ir E_r[y]
d. S_r[y″] = −r² S_r[y] + r y(0)
e. C_r[y″] = −r² C_r[y] − y′(0)
f. E_r[y″] = −r² E_r[y]
g. E_r[dⁿy/dxⁿ] = (−ir)ⁿ E_r[y]
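Relation c (and its iterate f) can be confirmed by direct numerical integration for a Gaussian, for which truncation of the infinite range is harmless (an editorial check; the grid parameters are arbitrary):

```python
import cmath, math

def E(g, r, A=10.0, n=20000):
    # trapezoidal approximation of (1/sqrt(2*pi)) ∫_{-A}^{A} g(x) e^{irx} dx
    h = 2.0 * A / n
    acc = 0.5 * (g(-A) * cmath.exp(-1j * r * A) + g(A) * cmath.exp(1j * r * A))
    for k in range(1, n):
        x = -A + k * h
        acc += g(x) * cmath.exp(1j * r * x)
    return acc * h / math.sqrt(2.0 * math.pi)

y = lambda x: math.exp(-x * x / 2.0)
yprime = lambda x: -x * math.exp(-x * x / 2.0)
r = 1.3
print(abs(E(yprime, r) - (-1j * r) * E(y, r)) < 1e-6)  # True:  E_r[y'] = -ir E_r[y]
print(abs(E(y, r) - math.exp(-r * r / 2.0)) < 1e-6)    # True:  a Gaussian transforms to a Gaussian
```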
4. Transforms of Integrals

a. S_r[∫₀ˣ y(ξ) dξ] = (1/r) C_r[y]

b. C_r[∫₀ˣ y(ξ) dξ] = −(1/r) S_r[y]

(In a and b, ∫₀ˣ y(ξ) dξ must be piecewise continuous and approach zero as x → ∞.)

c. E_r[∫_cˣ y(ξ) dξ] = (i/r) E_r[y]

(Where c is any lower limit and ∫_{−∞}ˣ y(ξ) dξ must be piecewise continuous and approach
zero as x → ∞.)

d. S_r[∫₀ˣ ∫₀^λ y(ξ) dξ dλ] = −(1/r²) S_r[y]

e. C_r[∫₀ˣ ∫₀^λ y(ξ) dξ dλ] = −(1/r²) C_r[y]

f. E_r[∫₀ˣ ∫₀^λ y(ξ) dξ dλ] = −(1/r²) E_r[y]
60
(where Laplace transform variable is taken as ir.)
6. Convolution Properties

a. 2 S_r[f] S_r[g] = C_r[∫₀^∞ g(ξ){f(x + ξ) − f₁(x − ξ)} dξ]

b. 2 S_r[f] C_r[g] = S_r[∫₀^∞ g(ξ){f(x + ξ) + f₁(x − ξ)} dξ]
                   = C_r[∫₀^∞ f(ξ){g₂(x − ξ) − g(x + ξ)} dξ]

c. 2 C_r[f] C_r[g] = C_r[∫₀^∞ g(ξ){f₂(x − ξ) + f(x + ξ)} dξ]

7. Derivatives of Transforms

a. (d/dr) S_r[y] = C_r[xy]

b. (d/dr) C_r[y] = −S_r[xy]

c. (d/dr) E_r[y] = i E_r[xy]
61
known, while f ′ (π) and f ′ (0) are not, then clearly Sn is the best option because in doing so,
we introduce f (π) and f (0) and need not know the value of f ′ at any point. We would not
use Cn because f ′ (0) and f ′ (π) are unknown and could not be solved for until much later in
the work. On the other hand, if f ′ (0) and f ′ (π) are known, then one uses Cn for the same
reason. The situation is similar with respect to the infinite range transforms; use Fs [r] to
reduce d²f/dx² when f(0) is known and use F_c[r] when f′(0) is known. No such question arises
with respect to Fe [r].
We have noted at the start that the functions to which the Fourier transforms are appli-
cable are usually required to be absolutely integrable. This kind of knowledge of a function
is usually evident from the physical meaning of the function, before the function itself is
known. The Laplace transform, on the other hand, merely requires that the function be of
exponential order, i.e., |f(x)| < Me^{αx} for some M, α ∈ ℝ.
Example
Consider the following steady-state heat conduction problem in a medium with no internal
heat generation. Suppose we have a two-dimensional slab which is semi-infinite along the y-
axis (0 ≤ y ≤ ∞) and finite along the x-axis (0 ≤ x ≤ π). The left face at x = 0 is insulated
for y > a but has a heat flux q for 0 < y < a. The right face at x = π is insulated for all
y. Furthermore, suppose that the face at y = 0 is held at temperature T0 , 0 < x < π. The
position-dependent temperature T (x, y) is modeled using Laplace’s equation with boundary
conditions.
∇²T = 0

−k (∂T/∂x)(0, y) = { q,  0 < y < a;    0,  y > a }

(∂T/∂x)(π, y) = 0

T(x, 0) = T₀
We propose to do the problem by the method of Fourier transforms, but intuitively we know
limy→∞ T ≡ T̄ 6= 0 and that therefore, the transform of T does not exist. However, the
function T − T̄ ≡ Θ is such that limy→∞ Θ = limy→∞ T − T̄ = T̄ − T̄ = 0 and the transform
may (in fact, does) exist. Let us, therefore, substitute T = Θ + T̄ into the above problem to
obtain the following:
∇²Θ = 0

−k (∂Θ/∂x)(0, y) = { q,  0 < y < a;    0,  y > a }

(∂Θ/∂x)(π, y) = 0

Θ(x, 0) + T̄ = T₀  ⇒  Θ(x, 0) = T₀ − T̄ ≡ Θ₀
62
The structure of the problem is not essentially changed, except that now Θ₀ is not known
since T̄ is not known.
We must reduce the operators ∂²Θ/∂x² and ∂²Θ/∂y². In x, (∂Θ/∂x)(0, y) and (∂Θ/∂x)(π, y) are known.
Thus, a finite range Fourier cosine transform is a good option here. In y, we know that
Θ(x, 0) = Θ₀ and that lim_{y→∞} Θ = 0. Therefore, an infinite range Fourier sine transform is
a good option. We shall denote the x-transformed functions by the superscript fn and the
y-transformed functions by the superscript F. Recall that

(∂²Θ/∂x²)^f = −n² Θ^f + (−1)ⁿ (∂Θ/∂x)(π, y) − (∂Θ/∂x)(0, y)

and

(∂²Θ/∂y²)^F = −r² Θ^F + r Θ(x, 0)
It is irrelevant in which order the transformations are applied or inverted, although one order
may prove nicer than another. Let us transform first with respect to x.
0 = −n² Θ^{fn} + (−1)ⁿ (∂Θ/∂x)(π, y) − (∂Θ/∂x)(0, y) + d²Θ^{fn}/dy²

(∂Θ/∂x)(0, y) = { −q/k,  0 < y < a;    0,  y > a }

(∂Θ/∂x)(π, y) = 0

Θ^{fn}(0) = { πΘ₀,  n = 0;    0,  n = 1, 2, … }

Transforming next with respect to y,

0 = −n² Θ^{fnF} + (−1)ⁿ (∂Θ/∂x)^F(π) − (∂Θ/∂x)^F(0) − r² Θ^{fnF} + r Θ^{fn}(0)

(∂Θ/∂x)^F(0) = −(q/k)(1 − cos(ar))/r

(∂Θ/∂x)^F(π) = 0

Θ^{fn}(0) = 0    n = 1, 2, 3, …
63
Making substitutions yields the single algebraic equation:

0 = −(n² + r²) Θ^{fnF} + (q/k)(1 − cos(ar))/r,    n = 1, 2, …

0 = −r² Θ^{foF} + (q/k)(1 − cos(ar))/r + πΘ₀ r,    n = 0

Solving for Θ^{fnF}:

Θ^{foF} = (q/k)(1 − cos(ar))/r³ + πΘ₀/r,    n = 0

Θ^{fnF} = (q/k)(1 − cos(ar))/(r(r² + n²)),    n = 1, 2, …
We propose to invert first with respect to r, but we would run into difficulties for n = 0. Let
us, therefore, integrate the x-transformed equations directly for n = 0 to get Θ^{fo}. We have

d²Θ^{fo}/dy² + q/k = 0,    0 < y < a

d²Θ^{fo}/dy² = 0,    y > a

Θ^{fo}(x, 0) = πΘ₀

lim_{y→∞} Θ^{fo} = 0.
Integrating,

Θ^{fo} = −(q/2k) y² + C₁y + C₂,    0 < y < a

Θ^{fo} = C₃y + C₄,    y > a

Now lim_{y→∞} Θ^{fo} = 0 ⇒ C₃ = C₄ = 0.
Also,

Θ^{fo}(x, 0) = πΘ₀ = C₂.

It is necessary to cook up another condition to get C₁. In a problem of this type, we must
require ∂Θ/∂y and Θ to be continuous, therefore dΘ^{fo}/dy and Θ^{fo} are continuous. Applying these
conditions at y = a,

(dΘ^{fo}/dy)(x, a⁻) = (dΘ^{fo}/dy)(x, a⁺)

0 = −qa/k + C₁;    C₁ = qa/k

Θ^{fo}(a⁻) = Θ^{fo}(a⁺)

0 = −qa²/2k + qa²/k + πΘ₀.
64
Somewhat surprisingly, applying this last condition yields

Θ₀ = −qa²/(2πk).

Thus

Θ^{fo} = { −(q/2k)(y − a)²,  y ≤ a;    0,  y ≥ a }

Θ^{fnF} = (q/k)(1 − cos(ar))/(r(n² + r²)),    n = 1, 2, …

The inverse of 1/(r(n² + r²)) is (1/n²)(1 − e^{−ny}) (see Erdélyi, pg. 65, formula 20).
If F⁻¹[g^f] = G(y), then G(−y) = −G(y) (the inversion of a sine transform is odd). Therefore,

F⁻¹[(1 − cos(ar))/(r(r² + n²))]
  = (1/n²)(1 − e^{−ny}) − (1/(2n²))[(1 − e^{−n(y+a)}) + (1 − e^{−n(y−a)})],    y > a
  = (1/n²)(1 − e^{−ny}) − (1/(2n²))[(1 − e^{−n(y+a)}) − (1 − e^{−n(a−y)})],    y < a

  = { (e^{−ny}/n²)(cosh(na) − 1),    y > a;
      (1/n²)(1 − e^{−ny} − e^{−na} sinh(ny)),    y < a }
65
Now, lacking a known inversion to invert with respect to n, we use the series form

Θ = (1/π) Θ^{fo} + (2/π) Σ_{n=1}^∞ Θ^{fn} cos(nx)

  = { −(q/(2πk))(y − a)² + (2q/(πk)) Σ_{n=1}^∞ [(1 − e^{−ny} − e^{−na} sinh(ny))/n²] cos(nx),    y < a;

      (2q/(πk)) Σ_{n=1}^∞ [e^{−ny}(cosh(na) − 1)/n²] cos(nx),    y > a }

T = Θ + T̄

T₀ = Θ₀ + T̄ = −qa²/(2πk) + T̄

T̄ = T₀ + qa²/(2πk)

T = Θ + T₀ + qa²/(2πk)
while the inversion of the sine transform is the odd extension.
66
9.5.2 Infinite Range
Inversions of the infinite range transforms follow from the Fourier integral theorem in
various forms. The inversion of the cosine transform, for example, arises from the formula
f(x) = (2/π) ∫₀^∞ cos(rx) ∫₀^∞ f(y) cos(ry) dy dr
     = ∫₀^∞ √(2/π) cos(rx) [√(2/π) ∫₀^∞ f(y) cos(ry) dy] dr.

The interior integral is just what we above defined as C_r[f]. Therefore,

f(x) = C_r⁻¹[C_r] = √(2/π) ∫₀^∞ C_r[f] cos(rx) dr.
π 0
The other inversions follow immediately in the same way from other forms of the Fourier
integral theorem. It is to our great advantage to note that, with the normalization factor
√(2/π) or 1/√(2π) inserted as above, the inversion integral is just the transform of the transform,
i.e.,

f(x) = C_r⁻¹[C_r] = C_x[C_r[f]]

Knowing this fact doubles the utility of a table of transforms since it can be used backwards
as well as forwards. That is, given a transform one wishes to invert, one may first look for
it among tabulated transformed functions; not finding an inversion there, one may equally
well look for his transform among the tabulated functions. If it is found there, the inversion
of the given transform is the transform of the tabulated function.
There are tables of both the finite range transforms and the infinite range transforms,
useful for the purpose of inverting these transforms. However, this is just one way of obtaining
an inversion (the easiest, of course). In the case of the finite range transforms, where the
inversion is a Fourier series, and one does not know how to sum it, or get the inversion in
closed form, the truncated series is a useful approximation to the inverse.
In the case of the infinite range transforms, the inversion integral is subject to evaluation
by the methods of complex integration and residue theory.
67
then (under the proper conditions on F ), the function can be recovered from its transform
through the inversion formula:
F(x) = (1/√(2π)) ∫_{−∞}^∞ f(r) e^{−irx} dr.

Note that in the inversion formula, r is a real variable. Let us change the variable r to a
new (complex) variable, ir = s, and let φ(s) = f(r). Then

φ(s) = (1/√(2π)) ∫_{−∞}^∞ F(x) e^{sx} dx,

and the inversion formula becomes

F(x) = −(i/√(2π)) ∫_c φ(s) e^{−sx} ds,

where c is the path of integration in the complex s-plane (along the iy axis at x = 0) and
the Cauchy principal value is to be taken, such that

F(x) = −(i/√(2π)) lim_{β→∞} ∫_{−iβ}^{iβ} φ(s) e^{−sx} ds.
Suppose φ(s) is analytic in the left half plane ℜ(s) < 0, except at a finite number of
isolated singular points s_n. Let us close the path c_β, −β ≤ ℑ(s) ≤ β, with a semicircle in
the left half-plane, choosing β so large as to include all finite singular points in the plane
ℜ(s) < 0. Therefore, c_β is a vertical line along the imaginary axis, extending from −iβ to
iβ, and c′_β is a semicircle with radius β which intersects the real axis at x = −β and the
imaginary axis at iy = iβ and iy = −iβ. Every singular point is contained within this half
circle. By Cauchy's residue theorem, we then have that

∫_{c_β} e^{−sx} φ(s) ds + ∫_{c′_β} e^{−sx} φ(s) ds = 2πi Σ_{j=1}^k ρ_j

where ρ_j denotes the residue of e^{−sx} φ(s) at the singular points s_j, and we have assumed that
there are k such singular points.
Since we have hypothesized that β be so large that c′_β include all finite singular points in
the left half plane, in the limit as β → ∞, the right side remains constant, and we have the
following:

lim_{β→∞} ∫_{c_β} e^{−sx} φ(s) ds + lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds = 2πi Σ_{j=1}^k ρ_j

or

lim_{β→∞} ∫_{−iβ}^{iβ} e^{−sx} φ(s) ds = 2πi Σ_{j=1}^k ρ_j − lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds.

Thus,

F(x) = −(i/√(2π)) lim_{β→∞} ∫_{−iβ}^{iβ} e^{−sx} φ(s) ds
     = √(2π) Σ_{j=1}^k ρ_j + (i/√(2π)) lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds.
Many times, the limit on the right hand side is zero, or is easy to evaluate, so that the above
formula is a useful tool for inverting the transform.
Reasoning in similar fashion, but completing the path with a semi-circle in the right half
plane ℜ(s) > 0, we obtain a similar formula:

F(x) = −√(2π) Σ_{j=1}^{k′} ρ′_j − (i/√(2π)) lim_{β→∞} ∫_{c″_β} e^{−sx} φ(s) ds

where the curve c″_β is another semicircular path similar to c′_β except it extends out into the
positive real axis (rather than the negative real axis). Hence, this curve intersects the real
axis at x = β. Another way to visualize the curve is to realize that combining semi-circle c′_β
with semi-circle c″_β results in a full circle with radius β centered at the origin of the complex
plane. For x < 0, one may find that the limit

lim_{β→∞} ∫_{c′_β} e^{−sx} φ(s) ds = 0

so that the first formula becomes F(x) = √(2π) Σ_{j=1}^k ρ_j. (Recall that ρ_j are residues at
singular points in the left half-plane ℜ(s) < 0.) Again, one may find that for x > 0, the limit

lim_{β→∞} ∫_{c″_β} e^{−sx} φ(s) ds = 0
69
so that the second formula becomes

F(x) = −√(2π) Σ_{j=1}^{k′} ρ′_j.

At x = 0,

F(0) = (1/2)[F(0⁺) + F(0⁻)].
y(x)                        S_n(y)
1                           (a/nπ)[1 − (−1)ⁿ]
x                           (a²/nπ)(−1)ⁿ⁺¹
x²                          (a³/nπ)(−1)ⁿ⁺¹ − 2(a/nπ)³[1 − (−1)ⁿ]
e^{cx}                      [nπa/(n²π² + c²a²)][1 − (−1)ⁿ e^{ca}]
sin(ωx)                     a/2    (n = ωa/π)
                            [nπa/(n²π² − ω²a²)](−1)ⁿ⁺¹ sin(ωa)    (n ≠ ωa/π)
cos(ωx)                     0    (n = ωa/π)
                            [nπa/(n²π² − ω²a²)][1 − (−1)ⁿ cos(ωa)]    (n ≠ ωa/π)
sinh(cx)                    [nπa/(n²π² + c²a²)](−1)ⁿ⁺¹ sinh(ca)
cosh(cx)                    [nπa/(n²π² + c²a²)][1 − (−1)ⁿ cosh(ca)]
a − x                       a²/nπ
x(a − x)                    2(a/nπ)³[1 − (−1)ⁿ]
sin(ω(a − x))/sin(ωa)       nπa/(n²π² − ω²a²)    (n ≠ ωa/π)
sinh(c(a − x))/sinh(ca)     nπa/(n²π² + c²a²)
70
y(x)                        C_n(y)
1                           a    (n = 0)
                            0    (n = 1, 2, …)
x                           a²/2    (n = 0)
                            (a²/n²π²)[(−1)ⁿ − 1]    (n = 1, 2, …)
x²                          a³/3    (n = 0)
                            2a³(−1)ⁿ/(n²π²)    (n = 1, 2, …)
e^{cx}                      [a²c/(n²π² + c²a²)][(−1)ⁿ e^{ca} − 1]
sin(ωx)                     0    (n = ωa/π)
                            [a²ω/(n²π² − ω²a²)][(−1)ⁿ cos(ωa) − 1]    (n ≠ ωa/π)
cos(ωx)                     a/2    (n = ωa/π)
                            [a²ω/(n²π² − ω²a²)](−1)ⁿ⁺¹ sin(ωa)    (n ≠ ωa/π)
sinh(cx)                    [a²c/(n²π² + c²a²)][(−1)ⁿ cosh(ca) − 1]
cosh(cx)                    [a²c/(n²π² + c²a²)](−1)ⁿ sinh(ca)
(x − a)²                    a³/3    (n = 0)
                            2a³/(n²π²)    (n = 1, 2, …)
cos(ω(a − x))               [a²ω/(n²π² − ω²a²)] sin(ωa)    (n ≠ ωa/π)
cosh(c(a − x))              [a²c/(n²π² + c²a²)] sinh(ca)
9.7 References
1. Sneddon, Ian N. “Fourier Transforms”, McGraw Hill, 1951.
2. Churchill, Ruel V., “Operational Mathematics”, McGraw Hill, 1958.
3. Erdelyi, Volume I, “Bateman Mathematical Tables”.
71
Chapter 10
If

f(x) = ∫_{a(x)}^{b(x)} g(x, y) dy,

then

(d/dx) f(x) = g[x, b(x)] b′(x) − g[x, a(x)] a′(x) + ∫_{a(x)}^{b(x)} [∂g(x, y)/∂x] dy.
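The differentiation formula lends itself to a direct numerical check (our example): take a(x) = 0, b(x) = x², g(x, y) = sin(xy), and compare the right-hand side against a central-difference derivative of f.

```python
import math

def quad(g, lo, hi, n=4000):
    # trapezoidal approximation of ∫_lo^hi g(y) dy
    h = (hi - lo) / n
    acc = 0.5 * (g(lo) + g(hi))
    for k in range(1, n):
        acc += g(lo + k * h)
    return acc * h

def f(x):
    return quad(lambda yy: math.sin(x * yy), 0.0, x * x)

x = 1.2
# g[x, b(x)] b'(x) - g[x, a(x)] a'(x) + ∫ ∂g/∂x dy
rhs = math.sin(x ** 3) * 2.0 * x + quad(lambda yy: yy * math.cos(x * yy), 0.0, x * x)
h = 1e-5
lhs = (f(x + h) - f(x - h)) / (2.0 * h)
print(abs(lhs - rhs) < 1e-5)  # True
```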
f(x) = dy/dx + a(x) y

y(x₀) = y₀.
72
The procedure is to find an integrating factor. Define h such that

dh/dx = a(x).

Thus

h = ∫_c^x a(x′) dx′.

Then

d(e^h y)/dx = e^h dy/dx + e^h (dh/dx) y = e^h dy/dx + e^h a(x) y = e^h f(x).
Then

∫_{y₀e^{h₀}}^{ye^h} d(e^{h′} y) = ∫_{x₀}^x e^{h′} f(x′) dx′

where

h₀ = ∫_c^{x₀} a(x′) dx′

h′ = ∫_c^{x′} a(x″) dx″.

Thus,

y e^h − y₀ e^{h₀} = ∫_{x₀}^x e^{h′} f(x′) dx′.

Hence

y = y₀ e^{h₀−h} + ∫_{x₀}^x e^{h′−h} f(x′) dx′.

Recalling that

h = ∫_c^x a(x′) dx′,

h′ − h = ∫_c^{x′} a(x″) dx″ − ∫_c^x a(x″) dx″ = −∫_{x′}^x a(x″) dx″.
73
Finally,

y = y₀ e^{h₀−h} + ∫_{x₀}^x f(x′) e^{−∫_{x′}^x a(x″) dx″} dx′.
x0
(Note that the constant c appearing as a lower limit in the integral of the integrating factor
is not a boundary condition: it disappears in the final solution.)
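Applied to a concrete case (an editorial check), dy/dx + y = x with y(0) = 1 has exact solution y = x − 1 + 2e^{−x}; the formula above, with a ≡ 1 and f(x) = x, reproduces it to quadrature accuracy.

```python
import math

def quad(g, lo, hi, n=4000):
    # trapezoidal approximation of ∫_lo^hi g(t) dt
    h = (hi - lo) / n
    acc = 0.5 * (g(lo) + g(hi))
    for k in range(1, n):
        acc += g(lo + k * h)
    return acc * h

def y_formula(x, x0=0.0, y0=1.0):
    # y = y0 e^{-(x - x0)} + ∫_{x0}^{x} x' e^{-(x - x')} dx'   (a ≡ 1, f(x') = x')
    return y0 * math.exp(-(x - x0)) + quad(lambda xp: xp * math.exp(-(x - xp)), x0, x)

x = 2.0
exact = x - 1.0 + 2.0 * math.exp(-x)
print(abs(y_formula(x) - exact) < 1e-6)  # True
```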
1. a · b × c = b · c × a = c · a × b
2. a × (b × c) = b(a · c) − c(a · b)
3. (a × b) · (c × d) = a · b × (c × d)
                     = a · [c(b · d) − d(b · c)]
                     = (a · c)(b · d) − (a · d)(b · c)
4. (a × b) × (c × d) = (a × b · d)c − (a × b · c)d
5. ∇(φ + ψ) = ∇φ + ∇ψ
6. ∇(φψ) = φ∇ψ + ψ∇φ
7. ∇(a · b) = (a · ∇)b + (b · ∇)a + a × (∇ × b) + b × (∇ × a)
8. ∇ · (a + b) = ∇ · a + ∇ · b
9. ∇ × (a + b) = ∇ × a + ∇ × b
10. ∇ · (φa) = a · ∇φ + φ∇ · a
11. ∇ × (φa) = (∇φ) × a + φ∇ × a
12. ∇ · (a × b) = b · ∇ × a − a · ∇ × b
13. ∇ × (a × b) = a(∇ · b) − b(∇ · a) + (b · ∇)a − (a · ∇)b
14. ∇ × ∇ × a = ∇(∇ · a) − ∇2 a
15. ∇ × ∇φ = 0
16. ∇·∇×a=0
17. ∇ · r = 3; ∇×r=0
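Identities like 2 are quickly confirmed with sample vectors (an editorial check):

```python
def cross(p, q):
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

a, b, c = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0), (4.0, -2.0, 1.0)
# identity 2:  a × (b × c) = b(a·c) − c(a·b)
lhs = cross(a, cross(b, c))
rhs = tuple(bi * dot(a, c) - ci * dot(a, b) for bi, ci in zip(b, c))
print(all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs)))  # True
```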
74
If V represents a volume bounded by a closed surface S with unit vector n̂ normal to S and
directed positive outwards, then,
18. ∭_V (∇φ) dV = ∬_S (φ n̂) dS

19. ∭_V (∇ · a) dV = ∬_S (a · n̂) dS    (Gauss' Theorem)

20. ∭_V (f ∇ · g) dV = ∬_S (f g · n̂) dS − ∭_V (g · ∇f) dV
Cartesian Coordinates
$$\nabla = \hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}$$
$$\nabla^2\phi = \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + \frac{\partial^2\phi}{\partial z^2}$$
$$d^3r = dx\,dy\,dz$$
Cylindrical Coordinates
$$\nabla = \hat{\rho}_0\,\frac{\partial}{\partial \rho} + \hat{\theta}_0\,\frac{1}{\rho}\frac{\partial}{\partial \theta} + \hat{k}\,\frac{\partial}{\partial z}$$
$$\nabla^2\varphi = \frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\,\frac{\partial\varphi}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2\varphi}{\partial \theta^2} + \frac{\partial^2\varphi}{\partial z^2}$$
$$d^3r = \rho\,d\rho\,d\theta\,dz$$
Spherical Coordinates
$$\nabla = \hat{r}_0\,\frac{\partial}{\partial r} + \hat{\theta}_0\,\frac{1}{r}\frac{\partial}{\partial \theta} + \hat{\varphi}_0\,\frac{1}{r\sin\theta}\frac{\partial}{\partial \varphi}$$
$$\nabla^2\psi = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial\psi}{\partial \theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial \varphi^2}$$
$$d^3r = r^2\sin\theta\,dr\,d\theta\,d\varphi$$
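For a function of $r$ alone, the spherical Laplacian reduces to its radial term, $\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d\psi}{dr}\right) = \psi'' + \frac{2}{r}\psi'$ (this expansion is our own finishing step, not from the text). A quick finite-difference check against the known cases $\nabla^2(1/r) = 0$ (for $r \neq 0$) and $\nabla^2(r^2) = 6$:

```python
def lap_radial(psi, r, h=1e-4):
    # Radial part of the spherical Laplacian, psi'' + (2/r) psi',
    # evaluated by central differences.
    d1 = (psi(r + h) - psi(r - h)) / (2 * h)
    d2 = (psi(r + h) - 2 * psi(r) + psi(r - h)) / (h * h)
    return d2 + 2.0 * d1 / r

print(lap_radial(lambda r: 1.0 / r, 2.0))  # ~0: 1/r is harmonic away from r = 0
print(lap_radial(lambda r: r * r, 2.0))    # ~6
```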
Consider the linear equations
$$u = a_x x + a_y y + a_z z$$
$$v = b_x x + b_y y + b_z z$$
$$w = c_x x + c_y y + c_z z.$$
By defining
$$u = u_1 \quad v = u_2 \quad w = u_3;$$
$$x = x_1 \quad y = x_2 \quad z = x_3;$$
$$a_x = a_{11} \quad a_y = a_{12} \quad a_z = a_{13};$$
$$b_x = a_{21} \quad b_y = a_{22} \quad b_z = a_{23};$$
$$c_x = a_{31} \quad c_y = a_{32} \quad c_z = a_{33};$$
or
$$u_1 = \sum_{\alpha=1}^{3} a_{1\alpha} x_\alpha$$
$$u_2 = \sum_{\alpha=1}^{3} a_{2\alpha} x_\alpha$$
$$u_3 = \sum_{\alpha=1}^{3} a_{3\alpha} x_\alpha$$
or
$$u_i = \sum_{\alpha=1}^{3} a_{i\alpha} x_\alpha \qquad (i = 1, 2, 3)$$
To this point we have effected considerable simplification of the original equations. By the
introduction of the "summation convention", we can go even further. Notice that there are
two kinds of indices on the right: i, which occurs once in the product, and α, which occurs
twice. Index i is called a "single-occurring" index; α is called a "double-occurring" index.
The convention to be introduced is:
1. Doubly-occurring indices are to be given all possible values and the results summed
within the equation.
2. Single-occurring indices are to be assigned one value in an equation, but as many
equations are to be generated as there are available values for the index.
Thus from 1, we may drop the summation symbol, and by 2, we may drop the parenthesis
denoting values for i. With the summation convention in force, we have ui = aiα xα which
unambiguously represents the original equations, if α and i have the same range, which we
shall assume.
A nice but unnecessary finishing touch can be put on the convention which seems to make
things clearer in the work: for all singly occurring indices use lower-case Roman letters; for
all double-occurring indices, use lower-case Greek letters.
We remark that it is possible to have any number of indices on a quantity.
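The convention is easy to mimic in code. The sketch below spells out $u_i = a_{i\alpha} x_\alpha$ for a small numeric example (the matrix and vector values are arbitrary): the repeated index α is summed away inside the comprehension, and the free index i generates one equation per value.

```python
# u_i = a_{i alpha} x_alpha: repeated index alpha is summed (here over 0..2);
# the single-occurring index i labels one equation per value it can take.
a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
x = [1, 0, -1]

u = [sum(a[i][al] * x[al] for al in range(3)) for i in range(3)]
print(u)  # [-2, -2, -2]
```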
1. $$\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$
This is the Kronecker delta.
2. $$\varepsilon_{ijk} = \begin{cases} 1, & i \neq j,\ i \neq k,\ j \neq k,\ \ i, j, k \text{ in cyclic order} \\ -1, & i \neq j,\ i \neq k,\ j \neq k,\ \ i, j, k \text{ in anticyclic order} \\ 0, & i = j,\ i = k, \text{ or } j = k \end{cases}$$
This is the Levi-Civita symbol.
1. $\delta_{\alpha\alpha} = 3$
2. $\varepsilon_{\alpha\beta\gamma}\varepsilon_{\alpha\beta\gamma} = 6$
3. $\varepsilon_{i\alpha\beta}\varepsilon_{j\alpha\beta} = 2\delta_{ij}$
4. $\varepsilon_{ij\beta}\varepsilon_{k\ell\beta} = \delta_{ik}\delta_{j\ell} - \delta_{i\ell}\delta_{jk}$
5. $\varepsilon_{ijk}\varepsilon_{\ell mn} = \delta_{i\ell}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{k\ell} + \delta_{in}\delta_{j\ell}\delta_{km} - \delta_{i\ell}\delta_{jn}\delta_{km} - \delta_{im}\delta_{j\ell}\delta_{kn} - \delta_{in}\delta_{jm}\delta_{k\ell}$
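Since all the indices range over only three values, the ε-δ identities can be verified exhaustively. The sketch below (the sign-of-permutation formula for ε is a standard trick, not from the text) checks the contraction identities 2 and 4:

```python
def delta(i, j):
    return 1 if i == j else 0

def eps(i, j, k):
    # Levi-Civita symbol on indices 0..2 via the sign of the permutation:
    # +1 for cyclic order, -1 for anticyclic order, 0 for any repeated index.
    return (j - i) * (k - i) * (k - j) // 2

R = range(3)

# Identity 4: eps_{ij b} eps_{kl b} = d_ik d_jl - d_il d_jk, for all i, j, k, l
for i in R:
    for j in R:
        for k in R:
            for l in R:
                lhs = sum(eps(i, j, b) * eps(k, l, b) for b in R)
                assert lhs == delta(i, k) * delta(j, l) - delta(i, l) * delta(j, k)

# Identity 2: full contraction of eps with itself gives 6
assert sum(eps(i, j, k) ** 2 for i in R for j in R for k in R) == 6
print("identities 2 and 4 verified")
```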
10.7 The Dirac Delta Function
The Dirac Delta "function" symbolizes an integration operation and in this sense is not
strictly a function (in the interpretation of Professor R. V. Churchill). It cannot be the end
result of a calculation, but is meaningful only if an integration is to be carried out over its
argument. We define the δ-"function" as follows:
$$\delta(x) \equiv 0, \qquad x \neq 0$$
$$\int_{-\epsilon}^{\epsilon} \delta(x)\,dx = 1, \qquad \epsilon > 0$$
$$\int_{-\epsilon}^{\epsilon} f(x)\,\delta(x)\,dx = f(0)$$
$$\delta(x) = \delta(-x)$$
$$\delta'(x) = -\delta'(-x), \qquad \delta'(x) = \frac{d\delta(x)}{dx}$$
$$x\,\delta(x) = 0$$
$$x\,\delta'(x) = -\delta(x)$$
$$\delta(ax) = \frac{1}{a}\,\delta(x), \qquad a > 0$$
$$\delta(x^2 - a^2) = \frac{1}{2a}\left[\delta(x - a) + \delta(x + a)\right], \qquad a > 0$$
$$\int \delta(a - x)\,\delta(x - b)\,dx = \delta(a - b)$$
$$f(x)\,\delta(x - a) = f(a)\,\delta(x - a)$$
One common representation is
$$\delta(x) = \lim_{g\to\infty}\frac{1}{2\pi}\int_{-g}^{g} e^{ix\xi}\,d\xi.$$
Here, note that the limit is taken after integration. Other such representations are common,
like that given by Schiff:
$$\delta(x) = \lim_{g\to\infty}\frac{\sin(gx)}{\pi x}$$
which means not that the limit is to be taken exactly as shown, but rather that it is taken
after integration, i.e., with this representation:
$$\int_{-\epsilon}^{\epsilon} \delta(x)\,dx = \lim_{g\to\infty}\int_{-\epsilon}^{\epsilon} \frac{\sin(gx)}{\pi x}\,dx$$
$$\int_{-\epsilon}^{\epsilon} f(x)\,\delta(x)\,dx = \lim_{g\to\infty}\int_{-\epsilon}^{\epsilon} f(x)\,\frac{\sin(gx)}{\pi x}\,dx$$
thus
$$\int_{-\epsilon}^{\epsilon} f(x)\,\delta(x)\,dx = \int_{-\epsilon}^{\epsilon}\int_{-\infty}^{\infty} \frac{1}{2\pi}\,e^{ix\xi} f(x)\,d\xi\,dx$$
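The "limit after integration" prescription can be seen numerically with the Schiff representation: for finite g the kernel sin(gx)/(πx) is an ordinary function, and the integral of f(x) against it tends to f(0) as g grows. A sketch (the choice f = cos and ε = 1 is arbitrary):

```python
import math

def simpson(g, a, b, n=40000):
    # Composite Simpson's rule with many points, to resolve the oscillations.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def kernel(g_param, x):
    # sin(gx)/(pi x), with the x -> 0 limiting value g/pi handled explicitly
    return g_param / math.pi if x == 0.0 else math.sin(g_param * x) / (math.pi * x)

f = math.cos
eps = 1.0
for g in (10.0, 100.0, 1000.0):
    val = simpson(lambda x: f(x) * kernel(g, x), -eps, eps)
    print(g, val)  # tends toward f(0) = 1 as g grows
```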
A. Definitions
$$\Gamma(x) = \int_0^{\infty} e^{-t}\,t^{x-1}\,dt$$
B. Properties
a. $\Gamma(x + 1) = x\,\Gamma(x)$
b. $\Gamma(n) = (n - 1)!$ \quad $(\Gamma(1) = 1)$ \quad $n \in \mathbb{N}$
c. $\Gamma(x)\,\Gamma(1 - x) = \dfrac{\pi}{\sin(\pi x)}$
d. $\Gamma(x)\,\Gamma(x + \tfrac{1}{2}) = 2^{1 - 2x}\sqrt{\pi}\,\Gamma(2x)$
e. $\dfrac{\Gamma(1 - b)}{\Gamma(1 - a)} = \dfrac{1 - a}{1 - b}\cdot\dfrac{2 - a}{2 - b}\cdot\dfrac{3 - a}{3 - b}\cdots$
f. $\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$
Since, by a., one may reduce Γ(x) to a product involving Γ of some number between 1 and 2,
a handy table for calculations is one like that found at the end of Chemical Rubber Integral
Tables, for Γ(x), 1 ≤ x ≤ 2.
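The table-reduction procedure can be sketched in code. Below, the recurrence of property a. peels off factors until the argument lies in [1, 2]; `math.gamma` on the reduced argument stands in for the table lookup (the example value 4.7 is arbitrary):

```python
import math

def gamma_via_table(x):
    # Reduce Gamma(x), x > 2, to a product times Gamma of a number in [1, 2],
    # using property a: Gamma(x + 1) = x Gamma(x).
    # E.g. Gamma(4.7) = 3.7 * 2.7 * 1.7 * Gamma(1.7).
    factor = 1.0
    while x > 2.0:
        x -= 1.0
        factor *= x
    return factor * math.gamma(x)  # math.gamma plays the role of the table

print(gamma_via_table(4.7), math.gamma(4.7))
```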
References
1. H. Margenau and G.M. Murphy, "The Mathematics of Physics and Chemistry", D.
Van Nostrand Company, Inc., New York, 1956, pp. 93-98.
2. Whittaker and Watson, "Modern Analysis", 4th Edition, Cambridge University Press
(1927), Chapter VII.
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-\lambda^2}\,d\lambda$$
$$\operatorname{erfc}(x) = 1 - \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-\lambda^2}\,d\lambda$$
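The definition is easy to evaluate directly. The sketch below computes erf(x) by quadrature of the defining integral and compares it with the standard-library `math.erf` (the test point x = 1 is arbitrary):

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def erf_by_quadrature(x):
    # erf(x) = (2/sqrt(pi)) * integral of exp(-t^2) from 0 to x
    return 2.0 / math.sqrt(math.pi) * simpson(lambda t: math.exp(-t * t), 0.0, x)

print(erf_by_quadrature(1.0), math.erf(1.0))  # both ~0.8427
```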
Chapter 11
2. The electrostatic cgs unit of field strength is that field in which 1 escoulomb experiences
a force of 1 dyne. It, therefore, is 1 dyne per escoulomb.
3. The electrostatic cgs unit of potential difference (or esvolt) is the difference of potential
between two points such that 1 erg of work is done in carrying 1 escoulomb from one
point to the other. It is 1 erg per escoulomb.
2. The unit magnetic field strength, the oersted, is that field in which a unit pole expe-
riences a force of 1 dyne. It therefore is 1 dyne per unit pole.
3. The absolute unit of current (or abampere) is that current which, in a circular wire of
1-cm radius, produces a magnetic field of strength 2π dynes per unit pole at the center
of the circle. One abampere approximately equals $3\cdot10^{10}$ esamperes, or 10 amp.
4. The electromagnetic cgs unit of charge (or abcoulomb) is the quantity of electricity
passing in 1 second through any cross section of a conductor carrying a steady current
of 1 abampere. One abcoulomb equals 10 coulombs.
5. The electromagnetic cgs unit of potential difference (or abvolt) is a potential difference
between two points, such that 1 erg of work is done in transferring 1 abcoulomb from
one point to the other. One abvolt = $10^{-8}$ volts, which is approximately $\frac{1}{3}\cdot10^{-10}$ esvolt.
• The electron volt equals the work done when an electron is moved from one point to
another differing in potential by 1 volt.
• 1 electron volt = $4.80\cdot10^{-10}$ escoulomb $\times\ \frac{1}{300}$ esvolt = $1.60\cdot10^{-12}$ erg
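The arithmetic of the last line checks out directly (the numerical values are the ones used in the text above):

```python
# 1 eV = (electron charge in escoulombs) x (1 volt expressed in esvolts)
e_escoulomb = 4.80e-10        # electron charge in escoulombs, as quoted in the text
volt_in_esvolt = 1.0 / 300.0  # 1 volt is approximately 1/300 esvolt
ev_in_erg = e_escoulomb * volt_in_esvolt
print(ev_in_erg)  # ~1.60e-12 erg
```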