Fourier Transforms (Second Part)
I will study a particular basis, one that looks suspiciously familiar from section
3. Suppose
e_n = [1, e^{2πin/N}, e^{4πin/N}, . . . e^{2kπin/N}, . . . e^{2(N−1)πin/N}] .  (5.5)
Each basis vector is in the form
e_n = [1, ω_n, ω_n^2, ω_n^3, . . . ω_n^{N−1}]  (5.6)
where we write ω n = e2π in/ N . This basis is orthogonal. To prove that fact consider
(e_m, e_n) = 1 + ω_m ω_n^{−1} + ω_m^2 ω_n^{−2} + . . . + ω_m^{N−1} ω_n^{−(N−1)}  (5.7)

  = 1 + ω_{m−n} + ω_{m−n}^2 + . . . + ω_{m−n}^{N−1} .  (5.8)
This is an example of a geometric series, which you ought to be able to sum on
sight: when m ≠ n the sum is
(e_m, e_n) = (ω_{m−n}^N − 1) / (ω_{m−n} − 1)  (5.9)

  = (e^{2πi(m−n)} − 1) / (e^{2πi(m−n)/N} − 1) .  (5.10)
But e2π iK = 1 if and only if K is an integer; thus the numerator vanishes, and the
denominator does not because we have excluded the case m = n. This proves the
complex vectors en are mutually orthogonal under the inner product (5.1). The case
m = n is trivial, since then ω_{m−n} = ω^0 = 1, each of the N terms in (5.8) is one, and so

(e_n, e_n) = N .  (5.11)
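The orthogonality just proved is easy to confirm numerically. The sketch below builds the basis vectors of (5.6) for an illustrative choice N = 8 and checks that their Gram matrix of inner products is N times the identity, i.e. (e_m, e_n) = 0 for m ≠ n and (e_n, e_n) = N.

```python
import numpy as np

# Numerical check that the vectors e_n of (5.6) are orthogonal under
# the inner product (5.1), and that (e_n, e_n) = N.  N = 8 is an
# illustrative choice.
N = 8
k = np.arange(N)
E = np.exp(2j * np.pi * np.outer(np.arange(N), k) / N)  # row n is e_n

# Gram matrix: G[m, n] = (e_m, e_n) = sum_k e_m[k] * conj(e_n[k])
G = E @ E.conj().T
print(np.allclose(G, N * np.eye(N)))  # prints True
```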
We write out what we have in full; (5.3) and (5.4) are as follows
z_m = Σ_{n=0}^{N−1} c_n e^{2πimn/N}  (5.12)

c_m = (1/N) Σ_{n=0}^{N−1} z_n e^{−2πimn/N} .  (5.13)
Again the minus sign in (5.13) comes from the complex conjugate in (5.1). The first
equation can be interpreted as the construction of a finite length, discretely sampled
signal from periodic components with complex amplitudes: it is a finite Fourier syn-
thesis. The second almost symmetrical equation says how to find the expansion coef-
ficients from the original sequence. We can regard the pair as means of approximat-
ing the pair of equations (3.10) and (3.11), and they are often used that way; but
(5.12) and (5.13) stand on their own as a particular form of Fourier analysis called
the Discrete Fourier Transform, or DFT for short. An important reason why
these equations are so useful for approximation is that when N is a power of two (or
can be written as the product of a few primes) the DFT can be evaluated
extremely rapidly. The algorithm is called the Fast Fourier
Transform, or FFT. We have already met this algorithm in our discussion of convo-
lution.
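Since the sum in (5.13) is just a matrix applied to the vector z, a direct implementation makes the point concrete. The sketch below evaluates (5.13) by an explicit matrix multiply and compares it with numpy's FFT routine; note that numpy's forward transform omits the 1/N factor used in (5.13), so we divide by N to match.

```python
import numpy as np

def dft(z):
    """Direct evaluation of (5.13): c_m = (1/N) sum_n z_n exp(-2 pi i m n / N)."""
    N = len(z)
    n = np.arange(N)
    # N x N matrix of exponentials; the DFT is this matrix applied to z
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ z / N

z = np.random.default_rng(0).standard_normal(8) + 0j
c = dft(z)
# numpy's fft uses the same kernel but without the 1/N normalization
print(np.allclose(c, np.fft.fft(z) / len(z)))  # prints True
```

The direct multiply costs O(N²) operations; the FFT achieves the same result in O(N log N), which is the whole reason for its practical importance.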
One of the chief applications of these ideas is to find numerical approximations for
an analytic FT, because all too often the necessary integral is hard. Often an asymp-
totic approximation via the method of stationary phase, or the saddle-point
integral will do the trick (see Bender and Orszag, 1978), and it is a good idea to per-
form those kinds of calculations anyhow, even if one plans to compute the integral
numerically.
Let us define (5.13) as the Discrete Fourier Transform (DFT):
f̂_m = (1/N) Σ_{n=0}^{N−1} f_n e^{−2πimn/N} ,  m = 0, 1, 2, . . . N − 1  (5.14)
so that a vector f ∈ C^N (a complex N-dimensional vector) is mapped into another
such vector. The FFT is just a fast way of doing a certain matrix multiply. On the
other hand, the Fourier transform is:
f̂(ν) = ∫_{−∞}^{∞} f(t) e^{−2πiνt} dt  (5.16)
Suppose we are willing to approximate the integral by the trapezoidal rule, which
is the sum:
T[g] = ½h[g(a) + g(b)] + h Σ_{n=1}^{N−1} g(a + nh) ≈ ∫_a^b g(t) dt  (5.17)
where the spacing h = (b − a)/N. Since a and b are at infinity in the original integral
(5.16) it should not matter whether we keep the factor of one half at the end points in
(5.17), since our function should be vanishingly small there: we can take a straight
sum. In the approximation we can include only finitely many terms, say 2M + 1.
Then we will sample the function f(t) symmetrically about t = 0, at the points
t = 0, ±∆t, ±2∆t, . . . ±M∆t; now we have an approximation

f̃(ν) = Σ_{n=−M}^{M} f(n∆t) e^{−2πiνn∆t} ∆t .  (5.18)

Evaluating this at the frequencies ν = m∆ν and re-indexing the sum with k = n + M gives

f̃(m∆ν) = Σ_{k=0}^{2M} f((k − M)∆t) e^{−2πim(k−M)∆ν∆t} ∆t  (5.20)

  = ∆t e^{2πimM∆ν∆t} Σ_{k=0}^{2M} f((k − M)∆t) e^{−2πimk∆ν∆t} .  (5.21)
Comparing this expression with (5.14) we see that the sum is an FFT calculation if
we make N = 2M + 1 and ∆t ∆ν = 1/(2M + 1). Notice that the input vector is
f_n = f((n − M)∆t) and the output f̃_m approximates f̂(m∆ν).
To get reasonable accuracy out of the trapezoidal rule, we need two things intu-
itively: (a) A fine enough sampling ∆t to capture short wavelength behavior of the
original function; (b) Integration over a large enough interval, so that f (t) is effec-
tively zero near the end points, which is equivalent to taking M large enough that
f(M∆t) is very small. These two demands set an upper bound on ∆t and a lower
bound on M, and therefore constrain ∆ν, the spacing in frequency at which the
results are computed, and the highest frequency M∆ν. Usually one finds one must
make M much larger than one would like, and then there are more points in
frequency than is convenient.
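The whole scheme (5.18)–(5.21) can be sketched in a few lines. Here the test function is the Gaussian f(t) = e^{−πt²}, which is its own Fourier transform, so the FFT result can be compared directly with the exact answer at the computed frequencies m∆ν. The values of M and ∆t below are illustrative choices, not prescribed by the text.

```python
import numpy as np

# Approximate the FT of f(t) = exp(-pi t^2), whose transform is
# exp(-pi nu^2), by the FFT scheme of (5.18)-(5.21).
M = 64
dt = 0.125
N = 2 * M + 1                  # number of samples, odd
dnu = 1.0 / (N * dt)           # so that dt * dnu = 1/(2M+1)

t = (np.arange(N) - M) * dt    # symmetric samples about t = 0
f = np.exp(-np.pi * t**2)

m = np.arange(N)
# fft computes sum_k f_k exp(-2 pi i m k / N); multiply by the phase
# factor and dt as in (5.21)
fhat = dt * np.exp(2j * np.pi * m * M / N) * np.fft.fft(f)

nu = m * dnu
exact = np.exp(-np.pi * nu**2)
# agreement at the low frequencies is essentially machine precision
print(np.max(np.abs(fhat[:10] - exact[:10])))
```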
In fact, for smooth functions that decay rapidly the error of the trapezoidal rule
vanishes much faster than h², and usually is of the form
e^{−c/h}. So the DFT approximation of an analytic function can be very good, provided a
large enough interval is chosen to make the function very small at the end points.
The key to a proper treatment of the error in (5.23) is another important result
for Fourier Theory, the Poisson Sum Formula: for any sufficiently smooth function,
say f ∈ S,

Σ_{n=−∞}^{∞} f(n) = Σ_{m=−∞}^{∞} f̂(m)  (5.24)
where fˆ is the FT of f . To me this is an amazing result, because the sum only sam-
ples the function at integer points, while the FT integrates over the whole thing. To
use it on (5.23) write f (n) = g(nh) and expand the sum on the right in (5.24)
Σ_{n=−∞}^{∞} g(nh) = (1/h) ĝ(0) + (1/h) Σ_{m=1}^{∞} ĝ(m/h) + (1/h) Σ_{m=−∞}^{−1} ĝ(m/h)  (5.25)

  = (1/h) ∫_{−∞}^{∞} g(t) dt + (1/h) Σ_{m=1}^{∞} [ĝ(m/h) + ĝ(−m/h)] .  (5.26)
After multiplying through by h, the sum on the left is the trapezium approximation,
the first term on the right is the true integral, and the remaining sum is the error in
the approximation. This sum can usually be evaluated by an asymptotic method, and
when h is small only the first term is important. Remember the Poisson Sum
Formula, because it has other applications in time series.
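The Poisson Sum Formula is easy to check numerically. Using the standard pair f(t) = e^{−πa²t²} with f̂(s) = (1/a) e^{−πs²/a²} (a = 0.5 is an arbitrary illustrative choice), both sides of (5.24) can be summed directly; the terms decay so fast that a modest truncation is effectively exact.

```python
import numpy as np

# Numerical check of the Poisson Sum Formula (5.24).  With
# f(t) = exp(-pi a^2 t^2) the FT is fhat(s) = (1/a) exp(-pi s^2 / a^2),
# a standard transform pair; a = 0.5 is an illustrative choice.
a = 0.5
m = np.arange(-100, 101)
lhs = np.sum(np.exp(-np.pi * a**2 * m**2))        # sum of f(n)
rhs = np.sum(np.exp(-np.pi * m**2 / a**2) / a)    # sum of fhat(m)
print(lhs, rhs)  # the two sums agree to machine precision
```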
Exercises
1. The FFT is fastest when the number of terms N is a power of two, that is, an
even number, while the analysis above yields an odd number for N . How can
we ensure an even number instead?
where f(x) = f(x1, x2) is a complex valued function; this can be stated compactly as
f : IR² → C. The integral is performed over the whole (x1, x2) plane. As before we will
not inquire into what kinds of functions are suitable for this operation, but continu-
ously differentiable functions that decay to zero at infinity will certainly be safe.
The inverse follows the same pattern as in one dimension:
Notice the vector dot product in the exponent. This is the key to understanding the
2-D FFT in physical terms. Equation (6.2) like (1.2) is building up a function from a
collection of elementary periodic components, but with the additional complication
that each element has a direction as well as a wavelength. The complex exponential
is
e^{2πik·x} = cos 2π(k1x1 + k2x2) + i sin 2π(k1x1 + k2x2) .  (6.3)
The Fourier parameter k = (k1 , k2 ) is the wavevector or wavenumber; it points in
the direction of increasing phase, normal to the wavefront. Imagine plotting the real
part of (6.3); you will see a sinusoidal undulation with peaks and troughs running at
right angles to the direction k̂ and with a wavelength λ = 1/√(k1² + k2²) = |k|^{−1}.
Differentiation introduces the obvious modification to allow for the fact that the gra-
dient is a vector operator:
F[∇f] = 2πik f̂(k) .  (6.7)
And its converse is:
F[x f(x)] = −(1/2πi) ∇_k f̂ = −(1/2πi) (∂f̂/∂k1, ∂f̂/∂k2, ∂f̂/∂k3) .  (6.8)
= ∫_{−∞}^{∞} dx1 ∫_{−∞}^{∞} dx2 e^{−π(x1² + x2²)} e^{−2πi(k1x1 + k2x2)}  (6.12)

  = ∫_{−∞}^{∞} dx1 e^{−πx1²} e^{−2πik1x1} × ∫_{−∞}^{∞} dx2 e^{−πx2²} e^{−2πik2x2}  (6.13)
F[g] = ∫_0^∞ [ ∫_0^{2π} dθ e^{−2πikr cos(θ − Φ)} ] G(r) r dr .  (6.18)
The value of the integral in brackets is independent of the value of Φ because the θ
integral is over a complete period of a periodic function, and shifting the argument of
the cosine by any constant amount has no effect. (Prove this.) The inner integral
becomes
∫_0^{2π} dθ e^{−2πikr cos θ} = ∫_0^{2π} dθ cos(2πkr cos θ) = 2π J_0(2πkr)  (6.19)
where J_0 is called a Bessel function (of the first kind and order zero). It is a func-
tion that looks a lot like a slowly decaying cosine (see Figure 6b) and is the simplest
member of a big family of special functions obeying a particular kind of ordinary dif-
ferential equation. Our final result can be written
ĝ(k) = H [G](|k|) (6.20)
where
H[G](k) = H(k) = ∫_0^∞ 2πr dr J_0(2πkr) G(r) .  (6.21)
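Both (6.19) and (6.21) can be checked at once. The sketch below evaluates J_0 directly from the integral in (6.19), then applies the Hankel transform (6.21) to G(r) = e^{−πr²} by simple quadrature; the result should be e^{−πk²}, the known 2-D FT of the Gaussian. All grid sizes are illustrative choices.

```python
import numpy as np

# J0 evaluated from (6.19): the average of cos(x cos(theta)) over one
# period, computed on a uniform grid (spectrally accurate for a
# periodic integrand).
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)

def J0(x):
    x = np.atleast_1d(x)
    return np.mean(np.cos(x[:, None] * np.cos(theta)[None, :]), axis=1)

# Hankel transform (6.21) of G(r) = exp(-pi r^2) by simple quadrature;
# the answer should be exp(-pi k^2).
r = np.linspace(0.0, 8.0, 4001)
dr = r[1] - r[0]
G = np.exp(-np.pi * r**2)

def hankel0(k):
    return np.sum(2.0 * np.pi * r * J0(2.0 * np.pi * k * r) * G) * dr

for k in (0.0, 0.5, 1.0):
    print(k, hankel0(k), np.exp(-np.pi * k**2))
```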
Exercises
1. Write the Hankel transform of order zero as
H[g](k) = 2π ∫_0^∞ g(r) J_0(2πkr) r dr .

Show that the inverse Hankel transform is identical to the forward transform,
that is, H[g] = H^{−1}[g].
7. Change of Dimension
A scalar function is defined in 3-dimensional space IR3 , but observed only on a plane.
This situation arises often in potential fields in geophysics, for example, we may
know gravity only on the sea surface, even though it is defined above and below that
surface. Or in lower dimensions, we may know the magnetic anomaly only on a sin-
gle track, though it is observable on the sea surface, and in principle defined in all of
space. How is the Fourier transform of the function f in IR³, as defined by (7.1),
related to the 2-D FT on the plane z = 0? The answer is surprisingly easy to prove.
Use coordinates (x1 , x2 , x3 ) rather than x, y, z. We need a notation that distin-
guishes 2- and 3-D transform operations and their results; the following is not stan-
dard, but is quite serviceable. The 2-D FT we need is defined as
f̂^0(k1, k2) = F_2[f(x1, x2, 0)]  (7.2)

  = ∫_{−∞}^{∞} ∫_{−∞}^{∞} dx1 dx2 f(x1, x2, 0) e^{−2πi(k1x1 + k2x2)} .  (7.3)
Now consider the inverse of the full 3-D FT (7.1):
f(x1, x2, x3) = F_3^{−1}[f̂] = ∫_{IR³} d³k f̂(k) e^{2πik·x}  (7.4)

  = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} dk1 dk2 dk3 f̂(k) e^{2πi(k1x1 + k2x2 + k3x3)} .  (7.5)
If we set x3 = 0 in (7.4) we have
f(x1, x2, 0) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} dk1 dk2 dk3 f̂(k) e^{2πi(k1x1 + k2x2)}  (7.6)

  = ∫_{−∞}^{∞} ∫_{−∞}^{∞} dk1 dk2 e^{2πi(k1x1 + k2x2)} ∫_{−∞}^{∞} dk3 f̂(k)  (7.7)

  = F_2^{−1}[ ∫_{−∞}^{∞} dk3 f̂(k) ] .  (7.8)
Now all we need do is take the 2-D FT of both sides of (7.8)
f̂^0(k1, k2) = ∫_{−∞}^{∞} dk3 f̂(k1, k2, k3) .  (7.9)
This result shows that to obtain the 2-D FT on a plane through the origin, you
must integrate the full 3-D FT along the line in wavenumber space that is
perpendicular to the plane. As you will easily see, the same argument works in
going from the FT over a plane to the FT on a line in the plane. Bracewell calls this
the Slice Theorem. You don’t want to make the mistake of thinking that the 2-D
FT on a slice is just a slice through the 3-D FT, but this is an error that is often
made.
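The Slice Theorem is simple to illustrate one dimension down, going from a plane to a line. For f(x) = e^{−π|x|²} in two dimensions, with f̂(k) = e^{−π|k|²}, the 1-D FT of the slice f(x1, 0) is e^{−πk1²}, and the theorem says this must equal the integral of f̂ over k2. The value of k1 and the quadrature grid are illustrative choices.

```python
import numpy as np

# Slice Theorem check in 2-D: F_1[f(x1, 0)](k1) = int fhat(k1, k2) dk2.
# For the Gaussian both sides are known in closed form.
k1 = 0.8
k2 = np.linspace(-10.0, 10.0, 20001)
dk = k2[1] - k2[0]
integral = np.sum(np.exp(-np.pi * (k1**2 + k2**2))) * dk  # int over k2
slice_ft = np.exp(-np.pi * k1**2)                         # FT of the slice
print(integral, slice_ft)
```

Note that the slice of the *transform*, f̂(k1, 0) = e^{−πk1²}, happens to agree here only because of the special self-transform property of this Gaussian; in general the two operations are quite different, which is exactly the mistake warned against above.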
You might imagine that this is fairly useless, since 3-D FTs are harder to find
than 2-D or 1-D transforms. But that is not always true. Here is a simple example.
Consider the gravitational potential of a point mass at the origin of coordinates. It is
well known that
U(x) = − Gm / |x| .  (7.10)
On the plane z = x3 = h the potential is clearly
u(x1, x2) = U(x1, x2, h) = − Gm / √(x1² + x2² + h²) .  (7.11)
There are several reasons why we would like the 2-D FT of u; for example, to per-
form a convolution over an extended body to calculate its gravity anomaly, using the
Convolution Theorem. Equation (7.11) clearly is a function that is circularly
symmetric about (0, 0). So according to (6.20), after setting r² = x1² + x2², we have
F_2[u] = −2πGm ∫_0^∞ J_0(2πr√(k1² + k2²)) (r dr / √(r² + h²))  (7.12)
which is not an easy integral. We can discover the answer quite simply in another
way.
In place of (7.10) we write the fundamental differential equation for the gravita-
tional potential, Poisson’s equation:
∇²U = −4πG ρ(x)  (7.13)
where ρ is the density distribution. For a point mass at the origin, this becomes
∂²U/∂x1² + ∂²U/∂x2² + ∂²U/∂x3² = −4πGm δ(x) .  (7.14)
Here you can imagine that δ (x) is a very tall, narrow Gaussian function centered on
the origin, with unit volume; in the ideal case the function becomes arbitrarily nar-
row and high, and then represents the idealization of the density distribution of a
point mass. We will look at this theory later. For now we need only think about
what the 3-D FT is of this function. Without going into details, which you can sup-
ply yourself, the FT of the Gaussian is a 3-D Gaussian in k, but scaled so that if w is
its width in space, its width in wavenumber must be 1/w. As w tends towards zero
the FT tends to a constant, and because the integral of δ(x) over all space is unity,
that constant must be unity too. Now we take the 3-D FT of (7.14) and we see

−4π²|k|² Û(k) = −4πGm .  (7.15)
Therefore, the 3-D FT of the potential due to a point mass at the origin is very sim-
ple; it is
Û(k) = Gm / (π|k|²) .  (7.16)
Once again the Fourier transform has made a differential equation into an algebraic
equation. This process only works when the boundary conditions are applied at
infinity. We want the 2-D FT on the plane z = h, not the plane z = 0. One way to get
this is to shift the point mass to the position x3 = − h; the shift property in three
dimensions is
Û(k) = Gm e^{2πik3h} / [π(k1² + k2² + k3²)] .  (7.19)
This is the 3-D FT of a point mass at z = − h. According to (7.9), all we need do now
to find the 2-D FT of u is integrate this equation over k3 :
F_2[u] = (Gm/π) ∫_{−∞}^{∞} dk3 e^{2πik3h} / (k1² + k2² + k3²) .  (7.20)
Since k1 and k2 are constants as far as the integral is concerned, and the integrand
is an even function of k3 , the integral is nothing more than the FT of f 3 defined by
(3.10), and given by (3.11):
û(k1, k2) = Gm e^{−2πh√(k1² + k2²)} / √(k1² + k2²) .
As advertised, û is circularly symmetric in wavenumber.
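The k3 integral leading to this result is easy to confirm numerically. Since the integrand in (7.20) is even in k3, the exponential can be replaced by a cosine, and with κ² = k1² + k2² the claim is that (1/π)∫ cos(2πk3h)/(κ² + k3²) dk3 = e^{−2πκh}/κ. The values of κ and h below are arbitrary illustrative choices, with Gm set to one.

```python
import numpy as np

# Numerical check of the k3 integral in (7.20).  With kappa^2 = k1^2 + k2^2,
#   (1/pi) int_{-inf}^{inf} cos(2 pi k3 h) / (kappa^2 + k3^2) dk3
# should equal exp(-2 pi kappa h) / kappa, giving the final formula for
# u-hat.  kappa and h are illustrative; Gm = 1.
kappa, h = 0.7, 1.3
k3 = np.linspace(-200.0, 200.0, 2_000_001)
dk = k3[1] - k3[0]
integrand = np.cos(2.0 * np.pi * k3 * h) / (kappa**2 + k3**2)
numeric = np.sum(integrand) * dk / np.pi
exact = np.exp(-2.0 * np.pi * kappa * h) / kappa
print(numeric, exact)
```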
12. Slice Theorem:  F_2[f(x1, x2, 0)] = ∫_{−∞}^{∞} f̂(k1, k2, k3) dk3 ,  x ∈ IR³