Variations and Applications of The Fast Fourier Transform Algorithms
Abstract—This paper covers basic material relating to the Fast Fourier Transform (FFT). It begins with the history of the FFT and what it actually accomplishes, then proceeds through the many different versions that have been developed, focusing on differences in algorithms and computational complexity as well as the primary applications where each is used. Potential future advancements and applications of the FFT are discussed at the end.

Index Terms—Fast Fourier Transform (FFT), computational complexity, applications of FFT

I. INTRODUCTION

The Fourier Transform is a mathematical operation that expresses a function of time as a function of frequency. This is especially useful in signal processing, since the signals being analyzed are functions of time, whether they are sound waves audible to humans, electromagnetic waves for communication devices in the gigahertz range (10^9 Hz), or X-rays up in the petahertz range (10^15 Hz). Analyzing a signal with respect to frequency rather than time can tell a lot about its source, whether it is a broadband sound, a specific color, or a radio signal at a specific frequency.

The Discrete Fourier Transform (DFT) is essentially a Fourier Transform that takes a discrete-time input and transforms it to the frequency domain. Typically it is applied to continuous signals that have been sampled over a finite duration of time. Although effective for frequency analysis, evaluating the DFT directly is computationally costly.

A Fast Fourier Transform (FFT) algorithm is any algorithm that improves on the complexity of the original DFT while still effectively transforming a signal to its frequency domain. The premise behind the FFT was thought up in the early 1800s by Carl Gauss, but it was further developed in the 1960s by James Cooley and John Tukey. Since then, applications of the FFT have included everything from transient and frequency analysis of discrete audio signals to spectral analysis in chemistry, which examines the chemical properties of compounds through their optical spectra.

II. DIFFERENT VERSIONS OF FFT

A. DFT - Original Development of the FFT

Any algorithm that can compute the DFT whose upper complexity is on the order of O(N log2 N) is considered to be an FFT. The original DFT algorithm is given by:

X_k = \sum_{n=0}^{N-1} x_n e^{-j 2\pi k n / N},   k = 0, \ldots, N-1.   (1)

Evaluating each sum takes N operations, and there are N output terms in total, so the basic DFT algorithm has a computational complexity of O(N^2).
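As a concrete illustration of that operation count, the short sketch below (an illustrative NumPy implementation, not code from the paper; the function name is arbitrary) evaluates Eq. (1) directly. Each of the N output bins is an N-term sum, which is exactly the O(N^2) behavior described above.

    import numpy as np

    def dft_naive(x):
        # Directly evaluate Eq. (1): X_k = sum_n x_n * exp(-j*2*pi*k*n/N).
        x = np.asarray(x, dtype=complex)
        N = x.size
        n = np.arange(N)
        X = np.empty(N, dtype=complex)
        for k in range(N):                                      # N output bins...
            X[k] = np.sum(x * np.exp(-2j * np.pi * k * n / N))  # ...each an N-term sum
        return X

    # quick check against a library FFT
    x = np.random.rand(8)
    assert np.allclose(dft_naive(x), np.fft.fft(x))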
B. Cooley-Tukey FFT

The most common implementation of the FFT is the original algorithm proposed by Cooley and Tukey back in 1965. They were able to reinvent Gauss's findings, adding newer research on the algorithm's complexity and computation requirements. The algorithm works recursively: if the DFT has size N, it can be split into two smaller DFTs of sizes N1 and N2, such that N = N1 N2.

The radix-2 case of this recursive method assigns both N1 and N2 to N/2. This gives two summations, both similar in form to the DFT, but one summed over the even indices (n = 2b) and one over the odd indices (n = 2b + 1):

X_k = \sum_{b=0}^{N/2-1} x_{2b} e^{-j 2\pi (2b) k / N} + \sum_{b=0}^{N/2-1} x_{2b+1} e^{-j 2\pi (2b+1) k / N}.   (2)

This algorithm is considered divide-and-conquer: by splitting the DFT into two smaller transforms, the computations become simpler, and the complexity of the algorithm converges to O(N log2 N).
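A minimal recursive sketch of this radix-2 split is shown below (illustrative code, not taken from the paper, assuming N is a power of two). The even- and odd-indexed halves are transformed separately and recombined with the twiddle factors e^{-j 2\pi k / N}.

    import numpy as np

    def fft_radix2(x):
        # Radix-2 Cooley-Tukey following Eq. (2); N must be a power of two here.
        x = np.asarray(x, dtype=complex)
        N = x.size
        if N == 1:
            return x
        even = fft_radix2(x[0::2])               # DFT over n = 2b
        odd = fft_radix2(x[1::2])                # DFT over n = 2b + 1
        k = np.arange(N // 2)
        twiddle = np.exp(-2j * np.pi * k / N) * odd
        # X_k = E_k + W^k O_k and X_{k+N/2} = E_k - W^k O_k
        return np.concatenate([even + twiddle, even - twiddle])

    x = np.random.rand(16)
    assert np.allclose(fft_radix2(x), np.fft.fft(x))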
C. Prime-Factor FFT

Similar to Cooley and Tukey's algorithm, the Prime-Factor FFT splits a size-N DFT into two separate DFTs, but this time N1 and N2 are relatively prime rather than both equal to N/2. Two numbers are relatively prime if their only common factor is 1. This algorithm served as motivation for Cooley and Tukey's widely used FFT algorithm.

In order to express the Prime-Factor algorithm, the inputs and outputs need to be re-indexed altogether. The input index n and output index k respectively become:

n = (n_1 N_2 + n_2 N_1) \bmod N,
k = (k_1 N_2^{-1} N_2 + k_2 N_1^{-1} N_1) \bmod N.   (3)

By reassigning these values, the new FFT becomes:

X_{k_1 N_2^{-1} N_2 + k_2 N_1^{-1} N_1} = \sum_{n_1=0}^{N_1-1} \left( \sum_{n_2=0}^{N_2-1} x_{n_1 N_2 + n_2 N_1} e^{-j 2\pi n_2 k_2 / N_2} \right) e^{-j 2\pi n_1 k_1 / N_1}.   (4)

Although this is still an O(N log2 N) algorithm, it only works when N1 and N2 are relatively prime, and its accuracy has been questioned. It has been combined with Cooley and Tukey's FFT for the case when N1 and N2 are relatively prime. Because of the task of having to re-index and re-express the DFT each time, this algorithm is rarely used by itself.
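The re-indexing of Eqs. (3)-(4) can be sanity-checked with a short sketch (illustrative code, not from the paper; it assumes Python 3.8+ for the modular inverses via pow). The input is laid out on an N1 x N2 grid with the input map, small DFTs are taken along each axis, and the output map, using N_2^{-1} taken modulo N1 and N_1^{-1} taken modulo N2, scatters the results into the length-N transform.

    import numpy as np
    from math import gcd

    def prime_factor_fft(x, N1, N2):
        # Good-Thomas style sketch for N = N1*N2 with gcd(N1, N2) = 1.
        N = N1 * N2
        assert len(x) == N and gcd(N1, N2) == 1
        x = np.asarray(x, dtype=complex)

        # input map (Eq. 3): n = (n1*N2 + n2*N1) mod N
        n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
        grid = x[(n1 * N2 + n2 * N1) % N]

        # independent small DFTs along each axis, as in Eq. (4)
        grid = np.fft.fft(grid, axis=0)          # length-N1 transforms
        grid = np.fft.fft(grid, axis=1)          # length-N2 transforms

        # output map (Eq. 3): k = (k1*N2*(N2^-1 mod N1) + k2*N1*(N1^-1 mod N2)) mod N
        inv_N2, inv_N1 = pow(N2, -1, N1), pow(N1, -1, N2)
        k1, k2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
        k = (k1 * N2 * inv_N2 + k2 * N1 * inv_N1) % N

        X = np.empty(N, dtype=complex)
        X[k.ravel()] = grid.ravel()
        return X

    x = np.random.rand(15)                       # 15 = 3 * 5, relatively prime factors
    assert np.allclose(prime_factor_fft(x, 3, 5), np.fft.fft(x))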
D. Bruun's FFT

Bruun's FFT algorithm is fairly new compared to the others listed in this study. It works recursively, factorizing real polynomials in the transformation, and does not bring in imaginary numbers until the last stage of the computation. Using the expression for the DFT, Bruun's algorithm starts by renaming the complex exponential:

\omega_N^n = e^{-j 2\pi n / N},   n = 0, \ldots, N-1.   (5)

Bruun then defined a polynomial, x(z), given by:

x(z) = \sum_{n=0}^{N-1} x_n z^n.   (6)

By reducing this polynomial further, you get the solution to this FFT,

X_k = x(\omega_N^k) = x(z) \bmod (z - \omega_N^k),   (7)

where "mod" is simply the remainder of the polynomial division.

While this algorithm is fast, it is not the most accurate, which generally makes it more efficient to use the Cooley-Tukey FFT. Because of this, it is hardly used in practical applications.
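The remainder identity in Eq. (7) can be illustrated directly (a sketch of that identity only, not of Bruun's real-coefficient factorization; the function name is hypothetical). Dividing x(z) by the monic linear factor (z - \omega_N^k) with synthetic division is just Horner's rule, and the constant remainder it leaves is X_k.

    import numpy as np

    def dft_bin_by_remainder(x, k):
        # Remainder of x(z) = sum_n x_n z^n after division by (z - w_N^k), per Eq. (7).
        x = np.asarray(x, dtype=complex)
        N = x.size
        w_k = np.exp(-2j * np.pi * k / N)   # w_N^k from Eq. (5)
        remainder = 0j
        for coeff in x[::-1]:               # synthetic division / Horner, highest degree first
            remainder = remainder * w_k + coeff
        return remainder

    x = np.random.rand(8)
    X = np.fft.fft(x)
    assert all(np.isclose(dft_bin_by_remainder(x, k), X[k]) for k in range(8))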
E. Rader's FFT

The Rader FFT is an algorithm that works by changing the DFT into a circular convolution. It only works for prime DFT lengths, and it is typically more effective and simpler to use the Cooley-Tukey algorithm; the only real use for Rader's FFT is for large prime lengths. It works by re-indexing n and k of the DFT in terms of a primitive root g: the new n becomes n = g^q \bmod N, and the new k becomes k = g^{-p} \bmod N. The DFT is transformed to:

X_0 = \sum_{n=0}^{N-1} x_n,
X_{g^{-p}} = x_0 + \sum_{q=0}^{N-2} x_{g^q} e^{-j 2\pi g^{-(p-q)} / N},   p = 0, \ldots, N-2.   (8)

The result of this FFT sum is a circular convolution of two sequences, both of size N-1. These sequences, labeled a_q and b_q, are given by:

a_q = x_{g^q},   b_q = e^{-j 2\pi g^{-q} / N}.   (9)

In doing this convolution, N-1 may have large prime factors, making it necessary to perform the algorithm recursively. This adds to its complexity, making it O(N log2 N) plus O(N) extra operations. Thus, in practice, this algorithm is not used very often because of the extra operations and computations needed.
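A compact sketch of Eqs. (8)-(9) for a small prime N is given below (illustrative code, not from the paper). It finds a primitive root g by brute force, forms the sequences a_q and b_q, and evaluates their length-(N-1) circular convolution with a library FFT; a production implementation would treat the awkward N-1 length more carefully, as noted above.

    import numpy as np

    def rader_fft_prime(x):
        # Rader-style DFT for prime N, written as an explicit circular convolution.
        x = np.asarray(x, dtype=complex)
        N = x.size

        # find a primitive root g of the prime N by brute force
        def is_primitive_root(g):
            return len({pow(g, q, N) for q in range(N - 1)}) == N - 1
        g = next(c for c in range(2, N) if is_primitive_root(c))

        q = np.arange(N - 1)
        g_pow = np.array([pow(g, int(i), N) for i in q])       # g^q mod N
        g_inv_pow = np.array([pow(g, -int(i), N) for i in q])  # g^{-q} mod N (Python 3.8+)

        a = x[g_pow]                                # a_q = x_{g^q}
        b = np.exp(-2j * np.pi * g_inv_pow / N)     # b_q = exp(-j*2*pi*g^{-q}/N)

        # circular convolution of the two length-(N-1) sequences
        conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

        X = np.empty(N, dtype=complex)
        X[0] = x.sum()                              # DC bin handled separately
        X[g_inv_pow] = x[0] + conv                  # X_{g^{-p}} = x_0 + (a conv b)_p
        return X

    x = np.random.rand(7)                           # N = 7 is prime
    assert np.allclose(rader_fft_prime(x), np.fft.fft(x))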
F. Bluestein's FFT

Like Rader's FFT, Bluestein's algorithm evaluates the DFT using a circular convolution. The difference is that it works for arbitrary-sized DFTs rather than solely prime-sized ones. Bluestein's algorithm works by re-indexing and re-expressing the DFT sum, like before with Rader's. The product nk in the DFT exponent is now replaced using:

nk = \frac{n^2 + k^2 - (k - n)^2}{2}.   (10)

Now the new transform becomes:

X_k = e^{-j\pi k^2 / N} \sum_{n=0}^{N-1} \left( x_n e^{-j\pi n^2 / N} \right) e^{j\pi (k-n)^2 / N},   k = 0, \ldots, N-1.   (11)

The resultant convolution sequences, both of length N, that emerge from this summation are:

a_n = x_n e^{-j\pi n^2 / N},   b_n = e^{j\pi n^2 / N}.   (12)

Now the algorithm can be rewritten using these sequences,

X_k = b_k^* \sum_{n=0}^{N-1} a_n b_{k-n},   k = 0, \ldots, N-1,   (13)

where b_k^* is the conjugate of b_k, the phase factor appearing in front of the sum in Eq. (11).

The Bluestein FFT is still an O(N log2 N) algorithm, but in practice it performs slower than the Cooley-Tukey FFT, making it less common in practical applications.
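A sketch of Eqs. (10)-(13) is shown below (illustrative code, not from the paper). The chirp-weighted sequence a_n is convolved with the two-sided chirp b_n using a zero-padded power-of-two FFT, and the result is multiplied by the phase b_k^*; the padding choice here is one common option, not the only one.

    import numpy as np

    def bluestein_dft(x):
        # Bluestein's chirp method: DFT of arbitrary length N via one convolution.
        x = np.asarray(x, dtype=complex)
        N = x.size
        n = np.arange(N)
        chirp = np.exp(1j * np.pi * n**2 / N)        # b_n from Eq. (12)
        a = x * np.conj(chirp)                       # a_n = x_n * e^{-j*pi*n^2/N}

        # circular convolution of length M >= 2N-1 reproduces the sum in Eq. (13)
        M = 1 << int(np.ceil(np.log2(2 * N - 1)))
        b_full = np.zeros(M, dtype=complex)
        b_full[:N] = chirp
        if N > 1:
            b_full[M - (N - 1):] = chirp[:0:-1]      # b_{-m} = b_m (the chirp is even in m)
        a_pad = np.append(a, np.zeros(M - N))
        conv = np.fft.ifft(np.fft.fft(a_pad) * np.fft.fft(b_full))

        return np.conj(chirp) * conv[:N]             # X_k = b_k^* * sum_n a_n b_{k-n}

    x = np.random.rand(11)                           # works for any N, prime or not
    assert np.allclose(bluestein_dft(x), np.fft.fft(x))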
One usage of the Bluestein algorithm is for Z-transforms. The transform can be rewritten as:

X_k = \sum_{n=0}^{N-1} x_n z^{kn},   k = 0, \ldots, M-1,   (14)

for any number N of inputs and M of outputs. One name coined from this is the Bluestein Chirp Z-Transform. For |z| = 1, b_n becomes a complex sinusoid, which causes the output to sweep frequencies, known as "chirping"; this is why it is known as a Chirp Z-Transform.
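For reference, Eq. (14) can be evaluated directly as written (a brute-force illustrative sketch; the values of z, N, and M are arbitrary choices). Bluestein's convolution trick computes the same M outputs in roughly O((N + M) log(N + M)) operations instead of O(NM).

    import numpy as np

    # Direct (slow) evaluation of the chirp Z-transform in Eq. (14).
    N, M = 16, 20
    x = np.random.rand(N)
    z = np.exp(-2j * np.pi * 0.3 / M)       # |z| = 1, so the output "chirps" along the unit circle
    k, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    X = (x * z ** (k * n)).sum(axis=1)      # X_k = sum_n x_n z^{k n}, k = 0, ..., M-1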
III. APPLICATIONS OF THE FFT

Briefly described at the beginning of this paper are some of the different applications where the FFT is used. These include audio and signal processing, communications, physics, optics, and chemistry, among many others.

The FFT is an essential tool in audio when trying to analyze the frequency content of digital signals. If an audio signal is viewed in time, it may resemble a sine wave, which can say a lot about its sound characteristics. Most other signals, though, appear as a random conglomeration of changing peaks, making it more difficult to tell what exactly is going on. Therefore, by viewing an FFT plot in conjunction with its transient (time-domain) plot, it is easy to see all of the frequency content as well as the full time-domain signal.

For example, take a digital frequency equalization filter. It would be very abstract to try to view solely the time-domain signal if you wanted to add some sort of band-stop or shelving filter. So instead, real-time FFTs are taken while the signal plays, which allows the filtering to be both seen and heard. A screenshot of a multiband shelving and low/high-pass filter is shown below.

Fig. 1. Stereo equalizer typical in audio processing, showing the spectrum.

Similar to audio, optics is a continuously growing field that uses the FFT to analyze frequencies and wavelengths. If a sine wave oscillates at 440 Hz, you will hear a mid-pitched tone, and an FFT plot will show a spike at that frequency. The color red, however, is not audible; taking an FFT of it gives a spike right at that color's oscillation frequency, around 450 THz (1 THz = 10^12 Hz).
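The 440 Hz example above is easy to reproduce numerically (an illustrative sketch; the 44.1 kHz sample rate and one-second duration are arbitrary choices). The FFT magnitude of the sampled tone has a single dominant peak at the 440 Hz bin.

    import numpy as np

    fs = 44100                               # sample rate in Hz
    t = np.arange(fs) / fs                   # one second of samples
    tone = np.sin(2 * np.pi * 440 * t)       # the 440 Hz sine from the text

    spectrum = np.abs(np.fft.rfft(tone))     # magnitude spectrum (real-input FFT)
    peak_hz = np.argmax(spectrum) * fs / len(tone)
    print(peak_hz)                           # ~440.0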
There are endless applications for the FFT in orthogonal frequency-division multiplexing (OFDM) communication technology. Digital signals being transferred wirelessly over multiple bands to other devices need some sort of transformation to be able to send and receive the data, and the FFT is the most efficient real-time transformation method available.

A more specific, yet simple, example of this is the wireless router, which works on either one or multiple frequency bands in the gigahertz range. It works by taking the inverse FFT of a data signal, which the sender can then modulate and transmit. The receiver grabs that signal, demodulates it, and then takes its FFT to decode the data.
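That round trip can be sketched in a few lines (a toy illustration only; real OFDM adds modulation details, cyclic prefixes, and channel handling that are omitted here, and the 64-subcarrier count is an arbitrary choice).

    import numpy as np

    num_subcarriers = 64
    bits_i = np.random.randint(0, 2, num_subcarriers)
    bits_q = np.random.randint(0, 2, num_subcarriers)
    data = (2 * bits_i - 1) + 1j * (2 * bits_q - 1)   # QPSK-like symbols on each subcarrier

    tx_signal = np.fft.ifft(data)       # sender: inverse FFT builds the time-domain OFDM symbol
    rx_symbols = np.fft.fft(tx_signal)  # receiver: FFT recovers the subcarrier symbols

    assert np.allclose(rx_symbols, data)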
IV. CONCLUSION

While the idea of the FFT has been around for a long time, it is still being developed and optimized today. In January 2012, researchers at MIT came out with a new algorithm, the "Nearly Optimal Sparse Fourier Transform," which functions by splitting the initial signal into smaller frequency bands to lower the amount of computation that goes into solving the FFT. This is most effective for audio and imaging, where spectra are generally sparse and can be analyzed more easily. In slicing the signals up continuously, the MIT researchers were able to determine the period of the waveform, which assisted in determining its most dominant frequency.

The paper proves that this method can achieve O(k log2 N), and O(k log N log2(N/k)) for the general case, where k is the number of non-zero Fourier coefficients. Both of these complexities improve upon the original O(N log2 N) case. Although not an ideal algorithm for everything, it does offer something completely fresh in the engineering world and opens the doors for more research in optimizing the FFT.

REFERENCES

[1] J. W. Cooley, P. A. W. Lewis, and P. D. Welch, "The Fast Fourier Transform and Its Applications," IEEE Transactions on Education, vol. 12, no. 1, pp. 27-34, Mar. 1969.
[2] H. S. Stone, "R66-50 An Algorithm for the Machine Calculation of Complex Fourier Series," IEEE Transactions on Electronic Computers, vol. EC-15, no. 4, pp. 680-681, Aug. 1966.
[3] S. C. Chan and K. L. Ho, "On indexing the prime factor fast Fourier transform algorithm," IEEE Transactions on Circuits and Systems, vol. 38, no. 8, pp. 951-953, Aug. 1991.
[4] G. Bruun, "z-transform DFT filters and FFT's," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 26, no. 1, pp. 56-63, Feb. 1978.
[5] C. Rader and N. Brenner, "A new principle for fast Fourier transformation," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 24, no. 3, pp. 264-266, Jun. 1976.
[6] L. Bluestein, "A linear filtering approach to the computation of discrete Fourier transform," IEEE Transactions on Audio and Electroacoustics, vol. 18, no. 4, pp. 451-455, Dec. 1970.
[7] L. Rabiner, R. Schafer, and C. Rader, "The chirp z-transform algorithm," IEEE Transactions on Audio and Electroacoustics, vol. 17, no. 2, pp. 86-92, Jun. 1969.
[8] H. Hassanieh, P. Indyk, D. Katabi, and E. Price, "Nearly Optimal Sparse Fourier Transform," arXiv:1201.2501v1, Jan. 2012.