
Thursday, December 10, 2015

A Primer on Fourier Analysis -- Part I

A LabKitty Primer is an introduction to some technical topic I am smitten with. They differ from most primers as to the degree of mathematical blasphemy on display. I gloss over details aplenty and mostly work only the simplest cases. I'll show mistakes and dead ends and things any normal person would try that don't work. Mostly I want to get you, too, excited about the topic, perhaps enough to get you motivated to learn it properly. And, just maybe, my little irreverent introduction might help you understand a more sober treatment when the time comes.
The Fourier transform doesn't only transform data, it transformed
civilization. In the guise of the FFT, it created a digital signal processing
revolution that took the technique out of the classroom and into everything
from GPS to Roombas. It is a staple of STEM education, and for good
reason: There is almost no field, no application, no problem that does not
yield to or benefit from application of Fourier analysis. If aliens have a Wiki
on Earthlings, Fourier and those blankets you can wear may well be the
sole entries.



That being said, Fourier analysis can be intimidating to the newcomer. The material has a fierce reputation, often taught as some holy relic impenetrable as death. That's where LabKitty comes in. Yes, there is much about Fourier analysis that only time and dedicated study can unravel. But the basic idea is surprisingly straightforward. And if there's anything textbook authors hate, it's admitting something is straightforward.



My goal here is simple: I want to get you to a place where you understand the vector of numbers returned by the Matlab (or Octave) function fft( ). I claim if you understand that, then you understand Fourier analysis. Alas, that goal will require a number of concepts that have more to do with computational details than underlying ideas. Those details take something simple and beautiful and make it seem harder and uglier than it really is.

As such, I have broken the Primer into two parts. In Part I, we meet the Fourier transform. What it is, what it means, how to calculate it if you don't care about computational efficiency, and why it's so dang useful. We'll also take a look at the equations to help connect my presentation to what you'll find in textbooks. In Part II, we'll get into how the pros do it. What a Nyquist frequency is, why there's only N/2 Fourier coefficients for a signal with N samples, what a power spectral density is, why it should have error bars, and so on. Answers to those questions are not beyond the mortal ken, but you should have a firm grasp of what Fourier is doing before you take a look under his hood, so to speak. Ergo, Part II.

My emphasis throughout is how the technique is used in computational applications, particularly in digital signal processing. If you're looking for existence and uniqueness proofs or Parseval's theorem or contour integration or Dirichlet conditions, you need to look elsewhere. But if you've ever wondered just what the heck Fourier was ranting about, read on.

I. The Basic Idea

We start simple. Consider the following: We have two sine waves. For now, let's make the frequency of one an integer multiple of the other. So 1 Hz and (a smaller amplitude) 8 Hz will do. When plotted over a period of one second, they look like this:

If we add these two sine waves together, we get the following composite
waveform:

The idea of Fourier analysis is this: Can we go in the other direction? That
is, given a function,* can we extract the collection of sine waves that
generates it? That's it. That's the concept in a nutshell.

* Yes, I mean any function, not just some obviously made-from-sine-waves example like the one I'm using here to ease us into the topic. While it's
possible to devise pathological functions that don't have a Fourier
representation (e.g., functions with an infinite number of discontinuities or
infinitely sharp edges), a remarkable result of Fourier analysis is that any
reasonable function can be generated using a sum of sine waves (although
sometimes the collection required is very large). I will not prove that claim
here, but it is true nonetheless.

Footnote: You may have noticed the time axis in the above plots goes from
0 to 100 and not 0 to 1. This is because I plotted amplitude versus sample
number and 100 is the number of samples I used to generate the
waveforms (I'll show code in a just a bit). I could have relabeled the time
axis to go 0 to 1, but sometimes it's useful to see how many samples are in
the waveform at a glance, and relabeling would hide that information. If this
bugs you, imagine the abscissa labeled time in 1/100ths of a second.

If all this seems trivial, note we will quickly segue from the "what" to the "how," and figuring out how to extract the sinusoidal components of a function took humans quite a while to grok. Fourier only announced his technique in 1807 -- many years ago, yes, but more than a century after Newton invented calculus. The concept of the Fourier transform eluded many smart people for a very long time indeed.

II. A Rock-Stupid Fourier Transform Algorithm

We seek to extract the collection of sine waves present in an arbitrary waveform. There are many complex theoretical aspects of this task. These
include issues such as existence (does a collection exist?) and uniqueness
(does more than one exist?) and convergence (how closely does the sum
reconstruct the original waveform?). Additionally, it took 150 years to devise
an efficient extraction algorithm that could run on primitive computer
systems like the Apollo lunar module and Microsoft Windows. It cannot be
denied Fourier analysis deserves some of its legendary mystique.

However, the basic algorithm is, well, embarrassingly stupid: We simply correlate the given waveform with every possible frequency sine wave we
can fit on top of it. Each coefficient gives us the amplitude of the sine wave
"present" at that frequency. (There are some hiccups in this story I'm
omitting -- I'll address those as we go along.)

If your memory is foggy, by the "correlation of two signals" we mean multiplying one waveform by the other on a point-for-point basis and
adding up the results. In vector-speak, it's a dot product. Vector-speak is
convenient for Matlab coding -- if we have vectors "foo" and "bar" then their
dot-product is obtained using dot(foo,bar). Vector-speak is also useful for
explaining how the Fourier transform works. Recall that the dot product
gives you the projection of one vector in the direction of the other. By taking
the dot product of a waveform with a sinusoid we are probing how much
the signal "projects" in that frequency. Think of the signal as throwing
shadows of different lengths in different directions, except in Fourier
analysis the "directions" are not literal and geometric, but rather exist in an
abstract multidimensional frequency-land.

Enough talk; let's see some code:

% Fourier components via correlation

% This code is incomplete -- do not use

tmax = 1;
t = [0.01:0.01:tmax];
N = length(t);

x = sin(1*2*pi*t) + 0.25*sin(8*2*pi*t);

nc = 10;
components = zeros(nc,1);

for index = 1:nc
    temp = sin(index*2*pi*t);
    components(index) = dot(temp,x);
end

% reconstruction of x

rx = zeros(size(x));
scale = 2/N;

for index = 1:nc
    temp = scale * sin(index*2*pi*t);
    rx = rx + components(index) * temp;
end

clf;
subplot(3,1,1);
plot(x);
axis([1 100 -1.2*max(x) 1.2*max(x)]);
a = axis; grid on;

subplot(3,1,2);
bar(components,'k');
axis([0 nc -1 1.2*max(components)]);

subplot(3,1,3);
plot(rx,'r');
axis(a); grid on;

Enough code; let's see some results. Here's the Fourier decomposition of
our 1 Hz + 8 Hz signal:

The trace in the top panel is the original signal. The middle panel is a bar
plot of the amplitude of the sine wave present at each frequency, which I'm
going to call the Fourier coefficients. Furthermore, I will refer to this result
as performing a Fourier decomposition. The Fourier coefficients collectively
define the signal's frequency domain representation, in contrast to the time
domain representation we started with. The red trace at bottom is a reconstruction of the data using the Fourier components (handy mnemonic: red for "reconstruction").

I had to scale the coefficients by 1/50 (= 2/N) to get a correct reconstruction (see code). We'll see why this is in Part II. For now, however, we can see
that the coefficients are qualitatively correct. Our algorithm found a sine
wave at 1 Hz, a smaller sine wave at 8 Hz, and none present at any other
frequency. Furthermore, the reconstructed waveform nicely matches the
given data.

So far, so good. I'll mention two caveats, and then I'll introduce a few
hiccups that require modification of our algorithm.

First, we are working here with sampled data (aka discrete data). That is, I
generated the two waveforms by sampling a 1 Hz and 8 Hz sine wave at a
finite number of places (100 places, to be specific). The waveforms look
like continuous functions but that's only because I'm "connecting the dots"
in the plots so to speak. This is so common when plotting data you
probably don't even think about it. However, there are important differences
when doing Fourier analysis of sampled data (i.e., in Matlab or Octave)
compared to Fourier analysis of continuous functions (i.e., pencil-and-
paper type calculations). We'll bump into these differences a little later.

Second, I made up the data in this example so I knew what sine waves
were present. As such, I knew when to stop correlating. However, suppose
we were given an arbitrary waveform to decompose. How would we know
when we have extracted all possible components? Surprisingly, this
seemingly-innocent question opens a monster can of worms. The short
answer is for data of length N, you can extract at most N/2 components.
(Our data was 100 samples, so we cannot extract more than 50 sine
waves.) Why the maximum is N/2 and not some sensible number like N (or
infinity) will be explained in Part II. Also, while I would be technically
justified in showing all N/2 components in the bar plot, I only showed the
first dozen or so; otherwise the bars get scrunched together. I knew what
frequencies were present, so plotting a bunch of zero components seemed
silly.

Now for some hiccups.

III. Some Hiccups

If we apply our algorithm to a literal sine wave, it stands to reason we should get back that sine wave. The correlation should light up for the
"probe" whose frequency matches that of the signal, and it should return
zero for all other probes (this quirk is called orthogonality in formal
treatments).
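You can check this orthogonality business directly at the Matlab prompt. A quick sketch, using the same 100-samples-in-one-second grid as the code above:

t = 0.01:0.01:1;                        % 100 samples over one second
dot(sin(3*2*pi*t), sin(5*2*pi*t))       % ~0: mismatched frequencies don't correlate
dot(sin(5*2*pi*t), sin(5*2*pi*t))       % = N/2 = 50: the matching probe lights up

(That N/2 popping up again is no accident; more on it in Part II.)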

Let's try it. Change the definition of x(t) to a 5 Hz sine wave in the code:

x = sin(5*2*pi*t);

Here's the output:


Cool. It works. Now let's try our algorithm on a 5 Hz sine wave that includes
a constant term that shifts the signal away from zero (this is called a DC
offset):

x = sin(5*2*pi*t) + 1.0;

Here's the output:

Hmm. That's not so good. Our algorithm still extracts the proper coefficient,
but the reconstruction is wrong -- it fails to include the offset. The good
news is that's easy enough to fix: We simply remove any DC before
performing the decomposition (this is called demeaning the data) and we
add that offset back at the end when we reconstruct the data. Bullet
dodged.
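In code, the fix amounts to a few lines (a sketch here; the full updated listing below includes them):

dc = mean(x);      % estimate the DC offset
dmx = x - dc;      % demean before correlating
% ... decompose and reconstruct dmx as before ...
rx = rx + dc;      % restore the offset at the end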

Next, let's decompose a 5 Hz sine wave phase-shifted by a quarter wave:

x = sin(5*2*pi*(t-0.25));

Here's the output:


Yikes. Something has gone dreadfully wrong.

Alas, a sine wave is not defined only by a frequency; a sine wave is defined by a frequency and a phase. So, in general, we need to correlate
not just sine waves of all possible frequencies, but also of all possible
phase values. That seems like a show stopper. There's a finite number of
frequencies that fit on top of our data -- 1 Hz, 2 Hz, and so on. But there's
infinitely many different phase values at each frequency.

It would appear we are screwed. Are we?

Trigonometry to the rescue! Somewhere in a big list of trig identities you were supposed to memorize back in high school, there's one that states a
sine wave of any given phase and frequency can be constructed by adding
together a (zero-phase) sine wave and a (zero-phase) cosine having the
same frequency but different amplitudes. In short, instead of computing an
amplitude and a phase at each frequency, we compute an amplitude of a
sine and an amplitude of a cosine at each frequency. We compute twice as
many coefficients as before and use both sines and cosines (which I will
collectively call "sinusoids") in the reconstruction.
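For the record, the identity in question is (with A the amplitude and φ the phase):

A ⋅ sin(ωt + φ) = [A⋅cos φ] ⋅ sin(ωt) + [A⋅sin φ] ⋅ cos(ωt)

The two bracketed amplitudes are precisely the sine and cosine coefficients the updated algorithm below extracts at each frequency.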

Here's the updated algorithm:

% LabKitty Fourier Primer
% FT by correlation
% (www.labkitty.com)

tmax = 1; % must be 1

t = [0.01:0.01:tmax];
N = length(t);

x = sin(5*2*pi*t);
% x = x + 0.4*sin(15*2*pi*t);
% x = sin(5*2*pi*(t-0.25));
% x = x + 1;

% demean the data

dc = mean(x);
dmx = x - dc;

nc = N/2;
bk = zeros(nc,1);
ak = zeros(nc,1);

for k = 1:nc
    temp = sin(k*2*pi*t);
    bk(k) = dot(temp,dmx);
    temp = cos(k*2*pi*t);
    ak(k) = dot(temp,dmx);
end

% reconstruction of x

rx = zeros(size(x));
scale = 2/N;

for k = 1:nc
    temp = scale * cos(k*2*pi*t);
    rx = rx + ak(k) * temp;
    temp = scale * sin(k*2*pi*t);
    rx = rx + bk(k) * temp;
end

rx = rx + dc;

subplot(3,1,1);
plot(x);
axis([1 N -2 2]);
a = axis; grid on;

subplot(3,1,2);
power_spectral_density = (bk.^2+ak.^2)/N;
bar(power_spectral_density,'k');
axis([0 nc -0.1 1.2*max(power_spectral_density)]);

subplot(3,1,3);
plot(rx,'r');
axis(a); grid on;

When we apply this to the phase-shifted 5 Hz signal from the previous troublesome example, we get the following correct result:

The reconstruction from the new algorithm matches the data. Note I'm now bar-plotting the sum of the squares of the sine and cosine coefficients at each frequency (divided by N) rather than their individual amplitudes. I'm also now plotting all N/2 of these, as is standard practice. This quantity is called power and this kind of plot is called a power spectral density. We will have much more to say about the PSD in Part II.

Footnote: I'm using the variable names ak and bk for the vectors of cosine and sine coefficients in the code. These are standard names used in the
textbooks. We shall bump into them a bit later.

One last test. Let's change things up a bit by using a 5 Hz + 15 Hz signal. We indeed get a correct decomposition and reconstruction of this signal (results not shown; you can try it out for yourself). However, now we make the signal twice as long by changing tmax from 1 to 2. Here are the results:

Ack! The reconstruction is now twice the amplitude of the original data!
Also, the Fourier components appear to be repeating. There's correct
components at 5 Hz and 15 Hz, but then two ghosts appear at 85 Hz and
95 Hz! What is going on??

It's no coincidence that the data is twice as long and the results are 2x as wrong. (Indeed, tripling the length of the data produces six components and a reconstruction 3x the correct amplitude.) The short answer is it's not the number of samples (divided by 2) that determines the number of frequencies we can extract, it's the number of samples per unit time (divided by 2). In signal processing lingo, this is called the sampling frequency. One thousand samples per second is aka a sampling frequency of 1 kHz and we can extract 500 components. Forty samples per second is aka a sampling frequency of 40 Hz and we can extract 20 components. And so on. We have been using 100 samples per second in the examples, so we can extract 50 components. However, changing tmax to 2 increased N to 200, and the code tried to extract 100 components, not 50. Half of these components are ghosts and including the ghosts broke the reconstruction.

We'll see how to fix this properly in Part II. For now, I just make a note in
the code that tmax must be 1. Again, you can run the code yourself and
verify we get the correct result.

That's it. That is truly the concept of the Fourier transform in a nutshell. All
that's left is improving computational efficiency and chasing around the
theoretical implications. As I promised in the introduction, I will not get into
these topics. However, I should at least show you some equations because
that will help you connect my explanation to a proper treatment you will find
in textbooks and elsewhere. Additionally, I have thus-far been rather
cavalier with my terminology which is no doubt offending the purists.
Showing some equations will provide me an opportunity to define things
more carefully.

IV. The Equations of Fourier Analysis

A purist would point out what I have been calling the Fourier transform
should really have been called the Discrete Fourier Transform (DFT) which
is more like a Fourier series and less like a Fourier transform. A purist
would also make a distinction between the Discrete Fourier Transform and
the Discrete Time Fourier Transform (DTFT), and then note the Fast
Fourier Transform (FFT) is just an efficient way of computing the DFT. A
purist might then start to drone on about Fourier transforms on arbitrary
locally compact abelian topological groups, and whoever put that on the
Fourier transform Wikipedia page needs to be slapped. There's a
difference between "helpful" and "showing off," which is something a purist
doesn't understand.

You are forgiven if you find this zoo of terminology confusing. Let's see if
we can't organize the various Fourier animals into their proper cages.
Those cages go as follows:

Fourier Series (continuous, periodic)
Fourier Transform (continuous, non-periodic)
Discrete Transforms -- DFS, DTFT, DFT (discrete, quasiperiodic)
Fast Fourier Transform (discrete, quasiperiodic)

I've included the type of function each method can analyze in parentheses.
Just to be clear, when I say "continuous" I mean pencil-and-paper kind of
solutions. In Matlab or Octave (or any software) we work with "discrete"
data, aka "sampled" data, aka "digital" data, aka a "sequence." (Also, just
to be clear, I use the terms "data" and "waveform" and "signal" and
"function" and "input" interchangeably.) Keep in mind any equation I show
might be written slightly differently in your textbook or in other sources.
Stay frosty.

Footnote: The level of math unpleasantness ratchets up a few notches in what follows. As such, it would be a good time to go get some beverage, perhaps in your favorite LabKitty beverage container.

Fourier Series (continuous, periodic)

We begin by taking a step back to Fourier's original idea. In Fourier's day there were no computers; he worked with an ordinary continuous function,
say of time, which I will write x(t). He considered a periodic function, that is,
there exists some fixed number T such that x(t) = x(t+T) for all t. In
particular, we seek the smallest value of T for which this is true, and we write that special value T0 and call it the fundamental period. Sometimes it's more useful to talk about frequency than period, so we define f0 = 1/T0, which is called the fundamental frequency. Frequency tends to get multiplied by 2π in the equations, so we define the fundamental angular frequency ω0 = 2πf0 so we don't have to keep writing 2π everywhere. The units of f0 are cycles-per-second (aka "Hertz"); the units of ω0 are radians-per-second (which, alas, nobody gave a snazzy nickname).

Footnote: Multiples of the fundamental frequency are called harmonics, and you will sometimes hear Fourier analysis referred to as harmonic
analysis. Also, Fourier components are collectively called a spectrum, and
you will sometimes hear Fourier analysis referred to as spectral analysis.

A continuous periodic function can be represented as a Fourier series. The equations for Fourier series can be written in one of three forms. The three
forms are equivalent -- indeed, each can be derived from the others -- but
they emphasize different aspects of the analysis. Feel free to pick your
favorite.

Fourier Series -- Trigonometric Form

With terminology and notation in hand, we can basically just write down an
equation analogous to the rock-stupid algorithm for Fourier decomposition
we used earlier:

x(t) = a0/2 + Σk=1,∞ [ ak ⋅ cos(kω0t) + bk ⋅ sin(kω0t) ]

This is the trigonometric form. The leading coefficient a0/2 is the DC offset (if present), to which are added sine and cosine components at multiples of the fundamental frequency (some authors just write a0 and absorb the factor of 1/2 in the coefficient definition). Earlier I noted we can extract a maximum of N/2 components from data containing N samples. That limit does not apply for a continuous function, which is why the sum here goes to infinity.

Footnote: The odd formatting of the summation limits is just my HTML incompetence; the upper limit (∞) should be written at the top of the summation sign, but I couldn't figure out how to do that.

As we know, the coefficients ak and bk come from correlation, which in continuous time is an integral:

ak = 2/T0 ∫T0 x(t) ⋅ cos(kω0t) dt

bk = 2/T0 ∫T0 x(t) ⋅ sin(kω0t) dt

The integral is over one period. Interestingly, it doesn't matter where you
begin and end, as long as you integrate over one complete period.
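If you'd like to see these formulas earn their keep, here's a quick numerical check (my sketch, approximating the integrals with a Riemann sum): a 1 Hz square wave famously has bk = 4/(πk) for odd k, and all ak = 0.

T0 = 1; w0 = 2*pi/T0;
dt = 1e-4; t = dt/2:dt:T0;                  % midpoint grid over one period
x = sign(sin(w0*t));                        % square wave with period T0
k = 3;
a3 = (2/T0) * sum(x .* cos(k*w0*t)) * dt    % ~0
b3 = (2/T0) * sum(x .* sin(k*w0*t)) * dt    % ~4/(3*pi) = 0.4244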

Footnote: I should note I haven't derived these equations; I've simply written them down. However, the derivation is not as difficult as you might think. See any textbook on Fourier analysis for details.

Fourier Series -- Explicit Phase Form

Earlier, we introduced cosines into the decomposition algorithm as a way of handling phase. It's possible to work backwards from the trigonometric form given above to arrive at an explicit phase form of the equation for Fourier series:

x(t) = c0 + Σk=1,∞ ck ⋅ cos(kω0t − θk)

Here, c0 = a0/2, ck = √(ak² + bk²), and θk = arctan(bk/ak). My textbooks all use cosine and not sine in the phase form. I don't know why.

Fourier Series -- Complex Exponential Form

It is possible to obtain a third version of the Fourier series equation by applying Euler's identity exp(it) = cos(t) + i⋅sin(t) to the trigonometric form (to be clear, exp(•) = e^• is just a way of writing the exponential function that's easier to read). Leaving the details as an exercise, we obtain the complex exponential form:

x(t) = Σk=−∞,∞ Ck ⋅ exp(ikω0t)

Note the new summation limits (−∞, ∞). The constants Ck are given by:

Ck = 1/T0 ∫T0 x(t) ⋅ exp(−ikω0t) dt
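If you carry out the exercise, you'll also find the new coefficients are just the old ones repackaged. For k ≥ 1:

Ck = (ak − i⋅bk)/2     C−k = (ak + i⋅bk)/2     C0 = a0/2

so nothing is gained or lost moving between forms.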

Footnote: Note k can be negative in the summation. Apparently, there is such a thing as a "negative" frequency. Perhaps you are thinking: "What
the hell is a negative frequency?" (See: Interlude: What the Hell is a
Negative Frequency?) For now, think of negative frequency as a
bookkeeping trick to make the complex exponential give the same results
as the other Fourier forms. A decomposition requires two players: (i) sines
and cosines, (ii) frequency and phase, or (iii) positive and negative
frequencies. It's like Yoda warned: Always two there are, no more, no less.
You're free to work with whichever pair you like.

Footnote: Although you're free to work with whichever pair you like, I
beseech thee to pick the complex exponential form as your favorite even
though it might seem unnecessarily complicated. The sooner you become
comfortable with complex exponentials, the easier your life will become,
not just in Fourier analysis but in all areas of mathematics.

Homework: Poke around a textbook on Fourier analysis and locate these equations. Bonus points: Derive each equation starting with one of the others (using trig and Euler's identity, etc.).

Interlude: What the Hell is a Negative Frequency?

What the hell is a negative frequency?



Here, somebody will haughtily sniff and tell you a negative frequency is
obviously the same thing as a positive frequency except the signal is
moving backwards in time. Hey, thanks a million Wikipedia. You're a peach.

I claim an actual helpful answer is not impossible, but it requires a visit to the complex plane. Recall Euler's identity:

exp(iω) = cos(ω) + i sin(ω)

If we make this a function of time -- exp(iωt) -- we generate a phasor:

exp(iωt) = cos(ωt) + i sin(ωt)

Geometrically, a phasor is a unit arrow spinning around the complex plane. The tail of the vector is stuck at the origin, and the tip traces out a unit circle:

From now on, when you see a complex exponential, practice shrugging and saying: Meh, little arrow spinning around in the complex plane. The familiar (i.e., real-valued) trigonometric functions are generated by the phasor's projections on the real and imaginary axes (shown here in red and blue). As the phasor moves around the circle, its shadow on the real axis sweeps out a cosine (red) and its shadow on the imaginary axis sweeps out a sine wave (blue). The speed of the spinning is determined by the frequency, ω (bigger = faster). If the frequency is positive, the arrow moves counterclockwise (convince yourself of this by substituting a few values of t).

If the frequency is negative, the arrow moves clockwise. That's it. That's all
a "negative frequency" is.

Fourier Transform (continuous, non-periodic)

Fourier series can only be used to decompose a periodic function. If we want to decompose a non-periodic function, we must use the Fourier
transform. We obtain the latter from the former using a magic trick. It's a bit
of work, but it's a nice demonstration of a standard technique in
mathematics in which you create a new tool by cleverly modifying an old
one.

First, a reminder: Suppose we have a function f(x) defined between two points x=a and x=b. If we divide the interval [a b] into N subintervals each having length Δx = (b−a)/N, then the Riemann sum Σk=1,N f(a + kΔx) ⋅ Δx is approximately the area under f(x) between a and b. In the limit as N goes to infinity, the Riemann sum defines the integral ∫a,b f(x) dx and is exactly the area under f(x) between a and b. Go look up the Riemann sum (or at least draw a picture) if you are shaky on the concept.
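Or, a two-line refresher at the Matlab prompt (a sketch; the true area under x² on [0, 1] is 1/3):

f = @(x) x.^2;
N = 1000; a = 0; b = 1; dx = (b-a)/N;
sum(f(a + (1:N)*dx)) * dx      % ~0.3338, converging to 1/3 as N grows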

Now, back to the Fourier transform.

Suppose x(t) is a non-periodic function. Suppose also that x(t) is time-limited, that is, x(t) = 0 for | t | > τ for some number τ (we'll let τ go to infinity eventually, so this isn't a serious restriction; also, τ is supposed to be the Greek letter tau, which in some browsers looks like a miniature T. FYI).

Now, form a periodic extension of x(t) by cutting and pasting copies of x(t)
to the left and right of itself forever (be sure to include a little zero padding
between copies). Here's an example of what I mean:

Let's call the periodic extension xP(t). Note xP(t) is periodic. Ergo, xP(t) has
a Fourier series:

xP(t) = Σk=−∞,∞ Ck ⋅ exp(ikω0t)

Ck = 1/T0 ∫T0 xP(t) ⋅ exp(−ikω0t) dt

We can evaluate the integral anywhere we like, so long as it's over one full period. Let's use −T0/2 to +T0/2 (we assume there's sufficient zero padding such that −T0/2 < −τ and τ < T0/2). Inside [−T0/2, T0/2], xP(t) is x(t) plus a bunch of zero padding that doesn't contribute anything to an integral. So we can write:

Ck = 1/T0 ∫−T0/2,+T0/2 x(t) ⋅ exp(−ikω0t) dt

(Note x(t) is back in the integrand.) Outside [−τ, τ], x(t) = 0. So we can
extend the integral limits:

Ck = 1/T0 ∫−∞,∞ x(t) ⋅ exp(−ikω0t) dt

Next, we introduce a definition from outta nowhere. Define:

X(ω) = ∫−∞,∞ x(t) ⋅ exp(−iωt) dt

I'll explain why we do this in a moment. For now, note Ck = 1/T0 X(kω0).
So, going back to the very first expression, we can write the series for xP(t)
as:

xP(t) = 1/T0 Σk=−∞,∞ X(kω0) ⋅ exp(ikω0t)

But 1/T0 = ω0/2π. Substituting:

xP(t) = ω0/2π Σk=−∞,∞ X(kω0) ⋅ exp(ikω0t)

Or, moving ω0 inside the sum:

xP(t) = 1/2π Σk=−∞,∞ X(kω0) ⋅ exp(ikω0t) ⋅ ω0

Now, we don't want the Fourier decomposition of xP(t), we want the decomposition of x(t). No problemo: x(t) is the limit of xP(t) as T0 goes to
infinity (make sure you understand this -- go back and look at the picture).
We don't "stretch out" or deform x(t) to increase T0; we simply insert more
zero padding. The copies of x(t) get pushed farther and farther out and, in
the limit, they never happen at all. That is, xP(t) "becomes" x(t). Also, note
as T0 goes to infinity, ω0 (= 2π/T0) goes to zero (i.e., becomes
infinitesimal). This is the crux of the magic trick.

Let's press on. Taking the limit of both sides of the previous expression as
T0 goes to infinity we obtain:

x(t) = limω0→0 1/2π Σk=−∞,∞ X(kω0) ⋅ exp(ikω0t) ⋅ ω0

What does the RHS equal? Let's rewrite it using Δω instead of ω0 (it
doesn't matter what symbol we use; I'm just trying to make the expression
look like something familiar):

x(t) = limΔω→0 1/2π Σk=−∞,∞ X(kΔω) ⋅ exp(ikΔωt) ⋅ Δω

Hopefully you now recognize the RHS as the Riemann sum of 1/2π X(ω)
exp(iωt) -- this is why I introduced the definition. Taking the limit, we
therefore obtain:

x(t) = 1/2π ∫−∞,∞ X(ω) ⋅ exp(iωt) dω

where, as before:

X(ω) = ∫−∞,∞ x(t) ⋅ exp(−iωt) dt

This is the Fourier transform. Whereas Fourier series is based on an (infinite) sum, the Fourier transform is an integral expression. In one sense,
X(ω) simply plays the role of Ck. But in another sense, X(ω) is not some
lowly coefficient -- it's a continuous function of frequency. A bona fide
function, no less than x(t) itself! Together they form a Fourier transform
pair. Think of x(t) and X(ω) as doppelgängers. Mirror images. Equal
partners. Equivalent definitions of your function -- one in the time domain
and other in the frequency domain. There's an elegant symmetry to the
expressions for x(t) and X(ω). And unlike Fourier series, there is no
requirement that x(t) be periodic to apply the Fourier transform.
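A sanity check you can run yourself (my sketch, not part of the derivation above): for a Gaussian x(t) = exp(−t²), the transform has the tidy closed form X(ω) = √π ⋅ exp(−ω²/4), so we can approximate the integral numerically and compare:

dt = 0.01; t = -10:dt:10;                 % wide enough that x(t) ~ 0 at the ends
x = exp(-t.^2);
w = 2;
Xw = real(sum(x .* exp(-1i*w*t))) * dt    % numerical X(2) (imag part ~0 here)
sqrt(pi) * exp(-w^2/4)                    % closed form: 0.6520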

Footnote: You'll see many variations of what I have written as the Fourier
transform. Some authors use a negative exponent in the definition of x(t)
and a positive exponent in X(ω). Some authors move the leading 1/2π to
X(ω), or put a 1/√2π on both for total symmetry bonus points (you can use
whatever leading constant you like as long as their product equals 1/2π).
Some people go back to using f instead of ω which makes the leading
constant equal to 1 instead of 1/2π. It's all very confusing.

Discrete Transforms -- DFS, DTFT, DFT (discrete, quasiperiodic)

Ultimately, we want to do Fourier analysis on a computer. "On a computer" means working with discrete data, and discrete data has two characteristics that create Fourier weirdness.

First, discrete data is discrete.* So far we have considered a continuous function x(t) when discussing equations. Now, the function of interest is
described by a list -- more properly a vector -- of numbers. I will write this
as x[n], where n = 0,…,N-1 are N samples of data (The standard textbook
convention is to index from 0 to N-1; this would of course go 1 to N in
Matlab because sensible people start vector indexing at 1.) We will assume
x[n] to be real, although in general x[n] can be complex and none of the
math changes. We will also assume the samples of x[n] are evenly spaced
because nonuniform sampling substantially complicates matters.

* I should be more careful with terminology: By "discrete" I mean sampled (i.e., by the sampling frequency of your data acquisition system). That is,
time is discrete. Discrete data is also discrete in amplitude simply because
computers have a finite word length. This is more properly called
quantization. In what follows, we're concerned with the effects of sampling,
not quantization (although the latter can sometimes create problems due to
round-off error).

Second, discrete data is by definition non-periodic. Any real-world data has finite length. It has a beginning, which we call x[0], and an end, called x[N-1]. As such, there is no (integer) T such that x[n+T] = x[n] for all n (try it! Remember, x[n] = 0 for n < 0 and n > N-1).

Since x[n] is non-periodic, it seems reasonable we should attack it like we did the Fourier transform. We form a periodic extension of x[n], find its
Fourier series, then take the limit as the extension period goes to infinity.
There is indeed a discrete-time version of Fourier series called the Discrete
Fourier Series (DFS). There is also a discrete-time version of the Fourier
transform called the Discrete Time Fourier Transform (DTFT). So far so
good. As in the continuous case, the DTFT returns the spectrum of x[n] as
a continuous function of frequency, X(ω). A continuous function of anything
is useless on a computer. Ergo, we sample X(ω) at N points and use that in
our code just like we did earlier with sine and cosine.

It seems straightforward enough. But something terrible happens.

Sampling reduces X(ω) to a finite collection of components. It's not hard to prove, but a picture is worth a thousand proofs:

Here the traces stand in for three different X(ω) returned by the DTFT. When we sample these, we lose information. Squint and look at the dots. The three sampled waveforms (dots) are identical even though the original waveforms are not. This is called aliasing. You can construct at most N/2 unique frequency components from any continuous function sampled at N locations; if you try to generate more, you start repeating components you already have. We saw an example of this back in Section III when we applied our algorithm to two seconds of data.
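You can watch aliasing happen at the Matlab prompt (a sketch, using 100 samples per second as in our earlier examples):

fs = 100; t = (1:100)/fs;
x5  = sin(2*pi*5*t);          % 5 Hz sampled at 100 Hz
x95 = sin(2*pi*95*t);         % 95 Hz sampled at 100 Hz
max(abs(x5 + x95))            % ~1e-12: 95 Hz lands on the samples as -5 Hz

This is exactly where those ghosts at 85 Hz and 95 Hz came from.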

In the continuous case, a non-periodic input generates infinitely many frequency components. In the discrete case, we can only generate a finite
number of components whether x[n] is periodic or not. As such, there's no
point in constructing a periodic extension. We bypass that mess by treating
x[n] as if it were periodic. Because x[n] is (assumed) periodic, we get a series, not an integral. Because of aliasing, the series is finite, not infinite.

What results is the Discrete Fourier Transform. Analogous to the continuous case, we define Ω0 = 2π/N. We replace ω0 with Ω0, x(t) with x[n], X(ω) with X[k], and an integral over T0 with a sum over N:

x[n] = 1/N Σk=0,N−1 X[k] ⋅ exp(ikΩ0n)

where

X[k] = Σn=0,N-1 x[n] ⋅ exp(−ikΩ0n)

This is what my rock-stupid algorithm above is doing, albeit expressed here using fancy mathematics. Whereas I summed a sine and a cosine series,
the DFT merges these into a single complex exponential. Whereas I
summed N/2 terms, the summation on k in the DFT goes from 0 to N-1, for
a total of N terms. However, the results are the same.
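And if "the results are the same" sounds like something you should verify rather than take on faith, here's a sketch comparing the DFT sum, written out literally, against Matlab's fft( ):

N = 8; n = 0:N-1;
x = sin(2*pi*n/N) + 0.5;                       % any test vector will do
X = zeros(1,N);
for k = 0:N-1
    X(k+1) = sum(x .* exp(-1i*2*pi*k*n/N));    % X[k], straight from the formula
end
max(abs(X - fft(x)))                           % ~1e-15: identical up to round-off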

I'll have more to say about the DFT in Part II, for the DFT is the rock upon
which Matlab's fft( ) function is built (and that's just the beginning of the
bizarre goings on inside the DFT). So for now, a summary: Sampled data is
finite and therefore non-periodic. Ergo, a proper Fourier representation
requires infinitely many frequencies. But sampled data can only represent a finite collection of frequencies. Something has to give. The compromise
is the DFT does an implicit periodic extension behind your back. This is
what I mean when I say the DFT works on "quasiperiodic" data (a term I
made-up -- don't expect to find it elsewhere). Some authors use DFS when
x[k] is literally periodic and DTFT when x[n] includes an explicit periodic
extension, reserving the term DFT for when x[n] is merely assumed to be
periodic. Those distinctions make for much confusion when encountering
Fourier for the first time. To simplify, I lump them all under a single moniker:
"Discrete Fourier Transform." Purists and Wikipedia crazies will disapprove.
Consider yourself warned.

The Fast Fourier Transform (discrete, quasiperiodic)

The Fast Fourier Transform (FFT) is just a computationally efficient implementation of the DFT, invented (some say "rediscovered") by J.W.
Cooley and J.W. Tukey in the 1960s. Nobody computes a DFT using
correlation anymore. That being said, to say the FFT is "just" a fast version
of the DFT does it a grave disservice. The FFT is to the DFT what the SR-
71 is to the Wright brothers' biplane. The FFT put the DFT on the map. It
brought Fourier transforms to the masses.

How fast is it? Crazy fast.* For all intensive porpoises, the FFT is free,
computationally speaking. In any significant application that includes the
FFT, some other part of the processing will be your speed bottleneck.
Guaranteed. Cooley and Tukey should have won a Nobel prize for the FFT.
They should be household names.

* Quick back-of-the-envelope demonstration: Assuming a 1 µsec processor cycle, the DFT of a length N = 1,000,000 waveform would take 2 weeks. The FFT would take about 30 seconds.
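You can eyeball this on your own machine (a sketch; exact timings vary with hardware):

x = randn(1, 2^20);        % ~10^6 samples
tic; X = fft(x); toc       % milliseconds on modern hardware, not weeks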

We won't get into the FFT algorithm, either presently or in Part II. IIRC,
Matlab's fft( ) uses the Cooley and Tukey approach (called the Radix-2)
probably with some proprietary tweaks. For us it will remain a black box.
We are interested in what comes out of the FFT, what the coefficients tell
us and how they can be used. However, people often (incorrectly) use
"FFT" as a synonym for the DFT, or even as a synonym for "Fourier
analysis," so it's good you know how it fits into the big picture. The big
picture is: FFT = fast DFT.

Next time on A LabKitty Primer on Fourier Analysis

The goal when I started writing A LabKitty Primer on Fourier Analysis was to explain the vector of numbers returned by the Matlab/Octave fft( ) function. If you understand those, I claim, you understand Fourier analysis. Our goal has not changed. And while we have come far, we are not there yet. And, alas, we're not going to get there today. In describing fft( ), I quickly realized there's a host of new concepts we must deal with. Aliasing. Nyquist frequency. Vector folding. Windowing. These concepts will need proper deciphering before we get to the end of our journey, before we finally chuck the ring into Mt. Doom. As Peter Jackson has taught us: Why do anything in one part when making a trilogy triples the box office?

I do not have a trilogy, but I do have a sequel. FFT II: Electric Boogaloo.
Nerds in Paradise. FFT Harder. Whatever. But feel free to pause here for a
rest if need be. Like Frodo in Riverdance. Sample the lamba bread. Visit
the wenchkeep. Enjoy a beverage in a quality LabKitty beverage container.
Can't stress that last part enough.

When you are ready to continue, click here: A LabKitty Primer on Fourier
Analysis Part II. More wonders await.

Posted by LabKitty at 9:37 AM

Labels: LabKitty primer, matlab

2 comments:
Kitten September 28, 2016 at 7:42 PM

Hi. Nice post! I'm getting multiples in amplitude in my reconstructed signal, as you noted happens. I'd
like to know what you will write about this in Part II, specifically how you said you'd explain further why it
is happening and what the proper solution is. I see you gave a couple hints, but they don't lend to me a
full explanation to the problem in my situation. I was looking at a couple different signals and in both of
them I'm getting a multiple of 5 when I look at 50 constants. In one, I have a sampling rate of 1kHz and
10 samples. In the other, I have a sampling rate of 200 Hz and 20 samples. I'd appreciate it if you can
explain further why this is happening and what the proper solution is. (I'll only look at the first # of
samples/2 constants for now!) Thanks!

LabKitty October 1, 2016 at 11:31 AM

The short answer is: If your sampling rate is Fs (say 200 Hz), you should not attempt to
reconstruct any components beyond Fs/2 (100 Hz, in your example). The cutoff is called the
Nyquist frequency and the multiple samples are due to something called aliasing (which is
introduced in the figure in which three different sinewaves all produce the same sampled
waveform -- multiple frequencies in the reconstructed signal is kinda this effect in reverse).

Alas, Part II of the Primer has been delayed indefinitely. I've been posting shorter articles but
these Primers take *forever* to write!

Thanks for your comment.
