Math 202 FS
We now study the response of differential equations to periodic inputs. A periodic function
is one which repeats itself perfectly with a fixed frequency. That is, there is a period p such
that
f (t + p) = f (t), ∀t.
[Figure: graph of a periodic function f(t); the pattern between t and t + p repeats, with ticks at p, 2p and 3p on the t-axis.]
We will see how all such functions can be expressed in terms of sines and cosines.
For example, the function f(t) = t^2 - 1, t \in [-1, 2), extended periodically (here the period is p = 3):
[Figure: the periodic extension, shown for -1 \le t \le 11.]
Example 27.2 The function f(t) = e^{i\omega t} is periodic with p = 2\pi/\omega since
\[
f(t + p) = e^{i\omega(t + 2\pi/\omega)} = e^{i(\omega t + 2\pi)} = e^{i\omega t}\, e^{i 2\pi} = e^{i\omega t}\cdot 1 = f(t).
\]
In general, a signal with frequency cHz has period 1/c and angular frequency 2π c.
Example 27.3 The function f (t) = sin(2t) − 2 cos(3t) is periodic.
Solution. The component sin(2t) repeats every time t is increased by π, so that component has period π. The component cos(3t) repeats every time t is increased by 2π/3, so that component has period 2π/3. The combination of the two repeats when t is increased by a p which is a multiple of both π and 2π/3. The least such multiple is 2π, so the period is 2π and the fundamental frequency is c = 1/(2π) Hz.
Putting these together, every time t is increased by 2π, the sine component has undergone two complete cycles, and the cosine component three complete cycles. Thus the period is 2π, the frequency is c = 1/(2π) Hz and the angular frequency is ω = 1.
[Figure: sin(2t) and 2 cos(3t) plotted over 0 ≤ t ≤ 4π.]
Notice that the function f is built of just two repeating sinusoidal functions, yet exhibits a
more complex waveform.
The fundamental frequency is the greatest common divisor of all the frequencies in the signal; in this case, the two frequencies are 1/π and 3/(2π), and these other frequencies are harmonics.
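As a quick numerical sanity check (a sketch in Python/NumPy rather than the Matlab used later in these notes), one can confirm that f repeats after 2π but not after the component periods π or 2π/3:

```python
import numpy as np

f = lambda t: np.sin(2 * t) - 2 * np.cos(3 * t)
t = np.linspace(0, 20, 2001)                 # a grid of test points
for p in [np.pi, 2 * np.pi / 3, 2 * np.pi]:
    # True only when shifting by p leaves the whole signal unchanged
    print(round(p, 3), np.allclose(f(t + p), f(t)))
```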
28 Representing a periodic function
Recall the square wave f(t) = (-1)^n for t \in [n\pi, (n+1)\pi). As n alternates between even and odd integers, the value flips between 1 and −1. This function has period 2π.
[Figure: the square wave f(t), shown for 0 ≤ t ≤ 6π.]
When the DE
\[
y'' + k^2 y = f(t)
\]
is subject to square wave forcing, resonance is seen for odd integers k. We will now see why.
Then
\[
\mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\,dt
= \int_0^p e^{-st} f(t)\,dt + \int_p^{2p} e^{-st} f(t)\,dt + \int_{2p}^{3p} e^{-st} f(t)\,dt + \cdots
= \sum_{n=0}^{\infty} \int_{np}^{(n+1)p} e^{-st} f(t)\,dt.
\]
On each of these intervals the periodicity of f can be exploited. Make the substitution
τ = t − np so that
\[
\int_{t=np}^{(n+1)p} e^{-st} f(t)\,dt
= \int_{\tau=0}^{p} e^{-s(\tau+np)} \underbrace{f(\tau+np)}_{=f(\tau)}\,d\tau
= \int_0^p e^{-s\tau}\,(e^{-sp})^n f(\tau)\,d\tau.
\]
Then, returning to the calculation,
\[
\mathcal{L}\{f(t)\} = \sum_{n=0}^{\infty} \int_{np}^{(n+1)p} e^{-st} f(t)\,dt
= \sum_{n=0}^{\infty} \int_{0}^{p} e^{-s\tau}\,(e^{-sp})^{n} f(\tau)\,d\tau
= \int_{0}^{p} e^{-s\tau} f(\tau)\,d\tau \;\sum_{n=0}^{\infty}(e^{-sp})^{n}
= \int_{0}^{p} e^{-s\tau} f(\tau)\,d\tau \;\frac{1}{1 - e^{-sp}}.
\]
For the square wave, where p = 2\pi, evaluating the integral over one period gives
\[
\mathcal{L}\{f(t)\} = \frac{(1 - e^{-s\pi})}{s\,(1 + e^{-s\pi})}.
\]
This calculation is a bit painful, but we have got somewhere! We are used to Laplace transforms that look like
\[
\frac{c_1}{s - a_1} + \frac{c_2}{s - a_2} + \cdots
\]
so that
\[
\mathcal{L}^{-1}\left\{\frac{c_1}{s - a_1} + \frac{c_2}{s - a_2} + \cdots\right\} = c_1 e^{a_1 t} + c_2 e^{a_2 t} + \cdots.
\]
The key to understanding the Laplace transform of the square wave is to notice that the denominator is zero at s = 0 and whenever e^{-s\pi} = -1. Looking in the complex plane, e^z = -1 precisely when z is an odd multiple of i\pi; that is, s\pi = i m\pi with m odd, or s = i m. Therefore, we might expect to express the square wave as a sum of complex exponentials
\[
f(t) = \sum_{m\ \text{odd}} c_m e^{i m t}.
\]
But, by Euler's formula, e^{ix} = \cos x + i\sin x, so in fact this sum could appear as
\[
\text{square wave} = \sum_{m\ \text{odd}} a_m \cos m t + b_m \sin m t.
\]
Remark: the elaborate reasoning above can be done for any periodic function. Recall that
\[
\mathcal{L}\{\text{periodic } f\} = \frac{1}{1 - e^{-sp}} \int_0^p e^{-s\tau} f(\tau)\,d\tau.
\]
In this case, the LT has zeros in the denominator whenever sp = i\,2n\pi, so f might be represented as a sum of
\[
\cos\frac{2n\pi t}{p} \qquad\text{and}\qquad \sin\frac{2n\pi t}{p}.
\]
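A minimal numerical check of this formula (a Python/NumPy sketch; the test value s = 1.3 is an arbitrary choice of mine): for the square wave, the general periodic-function formula should agree with the closed form found above.

```python
import numpy as np
from scipy.integrate import quad

s, p = 1.3, 2 * np.pi            # arbitrary test value of s; period of the square wave

# integral of e^{-s*tau} f(tau) over one period, split at the jump at tau = pi
I = (quad(lambda t: np.exp(-s * t), 0, np.pi)[0]
     - quad(lambda t: np.exp(-s * t), np.pi, 2 * np.pi)[0])

general = I / (1 - np.exp(-s * p))                               # (1/(1 - e^{-sp})) * integral
closed = (1 - np.exp(-s * np.pi)) / (s * (1 + np.exp(-s * np.pi)))
print(general, closed)                                           # the two values agree
```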
For the square wave, the candidates suggested by the earlier calculation are
sin t, sin 3t, sin 5t, ..., cos t, cos 3t, cos 5t, ....
[Figure. Left: square wave with several candidate sin components; Right: errors.]
Suppose we choose b to make b\sin t the best fit to the square wave f. To do this, we are going to make
\[
|b\sin t - f(t)|
\]
as small as possible. However, it is not clear what the best choice of b is! We can make a good fit at some points with a suitable choice of b, but then it doesn't work well at other points. A clean approach is obtained by trying to minimize the mean square error. That is, we choose b to make
\[
\int_0^{2\pi} (b\sin t - f(t))^2\,dt
\]
as small as possible. To this end, expand
\[
\int_0^{2\pi} (b\sin t - f(t))^2\,dt
= \int_0^{2\pi} \bigl[(f(t))^2 - 2 f(t)\, b\sin t + b^2\sin^2 t\bigr]\,dt
= \int_0^{2\pi} (f(t))^2\,dt - 2b\int_0^{2\pi} f(t)\sin t\,dt + b^2\int_0^{2\pi} \sin^2 t\,dt.
\]
To minimize, we try differentiating and setting to 0:
\[
0 = \frac{d}{db}\int_0^{2\pi} (f(t) - b\sin t)^2\,dt
= 0 - 2\int_0^{2\pi} f(t)\sin t\,dt + 2b\int_0^{2\pi} \sin^2 t\,dt.
\]
Then,
\[
b = \frac{\int_0^{2\pi} f(t)\sin t\,dt}{\int_0^{2\pi} \sin^2 t\,dt}.
\]
Finally, we have something to compute!
\[
\int_0^{2\pi} f(t)\sin t\,dt = \int_0^{\pi} (1)\sin t\,dt + \int_{\pi}^{2\pi} (-1)\sin t\,dt = 4
\qquad\text{and}\qquad
\int_0^{2\pi} \sin^2 t\,dt = \pi,
\]
so the best-fitting coefficient is b = 4/\pi.
[Figure: the square wave f(t) on [0, 2π].]
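The same numbers are easy to check numerically; here is a small Python sketch (the variable names are my own) computing b for the square wave.

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: 1.0 if t < np.pi else -1.0        # square wave on [0, 2*pi)

num = quad(lambda t: f(t) * np.sin(t), 0, 2 * np.pi, points=[np.pi])[0]
den = quad(lambda t: np.sin(t) ** 2, 0, 2 * np.pi)[0]
print(num, den, num / den)                      # expect 4, pi, and b = 4/pi ≈ 1.2732
```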
29 Orthogonal expansion and real Fourier Series
Recall: when describing vectors in three dimensions we have
\[
\mathbf{r} = (x, y, z).
\]
The three vectors \{e_1, e_2, e_3\} = \{(1,0,0), (0,1,0), (0,0,1)\} have the special property that they are an orthogonal basis for \mathbb{R}^3. In Euclidean space, orthogonality is defined via the dot product
\[
u\cdot v = (u_1, u_2, \ldots, u_m)\cdot(v_1, v_2, \ldots, v_m) = u_1 v_1 + u_2 v_2 + \cdots + u_m v_m.
\]
Suppose a vector is expanded in a basis,
\[
v = c_1 e_1 + c_2 e_2 + \cdots + c_m e_m,
\]
where the basis vectors satisfy
• e_i \cdot e_j = 0 (unless i = j).
Key idea: the representation of v in terms of the basis elements is very easy to obtain when
the basis is orthogonal. This is codified in the following theorem.
Theorem 29.1. If \{e_1, \ldots, e_m\} is an orthogonal basis and v = c_1 e_1 + c_2 e_2 + \cdots + c_m e_m, then c_j = \dfrac{v\cdot e_j}{e_j\cdot e_j}.
Proof. Take the dot product of both sides with e_j:
\[
v\cdot e_j = (c_1 e_1 + c_2 e_2 + \cdots + c_m e_m)\cdot e_j
= c_1\, e_1\cdot e_j + c_2\, e_2\cdot e_j + \cdots + c_m\, e_m\cdot e_j.
\]
Now, all of the terms e_i\cdot e_j = 0 except when i = j, so this expression actually becomes
\[
v\cdot e_j = 0 + 0 + \cdots + 0 + c_j\, e_j\cdot e_j + 0 + \cdots + 0,
\]
and dividing by e_j\cdot e_j gives the formula for c_j.
We use this idea to express periodic functions in terms of some special basis.
First, we need a notion of what it means for two period-p functions to be orthogonal.
Definition. Functions f, g : [0, p) \to \mathbb{R} are orthogonal if and only if
\[
\int_0^p f(t)\,g(t)\,dt = 0.
\]
Example 29.2 Let f (t) = t (1 − t) and g(t) = t + c be defined on [0, 1). Choose a value of
c so that f and g are orthogonal.
Solution.
\[
\int_0^1 f(t)\,g(t)\,dt = \int_0^1 t(1-t)(t+c)\,dt
= \int_0^1 (t^2 - t^3 + ct - ct^2)\,dt
= \left[\frac{c}{2}t^2 + \frac{1-c}{3}t^3 - \frac{t^4}{4}\right]_{t=0}^{1}
= \frac{c}{2} + \frac{1-c}{3} - \frac{1}{4}
= \frac{c}{6} + \frac{1}{12}.
\]
Hence, when c = −1/2, f and g are orthogonal; otherwise, they are not.
[Figure: f(t)g(t) on [0, 1] for c = 1/2, 0, −1/2, −1.]
When c = −1/2 positive area between f (t)g(t) and the t axis balances negative area.
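A quick numerical confirmation of Example 29.2 (a Python sketch; the function name inner is my own): the inner product vanishes only for c = −1/2.

```python
from scipy.integrate import quad

def inner(c):
    # the inner product of f(t) = t(1 - t) and g(t) = t + c on [0, 1)
    return quad(lambda t: t * (1 - t) * (t + c), 0, 1)[0]

for c in [0.5, 0.0, -0.5, -1.0]:
    print(c, round(inner(c), 4))   # only c = -0.5 gives 0
```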
Example 29.3 Show that sin 2 t and cos 3 t are orthogonal over [0, 2π).
\[
\int_0^{2\pi} \sin 2t\,\cos 3t\,dt
= \frac{1}{2}\int_0^{2\pi} \bigl[\sin(2t + 3t) + \sin(2t - 3t)\bigr]\,dt
= \frac{1}{2}\left[\frac{-1}{5}\cos 5t + \cos t\right]_{t=0}^{t=2\pi}
= \frac{1}{2}\bigl[(-1/5 + 1) - (-1/5 + 1)\bigr] = 0.
\]
Exercise: Show that sin m t, sin nt, cos pt, cos qt are mutually orthogonal over [0, 2π) unless
m = n, p = q.
If we let \int_0^p f(t)\,g(t)\,dt replace the dot product in Theorem 29.1, and \{e_1(t), e_2(t), \ldots, e_m(t), \ldots\} be an orthogonal basis, then
\[
f(t) = \frac{\int_0^p f(t)\,e_1(t)\,dt}{\int_0^p e_1(t)\,e_1(t)\,dt}\,e_1(t)
+ \frac{\int_0^p f(t)\,e_2(t)\,dt}{\int_0^p e_2(t)\,e_2(t)\,dt}\,e_2(t)
+ \cdots
+ \frac{\int_0^p f(t)\,e_m(t)\,dt}{\int_0^p e_m(t)\,e_m(t)\,dt}\,e_m(t) + \cdots.
\]
Remark: Similar to the above exercise, it is easy to show that the given set is orthogonal. To
show that it is a basis requires a proof of completeness, which is beyond the scope of this
course.
We can take the set of basis functions to be \{1, \cos\frac{2n\pi t}{p}, \sin\frac{2n\pi t}{p}\}_{n=1}^{\infty}. Direct calculations give
\[
\int_0^p \sin^2\frac{2\pi n t}{p}\,dt = \frac{p}{2}
\qquad\text{and}\qquad
\int_0^p \cos^2\frac{2\pi n t}{p}\,dt = \frac{p}{2}.
\]
The orthogonal expansion of a p-periodic function f then takes the form
\[
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{2\pi n t}{p} + b_n\sin\frac{2\pi n t}{p},
\]
where
\[
a_n = \frac{2}{p}\int_0^p f(t)\cos\frac{2\pi n t}{p}\,dt
\qquad\text{and}\qquad
b_n = \frac{2}{p}\int_0^p f(t)\sin\frac{2\pi n t}{p}\,dt.
\]
Note: because of periodicity, the coefficient integrals can be computed over any interval of length p. For example, if p = 2π the integration limits could be taken as 0 and 2π, or −π and π, or 1 and 1 + 2π; use whatever is most convenient and think about why it works!
Finding the coefficients an , bn is called analysis. [a0 is the “DC component” – why? ]
Recovering the function f as the sum is called synthesis. This sum is the Fourier series.
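The analysis step is easy to carry out numerically. Here is a Python/NumPy sketch (the notes use Matlab; the name fourier_coeffs and the simple midpoint-rule quadrature are my own choices), checked against the square wave, whose sine coefficients should be 4/(nπ) for odd n and 0 otherwise.

```python
import numpy as np

def fourier_coeffs(f, p, n, m=200_000):
    """Approximate a_n, b_n for a p-periodic f given on [0, p), by the midpoint rule."""
    t = (np.arange(m) + 0.5) * p / m
    w = 2 * np.pi * n / p
    an = (2 / p) * np.sum(f(t) * np.cos(w * t)) * (p / m)
    bn = (2 / p) * np.sum(f(t) * np.sin(w * t)) * (p / m)
    return an, bn

square = lambda t: np.where(t < np.pi, 1.0, -1.0)     # the square wave on [0, 2*pi)
for n in range(1, 6):
    an, bn = fourier_coeffs(square, 2 * np.pi, n)
    expected_bn = 4 / (np.pi * n) if n % 2 else 0.0
    print(n, round(an, 4), round(bn, 4), round(expected_bn, 4))
```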
Example 29.4 (Triangular wave) The function f (t) = |t|, defined on [−1, 1] is repeated
periodically with period p = 2. Here, we have a couple of choices of how to set up the integrals.
The obvious two are [−1, 1] and [0, 2]. For this example, we’ll use the former choice.
[Figure: the triangular wave f(t), shown for −1 ≤ t ≤ 4.]
Note: The graph shows that f is an even function (symmetric about t = 0). This means that we might expect the Fourier series to be symmetric too, so it will perhaps contain cosines (because these are even), but not sines.
Solution. We will set up the integrals over the interval [−1, 1], so
\[
a_n = \frac{2}{2}\int_{-1}^{1} f(t)\cos\frac{2\pi n t}{2}\,dt = \int_{-1}^{1} f(t)\cos\pi n t\,dt
\]
and
\[
b_n = \frac{2}{2}\int_{-1}^{1} f(t)\sin\frac{2\pi n t}{2}\,dt = \int_{-1}^{1} f(t)\sin\pi n t\,dt.
\]
Next,
\[
a_n = \int_{-1}^{1} f(t)\cos n\pi t\,dt = \int_{-1}^{0} (-t)\cos n\pi t\,dt + \int_{0}^{1} t\cos n\pi t\,dt. \tag{†}
\]
Both integrals require a similar integration by parts calculation:
\[
\int t\cos n\pi t\,dt = \frac{1}{n\pi}\,t\sin n\pi t - \frac{1}{n\pi}\int \sin n\pi t\,dt
= \frac{1}{n\pi}\,t\sin n\pi t - \frac{1}{n\pi}\,\frac{-\cos n\pi t}{n\pi}
= \frac{1}{n\pi}\,t\sin n\pi t + \frac{1}{(n\pi)^2}\cos n\pi t.
\]
Hence
\[
\int_{-1}^{0} (-t)\cos n\pi t\,dt = -\left[\frac{1}{n\pi}\,t\sin n\pi t + \frac{1}{(n\pi)^2}\cos n\pi t\right]_{-1}^{0}
= -\left(\frac{1}{n\pi}(0)\sin 0 + \frac{1}{(n\pi)^2}\cos 0\right) + \left(\frac{1}{n\pi}(-1)\sin(-n\pi) + \frac{1}{(n\pi)^2}\cos(-n\pi)\right)
\]
\[
= -\left(0 + \frac{1}{(n\pi)^2}\right) + \left(0 + \frac{(-1)^n}{(n\pi)^2}\right)
= \frac{1}{(n\pi)^2}\bigl[-1 + (-1)^n\bigr]
= \begin{cases} \dfrac{-2}{(n\pi)^2} & n\ \text{odd} \\[4pt] 0 & n\ \text{even.} \end{cases}
\]
The second integral in (†), \int_0^1 t\cos n\pi t\,dt, takes exactly the same value, and a_0 = \int_{-1}^{1}|t|\,dt = 1, while every b_n = 0 (as anticipated, since f is even). Hence
\[
a_n = \begin{cases} \dfrac{-4}{(n\pi)^2} & n\ \text{odd} \\[4pt] 0 & n\ \text{even,} \end{cases}
\qquad\text{and}\qquad
f(t) = \frac{1}{2} - \frac{4}{\pi^2}\left(\cos\pi t + \frac{1}{9}\cos 3\pi t + \frac{1}{25}\cos 5\pi t + \cdots\right).
\]
Beyond the DC term, the first term in the Fourier series is of the form cos πt with frequency
c = 1/2 (angular frequency π). This is the fundamental frequency for this signal. The
other frequencies, at odd multiples of c, are the harmonics.
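A numerical cross-check of this example (a Python sketch): the coefficient a_n = ∫_{−1}^{1} |t| cos nπt dt should be −4/(nπ)² for odd n, twice the half-interval value computed above, and 0 for even n.

```python
import numpy as np
from scipy.integrate import quad

for n in range(1, 7):
    an = quad(lambda t: abs(t) * np.cos(n * np.pi * t), -1, 1)[0]
    predicted = -4 / (n * np.pi) ** 2 if n % 2 else 0.0
    print(n, round(an, 5), round(predicted, 5))
```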
Exercise. Repeat the analysis step for the previous example using integrals \int_0^2 and
\[
f(t) = \begin{cases} t & t\in[0,1) \\ 2 - t & t\in[1,2]. \end{cases}
\]
Verify that exactly the same coefficients are obtained.
Even and odd functions
A function is odd if g(−t) = −g(t) (if you flip in the t = 0 axis, the function turns upside
down). This means that any area between the graph of g and the t-axis for t < 0 has the
opposite sign of the corresponding area with t > 0. An immediate consequence is that integrals
of odd functions over symmetric intervals are always 0: \int_{-p/2}^{p/2} g(t)\,dt = 0.
If f is odd, then f × cos is also odd, so there are no cosine terms in the Fourier series for an
odd function.
Similarly, g is even if g(−t) = g(t). If f is even then f × sin is an odd function, and there
are no sine terms in the Fourier series of an even function.
This reasoning can be pushed further, for if g is even then areas to the left of t = 0 are
exactly the same as for t > 0: \int_{-p/2}^{0} g(t)\,dt = \int_{0}^{p/2} g(t)\,dt. This can simplify calculations of
the non-vanishing Fourier coefficients.
Rectification
The function max{f(t), 0}, in which any negative values of f are replaced with 0, is called half-rectified. The function |f(t)| is fully-rectified.
30 Convergence of Fourier series
The process of calculating a Fourier series (analysis) is formulaic, but does the synthesised
series represent the function?
Recall that the square wave has Fourier series
\[
\frac{4}{\pi}\left(\sin t + \frac{1}{3}\sin 3t + \frac{1}{5}\sin 5t + \cdots\right).
\]
This series has period 2π, with fundamental frequency \frac{1}{2\pi} and infinitely many harmonics at
the odd multiples. It is very natural to wonder about whether there is a sensible convergence
theory, since if we stop the series after N terms, we obtain a finite linear combination of
trigonometric functions, all of which are continuous.
This problem upset many mathematicians in Fourier’s time: how could a periodic function with
discontinuities be adequately approximated by “trigonometric polynomials”? [In the early 19th
century there was not really a decent notion of a function, and the convergence theory had to
wait about 50 years.]
Let f be p-periodic and for each N let
\[
s_N(t) = \frac{a_0}{2} + \sum_{n=1}^{N} a_n\cos\frac{2n\pi t}{p} + b_n\sin\frac{2n\pi t}{p}.
\]
Theorem 30.1 (Pointwise convergence of Fourier series). Suppose that f is p-periodic and that both f and f' are piecewise continuous. Then, for every t,
\[
s_N(t) \to f^*(t)\ \text{as}\ N\to\infty,
\qquad\text{where}\quad
f^*(t) = \frac{1}{2}\Bigl(\lim_{\tau\to t^-} f(\tau) + \lim_{\tau\to t^+} f(\tau)\Bigr).
\]
The proof of this theorem is too hard for Math202. However, notice that if f is continuous at
t, then the two one-sided limits defining f ∗ (t) agree, and are equal to f (t). Thus, the Fourier
series converges to f at points of continuity, and converges to the “average across the jumps”
at points of discontinuity.
Remark: for any function with \int_0^p |f(t)|^2\,dt < \infty, the Fourier series converges in a mean squared sense (this is called "L² convergence"); if f is differentiable then the convergence is uniform.
Example 30.1 (Square wave) The Fourier series for the square wave is guaranteed to
converge to
\[
f^*(t) = \begin{cases} 1 & 0 < t < \pi \\ 0 & t = 0, \pi \\ -1 & \pi < t < 2\pi. \end{cases}
\]
Thus, it averages the values across the discontinuities. Interestingly, we can evaluate the Fourier series at other points to obtain numerical facts. For example, since \sin\bigl((2m+1)\frac{\pi}{2}\bigr) = (-1)^m, we apply convergence of the series at \pi/2 to get
\[
\frac{4}{\pi}\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right) = 1.
\]
This recovers the formula:
\[
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots.
\]
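The convergence here is slow (the error after k terms is roughly of size 1/k), but it is easy to check numerically with a small Python snippet:

```python
import math

total = 0.0
for m in range(100_000):
    total += (-1) ** m / (2 * m + 1)      # 1 - 1/3 + 1/5 - 1/7 + ...
print(4 * total, math.pi)                 # 4*(partial sum) creeps towards pi
```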
Example 30.2 (Sawtooth wave) Suppose f(t) = t on [−π, π) is repeated periodically.
[Figure: the sawtooth wave f(t), shown for −π ≤ t ≤ 5π.]
Here, we have p = 2π and it is easiest to set up the integrals over the interval [−π, π). So
\[
a_0 = \frac{2}{2\pi}\int_{-\pi}^{\pi} t\cos\frac{2\pi\, 0}{2\pi}t\,dt
= \frac{1}{\pi}\int_{-\pi}^{\pi} t\cos 0\,dt
= \frac{1}{\pi}\int_{-\pi}^{\pi} t\,dt
= \frac{1}{\pi}\left[\frac{t^2}{2}\right]_{-\pi}^{\pi}
= \frac{\pi^2 - (-\pi)^2}{2\pi} = 0.
\]
The calculation of the other coefficients is a bit more involved. It turns out that a_n = 0 for all other n, but
\[
b_n = \frac{2}{2\pi}\int_{-\pi}^{\pi} t\sin\frac{2\pi n}{2\pi}t\,dt
= \frac{1}{\pi}\int_{-\pi}^{\pi} t\sin n t\,dt
= \frac{1}{\pi}\left(\left[t\left(\frac{-1}{n}\cos nt\right)\right]_{-\pi}^{\pi} + \frac{1}{n}\int_{-\pi}^{\pi}\cos nt\,dt\right)
\]
\[
= \frac{1}{\pi}\left(\frac{-\bigl(\pi\cos(n\pi) - (-\pi)\cos(-n\pi)\bigr)}{n} + \frac{1}{n}\left[\frac{\sin nt}{n}\right]_{-\pi}^{\pi}\right)
= \frac{1}{\pi}\left(\frac{-\pi(-1)^n - \pi(-1)^n}{n} + \frac{\sin(n\pi) - \sin(-n\pi)}{n^2}\right)
= \frac{-2(-1)^n}{n}
\]
using integration by parts, and the fact that cos(±nπ) = (−1)n for every n.
The analysis is finished by noticing that (−1)n = 1 when n is even, and −1 when n is odd.
Summarising,
\[
a_0 = a_n = 0, \qquad b_n = \begin{cases} -\dfrac{2}{n} & n\ \text{even} \\[4pt] \dfrac{2}{n} & n\ \text{odd.} \end{cases}
\]
Remark on overshoot
Looking closely at the output from Matlab, we see that near the discontinuities in the square waves or sawtooth waves, the graphs from the truncated Fourier series "overshoot". A little bit of thought shows that since there is a gap between f^*(t) and f(t) when t is near a point of discontinuity, no approximation by a continuous function will be perfect. In fact, there is a so-called Gibbs phenomenon, whereby the Fourier series always "overshoots" near a point of discontinuity. Adding more terms to the series does not make this problem go away: the first peak from the highest frequency component in the N-truncated series occurs a distance ≈ p/(2N) from the discontinuity, and converges in height to about 8.9% above the height jumped by the graph.
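A Python sketch of this effect for the square wave (the grid resolution and truncation levels are my own choices): the maximum of the truncated series stays near 1.18, roughly 8.9% of the jump of size 2 above the limiting value 1, however many terms are kept.

```python
import numpy as np

t = np.linspace(0.001, np.pi / 2, 200_000)           # fine grid near the jump at t = 0
for N in [11, 101, 1001]:
    sN = sum(4 / (np.pi * n) * np.sin(n * t) for n in range(1, N + 1, 2))
    print(N, round(sN.max(), 4))                      # ≈ 1.18 regardless of N
```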
31 Solving DEs with periodic input; resonance
Motivated by the appearance of resonance in certain periodically forced oscillators, it is natural
to wonder: under what circumstances does a periodic input (forcing) lead to a periodic solution
to a DE? The answer turns out to be a bit delicate.
We will assume that our DEs have homogeneous solutions which can be used to solve for
initial conditions, and will focus on finding particular solutions to
\[
y'' + m\,y' + k\,y = f(t) \tag{1}
\]
(the extension to higher order equations is obvious). We also assume that f is periodic with
period p, so
\[
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{2\pi n}{p}t + b_n\sin\frac{2\pi n}{p}t. \tag{2}
\]
Let us suppose that y(t) is also periodic, and try to solve for its Fourier series. Write:
\[
y(t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} A_n\cos\frac{2\pi n}{p}t + B_n\sin\frac{2\pi n}{p}t.
\]
Differentiating, we find
\[
y'(t) = \sum_{n=1}^{\infty} -A_n\frac{2\pi n}{p}\sin\frac{2\pi n}{p}t + B_n\frac{2\pi n}{p}\cos\frac{2\pi n}{p}t
\]
and
\[
y''(t) = \sum_{n=1}^{\infty} -A_n\left(\frac{2\pi n}{p}\right)^2\cos\frac{2\pi n}{p}t - B_n\left(\frac{2\pi n}{p}\right)^2\sin\frac{2\pi n}{p}t.
\]
Putting all these expressions back in (1) and (2) and grouping all terms,
\[
\frac{k A_0}{2} + \sum_{n=1}^{\infty}\left(-A_n\left(\frac{2\pi n}{p}\right)^2 + m B_n\frac{2\pi n}{p} + k A_n\right)\cos\frac{2\pi n}{p}t
+ \sum_{n=1}^{\infty}\left(-B_n\left(\frac{2\pi n}{p}\right)^2 - m A_n\frac{2\pi n}{p} + k B_n\right)\sin\frac{2\pi n}{p}t
= \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{2\pi n}{p}t + b_n\sin\frac{2\pi n}{p}t.
\]
Then we obtain several sets of equations, by requiring all of the coefficients to match up:
\[
\frac{k A_0}{2} = \frac{a_0}{2},
\qquad
A_n\left(k - \left(\frac{2\pi n}{p}\right)^2\right) + B_n\, m\,\frac{2\pi n}{p} = a_n,
\qquad
-A_n\, m\,\frac{2\pi n}{p} + B_n\left(k - \left(\frac{2\pi n}{p}\right)^2\right) = b_n.
\]
The first equation is easy to solve. The second family of equations can be written as
\[
\begin{pmatrix} \alpha_n & \beta_n \\ -\beta_n & \alpha_n \end{pmatrix}
\begin{pmatrix} A_n \\ B_n \end{pmatrix}
= \begin{pmatrix} a_n \\ b_n \end{pmatrix} \tag{3}
\]
where \alpha_n = k - \left(\frac{2\pi n}{p}\right)^2 and \beta_n = m\,\frac{2\pi n}{p}. The equations in (3) can be solved under two kinds of circumstances:
1. \alpha_n = \beta_n = 0, in which case (3) has a solution only when a_n = b_n = 0; or
2. (\alpha_n, \beta_n) \neq (0, 0), so that the matrix is invertible.
In the first case, both the forcing and the solution have no component in the nth harmonic. The
second case is more interesting, since it allows the matrix to be inverted, and the coefficients
of the solution to be found in terms of the coefficients of the forcing function.
Definition. The DE (1) has a resonance of frequency \omega_n = \frac{2\pi n}{p} if m = 0 and k = \omega_n^2.
All our work above has proven the first part of:
Theorem 31.1. Let f(t) be p-periodic and have Fourier series (2). Then
1. if (1) has no resonance at any frequency \frac{2\pi n}{p} for which a_n or b_n is nonzero, then (1) has a p-periodic particular solution whose coefficients are obtained by solving (3);
2. if (1) has a resonance at \omega_n = \frac{2\pi n}{p} and a_n or b_n is nonzero, then the solutions of (1) are unbounded.
Proof. We will only need to prove part 2. Let the DE have a resonance \omega_n = \frac{2\pi n}{p}, so (1) is
\[
y'' + \omega_n^2\, y = f(t).
\]
If f(t) = a_n\cos\omega_n t + b_n\sin\omega_n t then a particular solution is
\[
y(t) = t\left(\frac{-b_n}{2\omega_n}\cos\omega_n t + \frac{a_n}{2\omega_n}\sin\omega_n t\right).
\]
Remark: We can use this theorem to check for possible resonances in a DE. If the left-
hand-side has a resonance at frequency ωn , this doesn’t mean that the DE necessarily has an
unbounded solution, but it does mean that if f (t) has a component in the resonant frequency
then the solution will be unbounded.
Example 31.1 Let f (t) be the square wave with period 2π. Find the values of k for which
the solutions to
\[
y'' + k\,y = f(t)
\]
are unbounded.
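A numerical experiment along these lines (a Python/SciPy sketch; the notes' Matlab scripts are not reproduced here, and the time span and values of k are my own choices): with square-wave forcing, the oscillation grows for k = 1 and k = 9 (squares of odd integers) but stays bounded for k = 2 and k = 4.

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

def square(t):
    return 1.0 if math.sin(t) >= 0 else -1.0         # 2*pi-periodic square wave

def max_amplitude(k, t_end=100.0):
    rhs = lambda t, y: [y[1], square(t) - k * y[0]]  # y'' + k y = f(t) as a first order system
    sol = solve_ivp(rhs, [0, t_end], [0.0, 0.0], max_step=0.02)
    return np.max(np.abs(sol.y[0]))

for k in [1, 2, 4, 9]:
    print(k, round(max_amplitude(k), 1))             # large for k = 1 and k = 9
```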
Example 31.2 Let f (t) be the square wave with period 2π. Find a particular solution to
\[
y'' + 4\,y = f(t).
\]
Solution. Assume a periodic particular solution
\[
y(t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} A_n\cos nt + B_n\sin nt.
\]
Then
\[
y''(t) = \sum_{n=1}^{\infty} -A_n n^2\cos nt - B_n n^2\sin nt,
\]
so that
\[
y'' + 4y = \frac{4A_0}{2} + \sum_{n=1}^{\infty} A_n(-n^2 + 4)\cos nt + B_n(-n^2 + 4)\sin nt.
\]
Comparing with the Fourier series for f(t) we see that there are no cosine terms, so all the A_n = 0. We also see that for any even n, the sine term in f vanishes. This means that we need to solve only the equations
\[
B_n(-n^2 + 4) = \frac{4}{\pi n}, \qquad n\ \text{odd}.
\]
Hence,
\[
B_n = \frac{4}{\pi n(-n^2 + 4)} = \frac{4}{\pi n(4 - n^2)}.
\]
The solution to the DE is now
\[
y(t) = \sum_{n\ \text{odd}} B_n\sin nt
= \sum_{n\ \text{odd}} \frac{4}{\pi n(4 - n^2)}\sin nt
= \frac{4}{3\pi}\sin t - \frac{4}{15\pi}\sin 3t - \frac{4}{105\pi}\sin 5t - \cdots.
\]
32 Complex Fourier series
Recall that e^{i\theta} = \cos\theta + i\sin\theta. This allows the analysis and synthesis of Fourier series in complex form:
\[
f(t) = c_0 + \sum_{n=1}^{\infty} c_n e^{i\frac{2\pi n}{p}t} + c_{-n} e^{-i\frac{2\pi n}{p}t}
\qquad\text{where}\qquad
c_n = \frac{1}{p}\int_0^p f(t)\, e^{-i\frac{2\pi n}{p}t}\,dt.
\]
• the complex form can be derived from Euler's formula, and for real signals the real and complex forms are related by
\[
c_n = \frac{1}{2}(a_n - i\,b_n), \qquad c_{-n} = \overline{c_n}
\]
• conversely
\[
a_0 = 2c_0, \qquad a_n = c_n + c_{-n}, \qquad b_n = i(c_n - c_{-n})
\]
• the coefficients "encode" f; in the synthesis stage it is essential to include both the positive and negative coefficients in the sum
Example (sawtooth wave revisited). The sawtooth signal defined by repeating f (t) = t
on [−π, π) with period p = 2π can be analysed as a complex Fourier series.
Analysis.
\[
c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} t\, e^{-i\frac{2\pi n}{2\pi}t}\,dt
= \frac{1}{2\pi(-in)}\Bigl[t\,e^{-int}\Bigr]_{t=-\pi}^{\pi} - \frac{1}{2\pi(-in)}\int_{-\pi}^{\pi} e^{-int}\,dt
= \frac{\pi e^{-in\pi} - (-\pi)e^{in\pi}}{2\pi(-in)} + 0
= i\,\frac{\pi(-1)^n + \pi(-1)^n}{2\pi n} = \frac{i(-1)^n}{n}.
\]
A separate calculation gives c0 = 0.
Synthesis.
\[
f(t) = 0 + \sum_{n=1}^{\infty} \frac{i(-1)^n}{n} e^{int} + \frac{i(-1)^n}{-n} e^{-int}
= \sum_{n=1}^{\infty} -\frac{2(-1)^n}{n}\sin nt
= 2\left(\sin t - \frac{1}{2}\sin 2t + \frac{1}{3}\sin 3t - \cdots\right).
\]
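These coefficients are easy to confirm numerically (a Python sketch using a simple midpoint rule; the grid size is my own choice):

```python
import numpy as np

m = 200_000
t = -np.pi + (np.arange(m) + 0.5) * (2 * np.pi / m)        # midpoints of [-pi, pi)
for n in range(1, 5):
    cn = np.sum(t * np.exp(-1j * n * t)) * (2 * np.pi / m) / (2 * np.pi)
    print(n, np.round(cn, 4), np.round(1j * (-1) ** n / n, 4))   # computed vs i(-1)^n / n
```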
33 Discrete Fourier Transform
Suppose that we are given a complicated (real) signal f(t) on an interval [0, L], but that the signal is known to contain a regular sinusoidal cycle with n repetitions. That component might be written in the form
\[
a\cos\frac{2\pi n t}{L} + b\sin\frac{2\pi n t}{L} = c\,e^{i\frac{2\pi n}{L}t} + \overline{c}\,e^{-i\frac{2\pi n}{L}t} \tag{*}
\]
• the frequency is \frac{n}{L} Hz, and the angular frequency is \frac{2\pi n}{L}
• the amplitude is \sqrt{a^2 + b^2} = 2|c|
Method
1. Sample the function at N points by calculating
\[
f_k = f(t_k) \quad\text{where}\quad t_k = \frac{L}{N}k, \qquad k = 1, \ldots, N.
\]
This corresponds to a sampling rate N/L. The signal becomes a vector \mathbf{f} = (f_1, f_2, \ldots, f_N).
2. Transform the samples into F_j = \sum_{k=1}^{N} f_k\, e^{-i\frac{2\pi j k}{N}} and look for spikes in |F_j|.
Picture
[Figure: |F_j| plotted against the index j = 1, ..., N, with a single spike of height N at j = n.]
Whole truth: c\,e^{i\frac{2\pi n}{L}t} + \overline{c}\,e^{-i\frac{2\pi n}{L}t} in (*) causes spikes in (F_j) of height |c|N at j = n and j = N - n.
Proof. Sampling the signal f:
\[
f_k = f(t_k) = e^{i\frac{2\pi n}{L}t_k} = e^{i\frac{2\pi n}{L}\frac{kL}{N}} = e^{i\frac{2\pi n k}{N}}.
\]
Then
\[
F_j = \sum_{k=1}^{N} f_k\, e^{-i\frac{2\pi j k}{N}}
= \sum_{k=1}^{N} e^{-i\frac{2\pi j k}{N} + i\frac{2\pi n k}{N}}
= \sum_{k=1}^{N} e^{i\frac{2\pi}{N}(n-j)k}
= \sum_{k=1}^{N} \left(e^{i\frac{2\pi}{N}(n-j)}\right)^{k}.
\]
The algebra above may not seem like progress, but we have now written
\[
F_j = \sum_{k=1}^{N} z^k \qquad\text{where}\qquad z = e^{i\frac{2\pi}{N}(n-j)}.
\]
Notice that z = 1 if \frac{n-j}{N} is an integer, and z \neq 1 otherwise. In the latter case, z^N = 1. Thus
\[
F_j = \sum_{k=1}^{N} z^k
= \begin{cases} \dfrac{z(1 - z^N)}{1 - z} & z \neq 1 \\[6pt] \underbrace{1 + 1 + \cdots + 1}_{N\ \text{times}} & z = 1 \end{cases}
= \begin{cases} 0 & \frac{n-j}{N}\ \text{not an integer} \\[2pt] N & n = j. \end{cases}
\]
Example 33.1 The signal f(t) = \sin(2t) - 2\cos(3t) on the interval [0, 4\pi] can be written in complex form as
\[
f(t) = \frac{1}{2i}e^{i2t} - \frac{1}{2i}e^{-i2t} - e^{i3t} - e^{-i3t}.
\]
This contains frequencies of 1/\pi Hz and 3/(2\pi) Hz.
The numerical sampling and transformation can be done in Matlab (via the fft function). With N = 100 samples, we see peaks in |F| of height 50 and 100 at indices j = 4, 6. To understand why this is so, note that when n = 4,
\[
e^{i\frac{2\pi n}{L}t} = e^{i\frac{2\pi\cdot 4}{4\pi}t} = e^{i2t}.
\]
The signal contains a component of this function with coefficient c = \frac{1}{2i}. The theory predicts a peak in |F_j| at j = n = 4 of height
\[
|c|\,N = \frac{1}{2}\cdot 100 = 50.
\]
Similarly, for n = 6,
\[
e^{i\frac{2\pi n}{L}t} = e^{i\frac{2\pi\cdot 6}{4\pi}t} = e^{i3t}
\]
and theory predicts a peak in |F_j| at j = n = 6 of height 100.
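The same experiment is easy to reproduce outside Matlab; here is a Python/NumPy sketch (numpy's fft plays the role of Matlab's) confirming the predicted peak heights, together with their mirror images at N − j:

```python
import numpy as np

L, N = 4 * np.pi, 100
t = np.arange(N) * L / N                    # sample times t_k = k L / N
f = np.sin(2 * t) - 2 * np.cos(3 * t)       # the signal of Example 33.1
F = np.fft.fft(f)                           # F_j = sum_k f_k exp(-2*pi*i*j*k/N)

for j in [4, 6, 94, 96]:
    print(j, round(abs(F[j]), 2))           # heights 50, 100, 100, 50
```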
The Discrete Fourier Transform
The Discrete Fourier Transform replaces the analysis and synthesis steps of Fourier series when
f consists of N samples from a signal of time duration L.
Analysis: F_n = \sum_{k=0}^{N-1} f_k\, e^{-i\frac{2\pi n k}{N}} (implement with fft in Matlab)
Synthesis: f_k = \frac{1}{N}\sum_{n=0}^{N-1} F_n\, e^{i\frac{2\pi n k}{N}} (implement with ifft in Matlab)
Interpretation. F_n/N is the contribution of frequency \frac{n}{L} Hz to the signal. \{F_n\}_{n=0}^{N-1} is the frequency domain representation of the time domain signal \{f_0, f_1, \ldots, f_k, \ldots, f_{N-1}\}.
• the Matlab functions fft and ifft use clever algebra (the Fast Fourier Transform) to evaluate these sums FAST, via a recursion that is most efficient when the signal length N is a power of 2
Power spectrum
1. Fix L, N and sample f at t_k = k\frac{L}{N}, 0 ≤ k < N; or take \{f_0, \ldots, f_{N-1}\} from data.
2. Compute the DFT \{F_n\} (with fft).
3. Plot |F_n|^2 (suitably scaled) against the frequency \frac{n}{L} Hz.
Example 1. Recall the signal f (t) = sin 2t−2 cos 3t. This contains frequencies 1/π ≈ 0.318
and 3/2π ≈ 0.478. The Matlab script dfteg1 takes a sample of length N = 800 over a
period of L = 100sec.
Note that the raw data of {Fn } are complex valued. When suitably scaled, the power spectrum
shows peaks in the right places (and some weird ones). The recovery with ifft is excellent.
Example 2. The script dfteg2 calculates the power spectrum for sin 2πt + 2 sin 4πt from a
sample of length L = 10, N = 250. This gives a sampling frequency of cs = 250/10 = 25Hz,
which is sufficient to resolve the two frequencies of 1 and 2Hz.
Example 3. The script dfteg3 calculates the power spectrum for sin 2πt + 2 sin 4πt +
sin 100t from a sample of length L = 10, N = 250. This signal has a component with
frequency 100/2π ≈ 16Hz.
The power spectrum shows spikes at frequencies 1, 2, 9, 16Hz. Three of these are expected,
but 9Hz is a surprise. Additionally, when reconstructing the signal via ifft, some of the
higher frequency behaviour is smoothed over.
The problem with this example is that the sampling rate is not high enough to capture the
higher frequency behaviour. By increasing N to 1000, these problems go away. Experimenting
with various values of N between 250 and 1000 shows the effect on the “ghost” frequencies.
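A Python/NumPy sketch of this experiment (the script dfteg3 itself is not reproduced; the quick peak-finding below is my own substitute): the three strongest peaks below the Nyquist index include a ghost near 9 Hz when N = 250, while with N = 1000 the true 15.9 Hz component is resolved correctly.

```python
import numpy as np

L = 10.0
f = lambda t: np.sin(2 * np.pi * t) + 2 * np.sin(4 * np.pi * t) + np.sin(100 * t)

for N in [250, 1000]:
    t = np.arange(N) * L / N
    F = np.fft.fft(f(t))
    power = np.abs(F[: N // 2]) ** 2                 # one-sided power spectrum, 0 <= n < N/2
    peaks = np.argsort(power)[-3:]                   # indices of the three largest peaks
    print(N, sorted(round(n / L, 1) for n in peaks)) # peak frequencies in Hz
```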
Aliasing
All of the DFT examples so far have shown a phenomenon of “ghosting”, whereby the power
spectrum appears perfectly symmetric, with the presence of some possibly false peaks.
Consequence. The symmetry guarantees that if the spectrum of a signal has a peak at index n then the DFT will also show a ghost peak at index N − n. In fact, what the DFT does is put half the power at n and the other half at N − n.
Interpretation. Only the first half of the picture should be considered: 0 ≤ n < N/2 as
only frequencies in this range can be resolved. Put another way, to resolve a given frequency,
the sampling rate must be high enough. If aliasing errors are suspected, increase the sampling
rate.
Sampling Theorem. The frequency c from a signal on [0, L] with N samples can be recovered from the DFT if
\[
c_s = \frac{N}{L} > 2c \qquad\text{and}\qquad c > \frac{1}{L}.
\]
Proof: The frequency corresponding to F_n is n/L. Because the lowest frequency corresponds to n = 1, and the highest resolvable frequency from the DFT has n = N/2, the maximum and minimum resolvable frequencies are
\[
c_{\max} = \frac{N}{2L} \qquad\text{and}\qquad c_{\min} = \frac{1}{L}.
\]
Comments
• If N/L is too low, high frequency behaviour cannot be resolved, and N could be increased
• If L is too short, low frequency behaviour cannot be resolved, and the recovered signal will show drift
Exercise Use dfteg4 to investigate sin 2πt + 2 cos 4πt + sin 0.1 t with (N, L) = (250, 10),
(250, 100), (1000, 100). Calculate cmin and cmax in each case to explain what you see.