Lecture 2
We know that for a deterministic signal g(t), we define the Fourier Transform as,
$$G(f) = \int_{-\infty}^{\infty} g(t)\, e^{-j2\pi f t}\, dt, \tag{1}$$
where G(f ) corresponds to the contribution of the frequency f to the overall signal, across all
time. We also know that we can reconstruct g(t) from G(f ) using the relation
$$g(t) = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\, df. \tag{2}$$
We also know that for the Fourier transform to exist, g(t), and by consequence G(f), should obey the following commandments, known as Dirichlet's conditions:
1) g(t) should be single valued, with a finite number of finite maxima and minima within a finite time interval.
2) g(t) may contain a finite number of finite discontinuities.
3) g(t) should be absolutely integrable. That is, there exists M < ∞, such that
$$\int_{-\infty}^{\infty} |g(t)|\, dt \leq M.$$
As a consequence, all the energy signals (signals that have a finite energy) are Fourier
Transformable.
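The transform pair (1)-(2) can be checked numerically for a simple energy signal. Here is a minimal NumPy sketch; the signal $g(t) = e^{-t}u(t)$, the step size, and the truncation point are my choices for illustration, and its closed-form transform $G(f) = 1/(1 + j2\pi f)$ is used as the reference:

```python
import numpy as np

# Numerical sanity check of the transform pair for an energy signal.
# Example signal (illustrative choice): g(t) = exp(-t) u(t), whose
# transform is known in closed form: G(f) = 1 / (1 + j*2*pi*f).
dt = 1e-3
t = np.arange(0.0, 20.0, dt)      # g(t) is negligible beyond t = 20
g = np.exp(-t)

f = np.array([0.0, 0.5, 1.0, 2.0])
# Riemann-sum approximation of G(f) = integral of g(t) e^{-j2pi f t} dt
G_num = np.array([np.sum(g * np.exp(-1j * 2 * np.pi * fk * t)) * dt
                  for fk in f])
G_exact = 1.0 / (1.0 + 1j * 2 * np.pi * f)

print(np.max(np.abs(G_num - G_exact)))   # small discretization error
```

The error is on the order of the step size, which is all we need to see that (1) behaves as advertised for a finite-energy signal.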
R. Chopra is with the Department of Electronics and Electrical Engineering, Indian Institute of Technology Guwahati, Assam,
India. (email: [email protected]).
If the signal g(t) is real valued, then g(t) will be the same as its complex conjugate, i.e., $g(t) = g^*(t)$. Using this, we can now write,
$$g^*(t) = \left( \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\, df \right)^*
= \int_{-\infty}^{\infty} G^*(f)\, e^{-j2\pi f t}\, df
= \int_{-\infty}^{\infty} G^*(-f)\, e^{j2\pi f t}\, df
= \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\, df,$$
where the third step substitutes $f \to -f$, and the last step uses $g^*(t) = g(t)$ together with (2). Comparing the integrands,
$$G^*(-f) = G(f).$$
That is, $|G(f)| = |G(-f)|$, implying a symmetric magnitude spectrum, and $\angle G(f) = -\angle G(-f)$, implying an anti-symmetric phase spectrum. In simpler words, the magnitude spectrum of a signal that is real valued in the time domain is an even function of the frequency, and its phase spectrum is an odd function of the frequency. Let us now quickly list some more properties of the Fourier transform, without proving them. At this point, for the sake of convenience, we introduce some notation and declare $\mathcal{F}$ as the Fourier transform operator such that
$$\mathcal{F}[g(t)] = \int_{-\infty}^{\infty} g(t)\, e^{-j2\pi f t}\, dt = G(f),$$
and
$$\mathcal{F}^{-1}[G(f)] = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\, df = g(t).$$
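The conjugate-symmetry result derived above is easy to verify numerically, using the DFT as the discrete stand-in for the Fourier transform. This is a sketch with an arbitrary random real-valued signal of my choosing:

```python
import numpy as np

# Numerical check of G(-f) = G*(f) for a real-valued signal, using the
# DFT as the discrete analogue of the Fourier transform.
rng = np.random.default_rng(1)
g = rng.standard_normal(256)           # an arbitrary real-valued signal
G = np.fft.fft(g)

# DFT bin N - k corresponds to frequency -k, so G[-k] plays G(-f).
k = np.arange(1, 128)
print(np.allclose(G[-k], np.conj(G[k])))              # conjugate symmetry
print(np.allclose(np.abs(G[-k]), np.abs(G[k])))       # even magnitude
print(np.allclose(np.angle(G[-k]), -np.angle(G[k])))  # odd phase
```

All three checks print `True`: the magnitude spectrum is even and the phase spectrum is odd, exactly as the derivation predicts.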
1) Linearity: If $\mathcal{F}[x_1(t)] = X_1(f)$ and $\mathcal{F}[x_2(t)] = X_2(f)$, then for $a, b \in \mathbb{C}$ and $x(t) = ax_1(t) + bx_2(t)$, we have $X(f) = aX_1(f) + bX_2(f)$. The properties of homogeneity and superposition are implicit here.
2) Time Shift: F[x(t − τ )] = X(f )e−j2πf τ .
3) Frequency Shift: F[x(t)ej2πνt ] = X(f − ν).
4) Scaling: $\mathcal{F}[x(at)] = \frac{1}{|a|} X(f/a)$ for all $a \in \mathbb{R}$, $a \neq 0$.
5) Differentiation: $\mathcal{F}\left[\frac{dx(t)}{dt}\right] = j2\pi f\, X(f)$.
6) Integration: $\mathcal{F}\left[\int_{-\infty}^{t} x(\tau)\, d\tau\right] = \frac{X(f)}{j2\pi f}$.
7) Convolution: If $x(t) = \int_{-\infty}^{\infty} x_1(\tau)\, x_2(t-\tau)\, d\tau$, then $X(f) = X_1(f) X_2(f)$.
8) Duality: If F[x(t)] = X(f ), then F[X(t)] = x(−f ).
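As a quick sanity check on the convolution property, here is its discrete analogue in NumPy (the random test signals are arbitrary choices): with enough zero-padding, the DFT of a linear convolution equals the product of the individual DFTs.

```python
import numpy as np

# Discrete analogue of the convolution property: convolution in time
# corresponds to multiplication in frequency. Zero-padding both DFTs to
# the length of the linear convolution avoids circular wrap-around.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(64)
x2 = rng.standard_normal(64)

N = len(x1) + len(x2) - 1              # length of the linear convolution
X = np.fft.fft(np.convolve(x1, x2), N)
X_prod = np.fft.fft(x1, N) * np.fft.fft(x2, N)

print(np.allclose(X, X_prod))          # prints True
```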
Fig. 1: Example 1
B. Why?
Now the question is, why do we fret so much about the Fourier transform? We have spent an
entire semester worrying about it, and now I am telling you that we will have to worry about it
at least for another semester. Why?
The answer to this question is simple: we are talking about communication systems, and the job of communication systems is to convey information. This information is in the form of signals, and most communication systems can be modeled as linear time invariant (LTI) systems. Now, LTI systems have complex sinusoids as their eigenfunctions. That is, if you feed a complex sinusoid to an LTI system, the output will be another complex sinusoid with the same frequency, scaled by a complex factor. To see this, we consider our first GNU Radio example, available as Example 1 in the Chapter 2 folder in the code base, or by clicking here. The block diagram of this system is seen in Fig. 1. Note that the LTI system considered here is a filter with arbitrary coefficients.
You can run this file and play around with the types of waveforms in the Signal Source block. You will notice that the only waveform resulting in no change in the wave shape is the sinusoid. You can also go ahead and play with the filter coefficients. You will learn more about these in your DSP course.
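The same eigenfunction behaviour can be seen without GNU Radio in a few lines of NumPy. This is a sketch, not the flowgraph itself; the sample rate, sinusoid frequency, and filter taps below are arbitrary choices of mine:

```python
import numpy as np

# A complex sinusoid fed to an LTI (FIR) filter comes out as the same
# sinusoid scaled by the filter's frequency response at that frequency.
fs = 1000.0                                # sample rate (Hz)
f0 = 50.0                                  # sinusoid frequency (Hz)
n = np.arange(2000)
x = np.exp(1j * 2 * np.pi * f0 * n / fs)   # complex sinusoid

h = np.array([0.2, 0.5, 0.3])              # arbitrary filter coefficients
y = np.convolve(x, h)[:len(x)]             # filter output

# Predicted eigenvalue: the filter's frequency response at f0
H_f0 = np.sum(h * np.exp(-1j * 2 * np.pi * f0 * np.arange(len(h)) / fs))

# After the startup transient of len(h)-1 samples, y[n] = H(f0) * x[n].
steady = slice(len(h) - 1, None)
print(np.allclose(y[steady], H_f0 * x[steady]))   # prints True
```

Change `h` to any other set of taps and the check still passes: only the scaling factor $H(f_0)$ changes, never the wave shape.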
This is also a good time to revisit the delta functions that were taught in the Signals and
Systems course, viz. the Kronecker delta and the Dirac delta. The Kronecker delta is a simple
discrete time function defined as
$$\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases} \tag{3}$$
That is, δ[n] is zero at all instants of time except n = 0, where it takes the value 1.
On the other hand we define the Dirac delta function or the impulse function using the
following two properties,
1) δ(t) = 0 for t 6= 0
2) $\int_{-\infty}^{\infty} \delta(t)\, dt = 1$.
We can however approximate it using a multitude of functions. Here is an example using the gate function: let us define the function $f_\Delta(t)$ as
$$f_\Delta(t) = \begin{cases} \dfrac{1}{\Delta}, & t \in \left(-\dfrac{\Delta}{2}, \dfrac{\Delta}{2}\right) \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
We see that the area under $f_\Delta(t)$ is always unity, and $\delta(t) = \lim_{\Delta \to 0} f_\Delta(t)$. Therefore, using the sifting property of the impulse,
$$\mathcal{F}[\delta(t)] = \int_{-\infty}^{\infty} \delta(t)\, e^{-j2\pi f t}\, dt = e^{-j2\pi f \cdot 0} = 1.$$
So far so good. Now, by duality, $\mathcal{F}[1] = \delta(-f) = \delta(f)$, and invoking the frequency shift property of the Fourier transform, we get
$$\mathcal{F}\left[e^{j2\pi f_c t}\right] = \delta(f - f_c).$$
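The gate approximation in (4) also lets us see $\mathcal{F}[\delta(t)] = 1$ numerically: the transform of $f_\Delta(t)$ works out to $\mathrm{sinc}(f\Delta) = \sin(\pi f \Delta)/(\pi f \Delta)$, which flattens toward 1 as $\Delta \to 0$. A short sketch (the frequency grid and the values of $\Delta$ are arbitrary choices):

```python
import numpy as np

# F[f_Delta(t)] = sinc(f * Delta), which approaches 1 for every f as
# Delta -> 0, consistent with F[delta(t)] = 1.
f = np.linspace(-5.0, 5.0, 11)
devs = []
for Delta in (1.0, 0.1, 0.01):
    F_gate = np.sinc(f * Delta)    # np.sinc is the normalized sinc
    devs.append(np.max(np.abs(F_gate - 1.0)))
print(devs)                        # shrinks toward 0 as Delta shrinks
```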
But what do we do with this? Well, this helps us quite a bit. Remember that we had said that only finite energy signals can have Fourier transforms. We can throw that out of the window for the specific case of periodic signals. Let us see how.
I know that for any arbitrary periodic signal p(t) with a fundamental frequency $f_0$ I can write,
$$p(t) = \sum_{k=-\infty}^{\infty} P[k]\, e^{j2\pi k f_0 t}, \tag{8}$$
with
$$P[k] = \frac{1}{T} \int_{-T/2}^{T/2} p(t)\, e^{-j2\pi k f_0 t}\, dt, \tag{9}$$
where $T = 1/f_0$ is the period.
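As a quick check of (8)-(9), here is a numerical computation of the coefficients for the test signal $p(t) = \cos(2\pi f_0 t)$, whose series is known exactly: $P[1] = P[-1] = 1/2$ and all other coefficients are zero. The signal and step size are my choices for illustration:

```python
import numpy as np

# Numeric check of the Fourier-series analysis equation for
# p(t) = cos(2*pi*f0*t): expect P[1] = P[-1] = 1/2, zero elsewhere.
f0 = 2.0
T = 1.0 / f0
dt = T / 10000
t = np.arange(-T / 2, T / 2, dt)
p = np.cos(2 * np.pi * f0 * t)

# Riemann-sum approximation of P[k] over one period
P = {k: np.sum(p * np.exp(-1j * 2 * np.pi * k * f0 * t)) * dt / T
     for k in range(-3, 4)}
print(abs(P[1]), abs(P[-1]), abs(P[0]))
```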