Interpolation and Sampling
In the introduction to these notes we remarked that discrete-time signals are the mathematical model of choice in two signal processing situations: the first, which encompasses the long-established tradition of observing physical phenomena, captures the process of repeatedly measuring the value of a physical quantity at successive instants in time for analysis purposes. The second, which is much more recent and dates back to the first digital processors, is the ability to synthesize discrete-time signals by means of iterative numerical algorithms.
The repeated measurement of a “natural” signal is called sampling; at the base of the notion is a view of the world in which physical phenomena have a potentially infinitely small granularity, in the sense that measurements can be achieved with arbitrary denseness. For this reason, it is customary to model real-world phenomena as functions of a real variable (the variable being time or space); defining a quantity over the real line allows for infinitely small subdivisions of the function’s domain and therefore infinitely precise localization of its values. We will refer to this model of the world as the continuous-time paradigm. Whether philosophically valid¹ or physically valid², the continuous-time paradigm is a model of immense usefulness in the description of analog signal processing systems. So useful, in fact, that even in the completely discrete-time synthesis scenario, we will often find ourselves in need of converting a sequence to a well-defined function of a continuous variable in order to interface our digital world to the analog world outside. This process, which can be seen as the dual of sampling, is called interpolation.
¹ Remember Zeno’s paradoxes...
² The shortest unit of time at which the usual laws of gravitational physics still hold is called Planck time and is estimated at 10⁻⁴³ seconds. Apparently, therefore, the universe works in discrete time...
Interpolation. Interpolation comes into play when discrete-time signals need to be con-
verted to continuous-time signals. The need arises at the interface between the digital
world and the analog world; as an example, consider a discrete-time waveform synthesizer
which is used to drive an analog amplifier and loudspeaker. In this case, it is useful to ex-
press the input to the amplifier as a function of a real variable, defined over the entire real
line; this is because the behavior of analog circuitry is best modeled by continuous-time
functions. We will see that at the core of the interpolation process is the association of a
physical time duration Ts to the intervals between samples of the discrete-time sequence.
The fundamental questions concerning interpolation involve the spectral properties of the
interpolated function with respect to those of the original sequence.
Sampling.
A typical method to obtain a discrete-time representation of a continuous-time signal
is through periodic sampling (uniform sampling) where a sequence of samples x[n] are
obtained from a continuous-time signal xc (t) as,
x[n] = x_c(nT_s), \qquad -\infty < n < \infty \qquad (10.1)

where T_s is the sampling period and F_s = 1/T_s is the sampling frequency.
A natural question to ask is whether such a sampling process entails a loss of information; i.e., given {x[n]}, can we reconstruct x_c(t) for any t?
This would mean that we can interpolate between the values of {x_c(nT_s)} to reconstruct x_c(t). If the answer is in the affirmative (at least for a given class of signals), this means that all the processing tools developed in the discrete-time domain can be applied to continuous-time signals as well, after sampling. The fundamental question is whether this is possible and, if so, what the interpolating functions are.
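To make (10.1) concrete, here is a minimal numerical sketch (ours, not from the text; the helper name `sample_uniform` is made up) of uniform sampling with numpy:

```python
import numpy as np

def sample_uniform(xc, Fs, n_samples):
    """Sample a continuous-time signal xc (a callable of t, in seconds)
    at the sampling frequency Fs, i.e. x[n] = xc(n * Ts) with Ts = 1/Fs."""
    Ts = 1.0 / Fs                    # sampling period
    n = np.arange(n_samples)         # sample indices n = 0, 1, ...
    return xc(n * Ts)                # x[n] = xc(nTs), as in (10.1)

# Example: a 100 Hz cosine sampled at Fs = 800 Hz (well above 2 * 100 Hz).
x = sample_uniform(lambda t: np.cos(2 * np.pi * 100 * t), Fs=800, n_samples=16)
print(np.round(x, 3))
```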
Notation. In the rest of this chapter we will encounter a series of variables which are all interrelated and whose different forms will be used interchangeably according to convenience. They are summarized here for quick reference:

T_s — the sampling period (in seconds)
F_s = 1/T_s — the sampling frequency (in Hz)
Ω_s = 2πF_s = 2π/T_s — the sampling frequency (in rad/sec)
Ω_N — the Nyquist frequency, i.e. the highest frequency of a bandlimited signal (in rad/sec)
ω = ΩT_s — the discrete-time frequency (in rad)
Inner product and convolution. We have already encountered some examples of continuous-
time signals in conjunction with Hilbert spaces; in section 4.2.2, for instance, we introduced
the space of square integrable functions over an interval and, in a short while, we will in-
troduce the space of bandlimited signals. For inner product spaces whose elements are
functions on the real line, we will use the following inner product definition:
\langle f(t), g(t) \rangle = \int_{-\infty}^{\infty} f^*(t)\, g(t)\, dt \qquad (10.2)
The convolution of two real continuous-time signals is defined, as usual, from the definition of the inner product; in particular,

(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.
The convolution operator in continuous time is linear and time invariant, as can be easily
verified. Note that, just like in discrete-time, convolution represents the operation of
filtering a signal with a continuous-time LTI filter, whose impulse response is of course a
continuous-time function.
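As an illustration (ours, not from the text), continuous-time convolution can be approximated on a uniform grid, where the integral becomes a Riemann sum, i.e. a discrete convolution scaled by the grid step:

```python
import numpy as np

# Grid approximation of (f * g)(t) = integral of f(tau) g(t - tau) dtau:
# on a uniform grid of step dt, the integral becomes np.convolve scaled by dt.
dt = 1e-3
t = np.arange(0, 2, dt)

f = np.exp(-5 * t)                  # a causal exponential (e.g. a filter's impulse response)
g = (t < 0.5).astype(float)         # a rectangular pulse of length 0.5 s

fg = np.convolve(f, g)[: len(t)] * dt   # (f * g)(t) on the same grid

# Check one point against the closed form: (f * g)(0.5) = (1 - e^{-2.5}) / 5,
# up to discretization error.
print(fg[500], (1 - np.exp(-2.5)) / 5)
```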
In continuous time we have the transform pair

e^{j\Omega_0 t} \;\longleftrightarrow\; 2\pi\,\delta(\Omega - \Omega_0),

from which the Fourier transforms of sine, cosine, and constant functions can easily be derived. Please note that, in continuous time, the CTFT of a complex exponential is not a train of impulses but just a single impulse.
The Convolution Theorem. The convolution theorem for continuous-time signals exactly mirrors the theorem in section 7.5.2; it states that if h(t) = (f * g)(t), then the Fourier transforms of the three signals are related by H(jΩ) = F(jΩ)G(jΩ). In particular, we can use the convolution theorem to compute

(f * g)(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\Omega)\, G(j\Omega)\, e^{j\Omega t}\, d\Omega \qquad (10.8)
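Here is a small numerical sanity check of the convolution theorem (our sketch, not from the text; the grid-based CTFT approximation F(jΩ) ≈ dt·DFT(f) is an assumption of the discretization):

```python
import numpy as np

# Numerical check of H = F G on a grid: with F(jOmega) ~ dt * DFT(f),
# the DFT of the Riemann-sum convolution equals the product of the DFTs.
dt = 1e-3
t = np.arange(0, 1, dt)
f = np.exp(-3 * t)
g = np.cos(2 * np.pi * 5 * t)

N = 2 * len(t) - 1                         # full (zero-padded) convolution length
h = np.convolve(f, g) * dt                 # time-domain convolution (Riemann sum)
H_direct = np.fft.fft(h, N) * dt           # ~ H(jOmega) sampled in frequency
H_product = (np.fft.fft(f, N) * dt) * (np.fft.fft(g, N) * dt)

print(np.allclose(H_direct, H_product))    # True, up to round-off
```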
The Sinc Function. Let us now consider a prototypical Ω_N-bandlimited signal φ(t) whose Fourier transform is constant over the interval [−Ω_N, Ω_N] and zero everywhere else. If we define the rect function as (see also section 7.7.1):

rect(x) = \begin{cases} 1 & |x| \le 1/2 \\ 0 & |x| > 1/2 \end{cases}

we can express the Fourier transform of the prototypical Ω_N-bandlimited signal as

\Phi(j\Omega) = \frac{\pi}{\Omega_N}\, \mathrm{rect}\!\left(\frac{\Omega}{2\Omega_N}\right) \qquad (10.9)

where the leading factor is just a normalization term. The time-domain expression for the signal is easily obtained from the inverse Fourier transform as

\varphi(t) = \frac{\sin(\Omega_N t)}{\Omega_N t} = \mathrm{sinc}\!\left(\frac{t}{T_s}\right) \qquad (10.10)

where we have used T_s = π/Ω_N and defined the sinc function as

\mathrm{sinc}(x) = \begin{cases} \dfrac{\sin(\pi x)}{\pi x} & x \ne 0 \\ 1 & x = 0 \end{cases}
Figure 10.1: The sinc function in frequency (X(jΩ)) and time (x(t)) domains.
• The sinc function is zero for all integer values of its argument, except in zero. This feature is called the interpolation property of the sinc, as we will see in more detail later.
• The sinc function is square integrable (it has finite energy) but it is not absolutely
integrable (hence the discontinuity of its Fourier transform).
• The scaled sinc function represents the impulse response of an ideal, continuous-time
lowpass filter with cutoff frequency ΩN .
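A direct transcription of the sinc definition (our sketch; numpy's built-in np.sinc uses this same normalized convention):

```python
import numpy as np

def sinc(x):
    """The normalized sinc: sin(pi x)/(pi x) for x != 0, and 1 at x = 0."""
    x = np.asarray(x, dtype=float)
    # Guard the denominator at x = 0; the outer where() then selects 1 there.
    return np.where(x == 0, 1.0,
                    np.sin(np.pi * x) / np.where(x == 0, 1.0, np.pi * x))

# Interpolation property: zero at all nonzero integers, one at zero.
print(sinc(np.arange(-3, 4)))   # ~[0, 0, 0, 1, 0, 0, 0] up to round-off
```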
10.4 The Sampling Theorem
Let us define a new continuous-time signal which places Dirac delta impulses at the sam-
pling locations, i.e.,
x_s(t) = \sum_n x[n]\, \delta(t - nT_s) = \sum_n x_c(nT_s)\, \delta(t - nT_s), \qquad (10.12)
which is a fictitious signal serving as an intermediate step between the continuous and
discrete-time worlds.
We can also write

x_s(t) = \sum_n x_c(nT_s)\, \delta(t - nT_s) = x_c(t) \underbrace{\sum_n \delta(t - nT_s)}_{s(t)}, \qquad (10.13)

i.e.,

x_s(t) = x_c(t)\, s(t). \qquad (10.14)
Hence we see that, from the modulation property of continuous-time Fourier transforms,

X_s(j\Omega) = \frac{1}{2\pi} \underbrace{\int_{-\infty}^{+\infty} X_c(j\theta)\, S(j(\Omega - \theta))\, d\theta}_{X_c(j\Omega)\, *\, S(j\Omega)}. \qquad (10.15)
Now

\sum_n \delta(t - nT_s) \;\overset{\mathrm{CTFT}}{\Longleftrightarrow}\; \frac{2\pi}{T_s} \sum_{k=-\infty}^{+\infty} \delta(\Omega - k\Omega_s), \qquad (10.16)

where Ω_s = 2π/T_s. Using this in (10.15), we see that
X_s(j\Omega) = \frac{1}{2\pi}\, X_c(j\Omega) * S(j\Omega) = \frac{1}{T_s} \sum_{k=-\infty}^{+\infty} X_c(j(\Omega - k\Omega_s)). \qquad (10.17)
Therefore, we see that the sampled signal has a Fourier transform which consists of periodically repeated copies of the original CTFT of x_c(t), shifted by integer multiples of Ω_s and superimposed.
To observe its effects, see Figure 10.2, representing a bandlimited Fourier transform with bandwidth Ω_N; Figure 10.3 is the periodic impulse train S(jΩ); and finally Figure 10.4 is X_s(jΩ), along with X(e^{jω}) in Figure 10.5. From Figure 10.4 it is clear that, to retain information through sampling, we need

\Omega_s \ge 2\Omega_N

so that the replicas of X_c(jΩ) do not overlap when they are added together in (10.17). If this condition is satisfied, it is clear that one can recover x_c(t) from x[n] (or X_c(jΩ) from X(e^{jω})) by taking the inverse CTFT of one of the replicas, i.e., by taking

X_r(j\Omega) = H_r(j\Omega)\, X_s(j\Omega), \qquad (10.19)
where

H_r(j\Omega) = \begin{cases} T_s & |\Omega| \le \Omega_c \\ 0 & \text{else,} \end{cases} \qquad (10.20)

and

\Omega_N \le \Omega_c \le \Omega_s - \Omega_N.
The Sampling Theorem. If x_c(t) is a bandlimited signal with X_c(jΩ) = 0 for |Ω| > Ω_N, then x_c(t) is uniquely determined by its samples x[n] = x_c(nT_s) if Ω_s = 2π/T_s ≥ 2Ω_N.
This gives us an idea on how to reconstruct the original signal from the samples using
(10.19). We use (10.19) to see that
x_r(t) = h_r(t) * x_s(t) = \int h_r(t - \tau)\, x_s(\tau)\, d\tau
       = \int \sum_n \delta(\tau - nT_s)\, x_c(nT_s)\, h_r(t - \tau)\, d\tau
       = \sum_n x_c(nT_s) \int h_r(t - \tau)\, \delta(\tau - nT_s)\, d\tau
       = \sum_n x_c(nT_s)\, h_r(t - nT_s), \qquad (10.21)
where h_r(t) is the inverse CTFT of H_r(jΩ). The form of (10.21) shows the underlying operation to be an interpolation between the sampled values. This point of view will be developed next in an alternate proof of the sampling theorem in terms of Hilbert spaces and basis functions. Finally, note that since we chose Ω_s ≥ 2Ω_N, we have perfect reconstruction, i.e.,

x_r(t) = x_c(t).
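The reconstruction formula (10.21) can be tried out numerically; below is a sketch of ours (the name `sinc_reconstruct` is made up, and the infinite sum is necessarily truncated, so the match is only approximate and degrades near the edges of the sample set):

```python
import numpy as np

def sinc_reconstruct(x_samples, Ts, t):
    """Evaluate x_r(t) = sum_n x[n] sinc((t - n Ts)/Ts), as in (10.21)/(10.28),
    for a finite (truncated) set of samples."""
    n = np.arange(len(x_samples))
    # Outer difference: one row per evaluation instant, one column per sample.
    return np.sinc((t[:, None] - n[None, :] * Ts) / Ts) @ x_samples

# A 50 Hz cosine sampled at Fs = 400 Hz, then reconstructed on a fine grid.
Fs, Ts = 400.0, 1 / 400.0
n = np.arange(64)
x = np.cos(2 * np.pi * 50 * n * Ts)
t = np.linspace(16 * Ts, 48 * Ts, 1000)   # stay away from the truncation edges
xr = sinc_reconstruct(x, Ts, t)
# Small residual (~1e-2), due only to truncating the infinite sum.
print(np.max(np.abs(xr - np.cos(2 * np.pi * 50 * t))))
```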
10.5 The Space of Bandlimited Signals

Consider the family of functions

\varphi^{(n)}(t) = \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right), \qquad n \in \mathbb{Z},

where, once again, T_s = π/Ω_N. Note that we have φ^{(n)}(t) = φ^{(0)}(t − nT_s), so that each basis function is simply a shifted version of the prototype basis function φ^{(0)}. Orthogonality can easily be proved as follows: first of all, because of the symmetry of the sinc function and the time invariance of the convolution, we can write

\langle \varphi^{(n)}(t), \varphi^{(m)}(t) \rangle = (\varphi^{(0)} * \varphi^{(0)})((m - n)T_s) = T_s\, \mathrm{sinc}(m - n) = T_s\, \delta[n - m],
so that {ϕ(n) (t)}n∈Z is orthogonal with normalization factor ΩN /π (or, equivalently, 1/Ts ).
In order to show that the space of Ω_N-bandlimited functions is indeed a Hilbert space, we should also prove that the space is complete. This is a more delicate notion to show⁵ and here it will simply be assumed.

⁵ Completeness of the sinc basis can be proven as a consequence of the completeness of the Fourier basis in the continuous-time domain.
The expansion coefficients of an Ω_N-bandlimited signal x(t) with respect to the sinc basis are

\langle \varphi^{(n)}(t), x(t) \rangle = (\varphi^{(0)} * x)(nT_s)
 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \Phi(j\Omega)\, X(j\Omega)\, e^{j\Omega nT_s}\, d\Omega
 = \frac{\pi}{\Omega_N} \cdot \frac{1}{2\pi} \int_{-\Omega_N}^{\Omega_N} X(j\Omega)\, e^{j\Omega nT_s}\, d\Omega
 = T_s\, x(nT_s);

in the derivation we have first rewritten the inner product as a convolution operation, then applied the convolution theorem, and finally recognized the penultimate line as simply the inverse CTFT of X(jΩ) calculated at t = nT_s. We therefore have the remarkable result that the n-th basis expansion coefficient is proportional to the sampled value of x(t) at t = nT_s. For this reason, the sinc basis expansion is also called sinc sampling.
Reconstruction of x(t) from its projections can now be achieved via the orthonormal basis reconstruction formula (4.40); since the sinc basis is just orthogonal rather than orthonormal, (4.40) needs to take the normalization factor into account and we have:
x(t) = \frac{1}{T_s} \sum_{n=-\infty}^{\infty} \langle \varphi^{(n)}(t), x(t) \rangle\, \varphi^{(n)}(t)
     = \sum_{n=-\infty}^{\infty} x(nT_s)\, \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right) \qquad (10.28)
A few notes:
• The proof of the theorem relies on the properties of the Hilbert space of bandlimited functions, and it is trivial once the existence of an orthogonal basis has been proved.
• Another way to state the above point is to say that the minimum sampling frequency Ω_s for perfect reconstruction is exactly twice the Nyquist frequency, where the Nyquist frequency is the highest frequency of the bandlimited signal; the sampling frequency Ω must therefore satisfy the relationship

\Omega \ge \Omega_s = 2\Omega_N

or, in hertz,

F \ge F_s = 2F_N.
Example 10.1 Let x_c(t) = cos(4000πt) = cos[2π(2000)t], for which the Fourier transform is shown in Fig. 10.6. Thus f_max = 2000 Hz (Ω_N = 4000π) for this case, and we need f_s ≥ 4000 Hz as the sampling rate. Let f_s = 1/T_s = 6000 Hz and Ω_s = 2πf_s. Then

x[n] = x_c(nT_s) = \cos(2\pi\, 2000\, nT_s) = \cos\!\left(2\pi\, \frac{2000}{6000}\, n\right) = \cos\!\left(\frac{2\pi}{3} n\right).

Now for the reconstruction we get

\hat{x}_c(t) = \sum_{n=-\infty}^{+\infty} x[n]\, \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right) = \sum_{n=-\infty}^{+\infty} \cos\!\left(\frac{2\pi}{3} n\right) \frac{\sin(\pi(6000t - n))}{\pi(6000t - n)}.
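A two-line numerical check of Example 10.1 (our sketch, not part of the text):

```python
import numpy as np

# Sampling xc(t) = cos(4000 pi t) at fs = 6000 Hz indeed yields
# x[n] = cos(2 pi n / 3), as derived above.
fs, Ts = 6000.0, 1 / 6000.0
n = np.arange(12)
x = np.cos(4000 * np.pi * n * Ts)
print(np.allclose(x, np.cos(2 * np.pi * n / 3)))   # True
```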
Example 10.2 Fig. 10.9 shows the spectrum X_c(jΩ) of a signal and its sampled version.
Figure 10.8: X_c(jΩ), X_s(jΩ), and X(e^{jω}).
10.6 Interpolation
Interpolation is a procedure whereby we convert a discrete-time sequence x[n] into a continuous-time function x(t). Since this can be done in an arbitrary number of ways, we have to start by formulating some requirements on the resulting signal. At the heart of the interpolation procedure is the requirement that the values of the interpolated function at multiples of T_s be equal to the corresponding points of the discrete-time sequence, i.e.

x(t)\big|_{t = nT_s} = x[n];

the interpolation problem then reduces to “filling the gaps” between these instants.
The general form of the interpolated signal is a linear combination of shifted copies of an interpolating kernel:

x(t) = \sum_{n} x[n]\, I\!\left(\frac{t - nT_s}{T_s}\right), \qquad (10.29)

where I(t) is called the interpolation function (for local interpolation functions the notation I_N(t) is used, and the subscript N indicates how many discrete-time samples besides the current one enter in the computation of the interpolated values for x(t)). The interpolation function must satisfy the fundamental interpolation properties:

I(0) = 1
I(k) = 0 \quad \text{for } k \in \mathbb{Z} \setminus \{0\}, \qquad (10.30)
where the second requirement implies that, no matter what the support of I(t) is, its
values should not affect other interpolation instants. By changing the function I(t), we
can change the type of interpolation and the properties of the interpolated signal x(t).
Note that (10.29) can be interpreted either simply as a linear combination of shifted
interpolation functions or, more interestingly, as a “mixed domain” convolution prod-
uct, where we are convolving a discrete-time signal x[n] with a continuous-time “impulse
response” I(t) scaled in time by the interpolation period Ts .
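The mixed-domain convolution (10.29) translates directly into code; here is a minimal sketch of ours (the names `interpolate` and `rect` are made up) showing that any kernel satisfying (10.30) reproduces the samples exactly at t = nT_s:

```python
import numpy as np

def interpolate(x, I, Ts, t):
    """Generic interpolator x(t) = sum_n x[n] I((t - n Ts)/Ts), as in (10.29);
    I is the interpolation function, e.g. a rect or a triangle."""
    n = np.arange(len(x))
    return I((t[:, None] - n[None, :] * Ts) / Ts) @ x

# A rect kernel (the zero-order hold of the next paragraph); the strict
# inequality avoids double counting at the interval boundaries.
rect = lambda u: (np.abs(u) < 0.5).astype(float)

x = np.array([1.0, -0.5, 2.0, 0.0])
t = np.arange(4) * 0.1                    # exactly the sampling instants
print(interpolate(x, rect, Ts=0.1, t=t))  # [ 1.  -0.5  2.   0. ]
```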
Zero-Order Hold. The simplest choice of interpolating function is piecewise-constant interpolation; here the continuous-time signal is kept constant between discrete sample values, yielding

x(t) = x[n] \quad \text{for } \left(n - \tfrac{1}{2}\right)T_s \le t < \left(n + \tfrac{1}{2}\right)T_s.
[Figure 10.10: zero-order hold interpolation; panels (a) and (b) show x(t), x(nT), and y(t).]
An example is shown in Figure 10.10(a); it is apparent that the resulting function is far
from smooth since the interpolated function is discontinuous. The interpolation function
is simply:
I0 (t) = rect(t)
and the values of x(t) depend only on the current discrete-time sample value.
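A minimal sketch of the zero-order hold (ours, not from the text; the rounding convention at the interval boundaries follows the inequality above):

```python
import numpy as np

def zero_order_hold(x, Ts, t):
    """x(t) = x[n] for (n - 1/2) Ts <= t < (n + 1/2) Ts: hold each sample
    over the interval nearest to it."""
    n = np.floor(t / Ts + 0.5).astype(int)   # round half up = the inequality above
    n = np.clip(n, 0, len(x) - 1)            # guard the edges of a finite signal
    return np.asarray(x)[n]

x = [0.0, 1.0, 0.5, -1.0]
t = np.linspace(0, 3 * 0.1, 7)               # Ts = 0.1
print(zero_order_hold(x, 0.1, t))            # [ 0.  1.  1.  0.5  0.5 -1. -1.]
```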
First-Order Hold. A smoother result is obtained by connecting consecutive samples with straight line segments; the interpolation function in this case is the triangle function

I_1(t) = \begin{cases} 1 - |t| & |t| \le 1 \\ 0 & \text{otherwise,} \end{cases}

which is shown in Figure 10.11(b). It is immediate to verify that I_1(t) satisfies the interpolation properties (10.30).

[Figure 10.11: first-order (linear) interpolation; panels (a) and (b) show x(t), x(nT), and y(t).]
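A sketch of first-order interpolation with the triangle kernel (ours, assuming the triangle form of I₁(t) given above):

```python
import numpy as np

def I1(u):
    """The triangle kernel I1(t) = 1 - |t| for |t| <= 1, zero elsewhere."""
    u = np.abs(np.asarray(u, dtype=float))
    return np.where(u <= 1.0, 1.0 - u, 0.0)

x = np.array([0.0, 1.0, 0.0, -1.0])
Ts = 1.0
t = np.array([0.5, 1.25, 2.5])               # points between the samples
n = np.arange(len(x))
xt = I1((t[:, None] - n * Ts) / Ts) @ x
print(xt)   # [ 0.5   0.75 -0.5 ]: straight-line segments between samples
```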
Higher-Order Interpolators. The zero- and first-order interpolators are widely used in
practical circuits due to their extreme simplicity. These schemes can be extended to higher
order interpolation functions and, in general, IN (t) will be an N -th order polynomial in
t. The advantage of the local interpolation schemes is that, for small N , they can be
easily implemented in practice as causal interpolation schemes (locality is akin to FIR
filtering); their disadvantage is that, because of the locality, their N -th derivative will
be discontinuous. This discontinuity represents a lack of smoothness in the interpolated
function; from a spectral point of view this corresponds to a high frequency energy content,
which is usually undesirable.
Lagrange Interpolation. If maximal smoothness is the goal, we can turn to polynomial interpolation, a technique which has been known for a long time under the name of Lagrange interpolation. Note that a polynomial interpolating a finite set of samples is a maximally smooth function in the sense that it is continuous together with all its derivatives.
Consider a length-(2N+1) discrete-time signal x[n], with n = −N, ..., N. Associate to each sample an abscissa t_n = nT_s; we know from basic algebra that there is one and only one polynomial P(t) of degree 2N which passes through all the 2N+1 pairs (t_n, x[n]), and this polynomial is the Lagrange interpolator. The coefficients of the polynomial could be found by solving the set of 2N+1 equations

\{P(t_n) = x[n]\}_{n = -N, \dots, N} \qquad (10.31)
but a simpler way to determine the expression for P(t) is to use the set of 2N+1 Lagrange polynomials of degree 2N:

L_n^{(N)}(t) = \prod_{\substack{k = -N \\ k \ne n}}^{N} \frac{t - t_k}{t_n - t_k} = \prod_{\substack{k = -N \\ k \ne n}}^{N} \frac{t/T_s - k}{n - k}, \qquad n = -N, \dots, N. \qquad (10.32)
The polynomials L_n^{(N)}(t) for T_s = 1 and N = 2 (i.e. interpolation of 5 points) are plotted in Figure 10.12(a). Using this notation, the global Lagrange interpolator for a given set of abscissa/ordinate pairs can now be written as a simple linear combination of Lagrange polynomials:

P(t) = \sum_{n=-N}^{N} x[n]\, L_n^{(N)}(t) \qquad (10.33)
and it is easy to verify that this is the unique interpolating polynomial of degree 2N in the sense of (10.31). Note that each of the L_n^{(N)}(t) satisfies the interpolation properties (10.30) or, concisely (for T_s = 1),

L_n^{(N)}(m) = \delta[n - m], \qquad m \in \{-N, \dots, N\}.
The interpolation formula, however, cannot be written in the form of (10.29), since the Lagrange polynomials are not simply shifts of a single prototype function. The continuous-time signal x(t) = P(t) is now a global interpolating function for the finite-length discrete-time signal x[n], in the sense that it depends on all the samples in the signal; as a consequence, x(t) is maximally smooth (x(t) ∈ C^∞). An example of Lagrange interpolation for N = 2 is plotted in Figure 10.12(b).
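Formulas (10.32)–(10.33) map directly to code; here is a minimal sketch of ours (the names `lagrange_poly` and `lagrange_interp` are made up):

```python
import numpy as np

def lagrange_poly(n, N, tau):
    """L_n^{(N)}(t) for Ts = 1, as in (10.32): product over k != n of
    (tau - k)/(n - k), with k, n in {-N, ..., N}."""
    L = np.ones_like(np.asarray(tau, dtype=float))
    for k in range(-N, N + 1):
        if k != n:
            L *= (tau - k) / (n - k)
    return L

def lagrange_interp(x, N, tau):
    """P(tau) = sum_n x[n] L_n^{(N)}(tau), as in (10.33); x holds the 2N+1
    samples indexed n = -N, ..., N."""
    return sum(x[n + N] * lagrange_poly(n, N, tau) for n in range(-N, N + 1))

# The unique degree-4 polynomial through 5 points reproduces the samples:
x = np.array([0.2, -1.0, 1.0, 0.5, -0.3])        # n = -2, ..., 2
tau = np.arange(-2, 3, dtype=float)
print(np.allclose(lagrange_interp(x, 2, tau), x))  # True
```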
Figure 10.12: Lagrange interpolation. (a) The polynomials L_n^{(N)}(t) used to compute the interpolation for N = 2 and T_s = 1. Note that L_n^{(N)}(m) is zero except for m = n, where it is 1. (b) Interpolation using 5 points.
The beauty of local interpolation schemes lies in the fact that the interpolated function is simply a linear combination of shifted versions of the same prototype interpolation function I(t); this unfortunately has the disadvantage of creating a continuous-time function which lacks smoothness. Polynomial interpolation, on the other hand, is perfectly smooth, but it only works in the finite-length case and it requires different interpolation functions for different signal lengths. Yet both approaches can come together in a nice mathematical way, and we are now ready to introduce the maximally smooth interpolation scheme for infinite discrete-time signals.
Let us take the expression for the Lagrange polynomial of degree N in (10.32) and rewrite it, with τ = (t − nT_s)/T_s, as

L_n^{(N)}(t) = \prod_{\substack{m = n-N \\ m \ne 0}}^{n+N} \frac{\tau + m}{m},

where we have used the change of variable m = n − k. We can now invoke Euler’s infinite product expansion for the sine function

\sin(\pi\tau) = \pi\tau \prod_{k=1}^{\infty} \left(1 - \frac{\tau^2}{k^2}\right)

(proved in the appendix): letting N grow to infinity and pairing the terms for m and −m, since ((τ+m)/m)·((τ−m)/(−m)) = 1 − τ²/m², we obtain

\lim_{N\to\infty} L_n^{(N)}(t) = \prod_{m=1}^{\infty} \left(1 - \frac{\tau^2}{m^2}\right) = \frac{\sin(\pi\tau)}{\pi\tau} = \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right); \qquad (10.34)

as the number of points grows to infinity, the Lagrange interpolator converges to the sinc interpolator.
Assuming that the DTFT X(e^{jω}) of the discrete-time sequence exists, the spectrum of the interpolated function X(jΩ) can be obtained as follows:

X(j\Omega) = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} x[n]\, \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right) e^{-j\Omega t}\, dt
           = \sum_{n=-\infty}^{\infty} x[n] \int_{-\infty}^{\infty} \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right) e^{-j\Omega t}\, dt;

now we use (10.9) to get the Fourier transform of the scaled and shifted sinc:

X(j\Omega) = \sum_{n=-\infty}^{\infty} x[n]\, \frac{\pi}{\Omega_N}\, \mathrm{rect}\!\left(\frac{\Omega}{2\Omega_N}\right) e^{-jnT_s\Omega} = \frac{\pi}{\Omega_N}\, \mathrm{rect}\!\left(\frac{\Omega}{2\Omega_N}\right) X(e^{jT_s\Omega}),

i.e., the spectrum of the interpolated function is a scaled version of the (periodic) DTFT of the sequence, restricted to the band [−Ω_N, Ω_N].
10.7 Aliasing
The “naive” notion of sampling, as we have seen, is associated with the very practical idea of measuring the instantaneous value of a continuous-time signal at uniformly spaced instants in time. For bandlimited signals, we have seen that this is actually equivalent to an orthogonal decomposition in the space of bandlimited functions, which guarantees that the set of samples x(nT_s) uniquely determines the signal and allows its perfect reconstruction. We now want to address the following question: what happens if we simply sample an arbitrary signal in the “naive” sense (i.e. in the sense of simply taking x[n] = x(nT_s)), and what can we reconstruct from the set of samples thus obtained?
⁷ To find a simple everyday analogy, think of a 45 rpm vinyl record played at either 33 rpm (slow interpolation) or at 78 rpm (fast interpolation), and remember the acoustic effect on the sounds.
Figure 10.13: The sinc function (solid) and its Lagrange approximation
(dashed) as in (10.34) for 100 factors in the product.
What happens, then, if the original signal is not bandlimited or, equivalently, if the sampling frequency is less than twice the Nyquist frequency? We will develop the intuition by starting with the simple case of a single sinusoid, and then move on to a formal demonstration of the aliasing phenomenon. In the following examples we will work with frequencies in hertz, both out of practicality and to give an example of a different form of notation.
[Figure: a continuous-time sinusoid x(t) and its samples x[n]; time axis in seconds (×10⁻³).]
Consider a sinusoid of frequency f₀ Hz sampled with period T_s; the resulting discrete-time frequency is

\omega_b = 2\pi f_0 T_s + 2k\pi,

and we choose k ∈ Z so that ω_b falls in the [−π, π] interval. Seen the other way, all continuous-time frequencies of the form

f = f_b + kF_s, \qquad k \in \mathbb{Z},

are mapped by the sampling onto the same discrete-time frequency. In other words, two continuous-time exponentials which are F_s Hz apart will give rise to the same discrete-time complex exponential, whose amplitude is equal to the sum of the amplitudes of the two original sinusoids.
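A quick numerical confirmation (ours, not from the text) that two frequencies F_s apart produce identical sample sequences:

```python
import numpy as np

# Aliasing: f0 and f0 + k*Fs yield the very same samples, since
# 2*pi*(f0 + k*Fs)*n*Ts = 2*pi*f0*n*Ts + 2*pi*k*n  (Fs*Ts = 1).
Fs, Ts = 1000.0, 1 / 1000.0
f0 = 100.0
n = np.arange(32)

x_low = np.cos(2 * np.pi * f0 * n * Ts)           # 100 Hz
x_high = np.cos(2 * np.pi * (f0 + Fs) * n * Ts)   # 1100 Hz, aliases onto 100 Hz
print(np.allclose(x_low, x_high))                 # True
```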
We can also arrive at an expression for x[n] starting from X_c(jΩ), the Fourier transform of the continuous-time function x_c(t); indeed,

x[n] = x_c(nT_s) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X_c(j\Omega)\, e^{j\Omega n T_s}\, d\Omega. \qquad (10.40)
The idea is to split the integration interval in the above expression into a sum of non-overlapping intervals whose width is equal to the sampling bandwidth Ω_s = 2Ω_N; this stems from the realization that, in the inversion process, all frequencies that are Ω_s apart give indistinguishable contributions to the discrete-time spectrum. We have:
x[n] = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \int_{(2k-1)\Omega_N}^{(2k+1)\Omega_N} X_c(j\Omega)\, e^{j\Omega n T_s}\, d\Omega
     = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \int_{-\Omega_N}^{\Omega_N} X_c(j\Omega - jk\Omega_s)\, e^{j\Omega n T_s}\, d\Omega \qquad (10.41)
     = \frac{1}{2\pi} \int_{-\Omega_N}^{\Omega_N} \left\{ \sum_{k=-\infty}^{\infty} X_c(j\Omega - jk\Omega_s) \right\} e^{j\Omega n T_s}\, d\Omega \qquad (10.42)
     = \frac{1}{2\pi} \int_{-\Omega_N}^{\Omega_N} \tilde{X}_c(j\Omega)\, e^{j\Omega n T_s}\, d\Omega \qquad (10.43)
     = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{1}{T_s}\, \tilde{X}_c\!\left(j\frac{\theta}{T_s}\right) e^{j\theta n}\, d\theta \qquad (10.44)
A few notes on the above derivation:

(a) In (10.41) we have exploited the Ω_s-periodicity of e^{jΩnT_s} (i.e. e^{j(Ω+kΩ_s)nT_s} = e^{jΩnT_s}).

(b) In (10.42) we have interchanged the order of integration and summation. This can be done under fairly broad conditions on x_c(t), for which we refer to the bibliography.

(c) In (10.43) we have defined

\tilde{X}_c(j\Omega) = \sum_{k=-\infty}^{\infty} X_c(j\Omega - jk\Omega_s),

which is the periodized version of X_c(jΩ) with period Ω_s; this
result is a particular version of a more general result in Fourier theory called the Poisson
sum formula. In particular, when xc (t) is ΩN -bandlimited, the copies in the periodized
spectrum do not overlap and the (periodic) discrete-time spectrum between −π and π is
simply
X(e^{j\omega}) = \frac{1}{T_s}\, \tilde{X}_c\!\left(j\frac{\omega}{T_s}\right).
10.7.4 Examples
Figures 10.16 to 10.19 illustrate several examples of the relationship between the continuous-
time spectrum and the discrete-time spectrum. For all figures, the top panel shows the
continuous-time spectrum, with labels indicating the Nyquist frequency (where applicable)
and the sampling frequency. In particular:
• Figure 10.16 shows the result of sampling a bandlimited signal with a sampling frequency in excess of the minimum (twice the Nyquist frequency); in this case we say that the signal has been oversampled. The result is that the copies in the periodized spectrum do not overlap, and the discrete-time spectrum is just a scaled version of the original spectrum (with a support even narrower than the full [−π, π] range because of the oversampling).
• Figure 10.17 shows the result of sampling a bandlimited signal with a sampling
frequency exactly equal to twice the Nyquist frequency; in this case we say that
the signal has been critically sampled. In the periodized spectrum the copies again
do not overlap and the discrete-time spectrum is a scaled version of the original
spectrum.
• Figure 10.18 shows the result of sampling a bandlimited signal with a sampling fre-
quency less than the minimum sampling frequency. Now in the periodized spectrum
the copies do overlap, and the resulting discrete-time spectrum is an aliased version
of the original; the original spectrum cannot be reconstructed from the sampled
signal.
• Finally, Figure 10.19 shows the result of sampling a non-bandlimited signal with a sampling frequency which is chosen as a tradeoff between aliasing and the number of samples per second. The idea is to disregard the low-energy “tails” of the original spectrum so that their aliases do not corrupt the discrete-time spectrum too much. In the periodized spectrum the copies do overlap, and the resulting discrete-time spectrum is an aliased version of the original which is similar to the original; the original spectrum, however, cannot be reconstructed from the sampled signal. In a practical sampling scenario, the correct design choice would be to lowpass filter the original signal (in the continuous-time domain) so as to eliminate the spectral tails beyond ±Ω_s/2.
[Figure 10.16: oversampling. (a) X(jΩ), with Ω_N and Ω_s marked; (b) the periodized spectrum X̃_c(jΩ); (c) the discrete-time spectrum X(e^{jω}), with support |ω/π| ≤ Ω_N/Ω_s.]

[Figure 10.17: critical sampling. (a) X(jΩ); (b) X̃_c(jΩ); (c) X(e^{jω}).]

[Figure 10.18: undersampling (aliasing). (a) X(jΩ); (b) X̃_c(jΩ), with overlapping copies; (c) X(e^{jω}).]

[Figure 10.19: sampling of a non-bandlimited signal. (a) X(jΩ); (b) X̃_c(jΩ); (c) X(e^{jω}).]
(d) Plot the magnitude spectrum |X(ejω )| for Fs = 10, 20, 40 and 100 Hz.
(e) Explain the results obtained in the previous part in terms of aliasing effects.
Solution:
(a) It can easily be seen that X_a(F) = \frac{1}{2}(\delta(F - F_0) + \delta(F + F_0)). Indeed,

X_a(F) = \int \cos(2\pi F_0 t)\, e^{-j2\pi F t}\, dt
       = \frac{1}{2} \left[ \int e^{j2\pi F_0 t} e^{-j2\pi F t}\, dt + \int e^{-j2\pi F_0 t} e^{-j2\pi F t}\, dt \right]
       = \frac{1}{2} \left( \delta(F - F_0) + \delta(F + F_0) \right),

using the hint.
(b) We know that X(e^{j\omega}) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X_a\!\left(j\,\frac{\omega - 2\pi k}{T}\right). Further, we know that ω = 2πF/F_s. Hence,

X(F) = F_s \sum_{k=-\infty}^{\infty} X_a(F - kF_s) = \frac{F_s}{2} \sum_{k=-\infty}^{\infty} \left[ \delta(F - F_0 - kF_s) + \delta(F + F_0 - kF_s) \right].

This means that the spectrum of the continuous-time signal is repeated every F_s when we sample it.
(c) See Fig. 10.21.

(d) See Fig. 10.22, which shows the spectrum X(F) for sampling frequencies F_s = 10, 20, 40, 100 Hz.

(e) When the sampling frequency is less than or equal to 2 times F_0, i.e., when F_s ≤ 2F_0, the spectral replicas overlap and aliasing occurs; the original spectrum can no longer be recovered from the samples.
10.8 Problems
Problem 10.1 Consider a real function f (t) for which the Fourier transform is well
defined:
F(j\Omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\Omega t}\, dt. \qquad (10.46)
Suppose that we only possess a discrete-time version of f (t), that is, we only know the
values of f (t) at times t = n∆, n ∈ Z for a fixed interval ∆. We want to approximate
F (jΩ) with the following expression:
\hat{F}(j\Omega) = \Delta \cdot \sum_{n=-\infty}^{\infty} f(n\Delta)\, e^{-j\Delta n \Omega}. \qquad (10.47)
Observe that F (jΩ) in (10.46) is computed using the values of f (t) for all t, while the
approximation in (10.47) uses only the values of f (t) for a countable number of t.
Consider now the periodic repetition of F (jΩ):
\tilde{F}(j\Omega) = \sum_{n=-\infty}^{\infty} F\!\left(j\left(\Omega + \frac{2\pi}{\Delta}\, n\right)\right). \qquad (10.48)
That is, F (jΩ) is repeated (with possible overlapping) with period 2π/∆ (same ∆ as in the
approximation (10.47)).
(a) Show that the approximation F̂ (jΩ) is equal to the periodic repetition of F (jΩ), i.e.,
F̂ (jΩ) = F̃ (jΩ).
for any value of ∆. (Hint: consider the periodic nature of F̃ (jΩ) and remember that
a periodic function has a Fourier series expansion).
(b) Give a qualitative description of the result.
(c) For F(jΩ) as in Figure 10.23, sketch the resulting approximation F̂(jΩ) for ∆ = 2π/Ω₀, ∆ = π/Ω₀ and ∆ = π/(100/Ω₀).
[Figure 10.23: F(jΩ), supported on [−Ω₀, Ω₀].]
Problem 10.2 One of the standard ways of describing the sampling operation relies on
the concept of “modulation by a pulse train”. Choose a sampling interval Ts and define a
continuous-time pulse train p(t) as:
p(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT_s);

its Fourier transform is P(j\Omega) = \frac{2\pi}{T_s} \sum_{k=-\infty}^{\infty} \delta\!\left(\Omega - k\,\frac{2\pi}{T_s}\right). This is tricky to show, so just take the result as is. The “sampled” signal is simply the modulation of an arbitrary continuous-time signal x(t) by the pulse train:

x_s(t) = p(t)\, x(t).

This sampled signal is still continuous in time but, by the properties of the delta function, it is nonzero only at multiples of T_s; in a sense, x_s(t) is a discrete-time signal brutally embedded in the continuous-time world.
Here’s the question: derive the Fourier transform of xs (t) and show that if x(t) is
bandlimited to π/Ts then we can reconstruct x(t) from xs (t).
Problem 10.3 Consider a real, continuous-time signal xc (t) with the following spectrum:
Xc (jΩ)
Ω0 2Ω0
(a) What is the bandwidth of the signal? What is the minimum sampling period in order
to satisfy the sampling theorem?
(b) Take a sampling period Ts = π/Ω0 ; clearly, with this sampling period, there will be
aliasing. Plot the DTFT of the discrete-time signal xa [n] = xc (nTs ).
(d) With such a scheme available, we can therefore exploit aliasing to reduce the sampling
frequency necessary to sample a bandpass signal. In general, what is the minimum
sampling frequency to be able to reconstruct with the above strategy a real signal
whose frequency support on the positive axis is [Ω0 , Ω1 ] (with the usual symmetry
around zero, of course)?
10.A The Sinc Product Expansion Formula

We want to prove the expansion

\frac{\sin(\pi t)}{\pi t} = \prod_{n=1}^{\infty} \left(1 - \frac{t^2}{n^2}\right).

We will present two proofs; the first was proposed by Euler in 1748 and, while it certainly lacks rigor by modern standards, it has the irresistible charm of elegance and simplicity in that it relies only on basic algebra. The second proof is more rigorous and is based on the theory of Fourier series for periodic functions; relying on Fourier theory, however, hides most of the convergence issues.
Euler’s Proof. Consider the N roots of unity for N odd. They will be z = 1 plus N − 1 complex conjugate roots of the form z = e^{\pm j\omega_N k} for k = 1, ..., (N−1)/2 and ω_N = 2π/N. If we group the complex conjugate roots pairwise, we can factor the polynomial z^N − 1 as

z^N - 1 = (z - 1) \prod_{k=1}^{(N-1)/2} \left(z^2 - 2z\cos(\omega_N k) + 1\right)

and, more generally,

z^N - a^N = (z - a) \prod_{k=1}^{(N-1)/2} \left(z^2 - 2az\cos(\omega_N k) + a^2\right).

Now replace z and a in the above formula by z = (1 + x/N) and a = (1 − x/N); we obtain:
\left(1 + \frac{x}{N}\right)^N - \left(1 - \frac{x}{N}\right)^N
 = \frac{4x}{N} \prod_{k=1}^{(N-1)/2} \left[ \left(1 - \cos(\omega_N k)\right) + \frac{x^2}{N^2}\left(1 + \cos(\omega_N k)\right) \right]
 = \frac{4x}{N} \prod_{k=1}^{(N-1)/2} \left(1 - \cos(\omega_N k)\right) \prod_{k=1}^{(N-1)/2} \left[ 1 + \frac{x^2}{N^2} \cdot \frac{1 + \cos(\omega_N k)}{1 - \cos(\omega_N k)} \right]
 = Ax \prod_{k=1}^{(N-1)/2} \left[ 1 + \frac{x^2\,(1 + \cos(\omega_N k))}{N^2\,(1 - \cos(\omega_N k))} \right]
where A is just the finite product (4/N) \prod_{k=1}^{(N-1)/2} (1 - \cos(\omega_N k)). The value A is also the coefficient of the degree-one term x on the right-hand side, and it can easily be seen from the expansion of the left-hand side that A = 2 for all N; actually, this is an application of Pascal’s triangle, and it was proven by Pascal in the general case in 1654. As N grows large, we have

\left(1 \pm \frac{x}{N}\right)^N \approx e^{\pm x};
at the same time, if N is large, then ω_N = 2π/N is small and, for small values of the angle, the cosine can be approximated as

\cos(\omega) \approx 1 - \omega^2/2,

so that the denominator in the general product term can in turn be approximated as

N^2 \left(1 - \cos((2\pi/N)k)\right) \approx N^2 \cdot \frac{4k^2\pi^2}{2N^2} = 2k^2\pi^2.
By the same token, for large N, the numerator can be approximated as 1 + \cos((2\pi/N)k) \approx 2, and therefore the above expansion becomes (by bringing A = 2 over to the left-hand side):

\frac{e^x - e^{-x}}{2} = x \left(1 + \frac{x^2}{\pi^2}\right)\left(1 + \frac{x^2}{4\pi^2}\right)\left(1 + \frac{x^2}{9\pi^2}\right)\cdots

Finally, we replace x by jπt to obtain

\frac{\sin(\pi t)}{\pi t} = \prod_{n=1}^{\infty} \left(1 - \frac{t^2}{n^2}\right).
Rigorous Proof. Consider the Fourier series expansion of the even function f(x) = \cos(\tau x), periodized over the interval [−π, π]. We have

f(x) = \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos(nx)

with

a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} \cos(\tau x)\cos(nx)\, dx
    = \frac{2}{\pi} \int_{0}^{\pi} \frac{1}{2}\left[\cos((\tau + n)x) + \cos((\tau - n)x)\right] dx
    = \frac{1}{\pi} \left[ \frac{\sin((\tau + n)\pi)}{\tau + n} + \frac{\sin((\tau - n)\pi)}{\tau - n} \right]
    = \frac{2\sin(\tau\pi)}{\pi} \cdot \frac{(-1)^n\, \tau}{\tau^2 - n^2},
so that

\cos(\tau x) = \frac{2\tau \sin(\tau\pi)}{\pi} \left[ \frac{1}{2\tau^2} - \frac{\cos(x)}{\tau^2 - 1} + \frac{\cos(2x)}{\tau^2 - 2^2} - \frac{\cos(3x)}{\tau^2 - 3^2} + \cdots \right].

In particular, for x = π we have

\cot(\pi\tau) = \frac{2\tau}{\pi} \left[ \frac{1}{2\tau^2} + \frac{1}{\tau^2 - 1} + \frac{1}{\tau^2 - 2^2} + \frac{1}{\tau^2 - 3^2} + \cdots \right],
which we can rewrite as

\pi \cot(\pi\tau) - \frac{1}{\tau} = \sum_{n=1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}.