
ELEC4621:

Advanced Digital Signal Processing


Chapter 1: Foundations of Digital
Signal Processing
Dr. D. S. Taubman
March 3, 2011

1 Introduction
For some, the material presented in these notes may seem excessively mathematical: Why do we need to go back to thinking of linear spaces, inner products, etc.? What have matrices got to do with signal processing? Who cares about orthogonality anyway?

For others, however, this presentation may provide insights which link previously disconnected ideas. My purpose is to show you both the simple and the deep. It may be a relief to some of you to see that discrete convolution is such a mind-bogglingly simple concept, that linear operations on sequences of samples are directly analogous to matrix operations on finite length vectors, and that there is nothing mysterious at all about Parseval's theorem. On the other hand, I would like you to try to grapple with the significance of aliasing, the interpretation of frequency in both the discrete and continuous domains, and the extent to which discrete Fourier transforms are true frequency transforms. Some of these questions will be answered more fully as the subject progresses, especially when we come to consider random processes, auto-correlation and power spectral densities.

Lecture notes will be handed out regularly in lectures, but the lectures themselves will sometimes follow the material in a different order to that in the notes. This is because the methodical style appropriate to a written presentation is not usually helpful in lectures. For this reason, you are recommended to take your own notes during lectures to remind yourself of the flow of ideas. There is very little point in trying to write everything down. Instead, try to focus on the flow of the lecture, sketching connections, "mind-maps," etc., which emerge as you go. This way, you will have a chance of taking away useful concepts, which will last even after you forget the details.

© Taubman, 2003

2 Linear Spaces

2.1 Definitions

A linear space is a set, $S$, whose elements we identify as vectors, $v$, having the following properties:

1. Closure under addition: if $v_1, v_2 \in S$, then $v_1 + v_2 \in S$.

2. Closure under scalar multiplication: if $v \in S$ and $a \in \mathbb{R}$, then $av \in S$.

An inner product space is a vector space endowed with an inner product, $\langle v, w \rangle \in \mathbb{R}$, satisfying

1. Symmetry: $\langle v, w \rangle = \langle w, v \rangle$. Note, however, that if the linear space is defined over the field of complex numbers, $\mathbb{C}$, instead of $\mathbb{R}$, then we have $\langle v, w \rangle = \langle w, v \rangle^*$, where $\alpha^*$ denotes the complex conjugate of $\alpha$.

2. Linearity: $\langle \alpha v, w \rangle = \alpha \langle v, w \rangle$ and $\langle v + u, w \rangle = \langle v, w \rangle + \langle u, w \rangle$. By symmetry, we must also have $\langle v, \alpha w \rangle = \alpha \langle v, w \rangle$ and $\langle v, w + u \rangle = \langle v, w \rangle + \langle v, u \rangle$, unless the vector space is defined over $\mathbb{C}$, in which case $\langle v, \alpha w \rangle = \alpha^* \langle v, w \rangle$.

3. Cauchy-Schwarz inequality: $|\langle v, w \rangle| \le \sqrt{\langle v, v \rangle \cdot \langle w, w \rangle}$, with equality if and only if $v = \lambda w$ for some $\lambda \in \mathbb{R}$.

Some definitions and consequences:

Norms: We write $\|v\| = \sqrt{\langle v, v \rangle}$ for the norm of vector, $v$.

• It follows that $\|v\| \ge 0$, with equality if and only if $v = 0$.

• The Cauchy-Schwarz inequality may be written $|\langle v, w \rangle| \le \|v\| \cdot \|w\|$.

Triangle inequality: $\|v + w\| \le \|v\| + \|w\|$.

Orthogonal vectors: We say that $v$ and $w$ are orthogonal if $\langle v, w \rangle = 0$.

Orthonormal vectors: We say that $v$ and $w$ are orthonormal if $\langle v, w \rangle = 0$ and $\|v\| = \|w\| = 1$.

Orthonormal expansions: Let $\{\psi_i\}$ be an orthonormal set of vectors, i.e. $\psi_i, \psi_j$ are orthonormal for any $i \ne j$. Moreover, suppose that vector $v$ can be expressed as a linear combination of these orthonormal vectors, i.e. $v = \sum_i a_i \psi_i$. Then $a_i = \langle v, \psi_i \rangle$ and $\|v\|^2 = \langle v, v \rangle = \sum_i |a_i|^2$.

Linear independence: We say that a collection of vectors, $\psi_i$, are linearly independent if the only combination of scale factors, $a_i \in \mathbb{R}$, such that $\sum_i a_i \psi_i = 0$, is $a_i = 0, \forall i$.

• It follows that any collection of non-zero, mutually orthogonal vectors must be linearly independent.
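These properties are easy to verify numerically. The following plain-Python sketch (the orthonormal set and the test vector are illustrative choices, not taken from the notes) checks that the expansion coefficients are recovered as inner products, and that the squared norm equals the sum of the squared coefficients:

```python
import math

def inner(v, w):
    # Real inner product <v, w> = sum_i v_i * w_i
    return sum(vi * wi for vi, wi in zip(v, w))

# An orthonormal set in R^3 (a rotated version of the standard basis)
s = 1 / math.sqrt(2)
psi = [(s, s, 0.0), (s, -s, 0.0), (0.0, 0.0, 1.0)]

v = (3.0, -1.0, 2.0)

# Expansion coefficients a_i = <v, psi_i>
a = [inner(v, p) for p in psi]

# Reconstruction: v = sum_i a_i * psi_i
v_rec = tuple(sum(a[i] * psi[i][k] for i in range(3)) for k in range(3))

# Norm identity for orthonormal expansions: ||v||^2 = sum_i |a_i|^2
norm_sq = inner(v, v)
coef_sq = sum(ai * ai for ai in a)
```

Note that the reconstruction works only because the set is orthonormal; for a general basis the coefficients are not simple inner products.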

Basis: We say that the collection of vectors, $\{\psi_i\}$, forms a basis for $S$ if the vectors are linearly independent and every vector, $v \in S$, may be written as a linear combination of the basis vectors, $\psi_i$, i.e. $v = \sum_i a_i \psi_i$ for some $a_i \in \mathbb{R}$. The number of elements in the basis, $\{\psi_i\}$, is the dimension of the vector space. If the basis consists of infinitely many elements then we have an infinite dimensional vector space.

• With respect to any particular basis, vectors in $S$ may be equivalently expressed in terms of the scaling factors, $a_i$.

2.2 Important Examples

2.2.1 Finite Sequences (column/row vectors)

Consider the set of all column vectors ($n$-tuples),

$$v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}$$

where $v_i \in \mathbb{R}$. In this case, we frequently use boldface, $\mathbf{v}$, to denote a vector, and $v_i$ to denote its elements.

The set of all such $n$-tuples forms an $n$-dimensional vector space where vector addition and scalar multiplication are performed element-wise. We define inner products to be the familiar dot-product, i.e.

$$\langle \mathbf{v}, \mathbf{w} \rangle = \sum_{i=1}^{n} v_i w_i = \mathbf{w}^t \mathbf{v} = \begin{pmatrix} w_1 & w_2 & \cdots & w_n \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}$$

If we define the vector space over $\mathbb{C}$ instead of $\mathbb{R}$, we must define the inner product by

$$\langle \mathbf{v}, \mathbf{w} \rangle = \sum_{i=1}^{n} v_i w_i^* = \mathbf{w}^* \mathbf{v}$$

Note that $\mathbf{w}^*$ denotes the conjugate transpose of vector $\mathbf{w}$. A trivial basis for the linear space of $n$-tuples is

$$\psi_i = \begin{pmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{pmatrix}^t$$

with the 1 in the $i$'th position, which is clearly orthonormal. The orthonormal expansion of any vector, $\mathbf{v}$, relative to this basis may be written:

$$\mathbf{v} = \sum_i a_i \psi_i, \quad \text{where } a_i = \langle \mathbf{v}, \psi_i \rangle = v_i$$

That is, the coefficients, $a_i$, are simply the sample values, $v_i$.
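The complex inner product above can be exercised directly with Python's built-in complex type; the vectors here are arbitrary illustrative values:

```python
# Complex inner product of n-tuples: <v, w> = sum_i v_i * conj(w_i) = w* v
def cinner(v, w):
    return sum(vi * wi.conjugate() for vi, wi in zip(v, w))

v = [1 + 2j, 3 - 1j]
w = [2 - 1j, 1j]

vw = cinner(v, w)
wv = cinner(w, v)

# <v, v> should be real and non-negative: here |1+2j|^2 + |3-1j|^2 = 15
vv = cinner(v, v)
```

Note how conjugate symmetry, $\langle v, w \rangle = \langle w, v \rangle^*$, and the real, non-negative value of $\langle v, v \rangle$ both follow from placing the conjugate on the second argument.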

2.2.2 Innite Sequences (discrete signals)


This is a natural extension from nite-dimensional vectors discussed before.
Consider the set of sequences,
v = (. . . , v0 , v1 , v2 , v3 , . . .)
where vi 5 R.
The set of all such sequences forms a vector space where again vector addition
and scalar multiplication are performed element-wise. If we restrict ourselves to
the set of all square-summable sequences, we can dene the inner product by
[
kv, wl = vi wi
i

The sequence v is said to be ““square-summable”” if


[ 2
|vi | < 4
i

If we dene the vector space over C instead of R, we dene the inner product
instead by [
kv, wl = vi wi
i
Again, an obvious orthonormal basis for the space is the set of singleton se-
quences,  
··· 0 0 1 0 0 ···
i =  ~} € (1)
i

2.2.3 Functions (continuous signals)

Consider the set of functions, $f = f(x)$. These form a vector space if we define addition and scalar multiplication of functions point-wise, i.e.

$$f + g = h \iff h(x) = f(x) + g(x), \ \forall x$$
$$\alpha f = h \iff h(x) = \alpha f(x), \ \forall x$$

If we restrict our attention to the set of all square-integrable functions, we can define the inner product by

$$\langle f, g \rangle = \int f(x) g(x) \cdot dx$$

If we define the vector space over $\mathbb{C}$ instead of $\mathbb{R}$, we must use the following more general inner product expression instead:

$$\langle f, g \rangle = \int f(x) g^*(x) \cdot dx$$

Thus, the norm of a signal, $\|f\| = \sqrt{\langle f, f \rangle}$, may be interpreted as the square root of its energy.

3 Linear Systems

A linear system is a function, $H$, which maps vectors $v \in S$ to vectors, $H(v) = v' \in S'$, having the linearity property:

$$H(\alpha v + \beta w) = \alpha H(v) + \beta H(w), \quad \forall v, w \in S, \ \alpha, \beta \in \mathbb{R}$$

Often $S = S'$, but there are many useful examples where this is not so. Consider, for example, the unit sampling operator which maps continuous signals $v$ into sequences $v'$, such that

$$v'_i = v(i)$$

The dual of sampling is interpolation, which maps sequences to continuous signals. In both cases, $S$ and $S'$ are different linear spaces. The Discrete Time Fourier Transform (DTFT) is another example, mapping discrete sequences to functions on the interval $\omega \in (-\pi, \pi)$. We shall consider each of these in detail shortly.

3.1 Finite-Dimensional Linear Systems

If $S$ and $S'$ are finite-dimensional vector spaces with dimension $n$ and $n'$, respectively, then the linear system may be represented using a matrix,

$$v' = \begin{pmatrix} v'_1 \\ v'_2 \\ \vdots \\ v'_{n'} \end{pmatrix} = H(v) = \mathbf{H} \cdot v = \begin{pmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,n} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ h_{n',1} & h_{n',2} & \cdots & h_{n',n} \end{pmatrix} \cdot \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}$$

It is worth noting that each element of the output vector may be written as an inner product:

$$v'_i = \mathbf{h}_i^* \cdot v = \langle v, \mathbf{h}_i \rangle$$

where $\mathbf{h}_i^*$ is the $i$'th row of $\mathbf{H}$, and the superscript, $*$, denotes transposition and complex conjugation together. When complex numbers are involved, the transpose operation is almost always accompanied by complex conjugation in virtually every useful mathematical relationship.

Figure 1: Behaviour of a discrete-time linear time invariant operator.

The output vector may be expressed as a linear combination of $n$ "system responses":

$$v' = \sum_{i=1}^{n} v_i \cdot \mathbf{h}_i$$

where $\mathbf{h}_i$, the $i$'th column of $\mathbf{H}$, is the response of the system to an input vector which is zero everywhere except in the $i$'th element, where it holds a 1.

Thus, the columns of the matrix operator may be interpreted as its system responses, while the rows may be understood as combinatorial weights. The operator itself may be interpreted and implemented either by summing the weighted system responses, or by forming weighted sums of the input samples.
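The two views can be checked against each other in a few lines. In this sketch (matrix and input chosen arbitrarily for illustration), the row-wise inner products and the weighted sum of columns produce the same output:

```python
# A 2x3 real matrix operator H acting on v in R^3
H = [[1.0, 2.0, 0.0],
     [0.0, -1.0, 3.0]]
v = [2.0, 1.0, -1.0]

def apply_rows(H, v):
    # Each output element is an inner product of v with a row of H
    return [sum(hij * vj for hij, vj in zip(row, v)) for row in H]

def apply_cols(H, v):
    # Equivalently, a weighted sum of the columns ("system responses"):
    # column i is the response to an input which is 1 in element i, 0 elsewhere
    n_out, n_in = len(H), len(H[0])
    y = [0.0] * n_out
    for i in range(n_in):
        for r in range(n_out):
            y[r] += v[i] * H[r][i]
    return y

y_rows = apply_rows(H, v)
y_cols = apply_cols(H, v)
```

Both implementations touch every matrix entry exactly once; the choice between them is a matter of memory-access pattern rather than arithmetic cost.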

3.2 Discrete-Time LTI Systems

Let $x \equiv x[n]$ be a discrete-time signal (i.e. a sequence, or a function on $\mathbb{Z}$) and $H$ a linear system with $H(x) = y$.

We write $x_k \equiv x[n - k]$ for the signal obtained by delaying $x$ by $k$ time steps. $H$ is LTI if for all input signals $x$, we have

$$H(x_k) = H(x)_k = y_k$$

That is, the response of the system to a delayed input signal is a correspondingly delayed version of the response to the original signal. This is illustrated in Figure 1.

Recall that any sequence $x$ may be written as a linear combination of the orthonormal basis sequences $\delta_i$ defined in equation (1), i.e.

$$x = \sum_i a_i \delta_i = \sum_i \langle x, \delta_i \rangle \delta_i = \sum_i x[i] \delta_i$$

Figure 2: Convolution at work.

Since $H$ is a linear operator we have

$$y = H(x) = H\left( \sum_i x[i] \delta_i \right) = \sum_i x[i] \cdot H(\delta_i)$$

Finally, note that $\delta_i \equiv \delta_i[n] = \delta[n - i]$, where $\delta = \delta_0$ is the unit impulse sequence. That is, $\delta_i$ is a delayed version of the unit impulse sequence. Since $H$ is time-invariant, we must have $H(\delta_i) = H(\delta)_i$ and so

$$y = \sum_i x[i] \cdot H(\delta)_i = \sum_i x[i] h_i$$

or, substituting $k = n - i$,

$$y[n] = \sum_i x[i] h[n - i] = \sum_k h[k] x[n - k] = (h * x)[n] \tag{2}$$

Thus, the LTI system is entirely characterized by its response to a unit impulse, $H(\delta) = h \equiv h[n]$. The summation in equation (2) is known as convolution. Figure 2 illustrates the convolution principle.
As for nite-dimensional linear systems, this convolution relationship may
c
?Taubman, 2003 ELEC4621: Foundations Page 8

0 t 0 o t
H H

0 t 0 o t

Figure 3: Behaviour of a continuous-time linear time invariant operator.

alternatively be expressed in terms of inner products. In particular, we have


[ [
y[k] = x[i]h[k  i] = x[i]˜h̃[i  k] = kx, ˜h̃k l
i i

where ˜h̃  ˜h̃ [n] = h[n] is a time-reversed version of the impulse response, h 
h[n]. Thus, the kth output sample may be obtained by delaying (sliding to the
right) ˜h̃ by k time steps and taking the inner product (pairwise multiplication
and addition) of this delayed sequence with the input sequence x.
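The equivalence of the direct convolution sum and the sliding inner-product view is easy to confirm numerically; the sequences below are illustrative (the 3-tap input echoes the weights used in Figure 2):

```python
def convolve(x, h):
    # Direct form: y[n] = sum_i x[i] * h[n - i], accumulated response-by-response
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for k, hk in enumerate(h):
            y[i + k] += xi * hk
    return y

def convolve_inner(x, h):
    # Inner-product form: y[n] = <x, h_tilde_n>, where h_tilde[m] = h[-m]
    # slid n steps to the right, i.e. h_tilde_n[m] = h[n - m]
    ny = len(x) + len(h) - 1
    y = []
    for n in range(ny):
        y.append(sum(x[m] * h[n - m]
                     for m in range(len(x)) if 0 <= n - m < len(h)))
    return y

x = [0.3, 0.7, 0.4]
h = [1.0, 2.0]
y1 = convolve(x, h)
y2 = convolve_inner(x, h)
```

Both loops perform identical multiply-accumulates; they differ only in whether the outer loop runs over inputs (scattering responses) or over outputs (gathering a windowed inner product).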

3.3 Continuous-Time LTI Systems

Let $f \equiv f(t)$ be a continuous-time signal (i.e. a function on $\mathbb{R}$) and $H$ a linear system with $H(f) = g$. We write $f_\tau \equiv f(t - \tau)$ for the signal obtained by delaying $f$ by an amount $\tau$. $H$ is LTI if for all input signals, $f$, we have

$$H(f_\tau) = H(f)_\tau = g_\tau$$

That is, the response of the system to a delayed input signal is a correspondingly delayed version of the response to the original signal. This is illustrated in Figure 3.

We can write

$$f(t) = \int f(\tau) \cdot \delta(t - \tau) \cdot d\tau \tag{3}$$

where $\delta(t)$ is the Dirac-delta function. This is not a real function, but a distribution: it "measures" a function at some point. The Dirac-delta function is defined, in fact, by equation (3). We think of $\delta \equiv \delta(t)$ as the unit impulse signal, even though it is not a physically realizable signal. By analogy with the discrete case, we have

$$g = H(f) = \int f(\tau) \cdot h_\tau \cdot d\tau$$

where $h = H(\delta)$ is the response of the LTI system to the unit impulse. This analogy may be made rigorous, but we do not attempt to do so here. Writing the above equation in full we have

$$g(t) = \int f(\tau) \cdot h_\tau(t) \cdot d\tau = \int f(\tau) \cdot h(t - \tau) \cdot d\tau = \int h(\tau) \cdot f(t - \tau) \cdot d\tau = (h * f)(t)$$

which is the well-known convolution integral. The operation of the convolution integral may be pictured in a manner which is analogous to the discrete-time convolution depicted in Figure 2.

As in the discrete case, the convolution integral may be performed using inner products. Specifically, we have

$$g(p) = \int f(\tau) \cdot h(p - \tau) \cdot d\tau = \int f(\tau) \cdot \tilde{h}_p(\tau) \cdot d\tau = \langle f, \tilde{h}_p \rangle$$

where, again, $\tilde{h} \equiv h(-t)$ is a time-reversed version of the impulse response, $h \equiv h(t)$. This means that the output value at time $p$ may be obtained by delaying (sliding to the right) $\tilde{h}$ by time $p$ and taking the inner product of this delayed function with the input function $f$.

4 Fourier Transforms

4.1 Fourier Series

Consider periodic functions, $x(t)$, with period $T_0$. Suppose also that the "Dirichlet" conditions are satisfied, i.e.

1. $x(t)$ is absolutely integrable over one period, i.e. $\int_0^{T_0} |x(t)| \cdot dt < \infty$;

2. $x(t)$ has a finite number of extrema (local maxima or minima) within each period; and

3. $x(t)$ has at most a finite number of discontinuities within each period.

Then $x(t)$ may be expanded in a Fourier series with

$$x(t) = \sum_{n=-\infty}^{\infty} X[n] e^{jn2\pi F_0 t}$$

$$X[n] = \frac{1}{T_0} \int_0^{T_0} x(t) e^{-jn2\pi F_0 t} \cdot dt$$

where the fundamental frequency, $F_0 = \frac{1}{T_0}$.

Note that this may also be written as

$$x = \sqrt{T_0} \cdot \sum_{n=-\infty}^{\infty} X[n] \psi_n$$

$$X[n] = \frac{1}{\sqrt{T_0}} \langle x, \psi_n \rangle$$

where the functions $\psi_n \equiv \psi_n(t)$ are defined by

$$\psi_n(t) = \frac{1}{\sqrt{T_0}} e^{j2\pi n F_0 t}$$

Here, we are dealing with a linear space of periodic functions and the relevant inner product is given by

$$\langle v, w \rangle = \int_0^{T_0} v(t) w^*(t) \cdot dt$$

It is easily verified that the complex sinusoids $\psi_n$ are mutually orthogonal with unit norm. It follows that the Fourier Series is essentially an orthonormal expansion (up to a scaling of the coefficients by $\sqrt{T_0}$). Moreover, since all periodic signals $x$ which satisfy the Dirichlet conditions with period $T_0$ can be written as linear combinations of the $\psi_n$, these must constitute an orthonormal basis.

Recall that for orthonormal expansions, the squared norm of any vector is identical to the sum of the squares of the coefficients in the expansion. That is,

$$\int_0^{T_0} |x(t)|^2 \cdot dt = \langle x, x \rangle = \sum_{n=-\infty}^{\infty} |\langle x, \psi_n \rangle|^2 = T_0 \sum_{n=-\infty}^{\infty} |X[n]|^2$$

This is known as Parseval's relationship for Fourier series. We may identify the left hand side of the above equation with the energy in a single period. Here, we think of $x(t)$ as a voltage waveform across a $1\,\Omega$ resistor, so that $x^2(t)$ is the instantaneous power dissipated in the resistor and its integral is the dissipated energy. Alternatively, we may write

$$\frac{1}{T_0} \int_0^{T_0} |x(t)|^2 \cdot dt = \sum_{n=-\infty}^{\infty} |X[n]|^2$$
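Parseval's relationship can be verified numerically for a simple trigonometric signal. In the sketch below (the signal and the quadrature resolution are illustrative choices), the power computed in the time domain matches the sum of the squared Fourier coefficients:

```python
import math, cmath

T0 = 2 * math.pi
N = 4096  # quadrature points; exact for low-order trigonometric polynomials

def x(t):
    # Illustrative periodic signal with period 2*pi; power = 1/2 + (0.5^2)/2
    return math.cos(t) + 0.5 * math.sin(2 * t)

def X(n):
    # X[n] = (1/T0) * integral_0^T0 x(t) e^{-j n 2 pi F0 t} dt, with F0 = 1/T0
    dt = T0 / N
    return sum(x(k * dt) * cmath.exp(-1j * n * k * dt) for k in range(N)) * dt / T0

dt = T0 / N
# Power in the time domain: (1/T0) * integral |x(t)|^2 dt
power_time = sum(x(k * dt) ** 2 for k in range(N)) * dt / T0

# Power from the coefficients; only |n| <= 2 are non-zero for this signal
power_freq = sum(abs(X(n)) ** 2 for n in range(-3, 4))
```

The rectangular rule over a full period is exact here because the integrands are trigonometric polynomials of degree far below the number of quadrature points.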

where the left hand side represents the power of the periodic signal.

Although we have considered periodic functions, the Fourier Series expansion is equally applicable to the class of finite support signals, defined on the interval $t \in [0, T_0]$, where we have only to view the finite support signal as a single period of a hypothetical infinite support signal; the integrals and inner products defined above remain unchanged in this case. Alternatively, the Fourier Series expansion is applicable to functions defined on any interval of length $T_0$, e.g. $t \in \left[ -\frac{T_0}{2}, \frac{T_0}{2} \right]$.

4.1.1 Normalized Fourier Series

A common problem in engineering is that the connections between related concepts can easily get obscured by the notation. In particular, carrying absolute time scales around in the various formulae only makes them harder to recognize and harder to remember, without offering any particular advantage. In a suitable time scale (why do we have to use seconds?), we can always think of the signal as having a period of $T_0 = 2\pi$. This is probably the most natural period to adopt for a normalized Fourier series, since it is the period of the basic cosine and sine functions. In this normalized framework, we may rewrite the above relationships as follows:

$$x(t) = \sum_{n=-\infty}^{\infty} X[n] e^{jnt}$$

$$X[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} x(t) e^{-jnt} \cdot dt$$

$$\sum_{n=-\infty}^{\infty} |X[n]|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |x(t)|^2 \cdot dt$$

4.2 Discrete-Time Fourier Transform (DTFT)

We have seen that the Fourier Series represents an invertible mapping between the inner product space formed by all finite support functions $x(t)$ defined on $t \in (-\pi, \pi)$ which satisfy the Dirichlet conditions, and infinite length two-sided sequences $X[n]$. By direct substitution, then, it is easy to verify that every sequence $x[n]$ may be represented by a finite support function, $\hat{x}(\omega)$, defined on $\omega \in (-\pi, \pi)$, as follows:

$$\hat{x}(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}$$

$$x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{x}(\omega) e^{j\omega n} \cdot d\omega \tag{4}$$

This is known as the DTFT. Mathematically, it is no different from the Fourier Series; however, the interpretation which we shall apply is quite different. The interpretation will be that $\hat{x}(\omega)$ is also the true Fourier transform of a continuous signal $x(t)$, bandlimited to the range $\omega = 2\pi f \in (-\pi, \pi)$, having unit-spaced samples $x[n]$. It is not immediately obvious from the above why this is the correct interpretation. To make the interpretation stick, we will need to press on a little further.

Continuing the relationship between the Fourier Series and DTFT, we see that the family of functions $\{\psi_n\}$, with

$$\psi_n(\omega) = \frac{1}{\sqrt{2\pi}} e^{-jn\omega}$$

forms an orthonormal basis for functions satisfying the Dirichlet conditions which are defined on $\omega \in (-\pi, \pi)$, and the DTFT may be written as

$$\hat{x}(\omega) \equiv \hat{x} = \sum_{n=-\infty}^{\infty} \langle \hat{x}, \psi_n \rangle \psi_n = \sqrt{2\pi} \sum_{n=-\infty}^{\infty} x[n] \psi_n$$

$$x[n] = \frac{1}{\sqrt{2\pi}} \langle \hat{x}, \psi_n \rangle$$

From the properties of orthonormal expansions, then, we have

$$\sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \|\hat{x}\|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |\hat{x}(\omega)|^2 \cdot d\omega$$

which is known as Parseval's relationship for discrete-time signals.
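The same check works for the DTFT version. For a finite sequence, the DTFT can be evaluated on a dense grid and the frequency-domain energy integral compared with the time-domain sum (the sequence is an arbitrary illustrative choice):

```python
import math, cmath

x = [1.0, -2.0, 0.5, 3.0]  # a short sequence, zero elsewhere

def dtft(x, w):
    # x_hat(w) = sum_n x[n] e^{-j w n}
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

# Left-hand side of Parseval: sum_n |x[n]|^2
energy_time = sum(xn ** 2 for xn in x)

# Right-hand side: (1/2pi) * integral_{-pi}^{pi} |x_hat(w)|^2 dw,
# evaluated with the rectangular rule over one full period
M = 8192
dw = 2 * math.pi / M
energy_freq = sum(abs(dtft(x, -math.pi + k * dw)) ** 2
                  for k in range(M)) * dw / (2 * math.pi)
```

Because $|\hat{x}(\omega)|^2$ is itself a trigonometric polynomial of low degree, the periodic rectangular rule is exact to machine precision.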

4.3 Continuous-Time Fourier Transform (CTFT or just FT)

Suppose the signal $f(t)$ has finite energy. That is, $\int_{-\infty}^{\infty} f^2(t)\, dt < \infty$. Then the following Fourier transform (FT) relationship holds:

$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t) e^{-j\omega t} \cdot dt$$

$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega) e^{j\omega t} \cdot d\omega \tag{5}$$

4.3.1 Some Properties of the Fourier Transform

Conjugate Symmetry: If $f(t)$ is a real-valued function,

$$\hat{f}(-\omega) = \hat{f}^*(\omega),$$

where $a^*$ denotes the complex conjugate of $a$.

Time shift: Let $f_\tau(t) = f(t - \tau)$, i.e. $f_\tau(t)$ is obtained by delaying the signal $f(t)$ by time $\tau$; then

$$\hat{f}_\tau(\omega) = e^{-j\omega\tau} \hat{f}(\omega).$$

Convolution: Let $g(t) = (h * f)(t)$ be the convolution of signal $f(t)$ and the impulse response $h(t)$ of an LTI filter; then

$$\hat{g}(\omega) = \hat{h}(\omega) \hat{f}(\omega).$$

We say that $\hat{h}(\omega)$ is the transfer function of the LTI system.

Sinusoidal modulation: Let $f_m(t) = f(t) e^{j\omega_0 t}$; then $\hat{f}_m(\omega) = \hat{f}(\omega - \omega_0)$. Note that we are usually dealing with real-valued signals $f(t)$, but $f_m(t)$ is not generally real-valued. For real signals, let $f_c(t) = f(t) \cos \omega_0 t = \frac{1}{2} f(t) \left( e^{j\omega_0 t} + e^{-j\omega_0 t} \right)$; then

$$\hat{f}_c(\omega) = \frac{1}{2} \left( \hat{f}(\omega - \omega_0) + \hat{f}(\omega + \omega_0) \right)$$

General modulation: Let $g(t) = f(t) \cdot m(t)$; then

$$\hat{g}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{m}(\phi) \hat{f}(\omega - \phi) \cdot d\phi = \frac{1}{2\pi} \left( \hat{m} * \hat{f} \right)(\omega)$$

Parseval's relation:

$$\int_{-\infty}^{\infty} |f(t)|^2 \cdot dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left| \hat{f}(\omega) \right|^2 \cdot d\omega$$

Generalized Parseval's relation:

$$\int_{-\infty}^{\infty} f(t) g^*(t) \cdot dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega) \hat{g}^*(\omega) \cdot d\omega$$

Figure 4: Unit pulse, $x(t) = \Pi(t)$, and its Fourier transform, $X(f) = \hat{x}(\omega)|_{\omega = 2\pi f}$.

Differentiation: Let $g(t) = \frac{d}{dt} f(t)$; then

$$\hat{g}(\omega) = j\omega \cdot \hat{f}(\omega)$$

Thus, differentiating $f(t)$ is equivalent to applying an LTI system (filter) whose transfer function is $\hat{h}(\omega) = j\omega$.

4.3.2 Important Examples

Impulse: $f(t) = \delta(t) \implies \hat{f}(\omega) = 1, \forall \omega$. Similarly, $f(t) = 1, \forall t \implies \hat{f}(\omega) = 2\pi\delta(\omega)$.

Rectangular pulse (time domain): Define the "pulse" function,

$$\Pi(t) = \begin{cases} 1 & \text{if } |t| < \frac{1}{2} \\ 0 & \text{if } |t| \ge \frac{1}{2} \end{cases}$$

Its Fourier transform, $\hat{\Pi}(\omega)$, is given by

$$\hat{\Pi}(\omega) = \operatorname{sinc}\left( \frac{\omega}{2} \right) = \frac{\sin \omega/2}{\omega/2}$$

These are illustrated in Figure 4. The way to remember this is that the nulls of the Fourier transform of a unit length pulse lie at multiples of $\omega = 2\pi$. This is because these frequencies correspond to sinusoids which execute a whole number of cycles within the unit duration of the pulse, and hence integrate out.

Rectangular pulse (Fourier domain): Let $\hat{x}(\omega)$ be the unit pulse on the interval, $\omega \in (-\pi, \pi)$,

$$\hat{x}(\omega) = \begin{cases} 1 & \text{if } |\omega| < \pi \\ 0 & \text{if } |\omega| \ge \pi \end{cases}$$

Figure 5: Triangle, $x(t) = \Lambda(t)$, and its Fourier transform, $X(f) = \hat{x}(\omega)|_{\omega = 2\pi f}$.

Its inverse Fourier transform, $x(t)$, is given by

$$x(t) = \operatorname{sinc}(\pi t) = \frac{\sin \pi t}{\pi t}$$

Triangular waveform: Define the triangular waveform,

$$\Lambda(t) = \begin{cases} 1 - |t| & \text{if } |t| < 1 \\ 0 & \text{if } |t| \ge 1 \end{cases}$$

Now observe that $\Lambda(t) = (\Pi * \Pi)(t)$, so that

$$\hat{\Lambda}(\omega) = \operatorname{sinc}^2\left( \frac{\omega}{2} \right)$$

These are illustrated in Figure 5.

Raised cosine: Define the raised cosine function,

$$R_c(t) = \begin{cases} \frac{1}{2}(1 + \cos \pi t) & \text{if } |t| < 1 \\ 0 & \text{if } |t| \ge 1 \end{cases}$$

Its Fourier transform is

$$\hat{R}_c(\omega) = \int_{-1}^{1} \frac{1}{2} \left( 1 + \cos(\pi t) \right) e^{-j\omega t}\, dt$$
$$= \frac{1}{2} \int_{-1}^{1} \left( e^{-j\omega t} + \frac{1}{2} e^{-j(\omega - \pi)t} + \frac{1}{2} e^{-j(\omega + \pi)t} \right) dt$$
$$= \frac{1}{2} \frac{e^{j\omega} - e^{-j\omega}}{j\omega} + \frac{1}{4} \frac{e^{j(\omega - \pi)} - e^{-j(\omega - \pi)}}{j(\omega - \pi)} + \frac{1}{4} \frac{e^{j(\omega + \pi)} - e^{-j(\omega + \pi)}}{j(\omega + \pi)}$$
$$= \frac{\sin \omega}{\omega} + \frac{1}{2} \frac{\sin(\omega - \pi)}{\omega - \pi} + \frac{1}{2} \frac{\sin(\omega + \pi)}{\omega + \pi}$$
$$= \frac{\pi^2 \sin \omega}{(\pi - \omega)(\pi + \omega)\, \omega}$$

These are illustrated in Figure 6.
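The closed-form transforms of the pulse and triangle can be sanity-checked by direct numerical integration of the FT integral; the quadrature parameters below are illustrative choices:

```python
import math, cmath

def ft(f, w, a=-2.0, b=2.0, N=20000):
    # Midpoint-rule approximation to integral_a^b f(t) e^{-j w t} dt
    dt = (b - a) / N
    total = 0j
    for k in range(N):
        t = a + (k + 0.5) * dt
        total += f(t) * cmath.exp(-1j * w * t)
    return total * dt

def rect(t):
    # Unit pulse Pi(t)
    return 1.0 if abs(t) < 0.5 else 0.0

def tri(t):
    # Triangle Lambda(t) = (Pi * Pi)(t)
    return 1.0 - abs(t) if abs(t) < 1.0 else 0.0

def sinc(u):
    # sinc(u) = sin(u)/u, matching the notes' Pi_hat(w) = sinc(w/2)
    return 1.0 if u == 0 else math.sin(u) / u

w = 3.0
rect_hat = ft(rect, w)   # should approximate sinc(w/2)
tri_hat = ft(tri, w)     # should approximate sinc(w/2)^2
```

Both functions vanish outside $[-2, 2]$, so truncating the integral there introduces no error; the remaining discrepancy is pure quadrature error.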

Figure 6: Raised cosine, $x(t) = R_c(t)$, and its Fourier transform, $X(f) = \hat{x}(\omega)|_{\omega = 2\pi f}$.

5 Sampling and Interpolation

We have considered three different Fourier transforms:

Fourier Series: Maps periodic or finite support functions to an infinite sequence of discrete frequency components.

Discrete-time Fourier Transform: Maps discrete signals to a continuous frequency representation with finite support.

Fourier Transform: Maps continuous signals to a continuous frequency representation with infinite support.

The relationship between the last two is particularly important for our purposes, since we often need to deal with both discrete-time and continuous signals together. This relationship is embodied by the sampling theorem, which we develop in the following simple steps:

• Let $f(t)$ be a signal with Fourier transform, $\hat{f}(\omega)$.

• Let $x[n]$ be obtained by impulsively sampling $f(t)$ with a unit sampling interval, i.e.

$$x[n] = f(t)|_{t=n} \tag{6}$$

and let $\hat{x}(\omega)$ denote the DTFT of $x[n]$.

• Suppose that $f(t)$ is a bandlimited signal with

$$\hat{f}(\omega) = 0, \quad |\omega| \ge \pi \tag{7}$$

• Observe that in this case the inverse DTFT integral and the inverse FT integral, given in equations (4) and (5) respectively, are identical. Specifically, we see that

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{x}(\omega) e^{jn\omega} \cdot d\omega = x[n] = f(t)|_{t=n} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{f}(\omega) e^{jn\omega} \cdot d\omega$$

Figure 7: The Nyquist Cycle: Interpolation of a bandlimited continuous signal, $f(t)$, from its impulse samples, $x[n]$, based on the DTFT and FT relationships. (FT: $f(t) \leftrightarrow \hat{f}(\omega)$; DTFT: $x[n] \leftrightarrow \hat{x}(\omega)$; sample and interpolate on the time side, identify $\hat{f}(\omega) = \hat{x}(\omega)$ on the frequency side.)

so that $\hat{f}(\omega)$ and $\hat{x}(\omega)$ must be identical. Since knowledge of $\hat{x}(\omega)$ is equivalent to knowledge of $x[n]$ and knowledge of $\hat{f}(\omega)$ is equivalent to knowledge of $f(t)$, we conclude that the sampled sequence $x[n]$ captures all the information in the continuous signal $f(t)$, provided equation (7) is satisfied. More generally, an impulsively sampled sequence of the form $x[n] = f(t)|_{t=nT}$ captures all the information in the continuous signal $f(t)$, provided the sampling interval satisfies $T \le \pi/\omega_0$, where $\hat{f}(\omega)$ is bandlimited to the region $\omega \in (-\omega_0, \omega_0)$. This is commonly known as the Nyquist sampling limit.

• To reinforce the above observation, we show how the original continuous signal $f(t)$ may be recovered from $x[n]$. The above reasoning is compactly represented by the commutative diagram in Figure 7. Since we are able to identify $\hat{f}(\omega)$ with $\hat{x}(\omega)$, it must be possible to reconstruct $f(t)$ by first applying the forward DTFT to $x[n]$ and then applying the inverse FT to

the result. We obtain

$$f(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{f}(\omega) e^{j\omega t} \cdot d\omega$$
$$= \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{x}(\omega) e^{j\omega t} \cdot d\omega$$
$$= \frac{1}{2\pi} \int_{-\pi}^{\pi} \left( \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} \right) e^{j\omega t} \cdot d\omega$$
$$= \sum_{n=-\infty}^{\infty} x[n] \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{j\omega(t - n)} \cdot d\omega$$
$$= \sum_{n=-\infty}^{\infty} x[n] \operatorname{sinc}(t - n) = \sum_{n=-\infty}^{\infty} x[n] \operatorname{sinc}_n(t)$$

Thus, $f(t)$ may be obtained directly from $x[n]$, by so-called sinc interpolation. Specifically, we interpolate the samples by translating a sinc function to each sample location, weighting the translated sinc function by the relevant sample value, and summing the weighted, translated sinc functions.

The interpolation formula above provides a basis for the space of signals bandlimited to $\omega \in (-\pi, \pi)$. Specifically, any signal, $f \equiv f(t)$, in this space, may be represented as a linear combination of the signals, $\psi_n \equiv \operatorname{sinc}(t - n)$, with

$$f = \sum_{n=-\infty}^{\infty} x[n] \psi_n$$

In fact, it is not difficult to show that $\{\psi_n\}$ is an orthonormal basis, so the sampling relationship is an orthonormal expansion of $f(t)$ and we have another Parseval relationship:

$$\sum_{n=-\infty}^{\infty} |x[n]|^2 = \int_{-\infty}^{\infty} |f(t)|^2 \cdot dt$$

• The Nyquist cycle in Figure 7 does more than just connect discrete and continuous signals through sampling and interpolation formulae. It also imparts meaning to the frequency representation embodied by the DTFT. What is the meaning of a frequency of 1.0325 rad/s for discrete sequences of samples? The answer is that the DTFT is identical to the Fourier transform of the underlying continuous signal and derives all of its meaning from the continuous signal. When we measure frequency content in the DTFT domain, we are really looking at the frequencies of the underlying continuous signal. Similarly, when we filter a discrete sequence, we are really filtering the underlying continuous signal.

Figure 8: Aliasing contributions to the DTFT, $\hat{x}(\omega)$, of a sequence sampled below the Nyquist rate (spectral copies centred at multiples of $2\pi$).

•• As a nal note, we point out that signals are not generally exactly ban-
dlimited to any frequency range. For non-bandlimited signals, the equality
of fˆˆ($) and ˆx̂($) no longer holds. We nd more generally that
4
[
ˆx̂($) = fˆˆ ($  2k)
k=4

Thus, the spectrum of the discrete sequence is generally a sum of ““aliasing””


components, as shown in Figure 8
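Sinc interpolation can be demonstrated numerically. The sketch below samples an illustrative finite-energy signal that is bandlimited below $\pi$, then reconstructs intermediate values from its unit-spaced samples; the truncation of the interpolation sum to $|n| \le 500$ is an assumption of the sketch, not of the theory:

```python
import math

def sinc(t):
    # sinc(t) = sin(pi t) / (pi t), the interpolation kernel in the notes
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def f(t):
    # Illustrative finite-energy signal, bandlimited to |w| < 0.9*pi:
    # the square of a narrow sinc has twice its bandwidth, still below pi
    u = 0.45 * t
    s = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return s * s

# Unit-interval samples x[n] = f(n) for |n| <= 500 (truncated in practice)
N = 500
samples = {n: f(n) for n in range(-N, N + 1)}

def reconstruct(t):
    # Sinc interpolation: f(t) ~= sum_n x[n] sinc(t - n)
    return sum(xn * sinc(t - n) for n, xn in samples.items())
```

At integer $t$ the sum collapses to the sample itself ($\operatorname{sinc}$ vanishes at the other integers); at non-integer $t$ every sample contributes, which is why the kernel's slow $1/t$ decay makes practical interpolators use windowed approximations instead.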

6 Convolution and Time Invariance

Let $H$ be an LTI filter and $h \equiv h[n]$ its impulse response. We have seen that the operation $y = H(x)$ may be expressed in terms of $h$ as

$$y = \sum_n x[n] h_n$$

and this is the most natural form of the convolution equation when thinking of $h$ as the response to a unit impulse at $n = 0$. We have also seen that convolution may be expressed in terms of inner products as

$$y[n] = \left\langle x, \tilde{h}_n \right\rangle$$

This expression reveals the fact that convolution may be viewed as a sliding window, weighted averaging operation. The weights are contained in $\tilde{h}$, the mirror image of $h$, which slides to the right one position at a time, generating each consecutive output $y[n]$, as it goes.

Figure 9: Equivalent discrete and continuous processing systems. (Top path: $f(t)$ → Sampling → $x[n]$ → Discrete Filtering, $y[n] = \sum_k x[k] h[n-k]$, so $\hat{y}(\omega) = \hat{x}(\omega) \hat{h}(\omega)$ → Interpolation → $g(t)$. Bottom path: $f(t)$ → Continuous Filtering, $g(t) = \int f(\tau) q(t - \tau)\, d\tau$, so $\hat{g}(\omega) = \hat{f}(\omega) \hat{q}(\omega)$.) The equivalence holds so long as the sampling and interpolation are ideal and $\hat{q}(\omega)$ agrees with $\hat{h}(\omega)$ on $\omega \in (-\pi, \pi)$.

We may express the convolution operation in yet a third way, in terms of translated copies of the input sequence, $x$. Specifically, $y[n] = \sum_i h[i] x[n - i]$ may be expressed as

$$y = \sum_k h[k] x_k \tag{8}$$

This means that the output sequence is a linear combination of shifted copies of the input sequence.

This perspective provides perhaps the clearest explanation of why the Fourier transform is a useful tool for analysing, designing and even implementing LTI operators. In particular, the Fourier transform is essentially the only invertible linear transform under which shifts become simple scale factors. Specifically,

$$\hat{x}_k(\omega) = \hat{x}(\omega) \cdot \underbrace{e^{-j\omega k}}_{\text{scale factor}}$$

Using this property, together with equation (8) and linearity, we obtain

$$\hat{y}(\omega) = \sum_k h[k] \hat{x}(\omega) e^{-j\omega k} = \hat{x}(\omega) \sum_k h[k] e^{-j\omega k} = \hat{x}(\omega) \hat{h}(\omega)$$
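The convolution theorem just derived can be confirmed by evaluating both sides at an arbitrary frequency (the sequences and the test frequencies are illustrative):

```python
import cmath

def dtft(seq, w):
    # DTFT of a finite causal sequence: sum_n seq[n] e^{-j w n}
    return sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(seq))

def convolve(x, h):
    # y[n] = sum_i x[i] h[n - i]
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for k, hk in enumerate(h):
            y[i + k] += xi * hk
    return y

x = [1.0, 0.5, -0.25, 2.0]
h = [0.3, 0.7, 0.4]
y = convolve(x, h)

# Convolution theorem: y_hat(w) = x_hat(w) * h_hat(w) at every w
checks = [(dtft(y, w), dtft(x, w) * dtft(h, w)) for w in (0.0, 1.1, -2.4)]
```

This pointwise identity is what FFT-based fast convolution exploits: transform, multiply, and transform back.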

The same relationship may be derived for continuous-time signals. In fact, since the discrete and continuous Fourier transforms are identical (assuming Nyquist sampling), all operations which we perform on the discrete signal are equivalent to operations performed on the original continuous signal.
Figure 9 makes this fact concrete. Virtually all signal processing systems start with a continuous signal at the input and produce a continuous signal at the output, but most of the operations in between are performed digitally in the discrete domain. Provided we sample the continuous signal with a sampling interval no greater than the Nyquist limit and synthesize the continuous signal using sinc interpolation, or a suitable approximation, the end-to-end system will be linear and time invariant and we have full control over its properties through design of the discrete filters. A digital filter with DTFT $\hat{h}(\omega)$ has exactly the same effect as any continuous filter whose Fourier transform $\hat{q}(\omega)$ agrees with $\hat{h}(\omega)$ over the interval $\omega \in (-\pi, \pi)$. If $\hat{q}(\omega)$ happens to be Nyquist bandlimited also¹, then $\hat{h}(\omega)$ and $\hat{q}(\omega)$ are identical, so that $h[n]$ and $q(t)$ are related through sampling and interpolation, just as the signals themselves are.

¹ Unfortunately, this is never the case when we start with a prototype analog design (e.g., Butterworth, Chebyshev, ...) and try to emulate it digitally.
