CHAPTER 12
Convolution and Frequency Response
for LTI Systems
• An arbitrary signal x[.] can be written as a sum of scaled (or weighted) and shifted
unit sample functions:

x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k] .  (12.1)
• The response of an LTI system to an input that is the scaled and shifted combina-
tion of other inputs is the same scaled combination—or superposition—of the corre-
spondingly shifted responses to these other inputs.
Since the response at time n to the input signal δ[n] is h[n], it follows from the two
observations above that the response at time n to the input x[.] is, see Slide 12.2,

y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] .  (12.2)
Example 1 Suppose h[n] = (0.5)^n u[n], where u[n] denotes the unit step function defined
previously (taking the value 1 where its argument n is non-negative, and the value 0 when
the argument is strictly negative). If x[n] = 3δ[n] − δ[n − 1], then

y[n] = 3h[n] − h[n − 1] = 3(0.5)^n u[n] − (0.5)^{n−1} u[n − 1] .

From this we deduce, for instance, that y[n] = 0 for n < 0, and y[0] = 3, y[1] = 0.5, y[2] =
(0.5)^2, and in fact y[n] = (0.5)^n for all n > 0.
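As a quick numerical check of Example 1 (a Python sketch of ours, not part of the original text), we can superpose the scaled and shifted copies of the unit sample response directly:

```python
# Verify Example 1: h[n] = (0.5)^n u[n], x[n] = 3*delta[n] - delta[n-1].
# By superposition, the output is y[n] = 3*h[n] - h[n-1].

def h(n):
    """Unit sample response: (0.5)^n for n >= 0, else 0."""
    return 0.5 ** n if n >= 0 else 0.0

def y(n):
    """Response to x[n] = 3*delta[n] - delta[n-1], by superposition."""
    return 3 * h(n) - h(n - 1)

print([y(n) for n in range(-2, 4)])  # -> [0.0, 0.0, 3.0, 0.5, 0.25, 0.125]
```

The printed values match the chapter's deductions: zero before the input starts, y[0] = 3, and (0.5)^n thereafter.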
The above example illustrates that if h[n] = 0 for n < 0, then the system output cannot
take nonzero values before the input takes nonzero values. Conversely, if the output never
takes nonzero values before the input does, then it must be the case that h[n] = 0 for n < 0.
In other words, this condition is necessary and sufficient for causality of the system.
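The causality condition can also be seen numerically. The following sketch (our own illustration, with made-up signal values) convolves a delayed input with a causal h and confirms that the output's first nonzero sample does not precede the input's:

```python
# Illustration: with a causal h (h[n] = 0 for n < 0), the output cannot
# lead the input.

def convolve(x, h):
    """Full convolution of two finite lists (both starting at n = 0)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

h = [1.0, 0.5, 0.25]          # causal: nonzero only for n = 0, 1, 2
x = [0.0, 0.0, 0.0, 1.0]      # input first nonzero at n = 3
y = convolve(x, h)
first_nonzero = next(n for n, v in enumerate(y) if v != 0)
print(first_nonzero)  # -> 3: output starts no earlier than the input
```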
Example 2 (Scale-&-Delay System) Consider the system S in Slide 12.3 that scales its
DT input by A and delays it by D > 0 units of time (or, if D is negative, advances it by
− D). This system is linear and time-invariant (as is seen quite directly by applying the
definitions from Chapter 10). It is therefore characterized by its unit sample response,
which is
h[n] = Aδ [n − D] .
We already know from the definition of the system that if the input at time n is x[n], the
output is y[n] = Ax[n − D], but let us check that the general expression in Equation (12.2)
gives us the same answer:
y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = ∑_{k=−∞}^{∞} x[k] A δ[n − k − D] .
As the summation runs over k, we look for the unique value of k where the argument of
the unit sample function goes to zero, because this is the only value of k for which the unit
sample function is nonzero (and in fact equal to 1). Thus k = n − D, so y[n] = Ax[n − D],
as expected.
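The same check can be run numerically. In this sketch (the particular values of A, D, and the test input are ours, chosen for illustration), we evaluate the convolution sum and compare it with the direct scale-and-delay description:

```python
# Example 2 check: h[n] = A*delta[n - D] should give y[n] = A*x[n - D].
A, D = 2.0, 3  # illustrative values

def delta(n):
    return 1.0 if n == 0 else 0.0

def x(n):
    return float(n) if 0 <= n <= 4 else 0.0  # arbitrary finite test input

def y(n):
    # Convolution sum: only the term with k = n - D contributes, since
    # delta[n - k - D] is nonzero only there.
    return sum(x(k) * A * delta(n - k - D) for k in range(-10, 20))

assert all(y(n) == A * x(n - D) for n in range(-5, 15))
print("y[n] = A*x[n - D] confirmed")
```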
An input signal x[n] to this system gets scaled and delayed by each of these terms, with the
results added to form the output. This way of looking at the LTI system response yields
the expression

y[n] = ∑_{m=−∞}^{∞} h[m] x[n − m] .  (12.4)

Evaluating the convolution sum from Equation (12.2) at a particular time n0 gives

y[n0] = ∑_{k=−∞}^{∞} x[k] h[n0 − k] .  (12.5)

We’ve written n0 rather than the n we used before just to emphasize that this computation
involves summing over the dummy index k, with n0 being just a parameter, fixed
throughout the computation.
We first plot the time functions x[k] and h[k] on the k axis (with k increasing to the
right, as usual!).¹ How do we get h[n0 − k] from this? First note that h[−k] is obtained by
reversing h[k] in time, i.e., a flip of the function across the time origin. To get h[n0 − k],
we now slide this reversed time function, h[−k], to the right by n0 steps if n0 ≥ 0, or to the
left by −n0 steps if n0 < 0. To confirm that this prescription is correct, note that h[n0 − k]
should take the value h[0] at k = n0 .
With these two steps done, all that remains is to compute the sum in Equation (12.5).
This sum takes the same form as the familiar dot product of two vectors, one of which has
x[k] as its kth component, and the other of which has h[n0 − k] as its kth component. The
only twist here is that the vectors could be infinitely long. So what this step boils down
to is taking an instant-by-instant product of the time function x[k] and the time function
h[n0 − k] that your preparatory “flip and slide” step has produced, then summing all the
products.
At the end of all this (and it perhaps sounds more elaborate than it is, till you get a
little practice), what you have computed is the value of the convolution for the single value
n0 . To compute the convolution for another value of the argument, say n1 , you repeat the
process, but sliding by n1 instead of n0 .
¹Does the time axis go from right to left when this material is taught in languages that write from right to
left?
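The flip-and-slide recipe translates directly into code. Here is a short Python sketch of ours (the function names are not from the text): flip h, slide it by n0, and take the dot product with x.

```python
# "Flip and slide": to get y[n0], evaluate sum_k x[k] * h[n0 - k].

def conv_at(x, h, n0, kmin=-20, kmax=20):
    """Evaluate (x * h)[n0] = sum_k x[k] * h[n0 - k].
    x and h are functions of an integer argument; the sum is truncated
    to k in [kmin, kmax], which must cover the support of x."""
    return sum(x(k) * h(n0 - k) for k in range(kmin, kmax + 1))

# Example 1's signals again:
h = lambda n: 0.5 ** n if n >= 0 else 0.0
x = lambda n: {0: 3.0, 1: -1.0}.get(n, 0.0)

print([conv_at(x, h, n0) for n0 in range(0, 4)])  # -> [3.0, 0.5, 0.25, 0.125]
```

To compute the convolution at another time n1, the same function is simply called with n1 in place of n0, exactly as the text describes.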
To implement the computation in Equation (12.4), you do the same thing, except that
now h[m] stays as it is, while x[m] gets flipped and slid by n to produce x[n − m], after
which you take the dot product. Either way, the result is evidently the same.
Example 1 revisited Suppose again that h[m] = (0.5)^m u[m] and x[m] = 3δ[m] − δ[m − 1].
Then
x[−m] = −δ [−m − 1] + 3δ [−m] ,
which is nonzero only at m = −1 and m = 0. (Sketch this!) As a consequence, sliding x[−m]
to the left, to get x[n − m] when n < 0, will mean that the nonzero values of x[n − m] have
no overlap with the nonzero values of h[m], so the dot product will yield 0. This establishes
that y[n] = (x ∗ h)[n] = 0 for n < 0, in this example.
For n = 0, the only overlap of nonzero values in h[m] and x[n − m] is at m = 0, and we
get the dot product to be (0.5)0 × 3 = 3, so y[0] = 3.
For n > 0, the only overlap of nonzero values in h[m] and x[n − m] is at m = n − 1 and
m = n, and the dot product evaluates to

(0.5)^{n−1} · (−1) + (0.5)^n · 3 = (0.5)^n (−2 + 3) = (0.5)^n .
So we have completely recovered the answer we obtained in Example 1. For this example,
our earlier approach—which involved directly thinking about superposition of scaled and
shifted unit sample responses—was at least as easy as the graphical approach here, but in
other situations the graphical construction can yield more rapid or direct insights.
12.1.2 Deconvolution
We’ve seen in the previous chapter, specifically in Slides 11.12–11.24, how having an LTI
model for a channel allows us to predict or analyze the distorted output y[n] of the channel,
in response to a superposition of alternating positive and negative steps at the input x[n],
corresponding to a rectangular-wave baseband signal. That analysis was carried out in
terms of the unit step response, s[n], of the channel.
We now briefly explore one plausible approach to undoing the distortion of the channel,
assuming we have a good LTI model of the channel. This discussion is most naturally
phrased in terms of the unit sample response of the channel rather than the unit step re-
sponse. The idea is to process the received baseband signal y[n] through an LTI system, or
LTI filter, that is designed to cancel the effect of the channel.
Consider, as in the example of Slide 12.7, a channel that we model as LTI with unit
sample response
h1 [n] = δ [n] + 0.8δ [n − 1] .
This is evidently a causal model, and we might think of the channel as one that transmits
perfectly and instantaneously along some direct path, and also with a one-step delay and
some attenuation along some echo path.
Suppose our receiver filter is to be designed as a causal LTI system with unit sample
response
h2 [n] = h2 [0]δ [n] + h2 [1]δ [n − 1] + · · · + h2 [k]δ [n − k] + · · · . (12.6)
Its input is y[n], and let us label its output as z[n]. What conditions must h2 [n] satisfy
if we are to ensure that z[n] = x[n] for all inputs x[n], i.e., if we are to undo the channel
distortion?
An obvious place to start is with the case where x[n] = δ [n]. If x[n] is the unit sample
function, then y[n] is the unit sample response of the channel, namely h1 [n], and z[n] will
then be given by z[n] = (h2 ∗ h1 )[n]. In order to have this be the input that went in, namely
x[n] = δ [n], we need
(h2 ∗ h1 )[n] = δ [n] . (12.7)
And if we satisfy this condition, then we will actually have z[n] = x[n] for arbitrary x[n],
because
z = h2 ∗ (h1 ∗ x) = (h2 ∗ h1 ) ∗ x = δ0 ∗ x = x ,
where δ0 [.] is our alternative notation for the unit sample function δ [n]. The last equality
above is a consequence of the fact that convolving any signal with the unit sample function
yields that signal back again; this is in fact what Equation (12.1) expresses.
The condition in Equation (12.7) ensures that the convolution carried out by the channel
is inverted or undone, in some sense, by the filter. We might say that the filter deconvolves
the output of the system to get the input (but keep in mind that it does this by a further
convolution!). In view of Equation (12.7), the function h2 [.] is also termed the convolutional
inverse of h1 [.], and vice versa.
So how do we find h2 [n] to satisfy Equation (12.7)? It’s not by a simple division of any
kind (though when we get to doing our analysis in the frequency domain shortly, it will
indeed be as simple as division). However, applying the “flip–slide–dot product” mantra
for computing a convolution, we find the following equations for the unknown coefficients
h2 [k]:
1 · h2 [0] = 1
0.8 · h2 [0] + 1 · h2 [1] = 0
0.8 · h2 [1] + 1 · h2 [2] = 0
...
0.8 · h2 [k − 1] + 1 · h2 [k] = 0
... ,
from which we get h2[0] = 1, h2[1] = −0.8, h2[2] = −0.8 h2[1] = (−0.8)^2, and in general
h2[k] = (−0.8)^k u[k].
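The recursion above is easy to carry out in code. This sketch (ours, truncating the infinite h2 to N samples) builds the coefficients and checks that the convolution with h1 reproduces the unit sample:

```python
# Solving Equation (12.7) for this channel: (h2 * h1)[n] = delta[n] forces
# h2[0] = 1 and h2[k] = -0.8 * h2[k-1] for k >= 1.

N = 20
h1 = [1.0, 0.8]
h2 = [1.0]
for k in range(1, N):
    h2.append(-0.8 * h2[k - 1])

assert all(abs(h2[k] - (-0.8) ** k) < 1e-12 for k in range(N))

# Check that (h2 * h1)[n] is the unit sample for n < N:
conv = [0.0] * (N + 1)
for i, a in enumerate(h2):
    for j, b in enumerate(h1):
        conv[i + j] += a * b
print(conv[:4])  # -> [1.0, 0.0, 0.0, 0.0] (up to roundoff)
```

The only nonzero residue is at index N, an artifact of truncating h2 to N terms.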
Deconvolution as above would work fine if our channel model were accurate, and if
there were no noise in the channel. Even assuming the model is sufficiently accurate, note
that any noise process w[.] that adds in at the output of the channel will end up adding
v[n] = (h2 ∗ w)[n] to the noise-free output, which is z[n] = x[n]. This added noise can
completely overwhelm the solution. For instance, if both x[n] and w[n] are unit samples,
then the output of the receiver’s deconvolution filter has a noise-free component of δ[n]
and an additive noise component of (−0.8)^n u[n] that dwarfs the noise-free part. After
we’ve understood how to think about LTI systems in the frequency domain, it will become
much clearer why such deconvolution is so sensitive to noise.
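The noise sensitivity just described can be made concrete with a small sketch (ours): with x[n] = δ[n] and channel noise w[n] = δ[n], the filter output is δ[n] plus (h2 ∗ w)[n] = (−0.8)^n u[n], and the noise term carries more energy than the one-sample signal.

```python
# Noise sensitivity of the deconvolving filter h2[n] = (-0.8)^n u[n].

def h2(n):
    return (-0.8) ** n if n >= 0 else 0.0

signal_part = [1.0 if n == 0 else 0.0 for n in range(10)]
noise_part = [h2(n) for n in range(10)]  # (h2 * delta)[n] = h2[n]

# The signal occupies a single sample, while the noise term rings on:
noise_energy = sum(v * v for v in noise_part)
print(round(noise_energy, 3))  # exceeds the signal energy of 1
```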
12.2 Sinusoidal Inputs and Frequency Response

A signal x[n] is periodic if

x[n + P] = x[n] for all n,

where P is some fixed positive integer. The smallest positive integer P for which this
condition holds is referred to as the period of the signal (though the term is also used at
times for positive integer multiples of P), and the signal is called P-periodic.
While it may not be obvious that sinusoidal inputs to LTI systems give rise to sinusoidal
outputs, it’s not hard to see that periodic inputs to LTI systems give rise to periodic outputs
of the same period (or an integral fraction of the input period). The reason is that if the
P-periodic input x[.] produces the output y[.], then time-invariance of the system means
that shifting the input by P will shift the output by P. But shifting the input by P leaves the
input unchanged, because it is P-periodic, and therefore must leave the output unchanged,
which means the output must be P-periodic. (This argument actually leaves open the
possibility that the period of the output is P/K for some integer K > 1, rather than P itself,
but in any case we will have y[n + P] = y[n] for all n.)
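The claim is easy to verify numerically. In this sketch (the pattern and the FIR unit sample response are arbitrary choices of ours), a P-periodic input is convolved with a finite h, and the output is checked to be P-periodic once the start-up transient is excluded:

```python
# A P-periodic input to an LTI (FIR) system yields a P-periodic output.

P = 4
pattern = [1.0, -1.0, 2.0, 0.0]
x = pattern * 10               # P-periodic input, 40 samples
h = [0.5, 0.3, -0.2]           # arbitrary FIR unit sample response

y = [sum(h[m] * x[n - m] for m in range(len(h)) if 0 <= n - m < len(x))
     for n in range(len(x))]

# Away from the start-up transient, y[n + P] == y[n]:
assert all(abs(y[n + P] - y[n]) < 1e-12 for n in range(len(h), len(x) - P))
print("output is P-periodic in steady state")
```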
Sampling a continuous-time sinusoid cos(ω0 t + θ0) at the times t = nT, and writing

cos(Ω0 n + θ0) = cos(ω0 nT + θ0) ,

then yields the relation Ω0 = ω0 T (with the constraint |ω0| ≤ π/T, to reflect |Ω0| ≤ π). It is
now natural to think of 2π/(ω0 T) = 2π/Ω0 as the period of the DT sinusoid, measured in
samples. However, 2π/Ω0 may not be an integer!
Nevertheless, if 2π/Ω0 = P/ Q for some integers P and Q, i.e., if 2π/Ω0 is rational, then
indeed x[n + P] = x[n] for the signal in Equation (12.8), as you can verify quite easily. On
the other hand, if 2π/Ω0 is irrational, see Slide 12.12, the DT sequence in Equation (12.8)
will not actually be periodic: there will be no integer P such that x[n + P] = x[n] for all n.
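Both cases can be checked numerically. In the sketch below (the particular frequencies and phase are illustrative choices of ours), a rational 2π/Ω0 yields an exactly repeating sequence, while an irrational one never repeats for any small P:

```python
# Periodicity of x[n] = cos(Omega0*n + theta0) for rational vs irrational 2*pi/Omega0.
import math

def x(n, Omega0, theta0=0.3):
    return math.cos(Omega0 * n + theta0)

# Rational case: 2*pi/Omega0 = P/Q with P = 10, Q = 3, so x[n + 10] = x[n].
Omega0 = 2 * math.pi * 3 / 10
assert all(abs(x(n + 10, Omega0) - x(n, Omega0)) < 1e-9 for n in range(50))

# Irrational case: Omega0 = 1 makes 2*pi/Omega0 irrational; no small P works.
assert all(any(abs(x(n + P, 1.0) - x(n, 1.0)) > 1e-6 for n in range(50))
           for P in range(1, 30))
print("rational ratio -> periodic; irrational ratio -> no period found")
```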
With all that said, it turns out that the response of an LTI system to a sinusoid of the
form in Equation (12.8) is a sinusoid of the same (angular) frequency Ω0 , whether or not
the sinusoid is periodic. The easiest way to demonstrate this fact is to rewrite sinusoids in
terms of complex exponentials.
Recall Euler’s identity, e^{jφ} = cos φ + j sin φ, which allows a complex number c to be
written in polar form as c = |c| e^{j∠c}, where |c| is its magnitude and ∠c its angle. For a
product of two complex numbers,

c1 c2 = |c1| e^{j∠c1} · |c2| e^{j∠c2} = |c1||c2| e^{j(∠c1 + ∠c2)} ,
i.e., the magnitude of the product is the product of the individual magnitudes, and the
angle of the product is the sum of the individual angles. It also follows that the inverse of a
complex number c has magnitude 1/|c| and angle −∠c.
Several other identities follow from Euler’s identity above. Most importantly,
cos φ = (1/2) (e^{jφ} + e^{−jφ}) ,   sin φ = (1/(2j)) (e^{jφ} − e^{−jφ}) = (j/2) (e^{−jφ} − e^{jφ}) .  (12.10)
Also, writing

e^{jA} e^{jB} = e^{j(A+B)} ,

and then using Euler’s identity to rewrite all three of these complex exponentials, and
finally multiplying out the left hand side, generates various useful identities, of which we
only list two:

cos(A) cos(B) = (1/2) [cos(A + B) + cos(A − B)] ;
cos(A ∓ B) = cos(A) cos(B) ± sin(A) sin(B) .  (12.11)
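These identities are easy to spot-check numerically. A short Python sketch (ours, using a few arbitrary angles):

```python
# Numerical spot-check of Euler's identity and the identities in
# Equations (12.10) and (12.11).
import cmath, math

for phi in [0.1, 1.0, 2.5]:
    # Euler: e^{j*phi} = cos(phi) + j*sin(phi)
    assert abs(cmath.exp(1j * phi) - (math.cos(phi) + 1j * math.sin(phi))) < 1e-12
    # cos from exponentials, Equation (12.10):
    assert abs(math.cos(phi)
               - (cmath.exp(1j * phi) + cmath.exp(-1j * phi)).real / 2) < 1e-12

for A, B in [(0.4, 1.1), (2.0, -0.7)]:
    # cos(A)cos(B) = (1/2)[cos(A+B) + cos(A-B)]
    assert abs(math.cos(A) * math.cos(B)
               - 0.5 * (math.cos(A + B) + math.cos(A - B))) < 1e-12
    # cos(A - B) = cos(A)cos(B) + sin(A)sin(B)
    assert abs(math.cos(A - B)
               - (math.cos(A) * math.cos(B) + math.sin(A) * math.sin(B))) < 1e-12
print("identities check out")
```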
Define, in terms of the quantities C(Ω) = ∑_m h[m] cos(Ωm) and S(Ω) = ∑_m h[m] sin(Ωm)
from Equation (12.13), the function

H(jΩ) = C(Ω) − j S(Ω) ,  (12.14)

which we will call the frequency response of the system, for a reason that will emerge
immediately below. Then the result in Equation (12.12) can be rewritten, using the second
identity in Equation (12.11), as

y[n] = |H(jΩ0)| cos(Ω0 n + θ0 + ∠H(jΩ0)) .  (12.15)
The result in Equation (12.15) is fundamental and important! It states that the entire effect
of an LTI system on a sinusoidal input at frequency Ω0 can be deduced from the (com-
plex) frequency response evaluated at the frequency Ω0 . The amplitude or magnitude of
the sinusoidal input gets scaled by the magnitude of the frequency response at the input
frequency, and the phase gets augmented by the angle or phase of the frequency response
at this frequency.
Now consider the same calculation with complex exponentials, Slide 12.16. Suppose the
input is the everlasting exponential

x[n] = e^{jΩ0 n} ,  (12.16)

defined for all n, positive and negative. The convolution sum in Equation (12.4) then gives

y[n] = ∑_{m=−∞}^{∞} h[m] e^{jΩ0 (n−m)} = e^{jΩ0 n} ∑_{m=−∞}^{∞} h[m] e^{−jΩ0 m} .  (12.17)
Thus the output of the system, when the input is the (everlasting) exponential in Equation
(12.16), is the same exponential, except multiplied by the following quantity evaluated at
Ω = Ω0 :
∑_{m=−∞}^{∞} h[m] e^{−jΩm} = C(Ω) − j S(Ω) = H(jΩ) .  (12.18)
The first equality above comes from using Euler’s equality to write e− jΩm = cos(Ωm) −
j sin(Ωm), and then using the definitions in Equation (12.13). The second equality is simply
the result of recognizing the frequency response from the definition in Equation (12.14).
To now determine what happens to a sinusoidal input of the form in Equation (12.8), use
Equation (12.10) to rewrite it as
A0 cos(Ω0 n + θ0) = (A0/2) ( e^{j(Ω0 n + θ0)} + e^{−j(Ω0 n + θ0)} ) ,
and then superpose the responses to the individual exponentials, using the result in Equa-
tion (12.17). The result (after algebraic simplification) will again be the expression in Equa-
tion (12.15), except scaled now by an additional A0 , because we scaled our input by this
additional factor in the current derivation (just to kick things up one step in generality).
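Pulling these pieces together, the sketch below (ours, reusing the echo channel h[n] = δ[n] + 0.8δ[n − 1] from the deconvolution example, with illustrative values of Ω0, A0, θ0) computes H(jΩ0) and confirms that a sinusoidal input emerges scaled by |H(jΩ0)| with its phase shifted by ∠H(jΩ0):

```python
# Frequency response of h[n] = delta[n] + 0.8*delta[n-1], and the response
# to a sinusoidal input A0*cos(Omega0*n + theta0).
import cmath, math

h = {0: 1.0, 1: 0.8}
Omega0, A0, theta0 = 0.9, 2.0, 0.4   # illustrative values

# H(j*Omega0) = sum_m h[m] * e^{-j*Omega0*m}
H = sum(hm * cmath.exp(-1j * Omega0 * m) for m, hm in h.items())

def x(n):
    return A0 * math.cos(Omega0 * n + theta0)

def y(n):  # convolution sum; finite because h has only two nonzero samples
    return sum(hm * x(n - m) for m, hm in h.items())

def y_pred(n):  # prediction of Equation (12.15), scaled by A0
    return A0 * abs(H) * math.cos(Omega0 * n + theta0 + cmath.phase(H))

assert all(abs(y(n) - y_pred(n)) < 1e-9 for n in range(-20, 20))
print("sinusoid in -> scaled, phase-shifted sinusoid out")
```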