Course Notes
These notes very closely follow the book: Signals and Systems, 2nd edition, by
Alan V. Oppenheim, Alan S. Willsky with S. Hamid Nawab. Parts of the notes
are also drawn from
Shreyas Sundaram
Purdue University
Contents

1 Introduction
 1.1 Signals and Systems
 1.2 Outline of This Course
5.3.3 Conjugation
5.3.4 Differentiation
5.3.5 Time and Frequency Scaling
5.3.6 Duality
5.3.7 Parseval's Theorem
5.3.8 Convolution
5.3.9 Multiplication
7 Sampling
 7.1 The Sampling Theorem
 7.2 Reconstruction of a Signal From Its Samples
  7.2.1 Zero-Order Hold
  7.2.2 First-Order Hold
 7.3 Undersampling and Aliasing
 7.4 Discrete-Time Processing of Continuous-Time Signals
Chapter 1

Introduction

[Figure: a block diagram of a system, mapping an input signal to an output signal.]
Note that we use square brackets to denote discrete-time signals, and round
brackets to denote continuous-time signals. Examples of continuous-time sig-
nals often include physical quantities, such as electrical currents, atmospheric
concentrations and phenomena, vehicle movements, etc. Examples of discrete-
time signals include the closing prices of stocks at the end of each day, population
demographics as measured by census studies, and the sequence of frames in a
digital video. One can obtain discrete-time signals by sampling continuous-time
signals (i.e., by selecting only the values of the continuous-time signal at certain
intervals).
Just as with signals, we can consider continuous-time systems and discrete-
time systems. Examples of the former include atmospheric, physical, electrical
and biological systems, where the quantities of interest change continuously over
time. Examples of discrete-time systems include communication and computing
systems, where transmissions or operations are performed in scheduled time-
slots. With the advent of ubiquitous sensors and computing technology, the
last few decades have seen a move towards hybrid systems consisting of both
continuous-time and discrete-time subsystems – for example, digital controllers
and actuators interacting with physical processes and infrastructure. We will
not delve into such hybrid systems in this course, but will instead focus on systems that lie entirely in either the continuous-time or the discrete-time domain.
The term dynamical system loosely refers to any system that has an internal
state and some dynamics (i.e., a rule specifying how the state evolves in time).
This description applies to a very large class of systems, including individual vehicles, biological, economic and social systems, industrial manufacturing plants, the electrical power grid, the state of a computer system, etc. The presence of dynamics implies that the behavior of the system cannot be entirely arbitrary; the temporal behavior of the system's state and outputs can be predicted to some extent by an appropriate model of the system.
Example 1.1. Consider a simple model of a car in motion. Let the speed of
the car at any time t be given by v(t). One of the inputs to the system is the
acceleration a(t), applied by the throttle. From basic physics, the evolution of
the speed is given by
$$\frac{dv}{dt} = a(t). \tag{1.1}$$
The quantity v(t) is the state of the system, and equation (1.1) specifies the
dynamics. There is a speedometer on the car, which is a sensor that measures
the speed. The value provided by the sensor is denoted by s(t) = v(t), and this
is taken to be the output of the system.
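To make the dynamics concrete, here is a minimal simulation sketch (not part of the original notes) that integrates equation (1.1) with a forward-Euler step; the step size and acceleration profile are arbitrary illustrative choices.

```python
import numpy as np

# Forward-Euler integration of dv/dt = a(t) for the car example.
# The acceleration profile below is an arbitrary illustrative choice.
dt = 0.01                        # integration step (seconds)
t = np.arange(0.0, 10.0, dt)     # time grid
a = np.where(t < 5.0, 2.0, 0.0)  # accelerate at 2 m/s^2 for 5 s, then coast

v = np.zeros_like(t)             # state: speed v(t), starting at rest
for k in range(len(t) - 1):
    v[k + 1] = v[k] + dt * a[k]  # v(t + dt) ≈ v(t) + dt * a(t)

s = v                            # sensor output s(t) = v(t)
print(f"speed at t = 10 s: {s[-1]:.2f} m/s")  # ≈ 10.0 m/s
```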
Much of scientific and engineering endeavor relies on gathering, manipulating
and understanding signals and systems across various domains. For example,
in communication systems, the signal represents voice or data that must be
transmitted from one location to another. These information signals are often
corrupted en route by other noise signals, and thus the received signal must
be processed in order to recover the original transmission. Similarly, social,
physical and economic signals are of great value in trying to predict the current
and future state of the underlying systems. The field of signal processing studies
how to take given signals and extract desirable features from them, often via
the design of systems known as filters. The field of control systems focuses
on designing certain systems (known as controllers) that measure the signals
coming from a given system and apply other input signals in order to make
the given system behave in a desirable manner. Typically, this is done via a
feedback loop of the form
[Figure: a feedback loop — the desired output is compared against the sensor measurement of the actual output; their difference drives the controller, which applies the control input to the system.]
• Time domain analysis of LTI systems: understand how the output of linear
time-invariant systems is related to the input
• Frequency domain analysis techniques and signal transformations (Fourier, Laplace, z-transforms): methods to analyze signals and systems from a frequency-domain perspective, gaining new ways to understand their behavior
• Sampling and Quantization: study ways to convert continuous-time sig-
nals into discrete-time signals, along with associated challenges
The material in this course will lay the foundations for future courses in control
theory (ECE 382, ECE 483), communication systems (ECE 440) and signal
processing (ECE 438, ECE 445).
Chapter 2

Properties of Signals and Systems
We will now identify certain useful properties and classes of signals and systems.
Recall that a continuous-time signal is denoted by f (t) (i.e., a function of the
real-valued variable t) and a discrete-time signal is denoted by f [n] (i.e., a
function of the integer-valued variable n). When drawing discrete-time signals,
we will use a sequence of dots to indicate the discrete nature of the time variable.
We will find it useful to discuss the energy and average power of any continuous-time or discrete-time signal. In particular, the energy of a general (potentially complex-valued) continuous-time signal $f(t)$ over a time-interval $[t_1, t_2]$ is defined as
$$E_{[t_1,t_2]} \triangleq \int_{t_1}^{t_2} |f(t)|^2\,dt,$$
where $|f(t)|$ denotes the magnitude of the signal at time $t$.
Similarly, the energy of a general (potentially complex-valued) discrete-time signal $f[n]$ over a time-interval $[n_1, n_2]$ is defined as
$$E_{[n_1,n_2]} \triangleq \sum_{n=n_1}^{n_2} |f[n]|^2.$$
Note that we are defining the energy of an arbitrary signal in the above way;
this will end up being a convenient way to measure the “size” of a signal, and
may not actually correspond to any physical notion of energy.
We will also often be interested in measuring the energy of a given signal over all time. In this case, we define
$$E_\infty \triangleq \int_{-\infty}^{\infty} |f(t)|^2\,dt$$
for continuous-time signals, and
$$E_\infty \triangleq \sum_{n=-\infty}^{\infty} |f[n]|^2$$
for discrete-time signals. Note that the quantity $E_\infty$ may not be finite.
Similarly, we define the average power of a continuous-time signal as
$$P_\infty \triangleq \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} |f(t)|^2\,dt,$$
with an analogous definition for discrete-time signals.
Based on the above definitions, we have three classes of signals: finite energy
(E∞ < ∞), finite average power (P∞ < ∞), and those that have neither finite
energy nor average power. An example of the first class is the signal f (t) = e−t
for t ≥ 0 and f (t) = 0 for t < 0. An example of the second class is f (t) = 1 for
all t ∈ R. An example of the third class is f (t) = t for t ≥ 0. Note that any
signal that has finite energy will also have finite average power, since
$$P_\infty = \lim_{T\to\infty} \frac{E_\infty}{2T} = 0$$
for continuous-time signals with finite energy, with an analogous characteriza-
tion for discrete-time signals.
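The following sketch (not from the notes) approximates these definitions numerically for the three example signals; the sample spacing dt, the horizon, and the one-sided power approximation are illustrative assumptions.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 100.0, dt)

def energy(f, dt):
    """Approximate E = integral of |f(t)|^2 dt by a Riemann sum."""
    return np.sum(np.abs(f) ** 2) * dt

def avg_power(f, dt, T):
    """One-sided approximation (1/T) * integral over [0, T] of |f|^2;
    adequate for these examples of the two-sided definition."""
    return energy(f, dt) / T

f1 = np.exp(-t)          # finite energy: E ≈ 1/2
f2 = np.ones_like(t)     # finite average power: P = 1
f3 = t                   # neither: power grows like T^2/3 as T grows

print(energy(f1, dt))            # ≈ 0.5
print(avg_power(f2, dt, t[-1]))  # ≈ 1.0
print(avg_power(f3, dt, t[-1]))  # large, and unbounded as T → ∞
```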
be the energy of the signal over one period. The average power over that period is then $P_p = \frac{E_p}{T_0}$, and since this extends over all time, this ends up being the average power of the signal as well. For example, for the signal $f(t) = e^{j(\omega_0 t + \phi)}$, we have
$$P_\infty = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}|f(t)|^2\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} 1\,dt = 1.$$
$$\phi_k(t) = e^{jk\omega_0 t}, \quad k \in \mathbb{Z}.$$
Although the signal f (t) given above is complex-valued in general, its real part
and imaginary part are sinusoidal. To see this, use Euler’s formula to obtain
$$\omega_0 n_2 = \omega_0 n_1 + 2\pi k \qquad\text{or}\qquad \omega_0 n_2 = -\omega_0 n_1 + 2\pi k$$
for some positive integer k. In either case, we see that ω0 has to be a rational
multiple of 2π. In fact, when ω0 is not a rational multiple of 2π, the function
cos(ω0 n) never takes the same value twice for positive values of n.
The second difference from continuous-time complex exponentials pertains to
the period of oscillation. Specifically, even for periodic discrete-time complex
exponentials, increasing the frequency does not necessarily make the period
smaller. Consider the signal $g[n] = e^{j(\omega_0+2\pi)n}$, i.e., a complex exponential with frequency $\omega_0 + 2\pi$. We have
$$g[n] = e^{j\omega_0 n}e^{j2\pi n} = e^{j\omega_0 n},$$
i.e., exactly the same signal as the complex exponential with frequency $\omega_0$.
Since ejω0 n = cos(ω0 n) + j sin(ω0 n), the frequency of oscillation of the discrete-
time complex exponential with frequency ω0 is the same as the frequency of
oscillation of the discrete-time complex exponential with frequency 2π − ω0 .
Thus, as ω0 crosses π and moves towards 2π, the frequency of oscillation starts
to decrease.
To illustrate this, it is again instructive to consider the sinusoidal signals $f[n] = \cos(\omega_0 n)$ for $\omega_0 \in \{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\}$. When $\omega_0 = 0$, the function is simply constant at 1 (and thus its period is undefined). We see that the functions with $\omega_0 = \frac{\pi}{2}$ and $\omega_0 = \frac{3\pi}{2}$ have the same period (in fact, they are exactly the same function).²
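A quick numerical check of this frequency-aliasing fact (an illustrative snippet, not from the notes):

```python
import numpy as np

# Check that cos((pi/2) n) and cos((3*pi/2) n) are the same discrete-time
# signal, even though the second "frequency" is numerically larger.
n = np.arange(20)
f_low = np.cos((np.pi / 2) * n)
f_high = np.cos((3 * np.pi / 2) * n)
print(np.allclose(f_low, f_high))  # True: cos(w0*n) = cos((2*pi - w0)*n)
```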
The following table shows the differences between continuous-time and discrete-
time signals.
¹ As we will see later in the course, the signals $\cos(\omega_0 n)$ and $\sin(\omega_0 n)$ correspond to continuous-time signals of the form $\cos(\omega_0 t)$ and $\sin(\omega_0 t)$ that are sampled at 1 Hz. When $0 \le \omega_0 < \pi$, this sampling rate is above the Nyquist frequency $\frac{\omega_0}{\pi}$, and thus the sampled signals will be an accurate representation of the underlying continuous-time signal.

² Note that $\cos(\omega_0 n) = \cos((2\pi - \omega_0)n)$ for any $0 \le \omega_0 \le \pi$. The same is not true for $\sin(\omega_0 n)$. In fact, one can show that for any two different frequencies $0 \le \omega_0 < \omega_1 \le 2\pi$, $\sin(\omega_0 n)$ and $\sin(\omega_1 n)$ are different functions.
| Continuous time: $e^{j\omega_0 t}$ | Discrete time: $e^{j\omega_0 n}$ |
| --- | --- |
| Distinct signals for different values of $\omega_0$ | Identical signals for values of $\omega_0$ separated by $2\pi$ |
| Periodic for any $\omega_0$ | Periodic only if $\omega_0 = 2\pi\frac{k}{N}$ for some integers $k$ and $N > 0$ |
| Fundamental period: undefined for $\omega_0 = 0$, and $\frac{2\pi}{\omega_0}$ otherwise | Fundamental period: undefined for $\omega_0 = 0$, and $k\left(\frac{2\pi}{\omega_0}\right)$ otherwise |
| Fundamental frequency: $\omega_0$ | Fundamental frequency: $\frac{\omega_0}{k}$ |
As with continuous-time signals, for any period $N$, we define the harmonic family of discrete-time complex exponentials as
$$\phi_k[n] = e^{jk\frac{2\pi}{N}n}, \quad k \in \mathbb{Z}.$$
This is the set of all discrete-time complex exponentials that have a common period $N$, and whose frequencies are multiples of $\frac{2\pi}{N}$. This family will play a role in our analysis later in the course.
In other words, the unit step function can be viewed as a superposition of shifted
impulse functions.
Suppose we are given some arbitrary signal $f[n]$. If we multiply $f[n]$ by the time-shifted impulse function $\delta[n-k]$, we get a signal that is zero everywhere except at $n = k$, where it takes the value $f[k]$. This is known as the sampling or sifting property of the impulse function:
$$f[n]\,\delta[n-k] = f[k]\,\delta[n-k].$$
Summing over all shifts, we obtain
$$f[n] = \sum_{k=-\infty}^{\infty} f[k]\,\delta[n-k],$$
i.e., any function $f[n]$ can be written as a sum of scaled and shifted impulse functions.
2.5.2 Continuous-Time
The continuous-time unit step function is defined by
$$u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{if } t \ge 0. \end{cases}$$
The continuous-time unit impulse function $\delta(t)$ is drawn with an arrow at the origin (since it has no width and infinite height). We will often be interested in working with scaled and time-shifted versions of the continuous-time impulse function. Just as we did with discrete-time functions, we can take a continuous-time function $f(t)$ and represent it as
$$f(t) = \int_{-\infty}^{\infty} f(\tau)\,\delta(t-\tau)\,d\tau.$$
[Figures: system interconnections — a series connection (the input passes through System 1 and then System 2), a parallel connection (the input drives System 1 and System 2 and their outputs are summed), and a feedback connection (the output of System 2 is fed back and combined with the input to System 1).]
For example, the running integrator $y(t) = \int_{-\infty}^{t} x(\tau)\,d\tau$ is not memoryless, as the output depends on all of the input values from the past.
Systems with memory are often represented as having some sort of state and
dynamics, which maintains the necessary information from the past. For exam-
ple, for the system given above, we can use the fundamental theorem of calculus
to obtain
$$\frac{dy(t)}{dt} = x(t)$$
where the state of the system is y(t) (this is also the output), and the dynamics
of the state are given by the differential equation above. Similarly for the
discrete-time system
$$y[n] = \sum_{k=-\infty}^{n} x[k]$$
we have
$$y[n] = \sum_{k=-\infty}^{n-1} x[k] + x[n] = y[n-1] + x[n],$$
which is a difference equation describing how the state $y[n]$ evolves over time.
Invertibility
Causality
A system is causal if the output of the system at any time depends only on
the input at that time and from the past. In other words, for all t ∈ R, y(t)
depends only on x(τ ) for τ ≤ t. Thus, a causal system does not react to inputs
that will happen in the future. For a causal system, if two different inputs have
the same values up to a certain time, the output of the system due to those two
inputs will agree up to that time as well. All memoryless systems are causal.
There are various instances where we may wish to use noncausal systems. For
example, if we have time-series data saved offline, we can use the saved values
of the signal for k > n to process the signal at a given time-step n (this can be
used for music and video editing, for example). Alternatively, the independent
variable may represent space, rather than time. In this case, one can use the
values of the signal from points on either side of a given point in order to process
the signal.
Example 2.2. The system $y[n] = x[-n]$ is noncausal; for example, $y[-1] = x[1]$, and thus the output at negative time-steps depends on the input from positive time-steps (i.e., in the future).

The system $y(t) = x(t)\cos(t+1)$ is causal; the $t+1$ term appears only in the time-varying gain $\cos(t+1)$ and not in the argument of the input, and thus the output at any time does not depend on values of the input at future times.
Stability
The notion of stability is a critical system property. There are many different
notions of stability that can be considered, but for the purposes of this course,
we will say that a system is stable if a bounded input always leads to a bounded
output. In other words, for a continuous-time system, if there exists a constant
B1 ∈ R≥0 such that the input satisfies |x(t)| ≤ B1 for all t ∈ R, then there
should exist some other constant B2 ∈ R≥0 such that |y(t)| ≤ B2 for all t ∈
R. An entirely analogous definition holds for discrete-time systems. Loosely
speaking, for a stable system, the output cannot grow indefinitely when the
input is bounded by a certain value.
Example 2.3. The system y(t) = tx(t) is memoryless and causal, but not
stable. For example, if x(t) = 1 for all t ∈ R, we have y(t) = t which is not
bounded by any constant.
Similarly, the system $y[n] = y[n-1] + x[n]$ is not stable. This is seen by noting that $y[n] = \sum_{k=-\infty}^{n} x[k]$. So, for example, if $x[n] = u[n]$, we have $y[n] = n + 1$ for $n \ge 0$, which is unbounded.
An example of a stable causal memoryless system is $y(t) = \cos(x(t))$. Another example of a stable and causal system is
$$y[n] = \begin{cases} 0 & \text{if } n < 0, \\ \alpha y[n-1] + x[n] & \text{if } n \ge 0, \end{cases}$$
provided $|\alpha| < 1$.
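A short simulation sketch (not from the notes) contrasting this system for $|\alpha| < 1$ with the unstable running sum ($\alpha = 1$) from Example 2.3; the input $u[n]$ and horizon length are arbitrary choices.

```python
import numpy as np

def simulate(alpha, x):
    """Simulate y[n] = alpha*y[n-1] + x[n] for n >= 0, at rest before n = 0."""
    y = np.zeros(len(x))
    prev = 0.0
    for n, xn in enumerate(x):
        y[n] = alpha * prev + xn
        prev = y[n]
    return y

x = np.ones(200)             # bounded input x[n] = u[n]
print(simulate(0.5, x)[-1])  # ≈ 2.0: settles near 1/(1 - 0.5), bounded
print(simulate(1.0, x)[-1])  # = 200.0: the running sum, grows without bound
```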
Time-Invariance
and thus the output due to $x(t - t_0)$ is a time-shifted version of $y(t)$, as required.
An example of a time-varying system is y[n] = nx[n]. For example, if x[n] =
δ[n], then we have the output signal y[n] = 0 for all time. However, if x[n] =
δ[n − 1], then we have y[n] = 1 for n = 1 and zero for all other times. Thus a
shift in the input did not result in a simple shift in the output.
Linearity
1. Additivity: Suppose the output is y1 (t) when the input is x1 (t), and the
output is y2 (t) when the input is x2 (t). Then the output to x1 (t) + x2 (t)
is y1 (t) + y2 (t).
2. Scaling: Suppose the output is y(t) when the input is x(t). Then for any
complex number α, the output should be αy(t) when the input is αx(t).
Both properties together define the superposition property: if the input to the
system is α1 x1 (t) + α2 x2 (t), then the output should be α1 y1 (t) + α2 y2 (t). Note
that this must hold for any inputs and scaling parameters in order for the system
to qualify as linear. An entirely analogous definition holds for discrete-time
systems.
For any linear system, the output must be zero for all time when the input is
zero for all time. To see this, consider any arbitrary input x(t), and let the
corresponding output be y(t). Then, using the scaling property, the output to
2.6 Properties of Systems 19
αx(t) must be αy(t) for any scalar complex number α. Simply choosing α = 0
yields the desired result that the output will be the zero signal when the input
is the zero signal.
Example 2.5. The system $y(t) = t\,x(t)$ is linear. To see this, consider two arbitrary input signals $x_1(t)$ and $x_2(t)$, and two arbitrary scalars $\alpha_1, \alpha_2$. Then we have
$$t\left(\alpha_1 x_1(t) + \alpha_2 x_2(t)\right) = \alpha_1 t\,x_1(t) + \alpha_2 t\,x_2(t) = \alpha_1 y_1(t) + \alpha_2 y_2(t),$$
where $y_1(t)$ and $y_2(t)$ are the outputs due to $x_1(t)$ and $x_2(t)$, respectively.
The system $y[n] = x^2[n]$ is nonlinear. Let $y_1[n] = x_1^2[n]$ and $y_2[n] = x_2^2[n]$. Consider the input $x_3[n] = x_1[n] + x_2[n]$. Then the output due to $x_3[n]$ is
$$y_3[n] = x_3^2[n] = (x_1[n] + x_2[n])^2 \neq x_1^2[n] + x_2^2[n]$$
in general. Thus the additivity property does not hold, and the system is nonlinear.
The system $y[n] = \mathrm{Re}\{x[n]\}$ is nonlinear, where $\mathrm{Re}\{\cdot\}$ denotes the real part of the argument. To see this, let $x[n] = a[n] + jb[n]$, where $a[n]$ and $b[n]$ are real-valued signals. Consider a scalar $\alpha = j$. Then we have
$$\mathrm{Re}\{\alpha x[n]\} = \mathrm{Re}\{ja[n] - b[n]\} = -b[n] \neq \alpha\,\mathrm{Re}\{x[n]\} = ja[n],$$
so the scaling property fails, and the system is nonlinear.
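Such checks can be automated. The sketch below (not from the notes) tests the superposition property on random inputs; a numerical test of this kind can expose nonlinearity but cannot prove linearity, and the systems and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(32)

def check_superposition(system, n=32):
    """Numerically test additivity + scaling on random inputs."""
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    a1, a2 = 2.0, 1j   # a complex scale catches systems like Re{x}
    lhs = system(a1 * x1 + a2 * x2)
    rhs = a1 * system(x1) + a2 * system(x2)
    return np.allclose(lhs, rhs)

print(check_superposition(lambda x: t * x))       # True:  y = t x(t) is linear
print(check_superposition(lambda x: x ** 2))      # False: squaring
print(check_superposition(lambda x: np.real(x)))  # False: real part
```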
Chapter 3

Analysis of Linear Time-Invariant Systems
Consider a discrete-time system with input x[n] and output y[n]. First, define
the impulse response of the system to be the output when x[n] = δ[n] (i.e.,
the input is an impulse function). Denote this impulse response by the signal
h[n].
Now, consider an arbitrary signal $x[n]$. Recall from the sifting property of the impulse function that
$$x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k],$$
i.e., $x[n]$ can be written as a superposition of scaled and shifted impulse functions.
Since the system is time-invariant, the response of the system to the input $\delta[n-k]$ is $h[n-k]$. By linearity (and specifically the scaling property), the response to $x[k]\,\delta[n-k]$ is $x[k]\,h[n-k]$. By the additivity property, the response
to $\sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k]$ is then
$$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k].$$
The above is called the convolution sum; the convolution of the signals $x[n]$ and $h[n]$ is denoted by
$$x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k].$$
Thus we have the following very important property of discrete-time LTI sys-
tems: if x[n] is the input signal to an LTI system, and h[n] is the
impulse response of the system, then the output of the system is
y[n] = x[n] ∗ h[n].
Example 3.1. Consider an LTI system with impulse response
$$h[n] = \begin{cases} 1 & \text{if } 0 \le n \le 3, \\ 0 & \text{otherwise.} \end{cases}$$
Then we have
$$y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k].$$
When $n = 1$ we have
$$y[1] = \sum_{k=0}^{1} x[k]\,h[1-k] = x[0]h[1] + x[1]h[0] = 2.$$
Example 3.2. Consider an LTI system with impulse response $h[n] = u[n]$. Suppose the input signal is $x[n] = \alpha^n u[n]$ with $0 < \alpha < 1$. Then we have
$$y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k].$$
Since $x[k] = 0$ for $k < 0$ and $h[n-k] = 0$ for $k > n$, we have
$$y[n] = \sum_{k=0}^{n} x[k]\,h[n-k] = \sum_{k=0}^{n} \alpha^k = \frac{1-\alpha^{n+1}}{1-\alpha}$$
for $n \ge 0$ (and $y[n] = 0$ for $n < 0$).
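A numerical sanity check of this closed form using np.convolve for the convolution sum (a sketch; the truncation length is an assumption, but the first N output samples are unaffected because both signals are causal):

```python
import numpy as np

# Verify Example 3.2: x[n] = alpha^n u[n] into h[n] = u[n] should give
# y[n] = (1 - alpha^(n+1)) / (1 - alpha) for n >= 0.
alpha, N = 0.8, 50
n = np.arange(N)
x = alpha ** n             # x[n] for n = 0..N-1 (zero before n = 0)
h = np.ones(N)             # h[n] = u[n], truncated to N samples

y = np.convolve(x, h)[:N]  # first N samples are exact despite truncation
y_closed = (1 - alpha ** (n + 1)) / (1 - alpha)
print(np.allclose(y, y_closed))  # True
```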
The expression on the right hand side is a superposition of scaled and shifted impulse functions. Thus, when this signal is applied to an LTI system, the output will be a superposition of scaled and shifted impulse responses. More specifically, if $h(t)$ is the output of the system when the input is $x(t) = \delta(t)$, then the output for a general input $x(t)$ is given by
$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau.$$
To see this, start with the definition of convolution and perform a change of variable by setting $r = n - k$. This gives
$$x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k] = \sum_{r=-\infty}^{\infty} x[n-r]\,h[r] = h[n] * x[n].$$
The same holds for the continuous-time convolution. Thus it does not matter
which of the signals we choose to flip and shift in the convolution operation.
The distributive property has implications for LTI systems connected in parallel:
[Figure: a parallel interconnection — $x[n]$ drives two systems with impulse responses $h_1[n]$ and $h_2[n]$; their outputs $y_1[n]$ and $y_2[n]$ are summed to give $y[n]$.]
Let h1 [n] be the impulse response of System 1, and let h2 [n] be the impulse
response for System 2. Then we have y1 [n] = x[n]∗h1 [n] and y2 [n] = x[n]∗h2 [n].
Thus,
$$y[n] = y_1[n] + y_2[n] = x[n]*h_1[n] + x[n]*h_2[n] = x[n]*(h_1[n] + h_2[n]).$$
The above expression indicates that the parallel interconnection can equivalently
be viewed as x[n] passing through a single system whose impulse response is
h1 [n] + h2 [n]:
[Figure: an equivalent single system with impulse response $h_1[n] + h_2[n]$.]
In other words, it does not matter which order we do the convolutions. The
above relationships can be proved by manipulating the summations (or inte-
grals); we won’t go into the details here.
$$y[n] = y_1[n]*h_2[n] = (x[n]*h_1[n])*h_2[n] = x[n]*(h_1[n]*h_2[n]).$$

[Figure: an equivalent single system with impulse response $h_1[n]*h_2[n]$.]
Further note that since h1 [n] ∗ h2 [n] = h2 [n] ∗ h1 [n], we can also interchange the
order of the systems in the series interconnection as shown in Fig. 3.3, without
changing the overall input-output relationship between x[n] and y[n].
[Figure: the series interconnection with the order of the two systems exchanged — $h_2[n]$ followed by $h_1[n]$.]
we see that $y[n]$ will depend on a value of the input signal other than at time-step $n$ unless $h[k] = 0$ for all $k \neq 0$. In other words, for an LTI system to be memoryless, we require $h[n] = K\delta[n]$ for some constant $K$. Similarly, a continuous-time LTI system is memoryless if and only if $h(t) = K\delta(t)$ for some constant $K$. In both cases, all LTI memoryless systems have the form $y[n] = Kx[n]$ (or $y(t) = Kx(t)$).
Consider an LTI system with impulse response h[n] (or h(t)). Recall that the
system is said to be invertible if the output of the system uniquely specifies the
input. If a system is invertible, there is another system (known as the “inverse
system”) that takes the output of the original system and outputs the input to
the original system, as shown in Fig. 3.4.
Suppose the second system is LTI and has impulse response $h_I[n]$. Then, by the associative property discussed earlier, we see that the series interconnection of the system with its inverse is equivalent (in an input-output sense) to a single system with impulse response $h[n]*h_I[n]$. In particular, we require
$$h[n]*h_I[n] = \delta[n].$$
In other words, if we have an LTI system with impulse response $h[n]$, and another LTI system with impulse response $h_I[n]$ such that $h[n]*h_I[n] = \delta[n]$, then those systems are inverses of each other. The analogous statement holds in continuous-time as well.
Example 3.5. Consider the LTI system with impulse response $h[n] = \alpha^n u[n]$. One can verify that this impulse response corresponds to the system
$$y[n] = \sum_{k=-\infty}^{n} x[k]\,\alpha^{n-k} = \alpha y[n-1] + x[n].$$
Now consider the system $y_I[n] = x_I[n] - \alpha x_I[n-1]$, with input signal $x_I[n]$ and output signal $y_I[n]$. The impulse response of this system is $h_I[n] = \delta[n] - \alpha\delta[n-1]$, and one can check that $h[n]*h_I[n] = \delta[n]$. Thus, the system with impulse response $h_I[n]$ is the inverse of the system with impulse response $h[n]$.
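A quick check of this inverse pair by direct convolution (a sketch; the value of $\alpha$ and the truncation length N are arbitrary choices):

```python
import numpy as np

# Check Example 3.5: h[n] = alpha^n u[n] and h_I[n] = delta[n] - alpha*delta[n-1]
# should satisfy h[n] * h_I[n] = delta[n].
alpha, N = 0.7, 20
h = alpha ** np.arange(N)         # h[n] = alpha^n u[n], truncated
h_inv = np.zeros(N)
h_inv[0], h_inv[1] = 1.0, -alpha  # h_I[n] = delta[n] - alpha*delta[n-1]

delta = np.convolve(h, h_inv)[:N] # first N samples are exact
print(np.allclose(delta, np.eye(N)[0]))  # True: the unit impulse
```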
Recall that a system is causal if its output at time t depends only on the inputs
up to (and potentially including) t. To see what this means for LTI systems,
consider the convolution sum
$$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k] = \sum_{k=-\infty}^{\infty} x[n-k]\,h[k],$$
where the second expression follows from the commutative property of the con-
volution. In order for y[n] to not depend on x[n + 1], x[n + 2], . . ., we see that
h[k] must be zero for k < 0. The same conclusion holds for continuous-time
systems, and thus we have the following: A continuous-time LTI system is
causal if and only if its impulse response h(t) is zero for all t < 0. A
discrete-time LTI system is causal if and only if its impulse response
h[n] is zero for all n < 0.
Note that causality is a property of a system; however we will sometimes refer
to a signal as being causal, by which we simply mean that its value is zero for
n or t less than zero.
To see what the LTI property means for stability of systems, consider again the convolution sum
$$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k].$$
Note that
$$|y[n]| = \left|\sum_{k=-\infty}^{\infty} x[k]\,h[n-k]\right| \le \sum_{k=-\infty}^{\infty} |x[k]\,h[n-k]| = \sum_{k=-\infty}^{\infty} |x[k]|\,|h[n-k]|.$$
Now suppose that $x[n]$ is bounded, i.e., there exists some $B \in \mathbb{R}_{\ge 0}$ such that $|x[n]| \le B$ for all $n \in \mathbb{Z}$. Then the above expression becomes
$$|y[n]| \le B \sum_{k=-\infty}^{\infty} |h[n-k]|.$$
Thus, if $\sum_{k=-\infty}^{\infty} |h[n-k]| < \infty$ (which means that $h[n]$ is absolutely summable), then $|y[n]|$ will also be bounded for all $n$. It turns out that this is a necessary condition as well: if $\sum_{k=-\infty}^{\infty} |h[n-k]| = \infty$, then there is a bounded input that causes the output to be unbounded.

The same conclusion holds in continuous-time as well. Thus, we have: A continuous-time LTI system is stable if and only if $\int_{-\infty}^{\infty} |h(\tau)|\,d\tau < \infty$. A discrete-time LTI system is stable if and only if $\sum_{k=-\infty}^{\infty} |h[k]| < \infty$.
Example 3.6. Consider the LTI system with impulse response $h[n] = \alpha^n u[n]$, where $\alpha \in \mathbb{R}$. We have
$$\sum_{k=-\infty}^{\infty} |h[k]| = \sum_{k=0}^{\infty} |\alpha|^k = \begin{cases} \frac{1}{1-|\alpha|} & \text{if } |\alpha| < 1, \\ \infty & \text{if } |\alpha| \ge 1, \end{cases}$$
so the system is stable if and only if $|\alpha| < 1$.
To see how the step response is related to the impulse response, note that
$$s[n] = \sum_{k=-\infty}^{\infty} u[k]\,h[n-k] = \sum_{k=-\infty}^{\infty} u[n-k]\,h[k] = \sum_{k=-\infty}^{n} h[k].$$
This is equivalent to s[n] = s[n − 1] + h[n]. Thus, the step response of a discrete-
time LTI system is the running sum of the impulse response.
Note that this could also have been seen by noting that δ[n] = u[n] − u[n − 1].
If the impulse is applied to an LTI system, we get the impulse response h[n].
However, by the linearity property, this output must be the superposition of the
outputs due to u[n] and u[n − 1]. By the time-invariance property, the output
due to u[n−1] is s[n−1], and thus for LTI systems we have h[n] = s[n]−s[n−1],
which corroborates what we obtained above.
For continuous-time systems, we have the same idea:
$$s(t) = \int_{-\infty}^{\infty} u(\tau)\,h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} u(t-\tau)\,h(\tau)\,d\tau = \int_{-\infty}^{t} h(\tau)\,d\tau.$$
$$\frac{dv}{dt} = a(t).$$
If we included wind resistance or friction (which produces a force that is proportional to the velocity, in the direction opposite to travel), we have
$$\frac{dv}{dt} = -\alpha v(t) + a(t),$$
where α > 0 is the coefficient of friction. Similarly, given an RC circuit, if we
define the voltage across the capacitor as the output, and the source voltage as
the input, then the input and output are again related via a differential equation
of the above form.
Thus, the particular solution is given by $y_p(t) = \frac{K}{5}e^{3t}$ for $t > 0$.

Together, we have $y(t) = y_h(t) + y_p(t) = Ae^{-2t} + \frac{K}{5}e^{3t}$ for $t > 0$. Note that the coefficient $A$ has not been determined yet; in order to do so, we need more information about the solutions to the differential equation, typically in the form of initial conditions. For example, if we know that the system is at rest until the input is applied (i.e., $y(t) = 0$ until $x(t)$ becomes nonzero), we have $y(t) = 0$ for $t < 0$. Suppose we are given the initial condition $y(0) = 0$. Then,
$$y(0) = A + \frac{K}{5} = 0 \;\Rightarrow\; A = -\frac{K}{5}.$$
Thus, with the given initial condition, we have $y(t) = \frac{K}{5}\left(e^{3t} - e^{-2t}\right)u(t)$.
The above example illustrates the general approach to solving linear differential equations of the form
$$\sum_{k=0}^{N} a_k \frac{d^k y}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x}{dt^k},$$
where x(t) is some given function. The idea will be to make yp (t) a linear
combination of terms that, when differentiated, yield terms that appear in x(t)
and its derivatives. Typically this only works when x(t) involves terms like
et , sin(t), cos(t), polynomials in t, etc. Let’s try another example.
Example 3.8. Consider the differential equation
$$y''(t) + y'(t) - 6y(t) = x'(t) + x(t), \tag{3.1}$$
with input $x(t) = e^{4t}u(t)$. Seeking homogeneous solutions of the form $y_h(t) = Ae^{st}$ yields the characteristic equation $s^2 + s - 6 = (s+3)(s-2) = 0$, so the homogeneous solution is
$$y_h(t) = A_1 e^{-3t} + A_2 e^{2t}$$
for some constants $A_1$ and $A_2$ that will be determined from the initial conditions.
To find a particular solution, note that for $t > 0$, we have $x'(t) + x(t) = 4e^{4t} + e^{4t} = 5e^{4t}$. Thus we search for a particular solution of the form $y_p(t) = Be^{4t}$ for $t > 0$. Substituting into the differential equation (3.1), we have
$$y_p''(t) + y_p'(t) - 6y_p(t) = x'(t) + x(t) \;\Rightarrow\; 16Be^{4t} + 4Be^{4t} - 6Be^{4t} = 5e^{4t} \;\Rightarrow\; B = \frac{5}{14}.$$
Thus, $y_p(t) = \frac{5}{14}e^{4t}$ for $t > 0$ is a particular solution.
The overall solution is then of the form $y(t) = y_h(t) + y_p(t) = A_1 e^{-3t} + A_2 e^{2t} + \frac{5}{14}e^{4t}$ for $t > 0$. If we are told that the system is at rest until the input is applied, and that $y(0) = y'(0) = 0$, we have
$$y(0) = A_1 + A_2 + \frac{5}{14} = 0,$$
$$y'(0) = -3A_1 + 2A_2 + \frac{20}{14} = 0.$$
Solving these equations, we obtain $A_1 = \frac{1}{7}$ and $A_2 = -\frac{1}{2}$. Thus, the solution is
$$y(t) = \left(\frac{1}{7}e^{-3t} - \frac{1}{2}e^{2t} + \frac{5}{14}e^{4t}\right)u(t).$$
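For reference, the same answer can be obtained symbolically; a minimal sketch using sympy (not from the notes, and assuming the rest conditions as initial conditions):

```python
import sympy as sp

t = sp.symbols('t', positive=True)  # consider t > 0, where x(t) = exp(4t)
y = sp.Function('y')

# For t > 0: y'' + y' - 6y = x' + x = 5*exp(4t), with rest initial conditions.
ode = sp.Eq(y(t).diff(t, 2) + y(t).diff(t) - 6 * y(t), 5 * sp.exp(4 * t))
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
print(sol.rhs)  # exp(-3*t)/7 - exp(2*t)/2 + 5*exp(4*t)/14
```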
The overall solution will be of the form $y[n] = y_h[n] + y_p[n]$, where $y_h[n]$ is a homogeneous solution to
$$\sum_{k=0}^{N} a_k\,y_h[n-k] = 0,$$
and $y_p[n]$ is a particular solution satisfying the difference equation (3.2) for the given function $x[n]$. In this case, we seek homogeneous solutions of the form $y_h[n] = A\beta^n$ for some $A, \beta \in \mathbb{C}$, and seek particular solutions that have the same form as the quantities that appear in $x[n]$. Let's do an example.
for $n \ge 1$ (note that we don't look at $n = 0$ here because we have not defined $y_p[-1]$). Solving this, we get $B = -2$. Thus the particular solution is $y_p[n] = -2\left(\frac{1}{3}\right)^n$ for $n \ge 0$.

Now, we have $y[n] = y_h[n] + y_p[n] = A\left(\frac{1}{2}\right)^n - 2\left(\frac{1}{3}\right)^n$ for $n \ge 0$. Suppose we are told that the system is at rest for $n < 0$, i.e., $y[n] = 0$ for $n < 0$. Looking at equation (3.3), we have
$$y[0] - \frac{1}{2}y[-1] = 1 \;\Rightarrow\; y[0] = 1.$$
Substituting the expression for $y[n]$, we have
$$1 = y[0] = A - 2 \;\Rightarrow\; A = 3.$$
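A quick simulation of the recursion against this closed form (a sketch; it assumes the input $x[n] = \left(\frac{1}{3}\right)^n u[n]$, which is consistent with the particular-solution calculation above but was stated in a part of the notes not shown here):

```python
import numpy as np

# Verify y[n] = 3*(1/2)^n - 2*(1/3)^n for y[n] - (1/2) y[n-1] = x[n],
# with assumed input x[n] = (1/3)^n u[n] and the system at rest for n < 0.
N = 25
n = np.arange(N)
x = (1.0 / 3.0) ** n

y = np.zeros(N)
prev = 0.0                    # y[-1] = 0 (system at rest)
for k in range(N):
    y[k] = 0.5 * prev + x[k]  # y[n] = (1/2) y[n-1] + x[n]
    prev = y[k]

y_closed = 3 * 0.5 ** n - 2 * (1.0 / 3.0) ** n
print(np.allclose(y, y_closed))  # True
```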
Drawing block diagrams for more general differential and difference equations (involving more than just $x[n]$ on the right hand side) is easier using Laplace and z-transform techniques, and so we will defer a study of such equations until then.

¹ One can also calculate this using the homogeneous and particular solutions; in this case,
For the above equations, we start by writing the highest derivative of $y$ (or the most advanced version of $y$) in terms of all of the other quantities:
$$\frac{d^N y(t)}{dt^N} = -a_{N-1}\frac{d^{N-1}y(t)}{dt^{N-1}} - \cdots - a_0\,y(t) + b_0\,x(t) \tag{3.4}$$
$$y[n+N] = -a_{N-1}\,y[n+N-1] - \cdots - a_0\,y[n] + b_0\,x[n]. \tag{3.5}$$
Next, we use a key building block: the integrator block (for continuous-time) or the delay block (for discrete-time). Specifically, the integrator block is a system whose output is the integral of the input, and the delay block is a system whose output is a delayed version of the input. Thus, if we feed $\frac{dy}{dt}$ into the integrator block, we get $y(t)$ out, and if we feed $y[n+1]$ into the delay block, we get $y[n]$ out, as shown in Fig. 3.5.

[Figure 3.5: an integrator block mapping $\frac{dy(t)}{dt}$ to $y(t)$, and a delay block mapping $y[n+1]$ to $y[n]$. A series chain of $N$ such blocks produces $\frac{d^{N-1}y(t)}{dt^{N-1}}, \ldots, y(t)$ from $\frac{d^N y(t)}{dt^N}$ (or $y[n+N-1], \ldots, y[n]$ from $y[n+N]$).]
This series chain of integrator (or delay) blocks provides us with all of the signals needed to represent (3.4) and (3.5). Specifically, from equation (3.4), we see that $\frac{d^N y(t)}{dt^N}$ is a linear combination of the signals $\frac{d^{N-1}y(t)}{dt^{N-1}}, \ldots, y(t), x(t)$. Thus, to generate the signal $\frac{d^N y(t)}{dt^N}$, we simply take the signals from the corresponding integrator blocks, multiply them by the coefficients, and add them all together. The same holds true for the signal $y[n+N]$ in (3.5).
Chapter 4
Fourier Series
Representation of Periodic
Signals
In the last part of the course, we decomposed signals into sums of scaled and
time-shifted impulse functions. For LTI systems, we could then write the output
as a sum of scaled and time-shifted impulse responses (using the superposition
property). In this part of the course, we will consider alternate (and very useful)
decompositions of signals as sums of scaled complex exponential functions.
As we will see, such functions exhibit some nice behavior when applied to LTI
systems. This particular chapter will focus on decomposing periodic signals into
complex exponentials (leading to the Fourier Series), and subsequent chapters
will deal with the decomposition of more general signals.
Recall that a complex exponential has the form $x(t) = e^{st}$ (in continuous-time), and $x[n] = z^n$ (in discrete-time), where $s$ and $z$ are general complex numbers.
Let's start with a continuous-time LTI system with impulse response $h(t)$. When we apply the complex exponential $x(t) = e^{st}$ to the system, the output is given by
$$y(t) = x(t)*h(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)\,e^{s(t-\tau)}\,d\tau = e^{st}\int_{-\infty}^{\infty} h(\tau)\,e^{-s\tau}\,d\tau.$$
Let us define
$$H(s) = \int_{-\infty}^{\infty} h(\tau)\,e^{-s\tau}\,d\tau.$$
If, for the given complex number $s$, the above integral exists (i.e., is finite), then $H(s)$ is just some complex number. Thus, we see that for an LTI system, if we apply the complex exponential $x(t) = e^{st}$ as an input, we obtain the quantity
$$y(t) = H(s)e^{st}$$
as an output. In other words, we get the same complex exponential out of the system, just scaled by the complex number $H(s)$. Thus, the signal $e^{st}$ is called an eigenfunction of the system, with eigenvalue $H(s)$.
The same reasoning applies for discrete-time LTI systems. Consider an LTI system with impulse response $h[n]$, and input $x[n] = z^n$. Then,
$$y[n] = x[n]*h[n] = \sum_{k=-\infty}^{\infty} h[k]\,x[n-k] = \sum_{k=-\infty}^{\infty} h[k]\,z^{n-k} = z^n \sum_{k=-\infty}^{\infty} h[k]\,z^{-k}.$$
Let us define
$$H(z) = \sum_{k=-\infty}^{\infty} h[k]\,z^{-k}.$$
If this sum converges for the given choice of complex number $z$, then $H(z)$ is just some complex number. Thus, we see again that for a discrete-time LTI system with the complex exponential $x[n] = z^n$ as an input, we obtain the quantity
$$y[n] = H(z)z^n$$
as an output.
As we will see later in the course, the quantities H(s) and H(z) are the Laplace
Transform and z-Transform of the impulse response of the system, respec-
tively.
Note that the above translates to superpositions of complex exponentials in a natural way. Specifically, if the input is $x(t) = \sum_{i=1}^{n} a_i e^{s_i t}$ for some complex numbers $a_1, \ldots, a_n$ and $s_1, \ldots, s_n$, we have
$$y(t) = \sum_{i=1}^{n} a_i H(s_i)\,e^{s_i t}.$$
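A numerical illustration of the eigenfunction property for a discrete-time system (a sketch, not from the notes; the FIR impulse response h and frequency w below are arbitrary choices):

```python
import numpy as np

# Feeding z^n = e^{jwn} through an LTI system with impulse response h
# should return H(z) * z^n once the convolution has full overlap.
h = np.array([1.0, 0.5, 0.25])   # an arbitrary FIR impulse response
w = 0.3
n = np.arange(50)
x = np.exp(1j * w * n)

y = np.convolve(x, h)[:50]       # output; exact for n >= len(h) - 1
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))  # H = sum_k h[k] z^{-k}
print(np.allclose(y[2:], H * x[2:]))  # True (after the 2-sample start-up)
```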
Example 4.2. Consider the system $y(t) = x(t - t_0)$, where $t_0 \in \mathbb{R}$. The impulse response of this system is $h(t) = \delta(t - t_0)$, and thus
$$H(s) = \int_{-\infty}^{\infty} h(t)\,e^{-st}\,dt = \int_{-\infty}^{\infty} \delta(t - t_0)\,e^{-st}\,dt = e^{-st_0}.$$
Suppose we apply the signal $x(t) = \cos(\omega_0 t)$ to the system. We expect the output to be $\cos(\omega_0(t - t_0))$, based on the definition of the system. Let's verify this using the identities we derived earlier. We have
$$\cos(\omega_0 t) = \frac{1}{2}e^{j\omega_0 t} + \frac{1}{2}e^{-j\omega_0 t}.$$
Thus, when we apply the input $\cos(\omega_0 t)$, the output is given by
$$y(t) = \frac{1}{2}H(j\omega_0)e^{j\omega_0 t} + \frac{1}{2}H(-j\omega_0)e^{-j\omega_0 t} = \frac{1}{2}e^{-j\omega_0 t_0}e^{j\omega_0 t} + \frac{1}{2}e^{j\omega_0 t_0}e^{-j\omega_0 t} = \cos(\omega_0(t - t_0)),$$
as expected.
Consider the periodic complex exponential
$$x(t) = e^{j\omega_0 t}.$$
Recall that this signal is periodic with fundamental period $T = \frac{2\pi}{\omega_0}$ (assuming $\omega_0 > 0$). Based on this complex exponential, we can define an entire harmonic family of complex exponentials, given by
$$\phi_k(t) = e^{jk\omega_0 t}, \quad k \in \mathbb{Z}.$$
Note that $T$ may not be the fundamental period of the signal $\phi_k(t)$, however. Since each of the signals in the harmonic family is periodic with period $T$, a linear combination of signals from that family is also periodic. Specifically, consider the signal
$$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t} = \sum_{k=-\infty}^{\infty} a_k e^{jk\frac{2\pi}{T}t}.$$
Suppose we are given a certain periodic signal $x(t)$ with fundamental period $T$. Define $\omega_0 = \frac{2\pi}{T}$ and suppose that we can write $x(t)$ as
$$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\frac{2\pi}{T}t} = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}.$$
Example 4.3. Consider the signal $x(t) = \cos(\omega_0 t)$, where $\omega_0 > 0$. We have
$$x(t) = \frac{1}{2}e^{j\omega_0 t} + \frac{1}{2}e^{-j\omega_0 t}.$$
This is the Fourier Series representation of $x(t)$; it has only first harmonics, with coefficients $a_1 = a_{-1} = \frac{1}{2}$.

Similarly, consider the signal $x(t) = \sin(\omega_0 t)$. We have
$$x(t) = \frac{1}{2j}e^{j\omega_0 t} - \frac{1}{2j}e^{-j\omega_0 t}.$$
Once again, the signal has only first harmonics, with coefficients $a_1 = \frac{1}{2j}$ and $a_{-1} = -\frac{1}{2j}$.
Suppose that we have a periodic signal that has a Fourier Series representation
$$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}. \tag{4.1}$$
Now suppose that $x(t)$ is real, i.e., $x^*(t) = x(t)$. Taking the complex conjugate of both sides of the above expression, we have
$$x^*(t) = \sum_{k=-\infty}^{\infty} a_k^* e^{-jk\omega_0 t}.$$
Comparing the terms, we see that for any $k \in \mathbb{Z}$, the coefficient of $e^{jk\omega_0 t}$ is $a_k$ on the left hand side, and is $a_{-k}^*$ on the right hand side. Thus, for real signals $x(t)$, the Fourier Series coefficients satisfy
$$a_{-k} = a_k^*$$
for all $k \in \mathbb{Z}$. Substituting this into the Fourier Series representation (4.1) we have
$$x(t) = a_0 + \sum_{k=1}^{\infty}\left(a_k e^{jk\omega_0 t} + a_{-k}e^{-jk\omega_0 t}\right) = a_0 + \sum_{k=1}^{\infty}\left(a_k e^{jk\omega_0 t} + a_k^* e^{-jk\omega_0 t}\right) = a_0 + \sum_{k=1}^{\infty} 2\,\mathrm{Re}\left\{a_k e^{jk\omega_0 t}\right\},$$
where $\mathrm{Re}$ is the real part of the given complex number. If we write $a_k$ in polar form as $r_k e^{j\theta_k}$, the above expression becomes
$$x(t) = a_0 + \sum_{k=1}^{\infty} 2\,\mathrm{Re}\left\{r_k e^{j(k\omega_0 t + \theta_k)}\right\} = a_0 + 2\sum_{k=1}^{\infty} r_k \cos(k\omega_0 t + \theta_k).$$
We will soon see conditions under which a signal will have such a representation, but for now, suppose that we are just interested in finding the coefficients $a_k$, $k \in \mathbb{Z}$. To do this, multiply $x(t)$ by $e^{-jn\omega_0 t}$, where $n$ is some integer. This gives
$$x(t)e^{-jn\omega_0 t} = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}e^{-jn\omega_0 t} = \sum_{k=-\infty}^{\infty} a_k e^{j(k-n)\omega_0 t}.$$
Now suppose that we integrate both sides of the above equation from $t_0$ to $t_0 + T$ for any $t_0$:
$$\int_{t_0}^{t_0+T} x(t)e^{-jn\omega_0 t}\,dt = \int_{t_0}^{t_0+T} \sum_{k=-\infty}^{\infty} a_k e^{j(k-n)\omega_0 t}\,dt = \sum_{k=-\infty}^{\infty} a_k \int_{t_0}^{t_0+T} e^{j(k-n)\omega_0 t}\,dt.$$
When $n = k$, the integrand is 1 and the integral evaluates to $T$. Otherwise, if $n \neq k$, we have
$$\int_{t_0}^{t_0+T} e^{j(k-n)\omega_0 t}\,dt = \frac{1}{j(k-n)\omega_0}\left.e^{j(k-n)\omega_0 t}\right|_{t_0}^{t_0+T} = \frac{1}{j(k-n)\omega_0}\left(e^{j(k-n)\omega_0(t_0+T)} - e^{j(k-n)\omega_0 t_0}\right) = 0.$$
Thus, we have
$$\int_{t_0}^{t_0+T} x(t)e^{-jn\omega_0 t}\,dt = \sum_{k=-\infty}^{\infty} a_k \int_{t_0}^{t_0+T} e^{j(k-n)\omega_0 t}\,dt = a_n T,$$
or equivalently,
$$a_n = \frac{1}{T}\int_{t_0}^{t_0+T} x(t)e^{-jn\omega_0 t}\,dt,$$
where $t_0$ is any arbitrary starting point. In other words, we obtain the Fourier coefficient $a_n$ by multiplying the signal $x(t)$ by $e^{-jn\omega_0 t}$ and then integrating the resulting product over any period.
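A sketch that evaluates this analysis formula numerically via a Riemann sum, for the square wave of Example 4.4 below (the values T = 2 and T1 = 0.5 are assumed here for illustration):

```python
import numpy as np

T, T1 = 2.0, 0.5
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 200001)
dt = t[1] - t[0]
x = (np.abs(t) <= T1).astype(float)   # one period of the square wave

def fs_coeff(n):
    """a_n = (1/T) * integral over one period of x(t) exp(-j n w0 t) dt."""
    return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

for n in range(4):
    exact = 2 * T1 / T if n == 0 else np.sin(n * w0 * T1) / (n * np.pi)
    print(n, np.round(fs_coeff(n), 4), np.round(exact, 4))  # numeric vs exact
```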
Example 4.4. Consider the $T$-periodic signal defined over one period by
$$x(t) = \begin{cases} 0 & -\frac{T}{2} \le t < -T_1, \\ 1 & -T_1 \le t \le T_1, \\ 0 & T_1 < t < \frac{T}{2}. \end{cases}$$
Recall that the functions in the harmonic family are orthogonal over any period: $\int_{t_0}^{t_0+T} \phi_k(t)\,\phi_n^*(t)\,dt = 0$ if $k \neq n$, and nonzero otherwise. Note that $\phi_n^*(t)$ is the complex conjugate of $\phi_n(t)$. We then derived the expressions for the coefficients by using the orthogonality property. However, that derivation assumed that the signal could be written as a linear combination of the functions in the harmonic family, and then derived the coefficient expressions. Here we will justify this by first trying to approximate a given signal by a finite number of functions from the harmonic family and then taking the number of approximating functions to infinity. We will start by reviewing how to approximate a given vector by other vectors, and then explore the analogy to the approximation of functions.
Note that $v_1'v_2 = 0$ and thus $v_1$ and $v_2$ are orthogonal. Suppose we wish to approximate the vector $x$ using a linear combination of the vectors $v_1$ and $v_2$. In other words, we wish to find coefficients $a$ and $b$ so that the approximation
$$\hat{x} = av_1 + bv_2$$
is as close to $x$ as possible, with error
$$e = x - \hat{x} = x - av_1 - bv_2.$$
Note that $e$ is a vector, where the $i$-th component is the approximation error for the $i$-th component of $x$. We try to minimize $e_1^2 + e_2^2 + e_3^2$, which is given by
$$e_1^2 + e_2^2 + e_3^2 = e'e = (x - av_1 - bv_2)'(x - av_1 - bv_2) = x'x - ax'v_1 - bx'v_2 - av_1'x + a^2 v_1'v_1 + ab\,v_1'v_2 - bv_2'x + ab\,v_2'v_1 + b^2 v_2'v_2.$$
Noting that $v_1$ and $v_2$ are orthogonal, we have
$$e'e = x'x - ax'v_1 - bx'v_2 - av_1'x + a^2 v_1'v_1 - bv_2'x + b^2 v_2'v_2.$$
This is a convex function of the scalars $a$ and $b$. If we wish to minimize $e'e$, we take the derivative with respect to these scalars and set it equal to zero. This yields
$$\frac{\partial\, e'e}{\partial a} = -x'v_1 - v_1'x + 2a\,v_1'v_1 = 0 \;\Rightarrow\; a = \frac{v_1'x}{v_1'v_1},$$
$$\frac{\partial\, e'e}{\partial b} = -x'v_2 - v_2'x + 2b\,v_2'v_2 = 0 \;\Rightarrow\; b = \frac{v_2'x}{v_2'v_2},$$
where we used the fact that $x'v_1 = v_1'x$ and $x'v_2 = v_2'x$ (since these quantities are all scalars). In terms of the vectors given above, we obtain
$$a = \frac{1}{1} = 1, \qquad b = \frac{5}{2}.$$
Entirely analogous ideas hold when we are trying to approximate one function as a linear combination of a set of orthogonal functions (as in the Fourier series). Given a set of orthogonal functions $\phi_k(t)$ over an interval $[a, b]$, suppose we wish to approximate a given function $x(t)$ as a linear combination of some finite number of these functions. Specifically, suppose that we are given some positive integer $N$, and wish to find the best coefficients $a_k \in \mathbb{C}$ such that the estimate
$$\hat{x}(t) = \sum_{k=-N}^{N} a_k \phi_k(t)$$
is as close to $x(t)$ as possible; the squared error over the entire interval $[a, b]$ is then defined as
$$\int_a^b |e(t)|^2\,dt.$$
Here, we will allow $e(t)$ to be a general complex-valued function, so the absolute value in the integral is interpreted as the magnitude of the complex number $e(t)$, i.e., the squared error over the interval $[a, b]$ is given by
$$\int_a^b e^*(t)\,e(t)\,dt.$$
Consider the harmonic family $\phi_k(t) = e^{jk\omega_0 t}$, and suppose that we wish to find the best approximation of a given $T$-periodic signal $x(t)$ as a linear combination of $\phi_k(t)$ for $-N \le k \le N$, i.e.,
$$\hat{x}(t) = \sum_{k=-N}^{N} a_k e^{jk\omega_0 t},$$
with error
$$e(t) = x(t) - \hat{x}(t) = x(t) - \sum_{k=-N}^{N} a_k e^{jk\omega_0 t}.$$
We evaluate the squared error over any interval of length $T$ (since the functions $\phi_k(t)$ are orthogonal over such intervals):
$$\begin{aligned}
\text{Squared Error} &= \int_{t_0}^{t_0+T} |e(t)|^2\,dt = \int_{t_0}^{t_0+T} e^*(t)\,e(t)\,dt \\
&= \int_{t_0}^{t_0+T} \left(x^*(t) - \sum_{k=-N}^{N} a_k^* e^{-jk\omega_0 t}\right)\left(x(t) - \sum_{k=-N}^{N} a_k e^{jk\omega_0 t}\right)dt \\
&= \int_{t_0}^{t_0+T} |x(t)|^2\,dt - \sum_{k=-N}^{N} a_k \int_{t_0}^{t_0+T} x^*(t)e^{jk\omega_0 t}\,dt - \sum_{k=-N}^{N} a_k^* \int_{t_0}^{t_0+T} x(t)e^{-jk\omega_0 t}\,dt \\
&\qquad + \sum_{k=-N}^{N}\sum_{n=-N}^{N} a_k^* a_n \int_{t_0}^{t_0+T} e^{-jk\omega_0 t}e^{jn\omega_0 t}\,dt \\
&= \int_{t_0}^{t_0+T} |x(t)|^2\,dt - \sum_{k=-N}^{N} a_k \int_{t_0}^{t_0+T} x^*(t)e^{jk\omega_0 t}\,dt - \sum_{k=-N}^{N} a_k^* \int_{t_0}^{t_0+T} x(t)e^{-jk\omega_0 t}\,dt + T\sum_{k=-N}^{N} |a_k|^2,
\end{aligned}$$
where we used the fact that $\int_{t_0}^{t_0+T} e^{-jk\omega_0 t}e^{jn\omega_0 t}\,dt = 0$ if $k \neq n$ and $T$ otherwise.
Our job is to find the best coefficients $a_k$, $-N \le k \le N$, to minimize the squared error. Thus, we first write $a_k = b_k + jc_k$, where $b_k, c_k \in \mathbb{R}$, and then differentiate the above expression with respect to $b_k$ and $c_k$ and set the result to zero. After some algebra, we obtain the optimal coefficient as
$$a_k = b_k + jc_k = \frac{1}{T}\int_{t_0}^{t_0+T} x(t)e^{-jk\omega_0 t}\,dt,$$
which is exactly the same expression we found for the Fourier series coefficients earlier. Note again why we bothered to go through this exercise. Here, we did not assume that a signal $x(t)$ had a Fourier series representation; we simply asked how to best approximate a given signal by a linear combination of complex exponentials, and found the resulting coefficients. These coefficients match exactly the coefficients that we obtained by assuming that the signal had a Fourier series representation, which lends some justification for the validity of the earlier analysis.
As $N$ gets larger, the approximation error will get smaller and smaller. The question is then: will $\int_{t_0}^{t_0+T} |e(t)|^2\,dt$ go to zero as $N$ goes to infinity? If so, then the signal would, in fact, have a Fourier series representation (in the sense of having asymptotically zero error between the true signal and the approximation). It turns out that most periodic signals of practical interest will satisfy this property.
There are various different sufficient conditions that guarantee that a given signal $x(t)$ will have a Fourier series representation. One commonly used set of conditions is known as the Dirichlet conditions, stated as follows.

A periodic signal $x(t)$ has a Fourier series representation if all three of the following conditions are satisfied:

• Over any single period, $x(t)$ is absolutely integrable, i.e., $\int_{t_0}^{t_0+T} |x(t)|\,dt < \infty$.

• In any finite interval of time, $x(t)$ has bounded variation, meaning that it has only a finite number of minima and maxima during any single period of the signal.

• In any finite interval of time, there are only a finite number of discontinuities, and each of these discontinuities is finite.
We won't go into the proof of why these conditions are sufficient here, but it suffices to note that signals that violate the above conditions (the last two in particular) are somewhat pathological. The first condition guarantees that the Fourier series coefficients are finite, since
$$|a_k| = \left|\frac{1}{T}\int_{t_0}^{t_0+T} x(t)e^{-jk\omega_0 t}\,dt\right| \le \frac{1}{T}\int_{t_0}^{t_0+T} \left|x(t)e^{-jk\omega_0 t}\right|\,dt = \frac{1}{T}\int_{t_0}^{t_0+T} |x(t)|\,dt.$$
Thus if the signal is absolutely integrable over a period, then $|a_k| < \infty$ for all $k \in \mathbb{Z}$.
Gibbs Phenomenon
If x(t) has a Fourier series representation, then that representation will exactly
equal x(t) at all points t where x(t) is continuous. At points of discontinuity in
x(t), the value of the Fourier series representation will be equal to the average of
the values of x(t) on either side of the discontinuity. One particularly interest-
ing phenomenon occurs at points of discontinuity: the Fourier series typically
overshoots the signal x(t). The height of the overshoot stays constant as the
number of terms N in the approximation increases, but the width shrinks. Thus,
asymptotically, the error goes to zero (although technically the two signals are
not exactly the same at the discontinuity).
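The overshoot can be seen numerically. The sketch below (not from the notes) sums odd harmonics of a ±1 square wave; the coefficients $\frac{4}{\pi k}$ are the standard ones for this wave, assumed here for illustration.

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 100001)

def partial_sum(N):
    """Sum the odd harmonics up to order N of the square wave sgn(sin t)."""
    xN = np.zeros_like(t)
    for k in range(1, N + 1, 2):  # odd harmonics only
        xN += (4 / (np.pi * k)) * np.sin(k * t)
    return xN

for N in (9, 99, 999):
    # The peak stays near 1.179 (about 9% of the jump height of 2) as N
    # grows; the overshoot narrows but does not shrink in height.
    print(N, partial_sum(N).max())
```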
We use the notation
$$x(t) \overset{FS}{\longleftrightarrow} a_k$$
to denote that the $T$-periodic signal $x(t)$ has Fourier series coefficients $a_k$, $k \in \mathbb{Z}$.
4.4.1 Linearity

Suppose we have two signals $x_1(t)$ and $x_2(t)$, each of which is periodic with period $T$. Let
$$x_1(t) \overset{FS}{\longleftrightarrow} a_k, \qquad x_2(t) \overset{FS}{\longleftrightarrow} b_k.$$
For any complex scalars $\alpha, \beta$, let $g(t) = \alpha x_1(t) + \beta x_2(t)$. Then
$$g(t) \overset{FS}{\longleftrightarrow} \alpha a_k + \beta b_k.$$
The above property follows immediately from the definition of the Fourier series coefficients (since integration is linear).
Example 4.5. Consider $x_1(t) = \cos(\omega_0 t)$ and $x_2(t) = \sin(\omega_0 t)$. We have
$$g(t) = \alpha\cos(\omega_0 t) + \beta\sin(\omega_0 t) = \frac{\alpha}{2}e^{j\omega_0 t} + \frac{\alpha}{2}e^{-j\omega_0 t} + \frac{\beta}{2j}e^{j\omega_0 t} - \frac{\beta}{2j}e^{-j\omega_0 t} = \left(\frac{\alpha}{2} + \frac{\beta}{2j}\right)e^{j\omega_0 t} + \left(\frac{\alpha}{2} - \frac{\beta}{2j}\right)e^{-j\omega_0 t}.$$
Thus, we see that each Fourier series coefficient of $g(t)$ is indeed given by a linear combination of the corresponding Fourier series coefficients of $x_1(t)$ and $x_2(t)$.
4.4.2 Time-Shifting

Suppose $x(t)$ has Fourier series coefficients $a_k$, and let $g(t) = x(t-\tau)$ have coefficients $b_k$. Define $\bar{t} = t - \tau$, so that $\frac{d\bar{t}}{dt} = 1$. Then we have
$$b_k = \frac{1}{T}\int_{t_0-\tau}^{t_0-\tau+T} x(\bar{t})\,e^{-jk\omega_0(\bar{t}+\tau)}\,d\bar{t} = e^{-jk\omega_0\tau}\,\frac{1}{T}\int_{t_0-\tau}^{t_0-\tau+T} x(\bar{t})\,e^{-jk\omega_0\bar{t}}\,d\bar{t} = e^{-jk\omega_0\tau}a_k.$$
Thus
$$x(t-\tau) \overset{FS}{\longleftrightarrow} e^{-jk\omega_0\tau}a_k.$$
Example 4.6. Consider $x(t) = \cos(\omega_0 t)$, and let $g(t) = x(t-\tau)$. We have
$$g(t) = \cos(\omega_0(t-\tau)) = \frac{1}{2}e^{-j\omega_0\tau}e^{j\omega_0 t} + \frac{1}{2}e^{j\omega_0\tau}e^{-j\omega_0 t},$$
so the coefficients of $g(t)$ are indeed $b_{\pm 1} = e^{\mp j\omega_0\tau}a_{\pm 1}$.
Next, consider time reversal: let $g(t) = x(-t)$. Then
$$g(t) = x(-t) = \sum_{k=-\infty}^{\infty} a_k e^{-jk\omega_0 t} = \sum_{k=-\infty}^{\infty} a_{-k} e^{jk\omega_0 t}.$$
Thus we have
$$x(-t) \overset{FS}{\longleftrightarrow} a_{-k},$$
i.e., the Fourier series coefficients for a time-reversed signal are just the time-reversal of the Fourier series coefficients for the original signal. Note that this is not true for the output of LTI systems (a time-reversal of the input to an LTI system does not necessarily mean that the output is a time-reversal of the original output).
Example 4.7. Consider $x(t) = \sin(\omega_0 t)$ and define $g(t) = x(-t)$. First, note that
$$x(t) = \sin(\omega_0 t) = \frac{1}{2j}e^{j\omega_0 t} - \frac{1}{2j}e^{-j\omega_0 t} = a_1 e^{j\omega_0 t} + a_{-1}e^{-j\omega_0 t}.$$
We have
$$g(t) = \sin(-\omega_0 t) = \frac{1}{2j}e^{-j\omega_0 t} - \frac{1}{2j}e^{j\omega_0 t} = b_1 e^{j\omega_0 t} + b_{-1}e^{-j\omega_0 t},$$
and thus we see that $b_1 = -\frac{1}{2j} = a_{-1}$ and $b_{-1} = \frac{1}{2j} = a_1$.
For time scaling, let $\alpha > 0$ and define $g(t) = x(\alpha t)$, which is periodic with period $\frac{T}{\alpha}$; substituting into the Fourier series gives $g(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk(\omega_0\alpha)t}$. Thus, $g(t)$ has the same Fourier series coefficients as $x(t)$, but the Fourier series representation has changed: the frequency is now $\omega_0\alpha$ rather than $\omega_0$, reflecting the fact that the harmonic family is in terms of the new period $\frac{T}{\alpha}$ rather than $T$.
Example 4.8. Consider $x(t) = \cos(\omega_0 t)$, which has series representation
$$x(t) = \frac{1}{2}e^{j\omega_0 t} + \frac{1}{2}e^{-j\omega_0 t}.$$
Then we have
$$g(t) = x(\alpha t) = \cos(\omega_0\alpha t) = \frac{1}{2}e^{j\omega_0\alpha t} + \frac{1}{2}e^{-j\omega_0\alpha t}.$$
4.4.5 Multiplication

Let $x_1(t)$ and $x_2(t)$ be $T$-periodic signals with Fourier series coefficients $a_k$ and $b_k$ respectively. Consider the signal $g(t) = x_1(t)x_2(t)$, and note that $g(t)$ is also $T$-periodic. We have
$$g(t) = x_1(t)x_2(t) = \sum_{l=-\infty}^{\infty} a_l e^{jl\omega_0 t} \sum_{n=-\infty}^{\infty} b_n e^{jn\omega_0 t} = \sum_{l=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} a_l b_n e^{j(l+n)\omega_0 t}.$$
Define $k = l + n$, so that
$$g(t) = \sum_{k=-\infty}^{\infty}\left(\sum_{l=-\infty}^{\infty} a_l b_{k-l}\right)e^{jk\omega_0 t}.$$
In other words, the Fourier series coefficients of a product of two signals are
given by the convolution of the corresponding Fourier series coefficients.
Example 4.9. Consider the signals $x_1(t) = \cos(\omega_0 t)$ and $x_2(t) = \sin(\omega_0 t)$. Define $g(t) = x_1(t)x_2(t)$. The Fourier series representations of $x_1(t)$ and $x_2(t)$ are given by
$$x_1(t) = \frac{1}{2}e^{j\omega_0 t} + \frac{1}{2}e^{-j\omega_0 t}, \qquad x_2(t) = \frac{1}{2j}e^{j\omega_0 t} - \frac{1}{2j}e^{-j\omega_0 t}.$$
Denote the Fourier series coefficients of $x_1(t)$ by the sequence $a_k$, with $a_{-1} = a_1 = \frac{1}{2}$, and $a_k = 0$ otherwise. Similarly, denote the Fourier series coefficients of $x_2(t)$ by the sequence $b_k$, with $b_{-1} = -\frac{1}{2j}$, $b_1 = \frac{1}{2j}$, and $b_k = 0$ otherwise. Denote the Fourier series coefficients of $g(t)$ by $c_k$, $k \in \mathbb{Z}$. Then we have
$$c_k = a_k * b_k = \sum_{l=-\infty}^{\infty} a_l b_{k-l}.$$
Convolving the two sequences given above, we see that $c_{-2} = -\frac{1}{4j}$, $c_2 = \frac{1}{4j}$, and $c_k = 0$ otherwise. Thus
$$g(t) = x_1(t)x_2(t) = \cos(\omega_0 t)\sin(\omega_0 t) = \frac{1}{4j}e^{j2\omega_0 t} - \frac{1}{4j}e^{-j2\omega_0 t}.$$
Noting that $\cos(\omega_0 t)\sin(\omega_0 t) = \frac{1}{2}\sin(2\omega_0 t)$, we see that the above expression is, in fact, correct.
This leads to Parseval's Theorem: for a $T$-periodic signal $x(t)$ with Fourier series coefficients $a_k$, $k \in \mathbb{Z}$, we have
$$\frac{1}{T}\int_{t_0}^{t_0+T} |x(t)|^2\,dt = \sum_{k=-\infty}^{\infty} |a_k|^2.$$

Example 4.10. Consider the signal $x(t) = \cos(\omega_0 t)$, with Fourier coefficients $a_1 = a_{-1} = \frac{1}{2}$ and $a_k = 0$ otherwise. We have
$$\frac{1}{T}\int_{t_0}^{t_0+T} |x(t)|^2\,dt = \frac{1}{T}\int_{t_0}^{t_0+T} \cos^2(\omega_0 t)\,dt = \frac{1}{T}\int_{t_0}^{t_0+T} \frac{1}{2}\left(1 + \cos(2\omega_0 t)\right)dt = \frac{1}{2}.$$
We also have
$$\sum_{k=-\infty}^{\infty} |a_k|^2 = |a_1|^2 + |a_{-1}|^2 = \frac{1}{4} + \frac{1}{4} = \frac{1}{2},$$
which agrees with the direct calculation of average power, as indicated by Parseval's Theorem.
At this point, we encounter the first main difference between the discrete-time and continuous-time Fourier series. Recall that a discrete-time complex exponential with frequency $\omega_0$ is the same as a discrete-time complex exponential with frequency $\omega_0 + 2\pi$. Specifically, for any $k \in \mathbb{Z}$, consider
$$\phi_{k+N}[n] = e^{j(k+N)\omega_0 n} = e^{jk\omega_0 n}e^{jN\omega_0 n} = e^{jk\omega_0 n},$$
since $N\omega_0 = 2\pi$. Thus, there are only $N$ different complex exponentials in the discrete-time harmonic family for the fundamental frequency $\omega_0$, and so we have the following:
$$x[n] = \sum_{k=n_0}^{n_0+N-1} a_k e^{jk\omega_0 n} = \sum_{k=n_0}^{n_0+N-1} a_k e^{jk\frac{2\pi}{N}n}. \tag{4.2}$$
In other words, the discrete-time signal can be written in terms of any $N$ contiguous multiples of the fundamental frequency $\omega_0$. Since there are only $N$ coefficients $a_k$ in this representation, and since it does not matter which contiguous $N$ members of the harmonic family we choose, the Fourier series coefficients are $N$-periodic as well, i.e., $a_k = a_{k+N}$ for all $k \in \mathbb{Z}$.
On the other hand, if $r - k$ is not a multiple of $N$, we use the finite sum formula to obtain
$$\sum_{n=n_1}^{n_1+N-1} e^{j(k-r)\omega_0 n} = \frac{e^{j(k-r)\omega_0 n_1} - e^{j(k-r)\omega_0(n_1+N)}}{1 - e^{j(k-r)\omega_0}} = 0.$$
Thus, the Fourier series coefficients are given by
$$a_k = \frac{1}{N}\sum_{n=n_0}^{n_0+N-1} x[n]\,e^{-jk\omega_0 n},$$
where $n_0$ is any integer.
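Since this analysis equation is $\frac{1}{N}$ times a DFT, the coefficients can be computed with an FFT; a sketch (not from the notes, using the signal from Example 4.12 later in this chapter as a test case):

```python
import numpy as np

N = 6
n = np.arange(N)
x = 1.0 / 3.0 + (1.0 / 6.0) * (-1.0) ** n  # the signal from Example 4.12

# a_k = (1/N) * sum_n x[n] e^{-j k (2pi/N) n}, which is fft(x)/N.
a = np.fft.fft(x) / N
print(np.round(a.real, 4))  # [0.3333 0. 0. 0.1667 0. 0.]
```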
Example 4.11. Consider the $N$-periodic signal $x[n]$, which is equal to 1 for $-N_1 \le n \le N_1$ and zero otherwise (modulo the periodic constraints). We have
$$a_k = \frac{1}{N}\sum_{n=-N_1}^{N_1} e^{-jk\omega_0 n}.$$
If $k = 0$, we have $a_0 = \frac{2N_1+1}{N}$. For $k \in \{1, 2, \ldots, N-1\}$, we have (via the finite sum formula)
$$a_k = \frac{1}{N}\,\frac{e^{jk\omega_0 N_1} - e^{-jk\omega_0(N_1+1)}}{1 - e^{-jk\omega_0}} = \frac{1}{N}\,\frac{e^{-jk\omega_0/2}\left(e^{jk\omega_0(N_1+\frac{1}{2})} - e^{-jk\omega_0(N_1+\frac{1}{2})}\right)}{e^{-jk\omega_0/2}\left(e^{jk\omega_0/2} - e^{-jk\omega_0/2}\right)} = \frac{1}{N}\,\frac{\sin\!\left(k\omega_0\left(N_1+\frac{1}{2}\right)\right)}{\sin\!\left(\frac{k\omega_0}{2}\right)}.$$
The discrete-time Fourier series has properties that can be derived in almost
the same way as the properties for the continuous-time Fourier series (linearity,
time-shifting, etc.). Here we will just discuss the multiplication property and
Parseval’s theorem.
First, let's start with an example. Consider two 2-periodic signals $x_1[n]$ and $x_2[n]$. We know that the Fourier series representations can be uniquely specified by two coefficients each. Specifically,
$$x_1[n] = a_0 + a_1 e^{j\omega_0 n}, \qquad x_2[n] = b_0 + b_1 e^{j\omega_0 n}.$$
Now, note that $\omega_0 = \frac{2\pi}{2} = \pi$, and thus $e^{j2\omega_0 n} = 1$. This gives
$$g[n] = x_1[n]x_2[n] = a_0 b_0 + (a_0 b_1 + a_1 b_0)e^{j\omega_0 n} + a_1 b_1 e^{j2\omega_0 n} = (a_0 b_0 + a_1 b_{-1}) + (a_0 b_1 + a_1 b_0)e^{j\omega_0 n},$$
where we used the fact that $b_1 = b_{-1}$ by the periodic nature of the discrete-time Fourier series coefficients. The above expressions show that the Fourier series coefficients of the product of the two signals are given by a form of convolution of the coefficients of those signals; however, the convolution is over a finite number of terms, as opposed to over all time-indices.
Let us generalize this to functions with a larger period. Let $x_1[n]$ and $x_2[n]$ be two $N$-periodic discrete-time signals, with discrete-time Fourier series coefficients $a_k$ and $b_k$, respectively. We have
$$x_1[n] = \sum_{k=0}^{N-1} a_k e^{jk\omega_0 n}, \qquad x_2[n] = \sum_{k=0}^{N-1} b_k e^{jk\omega_0 n}.$$
where we have used l and r as the indices in the Fourier series in order to keep
the terms in the two sums distinct.
Define the new variable $k = l + r$. Substituting into the above expression, this gives
$$\begin{aligned}
g[n] &= \sum_{l=0}^{N-1}\sum_{k=l}^{l+N-1} a_l b_{k-l}\,e^{jk\omega_0 n} \\
&= \sum_{l=0}^{N-1}\left(\sum_{k=l}^{N-1} a_l b_{k-l}\,e^{jk\omega_0 n} + \sum_{k=N}^{l+N-1} a_l b_{k-l}\,e^{jk\omega_0 n}\right) \\
&= \sum_{l=0}^{N-1}\left(\sum_{k=l}^{N-1} a_l b_{k-l}\,e^{jk\omega_0 n} + \sum_{k=0}^{l-1} a_l b_{k+N-l}\,e^{j(k+N)\omega_0 n}\right) \\
&= \sum_{l=0}^{N-1}\left(\sum_{k=l}^{N-1} a_l b_{k-l}\,e^{jk\omega_0 n} + \sum_{k=0}^{l-1} a_l b_{k-l}\,e^{jk\omega_0 n}\right) \\
&= \sum_{l=0}^{N-1}\sum_{k=0}^{N-1} a_l b_{k-l}\,e^{jk\omega_0 n} = \sum_{k=0}^{N-1}\left(\sum_{l=0}^{N-1} a_l b_{k-l}\right)e^{jk\omega_0 n},
\end{aligned}$$
i.e., the Fourier series coefficients of $g[n]$ are given by the periodic convolution $c_k = \sum_{l=0}^{N-1} a_l b_{k-l}$.
The average power of an $N$-periodic signal $x[n]$ over one period is
$$\frac{1}{N}\sum_{n=n_0}^{n_0+N-1} |x[n]|^2,$$
where $n_0$ is any integer. Parseval's theorem for discrete-time signals states the following:
$$\frac{1}{N}\sum_{n=n_0}^{n_0+N-1} |x[n]|^2 = \sum_{k=0}^{N-1} |a_k|^2.$$
The following example illustrates the application of the various facts that we
have seen about the discrete-time Fourier series.
Example 4.12. Suppose we are told the following facts about a discrete-time signal $x[n]$:

• $x[n]$ is periodic with period $N = 6$.

• $\sum_{n=0}^{5} x[n] = 2$.

• $\sum_{n=2}^{7} (-1)^n x[n] = 1$.

• $x[n]$ has the minimum power per period of all signals satisfying the preceding three conditions.
The above facts are sufficient for us to uniquely determine $x[n]$. First, note that the Fourier series representation of $x[n]$ is
$$x[n] = \sum_{k=0}^{5} a_k e^{jk\omega_0 n} = \sum_{k=0}^{5} a_k e^{jk\frac{\pi}{3}n},$$
with $\omega_0 = \frac{2\pi}{6} = \frac{\pi}{3}$. The second fact gives $a_0 = \frac{1}{6}\sum_{n=0}^{5} x[n] = \frac{1}{3}$. To use the third fact, note that $(-1)^n = e^{-j\pi n} = e^{-j3\frac{\pi}{3}n}$. Thus, the third fact seems to be related to the Fourier series coefficient for $k = 3$. Specifically, we have
$$a_3 = \frac{1}{N}\sum_{n=2}^{7} x[n]e^{-j\pi n} = \frac{1}{6}.$$
To use the last fact, note from Parseval's theorem that the power of the signal over one period is given by
$$\frac{1}{N}\sum_{n=0}^{5} |x[n]|^2 = \sum_{k=0}^{5} |a_k|^2 = |a_0|^2 + |a_1|^2 + |a_2|^2 + |a_3|^2 + |a_4|^2 + |a_5|^2.$$
We are told that $x[n]$ has the minimum average power over all signals that satisfy the other three conditions. Since the other three conditions have already set $a_0$ and $a_3$, the minimum average power is obtained by setting all of the other Fourier series coefficients to 0. Thus, we have
$$x[n] = \sum_{k=0}^{5} a_k e^{jk\frac{\pi}{3}n} = a_0 + a_3 e^{j\pi n} = \frac{1}{3} + \frac{1}{6}(-1)^n.$$
Chapter 5
The Continuous-Time
Fourier Transform
However, we note that $x(t) = \tilde{x}(t)$ in the interval of integration, and thus
$$a_k = \frac{1}{T}\int_{-T/2}^{T/2} x(t)e^{-jk\omega_0 t}\,dt.$$
Furthermore, since $x(t)$ is zero for all $t$ outside the interval of integration, we can expand the limits of the integral to obtain
$$a_k = \frac{1}{T}\int_{-\infty}^{\infty} x(t)e^{-jk\omega_0 t}\,dt.$$
Let us define
$$X(j\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt.$$
This is called the Fourier transform of the signal $x(t)$, and the Fourier series coefficients can be viewed as samples of the Fourier transform, scaled by $\frac{1}{T}$, i.e.,
$$a_k = \frac{1}{T}X(jk\omega_0), \quad k \in \mathbb{Z}.$$
Since $\omega_0 = \frac{2\pi}{T}$, this becomes
$$\tilde{x}(t) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty} X(jk\omega_0)\,e^{jk\omega_0 t}\,\omega_0.$$
Now consider what happens as the period $T$ gets bigger. In this case, $\tilde{x}(t)$ approaches $x(t)$, and so the above expression becomes a representation of $x(t)$. As $T \to \infty$, we have $\omega_0 \to 0$. Since each term in the summand can be viewed as the area of a rectangle whose height is $X(jk\omega_0)e^{jk\omega_0 t}$ and whose base goes from $k\omega_0$ to $(k+1)\omega_0$, we see that as $\omega_0 \to 0$, the sum on the right hand side approaches the area underneath the curve $X(j\omega)e^{j\omega t}$ (where $t$ is held fixed). Thus, as $T \to \infty$ we have
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)e^{j\omega t}\,d\omega.$$
The Fourier transform X(jω) is also called the spectrum of the signal, as it
represents the contribution of the complex exponential of frequency ω to the
signal x(t).
Example 5.1. Consider the signal $x(t) = e^{-at}u(t)$, $a \in \mathbb{R}_{>0}$. The Fourier transform of this signal is
$$X(j\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt = \int_0^{\infty} e^{-at}e^{-j\omega t}\,dt = \left.-\frac{1}{a+j\omega}e^{-(a+j\omega)t}\right|_0^{\infty} = \frac{1}{a+j\omega}.$$
To visualize $X(j\omega)$, we plot its magnitude and phase on separate plots (since $X(j\omega)$ is complex-valued in general). We have
$$|X(j\omega)| = \frac{1}{\sqrt{a^2 + \omega^2}}, \qquad \angle X(j\omega) = -\tan^{-1}\left(\frac{\omega}{a}\right).$$
The plots of these quantities are shown in Fig. 4.5 of the text.
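A numerical cross-check of this transform pair (a sketch, not from the notes; a = 2 and the truncation at t = 40 are arbitrary choices):

```python
import numpy as np

# Approximate X(jw) = integral of x(t) e^{-jwt} dt for x(t) = e^{-at} u(t),
# and compare with the closed form 1/(a + jw).
a = 2.0
dt = 1e-4
t = np.arange(0.0, 40.0, dt)   # e^{-2t} is negligible beyond t = 40
x = np.exp(-a * t)

for w in (0.0, 1.0, 5.0):
    X_num = np.sum(x * np.exp(-1j * w * t)) * dt  # Riemann sum
    X_exact = 1.0 / (a + 1j * w)
    print(w, np.round(X_num, 4), np.round(X_exact, 4))
```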
Example 5.2. Consider the signal $x(t) = \delta(t)$. We have
$$X(j\omega) = \int_{-\infty}^{\infty} \delta(t)e^{-j\omega t}\,dt = 1.$$
In other words, the spectrum of the impulse function has an equal contribution at all frequencies.
Example 5.3. Consider the signal $x(t)$ which is equal to 1 for $-T_1 \le t \le T_1$ and zero elsewhere. We have
$$X(j\omega) = \int_{-T_1}^{T_1} e^{-j\omega t}\,dt = \frac{1}{j\omega}\left(e^{j\omega T_1} - e^{-j\omega T_1}\right) = \frac{2\sin(\omega T_1)}{\omega}.$$
Now consider the reverse situation, where $X(j\omega)$ is equal to 1 for $-W \le \omega \le W$ and zero elsewhere. We have
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-W}^{W} e^{j\omega t}\,d\omega = \frac{1}{2\pi}\,\frac{1}{jt}\left.e^{j\omega t}\right|_{-W}^{W} = \frac{\sin(Wt)}{\pi t}.$$
The previous two examples showed the following. When x(t) is a square pulse,
then X(jω) = 2 sin(ωT_1)/ω, and when X(jω) is a square pulse, x(t) = sin(Wt)/(πt).
This is an example of the duality property of Fourier transforms, which we will see
later.

Functions of the form sin(Wt)/(πt) will show up frequently, and are called sinc
functions. Specifically,

sinc(θ) = sin(πθ)/(πθ).

Thus 2 sin(ωT_1)/ω = 2T_1 sinc(ωT_1/π) and sin(Wt)/(πt) = (W/π) sinc(Wt/π).
Just as we saw with the Fourier series for periodic signals, there are some rather
mild conditions under which a signal x(t) is guaranteed to have a Fourier trans-
form (such that the inverse Fourier transform converges to the true signal).
Specifically, there is a set of sufficient conditions (also called Dirichlet conditions)
under which a continuous-time signal x(t) is guaranteed to have a Fourier
transform:
1. x(t) is absolutely integrable: ∫_{−∞}^{∞} |x(t)| dt < ∞.
2. x(t) has a finite number of maxima and minima in any finite interval.
3. x(t) has a finite number of discontinuities in any finite interval, and each
of these discontinuities is finite.
If all of the above conditions are satisfied, x(t) is guaranteed to have a Fourier
transform. Note that this is only a sufficient set of conditions, not a necessary one.
An alternate sufficient condition is that the signal have finite energy (i.e., that
it be square integrable):

∫_{−∞}^{∞} |x(t)|² dt < ∞.

For example, the signal x(t) = (1/t) u(t − 1) is square integrable, but not absolutely
integrable. Thus the finite energy condition guarantees that x(t) will have a
Fourier transform, whereas the Dirichlet conditions do not apply.
Consider the signal x(t) = e^{jω_0 t}, whose Fourier transform is given by

X(jω) = 2πδ(ω − ω_0),

i.e., the frequency domain signal is a single impulse at ω = ω_0, with area 2π.
Using the inverse Fourier transform, we can verify this:

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω = ∫_{−∞}^{∞} δ(ω − ω_0) e^{jωt} dω = e^{jω_0 t}.
Example 5.5. Consider the signal x(t) = cos(ω_0 t). The Fourier series coefficients
are a_1 = a_{−1} = 1/2. Thus, the Fourier transform of this signal is given by

X(jω) = πδ(ω − ω_0) + πδ(ω + ω_0).
Thus, if x(t) is an impulse train with period T, its Fourier transform is also an
impulse train in the frequency domain, except with period 2π/T. Once again, we
see that if T increases (i.e., the period increases in the time-domain), the spacing
of the impulses shrinks in the frequency domain.
5.3 Properties of the Continuous-Time Fourier Transform

In what follows, we will use the notation

X(jω) = F{x(t)},
x(t) = F^{−1}{X(jω)}.
5.3.1 Linearity

The first property of Fourier transforms is easy to show: for any constants a and b,

F{a x_1(t) + b x_2(t)} = a X_1(jω) + b X_2(jω).
5.3.2 Time-Shifting
Suppose x(t) is a signal with Fourier transform X(jω). Define g(t) = x(t − τ )
where τ ∈ R. Then we have
G(jω) = ∫_{−∞}^{∞} g(t) e^{−jωt} dt = ∫_{−∞}^{∞} x(t − τ) e^{−jωt} dt = e^{−jωτ} X(jω).
Thus
F {x(t − τ )} = e−jωτ X(jω).
Note the implication: if we time-shift a signal, the magnitude of its Fourier
transform is not affected. Only the phase of the Fourier transform gets shifted
by −ωτ at each frequency ω.
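As an illustrative sketch (not from the notes; the values of a and τ are arbitrary), we can verify this numerically for the shifted one-sided exponential g(t) = e^{−a(t−τ)}u(t − τ), whose transform should be e^{−jωτ}/(a + jω); the signal is truncated at a large time so the neglected tail is negligible.

import numpy as np

# Fourier integral of g(t) = x(t - tau) for x(t) = e^{-at}u(t); the phase
# picks up the factor e^{-jw tau} while the magnitude is unchanged.
a, tau = 1.0, 0.5
w = np.linspace(-5.0, 5.0, 41)
s = np.linspace(tau, tau + 40.0, 40001)        # support of the shifted signal
g = np.exp(-a * (s - tau))
G = np.trapz(g * np.exp(-1j * np.outer(w, s)), s, axis=1)

assert np.allclose(G, np.exp(-1j * w * tau) / (a + 1j * w), atol=1e-3)
assert np.allclose(np.abs(G), 1.0 / np.sqrt(a**2 + w**2), atol=1e-3)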
5.3.3 Conjugation
Consider a signal x(t). We have
X*(jω) = (∫_{−∞}^{∞} x(t) e^{−jωt} dt)* = ∫_{−∞}^{∞} x*(t) e^{jωt} dt.

Thus,

X*(−jω) = ∫_{−∞}^{∞} x*(t) e^{−jωt} dt = F{x*(t)}.
The above is true for any signal x(t) that has a Fourier transform. Now suppose
additionally that x(t) is a real-valued signal. Then we have x∗ (t) = x(t) for all
t ∈ R. Thus
X ∗ (−jω) = F {x∗ (t)} = F {x(t)} = X(jω).
Based on the above relationship between X ∗ (−jω) and X(jω) for real-valued
signals, we see the following. Write X(jω) in polar form as
X(jω) = |X(jω)|ej∠X(jω) .
Then we have
X(−jω) = X ∗ (jω) = |X(jω)|e−j∠X(jω) .
Thus, for any ω, X(−jω) has the same magnitude as X(jω), and the phase of
X(−jω) is the negative of the phase of X(jω). Thus, when plotting X(jω), we
only have to plot the magnitude and phase for positive values of ω, as the plots
for negative values of ω can be easily recovered according to the relationships
described above.
Example 5.7. Consider again the signal x(t) = e^{−at}u(t); we saw earlier that
the Fourier transform of this signal is

X(jω) = 1/(a + jω).

It is easy to verify that

X(−jω) = 1/(a − jω) = X*(jω),
as predicted. Furthermore, we can see from the plots of the magnitude and
phase of X(jω) that the magnitude is indeed an even function, and the phase
is an odd function.
Suppose further that x(t) is even (in addition to being real-valued), so that
x(t) = x(−t). Then we have

X(−jω) = ∫_{−∞}^{∞} x(t) e^{jωt} dt = ∫_{−∞}^{∞} x(−t) e^{−jωt} dt = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = X(jω),

where the second equality follows from the substitution t → −t, and the third
from the evenness of x(t).
This, together with the fact that X(−jω) = X ∗ (jω) for real-valued signals
indicates that X(jω) is real-valued and even.
Similarly, if x(t) is real-valued and odd, we have X(jω) is purely imaginary and
odd.
Example 5.8. Consider the signal x(t) = e^{−a|t|}, where a is a positive real
number. This signal is real-valued and even. We have

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_{−∞}^{0} e^{at} e^{−jωt} dt + ∫_{0}^{∞} e^{−at} e^{−jωt} dt
      = 1/(a − jω) + 1/(a + jω)
      = 2a/(a² + ω²).
As predicted, X(jω) is real-valued and even.
5.3.4 Differentiation

Consider the inverse Fourier transform

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω.

Differentiating both sides with respect to t, we obtain

dx(t)/dt = (1/2π) ∫_{−∞}^{∞} jω X(jω) e^{jωt} dω,

and thus F{dx(t)/dt} = jω X(jω).
5.3.6 Duality
We have already seen a few examples of the duality property: suppose x(t) has
Fourier transform X(jω). Then if we have a time-domain signal that has the
same form as X(jω), the Fourier transform of that signal will have the same
form as x(t). For example, the square pulse in the time-domain had a Fourier
transform in the form of a sinc function, and a sinc function in the time-domain
had a Fourier transform in the form of a square pulse.
We can consider another example. Suppose x(t) = e^{−|t|}. Then one can verify
that

F{e^{−|t|}} = 2/(1 + ω²).

Specifically, we have

e^{−|t|} = (1/2π) ∫_{−∞}^{∞} (2/(1 + ω²)) e^{jωt} dω.
Interchanging the roles of t and ω in this expression (and using the fact that both
functions are even), we obtain

F{2/(1 + t²)} = 2π e^{−|ω|}.
Duality also applies to properties of the Fourier transform. For example, recall
that differentiation in the time-domain corresponds to multiplication by jω in
the frequency domain. We will now see that differentiation in the frequency domain
corresponds to multiplication by −jt in the time-domain. We have

dX(jω)/dω = ∫_{−∞}^{∞} x(t)(−jt) e^{−jωt} dt.
Thus, differentiation in the frequency domain corresponds to multiplication by
−jt in the time-domain.
5.3.7 Parseval's Theorem

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω.
5.3.8 Convolution
Consider a signal x(t) with Fourier transform X(jω). From the inverse Fourier
transform, we have

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω.
This has the interpretation that x(t) can be written as a superposition of com-
plex exponentials (with frequencies spanning the entire real axis). From earlier
in the course, we know that if the input to an LTI system is ejωt , then the
output will be H(jω)ejωt , where
H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt.
In other words, H(jω) is the Fourier transform of the impulse response h(t).
This, together with the LTI property of the system, implies that
x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω  ⇒  y(t) = (1/2π) ∫_{−∞}^{∞} H(jω) X(jω) e^{jωt} dω.
Thus, we see that the Fourier transform of the output y(t) is given by
Y (jω) = H(jω)X(jω).
In other words: the Fourier transform of the output of an LTI system is the
Fourier transform of the input multiplied by the frequency response H(jω) of
the system.

This is potentially the most important fact pertaining to LTI systems and frequency
domain analysis. Let's derive this another way just to reinforce the fact.
Suppose that we have two signals x(t) and h(t), and define
y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ.
We have

Y(jω) = ∫_{−∞}^{∞} y(t) e^{−jωt} dt = ∫_{−∞}^{∞} (∫_{−∞}^{∞} x(τ) h(t − τ) dτ) e^{−jωt} dt
      = ∫_{−∞}^{∞} x(τ) (∫_{−∞}^{∞} h(t − τ) e^{−jωt} dt) dτ
      = ∫_{−∞}^{∞} x(τ) e^{−jωτ} H(jω) dτ
      = H(jω) ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ
      = H(jω) X(jω).
In the third line, we used the time-shifting property of the Fourier transform.
Thus we see that convolution of two signals in the time-domain corresponds to
multiplication of the signals in the frequency domain, i.e.,

F{x(t) ∗ h(t)} = X(jω) H(jω).
One thing to note here pertains to the existence of the Fourier transform of h(t).
Specifically, recall that the LTI system is stable if and only if

∫_{−∞}^{∞} |h(t)| dt < ∞.
This is precisely the first condition in the Dirichlet conditions; thus, as long as
the system is stable and the impulse response satisfies the other two conditions
(which almost all real systems would), the Fourier transform is guaranteed to
exist. If the system is unstable, we will need the machinery of Laplace transforms
to analyze the input-output behavior, which we will defer to a later discussion.
The convolution-multiplication property is also very useful for analysis of
interconnected linear systems. For example, consider the series interconnection
shown in Fig. 5.1.
We have
y(t) = y1 (t) ∗ h2 (t) = (x(t) ∗ h1 (t)) ∗ h2 (t) = x(t) ∗ (h1 (t) ∗ h2 (t)).
This reinforces what we saw earlier, that the series interconnection of LTI sys-
tems can be lumped together in a single LTI system whose impulse response
is the convolution of the impulse responses of the individual systems. In the
frequency domain, their Fourier transforms get multiplied together.
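The discrete analogue of this fact is easy to demonstrate with the DFT. The following sketch (an added illustration, not part of the notes) zero-pads two random sequences so that circular convolution coincides with linear convolution, multiplies their transforms, and compares against a direct convolution.

import numpy as np

# Convolution in time <-> multiplication in frequency, via the FFT.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)
N = x.size + h.size - 1                  # zero-pad so circular == linear
y = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(y, np.convolve(x, h))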
One of the important implications of the convolution property is that it allows
us to investigate the effect of systems on signals in the frequency domain. For
example, this facilitates the design of appropriate filters for signals, as illustrated
in the following example.
Example 5.9. Consider a signal v(t) which represents a measurement of some
relatively low frequency content (such as a voice signal). Suppose that we measure
this signal via a microphone, whose output is given by

x(t) = v(t) + n(t),

where n(t) is high-frequency noise introduced by the measurement. Consider the
ideal low-pass filter with frequency response H(jω) equal to 1 for |ω| ≤ W and
zero elsewhere, where W is the highest frequency of the underlying signal v(t).
If we feed x(t) into this filter, the output will have Fourier transform given by

Y(jω) = H(jω)X(jω) = H(jω)V(jω) + H(jω)N(jω).

If all of the frequency content of the noise n(t) occurs at frequencies larger than
W, then we see that N(jω)H(jω) = 0, and thus

Y(jω) = H(jω)V(jω) = V(jω).

In other words, we have recovered the voice signal v(t) by passing x(t) through
the low-pass filter.
Recall that the inverse Fourier transform of the given H(jω) is

h(t) = sin(Wt)/(πt).
However, there are various challenges with implementing an LTI system with
this impulse response. One is that this is noncausal, and thus one must poten-
tially include a sufficiently large delay (followed by a truncation of the signal) in
order to apply it. Another problem is that it contains many oscillations, which
may not be desirable for an impulse response.
Instead of the above filter, suppose we consider another filter whose impulse
response is

h_2(t) = e^{−at} u(t).
This filter can be readily implemented with an RC circuit (with the input signal
being applied as an input voltage, and the output signal being the voltage across
the capacitor). The Fourier transform of this impulse response is
H_2(jω) = 1/(jω + a).
The magnitude plot of this Fourier transform has content at all frequencies,
and thus this filter will not completely eliminate all of the high frequency noise.
However, by tuning the value of a, one can adjust how much of the noise affects
the filtered signal. Note that this filter will also introduce phase shifts at dif-
ferent frequencies, which will also cause some distortion of the recovered signal.
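A rough simulation of this filter (a sketch assuming scipy is available; the parameter a, the 5 Hz test tone, and the noise level are all arbitrary choices, not values from the notes) can be built with scipy.signal.lsim. Since H_2(jω) has DC gain 1/a, the output is rescaled by a before comparison.

import numpy as np
from scipy import signal

# Filter a noisy low-frequency signal with H2(s) = 1/(s + a).
a = 50.0                                     # assumed filter parameter
t = np.linspace(0.0, 1.0, 5001)
rng = np.random.default_rng(1)
v = np.sin(2 * np.pi * 5 * t)                # low-frequency "voice" stand-in
x = v + 0.5 * rng.standard_normal(t.size)    # measurement with broadband noise
sys = signal.lti([1.0], [1.0, a])            # H2(s) = 1/(s + a)
_, y, _ = signal.lsim(sys, x, t)

print(np.mean((a * y - v) ** 2))             # residual noise plus phase distortion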
5.3.9 Multiplication
We just saw that convolution in the time domain corresponds to multiplication
in the frequency domain. By duality, we obtain that multiplication in the
time-domain corresponds to convolution in the frequency domain. Specifically,
consider two signals x1 (t) and x2 (t), and define g(t) = x1 (t)x2 (t). Then we have
G(jω) = ∫_{−∞}^{∞} g(t) e^{−jωt} dt = ∫_{−∞}^{∞} x_1(t) x_2(t) e^{−jωt} dt
      = ∫_{−∞}^{∞} x_2(t) ((1/2π) ∫_{−∞}^{∞} X_1(jθ) e^{jθt} dθ) e^{−jωt} dt
      = (1/2π) ∫_{−∞}^{∞} X_1(jθ) (∫_{−∞}^{∞} x_2(t) e^{−j(ω−θ)t} dt) dθ
      = (1/2π) ∫_{−∞}^{∞} X_1(jθ) X_2(j(ω − θ)) dθ.
Thus,
F{x_1(t) x_2(t)} = (1/2π)(X_1(jω) ∗ X_2(jω)) = (1/2π) ∫_{−∞}^{∞} X_1(jθ) X_2(j(ω − θ)) dθ.
Now consider the signal x(t) = s(t)p(t), where p(t) = cos(ω_0 t), so that
P(jω) = πδ(ω − ω_0) + πδ(ω + ω_0). The Fourier transform of x(t) is given by

X(jω) = (1/2π) ∫_{−∞}^{∞} S(jθ) P(j(ω − θ)) dθ
      = (1/2) ∫_{−∞}^{∞} S(jθ) δ(ω − θ − ω_0) dθ + (1/2) ∫_{−∞}^{∞} S(jθ) δ(ω − θ + ω_0) dθ
      = (1/2) S(j(ω − ω_0)) + (1/2) S(j(ω + ω_0)).
Thus, multiplying the signal s(t) by p(t) results in a signal x(t) whose frequency
spectrum consists of two copies of the spectrum of s(t), centered at the frequencies
ω_0 and −ω_0 and scaled by 1/2.
The above example illustrates the principle behind amplitude modulation (AM)
in communication and radio systems. A low frequency signal (such as voice) is
amplitude modulated to a higher frequency that is reserved for that signal. It
is then transmitted at that frequency to the receiver. The following example
illustrates how the receiver can recover the transmitted signal.
Example 5.11. Consider the signal x(t) = s(t)p(t) from the previous example.
Its frequency spectrum has two copies of the spectrum of s(t), located at ±ω_0.
We want to recover the original signal s(t) from x(t). To do this, suppose we
multiply x(t) by cos(ω_0 t) again, to obtain

y(t) = x(t) cos(ω_0 t).
As above, we have
Y(jω) = (1/2) X(j(ω − ω_0)) + (1/2) X(j(ω + ω_0)).
By drawing this, we see that the frequency spectrum of y(t) contains three
copies of the spectrum of s(t): one copy centered at ω = 0 (and scaled by 1/2),
one copy at 2ω_0 scaled by 1/4, and one copy at −2ω_0, scaled by 1/4. Thus, if we
want to recover s(t) from y(t), we simply apply a low pass filter to it (and scale
it by 2).
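The entire modulation/demodulation chain can be simulated in a few lines. The sketch below (added here for illustration; the 20 Hz message, 1 kHz carrier, and Butterworth low-pass are all assumed choices) recovers the message up to the imperfections of a non-ideal filter.

import numpy as np
from scipy import signal

fs = 10_000                                   # assumed simulation rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
s = np.cos(2 * np.pi * 20 * t)                # low-frequency message
carrier = np.cos(2 * np.pi * 1000 * t)
x = s * carrier                               # modulated signal
y = x * carrier                               # copies at 0 and +/- 2 kHz
b, a = signal.butter(4, 200 / (fs / 2))       # low-pass, 200 Hz cutoff
s_hat = 2 * signal.filtfilt(b, a, y)          # keep the baseband copy, scale by 2

print(np.max(np.abs(s_hat - s)))              # small, up to filter imperfections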
Chapter 6

The Discrete-Time Fourier Transform

The Fourier series coefficients of the periodic signal x̃[n] are given by

a_k = (1/N) Σ_{n=n_0}^{n_0+N−1} x̃[n] e^{−jω_0 kn}.
Since x[n] = x̃[n] over the interval of summation, and x[n] is zero outside of
that interval, we have

a_k = (1/N) Σ_{n=n_0}^{n_0+N−1} x[n] e^{−jω_0 kn} = (1/N) Σ_{n=−∞}^{∞} x[n] e^{−jω_0 kn}.

Let us define

X(e^{jω}) ≜ Σ_{n=−∞}^{∞} x[n] e^{−jωn};

this is called the discrete-time Fourier transform of the signal x[n].
From this, we see that a_k = (1/N) X(e^{jkω_0}), i.e., the discrete-time Fourier series
coefficients are obtained by sampling the discrete-time Fourier transform at
periodic intervals of ω_0. Also note that X(e^{jω}) is periodic in ω with period 2π
(since e^{−jωn} is 2π-periodic).
Using the Fourier series representation of x̃[n], we now have

x̃[n] = Σ_{k=0}^{N−1} a_k e^{jkω_0 n} = (1/N) Σ_{k=0}^{N−1} X(e^{jkω_0}) e^{jkω_0 n} = (1/2π) Σ_{k=0}^{N−1} X(e^{jkω_0}) e^{jkω_0 n} ω_0.
Once again, we see that each term in the summand represents the area of a
rectangle of width ω_0 obtained from the curve X(e^{jω}) e^{jωn}. As N → ∞, we have
ω_0 → 0. In this case, the sum of the areas of the rectangles approaches the
integral of the curve X(e^{jω}) e^{jωn}, and since the sum was over only N samples
of the function, the integral is only over one interval of length 2π. Since x̃[n]
approaches x[n] as N → ∞, we have

x[n] = (1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω.
Since the frequency spectrum of X(ejω ) is only uniquely specified over an in-
terval of length 2π, we have to be careful about what we mean by “high” and
“low” frequencies. Recalling the discussion of discrete-time complex exponen-
tials, high-frequency signals in discrete-time have frequencies close to odd multi-
ples of π, whereas low-frequency signals have frequencies close to even multiples
of π.
Example 6.1. Consider the signal

x[n] = a^n u[n],  |a| < 1.

We have

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = Σ_{n=0}^{∞} a^n e^{−jωn}
         = Σ_{n=0}^{∞} (a e^{−jω})^n
         = 1/(1 − a e^{−jω}).
If we plot the magnitude of X(ejω ), we see an illustration of the “high” versus
“low” frequency effect. Specifically, if a > 0 then the signal x[n] does not have
any oscillations and |X(ejω )| has its highest magnitude around even multiples
of π. However, if a < 0, then the signal x[n] oscillates between positive and
negative values at each time-step; this “high-frequency” behavior is captured
by the fact that |X(e^{jω})| has its largest magnitude near odd multiples of π. See
Figure 5.4 in OW for an illustration.
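This behavior can be confirmed directly (a short added check, with a = ±0.8 as arbitrary values): the peak of |X(e^{jω})| sits at ω = 0 when a > 0 and at ω = ±π when a < 0.

import numpy as np

# Locate the peak of |X(e^{jw})| = |1/(1 - a e^{-jw})| for a > 0 and a < 0.
w = np.linspace(-np.pi, np.pi, 1001)
for a in (0.8, -0.8):
    X = 1.0 / (1.0 - a * np.exp(-1j * w))
    print(a, round(w[np.argmax(np.abs(X))], 3))   # 0.0 for a > 0, -pi for a < 0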
Example 6.2. Consider the signal

x[n] = a^{|n|},  |a| < 1.

We have

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = Σ_{n=−∞}^{∞} a^{|n|} e^{−jωn}
         = Σ_{n=−∞}^{−1} a^{−n} e^{−jωn} + Σ_{n=0}^{∞} a^n e^{−jωn}
         = Σ_{n=1}^{∞} a^n e^{jωn} + Σ_{n=0}^{∞} a^n e^{−jωn}
         = a e^{jω}/(1 − a e^{jω}) + 1/(1 − a e^{−jω})
         = (1 − a²)/(1 − 2a cos(ω) + a²).
Now consider the signal x[n] = e^{jω_0 n}. Its Fourier transform is

X(e^{jω}) = Σ_{r=−∞}^{∞} 2π δ(ω − ω_0 − 2πr),

i.e., a set of impulse functions spaced 2π apart on the frequency axis. To verify
this, note that the inverse Fourier transform is given by

(1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω.

The integral is only over an interval of length 2π, and there is at most one
impulse function from X(e^{jω}) in any such interval. Let that impulse be located
at ω_0 + 2πr for some r ∈ Z. Then we have

(1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω = (1/2π) ∫_{2π} 2π δ(ω − ω_0 − 2πr) e^{jωn} dω = e^{j(ω_0+2πr)n} = e^{jω_0 n}.
More generally, if x[n] is periodic with period N and has Fourier series coefficients
a_k, then

X(e^{jω}) = Σ_{k=−∞}^{∞} 2π a_k δ(ω − 2πk/N).
6.3 Properties of the Discrete-Time Fourier Transform

6.3.1 Periodicity

For any discrete-time signal, the Fourier transform satisfies

X(e^{jω}) = X(e^{j(ω+2π)}).

This follows from the fact that discrete-time complex exponentials are periodic
in frequency with period 2π.
6.3.2 Linearity

It is easy to see that

F{a x_1[n] + b x_2[n]} = a X_1(e^{jω}) + b X_2(e^{jω}).
6.3.5 Conjugation

For any real-valued discrete-time signal (that has a Fourier transform), we have

X(e^{jω}) = X*(e^{−jω}).
6.3.6 Time-Reversal
Consider the time-reversed signal x[−n]. We have

x[−n] = (1/2π) ∫_{0}^{2π} X(e^{jω}) e^{−jωn} dω = (1/2π) ∫_{0}^{2π} X(e^{−jω}) e^{jωn} dω,

and thus

F{x[−n]} = X(e^{−jω}).
Together with the conjugation property, we see that for real-valued even signals
(where x[n] = x[−n]), we have

X(e^{jω}) = X(e^{−jω}) = X*(e^{jω}).

Thus, the Fourier transform of real, even signals is also real and even.
Given a signal x[n] and a positive integer k, define the signal x_{(k)}[n] to be
equal to x[n/k] when n is a multiple of k, and zero otherwise. Thus, the signal
x_{(k)}[n] is obtained by spreading the points of x[n] apart by k samples and
placing zeros between the samples. We have

F{x_{(k)}[n]} = Σ_{n=−∞}^{∞} x_{(k)}[n] e^{−jωn} = Σ_{r=−∞}^{∞} x_{(k)}[rk] e^{−jωrk},

since x_{(k)}[n] is nonzero only at integer multiples of k. Since x_{(k)}[rk] = x[r], we
have

F{x_{(k)}[n]} = Σ_{r=−∞}^{∞} x[r] e^{−jωrk} = X(e^{jkω}).
As an example of using this property, consider the signal x[n] that is equal to
1 for n ∈ {0, 2, 4}, equal to 2 for n ∈ {1, 3, 5}, and zero otherwise. This signal
can be written as x[n] = g[n] + 2g[n − 1], where

g[n] = 1 for n ∈ {0, 2, 4}, and g[n] = 0 otherwise.

Note that g[n] = h_{(2)}[n], where h[n] is equal to 1 for 0 ≤ n ≤ 2 and zero
otherwise. We have

H(e^{jω}) = Σ_{n=0}^{2} e^{−jωn} = (1 − e^{−3jω})/(1 − e^{−jω}) = e^{−jω} sin(3ω/2)/sin(ω/2).

By the time expansion property,

G(e^{jω}) = H(e^{j2ω}) = e^{−2jω} sin(3ω)/sin(ω).

Finally, using linearity and time-shifting,

X(e^{jω}) = G(e^{jω}) + 2e^{−jω} G(e^{jω}) = e^{−2jω}(1 + 2e^{−jω}) sin(3ω)/sin(ω).
Differentiating the definition X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} with respect to ω
yields dX(e^{jω})/dω = Σ_{n=−∞}^{∞} (−jn) x[n] e^{−jωn}. Thus

F{n x[n]} = j dX(e^{jω})/dω.
6.3.10 Convolution
Just as in continuous time, the discrete time signal ejωn is an eigenfunction
of discrete-time LTI systems. Specifically, if ejωn is applied to a (stable) LTI
system with impulse response h[n], the output of the system will be H(ejω )ejωn .
The expression on the right hand side is the output y[n] of the system when the
input is x[n]. Thus we have

Y(e^{jω}) = H(e^{jω}) X(e^{jω}),

i.e., just as in continuous time, convolution in the time-domain corresponds to
multiplication in the frequency domain.
The signal w_3[n] is given by w_3[n] = (−1)^n w_2[n], and thus W_3(e^{jω}) = W_2(e^{j(ω−π)}).
Putting this together with the expression for W_2(e^{jω}), we obtain

W_3(e^{jω}) = H_{lp}(e^{j(ω−π)}) X(e^{jω}).

From the bottom path, we have W_4(e^{jω}) = H_{lp}(e^{jω}) X(e^{jω}). Thus, we have

Y(e^{jω}) = W_3(e^{jω}) + W_4(e^{jω}) = (H_{lp}(e^{jω}) + H_{lp}(e^{j(ω−π)})) X(e^{jω}).
Recall that Hlp (ej(ω−π) ) is a high-pass filter centered at π. Thus, this system
acts as a bandstop filter, blocking all frequencies in a certain range and letting
all of the low and high frequency signals through.
6.3.11 Multiplication

Consider two signals x_1[n] and x_2[n], and define g[n] = x_1[n] x_2[n]. The discrete-
time Fourier transform of g[n] is given by

G(e^{jω}) = Σ_{n=−∞}^{∞} x_1[n] x_2[n] e^{−jωn}.

Following the same steps as in the continuous-time case (but keeping the integral
over a single interval of length 2π), one obtains

G(e^{jω}) = (1/2π) ∫_{2π} X_1(e^{jθ}) X_2(e^{j(ω−θ)}) dθ.
This resembles the typical convolution of the signals X_1(e^{jω}) and X_2(e^{jω}),
except that the integral is over only an interval of length 2π, as opposed to over
the entire frequency axis. This is called the periodic convolution of the two signals.
Recall that we saw the same thing when we considered the discrete-time Fourier
series of the product of two periodic discrete-time signals.
Example 6.7. Consider the two signals

x_1[n] = sin(πn/2)/(πn),  x_2[n] = sin(πn/4)/(πn).

The Fourier transforms of these signals are square pulses, where the pulse centered
at 0 extends from −π/2 to π/2 (for X_1(e^{jω})) and from −π/4 to π/4 (for X_2(e^{jω})).
The Fourier transform of g[n] = x_1[n] x_2[n] is given by

G(e^{jω}) = (1/2π) ∫_{2π} X_1(e^{jθ}) X_2(e^{j(ω−θ)}) dθ.
Since we can choose any interval of length 2π to integrate over, let's choose the
interval [−π, π) for convenience. We also only need to determine the values of
the Fourier transform for values of ω between −π and π, since the transform is
periodic. Depending on the value of ω, there are different cases that occur:

• If −π ≤ ω < −3π/4, then there is no overlap in the signals X_1(e^{jθ}) and
X_2(e^{j(ω−θ)}), and thus G(e^{jω}) is zero.

• If −3π/4 ≤ ω < −π/4, then there is partial overlap in the signals; the product
is a rectangle with support from −π/2 to ω + π/4, and thus G(e^{jω}) evaluates
to (1/2π)(ω + 3π/4).

• If −π/4 ≤ ω < π/4, there is full overlap and G(e^{jω}) is (1/2π)(π/2) = 1/4.

• If π/4 ≤ ω < 3π/4, there is partial overlap and G(e^{jω}) is (1/2π)(3π/4 − ω).

• If 3π/4 ≤ ω < π, there is no overlap and G(e^{jω}) is zero.
Note that since we are only integrating over θ between −π and π, the values of
X_1(e^{jθ}) outside of that interval do not matter. Thus, we could also create a
new signal X̂_1(e^{jθ}) which is equal to X_1(e^{jθ}) over the interval [−π, π) and zero
everywhere else. The Fourier transform can then be written as

G(e^{jω}) = (1/2π) ∫_{2π} X_1(e^{jθ}) X_2(e^{j(ω−θ)}) dθ = (1/2π) ∫_{−∞}^{∞} X̂_1(e^{jθ}) X_2(e^{j(ω−θ)}) dθ,

i.e., it is the usual convolution of the signals X̂_1(e^{jω}) and X_2(e^{jω}).
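The case analysis can be spot-checked numerically (an added sketch; the truncation at |n| ≤ 2000 is an arbitrary choice) by summing a truncated DTFT of g[n] at a few frequencies.

import numpy as np

# Truncated DTFT of g[n] = x1[n] x2[n]; note sin(pi n/2)/(pi n) = sinc(n/2)/2
# in numpy's convention sinc(x) = sin(pi x)/(pi x).
n = np.arange(-2000, 2001)
g = (np.sinc(n / 2) / 2) * (np.sinc(n / 4) / 4)
for w, expected in [(0.0, 1 / 4), (np.pi / 2, 1 / 8), (np.pi, 0.0)]:
    G = np.sum(g * np.exp(-1j * w * n)).real
    print(round(G, 3), expected)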
Chapter 7
Sampling
Let p(t) = Σ_{n=−∞}^{∞} δ(t − nT_s) be an impulse train with sampling period T_s
(and corresponding sampling frequency ω_s = 2π/T_s), and define the sampled
signal x_p(t) = x(t)p(t). Note that the values of the signal x(t) are irrelevant
outside of the points where the impulse functions in p(t) occur (i.e., at the sampling
instants). Let us consider the frequency spectra of these signals. Specifically, by
the multiplication property of Fourier transforms, we have

X_p(jω) = (1/2π) ∫_{−∞}^{∞} X(jθ) P(j(ω − θ)) dθ.
Furthermore, since p(t) is periodic, we saw that the Fourier transform of p(t)
will be given by

P(jω) = (2π/T_s) Σ_{n=−∞}^{∞} δ(ω − nω_s),

as the Fourier series coefficients of p(t) are each 1/T_s. Thus,
X_p(jω) = (1/T_s) Σ_{n=−∞}^{∞} ∫_{−∞}^{∞} X(jθ) δ(ω − θ − nω_s) dθ
        = (1/T_s) Σ_{n=−∞}^{∞} X(j(ω − nω_s)).
Thus, the frequency spectrum of x_p(t) consists of copies of the frequency spectrum
of x(t), where each copy is shifted (in frequency) by an integer multiple of
the sampling frequency ω_s and scaled by 1/T_s (see Fig. 7.1).
If we want to be able to reconstruct x(t) from its sampled version xp (t), we would
like to make sure that there is an exact copy of X(jω) that can be extracted
from Xp (jω). Based on the above discussion, we see that this will be the case if
no two copies of X(jω) overlap in Xp (jω). Looking at Fig. 7.1, this will occur
as long as
ωs − ωM > ωM ,
or equivalently,
ωs > 2ωM ,
where ωM is the largest frequency at which x(t) has nonzero content. This leads
to the sampling theorem.
Figure 7.1: The frequency spectrum of the signal x(t) and the signal xp (t).
Recall that the zero-order hold (ZOH) has impulse response h_0(t) equal to 1 for
0 ≤ t < T_s and zero elsewhere, with Fourier transform

H_0(jω) = e^{−jωT_s/2} (2 sin(ωT_s/2)/ω).

This has magnitude T_s at ω = 0 (like the ideal reconstructor), and the first
frequency at which it is equal to zero is at ω_s (unlike the ideal reconstructor
that cuts off at ω_s/2). Furthermore, this frequency spectrum is not bandlimited,
and thus the copies of X(jω) in the spectrum of X_p(jω) will leak into the
reconstructed signal under the ZOH.
(Figure: the impulse response h_0(t) of the zero-order hold, a unit-height pulse on
[0, T_s], and the impulse response h_1(t) of the first-order hold, a unit-height triangle
on [−T_s, T_s].)
The magnitude of this filter is smaller than that of H_0(jω) outside of ω_s/2,
although it is still not bandlimited. Furthermore, the FOH is noncausal, but can
be made causal with a delay of T_s.
Higher order filters are also possible, and can be defined as a natural extension
of zero and first order holds.
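For comparison with these holds, the ideal (bandlimited) reconstruction can be carried out directly from the interpolation formula x(t) = Σ_n x(nT_s) sinc((t − nT_s)/T_s). The sketch below (added for illustration; the sampling period and the 1 Hz test cosine are assumed values) shows the reconstruction error is small, up to truncation of the infinite sum.

import numpy as np

# Ideal sinc reconstruction of a 1 Hz cosine sampled at 10 Hz (Ts = 0.1 s).
Ts = 0.1
n = np.arange(-50, 51)
x_n = np.cos(2 * np.pi * n * Ts)             # samples of cos(2 pi t)
t = np.linspace(-2.0, 2.0, 801)
x_rec = x_n @ np.sinc((t[None, :] - n[:, None] * Ts) / Ts)

print(np.max(np.abs(x_rec - np.cos(2 * np.pi * t))))  # small truncation error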
Consider the signal x(t) = cos(ω_0 t) with ω_0 = 1. Since we can choose any
sampling frequency satisfying ω_s > 2ω_0 = 2, we can reconstruct x(t) from its
samples. Let us choose ω_s = 4. The frequency spectrum of x_p(t) = x(t)p(t) is
shown in Fig. 7.3.
Now consider another signal x1 (t) = cos(ω1 t), and suppose that we sample this
signal at ωs = 4. Let xp1 (t) be the resulting (continuous-time) sampled signal.
For what value of ω1 will the frequency spectrum Xp1 (jω) look exactly the same
as Xp (jω)?
To answer this, note that X1 (jω) has impulses located at ±ω1 , and Xp1 (jω)
will have impulses located at kωs ± ω1 , for k ∈ Z. Looking at the frequency
spectrum of Xp (jω) in Fig. 7.3, we see that ω1 should be odd (otherwise, Xp1 (jω)
will have impulses at some even frequencies, whereas all of the impulses are at
odd frequencies in Xp (jω)). Suppose we try ω1 = 3. Then Xp1 (jω) will have
impulses at ±3, which matches two of the impulses in Xp (jω). We should check
the copies of the signals in Xp1 (jω) as well. Specifically, there will be a copy
centered at ωs = 4, with one impulse three units to the left (at ω = 1) and
one impulse three units to the right (at ω = 7). Similarly, the copy centered at
2ωs will have one impulse at 5 and one impulse at 11. The same is true for the
negative frequencies. Thus, we see that if ω1 = 3, then Xp1 (jω) looks exactly
the same as Xp (jω), and thus the signals x(t) = cos(t) and x1 (t) = cos(3t) look
exactly the same if sampled at ωs = 4.
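This aliasing is easy to see numerically (a one-line added check): with ω_s = 4, the sampling instants are t = nT_s = nπ/2, and cos(t) and cos(3t) agree at every one of them.

import numpy as np

# cos(t) and cos(3t) produce identical samples when Ts = 2*pi/4 = pi/2.
Ts = 2 * np.pi / 4
n = np.arange(-10, 11)
assert np.allclose(np.cos(n * Ts), np.cos(3 * n * Ts))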
On the other hand, if we take the discrete-time Fourier transform of the sequence
of samples x_d[n] = x(nT_s), we have
X_d(e^{jω}) = Σ_{n=−∞}^{∞} x_d[n] e^{−jωn} = Σ_{n=−∞}^{∞} x(nT_s) e^{−jωn}.
Chapter 8

The Laplace Transform

Thus far, we have seen ways to take time-domain signals and transform them
into frequency-domain signals, by identifying the amount of contribution of
complex exponentials of given frequencies to the signal. Specifically, for pe-
riodic signals, we started with the Fourier series representation of a signal in
terms of its harmonic family. For more general absolutely integrable signals,
we generalized the Fourier series to the Fourier transform, where the signal is
represented in terms of complex exponentials of all frequencies (not just those
from the harmonic family).
To develop this, first recall that complex exponentials of the form e^{st} are eigen-
functions of LTI systems, even when s is a general complex number. Specifically,
if x(t) = e^{st} is the input to an LTI system with impulse response h(t), we have

y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ = e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ.

Motivated by this, for a signal x(t) we define

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt.
Note that the limits of the integration go from −∞ to ∞, and thus this is called
the bilateral Laplace transform. When the limits only go from 0 to ∞, it
is called the unilateral Laplace transform. For the purposes of this course,
if we leave out the qualifier, we mean the bilateral transform. Note that when
s = jω, then X(s) is just the Fourier transform of x(t) (assuming the transform
exists). However, the benefit of the Laplace transform is that it also applies to
signals that do not have a Fourier transform. Specifically, note that s can be
written as s = σ + jω, where σ and ω are real numbers. Then we have
Z ∞
X(s) = X(σ + jω) = x(t)e−σt e−jωt dt.
−∞
Thus, for a given s = σ+jω, we can think of the Laplace transform as the Fourier
transform of the signal x(t)e−σt . Even if x(t) is not absolutely integrable, it may
be possible that x(t)e−σt is absolutely integrable if σ is large enough (i.e., the
complex exponential can be chosen to cancel out the growth of the signal in the
Laplace transform).
Example 8.1. Consider the signal x(t) = e−at u(t) where a is some real number.
The Laplace transform is given by
X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt = ∫_{0}^{∞} e^{−(s+a)t} dt
     = −(1/(s + a)) e^{−(s+a)t} |_{0}^{∞}
     = 1/(s + a),
as long as Re{s + a} > 0, or equivalently Re{s} > −a. Note that if a is positive,
then the integral converges for Re{s} = 0 as well, in which case we get the
Fourier transform X(jω) = 1/(jω + a). However, if a is negative, then the signal
does not have a Fourier transform (but it does have a Laplace transform for s
with a sufficiently large real part).
Example 8.2. Consider the signal x(t) = −e−at u(−t) where a is a real number.
We have

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt = −∫_{−∞}^{0} e^{−(s+a)t} dt
     = (1/(s + a)) e^{−(s+a)t} |_{−∞}^{0}
     = 1/(s + a),

as long as Re{s + a} < 0, or equivalently, Re{s} < −a.
Comparing the above examples, we notice that both the signals e^{−at}u(t) and
−e^{−at}u(−t) have the same Laplace transform 1/(s + a), but that the ranges of s
for which each has a Laplace transform are different. Thus, to uniquely specify a
signal, a Laplace transform must always be accompanied by its region of
convergence (ROC).
When X(s) is a rational function of s, we can write X(s) = N(s)/D(s) for
polynomials N(s) and D(s). The roots of N(s) are called the zeros of X(s) (X(s)
evaluates to zero at each of those roots), and the roots of D(s) are called the poles
of X(s) (evaluating X(s) at a pole will yield ∞). We can draw the poles and zeros
in the s-plane using ◦ for zeros and × for poles.
Example 8.4. Consider the signal

x(t) = δ(t) − (4/3) e^{−t} u(t) + (1/3) e^{2t} u(t).

The Laplace transform of δ(t) is

L{δ(t)} = ∫_{−∞}^{∞} δ(t) e^{−st} dt = 1

for any value of s. Thus the ROC for δ(t) is the entire s-plane. Putting this
together with the other two terms, we have

X(s) = 1 − (4/3)(1/(s + 1)) + (1/3)(1/(s − 2)) = (s − 1)²/((s + 1)(s − 2)),

with ROC Re{s} > 2.
Thus, as long as x(t)e^{−σt} is absolutely integrable, this integral exists. Note that
this does not depend on the value of ω. Thus, we have the following fact about
the ROC: it consists of strips parallel to the jω-axis in the s-plane (i.e., whether
or not a point s belongs to the ROC depends only on Re{s}).
For the next property of the ROC, suppose that the signal x(t) has a Laplace
transform given by a rational function. We know that the poles of this function
are the set of complex s such that X(s) is infinite. Since X(s) is given by the
Laplace transform integral, we see that the ROC cannot contain any poles of
X(s).
The third property pertains to signals that are of finite duration (and absolutely
integrable). Specifically, suppose that x(t) is nonzero only between two finite
times T_1 and T_2. Then we have

X(s) = ∫_{T_1}^{T_2} x(t) e^{−st} dt,

which converges for every s; thus, the ROC of such a signal is the entire s-plane.
The next property is that if x(t) is right-sided and the line Re{s} = σ_0 is in the
ROC, then all values of s for which Re{s} > σ_0 are also in the ROC. To see why
this is true, first note that since x(t) is right-sided, there exists some T_1 such that
x(t) = 0 for all t < T_1. If s with Re{s} = σ_0 is in the ROC, then x(t)e^{−σ_0 t} is
absolutely integrable, i.e.,

∫_{T_1}^{∞} |x(t)| e^{−σ_0 t} dt < ∞.
Now suppose that we consider some σ_1 > σ_0. If T_1 > 0, then e^{−σ_1 t} is always
smaller than e^{−σ_0 t} over the region of integration, and thus x(t)e^{−σ_1 t} will also
be absolutely integrable. If T_1 < 0, then

∫_{T_1}^{∞} |x(t)| e^{−σ_1 t} dt = ∫_{T_1}^{0} |x(t)| e^{−σ_1 t} dt + ∫_{0}^{∞} |x(t)| e^{−σ_1 t} dt
                           ≤ ∫_{T_1}^{0} |x(t)| e^{−σ_1 t} dt + ∫_{0}^{∞} |x(t)| e^{−σ_0 t} dt.
The first term is finite (since it is integrating a signal of finite duration), and
the second term is finite since x(t)e^{−σ_0 t} is absolutely integrable. Thus, once
again, x(t)e^{−σ_1 t} is absolutely integrable, and thus any s with Re{s} = σ_1 > σ_0
also falls within the ROC of the signal.

The same reasoning applies to show the corresponding property for left-sided
signals: if x(t) is left-sided and the line Re{s} = σ_0 is in the ROC, then all values
of s for which Re{s} < σ_0 are also in the ROC.
If x(t) is two-sided, we can write x(t) as x(t) = xR (t) + xL (t), where xR (t) is a
right-sided signal and xL (t) is a left-sided signal. The former has an ROC that
is the region to the right of some line in the s-plane, and the latter has an ROC
that is the region to the left of some line in the s-plane. Thus, the ROC for x(t)
contains the intersection of these two regions (if there is no intersection, x(t)
does not have a Laplace transform).
Example. Consider the signal x(t) = e^{−b|t|}, which we can write as
x(t) = e^{−bt}u(t) + e^{bt}u(−t). Note that we modify the definition of u(t) in this
expression so that u(0) = 1/2, so that x(0) = 1 as required. As this modification
is only at a single point (of zero width and finite height), it will not make a
difference to the quantities calculated by integrating the signals.
The signal e^{−bt}u(t) has Laplace transform

L{e^{−bt}u(t)} = 1/(s + b),

with ROC Re{s} > −b. The signal e^{bt}u(−t) has Laplace transform

L{e^{bt}u(−t)} = −1/(s − b),

with ROC Re{s} < b. If b ≤ 0, then these two ROCs do not overlap, in which
case x(t) does not have a Laplace transform. However, if b > 0, then x(t) has
the Laplace transform

L{x(t)} = 1/(s + b) − 1/(s − b),

with ROC −b < Re{s} < b.
As we will see soon, a rational Laplace transform X(s) can be decomposed into
a sum of terms, each of which corresponds to an exponential signal. The ROC
for X(s) consists of the intersection of the ROCs for each of those terms, and
since none of the ROCs can contain poles, we have the following property: the
ROC of a rational Laplace transform is bounded by poles or extends to infinity.
Since X(σ + jω) is just the Fourier transform of x(t)e^{−σt}, we can use the inverse
Fourier transform formula to obtain

x(t) e^{−σt} = (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{jωt} dω,

and multiplying both sides by e^{σt},

x(t) = (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{(σ+jω)t} dω.
For example, consider X(s) = 1/(s(s + 1)). Using a partial fraction expansion,
we can write

1/(s(s + 1)) = 1/s − 1/(s + 1).
Now each of these terms is of a form that we know (they correspond to complex
exponentials). So, for example, if the ROC for X(s) is the region to the right of
the imaginary axis, since the ROC consists of the intersection of the ROCs of
both of the terms, we know that both terms must be right-sided signals. Thus,

x(t) = u(t) − e^{−t} u(t).
Similarly, if the ROC is between the lines Re{s} = −1 and Re{s} = 0, the first
term is left-sided and the second term is right-sided, which means

x(t) = −u(−t) − e^{−t} u(t).
Finally, if the ROC is to the left of the line Re{s} = −1, both terms are
left-sided and thus

x(t) = −u(−t) + e^{−t} u(−t).
In the above example, we "broke up" the function 1/(s(s + 1)) into a sum of simpler
functions, and then applied the inverse Laplace transform to each of them. This
is a general technique for inverting Laplace transforms, which we now study.
Suppose that X(s) is rational, i.e.,

X(s) = K (s + z_1)(s + z_2) ··· (s + z_m) / ((s + p_1)(s + p_2) ··· (s + p_n)).

Recall that the zeros of X(s) are given by −z_1, −z_2, . . . , −z_m, and the poles are
−p_1, −p_2, . . . , −p_n. First, suppose each of the poles is distinct and that X(s)
is strictly proper (i.e., m < n). We would like to write

X(s) = k_1/(s + p_1) + k_2/(s + p_2) + ··· + k_n/(s + p_n).
To find the coefficient k_i, multiply both sides by (s + p_i):

(s + p_i) X(s) = k_1(s + p_i)/(s + p_1) + k_2(s + p_i)/(s + p_2) + ··· + k_i + ··· + k_n(s + p_i)/(s + p_n).

Now if we let s = −p_i, then all terms on the right hand side will be equal to
zero, except for the term k_i. Thus, we obtain

k_i = (s + p_i) X(s) |_{s=−p_i}.
Example 8.8. Consider X(s) = (s + 5)/(s³ + 3s² − 6s − 8). The denominator is
factored as (s + 1)(s − 2)(s + 4), and so we seek constants k_1, k_2, k_3 such that

X(s) = k_1/(s + 1) + k_2/(s − 2) + k_3/(s + 4).

We have

k_1 = (s + 1) X(s) |_{s=−1} = 4/((−3)(3)) = −4/9,
k_2 = (s − 2) X(s) |_{s=2} = 7/((3)(6)) = 7/18,
k_3 = (s + 4) X(s) |_{s=−4} = 1/((−3)(−6)) = 1/18.
The partial fraction expansion when some of the poles are repeated is obtained
by following a similar procedure, but it is a little more complicated. We will
not worry too much about this scenario here. One can also do a partial fraction
expansion of nonstrictly proper functions by first dividing the denominator into
the numerator to obtain a constant and a strictly proper function, and then
applying the above partial fraction expansion.
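For reference (an added aside, not part of the notes), this expansion can be checked with scipy, whose residue function implements exactly this computation for rational transforms.

from scipy import signal

# Partial fraction expansion of (s + 5)/(s^3 + 3 s^2 - 6 s - 8).
r, p, k = signal.residue([1, 5], [1, 3, -6, -8])
print(r)   # residues: approximately -4/9, 7/18, 1/18 (paired with p)
print(p)   # poles: approximately -1, 2, -4 (ordering may differ)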
8.4.1 Convolution
Consider two signals x1 (t) and x2 (t) with Laplace transforms X1 (s) and X2 (s)
and ROCs R1 and R2 , respectively. Then
L{x1 (t) ∗ x2 (t)} = X1 (s)X2 (s),
with ROC containing R1 ∩ R2 . Thus, convolution in the time-domain maps to
multiplication in the s-domain (as was the case with Fourier transforms).
Example 8.9. Consider the convolution u(t) ∗ u(t). Since L{u(t)} = 1/s with
ROC Re{s} > 0, we have

L{u(t) ∗ u(t)} = 1/s²,

with ROC containing the region Re{s} > 0.
8.4.2 Differentiation
Consider a signal x(t), with Laplace transform X(s) and ROC R. We have

x(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} X(s) e^{st} ds,

where the integration is along a vertical line Re{s} = σ inside the ROC.
Differentiating both sides with respect to t, we have

dx(t)/dt = (1/2πj) ∫_{σ−j∞}^{σ+j∞} s X(s) e^{st} ds.

Thus, we see that

L{dx/dt} = s X(s),

with ROC containing R. More generally,

L{d^m x/dt^m} = s^m X(s).
8.4.3 Integration
Given a signal x(t) whose Laplace transform has ROC R, consider the integral
Rt
−∞
x(τ )dτ . Note that
Z t
x(τ )dτ = u(t) ∗ x(t),
−∞
In particular, for an LTI system with impulse response h(t), input x(t), and
output y(t) = x(t) ∗ h(t), the convolution property gives

Y(s) = H(s) X(s),

assuming all Laplace transforms exist. Using the expressions for H(s) and X(s),
we can thus calculate Y(s) (and its ROC), and then use an inverse Laplace
transform to determine y(t).
Example 8.10. Consider an LTI system with impulse response h(t) = e−2t u(t).
Suppose the input is x(t) = e−3t u(t). The Laplace transforms of h(t) and x(t)
are
H(s) = 1/(s + 2),  X(s) = 1/(s + 3),

with ROCs Re{s} > −2 and Re{s} > −3, respectively. Thus we have

Y(s) = H(s) X(s) = 1/((s + 2)(s + 3)),

with ROC Re{s} > −2. Using partial fraction expansion, we have

Y(s) = 1/(s + 2) − 1/(s + 3),

and thus y(t) = e^{−2t} u(t) − e^{−3t} u(t).
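The result can be sanity-checked by discretizing the convolution (an added sketch; the step size and time horizon are arbitrary choices).

import numpy as np

# Riemann-sum convolution of x(t) = e^{-3t}u(t) with h(t) = e^{-2t}u(t),
# compared against the analytic answer e^{-2t} - e^{-3t}.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
y_num = np.convolve(np.exp(-3 * t), np.exp(-2 * t))[: t.size] * dt
y_exact = np.exp(-2 * t) - np.exp(-3 * t)

print(np.max(np.abs(y_num - y_exact)))       # O(dt) discretization error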
Example 8.11. Consider an LTI system with impulse response h(t) = −e^{4t} u(−t)
and input x(t) = e^{2t} u(t), where we interpret u(−t) as being 1 for t < 0. The
Laplace transforms are

H(s) = 1/(s − 4),  X(s) = 1/(s − 2),

with ROCs Re{s} < 4 and Re{s} > 2, respectively. Since there is a nonempty
intersection, we have

Y(s) = H(s) X(s) = 1/((s − 4)(s − 2)) = (1/2)(1/(s − 4)) − (1/2)(1/(s − 2)),

with ROC 2 < Re{s} < 4. Thus, y(t) is two-sided, and given by

y(t) = −(1/2) e^{4t} u(−t) − (1/2) e^{2t} u(t).
Taking Laplace transforms of both sides of the differential equation
Σ_{k=0}^{n} a_k (d^k y/dt^k) = Σ_{k=0}^{m} b_k (d^k x/dt^k) and applying the differentiation
property, we obtain (Σ_{k=0}^{n} a_k s^k) Y(s) = (Σ_{k=0}^{m} b_k s^k) X(s), or equivalently

Y(s) = ((Σ_{k=0}^{m} b_k s^k)/(Σ_{k=0}^{n} a_k s^k)) X(s),

where the rational function multiplying X(s) is the transfer function H(s).
Thus, the impulse response of the differential equation is just the inverse Laplace
transform of H(s) (corresponding to an appropriate region of convergence). For
example, for the differential equation (d³y/dt³) + 2(d²y/dt²) − (dy/dt) − 2y = x(t),
we have

H(s) = Y(s)/X(s) = 1/(s³ + 2s² − s − 2) = 1/((s − 1)(s + 1)(s + 2)).