Signals and Systems Lecture Notes: Dr. Mahmoud M. Al-Husari
1 Introduction
1.1 Signals and Systems Defined
1.2 Types of Signals and Systems
1.2.1 Signals
1.2.2 Systems
3 Description of Systems
3.1 Introduction
3.2 Systems Characteristics
3.2.1 Memory
3.2.2 Invertibility
3.2.3 Causality
3.2.4 Stability
3.2.5 Time Invariance
3.2.6 Linearity and Superposition
3.3 Linear Time-invariant Systems
3.3.1 Time-Domain Analysis of LTI Systems
Introduction
Systems with more than one input and more than one output are called MIMO (Multi-Input Multi-Output) systems. Figures 1.3 through 1.5 show examples of systems.
Figure 1.3: Communication between two people involving signals and signal
processing by systems.
1.2.1 Signals
A continuous-time (CT) signal is one which is defined at every instant of time over some time interval; CT signals are functions of a continuous time variable. We often refer to a CT signal as x(t). The independent variable is time t and can take any real value; the function x(t) is called a CT function because it is defined on a continuum of points in time.
1.2.2 Systems
A CT system transforms a continuous-time input signal into a CT output signal, as shown in Figure 1.8. Two broad classes of problems arise:
• Analysis problems
• Synthesis problems
In analysis problems one is usually presented with a specific system and is interested in characterizing it in detail to understand how it will respond to various inputs. Synthesis problems, on the other hand, require designing systems to process signals in a particular way to achieve desired outputs. Our main focus in this course is on analysis problems.
Chapter 2
Mathematical Description
of Signals
for any integer value of n, where T > 0 is the period of the function and −∞ < t < ∞. The signal repeats itself every T sec. Of course, it also repeats every 2T, 3T and nT. Therefore, 2T, 3T and nT are all periods of the function because the function repeats over any of those intervals. The minimum positive interval over which a function repeats itself is called the fundamental period T0 (Figure 2.1). T0 is the smallest value that satisfies the condition x(t) = x(t + T0).
Example 2.1 With respect to the signal shown in Figure 2.2, determine the fundamental frequency and the fundamental angular frequency.
Solution It is clear that the fundamental period T0 = 0.2 sec. Thus,
f0 = 1/T0 = 1/0.2 = 5 Hz
ω0 = 2πf0 = 10π rad/sec.
The signal repeats itself 5 times in one second, which can be clearly seen in Figure 2.2.
Figure 2.2: Signal of Example 2.1
Example 2.2 A real valued sinusoidal signal x(t) can be expressed mathematically by
x(t) = A sin(ω0 t + φ) (2.4)
Show that x(t) is periodic.
Solution For x(t) to be periodic it must satisfy the condition x(t) = x(t+T0 ),
thus
x(t + T0 ) = A sin(ω0 (t + T0 ) + φ)
= A sin(ω0 t + φ + ω0 T0 )
Recall that sin(α + β) = sin α cos β + cos α sin β, therefore
x(t + T0 ) = A [sin(ω0 t + φ) cos ω0 T0 + cos(ω0 t + φ) sin ω0 T0 ] (2.5)
Substituting the fundamental period T0 = 2π/ω0 in (2.5) yields
then lT1 = kT2 and, since k and l are integers, x1(t) + x2(t) is periodic with period lT1. Thus the expression (2.6) follows with T = lT1. In addition, if k and l are co-prime (i.e. k and l have no common integer factors other than 1), then T = lT1 is the fundamental period of the sum x1(t) + x2(t).
Example 2.3 Let x1(t) = cos(πt/2) and x2(t) = cos(πt/3); determine if x1(t) + x2(t) is periodic.
Solution x1(t) and x2(t) are periodic with the fundamental periods T1 = 4 (since ω1 = π/2 = 2π/T1 ⇒ T1 = 4) and similarly T2 = 6. Now
T1/T2 = 4/6 = 2/3
then with k = 2 and l = 3, it follows that the sum x1(t) + x2(t) is periodic with fundamental period T = lT1 = (3)(4) = 12 sec.
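The reasoning of Example 2.3 can be sketched numerically. The helper `sum_period` below is illustrative (not from the notes); it assumes the two fundamental periods are given as integers so that exact rational arithmetic applies.

```python
from fractions import Fraction

def sum_period(T1, T2):
    """Fundamental period of x1 + x2 when T1/T2 = k/l in lowest terms:
    the sum is periodic with fundamental period T = l*T1 = k*T2.
    Assumes T1, T2 are integers (T1/T2 rational)."""
    ratio = Fraction(T1, T2)          # reduced to lowest terms automatically
    k, l = ratio.numerator, ratio.denominator
    return l * T1                     # equals k * T2

# Example 2.3: T1 = 4, T2 = 6, so T1/T2 = 2/3 and T = 3*4 = 12 seconds
print(sum_period(4, 6))  # 12
```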
x(t) = { 1 − |t|, −1 < t < 1 ; 0, otherwise }
It is clear that this function is well defined mathematically (Figure 2.3).
The size of any entity is a number that indicates the largeness or strength of that entity. Generally speaking, the signal amplitude varies with time. How can a signal such as the one shown in Figure 2.5, which exists over a certain time interval with varying amplitude, be measured by one number that will indicate the signal size or signal strength? One must not consider only the signal amplitude but also the duration. If for instance one wants to measure the size of a human by a single number one must not only consider his height but also his width. If we make a simplifying assumption that the shape of a person is a cylinder of variable radius r (which varies with height h), then a reasonable measure of the size of a human of height H is his volume, given by
V = π ∫_{0}^{H} r²(h) dh
Figure 2.5: What is the size of a signal?
Arguing in this manner, we may consider the area under a signal as a possible
measure of its size, because it takes account of not only the amplitude but
also the duration. However this will be a defective measure because it could
be a large signal, yet its positive and negative areas could cancel each other,
indicating a signal of small size. This difficulty can be corrected by defining the
signal size as the area under the square of the signal, which is always positive.
We call this measure the Signal Energy E∞, defined for a real signal x(t) as
E∞ = ∫_{−∞}^{∞} x²(t) dt   (2.7)
(Note that for complex signals |x(t)|² = x(t)x∗(t), where x∗(t) is the complex conjugate of x(t).) Signal energy for a DT signal is defined in an analogous way as
E∞ = Σ_{n=−∞}^{∞} |x[n]|²   (2.9)
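For a finite-length DT signal the energy sum (2.9) is easy to evaluate directly. The short sketch below uses arbitrary illustrative signals, not ones taken from the notes:

```python
import numpy as np

# Energy of a finite-length DT signal per (2.9): E = sum of |x[n]|^2.
x = np.array([1.0, -2.0, 3.0, -1.0])
E = np.sum(np.abs(x) ** 2)
print(E)  # 15.0

# For a complex signal, |x|^2 = x x* as noted above.
z = np.array([1 + 1j, 2j])
Ez = np.sum((z * z.conj()).real)
print(Ez)  # 6.0
```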
For many signals encountered in signal and system analysis, the integral in
E∞ = ∫_{−∞}^{∞} |x(t)|² dt
does not converge because the signal energy is infinite. The signal energy must be finite for it to be a meaningful measure of the signal size. Infinite energy usually occurs because the signal is not time-limited (time-limited means the signal is nonzero over only a finite time).
only a finite time.) An example of a CT signal with infinite energy would be a
sinusoidal signal
x(t) = A cos(2πf0 t).
For signals of this type, it is usually more convenient to deal with the average
signal power of the signal instead of the signal energy. The average signal power
of a CT signal is defined by
P∞ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt   (2.10)
Note that the integral in (2.10) is the signal energy of the signal over a time T, which is then divided by T, yielding the average signal power over time T. Then, as T approaches infinity, this average becomes the average signal power over all time. Observe also that the signal power P∞ is the time average (mean) of the signal amplitude squared, that is, the mean-squared value of x(t). Indeed, the square root of P∞, rms = √P∞, is the familiar rms (root mean square) value of x(t).
For periodic signals, the average signal power calculation may be simpler. The
average value of any periodic function is the average over any period. Therefore,
since the square of a periodic function is also periodic, for periodic CT signals
P∞ = (1/T) ∫_T |x(t)|² dt   (2.13)
where the notation ∫_T means integration over one period (T can be any period, but one usually chooses the fundamental period).
x(t) = A cos(ω0 t + φ)
Solution From the definition of signal power for a periodic signal in (2.13),
P∞ = (1/T) ∫_T |A cos(ω0 t + φ)|² dt = (A²/T0) ∫_{−T0/2}^{T0/2} cos²((2π/T0)t + φ) dt   (2.14)
Using the identity
cos(α) cos(β) = ½[cos(α − β) + cos(α + β)]
in (2.14) we get
P∞ = (A²/2T0) ∫_{−T0/2}^{T0/2} [1 + cos((4π/T0)t + 2φ)] dt   (2.15)
   = (A²/2T0) ∫_{−T0/2}^{T0/2} dt + (A²/2T0) ∫_{−T0/2}^{T0/2} cos((4π/T0)t + 2φ) dt   (2.16)
The second integral on the right hand side of (2.16) is zero because it is the integral of a sinusoid over exactly two fundamental periods. Therefore, the power is P∞ = A²/2. Notice that this result is independent of the phase φ and the angular frequency ω0; it depends only on the amplitude A.
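The result P∞ = A²/2 is easy to confirm numerically by averaging |x(t)|² over one fundamental period. The amplitude, frequency and phase below are arbitrary choices for illustration:

```python
import numpy as np

# Numerical check that P = A^2/2 for x(t) = A cos(w0 t + phi):
# average |x|^2 over exactly one fundamental period.
A, f0, phi = 3.0, 5.0, 0.7
T0 = 1.0 / f0
N = 1000
t = np.arange(N) * (T0 / N)          # N uniform samples over one period
x = A * np.cos(2 * np.pi * f0 * t + phi)
P = np.mean(x ** 2)
print(P, A ** 2 / 2)                 # both 4.5 (to rounding)
```

Changing phi or f0 leaves P unchanged, in agreement with the remark above.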
Example 2.6 Find the power of the signal shown in Figure 2.6.
Solution From the definition of signal power for a periodic signal,
P∞ = (1/T) ∫_T |x(t)|² dt = (1/0.5) [ ∫_{0}^{0.25} 1² dt + ∫_{0.25}^{0.5} (−1)² dt ] = 1
Comment The signal energy as defined in (2.7) or (2.8) does not indicate the actual energy of the signal, because the signal energy depends not only on the signal but also on the load. To make this point clearer, assume we have a voltage signal v(t) across a resistor R; the actual energy delivered to the resistor by the voltage signal would be
Energy = ∫_{−∞}^{∞} |v(t)|²/R dt = (1/R) ∫_{−∞}^{∞} |v(t)|² dt = E∞/R
Signals which have finite signal energy are referred to as energy signals, and signals which have infinite signal energy but finite average signal power are referred to as power signals. Observe that power is the time average of energy. Since the averaging is over an infinitely large interval, a signal with finite energy has zero power, and a signal with finite power has infinite energy. Therefore, a signal cannot be both an energy and a power signal. On the other hand, there are signals that are neither energy nor power signals; the ramp signal is such an example. Figure 2.7 shows examples of CT and DT energy and power signals.
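The classification can be explored numerically by estimating energy and power over a window of growing width T. The helper `estimate` below is a hypothetical illustration (not from the notes): for an energy signal E settles and P tends to zero, while for a power signal P settles and E grows without bound.

```python
import numpy as np

def estimate(x_of_t, T, n=400001):
    """Crude (energy, power) estimates of x over the window [-T/2, T/2]."""
    t = np.linspace(-T / 2, T / 2, n)
    mean_sq = np.mean(x_of_t(t) ** 2)
    return mean_sq * T, mean_sq       # (energy estimate, power estimate)

decay = lambda t: np.exp(-np.abs(t))      # energy signal: E = 1, P -> 0
sine = lambda t: np.cos(2 * np.pi * t)    # power signal: E -> inf, P = 1/2
for T in (10.0, 100.0):
    print("decay:", estimate(decay, T))
    print("sine :", estimate(sine, T))
```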
A function g(t) is even if
g(t) = g(−t)
and odd if
g(t) = −g(−t)
An even function has the same value at the instants t and −t for all values of t. Clearly, g(t) in this case is symmetrical about the vertical axis (the vertical axis acts as a mirror), as shown in Figure 2.8. On the other hand, the value of an odd function at the instant t is the negative of its value at the instant −t. Therefore, g(t) in this case is anti-symmetrical about the vertical axis, as depicted in Figure 2.8.
Any function x(t) can be expressed as a sum of its even and odd components, x(t) = xe(t) + xo(t). The even and odd parts of a function x(t) are
xe(t) = ½[x(t) + x(−t)]
xo(t) = ½[x(t) − x(−t)]
(a) Product of two even functions (b) Product of even and odd functions
(c) Product of even and odd functions (d) Product of two odd functions
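The decomposition xe(t) = ½[x(t) + x(−t)], xo(t) = ½[x(t) − x(−t)] can be sketched directly on sampled data; on a symmetric time grid, x(−t) is just the reversed array. The test signal below is an arbitrary asymmetric illustration:

```python
import numpy as np

# Even/odd decomposition of a sampled signal on a symmetric grid.
t = np.linspace(-3, 3, 601)
x = np.where((t > -1) & (t < 1), 1 - np.abs(t), 0.0) + 0.5 * t  # arbitrary signal

xe = 0.5 * (x + x[::-1])   # even part: [x(t) + x(-t)]/2
xo = 0.5 * (x - x[::-1])   # odd part:  [x(t) - x(-t)]/2

assert np.allclose(xe + xo, x)      # the parts sum back to x
assert np.allclose(xe, xe[::-1])    # xe(t) =  xe(-t)
assert np.allclose(xo, -xo[::-1])   # xo(t) = -xo(-t)
print("decomposition verified")
```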
Example 2.7 Determine the even and odd components of the signal shown in Figure 2.11.
Figure 2.11: Finding even and odd components of x(t): the signal x(t), its even part xe(t) and its odd part xo(t).
This is Figure 2.13(b), which we shall denote φ(t). Whatever happens in f(t) at some
instant t also happens in φ(t) but t0 seconds later at the instant t+t0 . Therefore
φ(t + t0 ) = f (t)
and
φ(t) = f (t − t0 ).
Therefore, to time shift a signal by t0 , we replace t with t − t0 . Thus f (t − t0 )
represents f (t) time shifted by t0 seconds. If t0 is positive, the shift is to the
right (delay). If t0 is negative, the shift is to the left (advance).
Solution First plot the signal x(t) to visualize the signal and the important points in time (Figure 2.14). We can begin to understand how to make this transformation by computing the values of x(t − 1) at a few selected points, as shown in Table 2.1 (Figure 2.15). Next, plot x(t − 1) as a function of t. It should be apparent that replacing t by t − 1 has the effect of shifting the signal one second to the right.

Table 2.1
t     t − 1   x(t − 1)
−2    −3      0
−1    −2      0
0     −1      1
1     0       1
2     1       0.5
3     2       0
4     3       0
5     4       0

Figure 2.14: A signal x(t)
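Table 2.1 can be reproduced numerically. The function below linearly interpolates the breakpoints read off the table, an assumed model of the signal in Figure 2.14:

```python
import numpy as np

# x(t) interpolates the assumed breakpoints of Figure 2.14 and is zero
# outside [-2, 2].
def x(t):
    return np.interp(t, [-2, -1, 0, 1, 2], [0, 1, 1, 0.5, 0],
                     left=0.0, right=0.0)

t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
print(x(t - 1))  # [0.  0.  1.  1.  0.5 0.  0.  0. ], as in Table 2.1
```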
Table 2.2
t      2t    x(2t)
−2     −4    0
−1.5   −3    0
−1     −2    0
−0.5   −1    1
0      0     1
0.5    1     0.5
1      2     0
1.5    3     0
2      4     0
If a > 1, the scaling results in compression, and if a < 1, the scaling results in expansion.
Figure 2.17: Stretched version of x(t).

2.2.3 Time Reflection

Also called time reversal: the reflected signal is φ(t) = x(−t). Observe that whatever happens at the time instant t also happens at the instant −t. The mirror image of x(t) about the vertical axis is x(−t). Recall that the mirror image of x(t) about the horizontal axis is −x(t). Figure 2.18 shows a discrete time example of time reflection.
x(bt − t0)
In this case the sequence of time shifting and then time scaling is the simplest path to the correct transformation:
x(t) −−t→t−t0−→ x(t − t0) −−t→bt−→ x(bt − t0)
Example 2.9 Let a signal be defined as in the previous example. Determine and plot x(−3t − 4).

Table 2.3
t      t′ = −3t − 4   x(−3t − 4)
−2/3   −2             0
−1     −1             1
−4/3   0              1
−5/3   1              0.5
−2     2              0
−7/3   3              0
Comment: When solving using the method of constructing a table like the one shown in Table 2.3, it is much easier to start constructing your table from the second column, i.e. the time-transformation argument of the function. The time-transformation argument in this example is −3t − 4, which can be labelled t′. Start with a few selected points of t′, find the corresponding t points and fill the column corresponding to t. This can be done easily by writing an expression for t in terms of t′, namely t = −(t′ + 4)/3. Finally, plot x(−3t − 4) as a function of t. An alternative sequence of transformations is
x(t) −−t→t−4−→ x(t − 4) −−t→−3t−→ x(−3t − 4)
Figure: (a) First reflect and scale, then shift: x(t), x(−3t), x(−3(t + 4/3)) = x(−3t − 4). (b) Alternative sequence of transformations: x(t), x(t − 4), x(−3t − 4).
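Either transformation order can be checked by substituting directly into the argument, which is what the sketch below does for b = −3, t0 = 4 as in Example 2.9. The signal x(t) again interpolates the assumed breakpoints of the example signal:

```python
import numpy as np

def x(t):
    # assumed piecewise-linear model of the example signal
    return np.interp(t, [-2, -1, 0, 1, 2], [0, 1, 1, 0.5, 0],
                     left=0.0, right=0.0)

def transform(x, b, t0):
    """Return the function t -> x(b*t - t0) by direct substitution."""
    return lambda t: x(b * np.asarray(t, dtype=float) - t0)

y = transform(x, -3, 4)                    # y(t) = x(-3t - 4)
t_pts = [-2/3, -1, -4/3, -5/3, -2, -7/3]
print(y(t_pts))                            # matches Table 2.3: 0, 1, 1, 0.5, 0, 0
```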
Another important function in the area of signals and systems is the exponential signal e^{st}, where s is complex in general. An exponential function in its most general form is written as
x(t) = Ce^{st}, where s = σ + jω.
Figure 2.22 shows different examples of the real part of the function
x(t) = Ae^{jφ}e^{(σ+jω)t}
with decaying oscillations for σ < 0, constant amplitude for σ = 0 and growing oscillations for σ > 0.
which implies that the phase has the effect of time shifting the signal.
• Since complex exponential functions are periodic, they have infinite total energy but finite power:
P∞ = (1/T) ∫_T |e^{jω0 t}|² dt = (1/T) ∫_{t}^{t+T} 1 dτ = 1
• k = 0 ⇒ Θk(t) is a constant.
The unit step is defined and used in signal and system analysis because it can mathematically represent a very common action in real physical systems: fast switching from one state to another. For example, in the circuit shown in Figure 2.24 the switch moves from one position to the other at time t = 0. The voltage applied to the RC circuit can be described mathematically by v0(t) = vs(t)u(t).
Figure 2.24: Simple RC circuit
The unit step function is very useful in specifying a function with different mathematical descriptions over different intervals. Consider, for example, the rectangular pulse depicted in Figure 2.25(a). We can express such a pulse in terms of the unit step function by observing that the pulse can be expressed as the sum of the two delayed unit step functions shown in Figure 2.25(b). The unit step function delayed by t0 seconds is u(t − t0). From Figure 2.25(b), it is clear that one way of expressing the pulse of Figure 2.25(a) is u(t − 2) − u(t − 4).
Figure 2.25: (b) The delayed steps u(t − 2) and −u(t − 4) whose sum forms the pulse of (a).
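The step decomposition of the pulse is easy to check on sampled time instants. This is a small sketch; the convention u(0) = 1 is an assumption of the code, not fixed by the notes:

```python
import numpy as np

# The pulse of Figure 2.25(a) as u(t - 2) - u(t - 4).
def u(t):
    return (np.asarray(t, dtype=float) >= 0).astype(float)  # u(0) = 1 here

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
p = u(t - 2) - u(t - 4)
print(p)  # [0. 0. 1. 1. 0. 0.] — nonzero only between t = 2 and t = 4
```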
Figure 2.28: Functions that change linearly before or after some time
Figure 2.29: The CT unit ramp function
Signals of this kind can be described with the use of the ramp function. The CT unit ramp function (Figure 2.29) is the integral of the unit step function. It is called the unit ramp function because, for positive t, its slope is one:
ramp(t) = { t, t > 0 ; 0, t ≤ 0 } = ∫_{−∞}^{t} u(λ) dλ = t u(t)   (2.28)
Figure 2.30: Illustration of the integral relationship between the CT unit step
and the CT unit ramp.
Example 2.10 Describe the signal shown in Figure 2.31 in terms of unit step functions.
Solution The signal x(t) illustrated in Figure 2.31 can be conveniently handled by breaking it up into the two components x1(t) and x2(t), as shown in Figure 2.32. Here, x1(t) can be obtained by multiplying the ramp t by the pulse u(t) − u(t − 2), as shown in Figure 2.32. Therefore,
x1(t) = t[u(t) − u(t − 2)].
Similarly,
x2(t) = −2(t − 3)[u(t − 2) − u(t − 3)]
and
x(t) = x1(t) + x2(t)
     = t[u(t) − u(t − 2)] − 2(t − 3)[u(t − 2) − u(t − 3)]
     = t u(t) − 3(t − 2)u(t − 2) + 2(t − 3)u(t − 3).
Figure 2.31: A signal x(t).
Figure 2.32: Description of the signal of Example 2.10 in terms of unit step functions: the ramp t and the component x1(t) (top), the ramp −2(t − 3) and the component x2(t) (bottom).
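The two expressions for x(t) derived in Example 2.10 should agree everywhere, which the sketch below verifies on a dense grid (again assuming the u(0) = 1 convention):

```python
import numpy as np

def u(t):
    return (np.asarray(t, dtype=float) >= 0).astype(float)

def x_a(t):  # component form from Example 2.10
    t = np.asarray(t, dtype=float)
    return t * (u(t) - u(t - 2)) - 2 * (t - 3) * (u(t - 2) - u(t - 3))

def x_b(t):  # expanded step/ramp form
    t = np.asarray(t, dtype=float)
    return t * u(t) - 3 * (t - 2) * u(t - 2) + 2 * (t - 3) * u(t - 3)

t = np.linspace(-2.0, 6.0, 1601)
assert np.allclose(x_a(t), x_b(t))   # the two descriptions coincide
print(float(x_a(2.0)))               # 2.0, the peak of the ramp portion
```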
The notation used here is convenient: τ represents the width of the rectangle function while the rectangle centre is at zero, therefore any time transformations can be easily applied to the notation in (2.29) (Figure 2.34). A special case of the rectangle function defined in (2.29) is when τ = 1; it is called the unit rectangle function, rect(t) (also called the square pulse). It is a unit rectangle function because its width, height, and area are all one. Note that one can always perform time transformations on the unit rectangle function to arrive at the same notation used in (2.29). The signal in Figure 2.34 could be obtained by scaling and shifting the unit rectangle function; however, the notation in (2.29) is much easier when handling rectangle functions.
Figure 2.34: A shifted rect. function, 2 rect((t + 1)/4), with width four seconds and centre at −1.
δ(t) = 0,  t ≠ 0   (2.30)
∫_{−∞}^{∞} δ(t) dt = 1   (2.31)
Try to visualise this function: a signal of unit area that equals zero everywhere except at t = 0, where it is undefined! To be able to understand the definition of the delta function, let us consider a unit-area rectangular pulse defined by the function (Figure 2.35)
δa(t) = { 1/a, |t| < a/2 ; 0, |t| > a/2 }   (2.32)
Now imagine taking the limit of the function δa(t) as a approaches zero. Try to visualise what will happen: the width of the rectangular pulse becomes infinitesimally small, the height becomes infinitely large, and the overall area is maintained at unity. Using this approach, the unit impulse is defined as
δ(t) = lim_{a→0} δa(t)   (2.33)
Other pulses, such as a triangular pulse, may also be used in impulse approximations (Figure 2.36). The area under an impulse is called its strength, or sometimes its weight. An impulse with a strength of one is called a unit impulse. The impulse cannot be graphed in the same way as other functions because its amplitude is undefined at t = 0. For this reason a unit impulse is represented by a vertical arrow, a spear-like symbol. Sometimes the strength of the impulse is written beside it in parentheses, and sometimes the height of the arrow indicates the strength of the impulse. Figure 2.37 illustrates some ways of representing impulses graphically.
since the impulse exists only at t = 0. It is useful here to visualise the above product of the two functions. The unit impulse δ(t) is the limit of the pulse δa(t) defined in (2.32). The multiplication is then a pulse whose height at t = 0 is g(0)/a and whose width is a. In the limit as a approaches zero the pulse becomes an impulse and its strength is g(0) (Figure 2.38). Similarly, if a function g(t) is multiplied by an impulse δ(t − t0) (an impulse located at t = t0), then
g(t)δ(t − t0) = g(t0)δ(t − t0)   (2.35)
(Figure 2.38). Using the definition of δa(t) we can rewrite the integral as
A = (1/a) ∫_{−a/2}^{a/2} g(t) dt   (2.36)
Now imagine taking the limit of this integral as a approaches zero. In the limit, the two limits of integration approach zero from above and below. Since g(t) is finite and continuous at t = 0, as a approaches zero the value of g(t) approaches the constant g(0) and can be taken outside the integral. Then
lim_{a→0} A = g(0) lim_{a→0} (1/a) ∫_{−a/2}^{a/2} dt = g(0) lim_{a→0} (1/a)(a) = g(0)   (2.37)
So in the limit as a approaches zero, the function δa(t) has the interesting property of extracting (hence the name sifting) the value of any continuous finite function g(t) at time t = 0, when the product of δa(t) and g(t) is integrated between any two limits which include time t = 0. In other words,
∫_{−∞}^{∞} g(t)δ(t) dt = lim_{a→0} ∫_{−∞}^{∞} g(t)δa(t) dt = g(0)   (2.38)
This result means that the area under the product of a function with an impulse δ(t) is equal to the value of that function at the instant where the unit impulse is located. From (2.35) it follows that
∫_{−∞}^{∞} g(t)δ(t − t0) dt = g(t0) ∫_{−∞}^{∞} δ(t − t0) dt = g(t0)   (2.40)
since the integral of δ(t − t0) over all time is 1.
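The limiting argument behind (2.37) and (2.38) can be sketched numerically: integrating g against the narrow pulse δa(t − t0) amounts to averaging g over the pulse's support, which tends to g(t0) as a shrinks. The helper `sift` is an illustration of this construction, not a function from the notes:

```python
import numpy as np

def sift(g, t0, a, n=200001):
    """(1/a) * integral of g over [t0 - a/2, t0 + a/2], i.e. the integral
    of g(t) * delta_a(t - t0)."""
    t = np.linspace(t0 - a / 2, t0 + a / 2, n)
    return np.mean(g(t))

for a in (1.0, 0.1, 0.001):
    print(a, sift(np.cos, t0=0.5, a=a))  # approaches cos(0.5) ≈ 0.8776
```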
The unit impulse δ(t) can be defined by using the sampling property: when it is multiplied by any function x(t) which is finite and continuous at t = 0, and the product is integrated between limits which include t = 0, the result is
x(0) = ∫_{−∞}^{∞} x(t)δ(t) dt
One can argue: what if one can find a function other than the impulse function that satisfies the sampling property? The answer is that it must be equivalent to the impulse δ(t). The next property explores this argument.
The result shows that du/dt satisfies the sampling property of δ(t). Therefore
du/dt = δ(t)   (2.41)
Consequently
∫_{−∞}^{t} δ(τ) dτ = u(t)   (2.42)
The result in (2.42) can be obtained graphically by observing that the area from −∞ to t is zero if t < 0 and unity if t > 0:
∫_{−∞}^{t} δ(τ) dτ = { 0, t < 0 ; 1, t > 0 } = u(t)
The same result could have been obtained by considering a function g(t) and its derivative ġ(t), as in Figure 2.39. In the limit as a approaches zero, the function g(t) approaches the unit step function. In that same limit the width of ġ(t) approaches zero while maintaining a unit area, which is the same as the initial definition of δa(t). The limit as a approaches zero of ġ(t) is called the generalised derivative of u(t).
Figure 2.39: Functions which approach the unit step and unit impulse
The important feature of the unit impulse function is not its shape but the fact that its width approaches zero while the area remains at unity. Therefore, when time transformations are applied to δ(t), in particular scaling, it is the strength that matters and not the shape of δ(t) (Figure 2.40). It is helpful to note that
∫_{−∞}^{∞} δ(αt) dt = ∫_{−∞}^{∞} δ(λ) dλ/|α| = 1/|α|   (2.43)
and so
δ(αt) = δ(t)/|α|   (2.44)
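The scaling result (2.43) also holds for the rectangular approximation δa: the area under δa(αt) is 1/|α| for any pulse width a, as the sketch below checks with an arbitrary α (the helper name is illustrative):

```python
import numpy as np

def area_scaled_pulse(alpha, a, n=400001):
    """Riemann-sum area of delta_a(alpha * t)."""
    t = np.linspace(-2 * a, 2 * a, n)
    y = np.where(np.abs(alpha * t) < a / 2, 1.0 / a, 0.0)  # delta_a(alpha t)
    return np.sum(y) * (t[1] - t[0])

print(area_scaled_pulse(alpha=-4.0, a=0.5))  # ≈ 0.25 = 1/|alpha|
print(area_scaled_pulse(alpha=2.0, a=0.1))   # ≈ 0.5  = 1/|alpha|
```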
8. x(t)δ(t) = x(0)δ(t).
The DT delta function δ[n] is referred to as the unit sample that occurs at n = 0, and the shifted function δ[n − k] as the sample that occurs at n = k:
δ[n − k] = { 1, n = k ; 0, n ≠ k }
• Case 1: Σ_{m=−∞}^{n} δ[m] = 0 for n < 0. This is true since δ[m] has a value of one only when m = 0 and zero anywhere else; the upper limit of the summation is less than zero, thus δ[m = 0] is not included in the summation.
• Case 2: On the other hand, if n ≥ 0, δ[m = 0] will be included in the summation, therefore Σ_{m=−∞}^{n} δ[m] = 1.
In summary,
Σ_{m=−∞}^{n} δ[m] = { 1, n ≥ 0 ; 0, n < 0 } = u[n]
Figure 2.42: A DT unit impulse function
3. u[n] − u[n − 1] = δ[n]; this can be clearly seen in Figure 2.43 as you subtract the two signals from each other.
4. Σ_{k=0}^{∞} δ[n − k] = u[n].
5. x[n]δ[n] = x[0]δ[n].
6. x[n]δ[n − k] = x[k]δ[n − k].
7. The DT unit impulse is not affected by scaling, i.e. δ[αn] = δ[n].
8. I will leave some other important properties, in particular the sifting property of the DT unit impulse, to a later stage of this course.
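Unlike their CT counterparts, these DT identities can be verified exactly on a finite index range, as the sketch below does for n = −5..5:

```python
import numpy as np

n = np.arange(-5, 6)
delta = (n == 0).astype(int)   # unit sample
u = (n >= 0).astype(int)       # unit step

# property 2: running sum of delta[m] up to n equals u[n]
assert np.array_equal(np.cumsum(delta), u)
# property 3: u[n] - u[n-1] = delta[n]  (u[n-1] shifted in; u[-6] = 0)
u_prev = np.concatenate(([0], u[:-1]))
assert np.array_equal(u - u_prev, delta)
# property 6: x[n] * delta[n-k] = x[k] * delta[n-k]
x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5])
k = 2
dk = (n == k).astype(int)
assert np.array_equal(x * dk, x[np.flatnonzero(n == k)[0]] * dk)
print("all DT impulse identities hold")
```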
sinc(t) = sin(πt)/(πt)   (2.46)
At t = 0 the quotient is indeterminate; by L'Hôpital's rule,
lim_{t→0} sinc(t) = lim_{t→0} sin(πt)/(πt) = lim_{t→0} π cos(πt)/π = 1.
Chapter 3
Description of Systems
3.1 Introduction
The words signal and system were defined very generally in Chapter 1. A system can be viewed as any process or interaction of operations that transforms an input signal into an output signal with properties different from those of the input signal. A system may consist of physical components (hardware realization) or of an algorithm that computes the output signal from the input signal (software realization).
One way to define a system is as anything that performs a function: it operates on something to produce something else. It can be thought of as a mathematical operator. A CT system operates on a CT input signal to produce a CT output, i.e. y(t) = H{x(t)}. H is the operator denoting the action of a system; it specifies the operation to be performed and also identifies the system. On the other hand, a DT system operates on a DT signal to produce a DT output (Figure 3.1). I will sometimes use the following notation to describe a system:
x(t) −−H−→ y(t)
Figure 3.1: CT and DT systems block diagrams
3.2.1 Memory
A system's output or response at any instant t generally depends upon the entire past input. However, there are systems for which the output at any instant t depends only on the input at that instant and not on any input at any other time. Such systems are said to have no memory, or to be memoryless. The only input contributing to the output of the system occurs at the same time as the output; the system has no stored information of any past inputs, thus the term memoryless. Such systems are also called static or instantaneous systems. Otherwise, the system is said to be dynamic (or a system with memory). Instantaneous systems are a special case of dynamic systems. Here are some examples:
• y(t) = 2x(t), this is a memoryless system.
• y(t) = x(2t), system with memory.
• y(t) = (1/C) ∫_{−∞}^{t} x(τ) dτ, a system with memory.
• A voltage divider circuit is a memoryless system (Figure 3.3).
Figure 3.3: A voltage divider
3.2.2 Invertibility
A system H performs certain operations on input signals. If we can obtain the
input x(t) back from the output y(t) by some operation, the system H is said
to be invertible. Thus, an inverse system H−1 can be created so that when
the output signal is fed into it, the input signal can be recovered (Figure 3.4).
For a non-invertible system, different inputs can result in the same output,
and it is impossible to determine the input for a given output. Therefore, for
an invertible system it is essential that distinct inputs applied to the system
produce distinct outputs so that there is one-to-one mapping between an input
and the corresponding output. An example of a system that is not invertible is
a system that performs the operation of squaring the input signals, y(t) = x2 (t).
For any given input x(t) it is possible to determine the value of the output y(t). However, if we attempt to find the input, given the output, by rearranging the relationship into x(t) = ±√y(t), we face a problem: the square root has multiple values; for example, √4 = ±2. Therefore, there is no one-to-one mapping between an input and the corresponding output signal. In other words, we have the same output for different inputs. For a system that is invertible, consider an inductor whose input-output relationship is described by
y(t) = (1/L) ∫_{−∞}^{t} x(τ) dτ
The operation representing the inverse system is simply L d/dt.
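The inductor pair (running integral forward, scaled derivative inverse) can be sketched with a crude discretisation; the recovery is then only approximate, with an error set by the step size. The signal and constants below are arbitrary illustrations:

```python
import numpy as np

# Forward system y = (1/L) * integral of x, inverted by x = L * dy/dt.
L = 0.5
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
x = np.sin(2 * np.pi * 3 * t)        # arbitrary test input
y = np.cumsum(x) * dt / L            # running integral (forward system)
x_rec = L * np.gradient(y, dt)       # inverse system: L * d/dt
err = np.max(np.abs(x_rec[2:-2] - x[2:-2]))  # ignore the grid edges
print(err)  # small: only discretisation error remains
```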
3.2.3 Causality
A causal system is one for which the output at any instant t0 depends only on the value of the input x(t) for t ≤ t0. In other words, the value of the current output depends only on current and past inputs. This should seem obvious: how could a system respond to an input signal that has not yet been applied? Simply, the output cannot start before the input is applied. A system that violates the condition of causality is called a noncausal system. A noncausal system is also called anticipative, which means the system knows the input in the future and acts on this knowledge before the input is applied. Noncausal systems do not exist in reality, as we do not know yet how to build a system that can respond to inputs not yet applied. As an example consider the system specified by y(t) = x(t + 1). If we apply an input starting at t = 0, the output would begin at t = −1, as seen in Figure 3.5; hence the system is noncausal.
A system whose output is a running integral of the input is clearly a causal system, since the output y(t) depends on inputs that occur from −∞ up to time t (the upper limit of the integral). If the upper limit is given as t + 1, the system is noncausal.
3.2.4 Stability
A system is stable if a bounded input signal yields a bounded output signal. A signal is said to be bounded if its absolute value is less than some finite value B for all time,
|x(t)| ≤ B < ∞,  −∞ < t < ∞.
Figure 3.5: An input x(t) starting at t = 0 and the corresponding output x(t + 1), which begins at t = −1.
A system for which the output signal is bounded when the input signal is
bounded is called bounded-input-bounded-output (BIBO) stable system.
Figure 3.7: Outputs of the system described by y[n] = x[2n] to two different
inputs.
Additivity Property
The additivity property of a system implies that if several inputs are acting on
the system, then the total output of the system can be determined by considering
each input separately while assuming all the other inputs to be zero. The total
output is then the sum of all the component outputs. This property may be
expressed as follows: if an input x1 (t) acting alone produces an output y1 (t),
and if another input x2 (t), also acting alone, has an output y2 (t), then, with
both inputs acting together on the system, the total output will be y1 (t) + y2 (t).
Thus, if
x1(t) −−H−→ y1(t) and x2(t) −−H−→ y2(t)
then
x1(t) + x2(t) −−H−→ y1(t) + y2(t).
A system is linear if both the homogeneity and the additivity properties are satisfied. Both properties can be combined into one property (superposition), which can be expressed as follows: if
x1(t) −−H−→ y1(t) and x2(t) −−H−→ y2(t)
then, for arbitrary constants α and β,
αx1(t) + βx2(t) −−H−→ αy1(t) + βy2(t).
Example 3.2 Determine whether the system described by the differential equation
aÿ(t) + by²(t) = x(t)
is linear or nonlinear.
Solution Consider two individual inputs x1(t) and x2(t); the equations describing the system for the two inputs acting alone would be
aÿ1(t) + by1²(t) = x1(t) and aÿ2(t) + by2²(t) = x2(t)
Adding the two equations yields
a[ÿ1(t) + ÿ2(t)] + b[y1²(t) + y2²(t)] = x1(t) + x2(t)
Remark For a system to be linear, a zero input signal must imply a zero output. Consider for example the system
y[n] = 2x[n] + x0
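The failure of linearity for this system is easy to demonstrate numerically: with a nonzero constant x0, additivity breaks and a zero input produces a nonzero output. The input sequences below are arbitrary:

```python
import numpy as np

# The system y[n] = 2 x[n] + x0 with x0 != 0 is not linear.
x0 = 1.0
H = lambda x: 2 * x + x0

x1 = np.array([1.0, -1.0, 2.0])
x2 = np.array([0.5, 0.0, -2.0])

assert not np.allclose(H(x1 + x2), H(x1) + H(x2))  # additivity fails
print(H(np.zeros(3)))  # [1. 1. 1.] — zero input, nonzero output
```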
The component of x[n] at n = k is x[k]δ[n − k], and x[n] is the sum of all these components summed from k = −∞ to ∞. Therefore
x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k]   (3.1)
Figure: a DT signal x[n] and its weighted impulse components x[−2]δ[n + 2], x[−1]δ[n + 1], x[0]δ[n], x[1]δ[n − 1] and x[2]δ[n − 2].
The expression in (3.1) is the DT version of the sifting property: x[n] is written as a weighted sum of unit impulses.
Example 3.3 Express the signal shown in Figure 3.9 as a weighted sum of impulse components.
Figure 3.9: A DT signal x[n]

3.3.2 The Convolution Sum
We are interested in finding the system output y[n] for an arbitrary input x[n], knowing the impulse response h[n] of a DT LTI system. There is a systematic way of finding how the output responds to an input signal; it is called convolution. The convolution technique is based on a very simple idea: no matter how complicated your input signal is, one can always express it in terms of weighted impulse components. For LTI systems we can find the response of the system to one impulse component at a time and then add all those responses to form
[Figure: the solution to Example 3.3, showing x[n] expressed as the weighted impulse components 2δ[n + 1] + 2δ[n] − δ[n − 1] + δ[n − 2].]
the total system response. Let h[n] be the system response (output) to the
impulse input δ[n]. Thus, if
δ[n] −H→ h[n]
then, because the system is time-invariant,
δ[n − k] −H→ h[n − k]
and, because the system is also linear, multiplying the input by x[k] and summing
over all k gives
Σ_{k=−∞}^{∞} x[k] δ[n − k] −H→ Σ_{k=−∞}^{∞} x[k] h[n − k]
The left-hand side is simply x[n] [see equation (3.1)], and the right-hand side
is the system response y[n] to the input x[n]. Therefore
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]   (3.2)
The summation on the right-hand side is known as the convolution sum and is denoted by
y[n] = x[n] ∗ h[n]. Thus, in order to construct the response of a DT LTI
system to any input x[n], all we need to know is the system's impulse response
h[n].
The commutative property, x[n] ∗ h[n] = h[n] ∗ x[n], can easily be proven by
starting with the definition of convolution
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]
and substituting m = n − k, which gives
y[n] = Σ_{m=−∞}^{∞} h[m] x[n − m] = h[n] ∗ x[n]
If x[n] and h[n] have lengths of m and n elements respectively, then the
length of y[n] is m + n − 1 elements. In special cases this property can
appear to be violated; one should be careful to count samples with zero
amplitude that lie between the nonzero samples. Furthermore, the first
sample of the output is located at the sum of the locations of the first
samples of the two functions.
We shall evaluate the convolution sum first analytically and later with
graphical aid.
Example 3.4 Determine y[n] = x[n] ∗ h[n] for x[n] and h[n] as shown in Figure 3.11.
[Figure 3.11: x[n] = {1, −1, 1, 1} for −1 ≤ n ≤ 2 and h[n] = {2, 1, 0.5} for 0 ≤ n ≤ 2.]
Solution Since the system is an LTI one, the output is simply the sum of the scaled, shifted impulse responses (Figure 3.12):
y[n] = x[−1]h[n + 1] + x[0]h[n] + x[1]h[n − 1] + x[2]h[n − 2] = h[n + 1] − h[n] + h[n − 1] + h[n − 2]
Remark It would have been easier to determine y[n] = h[n] ∗ x[n]; try to
verify this yourself. The answer is the same because of the commutative
property. Note also that the output width is 3 + 4 − 1 = 6 elements and that
the first sample of the output appears at n = 0 + (−1) = −1, which follows
from the width property.
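These results can be reproduced with `np.convolve` (a sketch assuming numpy; the signal values are those read off Figure 3.11):

```python
import numpy as np

# x[n] = {1, -1, 1, 1} starting at n = -1; h[n] = {2, 1, 0.5} starting at n = 0
x, x_start = np.array([1.0, -1.0, 1.0, 1.0]), -1
h, h_start = np.array([2.0, 1.0, 0.5]), 0

y = np.convolve(x, h)        # evaluates the convolution sum (3.2)
y_start = x_start + h_start  # width property: first output sample at n = -1

assert len(y) == len(x) + len(h) - 1          # 4 + 3 - 1 = 6 elements
assert y_start == -1
assert np.allclose(y, [2.0, -1.0, 1.5, 2.5, 1.5, 0.5])
assert np.allclose(np.convolve(h, x), y)      # commutative property
```

Both the width property and commutativity are confirmed by the assertions.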
[Figure 3.12: the components h[n + 1], −h[n], h[n − 1], and h[n − 2], whose sum gives y[n].]
The first transformation is a time reflected version of h[k], and the second trans-
formation shifts the already reflected function n units to the right for positive
n; for negative n, the shift is to the left. The convolution operation can be
performed as follows:
1. Reflect h[k] about the vertical axis (n = 0) to obtain h[−k].
2. Time shift h[−k] by n units to obtain h[n − k]. For n > 0, the shift is to
the right; for n < 0, the shift is to the left.
3. Multiply x[k] by h[n − k] and add all the products to obtain y[n]. The
procedure is repeated for each value n over the range −∞ to ∞.
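The three steps above can be implemented directly (a sketch assuming numpy; for simplicity both sequences are indexed from 0):

```python
import numpy as np

def conv_sum(x, h):
    """Convolution by the reflect-shift-multiply-add procedure above."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        # h[n - k] is h[k] reflected about k = 0 and shifted by n
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, -1.0, 1.0, 1.0])   # Example 3.4 values (origin shifted to 0)
h = np.array([2.0, 1.0, 0.5])
assert np.allclose(conv_sum(x, h), np.convolve(x, h))
```

The inner loop is exactly step 3: multiply x[k] by the reflected, shifted h[n − k] and add the products.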
Example 3.5 Determine y[n] = x[n] ∗ h[n] graphically, where x[n] and h[n] are defined as in
Figure 3.11.
Solution Before starting the graphical procedure it is a good idea to
determine where the first sample of the output appears (this was found
earlier to be at n = −1). Furthermore, the width property implies that the
number of elements in y[n] is six. Thus, y[n] = 0 for −∞ < n ≤ −2 and
y[n] = 0 for n ≥ 5, so the only interesting range is −1 ≤ n ≤ 4. Now
for n = −1,
y[−1] = Σ_{k=−∞}^{∞} x[k] h[−1 − k]
and the negative n (n = −1) implies a time shift to the left for the function
h[−1 − k]. Next, multiply x[k] by h[−1 − k] and add all the products to obtain
y[−1] = 2. We keep repeating the procedure, incrementing n by one each time;
note that incrementing n by one means shifting h[n − k] to the right by one
sample. Figures 3.13 and 3.14 illustrate the procedure for n = −1, 0, 1, and 2.
Continuing in this manner for n = 3 and 4, we obtain y[n] as illustrated in
Figure 3.15.
[Figures 3.13 and 3.14: x[k] multiplied by h[−1 − k], h[−k], h[1 − k], and h[2 − k], giving y[−1] = 2, y[0] = −1, y[1] = 3/2, and y[2] = 5/2.]
The alternative procedure is basically the same; the only difference is that
instead of presenting the data as graphical plots, we display it as sequences of
numbers on tapes. The following example demonstrates the idea.
[Figure 3.15: the output y[n] = {2, −1, 3/2, 5/2, 3/2, 1/2} for −1 ≤ n ≤ 4.]
Example 3.6 Determine y[n] = x[n] ∗ h[n], using the sliding tape method where x[n] and
h[n] are defined as in Figure 3.11.
Solution In this procedure we write the sequences x[n] and h[n] in the slots of
two tapes. Now leave the x tape fixed to correspond to x[k]. The h[−k] tape is
obtained by time inverting the h[k] tape about the origin (n = 0) Figure 3.16.
[Figure 3.16: the x tape (1, −1, 1, 1) held fixed as x[k]; the h tape (2, 1, 0.5) time-inverted about n = 0 to give h[−k] = (0.5, 1, 2).]
Before going any further we have to align the slots such that the first element in
the stationary x[k] tape corresponds to the first element of the already inverted
h[−k] tape as illustrated in Figure 3.17. We now shift the inverted tape by n
slots, multiply values on the two tapes in adjacent slots, and add all the products to
find y[n]. Figure 3.17 shows the cases n = −1, 0, and 1. For the case of n = 1,
for example,
y[1] = (1 × 0.5) + (−1 × 1) + (1 × 2) = 1.5
Continuing in the same fashion for all n we obtain the same answer as is in
Figure 3.15.
[Figure 3.17: the sliding tapes aligned and shifted for n = −1, 0, and 1, giving y[−1] = 2, y[0] = −1, and y[1] = 1.5.]
becomes exact, and the rectangular pulses becomes impulses delayed by various
amounts. The system response to the input x(t) is then given by the sum of
the system’s responses to each impulse component of x(t). Figure 3.19 shows
x(t) as a sum of rectangular pulses, each of width Δτ. In the limit as Δτ → 0,
each pulse approaches an impulse having a strength equal to the area under
that pulse. For example, the pulse located at t = nΔτ can be expressed as
x(nΔτ) rect((t − nΔτ)/Δτ)
and will approach an impulse at the same location with strength x(nΔτ )Δτ ,
which can be represented by
[x(nΔτ)Δτ] δ(t − nΔτ)   (3.12)
where x(nΔτ)Δτ is the strength of the impulse.
If we know the impulse response of the system h(t), the response to the impulse
in (3.12) will simply be [x(nΔτ )Δτ ]h(t − nΔτ ) since
δ(t) −H→ h(t)
δ(t − nΔτ) −H→ h(t − nΔτ)
[x(nΔτ)Δτ] δ(t − nΔτ) −H→ [x(nΔτ)Δτ] h(t − nΔτ)   (3.13)
The response in (3.13) represents the response to only one of the impulse com-
ponents of x(t). The total response y(t) is obtained by summing all such com-
ponents (with Δτ → 0)
lim_{Δτ→0} Σ_{n=−∞}^{∞} x(nΔτ)Δτ δ(t − nΔτ)  −H→  lim_{Δτ→0} Σ_{n=−∞}^{∞} x(nΔτ)Δτ h(t − nΔτ)
where the left-hand side is the input x(t) and the right-hand side is the output y(t).
In the limit Δτ → 0 the sums become integrals, giving
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ   (3.14)
and the integral in (3.14) is known as the convolution integral and denoted by
y(t) = x(t) ∗ h(t).
If x(t) has a duration of T1 and h(t) has a duration of T2, then the
duration of y(t) is T1 + T2. Furthermore, the output first appears at the
time equal to the sum of the times at which the two functions first appear.
This property of the convolution integral has no counterpart for the con-
volution sum.
One of the crucial points to remember here is that the integration is performed
with respect to τ, so that t is treated as a constant. This consideration is also
important when we sketch the graphical representations of the functions x(τ)
and h(t − τ); both of these functions should be sketched as functions of τ, not
of t. The convolution operation can be performed as follows:
3. Time shift h(−τ ) by t0 seconds to obtain h(t − t0 ). For t0 > 0, the shift
is to the right; for t0 < 0, the shift is to the left.
4. Find the area under the product of x(τ ) by h(t0 − τ ) to obtain y(t0 ), the
value of the convolution at t = t0 .
Example 3.7 Determine y(t) = x(t) ∗ h(t) for x(t) and h(t) as shown in Figure 3.20.
[Figure 3.20: x(t) and h(t), each a unit-amplitude rectangular pulse over 0 ≤ t ≤ 2 seconds.]
Solution Figure 3.21(a) shows x(τ ) and h(−τ ) as functions of τ . The function
h(t − τ ) is now obtained by shifting h(−τ ) by t. If t is positive, the shift is to
the right; if t is negative the shift is to the left. Figure 3.21(a) shows that for
negative t, h(t − τ ) does not overlap x(τ ), and the product x(τ )h(t − τ ) = 0, so
that y(t) = 0 for t < 0. Figure 3.21(b) shows the situation for 0 ≤ t ≤ 2 , here
x(τ ) and h(t − τ ) do overlap and the product is nonzero only over the interval
0 ≤ τ ≤ t (shaded area). Therefore,
y(t) = ∫_0^t x(τ) h(t − τ) dτ,  0 ≤ t < 2
All we need to do now is substitute the correct expressions for x(τ) and h(t − τ)
in this integral:
y(t) = ∫_0^t (1)(1) dτ = t,  0 ≤ t < 2
As we keep right shifting h(−τ ) to obtain h(t − τ ), the next interesting range
for t is 2 ≤ t ≤ 4, (Figure 3.21(c).) Clearly, x(τ ) overlaps with h(t − τ ) over the
shaded interval, which is t − 2 ≤ τ < 2. Therefore,
y(t) = ∫_{t−2}^{2} x(τ) h(t − τ) dτ = ∫_{t−2}^{2} (1)(1) dτ = 4 − t,  2 ≤ t ≤ 4
[Figure 3.21: x(τ) and h(t − τ) sketched versus τ for (a) t < 0, (b) 0 ≤ t < 2, and (c) 2 ≤ t ≤ 4.]
It is clear that for t > 4, x(τ) will not overlap h(t − τ), which implies y(t) = 0
for t > 4. Therefore the result of the convolution is (Figure 3.22)
y(t) = { 0, t < 0;  t, 0 ≤ t < 2;  4 − t, 2 ≤ t ≤ 4;  0, t > 4 }
Figure 3.22: Convolution of x(t) and h(t)
Hint To check your answer, use the property that the area under the
convolution y(t) equals the product of the areas of the two signals entering
into the convolution. The area can be computed by integrating equation (3.14)
over the interval −∞ < t < ∞, giving
∫_{−∞}^{∞} y(t) dt = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(τ) h(t − τ) dτ dt
               = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} h(t − τ) dt] dτ
               = ∫_{−∞}^{∞} x(τ) [area under h(t)] dτ
               = area under x(t) × area under h(t)
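The area check can be carried out numerically for Example 3.7 (a sketch assuming numpy; the two unit pulses of Figure 3.20 are sampled on a fine grid):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 2, dt)
x = np.ones_like(t)          # x(t): unit pulse on [0, 2), area 2
h = np.ones_like(t)          # h(t): unit pulse on [0, 2), area 2

y = np.convolve(x, h) * dt   # Riemann-sum approximation of (3.14)

area_x = x.sum() * dt
area_h = h.sum() * dt
area_y = y.sum() * dt

assert abs(area_y - area_x * area_h) < 1e-6   # area(y) = 2 * 2 = 4
assert abs(y.max() - 2.0) < 0.01              # triangle peak y(2) = 2
```

The numerical result also reproduces the triangular shape derived above, with peak value 2 at t = 2.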
Example 3.8 Determine graphically y(t) = x(t) ∗ h(t) for x(t) = e^{−t}u(t) and h(t) = e^{−2t}u(t).
Solution Figure 3.23 shows x(t) and h(t), and Figure 3.24(a) shows x(τ) and
h(−τ) as functions of τ. The function h(t − τ) is obtained by shifting h(−τ) by t.
Clearly, h(t − τ) does not overlap x(τ) for t < 0, and the product x(τ)h(t − τ) = 0,
so that y(t) = 0 for t < 0. Figure 3.24(b) shows the situation for t ≥ 0; here
x(τ) and h(t − τ) overlap over the shaded interval (0, t). Therefore,
y(t) = ∫_0^t e^{−τ} e^{−2(t−τ)} dτ = e^{−2t} ∫_0^t e^{τ} dτ = e^{−t} − e^{−2t},  t ≥ 0
[Figure 3.23: x(t) = e^{−t}u(t) and h(t) = e^{−2t}u(t).]
[Figure 3.24: x(τ) and h(t − τ) versus τ for (a) t < 0 and (b) t > 0.]
Thus
y(t) = (e^{−t} − e^{−2t}) u(t)
Figure 3.25: y(t)
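The closed-form answer can be checked against a discrete approximation of the convolution integral (a sketch assuming numpy):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
x = np.exp(-t)               # x(t) = e^{-t} u(t), sampled for t >= 0
h = np.exp(-2*t)             # h(t) = e^{-2t} u(t)

# Riemann-sum approximation of the convolution integral on the same grid
y_num = np.convolve(x, h)[:len(t)] * dt
y_exact = np.exp(-t) - np.exp(-2*t)           # the result derived above

assert np.max(np.abs(y_num - y_exact)) < 0.005
```

Shrinking dt makes the discrete sum converge to the integral, mirroring the limiting argument that produced (3.14).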
Example 3.9 Compute the convolution x(t) ∗ h(t), where x(t) and h(t) are as in Figure 3.26.
[Figure 3.26: x(t), a unit pulse on −1 ≤ t < 0, and h(t), a ramp rising from 0 to 1 on −1 ≤ t < 0 followed by a unit pulse on 0 ≤ t < 1.]
Here, x(t) has a simpler mathematical expression than that of h(t), so it is preferable
to invert x(t). Hence, we shall determine h(t) ∗ x(t) rather than x(t) ∗ h(t).
According to Figure 3.26, x(t) and h(t) are
x(t) = { 1, −1 ≤ t < 0;  0, otherwise }
h(t) = { t + 1, −1 ≤ t < 0;  1, 0 ≤ t < 1;  0, otherwise }
Figure 3.27 demonstrates the overlapping of the two signals h(τ) and x(t − τ).
We can see that for t < −2 the product h(τ)x(t − τ) is always zero. For
−2 ≤ t < −1,
y(t) = ∫_{−1}^{t+1} (τ + 1)(1) dτ = [τ²/2 + τ]_{−1}^{t+1} = (1/2)(t + 2)²,  −2 ≤ t < −1
[Figure 3.27: h(τ) and x(t − τ) versus τ for the ranges t < −2, −2 ≤ t < −1, −1 ≤ t < 0, and 0 ≤ t < 1.]
In other words the only memoryless systems are what we call constant gain
systems.
For a CT system to be causal, y(t) must not depend on x(τ) for τ > t. We can
see that this will be so if
h(t − τ) = 0  for τ > t
Letting λ = t − τ, this implies
h(λ) = 0  for λ < 0
In this case the convolution integral becomes
y(t) = ∫_{−∞}^{t} x(τ) h(t − τ) dτ = ∫_0^{∞} h(τ) x(t − τ) dτ
In Chapter 3 we saw how to obtain the response of a linear time invariant sys-
tem to an arbitrary input represented in terms of the impulse function. The
response was obtained in the form of the convolution integral. In this chapter
we explore other ways of expressing an input signal in terms of other signals.
In particular we are interested in representing periodic signals in terms of com-
plex exponentials, or equivalently, in terms of sine and cosine waveforms. This
representation of signals leads to the Fourier series, named after the French
physicist Jean Baptiste Fourier. Fourier was the first to suggest that peri-
odic signals could be represented by a sum of sinusoids. The concept is really
simple: consider a periodic signal with fundamental period T0 and fundamental
frequency ω0 = 2πf , this periodic signal can be expressed as a linear combi-
nation of harmonically related sinusoids as shown in Figure 4.1. In Fourier
63
We say the basis is orthogonal if the inner product of any two different vectors
of the basis set is zero, and to visualize, they are simply at right angles. It is
clear that the vectors e1 , e2 and e3 form an orthogonal basis since
< e1, e2 > = e1ᵀ e2 = 0
< e2, e3 > = e2ᵀ e3 = 0
< e3, e1 > = e3ᵀ e1 = 0
where g ∗ (t) stands for the complex conjugate of the signal. For any basis set
to be orthogonal the inner product of every single element of the set to every
other element must be zero. A set of signals Φi , i = 0, ±1, ±2, ∙ ∙ ∙ , is said to be
orthogonal over an interval (a, b) if
∫_a^b Φm(t) Φn*(t) dt = { En, m = n;  0, m ≠ n }   (4.3)
and En is simply the signal energy over the interval (a, b). If the energies En = 1
for all n, the Φn(t) are said to be orthonormal signals. An orthogonal set can
always be normalized by dividing each signal in the set by √En.
Example 4.1 Show that the signals Φm(t) = sin mt, m = 1, 2, 3, ···, form an orthogonal
set on the interval −π < t < π.
Solution We start by showing that the inner product of each single element
of the set to every other element is zero
∫_{−π}^{π} Φm(t) Φn*(t) dt = ∫_{−π}^{π} (sin mt)(sin nt) dt
  = (1/2) ∫_{−π}^{π} cos(m − n)t dt − (1/2) ∫_{−π}^{π} cos(m + n)t dt
  = { π, m = n;  0, m ≠ n }
Since the energy in each signal equals π, the following set of signals forms an
orthonormal set over the interval −π < t < π:
sin t/√π, sin 2t/√π, sin 3t/√π, ···
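Orthogonality can also be verified numerically (a sketch assuming numpy; a Riemann sum over one full period is very accurate for these smooth periodic integrands):

```python
import numpy as np

N = 200000
dt = 2*np.pi / N
t = -np.pi + dt*np.arange(N)

def inner(m, n):
    """Inner product of sin(mt) and sin(nt) over (-pi, pi)."""
    return np.sum(np.sin(m*t) * np.sin(n*t)) * dt

assert abs(inner(1, 2)) < 1e-6           # different elements: zero
assert abs(inner(2, 5)) < 1e-6
assert abs(inner(3, 3) - np.pi) < 1e-6   # energy of each element: pi
```

Dividing each signal by √π would make every self inner product equal to one, i.e. an orthonormal set.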
Example 4.2 Show that the signals Φk(t) = e^{jkω0t}, k = 0, ±1, ±2, ···, form an orthogonal
set on the interval (0, T), where T = 2π/ω0.
and hence the signals (1/√T) e^{jkω0t} form an orthonormal set over the interval (0, T).
Evaluating the integral for the case k ≠ l is not trivial and is shown below:
∫_0^T e^{j(k−l)ω0 t} dt = [1/(j(k − l)ω0)] e^{j(k−l)ω0 t} |_0^T
  = [1/(j(k − l)ω0)] [e^{j(k−l)ω0 T} − 1],  with ω0 = 2π/T
  = 0
since e^{j2π(k−l)} = 1 for k ≠ l.
Now, consider expressing a signal x(t) with finite energy over the interval (a, b)
by an orthonormal set of signals Φi(t) over the same interval as
x(t) = Σ_{i=−∞}^{∞} ci Φi(t)   (4.4)
From (4.3), and since Φi(t) is an orthonormal set, (4.5) can be simplified as
ck = ∫_a^b x(t) Φk*(t) dt,  k = 0, ±1, ±2, ···   (4.6)
noting that the summation in (4.5) has a value only when i = k and is zero
otherwise. Note also that (4.6) is the inner product of the signal with the kth
element of the orthonormal set. If the set Φi(t) is an orthogonal set, then the
coefficients in (4.6) are
ck = (1/Ek) ∫_a^b x(t) Φk*(t) dt,  k = 0, ±1, ±2, ···   (4.7)
Example 4.3 Consider the signal x(t) defined over the interval (0, 3) as shown in Figure 4.3.
[Figure 4.3: x(t) over (0, 3), together with three basis signals Φ1(t), Φ2(t), and Φ3(t).]
The coefficients that represent the signal x(t), obtained using equation (4.6), are
given by
c1 = ∫_0^3 x(t) Φ1*(t) dt = 2
c2 = ∫_0^3 x(t) Φ2*(t) dt = 0
c3 = ∫_0^3 x(t) Φ3*(t) dt = 1
It is worth emphasizing here that the choice of the basis is not unique and many
other possibilities exist.
x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t}   (4.8)
where, from (4.7), the cn are complex constants given by
cn = (1/T) ∫_0^T x(t) e^{−jnω0t} dt,  n = 0, ±1, ±2, ···   (4.9)
Each term in the series has period T; hence, if the series converges, its sum is
periodic with period T. Such a series is called the complex exponential Fourier
series, and the cn are called the Fourier coefficients or spectral coefficients.
Furthermore, the interval of integration can be replaced by any other interval of
length T. Recall that we denote integration over an interval of length T by ∫_T.
Example 4.4 Find the exponential Fourier series for the signal x(t) in Figure 4.5
The line spectra of x(t) are shown in Figure 4.6. Note that the magnitude
spectrum has even symmetry while the phase spectrum has odd symmetry. The
spectrum exists only at n = 0, ±1, ±2, ±3, ···, corresponding to
ω = 0, ±ω0, ±2ω0, ±3ω0, ···, i.e., only at discrete values of ω. It is therefore a
discrete spectrum, and it is very common to see the spectrum written in terms
of the discrete unit impulse function. The magnitude spectrum cn in Figure 4.6
could be expressed as
cn = ··· + (2K/3π) δ[n + 3] + (2K/π) δ[n + 1] + (2K/π) δ[n − 1] + (2K/3π) δ[n − 3] + ···
[Figure 4.6: Line spectra for the periodic signal x(t) of Example 4.4: magnitude lines 2K/π, 2K/3π, 2K/5π, ··· with phases of ±π/2.]
Example 4.5 Find the exponential Fourier series of the impulse train δT (t) shown in Figure
4.7.
The amplitude spectrum for this function is given in Figure 4.8. Note that since
the Fourier coefficients are real, there is no need for the phase spectrum.
Example 4.6 Find the exponential Fourier series for the square signal x(t) in Figure 4.9.
cn = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jnω0t} dt
   = (1/T) ∫_{−T1}^{T1} e^{−jnω0t} dt = (1/(jT nω0)) [e^{jnω0T1} − e^{−jnω0T1}]
   = (2/(T nω0)) sin(nω0T1) = (2/(2πn)) sin(2πnT1/T)
   = (2T1/T) sinc(2nT1/T)   (4.10)
Consider the special case of T1 = π/2 and T = 2π in Example 4.6. Hence,
cn = (1/2) sinc(n/2)
The line spectra for this particular case are shown in Figure 4.10. The frequency
spectra provide an alternative description of the signal x(t), namely the frequency-domain
description. The time-domain description of x(t) is the one depicted in
Figure 4.9; the frequency-domain description of x(t) is as shown in Figure 4.10.
One can say the signal has a dual identity: the time-domain identity x(t) and the
frequency-domain identity (the frequency spectra). Together, the two identities
provide a better understanding of the signal. We notice some interesting features
of these spectra. First, the spectra exist for positive as well as negative values of
ω (the frequency). Second, the magnitude spectrum is an even function of ω and
the phase spectrum is an odd function of ω, a property that we explain in detail
later.
Figure 4.10: The magnitude and phase spectra of x(t) for T1 = π/2 and T = 2π
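The coefficients cn = (1/2) sinc(n/2) behind these spectra can be recomputed directly from (4.9) (a sketch assuming numpy, whose np.sinc uses the same sin(πx)/(πx) convention as (4.10)):

```python
import numpy as np

T = 2*np.pi                 # period, so omega0 = 1
N = 200000
dt = T / N
t = -T/2 + dt*np.arange(N)
x = (np.abs(t) < np.pi/2).astype(float)   # square wave with T1 = pi/2

def c(n):
    """Fourier coefficient computed numerically from equation (4.9)."""
    return np.sum(x * np.exp(-1j*n*t)) * dt / T

for n in range(-5, 6):
    assert abs(c(n) - 0.5*np.sinc(n/2)) < 1e-3   # c_n = (1/2) sinc(n/2)
```

As expected, c0 = 1/2 (the dc value) and the even harmonics beyond n = 0 vanish.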
It can be easily shown that this set is orthogonal over any interval T = 2π/ω0 ,
which is the fundamental period. Therefore,
∫_T cos nω0t cos mω0t dt = { 0, m ≠ n;  T/2, m = n ≠ 0 }   (4.12)
and
∫_T sin nω0t sin mω0t dt = { 0, m ≠ n;  T/2, m = n ≠ 0 }   (4.13)
and
∫_T sin nω0t cos mω0t dt = 0  for all n and m   (4.14)
If we select Φi (t) in (4.11) as basis function then we can now express a signal
x(t) by a trigonometric Fourier series over any interval of duration T as
or
x(t) = a0 + Σ_{n=1}^{∞} [an cos nω0t + bn sin nω0t]   (4.16)
Using
cn = (1/En) ∫_T x(t) Φn(t) dt
In order to see the close connection of the trigonometric Fourier series with the
exponential Fourier series, we shall re-derive the trigonometric Fourier series
from the exponential Fourier series.
For a real-valued signal x(t), the complex conjugate of cn is given by
cn* = [(1/T) ∫_0^T x(t) e^{−jnω0t} dt]*
    = (1/T) ∫_0^T x(t) e^{jnω0t} dt   (recall x(t) = x*(t) for a real signal)
    = c−n
Hence,
|c−n| = |cn|  and  ∠c−n = −∠cn   (4.19)
and this explains why the amplitude spectrum has even symmetry while the
phase spectrum has odd symmetry. This property can be clearly seen in the
examples of section 4.2.1. Furthermore, this allows us to regroup the exponential
series in (4.8) as follows:
x(t) = c0 + Σ_{n=−∞}^{−1} cn e^{jnω0t} + Σ_{n=1}^{∞} cn e^{jnω0t}
     = c0 + Σ_{n=1}^{∞} c−n e^{−jnω0t} + Σ_{n=1}^{∞} cn e^{jnω0t}
     = c0 + Σ_{n=1}^{∞} [c−n e^{−jnω0t} + cn e^{jnω0t}]   (4.20)
which is clearly as derived before. The result above becomes clearer when we
substitute e−jnω0 t = cos nω0 t − j sin nω0 t in
cn = (1/T) ∫_0^T x(t) e^{−jnω0t} dt
to obtain
cn = (1/T) ∫_T x(t) (cos nω0t − j sin nω0t) dt
   = (1/T) ∫_T x(t) cos nω0t dt − j (1/T) ∫_T x(t) sin nω0t dt
   = (1/2) an − j (1/2) bn   (4.22)
where the coefficients An and θn are computed from an and bn using (4.17) and
(4.18).
Example 4.7 Find the compact trigonometric Fourier series for the periodic square wave x(t)
illustrated in Figure 4.9, where T1 = π/2 and T = 2π.
where
a0 = (1/T) ∫_T x(t) dt = (1/2π) ∫_{−π/2}^{π/2} dt = 1/2
Note that a0 could easily have been deduced by inspecting x(t) in Figure 4.9; it
is the average value of x(t) over one period. Next,
an = (2/2π) ∫_{−π/2}^{π/2} cos nt dt = (2/nπ) sin(nπ/2)
   = { 0, n even;  2/πn, n = 1, 5, 9, 13, ···;  −2/πn, n = 3, 7, 11, 15, ··· }
and
bn = (2/2π) ∫_{−π/2}^{π/2} sin nt dt = 0
Therefore
x(t) = 1/2 + Σ_{n odd} an cos nt
     = 1/2 + (2/π) [cos t − (1/3) cos 3t + (1/5) cos 5t − (1/7) cos 7t + ···]   (4.26)
Only the cosine terms appear in the trigonometric series. The series is therefore
already in compact form except that the amplitudes of alternating harmonics
are negative. Now, by definition, amplitudes An are positive. The negative sign
can be expressed instead by introducing a phase of π radians since
− cos α = cos(α ± π)
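A quick numerical check of (4.26), assuming numpy: at t = 0 the square wave equals 1, and the partial sums of the series should converge to that value.

```python
import numpy as np

n = np.arange(1, 20001, 2)               # odd harmonics 1, 3, 5, ...
signs = np.where(n % 4 == 1, 1.0, -1.0)  # +cos t, -cos 3t, +cos 5t, ...

# Partial sum of (4.26) at t = 0, where cos(n*0) = 1 for every n
x0 = 0.5 + (2/np.pi) * np.sum(signs / n)

assert abs(x0 - 1.0) < 1e-3
```

The alternating tail of the series bounds the error by roughly the first omitted term, so ten thousand harmonics are more than enough here.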
and when compared with (4.23) and (4.24) implies |cn| = (1/2)An for n ≥ 1 and
∠cn = θn for positive n. From (4.19), ∠cn = −θn for negative n. In conclusion,
the connection between the trigonometric spectra and the exponential spectra
can be summarized as follows. The dc components c0 and a0 are identical in
both spectra. Moreover, the exponential amplitude spectrum |cn| is half of the
trigonometric amplitude spectrum An for n ≥ 1. The exponential angle spectrum
∠cn is identical to the trigonometric phase spectrum θn for positive n and equal
to −θn for negative n. We can therefore produce the exponential spectra merely
by inspection of the trigonometric spectra, and vice versa. The following example
demonstrates this feature.
Example 4.8 The trigonometric Fourier spectra of a certain periodic signal x(t)
are shown in Figure 4.11(a). By inspecting these spectra, sketch the corresponding
exponential Fourier spectra.
[Figure 4.11: (a) the trigonometric spectra An and θn; (b) the corresponding exponential spectra |cn| and ∠cn, defined over both positive and negative n.]
Example 4.9 For the train of impulses in Example 4.5 sketch the trigonometric spectrum
and write the trigonometric Fourier series.
Figure 4.12 shows the trigonometric Fourier spectrum. From this spectrum we
can express the impulse train δT (t) as
δT(t) = (1/T) [1 + 2(cos ω0t + cos 2ω0t + cos 3ω0t + ···)],  ω0 = 2π/T
Example 4.10 Find the compact trigonometric Fourier series for the triangular periodic signal
x(t) illustrated in Figure 4.13 and sketch the amplitude and phase spectra.
[Figure 4.13: the triangular periodic signal x(t).]
where
x(t) = { 2t, |t| ≤ 1/2;  2(1 − t), 1/2 < t ≤ 3/2 }
Therefore
an = 0
and
bn = ∫_{−1/2}^{1/2} 2t sin nπt dt + ∫_{1/2}^{3/2} 2(1 − t) sin nπt dt
   = { 0, n even;  8/n²π², n = 1, 5, 9, 13, ···;  −8/n²π², n = 3, 7, 11, 15, ··· }
Therefore
x(t) = (8/π²) [sin πt − (1/9) sin 3πt + (1/25) sin 5πt − (1/49) sin 7πt + ···]
In order to plot the Fourier spectra, the series must be converted into the compact
trigonometric form of Equation (4.25). This can be done using the identity
±sin kt = cos(kt ∓ π/2)
Hence, x(t) can be expressed as
x(t) = (8/π²) [cos(πt − π/2) + (1/9) cos(3πt + π/2) + (1/25) cos(5πt − π/2) + (1/49) cos(7πt + π/2) + ···]
Note in this series how all the even harmonics are missing. The phases of the odd
harmonics alternate between −π/2 and π/2. Figure 4.14 shows the amplitude and
phase spectra for x(t).
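As a check on this series, assuming numpy: at t = 1/2 the triangle has x(1/2) = 2(1/2) = 1, and every term of the series is positive there, so the partial sums climb quickly to 1.

```python
import numpy as np

n = np.arange(1, 2001, 2)                # odd harmonics
signs = np.where(n % 4 == 1, 1.0, -1.0)  # +, -, +, - pattern of the b_n

# Series value at t = 1/2: sin(n*pi/2) alternates +1, -1, +1, ... for odd n
x_half = (8/np.pi**2) * np.sum(signs * np.sin(n*np.pi/2) / n**2)

assert abs(x_half - 1.0) < 1e-3
```

The 1/n² decay of the coefficients (compared with 1/n for the square wave) reflects the fact that the triangle is continuous, so its series converges much faster.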
Comment The summation of sinusoids that are harmonically related results in
a signal that is periodic; the ratio of any two of the frequencies is a rational
number. The largest positive number of which all the frequencies are integer
multiples is the fundamental frequency. Consider the signal x(t)
Figure 4.14: Line Spectra for the triangular periodic signal x(t) of Example 4.10
4.5.1 Linearity
Suppose that x(t) and y(t) are periodic, both having the same period. Let their
Fourier series expansion be given by
x(t) = Σ_{n=−∞}^{∞} βn e^{jnω0t}
y(t) = Σ_{n=−∞}^{∞} γn e^{jnω0t}
and let
z(t) = k1 x(t) + k2 y(t)
where k1 and k2 are arbitrary constants. If
x(t) ←FS→ βn  and  y(t) ←FS→ γn
then
z(t) ←FS→ k1 βn + k2 γn
where
dn = (1/T) ∫_T x(t − τ) e^{−jnω0t} dt
   = e^{−jnω0τ} (1/T) ∫_T x(σ) e^{−jnω0σ} dσ
   = cn e^{−jnω0τ}
Therefore,
x(t − τ) ←FS→ cn e^{−jnω0τ}
where
dn = (1/T) ∫_T e^{jmω0t} x(t) e^{−jnω0t} dt
   = (1/T) ∫_T x(t) e^{−j(n−m)ω0t} dt
   = cn−m
Notice the duality between the time shifting and frequency shifting properties.
Shifting in one domain corresponds to multiplication by a complex exponential
in the other domain.
Figure 4.15: A signal x(t) and a time scaled version z(t) of that signal
The scaling operation simply changes the harmonic spacing from ω0 to aω0 ,
keeping the coefficients identical.
Thus
(d/dt) x(t) ←FS→ jnω0 cn
This operation accentuates the high-frequency components of the signal. Note
that differentiation forces the average value (the dc component of x(t)) of the
differentiated signal to be zero; hence the Fourier series coefficient for n = 0 is
zero. So differentiation of a time function corresponds to multiplication of its
Fourier series coefficients by an imaginary number whose value is a linear
function of the harmonic number.
= Σ_{n=−∞}^{∞} (cn/(jnω0)) e^{jnω0t}
Therefore,
∫_{−∞}^{t} x(λ) dλ ←FS→ cn/(jnω0),  n ≠ 0
Note that integration attenuates the magnitude of the high-frequency components
of the signal. Integration may be viewed as an averaging operation and thus
tends to smooth signals in time; for this reason it is sometimes called a smoothing
operation.
Figure 4.16: The effect of a nonzero average value on the integral of a periodic
signal x(t).
4.5.8 Multiplication
If x(t) and y(t) are periodic signals with the same period, their product z(t) is
z(t) = x(t)y(t). Let
x(t) ←FS→ αn,  y(t) ←FS→ βn  and  z(t) ←FS→ γn   (4.30)
Then
γn = (1/T) ∫_T z(t) e^{−jnω0t} dt = (1/T) ∫_T x(t) y(t) e^{−jnω0t} dt   (4.31)
Then, using
y(t) = Σ_{n=−∞}^{∞} βn e^{jnω0t} = Σ_{m=−∞}^{∞} βm e^{jmω0t}
in (4.31), we have
γn = (1/T) ∫_T x(t) (Σ_{m=−∞}^{∞} βm e^{jmω0t}) e^{−jnω0t} dt
Then
γn = Σ_{m=−∞}^{∞} βm αn−m
Therefore,
x(t) y(t) ←FS→ Σ_{m=−∞}^{∞} βm αn−m = αn ∗ βn
4.5.9 Convolution
For periodic signals with the same period, a special form of convolution, known
as periodic or circular convolution, is defined by the integral
z(t) = x(t) ~ y(t) = (1/T) ∫_T x(τ) y(t − τ) dτ
where the integral is taken over one period. It can be shown that
x(t) ~ y(t) ←FS→ αn βn
where αn and βn are the Fourier series coefficients of x(t) and y(t) respectively.
So convolution of two CT periodic signals with the same period corresponds to
multiplication of their Fourier series coefficients.
with the periodic square wave x(t) depicted in Figure 4.9 (Example 4.6), with
T1 = 1/4 and T = 1.
Solution Both x(t) and y(t) have fundamental period T = 1. The Fourier
series coefficients for x(t) may be obtained from (4.10) as
αn = (2T1/T) sinc(2nT1/T) = (1/2) sinc(n/2)
Let z(t) = x(t) ~ y(t). The convolution property indicates x(t) ~ y(t) ←FS→ αn βn.
Hence the Fourier coefficients for z(t) are
αn βn = { 1/π, n = ±1;  0, otherwise }
Knowing the type of symmetry and the signal over half a period, the
Fourier coefficients can be computed by integrating over only half the period
rather than a complete period. To prove this result, recall that
a0 = (1/T) ∫_T x(t) dt   (4.32)
an = (2/T) ∫_T x(t) cos nω0t dt   (4.33)
bn = (2/T) ∫_T x(t) sin nω0t dt   (4.34)
Recall also from section 2.1.4 that cos nω0 t is an even function and sin nω0 t is
an odd function. If x(t) is an even function, then x(t) cos nωo t is also an even
function and x(t) sin nω0 t is an odd function. The Fourier series of an even
signal x(t) having period T is
x(t) = a0 + Σ_{n=1}^{∞} an cos nω0t
with coefficients
a0 = (1/T) ∫_T x(t) dt
an = (2/T) ∫_T x(t) cos nω0t dt = (4/T) ∫_0^{T/2} x(t) cos nω0t dt   (4.35)
since, for an even integrand,
∫_{−T/2}^{T/2} x(t) dt = 2 ∫_0^{T/2} x(t) dt
and
bn = (2/T) ∫_T x(t) sin nω0t dt = 0
since, for an odd integrand,
∫_{−T/2}^{T/2} x(t) dt = 0
Similarly, if x(t) is an odd function, then x(t) cos nω0 t is an odd function and
x(t) sin nω0 t is an even function. Therefore
a0 = an = 0
bn = (4/T) ∫_0^{T/2} x(t) sin nω0t dt   (4.36)
Observe that, because of symmetry, the integration required to compute the
coefficients needs to be performed over only half the period.
If the two halves of one period of a periodic signal are of identical shape
except that one is the negative of the other, the periodic signal is said to have
a half wave symmetry. The signal in Figure 4.13 is a clear example of such
a symmetry. If a periodic signal x(t) with period T satisfies the half-wave
symmetry condition
x(t − T/2) = −x(t)
then all even-numbered harmonics vanish (note also that a0 = 0), and the
odd-numbered harmonic coefficients are given by
an = (4/T) ∫_0^{T/2} x(t) cos nω0t dt,  n odd
bn = (4/T) ∫_0^{T/2} x(t) sin nω0t dt,  n odd
The consequences of these symmetries are summarized in Table 4.1.
Table 4.1: Effect of symmetry on the Fourier coefficients
Symmetry     a0        an                 bn                 Remarks
Even         (4.32)    (4.35)             0                  integrate over half the period
Odd          0         0                  (4.36)             integrate over half the period
Half-wave    0         nonzero, n odd     nonzero, n odd     integrate over half the period
When x(t) has even symmetry, bn = 0 and, from Equation (4.27), cn = an/2,
which is real (positive or negative). Hence ∠cn can only be 0 or ±π. Moreover,
we may compute cn = an/2 using Equation (4.35). Similarly, when x(t) has odd
symmetry, an = 0 and cn = −jbn/2 is imaginary (positive or negative). Hence
∠cn can only be 0 or ±π/2. We can compute cn = −jbn/2 using Equation (4.36).
For example, the complex exponential signal x(t) = cn e^{jnω0t} has an average
power of
P = (1/T) ∫_T cn e^{jnω0t} cn* e^{−jnω0t} dt = (1/T) ∫_T |cn|² dt = |cn|²
A question may well be asked at this point: if x(t) = cn e^{jnω0t} has an average
power of |cn|², will the signal x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t} have an average
power of Σ_{n=−∞}^{∞} |cn|²? One can further ask: what is the relationship
between the average power of the signal x(t) and that of its harmonics?
P = (1/T) ∫_T (Σ_{m=−∞}^{∞} cm e^{jmω0t}) (Σ_{n=−∞}^{∞} cn e^{jnω0t})* dt   (4.38)
The integral in (4.38) is zero except for the special case when m = n. For this
specific condition the double summation reduces to a single summation and we
have a new relationship for the average power in terms of the magnitudes of the
coefficients
X∞ X∞
P = cn c∗n = |cn |2 (4.39)
n=−∞ n=−∞
The result indicates that the total average power of a periodic signal x(t) is
equal to the sum of the powers of its Fourier coefficients. Therefore, if we know
the function x(t), we can find the average power. Alternatively, if we know
the Fourier coefficients, we can find the average power. Interpreting the result
physically simply means writing a signal as a Fourier series does not change its
energy. A graph of |cn|² versus ω can be plotted; such a graph is called the power spectrum.
We can apply the same argument to the trigonometric Fourier series. It can
be shown that
P = a0² + (1/2) Σ_{n=1}^{∞} An²
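This power relation can be checked numerically. The sketch below uses x(t) = 4 cos t − 2 cos 2t (the input that appears later in Example 4.13), for which a0 = 0, A1 = 4, A2 = 2; both the direct time-domain average and the coefficient sum come out to 10. The code is illustrative only.

```python
import math

# Illustrative signal: x(t) = 4 cos t - 2 cos 2t, period T = 2*pi
def x(t):
    return 4 * math.cos(t) - 2 * math.cos(2 * t)

T = 2 * math.pi
N = 100_000
dt = T / N
# Direct average power: P = (1/T) * integral of x(t)^2 over one period
P_direct = sum(x(n * dt) ** 2 for n in range(N)) * dt / T
# Parseval for the trigonometric series: P = a0^2 + (1/2) * sum(An^2)
P_series = 0.0 ** 2 + 0.5 * (4 ** 2 + 2 ** 2)
print(P_direct, P_series)  # both ~10
```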
For complex exponential inputs of the form x(t) = ejωt , the output of the system
is
y(t) = ∫_{−∞}^{∞} h(τ) e^{jω(t−τ)} dτ = e^{jωt} ∫_{−∞}^{∞} h(τ) e^{−jωτ} dτ

By defining

H(ω) = ∫_{−∞}^{∞} h(τ) e^{−jωτ} dτ
we can write
y(t) = H(ω)ejωt (4.41)
H(ω) is called the system frequency response and is constant for fixed ω. Equa-
tion (4.41) tells us that the system response to a complex exponential is also a
complex exponential, with the same frequency ω, scaled by the quantity H(ω).
The magnitude |H(ω)| is called the magnitude function of the system, and
∠H(ω) is known as the phase function of the system. In summary, the response
y(t) of a CT LTI system to an input sinusoid of period T is also a sinusoid of
period T . Knowing H(ω), we can determine if the system changes the amplitude
of the input and how much of a phase shift the system adds to the sinusoidal
input.
To determine the response y(t) of a LTI system to a periodic input x(t) we saw
earlier that
e^{jωt} (input) −→ H(ω) e^{jωt} (output)
The response y(t) of a LTI system to a periodic input with period T is periodic
with the same period.
Example 4.13 Find the output voltage y(t) of the system shown in Figure 4.17 if the in-
put voltage is the periodic signal x(t) = 4 cos t − 2 cos 2t, assume R = 1Ω and
L = 1H.
H(nω0) = (R/L) / (R/L + jnω0)

For our case, ω0 = 1 and R/L = 1; hence

H(n) = 1/(1 + jn)
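A numerical sketch of how this frequency response acts on the input x(t) = 4 cos t − 2 cos 2t: each harmonic is scaled by |H(n)| and phase-shifted by ∠H(n). The helper names are mine, not from the text.

```python
import cmath
import math

def H(n):
    # Frequency response of the RL circuit at harmonic n (omega0 = 1)
    return 1 / (1 + 1j * n)

# Input x(t) = 4 cos t - 2 cos 2t: (amplitude, harmonic number) pairs
terms = [(4.0, 1), (-2.0, 2)]

def y(t):
    # Each sinusoid is scaled by |H(n)| and shifted by angle(H(n))
    return sum(A * abs(H(n)) * math.cos(n * t + cmath.phase(H(n)))
               for A, n in terms)

print(y(0.0))
```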
Using Euler identity the input signal is expressed as
94 CHAPTER 5. THE FOURIER TRANSFORM
Figure 5.2: Line spectra for x(t). (a) Magnitude spectrum for T = 10T1 , (b)
Magnitude spectrum for T = 20T1 , (c) Magnitude spectrum for T = 40T1 .
Taking the limit as the period approaches infinity, the pulses in the periodic
signal repeat after an infinite interval. In other words we moved all the pulses
to infinity except the desired pulse located at the origin. The new function xT (t)
is a periodic signal and can be represented by an exponential Fourier series given
by
xT(t) = Σ_{n=−∞}^{∞} cn e^{jnω0 t}   (5.1)

where

cn = (1/T) ∫_{−T/2}^{T/2} xT(t) e^{−jnω0 t} dt   (5.2)

and

ω0 = 2π/T
Before taking any limiting operation, we need to do some adjustments so that
the magnitude components of the cn do not all go to zero as the period is
increased. We make the following changes
X(nω0) ≜ T cn   (5.3)

When we use this definition, (5.1) and (5.2) become

xT(t) = Σ_{n=−∞}^{∞} (1/T) X(nω0) e^{jnω0 t}   (5.4)

X(nω0) = ∫_{−T/2}^{T/2} xT(t) e^{−jnω0 t} dt   (5.5)
In the limit as T → ∞, the discrete lines in the spectrum of xT merge and the
frequency spectrum becomes continuous i.e. Δω → 0, furthermore xT (t) → x(t).
Therefore,
x(t) = lim_{T→∞} xT(t) = lim_{Δω→0} (1/2π) Σ_{n=−∞}^{∞} X(nΔω) e^{jnΔω t} Δω   (5.7)
becomes

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω   (5.8)
Equations (5.8) and (5.9) are commonly referred to as the Fourier transform
pair. Equation (5.9) is known as the direct Fourier transform of x(t) (more
commonly just the Fourier transform). Equation (5.8) is known as the inverse
Fourier transform. Symbolically, we use the following operator notation:
X(ω) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

x(t) = F⁻¹[X(ω)] = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω
It is also useful to note that the complex exponential Fourier series coefficients can be evaluated in terms of the Fourier transform by using (5.3) to give

cn = (1/T) X(ω)|_{ω=nω0}   (5.10)

This means that the Fourier coefficients cn are 1/T times the samples of X(ω) uniformly spaced at intervals of ω0.
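Relation (5.10) can be checked numerically. The sketch below uses the pulse x(t) = e^{−|t|} (an illustrative assumption; its transform 2/(1 + ω²) is a standard pair) and compares the directly integrated coefficients of its T-periodic extension with samples of X(ω)/T; they agree up to the small tail truncated at ±T/2.

```python
import cmath
import math

T = 20.0                       # period of the periodic extension (assumption)
w0 = 2 * math.pi / T

def cn_direct(n, N=100_000):
    # c_n = (1/T) * integral over one period of x(t) e^{-j n w0 t} dt
    dt = T / N
    s = 0j
    for k in range(N):
        t = -T / 2 + (k + 0.5) * dt      # midpoint rule
        s += math.exp(-abs(t)) * cmath.exp(-1j * n * w0 * t) * dt
    return s / T

def cn_sampled(n):
    # Sample of the known transform X(w) = 2/(1+w^2), divided by T
    return (2 / (1 + (n * w0) ** 2)) / T

for n in (0, 1, 5):
    print(n, abs(cn_direct(n) - cn_sampled(n)))  # tiny (tail truncation only)
```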
Therefore

|X(ω)| = 1/√(a² + ω²)   and   ∠X(ω) = −tan⁻¹(ω/a)
The amplitude spectrum |X(ω)| and the phase spectrum ∠X(ω) are depicted
in Figure 5.4. Observe that |X(ω)| is an even function of ω, and ∠X(ω) is an
odd function of ω, as expected.
5.2. EXAMPLES OF THE FOURIER TRANSFORM 97
Figure 5.4: Fourier spectra for x(t) = e−at u(t), a = 1. (a) Amplitude spectrum
|X(ω)|, (b) Phase spectrum ∠X(ω).
This inequality shows that the existence of the Fourier transform is assured
if condition (5.11) is satisfied. Although this condition is sufficient, it is not
necessary for the existence of the Fourier transform of a signal. We shall see later that many signals violate condition (5.11) but still have a Fourier transform.
Find the Fourier transform of the unit impulse δ(t). Example 5.2
or

δ(t) ←→ 1
If the impulse is time shifted, we have
F[δ(t − τ)] = ∫_{−∞}^{∞} δ(t − τ) e^{−jωt} dt = e^{−jωt}|_{t=τ} = e^{−jωτ}
Example 5.3 Find the Fourier transform of the rectangular pulse x(t) = rect(t/τ) (Figure 5.5a).
Alternatively,

X(ω) = ∫_{−∞}^{∞} rect(t/τ) e^{−jωt} dt = ∫_{−τ/2}^{τ/2} e^{−jωt} dt = ∫_{−τ/2}^{τ/2} (cos ωt − j sin ωt) dt

Since cos ωt is even and sin ωt is odd, the odd part integrates to zero, so

X(ω) = 2 ∫_{0}^{τ/2} cos ωt dt = (2/ω) sin(ωτ/2) = τ · sin(ωτ/2)/(ωτ/2) = τ sinc(ωτ/2π)

Therefore,

rect(t/τ) ←→ τ sinc(ωτ/2π)   (5.12)
Recall that sinc(x) = 0 when x = ±n. Hence, sinc(ωτ/2π) = 0 when ωτ/2π = ±n; that is, when ω = ±2πn/τ, (n = 1, 2, 3, …), as depicted in Figure 5.5b. The Fourier
transform X(ω) shown in Figure 5.5b is the amplitude spectrum since it exhibits
positive and negative values. Since X(ω) is a real valued function its phase is
zero for all ω. If the magnitude spectrum is required, the negative amplitudes
can be considered as a positive amplitude with a phase of −π or π. The magni-
tude spectrum |X(ω)| and the phase spectrum ∠X(ω) are shown in Figure 5.5c
and d respectively. Note that the phase spectrum, which is required to be an
odd function of ω since x(t) is real, may be drawn in several other ways because
a negative sign can be accounted for by a phase ±nπ where n is any odd integer.
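Pair (5.12) can be verified by brute-force numerical integration; the pulse width below is an illustrative choice. Note the exact zero at ω = 2π/τ, matching the null locations discussed above.

```python
import cmath
import math

tau = 2.0          # pulse width (illustrative choice)

def X_numeric(w, N=100_000):
    # Direct numerical evaluation of the transform integral over [-tau/2, tau/2]
    dt = tau / N
    return sum(cmath.exp(-1j * w * (-tau / 2 + (k + 0.5) * dt))
               for k in range(N)) * dt

def X_closed(w):
    # tau * sinc(w*tau/(2*pi)) with sinc(x) = sin(pi x)/(pi x)
    x = w * tau / (2 * math.pi)
    return tau if x == 0 else tau * math.sin(math.pi * x) / (math.pi * x)

for w in (0.0, 1.0, math.pi, 2 * math.pi / tau):
    print(w, abs(X_numeric(w) - X_closed(w)))  # ~0; closed form is 0 at 2*pi/tau
```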
Figure 5.5: A rect function x(t), its Fourier spectrum X(ω), magnitude spec-
trum |X(ω)|, and phase spectrum ∠X(ω)
Find the Fourier transform of a dc or constant signal x(t) = 1, −∞ < t < ∞. Example 5.4
The integral does not converge. Therefore, strictly speaking, the Fourier trans-
form does not exist. But we can avoid this problem by use of the following
trick
X(ω) = lim_{τ→∞} ∫_{−τ/2}^{τ/2} (1) e^{−jωt} dt = lim_{τ→∞} −(1/jω)(e^{−jωτ/2} − e^{jωτ/2})

This can be simplified to

X(ω) = lim_{τ→∞} (2/ω) sin(ωτ/2) = lim_{τ→∞} τ sinc(ωτ/2π)   (5.13)
In the limit as τ → ∞, X(ω) in Equation (5.13) approaches the impulse function δ(ω) with strength 2π, since the area under the sinc function is 2π from Figure 5.6 (in our case A = τ and B = τ/2). Therefore,

1 ←→ 2πδ(ω)
The frequency function 2πδ(ω) is called the generalized Fourier transform of the
signal x(t). This result shows that the spectrum of a constant signal x(t) = 1
is an impulse 2πδ(ω). This situation can be thought of as a dc signal which has
a single frequency ω = 0 (dc).
Figure 5.6: Area under a sinc function: ∫_{−∞}^{∞} (A sin Bt)/(Bt) dt = Aπ/B
It is interesting to see the result in Example 5.4 using the inverse Fourier trans-
form as shown in the next example.
Solution On the basis of Equation (5.8) and the sifting property of the
impulse function,
F⁻¹[δ(ω)] = (1/2π) ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = 1/2π

Therefore,

1/2π ←→ δ(ω)

or

1 ←→ 2πδ(ω)
Therefore,

(1/2π) e^{jω0 t} ←→ δ(ω − ω0)

or

e^{jω0 t} ←→ 2πδ(ω − ω0)

It follows that

e^{−jω0 t} ←→ 2πδ(ω + ω0)
5.3. FOURIER TRANSFORM OF PERIODIC SIGNALS 101
Thus the Fourier transform of a periodic signal is simply an impulse train, with
impulses located at ω = nω0 , each of which has a strength of 2πcn , and all
impulses are separated from each other by ω0 .
A periodic function of considerable importance is that of a periodic sequence
of unit impulse functions shown in Figure 5.7. For convenience, we write such
a sequence with period T as δT(t) = Σ_{n=−∞}^{∞} δ(t − nT). Because this is a periodic function, we can express it in terms of a Fourier series by choosing ω0 = 2π/T (see Example 4.5):

δT(t) = Σ_{n=−∞}^{∞} cn e^{jnω0 t}

where cn = 1/T, so that

δT(t) = (1/T) Σ_{n=−∞}^{∞} e^{jnω0 t}

F[δT(t)] = (2π/T) Σ_{n=−∞}^{∞} δ(ω − nω0) = ω0 Σ_{n=−∞}^{∞} δ(ω − nω0)   (5.15)
The Fourier transform of a periodic impulse train in the time domain gives an
impulse train that is periodic in the frequency domain. The frequency spectrum
of δT (t) is shown in Figure 5.8.
Example 5.7 Find the Fourier transform of the periodic rectangular waveform x(t) shown in
Figure 5.9.
Substituting this into (5.14) yields the Fourier transform of the periodic rectangular pulses

X(ω) = Σ_{n=−∞}^{∞} 2T1ω0 sinc(2nT1/T) δ(ω − nω0) = Σ_{n=−∞}^{∞} 2T1ω0 sinc(nω0 T1/π) δ(ω − nω0)
The frequency spectrum is sketched in Figure 5.10. The dashed curve indicates the weights of the impulse functions. Note that the distribution of impulses in frequency depends on T; Figure 5.10 shows the particular case T = 8T1.
In the previous example it is interesting to note how the Fourier series coefficients of the periodic rectangular waveform x(t) are related to the Fourier transform of the truncated signal xT(t) shown in Figure 5.9. From (5.10),

cn = (1/T) XT(ω)|_{ω=nω0}

The Fourier transform of the truncated signal is XT(ω) = 2T1 sinc(ωT1/π) (see Example 5.3); therefore,

cn = (2T1/T) sinc(ωT1/π)|_{ω=nω0}
In words, the Fourier series coefficients of a periodic signal can be obtained from
samples of the Fourier transform of the truncated signal divided by the period
T , provided the periodic signal and the truncated one are equal in one period.
Note that in general the Fourier transform of a periodic function is not periodic.
5.4 Properties of the Fourier Transform

5.4.1 Linearity
The Fourier transform is a linear operation based on the properties of integra-
tion. Thus if
x1(t) ←→ X1(ω)
x2(t) ←→ X2(ω)

then

α x1(t) + β x2(t) ←→ α X1(ω) + β X2(ω)
where α and β are arbitrary constants. The property is the direct result of the
linear operation of integration.
Proof Let z(t) = αx1 (t) + βx2 (t), the proof is trivial and follows as
Z(ω) = ∫_{−∞}^{∞} z(t) e^{−jωt} dt = ∫_{−∞}^{∞} [α x1(t) + β x2(t)] e^{−jωt} dt
     = α ∫_{−∞}^{∞} x1(t) e^{−jωt} dt + β ∫_{−∞}^{∞} x2(t) e^{−jωt} dt = α X1(ω) + β X2(ω)
Because of linearity,

(1/2) e^{jω0 t} + (1/2) e^{−jω0 t} ←→ πδ(ω − ω0) + πδ(ω + ω0)

The Fourier spectrum of cos ω0t consists of only two components, at the frequencies ω0 and −ω0. Therefore the spectrum has two impulses, at ω0 and −ω0.
Proof By definition
F[x(t − t0)] = ∫_{−∞}^{∞} x(t − t0) e^{−jωt} dt

Letting u = t − t0, we have

F[x(t − t0)] = ∫_{−∞}^{∞} x(u) e^{−jω(u+t0)} du = e^{−jωt0} ∫_{−∞}^{∞} x(u) e^{−jωu} du = e^{−jωt0} X(ω)
This result shows that if the signal is delayed in time by t0 , its magnitude spec-
trum remains unchanged. The phase spectrum, however, is changed and a phase
of −ωt0 , which is a linear function of ω, is added to each frequency component.
The slope of this linear phase term is equal to the time shift t0 .
Find the Fourier transform of the rectangular pulse x(t) = rect((t − τ/2)/τ). Example 5.9

Solution The pulse x(t) is the rectangular pulse rect(t/τ) of Figure 5.5a delayed by τ/2 seconds. Hence, according to (5.16), its Fourier transform is the Fourier transform of rect(t/τ) multiplied by e^{−jωτ/2}. Therefore,

X(ω) = τ sinc(ωτ/2π) e^{−jωτ/2}

The amplitude spectrum |X(ω)| is the same as that indicated in Figure 5.5. But the phase spectrum has an added linear term −ωτ/2, as shown in Figure 5.11.
then

x(t) e^{jω0 t} ←→ X(ω − ω0)   (5.17)

Proof By definition,

F[x(t) e^{jω0 t}] = ∫_{−∞}^{∞} x(t) e^{jω0 t} e^{−jωt} dt = ∫_{−∞}^{∞} x(t) e^{−j(ω−ω0)t} dt = X(ω − ω0)
Example 5.10 Find the Fourier transform of the complex sinusoidal pulse x(t) defined as
x(t) = e^{j10t} for |t| ≤ π, and 0 otherwise.

Therefore,

e^{j10t} rect(t/2π) ←→ 2π sinc(ω − 10)
Changing ω0 to −ω0 in Equation (5.17) yields

x(t) e^{−jω0 t} ←→ X(ω + ω0)
For real valued x(t), it is now easy to find the Fourier transform of x(t) multiplied
by a sinusoid since for example x(t) cos ω0 t can be expressed as
x(t) cos ω0t = (1/2)[x(t) e^{jω0 t} + x(t) e^{−jω0 t}]

It follows from the frequency shifting property that

x(t) cos ω0t ←→ (1/2)[X(ω − ω0) + X(ω + ω0)]   (5.18)
Multiplying a signal by a sinusoid cos ω0 t amounts to modulating the sinusoid
amplitude. Modulating means changing the amplitude of one signal by mul-
tiplying it by the other. This type of modulation is thus known as amplitude
modulation. The sinusoid cos ω0 t is called the carrier, the signal x(t) is the
modulating signal, and the signal x(t) cos ω0 t is the modulated signal. Further
discussion of modulation and demodulation will appear in chapter 6.
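Property (5.18) can be sketched numerically. The test signal x(t) = e^{−|t|} with X(ω) = 2/(1 + ω²) is an assumption made purely for illustration.

```python
import cmath
import math

w0 = 5.0   # carrier frequency (illustrative)

def ft_modulated(w, L=30.0, N=200_000):
    # Numerical transform of x(t) cos(w0 t); tails beyond |t| > L are negligible
    dt = 2 * L / N
    s = 0j
    for k in range(N):
        t = -L + (k + 0.5) * dt
        s += math.exp(-abs(t)) * math.cos(w0 * t) * cmath.exp(-1j * w * t) * dt
    return s

def X(w):
    return 2 / (1 + w * w)

for w in (0.0, w0, -w0):
    lhs = ft_modulated(w)
    rhs = 0.5 * (X(w - w0) + X(w + w0))   # property (5.18)
    print(w, abs(lhs - rhs))  # ~0
```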
Find and sketch the Fourier transform of rect(t/4) cos 10t. Example 5.11

Solution From (5.12) we find rect(t/4) ←→ 4 sinc(2ω/π), which is depicted in Figure 5.12(a). From (5.18) it follows that

x(t) cos 10t ←→ (1/2)[X(ω + 10) + X(ω − 10)]

In this case X(ω) = 4 sinc(2ω/π). Therefore,

x(t) cos 10t ←→ 2 sinc(2(ω + 10)/π) + 2 sinc(2(ω − 10)/π)
The spectrum of x(t) cos 10t is obtained by shifting X(ω) in Figure 5.12(b) to
the left by 10 and also to the right by 10, and then multiplying it by one-half,
as depicted in Figure 5.12(d).
However, if a < 0, the limits on the integral are reversed when the variable of
integration is changed so that
F[x(at)] = (1/a) ∫_{∞}^{−∞} x(u) e^{−jωu/a} du = −(1/a) ∫_{−∞}^{∞} x(u) e^{−jωu/a} du = −(1/a) X(ω/a)   for a < 0

We can write the two cases as one because the factor −1/a is always positive when a < 0; i.e.,

F[x(at)] = (1/|a|) X(ω/a)   (5.19)
The frequency scaling property can be proven in a similar manner, and the result is

(1/|a|) x(t/a) ←→ X(aω)
If a is positive and greater than unity, x(at) is a compressed version of x(t), and the function X(ω/a) represents the function X(ω) expanded in frequency by the same factor a. The scaling property states that time compression of a
signal results in its spectral expansion, and time expansion of the signal results
in its spectral compression. Figure 5.13 shows two cases where the pulse length
differs by a factor of two. Notice the longer pulse in Figure 5.13a has a narrower
transform shown in Figure 5.13b.
Example 5.12 What is the Fourier transform of rect(2t/τ)?

Solution The Fourier transform of rect(t/τ) is, by Example 5.3,

rect(t/τ) ←→ τ sinc(ωτ/2π)

By (5.19), the Fourier transform of rect(2t/τ) is

rect(2t/τ) ←→ (τ/2) sinc(ωτ/4π)
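The scaling property can also be confirmed analytically with the pair e^{−|t|} ←→ 2/(1 + ω²) (an illustrative choice, not from the text): with a = 2, x(2t) = e^{−2|t|} has transform 4/(4 + ω²), which is exactly (1/|a|) X(ω/a).

```python
# Analytic check of the scaling property for x(t) = exp(-|t|), X(w) = 2/(1+w^2).
# With a = 2, x(at) = exp(-2|t|), whose transform is 2b/(b^2+w^2) with b = 2.
a = 2.0

def X(w):
    return 2 / (1 + w * w)

def X_scaled_direct(w):
    # transform of exp(-2|t|)
    return 4 / (4 + w * w)

for w in (0.0, 0.5, 1.0, 3.0):
    lhs = X_scaled_direct(w)
    rhs = X(w / a) / abs(a)      # (1/|a|) X(w/a)
    print(w, lhs, rhs)           # identical
```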
Solution To find the Fourier transform of this pulse we differentiate the pulse
successively as illustrated in Figure 5.14b and c. From the time differentiation
property (5.22)
d²x(t)/dt² ←→ (jω)² X(ω) = −ω² X(ω)

The second derivative d²x/dt² consists of a sequence of impulses, as depicted in Figure 5.14c; that is,

d²x(t)/dt² = (2/τ)[δ(t + τ/2) − 2δ(t) + δ(t − τ/2)]
Figure 5.14: The Fourier transform using the time differentiation property.
and

X(ω) = (8/(ω² τ)) sin²(ωτ/4) = (τ/2) [sin(ωτ/4)/(ωτ/4)]² = (τ/2) sinc²(ωτ/4π)
We must be careful when using the differentiation property. Note that since
F[dx(t)/dt] = jω X(ω)

the differentiation property would suggest

X(ω) = F[dx(t)/dt] / (jω)   (5.21)
du(t)/dt = δ(t)

Taking the Fourier transform of both sides yields

jω U(ω) = 1

We might be tempted at this stage to claim that the Fourier transform of u(t) is

U(ω) = 1/jω

This is not true, since U(0) ≠ 0 and the above result is indeterminate at ω = 0.
At this point the signum function sgn(t) becomes handy since we know that the
signal being an odd function must have an average value of zero. Furthermore,
we can attempt to find the Fourier transform of the unit step since we can al-
ways express the unit step in terms of the signum function.
Find the Fourier transform of the signum function x(t) = sgn(t). Example 5.14
dx(t)/dt = 2δ(t)

Using the differentiation property and taking the Fourier transform of both sides,

jω X(ω) = 2,   i.e.,   jω F[sgn(t)] = 2

Hence

sgn(t) ←→ 2/jω
We know that X(0) = 0 because sgn(t) is an odd function and thus has zero average value. This knowledge removes the indeterminacy at ω = 0 associated with the differentiation property.
Find the Fourier transform of the unit step function u(t). Example 5.15
u(t) = 1/2 + (1/2) sgn(t)

By linearity of the Fourier transform, we obtain

u(t) ←→ πδ(ω) + 1/jω

Therefore, the Fourier transform of the unit step function contains an impulse at ω = 0 corresponding to the average value of 1/2.
then

−jt x(t) ←→ dX(ω)/dω

Proof Differentiation of both sides of (5.9) with respect to ω yields

dX(ω)/dω = ∫_{−∞}^{∞} x(t) (d/dω) e^{−jωt} dt = ∫_{−∞}^{∞} [−jt x(t)] e^{−jωt} dt

or, equivalently,

t x(t) ←→ j dX(ω)/dω

The frequency differentiation property can be extended to yield

t^n x(t) ←→ (j)^n d^n X(ω)/dω^n   (5.22)
Hence,

t e^{−at} u(t) ←→ 1/(a + jω)²
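A numerical sketch of this last pair (a = 1 chosen for illustration): the directly integrated transform of t e^{−at} u(t) matches 1/(a + jω)².

```python
import cmath
import math

a = 1.0   # decay constant (illustrative)

def ft_numeric(w, L=40.0, N=100_000):
    # integral of t * exp(-a t) * e^{-jwt} dt over t >= 0 (tail beyond L negligible)
    dt = L / N
    s = 0j
    for k in range(N):
        t = (k + 0.5) * dt
        s += t * math.exp(-a * t) * cmath.exp(-1j * w * t) * dt
    return s

for w in (0.0, 1.0, 2.0):
    closed = 1 / (a + 1j * w) ** 2
    print(w, abs(ft_numeric(w) - closed))  # ~0
```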
Proof By definition,

F[∫_{−∞}^{t} x(τ) dτ] = ∫_{−∞}^{∞} [∫_{−∞}^{t} x(τ) dτ] e^{−jωt} dt = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(τ) u(t − τ) dτ] e^{−jωt} dt

Interchanging the order of integration and noting that x(τ) does not depend on t, we have

F[∫_{−∞}^{t} x(τ) dτ] = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} u(t − τ) e^{−jωt} dt] dτ

The inner integral is the Fourier transform of the shifted unit step u(t − τ), namely U(ω) e^{−jωτ}. Thus

F[∫_{−∞}^{t} x(τ) dτ] = ∫_{−∞}^{∞} x(τ) U(ω) e^{−jωτ} dτ = U(ω) ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ

where the last integral is simply X(ω) and

U(ω) = πδ(ω) + 1/jω

Therefore,

F[∫_{−∞}^{t} x(τ) dτ] = [πδ(ω) + 1/jω] X(ω)

which can be written as

F[∫_{−∞}^{t} x(τ) dτ] = (1/jω) X(ω) + π X(0) δ(ω)
The factor X(0) in the second term on the right follows from the sifting property
of the impulse function. This second term is needed to account for the average
value of x(τ). Recall that a dc component will show as an impulse at ω = 0. If x(τ) has no dc component, the time integration property simplifies to

∫_{−∞}^{t} x(τ) dτ ←→ (1/jω) X(ω)
Example 5.17 Using the integration property derive the Fourier transform of the unit step
function u(t).
Solution The unit step may be expressed as the integral of the impulse function

u(t) = ∫_{−∞}^{t} δ(τ) dτ

Since δ(t) ←→ 1, using (5.23) suggests

u(t) = ∫_{−∞}^{t} δ(τ) dτ ←→ 1/jω + πδ(ω)

as found earlier in Example 5.15.
= H(ω)X(ω) = Y (ω)
Thus convolution in the time domain is equivalent to multiplication in the fre-
quency domain. The use of the convolution property for LTI systems is demon-
strated in Figure 5.15. The amplitude and phase spectrum of the output y(t)
are related to the input x(t) and impulse response h(t) in the following manner:
|Y (ω)| = |X(ω)| |H(ω)|
∠Y (ω) = ∠X(ω) + ∠H(ω)
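A discrete analogue of the convolution property is easy to check with a naive DFT: the DFT of a zero-padded linear convolution equals the product of the DFTs. The sequences below are arbitrary illustrations, not from the text.

```python
import cmath

def dft(seq):
    # Naive discrete Fourier transform
    N = len(seq)
    return [sum(seq[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def conv(x, h):
    # Linear convolution
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, 2.0, 0.5, -1.0]
h = [0.5, 0.25, 0.125]
N = len(x) + len(h) - 1                      # zero-pad both to this length
Y = dft(conv(x, h))
XH = [Xk * Hk for Xk, Hk in zip(dft(x + [0.0] * (N - len(x))),
                                dft(h + [0.0] * (N - len(h))))]
err = max(abs(a - b) for a, b in zip(Y, XH))
print(err)  # ~0: convolution in time = multiplication in frequency
```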
Using the time convolution property prove the integration property. Example 5.18
Solution Consider the convolution of x(t) with a unit step function:

x(t) ∗ u(t) = ∫_{−∞}^{∞} x(τ) u(t − τ) dτ

The unit step u(t − τ) has the value zero for t < τ and the value 1 for t > τ; therefore,

x(t) ∗ u(t) = ∫_{−∞}^{t} x(τ) dτ

Now from the time convolution property, it follows that

x(t) ∗ u(t) = ∫_{−∞}^{t} x(τ) dτ ←→ X(ω)[πδ(ω) + 1/jω] = (1/jω) X(ω) + π X(0) δ(ω)
using a different dummy variable, since the variable ω is already used in (5.24); therefore,

F[x(t) p(t)] = ∫_{−∞}^{∞} x(t) [ (1/2π) ∫_{−∞}^{∞} P(σ) e^{jσt} dσ ] e^{−jωt} dt

where the bracketed term is p(t).
X(−ω) = X ∗ (ω)
Proof The property follows by taking the conjugate of both sides of (5.9):

X*(ω) = [∫_{−∞}^{∞} x(t) e^{−jωt} dt]* = ∫_{−∞}^{∞} x*(t) e^{jωt} dt

For real x(t), x*(t) = x(t), and the last integral is X(−ω). Writing

X*(ω) = |X(ω)| e^{−j∠X(ω)}
X(−ω) = |X(−ω)| e^{j∠X(−ω)}

and using X*(ω) = X(−ω), it follows that

|X(ω)| = |X(−ω)|   and   ∠X(ω) = −∠X(−ω)

showing that the magnitude spectrum is an even function of frequency and the phase spectrum is an odd function of frequency.
Now suppose x(t) is purely imaginary, so that x*(t) = −x(t). In this case we may write

X*(ω) = [∫_{−∞}^{∞} x(t) e^{−jωt} dt]* = ∫_{−∞}^{∞} x*(t) e^{jωt} dt = −∫_{−∞}^{∞} x(t) e^{jωt} dt = −X(−ω)

Hence X(−ω) = −X*(ω), so that

|X(ω)| = |X(−ω)|   and   ∠X(−ω) = π − ∠X(ω)

i.e., the magnitude spectrum is still an even function of frequency, while the phases at ω and −ω now differ by π.
Example 5.19 Show that if x(t) is real, the expression of the inverse Fourier transform in
(5.8) can be changed to an expression involving real cosinusoidal signals.
5.4.13 Duality
A duality exists between the time domain and the frequency domain. Equations
(5.8) and (5.9) show an interesting fact: the direct and the inverse transform
operations are remarkably similar. The property states that if
x(t) ←→ X(ω)   (5.28)

then

X(t) ←→ 2π x(−ω)   (5.29)

This property states that if x(t) has the Fourier transform X(ω), and a function of time is formed as X(t) = X(ω)|_{ω=t}, then F[X(t)] = 2π x(−ω), where x(−ω) = x(t)|_{t=−ω}. Hence

2π x(−t) = ∫_{−∞}^{∞} X(σ) e^{−jσt} dσ

Changing t to ω yields

2π x(−ω) = ∫_{−∞}^{∞} X(σ) e^{−jσω} dσ
Example 5.21 Use duality to find the inverse Fourier transform of X(ω) = sgn(ω)
x(t) = j/(πt)
Consequently,

E∞ = ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω   (5.30)
Solution Let

X(ω) = 1/(jω + 2)

The inverse Fourier transform of X(ω) is x(t) = e^{−2t} u(t). Using (5.30),

(1/2π) ∫_{−∞}^{∞} 1/|jω + 2|² dω = ∫_{−∞}^{∞} |x(t)|² dt

so that

∫_{−∞}^{∞} 2/|jω + 2|² dω = 4π ∫_{0}^{∞} e^{−4t} dt = π
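Both sides of this Parseval computation can be evaluated numerically (the truncation limits L and W below are illustrative choices):

```python
import math

# Numerical check of (5.30) for x(t) = exp(-2t)u(t), X(w) = 1/(jw+2)
def time_energy(L=20.0, N=100_000):
    # integral of |x(t)|^2 = exp(-4t) for t >= 0
    dt = L / N
    return sum(math.exp(-4 * (k + 0.5) * dt) for k in range(N)) * dt

def freq_energy(W=2000.0, N=200_000):
    # (1/2pi) * integral of 1/(w^2+4) dw, truncated to |w| <= W
    dw = 2 * W / N
    return sum(1 / (((-W + (k + 0.5) * dw) ** 2) + 4)
               for k in range(N)) * dw / (2 * math.pi)

print(time_energy(), freq_energy())  # both ~0.25
```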
For convenience, a summary of the basic Fourier transform properties are listed
in Table 5.2.
Time Scaling: x(at) ←→ (1/|a|) X(ω/a)
Time Differentiation: d^n x(t)/dt^n ←→ (jω)^n X(ω)
Frequency Differentiation: (−jt)^n x(t) ←→ d^n X(ω)/dω^n
Time Integration: ∫_{−∞}^{t} x(τ) dτ ←→ (1/jω) X(ω) + π X(0) δ(ω)
Multiplication: x(t) p(t) ←→ (1/2π) X(ω) ∗ P(ω)
The contribution by the components within a band dω is (1/2π) X(ω) dω = X(ω) dF, where dF = dω/2π is the bandwidth in hertz. Clearly, X(ω) is the spectral density per unit bandwidth (in hertz). It also follows that even if the amplitude of any one component is zero, the relative amount of the component of frequency ω is X(ω). Although X(ω) is a spectral density, in practice it is often called the spectrum of x(t) rather than the spectral density of x(t). More commonly, X(ω) is called the Fourier spectrum (or Fourier transform) of x(t).
such bands and is indicated by the area under |X(ω)|2 as in (5.30). Therefore,
|X(ω)|2 is called energy spectral density or simply the energy spectrum and is
given the symbol Ψ, that is Ψ(ω) = |X(ω)|2 . Thus the energy spectrum is that
function
• that describes the relative amount of energy of a given signal x(t) versus
frequency.
• whose total area under Ψ(ω) is the energy of the signal.
Note that the quantity Ψ(ω) describes only the relative amount of energy at
various frequencies. For continuous Ψ(ω), the energy at any given frequency is
zero, it is the area under Ψ(ω) that contributes energy. The energy contained
within a band ω1 ≤ ω ≤ ω2 is
ΔE∞ = (1/2π) ∫_{ω1}^{ω2} |X(ω)|² dω   (5.31)

For real-valued time signals, Ψ(ω) is an even function and (5.30) reduces to

E∞ = (1/π) ∫_{0}^{∞} |X(ω)|² dω   (5.32)
Note that the energy spectrum of a signal depends on the magnitude of the
spectrum and not the phase.
Example 5.23 Find the energy of the signal x(t) = e−t u(t) in the frequency band −4 < ω < 4
Thus, approximately 84% of the total energy content of the signal lies in the frequency band −4 < ω < 4, a result that could not easily be obtained in the time domain.
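The 84% figure follows directly from Ψ(ω) = 1/(1 + ω²) for this signal:

```python
import math

# Energy of x(t) = exp(-t)u(t) in the band |w| < 4, using Psi(w) = 1/(1+w^2)
E_total = 0.5                               # integral of exp(-2t) over t >= 0
E_band = (1 / math.pi) * math.atan(4.0)     # (1/2pi) * int_{-4}^{4} dw/(1+w^2)
ratio = E_band / E_total
print(ratio)  # ~0.844, i.e., about 84%
```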
5.5. ENERGY AND POWER SPECTRAL DENSITY 125
Define the truncated signal xT(t) as the product x(t) rect(t/2T). By using the modulation (multiplication) property,

xT(t) = x(t) rect(t/2T) ←→ XT(ω) = (1/2π) X(ω) ∗ 2T sinc(ωT/π) = (1/2π) ∫_{−∞}^{∞} 2T sinc(λT/π) X(ω − λ) dλ

Next we form the function |XT(ω)|²/2T to obtain

|XT(ω)|²/2T = 2T Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} cn cm* sinc((ω − nω0)T/π) sinc((ω − mω0)T/π)
The power spectrum of periodic signal x(t) is obtained by taking the limit of
the last expression as T → ∞. It has been observed earlier (see Example 5.4)
S(ω) = lim_{T→∞} |XT(ω)|²/2T = 2π Σ_{n=−∞}^{∞} |cn|² δ(ω − nω0)   (5.34)
Now that we have obtained our result, note that to convert any line power spectrum (i.e., Σ_{n=−∞}^{∞} |cn|²) to a power spectral density, simply change the lines to impulses; the weights (areas) of these impulses are equal to the squared magnitudes of the line heights multiplied by the factor 2π. Integrating the power spectrum S(ω) in (5.34) over all frequencies yields

P∞ = (1/2π) ∫_{−∞}^{∞} S(ω) dω = ∫_{−∞}^{∞} [Σ_{n=−∞}^{∞} |cn|² δ(ω − nω0)] dω
   = Σ_{n=−∞}^{∞} |cn|² ∫_{−∞}^{∞} δ(ω − nω0) dω = Σ_{n=−∞}^{∞} |cn|²

since each impulse integrates to 1.
Find the power spectral density for the periodic signal x(t) = A cos(ω0 t + θ) Example 5.23
5.6.4 Autocorrelation
A very important special case of the correlation function is the correlation of a function with itself. This type of correlation is called the autocorrelation function. If x(t) is a real-valued energy signal, its autocorrelation is

Rxx(t) = ∫_{−∞}^{∞} x(τ) x(τ − t) dτ = ∫_{−∞}^{∞} x(t + τ) x(τ) dτ

and Rxx(0) is the total energy of the signal. On the other hand, if x(t) is a power signal, the autocorrelation is

Rxx(t) = lim_{T→∞} (1/T) ∫_{T} x(τ) x(τ − t) dτ = lim_{T→∞} (1/T) ∫_{T} x(t + τ) x(τ) dτ

and Rxx(0) is the average power of the signal.
which is the average signal power of the signal. Note that if x(t) is periodic the
limiting operation in the determination of Rxx (t) can be replaced by a compu-
tation over one period. The subscript (xx) of the autocorrelation function Rxx
is often written as Rx .
Determine and sketch the autocorrelation function of a periodic square wave Example 5.24
x(t) shown in Figure 5.20.
Solution Because x(t) is real valued and periodic the autocorrelation function
is given by

Rx(t) = (1/T) ∫_{T} x(τ) x(τ − t) dτ
A graph of Rx (t) is shown in Figure 5.22. Note that since x(t) is periodic,
all calculations repeat over every period. It follows that the autocorrelation
function of a periodic waveform is periodic. It is interesting to notice that Rx(0) = A²/2, the average power of the signal, a result we should all be familiar with by now!
Therefore,

Rx(t) ←→ Ψ(ω)

and we conclude that the energy spectral density Ψ(ω) is the Fourier transform of the autocorrelation function of energy signals. It is clear that Rx(t) provides spectral information about x(t) directly. It is interesting to note that for a real-valued energy signal x(t), we can write

Rx(t) ←→ X(ω) X(−ω)
We now have another method to find the energy or power spectrum: first determine the autocorrelation function and then take its Fourier transform.
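As a sketch of this relation, take x(t) = e^{−t}u(t) (an illustrative choice): direct numerical integration gives Rx(t) = e^{−|t|}/2, whose Fourier transform 1/(1 + ω²) is exactly |X(ω)|² for X(ω) = 1/(1 + jω).

```python
import math

# Autocorrelation of x(t) = exp(-t)u(t) by direct integration:
# Rx(t) = integral of x(tau) x(tau - t) d(tau), expected to equal exp(-|t|)/2
def Rx_numeric(t, L=30.0, N=100_000):
    dtau = L / N
    s = 0.0
    for k in range(N):
        tau = (k + 0.5) * dtau
        u = tau - t
        if u >= 0:                       # x(tau - t) = 0 for tau < t
            s += math.exp(-tau) * math.exp(-u) * dtau
    return s

for t in (0.0, 0.5, 2.0):
    print(t, Rx_numeric(t), math.exp(-abs(t)) / 2)  # agree
```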
Chapter 6

Applications of the Fourier Transform
The continuous time Fourier transform is a very important tool that has numer-
ous applications in communication systems, signal processing, control systems,
and many other engineering disciplines. In this chapter, we discuss some of
these applications, including linear filtering, modulation, and sampling.
Figure 6.1: LTI system depicted in both the time and frequency domain.
134 CHAPTER 6. APPLICATIONS OF THE FOURIER TRANSFORM
Solution First recognize that the impulse response to the system is h(t) =
e−t u(t). Therefore,
H(ω) = 1/(1 + jω)

and

H⁻¹(ω) = 1 + jω

In other words,

X(ω) = (1 + jω) Y(ω)

or

x(t) = y(t) + dy(t)/dt
6.1. SIGNAL FILTERING 135
Find the frequency response of the LTI system shown in Figure 6.4. Example 6.2
Solution First, the impulse response of the parallel system is h1 (t) = δ(t) −
δ(t − T ). The frequency response of the parallel part is
H1 (ω) = 1 − e−jωT
Therefore, the frequency response of the overall system in Figure 6.4 is the
product
1. A lowpass filter is one that has its passband in the range 0 < |ω| < ωc, where ωc is called the cutoff frequency of the lowpass filter, Figure 6.5a.

2. A highpass filter is one that has its stopband in the range 0 < |ω| < ωc and a passband that extends from ω = ωc to infinity, Figure 6.5b.

3. A bandpass filter has its passband in the range 0 < ω1 < |ω| < ω2 < ∞; all other frequencies are stopped, Figure 6.5c.

4. A bandstop filter stops frequencies in the range 0 < ω1 < |ω| < ω2 < ∞ and passes all other frequencies, Figure 6.5d.
For the system shown in Figure 6.7, often used to generate communication Example 6.3
signals, design an ideal filter assuming that a certain application requires y(t) =
3 cos 1200πt.
Solution The signals x1 (t) and x2 (t) are multiplied together to give
x3 (t) = x1 (t)x2 (t) = 10 cos 200πt cos 1000πt
Using Euler's identity,

x3(t) = 10 cos 200πt · (e^{j1000πt} + e^{−j1000πt})/2 = 5 cos 200πt e^{j1000πt} + 5 cos 200πt e^{−j1000πt}
We can use the frequency shifting property together with the Fourier transform
of cos ω0 t to find the frequency spectrum of x3 (t) as
X3 (ω) = 5π[δ(ω − 200π − 1000π) + δ(ω + 200π − 1000π)]
+ 5π[δ(ω − 200π + 1000π) + δ(ω + 200π + 1000π)]
= 5π[δ(ω − 1200π) + δ(ω − 800π) + δ(ω + 800π) + δ(ω + 1200π)]
The frequency spectra of x1 (t), x2 (t), and x3 (t) are shown in Figure 6.8. The
Fourier transform of the required signal y(t) is
Y (ω) = 3π[δ(ω − 1200π) + δ(ω + 1200π)]
It can be seen that this can be obtained from X3 (ω) by a high pass filter whose
frequency response is H(ω) shown in Figure 6.8, the filtering process can be
written as
Y (ω) = X3 (ω)H(ω)
where

H(ω) = 0.6 [1 − rect(ω/2ωc)],   800π < ωc < 1200π
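Since X3(ω) consists only of impulses at ±800π and ±1200π, the filtering amounts to scaling each sinusoidal component by H at its frequency. A minimal sketch (ωc = 1000π is one admissible choice; the code is illustrative, not from the text):

```python
import math

def H(w, wc=1000 * math.pi):
    # 0.6 * (1 - rect(w/(2*wc))): gain 0 inside |w| < wc, gain 0.6 outside
    return 0.0 if abs(w) < wc else 0.6

# x3(t) = 5 cos(800*pi*t) + 5 cos(1200*pi*t): (amplitude, frequency) pairs
components = [(5.0, 800 * math.pi), (5.0, 1200 * math.pi)]
out = [(A * H(w), w) for A, w in components]
print(out)  # amplitudes 0.0 and 3.0 -> y(t) = 3 cos(1200*pi*t)
```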
Example 6.4 Consider the periodic square wave given in Example 5.7 as the input signal to
an ideal low pass filter, sketch the output signal in the time domain if the cutoff
frequency |ωc | = 8ω0 .
Solution Recall in Example 5.7 we showed that the periodic square wave
has the Fourier transform depicted in Figure 6.9 which consists of impulses at
integer multiples of ω0 . If this signal is the input to an ideal low pass filter with
cutoff frequency ωc = 8ω0, then the Fourier transform of the output contains only the impulses lying within the filter's bandwidth. The frequency
response of the low pass filter is drawn on top of the Fourier transform X(ω).
The output signal is plotted at the right side of Figure 6.9.
6.1.3 Bandwidth
The term bandwidth is applied to both signals and filters. It generally means a
range of frequencies. This could be the range of frequencies present in a signal
or the range of frequencies a filter allows to pass. Usually, only the positive
frequencies are used to describe the range of frequencies. For example, the ideal
low pass filter in the previous example with cutoff frequencies of ±8ω0 is said
to have a bandwidth of 8ω0 , even though the width of the filter is 16ω0 . The
ideal bandpass filter in Figure 6.5c has a bandwidth of ω2 − ω1 which is the
width of the region in positive frequency in which the filter passes a signal.
There are many different kinds of bandwidths, including absolute bandwidth,
3-dB bandwidth or half-power bandwidth and the null-to-null bandwidth or zero
crossing bandwidth.
Absolute Bandwidth
This definition is used in conjunction with band-limited signals. A signal x(t)
is called band-limited if its Fourier transform satisfies the conditions X(ω) = 0
for |ω| ≥ ωB . Furthermore, with the above conditions still satisfied it is also
called a baseband signal if X(ω) is centered at ω = 0. On the other hand, if |X(ω)| = 0 for |ω − ω0| ≥ ωB and X(ω) is centered at ω0, it is called a bandpass signal; see Figure 6.10. If x(t) is a baseband signal and |X(ω)| = 0 outside the interval |ω| ≤ ωB, then the absolute bandwidth is

B = ωB

But if x(t) is a bandpass signal and |X(ω)| = 0 outside the interval ω1 < ω < ω2, then

B = ω2 − ω1
|X(ω1)| = (1/√2) |X(0)|
Note that inside the band 0 < ω < ω1, the magnitude |X(ω)| falls no lower than 1/√2 of its value at ω = 0. The term 3-dB bandwidth comes from the relationship

20 log10 (1/√2) = −3 dB
Example 6.5 Determine the 3-dB (half power) bandwidth for the signal x(t) = e−t/T u(t).
Solution The signal x(t) is a baseband signal and has the Fourier transform

X(ω) = 1/(1/T + jω)
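The 3-dB point of this spectrum sits at ω = 1/T, which the following sketch confirms (the value of T is an arbitrary illustration):

```python
import math

T = 0.5   # illustrative time constant

def Xmag(w):
    # |X(w)| for X(w) = 1/(1/T + jw)
    return abs(1 / (1 / T + 1j * w))

# At w = 1/T the magnitude is |X(0)|/sqrt(2), i.e., the half-power point
print(Xmag(1 / T) * math.sqrt(2), Xmag(0.0))  # equal
```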
the first null in the spectrum below ωm , where ωm is the frequency at which
the spectrum has its maximum magnitude. For baseband signals , the spectrum
maximum is at ω = 0 and the bandwidth is the distance between the first null
and the origin.
h(t) = (1/RC) e^{−t/RC} u(t)
6.2. AMPLITUDE MODULATION 143
The purpose of communication systems is to convey information from one point to another. Prior to sending the
information signal through the transmission channel, the information signal is
converted to a useful form through what is known as the modulation process. In
amplitude modulation, the amplitude of a sinusoidal signal is constantly being
modified in proportion to a given signal. This has the effect of simply shifting
the spectrum of the given signal up and down by the sinusoid frequency in the
frequency domain.
The use of amplitude modulation may be advantageous whenever a shift in
the frequency components of a given signal is desired. Consider for example the
transmission of a human voice through satellite communications. The maximum
voice frequency is 3 kHz, on the other hand, satellite links operate at much
higher frequencies (3–30 GHz). For this form of transmission to be feasible, we
clearly need to do two things: shift the essential spectral content of a speech signal
to some higher frequency so that it lies inside the assigned frequency range
for satellite transmission, and shift it back to its original frequency band on
reception. The first operation is simply called modulation, and the second we
call demodulation.
We consider a very simple method of modulation called the double-sideband,
suppressed-carrier, amplitude modulation (DSB/SC-AM). This type of modula-
tion is accomplished by multiplying the information-carrying signal m(t) (known
as the modulating signal), by a sinusoidal signal called the carrier signal, cos ω0 t,
where ω0 is the carrier frequency, as shown in Figure 6.17.
We now determine the spectrum of the output signal (modulated signal). As was indicated earlier by the
frequency shifting property of the Fourier transform, we have

y(t) = m(t) cos ω0 t ←→ (1/2)[M(ω − ω0) + M(ω + ω0)] = Y(ω)    (6.1)
Recall that M (ω −ω0 ) is M (ω) shifted to the right by ω0 and M (ω +ω0 ) is M (ω)
shifted to the left by ω0 . Thus, the process of modulation shifts the spectrum of
the modulating signal to the left and right by ω0 (Figure 6.19). Note also that
if the bandwidth of m(t) is ωB, then, as indicated in Figure 6.19, the bandwidth
of the modulated signal is 2ωB. We also observe that the modulated signal
spectrum centered at ω0 is composed of two parts: a portion that lies
above ω0 , known as the upper sideband (USB), and a portion that lies below ω0 ,
known as the lower sideband (LSB), thus the name double sideband (DSB). The
name suppressed carrier (SC) comes from the fact that DSB/SC spectrum does
not have the component of the carrier frequency ω0 . In other words, there is no
impulse at the carrier frequency in the spectrum of the modulated signal. The
relationship of ωB to ω0 is of interest. Figure 6.19 shows that we need ω0 ≥ ωB in order
to avoid overlap of the spectra centered at ±ω0. If ω0 < ωB, the spectra overlap
and the information of m(t) is lost in the process of modulation, a loss which
makes it impossible to get back m(t) from the modulated signal m(t) cos ω0 t.
To extract the information signal m(t) from the modulated signal, the
modulation process must be reversed at the receiving end. This process is called
demodulation. In effect, demodulation shifts back the message spectrum to its
original low frequency position. This can be done by multiplying the modulated
signal again with cos ω0 t (generated by a so-called local oscillator) and then
filtering the result, as shown in Figure 6.20. The local oscillator is tuned to
produce a sinusoidal wave at the same frequency as the carrier frequency; this
demodulation technique is known as synchronous detection. To see how the
system of Figure 6.20 works, note that z(t) = y(t) cos ω0 t. It follows that

Z(ω) = (1/2) Y(ω − ω0) + (1/2) Y(ω + ω0)    (6.2)
Substituting (6.1) for Y (ω) in (6.2) shows that Z(ω) illustrated in Figure 6.21
has three copies of M (ω)
Z(ω) = (1/4) M(ω − ω0 − ω0) + (1/4) M(ω − ω0 + ω0)
     + (1/4) M(ω + ω0 − ω0) + (1/4) M(ω + ω0 + ω0)
     = (1/4) M(ω − 2ω0) + (1/2) M(ω) + (1/4) M(ω + 2ω0)
Note that we obtained the desired baseband spectrum, (1/2)M(ω), in Z(ω), in
addition to unwanted spectra centered at ±2ω0.
The unwanted components at ±2ω0 are removed by a lowpass filter, where the gain of the lowpass filter should be G = 2 and the cutoff frequency
should satisfy ωB < ωc < 2ω0 − ωB.
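The complete DSB/SC-AM chain can be simulated numerically. The sketch below (Python with NumPy; the message tones, carrier frequency, and cutoff are illustrative choices, and the ideal lowpass filter is approximated by zeroing FFT bins above ωc) modulates a two-tone message, applies synchronous detection, and recovers m(t) with the gain G = 2:

```python
import numpy as np

fs = 10_000.0                        # simulation sampling rate (Hz); illustrative
t = np.arange(0, 1.0, 1 / fs)
f0 = 1_000.0                         # carrier frequency (Hz); illustrative

# Two-tone message m(t); its bandwidth is about 90 Hz
m = np.cos(2 * np.pi * 40 * t) + 0.5 * np.cos(2 * np.pi * 90 * t)

y = m * np.cos(2 * np.pi * f0 * t)   # DSB/SC modulation
z = y * np.cos(2 * np.pi * f0 * t)   # synchronous detection: multiply by carrier

# Ideal lowpass filter: zero every FFT bin above the cutoff of 500 Hz,
# which satisfies 90 Hz < cutoff < 2*f0 - 90 Hz
Z = np.fft.fft(z)
Z[np.abs(np.fft.fftfreq(len(z), 1 / fs)) > 500.0] = 0.0
m_hat = 2.0 * np.real(np.fft.ifft(Z))   # gain G = 2 recovers m(t)

print(np.max(np.abs(m_hat - m)))        # reconstruction error (tiny)
```

Multiplying by the carrier twice produces m(t)/2 plus copies centered at ±2f0, exactly as in the derivation above; the lowpass filter keeps only the baseband copy.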
6.3 Sampling
Sampling a signal is the process of acquiring its values only at discrete points in
time. The main reason we acquire signals in this way is that most signal pro-
cessing and analysis today is done using digital computers. A digital computer
requires that all the information it processes be in the form of numbers. Therefore,
the samples are acquired and stored as numbers. Since the memory and mass
storage capacity of a computer are finite, it can only handle a finite number of
numbers. Therefore, if a digital computer is to be used to analyze a signal, the
signal can only be sampled for a finite time. The questions that arise here are:
To what extent do the samples accurately describe the signal from which they
are taken? How can all the signal information be stored in a finite number of
samples? How much information is lost, if any, by sampling the signal?
The effects of sampling are most easily understood in the frequency domain using
the Fourier transform. As can be seen from Figure 6.22, the sampled signal is
considered to be the product (modulation) of the continuous time signal x(t)
and the impulse train δT (t) and, hence, is usually referred to as the impulse
modulation model for the sampling operation. To obtain a frequency domain
representation of the sampled signal, we can begin by deriving an expression for
the Fourier transform of the signal xs (t). To do this we represent the periodic
impulse train in terms of its Fourier transform, which gives

F[δT(t)] = (2π/T) Σ_{n=−∞}^{∞} δ(ω − nωs) = ωs Σ_{n=−∞}^{∞} δ(ω − nωs)    (6.3)
Figure 6.23: (a) Magnitude spectrum of a bandlimited signal. (b) The frequency
spectrum of a bandlimited signal which has been sampled at ωs > 2ωB .
The shifted replicas of X(ω) in the spectrum of the sampled signal do not overlap provided that

ωs − ωB ≥ ωB
as illustrated in Figure 6.23b. Thus, the signal x(t) can be recovered from its
samples only if
ωs ≥ 2ωB
This is the sampling theorem that we stated earlier. The maximum time spacing
between samples is
T = π/ωB
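Recovery from samples can be sketched with the Shannon interpolation formula x(t) = Σₙ x(nT) sinc((t − nT)/T), a standard result equivalent to the ideal-lowpass reconstruction described here. In the Python sketch below, the test signal and the truncation length are illustrative choices:

```python
import math

def sinc(u):
    """Normalized sinc: sin(pi*u)/(pi*u), with sinc(0) = 1."""
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

T = 0.1                      # sample spacing, so pi/T ~ 31.4 rad/s
w = 10.0                     # signal frequency (rad/s); bandlimited since w < pi/T
x = lambda t: math.cos(w * t)

def reconstruct(t, n_terms=2000):
    """Truncated Shannon interpolation from the samples x(nT)."""
    return sum(x(n * T) * sinc((t - n * T) / T)
               for n in range(-n_terms, n_terms + 1))

t0 = 0.537                   # an instant between sample points
print(abs(reconstruct(t0) - x(t0)))   # small truncation error
```

Because the series is truncated, the agreement is approximate; the error shrinks as more terms are included, and vanishes in the limit for any bandlimited signal sampled with T ≤ π/ωB.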
If T does not satisfy this condition, the different components of Xs (ω) overlap
and we will not be able to recover x(t) exactly. This is referred to as aliasing.
If x(t) is not bandlimited there will always be aliasing irrespective of the chosen
sampling rate. Figure 6.25 shows the magnitude spectrum of a bandlimited
signal which has been impulse-sampled at twice its highest frequency. If the
sampling rate were lower than 2ωB, the components of Xs(ω) would overlap
and no filter could recover the original signal directly from the sampled signal.
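Aliasing can be demonstrated directly. In the plain-Python sketch below (frequencies chosen for illustration), a 70 Hz cosine sampled at fs = 100 Hz yields exactly the same sample sequence as a 30 Hz cosine, because 70 Hz exceeds fs/2 = 50 Hz and folds down to fs − 70 = 30 Hz:

```python
import math

fs = 100.0                      # sampling rate (Hz)
f_true, f_alias = 70.0, 30.0    # 70 Hz > fs/2, so it aliases to 100 - 70 = 30 Hz

n = range(200)
x_true = [math.cos(2 * math.pi * f_true * k / fs) for k in n]
x_alias = [math.cos(2 * math.pi * f_alias * k / fs) for k in n]

# The two sample sequences are numerically indistinguishable: aliasing occurred
max_diff = max(abs(a - b) for a, b in zip(x_true, x_alias))
print(max_diff)                 # essentially zero
```

Once the samples are taken, no processing can tell the two signals apart, which is why the sampling rate must exceed twice the highest frequency before sampling.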
Figure 6.25: Magnitude spectrum of a bandlimited signal which has been sam-
pled at twice its highest frequency.
X(ω) = (2/3) δ(ω) + (j/3) δ(ω − π) − (j/3) δ(ω + π)
The highest frequency in this signal is π rad/sec, so any sampling rate such that
ωs ≥ 2π will guarantee that we can exactly reconstruct the signal x(t) from its
samples. Figure 6.26a shows X(ω) and Figure 6.26b shows Xs (ω) for ωs = 6π.
The light coloured box in Figure 6.26b is the frequency response of the ideal
lowpass filter H(ω) whose cutoff frequency is 3π rad/sec. The output of the
ideal lowpass filter will be exactly equal to the original signal as required.
Figure 6.26: (a) Fourier transform of the signal x(t) (b) Fourier transform of
the sampled signal xs (t).